Subject: [PATCH v6 net-next 10/11] skbuff: allow to use NAPI cache from __napi_alloc_skb()
Date: Sat, 13 Feb 2021 14:12:49 +0000
From: Alexander Lobakin
Reply-To: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Jonathan Lemon, Eric Dumazet, Dmitry Vyukov, Willem de Bruijn,
    Alexander Lobakin, Randy Dunlap, Kevin Hao, Pablo Neira Ayuso,
    Jakub Sitnicki, Marco Elver, Dexuan Cui, Paolo Abeni,
    Jesper Dangaard Brouer, Alexander Duyck, Alexander Duyck,
    Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Taehee Yoo,
    Wei Wang, Cong Wang, Björn Töpel, Miaohe Lin, Guillaume Nault,
    Florian Westphal, Edward Cree, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org
Message-ID: <20210213141021.87840-11-alobakin@pm.me>
In-Reply-To: <20210213141021.87840-1-alobakin@pm.me>
References: <20210213141021.87840-1-alobakin@pm.me>

{,__}napi_alloc_skb() is mostly used either for optional non-linear
receive methods (usually controlled via Ethtool private flags and off
by default) or for Rx copybreaks.
Use __napi_build_skb() here to obtain skbuff_heads from the NAPI cache
instead of allocating them inplace. This covers both the kmalloc and
the page frag paths.
Signed-off-by: Alexander Lobakin
---
 net/core/skbuff.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a80581eed7fc..875e1a453f7e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -562,7 +562,8 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 	if (len <= SKB_WITH_OVERHEAD(1024) ||
 	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
-		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
+		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX | SKB_ALLOC_NAPI,
+				  NUMA_NO_NODE);
 		if (!skb)
 			goto skb_fail;
 		goto skb_success;
@@ -579,7 +580,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 	if (unlikely(!data))
 		return NULL;
 
-	skb = __build_skb(data, len);
+	skb = __napi_build_skb(data, len);
 	if (unlikely(!skb)) {
 		skb_free_frag(data);
 		return NULL;
-- 
2.30.1
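
For context, a minimal, hypothetical sketch of the driver-side Rx copybreak
pattern this allocation path serves (the helper name, buffer argument and
surrounding driver code are illustrative assumptions, not part of this
patch): a small frame is copied into a fresh skb obtained via
napi_alloc_skb(), whose skbuff_head can now come from the per-CPU NAPI
cache instead of a direct slab allocation.

#include <linux/skbuff.h>

/* Hypothetical Rx copybreak helper, for illustration only */
static struct sk_buff *rx_copybreak(struct napi_struct *napi,
				    const void *buf, unsigned int len)
{
	struct sk_buff *skb;

	/* napi_alloc_skb() resolves to __napi_alloc_skb(napi, len, GFP_ATOMIC) */
	skb = napi_alloc_skb(napi, len);
	if (unlikely(!skb))
		return NULL;

	/* copy the small frame so the original Rx buffer can be recycled */
	skb_put_data(skb, buf, len);
	return skb;
}

With the patch applied, each such small-frame allocation can reuse a cached
skbuff_head rather than hitting kmem_cache_alloc(), which is the kind of
per-packet slab traffic this series aims to avoid.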