From: Alexander Lobakin <alobakin@pm.me>
To: "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Cc: "Jonathan Lemon" <jonathan.lemon@gmail.com>,
	"Eric Dumazet" <edumazet@google.com>,
	"Dmitry Vyukov" <dvyukov@google.com>,
	"Willem de Bruijn" <willemb@google.com>,
	"Alexander Lobakin" <alobakin@pm.me>,
	"Randy Dunlap" <rdunlap@infradead.org>,
	"Kevin Hao" <haokexin@gmail.com>,
	"Pablo Neira Ayuso" <pablo@netfilter.org>,
	"Jakub Sitnicki" <jakub@cloudflare.com>,
	"Marco Elver" <elver@google.com>,
	"Dexuan Cui" <decui@microsoft.com>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Jesper Dangaard Brouer" <brouer@redhat.com>,
	"Alexander Duyck" <alexanderduyck@fb.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Andrii Nakryiko" <andriin@fb.com>,
	"Taehee Yoo" <ap420073@gmail.com>,
	"Cong Wang" <xiyou.wangcong@gmail.com>,
	"Björn Töpel" <bjorn@kernel.org>,
	"Miaohe Lin" <linmiaohe@huawei.com>,
	"Guillaume Nault" <gnault@redhat.com>,
	"Yonghong Song" <yhs@fb.com>, zhudi <zhudi21@huawei.com>,
	"Michal Kubecek" <mkubecek@suse.cz>,
	"Marcelo Ricardo Leitner" <marcelo.leitner@gmail.com>,
	"Dmitry Safonov" <0x7f454c46@gmail.com>,
	"Yang Yingliang" <yangyingliang@huawei.com>,
	"Florian Westphal" <fw@strlen.de>,
	"Edward Cree" <ecree.xilinx@gmail.com>,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v5 net-next 08/11] skbuff: introduce {,__}napi_build_skb() which reuses NAPI cache heads
Date: Thu, 11 Feb 2021 18:54:40 +0000	[thread overview]
Message-ID: <20210211185220.9753-9-alobakin@pm.me> (raw)
In-Reply-To: <20210211185220.9753-1-alobakin@pm.me>

Instead of just bulk-flushing skbuff_heads queued up through
napi_consume_skb() or __kfree_skb_defer(), try to reuse them
on the allocation path.
If the cache is empty on allocation, bulk-allocate the first
16 elements, which is more efficient than per-skb allocation.
If the cache is full on freeing, bulk-return the second half of
the cache (32 elements) to the slab, keeping the first half for
reuse.
This also includes custom KASAN poisoning/unpoisoning to make
doubly sure there are no use-after-free cases: heads are poisoned
while they sit in the cache and unpoisoned right before they are
handed out or returned to the slab.

To avoid changing current behaviour, introduce a new function,
napi_build_skb(), so that drivers can opt in to the new approach
later; a usage sketch follows below.

Note on the selected bulk size, 16:
 - it equals XDP_BULK_QUEUE_SIZE, DEV_MAP_BULK_SIZE and,
   notably, VETH_XDP_BATCH, which is also used to
   bulk-allocate skbuff_heads and was tested on powerful
   setups;
 - it also showed the best performance in the actual
   test series (among the tested values {8, 16, 32}).

Suggested-by: Edward Cree <ecree.xilinx@gmail.com> # Divide on two halves
Suggested-by: Eric Dumazet <edumazet@google.com>   # KASAN poisoning
Cc: Dmitry Vyukov <dvyukov@google.com>             # Help with KASAN
Cc: Paolo Abeni <pabeni@redhat.com>                # Reduced batch size
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 include/linux/skbuff.h |  2 +
 net/core/skbuff.c      | 94 ++++++++++++++++++++++++++++++++++++------
 2 files changed, 83 insertions(+), 13 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 0e0707296098..906122eac82a 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1087,6 +1087,8 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size);
 struct sk_buff *build_skb_around(struct sk_buff *skb,
 				 void *data, unsigned int frag_size);
 
+struct sk_buff *napi_build_skb(void *data, unsigned int frag_size);
+
 /**
  * alloc_skb - allocate a network buffer
  * @size: size to allocate
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 860a9d4f752f..9e1a8ded4acc 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -120,6 +120,8 @@ static void skb_under_panic(struct sk_buff *skb, unsigned int sz, void *addr)
 }
 
 #define NAPI_SKB_CACHE_SIZE	64
+#define NAPI_SKB_CACHE_BULK	16
+#define NAPI_SKB_CACHE_HALF	(NAPI_SKB_CACHE_SIZE / 2)
 
 struct napi_alloc_cache {
 	struct page_frag_cache page;
@@ -164,6 +166,25 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 }
 EXPORT_SYMBOL(__netdev_alloc_frag_align);
 
+static struct sk_buff *napi_skb_cache_get(void)
+{
+	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	struct sk_buff *skb;
+
+	if (unlikely(!nc->skb_count))
+		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
+						      GFP_ATOMIC,
+						      NAPI_SKB_CACHE_BULK,
+						      nc->skb_cache);
+	if (unlikely(!nc->skb_count))
+		return NULL;
+
+	skb = nc->skb_cache[--nc->skb_count];
+	kasan_unpoison_object_data(skbuff_head_cache, skb);
+
+	return skb;
+}
+
 /* Caller must provide SKB that is memset cleared */
 static void __build_skb_around(struct sk_buff *skb, void *data,
 			       unsigned int frag_size)
@@ -265,6 +286,53 @@ struct sk_buff *build_skb_around(struct sk_buff *skb,
 }
 EXPORT_SYMBOL(build_skb_around);
 
+/**
+ * __napi_build_skb - build a network buffer
+ * @data: data buffer provided by caller
+ * @frag_size: size of data, or 0 if head was kmalloced
+ *
+ * Version of __build_skb() that uses NAPI percpu caches to obtain
+ * skbuff_head instead of inplace allocation.
+ *
+ * Returns a new &sk_buff on success, %NULL on allocation failure.
+ */
+static struct sk_buff *__napi_build_skb(void *data, unsigned int frag_size)
+{
+	struct sk_buff *skb;
+
+	skb = napi_skb_cache_get();
+	if (unlikely(!skb))
+		return NULL;
+
+	memset(skb, 0, offsetof(struct sk_buff, tail));
+	__build_skb_around(skb, data, frag_size);
+
+	return skb;
+}
+
+/**
+ * napi_build_skb - build a network buffer
+ * @data: data buffer provided by caller
+ * @frag_size: size of data, or 0 if head was kmalloced
+ *
+ * Version of __napi_build_skb() that takes care of skb->head_frag
+ * and skb->pfmemalloc when the data is a page or page fragment.
+ *
+ * Returns a new &sk_buff on success, %NULL on allocation failure.
+ */
+struct sk_buff *napi_build_skb(void *data, unsigned int frag_size)
+{
+	struct sk_buff *skb = __napi_build_skb(data, frag_size);
+
+	if (likely(skb) && frag_size) {
+		skb->head_frag = 1;
+		skb_propagate_pfmemalloc(virt_to_head_page(data), skb);
+	}
+
+	return skb;
+}
+EXPORT_SYMBOL(napi_build_skb);
+
 /*
  * kmalloc_reserve is a wrapper around kmalloc_node_track_caller that tells
  * the caller if emergency pfmemalloc reserves are being used. If it is and
@@ -838,31 +906,31 @@ void __consume_stateless_skb(struct sk_buff *skb)
 	kfree_skbmem(skb);
 }
 
-static inline void _kfree_skb_defer(struct sk_buff *skb)
+static void napi_skb_cache_put(struct sk_buff *skb)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	u32 i;
 
 	/* drop skb->head and call any destructors for packet */
 	skb_release_all(skb);
 
-	/* record skb to CPU local list */
+	kasan_poison_object_data(skbuff_head_cache, skb);
 	nc->skb_cache[nc->skb_count++] = skb;
 
-#ifdef CONFIG_SLUB
-	/* SLUB writes into objects when freeing */
-	prefetchw(skb);
-#endif
-
-	/* flush skb_cache if it is filled */
 	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
-		kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_SIZE,
-				     nc->skb_cache);
-		nc->skb_count = 0;
+		for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
+			kasan_unpoison_object_data(skbuff_head_cache,
+						   nc->skb_cache[i]);
+
+		kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_HALF,
+				     nc->skb_cache + NAPI_SKB_CACHE_HALF);
+		nc->skb_count = NAPI_SKB_CACHE_HALF;
 	}
 }
+
 void __kfree_skb_defer(struct sk_buff *skb)
 {
-	_kfree_skb_defer(skb);
+	napi_skb_cache_put(skb);
 }
 
 void napi_consume_skb(struct sk_buff *skb, int budget)
@@ -887,7 +955,7 @@ void napi_consume_skb(struct sk_buff *skb, int budget)
 		return;
 	}
 
-	_kfree_skb_defer(skb);
+	napi_skb_cache_put(skb);
 }
 EXPORT_SYMBOL(napi_consume_skb);
 
-- 
2.30.1



Thread overview: 16+ messages
2021-02-11 18:52 [PATCH v5 net-next 00/11] skbuff: introduce skbuff_heads bulking and reusing Alexander Lobakin
2021-02-11 18:52 ` [PATCH v5 net-next 01/11] skbuff: move __alloc_skb() next to the other skb allocation functions Alexander Lobakin
2021-02-11 18:53 ` [PATCH v5 net-next 02/11] skbuff: simplify kmalloc_reserve() Alexander Lobakin
2021-02-11 18:53 ` [PATCH v5 net-next 03/11] skbuff: make __build_skb_around() return void Alexander Lobakin
2021-02-11 18:53 ` [PATCH v5 net-next 04/11] skbuff: simplify __alloc_skb() a bit Alexander Lobakin
2021-02-11 18:53 ` [PATCH v5 net-next 05/11] skbuff: use __build_skb_around() in __alloc_skb() Alexander Lobakin
2021-02-11 18:54 ` [PATCH v5 net-next 06/11] skbuff: remove __kfree_skb_flush() Alexander Lobakin
2021-02-12  3:28   ` Alexander Duyck
2021-02-13 13:56     ` Alexander Lobakin
2021-02-11 18:54 ` [PATCH v5 net-next 07/11] skbuff: move NAPI cache declarations upper in the file Alexander Lobakin
2021-02-11 18:54 ` Alexander Lobakin [this message]
2021-02-11 18:54 ` [PATCH v5 net-next 09/11] skbuff: allow to optionally use NAPI cache from __alloc_skb() Alexander Lobakin
2021-02-12  3:18   ` Alexander Duyck
2021-02-13 11:53     ` Alexander Lobakin
2021-02-11 18:55 ` [PATCH v5 net-next 10/11] skbuff: allow to use NAPI cache from __napi_alloc_skb() Alexander Lobakin
2021-02-11 18:55 ` [PATCH v5 net-next 11/11] skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing Alexander Lobakin
