* [PATCH net-next v8 0/5] page_pool: recycle buffers
@ 2021-06-07 19:02 Matteo Croce
  2021-06-07 19:02 ` [PATCH net-next v8 1/5] mm: add a signature in struct page Matteo Croce
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Matteo Croce @ 2021-06-07 19:02 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen,
	Yonghong Song, Michel Lespinasse, KP Singh, Andrii Nakryiko,
	Martin KaFai Lau, David Hildenbrand, Song Liu

From: Matteo Croce <mcroce@microsoft.com>

This is a respin of [1]

This patchset allows page_pool to handle and maintain the DMA
map/unmap of the pages it serves to the driver. For this to work, a
return hook in the network core is introduced.

The overall purpose is to simplify drivers by providing a page
allocation API that does recycling, so that each driver doesn't have
to reinvent its own recycling scheme. Using page_pool in a driver
does not require implementing XDP support, but it makes it trivially
easy to do so. Instead of allocating buffers specifically for SKBs,
we now allocate a generic buffer and either wrap it in an SKB
(via build_skb) or create an XDP frame.
The recycling code leverages the XDP recycle APIs.
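
As a rough sketch (not part of this series; rx_len and the helper name
are placeholders), a driver RX path using the recycling API could look
like this:

#include <linux/mm.h>
#include <linux/skbuff.h>
#include <net/page_pool.h>

/* Build an skb around a page previously obtained from
 * page_pool_dev_alloc_pages(pool) and opt it in to recycling.
 */
static struct sk_buff *rx_build_recyclable_skb(struct page_pool *pool,
					       struct page *page,
					       unsigned int rx_len)
{
	struct sk_buff *skb = build_skb(page_address(page), PAGE_SIZE);

	if (!skb)
		return NULL;

	skb_reserve(skb, NET_SKB_PAD);
	skb_put(skb, rx_len);
	/* Instead of page_pool_release_page(): let the skb free path
	 * hand the page back to the pool.
	 */
	skb_mark_for_recycle(skb, page, pool);

	return skb;
}

The mvpp2 and mvneta patches below apply this pattern to their
existing RX paths.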

The Marvell mvpp2 and mvneta drivers are used in this patchset to
demonstrate how to use the API, and were tested on MacchiatoBIN
and EspressoBIN boards, respectively.

Please let this go in on a future -rc1, so as to allow enough time
for wider testing.

v7 -> v8:
- use page->lru.next instead of page->index for pfmemalloc
- remove conditional include
- rework page_pool_return_skb_page() so to have less conversions
  between page and addresses, and call compound_head() only once
- move some code from skb_free_head() to a new helper skb_pp_recycle()
- misc fixes

v6 -> v7:
- refresh patches against net-next
- remove a redundant call to virt_to_head_page()
- update mvneta benchmarks

v5 -> v6:
- preserve pfmemalloc bit when setting signature
- fix typo in mvneta
- rebase on net-next with the new cache
- don't clear the skb->pp_recycle in pskb_expand_head()

v4 -> v5:
- move the signature so it doesn't alias with page->mapping
- use an invalid pointer as magic
- incorporate Matthew Wilcox's changes for pfmemalloc pages
- move the __skb_frag_unref() changes to a preliminary patch
- refactor some cpp directives
- only attempt recycling if skb->head_frag
- clear skb->pp_recycle in pskb_expand_head()

v3 -> v4:
- store a pointer to page_pool instead of xdp_mem_info
- drop a patch which reduces xdp_mem_info size
- do the recycling in the page_pool code instead of xdp_return
- remove some unused header includes
- remove some useless forward declarations

v2 -> v3:
- added missing SOBs
- CCed the MM people

v1 -> v2:
- fix a commit message
- avoid setting pp_recycle multiple times on mvneta
- squash two patches to avoid breaking bisect

[1] https://lore.kernel.org/netdev/154413868810.21735.572808840657728172.stgit@firesoul/

Ilias Apalodimas (1):
  page_pool: Allow drivers to hint on SKB recycling

Matteo Croce (4):
  mm: add a signature in struct page
  skbuff: add a parameter to __skb_frag_unref
  mvpp2: recycle buffers
  mvneta: recycle buffers

 drivers/net/ethernet/marvell/mvneta.c         | 11 ++++--
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   |  2 +-
 drivers/net/ethernet/marvell/sky2.c           |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |  2 +-
 include/linux/mm.h                            | 11 +++---
 include/linux/mm_types.h                      |  7 ++++
 include/linux/poison.h                        |  3 ++
 include/linux/skbuff.h                        | 39 ++++++++++++++++---
 include/net/page_pool.h                       |  9 +++++
 net/core/page_pool.c                          | 28 +++++++++++++
 net/core/skbuff.c                             | 20 ++++++++--
 net/tls/tls_device.c                          |  2 +-
 12 files changed, 114 insertions(+), 22 deletions(-)

-- 
2.31.1



* [PATCH net-next v8 1/5] mm: add a signature in struct page
  2021-06-07 19:02 [PATCH net-next v8 0/5] page_pool: recycle buffers Matteo Croce
@ 2021-06-07 19:02 ` Matteo Croce
  2021-06-07 19:02 ` [PATCH net-next v8 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Matteo Croce @ 2021-06-07 19:02 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen,
	Yonghong Song, Michel Lespinasse, KP Singh, Andrii Nakryiko,
	Martin KaFai Lau, David Hildenbrand, Song Liu

From: Matteo Croce <mcroce@microsoft.com>

This is needed by page_pool to avoid recycling a page that was not
allocated via page_pool.

The new signature field (page->pp_magic) aliases page->lru.next and
page->compound_head, but it can't be set by mistake because the
signature value is an invalid (poison) pointer, and it can't trigger a
false positive in PageTail() because its last bit is 0.
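
As a compile-time sanity check (a sketch, not part of the patch), the
"last bit is 0" property can be asserted directly against the
PP_SIGNATURE value added to linux/poison.h below:

#include <linux/build_bug.h>
#include <linux/poison.h>

/* PageTail() tests bit 0 of page->compound_head, which pp_magic
 * aliases; PP_SIGNATURE (0x40 + POISON_POINTER_DELTA) keeps that bit
 * clear, so a page_pool page can never look like a tail page.
 */
static_assert((PP_SIGNATURE & 1UL) == 0);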

Co-developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 include/linux/mm.h       | 11 ++++++-----
 include/linux/mm_types.h |  7 +++++++
 include/linux/poison.h   |  3 +++
 net/core/page_pool.c     |  6 ++++++
 4 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c274f75efcf9..a0434e8c2617 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1668,10 +1668,11 @@ struct address_space *page_mapping(struct page *page);
 static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
-	 * Page index cannot be this large so this must be
-	 * a pfmemalloc page.
+	 * lru.next has bit 1 set if the page is allocated from the
+	 * pfmemalloc reserves.  Callers may simply overwrite it if
+	 * they do not need to preserve that information.
 	 */
-	return page->index == -1UL;
+	return (uintptr_t)page->lru.next & BIT(1);
 }
 
 /*
@@ -1680,12 +1681,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
  */
 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->lru.next = (void *)BIT(1);
 }
 
 static inline void clear_page_pfmemalloc(struct page *page)
 {
-	page->index = 0;
+	page->lru.next = NULL;
 }
 
 /*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5aacc1c10a45..ed6862eacb52 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -96,6 +96,13 @@ struct page {
 			unsigned long private;
 		};
 		struct {	/* page_pool used by netstack */
+			/**
+			 * @pp_magic: magic value to avoid recycling non
+			 * page_pool allocated pages.
+			 */
+			unsigned long pp_magic;
+			struct page_pool *pp;
+			unsigned long _pp_mapping_pad;
 			/**
 			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
diff --git a/include/linux/poison.h b/include/linux/poison.h
index aff1c9250c82..d62ef5a6b4e9 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -78,4 +78,7 @@
 /********** security/ **********/
 #define KEY_DESTROY		0xbd
 
+/********** net/core/page_pool.c **********/
+#define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3c4c4c7a0402..e1321bc9d316 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -17,6 +17,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/page-flags.h>
 #include <linux/mm.h> /* for __put_page() */
+#include <linux/poison.h>
 
 #include <trace/events/page_pool.h>
 
@@ -221,6 +222,8 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	page->pp_magic |= PP_SIGNATURE;
+
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
@@ -263,6 +266,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 			put_page(page);
 			continue;
 		}
+		page->pp_magic |= PP_SIGNATURE;
 		pool->alloc.cache[pool->alloc.count++] = page;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
@@ -341,6 +345,8 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 			     DMA_ATTR_SKIP_CPU_SYNC);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
+	page->pp_magic = 0;
+
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
-- 
2.31.1



* [PATCH net-next v8 2/5] skbuff: add a parameter to __skb_frag_unref
  2021-06-07 19:02 [PATCH net-next v8 0/5] page_pool: recycle buffers Matteo Croce
  2021-06-07 19:02 ` [PATCH net-next v8 1/5] mm: add a signature in struct page Matteo Croce
@ 2021-06-07 19:02 ` Matteo Croce
  2021-06-07 19:02 ` [PATCH net-next v8 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Matteo Croce @ 2021-06-07 19:02 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen,
	Yonghong Song, Michel Lespinasse, KP Singh, Andrii Nakryiko,
	Martin KaFai Lau, David Hildenbrand, Song Liu

From: Matteo Croce <mcroce@microsoft.com>

This is a prerequisite patch; the next one enables recycling of skbs
and fragments. Add an extra argument to __skb_frag_unref() to handle
recycling, and update the current users of the function accordingly.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/sky2.c        | 2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
 include/linux/skbuff.h                     | 8 +++++---
 net/core/skbuff.c                          | 4 ++--
 net/tls/tls_device.c                       | 2 +-
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 324c280cc22c..8b8bff59c8fe 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2503,7 +2503,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
 
 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag);
+			__skb_frag_unref(frag, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index e35e4d7ef4d1..cea62b8f554c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbf820a50a39..7fcfea7e7b21 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3081,10 +3081,12 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: recycle the page if allocated via page_pool
  *
- * Releases a reference on the paged fragment @frag.
+ * Releases a reference on the paged fragment @frag
+ * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
 	put_page(skb_frag_page(frag));
 }
@@ -3098,7 +3100,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ad22870298c..12b7e90dd2b5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -664,7 +664,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i]);
+		__skb_frag_unref(&shinfo->frags[i], false);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -3495,7 +3495,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom);
+		__skb_frag_unref(fragfrom, false);
 	}
 
 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 76a6f8c2eec4..ad11db2c4f63 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -127,7 +127,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;
 
 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i]);
+		__skb_frag_unref(&record->frags[i], false);
 	kfree(record);
 }
 
-- 
2.31.1



* [PATCH net-next v8 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-07 19:02 [PATCH net-next v8 0/5] page_pool: recycle buffers Matteo Croce
  2021-06-07 19:02 ` [PATCH net-next v8 1/5] mm: add a signature in struct page Matteo Croce
  2021-06-07 19:02 ` [PATCH net-next v8 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
@ 2021-06-07 19:02 ` Matteo Croce
  2021-06-08  3:19     ` kernel test robot
  2021-06-07 19:02 ` [PATCH net-next v8 4/5] mvpp2: recycle buffers Matteo Croce
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Matteo Croce @ 2021-06-07 19:02 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen,
	Yonghong Song, Michel Lespinasse, KP Singh, Andrii Nakryiko,
	Martin KaFai Lau, David Hildenbrand, Song Liu

From: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Up to now, several high-speed NICs have had custom mechanisms for
recycling the allocated memory they use for their payloads.
Our page_pool API already has recycling capabilities that are always
used when we are running in 'XDP mode'. So let's tweak the API and the
kernel network stack slightly and allow the recycling to happen even
during standard operation.
The API doesn't take into account the 'split page' policies currently
used by those drivers, but it can be extended once we have users for
that.

The idea is to intercept the packet on skb_release_data(). If it's a
buffer coming from our page_pool API, recycle it back to the pool for
further use; otherwise just release the packet entirely.

To achieve that, we introduce a bit in struct sk_buff (pp_recycle:1)
and a field in struct page (page->pp) to store the page_pool pointer.
Storing the information in page->pp allows us to recycle both SKBs and
their fragments.
We could have skipped the skb bit entirely, since identical information
can be derived from struct page. However, in an effort to affect the
free path as little as possible, reading a single bit in the skb, which
is already in cache, is better than trying to derive the same
information from the data stored in struct page.

The driver or page_pool has to take care of the sync operations on its
own during buffer recycling, since the buffer is never unmapped after
opting in to recycling.

Since the gain on the drivers depends on the architecture, recycling is
not enabled by default when a driver uses the page_pool API.
To enable recycling, the driver must call skb_mark_for_recycle() to
store the information we need for recycling in page->pp and set the
recycling bit, or call page_pool_store_mem_info() for a fragment.
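
A minimal sketch of the opt-in (assuming a driver that already tracks
its page_pool and the pages backing the skb head and its fragments):

#include <linux/skbuff.h>
#include <net/page_pool.h>

static void rx_opt_in_recycling(struct sk_buff *skb,
				struct page_pool *pool,
				struct page *head_page,
				struct page *frag_page)
{
	/* Head buffer: sets skb->pp_recycle and stores pool in
	 * head_page->pp.
	 */
	skb_mark_for_recycle(skb, head_page, pool);

	/* Fragments: skb->pp_recycle is already set, so only the
	 * page -> pool association is needed.
	 */
	page_pool_store_mem_info(frag_page, pool);
}

On the free side, skb_free_head() and __skb_frag_unref() consult
skb->pp_recycle and route such pages through
page_pool_return_skb_page().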

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Co-developed-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 include/linux/skbuff.h  | 33 ++++++++++++++++++++++++++++++---
 include/net/page_pool.h |  9 +++++++++
 net/core/page_pool.c    | 22 ++++++++++++++++++++++
 net/core/skbuff.c       | 20 ++++++++++++++++----
 4 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7fcfea7e7b21..b2db9cd9a73f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -37,6 +37,7 @@
 #include <linux/in6.h>
 #include <linux/if_packet.h>
 #include <net/flow.h>
+#include <net/page_pool.h>
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
@@ -667,6 +668,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@head_frag: skb was allocated from page fragments,
  *		not allocated by kmalloc() or vmalloc().
  *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ *	@pp_recycle: mark the packet for recycling instead of freeing (implies
+ *		page_pool support on driver)
  *	@active_extensions: active extensions (skb_ext_id types)
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
@@ -791,10 +794,12 @@ struct sk_buff {
 				fclone:2,
 				peeked:1,
 				head_frag:1,
-				pfmemalloc:1;
+				pfmemalloc:1,
+				pp_recycle:1; /* page_pool recycle indicator */
 #ifdef CONFIG_SKB_EXTENSIONS
 	__u8			active_extensions;
 #endif
+
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
 	 */
@@ -3088,7 +3093,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-	put_page(skb_frag_page(frag));
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && page_pool_return_skb_page(page))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3100,7 +3111,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 /**
@@ -4699,5 +4710,21 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_PAGE_POOL
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	page_pool_store_mem_info(page, pp);
+}
+#endif
+
+static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
+{
+	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
+		return false;
+	return page_pool_return_skb_page(virt_to_page(data));
+}
+
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_SKBUFF_H */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b4b6de909c93..3dd62dd73027 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -146,6 +146,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
+bool page_pool_return_skb_page(struct page *page);
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 #ifdef CONFIG_PAGE_POOL
@@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
 		spin_unlock_bh(&pool->ring.producer_lock);
 }
 
+/* Store mem_info on struct page and use it while recycling skb frags */
+static inline
+void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
+{
+	page->pp = pp;
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e1321bc9d316..5e4eb45b139c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -628,3 +628,25 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool page_pool_return_skb_page(struct page *page)
+{
+	struct page_pool *pp;
+
+	page = compound_head(page);
+	if (unlikely(page->pp_magic != PP_SIGNATURE))
+		return false;
+
+	pp = page->pp;
+
+	/* Driver set this to memory recycling info. Reset it on recycle.
+	 * This will *not* work for NIC using a split-page memory model.
+	 * The page will be returned to the pool here regardless of the
+	 * 'flipped' fragment being in use or not.
+	 */
+	page->pp = NULL;
+	page_pool_put_full_page(pp, page, false);
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_return_skb_page);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12b7e90dd2b5..a0b1d4847efe 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -70,6 +70,7 @@
 #include <net/xfrm.h>
 #include <net/mpls.h>
 #include <net/mptcp.h>
+#include <net/page_pool.h>
 
 #include <linux/uaccess.h>
 #include <trace/events/skb.h>
@@ -645,10 +646,13 @@ static void skb_free_head(struct sk_buff *skb)
 {
 	unsigned char *head = skb->head;
 
-	if (skb->head_frag)
+	if (skb->head_frag) {
+		if (skb_pp_recycle(skb, head))
+			return;
 		skb_free_frag(head);
-	else
+	} else {
 		kfree(head);
+	}
 }
 
 static void skb_release_data(struct sk_buff *skb)
@@ -664,7 +668,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -1046,6 +1050,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
 	n->nohdr = 0;
 	n->peeked = 0;
 	C(pfmemalloc);
+	C(pp_recycle);
 	n->destructor = NULL;
 	C(tail);
 	C(end);
@@ -3495,7 +3500,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, false);
+		__skb_frag_unref(fragfrom, skb->pp_recycle);
 	}
 
 	/* Reposition in the original skb */
@@ -5285,6 +5290,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	if (skb_cloned(to))
 		return false;
 
+	/* The page pool signature of struct page will eventually figure out
+	 * which pages can be recycled or not but for now let's prohibit slab
+	 * allocated and page_pool allocated SKBs from being coalesced.
+	 */
+	if (to->pp_recycle != from->pp_recycle)
+		return false;
+
 	if (len <= skb_tailroom(to)) {
 		if (len)
 			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
-- 
2.31.1



* [PATCH net-next v8 4/5] mvpp2: recycle buffers
  2021-06-07 19:02 [PATCH net-next v8 0/5] page_pool: recycle buffers Matteo Croce
                   ` (2 preceding siblings ...)
  2021-06-07 19:02 ` [PATCH net-next v8 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
@ 2021-06-07 19:02 ` Matteo Croce
  2021-06-07 19:02 ` [PATCH net-next v8 5/5] mvneta: " Matteo Croce
  2021-06-07 21:40 ` [PATCH net-next v8 0/5] page_pool: " patchwork-bot+netdevbpf
  5 siblings, 0 replies; 9+ messages in thread
From: Matteo Croce @ 2021-06-07 19:02 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen,
	Yonghong Song, Michel Lespinasse, KP Singh, Andrii Nakryiko,
	Martin KaFai Lau, David Hildenbrand, Song Liu

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate is almost doubled,
from 1110 Kpps to 2128 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  34.88%  [kernel]          [k] page_pool_release_page
   8.06%  [kernel]          [k] free_unref_page
   6.42%  [mvpp2]           [k] mvpp2_rx
   6.07%  [kernel]          [k] eth_type_trans
   5.18%  [kernel]          [k] __netif_receive_skb_core
   4.95%  [kernel]          [k] build_skb
   4.88%  [kernel]          [k] kmem_cache_free
   3.97%  [kernel]          [k] kmem_cache_alloc
   3.45%  [kernel]          [k] dev_gro_receive
   2.73%  [kernel]          [k] page_frag_free
   2.07%  [kernel]          [k] __alloc_pages_bulk
   1.99%  [kernel]          [k] arch_local_irq_save
   1.84%  [kernel]          [k] skb_release_data
   1.20%  [kernel]          [k] netif_receive_skb_list_internal

With packet rate stable at 1100 Kpps:

tx: 0 bps 0 pps rx: 532.7 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.6 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.4 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 532.1 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  12.91%  [kernel]          [k] eth_type_trans
  12.54%  [mvpp2]           [k] mvpp2_rx
   9.67%  [kernel]          [k] build_skb
   9.63%  [kernel]          [k] __netif_receive_skb_core
   8.44%  [kernel]          [k] page_pool_put_page
   8.07%  [kernel]          [k] kmem_cache_free
   7.79%  [kernel]          [k] kmem_cache_alloc
   6.86%  [kernel]          [k] dev_gro_receive
   3.19%  [kernel]          [k] skb_release_data
   2.41%  [kernel]          [k] netif_receive_skb_list_internal
   2.18%  [kernel]          [k] page_pool_refill_alloc_cache
   1.76%  [kernel]          [k] napi_gro_receive
   1.61%  [kernel]          [k] kfree_skb
   1.20%  [kernel]          [k] dma_sync_single_for_device
   1.16%  [mvpp2]           [k] mvpp2_poll
   1.12%  [mvpp2]           [k] mvpp2_read

With packet rate above 2100 Kpps:

tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2127 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2129 Kpps

The major performance increase is explained by the fact that the most
CPU-consuming functions (page_pool_release_page, page_frag_free and
free_unref_page) are no longer called on a per-packet basis.

The test was done by sending 64-byte Ethernet frames with an invalid
ethertype to the MacchiatoBIN, so the packets are dropped early in the
RX path.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index d4fb620f53f3..b1d186abcc6c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3997,7 +3997,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		}
 
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), pp);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
 					       bm_pool->buf_size, DMA_FROM_DEVICE,
-- 
2.31.1



* [PATCH net-next v8 5/5] mvneta: recycle buffers
  2021-06-07 19:02 [PATCH net-next v8 0/5] page_pool: recycle buffers Matteo Croce
                   ` (3 preceding siblings ...)
  2021-06-07 19:02 ` [PATCH net-next v8 4/5] mvpp2: recycle buffers Matteo Croce
@ 2021-06-07 19:02 ` Matteo Croce
  2021-06-07 21:40 ` [PATCH net-next v8 0/5] page_pool: " patchwork-bot+netdevbpf
  5 siblings, 0 replies; 9+ messages in thread
From: Matteo Croce @ 2021-06-07 19:02 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen,
	Yonghong Song, Michel Lespinasse, KP Singh, Andrii Nakryiko,
	Martin KaFai Lau, David Hildenbrand, Song Liu

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate increased by 10%,
from 296 Kpps to 326 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  23.66%  [kernel]          [k] __pi___inval_dcache_area
  22.85%  [mvneta]          [k] mvneta_rx_swbm
   7.54%  [kernel]          [k] kmem_cache_alloc
   6.49%  [kernel]          [k] eth_type_trans
   3.94%  [kernel]          [k] dev_gro_receive
   3.91%  [kernel]          [k] __netif_receive_skb_core
   3.91%  [kernel]          [k] kmem_cache_free
   3.76%  [kernel]          [k] page_pool_release_page
   3.56%  [kernel]          [k] free_unref_page
   2.40%  [kernel]          [k] build_skb
   1.49%  [kernel]          [k] skb_release_data
   1.45%  [kernel]          [k] __alloc_pages_bulk
   1.30%  [kernel]          [k] page_frag_free

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  26.41%  [kernel]          [k] __pi___inval_dcache_area
  25.00%  [mvneta]          [k] mvneta_rx_swbm
   8.14%  [kernel]          [k] kmem_cache_alloc
   6.84%  [kernel]          [k] eth_type_trans
   4.44%  [kernel]          [k] __netif_receive_skb_core
   4.38%  [kernel]          [k] kmem_cache_free
   4.16%  [kernel]          [k] dev_gro_receive
   3.21%  [kernel]          [k] page_pool_put_page
   2.41%  [kernel]          [k] build_skb
   1.82%  [kernel]          [k] skb_release_data
   1.61%  [kernel]          [k] napi_gro_receive
   1.25%  [kernel]          [k] page_pool_refill_alloc_cache
   1.16%  [kernel]          [k] __netif_receive_skb_list_core

We can see that page_pool_release_page(), free_unref_page() and
__alloc_pages_bulk() are no longer at the top of the list when
receiving traffic.

The test was done with mausezahn on the TX side, sending 64-byte raw
Ethernet frames.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..c15ce06427d0 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 }
 
 static struct sk_buff *
-mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		/* We don't need to reset pp_recycle here. It's already set, so
+		 * just mark fragments for recycling.
+		 */
+		page_pool_store_mem_info(skb_frag_page(frag), pool);
 	}
 
 	return skb;
@@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
-		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		skb = mvneta_swbm_build_skb(pp, rxq->page_pool, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-- 
2.31.1



* Re: [PATCH net-next v8 0/5] page_pool: recycle buffers
  2021-06-07 19:02 [PATCH net-next v8 0/5] page_pool: recycle buffers Matteo Croce
                   ` (4 preceding siblings ...)
  2021-06-07 19:02 ` [PATCH net-next v8 5/5] mvneta: " Matteo Croce
@ 2021-06-07 21:40 ` patchwork-bot+netdevbpf
  5 siblings, 0 replies; 9+ messages in thread
From: patchwork-bot+netdevbpf @ 2021-06-07 21:40 UTC (permalink / raw)
  To: Matteo Croce
  Cc: netdev, linux-mm, ayush.sawal, vinay.yadav, rohitm, davem, kuba,
	thomas.petazzoni, mw, linux, mlindner, stephen, tariqt, hawk,
	ilias.apalodimas, ast, daniel, john.fastabend, borisp, arnd,
	akpm, peterz, vbabka, yuzhao, will, fenghua.yu, guro, hughd,
	peterx, jgg, jonathan.lemon, alobakin, cong.wang, wenxu,
	haokexin, jakub, elver, willemb, linmiaohe, linyunsheng, gnault,
	linux-kernel, linux-rdma, bpf, willy, edumazet, dsahern, lorenzo,
	saeedm, andrew, pabeni, sven.auhagen, yhs, walken, kpsingh,
	andrii, kafai, david, songliubraving

Hello:

This series was applied to netdev/net-next.git (refs/heads/master):

On Mon,  7 Jun 2021 21:02:35 +0200 you wrote:
> From: Matteo Croce <mcroce@microsoft.com>
> 
> This is a respin of [1]
> 
> This patchset shows the plans for allowing page_pool to handle and
> maintain DMA map/unmap of the pages it serves to the driver. For this
> to work a return hook in the network core is introduced.
> 
> [...]

Here is the summary with links:
  - [net-next,v8,1/5] mm: add a signature in struct page
    https://git.kernel.org/netdev/net-next/c/c07aea3ef4d4
  - [net-next,v8,2/5] skbuff: add a parameter to __skb_frag_unref
    https://git.kernel.org/netdev/net-next/c/c420c98982fa
  - [net-next,v8,3/5] page_pool: Allow drivers to hint on SKB recycling
    https://git.kernel.org/netdev/net-next/c/6a5bcd84e886
  - [net-next,v8,4/5] mvpp2: recycle buffers
    https://git.kernel.org/netdev/net-next/c/133637fcfab2
  - [net-next,v8,5/5] mvneta: recycle buffers
    https://git.kernel.org/netdev/net-next/c/e4017570daee

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH net-next v8 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-07 19:02 ` [PATCH net-next v8 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
@ 2021-06-08  3:19     ` kernel test robot
  0 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2021-06-08  3:19 UTC (permalink / raw)
  To: Matteo Croce, netdev, linux-mm
  Cc: kbuild-all, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
	Mirko Lindner

[-- Attachment #1: Type: text/plain, Size: 3248 bytes --]

Hi Matteo,

I love your patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Matteo-Croce/page_pool-recycle-buffers/20210608-030512
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 1a42624aecba438f1d114430a14b640cdfa51c87
config: x86_64-randconfig-b001-20210607 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project ae973380c5f6be77ce395022be40350942260be9)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # apt-get install iwyu # include-what-you-use
        # https://github.com/0day-ci/linux/commit/95da7ab53bd3ac29bee6be7c89fda1f7c506a182
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Matteo-Croce/page_pool-recycle-buffers/20210608-030512
        git checkout 95da7ab53bd3ac29bee6be7c89fda1f7c506a182
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross C=1 CHECK=iwyu ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


iwyu warnings: (new ones prefixed by >>)
   net/core/skbuff.c:77:1: iwyu: warning: superfluous #include <linux/highmem.h>
   net/core/skbuff.c:43:1: iwyu: warning: superfluous #include <linux/inet.h>
   net/core/skbuff.c:41:1: iwyu: warning: superfluous #include <linux/interrupt.h>
   net/core/skbuff.c:37:1: iwyu: warning: superfluous #include <linux/module.h>
   net/core/skbuff.c:60:1: iwyu: warning: superfluous #include <linux/prefetch.h>
   net/core/skbuff.c:56:1: iwyu: warning: superfluous #include <linux/rtnetlink.h>
   net/core/skbuff.c:52:1: iwyu: warning: superfluous #include <linux/string.h>
   net/core/skbuff.c:75:1: iwyu: warning: superfluous #include <linux/uaccess.h>
   net/core/skbuff.c:69:1: iwyu: warning: superfluous #include <net/ip6_checksum.h>
>> net/core/skbuff.c:73:1: iwyu: warning: superfluous #include <net/page_pool.h>
   net/core/skbuff.c:65:1: iwyu: warning: superfluous #include <net/protocol.h>

vim +73 net/core/skbuff.c

^1da177e4c3f41 Linus Torvalds   2005-04-16  64  
^1da177e4c3f41 Linus Torvalds   2005-04-16  65  #include <net/protocol.h>
^1da177e4c3f41 Linus Torvalds   2005-04-16  66  #include <net/dst.h>
^1da177e4c3f41 Linus Torvalds   2005-04-16  67  #include <net/sock.h>
^1da177e4c3f41 Linus Torvalds   2005-04-16  68  #include <net/checksum.h>
ed1f50c3a7c1ad Paul Durrant     2014-01-09  69  #include <net/ip6_checksum.h>
^1da177e4c3f41 Linus Torvalds   2005-04-16  70  #include <net/xfrm.h>
8822e270d69701 John Hurley      2019-07-07  71  #include <net/mpls.h>
3ee17bc78e0f3f Mat Martineau    2020-01-09  72  #include <net/mptcp.h>
95da7ab53bd3ac Ilias Apalodimas 2021-06-07 @73  #include <net/page_pool.h>
^1da177e4c3f41 Linus Torvalds   2005-04-16  74  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 33029 bytes --]

[-- Attachment #3: Type: text/plain, Size: 149 bytes --]

_______________________________________________
kbuild mailing list -- kbuild@lists.01.org
To unsubscribe send an email to kbuild-leave@lists.01.org


end of thread, other threads:[~2021-06-08  3:20 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-07 19:02 [PATCH net-next v8 0/5] page_pool: recycle buffers Matteo Croce
2021-06-07 19:02 ` [PATCH net-next v8 1/5] mm: add a signature in struct page Matteo Croce
2021-06-07 19:02 ` [PATCH net-next v8 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
2021-06-07 19:02 ` [PATCH net-next v8 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
2021-06-08  3:19   ` kernel test robot
2021-06-07 19:02 ` [PATCH net-next v8 4/5] mvpp2: recycle buffers Matteo Croce
2021-06-07 19:02 ` [PATCH net-next v8 5/5] mvneta: " Matteo Croce
2021-06-07 21:40 ` [PATCH net-next v8 0/5] page_pool: " patchwork-bot+netdevbpf
