* [PATCH net-next v6 0/5] page_pool: recycle buffers
@ 2021-05-21 16:15 Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 1/5] mm: add a signature in struct page Matteo Croce
                   ` (5 more replies)
  0 siblings, 6 replies; 19+ messages in thread
From: Matteo Croce @ 2021-05-21 16:15 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is a respin of [1].

This patchset shows the plan for allowing page_pool to handle and
maintain the DMA map/unmap of the pages it serves to the driver. For
this to work, a return hook in the network core is introduced.

The overall purpose is to simplify drivers by providing a page
allocation API that does recycling, so that each driver doesn't have
to reinvent its own recycling scheme. Using page_pool in a driver
does not require implementing XDP support, but it makes it trivially
easy to do so. Instead of allocating buffers specifically for SKBs,
we now allocate a generic buffer and either wrap it in an SKB
(via build_skb) or create an XDP frame.
The recycling code leverages the XDP recycle APIs.

The Marvell mvpp2 and mvneta drivers are used in this patchset to
demonstrate how to use the API, and were tested on MacchiatoBIN
and EspressoBIN boards respectively.

Please let this go in on a future -rc1, so as to allow enough time
for wider testing.

Note that this series depends on the change "mm: fix struct page layout
on 32-bit systems"[2] which is not yet in master.

v5 -> v6
- preserve pfmemalloc bit when setting signature
- fix typo in mvneta
- rebase on net-next with the new cache
- don't clear the skb->pp_recycle in pskb_expand_head()

v4 -> v5:
- move the signature so it doesn't alias with page->mapping
- use an invalid pointer as magic
- incorporate Matthew Wilcox's changes for pfmemalloc pages
- move the __skb_frag_unref() changes to a preliminary patch
- refactor some cpp directives
- only attempt recycling if skb->head_frag
- clear skb->pp_recycle in pskb_expand_head()

v3 -> v4:
- store a pointer to page_pool instead of xdp_mem_info
- drop a patch which reduces xdp_mem_info size
- do the recycling in the page_pool code instead of xdp_return
- remove some unused headers include
- remove some useless forward declaration

v2 -> v3:
- added missing SOBs
- CCed the MM people

v1 -> v2:
- fix a commit message
- avoid setting pp_recycle multiple times on mvneta
- squash two patches to avoid breaking bisect

[1] https://lore.kernel.org/netdev/154413868810.21735.572808840657728172.stgit@firesoul/
[2] https://lore.kernel.org/linux-mm/20210510153211.1504886-1-willy@infradead.org/

Ilias Apalodimas (1):
  page_pool: Allow drivers to hint on SKB recycling

Matteo Croce (4):
  mm: add a signature in struct page
  skbuff: add a parameter to __skb_frag_unref
  mvpp2: recycle buffers
  mvneta: recycle buffers

 drivers/net/ethernet/marvell/mvneta.c         | 11 +++---
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   |  2 +-
 drivers/net/ethernet/marvell/sky2.c           |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |  2 +-
 include/linux/mm.h                            | 12 ++++---
 include/linux/mm_types.h                      | 12 ++++++-
 include/linux/poison.h                        |  3 ++
 include/linux/skbuff.h                        | 34 ++++++++++++++++---
 include/net/page_pool.h                       |  9 +++++
 net/core/page_pool.c                          | 29 ++++++++++++++++
 net/core/skbuff.c                             | 24 ++++++++++---
 net/tls/tls_device.c                          |  2 +-
 12 files changed, 119 insertions(+), 23 deletions(-)

-- 
2.31.1



* [PATCH net-next v6 1/5] mm: add a signature in struct page
  2021-05-21 16:15 [PATCH net-next v6 0/5] page_pool: recycle buffers Matteo Croce
@ 2021-05-21 16:15 ` Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: Matteo Croce @ 2021-05-21 16:15 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is needed by the page_pool to avoid recycling a page not allocated
via page_pool.

The page->pp_magic signature field is aliased with page->lru.next and
page->compound_head, but it can't be set by mistake because the
signature value is a bad pointer, and it can't trigger a false
positive in PageTail() because the last bit is 0.

Co-developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 include/linux/mm.h       | 12 +++++++-----
 include/linux/mm_types.h | 12 +++++++++++-
 include/linux/poison.h   |  3 +++
 net/core/page_pool.c     |  6 ++++++
 4 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 322ec61d0da7..4ecfd8472a17 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);
 static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
-	 * Page index cannot be this large so this must be
-	 * a pfmemalloc page.
+	 * This is not a tail page; compound_head of a head page is unused
+	 * at return from the page allocator, and will be overwritten
+	 * by callers who do not care whether the page came from the
+	 * reserves.
 	 */
-	return page->index == -1UL;
+	return page->compound_head & BIT(1);
 }
 
 /*
@@ -1680,12 +1682,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
  */
 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->compound_head = BIT(1);
 }
 
 static inline void clear_page_pfmemalloc(struct page *page)
 {
-	page->index = 0;
+	page->compound_head = 0;
 }
 
 /*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5aacc1c10a45..09f90598ff63 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -96,6 +96,13 @@ struct page {
 			unsigned long private;
 		};
 		struct {	/* page_pool used by netstack */
+			/**
+			 * @pp_magic: magic value to avoid recycling non
+			 * page_pool allocated pages.
+			 */
+			unsigned long pp_magic;
+			struct page_pool *pp;
+			unsigned long _pp_mapping_pad;
 			/**
 			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
@@ -130,7 +137,10 @@ struct page {
 			};
 		};
 		struct {	/* Tail pages of compound page */
-			unsigned long compound_head;	/* Bit zero is set */
+			/* Bit zero is set
+			 * Bit one if pfmemalloc page
+			 */
+			unsigned long compound_head;
 
 			/* First tail page only */
 			unsigned char compound_dtor;
diff --git a/include/linux/poison.h b/include/linux/poison.h
index aff1c9250c82..d62ef5a6b4e9 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -78,4 +78,7 @@
 /********** security/ **********/
 #define KEY_DESTROY		0xbd
 
+/********** net/core/page_pool.c **********/
+#define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3c4c4c7a0402..e1321bc9d316 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -17,6 +17,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/page-flags.h>
 #include <linux/mm.h> /* for __put_page() */
+#include <linux/poison.h>
 
 #include <trace/events/page_pool.h>
 
@@ -221,6 +222,8 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	page->pp_magic |= PP_SIGNATURE;
+
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
@@ -263,6 +266,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 			put_page(page);
 			continue;
 		}
+		page->pp_magic |= PP_SIGNATURE;
 		pool->alloc.cache[pool->alloc.count++] = page;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
@@ -341,6 +345,8 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 			     DMA_ATTR_SKIP_CPU_SYNC);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
+	page->pp_magic = 0;
+
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
-- 
2.31.1



* [PATCH net-next v6 2/5] skbuff: add a parameter to __skb_frag_unref
  2021-05-21 16:15 [PATCH net-next v6 0/5] page_pool: recycle buffers Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 1/5] mm: add a signature in struct page Matteo Croce
@ 2021-05-21 16:15 ` Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: Matteo Croce @ 2021-05-21 16:15 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is a prerequisite patch; the next one enables recycling of skbs
and their fragments. Add an extra argument to __skb_frag_unref() to
handle recycling, and update the current users of the function
accordingly.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/sky2.c        | 2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
 include/linux/skbuff.h                     | 8 +++++---
 net/core/skbuff.c                          | 4 ++--
 net/tls/tls_device.c                       | 2 +-
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 222c32367b2c..aa0cde1dc5c0 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2503,7 +2503,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
 
 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag);
+			__skb_frag_unref(frag, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index e35e4d7ef4d1..cea62b8f554c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbf820a50a39..7fcfea7e7b21 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3081,10 +3081,12 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: recycle the page if allocated via page_pool
  *
- * Releases a reference on the paged fragment @frag.
+ * Releases a reference on the paged fragment @frag
+ * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
 	put_page(skb_frag_page(frag));
 }
@@ -3098,7 +3100,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ad22870298c..12b7e90dd2b5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -664,7 +664,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i]);
+		__skb_frag_unref(&shinfo->frags[i], false);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -3495,7 +3495,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom);
+		__skb_frag_unref(fragfrom, false);
 	}
 
 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 76a6f8c2eec4..ad11db2c4f63 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -127,7 +127,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;
 
 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i]);
+		__skb_frag_unref(&record->frags[i], false);
 	kfree(record);
 }
 
-- 
2.31.1



* [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-21 16:15 [PATCH net-next v6 0/5] page_pool: recycle buffers Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 1/5] mm: add a signature in struct page Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
@ 2021-05-21 16:15 ` Matteo Croce
  2021-06-03 18:45     ` Matteo Croce
  2021-06-04  7:52   ` Yunsheng Lin
  2021-05-21 16:15 ` [PATCH net-next v6 4/5] mvpp2: recycle buffers Matteo Croce
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 19+ messages in thread
From: Matteo Croce @ 2021-05-21 16:15 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Up to now, several high-speed NICs have had custom mechanisms for
recycling the allocated memory they use for their payloads.
Our page_pool API already has recycling capabilities that are always
used when we are running in 'XDP mode'. So let's tweak the API and the
kernel network stack slightly and allow the recycling to happen even
during standard operation.
The API doesn't currently take into account the 'split page' policies
used by those drivers, but it can be extended once we have users for
that.

The idea is to intercept the packet in skb_release_data(). If it's a
buffer coming from our page_pool API, recycle it back to the pool for
further use; otherwise release the packet entirely.

To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
a field in struct page (page->pp) to store the page_pool pointer.
Storing the information in page->pp allows us to recycle both SKBs and
their fragments.
We could have skipped the skb bit entirely, since identical
information can be derived from struct page. However, in an effort to
affect the free path as little as possible, reading a single bit in
the skb, which is already in cache, is better than deriving the same
information from the data stored in struct page.

The driver or page_pool has to take care of the sync operations on its
own during the buffer recycling, since the buffer is never unmapped
after opting in to the recycling.

Since the gain for a driver depends on the architecture, we are not
enabling recycling by default when the page_pool API is used in a
driver. In order to enable recycling, the driver must call
skb_mark_for_recycle(), which stores the information we need for
recycling in page->pp and sets the recycling bit, or
page_pool_store_mem_info() for a fragment.

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Co-developed-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
 include/net/page_pool.h |  9 +++++++++
 net/core/page_pool.c    | 23 +++++++++++++++++++++++
 net/core/skbuff.c       | 24 ++++++++++++++++++++----
 4 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7fcfea7e7b21..057b40ad29bd 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -40,6 +40,9 @@
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 /* The interface for checksum offload between the stack and networking drivers
  * is as follows...
@@ -667,6 +670,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@head_frag: skb was allocated from page fragments,
  *		not allocated by kmalloc() or vmalloc().
  *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ *	@pp_recycle: mark the packet for recycling instead of freeing (implies
+ *		page_pool support on driver)
  *	@active_extensions: active extensions (skb_ext_id types)
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
@@ -791,10 +796,12 @@ struct sk_buff {
 				fclone:2,
 				peeked:1,
 				head_frag:1,
-				pfmemalloc:1;
+				pfmemalloc:1,
+				pp_recycle:1; /* page_pool recycle indicator */
 #ifdef CONFIG_SKB_EXTENSIONS
 	__u8			active_extensions;
 #endif
+
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
 	 */
@@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-	put_page(skb_frag_page(frag));
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && page_pool_return_skb_page(page_address(page)))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 /**
@@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_PAGE_POOL
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	page_pool_store_mem_info(page, pp);
+}
+#endif
+
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_SKBUFF_H */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b4b6de909c93..7b9b6a1c61f5 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -146,6 +146,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
+bool page_pool_return_skb_page(void *data);
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 #ifdef CONFIG_PAGE_POOL
@@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
 		spin_unlock_bh(&pool->ring.producer_lock);
 }
 
+/* Store mem_info on struct page and use it while recycling skb frags */
+static inline
+void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
+{
+	page->pp = pp;
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e1321bc9d316..2a020cca489f 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -628,3 +628,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool page_pool_return_skb_page(void *data)
+{
+	struct page_pool *pp;
+	struct page *page;
+
+	page = virt_to_head_page(data);
+	if (unlikely(page->pp_magic != PP_SIGNATURE))
+		return false;
+
+	pp = (struct page_pool *)page->pp;
+
+	/* Driver set this to memory recycling info. Reset it on recycle.
+	 * This will *not* work for NIC using a split-page memory model.
+	 * The page will be returned to the pool here regardless of the
+	 * 'flipped' fragment being in use or not.
+	 */
+	page->pp = NULL;
+	page_pool_put_full_page(pp, virt_to_head_page(data), false);
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_return_skb_page);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12b7e90dd2b5..f769f08e7b32 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -70,6 +70,9 @@
 #include <net/xfrm.h>
 #include <net/mpls.h>
 #include <net/mptcp.h>
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 #include <linux/uaccess.h>
 #include <trace/events/skb.h>
@@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
 {
 	unsigned char *head = skb->head;
 
-	if (skb->head_frag)
+	if (skb->head_frag) {
+#ifdef CONFIG_PAGE_POOL
+		if (skb->pp_recycle && page_pool_return_skb_page(head))
+			return;
+#endif
 		skb_free_frag(head);
-	else
+	} else {
 		kfree(head);
+	}
 }
 
 static void skb_release_data(struct sk_buff *skb)
@@ -664,7 +672,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -1046,6 +1054,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
 	n->nohdr = 0;
 	n->peeked = 0;
 	C(pfmemalloc);
+	C(pp_recycle);
 	n->destructor = NULL;
 	C(tail);
 	C(end);
@@ -3495,7 +3504,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, false);
+		__skb_frag_unref(fragfrom, skb->pp_recycle);
 	}
 
 	/* Reposition in the original skb */
@@ -5285,6 +5294,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	if (skb_cloned(to))
 		return false;
 
+	/* The page pool signature of struct page will eventually figure out
+	 * which pages can be recycled or not but for now let's prohibit slab
+	 * allocated and page_pool allocated SKBs from being coalesced.
+	 */
+	if (to->pp_recycle != from->pp_recycle)
+		return false;
+
 	if (len <= skb_tailroom(to)) {
 		if (len)
 			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
-- 
2.31.1



* [PATCH net-next v6 4/5] mvpp2: recycle buffers
  2021-05-21 16:15 [PATCH net-next v6 0/5] page_pool: recycle buffers Matteo Croce
                   ` (2 preceding siblings ...)
  2021-05-21 16:15 ` [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
@ 2021-05-21 16:15 ` Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 5/5] mvneta: " Matteo Croce
  2021-05-28  0:44   ` Matteo Croce
  5 siblings, 0 replies; 19+ messages in thread
From: Matteo Croce @ 2021-05-21 16:15 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate is almost doubled,
from 1110 Kpps to 2128 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  34.88%  [kernel]          [k] page_pool_release_page
   8.06%  [kernel]          [k] free_unref_page
   6.42%  [mvpp2]           [k] mvpp2_rx
   6.07%  [kernel]          [k] eth_type_trans
   5.18%  [kernel]          [k] __netif_receive_skb_core
   4.95%  [kernel]          [k] build_skb
   4.88%  [kernel]          [k] kmem_cache_free
   3.97%  [kernel]          [k] kmem_cache_alloc
   3.45%  [kernel]          [k] dev_gro_receive
   2.73%  [kernel]          [k] page_frag_free
   2.07%  [kernel]          [k] __alloc_pages_bulk
   1.99%  [kernel]          [k] arch_local_irq_save
   1.84%  [kernel]          [k] skb_release_data
   1.20%  [kernel]          [k] netif_receive_skb_list_internal

With packet rate stable at 1100 Kpps:

tx: 0 bps 0 pps rx: 532.7 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.6 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.4 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 532.1 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  12.91%  [kernel]          [k] eth_type_trans
  12.54%  [mvpp2]           [k] mvpp2_rx
   9.67%  [kernel]          [k] build_skb
   9.63%  [kernel]          [k] __netif_receive_skb_core
   8.44%  [kernel]          [k] page_pool_put_page
   8.07%  [kernel]          [k] kmem_cache_free
   7.79%  [kernel]          [k] kmem_cache_alloc
   6.86%  [kernel]          [k] dev_gro_receive
   3.19%  [kernel]          [k] skb_release_data
   2.41%  [kernel]          [k] netif_receive_skb_list_internal
   2.18%  [kernel]          [k] page_pool_refill_alloc_cache
   1.76%  [kernel]          [k] napi_gro_receive
   1.61%  [kernel]          [k] kfree_skb
   1.20%  [kernel]          [k] dma_sync_single_for_device
   1.16%  [mvpp2]           [k] mvpp2_poll
   1.12%  [mvpp2]           [k] mvpp2_read

With packet rate above 2100 Kpps:

tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2127 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2129 Kpps

The major performance increase is explained by the fact that the most CPU
consuming functions (page_pool_release_page, page_frag_free and
free_unref_page) are no longer called on a per packet basis.

The test was done by sending 64-byte Ethernet frames with an invalid
ethertype to the MacchiatoBIN, so the packets are dropped early in the
RX path.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index b2259bf1d299..f9c392a50143 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3964,7 +3964,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		}
 
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), pp);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
 					       bm_pool->buf_size, DMA_FROM_DEVICE,
-- 
2.31.1



* [PATCH net-next v6 5/5] mvneta: recycle buffers
  2021-05-21 16:15 [PATCH net-next v6 0/5] page_pool: recycle buffers Matteo Croce
                   ` (3 preceding siblings ...)
  2021-05-21 16:15 ` [PATCH net-next v6 4/5] mvpp2: recycle buffers Matteo Croce
@ 2021-05-21 16:15 ` Matteo Croce
  2021-05-28  0:44   ` Matteo Croce
  5 siblings, 0 replies; 19+ messages in thread
From: Matteo Croce @ 2021-05-21 16:15 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate increased by 10%,
from 269 Kpps to 296 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  21.78%  [kernel]          [k] __pi___inval_dcache_area
  21.66%  [mvneta]          [k] mvneta_rx_swbm
   7.00%  [kernel]          [k] kmem_cache_alloc
   6.05%  [kernel]          [k] eth_type_trans
   4.44%  [kernel]          [k] kmem_cache_free.part.0
   3.80%  [kernel]          [k] __netif_receive_skb_core
   3.68%  [kernel]          [k] dev_gro_receive
   3.65%  [kernel]          [k] get_page_from_freelist
   3.43%  [kernel]          [k] page_pool_release_page
   3.35%  [kernel]          [k] free_unref_page

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  24.10%  [kernel]          [k] __pi___inval_dcache_area
  23.02%  [mvneta]          [k] mvneta_rx_swbm
   7.19%  [kernel]          [k] kmem_cache_alloc
   6.50%  [kernel]          [k] eth_type_trans
   4.93%  [kernel]          [k] __netif_receive_skb_core
   4.77%  [kernel]          [k] kmem_cache_free.part.0
   3.93%  [kernel]          [k] dev_gro_receive
   3.03%  [kernel]          [k] build_skb
   2.91%  [kernel]          [k] page_pool_put_page
   2.85%  [kernel]          [k] __xdp_return

The test was done with mausezahn on the TX side with 64 byte raw
ethernet frames.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..c15ce06427d0 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 }
 
 static struct sk_buff *
-mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		/* We don't need to reset pp_recycle here. It's already set, so
+		 * just mark fragments for recycling.
+		 */
+		page_pool_store_mem_info(skb_frag_page(frag), pool);
 	}
 
 	return skb;
@@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
-		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		skb = mvneta_swbm_build_skb(pp, rxq->page_pool, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-- 
2.31.1



* Re: [PATCH net-next v6 0/5] page_pool: recycle buffers
  2021-05-21 16:15 [PATCH net-next v6 0/5] page_pool: recycle buffers Matteo Croce
@ 2021-05-28  0:44   ` Matteo Croce
  2021-05-21 16:15 ` [PATCH net-next v6 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: Matteo Croce @ 2021-05-28  0:44 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

On Fri, May 21, 2021 at 6:15 PM Matteo Croce <mcroce@linux.microsoft.com> wrote:
> Note that this series depends on the change "mm: fix struct page layout
> on 32-bit systems"[2] which is not yet in master.
>

I see that it just entered net-next:

commit 9ddb3c14afba8bc5950ed297f02d4ae05ff35cd1
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Fri May 14 17:27:24 2021 -0700

   mm: fix struct page layout on 32-bit systems

Regards,
-- 
per aspera ad upstream



* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-21 16:15 ` [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
@ 2021-06-03 18:45     ` Matteo Croce
  2021-06-04  7:52   ` Yunsheng Lin
  1 sibling, 0 replies; 19+ messages in thread
From: Matteo Croce @ 2021-06-03 18:45 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

On Fri, May 21, 2021 at 6:16 PM Matteo Croce <mcroce@linux.microsoft.com> wrote:
> +bool page_pool_return_skb_page(void *data)
> +{
> +       struct page_pool *pp;
> +       struct page *page;
> +
> +       page = virt_to_head_page(data);
> +       if (unlikely(page->pp_magic != PP_SIGNATURE))
> +               return false;
> +
> +       pp = (struct page_pool *)page->pp;
> +
> +       /* Driver set this to memory recycling info. Reset it on recycle.
> +        * This will *not* work for NIC using a split-page memory model.
> +        * The page will be returned to the pool here regardless of the
> +        * 'flipped' fragment being in use or not.
> +        */
> +       page->pp = NULL;
> +       page_pool_put_full_page(pp, virt_to_head_page(data), false);

Here I could just use the cached "page" instead of calling
virt_to_head_page() once again.
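Applied to the hunk quoted above, the simplification Matteo describes would look roughly like this (an illustrative sketch of the suggested follow-up, not the actual committed change):

```
 	page->pp = NULL;
-	page_pool_put_full_page(pp, virt_to_head_page(data), false);
+	page_pool_put_full_page(pp, page, false);
```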

-- 
per aspera ad upstream



* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-21 16:15 ` [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
  2021-06-03 18:45     ` Matteo Croce
@ 2021-06-04  7:52   ` Yunsheng Lin
  2021-06-04  8:42     ` Ilias Apalodimas
  1 sibling, 1 reply; 19+ messages in thread
From: Yunsheng Lin @ 2021-06-04  7:52 UTC (permalink / raw)
  To: Matteo Croce, netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On 2021/5/22 0:15, Matteo Croce wrote:
> From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> 
> Up to now several high speed NICs have custom mechanisms of recycling
> the allocated memory they use for their payloads.
> Our page_pool API already has recycling capabilities that are always
> used when we are running in 'XDP mode'. So let's tweak the API and the
> kernel network stack slightly and allow the recycling to happen even
> during the standard operation.
> The API doesn't take into account 'split page' policies used by those
> drivers currently, but can be extended once we have users for that.
> 
> The idea is to be able to intercept the packet on skb_release_data().
> If it's a buffer coming from our page_pool API recycle it back to the
> pool for further usage or just release the packet entirely.
> 
> To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
> a field in struct page (page->pp) to store the page_pool pointer.
> Storing the information in page->pp allows us to recycle both SKBs and
> their fragments.
> We could have skipped the skb bit entirely, since identical information
> can be derived from struct page. However, in an effort to affect the free
> path as little as possible, reading a single bit in the skb, which is
> already in cache, is better than deriving the same information from the
> data stored in the page.
> 
> The driver or page_pool has to take care of the sync operations on its own
> during buffer recycling, since the buffer is never unmapped after opting
> in to recycling.
> 
> Since the gain on the drivers depends on the architecture, we are not
> enabling recycling by default if the page_pool API is used on a driver.
> In order to enable recycling the driver must call skb_mark_for_recycle()
> to store the information we need for recycling in page->pp and
> enabling the recycling bit, or page_pool_store_mem_info() for a fragment.

The state of this patch in patchwork is "Not Applicable", so
you may need to respin it.

Some minor comment below:

> 
> Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Co-developed-by: Matteo Croce <mcroce@microsoft.com>
> Signed-off-by: Matteo Croce <mcroce@microsoft.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> ---
>  include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
>  include/net/page_pool.h |  9 +++++++++
>  net/core/page_pool.c    | 23 +++++++++++++++++++++++
>  net/core/skbuff.c       | 24 ++++++++++++++++++++----
>  4 files changed, 77 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 7fcfea7e7b21..057b40ad29bd 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -40,6 +40,9 @@
>  #if IS_ENABLED(CONFIG_NF_CONNTRACK)
>  #include <linux/netfilter/nf_conntrack_common.h>
>  #endif
> +#ifdef CONFIG_PAGE_POOL
> +#include <net/page_pool.h>
> +#endif
>  
>  /* The interface for checksum offload between the stack and networking drivers
>   * is as follows...
> @@ -667,6 +670,8 @@ typedef unsigned char *sk_buff_data_t;
>   *	@head_frag: skb was allocated from page fragments,
>   *		not allocated by kmalloc() or vmalloc().
>   *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
> + *	@pp_recycle: mark the packet for recycling instead of freeing (implies
> + *		page_pool support on driver)
>   *	@active_extensions: active extensions (skb_ext_id types)
>   *	@ndisc_nodetype: router type (from link layer)
>   *	@ooo_okay: allow the mapping of a socket to a queue to be changed
> @@ -791,10 +796,12 @@ struct sk_buff {
>  				fclone:2,
>  				peeked:1,
>  				head_frag:1,
> -				pfmemalloc:1;
> +				pfmemalloc:1,
> +				pp_recycle:1; /* page_pool recycle indicator */

The above comment seems unnecessary, since the comment added
earlier in this patch already explains it.

>  #ifdef CONFIG_SKB_EXTENSIONS
>  	__u8			active_extensions;
>  #endif
> +

Unnecessary change?

>  	/* fields enclosed in headers_start/headers_end are copied
>  	 * using a single memcpy() in __copy_skb_header()
>  	 */
> @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
>   */
>  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
>  {
> -	put_page(skb_frag_page(frag));
> +	struct page *page = skb_frag_page(frag);
> +
> +#ifdef CONFIG_PAGE_POOL
> +	if (recycle && page_pool_return_skb_page(page_address(page)))
> +		return;
> +#endif
> +	put_page(page);
>  }
>  
>  /**
> @@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
>   */
>  static inline void skb_frag_unref(struct sk_buff *skb, int f)
>  {
> -	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
> +	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
>  }
>  
>  /**
> @@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
>  #endif
>  }
>  
> +#ifdef CONFIG_PAGE_POOL
> +static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
> +					struct page_pool *pp)
> +{
> +	skb->pp_recycle = 1;
> +	page_pool_store_mem_info(page, pp);
> +}
> +#endif
> +
>  #endif	/* __KERNEL__ */
>  #endif	/* _LINUX_SKBUFF_H */
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index b4b6de909c93..7b9b6a1c61f5 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -146,6 +146,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
>  	return pool->p.dma_dir;
>  }
>  
> +bool page_pool_return_skb_page(void *data);
> +
>  struct page_pool *page_pool_create(const struct page_pool_params *params);
>  
>  #ifdef CONFIG_PAGE_POOL
> @@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
>  		spin_unlock_bh(&pool->ring.producer_lock);
>  }
>  
> +/* Store mem_info on struct page and use it while recycling skb frags */
> +static inline
> +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)

The conventional practice in page_pool.h seems to be to put "struct
page_pool" before the other parameters.

> +{
> +	page->pp = pp;
> +}
> +
>  #endif /* _NET_PAGE_POOL_H */
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index e1321bc9d316..2a020cca489f 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -628,3 +628,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
>  	}
>  }
>  EXPORT_SYMBOL(page_pool_update_nid);
> +
> +bool page_pool_return_skb_page(void *data)
> +{
> +	struct page_pool *pp;
> +	struct page *page;
> +
> +	page = virt_to_head_page(data);
> +	if (unlikely(page->pp_magic != PP_SIGNATURE))
> +		return false;
> +
> +	pp = (struct page_pool *)page->pp;
> +
> +	/* Driver set this to memory recycling info. Reset it on recycle.
> +	 * This will *not* work for NIC using a split-page memory model.
> +	 * The page will be returned to the pool here regardless of the
> +	 * 'flipped' fragment being in use or not.
> +	 */

I am not sure I understand how the last part of the comment relates
to the code below: no driver using the split-page memory model will
reach here, because those drivers will not call skb_mark_for_recycle(),
right?

> +	page->pp = NULL;
> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> +
> +	return true;
> +}
> +EXPORT_SYMBOL(page_pool_return_skb_page);


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-04  7:52   ` Yunsheng Lin
@ 2021-06-04  8:42     ` Ilias Apalodimas
  2021-06-05 16:06       ` David Ahern
  0 siblings, 1 reply; 19+ messages in thread
From: Ilias Apalodimas @ 2021-06-04  8:42 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

[...]
> > +	/* Driver set this to memory recycling info. Reset it on recycle.
> > +	 * This will *not* work for NIC using a split-page memory model.
> > +	 * The page will be returned to the pool here regardless of the
> > +	 * 'flipped' fragment being in use or not.
> > +	 */
> 
> I am not sure I understand how does the last part of comment related
> to the code below, as there is no driver using split-page memory model
> will reach here because those driver will not call skb_mark_for_recycle(),
> right?
> 

Yes, the comment is there to warn people (mlx5 only, actually) not to add
the recycling bit in their driver, because if they do it will *probably*
work, but they might get random corrupted packets which will be hard to
debug.

> > +	page->pp = NULL;
> > +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> > +
> > +	return true;
> > +}
> > +EXPORT_SYMBOL(page_pool_return_skb_page);
> 


* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-04  8:42     ` Ilias Apalodimas
@ 2021-06-05 16:06       ` David Ahern
  2021-06-05 16:34           ` Matteo Croce
  2021-06-07  4:35         ` Ilias Apalodimas
  0 siblings, 2 replies; 19+ messages in thread
From: David Ahern @ 2021-06-05 16:06 UTC (permalink / raw)
  To: Ilias Apalodimas, Yunsheng Lin
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn,
	Paolo Abeni, Sven Auhagen

On 6/4/21 2:42 AM, Ilias Apalodimas wrote:
> [...]
>>> +	/* Driver set this to memory recycling info. Reset it on recycle.
>>> +	 * This will *not* work for NIC using a split-page memory model.
>>> +	 * The page will be returned to the pool here regardless of the
>>> +	 * 'flipped' fragment being in use or not.
>>> +	 */
>>
>> I am not sure I understand how does the last part of comment related
>> to the code below, as there is no driver using split-page memory model
>> will reach here because those driver will not call skb_mark_for_recycle(),
>> right?
>>
> 
> Yes the comment is there to prohibit people (mlx5 only actually) to add the
> recycling bit on their driver.  Because if they do it will *probably* work
> but they might get random corrupted packets which will be hard to debug.
> 

What's the complexity for getting it to work with split page model?
Since 1500 is the default MTU, requiring a page per packet means a lot
of wasted memory.


* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-05 16:06       ` David Ahern
@ 2021-06-05 16:34           ` Matteo Croce
  2021-06-07  4:35         ` Ilias Apalodimas
  1 sibling, 0 replies; 19+ messages in thread
From: Matteo Croce @ 2021-06-05 16:34 UTC (permalink / raw)
  To: David Ahern
  Cc: Ilias Apalodimas, Yunsheng Lin, netdev, linux-mm, Ayush Sawal,
	Vinay Kumar Yadav, Rohit Maheshwari, David S. Miller,
	Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
	Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn,
	Paolo Abeni, Sven Auhagen

On Sat, Jun 5, 2021 at 6:06 PM David Ahern <dsahern@gmail.com> wrote:
>
> On 6/4/21 2:42 AM, Ilias Apalodimas wrote:
> > [...]
> >>> +   /* Driver set this to memory recycling info. Reset it on recycle.
> >>> +    * This will *not* work for NIC using a split-page memory model.
> >>> +    * The page will be returned to the pool here regardless of the
> >>> +    * 'flipped' fragment being in use or not.
> >>> +    */
> >>
> >> I am not sure I understand how does the last part of comment related
> >> to the code below, as there is no driver using split-page memory model
> >> will reach here because those driver will not call skb_mark_for_recycle(),
> >> right?
> >>
> >
> > Yes the comment is there to prohibit people (mlx5 only actually) to add the
> > recycling bit on their driver.  Because if they do it will *probably* work
> > but they might get random corrupted packets which will be hard to debug.
> >
>
> What's the complexity for getting it to work with split page model?
> Since 1500 is the default MTU, requiring a page per packet means a lot
> of wasted memory.

We could create a new memory model, e.g. MEM_TYPE_PAGE_SPLIT, and
restore the behavior present in the previous versions of this series,
which is to save xdp_mem_info in struct page.
As this could slightly impact performance, it can be added in a future
change when the drivers that use split pages want to adopt this
recycling API.

-- 
per aspera ad upstream



* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-05 16:34           ` Matteo Croce
  (?)
@ 2021-06-06 13:56           ` Tariq Toukan
  2021-06-07  4:38             ` Ilias Apalodimas
  -1 siblings, 1 reply; 19+ messages in thread
From: Tariq Toukan @ 2021-06-06 13:56 UTC (permalink / raw)
  To: Matteo Croce, David Ahern
  Cc: Ilias Apalodimas, Yunsheng Lin, netdev, linux-mm, Ayush Sawal,
	Vinay Kumar Yadav, Rohit Maheshwari, David S. Miller,
	Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
	Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn,
	Paolo Abeni, Sven Auhagen



On 6/5/2021 7:34 PM, Matteo Croce wrote:
> On Sat, Jun 5, 2021 at 6:06 PM David Ahern <dsahern@gmail.com> wrote:
>>
>> On 6/4/21 2:42 AM, Ilias Apalodimas wrote:
>>> [...]
>>>>> +   /* Driver set this to memory recycling info. Reset it on recycle.
>>>>> +    * This will *not* work for NIC using a split-page memory model.
>>>>> +    * The page will be returned to the pool here regardless of the
>>>>> +    * 'flipped' fragment being in use or not.
>>>>> +    */
>>>>
>>>> I am not sure I understand how does the last part of comment related
>>>> to the code below, as there is no driver using split-page memory model
>>>> will reach here because those driver will not call skb_mark_for_recycle(),
>>>> right?
>>>>
>>>
>>> Yes the comment is there to prohibit people (mlx5 only actually) to add the
>>> recycling bit on their driver.  Because if they do it will *probably* work
>>> but they might get random corrupted packets which will be hard to debug.
>>>
>>
>> What's the complexity for getting it to work with split page model?
>> Since 1500 is the default MTU, requiring a page per packet means a lot
>> of wasted memory.
> 
> We could create a new memory model, e.g. MEM_TYPE_PAGE_SPLIT, and
> restore the behavior present in the previous versions of this serie,
> which is, save xdp_mem_info in struct page.
> As this could slightly impact the performances, this can be added in a
> future change when the drivers which are doing it want to use this
> recycling api.
> 

The page-split model doesn't only help reduce memory waste, it also
increases cache locality, especially for aggregated GRO SKBs.

I'm looking forward to integrating the page-pool SKB recycling API into
the mlx5e datapath. For this, we need it to support the page-split model.

Let's see what's missing and how we can help make this happen.

Regards,
Tariq


* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-05 16:06       ` David Ahern
  2021-06-05 16:34           ` Matteo Croce
@ 2021-06-07  4:35         ` Ilias Apalodimas
  1 sibling, 0 replies; 19+ messages in thread
From: Ilias Apalodimas @ 2021-06-07  4:35 UTC (permalink / raw)
  To: David Ahern
  Cc: Yunsheng Lin, Matteo Croce, netdev, linux-mm, Ayush Sawal,
	Vinay Kumar Yadav, Rohit Maheshwari, David S. Miller,
	Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
	Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn,
	Paolo Abeni, Sven Auhagen

Hi David,

On Sat, Jun 05, 2021 at 10:06:30AM -0600, David Ahern wrote:
> On 6/4/21 2:42 AM, Ilias Apalodimas wrote:
> > [...]
> >>> +	/* Driver set this to memory recycling info. Reset it on recycle.
> >>> +	 * This will *not* work for NIC using a split-page memory model.
> >>> +	 * The page will be returned to the pool here regardless of the
> >>> +	 * 'flipped' fragment being in use or not.
> >>> +	 */
> >>
> >> I am not sure I understand how the last part of the comment relates
> >> to the code below, as no driver using the split-page memory model
> >> will reach here, because those drivers will not call skb_mark_for_recycle(),
> >> right?
> >>
> > 
> > Yes, the comment is there to prohibit people (mlx5 only, actually) from adding
> > the recycling bit in their driver.  Because if they do, it will *probably* work,
> > but they might get random corrupted packets which will be hard to debug.
> > 
> 
> What's the complexity of getting it to work with the split-page model?
> Since 1500 is the default MTU, requiring a page per packet means a lot
> of wasted memory.

It boils down to 'can we re-use the page, or is someone still using it?'.
Yunsheng sent a patch in an earlier series that implements this with
ref counters. As Matteo mentions, we can also add another page pool type.

In theory, none of those sounds too hard, but we'll have to code it and see.

/Ilias

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-06 13:56           ` Tariq Toukan
@ 2021-06-07  4:38             ` Ilias Apalodimas
  2021-06-07 11:14               ` Tariq Toukan
  0 siblings, 1 reply; 19+ messages in thread
From: Ilias Apalodimas @ 2021-06-07  4:38 UTC (permalink / raw)
  To: Tariq Toukan
  Cc: Matteo Croce, David Ahern, Yunsheng Lin, netdev, linux-mm,
	Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn,
	Paolo Abeni, Sven Auhagen

Hi Tariq,

> > > > 
> > > > Yes, the comment is there to prohibit people (mlx5 only, actually) from adding
> > > > the recycling bit in their driver.  Because if they do, it will *probably* work,
> > > > but they might get random corrupted packets which will be hard to debug.
> > > > 
> > > 
> > > What's the complexity of getting it to work with the split-page model?
> > > Since 1500 is the default MTU, requiring a page per packet means a lot
> > > of wasted memory.
> > 
> > We could create a new memory model, e.g. MEM_TYPE_PAGE_SPLIT, and
> > restore the behavior present in the previous versions of this series,
> > that is, saving xdp_mem_info in struct page.
> > As this could slightly impact performance, it can be added in a
> > future change, when the drivers that do page splitting want to use this
> > recycling API.
> > 
> 
> The page-split model doesn't only reduce memory waste, it also increases
> cache locality, especially for aggregated GRO SKBs.
> 
> I'm looking forward to integrating the page-pool SKB recycling API into
> the mlx5e datapath. For this, we need it to support the page-split model.
> 
> Let's see what's missing and how we can help make this happen.

Yes, that's the final goal.  As I said, I don't think adding the page-split
model will fundamentally change the current patchset.  So IMHO we should
get this in first, make sure that everything is fine, and then add code for
the mlx cards.

Regards
/Ilias

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-07  4:38             ` Ilias Apalodimas
@ 2021-06-07 11:14               ` Tariq Toukan
  0 siblings, 0 replies; 19+ messages in thread
From: Tariq Toukan @ 2021-06-07 11:14 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: Matteo Croce, David Ahern, Yunsheng Lin, netdev, linux-mm,
	Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn,
	Paolo Abeni, Sven Auhagen



On 6/7/2021 7:38 AM, Ilias Apalodimas wrote:
> Hi Tariq,
> 
>>>>>
>>>>> Yes, the comment is there to prohibit people (mlx5 only, actually) from adding
>>>>> the recycling bit in their driver.  Because if they do, it will *probably* work,
>>>>> but they might get random corrupted packets which will be hard to debug.
>>>>>
>>>>
>>>> What's the complexity of getting it to work with the split-page model?
>>>> Since 1500 is the default MTU, requiring a page per packet means a lot
>>>> of wasted memory.
>>>
>>> We could create a new memory model, e.g. MEM_TYPE_PAGE_SPLIT, and
>>> restore the behavior present in the previous versions of this series,
>>> that is, saving xdp_mem_info in struct page.
>>> As this could slightly impact performance, it can be added in a
>>> future change, when the drivers that do page splitting want to use this
>>> recycling API.
>>>
>>
>> The page-split model doesn't only reduce memory waste, it also increases
>> cache locality, especially for aggregated GRO SKBs.
>>
>> I'm looking forward to integrating the page-pool SKB recycling API into
>> the mlx5e datapath. For this, we need it to support the page-split model.
>>
>> Let's see what's missing and how we can help make this happen.
> 
> Yes that's the final goal.  As I said I don't think adding the page split
> model will fundamentally change the current patchset.  So imho we should
> get this in first, make sure that everything is fine, and then add code for
> the mlx cards.
> 

Sounds good

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-06-07 11:14 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-21 16:15 [PATCH net-next v6 0/5] page_pool: recycle buffers Matteo Croce
2021-05-21 16:15 ` [PATCH net-next v6 1/5] mm: add a signature in struct page Matteo Croce
2021-05-21 16:15 ` [PATCH net-next v6 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
2021-05-21 16:15 ` [PATCH net-next v6 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
2021-06-03 18:45   ` Matteo Croce
2021-06-03 18:45     ` Matteo Croce
2021-06-04  7:52   ` Yunsheng Lin
2021-06-04  8:42     ` Ilias Apalodimas
2021-06-05 16:06       ` David Ahern
2021-06-05 16:34         ` Matteo Croce
2021-06-05 16:34           ` Matteo Croce
2021-06-06 13:56           ` Tariq Toukan
2021-06-07  4:38             ` Ilias Apalodimas
2021-06-07 11:14               ` Tariq Toukan
2021-06-07  4:35         ` Ilias Apalodimas
2021-05-21 16:15 ` [PATCH net-next v6 4/5] mvpp2: recycle buffers Matteo Croce
2021-05-21 16:15 ` [PATCH net-next v6 5/5] mvneta: " Matteo Croce
2021-05-28  0:44 ` [PATCH net-next v6 0/5] page_pool: " Matteo Croce
2021-05-28  0:44   ` Matteo Croce
