* [PATCH net-next v5 0/5] page_pool: recycle buffers
@ 2021-05-13 16:58 Matteo Croce
  2021-05-13 16:58 ` [PATCH net-next v5 1/5] mm: add a signature in struct page Matteo Croce
                   ` (4 more replies)
  0 siblings, 5 replies; 22+ messages in thread
From: Matteo Croce @ 2021-05-13 16:58 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is a respin of [1].

This patchset shows the plans for allowing page_pool to handle and
maintain DMA map/unmap of the pages it serves to the driver. For this
to work, a return hook in the network core is introduced.

The overall purpose is to simplify drivers, by providing a page
allocation API that does recycling, such that each driver doesn't have
to reinvent its own recycling scheme. Using page_pool in a driver
does not require implementing XDP support, but it makes it trivially
easy to do so. Instead of allocating buffers specifically for SKBs
we now allocate a generic buffer and either wrap it in an SKB
(via build_skb) or create an XDP frame.
The recycling code leverages the XDP recycle APIs.
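
As a rough sketch of the resulting driver pattern (using the helpers
introduced later in this series; 'pool' and 'frag_size' stand in for
the driver's own state, error handling omitted):

	struct page *page = page_pool_dev_alloc_pages(pool);
	void *data = page_address(page);
	struct sk_buff *skb;

	/* ... receive DMA into 'data' ... */

	skb = build_skb(data, frag_size);
	/* let the stack return the page to the pool on skb free */
	skb_mark_for_recycle(skb, page, pool);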

The Marvell mvpp2 and mvneta drivers are used in this patchset to
demonstrate how to use the API, and were tested on MacchiatoBIN
and EspressoBIN boards respectively.

Please let this go in on a future -rc1, to allow enough time for
wider testing.

Note that this series depends on the change "mm: fix struct page layout
on 32-bit systems"[2], which is not yet in master.

v4 -> v5:
- move the signature so it doesn't alias with page->mapping
- use an invalid pointer as magic
- incorporate Matthew Wilcox's changes for pfmemalloc pages
- move the __skb_frag_unref() changes to a preliminary patch
- refactor some cpp directives
- only attempt recycling if skb->head_frag
- clear skb->pp_recycle in pskb_expand_head()

v3 -> v4:
- store a pointer to page_pool instead of xdp_mem_info
- drop a patch which reduces xdp_mem_info size
- do the recycling in the page_pool code instead of xdp_return
- remove some unused header includes
- remove some useless forward declarations

v2 -> v3:
- added missing SOBs
- CCed the MM people

v1 -> v2:
- fix a commit message
- avoid setting pp_recycle multiple times on mvneta
- squash two patches to avoid breaking bisect

[1] https://lore.kernel.org/netdev/154413868810.21735.572808840657728172.stgit@firesoul/
[2] https://lore.kernel.org/linux-mm/20210510153211.1504886-1-willy@infradead.org/

Ilias Apalodimas (1):
  page_pool: Allow drivers to hint on SKB recycling

Matteo Croce (4):
  mm: add a signature in struct page
  skbuff: add a parameter to __skb_frag_unref
  mvpp2: recycle buffers
  mvneta: recycle buffers

 drivers/net/ethernet/marvell/mvneta.c         | 11 +++---
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 17 +++++-----
 drivers/net/ethernet/marvell/sky2.c           |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |  2 +-
 include/linux/mm.h                            | 12 ++++---
 include/linux/mm_types.h                      | 12 +++++++
 include/linux/skbuff.h                        | 34 ++++++++++++++++---
 include/net/page_pool.h                       | 11 ++++++
 net/core/page_pool.c                          | 27 +++++++++++++++
 net/core/skbuff.c                             | 25 +++++++++++---
 net/tls/tls_device.c                          |  2 +-
 11 files changed, 126 insertions(+), 29 deletions(-)

-- 
2.31.1



* [PATCH net-next v5 1/5] mm: add a signature in struct page
  2021-05-13 16:58 [PATCH net-next v5 0/5] page_pool: recycle buffers Matteo Croce
@ 2021-05-13 16:58 ` Matteo Croce
  2021-05-14  1:00   ` Matthew Wilcox
  2021-05-13 16:58 ` [PATCH net-next v5 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Matteo Croce @ 2021-05-13 16:58 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is needed by the page_pool to avoid recycling a page not allocated
via page_pool.

The page->signature field is aliased to page->lru.next and
page->compound_head, but it can't be set by mistake because the
signature value is a bad pointer, and can't trigger a false positive
in PageTail() because the last bit is 0.
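
For reference on the last point, PageTail() (simplified) only tests
bit 0 of page->compound_head:

	static inline int PageTail(struct page *page)
	{
		return READ_ONCE(page->compound_head) & 1;
	}

and PP_SIGNATURE below is POISON_POINTER_DELTA + 0x40, so bit 0 is
always clear.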

Co-developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 include/linux/mm.h       | 12 +++++++-----
 include/linux/mm_types.h | 12 ++++++++++++
 include/net/page_pool.h  |  2 ++
 net/core/page_pool.c     |  4 ++++
 4 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 322ec61d0da7..48268d2d0282 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);
 static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
-	 * Page index cannot be this large so this must be
-	 * a pfmemalloc page.
+	 * This is not a tail page; compound_head of a head page is unused
+	 * at return from the page allocator, and will be overwritten
+	 * by callers who do not care whether the page came from the
+	 * reserves.
 	 */
-	return page->index == -1UL;
+	return page->compound_head & 2;
 }
 
 /*
@@ -1680,12 +1682,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
  */
 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->compound_head = 2;
 }
 
 static inline void clear_page_pfmemalloc(struct page *page)
 {
-	page->index = 0;
+	page->compound_head = 0;
 }
 
 /*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5aacc1c10a45..44cf328e94e2 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -96,6 +96,18 @@ struct page {
 			unsigned long private;
 		};
 		struct {	/* page_pool used by netstack */
+			/**
+			 * @pp_magic: magic value to avoid recycling non
+			 * page_pool allocated pages.
+			 * It aliases with page->lru.next
+			 */
+			unsigned long pp_magic;
+			/**
+			 * @pp: pointer to page_pool.
+			 * It aliases with page->lru.prev
+			 */
+			struct page_pool *pp;
+			unsigned long _pp_mapping_pad;
 			/**
 			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b4b6de909c93..24b3d42c62c0 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -63,6 +63,8 @@
  */
 #define PP_ALLOC_CACHE_SIZE	128
 #define PP_ALLOC_CACHE_REFILL	64
+#define PP_SIGNATURE		(POISON_POINTER_DELTA + 0x40)
+
 struct pp_alloc_cache {
 	u32 count;
 	struct page *cache[PP_ALLOC_CACHE_SIZE];
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3c4c4c7a0402..9de5d8c08c17 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -221,6 +221,8 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	page->pp_magic = PP_SIGNATURE;
+
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
@@ -341,6 +343,8 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 			     DMA_ATTR_SKIP_CPU_SYNC);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
+	page->pp_magic = 0;
+
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
-- 
2.31.1



* [PATCH net-next v5 2/5] skbuff: add a parameter to __skb_frag_unref
  2021-05-13 16:58 [PATCH net-next v5 0/5] page_pool: recycle buffers Matteo Croce
  2021-05-13 16:58 ` [PATCH net-next v5 1/5] mm: add a signature in struct page Matteo Croce
@ 2021-05-13 16:58 ` Matteo Croce
  2021-05-13 16:58 ` [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 22+ messages in thread
From: Matteo Croce @ 2021-05-13 16:58 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is a prerequisite patch; the next one enables recycling of
skbs and fragments. Add an extra argument to __skb_frag_unref() to
handle recycling, and update the current users of the function
accordingly.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/sky2.c        | 2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
 include/linux/skbuff.h                     | 8 +++++---
 net/core/skbuff.c                          | 4 ++--
 net/tls/tls_device.c                       | 2 +-
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 222c32367b2c..aa0cde1dc5c0 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2503,7 +2503,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
 
 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag);
+			__skb_frag_unref(frag, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index e35e4d7ef4d1..cea62b8f554c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbf820a50a39..7fcfea7e7b21 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3081,10 +3081,12 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: recycle the page if allocated via page_pool
  *
- * Releases a reference on the paged fragment @frag.
+ * Releases a reference on the paged fragment @frag
+ * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
 	put_page(skb_frag_page(frag));
 }
@@ -3098,7 +3100,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ad22870298c..12b7e90dd2b5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -664,7 +664,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i]);
+		__skb_frag_unref(&shinfo->frags[i], false);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -3495,7 +3495,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom);
+		__skb_frag_unref(fragfrom, false);
 	}
 
 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 76a6f8c2eec4..ad11db2c4f63 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -127,7 +127,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;
 
 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i]);
+		__skb_frag_unref(&record->frags[i], false);
 	kfree(record);
 }
 
-- 
2.31.1



* [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-13 16:58 [PATCH net-next v5 0/5] page_pool: recycle buffers Matteo Croce
  2021-05-13 16:58 ` [PATCH net-next v5 1/5] mm: add a signature in struct page Matteo Croce
  2021-05-13 16:58 ` [PATCH net-next v5 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
@ 2021-05-13 16:58 ` Matteo Croce
  2021-05-14  3:39   ` Yunsheng Lin
  2021-05-13 16:58 ` [PATCH net-next v5 4/5] mvpp2: recycle buffers Matteo Croce
  2021-05-13 16:58 ` [PATCH net-next v5 5/5] mvneta: " Matteo Croce
  4 siblings, 1 reply; 22+ messages in thread
From: Matteo Croce @ 2021-05-13 16:58 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Up to now several high speed NICs have had custom mechanisms for
recycling the allocated memory they use for their payloads.
Our page_pool API already has recycling capabilities that are always
used when we are running in 'XDP mode'. So let's tweak the API and the
kernel network stack slightly and allow the recycling to happen even
during standard operation.
The API doesn't currently take into account 'split page' policies used
by those drivers, but it can be extended once we have users for that.

The idea is to be able to intercept the packet on skb_release_data().
If it's a buffer coming from our page_pool API, recycle it back to the
pool for further usage or just release the packet entirely.

To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
a field in struct page (page->pp) to store the page_pool pointer.
Storing the information in page->pp allows us to recycle both SKBs and
their fragments.
The SKB bit is needed for a couple of reasons. First of all, in an effort
to affect the free path as little as possible, reading a single bit
is better than trying to derive identical information from the data
stored in the page. We do have a special mark in the page that won't
allow this to happen, but again deciding without having to read the
entire page is preferable.

The driver has to take care of the sync operations on its own
during the buffer recycling, since the buffer is, after opting in to
the recycling, never unmapped.

Since the gain on the drivers depends on the architecture, we are not
enabling recycling by default if the page_pool API is used on a driver.
In order to enable recycling, the driver must call skb_mark_for_recycle()
to store the information we need for recycling in page->pp and enable
the recycling bit, or page_pool_store_mem_info() for a fragment.
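
Illustrative opt-in from a driver's RX path (a sketch based on the
helpers below; 'pool', 'data' and 'frag' are the driver's own):

	skb = build_skb(data, frag_size);
	...
	/* whole buffer: set skb->pp_recycle and store the pool in page->pp */
	skb_mark_for_recycle(skb, virt_to_page(data), pool);

	/* extra fragments: pp_recycle is already set, just store the pool */
	page_pool_store_mem_info(skb_frag_page(frag), pool);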

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Co-developed-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
 include/net/page_pool.h |  9 +++++++++
 net/core/page_pool.c    | 23 +++++++++++++++++++++++
 net/core/skbuff.c       | 25 +++++++++++++++++++++----
 4 files changed, 78 insertions(+), 7 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7fcfea7e7b21..057b40ad29bd 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -40,6 +40,9 @@
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 /* The interface for checksum offload between the stack and networking drivers
  * is as follows...
@@ -667,6 +670,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@head_frag: skb was allocated from page fragments,
  *		not allocated by kmalloc() or vmalloc().
  *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ *	@pp_recycle: mark the packet for recycling instead of freeing (implies
+ *		page_pool support on driver)
  *	@active_extensions: active extensions (skb_ext_id types)
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
@@ -791,10 +796,12 @@ struct sk_buff {
 				fclone:2,
 				peeked:1,
 				head_frag:1,
-				pfmemalloc:1;
+				pfmemalloc:1,
+				pp_recycle:1; /* page_pool recycle indicator */
 #ifdef CONFIG_SKB_EXTENSIONS
 	__u8			active_extensions;
 #endif
+
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
 	 */
@@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-	put_page(skb_frag_page(frag));
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && page_pool_return_skb_page(page_address(page)))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 /**
@@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_PAGE_POOL
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	page_pool_store_mem_info(page, pp);
+}
+#endif
+
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_SKBUFF_H */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 24b3d42c62c0..ce75abeddb29 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -148,6 +148,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
+bool page_pool_return_skb_page(void *data);
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 #ifdef CONFIG_PAGE_POOL
@@ -253,4 +255,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
 		spin_unlock_bh(&pool->ring.producer_lock);
 }
 
+/* Store mem_info on struct page and use it while recycling skb frags */
+static inline
+void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
+{
+	page->pp = pp;
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9de5d8c08c17..fa9f17db7c48 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -626,3 +626,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool page_pool_return_skb_page(void *data)
+{
+	struct page_pool *pp;
+	struct page *page;
+
+	page = virt_to_head_page(data);
+	if (unlikely(page->pp_magic != PP_SIGNATURE))
+		return false;
+
+	pp = (struct page_pool *)page->pp;
+
+	/* Driver set this to memory recycling info. Reset it on recycle.
+	 * This will *not* work for NIC using a split-page memory model.
+	 * The page will be returned to the pool here regardless of the
+	 * 'flipped' fragment being in use or not.
+	 */
+	page->pp = NULL;
+	page_pool_put_full_page(pp, virt_to_head_page(data), false);
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_return_skb_page);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12b7e90dd2b5..9581af44d587 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -70,6 +70,9 @@
 #include <net/xfrm.h>
 #include <net/mpls.h>
 #include <net/mptcp.h>
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 #include <linux/uaccess.h>
 #include <trace/events/skb.h>
@@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
 {
 	unsigned char *head = skb->head;
 
-	if (skb->head_frag)
+	if (skb->head_frag) {
+#ifdef CONFIG_PAGE_POOL
+		if (skb->pp_recycle && page_pool_return_skb_page(head))
+			return;
+#endif
 		skb_free_frag(head);
-	else
+	} else {
 		kfree(head);
+	}
 }
 
 static void skb_release_data(struct sk_buff *skb)
@@ -664,7 +672,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -1046,6 +1054,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
 	n->nohdr = 0;
 	n->peeked = 0;
 	C(pfmemalloc);
+	C(pp_recycle);
 	n->destructor = NULL;
 	C(tail);
 	C(end);
@@ -1725,6 +1734,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	skb->cloned   = 0;
 	skb->hdr_len  = 0;
 	skb->nohdr    = 0;
+	skb->pp_recycle = 0;
 	atomic_set(&skb_shinfo(skb)->dataref, 1);
 
 	skb_metadata_clear(skb);
@@ -3495,7 +3505,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, false);
+		__skb_frag_unref(fragfrom, skb->pp_recycle);
 	}
 
 	/* Reposition in the original skb */
@@ -5285,6 +5295,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	if (skb_cloned(to))
 		return false;
 
+	/* We can't coalesce skb that are allocated from slab and page_pool
+	 * The recycle mark is on the skb, so that might end up trying to
+	 * recycle slab allocated skb->head
+	 */
+	if (to->pp_recycle != from->pp_recycle)
+		return false;
+
 	if (len <= skb_tailroom(to)) {
 		if (len)
 			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
-- 
2.31.1



* [PATCH net-next v5 4/5] mvpp2: recycle buffers
  2021-05-13 16:58 [PATCH net-next v5 0/5] page_pool: recycle buffers Matteo Croce
                   ` (2 preceding siblings ...)
  2021-05-13 16:58 ` [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
@ 2021-05-13 16:58 ` Matteo Croce
  2021-05-13 18:20   ` Russell King (Oracle)
  2021-05-13 16:58 ` [PATCH net-next v5 5/5] mvneta: " Matteo Croce
  4 siblings, 1 reply; 22+ messages in thread
From: Matteo Croce @ 2021-05-13 16:58 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate is more than doubled,
from 962 Kpps to 2047 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  30.67%  [kernel]          [k] page_pool_release_page
   8.37%  [kernel]          [k] get_page_from_freelist
   7.34%  [kernel]          [k] free_unref_page
   6.47%  [mvpp2]           [k] mvpp2_rx
   4.69%  [kernel]          [k] eth_type_trans
   4.55%  [kernel]          [k] __netif_receive_skb_core
   4.40%  [kernel]          [k] build_skb
   4.29%  [kernel]          [k] kmem_cache_free
   4.00%  [kernel]          [k] kmem_cache_alloc
   3.81%  [kernel]          [k] dev_gro_receive

With packet rate stable at 962 Kpps:

tx: 0 bps 0 pps rx: 477.4 Mbps 962.6 Kpps
tx: 0 bps 0 pps rx: 477.6 Mbps 962.8 Kpps
tx: 0 bps 0 pps rx: 477.6 Mbps 962.9 Kpps
tx: 0 bps 0 pps rx: 477.2 Mbps 962.1 Kpps
tx: 0 bps 0 pps rx: 477.5 Mbps 962.7 Kpps

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  12.75%  [mvpp2]           [k] mvpp2_rx
   9.56%  [kernel]          [k] __netif_receive_skb_core
   9.29%  [kernel]          [k] build_skb
   9.27%  [kernel]          [k] eth_type_trans
   8.39%  [kernel]          [k] kmem_cache_alloc
   7.85%  [kernel]          [k] kmem_cache_free
   7.36%  [kernel]          [k] page_pool_put_page
   6.45%  [kernel]          [k] dev_gro_receive
   4.72%  [kernel]          [k] __xdp_return
   3.06%  [kernel]          [k] page_pool_refill_alloc_cache

With packet rate above 2000 Kpps:

tx: 0 bps 0 pps rx: 1015 Mbps 2046 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps

The major performance increase is explained by the fact that the most
CPU-consuming functions (page_pool_release_page, get_page_from_freelist
and free_unref_page) are no longer called on a per-packet basis.

The test was done by sending 64 byte ethernet frames with an invalid
ethertype to the MacchiatoBIN, so the packets are dropped early in the
RX path.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index b2259bf1d299..9dceabece56c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3847,6 +3847,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 	struct mvpp2_pcpu_stats ps = {};
 	enum dma_data_direction dma_dir;
 	struct bpf_prog *xdp_prog;
+	struct xdp_rxq_info *rxqi;
 	struct xdp_buff xdp;
 	int rx_received;
 	int rx_done = 0;
@@ -3912,15 +3913,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		else
 			frag_size = bm_pool->frag_size;
 
-		if (xdp_prog) {
-			struct xdp_rxq_info *xdp_rxq;
+		if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
+			rxqi = &rxq->xdp_rxq_short;
+		else
+			rxqi = &rxq->xdp_rxq_long;
 
-			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
-				xdp_rxq = &rxq->xdp_rxq_short;
-			else
-				xdp_rxq = &rxq->xdp_rxq_long;
+		if (xdp_prog) {
+			xdp.rxq = rxqi;
 
-			xdp_init_buff(&xdp, PAGE_SIZE, xdp_rxq);
+			xdp_init_buff(&xdp, PAGE_SIZE, rxqi);
 			xdp_prepare_buff(&xdp, data,
 					 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
 					 rx_bytes, false);
@@ -3964,7 +3965,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		}
 
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), pp);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
 					       bm_pool->buf_size, DMA_FROM_DEVICE,
-- 
2.31.1



* [PATCH net-next v5 5/5] mvneta: recycle buffers
  2021-05-13 16:58 [PATCH net-next v5 0/5] page_pool: recycle buffers Matteo Croce
                   ` (3 preceding siblings ...)
  2021-05-13 16:58 ` [PATCH net-next v5 4/5] mvpp2: recycle buffers Matteo Croce
@ 2021-05-13 16:58 ` Matteo Croce
  2021-05-13 18:25   ` Russell King (Oracle)
  4 siblings, 1 reply; 22+ messages in thread
From: Matteo Croce @ 2021-05-13 16:58 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate increased di 10%,
from 269 Kpps to 296 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  21.78%  [kernel]          [k] __pi___inval_dcache_area
  21.66%  [mvneta]          [k] mvneta_rx_swbm
   7.00%  [kernel]          [k] kmem_cache_alloc
   6.05%  [kernel]          [k] eth_type_trans
   4.44%  [kernel]          [k] kmem_cache_free.part.0
   3.80%  [kernel]          [k] __netif_receive_skb_core
   3.68%  [kernel]          [k] dev_gro_receive
   3.65%  [kernel]          [k] get_page_from_freelist
   3.43%  [kernel]          [k] page_pool_release_page
   3.35%  [kernel]          [k] free_unref_page

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  24.10%  [kernel]          [k] __pi___inval_dcache_area
  23.02%  [mvneta]          [k] mvneta_rx_swbm
   7.19%  [kernel]          [k] kmem_cache_alloc
   6.50%  [kernel]          [k] eth_type_trans
   4.93%  [kernel]          [k] __netif_receive_skb_core
   4.77%  [kernel]          [k] kmem_cache_free.part.0
   3.93%  [kernel]          [k] dev_gro_receive
   3.03%  [kernel]          [k] build_skb
   2.91%  [kernel]          [k] page_pool_put_page
   2.85%  [kernel]          [k] __xdp_return

The test was done with mausezahn on the TX side with 64 byte raw
ethernet frames.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..6d2f8dce4900 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 }
 
 static struct sk_buff *
-mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		/* We don't need to reset pp_recycle here. It's already set, so
+		 * just mark fragments for recycling.
+		 */
+		page_pool_store_mem_info(skb_frag_page(frag), pool);
 	}
 
 	return skb;
@@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
-		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		skb = mvneta_swbm_build_skb(pp, pp, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-- 
2.31.1



* Re: [PATCH net-next v5 4/5] mvpp2: recycle buffers
  2021-05-13 16:58 ` [PATCH net-next v5 4/5] mvpp2: recycle buffers Matteo Croce
@ 2021-05-13 18:20   ` Russell King (Oracle)
  2021-05-13 23:52     ` Matteo Croce
  0 siblings, 1 reply; 22+ messages in thread
From: Russell King (Oracle) @ 2021-05-13 18:20 UTC (permalink / raw)
  To: Matteo Croce
  Cc: netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

On Thu, May 13, 2021 at 06:58:45PM +0200, Matteo Croce wrote:
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> index b2259bf1d299..9dceabece56c 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -3847,6 +3847,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
>  	struct mvpp2_pcpu_stats ps = {};
>  	enum dma_data_direction dma_dir;
>  	struct bpf_prog *xdp_prog;
> +	struct xdp_rxq_info *rxqi;
>  	struct xdp_buff xdp;
>  	int rx_received;
>  	int rx_done = 0;
> @@ -3912,15 +3913,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
>  		else
>  			frag_size = bm_pool->frag_size;
>  
> +		if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
> +			rxqi = &rxq->xdp_rxq_short;
> +		else
> +			rxqi = &rxq->xdp_rxq_long;
>  
> +		if (xdp_prog) {
> +			xdp.rxq = rxqi;
>  
> +			xdp_init_buff(&xdp, PAGE_SIZE, rxqi);
>  			xdp_prepare_buff(&xdp, data,
>  					 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
>  					 rx_bytes, false);
> @@ -3964,7 +3965,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
>  		}
>  
>  		if (pp)
> +			skb_mark_for_recycle(skb, virt_to_page(data), pp);
>  		else
>  			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
>  					       bm_pool->buf_size, DMA_FROM_DEVICE,

Looking at the above, in which I've only quoted the _resulting_ code
after your patch, I don't see why you have moved the
"bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE" conditional outside of
the test for xdp_prog - I don't see rxqi being used except within that
conditional. Please can you explain the reasoning there?

Thanks.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!


* Re: [PATCH net-next v5 5/5] mvneta: recycle buffers
  2021-05-13 16:58 ` [PATCH net-next v5 5/5] mvneta: " Matteo Croce
@ 2021-05-13 18:25   ` Russell King (Oracle)
  0 siblings, 0 replies; 22+ messages in thread
From: Russell King (Oracle) @ 2021-05-13 18:25 UTC (permalink / raw)
  To: Matteo Croce
  Cc: netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

On Thu, May 13, 2021 at 06:58:46PM +0200, Matteo Croce wrote:
> From: Matteo Croce <mcroce@microsoft.com>
> 
> Use the new recycling API for page_pool.
> In a drop rate test, the packet rate increased di 10%,

Typo - "by" ?

> from 269 Kpps to 296 Kpps.
> 
> perf top on a stock system shows:
> 
> Overhead  Shared Object     Symbol
>   21.78%  [kernel]          [k] __pi___inval_dcache_area
>   21.66%  [mvneta]          [k] mvneta_rx_swbm
>    7.00%  [kernel]          [k] kmem_cache_alloc
>    6.05%  [kernel]          [k] eth_type_trans
>    4.44%  [kernel]          [k] kmem_cache_free.part.0
>    3.80%  [kernel]          [k] __netif_receive_skb_core
>    3.68%  [kernel]          [k] dev_gro_receive
>    3.65%  [kernel]          [k] get_page_from_freelist
>    3.43%  [kernel]          [k] page_pool_release_page
>    3.35%  [kernel]          [k] free_unref_page
> 
> And this is the same output with recycling enabled:
> 
> Overhead  Shared Object     Symbol
>   24.10%  [kernel]          [k] __pi___inval_dcache_area
>   23.02%  [mvneta]          [k] mvneta_rx_swbm
>    7.19%  [kernel]          [k] kmem_cache_alloc
>    6.50%  [kernel]          [k] eth_type_trans
>    4.93%  [kernel]          [k] __netif_receive_skb_core
>    4.77%  [kernel]          [k] kmem_cache_free.part.0
>    3.93%  [kernel]          [k] dev_gro_receive
>    3.03%  [kernel]          [k] build_skb
>    2.91%  [kernel]          [k] page_pool_put_page
>    2.85%  [kernel]          [k] __xdp_return
> 
> The test was done with mausezahn on the TX side with 64 byte raw
> ethernet frames.
> 
> Signed-off-by: Matteo Croce <mcroce@microsoft.com>

Other than the typo, I have no objection to the patch.

Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

> ---
>  drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 7d5cd9bc6c99..6d2f8dce4900 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
>  }
>  
>  static struct sk_buff *
> -mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> +mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
>  		      struct xdp_buff *xdp, u32 desc_status)
>  {
>  	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> @@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  	if (!skb)
>  		return ERR_PTR(-ENOMEM);
>  
> -	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> +	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
>  
>  	skb_reserve(skb, xdp->data - xdp->data_hard_start);
>  	skb_put(skb, xdp->data_end - xdp->data);
> @@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
>  				skb_frag_page(frag), skb_frag_off(frag),
>  				skb_frag_size(frag), PAGE_SIZE);
> -		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
> +		/* We don't need to reset pp_recycle here. It's already set, so
> +		 * just mark fragments for recycling.
> +		 */
> +		page_pool_store_mem_info(skb_frag_page(frag), pool);
>  	}
>  
>  	return skb;
> @@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
>  		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
>  			goto next;
>  
> -		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
> +		skb = mvneta_swbm_build_skb(pp, pp, &xdp_buf, desc_status);
>  		if (IS_ERR(skb)) {
>  			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
>  
> -- 
> 2.31.1
> 
> 

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!


* Re: [PATCH net-next v5 4/5] mvpp2: recycle buffers
  2021-05-13 18:20   ` Russell King (Oracle)
@ 2021-05-13 23:52     ` Matteo Croce
  0 siblings, 0 replies; 22+ messages in thread
From: Matteo Croce @ 2021-05-13 23:52 UTC (permalink / raw)
  To: Russell King (Oracle)
  Cc: netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

On Thu, May 13, 2021 at 8:21 PM Russell King (Oracle)
<linux@armlinux.org.uk> wrote:
>
> On Thu, May 13, 2021 at 06:58:45PM +0200, Matteo Croce wrote:
> > diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > index b2259bf1d299..9dceabece56c 100644
> > --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > @@ -3847,6 +3847,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> >       struct mvpp2_pcpu_stats ps = {};
> >       enum dma_data_direction dma_dir;
> >       struct bpf_prog *xdp_prog;
> > +     struct xdp_rxq_info *rxqi;
> >       struct xdp_buff xdp;
> >       int rx_received;
> >       int rx_done = 0;
> > @@ -3912,15 +3913,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> >               else
> >                       frag_size = bm_pool->frag_size;
> >
> > +             if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
> > +                     rxqi = &rxq->xdp_rxq_short;
> > +             else
> > +                     rxqi = &rxq->xdp_rxq_long;
> >
> > +             if (xdp_prog) {
> > +                     xdp.rxq = rxqi;
> >
> > +                     xdp_init_buff(&xdp, PAGE_SIZE, rxqi);
> >                       xdp_prepare_buff(&xdp, data,
> >                                        MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
> >                                        rx_bytes, false);
> > @@ -3964,7 +3965,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> >               }
> >
> >               if (pp)
> > +                     skb_mark_for_recycle(skb, virt_to_page(data), pp);
> >               else
> >                       dma_unmap_single_attrs(dev->dev.parent, dma_addr,
> >                                              bm_pool->buf_size, DMA_FROM_DEVICE,
>
> Looking at the above, which I've only quoted the _resulting_ code after
> your patch above, I don't see why you have moved the
> "bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE" conditional outside of
> the test for xdp_prog - I don't see rxqi being used except within that
> conditional. Please can you explain the reasoning there?
>

Back in v3, skb_mark_for_recycle() accepted an xdp_mem_info *, so I
needed rxqi outside of that conditional's scope to get that pointer.
Now we just need a page_pool *, so I can restore the original chunk.
Nice catch.

Thanks,
-- 
per aspera ad upstream


* Re: [PATCH net-next v5 1/5] mm: add a signature in struct page
  2021-05-13 16:58 ` [PATCH net-next v5 1/5] mm: add a signature in struct page Matteo Croce
@ 2021-05-14  1:00   ` Matthew Wilcox
  2021-05-14  1:34     ` Matteo Croce
  2021-05-18 15:44     ` Matteo Croce
  0 siblings, 2 replies; 22+ messages in thread
From: Matthew Wilcox @ 2021-05-14  1:00 UTC (permalink / raw)
  To: Matteo Croce
  Cc: netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On Thu, May 13, 2021 at 06:58:42PM +0200, Matteo Croce wrote:
>  		struct {	/* page_pool used by netstack */
> +			/**
> +			 * @pp_magic: magic value to avoid recycling non
> +			 * page_pool allocated pages.
> +			 * It aliases with page->lru.next

I'm not really keen on documenting what aliases with what.
pp_magic also aliases with compound_head, 'next' (for slab),
and dev_pagemap.  This is an O(n^2) documentation problem ...

I feel like I want to document the pfmemalloc bit in mm_types.h,
but I don't have a concrete suggestion yet.

> +++ b/include/net/page_pool.h
> @@ -63,6 +63,8 @@
>   */
>  #define PP_ALLOC_CACHE_SIZE	128
>  #define PP_ALLOC_CACHE_REFILL	64
> +#define PP_SIGNATURE		(POISON_POINTER_DELTA + 0x40)

I wonder if this wouldn't be better in linux/poison.h?
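
e.g. something like (sketch only, next to the other poison values):

	/* include/linux/poison.h */
	/********** net/core/page_pool.c **********/
	#define PP_SIGNATURE	(0x40 + POISON_POINTER_DELTA)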



* Re: [PATCH net-next v5 1/5] mm: add a signature in struct page
  2021-05-14  1:00   ` Matthew Wilcox
@ 2021-05-14  1:34     ` Matteo Croce
  2021-05-18 15:44     ` Matteo Croce
  1 sibling, 0 replies; 22+ messages in thread
From: Matteo Croce @ 2021-05-14  1:34 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On Fri, May 14, 2021 at 3:01 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, May 13, 2021 at 06:58:42PM +0200, Matteo Croce wrote:
> >               struct {        /* page_pool used by netstack */
> > +                     /**
> > +                      * @pp_magic: magic value to avoid recycling non
> > +                      * page_pool allocated pages.
> > +                      * It aliases with page->lru.next
>
> I'm not really keen on documenting what aliases with what.
> pp_magic also aliases with compound_head, 'next' (for slab),
> and dev_pagemap.  This is an O(n^2) documentation problem ...
>

Eric asked to document what page->signature aliases, so I did it in
the commit message and in a comment.
I can drop the code comment and keep it just in the commit message.

> I feel like I want to document the pfmemalloc bit in mm_types.h,
> but I don't have a concrete suggestion yet.
>
> > +++ b/include/net/page_pool.h
> > @@ -63,6 +63,8 @@
> >   */
> >  #define PP_ALLOC_CACHE_SIZE  128
> >  #define PP_ALLOC_CACHE_REFILL        64
> > +#define PP_SIGNATURE         (POISON_POINTER_DELTA + 0x40)
>
> I wonder if this wouldn't be better in linux/poison.h?
>

I was thinking the same; I'll do it in v6.

Regards,
-- 
per aspera ad upstream


* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-13 16:58 ` [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
@ 2021-05-14  3:39   ` Yunsheng Lin
  2021-05-14  7:36     ` Ilias Apalodimas
  0 siblings, 1 reply; 22+ messages in thread
From: Yunsheng Lin @ 2021-05-14  3:39 UTC (permalink / raw)
  To: Matteo Croce, netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On 2021/5/14 0:58, Matteo Croce wrote:
> From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> 
> Up to now several high speed NICs have had custom mechanisms for
> recycling the allocated memory they use for their payloads.
> Our page_pool API already has recycling capabilities that are always
> used when we are running in 'XDP mode'. So let's tweak the API and the
> kernel network stack slightly and allow the recycling to happen even
> during standard operation.
> The API doesn't currently take into account 'split page' policies used
> by those drivers, but it can be extended once we have users for that.
> 
> The idea is to be able to intercept the packet on skb_release_data().
> If it's a buffer coming from our page_pool API, recycle it back to the
> pool for further usage or just release the packet entirely.
> 
> To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
> a field in struct page (page->pp) to store the page_pool pointer.
> Storing the information in page->pp allows us to recycle both SKBs and
> their fragments.
> The SKB bit is needed for a couple of reasons. First of all, in an effort
> to affect the free path as little as possible, reading a single bit
> is better than trying to derive identical information from the data
> stored in the page. We do have a special mark in the page that won't
> allow this to happen, but again deciding without having to read the
> entire page is preferable.
> 
> The driver has to take care of the sync operations on its own
> during the buffer recycling, since the buffer is, after opting in to
> the recycling, never unmapped.
> 
> Since the gain on the drivers depends on the architecture, we are not
> enabling recycling by default if the page_pool API is used on a driver.
> In order to enable recycling, the driver must call skb_mark_for_recycle()
> to store the information we need for recycling in page->pp and enable
> the recycling bit, or page_pool_store_mem_info() for a fragment.
> 
> Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Co-developed-by: Matteo Croce <mcroce@microsoft.com>
> Signed-off-by: Matteo Croce <mcroce@microsoft.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> ---
>  include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
>  include/net/page_pool.h |  9 +++++++++
>  net/core/page_pool.c    | 23 +++++++++++++++++++++++
>  net/core/skbuff.c       | 25 +++++++++++++++++++++----
>  4 files changed, 78 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 7fcfea7e7b21..057b40ad29bd 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -40,6 +40,9 @@
>  #if IS_ENABLED(CONFIG_NF_CONNTRACK)
>  #include <linux/netfilter/nf_conntrack_common.h>
>  #endif
> +#ifdef CONFIG_PAGE_POOL
> +#include <net/page_pool.h>
> +#endif
>  
>  /* The interface for checksum offload between the stack and networking drivers
>   * is as follows...
> @@ -667,6 +670,8 @@ typedef unsigned char *sk_buff_data_t;
>   *	@head_frag: skb was allocated from page fragments,
>   *		not allocated by kmalloc() or vmalloc().
>   *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
> + *	@pp_recycle: mark the packet for recycling instead of freeing (implies
> + *		page_pool support on driver)
>   *	@active_extensions: active extensions (skb_ext_id types)
>   *	@ndisc_nodetype: router type (from link layer)
>   *	@ooo_okay: allow the mapping of a socket to a queue to be changed
> @@ -791,10 +796,12 @@ struct sk_buff {
>  				fclone:2,
>  				peeked:1,
>  				head_frag:1,
> -				pfmemalloc:1;
> +				pfmemalloc:1,
> +				pp_recycle:1; /* page_pool recycle indicator */
>  #ifdef CONFIG_SKB_EXTENSIONS
>  	__u8			active_extensions;
>  #endif
> +
>  	/* fields enclosed in headers_start/headers_end are copied
>  	 * using a single memcpy() in __copy_skb_header()
>  	 */
> @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
>   */
>  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)

Does it make sense to define a new function like recyclable_skb_frag_unref()
instead of adding the recycle parameter? That way we may avoid checking
skb->pp_recycle for the head data and every frag.
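
Something like this, as a rough sketch (recyclable_skb_frag_unref() is
hypothetical, just to illustrate the idea):

	static inline void recyclable_skb_frag_unref(skb_frag_t *frag)
	{
		struct page *page = skb_frag_page(frag);

		if (!page_pool_return_skb_page(page_address(page)))
			put_page(page);
	}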

>  {
> -	put_page(skb_frag_page(frag));
> +	struct page *page = skb_frag_page(frag);
> +
> +#ifdef CONFIG_PAGE_POOL
> +	if (recycle && page_pool_return_skb_page(page_address(page)))
> +		return;
> +#endif
> +	put_page(page);
>  }
>  
>  /**
> @@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
>   */
>  static inline void skb_frag_unref(struct sk_buff *skb, int f)
>  {
> -	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
> +	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
>  }
>  
>  /**
> @@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
>  #endif
>  }
>  
> +#ifdef CONFIG_PAGE_POOL
> +static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
> +					struct page_pool *pp)
> +{
> +	skb->pp_recycle = 1;
> +	page_pool_store_mem_info(page, pp);
> +}
> +#endif
> +
>  #endif	/* __KERNEL__ */
>  #endif	/* _LINUX_SKBUFF_H */
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 24b3d42c62c0..ce75abeddb29 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -148,6 +148,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
>  	return pool->p.dma_dir;
>  }
>  
> +bool page_pool_return_skb_page(void *data);
> +
>  struct page_pool *page_pool_create(const struct page_pool_params *params);
>  
>  #ifdef CONFIG_PAGE_POOL
> @@ -253,4 +255,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
>  		spin_unlock_bh(&pool->ring.producer_lock);
>  }
>  
> +/* Store mem_info on struct page and use it while recycling skb frags */
> +static inline
> +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
> +{
> +	page->pp = pp;
> +}
> +
>  #endif /* _NET_PAGE_POOL_H */
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 9de5d8c08c17..fa9f17db7c48 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -626,3 +626,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
>  	}
>  }
>  EXPORT_SYMBOL(page_pool_update_nid);
> +
> +bool page_pool_return_skb_page(void *data)
> +{
> +	struct page_pool *pp;
> +	struct page *page;
> +
> +	page = virt_to_head_page(data);
> +	if (unlikely(page->pp_magic != PP_SIGNATURE))

We have checked skb->pp_recycle before checking page->pp_magic,
so the above seems like it should be a likely() instead of unlikely()?

> +		return false;
> +
> +	pp = (struct page_pool *)page->pp;
> +
> +	/* Driver set this to memory recycling info. Reset it on recycle.
> +	 * This will *not* work for NIC using a split-page memory model.
> +	 * The page will be returned to the pool here regardless of the
> +	 * 'flipped' fragment being in use or not.
> +	 */
> +	page->pp = NULL;

Why not clear page->pp only when the page cannot be recycled
by the page pool? That way we do not need to set and clear it every
time the page is recycled.

> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> +
> +	return true;
> +}
> +EXPORT_SYMBOL(page_pool_return_skb_page);
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 12b7e90dd2b5..9581af44d587 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -70,6 +70,9 @@
>  #include <net/xfrm.h>
>  #include <net/mpls.h>
>  #include <net/mptcp.h>
> +#ifdef CONFIG_PAGE_POOL
> +#include <net/page_pool.h>
> +#endif
>  
>  #include <linux/uaccess.h>
>  #include <trace/events/skb.h>
> @@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
>  {
>  	unsigned char *head = skb->head;
>  
> -	if (skb->head_frag)
> +	if (skb->head_frag) {
> +#ifdef CONFIG_PAGE_POOL
> +		if (skb->pp_recycle && page_pool_return_skb_page(head))
> +			return;
> +#endif
>  		skb_free_frag(head);
> -	else
> +	} else {
>  		kfree(head);
> +	}
>  }
>  
>  static void skb_release_data(struct sk_buff *skb)
> @@ -664,7 +672,7 @@ static void skb_release_data(struct sk_buff *skb)
>  	skb_zcopy_clear(skb, true);
>  
>  	for (i = 0; i < shinfo->nr_frags; i++)
> -		__skb_frag_unref(&shinfo->frags[i], false);
> +		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
>  
>  	if (shinfo->frag_list)
>  		kfree_skb_list(shinfo->frag_list);
> @@ -1046,6 +1054,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
>  	n->nohdr = 0;
>  	n->peeked = 0;
>  	C(pfmemalloc);
> +	C(pp_recycle);
>  	n->destructor = NULL;
>  	C(tail);
>  	C(end);
> @@ -1725,6 +1734,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
>  	skb->cloned   = 0;
>  	skb->hdr_len  = 0;
>  	skb->nohdr    = 0;
> +	skb->pp_recycle = 0;

I am not sure why we clear skb->pp_recycle here.
As I understand it, pskb_expand_head() only allocates new head
data; the old frag pages in skb_shinfo()->frags could still be from
the page pool, right?

>  	atomic_set(&skb_shinfo(skb)->dataref, 1);
>  
>  	skb_metadata_clear(skb);
> @@ -3495,7 +3505,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
>  		fragto = &skb_shinfo(tgt)->frags[merge];
>  
>  		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
> -		__skb_frag_unref(fragfrom, false);
> +		__skb_frag_unref(fragfrom, skb->pp_recycle);
>  	}
>  
>  	/* Reposition in the original skb */
> @@ -5285,6 +5295,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
>  	if (skb_cloned(to))
>  		return false;
>  
> +	/* We can't coalesce skb that are allocated from slab and page_pool
> +	 * The recycle mark is on the skb, so that might end up trying to
> +	 * recycle slab allocated skb->head
> +	 */
> +	if (to->pp_recycle != from->pp_recycle)
> +		return false;

Since we also depend on page->pp_magic to decide whether to
recycle a page, could we just set to->pp_recycle according to
from->pp_recycle and do the coalesce?

> +
>  	if (len <= skb_tailroom(to)) {
>  		if (len)
>  			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
> 



* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-14  3:39   ` Yunsheng Lin
@ 2021-05-14  7:36     ` Ilias Apalodimas
  2021-05-14  8:31       ` Yunsheng Lin
  0 siblings, 1 reply; 22+ messages in thread
From: Ilias Apalodimas @ 2021-05-14  7:36 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

[...]
> >  	 * using a single memcpy() in __copy_skb_header()
> >  	 */
> > @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
> >   */
> >  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
> 
> Does it make sense to define a new function like recyclable_skb_frag_unref()
> instead of adding the recycle parameter? This way we may avoid checking
> skb->pp_recycle for the head data and for every frag?
> 

We'd still have to check when to run __skb_frag_unref() or
recyclable_skb_frag_unref(), so I am not sure we can avoid that.
In any case I'll have a look.
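
For reference, a sketch of what that suggestion might look like (hypothetical
code, not part of this series; only for illustrating the idea):

	static inline void recyclable_skb_frag_unref(skb_frag_t *frag)
	{
		struct page *page = skb_frag_page(frag);

		/* try the page_pool return path, fall back to a plain free */
		if (!page_pool_return_skb_page(page_address(page)))
			put_page(page);
	}

Callers would then branch once on skb->pp_recycle between this and the plain
__skb_frag_unref(), which is the check referred to above.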

> >  {
> > -	put_page(skb_frag_page(frag));
> > +	struct page *page = skb_frag_page(frag);
> > +
> > +#ifdef CONFIG_PAGE_POOL
> > +	if (recycle && page_pool_return_skb_page(page_address(page)))
> > +		return;
> > +#endif
> > +	put_page(page);
> >  }
> >  
> >  /**
> > @@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
> >   */
> >  static inline void skb_frag_unref(struct sk_buff *skb, int f)
> >  {
> > -	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
> > +	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
> >  }
> >  
> >  /**
> > @@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
> >  #endif
> >  }
> >  
> > +#ifdef CONFIG_PAGE_POOL
> > +static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
> > +					struct page_pool *pp)
> > +{
> > +	skb->pp_recycle = 1;
> > +	page_pool_store_mem_info(page, pp);
> > +}
> > +#endif
> > +
> >  #endif	/* __KERNEL__ */
> >  #endif	/* _LINUX_SKBUFF_H */
> > diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> > index 24b3d42c62c0..ce75abeddb29 100644
> > --- a/include/net/page_pool.h
> > +++ b/include/net/page_pool.h
> > @@ -148,6 +148,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
> >  	return pool->p.dma_dir;
> >  }
> >  
> > +bool page_pool_return_skb_page(void *data);
> > +
> >  struct page_pool *page_pool_create(const struct page_pool_params *params);
> >  
> >  #ifdef CONFIG_PAGE_POOL
> > @@ -253,4 +255,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
> >  		spin_unlock_bh(&pool->ring.producer_lock);
> >  }
> >  
> > +/* Store mem_info on struct page and use it while recycling skb frags */
> > +static inline
> > +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
> > +{
> > +	page->pp = pp;
> > +}
> > +
> >  #endif /* _NET_PAGE_POOL_H */
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 9de5d8c08c17..fa9f17db7c48 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -626,3 +626,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
> >  	}
> >  }
> >  EXPORT_SYMBOL(page_pool_update_nid);
> > +
> > +bool page_pool_return_skb_page(void *data)
> > +{
> > +	struct page_pool *pp;
> > +	struct page *page;
> > +
> > +	page = virt_to_head_page(data);
> > +	if (unlikely(page->pp_magic != PP_SIGNATURE))
> 
> We have checked skb->pp_recycle before checking page->pp_magic,
> so the above seems like it should be a likely() instead of unlikely()?
> 

The check here is != PP_SIGNATURE. So since we already checked for
pp_recycle, it's unlikely the signature won't match.

> > +		return false;
> > +
> > +	pp = (struct page_pool *)page->pp;
> > +
> > +	/* Driver set this to memory recycling info. Reset it on recycle.
> > +	 * This will *not* work for NIC using a split-page memory model.
> > +	 * The page will be returned to the pool here regardless of the
> > +	 * 'flipped' fragment being in use or not.
> > +	 */
> > +	page->pp = NULL;
> 
> Why not clear page->pp only when the page cannot be recycled
> by the page pool? That way we do not need to set and clear it every
> time the page is recycled.
> 

If the page cannot be recycled, page->pp will probably not be set to begin
with. Since we don't embed the feature in page_pool and we require the
driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
it as is.  When we set/clear the page->pp, the page is probably already in 
cache, so I doubt this will have any measurable impact.

> > +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> > +
> >  	C(end);

[...]

> > @@ -1725,6 +1734,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
> >  	skb->cloned   = 0;
> >  	skb->hdr_len  = 0;
> >  	skb->nohdr    = 0;
> > +	skb->pp_recycle = 0;
> 
> I am not sure why we clear the skb->pp_recycle here.
> As my understanding, the pskb_expand_head() only allocate new head
> data, the old frag page in skb_shinfo()->frags still could be from
> page pool, right?
> 

Ah correct! In that case we must not clear skb->pp_recycle.  The new head
will fail on the signature check and end up being freed, while the
remaining frags will be recycled. The *original* head will be
unmapped/recycled (based on the page refcnt) in pskb_expand_head()
itself.

> >  	atomic_set(&skb_shinfo(skb)->dataref, 1);
> >  
> >  	skb_metadata_clear(skb);
> > @@ -3495,7 +3505,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
> >  		fragto = &skb_shinfo(tgt)->frags[merge];
> >  
> >  		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
> > -		__skb_frag_unref(fragfrom, false);
> > +		__skb_frag_unref(fragfrom, skb->pp_recycle);
> >  	}
> >  
> >  	/* Reposition in the original skb */
> > @@ -5285,6 +5295,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
> >  	if (skb_cloned(to))
> >  		return false;
> >  
> > +	/* We can't coalesce skb that are allocated from slab and page_pool
> > +	 * The recycle mark is on the skb, so that might end up trying to
> > +	 * recycle slab allocated skb->head
> > +	 */
> > +	if (to->pp_recycle != from->pp_recycle)
> > +		return false;
> 
> Since we also depend on page->pp_magic to decide whether to
> recycle a page, could we just set to->pp_recycle according to
> from->pp_recycle and do the coalesce?

So I was thinking about this myself.  This check is a 'leftover' from my
initial version, where I only had the pp_recycle bit + struct page
meta-data (without the signature).  Since that version didn't have the
signature, you could not coalesce two skbs coming from page_pool/slab.
We could now do what you suggest, but honestly I can't think of many use
cases in which this can happen to begin with.  I think I'd prefer leaving it as
is and adjusting the comment.  If we can somehow prove this happens
often and has a performance impact, we can go ahead and remove it.

[...]

Thanks
/Ilias


* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-14  7:36     ` Ilias Apalodimas
@ 2021-05-14  8:31       ` Yunsheng Lin
  2021-05-14  9:17         ` Ilias Apalodimas
  0 siblings, 1 reply; 22+ messages in thread
From: Yunsheng Lin @ 2021-05-14  8:31 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On 2021/5/14 15:36, Ilias Apalodimas wrote:
> [...]
>>> +		return false;
>>> +
>>> +	pp = (struct page_pool *)page->pp;
>>> +
>>> +	/* Driver set this to memory recycling info. Reset it on recycle.
>>> +	 * This will *not* work for NIC using a split-page memory model.
>>> +	 * The page will be returned to the pool here regardless of the
>>> +	 * 'flipped' fragment being in use or not.
>>> +	 */
>>> +	page->pp = NULL;
>>
>> Why not clear page->pp only when the page cannot be recycled
>> by the page pool? That way we do not need to set and clear it every
>> time the page is recycled.
>>
> 
> If the page cannot be recycled, page->pp will probably not be set to begin
> with. Since we don't embed the feature in page_pool and we require the
> driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
> it as is.  When we set/clear the page->pp, the page is probably already in 
> cache, so I doubt this will have any measurable impact.

The point is that we already have skb->pp_recycle to let the driver
explicitly enable recycling, as part of the 'skb flow'. If the page pool keeps
the page->pp while it owns the page, then the driver may only need to call
skb_mark_for_recycle() once for a skb, instead of calling skb_mark_for_recycle()
for each page frag of the skb.

Maybe we can add a parameter in "struct page_pool_params" to let the driver
decide whether the page pool ptr is stored in page->pp while the page pool
owns the page?

Another thing that occurred to me is that if the driver uses a page from the
page pool to form a skb, and it does not call skb_mark_for_recycle(),
then there will be a resource leak, right? If yes, it seems the
skb_mark_for_recycle() call does not add any value?


> 
>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
>>> +
>>>  	C(end);
> 
> [...]




* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-14  8:31       ` Yunsheng Lin
@ 2021-05-14  9:17         ` Ilias Apalodimas
  2021-05-15  2:07           ` Yunsheng Lin
  0 siblings, 1 reply; 22+ messages in thread
From: Ilias Apalodimas @ 2021-05-14  9:17 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On Fri, May 14, 2021 at 04:31:50PM +0800, Yunsheng Lin wrote:
> On 2021/5/14 15:36, Ilias Apalodimas wrote:
> > [...]
> >>> +		return false;
> >>> +
> >>> +	pp = (struct page_pool *)page->pp;
> >>> +
> >>> +	/* Driver set this to memory recycling info. Reset it on recycle.
> >>> +	 * This will *not* work for NIC using a split-page memory model.
> >>> +	 * The page will be returned to the pool here regardless of the
> >>> +	 * 'flipped' fragment being in use or not.
> >>> +	 */
> >>> +	page->pp = NULL;
> >>
> >> Why not clear page->pp only when the page cannot be recycled
> >> by the page pool? That way we do not need to set and clear it every
> >> time the page is recycled.
> >>
> > 
> > If the page cannot be recycled, page->pp will probably not be set to begin
> > with. Since we don't embed the feature in page_pool and we require the
> > driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
> > it as is.  When we set/clear the page->pp, the page is probably already in 
> > cache, so I doubt this will have any measurable impact.
> 
> The point is that we already have skb->pp_recycle to let the driver
> explicitly enable recycling, as part of the 'skb flow'. If the page pool keeps
> the page->pp while it owns the page, then the driver may only need to call
> skb_mark_for_recycle() once for a skb, instead of calling skb_mark_for_recycle()
> for each page frag of the skb.
> 

The driver is meant to call skb_mark_for_recycle() for the skb and
page_pool_store_mem_info() for the fragments (in order to store page->pp).
Nothing bad will happen if you call skb_mark_for_recycle() on a frag though,
but in any case you need to store the page_pool pointer of each frag in
struct page.
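
A hypothetical rx completion sketch of that flow (the driver-side names are
made up; only the two marking calls come from this series):

	skb = build_skb(data, truesize);

	/* once per skb: sets skb->pp_recycle and stores pp in the head page */
	skb_mark_for_recycle(skb, virt_to_head_page(data), pool);

	/* once per fragment, so each frag page carries its page_pool */
	for (i = 0; i < nr_frags; i++)
		page_pool_store_mem_info(frag_page[i], pool);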

> Maybe we can add a parameter in "struct page_pool_params" to let the driver
> decide whether the page pool ptr is stored in page->pp while the page pool
> owns the page?

Then you'd have to check the page pool config before saving the meta-data,
and you would have to make the skb path aware of that as well (I assume you
mean replace pp_recycle with this?).
If not, and you just want to add an extra flag on page_pool_params and be able
to enable recycling depending on that flag, we can just add a patch afterwards.
I am not sure we need an extra if for each packet though.

> 
> Another thing that occurred to me is that if the driver uses a page from the
> page pool to form a skb, and it does not call skb_mark_for_recycle(),
> then there will be a resource leak, right? If yes, it seems the
> skb_mark_for_recycle() call does not add any value?
> 

Not really, the driver has 2 choices:
- call page_pool_release_page() once it receives the payload. That will
  clean up dma mappings (if page pool is responsible for them) and free the
  buffer
- call skb_mark_for_recycle(), which will end up recycling the buffer.

If you call none of those, you'd leak a page, but that's a driver bug.
Patches [4/5, 5/5] do that for two Marvell drivers.
I really want to make drivers opt-in in the feature instead of always
enabling it.
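
Roughly, per buffer, the driver picks one of the two (sketch, using only the
calls named above; 'recycling' is a hypothetical driver knob):

	if (recycling)
		skb_mark_for_recycle(skb, page, pool);	/* page stays pool-owned */
	else
		page_pool_release_page(pool, page);	/* unmap; a normal page now */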

Thanks
/Ilias
> 
> > 
> >>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> >>> +
> >>>  	C(end);
> > 
> > [...]
> 
> 


* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-14  9:17         ` Ilias Apalodimas
@ 2021-05-15  2:07           ` Yunsheng Lin
  2021-05-17  6:38             ` Ilias Apalodimas
  0 siblings, 1 reply; 22+ messages in thread
From: Yunsheng Lin @ 2021-05-15  2:07 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On 2021/5/14 17:17, Ilias Apalodimas wrote:
> On Fri, May 14, 2021 at 04:31:50PM +0800, Yunsheng Lin wrote:
>> On 2021/5/14 15:36, Ilias Apalodimas wrote:
>>> [...]
>>>>> +		return false;
>>>>> +
>>>>> +	pp = (struct page_pool *)page->pp;
>>>>> +
>>>>> +	/* Driver set this to memory recycling info. Reset it on recycle.
>>>>> +	 * This will *not* work for NIC using a split-page memory model.
>>>>> +	 * The page will be returned to the pool here regardless of the
>>>>> +	 * 'flipped' fragment being in use or not.
>>>>> +	 */
>>>>> +	page->pp = NULL;
>>>>
>>>> Why not clear page->pp only when the page cannot be recycled
>>>> by the page pool? That way we do not need to set and clear it every
>>>> time the page is recycled.
>>>>
>>>
>>> If the page cannot be recycled, page->pp will probably not be set to begin
>>> with. Since we don't embed the feature in page_pool and we require the
>>> driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
>>> it as is.  When we set/clear the page->pp, the page is probably already in 
>>> cache, so I doubt this will have any measurable impact.
>>
>> The point is that we already have skb->pp_recycle to let the driver
>> explicitly enable recycling, as part of the 'skb flow'. If the page pool keeps
>> the page->pp while it owns the page, then the driver may only need to call
>> skb_mark_for_recycle() once for a skb, instead of calling skb_mark_for_recycle()
>> for each page frag of the skb.
>>
> 
> The driver is meant to call skb_mark_for_recycle() for the skb and
> page_pool_store_mem_info() for the fragments (in order to store page->pp).
> Nothing bad will happen if you call skb_mark_for_recycle() on a frag though,
> but in any case you need to store the page_pool pointer of each frag in
> struct page.

Right. Nothing bad will happen when we keep the page_pool pointer in
page->pp while page pool owns the page too, even if the skb->pp_recycle
is not set, right?

> 
>> Maybe we can add a parameter in "struct page_pool_params" to let the driver
>> decide whether the page pool ptr is stored in page->pp while the page pool
>> owns the page?
> 
> Then you'd have to check the page pool config before saving the meta-data,

I am not sure what the "saving the meta-data" means here?

> and you would have to make the skb path aware of that as well (I assume you
> mean replace pp_recycle with this?).

I meant we could set the page pool ptr in page->pp when the page is allocated
from alloc_pages() in __page_pool_alloc_pages_slow(), either unconditionally or
according to a newly added field in pool->p, and only clear it in
page_pool_release_page(); in between, the page is owned by the page pool,
right?
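
Something like this, perhaps (hypothetical sketch of the proposal, not what
this series does):

	static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
							 gfp_t gfp)
	{
		struct page *page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);

		if (page)
			page->pp = pool;	/* set once, while the pool owns the page */
		return page;
	}

with page->pp then cleared only in page_pool_release_page(), not on every
recycle.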

> If not, and you just want to add an extra flag on page_pool_params and be able
> to enable recycling depending on that flag, we can just add a patch afterwards.
> I am not sure we need an extra if for each packet though.

In that case, skb_mark_for_recycle() could only set skb->pp_recycle,
but not the pool->p.

> 
>>
>> Another thing that occurred to me is that if the driver uses a page from the
>> page pool to form a skb, and it does not call skb_mark_for_recycle(),
>> then there will be a resource leak, right? If yes, it seems the
>> skb_mark_for_recycle() call does not add any value?
>>
> 
> Not really, the driver has 2 choices:
> - call page_pool_release_page() once it receives the payload. That will
>   clean up dma mappings (if page pool is responsible for them) and free the
>   buffer

This is only needed before SKB recycling is supported, or when the driver
explicitly does not want the SKB recycling support, right?

> - call skb_mark_for_recycle(). Which will end up recycling the buffer.

If the driver needs to add an extra flag to enable recycling based on the skb
instead of the page pool, then adding skb_mark_for_recycle() makes sense to
me too; otherwise it seems adding a field in pool->p to enable recycling based
on the skb makes more sense?

> 
> If you call none of those, you'd leak a page, but that's a driver bug.
> patches [4/5, 5/5] do that for two marvell drivers.
> I really want to make drivers opt-in in the feature instead of always
> enabling it.
> 
> Thanks
> /Ilias
>>
>>>
>>>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
>>>>> +
>>>>>  	C(end);
>>>
>>> [...]
>>
>>
> 
> .
> 



* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-15  2:07           ` Yunsheng Lin
@ 2021-05-17  6:38             ` Ilias Apalodimas
  2021-05-17  8:25               ` Yunsheng Lin
  0 siblings, 1 reply; 22+ messages in thread
From: Ilias Apalodimas @ 2021-05-17  6:38 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

[...]
> >>>> by the page pool? That way we do not need to set and clear it every
> >>>> time the page is recycled.
> >>>>
> >>>
> >>> If the page cannot be recycled, page->pp will probably not be set to begin
> >>> with. Since we don't embed the feature in page_pool and we require the
> >>> driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
> >>> it as is.  When we set/clear the page->pp, the page is probably already in 
> >>> cache, so I doubt this will have any measurable impact.
> >>
> >> The point is that we already have skb->pp_recycle to let the driver
> >> explicitly enable recycling, as part of the 'skb flow'. If the page pool keeps
> >> the page->pp while it owns the page, then the driver may only need to call
> >> skb_mark_for_recycle() once for a skb, instead of calling skb_mark_for_recycle()
> >> for each page frag of the skb.
> >>
> > 
> > The driver is meant to call skb_mark_for_recycle() for the skb and
> > page_pool_store_mem_info() for the fragments (in order to store page->pp).
> > Nothing bad will happen if you call skb_mark_for_recycle() on a frag though,
> > but in any case you need to store the page_pool pointer of each frag in
> > struct page.
> 
> Right. Nothing bad will happen when we keep the page_pool pointer in
> page->pp while page pool owns the page too, even if the skb->pp_recycle
> is not set, right?

Yep, nothing bad will happen. Both functions using this (__skb_frag_unref and
skb_free_head) always check the skb bit as well.

> 
> > 
> >> Maybe we can add a parameter in "struct page_pool_params" to let the driver
> >> decide whether the page pool ptr is stored in page->pp while the page pool
> >> owns the page?
> > 
> > Then you'd have to check the page pool config before saving the meta-data,
> 
> I am not sure what the "saving the meta-data" means here?

I was referring to struct page_pool* and the signature we store in struct
page.

> 
> > and you would have to make the skb path aware of that as well (I assume you
> > mean replace pp_recycle with this?).
> 
> I meant we could set the page pool ptr in page->pp when the page is allocated
> from alloc_pages() in __page_pool_alloc_pages_slow(), either unconditionally or
> according to a newly added field in pool->p, and only clear it in
> page_pool_release_page(); in between, the page is owned by the page pool,
> right?
> 
> > If not, and you just want to add an extra flag on page_pool_params and be able
> > to enable recycling depending on that flag, we can just add a patch afterwards.
> > I am not sure we need an extra if for each packet though.
> 
> In that case, skb_mark_for_recycle() could only set skb->pp_recycle,
> but not the pool->p.
> 
> > 
> >>
> >> Another thing that occurred to me is that if the driver uses a page from the
> >> page pool to form a skb, and it does not call skb_mark_for_recycle(),
> >> then there will be a resource leak, right? If yes, it seems the
> >> skb_mark_for_recycle() call does not add any value?
> >>
> > 
> > Not really, the driver has 2 choices:
> > - call page_pool_release_page() once it receives the payload. That will
> >   clean up dma mappings (if page pool is responsible for them) and free the
> >   buffer
> 
> >> This is only needed before SKB recycling is supported, or when the driver
> >> explicitly does not want the SKB recycling support, right?
> 

This is needed in general even before recycling.  It's used to unmap the
buffer, so once you free the SKB you don't leave any stale DMA mappings.  So
that's what all the drivers that use page_pool call today.
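
A sketch of that pre-recycling pattern (illustrative; 'pool' and 'truesize'
are whatever the driver already has at hand):

	skb = build_skb(page_address(page), truesize);
	/* disconnect the page from the pool: unmaps DMA, and the stack
	 * later frees it as a normal page via put_page()
	 */
	page_pool_release_page(pool, page);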

> > - call skb_mark_for_recycle(), which will end up recycling the buffer.
> 
> >> If the driver needs to add an extra flag to enable recycling based on the skb
> >> instead of the page pool, then adding skb_mark_for_recycle() makes sense to
> >> me too; otherwise it seems adding a field in pool->p to enable recycling based
> >> on the skb makes more sense?
> 

The recycling is essentially an SKB feature though, isn't it?  You achieve the
SKB recycling with the help of the page_pool API, not the other way around.  So I
think this should remain on the SKB and maybe in the future find ways to turn
it on/off?

Thanks
/Ilias

> > 
> > If you call none of those, you'd leak a page, but that's a driver bug.
> > patches [4/5, 5/5] do that for two marvell drivers.
> > I really want to make drivers opt-in in the feature instead of always
> > enabling it.
> > 
> > Thanks
> > /Ilias
> >>
> >>>
> >>>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> >>>>> +
> >>>>>  	C(end);
> >>>
> >>> [...]
> >>
> >>
> > 
> > .
> > 
> 


* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-17  6:38             ` Ilias Apalodimas
@ 2021-05-17  8:25               ` Yunsheng Lin
  2021-05-17  9:36                 ` Ilias Apalodimas
  0 siblings, 1 reply; 22+ messages in thread
From: Yunsheng Lin @ 2021-05-17  8:25 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On 2021/5/17 14:38, Ilias Apalodimas wrote:
> [...]
>>
>>>
>>>> Maybe we can add a parameter in "struct page_pool_params" to let the driver
>>>> decide whether the page pool ptr is stored in page->pp while the page pool
>>>> owns the page?
>>>
>>> Then you'd have to check the page pool config before saving the meta-data,
>>
>> I am not sure what the "saving the meta-data" means here?
> 
> I was referring to struct page_pool* and the signature we store in struct
> page.
> 
>>
>>> and you would have to make the skb path aware of that as well (I assume you
>>> mean replace pp_recycle with this?).
>>
>> I meant we could set the page pool ptr in page->pp when the page is allocated
>> from alloc_pages() in __page_pool_alloc_pages_slow(), either unconditionally or
>> according to a newly added field in pool->p, and only clear it in
>> page_pool_release_page(); in between, the page is owned by the page pool,
>> right?
>>
>>> If not, and you just want to add an extra flag on page_pool_params and be able
>>> to enable recycling depending on that flag, we can just add a patch afterwards.
>>> I am not sure we need an extra if for each packet though.
>>
>> In that case, skb_mark_for_recycle() could only set skb->pp_recycle,
>> but not the pool->p.
>>
>>>
>>>>
>>>> Another thing that occurred to me is that if the driver uses a page from the
>>>> page pool to form a skb, and it does not call skb_mark_for_recycle(),
>>>> then there will be a resource leak, right? If yes, it seems the
>>>> skb_mark_for_recycle() call does not add any value?
>>>>
>>>
>>> Not really, the driver has 2 choices:
>>> - call page_pool_release_page() once it receives the payload. That will
>>>   clean up dma mappings (if page pool is responsible for them) and free the
>>>   buffer
>>
>> This is only needed before SKB recycling is supported, or when the driver
>> explicitly does not want the SKB recycling support, right?
>>
> 
> This is needed in general even before recycling.  It's used to unmap the
> buffer, so once you free the SKB you don't leave any stale DMA mappings.  So
> that's what all the drivers that use page_pool call today.

As I understand it:
1. If the driver is using a page allocated directly from the page allocator to
   form a skb, let's say the page is owned by the skb (or not owned by anyone :)),
   and when the skb is freed, put_page() should be called.

2. If the driver is using a page allocated from the page pool to form a skb, let's
   say the page is owned by the page pool, and when the skb is freed,
   page_pool_put_page() should be called.

What page_pool_release_page() mainly does is make a page in case 2 go back
to case 1.

And page_pool_release_page() is replaced with skb_mark_for_recycle() in patch
4/5 to avoid the above "case 2" -> "case 1" transition, so that the page is still
owned by the page pool, right?

So the point is that skb_mark_for_recycle() does not really do anything about
the owner of the page, which is still owned by the page pool, so it makes more
sense to keep the page pool ptr instead of setting it every time
skb_mark_for_recycle() is called?
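
To illustrate the two cases with the calls named above (sketch only):

	page = page_pool_alloc_pages(pool, GFP_ATOMIC);	/* case 2: pool owns it */
	...
	page_pool_release_page(pool, page);	/* unmap: back to case 1 */
	put_page(page);				/* plain page refcounting from here */

versus keeping case 2 throughout and recycling via skb_mark_for_recycle().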

> 
>>> - call skb_mark_for_recycle(), which will end up recycling the buffer.
>>
>> If the driver needs to add an extra flag to enable recycling based on the skb
>> instead of the page pool, then adding skb_mark_for_recycle() makes sense to
>> me too; otherwise it seems adding a field in pool->p to enable recycling based
>> on the skb makes more sense?
>>
> 
> The recycling is essentially an SKB feature though, isn't it?  You achieve the
> SKB recycling with the help of the page_pool API, not the other way around.  So I
> think this should remain on the SKB and maybe in the future find ways to turn
> it on/off?

As above, does it not make more sense to call page_pool_release_page() if the
driver does not need SKB recycling?

Even when skb->pp_recycle is 1, pages allocated directly from the page allocator
and pages from the page pool are both supported, so page->signature needs to be
reliable in indicating that a page is indeed owned by a page pool, which means
skb->pp_recycle is used mainly to short-cut the code path for the
skb->pp_recycle == 0 case, so that page->signature does not need checking?

> 
> Thanks
> /Ilias



* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-17  8:25               ` Yunsheng Lin
@ 2021-05-17  9:36                 ` Ilias Apalodimas
  2021-05-17 11:10                   ` Yunsheng Lin
  0 siblings, 1 reply; 22+ messages in thread
From: Ilias Apalodimas @ 2021-05-17  9:36 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

> >>

[...]

> >> In that case, skb_mark_for_recycle() could only set skb->pp_recycle,
> >> but not the pool->p.
> >>
> >>>
> >>>>
> >>>> Another thing that occurred to me is that if the driver uses a page from the
> >>>> page pool to form a skb, and it does not call skb_mark_for_recycle(),
> >>>> then there will be a resource leak, right? If yes, it seems the
> >>>> skb_mark_for_recycle() call does not add any value?
> >>>>
> >>>
> >>> Not really, the driver has 2 choices:
> >>> - call page_pool_release_page() once it receives the payload. That will
> >>>   clean up dma mappings (if page pool is responsible for them) and free the
> >>>   buffer
> >>
> >> This is only needed before SKB recycling is supported, or when the driver
> >> explicitly does not want the SKB recycling support, right?
> >>
> > 
> > This is needed in general even before recycling.  It's used to unmap the
> > buffer, so once you free the SKB you don't leave any stale DMA mappings.  So
> > that's what all the drivers that use page_pool call today.
> 
> As I understand it:
> 1. If the driver is using a page allocated directly from the page allocator to
>    form a skb, let's say the page is owned by the skb (or not owned by anyone :)),
>    and when the skb is freed, put_page() should be called.
> 
> 2. If the driver is using a page allocated from the page pool to form a skb, let's
>    say the page is owned by the page pool, and when the skb is freed,
>    page_pool_put_page() should be called.
> 
> What page_pool_release_page() mainly does is make a page in case 2 go back
> to case 1.

Yeah, but this is done deliberately.  Let me try to explain the reasoning a
bit.  I don't think mixing the SKB path with page_pool is the right idea.
page_pool allocates the memory you want to build an SKB and imho it must be
kept completely disjoint from the generic SKB code.  So once you free an SKB,
I don't like having page_pool_put_page() in the release code explicitly.
What we do instead is call page_pool_release_page() from the driver.  So the
page is disconnected from the page pool and the skb release path works as it
used to.

> 
> And page_pool_release_page() is replaced with skb_mark_for_recycle() in patch
> 4/5 to avoid the above "case 2" -> "case 1" transition, so that the page is still
> owned by the page pool, right?
> 
> So the point is that skb_mark_for_recycle() does not really do anything about
> the owner of the page, which is still owned by the page pool, so it makes more
> sense to keep the page pool ptr instead of setting it every time
> skb_mark_for_recycle() is called?

Yes, it doesn't do anything wrt ownership.  The page must always come
from a page pool if you want to recycle it. But as I tried to explain above,
it felt more intuitive to keep the driver flow as-is, as well as the
release path.  In a driver right now, when you are done with the skb creation,
you unmap the skb->head + fragments.  So if you want to recycle them instead,
you mark the skb and fragments.

> 
> > 
> >>> - call skb_mark_for_recycle(), which will end up recycling the buffer.
> >>
> >> If the driver needs to add an extra flag to enable recycling based on the skb
> >> instead of the page pool, then adding skb_mark_for_recycle() makes sense to
> >> me too; otherwise it seems adding a field in pool->p to enable recycling based
> >> on the skb makes more sense?
> >>
> > 
> > The recycling is essentially an SKB feature though, isn't it?  You achieve the
> > SKB recycling with the help of the page_pool API, not the other way around.  So I
> > think this should remain on the SKB and maybe in the future find ways to turn
> > it on/off?
> 
> As above, does it not make more sense to call page_pool_release_page() if the
> driver does not need SKB recycling?

Call it where? As I tried to explain, it makes no sense to me to have it in
the generic SKB code (unless recycling is enabled).

That's what's happening right now when recycling is enabled.
Basically the call path is:
if (skb bit is set) {
	if (page signature matches)
		page_pool_put_full_page()
}
page_pool_put_full_page() will either:
1. recycle the page in the 'fast cache' of the page pool
2. recycle the page in the ptr ring of the page pool
3. release it by calling page_pool_release_page()

If you don't want to enable it you just call page_pool_release_page() in
your driver and the generic path will free the allocated page.

> 
> Even when skb->pp_recycle is 1, pages allocated directly from the page allocator
> and pages from the page pool are both supported, so page->signature needs to be
> reliable in indicating that a page is indeed owned by a page pool, which means
> skb->pp_recycle is used mainly to short-cut the code path for the
> skb->pp_recycle == 0 case, so that page->signature does not need checking?

Yes, the idea for the recycling bit is that you don't have to fetch the page
into cache to do more processing (since freeing is asynchronous and we
can't have any guarantees on what the cache will have at that point).  So we
are trying to affect the existing release path as little as possible. However it's
that new skb bit that triggers the whole path.

What you propose could still be doable though.  As you said we can add the
page pointer to struct page when we allocate a page_pool page and never
reset it when we recycle the buffer. But I don't think there will be any
performance impact whatsoever. So I prefer the 'visible' approach, at least for
the first iteration.

Thanks
/Ilias
 


* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-17  9:36                 ` Ilias Apalodimas
@ 2021-05-17 11:10                   ` Yunsheng Lin
  2021-05-17 11:35                     ` Ilias Apalodimas
  0 siblings, 1 reply; 22+ messages in thread
From: Yunsheng Lin @ 2021-05-17 11:10 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On 2021/5/17 17:36, Ilias Apalodimas wrote:
 >>
>> Even when skb->pp_recycle is 1, pages allocated directly from the page allocator
>> and pages from the page pool are both supported, so page->signature needs to be
>> reliable in indicating that a page is indeed owned by a page pool, which means
>> skb->pp_recycle is used mainly to short-cut the code path for the
>> skb->pp_recycle == 0 case, so that page->signature does not need checking?
> 
> Yes, the idea for the recycling bit is that you don't have to fetch the page
> into cache to do more processing (since freeing is asynchronous and we
> can't have any guarantees on what the cache will have at that point).  So we
> are trying to affect the existing release path as little as possible. However it's
> that new skb bit that triggers the whole path.
> 
> What you propose could still be doable though.  As you said we can add the
> page pointer to struct page when we allocate a page_pool page and never
> reset it when we recycle the buffer. But I don't think there will be any
> performance impact whatsoever. So I prefer the 'visible' approach, at least for

Setting and unsetting the page_pool ptr every time the page is recycled may
cause a cache bouncing problem when rx cleaning and skb releasing are not
happening on the same CPU.

> the first iteration.
> 
> Thanks
> /Ilias
>  
> 
> .
> 



* Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-05-17 11:10                   ` Yunsheng Lin
@ 2021-05-17 11:35                     ` Ilias Apalodimas
  0 siblings, 0 replies; 22+ messages in thread
From: Ilias Apalodimas @ 2021-05-17 11:35 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Matteo Croce, netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Guillaume Nault, linux-kernel, linux-rdma, bpf, Matthew Wilcox,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On Mon, May 17, 2021 at 07:10:09PM +0800, Yunsheng Lin wrote:
> On 2021/5/17 17:36, Ilias Apalodimas wrote:
>  >>
> >> Even when skb->pp_recycle is 1, pages allocated directly from the page allocator
> >> and pages from the page pool are both supported, so page->signature needs to be
> >> reliable in indicating that a page is indeed owned by a page pool, which means
> >> skb->pp_recycle is used mainly to short-cut the code path for the
> >> skb->pp_recycle == 0 case, so that page->signature does not need checking?
> > 
> > Yes, the idea for the recycling bit is that you don't have to fetch the page
> > into cache to do more processing (since freeing is asynchronous and we
> > can't have any guarantees on what the cache will have at that point).  So we
> > are trying to affect the existing release path as little as possible. However it's
> > that new skb bit that triggers the whole path.
> > 
> > What you propose could still be doable though.  As you said we can add the
> > page pointer to struct page when we allocate a page_pool page and never
> > reset it when we recycle the buffer. But I don't think there will be any
> > performance impact whatsoever. So I prefer the 'visible' approach, at least for
> 
> Setting and unsetting the page_pool ptr every time the page is recycled may
> cause a cache bouncing problem when rx cleaning and skb releasing are not
> happening on the same CPU.

In our case, since the skb is asynchronous and not protected by a NAPI context,
the buffer won't end up in the 'fast' page pool cache.  So we'll recycle by
calling page_pool_recycle_in_ring(), not page_pool_recycle_in_cache(), which
means that the page you recycled will be re-filled later, in batches, when
page_pool_refill_alloc_cache() is called to refill the fast cache.  I am not
saying it might not happen, but I don't really know if it's going to make a
difference or not.  So I just really prefer taking this as is and perhaps
later, when 40/100Gbit drivers start using it, we can justify the optimization
(along with supporting the split-page model).
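
A simplified sketch of that dispatch (illustrative, not the exact code):

	/* on return of a full page to the pool */
	if (allow_direct && in_serving_softirq())
		page_pool_recycle_in_cache(page, pool);	/* per-NAPI fast cache */
	else if (!page_pool_recycle_in_ring(pool, page))
		page_pool_release_page(pool, page);	/* ring full: unmap + free */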

Thanks
/Ilias

> 
> > the first iteration.
> > 
> > Thanks
> > /Ilias
> >  
> > 
> > .
> > 
> 


* Re: [PATCH net-next v5 1/5] mm: add a signature in struct page
  2021-05-14  1:00   ` Matthew Wilcox
  2021-05-14  1:34     ` Matteo Croce
@ 2021-05-18 15:44     ` Matteo Croce
  1 sibling, 0 replies; 22+ messages in thread
From: Matteo Croce @ 2021-05-18 15:44 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On Fri, May 14, 2021 at 3:01 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> I feel like I want to document the pfmemalloc bit in mm_types.h,
> but I don't have a concrete suggestion yet.
>

Maybe simply:

/* Bit zero is set
 * Bit one if pfmemalloc page
 */
 unsigned long compound_head;

Regards,
-- 
per aspera ad upstream


end of thread, other threads:[~2021-05-18 15:45 UTC | newest]

Thread overview: 22+ messages
2021-05-13 16:58 [PATCH net-next v5 0/5] page_pool: recycle buffers Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 1/5] mm: add a signature in struct page Matteo Croce
2021-05-14  1:00   ` Matthew Wilcox
2021-05-14  1:34     ` Matteo Croce
2021-05-18 15:44     ` Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
2021-05-14  3:39   ` Yunsheng Lin
2021-05-14  7:36     ` Ilias Apalodimas
2021-05-14  8:31       ` Yunsheng Lin
2021-05-14  9:17         ` Ilias Apalodimas
2021-05-15  2:07           ` Yunsheng Lin
2021-05-17  6:38             ` Ilias Apalodimas
2021-05-17  8:25               ` Yunsheng Lin
2021-05-17  9:36                 ` Ilias Apalodimas
2021-05-17 11:10                   ` Yunsheng Lin
2021-05-17 11:35                     ` Ilias Apalodimas
2021-05-13 16:58 ` [PATCH net-next v5 4/5] mvpp2: recycle buffers Matteo Croce
2021-05-13 18:20   ` Russell King (Oracle)
2021-05-13 23:52     ` Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 5/5] mvneta: " Matteo Croce
2021-05-13 18:25   ` Russell King (Oracle)
