netdev.vger.kernel.org archive mirror
* [PATCH net-next v7 0/5] page_pool: recycle buffers
@ 2021-06-04 18:33 Matteo Croce
  2021-06-04 18:33 ` [PATCH net-next v7 1/5] mm: add a signature in struct page Matteo Croce
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Matteo Croce @ 2021-06-04 18:33 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is a respin of [1].

This patchset shows the plans for allowing page_pool to handle and
maintain DMA map/unmap of the pages it serves to the driver. For this
to work, a return hook in the network core is introduced.

The overall purpose is to simplify drivers, by providing a page
allocation API that does recycling, such that each driver doesn't have
to reinvent its own recycling scheme. Using page_pool in a driver
does not require implementing XDP support, but it makes it trivially
easy to do so. Instead of allocating buffers specifically for SKBs
we now allocate a generic buffer and either wrap it in an SKB
(via build_skb) or create an XDP frame.
The recycling code leverages the XDP recycle APIs.

The Marvell mvpp2 and mvneta drivers are used in this patchset to
demonstrate how to use the API, and were tested on MacchiatoBIN
and EspressoBIN boards respectively.

Please let this go in on a future -rc1, to allow enough time
for wider testing.

Note that this series depends on the change "mm: fix struct page layout
on 32-bit systems"[2] which is not yet in master.

v6 -> v7:
- refresh patches against net-next
- remove a redundant call to virt_to_head_page()
- update mvneta benchmarks

v5 -> v6
- preserve pfmemalloc bit when setting signature
- fix typo in mvneta
- rebase on net-next with the new cache
- don't clear the skb->pp_recycle in pskb_expand_head()

v4 -> v5:
- move the signature so it doesn't alias with page->mapping
- use an invalid pointer as magic
- incorporate Matthew Wilcox's changes for pfmemalloc pages
- move the __skb_frag_unref() changes to a preliminary patch
- refactor some cpp directives
- only attempt recycling if skb->head_frag
- clear skb->pp_recycle in pskb_expand_head()

v3 -> v4:
- store a pointer to page_pool instead of xdp_mem_info
- drop a patch which reduces xdp_mem_info size
- do the recycling in the page_pool code instead of xdp_return
- remove some unused headers include
- remove some useless forward declaration

v2 -> v3:
- added missing SOBs
- CCed the MM people

v1 -> v2:
- fix a commit message
- avoid setting pp_recycle multiple times on mvneta
- squash two patches to avoid breaking bisect

[1] https://lore.kernel.org/netdev/154413868810.21735.572808840657728172.stgit@firesoul/
[2] https://lore.kernel.org/linux-mm/20210510153211.1504886-1-willy@infradead.org/

Ilias Apalodimas (1):
  page_pool: Allow drivers to hint on SKB recycling

Matteo Croce (4):
  mm: add a signature in struct page
  skbuff: add a parameter to __skb_frag_unref
  mvpp2: recycle buffers
  mvneta: recycle buffers

 drivers/net/ethernet/marvell/mvneta.c         | 11 +++---
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   |  2 +-
 drivers/net/ethernet/marvell/sky2.c           |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |  2 +-
 include/linux/mm.h                            | 12 ++++---
 include/linux/mm_types.h                      | 12 ++++++-
 include/linux/poison.h                        |  3 ++
 include/linux/skbuff.h                        | 34 ++++++++++++++++---
 include/net/page_pool.h                       |  9 +++++
 net/core/page_pool.c                          | 29 ++++++++++++++++
 net/core/skbuff.c                             | 24 ++++++++++---
 net/tls/tls_device.c                          |  2 +-
 12 files changed, 119 insertions(+), 23 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH net-next v7 1/5] mm: add a signature in struct page
  2021-06-04 18:33 [PATCH net-next v7 0/5] page_pool: recycle buffers Matteo Croce
@ 2021-06-04 18:33 ` Matteo Croce
  2021-06-04 19:07   ` Matthew Wilcox
  2021-06-04 18:33 ` [PATCH net-next v7 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Matteo Croce @ 2021-06-04 18:33 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is needed by the page_pool to avoid recycling a page not allocated
via page_pool.

The page signature field (page->pp_magic) is aliased with page->lru.next
and page->compound_head, but it can't be set by mistake because the
signature value is a bad pointer, and can't trigger a false positive
in PageTail() because the bottom bit is 0.

Co-developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 include/linux/mm.h       | 12 +++++++-----
 include/linux/mm_types.h | 12 +++++++++++-
 include/linux/poison.h   |  3 +++
 net/core/page_pool.c     |  6 ++++++
 4 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c274f75efcf9..b71074a5e82b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);
 static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
-	 * Page index cannot be this large so this must be
-	 * a pfmemalloc page.
+	 * This is not a tail page; compound_head of a head page is unused
+	 * at return from the page allocator, and will be overwritten
+	 * by callers who do not care whether the page came from the
+	 * reserves.
 	 */
-	return page->index == -1UL;
+	return page->compound_head & BIT(1);
 }
 
 /*
@@ -1680,12 +1682,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
  */
 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->compound_head = BIT(1);
 }
 
 static inline void clear_page_pfmemalloc(struct page *page)
 {
-	page->index = 0;
+	page->compound_head = 0;
 }
 
 /*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5aacc1c10a45..09f90598ff63 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -96,6 +96,13 @@ struct page {
 			unsigned long private;
 		};
 		struct {	/* page_pool used by netstack */
+			/**
+			 * @pp_magic: magic value to avoid recycling non
+			 * page_pool allocated pages.
+			 */
+			unsigned long pp_magic;
+			struct page_pool *pp;
+			unsigned long _pp_mapping_pad;
 			/**
 			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
@@ -130,7 +137,10 @@ struct page {
 			};
 		};
 		struct {	/* Tail pages of compound page */
-			unsigned long compound_head;	/* Bit zero is set */
+			/* Bit zero is set
+			 * Bit one if pfmemalloc page
+			 */
+			unsigned long compound_head;
 
 			/* First tail page only */
 			unsigned char compound_dtor;
diff --git a/include/linux/poison.h b/include/linux/poison.h
index aff1c9250c82..d62ef5a6b4e9 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -78,4 +78,7 @@
 /********** security/ **********/
 #define KEY_DESTROY		0xbd
 
+/********** net/core/page_pool.c **********/
+#define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3c4c4c7a0402..e1321bc9d316 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -17,6 +17,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/page-flags.h>
 #include <linux/mm.h> /* for __put_page() */
+#include <linux/poison.h>
 
 #include <trace/events/page_pool.h>
 
@@ -221,6 +222,8 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	page->pp_magic |= PP_SIGNATURE;
+
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
@@ -263,6 +266,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 			put_page(page);
 			continue;
 		}
+		page->pp_magic |= PP_SIGNATURE;
 		pool->alloc.cache[pool->alloc.count++] = page;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
@@ -341,6 +345,8 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 			     DMA_ATTR_SKIP_CPU_SYNC);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
+	page->pp_magic = 0;
+
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
-- 
2.31.1



* [PATCH net-next v7 2/5] skbuff: add a parameter to __skb_frag_unref
  2021-06-04 18:33 [PATCH net-next v7 0/5] page_pool: recycle buffers Matteo Croce
  2021-06-04 18:33 ` [PATCH net-next v7 1/5] mm: add a signature in struct page Matteo Croce
@ 2021-06-04 18:33 ` Matteo Croce
  2021-06-04 18:33 ` [PATCH net-next v7 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: Matteo Croce @ 2021-06-04 18:33 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

This is a prerequisite patch; the next one enables recycling of
skbs and fragments. Add an extra argument to __skb_frag_unref() to
handle recycling, and update the current users of the function accordingly.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/sky2.c        | 2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
 include/linux/skbuff.h                     | 8 +++++---
 net/core/skbuff.c                          | 4 ++--
 net/tls/tls_device.c                       | 2 +-
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 324c280cc22c..8b8bff59c8fe 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2503,7 +2503,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
 
 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag);
+			__skb_frag_unref(frag, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index e35e4d7ef4d1..cea62b8f554c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbf820a50a39..7fcfea7e7b21 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3081,10 +3081,12 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: recycle the page if allocated via page_pool
  *
- * Releases a reference on the paged fragment @frag.
+ * Releases a reference on the paged fragment @frag
+ * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
 	put_page(skb_frag_page(frag));
 }
@@ -3098,7 +3100,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ad22870298c..12b7e90dd2b5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -664,7 +664,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i]);
+		__skb_frag_unref(&shinfo->frags[i], false);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -3495,7 +3495,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom);
+		__skb_frag_unref(fragfrom, false);
 	}
 
 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 76a6f8c2eec4..ad11db2c4f63 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -127,7 +127,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;
 
 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i]);
+		__skb_frag_unref(&record->frags[i], false);
 	kfree(record);
 }
 
-- 
2.31.1



* [PATCH net-next v7 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-04 18:33 [PATCH net-next v7 0/5] page_pool: recycle buffers Matteo Croce
  2021-06-04 18:33 ` [PATCH net-next v7 1/5] mm: add a signature in struct page Matteo Croce
  2021-06-04 18:33 ` [PATCH net-next v7 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
@ 2021-06-04 18:33 ` Matteo Croce
  2021-06-04 19:41   ` Matthew Wilcox
  2021-06-04 18:33 ` [PATCH net-next v7 4/5] mvpp2: recycle buffers Matteo Croce
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Matteo Croce @ 2021-06-04 18:33 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Up to now several high speed NICs have had custom mechanisms for recycling
the allocated memory they use for their payloads.
Our page_pool API already has recycling capabilities that are always
used when we are running in 'XDP mode'. So let's tweak the API and the
kernel network stack slightly and allow the recycling to happen even
during standard operation.
The API doesn't take into account the 'split page' policies currently
used by those drivers, but can be extended once we have users for that.

The idea is to be able to intercept the packet on skb_release_data().
If it's a buffer coming from our page_pool API, recycle it back to the
pool for further usage; otherwise release the packet as usual.

To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
a field in struct page (page->pp) to store the page_pool pointer.
Storing the information in page->pp allows us to recycle both SKBs and
their fragments.
We could have skipped the skb bit entirely, since identical information
can be derived from struct page. However, in an effort to affect the free
path as little as possible, reading a single bit in the skb, which is
already in cache, is better than trying to derive the same information
from the data stored in struct page.

The driver or page_pool has to take care of the sync operations on its
own during buffer recycling, since the buffer is never unmapped after
opting in to recycling.

Since the gain for a driver depends on the architecture, we are not
enabling recycling by default when the page_pool API is used by a driver.
In order to enable recycling, the driver must call skb_mark_for_recycle()
to store the information needed for recycling in page->pp and set the
recycling bit, or page_pool_store_mem_info() for a fragment.

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Co-developed-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
 include/net/page_pool.h |  9 +++++++++
 net/core/page_pool.c    | 23 +++++++++++++++++++++++
 net/core/skbuff.c       | 24 ++++++++++++++++++++----
 4 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7fcfea7e7b21..057b40ad29bd 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -40,6 +40,9 @@
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 /* The interface for checksum offload between the stack and networking drivers
  * is as follows...
@@ -667,6 +670,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@head_frag: skb was allocated from page fragments,
  *		not allocated by kmalloc() or vmalloc().
  *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ *	@pp_recycle: mark the packet for recycling instead of freeing (implies
+ *		page_pool support on driver)
  *	@active_extensions: active extensions (skb_ext_id types)
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
@@ -791,10 +796,12 @@ struct sk_buff {
 				fclone:2,
 				peeked:1,
 				head_frag:1,
-				pfmemalloc:1;
+				pfmemalloc:1,
+				pp_recycle:1; /* page_pool recycle indicator */
 #ifdef CONFIG_SKB_EXTENSIONS
 	__u8			active_extensions;
 #endif
+
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
 	 */
@@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-	put_page(skb_frag_page(frag));
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && page_pool_return_skb_page(page_address(page)))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 /**
@@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_PAGE_POOL
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	page_pool_store_mem_info(page, pp);
+}
+#endif
+
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_SKBUFF_H */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b4b6de909c93..7b9b6a1c61f5 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -146,6 +146,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
+bool page_pool_return_skb_page(void *data);
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 #ifdef CONFIG_PAGE_POOL
@@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
 		spin_unlock_bh(&pool->ring.producer_lock);
 }
 
+/* Store mem_info on struct page and use it while recycling skb frags */
+static inline
+void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
+{
+	page->pp = pp;
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e1321bc9d316..a03f48f45696 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -628,3 +628,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool page_pool_return_skb_page(void *data)
+{
+	struct page_pool *pp;
+	struct page *page;
+
+	page = virt_to_head_page(data);
+	if (unlikely(page->pp_magic != PP_SIGNATURE))
+		return false;
+
+	pp = (struct page_pool *)page->pp;
+
+	/* Driver set this to memory recycling info. Reset it on recycle.
+	 * This will *not* work for NIC using a split-page memory model.
+	 * The page will be returned to the pool here regardless of the
+	 * 'flipped' fragment being in use or not.
+	 */
+	page->pp = NULL;
+	page_pool_put_full_page(pp, page, false);
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_return_skb_page);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12b7e90dd2b5..f769f08e7b32 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -70,6 +70,9 @@
 #include <net/xfrm.h>
 #include <net/mpls.h>
 #include <net/mptcp.h>
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 #include <linux/uaccess.h>
 #include <trace/events/skb.h>
@@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
 {
 	unsigned char *head = skb->head;
 
-	if (skb->head_frag)
+	if (skb->head_frag) {
+#ifdef CONFIG_PAGE_POOL
+		if (skb->pp_recycle && page_pool_return_skb_page(head))
+			return;
+#endif
 		skb_free_frag(head);
-	else
+	} else {
 		kfree(head);
+	}
 }
 
 static void skb_release_data(struct sk_buff *skb)
@@ -664,7 +672,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -1046,6 +1054,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
 	n->nohdr = 0;
 	n->peeked = 0;
 	C(pfmemalloc);
+	C(pp_recycle);
 	n->destructor = NULL;
 	C(tail);
 	C(end);
@@ -3495,7 +3504,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, false);
+		__skb_frag_unref(fragfrom, skb->pp_recycle);
 	}
 
 	/* Reposition in the original skb */
@@ -5285,6 +5294,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	if (skb_cloned(to))
 		return false;
 
+	/* The page pool signature of struct page will eventually figure out
+	 * which pages can be recycled or not but for now let's prohibit slab
+	 * allocated and page_pool allocated SKBs from being coalesced.
+	 */
+	if (to->pp_recycle != from->pp_recycle)
+		return false;
+
 	if (len <= skb_tailroom(to)) {
 		if (len)
 			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
-- 
2.31.1



* [PATCH net-next v7 4/5] mvpp2: recycle buffers
  2021-06-04 18:33 [PATCH net-next v7 0/5] page_pool: recycle buffers Matteo Croce
                   ` (2 preceding siblings ...)
  2021-06-04 18:33 ` [PATCH net-next v7 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
@ 2021-06-04 18:33 ` Matteo Croce
  2021-06-04 19:48   ` Matthew Wilcox
  2021-06-04 18:33 ` [PATCH net-next v7 5/5] mvneta: " Matteo Croce
  2021-06-04 18:57 ` [PATCH net-next v7 0/5] page_pool: " Matthew Wilcox
  5 siblings, 1 reply; 18+ messages in thread
From: Matteo Croce @ 2021-06-04 18:33 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate is almost doubled,
from 1110 Kpps to 2128 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  34.88%  [kernel]          [k] page_pool_release_page
   8.06%  [kernel]          [k] free_unref_page
   6.42%  [mvpp2]           [k] mvpp2_rx
   6.07%  [kernel]          [k] eth_type_trans
   5.18%  [kernel]          [k] __netif_receive_skb_core
   4.95%  [kernel]          [k] build_skb
   4.88%  [kernel]          [k] kmem_cache_free
   3.97%  [kernel]          [k] kmem_cache_alloc
   3.45%  [kernel]          [k] dev_gro_receive
   2.73%  [kernel]          [k] page_frag_free
   2.07%  [kernel]          [k] __alloc_pages_bulk
   1.99%  [kernel]          [k] arch_local_irq_save
   1.84%  [kernel]          [k] skb_release_data
   1.20%  [kernel]          [k] netif_receive_skb_list_internal

With packet rate stable at 1100 Kpps:

tx: 0 bps 0 pps rx: 532.7 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.6 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.4 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 532.1 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  12.91%  [kernel]          [k] eth_type_trans
  12.54%  [mvpp2]           [k] mvpp2_rx
   9.67%  [kernel]          [k] build_skb
   9.63%  [kernel]          [k] __netif_receive_skb_core
   8.44%  [kernel]          [k] page_pool_put_page
   8.07%  [kernel]          [k] kmem_cache_free
   7.79%  [kernel]          [k] kmem_cache_alloc
   6.86%  [kernel]          [k] dev_gro_receive
   3.19%  [kernel]          [k] skb_release_data
   2.41%  [kernel]          [k] netif_receive_skb_list_internal
   2.18%  [kernel]          [k] page_pool_refill_alloc_cache
   1.76%  [kernel]          [k] napi_gro_receive
   1.61%  [kernel]          [k] kfree_skb
   1.20%  [kernel]          [k] dma_sync_single_for_device
   1.16%  [mvpp2]           [k] mvpp2_poll
   1.12%  [mvpp2]           [k] mvpp2_read

With packet rate above 2100 Kpps:

tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2127 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2129 Kpps

The major performance increase is explained by the fact that the most
CPU-consuming functions (page_pool_release_page, page_frag_free and
free_unref_page) are no longer called on a per-packet basis.

The test was done by sending 64 byte Ethernet frames with an invalid
ethertype to the MacchiatoBIN, so the packets are dropped early in the
RX path.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index d4fb620f53f3..b1d186abcc6c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3997,7 +3997,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		}
 
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), pp);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
 					       bm_pool->buf_size, DMA_FROM_DEVICE,
-- 
2.31.1



* [PATCH net-next v7 5/5] mvneta: recycle buffers
  2021-06-04 18:33 [PATCH net-next v7 0/5] page_pool: recycle buffers Matteo Croce
                   ` (3 preceding siblings ...)
  2021-06-04 18:33 ` [PATCH net-next v7 4/5] mvpp2: recycle buffers Matteo Croce
@ 2021-06-04 18:33 ` Matteo Croce
  2021-06-04 18:57 ` [PATCH net-next v7 0/5] page_pool: " Matthew Wilcox
  5 siblings, 0 replies; 18+ messages in thread
From: Matteo Croce @ 2021-06-04 18:33 UTC (permalink / raw)
  To: netdev, linux-mm
  Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
	David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
	Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
	Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
	Andrew Morton, Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
	Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen

From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate increased by 10%,
from 296 Kpps to 326 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  23.66%  [kernel]          [k] __pi___inval_dcache_area
  22.85%  [mvneta]          [k] mvneta_rx_swbm
   7.54%  [kernel]          [k] kmem_cache_alloc
   6.49%  [kernel]          [k] eth_type_trans
   3.94%  [kernel]          [k] dev_gro_receive
   3.91%  [kernel]          [k] __netif_receive_skb_core
   3.91%  [kernel]          [k] kmem_cache_free
   3.76%  [kernel]          [k] page_pool_release_page
   3.56%  [kernel]          [k] free_unref_page
   2.40%  [kernel]          [k] build_skb
   1.49%  [kernel]          [k] skb_release_data
   1.45%  [kernel]          [k] __alloc_pages_bulk
   1.30%  [kernel]          [k] page_frag_free

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  26.41%  [kernel]          [k] __pi___inval_dcache_area
  25.00%  [mvneta]          [k] mvneta_rx_swbm
   8.14%  [kernel]          [k] kmem_cache_alloc
   6.84%  [kernel]          [k] eth_type_trans
   4.44%  [kernel]          [k] __netif_receive_skb_core
   4.38%  [kernel]          [k] kmem_cache_free
   4.16%  [kernel]          [k] dev_gro_receive
   3.21%  [kernel]          [k] page_pool_put_page
   2.41%  [kernel]          [k] build_skb
   1.82%  [kernel]          [k] skb_release_data
   1.61%  [kernel]          [k] napi_gro_receive
   1.25%  [kernel]          [k] page_pool_refill_alloc_cache
   1.16%  [kernel]          [k] __netif_receive_skb_list_core

We can see that page_pool_release_page(), free_unref_page() and
__alloc_pages_bulk() are no longer on top of the list when receiving
traffic.

The test was done with mausezahn on the TX side with 64-byte raw
Ethernet frames.

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..c15ce06427d0 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 }
 
 static struct sk_buff *
-mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		/* We don't need to reset pp_recycle here. It's already set, so
+		 * just mark fragments for recycling.
+		 */
+		page_pool_store_mem_info(skb_frag_page(frag), pool);
 	}
 
 	return skb;
@@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
-		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		skb = mvneta_swbm_build_skb(pp, rxq->page_pool, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-- 
2.31.1



* Re: [PATCH net-next v7 0/5] page_pool: recycle buffers
@ 2021-06-04 18:57 ` Matthew Wilcox
From: Matthew Wilcox @ 2021-06-04 18:57 UTC (permalink / raw)
  To: Matteo Croce

On Fri, Jun 04, 2021 at 08:33:44PM +0200, Matteo Croce wrote:
> Please let this go in on a future -rc1 so as to allow enough time
> to have wider tests.
> 
> Note that this series depends on the change "mm: fix struct page layout
> on 32-bit systems"[2] which is not yet in master.

It is now: commit 9ddb3c14afba8bc5950ed297f02d4ae05ff35cd1



* Re: [PATCH net-next v7 1/5] mm: add a signature in struct page
@ 2021-06-04 19:07   ` Matthew Wilcox
From: Matthew Wilcox @ 2021-06-04 19:07 UTC (permalink / raw)
  To: Matteo Croce

On Fri, Jun 04, 2021 at 08:33:45PM +0200, Matteo Croce wrote:
> @@ -130,7 +137,10 @@ struct page {
>  			};
>  		};
>  		struct {	/* Tail pages of compound page */
> -			unsigned long compound_head;	/* Bit zero is set */
> +			/* Bit zero is set
> +			 * Bit one if pfmemalloc page
> +			 */
> +			unsigned long compound_head;

I would drop this hunk.  Bit 1 is not used for this purpose in tail
pages; it's used for that purpose in head and base pages.

I suppose we could do something like ...

 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->lru.next = (void *)2;
 }

if it's causing confusion.



* Re: [PATCH net-next v7 3/5] page_pool: Allow drivers to hint on SKB recycling
@ 2021-06-04 19:41   ` Matthew Wilcox
From: Matthew Wilcox @ 2021-06-04 19:41 UTC (permalink / raw)
  To: Matteo Croce

On Fri, Jun 04, 2021 at 08:33:47PM +0200, Matteo Croce wrote:
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 7fcfea7e7b21..057b40ad29bd 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -40,6 +40,9 @@
>  #if IS_ENABLED(CONFIG_NF_CONNTRACK)
>  #include <linux/netfilter/nf_conntrack_common.h>
>  #endif
> +#ifdef CONFIG_PAGE_POOL
> +#include <net/page_pool.h>
> +#endif

I'm not a huge fan of conditional includes ... any reason to not include
it always?

> @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
>   */
>  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
>  {
> -	put_page(skb_frag_page(frag));
> +	struct page *page = skb_frag_page(frag);
> +
> +#ifdef CONFIG_PAGE_POOL
> +	if (recycle && page_pool_return_skb_page(page_address(page)))
> +		return;

It feels weird to have a page here, convert it back to an address,
then convert it back to a head page in page_pool_return_skb_page().
How about passing 'page' here, calling compound_head() in
page_pool_return_skb_page() and calling virt_to_page() in skb_free_head()?

> @@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
>  		spin_unlock_bh(&pool->ring.producer_lock);
>  }
>  
> +/* Store mem_info on struct page and use it while recycling skb frags */
> +static inline
> +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
> +{
> +	page->pp = pp;

I'm not sure this wrapper needs to exist.

> +}
> +
>  #endif /* _NET_PAGE_POOL_H */
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index e1321bc9d316..a03f48f45696 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -628,3 +628,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
>  	}
>  }
>  EXPORT_SYMBOL(page_pool_update_nid);
> +
> +bool page_pool_return_skb_page(void *data)
> +{
> +	struct page_pool *pp;
> +	struct page *page;
> +
> +	page = virt_to_head_page(data);
> +	if (unlikely(page->pp_magic != PP_SIGNATURE))
> +		return false;
> +
> +	pp = (struct page_pool *)page->pp;

You don't need the cast any more.

> +	/* Driver set this to memory recycling info. Reset it on recycle.
> +	 * This will *not* work for NIC using a split-page memory model.
> +	 * The page will be returned to the pool here regardless of the
> +	 * 'flipped' fragment being in use or not.
> +	 */
> +	page->pp = NULL;
> +	page_pool_put_full_page(pp, page, false);
> +
> +	return true;
> +}
> +EXPORT_SYMBOL(page_pool_return_skb_page);
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 12b7e90dd2b5..f769f08e7b32 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -70,6 +70,9 @@
>  #include <net/xfrm.h>
>  #include <net/mpls.h>
>  #include <net/mptcp.h>
> +#ifdef CONFIG_PAGE_POOL
> +#include <net/page_pool.h>
> +#endif
>  
>  #include <linux/uaccess.h>
>  #include <trace/events/skb.h>
> @@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
>  {
>  	unsigned char *head = skb->head;
>  
> -	if (skb->head_frag)
> +	if (skb->head_frag) {
> +#ifdef CONFIG_PAGE_POOL
> +		if (skb->pp_recycle && page_pool_return_skb_page(head))
> +			return;
> +#endif

put this in a header file:

static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
{
	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
		return false;
	return page_pool_return_skb_page(virt_to_page(data));
}

then this becomes:

	if (skb->head_frag) {
		if (skb_pp_recycle(skb, head))
			return;
>  		skb_free_frag(head);
> -	else
> +	} else {
>  		kfree(head);
> +	}
>  }
>  
>  static void skb_release_data(struct sk_buff *skb)


* Re: [PATCH net-next v7 4/5] mvpp2: recycle buffers
@ 2021-06-04 19:48   ` Matthew Wilcox
From: Matthew Wilcox @ 2021-06-04 19:48 UTC (permalink / raw)
  To: Matteo Croce

On Fri, Jun 04, 2021 at 08:33:48PM +0200, Matteo Croce wrote:
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -3997,7 +3997,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
>  		}
>  
>  		if (pp)
> -			page_pool_release_page(pp, virt_to_page(data));
> +			skb_mark_for_recycle(skb, virt_to_page(data), pp);

Does this driver only use order-0 pages?  Should it be using
virt_to_head_page() here?  or should skb_mark_for_recycle() call
compound_head() internally?


* Re: [PATCH net-next v7 1/5] mm: add a signature in struct page
@ 2021-06-04 22:59     ` Matteo Croce
From: Matteo Croce @ 2021-06-04 22:59 UTC (permalink / raw)
  To: Matthew Wilcox

On Fri, Jun 4, 2021 at 9:08 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Jun 04, 2021 at 08:33:45PM +0200, Matteo Croce wrote:
> > @@ -130,7 +137,10 @@ struct page {
> >                       };
> >               };
> >               struct {        /* Tail pages of compound page */
> > -                     unsigned long compound_head;    /* Bit zero is set */
> > +                     /* Bit zero is set
> > +                      * Bit one if pfmemalloc page
> > +                      */
> > +                     unsigned long compound_head;
>
> I would drop this hunk.  Bit 1 is not used for this purpose in tail
> pages; it's used for that purpose in head and base pages.
>
> I suppose we could do something like ...
>
>  static inline void set_page_pfmemalloc(struct page *page)
>  {
> -       page->index = -1UL;
> +       page->lru.next = (void *)2;
>  }
>
> if it's causing confusion.
>

If you prefer, ok for me.
Why not "(void *)BIT(1)"? Just to remark that it's a single bit and
not a magic-like value?

-- 
per aspera ad upstream


* Re: [PATCH net-next v7 1/5] mm: add a signature in struct page
@ 2021-06-05 14:32       ` Matthew Wilcox
From: Matthew Wilcox @ 2021-06-05 14:32 UTC (permalink / raw)
  To: Matteo Croce

On Sat, Jun 05, 2021 at 12:59:50AM +0200, Matteo Croce wrote:
> On Fri, Jun 4, 2021 at 9:08 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Fri, Jun 04, 2021 at 08:33:45PM +0200, Matteo Croce wrote:
> > > @@ -130,7 +137,10 @@ struct page {
> > >                       };
> > >               };
> > >               struct {        /* Tail pages of compound page */
> > > -                     unsigned long compound_head;    /* Bit zero is set */
> > > +                     /* Bit zero is set
> > > +                      * Bit one if pfmemalloc page
> > > +                      */
> > > +                     unsigned long compound_head;
> >
> > I would drop this hunk.  Bit 1 is not used for this purpose in tail
> > pages; it's used for that purpose in head and base pages.
> >
> > I suppose we could do something like ...
> >
> >  static inline void set_page_pfmemalloc(struct page *page)
> >  {
> > -       page->index = -1UL;
> > +       page->lru.next = (void *)2;
> >  }
> >
> > if it's causing confusion.
> >
> 
> If you prefer, ok for me.
> Why not "(void *)BIT(1)"? Just to remark that it's a single bit and
> not a magic like value?

I don't have a strong preference.  I'd use '2', but I wouldn't ask
BIT(1) to be changed.


* Re: [PATCH net-next v7 1/5] mm: add a signature in struct page
@ 2021-06-06  1:50         ` Matteo Croce
From: Matteo Croce @ 2021-06-06  1:50 UTC (permalink / raw)
  To: Matthew Wilcox

On Sat, Jun 5, 2021 at 4:32 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Sat, Jun 05, 2021 at 12:59:50AM +0200, Matteo Croce wrote:
> > On Fri, Jun 4, 2021 at 9:08 PM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Fri, Jun 04, 2021 at 08:33:45PM +0200, Matteo Croce wrote:
> > > > @@ -130,7 +137,10 @@ struct page {
> > > >                       };
> > > >               };
> > > >               struct {        /* Tail pages of compound page */
> > > > -                     unsigned long compound_head;    /* Bit zero is set */
> > > > +                     /* Bit zero is set
> > > > +                      * Bit one if pfmemalloc page
> > > > +                      */
> > > > +                     unsigned long compound_head;
> > >
> > > I would drop this hunk.  Bit 1 is not used for this purpose in tail
> > > pages; it's used for that purpose in head and base pages.
> > >
> > > I suppose we could do something like ...
> > >
> > >  static inline void set_page_pfmemalloc(struct page *page)
> > >  {
> > > -       page->index = -1UL;
> > > +       page->lru.next = (void *)2;
> > >  }
> > >
> > > if it's causing confusion.
> > >
> >

And change all the *_pfmemalloc functions to use page->lru.next like this?

@@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);
static inline bool page_is_pfmemalloc(const struct page *page)
{
       /*
-        * Page index cannot be this large so this must be
-        * a pfmemalloc page.
+        * This is not a tail page; compound_head of a head page is unused
+        * at return from the page allocator, and will be overwritten
+        * by callers who do not care whether the page came from the
+        * reserves.
        */
-       return page->index == -1UL;
+       return (uintptr_t)page->lru.next & BIT(1);
}

/*
@@ -1680,12 +1682,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
 */
static inline void set_page_pfmemalloc(struct page *page)
{
-       page->index = -1UL;
+       page->lru.next = (void *)BIT(1);
}

static inline void clear_page_pfmemalloc(struct page *page)
{
-       page->index = 0;
+       page->lru.next = NULL;

}

-- 
per aspera ad upstream


* Re: [PATCH net-next v7 4/5] mvpp2: recycle buffers
@ 2021-06-06  1:59     ` Matteo Croce
From: Matteo Croce @ 2021-06-06  1:59 UTC (permalink / raw)
  To: Matthew Wilcox

On Fri, Jun 4, 2021 at 9:48 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Jun 04, 2021 at 08:33:48PM +0200, Matteo Croce wrote:
> > +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> > @@ -3997,7 +3997,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
> >               }
> >
> >               if (pp)
> > -                     page_pool_release_page(pp, virt_to_page(data));
> > +                     skb_mark_for_recycle(skb, virt_to_page(data), pp);
>
> Does this driver only use order-0 pages?  Should it be using
> virt_to_head_page() here?  or should skb_mark_for_recycle() call
> compound_head() internally?

This driver uses only order-0 pages.

-- 
per aspera ad upstream


* Re: [PATCH net-next v7 3/5] page_pool: Allow drivers to hint on SKB recycling
@ 2021-06-07  4:51     ` Ilias Apalodimas
From: Ilias Apalodimas @ 2021-06-07  4:51 UTC (permalink / raw)
  To: Matthew Wilcox

On Fri, Jun 04, 2021 at 08:41:52PM +0100, Matthew Wilcox wrote:
> On Fri, Jun 04, 2021 at 08:33:47PM +0200, Matteo Croce wrote:
> > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> > index 7fcfea7e7b21..057b40ad29bd 100644
> > --- a/include/linux/skbuff.h
> > +++ b/include/linux/skbuff.h
> > @@ -40,6 +40,9 @@
> >  #if IS_ENABLED(CONFIG_NF_CONNTRACK)
> >  #include <linux/netfilter/nf_conntrack_common.h>
> >  #endif
> > +#ifdef CONFIG_PAGE_POOL
> > +#include <net/page_pool.h>
> > +#endif
> 
> I'm not a huge fan of conditional includes ... any reason to not include
> it always?

I think we can. I'll check and change it. 

> 
> > @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
> >   */
> >  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
> >  {
> > -	put_page(skb_frag_page(frag));
> > +	struct page *page = skb_frag_page(frag);
> > +
> > +#ifdef CONFIG_PAGE_POOL
> > +	if (recycle && page_pool_return_skb_page(page_address(page)))
> > +		return;
> 
> It feels weird to have a page here, convert it back to an address,
> then convert it back to a head page in page_pool_return_skb_page().
> How about passing 'page' here, calling compound_head() in
> page_pool_return_skb_page() and calling virt_to_page() in skb_free_head()?
> 

Sure, sounds reasonable. 

> > @@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
> >  		spin_unlock_bh(&pool->ring.producer_lock);
> >  }
> >  
> > +/* Store mem_info on struct page and use it while recycling skb frags */
> > +static inline
> > +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
> > +{
> > +	page->pp = pp;
> 
> I'm not sure this wrapper needs to exist.
> 
> > +}
> > +
> >  #endif /* _NET_PAGE_POOL_H */
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index e1321bc9d316..a03f48f45696 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -628,3 +628,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
> >  	}
> >  }
> >  EXPORT_SYMBOL(page_pool_update_nid);
> > +
> > +bool page_pool_return_skb_page(void *data)
> > +{
> > +	struct page_pool *pp;
> > +	struct page *page;
> > +
> > +	page = virt_to_head_page(data);
> > +	if (unlikely(page->pp_magic != PP_SIGNATURE))
> > +		return false;
> > +
> > +	pp = (struct page_pool *)page->pp;
> 
> You don't need the cast any more.
> 

True

> > +	/* Driver set this to memory recycling info. Reset it on recycle.
> > +	 * This will *not* work for NIC using a split-page memory model.
> > +	 * The page will be returned to the pool here regardless of the
> > +	 * 'flipped' fragment being in use or not.
> > +	 */
> > +	page->pp = NULL;
> > +	page_pool_put_full_page(pp, page, false);
> > +
> > +	return true;
> > +}
> > +EXPORT_SYMBOL(page_pool_return_skb_page);
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 12b7e90dd2b5..f769f08e7b32 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -70,6 +70,9 @@
> >  #include <net/xfrm.h>
> >  #include <net/mpls.h>
> >  #include <net/mptcp.h>
> > +#ifdef CONFIG_PAGE_POOL
> > +#include <net/page_pool.h>
> > +#endif
> >  
> >  #include <linux/uaccess.h>
> >  #include <trace/events/skb.h>
> > @@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
> >  {
> >  	unsigned char *head = skb->head;
> >  
> > -	if (skb->head_frag)
> > +	if (skb->head_frag) {
> > +#ifdef CONFIG_PAGE_POOL
> > +		if (skb->pp_recycle && page_pool_return_skb_page(head))
> > +			return;
> > +#endif
> 
> put this in a header file:
> 
> static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
> {
> 	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
> 		return false;
> 	return page_pool_return_skb_page(virt_to_page(data));
> }
> 
> then this becomes:
> 
> 	if (skb->head_frag) {
> 		if (skb_pp_recycle(skb, head))
> 			return;
> >  		skb_free_frag(head);
> > -	else
> > +	} else {
> >  		kfree(head);
> > +	}
> >  }
> >  

ok


Thanks for having a look

Cheers
/Ilias


* Re: [PATCH net-next v7 1/5] mm: add a signature in struct page
@ 2021-06-07 13:52           ` Matthew Wilcox
From: Matthew Wilcox @ 2021-06-07 13:52 UTC (permalink / raw)
  To: Matteo Croce

On Sun, Jun 06, 2021 at 03:50:54AM +0200, Matteo Croce wrote:
> And change all the *_pfmemalloc functions to use page->lru.next like this?
> 
> @@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);
> static inline bool page_is_pfmemalloc(const struct page *page)
> {
>        /*
> -        * Page index cannot be this large so this must be
> -        * a pfmemalloc page.
> +        * This is not a tail page; compound_head of a head page is unused
> +        * at return from the page allocator, and will be overwritten
> +        * by callers who do not care whether the page came from the
> +        * reserves.
>         */

The comment doesn't make a lot of sense if we're switching to use
lru.next.  How about:

	/*
	 * lru.next has bit 1 set if the page is allocated from the
	 * pfmemalloc reserves.  Callers may simply overwrite it if
	 * they do not need to preserve that information.
	 */


* Re: [PATCH net-next v7 1/5] mm: add a signature in struct page
@ 2021-06-07 13:58             ` Matteo Croce
From: Matteo Croce @ 2021-06-07 13:58 UTC (permalink / raw)
  To: Matthew Wilcox

On Mon, Jun 7, 2021 at 3:53 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Sun, Jun 06, 2021 at 03:50:54AM +0200, Matteo Croce wrote:
> > And change all the *_pfmemalloc functions to use page->lru.next like this?
> >
> > @@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);
> > static inline bool page_is_pfmemalloc(const struct page *page)
> > {
> >        /*
> > -        * Page index cannot be this large so this must be
> > -        * a pfmemalloc page.
> > +        * This is not a tail page; compound_head of a head page is unused
> > +        * at return from the page allocator, and will be overwritten
> > +        * by callers who do not care whether the page came from the
> > +        * reserves.
> >         */
>
> The comment doesn't make a lot of sense if we're switching to use
> lru.next.  How about:
>
>         /*
>          * lru.next has bit 1 set if the page is allocated from the
>          * pfmemalloc reserves.  Callers may simply overwrite it if
>          * they do not need to preserve that information.
>          */

Sounds good!
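The encoding agreed on above steals bit 1 of a pointer-sized field
(lru.next), relying on word alignment to guarantee the bit is otherwise
zero. A minimal userspace sketch of the same tagging trick, with
illustrative stand-in names rather than the kernel's actual struct page
helpers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the lru.next slot in struct page. */
struct fake_page {
	uintptr_t lru_next;
};

/* Bit 1 carries the "allocated from pfmemalloc reserves" flag;
 * pointers to word-aligned objects never have it set, so it is
 * free for this use. */
void set_page_pfmemalloc(struct fake_page *p)
{
	p->lru_next |= 2;	/* BIT(1) */
}

bool page_is_pfmemalloc(const struct fake_page *p)
{
	return p->lru_next & 2;
}

/* Callers that do not need the information may simply overwrite
 * the field, which clears the flag along with the rest of the word. */
void overwrite_lru_next(struct fake_page *p, uintptr_t v)
{
	p->lru_next = v;
}
```

Storing a word-aligned pointer back into the field implicitly clears the
flag, which is exactly why "callers may simply overwrite it" is safe.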

-- 
per aspera ad upstream

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v7 3/5] page_pool: Allow drivers to hint on SKB recycling
  2021-06-04 19:41   ` Matthew Wilcox
  2021-06-07  4:51     ` Ilias Apalodimas
@ 2021-06-07 14:55     ` Matteo Croce
  1 sibling, 0 replies; 18+ messages in thread
From: Matteo Croce @ 2021-06-07 14:55 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: netdev, linux-mm, Ayush Sawal, Vinay Kumar Yadav,
	Rohit Maheshwari, David S. Miller, Jakub Kicinski,
	Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner,
	Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
	Peter Zijlstra (Intel),
	Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu,
	Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe,
	Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao,
	Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin,
	Yunsheng Lin, Guillaume Nault, linux-kernel, linux-rdma, bpf,
	Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
	Andrew Lunn, Paolo Abeni, Sven Auhagen

On Fri, Jun 4, 2021 at 9:42 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Jun 04, 2021 at 08:33:47PM +0200, Matteo Croce wrote:
> > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> > index 7fcfea7e7b21..057b40ad29bd 100644
> > --- a/include/linux/skbuff.h
> > +++ b/include/linux/skbuff.h
> > @@ -40,6 +40,9 @@
> >  #if IS_ENABLED(CONFIG_NF_CONNTRACK)
> >  #include <linux/netfilter/nf_conntrack_common.h>
> >  #endif
> > +#ifdef CONFIG_PAGE_POOL
> > +#include <net/page_pool.h>
> > +#endif
>
> I'm not a huge fan of conditional includes ... any reason to not include
> it always?
>

Nope. I tried without the conditional on a system without
CONFIG_PAGE_POOL and it compiles fine.

> > @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
> >   */
> >  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
> >  {
> > -     put_page(skb_frag_page(frag));
> > +     struct page *page = skb_frag_page(frag);
> > +
> > +#ifdef CONFIG_PAGE_POOL
> > +     if (recycle && page_pool_return_skb_page(page_address(page)))
> > +             return;
>
> It feels weird to have a page here, convert it back to an address,
> then convert it back to a head page in page_pool_return_skb_page().
> How about passing 'page' here, calling compound_head() in
> page_pool_return_skb_page() and calling virt_to_page() in skb_free_head()?
>

I like it.

> > @@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
> >               spin_unlock_bh(&pool->ring.producer_lock);
> >  }
> >
> > +/* Store mem_info on struct page and use it while recycling skb frags */
> > +static inline
> > +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
> > +{
> > +     page->pp = pp;
>
> I'm not sure this wrapper needs to exist.
>

I admit that this wrapper was bigger in the previous versions, but
it's used by drivers that handle skb fragments (e.g. mvneta) to set
the pointer for each frag.
We could open-code it, but it would be less straightforward.

> > +}
> > +
> >  #endif /* _NET_PAGE_POOL_H */
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index e1321bc9d316..a03f48f45696 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -628,3 +628,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
> >       }
> >  }
> >  EXPORT_SYMBOL(page_pool_update_nid);
> > +
> > +bool page_pool_return_skb_page(void *data)
> > +{
> > +     struct page_pool *pp;
> > +     struct page *page;
> > +
> > +     page = virt_to_head_page(data);
> > +     if (unlikely(page->pp_magic != PP_SIGNATURE))
> > +             return false;
> > +
> > +     pp = (struct page_pool *)page->pp;
>
> You don't need the cast any more.
>

Right.

> > +     /* Driver set this to memory recycling info. Reset it on recycle.
> > +      * This will *not* work for NIC using a split-page memory model.
> > +      * The page will be returned to the pool here regardless of the
> > +      * 'flipped' fragment being in use or not.
> > +      */
> > +     page->pp = NULL;
> > +     page_pool_put_full_page(pp, page, false);
> > +
> > +     return true;
> > +}
> > +EXPORT_SYMBOL(page_pool_return_skb_page);
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 12b7e90dd2b5..f769f08e7b32 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -70,6 +70,9 @@
> >  #include <net/xfrm.h>
> >  #include <net/mpls.h>
> >  #include <net/mptcp.h>
> > +#ifdef CONFIG_PAGE_POOL
> > +#include <net/page_pool.h>
> > +#endif
> >
> >  #include <linux/uaccess.h>
> >  #include <trace/events/skb.h>
> > @@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
> >  {
> >       unsigned char *head = skb->head;
> >
> > -     if (skb->head_frag)
> > +     if (skb->head_frag) {
> > +#ifdef CONFIG_PAGE_POOL
> > +             if (skb->pp_recycle && page_pool_return_skb_page(head))
> > +                     return;
> > +#endif
>
> put this in a header file:
>
> static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
> {
>         if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
>                 return false;
>         return page_pool_return_skb_page(virt_to_page(data));
> }
>
> then this becomes:
>
>         if (skb->head_frag) {
>                 if (skb_pp_recycle(skb, head))
>                         return;
> >               skb_free_frag(head);
> > -     else
> > +     } else {
> >               kfree(head);
> > +     }
> >  }
> >
> >  static void skb_release_data(struct sk_buff *skb)

Done. I'll send a v8 soon.
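The skb_pp_recycle() helper suggested above combines a compile-time gate
with a per-skb flag, so the recycle path compiles away entirely when
page_pool is not built in. A hedged userspace sketch of that pattern;
HAVE_PAGE_POOL, the fake types, and fake_return_to_pool() are
illustrative stand-ins, not the kernel's API:

```c
#include <assert.h>
#include <stdbool.h>

#define HAVE_PAGE_POOL 1	/* stand-in for IS_ENABLED(CONFIG_PAGE_POOL) */

struct fake_skb {
	bool pp_recycle;	/* driver marked the head as pool-backed */
};

static int recycled;		/* counts buffers handed back to the pool */

/* Stand-in for page_pool_return_skb_page(): pretend every eligible
 * buffer is successfully returned. */
bool fake_return_to_pool(void *data)
{
	(void)data;
	recycled++;
	return true;
}

/* Mirrors the suggested helper: when HAVE_PAGE_POOL is a constant 0,
 * the compiler can drop the whole branch and the call site reduces
 * to the plain free path. */
bool skb_pp_recycle(struct fake_skb *skb, void *data)
{
	if (!HAVE_PAGE_POOL || !skb->pp_recycle)
		return false;
	return fake_return_to_pool(data);
}
```

The caller then only needs `if (skb_pp_recycle(skb, head)) return;`
before the existing skb_free_frag() path, which is what the v8 change
reduces skb_free_head() to.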

-- 
per aspera ad upstream

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2021-06-07 14:56 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-04 18:33 [PATCH net-next v7 0/5] page_pool: recycle buffers Matteo Croce
2021-06-04 18:33 ` [PATCH net-next v7 1/5] mm: add a signature in struct page Matteo Croce
2021-06-04 19:07   ` Matthew Wilcox
2021-06-04 22:59     ` Matteo Croce
2021-06-05 14:32       ` Matthew Wilcox
2021-06-06  1:50         ` Matteo Croce
2021-06-07 13:52           ` Matthew Wilcox
2021-06-07 13:58             ` Matteo Croce
2021-06-04 18:33 ` [PATCH net-next v7 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
2021-06-04 18:33 ` [PATCH net-next v7 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
2021-06-04 19:41   ` Matthew Wilcox
2021-06-07  4:51     ` Ilias Apalodimas
2021-06-07 14:55     ` Matteo Croce
2021-06-04 18:33 ` [PATCH net-next v7 4/5] mvpp2: recycle buffers Matteo Croce
2021-06-04 19:48   ` Matthew Wilcox
2021-06-06  1:59     ` Matteo Croce
2021-06-04 18:33 ` [PATCH net-next v7 5/5] mvneta: " Matteo Croce
2021-06-04 18:57 ` [PATCH net-next v7 0/5] page_pool: " Matthew Wilcox
