* [net-next v3 00/10] page_pool: Add page_pool stat counters
@ 2022-02-02  1:12 Joe Damato
  2022-02-02  1:12 ` [net-next v3 01/10] page_pool: kconfig: Add flag for page pool stats Joe Damato
                   ` (11 more replies)
  0 siblings, 12 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Greetings:

Sending a v3 to fix some issues I noted with the procfs code in patch 10
of v2 (thanks, kernel test robot) and to correct the placement of the
refill stat increment in patch 8.

I only modified the placement of the refill stat, but decided to re-run the
benchmarks used in the v2 [1], and the results are:

Test system:
	- 2x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
	- 2 NUMA zones, with 18 cores per zone and 2 threads per core

bench_page_pool_simple results:
test name			stats enabled		stats disabled
				cycles	nanosec		cycles	nanosec

for_loop			0	0.335		0	0.334
atomic_inc 			13	6.028		13	6.035
lock				32	14.017		31	13.552

no-softirq-page_pool01		45	19.832		46	20.193
no-softirq-page_pool02		44	19.478		46	20.083
no-softirq-page_pool03		110	48.365		109	47.699

tasklet_page_pool01_fast_path	14	6.204		13	6.021
tasklet_page_pool02_ptr_ring	41	18.115		42	18.699
tasklet_page_pool03_slow	110	48.085		108	47.395

bench_page_pool_cross_cpu results:
test name			stats enabled		stats disabled
				cycles	nanosec		cycles	nanosec

page_pool_cross_cpu CPU(0)	2216	966.179		2101	915.692
page_pool_cross_cpu CPU(1)	2211	963.914		2159	941.087
page_pool_cross_cpu CPU(2)	1108	483.097		1079	470.573

page_pool_cross_cpu average	1845	-		1779	-

v2 -> v3:
	- patch 8/10 ("Add stat tracking cache refill") fixed placement of
	  counter increment.
	- patch 10/10 ("net-procfs: Show page pool stats in proc") updated:
		- fix unused label warning from kernel test robot,
		- fixed page_pool_seq_show to only display the refill stat
		  once,
		- added a remove_proc_entry for page_pool_stat to
		  dev_proc_net_exit.

v1 -> v2:
	- A new kernel config option has been added, which defaults to N,
	   preventing this code from being compiled in by default
	- The stats structure has been converted to a per-cpu structure
	- The stats are now exported via proc (/proc/net/page_pool_stat)

Thanks.

[1]:
https://lore.kernel.org/all/1643499540-8351-1-git-send-email-jdamato@fastly.com/T/#md82c6d5233e35bb518bc40c8fd7dff7a7a17e199

Joe Damato (10):
  page_pool: kconfig: Add flag for page pool stats
  page_pool: Add per-cpu page_pool_stats struct
  page_pool: Add a macro for incrementing stats
  page_pool: Add stat tracking fast path allocations
  page_pool: Add slow path order 0 allocation stat
  page_pool: Add slow path high order allocation stat
  page_pool: Add stat tracking empty ring
  page_pool: Add stat tracking cache refill
  page_pool: Add a stat tracking waived pages
  net-procfs: Show page pool stats in proc

 include/net/page_pool.h | 20 +++++++++++++++
 net/Kconfig             | 12 +++++++++
 net/core/net-procfs.c   | 67 +++++++++++++++++++++++++++++++++++++++++++++++++
 net/core/page_pool.c    | 28 ++++++++++++++++++---
 4 files changed, 124 insertions(+), 3 deletions(-)

-- 
2.7.4



* [net-next v3 01/10] page_pool: kconfig: Add flag for page pool stats
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 02/10] page_pool: Add per-cpu page_pool_stats struct Joe Damato
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Control enabling / disabling page_pool_stats with a kernel config option.
The option defaults to N.
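
As a rough illustration of how the option is consumed, the later patches in
this series wrap all of the stats code in preprocessor guards, along the
lines of:

	#ifdef CONFIG_PAGE_POOL_STATS
	/* stats structs, increment macro and /proc export are built in */
	#else
	/* the increment macro expands to nothing */
	#endif

so a kernel built with the option left at its default of N carries no extra
code or data for the stats.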

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 net/Kconfig | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/net/Kconfig b/net/Kconfig
index 8a1f9d0..604b3eb 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -434,6 +434,18 @@ config NET_DEVLINK
 config PAGE_POOL
 	bool
 
+config PAGE_POOL_STATS
+	default n
+	bool "Page pool stats"
+	depends on PAGE_POOL
+	help
+	  Enable page pool statistics to track allocations. Stats are exported
+	  to the file /proc/net/page_pool_stat. Users can examine these
+	  stats to better understand how their drivers and the kernel's
+	  page allocator, and the page pool interact with each other.
+
+	  If unsure, say N.
+
 config FAILOVER
 	tristate "Generic failover module"
 	help
-- 
2.7.4



* [net-next v3 02/10] page_pool: Add per-cpu page_pool_stats struct
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
  2022-02-02  1:12 ` [net-next v3 01/10] page_pool: kconfig: Add flag for page pool stats Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 03/10] page_pool: Add a macro for incrementing stats Joe Damato
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

An empty per-cpu page_pool_stats struct has been added as a placeholder.
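
For context, a per-cpu structure like this is typically read by walking the
online CPUs and summing each CPU's copy. A minimal sketch (the helper name
is hypothetical and not part of this series; the counter fields are added by
the later patches):

	/* hypothetical: fold the per-cpu counters into *total */
	static void page_pool_stats_sum(struct page_pool_stats *total)
	{
		int cpu;

		for_each_online_cpu(cpu) {
			struct page_pool_stats *s = per_cpu_ptr(&page_pool_stats, cpu);

			total->alloc.fast += s->alloc.fast;
			/* ... and the remaining alloc counters ... */
		}
	}

Patch 10 instead prints the per-cpu values directly, one row per online CPU.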

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 10 ++++++++++
 net/core/page_pool.c    |  5 +++++
 2 files changed, 15 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 79a8055..dae65f2 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -137,6 +137,16 @@ struct page_pool {
 	u64 destroy_cnt;
 };
 
+#ifdef CONFIG_PAGE_POOL_STATS
+/*
+ * stats for tracking page_pool events.
+ */
+struct page_pool_stats {
+};
+
+DECLARE_PER_CPU_ALIGNED(struct page_pool_stats, page_pool_stats);
+#endif
+
 struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
 
 static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index bd62c01..7e33590 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -26,6 +26,11 @@
 
 #define BIAS_MAX	LONG_MAX
 
+#ifdef CONFIG_PAGE_POOL_STATS
+DEFINE_PER_CPU_ALIGNED(struct page_pool_stats, page_pool_stats);
+EXPORT_PER_CPU_SYMBOL(page_pool_stats);
+#endif
+
 static int page_pool_init(struct page_pool *pool,
 			  const struct page_pool_params *params)
 {
-- 
2.7.4



* [net-next v3 03/10] page_pool: Add a macro for incrementing stats
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
  2022-02-02  1:12 ` [net-next v3 01/10] page_pool: kconfig: Add flag for page pool stats Joe Damato
  2022-02-02  1:12 ` [net-next v3 02/10] page_pool: Add per-cpu page_pool_stats struct Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 04/10] page_pool: Add stat tracking fast path allocations Joe Damato
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Add a simple wrapper macro for incrementing page pool stats. This wrapper is
intended to be used in softirq context.
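
For illustration, a call site then reduces to a single line in the
allocation paths (the 'fast' field is introduced by a later patch in this
series):

	/* runs in NAPI/softirq context, so the per-cpu access is safe
	 * without extra locking
	 */
	page_pool_stat_alloc_inc(fast);

When CONFIG_PAGE_POOL_STATS is disabled the macro expands to nothing, so the
call sites compile away.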

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 net/core/page_pool.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7e33590..b1a2599 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -29,6 +29,15 @@
 #ifdef CONFIG_PAGE_POOL_STATS
 DEFINE_PER_CPU_ALIGNED(struct page_pool_stats, page_pool_stats);
 EXPORT_PER_CPU_SYMBOL(page_pool_stats);
+
+#define page_pool_stat_alloc_inc(__stat)					\
+	do {									\
+		struct page_pool_stats *pps = this_cpu_ptr(&page_pool_stats);	\
+		pps->alloc.__stat++;						\
+	} while (0)
+
+#else
+#define page_pool_stat_alloc_inc(stat)
 #endif
 
 static int page_pool_init(struct page_pool *pool,
-- 
2.7.4



* [net-next v3 04/10] page_pool: Add stat tracking fast path allocations
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (2 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 03/10] page_pool: Add a macro for incrementing stats Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 05/10] page_pool: Add slow path order 0 allocation stat Joe Damato
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Add a counter to track successful fast-path allocations.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 3 +++
 net/core/page_pool.c    | 1 +
 2 files changed, 4 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index dae65f2..96949ad 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -142,6 +142,9 @@ struct page_pool {
  * stats for tracking page_pool events.
  */
 struct page_pool_stats {
+	struct {
+		u64 fast; /* fast path allocations */
+	} alloc;
 };
 
 DECLARE_PER_CPU_ALIGNED(struct page_pool_stats, page_pool_stats);
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index b1a2599..6f692d9 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -180,6 +180,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
 	if (likely(pool->alloc.count)) {
 		/* Fast-path */
 		page = pool->alloc.cache[--pool->alloc.count];
+		page_pool_stat_alloc_inc(fast);
 	} else {
 		page = page_pool_refill_alloc_cache(pool);
 	}
-- 
2.7.4



* [net-next v3 05/10] page_pool: Add slow path order 0 allocation stat
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (3 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 04/10] page_pool: Add stat tracking fast path allocations Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 06/10] page_pool: Add slow path high order " Joe Damato
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Track order 0 allocations in the slow path, which interact with the buddy
allocator.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 6 ++++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 96949ad..ab67e86 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -144,6 +144,7 @@ struct page_pool {
 struct page_pool_stats {
 	struct {
 		u64 fast; /* fast path allocations */
+		u64 slow; /* slow-path order-0 allocations */
 	} alloc;
 };
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 6f692d9..554a40e 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -308,10 +308,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	}
 
 	/* Return last page */
-	if (likely(pool->alloc.count > 0))
+	if (likely(pool->alloc.count > 0)) {
 		page = pool->alloc.cache[--pool->alloc.count];
-	else
+		page_pool_stat_alloc_inc(slow);
+	} else {
 		page = NULL;
+	}
 
 	/* When page just alloc'ed is should/must have refcnt 1. */
 	return page;
-- 
2.7.4



* [net-next v3 06/10] page_pool: Add slow path high order allocation stat
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (4 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 05/10] page_pool: Add slow path order 0 allocation stat Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 07/10] page_pool: Add stat tracking empty ring Joe Damato
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Track high order allocations in the slow path, which interact with the
buddy allocator.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 1 +
 2 files changed, 2 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index ab67e86..f59b8a9 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -145,6 +145,7 @@ struct page_pool_stats {
 	struct {
 		u64 fast; /* fast path allocations */
 		u64 slow; /* slow-path order-0 allocations */
+		u64 slow_high_order; /* slow-path high order allocations */
 	} alloc;
 };
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 554a40e..24306d6 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -254,6 +254,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	page_pool_stat_alloc_inc(slow_high_order);
 	page_pool_set_pp_info(pool, page);
 
 	/* Track how many pages are held 'in-flight' */
-- 
2.7.4



* [net-next v3 07/10] page_pool: Add stat tracking empty ring
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (5 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 06/10] page_pool: Add slow path high order " Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 08/10] page_pool: Add stat tracking cache refill Joe Damato
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Add a stat tracking how often the ptr ring is empty. When this occurs, the
cache cannot be refilled and a slow path allocation is forced.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 3 +++
 net/core/page_pool.c    | 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index f59b8a9..ed2bc73 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -146,6 +146,9 @@ struct page_pool_stats {
 		u64 fast; /* fast path allocations */
 		u64 slow; /* slow-path order-0 allocations */
 		u64 slow_high_order; /* slow-path high order allocations */
+		u64 empty; /* failed refills due to empty ptr ring, forcing
+			    * slow path allocation
+			    */
 	} alloc;
 };
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 24306d6..9d20b12 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -131,8 +131,10 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 	int pref_nid; /* preferred NUMA node */
 
 	/* Quicker fallback, avoid locks when ring is empty */
-	if (__ptr_ring_empty(r))
+	if (__ptr_ring_empty(r)) {
+		page_pool_stat_alloc_inc(empty);
 		return NULL;
+	}
 
 	/* Softirq guarantee CPU and thus NUMA node is stable. This,
 	 * assumes CPU refilling driver RX-ring will also run RX-NAPI.
-- 
2.7.4



* [net-next v3 08/10] page_pool: Add stat tracking cache refill
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (6 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 07/10] page_pool: Add stat tracking empty ring Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 09/10] page_pool: Add a stat tracking waived pages Joe Damato
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Add a stat tracking successful allocations which triggered a refill.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index ed2bc73..4991109 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -149,6 +149,7 @@ struct page_pool_stats {
 		u64 empty; /* failed refills due to empty ptr ring, forcing
 			    * slow path allocation
 			    */
+		u64 refill; /* allocations via successful refill */
 	} alloc;
 };
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9d20b12..00adab5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -167,8 +167,10 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
 
 	/* Return last page */
-	if (likely(pool->alloc.count > 0))
+	if (likely(pool->alloc.count > 0)) {
 		page = pool->alloc.cache[--pool->alloc.count];
+		page_pool_stat_alloc_inc(refill);
+	}
 
 	return page;
 }
-- 
2.7.4



* [net-next v3 09/10] page_pool: Add a stat tracking waived pages
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (7 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 08/10] page_pool: Add stat tracking cache refill Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02  1:12 ` [net-next v3 10/10] net-procfs: Show page pool stats in proc Joe Damato
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Track how often pages obtained from the ring cannot be added to the cache
because of a NUMA mismatch.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 1 +
 2 files changed, 2 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 4991109..e411ef6 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -150,6 +150,7 @@ struct page_pool_stats {
 			    * slow path allocation
 			    */
 		u64 refill; /* allocations via successful refill */
+		u64 waive;  /* failed refills due to numa zone mismatch */
 	} alloc;
 };
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 00adab5..41725e3 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -161,6 +161,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 			 * This limit stress on page buddy alloactor.
 			 */
 			page_pool_return_page(pool, page);
+			page_pool_stat_alloc_inc(waive);
 			page = NULL;
 			break;
 		}
-- 
2.7.4



* [net-next v3 10/10] net-procfs: Show page pool stats in proc
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (8 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 09/10] page_pool: Add a stat tracking waived pages Joe Damato
@ 2022-02-02  1:12 ` Joe Damato
  2022-02-02 14:29 ` [net-next v3 00/10] page_pool: Add page_pool stat counters Ilias Apalodimas
  2022-02-02 14:31 ` Jesper Dangaard Brouer
  11 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-02  1:12 UTC (permalink / raw)
  To: netdev, kuba, ilias.apalodimas, davem, hawk; +Cc: Joe Damato

Per-cpu page pool allocation stats are exported in the file
/proc/net/page_pool_stat, allowing users to better understand the
interaction between their drivers and kernel memory allocation.
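
Based on the seq_printf() format below, each row of the file contains the
CPU index followed by the fast, slow, slow_high_order, empty, refill and
waive counters, all printed as zero-padded hex. A hypothetical row (values
made up for illustration) would look like:

	00000002 0004c1f0 00000122 00000000 00000009 000000f1 00000002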

Signed-off-by: Joe Damato <jdamato@fastly.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 net/core/net-procfs.c | 67 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
index 88cc0ad..3bc6e53 100644
--- a/net/core/net-procfs.c
+++ b/net/core/net-procfs.c
@@ -4,6 +4,10 @@
 #include <linux/seq_file.h>
 #include <net/wext.h>
 
+#ifdef CONFIG_PAGE_POOL_STATS
+#include <net/page_pool.h>
+#endif
+
 #define BUCKET_SPACE (32 - NETDEV_HASHBITS - 1)
 
 #define get_bucket(x) ((x) >> BUCKET_SPACE)
@@ -310,6 +314,57 @@ static const struct seq_operations ptype_seq_ops = {
 	.show  = ptype_seq_show,
 };
 
+#ifdef CONFIG_PAGE_POOL_STATS
+static struct page_pool_stats *page_pool_stat_get_online(loff_t *pos)
+{
+	struct page_pool_stats *pp_stat = NULL;
+
+	while (*pos < nr_cpu_ids) {
+		if (cpu_online(*pos)) {
+			pp_stat = per_cpu_ptr(&page_pool_stats, *pos);
+			break;
+		}
+
+		++*pos;
+	}
+
+	return pp_stat;
+}
+
+static void *page_pool_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	return page_pool_stat_get_online(pos);
+}
+
+static void *page_pool_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	++*pos;
+	return page_pool_stat_get_online(pos);
+}
+
+static void page_pool_seq_stop(struct seq_file *seq, void *v)
+{
+}
+
+static int page_pool_seq_show(struct seq_file *seq, void *v)
+{
+	struct page_pool_stats *pp_stat = v;
+
+	seq_printf(seq, "%08llx %08llx %08llx %08llx %08llx %08llx %08llx\n",
+		   seq->index, pp_stat->alloc.fast,
+		   pp_stat->alloc.slow, pp_stat->alloc.slow_high_order,
+		   pp_stat->alloc.empty, pp_stat->alloc.refill, pp_stat->alloc.waive);
+	return 0;
+}
+
+static const struct seq_operations page_pool_seq_ops = {
+	.start = page_pool_seq_start,
+	.next = page_pool_seq_next,
+	.stop = page_pool_seq_stop,
+	.show = page_pool_seq_show,
+};
+#endif
+
 static int __net_init dev_proc_net_init(struct net *net)
 {
 	int rc = -ENOMEM;
@@ -326,6 +381,15 @@ static int __net_init dev_proc_net_init(struct net *net)
 
 	if (wext_proc_init(net))
 		goto out_ptype;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+	if (!proc_create_seq("page_pool_stat", 0444, net->proc_net,
+			     &page_pool_seq_ops)) {
+		wext_proc_exit(net);
+		goto out_ptype;
+	}
+#endif
+
 	rc = 0;
 out:
 	return rc;
@@ -342,6 +406,9 @@ static void __net_exit dev_proc_net_exit(struct net *net)
 {
 	wext_proc_exit(net);
 
+#ifdef CONFIG_PAGE_POOL_STATS
+	remove_proc_entry("page_pool_stat", net->proc_net);
+#endif
 	remove_proc_entry("ptype", net->proc_net);
 	remove_proc_entry("softnet_stat", net->proc_net);
 	remove_proc_entry("dev", net->proc_net);
-- 
2.7.4



* Re: [net-next v3 00/10] page_pool: Add page_pool stat counters
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (9 preceding siblings ...)
  2022-02-02  1:12 ` [net-next v3 10/10] net-procfs: Show page pool stats in proc Joe Damato
@ 2022-02-02 14:29 ` Ilias Apalodimas
  2022-02-02 14:31 ` Jesper Dangaard Brouer
  11 siblings, 0 replies; 16+ messages in thread
From: Ilias Apalodimas @ 2022-02-02 14:29 UTC (permalink / raw)
  To: Joe Damato; +Cc: netdev, kuba, davem, hawk, Saeed Mahameed

Hi Joe,

Again thanks for the patches!

On Wed, 2 Feb 2022 at 03:13, Joe Damato <jdamato@fastly.com> wrote:
>
> Greetings:
>
> Sending a v3 as I noted some issues with the procfs code in patch 10 I
> submit in v2 (thanks, kernel test robot) and fixing the placement of the
> refill stat increment in patch 8.
>
> I only modified the placement of the refill stat, but decided to re-run the
> benchmarks used in the v2 [1], and the results are:
>
> Test system:
>         - 2x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
>         - 2 NUMA zones, with 18 cores per zone and 2 threads per core
>
> bench_page_pool_simple results:
> test name                       stats enabled           stats disabled
>                                 cycles  nanosec         cycles  nanosec
>
> for_loop                        0       0.335           0       0.334
> atomic_inc                      13      6.028           13      6.035
> lock                            32      14.017          31      13.552
>
> no-softirq-page_pool01          45      19.832          46      20.193
> no-softirq-page_pool02          44      19.478          46      20.083
> no-softirq-page_pool03          110     48.365          109     47.699
>
> tasklet_page_pool01_fast_path   14      6.204           13      6.021
> tasklet_page_pool02_ptr_ring    41      18.115          42      18.699
> tasklet_page_pool03_slow        110     48.085          108     47.395
>
> bench_page_pool_cross_cpu results:
> test name                       stats enabled           stats disabled
>                                 cycles  nanosec         cycles  nanosec
>
> page_pool_cross_cpu CPU(0)      2216    966.179         2101    915.692
> page_pool_cross_cpu CPU(1)      2211    963.914         2159    941.087
> page_pool_cross_cpu CPU(2)      1108    483.097         1079    470.573
>
> page_pool_cross_cpu average     1845    -               1779    -
>
> v2 -> v3:
>         - patch 8/10 ("Add stat tracking cache refill") fixed placement of
>           counter increment.
>         - patch 10/10 ("net-procfs: Show page pool stats in proc") updated:
>                 - fix unused label warning from kernel test robot,
>                 - fixed page_pool_seq_show to only display the refill stat
>                   once,
>                 - added a remove_proc_entry for page_pool_stat to
>                   dev_proc_net_exit.
>
> v1 -> v2:
>         - A new kernel config option has been added, which defaults to N,
>            preventing this code from being compiled in by default
>         - The stats structure has been converted to a per-cpu structure
>         - The stats are now exported via proc (/proc/net/page_pool_stat)
>

CC'ing Saeed since he is interested in page pool stats for mlx5.
I'd be much happier if we had per-cpu, per-pool stats and a way to pick
them up via ethtool, instead of global page pool stats in /proc.
Does anyone have an opinion on this?

[...]

Thanks!
/Ilias


* Re: [net-next v3 00/10] page_pool: Add page_pool stat counters
  2022-02-02  1:12 [net-next v3 00/10] page_pool: Add page_pool stat counters Joe Damato
                   ` (10 preceding siblings ...)
  2022-02-02 14:29 ` [net-next v3 00/10] page_pool: Add page_pool stat counters Ilias Apalodimas
@ 2022-02-02 14:31 ` Jesper Dangaard Brouer
  2022-02-02 17:30   ` Joe Damato
  11 siblings, 1 reply; 16+ messages in thread
From: Jesper Dangaard Brouer @ 2022-02-02 14:31 UTC (permalink / raw)
  To: Joe Damato, netdev, kuba, ilias.apalodimas, davem, hawk,
	Tariq Toukan, Saeed Mahameed
  Cc: brouer


Adding Cc. Tariq and Saeed, as they wanted page_pool stats in the past.

On 02/02/2022 02.12, Joe Damato wrote:
> Greetings:
> 
> Sending a v3 as I noted some issues with the procfs code in patch 10 I
> submit in v2 (thanks, kernel test robot) and fixing the placement of the
> refill stat increment in patch 8.

Could you explain why a single global stats (/proc/net/page_pool_stat) 
for all page_pool instances for all RX-queues makes sense?

I think this argument/explanation belongs in the cover letter.

What are you using this for?

And do Tariq and Saeed agree with this single global stats approach?


> I only modified the placement of the refill stat, but decided to re-run the
> benchmarks used in the v2 [1], and the results are:

I appreciate that you are running the benchmarks.

> Test system:
> 	- 2x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
> 	- 2 NUMA zones, with 18 cores per zone and 2 threads per core
> 
> bench_page_pool_simple results:
> test name			stats enabled		stats disabled
> 				cycles	nanosec		cycles	nanosec
> 
> for_loop			0	0.335		0	0.334

I think you can drop the 'for_loop' results; we can see that the
overhead is insignificant.

> atomic_inc 			13	6.028		13	6.035
> lock				32	14.017		31	13.552
> 
> no-softirq-page_pool01		45	19.832		46	20.193
> no-softirq-page_pool02		44	19.478		46	20.083
> no-softirq-page_pool03		110	48.365		109	47.699
> 
> tasklet_page_pool01_fast_path	14	6.204		13	6.021
> tasklet_page_pool02_ptr_ring	41	18.115		42	18.699
> tasklet_page_pool03_slow	110	48.085		108	47.395
> 
> bench_page_pool_cross_cpu results:
> test name			stats enabled		stats disabled
> 				cycles	nanosec		cycles	nanosec
> 
> page_pool_cross_cpu CPU(0)	2216	966.179		2101	915.692
> page_pool_cross_cpu CPU(1)	2211	963.914		2159	941.087
> page_pool_cross_cpu CPU(2)	1108	483.097		1079	470.573
> 
> page_pool_cross_cpu average	1845	-		1779	-
> 
> v2 -> v3:
> 	- patch 8/10 ("Add stat tracking cache refill") fixed placement of
> 	  counter increment.
> 	- patch 10/10 ("net-procfs: Show page pool stats in proc") updated:
> 		- fix unused label warning from kernel test robot,
> 		- fixed page_pool_seq_show to only display the refill stat
> 		  once,
> 		- added a remove_proc_entry for page_pool_stat to
> 		  dev_proc_net_exit.
> 
> v1 -> v2:
> 	- A new kernel config option has been added, which defaults to N,
> 	   preventing this code from being compiled in by default
> 	- The stats structure has been converted to a per-cpu structure
> 	- The stats are now exported via proc (/proc/net/page_pool_stat)
> 
> Thanks.
> 
> [1]:
> https://lore.kernel.org/all/1643499540-8351-1-git-send-email-jdamato@fastly.com/T/#md82c6d5233e35bb518bc40c8fd7dff7a7a17e199
> 
> Joe Damato (10):
>    page_pool: kconfig: Add flag for page pool stats
>    page_pool: Add per-cpu page_pool_stats struct
>    page_pool: Add a macro for incrementing stats
>    page_pool: Add stat tracking fast path allocations
>    page_pool: Add slow path order 0 allocation stat
>    page_pool: Add slow path high order allocation stat
>    page_pool: Add stat tracking empty ring
>    page_pool: Add stat tracking cache refill
>    page_pool: Add a stat tracking waived pages
>    net-procfs: Show page pool stats in proc
> 
>   include/net/page_pool.h | 20 +++++++++++++++
>   net/Kconfig             | 12 +++++++++
>   net/core/net-procfs.c   | 67 +++++++++++++++++++++++++++++++++++++++++++++++++
>   net/core/page_pool.c    | 28 ++++++++++++++++++---
>   4 files changed, 124 insertions(+), 3 deletions(-)
> 



* Re: [net-next v3 00/10] page_pool: Add page_pool stat counters
  2022-02-02 14:31 ` Jesper Dangaard Brouer
@ 2022-02-02 17:30   ` Joe Damato
  2022-02-03 19:21     ` Tariq Toukan
  0 siblings, 1 reply; 16+ messages in thread
From: Joe Damato @ 2022-02-02 17:30 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: netdev, kuba, ilias.apalodimas, davem, hawk, Tariq Toukan,
	Saeed Mahameed, brouer

On Wed, Feb 2, 2022 at 6:31 AM Jesper Dangaard Brouer
<jbrouer@redhat.com> wrote:
>
>
> Adding Cc. Tariq and Saeed, as they wanted page_pool stats in the past.
>
> On 02/02/2022 02.12, Joe Damato wrote:
> > Greetings:
> >
> > Sending a v3 as I noted some issues with the procfs code in patch 10 I
> > submit in v2 (thanks, kernel test robot) and fixing the placement of the
> > refill stat increment in patch 8.
>
> Could you explain why a single global stats (/proc/net/page_pool_stat)
> for all page_pool instances for all RX-queues makes sense?
>
> I think this argument/explanation belongs in the cover letter.

I included an explanation in the v2 cover letter where those changes
occurred, but you are right: I should have also included it in the v3
cover letter.

My thought process was this:

- Stats now have to be enabled by an explicit kernel config option, so
the user has to know what they are doing
- Advanced users can move softirqs to CPUs as they wish and they could
isolate a particular set of RX-queues on a set of CPUs this way
- The result is that there is no need to expose anything to the
drivers and no modifications to drivers are necessary once the single
kernel config option is enabled and softirq affinity is configured

I had assumed that by not exposing new APIs / page pool internals and by
not requiring drivers to make any changes, I would have a better shot
of getting my patches accepted.

It sounds like both you and Ilias strongly prefer per-pool-per-cpu
stats, so I can make that change in the v4.

> What are you using this for?

I currently graph NIC driver stats from a number of different vendors
to help better understand the performance of those NICs under my
company's production workload.

For example, on i40e, I submitted changes to the upstream driver [1] and
am graphing those stats to better understand the memory reuse rate. We
have seen some issues around mm allocation contention in production
workloads with certain NICs and system architectures.

My findings with mlx5 have indicated that the proprietary page reuse
algorithm in the driver, with our workload, does not provide much
memory re-use, and causes pressure against the kernel's page
allocator.  The page pool should help remedy this, but without stats I
don't have a clear way to measure the effect.

So in short: I'd like to gather and graph stats about the page pool
API to determine how much impact the page pool API has on page reuse
for mlx5 in our workload.
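
(As a rough example of the kind of derived metric I have in mind, with the
counters in this series a recycling ratio could be approximated as

	(fast + refill) / (fast + refill + slow + slow_high_order)

since fast and refill count pages served from the pool's cache and ptr ring,
while the slow counters reflect pages pulled from the buddy allocator.)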

> And do Tariq and Saeeds agree with this single global stats approach?

I don't know; I hope they'll chime in.

As I mentioned above, I don't really mind which approach is preferred
by you all. I had assumed that something with fewer external APIs
would be more likely to be accepted, and so I made that change in v2.

> > I only modified the placement of the refill stat, but decided to re-run the
> > benchmarks used in the v2 [1], and the results are:
>
> I appreciate that you are running the benchmarks.

Sure, no worries. As you mentioned in the other thread, perhaps some
settings need to be adjusted to show more relevant data on faster
systems.

When I work on the v4, I will take a look at the benchmarks and
explain any modifications made to them or their options when
presenting the test results.

> > Test system:

[...]

Thanks,
Joe

[1]: https://patchwork.ozlabs.org/project/intel-wired-lan/cover/1639769719-81285-1-git-send-email-jdamato@fastly.com/


* Re: [net-next v3 00/10] page_pool: Add page_pool stat counters
  2022-02-02 17:30   ` Joe Damato
@ 2022-02-03 19:21     ` Tariq Toukan
  2022-02-03 19:31       ` Joe Damato
  0 siblings, 1 reply; 16+ messages in thread
From: Tariq Toukan @ 2022-02-03 19:21 UTC (permalink / raw)
  To: Joe Damato, Jesper Dangaard Brouer
  Cc: netdev, kuba, ilias.apalodimas, davem, hawk, Saeed Mahameed, brouer



On 2/2/2022 7:30 PM, Joe Damato wrote:
> On Wed, Feb 2, 2022 at 6:31 AM Jesper Dangaard Brouer
> <jbrouer@redhat.com> wrote:
>>
>>
>> Adding Cc. Tariq and Saeed, as they wanted page_pool stats in the past.
>>
>> On 02/02/2022 02.12, Joe Damato wrote:
>>> Greetings:
>>>
>>> Sending a v3 as I noted some issues with the procfs code in patch 10 I
>>> submit in v2 (thanks, kernel test robot) and fixing the placement of the
>>> refill stat increment in patch 8.
>>
>> Could you explain why a single global stats (/proc/net/page_pool_stat)
>> for all page_pool instances for all RX-queues makes sense?
>>
>> I think this argument/explanation belongs in the cover letter.
> 
> I included an explanation in the v2 cover letter where those changes
> occurred, but you are right: I should have also included it in the v3
> cover letter.
> 
> My thought process was this:
> 
> - Stats now have to be enabled by an explicit kernel config option, so
> the user has to know what they are doing
> - Advanced users can move softirqs to CPUs as they wish and they could
> isolate a particular set of RX-queues on a set of CPUs this way
> - The result is that there is no need to expose anything to the
> drivers and no modifications to drivers are necessary once the single
> kernel config option is enabled and softirq affinity is configured
> 
> I had assumed by not exposing new APIs / page pool internals and by
> not requiring drivers to make any changes, I would have a better shot
> of getting my patches accepted.
> 
> It sounds like both you and Ilias strongly prefer per-pool-per-cpu
> stats, so I can make that change in the v4.
> 
>> What are you using this for?
> 
> I currently graph NIC driver stats from a number of different vendors
> to help better understand the performance of those NICs under my
> company's production workload.
> 
> For example, on i40e, I submit changes to the upstream driver [1] and
> am graphing those stats to better understand memory reuse rate. We
> have seen some issues around mm allocation contention in production
> workloads with certain NICs and system architectures.
> 
> My findings with mlx5 have indicated that the proprietary page reuse
> algorithm in the driver, with our workload, does not provide much
> memory re-use, and causes pressure against the kernel's page
> allocator.  The page pool should help remedy this, but without stats I
> don't have a clear way to measure the effect.
> 
> So in short: I'd like to gather and graph stats about the page pool
> API to determine how much impact the page pool API has on page reuse
> for mlx5 in our workload.
> 
Hi Joe, Jesper, Ilias, and all,

We plan to totally remove the in-driver page-cache and fully rely on 
page-pool for the allocations and dma mapping. This has not happened until
now because the page pool did not support elevated page refcount (multiple
frags per-page) and stats.

I'm happy to see that these are getting attention! Thanks for investing 
time and effort to push these tasks forward!

>> And do Tariq and Saeeds agree with this single global stats approach?
> 
> I don't know; I hope they'll chime in.
> 

I agree with Jesper and Ilias. Global per-cpu pool stats are very
limited. There is not much we can do with the superposition of several
page-pools. IMO, these stats can be of real value only when each cpu has
a single pool. Otherwise, the summed stats of two or more pools won't
help much with observability or debugging.

Tariq

> As I mentioned above, I don't really mind which approach is preferred
> by you all. I had assumed that something with fewer external APIs
> would be more likely to be accepted, and so I made that change in v2.
> 
>>> I only modified the placement of the refill stat, but decided to re-run the
>>> benchmarks used in the v2 [1], and the results are:
>>
>> I appreciate that you are running the benchmarks.
> 
> Sure, no worries. As you mentioned in the other thread, perhaps some
> settings need to be adjusted to show more relevant data on faster
> systems.
> 
> When I work on the v4, I will take a look at the benchmarks and
> explain any modifications made to them or their options when
> presenting the test results.
> 
>>> Test system:
> 
> [...]
> 
> Thanks,
> Joe
> 
> [1]: https://patchwork.ozlabs.org/project/intel-wired-lan/cover/1639769719-81285-1-git-send-email-jdamato@fastly.com/


* Re: [net-next v3 00/10] page_pool: Add page_pool stat counters
  2022-02-03 19:21     ` Tariq Toukan
@ 2022-02-03 19:31       ` Joe Damato
  0 siblings, 0 replies; 16+ messages in thread
From: Joe Damato @ 2022-02-03 19:31 UTC (permalink / raw)
  To: Tariq Toukan
  Cc: Jesper Dangaard Brouer, netdev, kuba, ilias.apalodimas, davem,
	hawk, Saeed Mahameed, brouer

On Thu, Feb 3, 2022 at 11:21 AM Tariq Toukan <ttoukan.linux@gmail.com> wrote:
>
>
>
> On 2/2/2022 7:30 PM, Joe Damato wrote:
> > On Wed, Feb 2, 2022 at 6:31 AM Jesper Dangaard Brouer
> > <jbrouer@redhat.com> wrote:
> >>
> >>
> >> Adding Cc. Tariq and Saeed, as they wanted page_pool stats in the past.
> >>
> >> On 02/02/2022 02.12, Joe Damato wrote:
> >>> Greetings:
> >>>
> >>> Sending a v3 as I noted some issues with the procfs code in patch 10 I
> >>> submit in v2 (thanks, kernel test robot) and fixing the placement of the
> >>> refill stat increment in patch 8.
> >>
> >> Could you explain why a single global stats (/proc/net/page_pool_stat)
> >> for all page_pool instances for all RX-queues makes sense?
> >>
> >> I think this argument/explanation belongs in the cover letter.
> >
> > I included an explanation in the v2 cover letter where those changes
> > occurred, but you are right: I should have also included it in the v3
> > cover letter.
> >
> > My thought process was this:
> >
> > - Stats now have to be enabled by an explicit kernel config option, so
> > the user has to know what they are doing
> > - Advanced users can move softirqs to CPUs as they wish and they could
> > isolate a particular set of RX-queues on a set of CPUs this way
> > - The result is that there is no need to expose anything to the
> > drivers and no modifications to drivers are necessary once the single
> > kernel config option is enabled and softirq affinity is configured
> >
> > I had assumed by not exposing new APIs / page pool internals and by
> > not requiring drivers to make any changes, I would have a better shot
> > of getting my patches accepted.
> >
> > It sounds like both you and Ilias strongly prefer per-pool-per-cpu
> > stats, so I can make that change in the v4.
> >
> >> What are you using this for?
> >
> > I currently graph NIC driver stats from a number of different vendors
> > to help better understand the performance of those NICs under my
> > company's production workload.
> >
> > For example, on i40e, I submit changes to the upstream driver [1] and
> > am graphing those stats to better understand memory reuse rate. We
> > have seen some issues around mm allocation contention in production
> > workloads with certain NICs and system architectures.
> >
> > My findings with mlx5 have indicated that the proprietary page reuse
> > algorithm in the driver, with our workload, does not provide much
> > memory re-use, and causes pressure against the kernel's page
> > allocator.  The page pool should help remedy this, but without stats I
> > don't have a clear way to measure the effect.
> >
> > So in short: I'd like to gather and graph stats about the page pool
> > API to determine how much impact the page pool API has on page reuse
> > for mlx5 in our workload.
> >
> Hi Joe, Jesper, Ilias, and all,
>
> We plan to totally remove the in-driver page-cache and fully rely on
> page-pool for the allocations and dma mapping. This did not happen until
> now as the page pool did not support elevated page refcount (multiple
> frags per-page) and stats.
>
> I'm happy to see that these are getting attention! Thanks for investing
> time and effort to push these tasks forward!
>
> >> And do Tariq and Saeeds agree with this single global stats approach?
> >
> > I don't know; I hope they'll chime in.
> >
>
> I agree with Jesper and Ilias. Global per-cpu pool stats are very
> limited. There is not much we can do with the super-position of several
> page-pools. IMO, these stats can be of real value only when each cpu has
> a single pool. Otherwise, the summed stats of two or more pools won't
> help much in observability, or debug.

OK thanks Tariq -- that makes sense to me.

I can propose a v4 that converts the stats to per-pool-per-cpu and
re-run the benchmarks, with the modification Jesper suggested to make
them run a bit longer.

I'm still thinking through what the best API design is for accessing
stats from the drivers, but I'll propose something and see what you
all think in the v4.
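
(As a very rough sketch of one possible direction, and not a settled design,
the counters could move into struct page_pool itself as a per-cpu member,
something like:

	struct page_pool {
		...
		/* hypothetical: per-pool, per-cpu stats */
		struct page_pool_stats __percpu *stats;
		...
	};

allocated with alloc_percpu() when the pool is created and summed over CPUs
when a driver reports them, for example via ethtool as Ilias suggested.)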

Thanks,
Joe


