* [net-next v1 PATCH 0/4] page_pool: followup changes to restore tracepoint features
@ 2019-11-15 15:05 Jesper Dangaard Brouer
  2019-11-15 15:05 ` [net-next v1 PATCH 1/4] xdp: remove memory poison on free for struct xdp_mem_allocator Jesper Dangaard Brouer
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-15 15:05 UTC (permalink / raw)
  Cc: Toke Høiland-Jørgensen, netdev, Ilias Apalodimas,
	Jesper Dangaard Brouer, Saeed Mahameed, Matteo Croce,
	Jonathan Lemon, Lorenzo Bianconi, Tariq Toukan

This patchset is a followup to Jonathan's patch, which does not release
the pool until inflight == 0. That change made page_pool responsible for
its own delayed destruction instead of relying on the xdp memory model.

As the page_pool maintainer, I'm promoting the use of tracepoints to
troubleshoot and to help driver developers verify correctness when
converting a driver to use page_pool. The role of xdp:mem_disconnect
has changed, which broke my bpftrace tools for shutdown verification.
With these changes, the same capabilities are regained.

---

Jesper Dangaard Brouer (4):
      xdp: remove memory poison on free for struct xdp_mem_allocator
      page_pool: add destroy attempts counter and rename tracepoint
      page_pool: block alloc cache during shutdown
      page_pool: extend tracepoint to also include the page PFN


 include/net/page_pool.h          |    2 ++
 include/trace/events/page_pool.h |   22 +++++++++++++++-------
 net/core/page_pool.c             |   17 +++++++++++++++--
 net/core/xdp.c                   |    5 -----
 4 files changed, 32 insertions(+), 14 deletions(-)

--


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [net-next v1 PATCH 1/4] xdp: remove memory poison on free for struct xdp_mem_allocator
  2019-11-15 15:05 [net-next v1 PATCH 0/4] page_pool: followup changes to restore tracepoint features Jesper Dangaard Brouer
@ 2019-11-15 15:05 ` Jesper Dangaard Brouer
  2019-11-15 15:06 ` [net-next v1 PATCH 2/4] page_pool: add destroy attempts counter and rename tracepoint Jesper Dangaard Brouer
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-15 15:05 UTC (permalink / raw)
  Cc: Toke Høiland-Jørgensen, netdev, Ilias Apalodimas,
	Jesper Dangaard Brouer, Saeed Mahameed, Matteo Croce,
	Jonathan Lemon, Lorenzo Bianconi, Tariq Toukan

When looking at the details I realised that the memory poison in
__xdp_mem_allocator_rcu_free doesn't make sense. This is because the
SLUB allocator uses the first 16 bytes (on 64-bit) for its freelist,
which overlap with the members of struct xdp_mem_allocator that were
being updated.  Thus, SLUB already does the "poisoning" for us.

I still believe that poisoning memory makes sense in other cases.
The kernel has gained various use-after-free detection mechanisms, but
enabling those comes with a huge overhead. Experience shows that
debugging facilities can change the timing so much that a race
condition will not be provoked when they are enabled. Thus, I'm still
in favour of poisoning memory where it makes sense.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/xdp.c |    5 -----
 1 file changed, 5 deletions(-)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 8e405abaf05a..e334fad0a6b8 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -73,11 +73,6 @@ static void __xdp_mem_allocator_rcu_free(struct rcu_head *rcu)
 	/* Allow this ID to be reused */
 	ida_simple_remove(&mem_id_pool, xa->mem.id);
 
-	/* Poison memory */
-	xa->mem.id = 0xFFFF;
-	xa->mem.type = 0xF0F0;
-	xa->allocator = (void *)0xDEAD9001;
-
 	kfree(xa);
 }
 



* [net-next v1 PATCH 2/4] page_pool: add destroy attempts counter and rename tracepoint
  2019-11-15 15:05 [net-next v1 PATCH 0/4] page_pool: followup changes to restore tracepoint features Jesper Dangaard Brouer
  2019-11-15 15:05 ` [net-next v1 PATCH 1/4] xdp: remove memory poison on free for struct xdp_mem_allocator Jesper Dangaard Brouer
@ 2019-11-15 15:06 ` Jesper Dangaard Brouer
  2019-11-15 16:33   ` Toke Høiland-Jørgensen
  2019-11-15 15:06 ` [net-next v1 PATCH 3/4] page_pool: block alloc cache during shutdown Jesper Dangaard Brouer
  2019-11-15 15:06 ` [net-next v1 PATCH 4/4] page_pool: extend tracepoint to also include the page PFN Jesper Dangaard Brouer
  3 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-15 15:06 UTC (permalink / raw)
  Cc: Toke Høiland-Jørgensen, netdev, Ilias Apalodimas,
	Jesper Dangaard Brouer, Saeed Mahameed, Matteo Croce,
	Jonathan Lemon, Lorenzo Bianconi, Tariq Toukan

When Jonathan changed the page_pool to become responsible for its
own shutdown via a deferred work queue, the disconnect_cnt
counter was removed from the xdp memory model tracepoint.

This patch changes the page_pool_inflight tracepoint name to
page_pool_release, because it reflects the new responsibility
better.  It also reintroduces a counter that reflects the number of
times page_pool_release has been tried.

The counter is also used by the code to only empty the alloc
cache once.  With a stuck work queue running every second and the
counter being 64-bit, it will overrun in approx 584 billion
years. For comparison, the Earth's life expectancy is 7.5 billion
years, before the Sun will engulf, and destroy, the Earth.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 include/net/page_pool.h          |    2 ++
 include/trace/events/page_pool.h |    9 ++++++---
 net/core/page_pool.c             |   13 +++++++++++--
 3 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 1121faa99c12..ace881c15dcb 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -112,6 +112,8 @@ struct page_pool {
 	 * refcnt serves purpose is to simplify drivers error handling.
 	 */
 	refcount_t user_cnt;
+
+	u64 destroy_cnt;
 };
 
 struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index 47b5ee880aa9..ee7f1aca7839 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -10,7 +10,7 @@
 
 #include <net/page_pool.h>
 
-TRACE_EVENT(page_pool_inflight,
+TRACE_EVENT(page_pool_release,
 
 	TP_PROTO(const struct page_pool *pool,
 		 s32 inflight, u32 hold, u32 release),
@@ -22,6 +22,7 @@ TRACE_EVENT(page_pool_inflight,
 		__field(s32,	inflight)
 		__field(u32,	hold)
 		__field(u32,	release)
+		__field(u64,	cnt)
 	),
 
 	TP_fast_assign(
@@ -29,10 +30,12 @@ TRACE_EVENT(page_pool_inflight,
 		__entry->inflight	= inflight;
 		__entry->hold		= hold;
 		__entry->release	= release;
+		__entry->cnt		= pool->destroy_cnt;
 	),
 
-	TP_printk("page_pool=%p inflight=%d hold=%u release=%u",
-	  __entry->pool, __entry->inflight, __entry->hold, __entry->release)
+	TP_printk("page_pool=%p inflight=%d hold=%u release=%u cnt=%llu",
+		__entry->pool, __entry->inflight, __entry->hold,
+		__entry->release, __entry->cnt)
 );
 
 TRACE_EVENT(page_pool_state_release,
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index dfc2501c35d9..e28db2ef8e12 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -200,7 +200,7 @@ static s32 page_pool_inflight(struct page_pool *pool)
 
 	inflight = _distance(hold_cnt, release_cnt);
 
-	trace_page_pool_inflight(pool, inflight, hold_cnt, release_cnt);
+	trace_page_pool_release(pool, inflight, hold_cnt, release_cnt);
 	WARN(inflight < 0, "Negative(%d) inflight packet-pages", inflight);
 
 	return inflight;
@@ -349,10 +349,13 @@ static void page_pool_free(struct page_pool *pool)
 	kfree(pool);
 }
 
-static void page_pool_scrub(struct page_pool *pool)
+static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 {
 	struct page *page;
 
+	if (pool->destroy_cnt)
+		return;
+
 	/* Empty alloc cache, assume caller made sure this is
 	 * no-longer in use, and page_pool_alloc_pages() cannot be
 	 * call concurrently.
@@ -361,6 +364,12 @@ static void page_pool_scrub(struct page_pool *pool)
 		page = pool->alloc.cache[--pool->alloc.count];
 		__page_pool_return_page(pool, page);
 	}
+}
+
+static void page_pool_scrub(struct page_pool *pool)
+{
+	page_pool_empty_alloc_cache_once(pool);
+	pool->destroy_cnt++;
 
 	/* No more consumers should exist, but producers could still
 	 * be in-flight.



* [net-next v1 PATCH 3/4] page_pool: block alloc cache during shutdown
  2019-11-15 15:05 [net-next v1 PATCH 0/4] page_pool: followup changes to restore tracepoint features Jesper Dangaard Brouer
  2019-11-15 15:05 ` [net-next v1 PATCH 1/4] xdp: remove memory poison on free for struct xdp_mem_allocator Jesper Dangaard Brouer
  2019-11-15 15:06 ` [net-next v1 PATCH 2/4] page_pool: add destroy attempts counter and rename tracepoint Jesper Dangaard Brouer
@ 2019-11-15 15:06 ` Jesper Dangaard Brouer
  2019-11-15 18:38   ` Jonathan Lemon
  2019-11-15 15:06 ` [net-next v1 PATCH 4/4] page_pool: extend tracepoint to also include the page PFN Jesper Dangaard Brouer
  3 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-15 15:06 UTC (permalink / raw)
  Cc: Toke Høiland-Jørgensen, netdev, Ilias Apalodimas,
	Jesper Dangaard Brouer, Saeed Mahameed, Matteo Croce,
	Jonathan Lemon, Lorenzo Bianconi, Tariq Toukan

It is documented that driver API users have to disconnect
the page_pool before starting the shutdown phase, but it is only
documentation; there is no code catching such violations.

Given that (in page_pool_empty_alloc_cache_once) the alloc cache is
only flushed once, there is now an opportunity to catch this case.

This patch blocks the RX/alloc-side cache by pretending it is
full and poisoning the last element.  This change enforces that
drivers cannot use the alloc cache during the shutdown phase.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/page_pool.c |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e28db2ef8e12..b31f3bb7818d 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -364,6 +364,10 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 		page = pool->alloc.cache[--pool->alloc.count];
 		__page_pool_return_page(pool, page);
 	}
+
+	/* Block alloc cache, pretend it's full and poison last element */
+	pool->alloc.cache[PP_ALLOC_CACHE_SIZE - 1] = NULL;
+	pool->alloc.count = PP_ALLOC_CACHE_SIZE;
 }
 
 static void page_pool_scrub(struct page_pool *pool)



* [net-next v1 PATCH 4/4] page_pool: extend tracepoint to also include the page PFN
  2019-11-15 15:05 [net-next v1 PATCH 0/4] page_pool: followup changes to restore tracepoint features Jesper Dangaard Brouer
                   ` (2 preceding siblings ...)
  2019-11-15 15:06 ` [net-next v1 PATCH 3/4] page_pool: block alloc cache during shutdown Jesper Dangaard Brouer
@ 2019-11-15 15:06 ` Jesper Dangaard Brouer
  3 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-15 15:06 UTC (permalink / raw)
  Cc: Toke Høiland-Jørgensen, netdev, Ilias Apalodimas,
	Jesper Dangaard Brouer, Saeed Mahameed, Matteo Croce,
	Jonathan Lemon, Lorenzo Bianconi, Tariq Toukan

The MM tracepoint for page free (called kmem:mm_page_free) doesn't provide
the page pointer directly, instead it provides the PFN (Page Frame Number).
This is annoying when writing a page_pool leak detector in BPF.

This patch changes the page_pool tracepoints to also provide the PFN.
The page pointer is still provided to allow other kinds of
troubleshooting from BPF.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 include/trace/events/page_pool.h |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index ee7f1aca7839..2f2a10e8eb56 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -8,6 +8,7 @@
 #include <linux/types.h>
 #include <linux/tracepoint.h>
 
+#include <trace/events/mmflags.h>
 #include <net/page_pool.h>
 
 TRACE_EVENT(page_pool_release,
@@ -49,16 +50,18 @@ TRACE_EVENT(page_pool_state_release,
 		__field(const struct page_pool *,	pool)
 		__field(const struct page *,		page)
 		__field(u32,				release)
+		__field(unsigned long,			pfn)
 	),
 
 	TP_fast_assign(
 		__entry->pool		= pool;
 		__entry->page		= page;
 		__entry->release	= release;
+		__entry->pfn		= page_to_pfn(page);
 	),
 
-	TP_printk("page_pool=%p page=%p release=%u",
-		  __entry->pool, __entry->page, __entry->release)
+	TP_printk("page_pool=%p page=%p pfn=%lu release=%u",
+		  __entry->pool, __entry->page, __entry->pfn, __entry->release)
 );
 
 TRACE_EVENT(page_pool_state_hold,
@@ -72,16 +75,18 @@ TRACE_EVENT(page_pool_state_hold,
 		__field(const struct page_pool *,	pool)
 		__field(const struct page *,		page)
 		__field(u32,				hold)
+		__field(unsigned long,			pfn)
 	),
 
 	TP_fast_assign(
 		__entry->pool	= pool;
 		__entry->page	= page;
 		__entry->hold	= hold;
+		__entry->pfn	= page_to_pfn(page);
 	),
 
-	TP_printk("page_pool=%p page=%p hold=%u",
-		  __entry->pool, __entry->page, __entry->hold)
+	TP_printk("page_pool=%p page=%p pfn=%lu hold=%u",
+		  __entry->pool, __entry->page, __entry->pfn, __entry->hold)
 );
 
 #endif /* _TRACE_PAGE_POOL_H */



* Re: [net-next v1 PATCH 2/4] page_pool: add destroy attempts counter and rename tracepoint
  2019-11-15 15:06 ` [net-next v1 PATCH 2/4] page_pool: add destroy attempts counter and rename tracepoint Jesper Dangaard Brouer
@ 2019-11-15 16:33   ` Toke Høiland-Jørgensen
  2019-11-16 10:56     ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-11-15 16:33 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: netdev, Ilias Apalodimas, Jesper Dangaard Brouer, Saeed Mahameed,
	Matteo Croce, Jonathan Lemon, Lorenzo Bianconi, Tariq Toukan

Jesper Dangaard Brouer <brouer@redhat.com> writes:

> When Jonathan changed the page_pool to become responsible for its
> own shutdown via a deferred work queue, the disconnect_cnt
> counter was removed from the xdp memory model tracepoint.
>
> This patch changes the page_pool_inflight tracepoint name to
> page_pool_release, because it reflects the new responsibility
> better.  It also reintroduces a counter that reflects the number of
> times page_pool_release has been tried.
>
> The counter is also used by the code to only empty the alloc
> cache once.  With a stuck work queue running every second and the
> counter being 64-bit, it will overrun in approx 584 billion
> years. For comparison, the Earth's life expectancy is 7.5 billion
> years, before the Sun will engulf, and destroy, the Earth.

I love how you just casually threw that last bit in there; and now I'm
thinking about under which conditions that would not be enough. Maybe
someone will put this code on a space probe bound for interstellar
space, which will escape the death of our solar system only to be
destined to increment this counter forever in the cold, dead void of
space?

I think that is a risk we can live with, so:

Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>



* Re: [net-next v1 PATCH 3/4] page_pool: block alloc cache during shutdown
  2019-11-15 15:06 ` [net-next v1 PATCH 3/4] page_pool: block alloc cache during shutdown Jesper Dangaard Brouer
@ 2019-11-15 18:38   ` Jonathan Lemon
  2019-11-16 10:47     ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Jonathan Lemon @ 2019-11-15 18:38 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Toke Høiland-Jørgensen, netdev, Ilias Apalodimas,
	Saeed Mahameed, Matteo Croce, Lorenzo Bianconi, Tariq Toukan



On 15 Nov 2019, at 7:06, Jesper Dangaard Brouer wrote:

> It is documented that driver API users have to disconnect
> the page_pool before starting the shutdown phase, but it is only
> documentation; there is no code catching such violations.
>
> Given that (in page_pool_empty_alloc_cache_once) the alloc cache is
> only flushed once, there is now an opportunity to catch this case.
>
> This patch blocks the RX/alloc-side cache by pretending it is
> full and poisoning the last element.  This change enforces that
> drivers cannot use the alloc cache during the shutdown phase.
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>  net/core/page_pool.c |    4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index e28db2ef8e12..b31f3bb7818d 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -364,6 +364,10 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
>  		page = pool->alloc.cache[--pool->alloc.count];
>  		__page_pool_return_page(pool, page);
>  	}
> +
> +	/* Block alloc cache, pretend it's full and poison last element */
> +	pool->alloc.cache[PP_ALLOC_CACHE_SIZE - 1] = NULL;
> +	pool->alloc.count = PP_ALLOC_CACHE_SIZE;
>  }
>
>  static void page_pool_scrub(struct page_pool *pool)

I'm really unclear on how this is going to be helpful at all.

Suppose that page_pool_destroy() is called, and this is the last user
of the pool, so page_pool_release() is entered, but finds there are
outstanding pages, so the actual free is delayed.

Case 1:
Now, if the driver screws up and tries to re-use the pool and allocate
another packet, it enters __page_pool_get_cached(), which will decrement
alloc.count and return NULL.  This causes a fallback to
__page_pool_alloc_pages_slow(), which bumps up the pool inflight count.

A repeat call of the above returns garbage, since alloc.count now
indexes stale values.  A call to __page_pool_put_page() may put a
value back in the cache, since alloc.count was decremented.
Now, the retry may sit there forever, since the cache is never flushed.

Case 2:
An inflight packet somehow arrives at __page_pool_put_page() and
enters __page_pool_recycle_direct().  With the above patch, the cache
pretends it is full (assuming nothing was allocated), so the page falls
back to the ring.

Without the patch, the packet is placed in the cache, and then flushed
out to the ring later, resulting in the same behavior.  Any mistaken
allocations are handled gracefully.
-- 
Jonathan







* Re: [net-next v1 PATCH 3/4] page_pool: block alloc cache during shutdown
  2019-11-15 18:38   ` Jonathan Lemon
@ 2019-11-16 10:47     ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-16 10:47 UTC (permalink / raw)
  To: Jonathan Lemon
  Cc: Toke Høiland-Jørgensen, netdev, Ilias Apalodimas,
	Saeed Mahameed, Matteo Croce, Lorenzo Bianconi, Tariq Toukan,
	brouer

On Fri, 15 Nov 2019 10:38:21 -0800
"Jonathan Lemon" <jonathan.lemon@gmail.com> wrote:

> Case 1:
> Now, if the driver screws up and tries to re-use the pool and allocate
> another packet, it enters __page_pool_get_cached(), which will decrement
> alloc.count and return NULL.  This causes a fallback to
> __page_pool_alloc_pages_slow(), which bumps up the pool inflight count.

I can see that I made a mistake and cannot use NULL as the poison
value. Let me drop this patch, so the others (and yours) can go in
before the merge window.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



* Re: [net-next v1 PATCH 2/4] page_pool: add destroy attempts counter and rename tracepoint
  2019-11-15 16:33   ` Toke Høiland-Jørgensen
@ 2019-11-16 10:56     ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-16 10:56 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen
  Cc: netdev, Ilias Apalodimas, Saeed Mahameed, Matteo Croce,
	Jonathan Lemon, Lorenzo Bianconi, Tariq Toukan, brouer

On Fri, 15 Nov 2019 17:33:18 +0100
Toke Høiland-Jørgensen <toke@redhat.com> wrote:

> Jesper Dangaard Brouer <brouer@redhat.com> writes:
> 
> > When Jonathan changed the page_pool to become responsible for its
> > own shutdown via a deferred work queue, the disconnect_cnt
> > counter was removed from the xdp memory model tracepoint.
> >
> > This patch changes the page_pool_inflight tracepoint name to
> > page_pool_release, because it reflects the new responsibility
> > better.  It also reintroduces a counter that reflects the number of
> > times page_pool_release has been tried.
> >
> > The counter is also used by the code to only empty the alloc
> > cache once.  With a stuck work queue running every second and the
> > counter being 64-bit, it will overrun in approx 584 billion
> > years. For comparison, the Earth's life expectancy is 7.5 billion
> > years, before the Sun will engulf, and destroy, the Earth.
> 
> I love how you just casually threw that last bit in there; and now I'm
> thinking about under which conditions that would not be enough. Maybe
> someone will put this code on a space probe bound for interstellar
> space, which will escape the death of our solar system only to be
> destined to increment this counter forever in the cold, dead void of
> space?

Like with performance numbers, whenever presenting a number, I always
strive to relate it to something else, as without that the number is
just a number.


> I think that is a risk we can live with, so:
> 
> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>

Thx

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



end of thread, other threads:[~2019-11-16 10:57 UTC | newest]

Thread overview: 9+ messages
2019-11-15 15:05 [net-next v1 PATCH 0/4] page_pool: followup changes to restore tracepoint features Jesper Dangaard Brouer
2019-11-15 15:05 ` [net-next v1 PATCH 1/4] xdp: remove memory poison on free for struct xdp_mem_allocator Jesper Dangaard Brouer
2019-11-15 15:06 ` [net-next v1 PATCH 2/4] page_pool: add destroy attempts counter and rename tracepoint Jesper Dangaard Brouer
2019-11-15 16:33   ` Toke Høiland-Jørgensen
2019-11-16 10:56     ` Jesper Dangaard Brouer
2019-11-15 15:06 ` [net-next v1 PATCH 3/4] page_pool: block alloc cache during shutdown Jesper Dangaard Brouer
2019-11-15 18:38   ` Jonathan Lemon
2019-11-16 10:47     ` Jesper Dangaard Brouer
2019-11-15 15:06 ` [net-next v1 PATCH 4/4] page_pool: extend tracepoint to also include the page PFN Jesper Dangaard Brouer
