* [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
@ 2023-05-22  3:17 Yunsheng Lin
  2023-05-22 11:08 ` Jesper Dangaard Brouer
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Yunsheng Lin @ 2023-05-22  3:17 UTC (permalink / raw)
  To: davem, kuba, pabeni
  Cc: netdev, linux-kernel, Yunsheng Lin, Jesper Dangaard Brouer,
	Ilias Apalodimas, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend

page_pool_ring_[un]lock() use in_softirq() to decide which
spin lock variant to use. When they are called in a context
where in_softirq() is false, spin_lock_bh() is taken in
page_pool_ring_lock() while plain spin_unlock() is taken in
page_pool_ring_unlock(): spin_lock_bh() has disabled softirq
in page_pool_ring_lock(), so in_softirq() reports true by
unlock time, mismatching the spin lock/unlock pair and
leaving softirq disabled afterwards.

Fix it by returning the in_softirq state from
page_pool_producer_lock() and using it to decide which spin
unlock variant to use in page_pool_producer_unlock().

As pool->ring has both producer and consumer locks, rename
the helpers to page_pool_producer_[un]lock() to reflect the
actual usage. Also move them to page_pool.c, as they are
only used there, and drop the 'inline' keyword, as the
compiler has a better idea of whether to inline.

Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/net/page_pool.h | 18 ------------------
 net/core/page_pool.c    | 28 ++++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index c8ec2f34722b..126f9e294389 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -399,22 +399,4 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
 		page_pool_update_nid(pool, new_nid);
 }
 
-static inline void page_pool_ring_lock(struct page_pool *pool)
-	__acquires(&pool->ring.producer_lock)
-{
-	if (in_softirq())
-		spin_lock(&pool->ring.producer_lock);
-	else
-		spin_lock_bh(&pool->ring.producer_lock);
-}
-
-static inline void page_pool_ring_unlock(struct page_pool *pool)
-	__releases(&pool->ring.producer_lock)
-{
-	if (in_softirq())
-		spin_unlock(&pool->ring.producer_lock);
-	else
-		spin_unlock_bh(&pool->ring.producer_lock);
-}
-
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e212e9d7edcb..a3e12a61d456 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -134,6 +134,29 @@ EXPORT_SYMBOL(page_pool_ethtool_stats_get);
 #define recycle_stat_add(pool, __stat, val)
 #endif
 
+static bool page_pool_producer_lock(struct page_pool *pool)
+	__acquires(&pool->ring.producer_lock)
+{
+	bool in_softirq = in_softirq();
+
+	if (in_softirq)
+		spin_lock(&pool->ring.producer_lock);
+	else
+		spin_lock_bh(&pool->ring.producer_lock);
+
+	return in_softirq;
+}
+
+static void page_pool_producer_unlock(struct page_pool *pool,
+				      bool in_softirq)
+	__releases(&pool->ring.producer_lock)
+{
+	if (in_softirq)
+		spin_unlock(&pool->ring.producer_lock);
+	else
+		spin_unlock_bh(&pool->ring.producer_lock);
+}
+
 static int page_pool_init(struct page_pool *pool,
 			  const struct page_pool_params *params)
 {
@@ -617,6 +640,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 			     int count)
 {
 	int i, bulk_len = 0;
+	bool in_softirq;
 
 	for (i = 0; i < count; i++) {
 		struct page *page = virt_to_head_page(data[i]);
@@ -635,7 +659,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 		return;
 
 	/* Bulk producer into ptr_ring page_pool cache */
-	page_pool_ring_lock(pool);
+	in_softirq = page_pool_producer_lock(pool);
 	for (i = 0; i < bulk_len; i++) {
 		if (__ptr_ring_produce(&pool->ring, data[i])) {
 			/* ring full */
@@ -644,7 +668,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 		}
 	}
 	recycle_stat_add(pool, ring, i);
-	page_pool_ring_unlock(pool);
+	page_pool_producer_unlock(pool, in_softirq);
 
 	/* Hopefully all pages was return into ptr_ring */
 	if (likely(i == bulk_len))
-- 
2.33.0



* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-22  3:17 [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock() Yunsheng Lin
@ 2023-05-22 11:08 ` Jesper Dangaard Brouer
  2023-05-22 11:45   ` Ilias Apalodimas
  2023-05-24  3:40 ` patchwork-bot+netdevbpf
  2023-05-26 19:34 ` Jakub Kicinski
  2 siblings, 1 reply; 11+ messages in thread
From: Jesper Dangaard Brouer @ 2023-05-22 11:08 UTC (permalink / raw)
  To: Yunsheng Lin, davem, kuba, pabeni
  Cc: brouer, netdev, linux-kernel, Jesper Dangaard Brouer,
	Ilias Apalodimas, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend



On 22/05/2023 05.17, Yunsheng Lin wrote:
> page_pool_ring_[un]lock() use in_softirq() to decide which
> spin lock variant to use, and when they are called in the
> context with in_softirq() being false, spin_lock_bh() is
> called in page_pool_ring_lock() while spin_unlock() is
> called in page_pool_ring_unlock(), because spin_lock_bh()
> has disabled the softirq in page_pool_ring_lock(), which
> causes inconsistency for spin lock pair calling.
> 
> This patch fixes it by returning in_softirq state from
> page_pool_producer_lock(), and use it to decide which
> spin lock variant to use in page_pool_producer_unlock().
> 
> As pool->ring has both producer and consumer lock, so
> rename it to page_pool_producer_[un]lock() to reflect
> the actual usage. Also move them to page_pool.c as they
> are only used there, and remove the 'inline' as the
> compiler may have better idea to do inlining or not.
> 
> Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
> Signed-off-by: Yunsheng Lin<linyunsheng@huawei.com>

Thanks for spotting and fixing this! :-)

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>



* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-22 11:08 ` Jesper Dangaard Brouer
@ 2023-05-22 11:45   ` Ilias Apalodimas
  2023-05-23  2:13     ` Yunsheng Lin
  0 siblings, 1 reply; 11+ messages in thread
From: Ilias Apalodimas @ 2023-05-22 11:45 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Yunsheng Lin, davem, kuba, pabeni, brouer, netdev, linux-kernel,
	Jesper Dangaard Brouer, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend

Thanks Yunsheng

On Mon, 22 May 2023 at 14:08, Jesper Dangaard Brouer <jbrouer@redhat.com> wrote:
>
>
>
> On 22/05/2023 05.17, Yunsheng Lin wrote:
> > page_pool_ring_[un]lock() use in_softirq() to decide which
> > spin lock variant to use, and when they are called in the
> > context with in_softirq() being false, spin_lock_bh() is
> > called in page_pool_ring_lock() while spin_unlock() is
> > called in page_pool_ring_unlock(), because spin_lock_bh()
> > has disabled the softirq in page_pool_ring_lock(), which
> > causes inconsistency for spin lock pair calling.
> >
> > This patch fixes it by returning in_softirq state from
> > page_pool_producer_lock(), and use it to decide which
> > spin lock variant to use in page_pool_producer_unlock().
> >
> > As pool->ring has both producer and consumer lock, so
> > rename it to page_pool_producer_[un]lock() to reflect
> > the actual usage. Also move them to page_pool.c as they
> > are only used there, and remove the 'inline' as the
> > compiler may have better idea to do inlining or not.
> >
> > Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
> > Signed-off-by: Yunsheng Lin<linyunsheng@huawei.com>
>
> Thanks for spotting and fixing this! :-)
>
> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
>

Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>


* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-22 11:45   ` Ilias Apalodimas
@ 2023-05-23  2:13     ` Yunsheng Lin
  2023-05-23  2:22       ` Jakub Kicinski
  2023-05-23  7:08       ` Jesper Dangaard Brouer
  0 siblings, 2 replies; 11+ messages in thread
From: Yunsheng Lin @ 2023-05-23  2:13 UTC (permalink / raw)
  To: Ilias Apalodimas, Jesper Dangaard Brouer
  Cc: davem, kuba, pabeni, brouer, netdev, linux-kernel,
	Jesper Dangaard Brouer, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend

On 2023/5/22 19:45, Ilias Apalodimas wrote:
>> On 22/05/2023 05.17, Yunsheng Lin wrote:
>>> page_pool_ring_[un]lock() use in_softirq() to decide which
>>> spin lock variant to use, and when they are called in the
>>> context with in_softirq() being false, spin_lock_bh() is
>>> called in page_pool_ring_lock() while spin_unlock() is
>>> called in page_pool_ring_unlock(), because spin_lock_bh()
>>> has disabled the softirq in page_pool_ring_lock(), which
>>> causes inconsistency for spin lock pair calling.
>>>
>>> This patch fixes it by returning in_softirq state from
>>> page_pool_producer_lock(), and use it to decide which
>>> spin lock variant to use in page_pool_producer_unlock().
>>>
>>> As pool->ring has both producer and consumer lock, so
>>> rename it to page_pool_producer_[un]lock() to reflect
>>> the actual usage. Also move them to page_pool.c as they
>>> are only used there, and remove the 'inline' as the
>>> compiler may have better idea to do inlining or not.
>>>
>>> Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
>>> Signed-off-by: Yunsheng Lin<linyunsheng@huawei.com>
>>
>> Thanks for spotting and fixing this! :-)

It was spotted when implementing the below patch:)

https://patchwork.kernel.org/project/netdevbpf/patch/168269857929.2191653.13267688321246766547.stgit@firesoul/#25325801

Are you still working on optimizing the page_pool destroy
process? If not, do you mind if I carry it on based on
that patch?


* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-23  2:13     ` Yunsheng Lin
@ 2023-05-23  2:22       ` Jakub Kicinski
  2023-05-23  2:34         ` Yunsheng Lin
  2023-05-23  7:08       ` Jesper Dangaard Brouer
  1 sibling, 1 reply; 11+ messages in thread
From: Jakub Kicinski @ 2023-05-23  2:22 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Ilias Apalodimas, Jesper Dangaard Brouer, davem, pabeni, brouer,
	netdev, linux-kernel, Jesper Dangaard Brouer, Eric Dumazet,
	Lorenzo Bianconi, Daniel Borkmann, John Fastabend

On Tue, 23 May 2023 10:13:14 +0800 Yunsheng Lin wrote:
> On 2023/5/22 19:45, Ilias Apalodimas wrote:
> >> Thanks for spotting and fixing this! :-)  
> 
> It was spotted when implementing the below patch:)
> 
> https://patchwork.kernel.org/project/netdevbpf/patch/168269857929.2191653.13267688321246766547.stgit@firesoul/#25325801
> 
> Do you still working on optimizing the page_pool destroy
> process? If not, do you mind if I carry it on based on
> that?

Not sure what you mean, this patch is a fix and the destroy
optimizations were targeted at net-next. The fix goes in first,
and then after the tree merge on Thu the work in net-next can
progress.


* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-23  2:22       ` Jakub Kicinski
@ 2023-05-23  2:34         ` Yunsheng Lin
  0 siblings, 0 replies; 11+ messages in thread
From: Yunsheng Lin @ 2023-05-23  2:34 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Ilias Apalodimas, Jesper Dangaard Brouer, davem, pabeni, brouer,
	netdev, linux-kernel, Jesper Dangaard Brouer, Eric Dumazet,
	Lorenzo Bianconi, Daniel Borkmann, John Fastabend

On 2023/5/23 10:22, Jakub Kicinski wrote:
> On Tue, 23 May 2023 10:13:14 +0800 Yunsheng Lin wrote:
>> On 2023/5/22 19:45, Ilias Apalodimas wrote:
>>>> Thanks for spotting and fixing this! :-)  
>>
>> It was spotted when implementing the below patch:)
>>
>> https://patchwork.kernel.org/project/netdevbpf/patch/168269857929.2191653.13267688321246766547.stgit@firesoul/#25325801
>>
>> Do you still working on optimizing the page_pool destroy
>> process? If not, do you mind if I carry it on based on
>> that?
> 
> Not sure what you mean, this patch is a fix and the destroy
> optimizations where targeted at net-next. Fix goes in first,
> and then after the tree merge on Thu the work in net-next can 
> progress.

Sure, I will send it as an RFC if this patch is not merged
into net-next yet:)


* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-23  2:13     ` Yunsheng Lin
  2023-05-23  2:22       ` Jakub Kicinski
@ 2023-05-23  7:08       ` Jesper Dangaard Brouer
  2023-05-23  8:16         ` Yunsheng Lin
  1 sibling, 1 reply; 11+ messages in thread
From: Jesper Dangaard Brouer @ 2023-05-23  7:08 UTC (permalink / raw)
  To: Yunsheng Lin, Ilias Apalodimas, Jesper Dangaard Brouer
  Cc: brouer, davem, kuba, pabeni, netdev, linux-kernel,
	Jesper Dangaard Brouer, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend



On 23/05/2023 04.13, Yunsheng Lin wrote:
> Do you still working on optimizing the page_pool destroy
> process? If not, do you mind if I carry it on based on
> that?

I'm still working on improving the page_pool destroy process.
I prefer to do the implementation myself.

I've not submitted another version, because I'm currently using the
workqueue to detect/track a memory leak in mlx5.

The mlx5 driver combined with XDP redirect is leaking memory, and will
eventually lead to OOM.  The workqueue warning doesn't point to the
actual problem, but it makes it easier to notice that there is a problem.

--Jesper



* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-23  7:08       ` Jesper Dangaard Brouer
@ 2023-05-23  8:16         ` Yunsheng Lin
  0 siblings, 0 replies; 11+ messages in thread
From: Yunsheng Lin @ 2023-05-23  8:16 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, Ilias Apalodimas
  Cc: brouer, davem, kuba, pabeni, netdev, linux-kernel,
	Jesper Dangaard Brouer, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend

On 2023/5/23 15:08, Jesper Dangaard Brouer wrote:
> On 23/05/2023 04.13, Yunsheng Lin wrote:
>> Do you still working on optimizing the page_pool destroy
>> process? If not, do you mind if I carry it on based on
>> that?
> 
> I'm still working on improving the page_pool destroy process.
> I prefer to do the implementation myself.

Sure, let me know if you want a hand with that.

> 
> I've not submitted another version, because I'm currently using the
> workqueue to detect/track a memory leak in mlx5.
> 
> The mlx5 driver combined with XDP redirect is leaking memory, and will
> eventually lead to OOM.  The workqueue warning doesn't point to the
> actual problem, but it makes it easier to notice that there is a problem.
> 
> --Jesper
> 
> .
> 


* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-22  3:17 [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock() Yunsheng Lin
  2023-05-22 11:08 ` Jesper Dangaard Brouer
@ 2023-05-24  3:40 ` patchwork-bot+netdevbpf
  2023-05-26 19:34 ` Jakub Kicinski
  2 siblings, 0 replies; 11+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-05-24  3:40 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, kuba, pabeni, netdev, linux-kernel, hawk,
	ilias.apalodimas, edumazet, lorenzo, daniel, john.fastabend

Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Mon, 22 May 2023 11:17:14 +0800 you wrote:
> page_pool_ring_[un]lock() use in_softirq() to decide which
> spin lock variant to use, and when they are called in the
> context with in_softirq() being false, spin_lock_bh() is
> called in page_pool_ring_lock() while spin_unlock() is
> called in page_pool_ring_unlock(), because spin_lock_bh()
> has disabled the softirq in page_pool_ring_lock(), which
> causes inconsistency for spin lock pair calling.
> 
> [...]

Here is the summary with links:
  - [net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
    https://git.kernel.org/netdev/net/c/368d3cb406cd

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-22  3:17 [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock() Yunsheng Lin
  2023-05-22 11:08 ` Jesper Dangaard Brouer
  2023-05-24  3:40 ` patchwork-bot+netdevbpf
@ 2023-05-26 19:34 ` Jakub Kicinski
  2023-05-27  7:56   ` Yunsheng Lin
  2 siblings, 1 reply; 11+ messages in thread
From: Jakub Kicinski @ 2023-05-26 19:34 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: davem, pabeni, netdev, linux-kernel, Jesper Dangaard Brouer,
	Ilias Apalodimas, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend

On Mon, 22 May 2023 11:17:14 +0800 Yunsheng Lin wrote:
> page_pool_ring_[un]lock() use in_softirq() to decide which
> spin lock variant to use, and when they are called in the
> context with in_softirq() being false, spin_lock_bh() is
> called in page_pool_ring_lock() while spin_unlock() is
> called in page_pool_ring_unlock(), because spin_lock_bh()
> has disabled the softirq in page_pool_ring_lock(), which
> causes inconsistency for spin lock pair calling.
> 
> This patch fixes it by returning in_softirq state from
> page_pool_producer_lock(), and use it to decide which
> spin lock variant to use in page_pool_producer_unlock().
> 
> As pool->ring has both producer and consumer lock, so
> rename it to page_pool_producer_[un]lock() to reflect
> the actual usage. Also move them to page_pool.c as they
> are only used there, and remove the 'inline' as the
> compiler may have better idea to do inlining or not.
> 
> Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>

I just realized now while doing backports that the Fixes tag is
incorrect here. The correct Fixes tag is:

Fixes: 542bcea4be86 ("net: page_pool: use in_softirq() instead")

Before that we used in_serving_softirq() which was perfectly fine.
This explains the major mystery of how such a serious bug would survive
for 10+ releases... it didn't, it wasn't there :) It only came in 6.3.
We can't change the tag now but at least the universe makes sense again.


* Re: [PATCH net] page_pool: fix inconsistency for page_pool_ring_[un]lock()
  2023-05-26 19:34 ` Jakub Kicinski
@ 2023-05-27  7:56   ` Yunsheng Lin
  0 siblings, 0 replies; 11+ messages in thread
From: Yunsheng Lin @ 2023-05-27  7:56 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: davem, pabeni, netdev, linux-kernel, Jesper Dangaard Brouer,
	Ilias Apalodimas, Eric Dumazet, Lorenzo Bianconi,
	Daniel Borkmann, John Fastabend

On 2023/5/27 3:34, Jakub Kicinski wrote:
> On Mon, 22 May 2023 11:17:14 +0800 Yunsheng Lin wrote:
>> page_pool_ring_[un]lock() use in_softirq() to decide which
>> spin lock variant to use, and when they are called in the
>> context with in_softirq() being false, spin_lock_bh() is
>> called in page_pool_ring_lock() while spin_unlock() is
>> called in page_pool_ring_unlock(), because spin_lock_bh()
>> has disabled the softirq in page_pool_ring_lock(), which
>> causes inconsistency for spin lock pair calling.
>>
>> This patch fixes it by returning in_softirq state from
>> page_pool_producer_lock(), and use it to decide which
>> spin lock variant to use in page_pool_producer_unlock().
>>
>> As pool->ring has both producer and consumer lock, so
>> rename it to page_pool_producer_[un]lock() to reflect
>> the actual usage. Also move them to page_pool.c as they
>> are only used there, and remove the 'inline' as the
>> compiler may have better idea to do inlining or not.
>>
>> Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> 
> I just realized now while doing backports that the Fixes tag is
> incorrect here. The correct Fixes tag is:
> 
> Fixes: 542bcea4be86 ("net: page_pool: use in_softirq() instead")
> 
> Before that we used in_serving_softirq() which was perfectly fine.

From the comments around in_serving_softirq() and in_softirq(),
you are probably right: in_serving_softirq() stays false in
process context no matter whether bh is enabled or disabled.

> This explains the major mystery of how such a serious bug would survive
> for 10+ releases... it didn't, it wasn't there :) It only came in 6.3.
> We can't change the tag now but at least the universe makes sense again.

Yes, it makes more sense now:)

> .
> 


