netdev.vger.kernel.org archive mirror
* [PATCH net 0/2] net: fix netpoll crash with bnxt
@ 2020-08-26 19:40 Jakub Kicinski
  2020-08-26 19:40 ` [PATCH net 1/2] net: disable netpoll on fresh napis Jakub Kicinski
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-26 19:40 UTC (permalink / raw)
  To: davem; +Cc: eric.dumazet, michael.chan, netdev, kernel-team, Jakub Kicinski

Hi!

Rob ran into crashes when using XDP on bnxt. Upon investigation
it turns out that during driver reconfig the IRQ core produces
a warning message when IRQs are requested. This triggers netpoll,
which in turn accesses uninitialized driver state. The same crash
can also be triggered on this platform by changing the number of rings.

Looks like we have two missing pieces here: netif_napi_add() has
to make sure we start out with netpoll blocked. The driver also
has to be more careful about when napi gets enabled.

Tested XDP and channel count changes; the warning message no longer
causes a crash. Not sure if the memory barriers added in patch 1
are necessary, but it seems we should have them.

Jakub Kicinski (2):
  net: disable netpoll on fresh napis
  bnxt: don't enable NAPI until rings are ready

 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 12 ++++--------
 net/core/dev.c                            |  3 ++-
 net/core/netpoll.c                        |  2 +-
 3 files changed, 7 insertions(+), 10 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH net 1/2] net: disable netpoll on fresh napis
  2020-08-26 19:40 [PATCH net 0/2] net: fix netpoll crash with bnxt Jakub Kicinski
@ 2020-08-26 19:40 ` Jakub Kicinski
  2020-08-27  7:25   ` Eric Dumazet
  2020-08-26 19:40 ` [PATCH net 2/2] bnxt: don't enable NAPI until rings are ready Jakub Kicinski
  2020-08-26 23:17 ` [PATCH net 0/2] net: fix netpoll crash with bnxt David Miller
  2 siblings, 1 reply; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-26 19:40 UTC (permalink / raw)
  To: davem
  Cc: eric.dumazet, michael.chan, netdev, kernel-team, Jakub Kicinski,
	Rob Sherwood

napi_disable() makes sure to set the NAPI_STATE_NPSVC bit to prevent
netpoll from accessing rings before init is complete. However, the
same is not done for fresh napi instances in netif_napi_add(),
even though we expect NAPI instances to be added as disabled.

This causes crashes during driver reconfiguration (enabling XDP,
changing the channel count) - if there is any printk() after
netif_napi_add() but before napi_enable().

To ensure memory ordering is correct we need to use RCU accessors.

Reported-by: Rob Sherwood <rsher@fb.com>
Fixes: 2d8bff12699a ("netpoll: Close race condition between poll_one_napi and napi_disable")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 net/core/dev.c     | 3 ++-
 net/core/netpoll.c | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index d42c9ea0c3c0..95ac7568f693 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6612,12 +6612,13 @@ void netif_napi_add(struct net_device *dev, struct napi_struct *napi,
 		netdev_err_once(dev, "%s() called with weight %d\n", __func__,
 				weight);
 	napi->weight = weight;
-	list_add(&napi->dev_list, &dev->napi_list);
 	napi->dev = dev;
 #ifdef CONFIG_NETPOLL
 	napi->poll_owner = -1;
 #endif
 	set_bit(NAPI_STATE_SCHED, &napi->state);
+	set_bit(NAPI_STATE_NPSVC, &napi->state);
+	list_add_rcu(&napi->dev_list, &dev->napi_list);
 	napi_hash_add(napi);
 }
 EXPORT_SYMBOL(netif_napi_add);
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 093e90e52bc2..2338753e936b 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -162,7 +162,7 @@ static void poll_napi(struct net_device *dev)
 	struct napi_struct *napi;
 	int cpu = smp_processor_id();
 
-	list_for_each_entry(napi, &dev->napi_list, dev_list) {
+	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
 		if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
 			poll_one_napi(napi);
 			smp_store_release(&napi->poll_owner, -1);
-- 
2.26.2



* [PATCH net 2/2] bnxt: don't enable NAPI until rings are ready
  2020-08-26 19:40 [PATCH net 0/2] net: fix netpoll crash with bnxt Jakub Kicinski
  2020-08-26 19:40 ` [PATCH net 1/2] net: disable netpoll on fresh napis Jakub Kicinski
@ 2020-08-26 19:40 ` Jakub Kicinski
  2020-08-26 20:23   ` Michael Chan
  2020-08-26 23:17 ` [PATCH net 0/2] net: fix netpoll crash with bnxt David Miller
  2 siblings, 1 reply; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-26 19:40 UTC (permalink / raw)
  To: davem
  Cc: eric.dumazet, michael.chan, netdev, kernel-team, Jakub Kicinski,
	Rob Sherwood

Netpoll can try to poll napi as soon as napi_enable() is called.
It crashes trying to access a doorbell which is still NULL:

 BUG: kernel NULL pointer dereference, address: 0000000000000000
 CPU: 59 PID: 6039 Comm: ethtool Kdump: loaded Tainted: G S                5.9.0-rc1-00469-g5fd99b5d9950-dirty #26
 RIP: 0010:bnxt_poll+0x121/0x1c0
 Code: c4 20 44 89 e0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 41 8b 86 a0 01 00 00 41 23 85 18 01 00 00 49 8b 96 a8 01 00 00 0d 00 00 00 24 <89> 02
41 f6 45 77 02 74 cb 49 8b ae d8 01 00 00 31 c0 c7 44 24 1a
  netpoll_poll_dev+0xbd/0x1a0
  __netpoll_send_skb+0x1b2/0x210
  netpoll_send_udp+0x2c9/0x406
  write_ext_msg+0x1d7/0x1f0
  console_unlock+0x23c/0x520
  vprintk_emit+0xe0/0x1d0
  printk+0x58/0x6f
  x86_vector_activate.cold+0xf/0x46
  __irq_domain_activate_irq+0x50/0x80
  __irq_domain_activate_irq+0x32/0x80
  __irq_domain_activate_irq+0x32/0x80
  irq_domain_activate_irq+0x25/0x40
  __setup_irq+0x2d2/0x700
  request_threaded_irq+0xfb/0x160
  __bnxt_open_nic+0x3b1/0x750
  bnxt_open_nic+0x19/0x30
  ethtool_set_channels+0x1ac/0x220
  dev_ethtool+0x11ba/0x2240
  dev_ioctl+0x1cf/0x390
  sock_do_ioctl+0x95/0x130

Reported-by: Rob Sherwood <rsher@fb.com>
Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 316227136f5b..57d0e195cddf 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -9504,15 +9504,15 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
 		}
 	}
 
-	bnxt_enable_napi(bp);
-	bnxt_debug_dev_init(bp);
-
 	rc = bnxt_init_nic(bp, irq_re_init);
 	if (rc) {
 		netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc);
-		goto open_err;
+		goto open_err_irq;
 	}
 
+	bnxt_enable_napi(bp);
+	bnxt_debug_dev_init(bp);
+
 	if (link_re_init) {
 		mutex_lock(&bp->link_lock);
 		rc = bnxt_update_phy_setting(bp);
@@ -9543,10 +9543,6 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
 		bnxt_vf_reps_open(bp);
 	return 0;
 
-open_err:
-	bnxt_debug_dev_exit(bp);
-	bnxt_disable_napi(bp);
-
 open_err_irq:
 	bnxt_del_napi(bp);
 
-- 
2.26.2



* Re: [PATCH net 2/2] bnxt: don't enable NAPI until rings are ready
  2020-08-26 19:40 ` [PATCH net 2/2] bnxt: don't enable NAPI until rings are ready Jakub Kicinski
@ 2020-08-26 20:23   ` Michael Chan
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Chan @ 2020-08-26 20:23 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: David Miller, Eric Dumazet, Netdev, Kernel Team, Rob Sherwood

On Wed, Aug 26, 2020 at 12:40 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> Netpoll can try to poll napi as soon as napi_enable() is called.
> It crashes trying to access a doorbell which is still NULL:
>
>  BUG: kernel NULL pointer dereference, address: 0000000000000000
>  CPU: 59 PID: 6039 Comm: ethtool Kdump: loaded Tainted: G S                5.9.0-rc1-00469-g5fd99b5d9950-dirty #26
>  RIP: 0010:bnxt_poll+0x121/0x1c0
>  Code: c4 20 44 89 e0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 41 8b 86 a0 01 00 00 41 23 85 18 01 00 00 49 8b 96 a8 01 00 00 0d 00 00 00 24 <89> 02
> 41 f6 45 77 02 74 cb 49 8b ae d8 01 00 00 31 c0 c7 44 24 1a
>   netpoll_poll_dev+0xbd/0x1a0
>   __netpoll_send_skb+0x1b2/0x210
>   netpoll_send_udp+0x2c9/0x406
>   write_ext_msg+0x1d7/0x1f0
>   console_unlock+0x23c/0x520
>   vprintk_emit+0xe0/0x1d0
>   printk+0x58/0x6f
>   x86_vector_activate.cold+0xf/0x46
>   __irq_domain_activate_irq+0x50/0x80
>   __irq_domain_activate_irq+0x32/0x80
>   __irq_domain_activate_irq+0x32/0x80
>   irq_domain_activate_irq+0x25/0x40
>   __setup_irq+0x2d2/0x700
>   request_threaded_irq+0xfb/0x160
>   __bnxt_open_nic+0x3b1/0x750
>   bnxt_open_nic+0x19/0x30
>   ethtool_set_channels+0x1ac/0x220
>   dev_ethtool+0x11ba/0x2240
>   dev_ioctl+0x1cf/0x390
>   sock_do_ioctl+0x95/0x130
>
> Reported-by: Rob Sherwood <rsher@fb.com>
> Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.")
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Reviewed-by: Michael Chan <michael.chan@broadcom.com>

Thanks.


* Re: [PATCH net 0/2] net: fix netpoll crash with bnxt
  2020-08-26 19:40 [PATCH net 0/2] net: fix netpoll crash with bnxt Jakub Kicinski
  2020-08-26 19:40 ` [PATCH net 1/2] net: disable netpoll on fresh napis Jakub Kicinski
  2020-08-26 19:40 ` [PATCH net 2/2] bnxt: don't enable NAPI until rings are ready Jakub Kicinski
@ 2020-08-26 23:17 ` David Miller
  2 siblings, 0 replies; 13+ messages in thread
From: David Miller @ 2020-08-26 23:17 UTC (permalink / raw)
  To: kuba; +Cc: eric.dumazet, michael.chan, netdev, kernel-team

From: Jakub Kicinski <kuba@kernel.org>
Date: Wed, 26 Aug 2020 12:40:05 -0700

> Rob ran into crashes when using XDP on bnxt. Upon investigation
> it turns out that during driver reconfig the IRQ core produces
> a warning message when IRQs are requested. This triggers netpoll,
> which in turn accesses uninitialized driver state. The same crash
> can also be triggered on this platform by changing the number of rings.
> 
> Looks like we have two missing pieces here: netif_napi_add() has
> to make sure we start out with netpoll blocked. The driver also
> has to be more careful about when napi gets enabled.
> 
> Tested XDP and channel count changes; the warning message no longer
> causes a crash. Not sure if the memory barriers added in patch 1
> are necessary, but it seems we should have them.

Series applied and queued up for -stable, thanks Jakub.


* Re: [PATCH net 1/2] net: disable netpoll on fresh napis
  2020-08-26 19:40 ` [PATCH net 1/2] net: disable netpoll on fresh napis Jakub Kicinski
@ 2020-08-27  7:25   ` Eric Dumazet
  2020-08-27 15:10     ` Jakub Kicinski
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2020-08-27  7:25 UTC (permalink / raw)
  To: Jakub Kicinski, davem
  Cc: eric.dumazet, michael.chan, netdev, kernel-team, Rob Sherwood



On 8/26/20 12:40 PM, Jakub Kicinski wrote:
> napi_disable() makes sure to set the NAPI_STATE_NPSVC bit to prevent
> netpoll from accessing rings before init is complete. However, the
> same is not done for fresh napi instances in netif_napi_add(),
> even though we expect NAPI instances to be added as disabled.
> 
> This causes crashes during driver reconfiguration (enabling XDP,
> changing the channel count) - if there is any printk() after
> netif_napi_add() but before napi_enable().
> 
> To ensure memory ordering is correct we need to use RCU accessors.
> 
> Reported-by: Rob Sherwood <rsher@fb.com>
> Fixes: 2d8bff12699a ("netpoll: Close race condition between poll_one_napi and napi_disable")
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
>  net/core/dev.c     | 3 ++-
>  net/core/netpoll.c | 2 +-
>  2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index d42c9ea0c3c0..95ac7568f693 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -6612,12 +6612,13 @@ void netif_napi_add(struct net_device *dev, struct napi_struct *napi,
>  		netdev_err_once(dev, "%s() called with weight %d\n", __func__,
>  				weight);
>  	napi->weight = weight;
> -	list_add(&napi->dev_list, &dev->napi_list);
>  	napi->dev = dev;
>  #ifdef CONFIG_NETPOLL
>  	napi->poll_owner = -1;
>  #endif
>  	set_bit(NAPI_STATE_SCHED, &napi->state);
> +	set_bit(NAPI_STATE_NPSVC, &napi->state);
> +	list_add_rcu(&napi->dev_list, &dev->napi_list);
>  	napi_hash_add(napi);
>  }
>  EXPORT_SYMBOL(netif_napi_add);
> diff --git a/net/core/netpoll.c b/net/core/netpoll.c
> index 093e90e52bc2..2338753e936b 100644
> --- a/net/core/netpoll.c
> +++ b/net/core/netpoll.c
> @@ -162,7 +162,7 @@ static void poll_napi(struct net_device *dev)
>  	struct napi_struct *napi;
>  	int cpu = smp_processor_id();
>  
> -	list_for_each_entry(napi, &dev->napi_list, dev_list) {
> +	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
>  		if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
>  			poll_one_napi(napi);
>  			smp_store_release(&napi->poll_owner, -1);
> 

You added rcu in this patch (without anything in the changelog).

netpoll_poll_dev() uses rcu_dereference_bh(), suggesting you might need list_for_each_entry_rcu_bh()





* Re: [PATCH net 1/2] net: disable netpoll on fresh napis
  2020-08-27  7:25   ` Eric Dumazet
@ 2020-08-27 15:10     ` Jakub Kicinski
  2020-08-27 15:43       ` Eric Dumazet
  0 siblings, 1 reply; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-27 15:10 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, michael.chan, netdev, kernel-team, Rob Sherwood

On Thu, 27 Aug 2020 00:25:31 -0700 Eric Dumazet wrote:
> On 8/26/20 12:40 PM, Jakub Kicinski wrote:
> > To ensure memory ordering is correct we need to use RCU accessors.
>
> > +	set_bit(NAPI_STATE_NPSVC, &napi->state);
> > +	list_add_rcu(&napi->dev_list, &dev->napi_list);
> 
> >  
> > -	list_for_each_entry(napi, &dev->napi_list, dev_list) {
> > +	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
> >  		if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
> >  			poll_one_napi(napi);
> >  			smp_store_release(&napi->poll_owner, -1);
> >   
> 
> You added rcu in this patch (without anything in the changelog).

I mentioned I need it for the barriers, in particular I wanted the
store release barrier in list_add. Not extremely clean :(

> netpoll_poll_dev() uses rcu_dereference_bh(), suggesting you might
> need list_for_each_entry_rcu_bh()

I thought the RCU flavors are mostly meaningless at this point,
list_for_each_entry_rcu() checks rcu_read_lock_any_held(). I can add
the definition of list_for_each_entry_rcu_bh() (since it doesn't exist)
or go back to non-RCU iteration (since the use is just documentation,
the code is identical). Or fix it some other way?


* Re: [PATCH net 1/2] net: disable netpoll on fresh napis
  2020-08-27 15:10     ` Jakub Kicinski
@ 2020-08-27 15:43       ` Eric Dumazet
  2020-08-27 17:47         ` Jakub Kicinski
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2020-08-27 15:43 UTC (permalink / raw)
  To: Jakub Kicinski, Eric Dumazet
  Cc: davem, michael.chan, netdev, kernel-team, Rob Sherwood



On 8/27/20 8:10 AM, Jakub Kicinski wrote:
> On Thu, 27 Aug 2020 00:25:31 -0700 Eric Dumazet wrote:
>> On 8/26/20 12:40 PM, Jakub Kicinski wrote:
>>> To ensure memory ordering is correct we need to use RCU accessors.
>>
>>> +	set_bit(NAPI_STATE_NPSVC, &napi->state);
>>> +	list_add_rcu(&napi->dev_list, &dev->napi_list);
>>
>>>  
>>> -	list_for_each_entry(napi, &dev->napi_list, dev_list) {
>>> +	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
>>>  		if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
>>>  			poll_one_napi(napi);
>>>  			smp_store_release(&napi->poll_owner, -1);
>>>   
>>
>> You added rcu in this patch (without anything in the changelog).
> 
> I mentioned I need it for the barriers, in particular I wanted the
> store release barrier in list_add. Not extremely clean :(

Hmmm, we also have smp_mb__after_atomic()

> 
>> netpoll_poll_dev() uses rcu_dereference_bh(), suggesting you might
>> need list_for_each_entry_rcu_bh()
> 
> I thought the RCU flavors are mostly meaningless at this point,
> list_for_each_entry_rcu() checks rcu_read_lock_any_held(). I can add
> the definition of list_for_each_entry_rcu_bh() (since it doesn't exist)
> or go back to non-RCU iteration (since the use is just documentation,
> the code is identical). Or fix it some other way?
> 

Oh, I really thought list_for_each_entry_rcu() was only checking standard rcu.

I might have been confused because we do have hlist_for_each_entry_rcu_bh() helper.

Anyway, when looking at the patch I was not at ease because we do not have proper
rcu grace period when a napi is removed from dev->napi_list. A driver might
free the napi struct right after calling netif_napi_del()



* Re: [PATCH net 1/2] net: disable netpoll on fresh napis
  2020-08-27 15:43       ` Eric Dumazet
@ 2020-08-27 17:47         ` Jakub Kicinski
  2020-08-27 22:32           ` [RFC -next 0/3] " Jakub Kicinski
  0 siblings, 1 reply; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-27 17:47 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, michael.chan, netdev, kernel-team, Rob Sherwood

On Thu, 27 Aug 2020 08:43:22 -0700 Eric Dumazet wrote:
> On 8/27/20 8:10 AM, Jakub Kicinski wrote:
> > On Thu, 27 Aug 2020 00:25:31 -0700 Eric Dumazet wrote:  
> >> On 8/26/20 12:40 PM, Jakub Kicinski wrote:  
> >>> To ensure memory ordering is correct we need to use RCU accessors.  
> >>  
> >>> +	set_bit(NAPI_STATE_NPSVC, &napi->state);
> >>> +	list_add_rcu(&napi->dev_list, &dev->napi_list);  
> >>  
> >>>  
> >>> -	list_for_each_entry(napi, &dev->napi_list, dev_list) {
> >>> +	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
> >>>  		if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
> >>>  			poll_one_napi(napi);
> >>>  			smp_store_release(&napi->poll_owner, -1);
> >>>     
> >>
> >> You added rcu in this patch (without anything in the changelog).  
> > 
> > I mentioned I need it for the barriers, in particular I wanted the
> > store release barrier in list_add. Not extremely clean :(  
> 
> Hmmm, we also have smp_mb__after_atomic()

Pairing with the cmpxchg() on the netpoll side? Can do, I wasn't 
sure if the list operations themselves need some special care 
(like READ_ONCE/WRITE_ONCE)..

> >> netpoll_poll_dev() uses rcu_dereference_bh(), suggesting you might
> >> need list_for_each_entry_rcu_bh()  
> > 
> > I thought the RCU flavors are mostly meaningless at this point,
> > list_for_each_entry_rcu() checks rcu_read_lock_any_held(). I can add
> > the definition of list_for_each_entry_rcu_bh() (since it doesn't exist)
> > or go back to non-RCU iteration (since the use is just documentation,
> > the code is identical). Or fix it some other way?
> >   
> 
> Oh, I really thought list_for_each_entry_rcu() was only checking standard rcu.
> 
> I might have been confused because we do have hlist_for_each_entry_rcu_bh() helper.
> 
> Anyway, when looking at the patch I was not at ease because we do not have proper
> rcu grace period when a napi is removed from dev->napi_list. A driver might
> free the napi struct right after calling netif_napi_del()

Ugh, you're right. I didn't look closely enough at netif_napi_del():

	if (napi_hash_del(napi))
		synchronize_net();
	list_del_init(&napi->dev_list);

Looks like I can reorder these.. and perhaps make all dev->napi_list
accesses RCU, for netpoll?


* [RFC -next 0/3] Re: [PATCH net 1/2] net: disable netpoll on fresh napis
  2020-08-27 17:47         ` Jakub Kicinski
@ 2020-08-27 22:32           ` Jakub Kicinski
  2020-08-27 22:32             ` [RFC -next 1/3] net: remove napi_hash_del() from driver-facing API Jakub Kicinski
                               ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-27 22:32 UTC (permalink / raw)
  To: eric.dumazet; +Cc: davem, netdev, kernel-team, Jakub Kicinski

On Thu, 27 Aug 2020 10:47:53 -0700 Jakub Kicinski wrote:
> > Oh, I really thought list_for_each_entry_rcu() was only checking standard rcu.
> > 
> > I might have been confused because we do have hlist_for_each_entry_rcu_bh() helper.
> > 
> > Anyway, when looking at the patch I was not at ease because we do not have proper
> > rcu grace period when a napi is removed from dev->napi_list. A driver might
> > free the napi struct right after calling netif_napi_del()  
> 
> Ugh, you're right. I didn't look closely enough at netif_napi_del():
> 
> 	if (napi_hash_del(napi))
> 		synchronize_net();
> 	list_del_init(&napi->dev_list);
> 
> Looks like I can reorder these.. and perhaps make all dev->napi_list
> accesses RCU, for netpoll?

So I had a look, and it looks like some reshuffling may be required
to get out of this pickle. The objective is for drivers to observe
an RCU grace period after netif_napi_del(), not just napi_hash_del().

Sending as RFC because IDK if the churn vs improvement ratio
is acceptable here.

Jakub Kicinski (3):
  net: remove napi_hash_del() from driver-facing API
  net: manage napi add/del idempotence explicitly
  net: make sure napi_list is safe for RCU traversal

 .../net/ethernet/broadcom/bnx2x/bnx2x_cmn.h   |  8 ++---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  5 ++-
 drivers/net/ethernet/cisco/enic/enic_main.c   | 12 ++++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |  4 +--
 .../net/ethernet/myricom/myri10ge/myri10ge.c  |  5 ++-
 drivers/net/veth.c                            |  3 +-
 drivers/net/virtio_net.c                      |  7 ++--
 include/linux/netdevice.h                     | 36 +++++++++----------
 net/core/dev.c                                | 32 ++++++++---------
 net/core/netpoll.c                            |  2 +-
 10 files changed, 55 insertions(+), 59 deletions(-)

-- 
2.26.2



* [RFC -next 1/3] net: remove napi_hash_del() from driver-facing API
  2020-08-27 22:32           ` [RFC -next 0/3] " Jakub Kicinski
@ 2020-08-27 22:32             ` Jakub Kicinski
  2020-08-27 22:32             ` [RFC -next 2/3] net: manage napi add/del idempotence explicitly Jakub Kicinski
  2020-08-27 22:32             ` [RFC -next 3/3] net: make sure napi_list is safe for RCU traversal Jakub Kicinski
  2 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-27 22:32 UTC (permalink / raw)
  To: eric.dumazet; +Cc: davem, netdev, kernel-team, Jakub Kicinski

We allow drivers to call napi_hash_del() before calling
netif_napi_del() to batch RCU grace periods. This makes
the API asymmetric and leaks internal implementation details.
Soon we will want the grace period to protect more than just
the NAPI hash table.

Restructure the API and have drivers call a new function -
__netif_napi_del() if they want to take care of RCU waits.

Note that only core was checking the return status from
napi_hash_del() so the new helper does not report if the
NAPI was actually deleted.

Some notes on driver oddness:
 - veth observed the grace period before calling netif_napi_del()
   but that should not matter
 - myri10ge observed normal RCU flavor
 - bnx2x and enic did not actually observe the grace period
   (unless they did so implicitly)
 - virtio_net and enic only unhashed Rx NAPIs

The last two points seem to indicate that the calls to
napi_hash_del() were a left over rather than an optimization.
Regardless, it's easy enough to correct them.

This patch may introduce extra synchronize_net() calls for
interfaces which set NAPI_STATE_NO_BUSY_POLL and depend on
free_netdev() to call netif_napi_del(). This seems inevitable
since we want to use RCU for netpoll dev->napi_list traversal,
and almost no drivers set IFF_DISABLE_NETPOLL.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 .../net/ethernet/broadcom/bnx2x/bnx2x_cmn.h   |  8 ++---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  5 ++-
 drivers/net/ethernet/cisco/enic/enic_main.c   | 12 ++++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |  4 +--
 .../net/ethernet/myricom/myri10ge/myri10ge.c  |  5 ++-
 drivers/net/veth.c                            |  3 +-
 drivers/net/virtio_net.c                      |  7 ++--
 include/linux/netdevice.h                     | 32 +++++++++----------
 net/core/dev.c                                | 19 ++++-------
 9 files changed, 43 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index 7e4c93be4451..d8b1824c334d 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -825,9 +825,9 @@ static inline void bnx2x_del_all_napi_cnic(struct bnx2x *bp)
 	int i;
 
 	for_each_rx_queue_cnic(bp, i) {
-		napi_hash_del(&bnx2x_fp(bp, i, napi));
-		netif_napi_del(&bnx2x_fp(bp, i, napi));
+		__netif_napi_del(&bnx2x_fp(bp, i, napi));
 	}
+	synchronize_net();
 }
 
 static inline void bnx2x_del_all_napi(struct bnx2x *bp)
@@ -835,9 +835,9 @@ static inline void bnx2x_del_all_napi(struct bnx2x *bp)
 	int i;
 
 	for_each_eth_queue(bp, i) {
-		napi_hash_del(&bnx2x_fp(bp, i, napi));
-		netif_napi_del(&bnx2x_fp(bp, i, napi));
+		__netif_napi_del(&bnx2x_fp(bp, i, napi));
 	}
+	synchronize_net();
 }
 
 int bnx2x_set_int_mode(struct bnx2x *bp);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 57d0e195cddf..fda2e6a2e68a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -8634,10 +8634,9 @@ static void bnxt_del_napi(struct bnxt *bp)
 	for (i = 0; i < bp->cp_nr_rings; i++) {
 		struct bnxt_napi *bnapi = bp->bnapi[i];
 
-		napi_hash_del(&bnapi->napi);
-		netif_napi_del(&bnapi->napi);
+		__netif_napi_del(&bnapi->napi);
 	}
-	/* We called napi_hash_del() before netif_napi_del(), we need
+	/* We called __netif_napi_del(), we need
 	 * to respect an RCU grace period before freeing napi structures.
 	 */
 	synchronize_net();
diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
index 6bc7e7ba38c3..8f30f33a6da2 100644
--- a/drivers/net/ethernet/cisco/enic/enic_main.c
+++ b/drivers/net/ethernet/cisco/enic/enic_main.c
@@ -2527,13 +2527,15 @@ static void enic_dev_deinit(struct enic *enic)
 {
 	unsigned int i;
 
-	for (i = 0; i < enic->rq_count; i++) {
-		napi_hash_del(&enic->napi[i]);
-		netif_napi_del(&enic->napi[i]);
-	}
+	for (i = 0; i < enic->rq_count; i++)
+		__netif_napi_del(&enic->napi[i]);
+
 	if (vnic_dev_get_intr_mode(enic->vdev) == VNIC_DEV_INTR_MODE_MSIX)
 		for (i = 0; i < enic->wq_count; i++)
-			netif_napi_del(&enic->napi[enic_cq_wq(enic, i)]);
+			__netif_napi_del(&enic->napi[enic_cq_wq(enic, i)]);
+
+	/* observe RCU grace period after __netif_napi_del() calls */
+	synchronize_net();
 
 	enic_free_vnic_resources(enic);
 	enic_clear_intr_mode(enic);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index 2e35c5706cf1..df389a11d3af 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -1029,10 +1029,10 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx)
 		WRITE_ONCE(adapter->rx_ring[ring->queue_index], NULL);
 
 	adapter->q_vector[v_idx] = NULL;
-	napi_hash_del(&q_vector->napi);
-	netif_napi_del(&q_vector->napi);
+	__netif_napi_del(&q_vector->napi);
 
 	/*
+	 * after a call to __netif_napi_del() napi may still be used and
 	 * ixgbe_get_stats64() might access the rings on this vector,
 	 * we must wait a grace period before freeing it.
 	 */
diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
index 4a5beafa0493..1634ca6d4a8f 100644
--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
@@ -3543,11 +3543,10 @@ static void myri10ge_free_slices(struct myri10ge_priv *mgp)
 					  ss->fw_stats, ss->fw_stats_bus);
 			ss->fw_stats = NULL;
 		}
-		napi_hash_del(&ss->napi);
-		netif_napi_del(&ss->napi);
+		__netif_napi_del(&ss->napi);
 	}
 	/* Wait till napi structs are no longer used, and then free ss. */
-	synchronize_rcu();
+	synchronize_net();
 	kfree(mgp->ss);
 	mgp->ss = NULL;
 }
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index e56cd562a664..7efe5d969c31 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -897,14 +897,13 @@ static void veth_napi_del(struct net_device *dev)
 		struct veth_rq *rq = &priv->rq[i];
 
 		napi_disable(&rq->xdp_napi);
-		napi_hash_del(&rq->xdp_napi);
+		__netif_napi_del(&rq->xdp_napi);
 	}
 	synchronize_net();
 
 	for (i = 0; i < dev->real_num_rx_queues; i++) {
 		struct veth_rq *rq = &priv->rq[i];
 
-		netif_napi_del(&rq->xdp_napi);
 		rq->rx_notify_masked = false;
 		ptr_ring_cleanup(&rq->xdp_ring, veth_ptr_free);
 	}
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0ada48edf749..dbc9f8aad84e 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2604,12 +2604,11 @@ static void virtnet_free_queues(struct virtnet_info *vi)
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		napi_hash_del(&vi->rq[i].napi);
-		netif_napi_del(&vi->rq[i].napi);
-		netif_napi_del(&vi->sq[i].napi);
+		__netif_napi_del(&vi->rq[i].napi);
+		__netif_napi_del(&vi->sq[i].napi);
 	}
 
-	/* We called napi_hash_del() before netif_napi_del(),
+	/* We called __netif_napi_del(),
 	 * we need to respect an RCU grace period before freeing vi->rq
 	 */
 	synchronize_net();
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index b0e303f6603f..67400efa6f00 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -70,6 +70,7 @@ struct udp_tunnel_nic;
 struct bpf_prog;
 struct xdp_buff;
 
+void synchronize_net(void);
 void netdev_set_default_ethtool_ops(struct net_device *dev,
 				    const struct ethtool_ops *ops);
 
@@ -488,20 +489,6 @@ static inline bool napi_complete(struct napi_struct *n)
 	return napi_complete_done(n, 0);
 }
 
-/**
- *	napi_hash_del - remove a NAPI from global table
- *	@napi: NAPI context
- *
- * Warning: caller must observe RCU grace period
- * before freeing memory containing @napi, if
- * this function returns true.
- * Note: core networking stack automatically calls it
- * from netif_napi_del().
- * Drivers might want to call this helper to combine all
- * the needed RCU grace periods into a single one.
- */
-bool napi_hash_del(struct napi_struct *napi);
-
 /**
  *	napi_disable - prevent NAPI from scheduling
  *	@n: NAPI context
@@ -2347,13 +2334,27 @@ static inline void netif_tx_napi_add(struct net_device *dev,
 	netif_napi_add(dev, napi, poll, weight);
 }
 
+/**
 + *  __netif_napi_del - remove a NAPI context
+ *  @napi: NAPI context
+ *
+ * Warning: caller must observe RCU grace period before freeing memory
+ * containing @napi. Drivers might want to call this helper to combine
+ * all the needed RCU grace periods into a single one.
+ */
+void __netif_napi_del(struct napi_struct *napi);
+
 /**
  *  netif_napi_del - remove a NAPI context
  *  @napi: NAPI context
  *
  *  netif_napi_del() removes a NAPI context from the network device NAPI list
  */
-void netif_napi_del(struct napi_struct *napi);
+static inline void netif_napi_del(struct napi_struct *napi)
+{
+	__netif_napi_del(napi);
+	synchronize_net();
+}
 
 struct napi_gro_cb {
 	/* Virtual address of skb_shinfo(skb)->frags[0].page + offset. */
@@ -2777,7 +2778,6 @@ static inline void unregister_netdevice(struct net_device *dev)
 int netdev_refcnt_read(const struct net_device *dev);
 void free_netdev(struct net_device *dev);
 void netdev_freemem(struct net_device *dev);
-void synchronize_net(void);
 int init_dummy_netdev(struct net_device *dev);
 
 struct net_device *netdev_get_xmit_slave(struct net_device *dev,
diff --git a/net/core/dev.c b/net/core/dev.c
index 95ac7568f693..d2c6fa24aa23 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6555,20 +6555,15 @@ static void napi_hash_add(struct napi_struct *napi)
 /* Warning : caller is responsible to make sure rcu grace period
  * is respected before freeing memory containing @napi
  */
-bool napi_hash_del(struct napi_struct *napi)
+static void napi_hash_del(struct napi_struct *napi)
 {
-	bool rcu_sync_needed = false;
-
 	spin_lock(&napi_hash_lock);
 
-	if (test_and_clear_bit(NAPI_STATE_HASHED, &napi->state)) {
-		rcu_sync_needed = true;
+	if (test_and_clear_bit(NAPI_STATE_HASHED, &napi->state))
 		hlist_del_rcu(&napi->napi_hash_node);
-	}
+
 	spin_unlock(&napi_hash_lock);
-	return rcu_sync_needed;
 }
-EXPORT_SYMBOL_GPL(napi_hash_del);
 
 static enum hrtimer_restart napi_watchdog(struct hrtimer *timer)
 {
@@ -6653,18 +6648,16 @@ static void flush_gro_hash(struct napi_struct *napi)
 }
 
 /* Must be called in process context */
-void netif_napi_del(struct napi_struct *napi)
+void __netif_napi_del(struct napi_struct *napi)
 {
-	might_sleep();
-	if (napi_hash_del(napi))
-		synchronize_net();
+	napi_hash_del(napi);
 	list_del_init(&napi->dev_list);
 	napi_free_frags(napi);
 
 	flush_gro_hash(napi);
 	napi->gro_bitmask = 0;
 }
-EXPORT_SYMBOL(netif_napi_del);
+EXPORT_SYMBOL(__netif_napi_del);
 
 static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 {
-- 
2.26.2



* [RFC -next 2/3] net: manage napi add/del idempotence explicitly
  2020-08-27 22:32           ` [RFC -next 0/3] " Jakub Kicinski
  2020-08-27 22:32             ` [RFC -next 1/3] net: remove napi_hash_del() from driver-facing API Jakub Kicinski
@ 2020-08-27 22:32             ` Jakub Kicinski
  2020-08-27 22:32             ` [RFC -next 3/3] net: make sure napi_list is safe for RCU traversal Jakub Kicinski
  2 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-27 22:32 UTC (permalink / raw)
  To: eric.dumazet; +Cc: davem, netdev, kernel-team, Jakub Kicinski

To RCUify napi->dev_list we need to replace list_del_init()
with list_del_rcu(). There is no _init() version for RCU for
obvious reasons. Up until now netif_napi_del() was idempotent,
so to make sure it remains so, add a bit which is set when the
NAPI is listed and cleared when it is removed. Since we don't
expect multiple calls to netif_napi_add() to be correct, add
a warning on that side.

Now that napi_hash_add() / napi_hash_del() are only called by
netif_napi_add() / netif_napi_del() we can steal the HASHED bit
for this purpose. We just need to make sure the hash node is
initialized correctly.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 include/linux/netdevice.h |  4 ++--
 net/core/dev.c            | 13 +++++++++----
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 67400efa6f00..55738cd862b5 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -355,7 +355,7 @@ enum {
 	NAPI_STATE_MISSED,	/* reschedule a napi */
 	NAPI_STATE_DISABLE,	/* Disable pending */
 	NAPI_STATE_NPSVC,	/* Netpoll - don't dequeue from poll_list */
-	NAPI_STATE_HASHED,	/* In NAPI hash (busy polling possible) */
+	NAPI_STATE_LISTED,	/* NAPI added to system lists */
 	NAPI_STATE_NO_BUSY_POLL,/* Do not add in napi_hash, no busy polling */
 	NAPI_STATE_IN_BUSY_POLL,/* sk_busy_loop() owns this NAPI */
 };
@@ -365,7 +365,7 @@ enum {
 	NAPIF_STATE_MISSED	 = BIT(NAPI_STATE_MISSED),
 	NAPIF_STATE_DISABLE	 = BIT(NAPI_STATE_DISABLE),
 	NAPIF_STATE_NPSVC	 = BIT(NAPI_STATE_NPSVC),
-	NAPIF_STATE_HASHED	 = BIT(NAPI_STATE_HASHED),
+	NAPIF_STATE_LISTED	 = BIT(NAPI_STATE_LISTED),
 	NAPIF_STATE_NO_BUSY_POLL = BIT(NAPI_STATE_NO_BUSY_POLL),
 	NAPIF_STATE_IN_BUSY_POLL = BIT(NAPI_STATE_IN_BUSY_POLL),
 };
diff --git a/net/core/dev.c b/net/core/dev.c
index d2c6fa24aa23..623060907132 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6533,8 +6533,7 @@ EXPORT_SYMBOL(napi_busy_loop);
 
 static void napi_hash_add(struct napi_struct *napi)
 {
-	if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state) ||
-	    test_and_set_bit(NAPI_STATE_HASHED, &napi->state))
+	if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state))
 		return;
 
 	spin_lock(&napi_hash_lock);
@@ -6559,8 +6558,7 @@ static void napi_hash_del(struct napi_struct *napi)
 {
 	spin_lock(&napi_hash_lock);
 
-	if (test_and_clear_bit(NAPI_STATE_HASHED, &napi->state))
-		hlist_del_rcu(&napi->napi_hash_node);
+	hlist_del_init_rcu(&napi->napi_hash_node);
 
 	spin_unlock(&napi_hash_lock);
 }
@@ -6595,7 +6593,11 @@ static void init_gro_hash(struct napi_struct *napi)
 void netif_napi_add(struct net_device *dev, struct napi_struct *napi,
 		    int (*poll)(struct napi_struct *, int), int weight)
 {
+	if (WARN_ON(test_and_set_bit(NAPI_STATE_LISTED, &napi->state)))
+		return;
+
 	INIT_LIST_HEAD(&napi->poll_list);
+	INIT_HLIST_NODE(&napi->napi_hash_node);
 	hrtimer_init(&napi->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
 	napi->timer.function = napi_watchdog;
 	init_gro_hash(napi);
@@ -6650,6 +6652,9 @@ static void flush_gro_hash(struct napi_struct *napi)
 /* Must be called in process context */
 void __netif_napi_del(struct napi_struct *napi)
 {
+	if (!test_and_clear_bit(NAPI_STATE_LISTED, &napi->state))
+		return;
+
 	napi_hash_del(napi);
 	list_del_init(&napi->dev_list);
 	napi_free_frags(napi);
-- 
2.26.2



* [RFC -next 3/3] net: make sure napi_list is safe for RCU traversal
  2020-08-27 22:32           ` [RFC -next 0/3] " Jakub Kicinski
  2020-08-27 22:32             ` [RFC -next 1/3] net: remove napi_hash_del() from driver-facing API Jakub Kicinski
  2020-08-27 22:32             ` [RFC -next 2/3] net: manage napi add/del idempotence explicitly Jakub Kicinski
@ 2020-08-27 22:32             ` Jakub Kicinski
  2 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2020-08-27 22:32 UTC (permalink / raw)
  To: eric.dumazet; +Cc: davem, netdev, kernel-team, Jakub Kicinski

netpoll needs to traverse dev->napi_list under RCU. Make sure
it uses the right iterator and that removal from this list is
handled safely.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 net/core/dev.c     | 2 +-
 net/core/netpoll.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 623060907132..dfb4a4137eea 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6656,7 +6656,7 @@ void __netif_napi_del(struct napi_struct *napi)
 		return;
 
 	napi_hash_del(napi);
-	list_del_init(&napi->dev_list);
+	list_del_rcu(&napi->dev_list);
 	napi_free_frags(napi);
 
 	flush_gro_hash(napi);
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 2338753e936b..c310c7c1cef7 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -297,7 +297,7 @@ static int netpoll_owner_active(struct net_device *dev)
 {
 	struct napi_struct *napi;
 
-	list_for_each_entry(napi, &dev->napi_list, dev_list) {
+	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
 		if (napi->poll_owner == smp_processor_id())
 			return 1;
 	}
-- 
2.26.2




Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-26 19:40 [PATCH net 0/2] net: fix netpoll crash with bnxt Jakub Kicinski
2020-08-26 19:40 ` [PATCH net 1/2] net: disable netpoll on fresh napis Jakub Kicinski
2020-08-27  7:25   ` Eric Dumazet
2020-08-27 15:10     ` Jakub Kicinski
2020-08-27 15:43       ` Eric Dumazet
2020-08-27 17:47         ` Jakub Kicinski
2020-08-27 22:32           ` [RFC -next 0/3] " Jakub Kicinski
2020-08-27 22:32             ` [RFC -next 1/3] net: remove napi_hash_del() from driver-facing API Jakub Kicinski
2020-08-27 22:32             ` [RFC -next 2/3] net: manage napi add/del idempotence explicitly Jakub Kicinski
2020-08-27 22:32             ` [RFC -next 3/3] net: make sure napi_list is safe for RCU traversal Jakub Kicinski
2020-08-26 19:40 ` [PATCH net 2/2] bnxt: don't enable NAPI until rings are ready Jakub Kicinski
2020-08-26 20:23   ` Michael Chan
2020-08-26 23:17 ` [PATCH net 0/2] net: fix netpoll crash with bnxt David Miller
