* [PATCH net v3] net: fix race between napi kthread mode and busy poll
From: Wei Wang @ 2021-03-02  1:21 UTC (permalink / raw)
  To: Jakub Kicinski, David S . Miller, netdev
  Cc: Martin Zaharinov, Alexander Duyck, Eric Dumazet, Paolo Abeni,
	Hannes Frederic Sowa

Currently, napi_thread_wait() checks for NAPI_STATE_SCHED bit to
determine if the kthread owns this napi and could call napi->poll() on
it. However, if socket busy poll is enabled, it is possible that the
busy poll thread grabs this SCHED bit (after the previous napi->poll()
invokes napi_complete_done() and clears SCHED bit) and tries to poll
on the same napi. napi_disable() could grab the SCHED bit as well.
This patch tries to fix this race by adding a new bit
NAPI_STATE_SCHED_THREADED in napi->state. This bit gets set in
____napi_schedule() if the threaded mode is enabled, and gets cleared
in napi_complete_done(), and we only poll the napi in kthread if this
bit is set. This helps distinguish the ownership of the napi between
kthread and other scenarios and fixes the race issue.

Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support")
Reported-by: Martin Zaharinov <micron10@gmail.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Wei Wang <weiwan@google.com>
Cc: Alexander Duyck <alexanderduyck@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
---
 include/linux/netdevice.h |  2 ++
 net/core/dev.c            | 14 +++++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index ddf4cfc12615..682908707c1a 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -360,6 +360,7 @@ enum {
 	NAPI_STATE_IN_BUSY_POLL,	/* sk_busy_loop() owns this NAPI */
 	NAPI_STATE_PREFER_BUSY_POLL,	/* prefer busy-polling over softirq processing*/
 	NAPI_STATE_THREADED,		/* The poll is performed inside its own thread*/
+	NAPI_STATE_SCHED_THREADED,	/* Napi is currently scheduled in threaded mode */
 };
 
 enum {
@@ -372,6 +373,7 @@ enum {
 	NAPIF_STATE_IN_BUSY_POLL	= BIT(NAPI_STATE_IN_BUSY_POLL),
 	NAPIF_STATE_PREFER_BUSY_POLL	= BIT(NAPI_STATE_PREFER_BUSY_POLL),
 	NAPIF_STATE_THREADED		= BIT(NAPI_STATE_THREADED),
+	NAPIF_STATE_SCHED_THREADED	= BIT(NAPI_STATE_SCHED_THREADED),
 };
 
 enum gro_result {
diff --git a/net/core/dev.c b/net/core/dev.c
index 6c5967e80132..03c4763de351 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4294,6 +4294,8 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
+			if (thread->state != TASK_INTERRUPTIBLE)
+				set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
@@ -6486,6 +6488,7 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
 		WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));
 
 		new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |
+			      NAPIF_STATE_SCHED_THREADED |
 			      NAPIF_STATE_PREFER_BUSY_POLL);
 
 		/* If STATE_MISSED was set, leave STATE_SCHED set,
@@ -6968,16 +6971,25 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 
 static int napi_thread_wait(struct napi_struct *napi)
 {
+	bool woken = false;
+
 	set_current_state(TASK_INTERRUPTIBLE);
 
 	while (!kthread_should_stop() && !napi_disable_pending(napi)) {
-		if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
+		/* Testing SCHED_THREADED bit here to make sure the current
+		 * kthread owns this napi and could poll on this napi.
+		 * Testing SCHED bit is not enough because SCHED bit might be
+		 * set by some other busy poll thread or by napi_disable().
+		 */
+		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
 			WARN_ON(!list_empty(&napi->poll_list));
 			__set_current_state(TASK_RUNNING);
 			return 0;
 		}
 
 		schedule();
+		/* woken being true indicates this thread owns this napi. */
+		woken = true;
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 	__set_current_state(TASK_RUNNING);
-- 
2.30.1.766.gb4fecdf3b7-goog



* Re: [PATCH net v3] net: fix race between napi kthread mode and busy poll
From: Jakub Kicinski @ 2021-03-03  0:18 UTC (permalink / raw)
  To: Wei Wang
  Cc: David S . Miller, netdev, Martin Zaharinov, Alexander Duyck,
	Eric Dumazet, Paolo Abeni, Hannes Frederic Sowa

On Mon,  1 Mar 2021 17:21:13 -0800 Wei Wang wrote:
> Currently, napi_thread_wait() checks for NAPI_STATE_SCHED bit to
> determine if the kthread owns this napi and could call napi->poll() on
> it. However, if socket busy poll is enabled, it is possible that the
> busy poll thread grabs this SCHED bit (after the previous napi->poll()
> invokes napi_complete_done() and clears SCHED bit) and tries to poll
> on the same napi. napi_disable() could grab the SCHED bit as well.
> This patch tries to fix this race by adding a new bit
> NAPI_STATE_SCHED_THREADED in napi->state. This bit gets set in
> ____napi_schedule() if the threaded mode is enabled, and gets cleared
> in napi_complete_done(), and we only poll the napi in kthread if this
> bit is set. This helps distinguish the ownership of the napi between
> kthread and other scenarios and fixes the race issue.
> 
> Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support")
> Reported-by: Martin Zaharinov <micron10@gmail.com>
> Suggested-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Wei Wang <weiwan@google.com>
> Cc: Alexander Duyck <alexanderduyck@fb.com>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Paolo Abeni <pabeni@redhat.com>
> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Thanks!


* Re: [PATCH net v3] net: fix race between napi kthread mode and busy poll
From: Eric Dumazet @ 2021-03-03  9:53 UTC (permalink / raw)
  To: Wei Wang
  Cc: Jakub Kicinski, David S . Miller, netdev, Martin Zaharinov,
	Alexander Duyck, Paolo Abeni, Hannes Frederic Sowa

On Tue, Mar 2, 2021 at 2:21 AM Wei Wang <weiwan@google.com> wrote:
>
> Currently, napi_thread_wait() checks for NAPI_STATE_SCHED bit to
> determine if the kthread owns this napi and could call napi->poll() on
> it. However, if socket busy poll is enabled, it is possible that the
> busy poll thread grabs this SCHED bit (after the previous napi->poll()
> invokes napi_complete_done() and clears SCHED bit) and tries to poll
> on the same napi. napi_disable() could grab the SCHED bit as well.
> This patch tries to fix this race by adding a new bit
> NAPI_STATE_SCHED_THREADED in napi->state. This bit gets set in
> ____napi_schedule() if the threaded mode is enabled, and gets cleared
> in napi_complete_done(), and we only poll the napi in kthread if this
> bit is set. This helps distinguish the ownership of the napi between
> kthread and other scenarios and fixes the race issue.
>
> Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support")
> Reported-by: Martin Zaharinov <micron10@gmail.com>
> Suggested-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Wei Wang <weiwan@google.com>
> Cc: Alexander Duyck <alexanderduyck@fb.com>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Paolo Abeni <pabeni@redhat.com>
> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
> ---
>  include/linux/netdevice.h |  2 ++
>  net/core/dev.c            | 14 +++++++++++++-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index ddf4cfc12615..682908707c1a 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -360,6 +360,7 @@ enum {
>         NAPI_STATE_IN_BUSY_POLL,        /* sk_busy_loop() owns this NAPI */
>         NAPI_STATE_PREFER_BUSY_POLL,    /* prefer busy-polling over softirq processing*/
>         NAPI_STATE_THREADED,            /* The poll is performed inside its own thread*/
> +       NAPI_STATE_SCHED_THREADED,      /* Napi is currently scheduled in threaded mode */
>  };
>
>  enum {
> @@ -372,6 +373,7 @@ enum {
>         NAPIF_STATE_IN_BUSY_POLL        = BIT(NAPI_STATE_IN_BUSY_POLL),
>         NAPIF_STATE_PREFER_BUSY_POLL    = BIT(NAPI_STATE_PREFER_BUSY_POLL),
>         NAPIF_STATE_THREADED            = BIT(NAPI_STATE_THREADED),
> +       NAPIF_STATE_SCHED_THREADED      = BIT(NAPI_STATE_SCHED_THREADED),
>  };
>
>  enum gro_result {
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 6c5967e80132..03c4763de351 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4294,6 +4294,8 @@ static inline void ____napi_schedule(struct softnet_data *sd,
>                  */
>                 thread = READ_ONCE(napi->thread);
>                 if (thread) {
> +                       if (thread->state != TASK_INTERRUPTIBLE)

How safe is this read ?

Presumably KMSAN will detect that another cpu/thread is able to change
thread->state under us,
so a READ_ONCE() (or data_race()) would be needed.

Nowhere else in the kernel can we find a similar construct; I find it
unfortunate to bury in net/core/dev.c something that might be incorrect
in the future.

> +                               set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
>                         wake_up_process(thread);
>                         return;
>                 }
> @@ -6486,6 +6488,7 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
>                 WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));
>
>                 new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |
> +                             NAPIF_STATE_SCHED_THREADED |
>                               NAPIF_STATE_PREFER_BUSY_POLL);
>
>                 /* If STATE_MISSED was set, leave STATE_SCHED set,
> @@ -6968,16 +6971,25 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
>
>  static int napi_thread_wait(struct napi_struct *napi)
>  {
> +       bool woken = false;
> +
>         set_current_state(TASK_INTERRUPTIBLE);
>
>         while (!kthread_should_stop() && !napi_disable_pending(napi)) {
> -               if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
> +               /* Testing SCHED_THREADED bit here to make sure the current
> +                * kthread owns this napi and could poll on this napi.
> +                * Testing SCHED bit is not enough because SCHED bit might be
> +                * set by some other busy poll thread or by napi_disable().
> +                */
> +               if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
>                         WARN_ON(!list_empty(&napi->poll_list));
>                         __set_current_state(TASK_RUNNING);
>                         return 0;
>                 }
>
>                 schedule();
> +               /* woken being true indicates this thread owns this napi. */
> +               woken = true;
>                 set_current_state(TASK_INTERRUPTIBLE);
>         }
>         __set_current_state(TASK_RUNNING);
> --
> 2.30.1.766.gb4fecdf3b7-goog
>


* Re: [PATCH net v3] net: fix race between napi kthread mode and busy poll
From: Eric Dumazet @ 2021-03-03  9:55 UTC (permalink / raw)
  To: Wei Wang
  Cc: Jakub Kicinski, David S . Miller, netdev, Martin Zaharinov,
	Alexander Duyck, Paolo Abeni, Hannes Frederic Sowa

On Wed, Mar 3, 2021 at 10:53 AM Eric Dumazet <edumazet@google.com> wrote:
>
> On Tue, Mar 2, 2021 at 2:21 AM Wei Wang <weiwan@google.com> wrote:
> >
> > Currently, napi_thread_wait() checks for NAPI_STATE_SCHED bit to
> > determine if the kthread owns this napi and could call napi->poll() on
> > it. However, if socket busy poll is enabled, it is possible that the
> > busy poll thread grabs this SCHED bit (after the previous napi->poll()
> > invokes napi_complete_done() and clears SCHED bit) and tries to poll
> > on the same napi. napi_disable() could grab the SCHED bit as well.
> > This patch tries to fix this race by adding a new bit
> > NAPI_STATE_SCHED_THREADED in napi->state. This bit gets set in
> > ____napi_schedule() if the threaded mode is enabled, and gets cleared
> > in napi_complete_done(), and we only poll the napi in kthread if this
> > bit is set. This helps distinguish the ownership of the napi between
> > kthread and other scenarios and fixes the race issue.
> >
> > Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support")
> > Reported-by: Martin Zaharinov <micron10@gmail.com>
> > Suggested-by: Jakub Kicinski <kuba@kernel.org>
> > Signed-off-by: Wei Wang <weiwan@google.com>
> > Cc: Alexander Duyck <alexanderduyck@fb.com>
> > Cc: Eric Dumazet <edumazet@google.com>
> > Cc: Paolo Abeni <pabeni@redhat.com>
> > Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
> > ---
> >  include/linux/netdevice.h |  2 ++
> >  net/core/dev.c            | 14 +++++++++++++-
> >  2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > index ddf4cfc12615..682908707c1a 100644
> > --- a/include/linux/netdevice.h
> > +++ b/include/linux/netdevice.h
> > @@ -360,6 +360,7 @@ enum {
> >         NAPI_STATE_IN_BUSY_POLL,        /* sk_busy_loop() owns this NAPI */
> >         NAPI_STATE_PREFER_BUSY_POLL,    /* prefer busy-polling over softirq processing*/
> >         NAPI_STATE_THREADED,            /* The poll is performed inside its own thread*/
> > +       NAPI_STATE_SCHED_THREADED,      /* Napi is currently scheduled in threaded mode */
> >  };
> >
> >  enum {
> > @@ -372,6 +373,7 @@ enum {
> >         NAPIF_STATE_IN_BUSY_POLL        = BIT(NAPI_STATE_IN_BUSY_POLL),
> >         NAPIF_STATE_PREFER_BUSY_POLL    = BIT(NAPI_STATE_PREFER_BUSY_POLL),
> >         NAPIF_STATE_THREADED            = BIT(NAPI_STATE_THREADED),
> > +       NAPIF_STATE_SCHED_THREADED      = BIT(NAPI_STATE_SCHED_THREADED),
> >  };
> >
> >  enum gro_result {
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 6c5967e80132..03c4763de351 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4294,6 +4294,8 @@ static inline void ____napi_schedule(struct softnet_data *sd,
> >                  */
> >                 thread = READ_ONCE(napi->thread);
> >                 if (thread) {
> > +                       if (thread->state != TASK_INTERRUPTIBLE)
>
> How safe is this read ?
>
> Presumably KMSAN will detect that another cpu/thread is able to change
> thread->state under us,
> so a READ_ONCE() (or data_race()) would be needed.
>

Of course I meant KCSAN here.

> Nowhere else in the kernel can we find a similar construct; I find it
> unfortunate to bury in net/core/dev.c something that might be incorrect
> in the future.
>
> > +                               set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
> >                         wake_up_process(thread);
> >                         return;
> >                 }
> > @@ -6486,6 +6488,7 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
> >                 WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));
> >
> >                 new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |
> > +                             NAPIF_STATE_SCHED_THREADED |
> >                               NAPIF_STATE_PREFER_BUSY_POLL);
> >
> >                 /* If STATE_MISSED was set, leave STATE_SCHED set,
> > @@ -6968,16 +6971,25 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
> >
> >  static int napi_thread_wait(struct napi_struct *napi)
> >  {
> > +       bool woken = false;
> > +
> >         set_current_state(TASK_INTERRUPTIBLE);
> >
> >         while (!kthread_should_stop() && !napi_disable_pending(napi)) {
> > -               if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
> > +               /* Testing SCHED_THREADED bit here to make sure the current
> > +                * kthread owns this napi and could poll on this napi.
> > +                * Testing SCHED bit is not enough because SCHED bit might be
> > +                * set by some other busy poll thread or by napi_disable().
> > +                */
> > +               if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
> >                         WARN_ON(!list_empty(&napi->poll_list));
> >                         __set_current_state(TASK_RUNNING);
> >                         return 0;
> >                 }
> >
> >                 schedule();
> > +               /* woken being true indicates this thread owns this napi. */
> > +               woken = true;
> >                 set_current_state(TASK_INTERRUPTIBLE);
> >         }
> >         __set_current_state(TASK_RUNNING);
> > --
> > 2.30.1.766.gb4fecdf3b7-goog
> >


* Re: [PATCH net v3] net: fix race between napi kthread mode and busy poll
From: Wei Wang @ 2021-03-03 21:46 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Jakub Kicinski, David S . Miller, netdev, Martin Zaharinov,
	Alexander Duyck, Paolo Abeni, Hannes Frederic Sowa

On Wed, Mar 3, 2021 at 1:55 AM Eric Dumazet <edumazet@google.com> wrote:
>
> On Wed, Mar 3, 2021 at 10:53 AM Eric Dumazet <edumazet@google.com> wrote:
> >
> > On Tue, Mar 2, 2021 at 2:21 AM Wei Wang <weiwan@google.com> wrote:
> > >
> > > Currently, napi_thread_wait() checks for NAPI_STATE_SCHED bit to
> > > determine if the kthread owns this napi and could call napi->poll() on
> > > it. However, if socket busy poll is enabled, it is possible that the
> > > busy poll thread grabs this SCHED bit (after the previous napi->poll()
> > > invokes napi_complete_done() and clears SCHED bit) and tries to poll
> > > on the same napi. napi_disable() could grab the SCHED bit as well.
> > > This patch tries to fix this race by adding a new bit
> > > NAPI_STATE_SCHED_THREADED in napi->state. This bit gets set in
> > > ____napi_schedule() if the threaded mode is enabled, and gets cleared
> > > in napi_complete_done(), and we only poll the napi in kthread if this
> > > bit is set. This helps distinguish the ownership of the napi between
> > > kthread and other scenarios and fixes the race issue.
> > >
> > > Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support")
> > > Reported-by: Martin Zaharinov <micron10@gmail.com>
> > > Suggested-by: Jakub Kicinski <kuba@kernel.org>
> > > Signed-off-by: Wei Wang <weiwan@google.com>
> > > Cc: Alexander Duyck <alexanderduyck@fb.com>
> > > Cc: Eric Dumazet <edumazet@google.com>
> > > Cc: Paolo Abeni <pabeni@redhat.com>
> > > Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
> > > ---
> > >  include/linux/netdevice.h |  2 ++
> > >  net/core/dev.c            | 14 +++++++++++++-
> > >  2 files changed, 15 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > > index ddf4cfc12615..682908707c1a 100644
> > > --- a/include/linux/netdevice.h
> > > +++ b/include/linux/netdevice.h
> > > @@ -360,6 +360,7 @@ enum {
> > >         NAPI_STATE_IN_BUSY_POLL,        /* sk_busy_loop() owns this NAPI */
> > >         NAPI_STATE_PREFER_BUSY_POLL,    /* prefer busy-polling over softirq processing*/
> > >         NAPI_STATE_THREADED,            /* The poll is performed inside its own thread*/
> > > +       NAPI_STATE_SCHED_THREADED,      /* Napi is currently scheduled in threaded mode */
> > >  };
> > >
> > >  enum {
> > > @@ -372,6 +373,7 @@ enum {
> > >         NAPIF_STATE_IN_BUSY_POLL        = BIT(NAPI_STATE_IN_BUSY_POLL),
> > >         NAPIF_STATE_PREFER_BUSY_POLL    = BIT(NAPI_STATE_PREFER_BUSY_POLL),
> > >         NAPIF_STATE_THREADED            = BIT(NAPI_STATE_THREADED),
> > > +       NAPIF_STATE_SCHED_THREADED      = BIT(NAPI_STATE_SCHED_THREADED),
> > >  };
> > >
> > >  enum gro_result {
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index 6c5967e80132..03c4763de351 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -4294,6 +4294,8 @@ static inline void ____napi_schedule(struct softnet_data *sd,
> > >                  */
> > >                 thread = READ_ONCE(napi->thread);
> > >                 if (thread) {
> > > +                       if (thread->state != TASK_INTERRUPTIBLE)
> >
> > How safe is this read ?
> >
> > Presumably KMSAN will detect that another cpu/thread is able to change
> > thread->state under us,
> > so a READ_ONCE() (or data_race()) would be needed.
> >
>
> Of course I meant KCSAN here.
>
> > Nowhere else in the kernel can we find a similar construct; I find it
> > unfortunate to bury in net/core/dev.c something that might be incorrect
> > in the future.
> >
Indeed. It seems not much other code reads and checks the thread state,
so I am not sure what risk is involved in doing this.
The reason to check the state before setting the bit is to avoid an
atomic operation when the kthread is already sleeping. The tests I ran
did show it working properly, but the workload I tested does not cover
all the scenarios.
Not sure what to do here. Should we remove the if () check and
unconditionally set the SCHED_THREADED bit?


> > > +                               set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
> > >                         wake_up_process(thread);
> > >                         return;
> > >                 }
> > > @@ -6486,6 +6488,7 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
> > >                 WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));
> > >
> > >                 new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |
> > > +                             NAPIF_STATE_SCHED_THREADED |
> > >                               NAPIF_STATE_PREFER_BUSY_POLL);
> > >
> > >                 /* If STATE_MISSED was set, leave STATE_SCHED set,
> > > @@ -6968,16 +6971,25 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
> > >
> > >  static int napi_thread_wait(struct napi_struct *napi)
> > >  {
> > > +       bool woken = false;
> > > +
> > >         set_current_state(TASK_INTERRUPTIBLE);
> > >
> > >         while (!kthread_should_stop() && !napi_disable_pending(napi)) {
> > > -               if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
> > > +               /* Testing SCHED_THREADED bit here to make sure the current
> > > +                * kthread owns this napi and could poll on this napi.
> > > +                * Testing SCHED bit is not enough because SCHED bit might be
> > > +                * set by some other busy poll thread or by napi_disable().
> > > +                */
> > > +               if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
> > >                         WARN_ON(!list_empty(&napi->poll_list));
> > >                         __set_current_state(TASK_RUNNING);
> > >                         return 0;
> > >                 }
> > >
> > >                 schedule();
> > > +               /* woken being true indicates this thread owns this napi. */
> > > +               woken = true;
> > >                 set_current_state(TASK_INTERRUPTIBLE);
> > >         }
> > >         __set_current_state(TASK_RUNNING);
> > > --
> > > 2.30.1.766.gb4fecdf3b7-goog
> > >

