* [PATCH v2] wait: add comment before waitqueue_active noting memory barrier is required
From: Kosuke Tatsukawa @ 2015-10-22  8:01 UTC
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel

This patch adds a comment before waitqueue_active noting that memory
barriers are required.

In the following code, the wake_up thread might fail to wake up the
waiting thread and leave it sleeping, due to the lack of memory barriers.

     wake_up thread                 waiting thread
------------------------------------------------------------------------
CONDITION = 1;                  add_wait_queue(wq, &wait);
if (waitqueue_active(wq))       for (;;) {
        wake_up(wq);                    if (CONDITION)
                                                break;
                                        wait_woken(&wait, ...);
                                }
------------------------------------------------------------------------
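The same pattern written out as kernel-style C (an illustrative,
untested sketch; wq is the wait_queue_head_t pointer from the figure
and CONDITION stands in for whatever shared flag the code uses):

/* wake_up thread */
CONDITION = 1;
if (waitqueue_active(wq))	/* racy without a barrier; see below */
	wake_up(wq);

/* waiting thread */
DEFINE_WAIT_FUNC(wait, woken_wake_function);

add_wait_queue(wq, &wait);
for (;;) {
	if (CONDITION)
		break;
	wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
}
remove_wait_queue(wq, &wait);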

There are two problems that can occur.
First, on the wake_up thread side, the CPU can reorder the
waitqueue_active() load to happen before the store to CONDITION.
     wake_up thread                 waiting thread
       (reordered)
------------------------------------------------------------------------
if (waitqueue_active(wq))
                                add_wait_queue(wq, &wait);
                                for (;;) {
                                        if (CONDITION)
                                                break;
CONDITION = 1;
                                        wait_woken(&wait, ...);
                                }
------------------------------------------------------------------------

Second, on the waiting thread side, the CPU can reorder the load of
CONDITION to occur while add_wait_queue() is active, before the entry
is added to the wait queue.
     wake_up thread                 waiting thread
                                      (reordered)
------------------------------------------------------------------------
                                spin_lock_irqsave(...)      <add_wait_queue>
                                if (CONDITION)
CONDITION = 1;
if (waitqueue_active(wq))
                                __add_wait_queue(...)       <add_wait_queue>
                                spin_unlock_irqrestore(...) <add_wait_queue>
                                wait_woken(&wait, ...);
------------------------------------------------------------------------

Both problems can be fixed by removing the waitqueue_active() call at
the cost of calling spin_lock() and spin_unlock() even when the queue
is empty.
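In code, that fix is simply (sketch):

/*
 * wake_up thread: no lockless check.  wake_up() always takes
 * wq->lock internally, which serializes against add_wait_queue()
 * on the waiting side.
 */
CONDITION = 1;
wake_up(wq);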

However, if that is too expensive, the reordering could be prevented by
adding memory barriers in the following places.
     wake_up thread                 waiting thread
------------------------------------------------------------------------
CONDITION = 1;                  add_wait_queue(wq, &wait);
smp_mb();                       smp_mb();
if (waitqueue_active(wq))       for (;;) {
        wake_up(wq);                    if (CONDITION)
                                                break;
                                        wait_woken(&wait, ...);
                                }
------------------------------------------------------------------------
If the waiting thread uses prepare_to_wait() or wait_event*() instead of
directly calling add_wait_queue(), set_current_state() called within
those functions contains the necessary memory barrier.  The memory
barrier in the wake_up thread is still needed.
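For example (sketch; the waiting side gets its barrier from the
set_current_state() issued inside wait_event() before the condition
is tested):

/* wake_up thread */
CONDITION = 1;
smp_mb();	/* pairs with set_current_state() on the waiting side */
if (waitqueue_active(wq))
	wake_up(wq);

/* waiting thread; wait_event() takes the head itself, hence *wq */
wait_event(*wq, CONDITION);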

There were several places in the Linux kernel source code which lacked
this memory barrier.  Hopefully, the comment will make people using
waitqueue_active() a little more cautious.

Signed-off-by: Kosuke Tatsukawa <tatsu@ab.jp.nec.com>
---
 include/linux/wait.h |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 1e1bf9f..4a4c6fc 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -102,6 +102,19 @@ init_waitqueue_func_entry(wait_queue_t *q, wait_queue_func_t func)
 	q->func		= func;
 }
 
+/*
+ * Note: When adding waitqueue_active before calling wake_up for
+ * optimization, some sort of memory barrier is required on SMP so
+ * that the waiting thread does not miss the wake up.
+ *
+ * A memory barrier is required before waitqueue_active to prevent
+ * waitqueue_active from being reordered by the CPU before any writes
+ * done prior to it.
+ *
+ * The waiting side also needs a memory barrier which pairs with the
+ * wake_up side.  If prepare_to_wait() or wait_event*() is used, they
+ * contain the memory barrier in set_current_state().
+ */
 static inline int waitqueue_active(wait_queue_head_t *q)
 {
 	return !list_empty(&q->task_list);
-- 
1.7.1

* Re: [PATCH v2] wait: add comment before waitqueue_active noting memory barrier is required
From: Peter Zijlstra @ 2015-10-22 12:56 UTC
  To: Kosuke Tatsukawa; +Cc: Ingo Molnar, linux-kernel

On Thu, Oct 22, 2015 at 08:01:37AM +0000, Kosuke Tatsukawa wrote:

It's somewhat unfortunate you chose the whole wait_woken() thing, it's
'rare'.

> Second, on the waiting thread side, the CPU can reorder the load of
> CONDITION to occur while add_wait_queue() is active, before the entry
> is added to the wait queue.
>      wake_up thread                 waiting thread
>                                       (reordered)
> ------------------------------------------------------------------------
>                                 spin_lock_irqsave(...)      <add_wait_queue>
>                                 if (CONDITION)
> CONDITION = 1;
> if (waitqueue_active(wq))
	wake_up();
>                                 __add_wait_queue(...)       <add_wait_queue>
>                                 spin_unlock_irqrestore(...) <add_wait_queue>
>                                 wait_woken(&wait, ...);
> ------------------------------------------------------------------------

This isn't actually a problem IIRC, because wait_woken() will test
WQ_FLAG_WOKEN and not actually sleep.
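For reference, wait_woken() does roughly the following (a paraphrase
of kernel/sched/wait.c, not the verbatim source):

long wait_woken(wait_queue_t *wait, unsigned mode, long timeout)
{
	set_current_state(mode);
	/*
	 * woken_wake_function() sets WQ_FLAG_WOKEN, so a wakeup that
	 * has already been delivered makes us skip the sleep.
	 */
	if (!(wait->flags & WQ_FLAG_WOKEN))
		timeout = schedule_timeout(timeout);
	__set_current_state(TASK_RUNNING);
	smp_store_mb(wait->flags, wait->flags & ~WQ_FLAG_WOKEN);
	return timeout;
}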

> However, if that is too expensive, the reordering could be prevented by
> adding memory barriers in the following places.
>      wake_up thread                 waiting thread
> ------------------------------------------------------------------------
> CONDITION = 1;                  add_wait_queue(wq, &wait);
> smp_mb();                       smp_mb();
> if (waitqueue_active(wq))       for (;;) {
>         wake_up(wq);                    if (CONDITION)
>                                                 break;
>                                         wait_woken(&wait, ...);
>                                 }

So for wait_woken, WQ_FLAG_WOKEN should 'fix' that, and for pretty much
anything else you must have a set_current_state() before testing
CONDITION and you're good (as you state elsewhere).
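That is, the classic open-coded form (illustrative sketch):

DECLARE_WAITQUEUE(wait, current);

add_wait_queue(wq, &wait);
for (;;) {
	set_current_state(TASK_UNINTERRUPTIBLE);	/* implies smp_mb() */
	if (CONDITION)
		break;
	schedule();
}
__set_current_state(TASK_RUNNING);
remove_wait_queue(wq, &wait);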

> +++ b/include/linux/wait.h
> @@ -102,6 +102,19 @@ init_waitqueue_func_entry(wait_queue_t *q, wait_queue_func_t func)
>  	q->func		= func;
>  }
>  
> +/*
> + * Note: When adding waitqueue_active before calling wake_up for
> + * optimization, some sort of memory barrier is required on SMP so
> + * that the waiting thread does not miss the wake up.
> + *
> + * A memory barrier is required before waitqueue_active to prevent
> + * waitqueue_active from being reordered by the CPU before any writes
> + * done prior to it.
> + *
> + * The waiting side also needs a memory barrier which pairs with the
> + * wake_up side.  If prepare_to_wait() or wait_event*() is used, they
> + * contain the memory barrier in set_current_state().
> + */
>  static inline int waitqueue_active(wait_queue_head_t *q)
>  {
>  	return !list_empty(&q->task_list);

How about something like:

/**
 * waitqueue_active -- locklessly test for waiters on the queue
 * @q: the waitqueue to test for waiters
 *
 * returns true if the wait list is not empty
 *
 * NOTE: this function is lockless and requires care, incorrect usage
 * _will_ lead to sporadic and non-obvious failure.
 *
 * Use either while holding wait_queue_head_t::lock or when used for
 * wakeups with an extra smp_mb() like:
 *
 *	CPU0 - waker			CPU1 - waiter
 *
 *					for (;;) {
 *	@cond = true;                     prepare_to_wait(&wq, &wait, state);
 *	smp_mb();                         // smp_mb() from set_current_state()
 *	if (waitqueue_active(wq))         if (@cond)
 *	  wake_up(wq);                      break;
 *                                        schedule();
 *                                      }
 *
 * Because without the explicit smp_mb() it's possible for the
 * waitqueue_active() load to get hoisted over the @cond store such that
 * we'll observe an empty wait list while the waiter might not observe
 * @cond.
 *
 * Also note that this 'optimization' trades a spin_lock() for an
 * smp_mb(), which (when the lock is uncontended) are of roughly equal
 * cost.
 */

Does that work for you?




* Re: [PATCH v2] wait: add comment before waitqueue_active noting memory barrier is required
From: Kosuke Tatsukawa @ 2015-10-22 23:18 UTC
  To: Peter Zijlstra; +Cc: Ingo Molnar, linux-kernel

Peter Zijlstra wrote:
> On Thu, Oct 22, 2015 at 08:01:37AM +0000, Kosuke Tatsukawa wrote:
> 
> It's somewhat unfortunate you chose the whole wait_woken() thing, it's
> 'rare'.

Yes.  I first noticed this lack of a memory barrier before
waitqueue_active() in drivers/tty/n_tty.c, which was using
wait_woken().  However, other places were mostly using prepare_to_wait()
or wait_event*(), so wait_woken() is 'rare'.


>> Second, on the waiting thread side, the CPU can reorder the load of
>> CONDITION to occur while add_wait_queue() is active, before the entry
>> is added to the wait queue.
>>      wake_up thread                 waiting thread
>>                                       (reordered)
>> ------------------------------------------------------------------------
>>                                 spin_lock_irqsave(...)      <add_wait_queue>
>>                                 if (CONDITION)
>> CONDITION = 1;
>> if (waitqueue_active(wq))
> 	wake_up();
>>                                 __add_wait_queue(...)       <add_wait_queue>
>>                                 spin_unlock_irqrestore(...) <add_wait_queue>
>>                                 wait_woken(&wait, ...);
>> ------------------------------------------------------------------------
> 
> This isn't actually a problem IIRC, because wait_woken() will test
> WQ_FLAG_WOKEN and not actually sleep.

In the above figure, waitqueue_active(wq) will return 0 (queue is
inactive) and the whole wake_up() call will be skipped, because
__add_wait_queue() hasn't been called yet.  This can actually be
triggered with a reproducer.


>> However, if that is too expensive, the reordering could be prevented by
>> adding memory barriers in the following places.
>>      wake_up thread                 waiting thread
>> ------------------------------------------------------------------------
>> CONDITION = 1;                  add_wait_queue(wq, &wait);
>> smp_mb();                       smp_mb();
>> if (waitqueue_active(wq))       for (;;) {
>>         wake_up(wq);                    if (CONDITION)
>>                                                 break;
>>                                         wait_woken(&wait, ...);
>>                                 }
> 
> So for wait_woken, WQ_FLAG_WOKEN should 'fix' that, and for pretty much
> anything else you must have a set_current_state() before testing
> CONDITION and you're good (as you state elsewhere).

wait_woken() calls set_current_state(), but that is after the CONDITION
test.


>> +++ b/include/linux/wait.h
>> @@ -102,6 +102,19 @@ init_waitqueue_func_entry(wait_queue_t *q, wait_queue_func_t func)
>>  	q->func		= func;
>>  }
>>  
>> +/*
>> + * Note: When adding waitqueue_active before calling wake_up for
>> + * optimization, some sort of memory barrier is required on SMP so
>> + * that the waiting thread does not miss the wake up.
>> + *
>> + * A memory barrier is required before waitqueue_active to prevent
>> + * waitqueue_active from being reordered by the CPU before any writes
>> + * done prior to it.
>> + *
>> + * The waiting side also needs a memory barrier which pairs with the
>> + * wake_up side.  If prepare_to_wait() or wait_event*() is used, they
>> + * contain the memory barrier in set_current_state().
>> + */
>>  static inline int waitqueue_active(wait_queue_head_t *q)
>>  {
>>  	return !list_empty(&q->task_list);
> 
> How about something like:
> 
> /**
>  * waitqueue_active -- locklessly test for waiters on the queue
>  * @q: the waitqueue to test for waiters
>  *
>  * returns true if the wait list is not empty
>  *
>  * NOTE: this function is lockless and requires care, incorrect usage
>  * _will_ lead to sporadic and non-obvious failure.
>  *
>  * Use either while holding wait_queue_head_t::lock or when used for
>  * wakeups with an extra smp_mb() like:
>  *
>  *	CPU0 - waker			CPU1 - waiter
>  *
>  *					for (;;) {
>  *	@cond = true;                     prepare_to_wait(&wq, &wait, state);
>  *	smp_mb();                         // smp_mb() from set_current_state()
>  *	if (waitqueue_active(wq))         if (@cond)
>  *	  wake_up(wq);                      break;
>  *                                        schedule();
>  *                                      }
>  *
>  * Because without the explicit smp_mb() it's possible for the
>  * waitqueue_active() load to get hoisted over the @cond store such that
>  * we'll observe an empty wait list while the waiter might not observe
>  * @cond.
>  *
>  * Also note that this 'optimization' trades a spin_lock() for an
>  * smp_mb(), which (when the lock is uncontended) are of roughly equal
>  * cost.
>  */
> 
> Does that work for you?

Yes.  Considering that the use of wait_woken is pretty rare, I think the
explanation is more focused and easier to understand this way.

Best regards.
---
Kosuke TATSUKAWA  | 3rd IT Platform Department
                  | IT Platform Division, NEC Corporation
                  | tatsu@ab.jp.nec.com

* Re: [PATCH v2] wait: add comment before waitqueue_active noting memory barrier is required
From: Peter Zijlstra @ 2015-10-23 12:40 UTC
  To: Kosuke Tatsukawa; +Cc: Ingo Molnar, linux-kernel

On Thu, Oct 22, 2015 at 11:18:33PM +0000, Kosuke Tatsukawa wrote:
> Peter Zijlstra wrote:
> > On Thu, Oct 22, 2015 at 08:01:37AM +0000, Kosuke Tatsukawa wrote:
> > 
> > It's somewhat unfortunate you chose the whole wait_woken() thing, it's
> > 'rare'.
> 
> Yes.  I first noticed this lack of a memory barrier before
> waitqueue_active() in drivers/tty/n_tty.c, which was using
> wait_woken().  However, other places were mostly using prepare_to_wait()
> or wait_event*(), so wait_woken() is 'rare'.

Which I no doubt introduced there (the wait_woken thing), and it would
have been nice if I'd been Cc'd on that discussion.

In any case, I found the patch in -next, and dropping the
waitqueue_active() thing is indeed the sane solution.  It will serialize
everything on the queue lock.

> >> Second, on the waiting thread side, the CPU can reorder the load of
> >> CONDITION to occur while add_wait_queue() is active, before the entry
> >> is added to the wait queue.
> >>      wake_up thread                 waiting thread
> >>                                       (reordered)
> >> ------------------------------------------------------------------------
> >>                                 spin_lock_irqsave(...)      <add_wait_queue>
> >>                                 if (CONDITION)
> >> CONDITION = 1;
> >> if (waitqueue_active(wq))
> > 	wake_up();
> >>                                 __add_wait_queue(...)       <add_wait_queue>
> >>                                 spin_unlock_irqrestore(...) <add_wait_queue>
> >>                                 wait_woken(&wait, ...);
> >> ------------------------------------------------------------------------
> > 
> > This isn't actually a problem IIRC, because wait_woken() will test
> > WQ_FLAG_WOKEN and not actually sleep.
> 
> In the above figure, waitqueue_active(wq) will return 0 (queue is
> inactive) and the whole wake_up() call will be skipped, because
> __add_wait_queue() hasn't been called yet.  This can actually be
> triggered with a reproducer.

Duh, indeed.

> > Does that work for you?
> 
> Yes.  Considering that the use of wait_woken is pretty rare, I think the
> explanation is more focused and easier to understand this way.

OK, thanks, I'll queue the below.

---
Subject: sched, wait: Document waitqueue_active
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri Oct 23 14:32:34 CEST 2015

Kosuke reports that there were a fair number of buggy
waitqueue_active() users and this function deserves a big comment in
order to avoid growing more.

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Kosuke Tatsukawa <tatsu@ab.jp.nec.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/wait.h |   30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -102,6 +102,36 @@ init_waitqueue_func_entry(wait_queue_t *
 	q->func		= func;
 }
 
+/**
+ * waitqueue_active -- locklessly test for waiters on the queue
+ * @q: the waitqueue to test for waiters
+ *
+ * returns true if the wait list is not empty
+ *
+ * NOTE: this function is lockless and requires care, incorrect usage _will_
+ * lead to sporadic and non-obvious failure.
+ *
+ * Use either while holding wait_queue_head_t::lock or when used for wakeups
+ * with an extra smp_mb() like:
+ *
+ *      CPU0 - waker                    CPU1 - waiter
+ *
+ *                                      for (;;) {
+ *      @cond = true;                     prepare_to_wait(&wq, &wait, state);
+ *      smp_mb();                         // smp_mb() from set_current_state()
+ *      if (waitqueue_active(wq))         if (@cond)
+ *        wake_up(wq);                      break;
+ *                                        schedule();
+ *                                      }
+ *                                      finish_wait(&wq, &wait);
+ *
+ * Because without the explicit smp_mb() it's possible for the
+ * waitqueue_active() load to get hoisted over the @cond store such that we'll
+ * observe an empty wait list while the waiter might not observe @cond.
+ *
+ * Also note that this 'optimization' trades a spin_lock() for an smp_mb(),
+ * which (when the lock is uncontended) are of roughly equal cost.
+ */
 static inline int waitqueue_active(wait_queue_head_t *q)
 {
 	return !list_empty(&q->task_list);

* Re: [PATCH v2] wait: add comment before waitqueue_active noting memory barrier is required
From: Herbert Xu @ 2015-11-11  9:48 UTC
  To: Peter Zijlstra
  Cc: tatsu, mingo, linux-kernel, David S. Miller, netdev, Jiri Olsa,
	Eric Dumazet

Peter Zijlstra <peterz@infradead.org> wrote:
>
>> >> Second, on the waiting thread side, the CPU can reorder the load of
>> >> CONDITION to occur while add_wait_queue() is active, before the entry
>> >> is added to the wait queue.
>> >>      wake_up thread                 waiting thread
>> >>                                       (reordered)
>> >> ------------------------------------------------------------------------
>> >>                                 spin_lock_irqsave(...)      <add_wait_queue>
>> >>                                 if (CONDITION)
>> >> CONDITION = 1;
>> >> if (waitqueue_active(wq))
>> >     wake_up();
>> >>                                 __add_wait_queue(...)       <add_wait_queue>
>> >>                                 spin_unlock_irqrestore(...) <add_wait_queue>
>> >>                                 wait_woken(&wait, ...);
>> >> ------------------------------------------------------------------------
>> > 
>> > This isn't actually a problem IIRC, because wait_woken() will test
>> > WQ_FLAG_WOKEN and not actually sleep.
>> 
>> In the above figure, waitqueue_active(wq) will return 0 (queue is
>> inactive) and the whole wake_up() call will be skipped, because
>> __add_wait_queue() hasn't been called yet.  This can actually be
>> triggered with a reproducer.
> 
> Duh, indeed.

BTW, the networking folks found this years ago and even added
helpers to deal with this.  See for example wq_has_sleeper in
include/net/sock.h.  It would be good if we could move some of
those helpers into wait.h instead.
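With such a helper in wait.h, the waker side of the pattern discussed
above would reduce to something like (sketch):

/* wake_up thread */
CONDITION = 1;
if (wq_has_sleeper(wq))		/* smp_mb() + waitqueue_active() */
	wake_up(wq);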

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* net: Generalise wq_has_sleeper helper
From: Herbert Xu @ 2015-11-24  5:54 UTC
  To: Peter Zijlstra
  Cc: tatsu, mingo, linux-kernel, David S. Miller, netdev, Jiri Olsa,
	Eric Dumazet

On Wed, Nov 11, 2015 at 05:48:29PM +0800, Herbert Xu wrote:
>
> BTW, the networking folks found this years ago and even added
> helpers to deal with this.  See for example wq_has_sleeper in
> include/net/sock.h.  It would be good if we could move some of
> those helpers into wait.h instead.

Here is a patch against net-next which makes the wq_has_sleeper
helper available to non-net users:

---8<---
The memory barrier in the helper wq_has_sleeper is needed by just
about every user of waitqueue_active.  This patch generalises it
by making it take a wait_queue_head_t directly.  The existing
helper is renamed to skwq_has_sleeper.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 0aa6fdf..fb99f30 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -106,7 +106,7 @@ static void aead_wmem_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 							   POLLRDNORM |
 							   POLLRDBAND);
@@ -157,7 +157,7 @@ static void aead_data_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 							   POLLRDNORM |
 							   POLLRDBAND);
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index af31a0e..0e6702e 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -238,7 +238,7 @@ static void skcipher_wmem_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 							   POLLRDNORM |
 							   POLLRDBAND);
@@ -288,7 +288,7 @@ static void skcipher_data_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 							   POLLRDNORM |
 							   POLLRDBAND);
diff --git a/include/linux/wait.h b/include/linux/wait.h
index 1e1bf9f..bd1157f 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -107,6 +107,50 @@ static inline int waitqueue_active(wait_queue_head_t *q)
 	return !list_empty(&q->task_list);
 }
 
+/**
+ * wq_has_sleeper - check if there are any waiting processes
+ * @wq: wait queue head
+ *
+ * Returns true if wq has waiting processes
+ *
+ * The purpose of the wq_has_sleeper and sock_poll_wait is to wrap the memory
+ * barrier call. They were added due to the race found within the tcp code.
+ *
+ * Consider following tcp code paths:
+ *
+ * CPU1                  CPU2
+ *
+ * sys_select            receive packet
+ *   ...                 ...
+ *   __add_wait_queue    update tp->rcv_nxt
+ *   ...                 ...
+ *   tp->rcv_nxt check   sock_def_readable
+ *   ...                 {
+ *   schedule               rcu_read_lock();
+ *                          wq = rcu_dereference(sk->sk_wq);
+ *                          if (wq && waitqueue_active(&wq->wait))
+ *                              wake_up_interruptible(&wq->wait)
+ *                          ...
+ *                       }
+ *
+ * The race for tcp fires when the __add_wait_queue changes done by CPU1 stay
+ * in its cache, and so does the tp->rcv_nxt update on CPU2 side.  The CPU1
+ * could then endup calling schedule and sleep forever if there are no more
+ * data on the socket.
+ *
+ */
+static inline bool wq_has_sleeper(wait_queue_head_t *wq)
+{
+	/* We need to be sure we are in sync with the
+	 * add_wait_queue modifications to the wait queue.
+	 *
+	 * This memory barrier should be paired with one on the
+	 * waiting side.
+	 */
+	smp_mb();
+	return waitqueue_active(wq);
+}
+
 extern void add_wait_queue(wait_queue_head_t *q, wait_queue_t *wait);
 extern void add_wait_queue_exclusive(wait_queue_head_t *q, wait_queue_t *wait);
 extern void remove_wait_queue(wait_queue_head_t *q, wait_queue_t *wait);
diff --git a/include/net/sock.h b/include/net/sock.h
index bbf7c2c..4a6e9b6 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -58,6 +58,7 @@
 #include <linux/memcontrol.h>
 #include <linux/static_key.h>
 #include <linux/sched.h>
+#include <linux/wait.h>
 
 #include <linux/filter.h>
 #include <linux/rculist_nulls.h>
@@ -1879,46 +1880,14 @@ static inline bool sk_has_allocations(const struct sock *sk)
 }
 
 /**
- * wq_has_sleeper - check if there are any waiting processes
+ * skwq_has_sleeper - check if there are any waiting processes
  * @wq: struct socket_wq
  *
  * Returns true if socket_wq has waiting processes
- *
- * The purpose of the wq_has_sleeper and sock_poll_wait is to wrap the memory
- * barrier call. They were added due to the race found within the tcp code.
- *
- * Consider following tcp code paths:
- *
- * CPU1                  CPU2
- *
- * sys_select            receive packet
- *   ...                 ...
- *   __add_wait_queue    update tp->rcv_nxt
- *   ...                 ...
- *   tp->rcv_nxt check   sock_def_readable
- *   ...                 {
- *   schedule               rcu_read_lock();
- *                          wq = rcu_dereference(sk->sk_wq);
- *                          if (wq && waitqueue_active(&wq->wait))
- *                              wake_up_interruptible(&wq->wait)
- *                          ...
- *                       }
- *
- * The race for tcp fires when the __add_wait_queue changes done by CPU1 stay
- * in its cache, and so does the tp->rcv_nxt update on CPU2 side.  The CPU1
- * could then endup calling schedule and sleep forever if there are no more
- * data on the socket.
- *
  */
-static inline bool wq_has_sleeper(struct socket_wq *wq)
+static inline bool skwq_has_sleeper(struct socket_wq *wq)
 {
-	/* We need to be sure we are in sync with the
-	 * add_wait_queue modifications to the wait queue.
-	 *
-	 * This memory barrier is paired in the sock_poll_wait.
-	 */
-	smp_mb();
-	return wq && waitqueue_active(&wq->wait);
+	return wq && wq_has_sleeper(&wq->wait);
 }
 
 /**
diff --git a/net/atm/common.c b/net/atm/common.c
index 49a872d..6dc1230 100644
--- a/net/atm/common.c
+++ b/net/atm/common.c
@@ -96,7 +96,7 @@ static void vcc_def_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up(&wq->wait);
 	rcu_read_unlock();
 }
@@ -117,7 +117,7 @@ static void vcc_write_space(struct sock *sk)
 
 	if (vcc_writable(sk)) {
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible(&wq->wait);
 
 		sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
diff --git a/net/core/sock.c b/net/core/sock.c
index 1e4dd54..2769bd3a 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2283,7 +2283,7 @@ static void sock_def_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_all(&wq->wait);
 	rcu_read_unlock();
 }
@@ -2294,7 +2294,7 @@ static void sock_def_error_report(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_poll(&wq->wait, POLLERR);
 	sk_wake_async(sk, SOCK_WAKE_IO, POLL_ERR);
 	rcu_read_unlock();
@@ -2306,7 +2306,7 @@ static void sock_def_readable(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN | POLLPRI |
 						POLLRDNORM | POLLRDBAND);
 	sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
@@ -2324,7 +2324,7 @@ static void sock_def_write_space(struct sock *sk)
 	 */
 	if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf) {
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 						POLLWRNORM | POLLWRBAND);
 
diff --git a/net/core/stream.c b/net/core/stream.c
index d70f77a..8ff9d63 100644
--- a/net/core/stream.c
+++ b/net/core/stream.c
@@ -35,7 +35,7 @@ void sk_stream_write_space(struct sock *sk)
 
 		rcu_read_lock();
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible_poll(&wq->wait, POLLOUT |
 						POLLWRNORM | POLLWRBAND);
 		if (wq && wq->fasync_list && !(sk->sk_shutdown & SEND_SHUTDOWN))
diff --git a/net/dccp/output.c b/net/dccp/output.c
index 4ce912e..b66c84d 100644
--- a/net/dccp/output.c
+++ b/net/dccp/output.c
@@ -201,7 +201,7 @@ void dccp_write_space(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible(&wq->wait);
 	/* Should agree with poll, otherwise some programs break */
 	if (sock_writeable(sk))
diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
index fcb2752..4f0aa91 100644
--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -303,7 +303,7 @@ static void iucv_sock_wake_msglim(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_all(&wq->wait);
 	sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
 	rcu_read_unlock();
diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 1f8a144..7e2d105 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -67,7 +67,7 @@ static void rxrpc_write_space(struct sock *sk)
 	if (rxrpc_writable(sk)) {
 		struct socket_wq *wq = rcu_dereference(sk->sk_wq);
 
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible(&wq->wait);
 		sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
 	}
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 897c01c..ec10b66 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -6978,7 +6978,7 @@ void sctp_data_ready(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 						POLLRDNORM | POLLRDBAND);
 	sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index 552dbab..525acf6 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1492,7 +1492,7 @@ static void tipc_write_space(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 						POLLWRNORM | POLLWRBAND);
 	rcu_read_unlock();
@@ -1509,7 +1509,7 @@ static void tipc_data_ready(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 						POLLRDNORM | POLLRDBAND);
 	rcu_read_unlock();
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index aaa0b58..0446ff1 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -339,7 +339,7 @@ static void unix_write_space(struct sock *sk)
 	rcu_read_lock();
 	if (unix_writable(sk)) {
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible_sync_poll(&wq->wait,
 				POLLOUT | POLLWRNORM | POLLWRBAND);
 		sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: net: Generalise wq_has_sleeper helper
From: David Miller @ 2015-11-24 21:30 UTC
  To: herbert; +Cc: peterz, tatsu, mingo, linux-kernel, netdev, jolsa, eric.dumazet

From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Tue, 24 Nov 2015 13:54:23 +0800

> On Wed, Nov 11, 2015 at 05:48:29PM +0800, Herbert Xu wrote:
>>
>> BTW, the networking folks found this years ago and even added
>> helpers to deal with this.  See for example wq_has_sleeper in
>> include/net/sock.h.  It would be good if we could move some of
>> those helpers into wait.h instead.
> 
> Here is a patch against net-next which makes the wq_has_sleeper
> helper available to non-net users:
> 
> ---8<---
> The memory barrier in the helper wq_has_sleeper is needed by just
> about every user of waitqueue_active.  This patch generalises it
> by making it take a wait_queue_head_t directly.  The existing
> helper is renamed to skwq_has_sleeper.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

I'm fine with wherever this patch goes.  Herbert, is there any
particular tree where it'll facilitate another user quickest?

Or should I just toss it into net-next?

Acked-by: David S. Miller <davem@davemloft.net>

* Re: net: Generalise wq_has_sleeper helper
From: Herbert Xu @ 2015-11-25  1:10 UTC
  To: David Miller
  Cc: peterz, tatsu, mingo, linux-kernel, netdev, jolsa, eric.dumazet

On Tue, Nov 24, 2015 at 04:30:25PM -0500, David Miller wrote:
>
> I'm fine with wherever this patch goes.  Herbert, is there any
> particular tree where it'll facilitate another user quickest?
> 
> Or should I just toss it into net-next?
> 
> Acked-by: David S. Miller <davem@davemloft.net>

No Dave, net-next is fine, I think.  This was prompted by Tatsukawa-san's
patches to fix waitqueue users affected by this very race, and they
were all over the tree.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: net: Generalise wq_has_sleeper helper
From: Peter Zijlstra @ 2015-11-25  9:15 UTC
  To: Herbert Xu
  Cc: tatsu, mingo, linux-kernel, David S. Miller, netdev, Jiri Olsa,
	Eric Dumazet

On Tue, Nov 24, 2015 at 01:54:23PM +0800, Herbert Xu wrote:
> diff --git a/include/linux/wait.h b/include/linux/wait.h
> index 1e1bf9f..bd1157f 100644
> --- a/include/linux/wait.h
> +++ b/include/linux/wait.h
> @@ -107,6 +107,50 @@ static inline int waitqueue_active(wait_queue_head_t *q)
>  	return !list_empty(&q->task_list);
>  }
>  
> +/**
> + * wq_has_sleeper - check if there are any waiting processes
> + * @wq: wait queue head
> + *
> + * Returns true if wq has waiting processes
> + *
> + * The purpose of the wq_has_sleeper and sock_poll_wait is to wrap the memory
> + * barrier call. They were added due to the race found within the tcp code.
> + *
> + * Consider following tcp code paths:
> + *
> + * CPU1                  CPU2
> + *
> + * sys_select            receive packet
> + *   ...                 ...
> + *   __add_wait_queue    update tp->rcv_nxt
> + *   ...                 ...
> + *   tp->rcv_nxt check   sock_def_readable
> + *   ...                 {
> + *   schedule               rcu_read_lock();
> + *                          wq = rcu_dereference(sk->sk_wq);
> + *                          if (wq && waitqueue_active(&wq->wait))
> + *                              wake_up_interruptible(&wq->wait)
> + *                          ...
> + *                       }
> + *
> + * The race for tcp fires when the __add_wait_queue changes done by CPU1 stay
> + * in its cache, and so does the tp->rcv_nxt update on CPU2 side.  The CPU1
> + * could then endup calling schedule and sleep forever if there are no more
> + * data on the socket.
> + *

Would be easier to refer to the comment that now adorns
waitqueue_active().

> + */
> +static inline bool wq_has_sleeper(wait_queue_head_t *wq)
> +{
> +	/* We need to be sure we are in sync with the

broken comment style.

> +	 * add_wait_queue modifications to the wait queue.
> +	 *
> +	 * This memory barrier should be paired with one on the
> +	 * waiting side.
> +	 */
> +	smp_mb();
> +	return waitqueue_active(wq);
> +}
> +
>  extern void add_wait_queue(wait_queue_head_t *q, wait_queue_t *wait);
>  extern void add_wait_queue_exclusive(wait_queue_head_t *q, wait_queue_t *wait);
>  extern void remove_wait_queue(wait_queue_head_t *q, wait_queue_t *wait);

* Re: net: Generalise wq_has_sleeper helper
From: David Miller @ 2015-11-25 16:37 UTC
  To: peterz; +Cc: herbert, tatsu, mingo, linux-kernel, netdev, jolsa, eric.dumazet

From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 25 Nov 2015 10:15:33 +0100

> On Tue, Nov 24, 2015 at 01:54:23PM +0800, Herbert Xu wrote:
>> + * The race for tcp fires when the __add_wait_queue changes done by CPU1 stay
>> + * in its cache, and so does the tp->rcv_nxt update on CPU2 side.  The CPU1
>> + * could then endup calling schedule and sleep forever if there are no more
>> + * data on the socket.
>> + *
> 
> Would be easier to refer to the comment that now adorns
> waitqueue_active().

Yeah, that might be a good idea.  Herbert, can you adjust this?

>> + */
>> +static inline bool wq_has_sleeper(wait_queue_head_t *wq)
>> +{
>> +	/* We need to be sure we are in sync with the
> 
> broken comment style.

This is how we do it in the networking code, so that's why it's
formatted this way, but yes, he will need to fix it up.

* [PATCH v2] net: Generalise wq_has_sleeper helper
From: Herbert Xu @ 2015-11-26  5:55 UTC
  To: David Miller
  Cc: peterz, tatsu, mingo, linux-kernel, netdev, jolsa, eric.dumazet

On Wed, Nov 25, 2015 at 11:37:29AM -0500, David Miller wrote:
>
> > Would be easier to refer to the comment that now adorns
> > waitqueue_active().
> 
> Yeah, that might be a good idea.  Herbert, can you adjust this?

Sure, here is an updated patch.  Note that this patch is based
on net-next, where there is actually no comment above waitqueue_active().
But hopefully this will right itself once the trees are merged.

Thanks,

---8<---
The memory barrier in the helper wq_has_sleeper is needed by just
about every user of waitqueue_active.  This patch generalises it
by making it take a wait_queue_head_t directly.  The existing
helper is renamed to skwq_has_sleeper.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 0aa6fdf..fb99f30 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -106,7 +106,7 @@ static void aead_wmem_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 							   POLLRDNORM |
 							   POLLRDBAND);
@@ -157,7 +157,7 @@ static void aead_data_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 							   POLLRDNORM |
 							   POLLRDBAND);
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index af31a0e..0e6702e 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -238,7 +238,7 @@ static void skcipher_wmem_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 							   POLLRDNORM |
 							   POLLRDBAND);
@@ -288,7 +288,7 @@ static void skcipher_data_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 							   POLLRDNORM |
 							   POLLRDBAND);
diff --git a/include/linux/wait.h b/include/linux/wait.h
index 1e1bf9f..6aa09a8 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -107,6 +107,27 @@ static inline int waitqueue_active(wait_queue_head_t *q)
 	return !list_empty(&q->task_list);
 }
 
+/**
+ * wq_has_sleeper - check if there are any waiting processes
+ * @wq: wait queue head
+ *
+ * Returns true if wq has waiting processes
+ *
+ * Please refer to the comment for waitqueue_active.
+ */
+static inline bool wq_has_sleeper(wait_queue_head_t *wq)
+{
+	/*
+	 * We need to be sure we are in sync with the
+	 * add_wait_queue modifications to the wait queue.
+	 *
+	 * This memory barrier should be paired with one on the
+	 * waiting side.
+	 */
+	smp_mb();
+	return waitqueue_active(wq);
+}
+
 extern void add_wait_queue(wait_queue_head_t *q, wait_queue_t *wait);
 extern void add_wait_queue_exclusive(wait_queue_head_t *q, wait_queue_t *wait);
 extern void remove_wait_queue(wait_queue_head_t *q, wait_queue_t *wait);
diff --git a/include/net/sock.h b/include/net/sock.h
index bbf7c2c..4adbe00 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -58,6 +58,7 @@
 #include <linux/memcontrol.h>
 #include <linux/static_key.h>
 #include <linux/sched.h>
+#include <linux/wait.h>
 
 #include <linux/filter.h>
 #include <linux/rculist_nulls.h>
@@ -1879,12 +1880,12 @@ static inline bool sk_has_allocations(const struct sock *sk)
 }
 
 /**
- * wq_has_sleeper - check if there are any waiting processes
+ * skwq_has_sleeper - check if there are any waiting processes
  * @wq: struct socket_wq
  *
  * Returns true if socket_wq has waiting processes
  *
- * The purpose of the wq_has_sleeper and sock_poll_wait is to wrap the memory
+ * The purpose of the skwq_has_sleeper and sock_poll_wait is to wrap the memory
  * barrier call. They were added due to the race found within the tcp code.
  *
  * Consider following tcp code paths:
@@ -1910,15 +1911,9 @@ static inline bool sk_has_allocations(const struct sock *sk)
  * data on the socket.
  *
  */
-static inline bool wq_has_sleeper(struct socket_wq *wq)
+static inline bool skwq_has_sleeper(struct socket_wq *wq)
 {
-	/* We need to be sure we are in sync with the
-	 * add_wait_queue modifications to the wait queue.
-	 *
-	 * This memory barrier is paired in the sock_poll_wait.
-	 */
-	smp_mb();
-	return wq && waitqueue_active(&wq->wait);
+	return wq && wq_has_sleeper(&wq->wait);
 }
 
 /**
diff --git a/net/atm/common.c b/net/atm/common.c
index 49a872d..6dc1230 100644
--- a/net/atm/common.c
+++ b/net/atm/common.c
@@ -96,7 +96,7 @@ static void vcc_def_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up(&wq->wait);
 	rcu_read_unlock();
 }
@@ -117,7 +117,7 @@ static void vcc_write_space(struct sock *sk)
 
 	if (vcc_writable(sk)) {
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible(&wq->wait);
 
 		sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
diff --git a/net/core/sock.c b/net/core/sock.c
index 1e4dd54..2769bd3a 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2283,7 +2283,7 @@ static void sock_def_wakeup(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_all(&wq->wait);
 	rcu_read_unlock();
 }
@@ -2294,7 +2294,7 @@ static void sock_def_error_report(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_poll(&wq->wait, POLLERR);
 	sk_wake_async(sk, SOCK_WAKE_IO, POLL_ERR);
 	rcu_read_unlock();
@@ -2306,7 +2306,7 @@ static void sock_def_readable(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN | POLLPRI |
 						POLLRDNORM | POLLRDBAND);
 	sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
@@ -2324,7 +2324,7 @@ static void sock_def_write_space(struct sock *sk)
 	 */
 	if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf) {
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 						POLLWRNORM | POLLWRBAND);
 
diff --git a/net/core/stream.c b/net/core/stream.c
index d70f77a..8ff9d63 100644
--- a/net/core/stream.c
+++ b/net/core/stream.c
@@ -35,7 +35,7 @@ void sk_stream_write_space(struct sock *sk)
 
 		rcu_read_lock();
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible_poll(&wq->wait, POLLOUT |
 						POLLWRNORM | POLLWRBAND);
 		if (wq && wq->fasync_list && !(sk->sk_shutdown & SEND_SHUTDOWN))
diff --git a/net/dccp/output.c b/net/dccp/output.c
index 4ce912e..b66c84d 100644
--- a/net/dccp/output.c
+++ b/net/dccp/output.c
@@ -201,7 +201,7 @@ void dccp_write_space(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible(&wq->wait);
 	/* Should agree with poll, otherwise some programs break */
 	if (sock_writeable(sk))
diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
index fcb2752..4f0aa91 100644
--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -303,7 +303,7 @@ static void iucv_sock_wake_msglim(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_all(&wq->wait);
 	sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
 	rcu_read_unlock();
diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 1f8a144..7e2d105 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -67,7 +67,7 @@ static void rxrpc_write_space(struct sock *sk)
 	if (rxrpc_writable(sk)) {
 		struct socket_wq *wq = rcu_dereference(sk->sk_wq);
 
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible(&wq->wait);
 		sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
 	}
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 897c01c..ec10b66 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -6978,7 +6978,7 @@ void sctp_data_ready(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 						POLLRDNORM | POLLRDBAND);
 	sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index 552dbab..525acf6 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1492,7 +1492,7 @@ static void tipc_write_space(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
 						POLLWRNORM | POLLWRBAND);
 	rcu_read_unlock();
@@ -1509,7 +1509,7 @@ static void tipc_data_ready(struct sock *sk)
 
 	rcu_read_lock();
 	wq = rcu_dereference(sk->sk_wq);
-	if (wq_has_sleeper(wq))
+	if (skwq_has_sleeper(wq))
 		wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
 						POLLRDNORM | POLLRDBAND);
 	rcu_read_unlock();
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index aaa0b58..0446ff1 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -339,7 +339,7 @@ static void unix_write_space(struct sock *sk)
 	rcu_read_lock();
 	if (unix_writable(sk)) {
 		wq = rcu_dereference(sk->sk_wq);
-		if (wq_has_sleeper(wq))
+		if (skwq_has_sleeper(wq))
 			wake_up_interruptible_sync_poll(&wq->wait,
 				POLLOUT | POLLWRNORM | POLLWRBAND);
 		sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [PATCH v2] net: Generalise wq_has_sleeper helper
From: David Miller @ 2015-11-30 19:46 UTC
  To: herbert; +Cc: peterz, tatsu, mingo, linux-kernel, netdev, jolsa, eric.dumazet

From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Thu, 26 Nov 2015 13:55:39 +0800

> The memory barrier in the helper wq_has_sleeper is needed by just
> about every user of waitqueue_active.  This patch generalises it
> by making it take a wait_queue_head_t directly.  The existing
> helper is renamed to skwq_has_sleeper.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Applied, thanks a lot Herbert.
