* [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
@ 2013-10-02  9:22 Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 01/16] sched/wait: Make the signal_pending() checks consistent Peter Zijlstra
                   ` (16 more replies)
  0 siblings, 17 replies; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

While poking at the cpu hotplug implementation I had a desire to use
wait_event() with schedule_preempt_disabled() and found the ridiculous
amount of duplication in linux/wait.h.

These patches cure all the bloat and inconsistencies in these macros
that others and I noticed, while also making it 'trivial' to generate
new variants.
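
As a hypothetical illustration of what "generate new variants" buys us
(using the final ___wait_event() form from patch 16; the macro name
below is made up and not part of this series), the preempt-disabled
variant mentioned above would reduce to supplying a different sleep
command:

  #define __wait_event_preempt_disabled(wq, condition)			\
	(void)___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    schedule_preempt_disabled())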

Compile and boot tested on x86_64.

Changes since -v4:
 - split up the big collapse patch into 11 separate patches

Changes since -v3:
 - added LKML to the CC (D'0h!!)
 - introduced ___wait_cond_timeout() to avoid double evaluation of 'condition'

Changes since -v2:
 - removed ___wait_cmd_lock_irq(), ___wait_cmd_lock_irq_timo()
 - renamed ___wait_cmd_test_timeout() to ___wait_test_timeout()
 - renamed ___wait_cmd_schedule_timeout() to ___wait_sched_timeout()
 - fixed the schedule_timeout() invocations in patch 6; don't restart
   schedule_timeout() with the full timeout each time around the loop,
   but continue with the remaining time so the total timeout stays fixed.
 - use 'long' return type for ___wait_event() so that timeout values
   for schedule_timeout() fit.
 - fix wait_event_interruptible_lock_irq_timeout() and use long return type.


---
 arch/mips/kernel/rtlx.c         |   19 +-
 include/linux/tty.h             |   28 +---
 include/linux/wait.h            |  268 +++++++++++++---------------------------
 net/irda/af_irda.c              |    5 
 net/netfilter/ipvs/ip_vs_sync.c |    7 -
 5 files changed, 108 insertions(+), 219 deletions(-)



* [PATCH 01/16] sched/wait: Make the signal_pending() checks consistent
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 02/16] sched/wait: Change timeout logic Peter Zijlstra
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-change-signal.patch --]
[-- Type: text/plain, Size: 2747 bytes --]

There are two patterns for checking signals in the __wait_event*() macros:

  if (!signal_pending(current)) {
	schedule();
	continue;
  }
  ret = -ERESTARTSYS;
  break;

And the more natural:

  if (signal_pending(current)) {
	ret = -ERESTARTSYS;
	break;
  }
  schedule();

Change them all into the latter form.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   13 ++++++-------
 include/linux/wait.h |   35 ++++++++++++++++-------------------
 2 files changed, 22 insertions(+), 26 deletions(-)

--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -686,14 +686,13 @@ do {									\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
 		if (condition)						\
 			break;						\
-		if (!signal_pending(current)) {				\
-			tty_unlock(tty);					\
-			schedule();					\
-			tty_lock(tty);					\
-			continue;					\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			break;						\
 		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
+		tty_unlock(tty);					\
+		schedule();						\
+		tty_lock(tty);						\
 	}								\
 	finish_wait(&wq, &__wait);					\
 } while (0)
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -261,12 +261,11 @@ do {									\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
 		if (condition)						\
 			break;						\
-		if (!signal_pending(current)) {				\
-			schedule();					\
-			continue;					\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			break;						\
 		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
+		schedule();						\
 	}								\
 	finish_wait(&wq, &__wait);					\
 } while (0)
@@ -302,14 +301,13 @@ do {									\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
 		if (condition)						\
 			break;						\
-		if (!signal_pending(current)) {				\
-			ret = schedule_timeout(ret);			\
-			if (!ret)					\
-				break;					\
-			continue;					\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			break;						\
 		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
+		ret = schedule_timeout(ret);				\
+		if (!ret)						\
+			break;						\
 	}								\
 	if (!ret && (condition))					\
 		ret = 1;						\
@@ -439,14 +437,13 @@ do {									\
 			finish_wait(&wq, &__wait);			\
 			break;						\
 		}							\
-		if (!signal_pending(current)) {				\
-			schedule();					\
-			continue;					\
-		}							\
-		ret = -ERESTARTSYS;					\
-		abort_exclusive_wait(&wq, &__wait, 			\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			abort_exclusive_wait(&wq, &__wait, 		\
 				TASK_INTERRUPTIBLE, NULL);		\
-		break;							\
+			break;						\
+		}							\
+		schedule();						\
 	}								\
 } while (0)
 




* [PATCH 02/16] sched/wait: Change timeout logic
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 01/16] sched/wait: Make the signal_pending() checks consistent Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 03/16] sched/wait: Change the wait_exclusive control flow Peter Zijlstra
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-change-sched-timo.patch --]
[-- Type: text/plain, Size: 2543 bytes --]

Commit 4c663cf ("wait: fix false timeouts when using
wait_event_timeout()") introduced an additional condition check after
a timeout, but there are a few issues:

 - it forgot one site
 - it put the check after the main loop, not at the actual timeout
   check.

Cure both by wrapping the condition (as suggested by Oleg); this also
avoids double evaluation of 'condition', which could be quite big.
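
As a caller-side sketch of the semantics this preserves (the wait queue
'my_wq' and condition 'my_cond' are made up, not from this patch; the
snippet assumes it sits inside some driver function):

  long left = wait_event_timeout(my_wq, my_cond, HZ);

  if (!left)
	pr_debug("timed out, my_cond still false\n");
  else	/* forced to 1 if my_cond only became true right at the timeout */
	pr_debug("my_cond true, %ld jiffies left\n", left);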

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/wait.h |   24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -179,6 +179,14 @@ wait_queue_head_t *bit_waitqueue(void *,
 #define wake_up_interruptible_sync_poll(x, m)				\
 	__wake_up_sync_key((x), TASK_INTERRUPTIBLE, 1, (void *) (m))
 
+#define ___wait_cond_timeout(condition, ret)				\
+({									\
+ 	bool __cond = (condition);					\
+ 	if (__cond && !ret)						\
+ 		ret = 1;						\
+ 	__cond || !ret;							\
+})
+
 #define __wait_event(wq, condition) 					\
 do {									\
 	DEFINE_WAIT(__wait);						\
@@ -217,14 +225,10 @@ do {									\
 									\
 	for (;;) {							\
 		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (condition)						\
+		if (___wait_cond_timeout(condition, ret))		\
 			break;						\
 		ret = schedule_timeout(ret);				\
-		if (!ret)						\
-			break;						\
 	}								\
-	if (!ret && (condition))					\
-		ret = 1;						\
 	finish_wait(&wq, &__wait);					\
 } while (0)
 
@@ -299,18 +303,14 @@ do {									\
 									\
 	for (;;) {							\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
+		if (___wait_cond_timeout(condition, ret))		\
 			break;						\
 		if (signal_pending(current)) {				\
 			ret = -ERESTARTSYS;				\
 			break;						\
 		}							\
 		ret = schedule_timeout(ret);				\
-		if (!ret)						\
-			break;						\
 	}								\
-	if (!ret && (condition))					\
-		ret = 1;						\
 	finish_wait(&wq, &__wait);					\
 } while (0)
 
@@ -815,7 +815,7 @@ do {									\
 									\
 	for (;;) {							\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
+		if (___wait_cond_timeout(condition, ret))		\
 			break;						\
 		if (signal_pending(current)) {				\
 			ret = -ERESTARTSYS;				\
@@ -824,8 +824,6 @@ do {									\
 		spin_unlock_irq(&lock);					\
 		ret = schedule_timeout(ret);				\
 		spin_lock_irq(&lock);					\
-		if (!ret)						\
-			break;						\
 	}								\
 	finish_wait(&wq, &__wait);					\
 } while (0)




* [PATCH 03/16] sched/wait: Change the wait_exclusive control flow
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 01/16] sched/wait: Make the signal_pending() checks consistent Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 02/16] sched/wait: Change timeout logic Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 04/16] sched/wait: Introduce ___wait_event() Peter Zijlstra
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-change-exclusive.patch --]
[-- Type: text/plain, Size: 1173 bytes --]

Purely a preparatory patch; it changes the control flow to match what
will soon be generated by generic code so that that patch can be a
unity transform.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/wait.h |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -430,23 +430,24 @@ do {									\
 
 #define __wait_event_interruptible_exclusive(wq, condition, ret)	\
 do {									\
+	__label__ __out;						\
 	DEFINE_WAIT(__wait);						\
 									\
 	for (;;) {							\
 		prepare_to_wait_exclusive(&wq, &__wait,			\
 					TASK_INTERRUPTIBLE);		\
-		if (condition) {					\
-			finish_wait(&wq, &__wait);			\
+		if (condition)						\
 			break;						\
-		}							\
 		if (signal_pending(current)) {				\
 			ret = -ERESTARTSYS;				\
 			abort_exclusive_wait(&wq, &__wait, 		\
 				TASK_INTERRUPTIBLE, NULL);		\
-			break;						\
+			goto __out;					\
 		}							\
 		schedule();						\
 	}								\
+	finish_wait(&wq, &__wait);					\
+__out:	;								\
 } while (0)
 
 #define wait_event_interruptible_exclusive(wq, condition)		\




* [PATCH 04/16] sched/wait: Introduce ___wait_event()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (2 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 03/16] sched/wait: Change the wait_exclusive control flow Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 05/16] sched/wait: Collapse __wait_event() Peter Zijlstra
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-0.patch --]
[-- Type: text/plain, Size: 1730 bytes --]

There's far too much duplication in the __wait_event macros; in order
to fix this, introduce ___wait_event(), a macro with the capability to
replace most of the others.

With the previous patches having changed the various __wait_event*()
implementations to be more uniform, we can now collapse the lot
without also changing the generated code.
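
For reference, a sketch of how the template parameters are meant to be
filled in; this particular expansion is what a later patch in this
series uses for __wait_event_interruptible() and is shown here only to
illustrate the argument roles:

  /* wq        - the wait_queue_head_t to wait on
   * condition - expression that terminates the wait once true
   * state     - TASK_UNINTERRUPTIBLE, TASK_INTERRUPTIBLE or TASK_KILLABLE
   * exclusive - 0/1, selects prepare_to_wait() vs prepare_to_wait_exclusive()
   * ret       - lvalue that receives -ERESTARTSYS when a signal aborts the wait
   * cmd       - the statement(s) that actually sleep, e.g. schedule()
   */
  ___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret, schedule());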

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/wait.h |   36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -187,6 +187,42 @@ wait_queue_head_t *bit_waitqueue(void *,
  	__cond || !ret;							\
 })
 
+#define ___wait_signal_pending(state)					\
+	((state == TASK_INTERRUPTIBLE && signal_pending(current)) ||	\
+	 (state == TASK_KILLABLE && fatal_signal_pending(current)))
+
+#define ___wait_nop_ret		int ret __always_unused
+
+#define ___wait_event(wq, condition, state, exclusive, ret, cmd)	\
+do {									\
+	__label__ __out;						\
+	DEFINE_WAIT(__wait);						\
+									\
+	for (;;) {							\
+		if (exclusive)						\
+			prepare_to_wait_exclusive(&wq, &__wait, state); \
+		else							\
+			prepare_to_wait(&wq, &__wait, state);		\
+									\
+		if (condition)						\
+			break;						\
+									\
+		if (___wait_signal_pending(state)) {			\
+			ret = -ERESTARTSYS;				\
+			if (exclusive) {				\
+				abort_exclusive_wait(&wq, &__wait, 	\
+						     state, NULL); 	\
+				goto __out;				\
+			}						\
+			break;						\
+		}							\
+									\
+		cmd;							\
+	}								\
+	finish_wait(&wq, &__wait);					\
+__out:	;								\
+} while (0)
+
 #define __wait_event(wq, condition) 					\
 do {									\
 	DEFINE_WAIT(__wait);						\




* [PATCH 05/16] sched/wait: Collapse __wait_event()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (3 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 04/16] sched/wait: Introduce ___wait_event() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 06/16] sched/wait: Collapse __wait_event_timeout() Peter Zijlstra
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-1.patch --]
[-- Type: text/plain, Size: 884 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/wait.h |   13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -224,17 +224,8 @@ __out:	;								\
 } while (0)
 
 #define __wait_event(wq, condition) 					\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		schedule();						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
+		      ___wait_nop_ret, schedule())
 
 /**
  * wait_event - sleep until a condition gets true




* [PATCH 06/16] sched/wait: Collapse __wait_event_timeout()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (4 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 05/16] sched/wait: Collapse __wait_event() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 07/16] sched/wait: Collapse __wait_event_interruptible() Peter Zijlstra
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-2.patch --]
[-- Type: text/plain, Size: 1079 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -220,17 +247,9 @@ do {									\
 } while (0)
 
 #define __wait_event_timeout(wq, condition, ret)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (___wait_cond_timeout(condition, ret))		\
-			break;						\
-		ret = schedule_timeout(ret);				\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, ___wait_cond_timeout(condition, ret), 	\
+		      TASK_UNINTERRUPTIBLE, 0, ret,			\
+		      ret = schedule_timeout(ret))
 
 /**
  * wait_event_timeout - sleep until a condition gets true or a timeout elapses




* [PATCH 07/16] sched/wait: Collapse __wait_event_interruptible()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (5 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 06/16] sched/wait: Collapse __wait_event_timeout() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:34   ` [tip:sched/core] sched/wait: Collapse __wait_event_interruptible() tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 08/16] sched/wait: Collapse __wait_event_interruptible_timeout() Peter Zijlstra
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-3.patch --]
[-- Type: text/plain, Size: 1060 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -258,21 +277,8 @@ do {									\
 })
 
 #define __wait_event_interruptible(wq, condition, ret)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		schedule();						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+		      schedule())
 
 /**
  * wait_event_interruptible - sleep until a condition gets true




* [PATCH 08/16] sched/wait: Collapse __wait_event_interruptible_timeout()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (6 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 07/16] sched/wait: Collapse __wait_event_interruptible() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 09/16] sched/wait: Collapse __wait_event_interruptible_exclusive() Peter Zijlstra
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-4.patch --]
[-- Type: text/plain, Size: 1192 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -298,21 +304,9 @@ do {									\
 })
 
 #define __wait_event_interruptible_timeout(wq, condition, ret)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (___wait_cond_timeout(condition, ret))		\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		ret = schedule_timeout(ret);				\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, ___wait_cond_timeout(condition, ret),		\
+		      TASK_INTERRUPTIBLE, 0, ret,			\
+		      ret = schedule_timeout(ret))
 
 /**
  * wait_event_interruptible_timeout - sleep until a condition gets true or a timeout elapses




* [PATCH 09/16] sched/wait: Collapse __wait_event_interruptible_exclusive()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (7 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 08/16] sched/wait: Collapse __wait_event_interruptible_timeout() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 10/16] sched/wait: Collapse __wait_event_lock_irq() Peter Zijlstra
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-5.patch --]
[-- Type: text/plain, Size: 1222 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -427,26 +421,8 @@ do {									\
 })
 
 #define __wait_event_interruptible_exclusive(wq, condition, ret)	\
-do {									\
-	__label__ __out;						\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait_exclusive(&wq, &__wait,			\
-					TASK_INTERRUPTIBLE);		\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			abort_exclusive_wait(&wq, &__wait, 		\
-				TASK_INTERRUPTIBLE, NULL);		\
-			goto __out;					\
-		}							\
-		schedule();						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-__out:	;								\
-} while (0)
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, ret,	\
+		      schedule())
 
 #define wait_event_interruptible_exclusive(wq, condition)		\
 ({									\




* [PATCH 10/16] sched/wait: Collapse __wait_event_lock_irq()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (8 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 09/16] sched/wait: Collapse __wait_event_interruptible_exclusive() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 11/16] sched/wait: Collapse __wait_event_interruptible_lock_irq() Peter Zijlstra
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-6.patch --]
[-- Type: text/plain, Size: 1173 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -648,20 +609,12 @@ do {									\
 
 
 #define __wait_event_lock_irq(wq, condition, lock, cmd)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		spin_unlock_irq(&lock);					\
-		cmd;							\
-		schedule();						\
-		spin_lock_irq(&lock);					\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
+		      ___wait_nop_ret,					\
+		      spin_unlock_irq(&lock);				\
+		      cmd;						\
+		      schedule();					\
+		      spin_lock_irq(&lock))
 
 /**
  * wait_event_lock_irq_cmd - sleep until a condition gets true. The




* [PATCH 11/16] sched/wait: Collapse __wait_event_interruptible_lock_irq()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (9 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 10/16] sched/wait: Collapse __wait_event_lock_irq() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 12/16] sched/wait: Collapse __wait_event_interruptible_lock_irq_timeout() Peter Zijlstra
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-7.patch --]
[-- Type: text/plain, Size: 1384 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -721,26 +674,12 @@ do {									\
 } while (0)
 
 
-#define __wait_event_interruptible_lock_irq(wq, condition,		\
-					    lock, ret, cmd)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		spin_unlock_irq(&lock);					\
-		cmd;							\
-		schedule();						\
-		spin_lock_irq(&lock);					\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+#define __wait_event_interruptible_lock_irq(wq, condition, lock, ret, cmd) \
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	   \
+		      spin_unlock_irq(&lock);				   \
+		      cmd;						   \
+		      schedule();					   \
+		      spin_lock_irq(&lock))
 
 /**
  * wait_event_interruptible_lock_irq_cmd - sleep until a condition gets true.




* [PATCH 12/16] sched/wait: Collapse __wait_event_interruptible_lock_irq_timeout()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (10 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 11/16] sched/wait: Collapse __wait_event_interruptible_lock_irq() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 13/16] sched/wait: Collapse __wait_event_interruptible_tty() Peter Zijlstra
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-8.patch --]
[-- Type: text/plain, Size: 1498 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -809,25 +748,12 @@ do {									\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_lock_irq_timeout(wq, condition,	\
-						    lock, ret)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (___wait_cond_timeout(condition, ret))		\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		spin_unlock_irq(&lock);					\
-		ret = schedule_timeout(ret);				\
-		spin_lock_irq(&lock);					\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+#define __wait_event_interruptible_lock_irq_timeout(wq, condition, lock, ret) \
+	___wait_event(wq, ___wait_cond_timeout(condition, ret),		      \
+		      TASK_INTERRUPTIBLE, 0, ret,	      		      \
+		      spin_unlock_irq(&lock);				      \
+		      ret = schedule_timeout(ret);			      \
+		      spin_lock_irq(&lock));
 
 /**
  * wait_event_interruptible_lock_irq_timeout - sleep until a condition gets true or a timeout elapses.




* [PATCH 13/16] sched/wait: Collapse __wait_event_interruptible_tty()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (11 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 12/16] sched/wait: Collapse __wait_event_interruptible_lock_irq_timeout() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 14/16] sched/wait: Collapse __wait_event_killable() Peter Zijlstra
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-9.patch --]
[-- Type: text/plain, Size: 1203 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/tty.h  |   21 +----
 include/linux/wait.h |  192 +++++++++++++++------------------------------------
 2 files changed, 63 insertions(+), 150 deletions(-)

--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -679,23 +679,10 @@ static inline void tty_wait_until_sent_f
 })
 
 #define __wait_event_interruptible_tty(tty, wq, condition, ret)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		tty_unlock(tty);					\
-		schedule();						\
-		tty_lock(tty);						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+			tty_unlock(tty);				\
+			schedule();					\
+			tty_lock(tty))
 
 #ifdef CONFIG_PROC_FS
 extern void proc_tty_register_driver(struct tty_driver *);




* [PATCH 14/16] sched/wait: Collapse __wait_event_killable()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (12 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 13/16] sched/wait: Collapse __wait_event_interruptible_tty() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 15/16] sched/wait: Collapse __wait_event_hrtimeout() Peter Zijlstra
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-deduplicate-10.patch --]
[-- Type: text/plain, Size: 884 bytes --]

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -582,22 +582,7 @@ do {									\
 
 
 #define __wait_event_killable(wq, condition, ret)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_KILLABLE);		\
-		if (condition)						\
-			break;						\
-		if (!fatal_signal_pending(current)) {			\
-			schedule();					\
-			continue;					\
-		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_KILLABLE, 0, ret, schedule())
 
 /**
  * wait_event_killable - sleep until a condition gets true




* [PATCH 15/16] sched/wait: Collapse __wait_event_hrtimeout()
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (13 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 14/16] sched/wait: Collapse __wait_event_killable() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-02  9:22 ` [PATCH 16/16] sched/wait: Make the __wait_event*() interface more friendly Peter Zijlstra
  2013-10-04 20:44 ` [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-dedup-hrtimer.patch --]
[-- Type: text/plain, Size: 1477 bytes --]

While not a wholesale replacement like the others, we can still reduce
the size of __wait_event_hrtimeout() considerably by noting that its
actual core is identical to what ___wait_event() generates.
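
A minimal caller-side sketch of the interface that stays intact (the
wait queue 'my_wq' and condition 'my_cond' are made up):

  int err = wait_event_hrtimeout(my_wq, my_cond, ms_to_ktime(10));
  if (err == -ETIME)
	pr_debug("hrtimer expired before my_cond became true\n");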

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/wait.h |   15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -338,7 +338,6 @@ do {									\
 #define __wait_event_hrtimeout(wq, condition, timeout, state)		\
 ({									\
 	int __ret = 0;							\
-	DEFINE_WAIT(__wait);						\
 	struct hrtimer_sleeper __t;					\
 									\
 	hrtimer_init_on_stack(&__t.timer, CLOCK_MONOTONIC,		\
@@ -349,25 +348,15 @@ do {									\
 				       current->timer_slack_ns,		\
 				       HRTIMER_MODE_REL);		\
 									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, state);			\
-		if (condition)						\
-			break;						\
-		if (state == TASK_INTERRUPTIBLE &&			\
-		    signal_pending(current)) {				\
-			__ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
+	___wait_event(wq, condition, state, 0, __ret,			\
 		if (!__t.task) {					\
 			__ret = -ETIME;					\
 			break;						\
 		}							\
-		schedule();						\
-	}								\
+		schedule());						\
 									\
 	hrtimer_cancel(&__t.timer);					\
 	destroy_hrtimer_on_stack(&__t.timer);				\
-	finish_wait(&wq, &__wait);					\
 	__ret;								\
 })
 




* [PATCH 16/16] sched/wait: Make the __wait_event*() interface more friendly
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (14 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 15/16] sched/wait: Collapse __wait_event_hrtimeout() Peter Zijlstra
@ 2013-10-02  9:22 ` Peter Zijlstra
  2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
  2013-10-04 20:44 ` [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-02  9:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel, Peter Zijlstra

[-- Attachment #1: peterz-wait-___wait_event-ret.patch --]
[-- Type: text/plain, Size: 12071 bytes --]

Change all __wait_event*() implementations to match the corresponding
wait_event*() signature for convenience.

In particular this does away with the weird 'ret' logic. Since there
are __wait_event*() users outside of wait.h, this requires that we
update them too.
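
A before/after sketch of the calling convention (the wait queue 'my_wq'
and condition 'my_cond' are made up; the rtlx and irda hunks below make
exactly this conversion):

  int err = 0;

  /* old interface: status returned through the extra 'ret' argument */
  __wait_event_interruptible(my_wq, my_cond, err);

  /* new interface: matches wait_event_interruptible() and returns it */
  err = __wait_event_interruptible(my_wq, my_cond);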

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/mips/kernel/rtlx.c         |   19 +++---
 include/linux/tty.h             |   10 +--
 include/linux/wait.h            |  113 +++++++++++++++++++---------------------
 net/irda/af_irda.c              |    5 -
 net/netfilter/ipvs/ip_vs_sync.c |    7 --
 5 files changed, 73 insertions(+), 81 deletions(-)

--- a/arch/mips/kernel/rtlx.c
+++ b/arch/mips/kernel/rtlx.c
@@ -172,8 +172,9 @@ int rtlx_open(int index, int can_sleep)
 	if (rtlx == NULL) {
 		if( (p = vpe_get_shared(tclimit)) == NULL) {
 		    if (can_sleep) {
-			__wait_event_interruptible(channel_wqs[index].lx_queue,
-				(p = vpe_get_shared(tclimit)), ret);
+			ret = __wait_event_interruptible(
+					channel_wqs[index].lx_queue,
+					(p = vpe_get_shared(tclimit)));
 			if (ret)
 				goto out_fail;
 		    } else {
@@ -263,11 +264,10 @@ unsigned int rtlx_read_poll(int index, i
 	/* data available to read? */
 	if (chan->lx_read == chan->lx_write) {
 		if (can_sleep) {
-			int ret = 0;
-
-			__wait_event_interruptible(channel_wqs[index].lx_queue,
+			int ret = __wait_event_interruptible(
+				channel_wqs[index].lx_queue,
 				(chan->lx_read != chan->lx_write) ||
-				sp_stopping, ret);
+				sp_stopping);
 			if (ret)
 				return ret;
 
@@ -440,14 +440,13 @@ static ssize_t file_write(struct file *f
 
 	/* any space left... */
 	if (!rtlx_write_poll(minor)) {
-		int ret = 0;
+		int ret;
 
 		if (file->f_flags & O_NONBLOCK)
 			return -EAGAIN;
 
-		__wait_event_interruptible(channel_wqs[minor].rt_queue,
-					   rtlx_write_poll(minor),
-					   ret);
+		ret = __wait_event_interruptible(channel_wqs[minor].rt_queue,
+					   rtlx_write_poll(minor));
 		if (ret)
 			return ret;
 	}
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -672,14 +672,14 @@ static inline void tty_wait_until_sent_f
 #define wait_event_interruptible_tty(tty, wq, condition)		\
 ({									\
 	int __ret = 0;							\
-	if (!(condition)) {						\
-		__wait_event_interruptible_tty(tty, wq, condition, __ret);	\
-	}								\
+	if (!(condition))						\
+		__ret = __wait_event_interruptible_tty(tty, wq,		\
+						       condition);	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_tty(tty, wq, condition, ret)		\
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+#define __wait_event_interruptible_tty(tty, wq, condition)		\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0,		\
 			tty_unlock(tty);				\
 			schedule();					\
 			tty_lock(tty))
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -179,24 +179,23 @@ wait_queue_head_t *bit_waitqueue(void *,
 #define wake_up_interruptible_sync_poll(x, m)				\
 	__wake_up_sync_key((x), TASK_INTERRUPTIBLE, 1, (void *) (m))
 
-#define ___wait_cond_timeout(condition, ret)				\
+#define ___wait_cond_timeout(condition)					\
 ({									\
  	bool __cond = (condition);					\
- 	if (__cond && !ret)						\
- 		ret = 1;						\
- 	__cond || !ret;							\
+ 	if (__cond && !__ret)						\
+ 		__ret = 1;						\
+ 	__cond || !__ret;						\
 })
 
 #define ___wait_signal_pending(state)					\
 	((state == TASK_INTERRUPTIBLE && signal_pending(current)) ||	\
 	 (state == TASK_KILLABLE && fatal_signal_pending(current)))
 
-#define ___wait_nop_ret		int ret __always_unused
-
 #define ___wait_event(wq, condition, state, exclusive, ret, cmd)	\
-do {									\
+({									\
 	__label__ __out;						\
 	DEFINE_WAIT(__wait);						\
+	long __ret = ret;						\
 									\
 	for (;;) {							\
 		if (exclusive)						\
@@ -208,7 +207,7 @@ do {									\
 			break;						\
 									\
 		if (___wait_signal_pending(state)) {			\
-			ret = -ERESTARTSYS;				\
+			__ret = -ERESTARTSYS;				\
 			if (exclusive) {				\
 				abort_exclusive_wait(&wq, &__wait, 	\
 						     state, NULL); 	\
@@ -220,12 +219,12 @@ do {									\
 		cmd;							\
 	}								\
 	finish_wait(&wq, &__wait);					\
-__out:	;								\
-} while (0)
+__out:	__ret;								\
+})
 
 #define __wait_event(wq, condition) 					\
-	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
-		      ___wait_nop_ret, schedule())
+	(void)___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
+			    schedule())
 
 /**
  * wait_event - sleep until a condition gets true
@@ -246,10 +245,10 @@ do {									\
 	__wait_event(wq, condition);					\
 } while (0)
 
-#define __wait_event_timeout(wq, condition, ret)			\
-	___wait_event(wq, ___wait_cond_timeout(condition, ret), 	\
-		      TASK_UNINTERRUPTIBLE, 0, ret,			\
-		      ret = schedule_timeout(ret))
+#define __wait_event_timeout(wq, condition, timeout)			\
+	___wait_event(wq, ___wait_cond_timeout(condition),		\
+		      TASK_UNINTERRUPTIBLE, 0, timeout,			\
+		      __ret = schedule_timeout(__ret))
 
 /**
  * wait_event_timeout - sleep until a condition gets true or a timeout elapses
@@ -272,12 +271,12 @@ do {									\
 ({									\
 	long __ret = timeout;						\
 	if (!(condition)) 						\
-		__wait_event_timeout(wq, condition, __ret);		\
+		__ret = __wait_event_timeout(wq, condition, timeout);	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible(wq, condition, ret)			\
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+#define __wait_event_interruptible(wq, condition)			\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0,		\
 		      schedule())
 
 /**
@@ -299,14 +298,14 @@ do {									\
 ({									\
 	int __ret = 0;							\
 	if (!(condition))						\
-		__wait_event_interruptible(wq, condition, __ret);	\
+		__ret = __wait_event_interruptible(wq, condition);	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_timeout(wq, condition, ret)		\
-	___wait_event(wq, ___wait_cond_timeout(condition, ret),		\
-		      TASK_INTERRUPTIBLE, 0, ret,			\
-		      ret = schedule_timeout(ret))
+#define __wait_event_interruptible_timeout(wq, condition, timeout)	\
+	___wait_event(wq, ___wait_cond_timeout(condition),		\
+		      TASK_INTERRUPTIBLE, 0, timeout,			\
+		      __ret = schedule_timeout(__ret))
 
 /**
  * wait_event_interruptible_timeout - sleep until a condition gets true or a timeout elapses
@@ -330,7 +329,8 @@ do {									\
 ({									\
 	long __ret = timeout;						\
 	if (!(condition))						\
-		__wait_event_interruptible_timeout(wq, condition, __ret); \
+		__ret = __wait_event_interruptible_timeout(wq, 		\
+						condition, timeout);	\
 	__ret;								\
 })
 
@@ -347,7 +347,7 @@ do {									\
 				       current->timer_slack_ns,		\
 				       HRTIMER_MODE_REL);		\
 									\
-	___wait_event(wq, condition, state, 0, __ret,			\
+	__ret = ___wait_event(wq, condition, state, 0, 0,		\
 		if (!__t.task) {					\
 			__ret = -ETIME;					\
 			break;						\
@@ -409,15 +409,15 @@ do {									\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_exclusive(wq, condition, ret)	\
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, ret,	\
+#define __wait_event_interruptible_exclusive(wq, condition)		\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, 0,		\
 		      schedule())
 
 #define wait_event_interruptible_exclusive(wq, condition)		\
 ({									\
 	int __ret = 0;							\
 	if (!(condition))						\
-		__wait_event_interruptible_exclusive(wq, condition, __ret);\
+		__ret = __wait_event_interruptible_exclusive(wq, condition);\
 	__ret;								\
 })
 
@@ -570,8 +570,8 @@ do {									\
 
 
 
-#define __wait_event_killable(wq, condition, ret)			\
-	___wait_event(wq, condition, TASK_KILLABLE, 0, ret, schedule())
+#define __wait_event_killable(wq, condition)				\
+	___wait_event(wq, condition, TASK_KILLABLE, 0, 0, schedule())
 
 /**
  * wait_event_killable - sleep until a condition gets true
@@ -592,18 +592,17 @@ do {									\
 ({									\
 	int __ret = 0;							\
 	if (!(condition))						\
-		__wait_event_killable(wq, condition, __ret);		\
+		__ret = __wait_event_killable(wq, condition);		\
 	__ret;								\
 })
 
 
 #define __wait_event_lock_irq(wq, condition, lock, cmd)			\
-	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
-		      ___wait_nop_ret,					\
-		      spin_unlock_irq(&lock);				\
-		      cmd;						\
-		      schedule();					\
-		      spin_lock_irq(&lock))
+	(void)___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
+			    spin_unlock_irq(&lock);			\
+			    cmd;					\
+			    schedule();					\
+			    spin_lock_irq(&lock))
 
 /**
  * wait_event_lock_irq_cmd - sleep until a condition gets true. The
@@ -663,11 +662,11 @@ do {									\
 } while (0)
 
 
-#define __wait_event_interruptible_lock_irq(wq, condition, lock, ret, cmd) \
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	   \
-		      spin_unlock_irq(&lock);				   \
-		      cmd;						   \
-		      schedule();					   \
+#define __wait_event_interruptible_lock_irq(wq, condition, lock, cmd)	\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0,	   	\
+		      spin_unlock_irq(&lock);				\
+		      cmd;						\
+		      schedule();					\
 		      spin_lock_irq(&lock))
 
 /**
@@ -698,10 +697,9 @@ do {									\
 #define wait_event_interruptible_lock_irq_cmd(wq, condition, lock, cmd)	\
 ({									\
 	int __ret = 0;							\
-									\
 	if (!(condition))						\
-		__wait_event_interruptible_lock_irq(wq, condition,	\
-						    lock, __ret, cmd);	\
+		__ret = __wait_event_interruptible_lock_irq(wq, 	\
+						condition, lock, cmd);	\
 	__ret;								\
 })
 
@@ -730,18 +728,18 @@ do {									\
 #define wait_event_interruptible_lock_irq(wq, condition, lock)		\
 ({									\
 	int __ret = 0;							\
-									\
 	if (!(condition))						\
-		__wait_event_interruptible_lock_irq(wq, condition,	\
-						    lock, __ret, );	\
+		__ret = __wait_event_interruptible_lock_irq(wq,		\
+						condition, lock,)	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_lock_irq_timeout(wq, condition, lock, ret) \
-	___wait_event(wq, ___wait_cond_timeout(condition, ret),		      \
-		      TASK_INTERRUPTIBLE, 0, ret,	      		      \
-		      spin_unlock_irq(&lock);				      \
-		      ret = schedule_timeout(ret);			      \
+#define __wait_event_interruptible_lock_irq_timeout(wq, condition, 	\
+						    lock, timeout) 	\
+	___wait_event(wq, ___wait_cond_timeout(condition),		\
+		      TASK_INTERRUPTIBLE, 0, ret,	      		\
+		      spin_unlock_irq(&lock);				\
+		      __ret = schedule_timeout(__ret);			\
 		      spin_lock_irq(&lock));
 
 /**
@@ -771,11 +769,10 @@ do {									\
 #define wait_event_interruptible_lock_irq_timeout(wq, condition, lock,	\
 						  timeout)		\
 ({									\
-	int __ret = timeout;						\
-									\
+	long __ret = timeout;						\
 	if (!(condition))						\
-		__wait_event_interruptible_lock_irq_timeout(		\
-					wq, condition, lock, __ret);	\
+		__ret = __wait_event_interruptible_lock_irq_timeout(	\
+					wq, condition, lock, timeout);	\
 	__ret;								\
 })
 
--- a/net/irda/af_irda.c
+++ b/net/irda/af_irda.c
@@ -2563,9 +2563,8 @@ static int irda_getsockopt(struct socket
 				  jiffies + msecs_to_jiffies(val));
 
 			/* Wait for IR-LMP to call us back */
-			__wait_event_interruptible(self->query_wait,
-			      (self->cachedaddr != 0 || self->errno == -ETIME),
-						   err);
+			err = __wait_event_interruptible(self->query_wait,
+			      (self->cachedaddr != 0 || self->errno == -ETIME));
 
 			/* If watchdog is still activated, kill it! */
 			del_timer(&(self->watchdog));
--- a/net/netfilter/ipvs/ip_vs_sync.c
+++ b/net/netfilter/ipvs/ip_vs_sync.c
@@ -1637,12 +1637,9 @@ static int sync_thread_master(void *data
 			continue;
 		}
 		while (ip_vs_send_sync_msg(tinfo->sock, sb->mesg) < 0) {
-			int ret = 0;
-
-			__wait_event_interruptible(*sk_sleep(sk),
+			int ret = __wait_event_interruptible(*sk_sleep(sk),
 						   sock_writeable(sk) ||
-						   kthread_should_stop(),
-						   ret);
+						   kthread_should_stop());
 			if (unlikely(kthread_should_stop()))
 				goto done;
 		}




* [tip:sched/core] sched/wait: Make the signal_pending() checks consistent
  2013-10-02  9:22 ` [PATCH 01/16] sched/wait: Make the signal_pending() checks consistent Peter Zijlstra
@ 2013-10-04 17:33   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:33 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  2f2a2b60adf368bacd6acd2116c01e32caf936c4
Gitweb:     http://git.kernel.org/tip/2f2a2b60adf368bacd6acd2116c01e32caf936c4
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:18 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:44 +0200

sched/wait: Make the signal_pending() checks consistent

There are two patterns for checking signals in the __wait_event*() macros:

  if (!signal_pending(current)) {
	schedule();
	continue;
  }
  ret = -ERESTARTSYS;
  break;

And the more natural:

  if (signal_pending(current)) {
	ret = -ERESTARTSYS;
	break;
  }
  schedule();

Change them all into the latter form.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092527.956416254@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/tty.h  | 13 ++++++-------
 include/linux/wait.h | 35 ++++++++++++++++-------------------
 2 files changed, 22 insertions(+), 26 deletions(-)

diff --git a/include/linux/tty.h b/include/linux/tty.h
index 64f8646..0503729 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -686,14 +686,13 @@ do {									\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
 		if (condition)						\
 			break;						\
-		if (!signal_pending(current)) {				\
-			tty_unlock(tty);					\
-			schedule();					\
-			tty_lock(tty);					\
-			continue;					\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			break;						\
 		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
+		tty_unlock(tty);					\
+		schedule();						\
+		tty_lock(tty);						\
 	}								\
 	finish_wait(&wq, &__wait);					\
 } while (0)
diff --git a/include/linux/wait.h b/include/linux/wait.h
index a67fc16..ccf0c52 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -261,12 +261,11 @@ do {									\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
 		if (condition)						\
 			break;						\
-		if (!signal_pending(current)) {				\
-			schedule();					\
-			continue;					\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			break;						\
 		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
+		schedule();						\
 	}								\
 	finish_wait(&wq, &__wait);					\
 } while (0)
@@ -302,14 +301,13 @@ do {									\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
 		if (condition)						\
 			break;						\
-		if (!signal_pending(current)) {				\
-			ret = schedule_timeout(ret);			\
-			if (!ret)					\
-				break;					\
-			continue;					\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			break;						\
 		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
+		ret = schedule_timeout(ret);				\
+		if (!ret)						\
+			break;						\
 	}								\
 	if (!ret && (condition))					\
 		ret = 1;						\
@@ -439,14 +437,13 @@ do {									\
 			finish_wait(&wq, &__wait);			\
 			break;						\
 		}							\
-		if (!signal_pending(current)) {				\
-			schedule();					\
-			continue;					\
-		}							\
-		ret = -ERESTARTSYS;					\
-		abort_exclusive_wait(&wq, &__wait, 			\
+		if (signal_pending(current)) {				\
+			ret = -ERESTARTSYS;				\
+			abort_exclusive_wait(&wq, &__wait, 		\
 				TASK_INTERRUPTIBLE, NULL);		\
-		break;							\
+			break;						\
+		}							\
+		schedule();						\
 	}								\
 } while (0)
 


* [tip:sched/core] sched/wait: Change timeout logic
  2013-10-02  9:22 ` [PATCH 02/16] sched/wait: Change timeout logic Peter Zijlstra
@ 2013-10-04 17:33   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:33 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  2953ef246b058989657e1e77b36b67566ac06f7b
Gitweb:     http://git.kernel.org/tip/2953ef246b058989657e1e77b36b67566ac06f7b
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:19 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:44 +0200

sched/wait: Change timeout logic

Commit 4c663cf ("wait: fix false timeouts when using
wait_event_timeout()") introduced an additional condition check after
a timeout, but there are a few issues:

 - it forgot one site
 - it put the check after the main loop, not at the actual timeout
   check.

Cure both by wrapping the condition (as suggested by Oleg); this also
avoids double evaluation of 'condition', which could be quite big.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.028892896@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index ccf0c52..b2afd66 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -179,6 +179,14 @@ wait_queue_head_t *bit_waitqueue(void *, int);
 #define wake_up_interruptible_sync_poll(x, m)				\
 	__wake_up_sync_key((x), TASK_INTERRUPTIBLE, 1, (void *) (m))
 
+#define ___wait_cond_timeout(condition, ret)				\
+({									\
+ 	bool __cond = (condition);					\
+ 	if (__cond && !ret)						\
+ 		ret = 1;						\
+ 	__cond || !ret;							\
+})
+
 #define __wait_event(wq, condition) 					\
 do {									\
 	DEFINE_WAIT(__wait);						\
@@ -217,14 +225,10 @@ do {									\
 									\
 	for (;;) {							\
 		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (condition)						\
+		if (___wait_cond_timeout(condition, ret))		\
 			break;						\
 		ret = schedule_timeout(ret);				\
-		if (!ret)						\
-			break;						\
 	}								\
-	if (!ret && (condition))					\
-		ret = 1;						\
 	finish_wait(&wq, &__wait);					\
 } while (0)
 
@@ -299,18 +303,14 @@ do {									\
 									\
 	for (;;) {							\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
+		if (___wait_cond_timeout(condition, ret))		\
 			break;						\
 		if (signal_pending(current)) {				\
 			ret = -ERESTARTSYS;				\
 			break;						\
 		}							\
 		ret = schedule_timeout(ret);				\
-		if (!ret)						\
-			break;						\
 	}								\
-	if (!ret && (condition))					\
-		ret = 1;						\
 	finish_wait(&wq, &__wait);					\
 } while (0)
 
@@ -815,7 +815,7 @@ do {									\
 									\
 	for (;;) {							\
 		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
+		if (___wait_cond_timeout(condition, ret))		\
 			break;						\
 		if (signal_pending(current)) {				\
 			ret = -ERESTARTSYS;				\
@@ -824,8 +824,6 @@ do {									\
 		spin_unlock_irq(&lock);					\
 		ret = schedule_timeout(ret);				\
 		spin_lock_irq(&lock);					\
-		if (!ret)						\
-			break;						\
 	}								\
 	finish_wait(&wq, &__wait);					\
 } while (0)

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Change the wait_exclusive control flow
  2013-10-02  9:22 ` [PATCH 03/16] sched/wait: Change the wait_exclusive control flow Peter Zijlstra
@ 2013-10-04 17:33   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:33 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  bb632bc44970f75b66df102e831a4fc0692e9159
Gitweb:     http://git.kernel.org/tip/bb632bc44970f75b66df102e831a4fc0692e9159
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:20 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:45 +0200

sched/wait: Change the wait_exclusive control flow

Purely a preparatory patch; it changes the control flow to match what
will soon be generated by generic code so that that patch can be a
unity transform.
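
The abort path here starts relying on a local label so it can skip the
final finish_wait() even when the macro is expanded more than once in a
function. A stripped-down user-space sketch of that GCC pattern (the
DO_OR_BAIL macro and its contents are invented for illustration):

	#include <stdio.h>

	#define DO_OR_BAIL(cond)					\
	do {								\
		__label__ __out;  /* label scoped to this expansion */	\
		if (!(cond)) {						\
			puts("bailing out early");			\
			goto __out;					\
		}							\
		puts("normal completion path");				\
	__out:	;							\
	} while (0)

	int main(void)
	{
		DO_OR_BAIL(1);	/* both expansions can coexist because	*/
		DO_OR_BAIL(0);	/* each one gets its own __out label	*/
		return 0;
	}

Without __label__, two expansions of the macro in the same function
would each define a plain __out: label and fail to compile as a
duplicate label.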

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.107994763@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index b2afd66..7d7819d 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -428,23 +428,24 @@ do {									\
 
 #define __wait_event_interruptible_exclusive(wq, condition, ret)	\
 do {									\
+	__label__ __out;						\
 	DEFINE_WAIT(__wait);						\
 									\
 	for (;;) {							\
 		prepare_to_wait_exclusive(&wq, &__wait,			\
 					TASK_INTERRUPTIBLE);		\
-		if (condition) {					\
-			finish_wait(&wq, &__wait);			\
+		if (condition)						\
 			break;						\
-		}							\
 		if (signal_pending(current)) {				\
 			ret = -ERESTARTSYS;				\
 			abort_exclusive_wait(&wq, &__wait, 		\
 				TASK_INTERRUPTIBLE, NULL);		\
-			break;						\
+			goto __out;					\
 		}							\
 		schedule();						\
 	}								\
+	finish_wait(&wq, &__wait);					\
+__out:	;								\
 } while (0)
 
 #define wait_event_interruptible_exclusive(wq, condition)		\

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Introduce ___wait_event()
  2013-10-02  9:22 ` [PATCH 04/16] sched/wait: Introduce ___wait_event() Peter Zijlstra
@ 2013-10-04 17:33   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:33 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  41a1431b178c3b731d6dfc40b987528b333dd93e
Gitweb:     http://git.kernel.org/tip/41a1431b178c3b731d6dfc40b987528b333dd93e
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:21 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:46 +0200

sched/wait: Introduce ___wait_event()

There's far too much duplication in the __wait_event macros; in order
to fix this, introduce ___wait_event(), a macro capable of replacing
most of the others.

With the previous patches having made the various __wait_event*()
implementations more uniform, we can now collapse the lot without also
changing the generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.181897111@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 7d7819d..29d0249 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -187,6 +187,42 @@ wait_queue_head_t *bit_waitqueue(void *, int);
  	__cond || !ret;							\
 })
 
+#define ___wait_signal_pending(state)					\
+	((state == TASK_INTERRUPTIBLE && signal_pending(current)) ||	\
+	 (state == TASK_KILLABLE && fatal_signal_pending(current)))
+
+#define ___wait_nop_ret		int ret __always_unused
+
+#define ___wait_event(wq, condition, state, exclusive, ret, cmd)	\
+do {									\
+	__label__ __out;						\
+	DEFINE_WAIT(__wait);						\
+									\
+	for (;;) {							\
+		if (exclusive)						\
+			prepare_to_wait_exclusive(&wq, &__wait, state); \
+		else							\
+			prepare_to_wait(&wq, &__wait, state);		\
+									\
+		if (condition)						\
+			break;						\
+									\
+		if (___wait_signal_pending(state)) {			\
+			ret = -ERESTARTSYS;				\
+			if (exclusive) {				\
+				abort_exclusive_wait(&wq, &__wait, 	\
+						     state, NULL); 	\
+				goto __out;				\
+			}						\
+			break;						\
+		}							\
+									\
+		cmd;							\
+	}								\
+	finish_wait(&wq, &__wait);					\
+__out:	;								\
+} while (0)
+
 #define __wait_event(wq, condition) 					\
 do {									\
 	DEFINE_WAIT(__wait);						\

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event()
  2013-10-02  9:22 ` [PATCH 05/16] sched/wait: Collapse __wait_event() Peter Zijlstra
@ 2013-10-04 17:34   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:34 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  854267f4384243b19c03a2942e84f06f2beb0952
Gitweb:     http://git.kernel.org/tip/854267f4384243b19c03a2942e84f06f2beb0952
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:22 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:46 +0200

sched/wait: Collapse __wait_event()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.254863348@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 29d0249..68e3a62 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -224,17 +224,8 @@ __out:	;								\
 } while (0)
 
 #define __wait_event(wq, condition) 					\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		schedule();						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
+		      ___wait_nop_ret, schedule())
 
 /**
  * wait_event - sleep until a condition gets true

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_timeout()
  2013-10-02  9:22 ` [PATCH 06/16] sched/wait: Collapse __wait_event_timeout() Peter Zijlstra
@ 2013-10-04 17:34   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:34 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  ddc1994b8217527e1818f690f17597fc9cedf81b
Gitweb:     http://git.kernel.org/tip/ddc1994b8217527e1818f690f17597fc9cedf81b
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:23 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:47 +0200

sched/wait: Collapse __wait_event_timeout()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.325264677@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 68e3a62..546b94e 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -247,17 +247,9 @@ do {									\
 } while (0)
 
 #define __wait_event_timeout(wq, condition, ret)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (___wait_cond_timeout(condition, ret))		\
-			break;						\
-		ret = schedule_timeout(ret);				\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, ___wait_cond_timeout(condition, ret), 	\
+		      TASK_UNINTERRUPTIBLE, 0, ret,			\
+		      ret = schedule_timeout(ret))
 
 /**
  * wait_event_timeout - sleep until a condition gets true or a timeout elapses

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_interruptible()
  2013-10-02  9:22 ` [PATCH 07/16] sched/wait: Collapse __wait_event_interruptible() Peter Zijlstra
@ 2013-10-04 17:34   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:34 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  f13f4c41c9cf9cd61c896e46e4e7ba2687e2af9c
Gitweb:     http://git.kernel.org/tip/f13f4c41c9cf9cd61c896e46e4e7ba2687e2af9c
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:24 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:48 +0200

sched/wait: Collapse __wait_event_interruptible()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.396949919@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 546b94e..39e4bbd 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -277,21 +277,8 @@ do {									\
 })
 
 #define __wait_event_interruptible(wq, condition, ret)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		schedule();						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+		      schedule())
 
 /**
  * wait_event_interruptible - sleep until a condition gets true

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_interruptible_timeout()
  2013-10-02  9:22 ` [PATCH 08/16] sched/wait: Collapse __wait_event_interruptible_timeout() Peter Zijlstra
@ 2013-10-04 17:34   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:34 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  c2ebb1fb4eddf3d1d66fe31d1e89e83ee211b81c
Gitweb:     http://git.kernel.org/tip/c2ebb1fb4eddf3d1d66fe31d1e89e83ee211b81c
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:25 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:49 +0200

sched/wait: Collapse __wait_event_interruptible_timeout()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.469616907@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 39e4bbd..a79fb15 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -304,21 +304,9 @@ do {									\
 })
 
 #define __wait_event_interruptible_timeout(wq, condition, ret)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (___wait_cond_timeout(condition, ret))		\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		ret = schedule_timeout(ret);				\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, ___wait_cond_timeout(condition, ret),		\
+		      TASK_INTERRUPTIBLE, 0, ret,			\
+		      ret = schedule_timeout(ret))
 
 /**
  * wait_event_interruptible_timeout - sleep until a condition gets true or a timeout elapses

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_interruptible_exclusive()
  2013-10-02  9:22 ` [PATCH 09/16] sched/wait: Collapse __wait_event_interruptible_exclusive() Peter Zijlstra
@ 2013-10-04 17:34   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:34 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  48c2521717b39cb6904941ec2847d9775669207a
Gitweb:     http://git.kernel.org/tip/48c2521717b39cb6904941ec2847d9775669207a
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:26 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:49 +0200

sched/wait: Collapse __wait_event_interruptible_exclusive()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.541716442@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 22 ++--------------------
 1 file changed, 2 insertions(+), 20 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index a79fb15..c4ab172 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -421,26 +421,8 @@ do {									\
 })
 
 #define __wait_event_interruptible_exclusive(wq, condition, ret)	\
-do {									\
-	__label__ __out;						\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait_exclusive(&wq, &__wait,			\
-					TASK_INTERRUPTIBLE);		\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			abort_exclusive_wait(&wq, &__wait, 		\
-				TASK_INTERRUPTIBLE, NULL);		\
-			goto __out;					\
-		}							\
-		schedule();						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-__out:	;								\
-} while (0)
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, ret,	\
+		      schedule())
 
 #define wait_event_interruptible_exclusive(wq, condition)		\
 ({									\

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_lock_irq()
  2013-10-02  9:22 ` [PATCH 10/16] sched/wait: Collapse __wait_event_lock_irq() Peter Zijlstra
@ 2013-10-04 17:34   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:34 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  13cb5042a4b80396f77cf5d599d2c002c57b89dc
Gitweb:     http://git.kernel.org/tip/13cb5042a4b80396f77cf5d599d2c002c57b89dc
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:27 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:14:50 +0200

sched/wait: Collapse __wait_event_lock_irq()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.612813379@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index c4ab172..d64918e 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -624,20 +624,12 @@ do {									\
 
 
 #define __wait_event_lock_irq(wq, condition, lock, cmd)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		spin_unlock_irq(&lock);					\
-		cmd;							\
-		schedule();						\
-		spin_lock_irq(&lock);					\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
+		      ___wait_nop_ret,					\
+		      spin_unlock_irq(&lock);				\
+		      cmd;						\
+		      schedule();					\
+		      spin_lock_irq(&lock))
 
 /**
  * wait_event_lock_irq_cmd - sleep until a condition gets true. The

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_interruptible_lock_irq()
  2013-10-02  9:22 ` [PATCH 11/16] sched/wait: Collapse __wait_event_interruptible_lock_irq() Peter Zijlstra
@ 2013-10-04 17:35   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:35 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  8fbd88fa1717601ef91ced49a32f24786b167065
Gitweb:     http://git.kernel.org/tip/8fbd88fa1717601ef91ced49a32f24786b167065
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:28 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:16:19 +0200

sched/wait: Collapse __wait_event_interruptible_lock_irq()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.686006009@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 26 ++++++--------------------
 1 file changed, 6 insertions(+), 20 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index d64918e..a577a85 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -689,26 +689,12 @@ do {									\
 } while (0)
 
 
-#define __wait_event_interruptible_lock_irq(wq, condition,		\
-					    lock, ret, cmd)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		spin_unlock_irq(&lock);					\
-		cmd;							\
-		schedule();						\
-		spin_lock_irq(&lock);					\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+#define __wait_event_interruptible_lock_irq(wq, condition, lock, ret, cmd) \
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	   \
+		      spin_unlock_irq(&lock);				   \
+		      cmd;						   \
+		      schedule();					   \
+		      spin_lock_irq(&lock))
 
 /**
  * wait_event_interruptible_lock_irq_cmd - sleep until a condition gets true.

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_interruptible_lock_irq_timeout()
  2013-10-02  9:22 ` [PATCH 12/16] sched/wait: Collapse __wait_event_interruptible_lock_irq_timeout() Peter Zijlstra
@ 2013-10-04 17:35   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:35 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  a1dc6852ac5eecdcd3122ae01703183a3e88e979
Gitweb:     http://git.kernel.org/tip/a1dc6852ac5eecdcd3122ae01703183a3e88e979
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:29 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:16:20 +0200

sched/wait: Collapse __wait_event_interruptible_lock_irq_timeout()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.759956109@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index a577a85..5d5408b 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -763,25 +763,12 @@ do {									\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_lock_irq_timeout(wq, condition,	\
-						    lock, ret)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (___wait_cond_timeout(condition, ret))		\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		spin_unlock_irq(&lock);					\
-		ret = schedule_timeout(ret);				\
-		spin_lock_irq(&lock);					\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+#define __wait_event_interruptible_lock_irq_timeout(wq, condition, lock, ret) \
+	___wait_event(wq, ___wait_cond_timeout(condition, ret),		      \
+		      TASK_INTERRUPTIBLE, 0, ret,	      		      \
+		      spin_unlock_irq(&lock);				      \
+		      ret = schedule_timeout(ret);			      \
+		      spin_lock_irq(&lock));
 
 /**
  * wait_event_interruptible_lock_irq_timeout - sleep until a condition gets true or a timeout elapses.

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_interruptible_tty()
  2013-10-02  9:22 ` [PATCH 13/16] sched/wait: Collapse __wait_event_interruptible_tty() Peter Zijlstra
@ 2013-10-04 17:35   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:35 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  0d1e1c8a430450a3ce61a842cec64f9e2a9f3b05
Gitweb:     http://git.kernel.org/tip/0d1e1c8a430450a3ce61a842cec64f9e2a9f3b05
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:30 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:16:20 +0200

sched/wait: Collapse __wait_event_interruptible_tty()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.831085521@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/tty.h | 21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/include/linux/tty.h b/include/linux/tty.h
index 0503729..6e80329 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -679,23 +679,10 @@ static inline void tty_wait_until_sent_from_close(struct tty_struct *tty,
 })
 
 #define __wait_event_interruptible_tty(tty, wq, condition, ret)		\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);	\
-		if (condition)						\
-			break;						\
-		if (signal_pending(current)) {				\
-			ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
-		tty_unlock(tty);					\
-		schedule();						\
-		tty_lock(tty);						\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+			tty_unlock(tty);				\
+			schedule();					\
+			tty_lock(tty))
 
 #ifdef CONFIG_PROC_FS
 extern void proc_tty_register_driver(struct tty_driver *);

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_killable()
  2013-10-02  9:22 ` [PATCH 14/16] sched/wait: Collapse __wait_event_killable() Peter Zijlstra
@ 2013-10-04 17:35   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:35 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  cf7361fd961b6f0510572af6cf8ca3ffba07018b
Gitweb:     http://git.kernel.org/tip/cf7361fd961b6f0510572af6cf8ca3ffba07018b
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:31 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:16:21 +0200

sched/wait: Collapse __wait_event_killable()

Reduce macro complexity by using the new ___wait_event() helper.
No change in behaviour, identical generated code.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.898691966@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 5d5408b..ec3683e 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -582,22 +582,7 @@ do {									\
 
 
 #define __wait_event_killable(wq, condition, ret)			\
-do {									\
-	DEFINE_WAIT(__wait);						\
-									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, TASK_KILLABLE);		\
-		if (condition)						\
-			break;						\
-		if (!fatal_signal_pending(current)) {			\
-			schedule();					\
-			continue;					\
-		}							\
-		ret = -ERESTARTSYS;					\
-		break;							\
-	}								\
-	finish_wait(&wq, &__wait);					\
-} while (0)
+	___wait_event(wq, condition, TASK_KILLABLE, 0, ret, schedule())
 
 /**
  * wait_event_killable - sleep until a condition gets true

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Collapse __wait_event_hrtimeout()
  2013-10-02  9:22 ` [PATCH 15/16] sched/wait: Collapse __wait_event_hrtimeout() Peter Zijlstra
@ 2013-10-04 17:35   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:35 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  ebdc195f2ec68576876216081035293e37318e86
Gitweb:     http://git.kernel.org/tip/ebdc195f2ec68576876216081035293e37318e86
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:32 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:16:22 +0200

sched/wait: Collapse __wait_event_hrtimeout()

While not a wholesale replacement like the others, we can still reduce
the size of __wait_event_hrtimeout() considerably by noting that its
actual core is identical to what ___wait_event() generates.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092528.972793648@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/wait.h | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index ec3683e..c065e8a 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -337,7 +337,6 @@ do {									\
 #define __wait_event_hrtimeout(wq, condition, timeout, state)		\
 ({									\
 	int __ret = 0;							\
-	DEFINE_WAIT(__wait);						\
 	struct hrtimer_sleeper __t;					\
 									\
 	hrtimer_init_on_stack(&__t.timer, CLOCK_MONOTONIC,		\
@@ -348,25 +347,15 @@ do {									\
 				       current->timer_slack_ns,		\
 				       HRTIMER_MODE_REL);		\
 									\
-	for (;;) {							\
-		prepare_to_wait(&wq, &__wait, state);			\
-		if (condition)						\
-			break;						\
-		if (state == TASK_INTERRUPTIBLE &&			\
-		    signal_pending(current)) {				\
-			__ret = -ERESTARTSYS;				\
-			break;						\
-		}							\
+	___wait_event(wq, condition, state, 0, __ret,			\
 		if (!__t.task) {					\
 			__ret = -ETIME;					\
 			break;						\
 		}							\
-		schedule();						\
-	}								\
+		schedule());						\
 									\
 	hrtimer_cancel(&__t.timer);					\
 	destroy_hrtimer_on_stack(&__t.timer);				\
-	finish_wait(&wq, &__wait);					\
 	__ret;								\
 })
 

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [tip:sched/core] sched/wait: Make the __wait_event*() interface more friendly
  2013-10-02  9:22 ` [PATCH 16/16] sched/wait: Make the __wait_event*() interface more friendly Peter Zijlstra
@ 2013-10-04 17:35   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 56+ messages in thread
From: tip-bot for Peter Zijlstra @ 2013-10-04 17:35 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, oleg, tglx

Commit-ID:  35a2af94c7ce7130ca292c68b1d27fcfdb648f6b
Gitweb:     http://git.kernel.org/tip/35a2af94c7ce7130ca292c68b1d27fcfdb648f6b
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 2 Oct 2013 11:22:33 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 4 Oct 2013 10:16:25 +0200

sched/wait: Make the __wait_event*() interface more friendly

Change all __wait_event*() implementations to match the corresponding
wait_event*() signature for convenience.

In particular, this does away with the weird 'ret' output-argument
logic. Since there are existing __wait_event*() users, this requires
updating them too.
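
In caller terms the change looks roughly like this (a sketch of the
shape only, not a literal hunk from the series):

	/* before: result passed back through a 'ret' output argument */
	int ret = 0;
	__wait_event_interruptible(wq, condition, ret);
	if (ret)
		return ret;

	/* after: the macro itself evaluates to the result */
	int ret = __wait_event_interruptible(wq, condition);
	if (ret)
		return ret;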

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131002092529.042563462@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/mips/kernel/rtlx.c         |  19 ++++---
 include/linux/tty.h             |  10 ++--
 include/linux/wait.h            | 113 +++++++++++++++++++---------------------
 net/irda/af_irda.c              |   5 +-
 net/netfilter/ipvs/ip_vs_sync.c |   7 +--
 5 files changed, 73 insertions(+), 81 deletions(-)

diff --git a/arch/mips/kernel/rtlx.c b/arch/mips/kernel/rtlx.c
index d763f11..2c12ea1 100644
--- a/arch/mips/kernel/rtlx.c
+++ b/arch/mips/kernel/rtlx.c
@@ -172,8 +172,9 @@ int rtlx_open(int index, int can_sleep)
 	if (rtlx == NULL) {
 		if( (p = vpe_get_shared(tclimit)) == NULL) {
 		    if (can_sleep) {
-			__wait_event_interruptible(channel_wqs[index].lx_queue,
-				(p = vpe_get_shared(tclimit)), ret);
+			ret = __wait_event_interruptible(
+					channel_wqs[index].lx_queue,
+					(p = vpe_get_shared(tclimit)));
 			if (ret)
 				goto out_fail;
 		    } else {
@@ -263,11 +264,10 @@ unsigned int rtlx_read_poll(int index, int can_sleep)
 	/* data available to read? */
 	if (chan->lx_read == chan->lx_write) {
 		if (can_sleep) {
-			int ret = 0;
-
-			__wait_event_interruptible(channel_wqs[index].lx_queue,
+			int ret = __wait_event_interruptible(
+				channel_wqs[index].lx_queue,
 				(chan->lx_read != chan->lx_write) ||
-				sp_stopping, ret);
+				sp_stopping);
 			if (ret)
 				return ret;
 
@@ -440,14 +440,13 @@ static ssize_t file_write(struct file *file, const char __user * buffer,
 
 	/* any space left... */
 	if (!rtlx_write_poll(minor)) {
-		int ret = 0;
+		int ret;
 
 		if (file->f_flags & O_NONBLOCK)
 			return -EAGAIN;
 
-		__wait_event_interruptible(channel_wqs[minor].rt_queue,
-					   rtlx_write_poll(minor),
-					   ret);
+		ret = __wait_event_interruptible(channel_wqs[minor].rt_queue,
+					   rtlx_write_poll(minor));
 		if (ret)
 			return ret;
 	}
diff --git a/include/linux/tty.h b/include/linux/tty.h
index 6e80329..633cac7 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -672,14 +672,14 @@ static inline void tty_wait_until_sent_from_close(struct tty_struct *tty,
 #define wait_event_interruptible_tty(tty, wq, condition)		\
 ({									\
 	int __ret = 0;							\
-	if (!(condition)) {						\
-		__wait_event_interruptible_tty(tty, wq, condition, __ret);	\
-	}								\
+	if (!(condition))						\
+		__ret = __wait_event_interruptible_tty(tty, wq,		\
+						       condition);	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_tty(tty, wq, condition, ret)		\
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+#define __wait_event_interruptible_tty(tty, wq, condition)		\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0,		\
 			tty_unlock(tty);				\
 			schedule();					\
 			tty_lock(tty))
diff --git a/include/linux/wait.h b/include/linux/wait.h
index c065e8a..bd4bd7b 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -179,24 +179,23 @@ wait_queue_head_t *bit_waitqueue(void *, int);
 #define wake_up_interruptible_sync_poll(x, m)				\
 	__wake_up_sync_key((x), TASK_INTERRUPTIBLE, 1, (void *) (m))
 
-#define ___wait_cond_timeout(condition, ret)				\
+#define ___wait_cond_timeout(condition)					\
 ({									\
  	bool __cond = (condition);					\
- 	if (__cond && !ret)						\
- 		ret = 1;						\
- 	__cond || !ret;							\
+ 	if (__cond && !__ret)						\
+ 		__ret = 1;						\
+ 	__cond || !__ret;						\
 })
 
 #define ___wait_signal_pending(state)					\
 	((state == TASK_INTERRUPTIBLE && signal_pending(current)) ||	\
 	 (state == TASK_KILLABLE && fatal_signal_pending(current)))
 
-#define ___wait_nop_ret		int ret __always_unused
-
 #define ___wait_event(wq, condition, state, exclusive, ret, cmd)	\
-do {									\
+({									\
 	__label__ __out;						\
 	DEFINE_WAIT(__wait);						\
+	long __ret = ret;						\
 									\
 	for (;;) {							\
 		if (exclusive)						\
@@ -208,7 +207,7 @@ do {									\
 			break;						\
 									\
 		if (___wait_signal_pending(state)) {			\
-			ret = -ERESTARTSYS;				\
+			__ret = -ERESTARTSYS;				\
 			if (exclusive) {				\
 				abort_exclusive_wait(&wq, &__wait, 	\
 						     state, NULL); 	\
@@ -220,12 +219,12 @@ do {									\
 		cmd;							\
 	}								\
 	finish_wait(&wq, &__wait);					\
-__out:	;								\
-} while (0)
+__out:	__ret;								\
+})
 
 #define __wait_event(wq, condition) 					\
-	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
-		      ___wait_nop_ret, schedule())
+	(void)___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
+			    schedule())
 
 /**
  * wait_event - sleep until a condition gets true
@@ -246,10 +245,10 @@ do {									\
 	__wait_event(wq, condition);					\
 } while (0)
 
-#define __wait_event_timeout(wq, condition, ret)			\
-	___wait_event(wq, ___wait_cond_timeout(condition, ret), 	\
-		      TASK_UNINTERRUPTIBLE, 0, ret,			\
-		      ret = schedule_timeout(ret))
+#define __wait_event_timeout(wq, condition, timeout)			\
+	___wait_event(wq, ___wait_cond_timeout(condition),		\
+		      TASK_UNINTERRUPTIBLE, 0, timeout,			\
+		      __ret = schedule_timeout(__ret))
 
 /**
  * wait_event_timeout - sleep until a condition gets true or a timeout elapses
@@ -272,12 +271,12 @@ do {									\
 ({									\
 	long __ret = timeout;						\
 	if (!(condition)) 						\
-		__wait_event_timeout(wq, condition, __ret);		\
+		__ret = __wait_event_timeout(wq, condition, timeout);	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible(wq, condition, ret)			\
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	\
+#define __wait_event_interruptible(wq, condition)			\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0,		\
 		      schedule())
 
 /**
@@ -299,14 +298,14 @@ do {									\
 ({									\
 	int __ret = 0;							\
 	if (!(condition))						\
-		__wait_event_interruptible(wq, condition, __ret);	\
+		__ret = __wait_event_interruptible(wq, condition);	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_timeout(wq, condition, ret)		\
-	___wait_event(wq, ___wait_cond_timeout(condition, ret),		\
-		      TASK_INTERRUPTIBLE, 0, ret,			\
-		      ret = schedule_timeout(ret))
+#define __wait_event_interruptible_timeout(wq, condition, timeout)	\
+	___wait_event(wq, ___wait_cond_timeout(condition),		\
+		      TASK_INTERRUPTIBLE, 0, timeout,			\
+		      __ret = schedule_timeout(__ret))
 
 /**
  * wait_event_interruptible_timeout - sleep until a condition gets true or a timeout elapses
@@ -330,7 +329,8 @@ do {									\
 ({									\
 	long __ret = timeout;						\
 	if (!(condition))						\
-		__wait_event_interruptible_timeout(wq, condition, __ret); \
+		__ret = __wait_event_interruptible_timeout(wq, 		\
+						condition, timeout);	\
 	__ret;								\
 })
 
@@ -347,7 +347,7 @@ do {									\
 				       current->timer_slack_ns,		\
 				       HRTIMER_MODE_REL);		\
 									\
-	___wait_event(wq, condition, state, 0, __ret,			\
+	__ret = ___wait_event(wq, condition, state, 0, 0,		\
 		if (!__t.task) {					\
 			__ret = -ETIME;					\
 			break;						\
@@ -409,15 +409,15 @@ do {									\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_exclusive(wq, condition, ret)	\
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, ret,	\
+#define __wait_event_interruptible_exclusive(wq, condition)		\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, 0,		\
 		      schedule())
 
 #define wait_event_interruptible_exclusive(wq, condition)		\
 ({									\
 	int __ret = 0;							\
 	if (!(condition))						\
-		__wait_event_interruptible_exclusive(wq, condition, __ret);\
+		__ret = __wait_event_interruptible_exclusive(wq, condition);\
 	__ret;								\
 })
 
@@ -570,8 +570,8 @@ do {									\
 
 
 
-#define __wait_event_killable(wq, condition, ret)			\
-	___wait_event(wq, condition, TASK_KILLABLE, 0, ret, schedule())
+#define __wait_event_killable(wq, condition)				\
+	___wait_event(wq, condition, TASK_KILLABLE, 0, 0, schedule())
 
 /**
  * wait_event_killable - sleep until a condition gets true
@@ -592,18 +592,17 @@ do {									\
 ({									\
 	int __ret = 0;							\
 	if (!(condition))						\
-		__wait_event_killable(wq, condition, __ret);		\
+		__ret = __wait_event_killable(wq, condition);		\
 	__ret;								\
 })
 
 
 #define __wait_event_lock_irq(wq, condition, lock, cmd)			\
-	___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
-		      ___wait_nop_ret,					\
-		      spin_unlock_irq(&lock);				\
-		      cmd;						\
-		      schedule();					\
-		      spin_lock_irq(&lock))
+	(void)___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
+			    spin_unlock_irq(&lock);			\
+			    cmd;					\
+			    schedule();					\
+			    spin_lock_irq(&lock))
 
 /**
  * wait_event_lock_irq_cmd - sleep until a condition gets true. The
@@ -663,11 +662,11 @@ do {									\
 } while (0)
 
 
-#define __wait_event_interruptible_lock_irq(wq, condition, lock, ret, cmd) \
-	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, ret,	   \
-		      spin_unlock_irq(&lock);				   \
-		      cmd;						   \
-		      schedule();					   \
+#define __wait_event_interruptible_lock_irq(wq, condition, lock, cmd)	\
+	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0,	   	\
+		      spin_unlock_irq(&lock);				\
+		      cmd;						\
+		      schedule();					\
 		      spin_lock_irq(&lock))
 
 /**
@@ -698,10 +697,9 @@ do {									\
 #define wait_event_interruptible_lock_irq_cmd(wq, condition, lock, cmd)	\
 ({									\
 	int __ret = 0;							\
-									\
 	if (!(condition))						\
-		__wait_event_interruptible_lock_irq(wq, condition,	\
-						    lock, __ret, cmd);	\
+		__ret = __wait_event_interruptible_lock_irq(wq, 	\
+						condition, lock, cmd);	\
 	__ret;								\
 })
 
@@ -730,18 +728,18 @@ do {									\
 #define wait_event_interruptible_lock_irq(wq, condition, lock)		\
 ({									\
 	int __ret = 0;							\
-									\
 	if (!(condition))						\
-		__wait_event_interruptible_lock_irq(wq, condition,	\
-						    lock, __ret, );	\
+		__ret = __wait_event_interruptible_lock_irq(wq,		\
+						condition, lock,)	\
 	__ret;								\
 })
 
-#define __wait_event_interruptible_lock_irq_timeout(wq, condition, lock, ret) \
-	___wait_event(wq, ___wait_cond_timeout(condition, ret),		      \
-		      TASK_INTERRUPTIBLE, 0, ret,	      		      \
-		      spin_unlock_irq(&lock);				      \
-		      ret = schedule_timeout(ret);			      \
+#define __wait_event_interruptible_lock_irq_timeout(wq, condition, 	\
+						    lock, timeout) 	\
+	___wait_event(wq, ___wait_cond_timeout(condition),		\
+		      TASK_INTERRUPTIBLE, 0, ret,	      		\
+		      spin_unlock_irq(&lock);				\
+		      __ret = schedule_timeout(__ret);			\
 		      spin_lock_irq(&lock));
 
 /**
@@ -771,11 +769,10 @@ do {									\
 #define wait_event_interruptible_lock_irq_timeout(wq, condition, lock,	\
 						  timeout)		\
 ({									\
-	int __ret = timeout;						\
-									\
+	long __ret = timeout;						\
 	if (!(condition))						\
-		__wait_event_interruptible_lock_irq_timeout(		\
-					wq, condition, lock, __ret);	\
+		__ret = __wait_event_interruptible_lock_irq_timeout(	\
+					wq, condition, lock, timeout);	\
 	__ret;								\
 })
 
diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c
index 0578d4f..0f67690 100644
--- a/net/irda/af_irda.c
+++ b/net/irda/af_irda.c
@@ -2563,9 +2563,8 @@ bed:
 				  jiffies + msecs_to_jiffies(val));
 
 			/* Wait for IR-LMP to call us back */
-			__wait_event_interruptible(self->query_wait,
-			      (self->cachedaddr != 0 || self->errno == -ETIME),
-						   err);
+			err = __wait_event_interruptible(self->query_wait,
+			      (self->cachedaddr != 0 || self->errno == -ETIME));
 
 			/* If watchdog is still activated, kill it! */
 			del_timer(&(self->watchdog));
diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
index f448471..f63c238 100644
--- a/net/netfilter/ipvs/ip_vs_sync.c
+++ b/net/netfilter/ipvs/ip_vs_sync.c
@@ -1637,12 +1637,9 @@ static int sync_thread_master(void *data)
 			continue;
 		}
 		while (ip_vs_send_sync_msg(tinfo->sock, sb->mesg) < 0) {
-			int ret = 0;
-
-			__wait_event_interruptible(*sk_sleep(sk),
+			int ret = __wait_event_interruptible(*sk_sleep(sk),
 						   sock_writeable(sk) ||
-						   kthread_should_stop(),
-						   ret);
+						   kthread_should_stop());
 			if (unlikely(kthread_should_stop()))
 				goto done;
 		}

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
                   ` (15 preceding siblings ...)
  2013-10-02  9:22 ` [PATCH 16/16] sched/wait: Make the __wait_event*() interface more friendly Peter Zijlstra
@ 2013-10-04 20:44 ` Peter Zijlstra
  2013-10-04 20:44   ` Peter Zijlstra
  16 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-04 20:44 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel


slightly related; do we want to do something like the following two
patches?

---
Subject: sched: Move wait code
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri Oct 4 17:24:35 CEST 2013

For some reason only the wait part of the wait API lives in
kernel/wait.c while the wake part still lives in kernel/sched/core.c;
amend this.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/core.c |  107 ----------------------------------------------------
 kernel/wait.c       |  103 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 105 insertions(+), 105 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2579,109 +2579,6 @@ int default_wake_function(wait_queue_t *
 }
 EXPORT_SYMBOL(default_wake_function);
 
-/*
- * The core wakeup function. Non-exclusive wakeups (nr_exclusive == 0) just
- * wake everything up. If it's an exclusive wakeup (nr_exclusive == small +ve
- * number) then we wake all the non-exclusive tasks and one exclusive task.
- *
- * There are circumstances in which we can try to wake a task which has already
- * started to run but is not in state TASK_RUNNING. try_to_wake_up() returns
- * zero in this (rare) case, and we handle it by continuing to scan the queue.
- */
-static void __wake_up_common(wait_queue_head_t *q, unsigned int mode,
-			int nr_exclusive, int wake_flags, void *key)
-{
-	wait_queue_t *curr, *next;
-
-	list_for_each_entry_safe(curr, next, &q->task_list, task_list) {
-		unsigned flags = curr->flags;
-
-		if (curr->func(curr, mode, wake_flags, key) &&
-				(flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
-			break;
-	}
-}
-
-/**
- * __wake_up - wake up threads blocked on a waitqueue.
- * @q: the waitqueue
- * @mode: which threads
- * @nr_exclusive: how many wake-one or wake-many threads to wake up
- * @key: is directly passed to the wakeup function
- *
- * It may be assumed that this function implies a write memory barrier before
- * changing the task state if and only if any tasks are woken up.
- */
-void __wake_up(wait_queue_head_t *q, unsigned int mode,
-			int nr_exclusive, void *key)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&q->lock, flags);
-	__wake_up_common(q, mode, nr_exclusive, 0, key);
-	spin_unlock_irqrestore(&q->lock, flags);
-}
-EXPORT_SYMBOL(__wake_up);
-
-/*
- * Same as __wake_up but called with the spinlock in wait_queue_head_t held.
- */
-void __wake_up_locked(wait_queue_head_t *q, unsigned int mode, int nr)
-{
-	__wake_up_common(q, mode, nr, 0, NULL);
-}
-EXPORT_SYMBOL_GPL(__wake_up_locked);
-
-void __wake_up_locked_key(wait_queue_head_t *q, unsigned int mode, void *key)
-{
-	__wake_up_common(q, mode, 1, 0, key);
-}
-EXPORT_SYMBOL_GPL(__wake_up_locked_key);
-
-/**
- * __wake_up_sync_key - wake up threads blocked on a waitqueue.
- * @q: the waitqueue
- * @mode: which threads
- * @nr_exclusive: how many wake-one or wake-many threads to wake up
- * @key: opaque value to be passed to wakeup targets
- *
- * The sync wakeup differs that the waker knows that it will schedule
- * away soon, so while the target thread will be woken up, it will not
- * be migrated to another CPU - ie. the two threads are 'synchronized'
- * with each other. This can prevent needless bouncing between CPUs.
- *
- * On UP it can prevent extra preemption.
- *
- * It may be assumed that this function implies a write memory barrier before
- * changing the task state if and only if any tasks are woken up.
- */
-void __wake_up_sync_key(wait_queue_head_t *q, unsigned int mode,
-			int nr_exclusive, void *key)
-{
-	unsigned long flags;
-	int wake_flags = WF_SYNC;
-
-	if (unlikely(!q))
-		return;
-
-	if (unlikely(nr_exclusive != 1))
-		wake_flags = 0;
-
-	spin_lock_irqsave(&q->lock, flags);
-	__wake_up_common(q, mode, nr_exclusive, wake_flags, key);
-	spin_unlock_irqrestore(&q->lock, flags);
-}
-EXPORT_SYMBOL_GPL(__wake_up_sync_key);
-
-/*
- * __wake_up_sync - see __wake_up_sync_key()
- */
-void __wake_up_sync(wait_queue_head_t *q, unsigned int mode, int nr_exclusive)
-{
-	__wake_up_sync_key(q, mode, nr_exclusive, NULL);
-}
-EXPORT_SYMBOL_GPL(__wake_up_sync);	/* For internal use only */
-
 /**
  * complete: - signals a single thread waiting on this completion
  * @x:  holds the state of this particular completion
@@ -2700,7 +2597,7 @@ void complete(struct completion *x)
 
 	spin_lock_irqsave(&x->wait.lock, flags);
 	x->done++;
-	__wake_up_common(&x->wait, TASK_NORMAL, 1, 0, NULL);
+	__wake_up_locked_key(&x->wait, TASK_NORMAL, NULL);
 	spin_unlock_irqrestore(&x->wait.lock, flags);
 }
 EXPORT_SYMBOL(complete);
@@ -2720,7 +2617,7 @@ void complete_all(struct completion *x)
 
 	spin_lock_irqsave(&x->wait.lock, flags);
 	x->done += UINT_MAX/2;
-	__wake_up_common(&x->wait, TASK_NORMAL, 0, 0, NULL);
+	__wake_up_locked(&x->wait, TASK_NORMAL, 0);
 	spin_unlock_irqrestore(&x->wait.lock, flags);
 }
 EXPORT_SYMBOL(complete_all);
--- a/kernel/wait.c
+++ b/kernel/wait.c
@@ -53,6 +53,109 @@ EXPORT_SYMBOL(remove_wait_queue);
 
 
 /*
+ * The core wakeup function. Non-exclusive wakeups (nr_exclusive == 0) just
+ * wake everything up. If it's an exclusive wakeup (nr_exclusive == small +ve
+ * number) then we wake all the non-exclusive tasks and one exclusive task.
+ *
+ * There are circumstances in which we can try to wake a task which has already
+ * started to run but is not in state TASK_RUNNING. try_to_wake_up() returns
+ * zero in this (rare) case, and we handle it by continuing to scan the queue.
+ */
+static void __wake_up_common(wait_queue_head_t *q, unsigned int mode,
+			int nr_exclusive, int wake_flags, void *key)
+{
+	wait_queue_t *curr, *next;
+
+	list_for_each_entry_safe(curr, next, &q->task_list, task_list) {
+		unsigned flags = curr->flags;
+
+		if (curr->func(curr, mode, wake_flags, key) &&
+				(flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
+			break;
+	}
+}
+
+/**
+ * __wake_up - wake up threads blocked on a waitqueue.
+ * @q: the waitqueue
+ * @mode: which threads
+ * @nr_exclusive: how many wake-one or wake-many threads to wake up
+ * @key: is directly passed to the wakeup function
+ *
+ * It may be assumed that this function implies a write memory barrier before
+ * changing the task state if and only if any tasks are woken up.
+ */
+void __wake_up(wait_queue_head_t *q, unsigned int mode,
+			int nr_exclusive, void *key)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&q->lock, flags);
+	__wake_up_common(q, mode, nr_exclusive, 0, key);
+	spin_unlock_irqrestore(&q->lock, flags);
+}
+EXPORT_SYMBOL(__wake_up);
+
+/*
+ * Same as __wake_up but called with the spinlock in wait_queue_head_t held.
+ */
+void __wake_up_locked(wait_queue_head_t *q, unsigned int mode, int nr)
+{
+	__wake_up_common(q, mode, nr, 0, NULL);
+}
+EXPORT_SYMBOL_GPL(__wake_up_locked);
+
+void __wake_up_locked_key(wait_queue_head_t *q, unsigned int mode, void *key)
+{
+	__wake_up_common(q, mode, 1, 0, key);
+}
+EXPORT_SYMBOL_GPL(__wake_up_locked_key);
+
+/**
+ * __wake_up_sync_key - wake up threads blocked on a waitqueue.
+ * @q: the waitqueue
+ * @mode: which threads
+ * @nr_exclusive: how many wake-one or wake-many threads to wake up
+ * @key: opaque value to be passed to wakeup targets
+ *
+ * The sync wakeup differs that the waker knows that it will schedule
+ * away soon, so while the target thread will be woken up, it will not
+ * be migrated to another CPU - ie. the two threads are 'synchronized'
+ * with each other. This can prevent needless bouncing between CPUs.
+ *
+ * On UP it can prevent extra preemption.
+ *
+ * It may be assumed that this function implies a write memory barrier before
+ * changing the task state if and only if any tasks are woken up.
+ */
+void __wake_up_sync_key(wait_queue_head_t *q, unsigned int mode,
+			int nr_exclusive, void *key)
+{
+	unsigned long flags;
+	int wake_flags = 1; /* XXX WF_SYNC */
+
+	if (unlikely(!q))
+		return;
+
+	if (unlikely(nr_exclusive != 1))
+		wake_flags = 0;
+
+	spin_lock_irqsave(&q->lock, flags);
+	__wake_up_common(q, mode, nr_exclusive, wake_flags, key);
+	spin_unlock_irqrestore(&q->lock, flags);
+}
+EXPORT_SYMBOL_GPL(__wake_up_sync_key);
+
+/*
+ * __wake_up_sync - see __wake_up_sync_key()
+ */
+void __wake_up_sync(wait_queue_head_t *q, unsigned int mode, int nr_exclusive)
+{
+	__wake_up_sync_key(q, mode, nr_exclusive, NULL);
+}
+EXPORT_SYMBOL_GPL(__wake_up_sync);	/* For internal use only */
+
+/*
  * Note: we use "set_current_state()" _after_ the wait-queue add,
  * because we need a memory barrier there on SMP, so that any
  * wake-function that tests for the wait-queue being active

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-04 20:44 ` [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
@ 2013-10-04 20:44   ` Peter Zijlstra
  2013-10-05  8:04     ` Ingo Molnar
  0 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-04 20:44 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel

On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> 
> slightly related; do we want to do something like the following two
> patches?

and

---
Subject: sched: Move completion code
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri Oct  4 22:06:53 CEST 2013

Completions already have their own header file: linux/completion.h
Move the implementation out of kernel/sched/core.c and into its own
file: kernel/completion.c.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 kernel/Makefile     |    2 
 kernel/completion.c |  287 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/core.c |  284 ---------------------------------------------------
 3 files changed, 288 insertions(+), 285 deletions(-)

--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -11,7 +11,7 @@ obj-y     = fork.o exec_domain.o panic.o
 	    hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
 	    async.o range.o groups.o lglock.o smpboot.o \
-	    rcusync.o
+	    rcusync.o completion.o
 
 ifdef CONFIG_FUNCTION_TRACER
 # Do not trace debug files and internal ftrace files
--- /dev/null
+++ b/kernel/completion.c
@@ -0,0 +1,287 @@
+
+#include <linux/sched.h>
+#include <linux/completion.h>
+
+/**
+ * complete: - signals a single thread waiting on this completion
+ * @x:  holds the state of this particular completion
+ *
+ * This will wake up a single thread waiting on this completion. Threads will be
+ * awakened in the same order in which they were queued.
+ *
+ * See also complete_all(), wait_for_completion() and related routines.
+ *
+ * It may be assumed that this function implies a write memory barrier before
+ * changing the task state if and only if any tasks are woken up.
+ */
+void complete(struct completion *x)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&x->wait.lock, flags);
+	x->done++;
+	__wake_up_locked_key(&x->wait, TASK_NORMAL, NULL);
+	spin_unlock_irqrestore(&x->wait.lock, flags);
+}
+EXPORT_SYMBOL(complete);
+
+/**
+ * complete_all: - signals all threads waiting on this completion
+ * @x:  holds the state of this particular completion
+ *
+ * This will wake up all threads waiting on this particular completion event.
+ *
+ * It may be assumed that this function implies a write memory barrier before
+ * changing the task state if and only if any tasks are woken up.
+ */
+void complete_all(struct completion *x)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&x->wait.lock, flags);
+	x->done += UINT_MAX/2;
+	__wake_up_locked(&x->wait, TASK_NORMAL, 0);
+	spin_unlock_irqrestore(&x->wait.lock, flags);
+}
+EXPORT_SYMBOL(complete_all);
+
+static inline long __sched
+do_wait_for_common(struct completion *x,
+		   long (*action)(long), long timeout, int state)
+{
+	if (!x->done) {
+		DECLARE_WAITQUEUE(wait, current);
+
+		__add_wait_queue_tail_exclusive(&x->wait, &wait);
+		do {
+			if (signal_pending_state(state, current)) {
+				timeout = -ERESTARTSYS;
+				break;
+			}
+			__set_current_state(state);
+			spin_unlock_irq(&x->wait.lock);
+			timeout = action(timeout);
+			spin_lock_irq(&x->wait.lock);
+		} while (!x->done && timeout);
+		__remove_wait_queue(&x->wait, &wait);
+		if (!x->done)
+			return timeout;
+	}
+	x->done--;
+	return timeout ?: 1;
+}
+
+static inline long __sched
+__wait_for_common(struct completion *x,
+		  long (*action)(long), long timeout, int state)
+{
+	might_sleep();
+
+	spin_lock_irq(&x->wait.lock);
+	timeout = do_wait_for_common(x, action, timeout, state);
+	spin_unlock_irq(&x->wait.lock);
+	return timeout;
+}
+
+static long __sched
+wait_for_common(struct completion *x, long timeout, int state)
+{
+	return __wait_for_common(x, schedule_timeout, timeout, state);
+}
+
+static long __sched
+wait_for_common_io(struct completion *x, long timeout, int state)
+{
+	return __wait_for_common(x, io_schedule_timeout, timeout, state);
+}
+
+/**
+ * wait_for_completion: - waits for completion of a task
+ * @x:  holds the state of this particular completion
+ *
+ * This waits to be signaled for completion of a specific task. It is NOT
+ * interruptible and there is no timeout.
+ *
+ * See also similar routines (i.e. wait_for_completion_timeout()) with timeout
+ * and interrupt capability. Also see complete().
+ */
+void __sched wait_for_completion(struct completion *x)
+{
+	wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(wait_for_completion);
+
+/**
+ * wait_for_completion_timeout: - waits for completion of a task (w/timeout)
+ * @x:  holds the state of this particular completion
+ * @timeout:  timeout value in jiffies
+ *
+ * This waits for either a completion of a specific task to be signaled or for a
+ * specified timeout to expire. The timeout is in jiffies. It is not
+ * interruptible.
+ *
+ * Return: 0 if timed out, and positive (at least 1, or number of jiffies left
+ * till timeout) if completed.
+ */
+unsigned long __sched
+wait_for_completion_timeout(struct completion *x, unsigned long timeout)
+{
+	return wait_for_common(x, timeout, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(wait_for_completion_timeout);
+
+/**
+ * wait_for_completion_io: - waits for completion of a task
+ * @x:  holds the state of this particular completion
+ *
+ * This waits to be signaled for completion of a specific task. It is NOT
+ * interruptible and there is no timeout. The caller is accounted as waiting
+ * for IO.
+ */
+void __sched wait_for_completion_io(struct completion *x)
+{
+	wait_for_common_io(x, MAX_SCHEDULE_TIMEOUT, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(wait_for_completion_io);
+
+/**
+ * wait_for_completion_io_timeout: - waits for completion of a task (w/timeout)
+ * @x:  holds the state of this particular completion
+ * @timeout:  timeout value in jiffies
+ *
+ * This waits for either a completion of a specific task to be signaled or for a
+ * specified timeout to expire. The timeout is in jiffies. It is not
+ * interruptible. The caller is accounted as waiting for IO.
+ *
+ * Return: 0 if timed out, and positive (at least 1, or number of jiffies left
+ * till timeout) if completed.
+ */
+unsigned long __sched
+wait_for_completion_io_timeout(struct completion *x, unsigned long timeout)
+{
+	return wait_for_common_io(x, timeout, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(wait_for_completion_io_timeout);
+
+/**
+ * wait_for_completion_interruptible: - waits for completion of a task (w/intr)
+ * @x:  holds the state of this particular completion
+ *
+ * This waits for completion of a specific task to be signaled. It is
+ * interruptible.
+ *
+ * Return: -ERESTARTSYS if interrupted, 0 if completed.
+ */
+int __sched wait_for_completion_interruptible(struct completion *x)
+{
+	long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_INTERRUPTIBLE);
+	if (t == -ERESTARTSYS)
+		return t;
+	return 0;
+}
+EXPORT_SYMBOL(wait_for_completion_interruptible);
+
+/**
+ * wait_for_completion_interruptible_timeout: - waits for completion (w/(to,intr))
+ * @x:  holds the state of this particular completion
+ * @timeout:  timeout value in jiffies
+ *
+ * This waits for either a completion of a specific task to be signaled or for a
+ * specified timeout to expire. It is interruptible. The timeout is in jiffies.
+ *
+ * Return: -ERESTARTSYS if interrupted, 0 if timed out, positive (at least 1,
+ * or number of jiffies left till timeout) if completed.
+ */
+long __sched
+wait_for_completion_interruptible_timeout(struct completion *x,
+					  unsigned long timeout)
+{
+	return wait_for_common(x, timeout, TASK_INTERRUPTIBLE);
+}
+EXPORT_SYMBOL(wait_for_completion_interruptible_timeout);
+
+/**
+ * wait_for_completion_killable: - waits for completion of a task (killable)
+ * @x:  holds the state of this particular completion
+ *
+ * This waits to be signaled for completion of a specific task. It can be
+ * interrupted by a kill signal.
+ *
+ * Return: -ERESTARTSYS if interrupted, 0 if completed.
+ */
+int __sched wait_for_completion_killable(struct completion *x)
+{
+	long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_KILLABLE);
+	if (t == -ERESTARTSYS)
+		return t;
+	return 0;
+}
+EXPORT_SYMBOL(wait_for_completion_killable);
+
+/**
+ * wait_for_completion_killable_timeout: - waits for completion of a task (w/(to,killable))
+ * @x:  holds the state of this particular completion
+ * @timeout:  timeout value in jiffies
+ *
+ * This waits for either a completion of a specific task to be
+ * signaled or for a specified timeout to expire. It can be
+ * interrupted by a kill signal. The timeout is in jiffies.
+ *
+ * Return: -ERESTARTSYS if interrupted, 0 if timed out, positive (at least 1,
+ * or number of jiffies left till timeout) if completed.
+ */
+long __sched
+wait_for_completion_killable_timeout(struct completion *x,
+				     unsigned long timeout)
+{
+	return wait_for_common(x, timeout, TASK_KILLABLE);
+}
+EXPORT_SYMBOL(wait_for_completion_killable_timeout);
+
+/**
+ *	try_wait_for_completion - try to decrement a completion without blocking
+ *	@x:	completion structure
+ *
+ *	Return: 0 if a decrement cannot be done without blocking
+ *		 1 if a decrement succeeded.
+ *
+ *	If a completion is being used as a counting completion,
+ *	attempt to decrement the counter without blocking. This
+ *	enables us to avoid waiting if the resource the completion
+ *	is protecting is not available.
+ */
+bool try_wait_for_completion(struct completion *x)
+{
+	unsigned long flags;
+	int ret = 1;
+
+	spin_lock_irqsave(&x->wait.lock, flags);
+	if (!x->done)
+		ret = 0;
+	else
+		x->done--;
+	spin_unlock_irqrestore(&x->wait.lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(try_wait_for_completion);
+
+/**
+ *	completion_done - Test to see if a completion has any waiters
+ *	@x:	completion structure
+ *
+ *	Return: 0 if there are waiters (wait_for_completion() in progress)
+ *		 1 if there are no waiters.
+ *
+ */
+bool completion_done(struct completion *x)
+{
+	unsigned long flags;
+	int ret = 1;
+
+	spin_lock_irqsave(&x->wait.lock, flags);
+	if (!x->done)
+		ret = 0;
+	spin_unlock_irqrestore(&x->wait.lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(completion_done);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2579,290 +2579,6 @@ int default_wake_function(wait_queue_t *
 }
 EXPORT_SYMBOL(default_wake_function);
 
-/**
- * complete: - signals a single thread waiting on this completion
- * @x:  holds the state of this particular completion
- *
- * This will wake up a single thread waiting on this completion. Threads will be
- * awakened in the same order in which they were queued.
- *
- * See also complete_all(), wait_for_completion() and related routines.
- *
- * It may be assumed that this function implies a write memory barrier before
- * changing the task state if and only if any tasks are woken up.
- */
-void complete(struct completion *x)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&x->wait.lock, flags);
-	x->done++;
-	__wake_up_locked_key(&x->wait, TASK_NORMAL, NULL);
-	spin_unlock_irqrestore(&x->wait.lock, flags);
-}
-EXPORT_SYMBOL(complete);
-
-/**
- * complete_all: - signals all threads waiting on this completion
- * @x:  holds the state of this particular completion
- *
- * This will wake up all threads waiting on this particular completion event.
- *
- * It may be assumed that this function implies a write memory barrier before
- * changing the task state if and only if any tasks are woken up.
- */
-void complete_all(struct completion *x)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&x->wait.lock, flags);
-	x->done += UINT_MAX/2;
-	__wake_up_locked(&x->wait, TASK_NORMAL, 0);
-	spin_unlock_irqrestore(&x->wait.lock, flags);
-}
-EXPORT_SYMBOL(complete_all);
-
-static inline long __sched
-do_wait_for_common(struct completion *x,
-		   long (*action)(long), long timeout, int state)
-{
-	if (!x->done) {
-		DECLARE_WAITQUEUE(wait, current);
-
-		__add_wait_queue_tail_exclusive(&x->wait, &wait);
-		do {
-			if (signal_pending_state(state, current)) {
-				timeout = -ERESTARTSYS;
-				break;
-			}
-			__set_current_state(state);
-			spin_unlock_irq(&x->wait.lock);
-			timeout = action(timeout);
-			spin_lock_irq(&x->wait.lock);
-		} while (!x->done && timeout);
-		__remove_wait_queue(&x->wait, &wait);
-		if (!x->done)
-			return timeout;
-	}
-	x->done--;
-	return timeout ?: 1;
-}
-
-static inline long __sched
-__wait_for_common(struct completion *x,
-		  long (*action)(long), long timeout, int state)
-{
-	might_sleep();
-
-	spin_lock_irq(&x->wait.lock);
-	timeout = do_wait_for_common(x, action, timeout, state);
-	spin_unlock_irq(&x->wait.lock);
-	return timeout;
-}
-
-static long __sched
-wait_for_common(struct completion *x, long timeout, int state)
-{
-	return __wait_for_common(x, schedule_timeout, timeout, state);
-}
-
-static long __sched
-wait_for_common_io(struct completion *x, long timeout, int state)
-{
-	return __wait_for_common(x, io_schedule_timeout, timeout, state);
-}
-
-/**
- * wait_for_completion: - waits for completion of a task
- * @x:  holds the state of this particular completion
- *
- * This waits to be signaled for completion of a specific task. It is NOT
- * interruptible and there is no timeout.
- *
- * See also similar routines (i.e. wait_for_completion_timeout()) with timeout
- * and interrupt capability. Also see complete().
- */
-void __sched wait_for_completion(struct completion *x)
-{
-	wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_UNINTERRUPTIBLE);
-}
-EXPORT_SYMBOL(wait_for_completion);
-
-/**
- * wait_for_completion_timeout: - waits for completion of a task (w/timeout)
- * @x:  holds the state of this particular completion
- * @timeout:  timeout value in jiffies
- *
- * This waits for either a completion of a specific task to be signaled or for a
- * specified timeout to expire. The timeout is in jiffies. It is not
- * interruptible.
- *
- * Return: 0 if timed out, and positive (at least 1, or number of jiffies left
- * till timeout) if completed.
- */
-unsigned long __sched
-wait_for_completion_timeout(struct completion *x, unsigned long timeout)
-{
-	return wait_for_common(x, timeout, TASK_UNINTERRUPTIBLE);
-}
-EXPORT_SYMBOL(wait_for_completion_timeout);
-
-/**
- * wait_for_completion_io: - waits for completion of a task
- * @x:  holds the state of this particular completion
- *
- * This waits to be signaled for completion of a specific task. It is NOT
- * interruptible and there is no timeout. The caller is accounted as waiting
- * for IO.
- */
-void __sched wait_for_completion_io(struct completion *x)
-{
-	wait_for_common_io(x, MAX_SCHEDULE_TIMEOUT, TASK_UNINTERRUPTIBLE);
-}
-EXPORT_SYMBOL(wait_for_completion_io);
-
-/**
- * wait_for_completion_io_timeout: - waits for completion of a task (w/timeout)
- * @x:  holds the state of this particular completion
- * @timeout:  timeout value in jiffies
- *
- * This waits for either a completion of a specific task to be signaled or for a
- * specified timeout to expire. The timeout is in jiffies. It is not
- * interruptible. The caller is accounted as waiting for IO.
- *
- * Return: 0 if timed out, and positive (at least 1, or number of jiffies left
- * till timeout) if completed.
- */
-unsigned long __sched
-wait_for_completion_io_timeout(struct completion *x, unsigned long timeout)
-{
-	return wait_for_common_io(x, timeout, TASK_UNINTERRUPTIBLE);
-}
-EXPORT_SYMBOL(wait_for_completion_io_timeout);
-
-/**
- * wait_for_completion_interruptible: - waits for completion of a task (w/intr)
- * @x:  holds the state of this particular completion
- *
- * This waits for completion of a specific task to be signaled. It is
- * interruptible.
- *
- * Return: -ERESTARTSYS if interrupted, 0 if completed.
- */
-int __sched wait_for_completion_interruptible(struct completion *x)
-{
-	long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_INTERRUPTIBLE);
-	if (t == -ERESTARTSYS)
-		return t;
-	return 0;
-}
-EXPORT_SYMBOL(wait_for_completion_interruptible);
-
-/**
- * wait_for_completion_interruptible_timeout: - waits for completion (w/(to,intr))
- * @x:  holds the state of this particular completion
- * @timeout:  timeout value in jiffies
- *
- * This waits for either a completion of a specific task to be signaled or for a
- * specified timeout to expire. It is interruptible. The timeout is in jiffies.
- *
- * Return: -ERESTARTSYS if interrupted, 0 if timed out, positive (at least 1,
- * or number of jiffies left till timeout) if completed.
- */
-long __sched
-wait_for_completion_interruptible_timeout(struct completion *x,
-					  unsigned long timeout)
-{
-	return wait_for_common(x, timeout, TASK_INTERRUPTIBLE);
-}
-EXPORT_SYMBOL(wait_for_completion_interruptible_timeout);
-
-/**
- * wait_for_completion_killable: - waits for completion of a task (killable)
- * @x:  holds the state of this particular completion
- *
- * This waits to be signaled for completion of a specific task. It can be
- * interrupted by a kill signal.
- *
- * Return: -ERESTARTSYS if interrupted, 0 if completed.
- */
-int __sched wait_for_completion_killable(struct completion *x)
-{
-	long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_KILLABLE);
-	if (t == -ERESTARTSYS)
-		return t;
-	return 0;
-}
-EXPORT_SYMBOL(wait_for_completion_killable);
-
-/**
- * wait_for_completion_killable_timeout: - waits for completion of a task (w/(to,killable))
- * @x:  holds the state of this particular completion
- * @timeout:  timeout value in jiffies
- *
- * This waits for either a completion of a specific task to be
- * signaled or for a specified timeout to expire. It can be
- * interrupted by a kill signal. The timeout is in jiffies.
- *
- * Return: -ERESTARTSYS if interrupted, 0 if timed out, positive (at least 1,
- * or number of jiffies left till timeout) if completed.
- */
-long __sched
-wait_for_completion_killable_timeout(struct completion *x,
-				     unsigned long timeout)
-{
-	return wait_for_common(x, timeout, TASK_KILLABLE);
-}
-EXPORT_SYMBOL(wait_for_completion_killable_timeout);
-
-/**
- *	try_wait_for_completion - try to decrement a completion without blocking
- *	@x:	completion structure
- *
- *	Return: 0 if a decrement cannot be done without blocking
- *		 1 if a decrement succeeded.
- *
- *	If a completion is being used as a counting completion,
- *	attempt to decrement the counter without blocking. This
- *	enables us to avoid waiting if the resource the completion
- *	is protecting is not available.
- */
-bool try_wait_for_completion(struct completion *x)
-{
-	unsigned long flags;
-	int ret = 1;
-
-	spin_lock_irqsave(&x->wait.lock, flags);
-	if (!x->done)
-		ret = 0;
-	else
-		x->done--;
-	spin_unlock_irqrestore(&x->wait.lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL(try_wait_for_completion);
-
-/**
- *	completion_done - Test to see if a completion has any waiters
- *	@x:	completion structure
- *
- *	Return: 0 if there are waiters (wait_for_completion() in progress)
- *		 1 if there are no waiters.
- *
- */
-bool completion_done(struct completion *x)
-{
-	unsigned long flags;
-	int ret = 1;
-
-	spin_lock_irqsave(&x->wait.lock, flags);
-	if (!x->done)
-		ret = 0;
-	spin_unlock_irqrestore(&x->wait.lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL(completion_done);
-
 static long __sched
 sleep_on_common(wait_queue_head_t *q, int state, long timeout)
 {

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-04 20:44   ` Peter Zijlstra
@ 2013-10-05  8:04     ` Ingo Molnar
  2013-10-08  9:59       ` Peter Zijlstra
  0 siblings, 1 reply; 56+ messages in thread
From: Ingo Molnar @ 2013-10-05  8:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> > 
> > slightly related; do we want to do something like the following two
> > patches?
> 
> and

Yeah, both look good to me - but I'd move them into 
kernel/sched/completion.c and kernel/sched/wait.c if no-one objects.

> --- /dev/null
> +++ b/kernel/completion.c
> @@ -0,0 +1,287 @@
> +
> +#include <linux/sched.h>
> +#include <linux/completion.h>

Also, mind adding a small blurb at the top explaining what it's all about? 
Just one sentence or two.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-05  8:04     ` Ingo Molnar
@ 2013-10-08  9:59       ` Peter Zijlstra
  2013-10-08 10:23         ` Ingo Molnar
  0 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-08  9:59 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Oleg Nesterov, Paul McKenney, Linus Torvalds, Thomas Gleixner,
	Andrew Morton, linux-kernel

On Sat, Oct 05, 2013 at 10:04:16AM +0200, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> > > 
> > > slightly related; do we want to do something like the following two
> > > patches?
> > 
> > and
> 
> Yeah, both look good to me - but I'd move them into 
> kernel/sched/completion.c and kernel/sched/wait.c if no-one objects.

Do you also want to suck in semaphore.c mutex.c rwsem.c spinlock.c etc?
Or do you want to create something like kernel/locking/ for all that.

I don't really mind too much either way except that I think that wait.c
and completion.c on their own make for a somewhat random split of
primitives.

> > --- /dev/null
> > +++ b/kernel/completion.c
> > @@ -0,0 +1,287 @@
> > +
> > +#include <linux/sched.h>
> > +#include <linux/completion.h>
> 
> Also, mind adding a small blurb at the top explaining what it's all about? 
> Just one sentence or two.

It got a bit longer:

+/*
+ * Generic wait-for-completion handler;
+ *
+ * It differs from semaphores in that the default case is the opposite:
+ * wait_for_completion() blocks by default whereas a semaphore by default does
+ * not. The interface also makes it easy to 'complete' multiple waiting
+ * threads, something which isn't entirely natural for semaphores.
+ *
+ * But more importantly, the primitive documents the usage. Semaphores would
+ * typically be used for exclusion, which gives rise to priority inversion.
+ * Waiting for completion is typically a sync point, but not an exclusion point.
+ */
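
To make that distinction concrete, a minimal usage sketch of the sync-point
pattern (illustrative only, not part of the patch; function names are made up):

#include <linux/completion.h>

static DECLARE_COMPLETION(setup_done);

/* Runs in one thread: do the one-off work, then signal completion. */
static void producer(void)
{
	/* ... one-off setup work ... */
	complete(&setup_done);
}

/* Runs in another thread: block until the work above has been done. */
static void consumer(void)
{
	wait_for_completion(&setup_done);	/* sync point, not exclusion */
	/* ... safe to use the setup results here ... */
}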

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08  9:59       ` Peter Zijlstra
@ 2013-10-08 10:23         ` Ingo Molnar
  2013-10-08 14:16           ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Ingo Molnar @ 2013-10-08 10:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Oleg Nesterov, Paul McKenney, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Sat, Oct 05, 2013 at 10:04:16AM +0200, Ingo Molnar wrote:
> > 
> > * Peter Zijlstra <peterz@infradead.org> wrote:
> > 
> > > On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> > > > 
> > > > slightly related; do we want to do something like the following two
> > > > patches?
> > > 
> > > and
> > 
> > Yeah, both look good to me - but I'd move them into 
> > kernel/sched/completion.c and kernel/sched/wait.c if no-one objects.
> 
> Do you also want to suck in semaphore.c mutex.c rwsem.c spinlock.c etc? 
> Or do you want to create something like kernel/locking/ for all that.

Yeah, I think kernel/locking/ would be a suitable place for those, and I'd 
move lockdep*.c there too. (Such things are best done near the end of a 
merge window, when there's not much pending, to not disrupt development.)

kernel/*.c is a pretty crowded place with 100+ files currently, I've been 
gradually working towards depopulating it slowly but surely for subsystems 
that I co-maintain or where I'm frequently active. We already have:

  kernel/sched/
  kernel/events/
  kernel/irq/
  kernel/time/
  kernel/trace/

and the deeper kernel/*/* hierarchies already host another ~100 .c files. 
So the transition is half done already I suspect.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 10:23         ` Ingo Molnar
@ 2013-10-08 14:16           ` Paul E. McKenney
  2013-10-08 19:47             ` Ingo Molnar
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-08 14:16 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 12:23:31PM +0200, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Sat, Oct 05, 2013 at 10:04:16AM +0200, Ingo Molnar wrote:
> > > 
> > > * Peter Zijlstra <peterz@infradead.org> wrote:
> > > 
> > > > On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> > > > > 
> > > > > slightly related; do we want to do something like the following two
> > > > > patches?
> > > > 
> > > > and
> > > 
> > > Yeah, both look good to me - but I'd move them into 
> > > kernel/sched/completion.c and kernel/sched/wait.c if no-one objects.
> > 
> > Do you also want to suck in semaphore.c mutex.c rwsem.c spinlock.c etc? 
> > Or do you want to create something like kernel/locking/ for all that.
> 
> Yeah, I think kernel/locking/ would be a suitable place for those, and I'd 
> move lockdep*.c there too. (Such things are best done near the end of a 
> merge window, when there's not much pending, to not disrupt development.)
> 
> kernel/*.c is a pretty crowded place with 100+ files currently, I've been 
> gradually working towards depopulating it slowly but surely for subsystems 
> that I co-maintain or where I'm frequently active. We already have:
> 
>   kernel/sched/
>   kernel/events/
>   kernel/irq/
>   kernel/time/
>   kernel/trace/
> 
> and the deeper kernel/*/* hierarchies already host another ~100 .c files. 
> So the transition is half done already I suspect.

Should I be thinking about making a kernel/rcu?

							Thanx, Paul


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 14:16           ` Paul E. McKenney
@ 2013-10-08 19:47             ` Ingo Molnar
  2013-10-08 20:01               ` Peter Zijlstra
  2013-10-08 20:40               ` Paul E. McKenney
  0 siblings, 2 replies; 56+ messages in thread
From: Ingo Molnar @ 2013-10-08 19:47 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel


* Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

> On Tue, Oct 08, 2013 at 12:23:31PM +0200, Ingo Molnar wrote:
> > 
> > * Peter Zijlstra <peterz@infradead.org> wrote:
> > 
> > > On Sat, Oct 05, 2013 at 10:04:16AM +0200, Ingo Molnar wrote:
> > > > 
> > > > * Peter Zijlstra <peterz@infradead.org> wrote:
> > > > 
> > > > > On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> > > > > > 
> > > > > > slightly related; do we want to do something like the following two
> > > > > > patches?
> > > > > 
> > > > > and
> > > > 
> > > > Yeah, both look good to me - but I'd move them into 
> > > > kernel/sched/completion.c and kernel/sched/wait.c if no-one objects.
> > > 
> > > Do you also want to suck in semaphore.c mutex.c rwsem.c spinlock.c etc? 
> > > Or do you want to create something like kernel/locking/ for all that.
> > 
> > Yeah, I think kernel/locking/ would be a suitable place for those, and I'd 
> > move lockdep*.c there too. (Such things are best done near the end of a 
> > merge window, when there's not much pending, to not disrupt development.)
> > 
> > kernel/*.c is a pretty crowded place with 100+ files currently, I've been 
> > gradually working towards depopulating it slowly but surely for subsystems 
> > that I co-maintain or where I'm frequently active. We already have:
> > 
> >   kernel/sched/
> >   kernel/events/
> >   kernel/irq/
> >   kernel/time/
> >   kernel/trace/
> > 
> > and the deeper kernel/*/* hierarchies already host another ~100 .c files. 
> > So the transition is half done already I suspect.
> 
> Should I be thinking about making a kernel/rcu?

I wanted to raise it with you at the KS :-)

To me it would sure look nice to have kernel/rcu/tree.c, 
kernel/rcu/tiny.c, kernel/rcu/core.c, etc.

[ ... and we would certainly also break new ground by introducing a
  "torture.c" file, for the first time in Linux kernel history! ;-) ]

But it's really your call, this is something you should only do if you are 
comfortable with it.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 19:47             ` Ingo Molnar
@ 2013-10-08 20:01               ` Peter Zijlstra
  2013-10-08 20:41                 ` Paul E. McKenney
  2013-10-08 20:40               ` Paul E. McKenney
  1 sibling, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-08 20:01 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul E. McKenney, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> To me it would sure look nice to have kernel/rcu/tree.c, 
> kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> 
> [ ... and we would certainly also break new ground by introducing a
>   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> 
> But it's really your call, this is something you should only do if you are 
> comfortable with it.

IFF we're going to restructure rcu; can we save the CPP some work and do
away with rcu*_plugin.h ?

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 19:47             ` Ingo Molnar
  2013-10-08 20:01               ` Peter Zijlstra
@ 2013-10-08 20:40               ` Paul E. McKenney
  2013-10-09  3:28                 ` Paul E. McKenney
  1 sibling, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-08 20:40 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> 
> > On Tue, Oct 08, 2013 at 12:23:31PM +0200, Ingo Molnar wrote:
> > > 
> > > * Peter Zijlstra <peterz@infradead.org> wrote:
> > > 
> > > > On Sat, Oct 05, 2013 at 10:04:16AM +0200, Ingo Molnar wrote:
> > > > > 
> > > > > * Peter Zijlstra <peterz@infradead.org> wrote:
> > > > > 
> > > > > > On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> > > > > > > 
> > > > > > > slightly related; do we want to do something like the following two
> > > > > > > patches?
> > > > > > 
> > > > > > and
> > > > > 
> > > > > Yeah, both look good to me - but I'd move them into 
> > > > > kernel/sched/completion.c and kernel/sched/wait.c if no-one objects.
> > > > 
> > > > Do you also want to suck in semaphore.c mutex.c rwsem.c spinlock.c etc? 
> > > > Or do you want to create something like kernel/locking/ for all that.
> > > 
> > > Yeah, I think kernel/locking/ would be a suitable place for those, and I'd 
> > > move lockdep*.c there too. (Such things are best done near the end of a 
> > > merge window, when there's not much pending, to not disrupt development.)
> > > 
> > > kernel/*.c is a pretty crowded place with 100+ files currently, I've been 
> > > gradually working towards depopulating it slowly but surely for subsystems 
> > > that I co-maintain or where I'm frequently active. We already have:
> > > 
> > >   kernel/sched/
> > >   kernel/events/
> > >   kernel/irq/
> > >   kernel/time/
> > >   kernel/trace/
> > > 
> > > and the deeper kernel/*/* hierarchies already host another ~100 .c files. 
> > > So the transition is half done already I suspect.
> > 
> > Should I be thinking about making a kernel/rcu?
> 
> I wanted to raise it with you at the KS :-)

Sorry for jumping the gun.  ;-)

> To me it would sure look nice to have kernel/rcu/tree.c, 
> kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> 
> [ ... and we would certainly also break new ground by introducing a
>   "torture.c" file, for the first time in Linux kernel history! ;-) ]

Ooh...  I had better act fast!  ;-)

> But it's really your call, this is something you should only do if you are 
> comfortable with it.

I have actually been thinking about it off and on for some time.

								Thanx, Paul


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 20:01               ` Peter Zijlstra
@ 2013-10-08 20:41                 ` Paul E. McKenney
  2013-10-08 21:06                   ` Peter Zijlstra
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-08 20:41 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 10:01:55PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> > To me it would sure look nice to have kernel/rcu/tree.c, 
> > kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> > 
> > [ ... and we would certainly also break new ground by introducing a
> >   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> > 
> > But it's really your call, this is something you should only do if you are 
> > comfortable with it.
> 
> IFF we're going to restructure rcu; can we save the CPP some work and do
> away with rcu*_plugin.h ?

OK, I'll bite...  Where did you want to put the code instead?

							Thanx, Paul


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 20:41                 ` Paul E. McKenney
@ 2013-10-08 21:06                   ` Peter Zijlstra
  2013-10-08 21:43                     ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Peter Zijlstra @ 2013-10-08 21:06 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Ingo Molnar, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 01:41:29PM -0700, Paul E. McKenney wrote:
> On Tue, Oct 08, 2013 at 10:01:55PM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> > > To me it would sure look nice to have kernel/rcu/tree.c, 
> > > kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> > > 
> > > [ ... and we would certainly also break new ground by introducing a
> > >   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> > > 
> > > But it's really your call, this is something you should only do if you are 
> > > comfortable with it.
> > 
> > IFF we're going to restructure rcu; can we save the CPP some work and do
> > away with rcu*_plugin.h ?
> 
> OK, I'll bite...  Where did you want to put the code instead?

Just about here:

kernel/rcutree.c:#include "rcutree_plugin.h"

and save the bother of inclusion.

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 21:06                   ` Peter Zijlstra
@ 2013-10-08 21:43                     ` Paul E. McKenney
  0 siblings, 0 replies; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-08 21:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 11:06:07PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 08, 2013 at 01:41:29PM -0700, Paul E. McKenney wrote:
> > On Tue, Oct 08, 2013 at 10:01:55PM +0200, Peter Zijlstra wrote:
> > > On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> > > > To me it would sure look nice to have kernel/rcu/tree.c, 
> > > > kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> > > > 
> > > > [ ... and we would certainly also break new ground by introducing a
> > > >   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> > > > 
> > > > But it's really your call, this is something you should only do if you are 
> > > > comfortable with it.
> > > 
> > > IFF we're going to restructure rcu; can we save the CPP some work and do
> > > away with rcu*_plugin.h ?
> > 
> > OK, I'll bite...  Where did you want to put the code instead?
> 
> Just about here:
> 
> kernel/rcutree.c:#include "rcutree_plugin.h"
> 
> and save the bother of inclusion.

I would be more likely to break rcutree_plugin.h into pieces (e.g., for
preempt-rcu, RCU_FAST_NO_HZ, NOCB, stall warnings, and so on) than to merge
it into rcutree.c.

							Thanx, Paul


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-08 20:40               ` Paul E. McKenney
@ 2013-10-09  3:28                 ` Paul E. McKenney
  2013-10-09  3:35                   ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-09  3:28 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 01:40:56PM -0700, Paul E. McKenney wrote:
> On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> > 
> > * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> > 
> > > On Tue, Oct 08, 2013 at 12:23:31PM +0200, Ingo Molnar wrote:
> > > > 
> > > > * Peter Zijlstra <peterz@infradead.org> wrote:
> > > > 
> > > > > On Sat, Oct 05, 2013 at 10:04:16AM +0200, Ingo Molnar wrote:
> > > > > > 
> > > > > > * Peter Zijlstra <peterz@infradead.org> wrote:
> > > > > > 
> > > > > > > On Fri, Oct 04, 2013 at 10:44:05PM +0200, Peter Zijlstra wrote:
> > > > > > > > 
> > > > > > > > slightly related; do we want to do something like the following two
> > > > > > > > patches?
> > > > > > > 
> > > > > > > and
> > > > > > 
> > > > > > Yeah, both look good to me - but I'd move them into 
> > > > > > kernel/sched/completion.c and kernel/sched/wait.c if no-one objects.
> > > > > 
> > > > > Do you also want to suck in semaphore.c mutex.c rwsem.c spinlock.c etc? 
> > > > > Or do you want to create something like kernel/locking/ for all that.
> > > > 
> > > > Yeah, I think kernel/locking/ would be a suitable place for those, and I'd 
> > > > move lockdep*.c there too. (Such things are best done near the end of a 
> > > > merge window, when there's not much pending, to not disrupt development.)
> > > > 
> > > > kernel/*.c is a pretty crowded place with 100+ files currently, I've been 
> > > > gradually working towards depopulating it slowly but surely for subsystems 
> > > > that I co-maintain or where I'm frequently active. We already have:
> > > > 
> > > >   kernel/sched/
> > > >   kernel/events/
> > > >   kernel/irq/
> > > >   kernel/time/
> > > >   kernel/trace/
> > > > 
> > > > and the deeper kernel/*/* hierarchies already host another ~100 .c files. 
> > > > So the transition is half done already I suspect.
> > > 
> > > Should I be thinking about making a kernel/rcu?
> > 
> > I wanted to raise it with you at the KS :-)
> 
> Sorry for jumping the gun.  ;-)
> 
> > To me it would sure look nice to have kernel/rcu/tree.c, 
> > kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> > 
> > [ ... and we would certainly also break new ground by introducing a
> >   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> 
> Ooh...  I had better act fast!  ;-)
> 
> > But it's really your call, this is something you should only do if you are 
> > comfortable with it.
> 
> I have actually been thinking about it off and on for some time.

And here is a first cut.  Just the renaming and needed adjustments,
no splitting or merging of files.

Thoughts?

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/Makefile b/kernel/Makefile
index 1ce47553fb02..f99d908b5550 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -6,9 +6,9 @@ obj-y     = fork.o exec_domain.o panic.o \
 	    cpu.o exit.o itimer.o time.o softirq.o resource.o \
 	    sysctl.o sysctl_binary.o capability.o ptrace.o timer.o user.o \
 	    signal.o sys.o kmod.o workqueue.o pid.o task_work.o \
-	    rcupdate.o extable.o params.o posix-timers.o \
+	    extable.o params.o posix-timers.o \
 	    kthread.o wait.o sys_ni.o posix-cpu-timers.o mutex.o \
-	    hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
+	    hrtimer.o rwsem.o nsproxy.o semaphore.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
 	    async.o range.o groups.o lglock.o smpboot.o
 
@@ -27,6 +27,7 @@ obj-y += power/
 obj-y += printk/
 obj-y += cpu/
 obj-y += irq/
+obj-y += rcu/
 
 obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o
 obj-$(CONFIG_FREEZER) += freezer.o
@@ -81,12 +82,6 @@ obj-$(CONFIG_KGDB) += debug/
 obj-$(CONFIG_DETECT_HUNG_TASK) += hung_task.o
 obj-$(CONFIG_LOCKUP_DETECTOR) += watchdog.o
 obj-$(CONFIG_SECCOMP) += seccomp.o
-obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
-obj-$(CONFIG_TREE_RCU) += rcutree.o
-obj-$(CONFIG_TREE_PREEMPT_RCU) += rcutree.o
-obj-$(CONFIG_TREE_RCU_TRACE) += rcutree_trace.o
-obj-$(CONFIG_TINY_RCU) += rcutiny.o
-obj-$(CONFIG_TINY_PREEMPT_RCU) += rcutiny.o
 obj-$(CONFIG_RELAY) += relay.o
 obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
 obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
diff --git a/kernel/rcu/Makefile b/kernel/rcu/Makefile
new file mode 100644
index 000000000000..01e9ec37a3e3
--- /dev/null
+++ b/kernel/rcu/Makefile
@@ -0,0 +1,6 @@
+obj-y += update.o srcu.o
+obj-$(CONFIG_RCU_TORTURE_TEST) += torture.o
+obj-$(CONFIG_TREE_RCU) += tree.o
+obj-$(CONFIG_TREE_PREEMPT_RCU) += tree.o
+obj-$(CONFIG_TREE_RCU_TRACE) += tree_trace.o
+obj-$(CONFIG_TINY_RCU) += tiny.o
diff --git a/kernel/rcu.h b/kernel/rcu/rcu.h
similarity index 100%
rename from kernel/rcu.h
rename to kernel/rcu/rcu.h
diff --git a/kernel/srcu.c b/kernel/rcu/srcu.c
similarity index 100%
rename from kernel/srcu.c
rename to kernel/rcu/srcu.c
diff --git a/kernel/rcutiny.c b/kernel/rcu/tiny.c
similarity index 99%
rename from kernel/rcutiny.c
rename to kernel/rcu/tiny.c
index 312e9709713f..238bb39b71c6 100644
--- a/kernel/rcutiny.c
+++ b/kernel/rcu/tiny.c
@@ -43,7 +43,7 @@
 
 #include "rcu.h"
 
-/* Forward declarations for rcutiny_plugin.h. */
+/* Forward declarations for tiny_plugin.h. */
 struct rcu_ctrlblk;
 static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp);
 static void rcu_process_callbacks(struct softirq_action *unused);
@@ -53,7 +53,7 @@ static void __call_rcu(struct rcu_head *head,
 
 static long long rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
 
-#include "rcutiny_plugin.h"
+#include "tiny_plugin.h"
 
 /* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcutree.c. */
 static void rcu_idle_enter_common(long long newval)
diff --git a/kernel/rcutiny_plugin.h b/kernel/rcu/tiny_plugin.h
similarity index 100%
rename from kernel/rcutiny_plugin.h
rename to kernel/rcu/tiny_plugin.h
diff --git a/kernel/rcutorture.c b/kernel/rcu/torture.c
similarity index 100%
rename from kernel/rcutorture.c
rename to kernel/rcu/torture.c
diff --git a/kernel/rcutree.c b/kernel/rcu/tree.c
similarity index 99%
rename from kernel/rcutree.c
rename to kernel/rcu/tree.c
index 240604aa3f70..52b67ee63a0b 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcu/tree.c
@@ -56,7 +56,7 @@
 #include <linux/ftrace_event.h>
 #include <linux/suspend.h>
 
-#include "rcutree.h"
+#include "tree.h"
 #include <trace/events/rcu.h>
 
 #include "rcu.h"
@@ -3298,7 +3298,7 @@ static void __init rcu_init_one(struct rcu_state *rsp,
 
 /*
  * Compute the rcu_node tree geometry from kernel parameters.  This cannot
- * replace the definitions in rcutree.h because those are needed to size
+ * replace the definitions in tree.h because those are needed to size
  * the ->node array in the rcu_state structure.
  */
 static void __init rcu_init_geometry(void)
@@ -3393,4 +3393,4 @@ void __init rcu_init(void)
 		rcu_cpu_notify(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
 }
 
-#include "rcutree_plugin.h"
+#include "tree_plugin.h"
diff --git a/kernel/rcutree.h b/kernel/rcu/tree.h
similarity index 100%
rename from kernel/rcutree.h
rename to kernel/rcu/tree.h
diff --git a/kernel/rcutree_plugin.h b/kernel/rcu/tree_plugin.h
similarity index 99%
rename from kernel/rcutree_plugin.h
rename to kernel/rcu/tree_plugin.h
index 8d85a5ce093a..8ca07e3d9f9b 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -28,7 +28,7 @@
 #include <linux/gfp.h>
 #include <linux/oom.h>
 #include <linux/smpboot.h>
-#include "time/tick-internal.h"
+#include "../time/tick-internal.h"
 
 #define RCU_KTHREAD_PRIO 1
 
diff --git a/kernel/rcutree_trace.c b/kernel/rcu/tree_trace.c
similarity index 99%
rename from kernel/rcutree_trace.c
rename to kernel/rcu/tree_trace.c
index cf6c17412932..3596797b7e46 100644
--- a/kernel/rcutree_trace.c
+++ b/kernel/rcu/tree_trace.c
@@ -44,7 +44,7 @@
 #include <linux/seq_file.h>
 
 #define RCU_TREE_NONCORE
-#include "rcutree.h"
+#include "tree.h"
 
 static int r_open(struct inode *inode, struct file *file,
 					const struct seq_operations *op)
diff --git a/kernel/rcupdate.c b/kernel/rcu/update.c
similarity index 100%
rename from kernel/rcupdate.c
rename to kernel/rcu/update.c


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-09  3:28                 ` Paul E. McKenney
@ 2013-10-09  3:35                   ` Paul E. McKenney
  2013-10-09  6:08                     ` Ingo Molnar
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-09  3:35 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Tue, Oct 08, 2013 at 08:28:43PM -0700, Paul E. McKenney wrote:
> On Tue, Oct 08, 2013 at 01:40:56PM -0700, Paul E. McKenney wrote:
> > On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:

[ . . . ]

> > > > Should I be thinking about making a kernel/rcu?
> > > 
> > > I wanted to raise it with you at the KS :-)
> > 
> > Sorry for jumping the gun.  ;-)
> > 
> > > To me it would sure look nice to have kernel/rcu/tree.c, 
> > > kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> > > 
> > > [ ... and we would certainly also break new ground by introducing a
> > >   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> > 
> > Ooh...  I had better act fast!  ;-)
> > 
> > > But it's really your call, this is something you should only do if you are 
> > > comfortable with it.
> > 
> > I have actually been thinking about it off and on for some time.
> 
> And here is a first cut.  Just the renaming and needed adjustments,
> no splitting or merging of files.
> 
> Thoughts?

Wow!  I rebased my commits destined for 3.14 on top of this, and "git
rebase" did it with several protests, but with no manual intervention
required.

Now if it actually still builds, boots, and runs...  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-09  3:35                   ` Paul E. McKenney
@ 2013-10-09  6:08                     ` Ingo Molnar
  2013-10-09 14:21                       ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Ingo Molnar @ 2013-10-09  6:08 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel


* Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

> On Tue, Oct 08, 2013 at 08:28:43PM -0700, Paul E. McKenney wrote:
> > On Tue, Oct 08, 2013 at 01:40:56PM -0700, Paul E. McKenney wrote:
> > > On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> 
> [ . . . ]
> 
> > > > > Should I be thinking about making a kernel/rcu?
> > > > 
> > > > I wanted to raise it with you at the KS :-)
> > > 
> > > Sorry for jumping the gun.  ;-)
> > > 
> > > > To me it would sure look nice to have kernel/rcu/tree.c, 
> > > > kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> > > > 
> > > > [ ... and we would certainly also break new ground by introducing a
> > > >   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> > > 
> > > Ooh...  I had better act fast!  ;-)
> > > 
> > > > But it's really your call, this is something you should only do if you are 
> > > > comfortable with it.
> > > 
> > > I have actually been thinking about it off and on for some time.
> > 
> > And here is a first cut.  Just the renaming and needed adjustments,
> > no splitting or merging of files.
> > 
> > Thoughts?
> 
> Wow!  I rebased my commits destined for 3.14 on top of this, and "git 
> rebase" did it with several protests, but with no manual intervention 
> required.

Git is cool!

> Now if it actually still builds, boots, and runs...  ;-)

Booting is overrated! ;-)

Seriously, this is good stuff.

   Reviewed-by: Ingo Molnar <mingo@kernel.org>

I'd definitely argue in favor of doing a mechanical move first, then any 
further reorganization separately.

(One minor detail I noticed: you'll probably need to update the RCU file 
patterns in MAINTAINERS as well.)

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-09  6:08                     ` Ingo Molnar
@ 2013-10-09 14:21                       ` Paul E. McKenney
  2013-10-10  2:59                         ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-09 14:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Wed, Oct 09, 2013 at 08:08:06AM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> 
> > On Tue, Oct 08, 2013 at 08:28:43PM -0700, Paul E. McKenney wrote:
> > > On Tue, Oct 08, 2013 at 01:40:56PM -0700, Paul E. McKenney wrote:
> > > > On Tue, Oct 08, 2013 at 09:47:18PM +0200, Ingo Molnar wrote:
> > 
> > [ . . . ]
> > 
> > > > > > Should I be thinking about making a kernel/rcu?
> > > > > 
> > > > > I wanted to raise it with you at the KS :-)
> > > > 
> > > > Sorry for jumping the gun.  ;-)
> > > > 
> > > > > To me it would sure look nice to have kernel/rcu/tree.c, 
> > > > > kernel/rcu/tiny.c, kernel/rcu/core.c, etc.
> > > > > 
> > > > > [ ... and we would certainly also break new ground by introducing a
> > > > >   "torture.c" file, for the first time in Linux kernel history! ;-) ]
> > > > 
> > > > Ooh...  I had better act fast!  ;-)
> > > > 
> > > > > But it's really your call, this is something you should only do if you are 
> > > > > comfortable with it.
> > > > 
> > > > I have actually been thinking about it off and on for some time.
> > > 
> > > And here is a first cut.  Just the renaming and needed adjustments,
> > > no splitting or merging of files.
> > > 
> > > Thoughts?
> > 
> > Wow!  I rebased my commits destined for 3.14 on top of this, and "git 
> > rebase" did it with several protests, but with no manual intervention 
> > required.
> 
> Git is cool!

I have indeed learned to love it through a long series of experiences
like this!  ;-)

> > Now if it actually still builds, boots, and runs...  ;-)
> 
> Booting is overrated! ;-)

Well, some work is needed on this front, but will get there.

> Seriously, this is good stuff.
> 
>    Reviewed-by: Ingo Molnar <mingo@kernel.org>

Very good, I have added this.

> I'd definitely argue in favor of doing a mechanical move first, then any 
> further reorganization separately.
> 
> (One minor detail I noticed: you'll probably need to update the RCU file 
> patterns in MAINTAINERS as well.)

Ah, good point, fixed!

							Thanx, Paul


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-09 14:21                       ` Paul E. McKenney
@ 2013-10-10  2:59                         ` Paul E. McKenney
  2013-10-10  8:05                           ` Ingo Molnar
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-10  2:59 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Wed, Oct 09, 2013 at 07:21:23AM -0700, Paul E. McKenney wrote:
> On Wed, Oct 09, 2013 at 08:08:06AM +0200, Ingo Molnar wrote:

[ . . . ]

> > > Now if it actually still builds, boots, and runs...  ;-)
> > 
> > Booting is overrated! ;-)
> 
> Well, some work is needed on this front, but will get there.
> 
> > Seriously, this is good stuff.
> > 
> >    Reviewed-by: Ingo Molnar <mingo@kernel.org>
> 
> Very good, I have added this.
> 
> > I'd definitely argue in favor of doing a mechanical move first, then any 
> > further reorganization separately.
> > 
> > (One minor detail I noticed: you'll probably need to update the RCU file 
> > patterns in MAINTAINERS as well.)
> 
> Ah, good point, fixed!

And it now builds, boots, and passes short rcutorture tests; updated
patch below.

One side effect is on the boot parameters: what used to be
rcutree.blimit=10 is now simply tree.blimit=10.  Not a problem for
me; I just made my test scripts probe the source tree and generate
the corresponding format.  But is there some straightforward way to
get the name of the "rcu" directory involved?  The obvious approach
of "rcu.tree.blimit=10" does not work -- the kernel happily ignores
any such parameter.

It looks like I should be able to do something like the following
in kernel/rcu/tree.c to get back the old parameter names:

MODULE_ALIAS("rcutree");
#ifdef MODULE_PARAM_PREFIX
#undef MODULE_PARAM_PREFIX
#endif
#define MODULE_PARAM_PREFIX "rcutree."

And similarly for rcu/update.c and rcu/torture.c.

In fact, it looks like I could make rcu/update.c also use either the
"rcutree." or "rcutiny." prefix, depending on which was being built.

Any thoughts, cautions, or suggestions?

							Thanx, Paul

 b/Documentation/DocBook/device-drivers.tmpl |    5 
 b/Documentation/RCU/stallwarn.txt           |    4 
 b/Documentation/kernel-parameters.txt       |  257 ++++++++++++++--------------
 b/MAINTAINERS                               |   11 -
 b/kernel/Makefile                           |   11 -
 b/kernel/rcu/Makefile                       |    6 
 b/kernel/rcu/tiny.c                         |    8 
 b/kernel/rcu/tree.c                         |    6 
 b/kernel/rcu/tree_plugin.h                  |    4 
 b/kernel/rcu/tree_trace.c                   |    2 
 10 files changed, 168 insertions(+), 146 deletions(-)

rcu: Move RCU-related source code to kernel/rcu directory

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index fe397f90a34f..6c9d9d37c83a 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -87,7 +87,10 @@ X!Iinclude/linux/kobject.h
 !Ekernel/printk/printk.c
 !Ekernel/panic.c
 !Ekernel/sys.c
-!Ekernel/rcupdate.c
+!Ekernel/rcu/srcu.c
+!Ekernel/rcu/tree.c
+!Ekernel/rcu/tree_plugin.h
+!Ekernel/rcu/update.c
      </sect1>
 
      <sect1><title>Device Resource Management</title>
diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt
index 6f3a0057548e..934448acdfbe 100644
--- a/Documentation/RCU/stallwarn.txt
+++ b/Documentation/RCU/stallwarn.txt
@@ -15,7 +15,7 @@ CONFIG_RCU_CPU_STALL_TIMEOUT
 	21 seconds.
 
 	This configuration parameter may be changed at runtime via the
-	/sys/module/rcutree/parameters/rcu_cpu_stall_timeout, however
+	/sys/module/tree/parameters/rcu_cpu_stall_timeout, however
 	this parameter is checked only at the beginning of a cycle.
 	So if you are 10 seconds into a 40-second stall, setting this
 	sysfs parameter to (say) five will shorten the timeout for the
@@ -24,7 +24,7 @@ CONFIG_RCU_CPU_STALL_TIMEOUT
 	timing of the next warning for the current stall.
 
 	Stall-warning messages may be enabled and disabled completely via
-	/sys/module/rcutree/parameters/rcu_cpu_stall_suppress.
+	/sys/module/tree/parameters/rcu_cpu_stall_suppress.
 
 CONFIG_RCU_CPU_STALL_VERBOSE
 
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 1a036cd972fb..e442114cda5a 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2619,126 +2619,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			energy efficiency by requiring that the kthreads
 			periodically wake up to do the polling.
 
-	rcutree.blimit=	[KNL,BOOT]
-			Set maximum number of finished RCU callbacks to process
-			in one batch.
-
-	rcutree.fanout_leaf=	[KNL,BOOT]
-			Increase the number of CPUs assigned to each
-			leaf rcu_node structure.  Useful for very large
-			systems.
-
-	rcutree.jiffies_till_first_fqs= [KNL,BOOT]
-			Set delay from grace-period initialization to
-			first attempt to force quiescent states.
-			Units are jiffies, minimum value is zero,
-			and maximum value is HZ.
-
-	rcutree.jiffies_till_next_fqs= [KNL,BOOT]
-			Set delay between subsequent attempts to force
-			quiescent states.  Units are jiffies, minimum
-			value is one, and maximum value is HZ.
-
-	rcutree.qhimark=	[KNL,BOOT]
-			Set threshold of queued
-			RCU callbacks over which batch limiting is disabled.
-
-	rcutree.qlowmark=	[KNL,BOOT]
-			Set threshold of queued RCU callbacks below which
-			batch limiting is re-enabled.
-
-	rcutree.rcu_cpu_stall_suppress=	[KNL,BOOT]
-			Suppress RCU CPU stall warning messages.
-
-	rcutree.rcu_cpu_stall_timeout= [KNL,BOOT]
-			Set timeout for RCU CPU stall warning messages.
-
-	rcutree.rcu_idle_gp_delay=	[KNL,BOOT]
-			Set wakeup interval for idle CPUs that have
-			RCU callbacks (RCU_FAST_NO_HZ=y).
-
-	rcutree.rcu_idle_lazy_gp_delay=	[KNL,BOOT]
-			Set wakeup interval for idle CPUs that have
-			only "lazy" RCU callbacks (RCU_FAST_NO_HZ=y).
-			Lazy RCU callbacks are those which RCU can
-			prove do nothing more than free memory.
-
-	rcutorture.fqs_duration= [KNL,BOOT]
-			Set duration of force_quiescent_state bursts.
-
-	rcutorture.fqs_holdoff= [KNL,BOOT]
-			Set holdoff time within force_quiescent_state bursts.
-
-	rcutorture.fqs_stutter= [KNL,BOOT]
-			Set wait time between force_quiescent_state bursts.
-
-	rcutorture.irqreader= [KNL,BOOT]
-			Test RCU readers from irq handlers.
-
-	rcutorture.n_barrier_cbs= [KNL,BOOT]
-			Set callbacks/threads for rcu_barrier() testing.
-
-	rcutorture.nfakewriters= [KNL,BOOT]
-			Set number of concurrent RCU writers.  These just
-			stress RCU, they don't participate in the actual
-			test, hence the "fake".
-
-	rcutorture.nreaders= [KNL,BOOT]
-			Set number of RCU readers.
-
-	rcutorture.onoff_holdoff= [KNL,BOOT]
-			Set time (s) after boot for CPU-hotplug testing.
-
-	rcutorture.onoff_interval= [KNL,BOOT]
-			Set time (s) between CPU-hotplug operations, or
-			zero to disable CPU-hotplug testing.
-
-	rcutorture.shuffle_interval= [KNL,BOOT]
-			Set task-shuffle interval (s).  Shuffling tasks
-			allows some CPUs to go into dyntick-idle mode
-			during the rcutorture test.
-
-	rcutorture.shutdown_secs= [KNL,BOOT]
-			Set time (s) after boot system shutdown.  This
-			is useful for hands-off automated testing.
-
-	rcutorture.stall_cpu= [KNL,BOOT]
-			Duration of CPU stall (s) to test RCU CPU stall
-			warnings, zero to disable.
-
-	rcutorture.stall_cpu_holdoff= [KNL,BOOT]
-			Time to wait (s) after boot before inducing stall.
-
-	rcutorture.stat_interval= [KNL,BOOT]
-			Time (s) between statistics printk()s.
-
-	rcutorture.stutter= [KNL,BOOT]
-			Time (s) to stutter testing, for example, specifying
-			five seconds causes the test to run for five seconds,
-			wait for five seconds, and so on.  This tests RCU's
-			ability to transition abruptly to and from idle.
-
-	rcutorture.test_boost= [KNL,BOOT]
-			Test RCU priority boosting?  0=no, 1=maybe, 2=yes.
-			"Maybe" means test if the RCU implementation
-			under test support RCU priority boosting.
-
-	rcutorture.test_boost_duration= [KNL,BOOT]
-			Duration (s) of each individual boost test.
-
-	rcutorture.test_boost_interval= [KNL,BOOT]
-			Interval (s) between each boost test.
-
-	rcutorture.test_no_idle_hz= [KNL,BOOT]
-			Test RCU's dyntick-idle handling.  See also the
-			rcutorture.shuffle_interval parameter.
-
-	rcutorture.torture_type= [KNL,BOOT]
-			Specify the RCU implementation to test.
-
-	rcutorture.verbose= [KNL,BOOT]
-			Enable additional printk() statements.
-
 	rdinit=		[KNL]
 			Format: <full_path>
 			Run specified binary instead of /init from the ramdisk,
@@ -3109,6 +2989,94 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			e.g. base its process migration decisions on it.
 			Default is on.
 
+	torture.fqs_duration= [KNL,BOOT]
+			Set duration of force_quiescent_state bursts.
+
+	torture.fqs_holdoff= [KNL,BOOT]
+			Set holdoff time within force_quiescent_state bursts.
+
+	torture.fqs_stutter= [KNL,BOOT]
+			Set wait time between force_quiescent_state bursts.
+
+	torture.gp_exp= [KNL,BOOT]
+			Use expedited update-side primitives.
+
+	torture.gp_normal= [KNL,BOOT]
+			Use normal (non-expedited) update-side primitives.
+			If both gp_exp and gp_normal are set, do both.
+			If neither gp_exp or gp_normal are set, still
+			do both.
+
+	torture.n_barrier_cbs= [KNL,BOOT]
+			Set callbacks/threads for rcu_barrier() testing.
+
+	torture.nfakewriters= [KNL,BOOT]
+			Set number of concurrent RCU writers.  These just
+			stress RCU, they don't participate in the actual
+			test, hence the "fake".
+
+	torture.nreaders= [KNL,BOOT]
+			Set number of RCU readers.
+
+	torture.object_debug= [KNL,BOOT]
+			Enable debug-object double-call_rcu() testing.
+
+	torture.onoff_holdoff= [KNL,BOOT]
+			Set time (s) after boot for CPU-hotplug testing.
+
+	torture.onoff_interval= [KNL,BOOT]
+			Set time (s) between CPU-hotplug operations, or
+			zero to disable CPU-hotplug testing.
+
+	torture.rcutorture_runnable= [BOOT]
+			Start rcutorture running at boot time.
+
+	torture.shuffle_interval= [KNL,BOOT]
+			Set task-shuffle interval (s).  Shuffling tasks
+			allows some CPUs to go into dyntick-idle mode
+			during the rcutorture test.
+
+	torture.shutdown_secs= [KNL,BOOT]
+			Set time (s) after boot system shutdown.  This
+			is useful for hands-off automated testing.
+
+	torture.stall_cpu= [KNL,BOOT]
+			Duration of CPU stall (s) to test RCU CPU stall
+			warnings, zero to disable.
+
+	torture.stall_cpu_holdoff= [KNL,BOOT]
+			Time to wait (s) after boot before inducing stall.
+
+	torture.stat_interval= [KNL,BOOT]
+			Time (s) between statistics printk()s.
+
+	torture.stutter= [KNL,BOOT]
+			Time (s) to stutter testing, for example, specifying
+			five seconds causes the test to run for five seconds,
+			wait for five seconds, and so on.  This tests RCU's
+			ability to transition abruptly to and from idle.
+
+	torture.test_boost= [KNL,BOOT]
+			Test RCU priority boosting?  0=no, 1=maybe, 2=yes.
+			"Maybe" means test if the RCU implementation
+			under test support RCU priority boosting.
+
+	torture.test_boost_duration= [KNL,BOOT]
+			Duration (s) of each individual boost test.
+
+	torture.test_boost_interval= [KNL,BOOT]
+			Interval (s) between each boost test.
+
+	torture.test_no_idle_hz= [KNL,BOOT]
+			Test RCU's dyntick-idle handling.  See also the
+			torture.shuffle_interval parameter.
+
+	torture.torture_type= [KNL,BOOT]
+			Specify the RCU implementation to test.
+
+	torture.verbose= [KNL,BOOT]
+			Enable additional printk() statements.
+
 	tp720=		[HW,PS2]
 
 	tpm_suspend_pcr=[HW,TPM]
@@ -3164,6 +3132,44 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			with respect to transparent hugepages.
 			See Documentation/vm/transhuge.txt for more details.
 
+	tree.blimit=	[KNL,BOOT]
+			Set maximum number of finished RCU callbacks to process
+			in one batch.
+
+	tree.rcu_fanout_leaf=	[KNL,BOOT]
+			Increase the number of CPUs assigned to each
+			leaf rcu_node structure.  Useful for very large
+			systems.
+
+	tree.jiffies_till_first_fqs= [KNL,BOOT]
+			Set delay from grace-period initialization to
+			first attempt to force quiescent states.
+			Units are jiffies, minimum value is zero,
+			and maximum value is HZ.
+
+	tree.jiffies_till_next_fqs= [KNL,BOOT]
+			Set delay between subsequent attempts to force
+			quiescent states.  Units are jiffies, minimum
+			value is one, and maximum value is HZ.
+
+	tree.qhimark=	[KNL,BOOT]
+			Set threshold of queued
+			RCU callbacks over which batch limiting is disabled.
+
+	tree.qlowmark=	[KNL,BOOT]
+			Set threshold of queued RCU callbacks below which
+			batch limiting is re-enabled.
+
+	tree.rcu_idle_gp_delay=	[KNL,BOOT]
+			Set wakeup interval for idle CPUs that have
+			RCU callbacks (RCU_FAST_NO_HZ=y).
+
+	tree.rcu_idle_lazy_gp_delay=	[KNL,BOOT]
+			Set wakeup interval for idle CPUs that have
+			only "lazy" RCU callbacks (RCU_FAST_NO_HZ=y).
+			Lazy RCU callbacks are those which RCU can
+			prove do nothing more than free memory.
+
 	tsc=		Disable clocksource stability checks for TSC.
 			Format: <string>
 			[x86] reliable: mark tsc clocksource as reliable, this
@@ -3201,6 +3207,17 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	unknown_nmi_panic
 			[X86] Cause panic on unknown NMI.
 
+	rcu.update.rcu_expedited= [KNL,BOOT]
+			Use expedited grace-period primitives, for
+			example, synchronize_rcu_expedited() instead
+			of synchronize_rcu().
+
+	update.rcu_cpu_stall_suppress=	[KNL,BOOT]
+			Suppress RCU CPU stall warning messages.
+
+	update.rcu_cpu_stall_timeout= [KNL,BOOT]
+			Set timeout for RCU CPU stall warning messages.
+
 	usbcore.authorized_default=
 			[USB] Default USB device authorization:
 			(default -1 = authorized except for wireless USB,
diff --git a/MAINTAINERS b/MAINTAINERS
index e61c2e83fc2b..28f2478b6794 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6903,7 +6903,7 @@ M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
 F:	Documentation/RCU/torture.txt
-F:	kernel/rcutorture.c
+F:	kernel/rcu/torture.c
 
 RDC R-321X SoC
 M:	Florian Fainelli <florian@openwrt.org>
@@ -6930,8 +6930,9 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
 F:	Documentation/RCU/
 X:	Documentation/RCU/torture.txt
 F:	include/linux/rcu*
-F:	kernel/rcu*
-X:	kernel/rcutorture.c
+X:	include/linux/srcu.h
+F:	kernel/rcu/
+X:	kernel/rcu/torture.c
 
 REAL TIME CLOCK (RTC) SUBSYSTEM
 M:	Alessandro Zummo <a.zummo@towertech.it>
@@ -7618,8 +7619,8 @@ M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
 W:	http://www.rdrop.com/users/paulmck/RCU/
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
-F:	include/linux/srcu*
-F:	kernel/srcu*
+F:	include/linux/srcu.h
+F:	kernel/rcu/srcu.c
 
 SMACK SECURITY MODULE
 M:	Casey Schaufler <casey@schaufler-ca.com>
diff --git a/kernel/Makefile b/kernel/Makefile
index 1ce47553fb02..f99d908b5550 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -6,9 +6,9 @@ obj-y     = fork.o exec_domain.o panic.o \
 	    cpu.o exit.o itimer.o time.o softirq.o resource.o \
 	    sysctl.o sysctl_binary.o capability.o ptrace.o timer.o user.o \
 	    signal.o sys.o kmod.o workqueue.o pid.o task_work.o \
-	    rcupdate.o extable.o params.o posix-timers.o \
+	    extable.o params.o posix-timers.o \
 	    kthread.o wait.o sys_ni.o posix-cpu-timers.o mutex.o \
-	    hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
+	    hrtimer.o rwsem.o nsproxy.o semaphore.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
 	    async.o range.o groups.o lglock.o smpboot.o
 
@@ -27,6 +27,7 @@ obj-y += power/
 obj-y += printk/
 obj-y += cpu/
 obj-y += irq/
+obj-y += rcu/
 
 obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o
 obj-$(CONFIG_FREEZER) += freezer.o
@@ -81,12 +82,6 @@ obj-$(CONFIG_KGDB) += debug/
 obj-$(CONFIG_DETECT_HUNG_TASK) += hung_task.o
 obj-$(CONFIG_LOCKUP_DETECTOR) += watchdog.o
 obj-$(CONFIG_SECCOMP) += seccomp.o
-obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
-obj-$(CONFIG_TREE_RCU) += rcutree.o
-obj-$(CONFIG_TREE_PREEMPT_RCU) += rcutree.o
-obj-$(CONFIG_TREE_RCU_TRACE) += rcutree_trace.o
-obj-$(CONFIG_TINY_RCU) += rcutiny.o
-obj-$(CONFIG_TINY_PREEMPT_RCU) += rcutiny.o
 obj-$(CONFIG_RELAY) += relay.o
 obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
 obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
diff --git a/kernel/rcu/Makefile b/kernel/rcu/Makefile
new file mode 100644
index 000000000000..01e9ec37a3e3
--- /dev/null
+++ b/kernel/rcu/Makefile
@@ -0,0 +1,6 @@
+obj-y += update.o srcu.o
+obj-$(CONFIG_RCU_TORTURE_TEST) += torture.o
+obj-$(CONFIG_TREE_RCU) += tree.o
+obj-$(CONFIG_TREE_PREEMPT_RCU) += tree.o
+obj-$(CONFIG_TREE_RCU_TRACE) += tree_trace.o
+obj-$(CONFIG_TINY_RCU) += tiny.o
diff --git a/kernel/rcu.h b/kernel/rcu/rcu.h
similarity index 100%
rename from kernel/rcu.h
rename to kernel/rcu/rcu.h
diff --git a/kernel/srcu.c b/kernel/rcu/srcu.c
similarity index 100%
rename from kernel/srcu.c
rename to kernel/rcu/srcu.c
diff --git a/kernel/rcutiny.c b/kernel/rcu/tiny.c
similarity index 97%
rename from kernel/rcutiny.c
rename to kernel/rcu/tiny.c
index 312e9709713f..0c9a934cfec1 100644
--- a/kernel/rcutiny.c
+++ b/kernel/rcu/tiny.c
@@ -43,7 +43,7 @@
 
 #include "rcu.h"
 
-/* Forward declarations for rcutiny_plugin.h. */
+/* Forward declarations for tiny_plugin.h. */
 struct rcu_ctrlblk;
 static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp);
 static void rcu_process_callbacks(struct softirq_action *unused);
@@ -53,7 +53,7 @@ static void __call_rcu(struct rcu_head *head,
 
 static long long rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
 
-#include "rcutiny_plugin.h"
+#include "tiny_plugin.h"
 
 /* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcutree.c. */
 static void rcu_idle_enter_common(long long newval)
@@ -67,7 +67,7 @@ static void rcu_idle_enter_common(long long newval)
 	RCU_TRACE(trace_rcu_dyntick(TPS("Start"),
 				    rcu_dynticks_nesting, newval));
 	if (!is_idle_task(current)) {
-		struct task_struct *idle = idle_task(smp_processor_id());
+		struct task_struct *idle __maybe_unused = idle_task(smp_processor_id());
 
 		RCU_TRACE(trace_rcu_dyntick(TPS("Entry error: not idle task"),
 					    rcu_dynticks_nesting, newval));
@@ -128,7 +128,7 @@ static void rcu_idle_exit_common(long long oldval)
 	}
 	RCU_TRACE(trace_rcu_dyntick(TPS("End"), oldval, rcu_dynticks_nesting));
 	if (!is_idle_task(current)) {
-		struct task_struct *idle = idle_task(smp_processor_id());
+		struct task_struct *idle __maybe_unused = idle_task(smp_processor_id());
 
 		RCU_TRACE(trace_rcu_dyntick(TPS("Exit error: not idle task"),
 			  oldval, rcu_dynticks_nesting));
diff --git a/kernel/rcutiny_plugin.h b/kernel/rcu/tiny_plugin.h
similarity index 100%
rename from kernel/rcutiny_plugin.h
rename to kernel/rcu/tiny_plugin.h
diff --git a/kernel/rcutorture.c b/kernel/rcu/torture.c
similarity index 100%
rename from kernel/rcutorture.c
rename to kernel/rcu/torture.c
diff --git a/kernel/rcutree.c b/kernel/rcu/tree.c
similarity index 99%
rename from kernel/rcutree.c
rename to kernel/rcu/tree.c
index 240604aa3f70..52b67ee63a0b 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcu/tree.c
@@ -56,7 +56,7 @@
 #include <linux/ftrace_event.h>
 #include <linux/suspend.h>
 
-#include "rcutree.h"
+#include "tree.h"
 #include <trace/events/rcu.h>
 
 #include "rcu.h"
@@ -3298,7 +3298,7 @@ static void __init rcu_init_one(struct rcu_state *rsp,
 
 /*
  * Compute the rcu_node tree geometry from kernel parameters.  This cannot
- * replace the definitions in rcutree.h because those are needed to size
+ * replace the definitions in tree.h because those are needed to size
  * the ->node array in the rcu_state structure.
  */
 static void __init rcu_init_geometry(void)
@@ -3393,4 +3393,4 @@ void __init rcu_init(void)
 		rcu_cpu_notify(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
 }
 
-#include "rcutree_plugin.h"
+#include "tree_plugin.h"
diff --git a/kernel/rcutree.h b/kernel/rcu/tree.h
similarity index 100%
rename from kernel/rcutree.h
rename to kernel/rcu/tree.h
diff --git a/kernel/rcutree_plugin.h b/kernel/rcu/tree_plugin.h
similarity index 99%
rename from kernel/rcutree_plugin.h
rename to kernel/rcu/tree_plugin.h
index 8d85a5ce093a..3822ac0c4b27 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -28,7 +28,7 @@
 #include <linux/gfp.h>
 #include <linux/oom.h>
 #include <linux/smpboot.h>
-#include "time/tick-internal.h"
+#include "../time/tick-internal.h"
 
 #define RCU_KTHREAD_PRIO 1
 
@@ -1133,7 +1133,7 @@ void exit_rcu(void)
 
 #ifdef CONFIG_RCU_BOOST
 
-#include "rtmutex_common.h"
+#include "../rtmutex_common.h"
 
 #ifdef CONFIG_RCU_TRACE
 
diff --git a/kernel/rcutree_trace.c b/kernel/rcu/tree_trace.c
similarity index 99%
rename from kernel/rcutree_trace.c
rename to kernel/rcu/tree_trace.c
index cf6c17412932..3596797b7e46 100644
--- a/kernel/rcutree_trace.c
+++ b/kernel/rcu/tree_trace.c
@@ -44,7 +44,7 @@
 #include <linux/seq_file.h>
 
 #define RCU_TREE_NONCORE
-#include "rcutree.h"
+#include "tree.h"
 
 static int r_open(struct inode *inode, struct file *file,
 					const struct seq_operations *op)
diff --git a/kernel/rcupdate.c b/kernel/rcu/update.c
similarity index 100%
rename from kernel/rcupdate.c
rename to kernel/rcu/update.c


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-10  2:59                         ` Paul E. McKenney
@ 2013-10-10  8:05                           ` Ingo Molnar
  2013-10-10 17:11                             ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Ingo Molnar @ 2013-10-10  8:05 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel


* Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

> And it now builds, boots, and passes short rcutorture tests, updated 
> patch below.
> 
> One side-effect is the boot parameters, namely that what used to be 
> rcutree.blimit=10 is now simply tree.blimit=10.  Not a problem for me, I 
> just made my test scripts probe the source tree and generate the 
> corresponding format.  But is there some straightforward way to get the 
> name of the "rcu" directory involved?  The obvious approach of 
> "rcu.tree.blimit=10" does not work -- the kernel happily ignores any 
> such parameter.

Hm, that boot option parser attitude is a bit sad - more structure to boot 
parameters is IMHO a Good Thing.

Does it accept :

	rcu/tree/blimit=10

	rcu/tree.blimit=10

type of structure perhaps?

> 
> It looks like I should be able to do something like the following in 
> kernel/rcu/tree.c to get back the old parameter names:
> 
> MODULE_ALIAS("rcutree");
> #ifdef MODULE_PARAM_PREFIX
> #undef MODULE_PARAM_PREFIX
> #endif
> #define MODULE_PARAM_PREFIX "rcutree."

Yeah.

( To keep it simple, the undef should be unnecessary, it's not like anyone 
  can slip in a MODULE_PARAM_PREFIX without you noticing, right? )

> And similarly for rcu/update.c and rcu/torture.c.
> 
> In fact, it looks like I could make rcu/update.c also use either the 
> "rcutree." or "rcutiny." prefix, depending on which was being built.
> 
> Any thoughts, cautions, or suggestions?

Still looks nice to me!

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-10  8:05                           ` Ingo Molnar
@ 2013-10-10 17:11                             ` Paul E. McKenney
  2013-10-10 17:39                               ` Ingo Molnar
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-10 17:11 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Thu, Oct 10, 2013 at 10:05:01AM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> 
> > And it now builds, boots, and passes short rcutorture tests, updated 
> > patch below.
> > 
> > One side-effect is the boot parameters, namely that what used to be 
> > rcutree.blimit=10 is now simply tree.blimit=10.  Not a problem for me, I 
> > just made my test scripts probe the source tree and generate the 
> > corresponding format.  But is there some straightforward way to get the 
> > name of the "rcu" directory involved?  The obvious approach of 
> > "rcu.tree.blimit=10" does not work -- the kernel happily ignores any 
> > such parameter.
> 
> Hm, that boot option parser attitude is a bit sad - more structure to boot 
> parameters is IMHO a Good Thing.

No argument here!

> Does it accept :
> 
> 	rcu/tree/blimit=10
> 
> 	rcu/tree.blimit=10
> 
> type of structure perhaps?

Unfortunately, no.  :-(

> > It looks like I should be able to do something like the following in 
> > kernel/rcu/tree.c to get back the old parameter names:
> > 
> > MODULE_ALIAS("rcutree");
> > #ifdef MODULE_PARAM_PREFIX
> > #undef MODULE_PARAM_PREFIX
> > #endif
> > #define MODULE_PARAM_PREFIX "rcutree."
> 
> Yeah.
> 
> ( To keep it simple, the undef should be unnecessary, it's not like anyone 
>   can slip in a MODULE_PARAM_PREFIX without you noticing, right? )

Works with the #undef, trying it without it...  And it works, but I do
get the following complaint from the compiler:

/home/paulmck/public_git/linux-rcu/kernel/rcu/tree.c:66:0: warning: "MODULE_PARAM_PREFIX" redefined [enabled by default]
/home/paulmck/public_git/linux-rcu/include/linux/moduleparam.h:13:0: note: this is the location of the previous definition

The problem is that moduleparam.h contains the following:

#define MODULE_PARAM_PREFIX KBUILD_MODNAME "."

So the #undef is ugly, but better than the compiler warning.  :-(
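
For reference, roughly what this ends up looking like in kernel/rcu/tree.c
(an illustrative sketch rather than the exact hunk; "blimit" here just
stands in for one of the real parameters):

#include <linux/moduleparam.h>

/*
 * KBUILD_MODNAME is derived from the object file name, so once the
 * file lives in kernel/rcu/tree.c the default prefix becomes "tree.".
 * Overriding the prefix pins the old "rcutree." name; the #undef is
 * only there to silence the redefinition warning quoted above.
 */
#undef MODULE_PARAM_PREFIX
#define MODULE_PARAM_PREFIX "rcutree."

static long blimit = 10;		/* example parameter */
module_param(blimit, long, 0444);	/* parsed as "rcutree.blimit=" */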

> > And similarly for rcu/update.c and rcu/torture.c.
> > 
> > In fact, it looks like I could make rcu/update.c also use either the 
> > "rcutree." or "rcutiny." prefix, depending on which was being built.
> > 
> > Any thoughts, cautions, or suggestions?
> 
> Still looks nice to me!

Glad you like it!  ;-)

Updated patch below.  The prefix change thrash was useful, as it rubbed
my face in the fact that I needed to update the kernel-parameters.txt file.
Sometimes you get lucky!

Would you like this for the upcoming 3.13 merge window, or would it be
better for 3.14?

							Thanx, Paul

------------------------------------------------------------------------

rcu: Move RCU-related source code to kernel/rcu directory

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>

 b/Documentation/DocBook/device-drivers.tmpl |    5 +
 b/Documentation/kernel-parameters.txt       |   95 ++++++++++++++++------------
 b/MAINTAINERS                               |   11 +--
 b/kernel/Makefile                           |   11 ---
 b/kernel/rcu/Makefile                       |    6 +
 b/kernel/rcu/tiny.c                         |    8 +-
 b/kernel/rcu/torture.c                      |    6 +
 b/kernel/rcu/tree.c                         |   13 ++-
 b/kernel/rcu/tree_plugin.h                  |    4 -
 b/kernel/rcu/tree_trace.c                   |    2 
 b/kernel/rcu/update.c                       |    6 +
 11 files changed, 105 insertions(+), 62 deletions(-)

rcu: Move RCU-related source code to kernel/rcu directory

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index fe397f90a34f..6c9d9d37c83a 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -87,7 +87,10 @@ X!Iinclude/linux/kobject.h
 !Ekernel/printk/printk.c
 !Ekernel/panic.c
 !Ekernel/sys.c
-!Ekernel/rcupdate.c
+!Ekernel/rcu/srcu.c
+!Ekernel/rcu/tree.c
+!Ekernel/rcu/tree_plugin.h
+!Ekernel/rcu/update.c
      </sect1>
 
      <sect1><title>Device Resource Management</title>
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 1a036cd972fb..c3dc13e90a40 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2595,7 +2595,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	ramdisk_size=	[RAM] Sizes of RAM disks in kilobytes
 			See Documentation/blockdev/ramdisk.txt.
 
-	rcu_nocbs=	[KNL,BOOT]
+	rcu_nocbs=	[KNL]
 			In kernels built with CONFIG_RCU_NOCB_CPU=y, set
 			the specified list of CPUs to be no-callback CPUs.
 			Invocation of these CPUs' RCU callbacks will
@@ -2608,7 +2608,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			real-time workloads.  It can also improve energy
 			efficiency for asymmetric multiprocessors.
 
-	rcu_nocb_poll	[KNL,BOOT]
+	rcu_nocb_poll	[KNL]
 			Rather than requiring that offloaded CPUs
 			(specified by rcu_nocbs= above) explicitly
 			awaken the corresponding "rcuoN" kthreads,
@@ -2619,126 +2619,145 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			energy efficiency by requiring that the kthreads
 			periodically wake up to do the polling.
 
-	rcutree.blimit=	[KNL,BOOT]
+	rcutree.blimit=	[KNL]
 			Set maximum number of finished RCU callbacks to process
 			in one batch.
 
-	rcutree.fanout_leaf=	[KNL,BOOT]
+	rcutree.rcu_fanout_leaf= [KNL]
 			Increase the number of CPUs assigned to each
 			leaf rcu_node structure.  Useful for very large
 			systems.
 
-	rcutree.jiffies_till_first_fqs= [KNL,BOOT]
+	rcutree.jiffies_till_first_fqs= [KNL]
 			Set delay from grace-period initialization to
 			first attempt to force quiescent states.
 			Units are jiffies, minimum value is zero,
 			and maximum value is HZ.
 
-	rcutree.jiffies_till_next_fqs= [KNL,BOOT]
+	rcutree.jiffies_till_next_fqs= [KNL]
 			Set delay between subsequent attempts to force
 			quiescent states.  Units are jiffies, minimum
 			value is one, and maximum value is HZ.
 
-	rcutree.qhimark=	[KNL,BOOT]
+	rcutree.qhimark= [KNL]
 			Set threshold of queued
 			RCU callbacks over which batch limiting is disabled.
 
-	rcutree.qlowmark=	[KNL,BOOT]
+	rcutree.qlowmark= [KNL]
 			Set threshold of queued RCU callbacks below which
 			batch limiting is re-enabled.
 
-	rcutree.rcu_cpu_stall_suppress=	[KNL,BOOT]
-			Suppress RCU CPU stall warning messages.
-
-	rcutree.rcu_cpu_stall_timeout= [KNL,BOOT]
-			Set timeout for RCU CPU stall warning messages.
-
-	rcutree.rcu_idle_gp_delay=	[KNL,BOOT]
+	rcutree.rcu_idle_gp_delay= [KNL]
 			Set wakeup interval for idle CPUs that have
 			RCU callbacks (RCU_FAST_NO_HZ=y).
 
-	rcutree.rcu_idle_lazy_gp_delay=	[KNL,BOOT]
+	rcutree.rcu_idle_lazy_gp_delay= [KNL]
 			Set wakeup interval for idle CPUs that have
 			only "lazy" RCU callbacks (RCU_FAST_NO_HZ=y).
 			Lazy RCU callbacks are those which RCU can
 			prove do nothing more than free memory.
 
-	rcutorture.fqs_duration= [KNL,BOOT]
+	rcutorture.fqs_duration= [KNL]
 			Set duration of force_quiescent_state bursts.
 
-	rcutorture.fqs_holdoff= [KNL,BOOT]
+	rcutorture.fqs_holdoff= [KNL]
 			Set holdoff time within force_quiescent_state bursts.
 
-	rcutorture.fqs_stutter= [KNL,BOOT]
+	rcutorture.fqs_stutter= [KNL]
 			Set wait time between force_quiescent_state bursts.
 
-	rcutorture.irqreader= [KNL,BOOT]
-			Test RCU readers from irq handlers.
+	rcutorture.gp_exp= [KNL]
+			Use expedited update-side primitives.
+
+	rcutorture.gp_normal= [KNL]
+			Use normal (non-expedited) update-side primitives.
+			If both gp_exp and gp_normal are set, do both.
+			If neither gp_exp nor gp_normal are set, still
+			do both.
 
-	rcutorture.n_barrier_cbs= [KNL,BOOT]
+	rcutorture.n_barrier_cbs= [KNL]
 			Set callbacks/threads for rcu_barrier() testing.
 
-	rcutorture.nfakewriters= [KNL,BOOT]
+	rcutorture.nfakewriters= [KNL]
 			Set number of concurrent RCU writers.  These just
 			stress RCU, they don't participate in the actual
 			test, hence the "fake".
 
-	rcutorture.nreaders= [KNL,BOOT]
+	rcutorture.nreaders= [KNL]
 			Set number of RCU readers.
 
-	rcutorture.onoff_holdoff= [KNL,BOOT]
+	rcutorture.object_debug= [KNL]
+			Enable debug-object double-call_rcu() testing.
+
+	rcutorture.onoff_holdoff= [KNL]
 			Set time (s) after boot for CPU-hotplug testing.
 
-	rcutorture.onoff_interval= [KNL,BOOT]
+	rcutorture.onoff_interval= [KNL]
 			Set time (s) between CPU-hotplug operations, or
 			zero to disable CPU-hotplug testing.
 
-	rcutorture.shuffle_interval= [KNL,BOOT]
+	rcutorture.rcutorture_runnable= [BOOT]
+			Start rcutorture running at boot time.
+
+	rcutorture.shuffle_interval= [KNL]
 			Set task-shuffle interval (s).  Shuffling tasks
 			allows some CPUs to go into dyntick-idle mode
 			during the rcutorture test.
 
-	rcutorture.shutdown_secs= [KNL,BOOT]
+	rcutorture.shutdown_secs= [KNL]
 			Set time (s) after boot system shutdown.  This
 			is useful for hands-off automated testing.
 
-	rcutorture.stall_cpu= [KNL,BOOT]
+	rcutorture.stall_cpu= [KNL]
 			Duration of CPU stall (s) to test RCU CPU stall
 			warnings, zero to disable.
 
-	rcutorture.stall_cpu_holdoff= [KNL,BOOT]
+	rcutorture.stall_cpu_holdoff= [KNL]
 			Time to wait (s) after boot before inducing stall.
 
-	rcutorture.stat_interval= [KNL,BOOT]
+	rcutorture.stat_interval= [KNL]
 			Time (s) between statistics printk()s.
 
-	rcutorture.stutter= [KNL,BOOT]
+	rcutorture.stutter= [KNL]
 			Time (s) to stutter testing, for example, specifying
 			five seconds causes the test to run for five seconds,
 			wait for five seconds, and so on.  This tests RCU's
 			ability to transition abruptly to and from idle.
 
-	rcutorture.test_boost= [KNL,BOOT]
+	rcutorture.test_boost= [KNL]
 			Test RCU priority boosting?  0=no, 1=maybe, 2=yes.
 			"Maybe" means test if the RCU implementation
 			under test support RCU priority boosting.
 
-	rcutorture.test_boost_duration= [KNL,BOOT]
+	rcutorture.test_boost_duration= [KNL]
 			Duration (s) of each individual boost test.
 
-	rcutorture.test_boost_interval= [KNL,BOOT]
+	rcutorture.test_boost_interval= [KNL]
 			Interval (s) between each boost test.
 
-	rcutorture.test_no_idle_hz= [KNL,BOOT]
+	rcutorture.test_no_idle_hz= [KNL]
 			Test RCU's dyntick-idle handling.  See also the
 			rcutorture.shuffle_interval parameter.
 
-	rcutorture.torture_type= [KNL,BOOT]
+	rcutorture.torture_type= [KNL]
 			Specify the RCU implementation to test.
 
-	rcutorture.verbose= [KNL,BOOT]
+	rcutorture.verbose= [KNL]
 			Enable additional printk() statements.
 
+	rcupdate.rcu_expedited= [KNL]
+			Use expedited grace-period primitives, for
+			example, synchronize_rcu_expedited() instead
+			of synchronize_rcu().  This reduces latency,
+			but can increase CPU utilization, degrade
+			real-time latency, and degrade energy efficiency.
+
+	rcupdate.rcu_cpu_stall_suppress= [KNL]
+			Suppress RCU CPU stall warning messages.
+
+	rcupdate.rcu_cpu_stall_timeout= [KNL]
+			Set timeout for RCU CPU stall warning messages.
+
 	rdinit=		[KNL]
 			Format: <full_path>
 			Run specified binary instead of /init from the ramdisk,
diff --git a/MAINTAINERS b/MAINTAINERS
index e61c2e83fc2b..28f2478b6794 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6903,7 +6903,7 @@ M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
 F:	Documentation/RCU/torture.txt
-F:	kernel/rcutorture.c
+F:	kernel/rcu/torture.c
 
 RDC R-321X SoC
 M:	Florian Fainelli <florian@openwrt.org>
@@ -6930,8 +6930,9 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
 F:	Documentation/RCU/
 X:	Documentation/RCU/torture.txt
 F:	include/linux/rcu*
-F:	kernel/rcu*
-X:	kernel/rcutorture.c
+X:	include/linux/srcu.h
+F:	kernel/rcu/
+X:	kernel/rcu/torture.c
 
 REAL TIME CLOCK (RTC) SUBSYSTEM
 M:	Alessandro Zummo <a.zummo@towertech.it>
@@ -7618,8 +7619,8 @@ M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
 W:	http://www.rdrop.com/users/paulmck/RCU/
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
-F:	include/linux/srcu*
-F:	kernel/srcu*
+F:	include/linux/srcu.h
+F:	kernel/rcu/srcu.c
 
 SMACK SECURITY MODULE
 M:	Casey Schaufler <casey@schaufler-ca.com>
diff --git a/kernel/Makefile b/kernel/Makefile
index 1ce47553fb02..f99d908b5550 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -6,9 +6,9 @@ obj-y     = fork.o exec_domain.o panic.o \
 	    cpu.o exit.o itimer.o time.o softirq.o resource.o \
 	    sysctl.o sysctl_binary.o capability.o ptrace.o timer.o user.o \
 	    signal.o sys.o kmod.o workqueue.o pid.o task_work.o \
-	    rcupdate.o extable.o params.o posix-timers.o \
+	    extable.o params.o posix-timers.o \
 	    kthread.o wait.o sys_ni.o posix-cpu-timers.o mutex.o \
-	    hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
+	    hrtimer.o rwsem.o nsproxy.o semaphore.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
 	    async.o range.o groups.o lglock.o smpboot.o
 
@@ -27,6 +27,7 @@ obj-y += power/
 obj-y += printk/
 obj-y += cpu/
 obj-y += irq/
+obj-y += rcu/
 
 obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o
 obj-$(CONFIG_FREEZER) += freezer.o
@@ -81,12 +82,6 @@ obj-$(CONFIG_KGDB) += debug/
 obj-$(CONFIG_DETECT_HUNG_TASK) += hung_task.o
 obj-$(CONFIG_LOCKUP_DETECTOR) += watchdog.o
 obj-$(CONFIG_SECCOMP) += seccomp.o
-obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
-obj-$(CONFIG_TREE_RCU) += rcutree.o
-obj-$(CONFIG_TREE_PREEMPT_RCU) += rcutree.o
-obj-$(CONFIG_TREE_RCU_TRACE) += rcutree_trace.o
-obj-$(CONFIG_TINY_RCU) += rcutiny.o
-obj-$(CONFIG_TINY_PREEMPT_RCU) += rcutiny.o
 obj-$(CONFIG_RELAY) += relay.o
 obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
 obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
diff --git a/kernel/rcu/Makefile b/kernel/rcu/Makefile
new file mode 100644
index 000000000000..01e9ec37a3e3
--- /dev/null
+++ b/kernel/rcu/Makefile
@@ -0,0 +1,6 @@
+obj-y += update.o srcu.o
+obj-$(CONFIG_RCU_TORTURE_TEST) += torture.o
+obj-$(CONFIG_TREE_RCU) += tree.o
+obj-$(CONFIG_TREE_PREEMPT_RCU) += tree.o
+obj-$(CONFIG_TREE_RCU_TRACE) += tree_trace.o
+obj-$(CONFIG_TINY_RCU) += tiny.o
diff --git a/kernel/rcu.h b/kernel/rcu/rcu.h
similarity index 100%
rename from kernel/rcu.h
rename to kernel/rcu/rcu.h
diff --git a/kernel/srcu.c b/kernel/rcu/srcu.c
similarity index 100%
rename from kernel/srcu.c
rename to kernel/rcu/srcu.c
diff --git a/kernel/rcutiny.c b/kernel/rcu/tiny.c
similarity index 97%
rename from kernel/rcutiny.c
rename to kernel/rcu/tiny.c
index 312e9709713f..0c9a934cfec1 100644
--- a/kernel/rcutiny.c
+++ b/kernel/rcu/tiny.c
@@ -43,7 +43,7 @@
 
 #include "rcu.h"
 
-/* Forward declarations for rcutiny_plugin.h. */
+/* Forward declarations for tiny_plugin.h. */
 struct rcu_ctrlblk;
 static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp);
 static void rcu_process_callbacks(struct softirq_action *unused);
@@ -53,7 +53,7 @@ static void __call_rcu(struct rcu_head *head,
 
 static long long rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
 
-#include "rcutiny_plugin.h"
+#include "tiny_plugin.h"
 
 /* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcutree.c. */
 static void rcu_idle_enter_common(long long newval)
@@ -67,7 +67,7 @@ static void rcu_idle_enter_common(long long newval)
 	RCU_TRACE(trace_rcu_dyntick(TPS("Start"),
 				    rcu_dynticks_nesting, newval));
 	if (!is_idle_task(current)) {
-		struct task_struct *idle = idle_task(smp_processor_id());
+		struct task_struct *idle __maybe_unused = idle_task(smp_processor_id());
 
 		RCU_TRACE(trace_rcu_dyntick(TPS("Entry error: not idle task"),
 					    rcu_dynticks_nesting, newval));
@@ -128,7 +128,7 @@ static void rcu_idle_exit_common(long long oldval)
 	}
 	RCU_TRACE(trace_rcu_dyntick(TPS("End"), oldval, rcu_dynticks_nesting));
 	if (!is_idle_task(current)) {
-		struct task_struct *idle = idle_task(smp_processor_id());
+		struct task_struct *idle __maybe_unused = idle_task(smp_processor_id());
 
 		RCU_TRACE(trace_rcu_dyntick(TPS("Exit error: not idle task"),
 			  oldval, rcu_dynticks_nesting));
diff --git a/kernel/rcutiny_plugin.h b/kernel/rcu/tiny_plugin.h
similarity index 100%
rename from kernel/rcutiny_plugin.h
rename to kernel/rcu/tiny_plugin.h
diff --git a/kernel/rcutorture.c b/kernel/rcu/torture.c
similarity index 99%
rename from kernel/rcutorture.c
rename to kernel/rcu/torture.c
index be63101c6175..3929cd451511 100644
--- a/kernel/rcutorture.c
+++ b/kernel/rcu/torture.c
@@ -52,6 +52,12 @@
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com> and Josh Triplett <josh@freedesktop.org>");
 
+MODULE_ALIAS("rcutorture");
+#ifdef MODULE_PARAM_PREFIX
+#undef MODULE_PARAM_PREFIX
+#endif
+#define MODULE_PARAM_PREFIX "rcutorture."
+
 static int fqs_duration;
 module_param(fqs_duration, int, 0444);
 MODULE_PARM_DESC(fqs_duration, "Duration of fqs bursts (us), 0 to disable");
diff --git a/kernel/rcutree.c b/kernel/rcu/tree.c
similarity index 99%
rename from kernel/rcutree.c
rename to kernel/rcu/tree.c
index 240604aa3f70..8a2c81e86dda 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcu/tree.c
@@ -41,6 +41,7 @@
 #include <linux/export.h>
 #include <linux/completion.h>
 #include <linux/moduleparam.h>
+#include <linux/module.h>
 #include <linux/percpu.h>
 #include <linux/notifier.h>
 #include <linux/cpu.h>
@@ -56,11 +57,17 @@
 #include <linux/ftrace_event.h>
 #include <linux/suspend.h>
 
-#include "rcutree.h"
+#include "tree.h"
 #include <trace/events/rcu.h>
 
 #include "rcu.h"
 
+MODULE_ALIAS("rcutree");
+#ifdef MODULE_PARAM_PREFIX
+#undef MODULE_PARAM_PREFIX
+#endif
+#define MODULE_PARAM_PREFIX "rcutree."
+
 /* Data structures. */
 
 static struct lock_class_key rcu_node_class[RCU_NUM_LVLS];
@@ -3298,7 +3305,7 @@ static void __init rcu_init_one(struct rcu_state *rsp,
 
 /*
  * Compute the rcu_node tree geometry from kernel parameters.  This cannot
- * replace the definitions in rcutree.h because those are needed to size
+ * replace the definitions in tree.h because those are needed to size
  * the ->node array in the rcu_state structure.
  */
 static void __init rcu_init_geometry(void)
@@ -3393,4 +3400,4 @@ void __init rcu_init(void)
 		rcu_cpu_notify(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
 }
 
-#include "rcutree_plugin.h"
+#include "tree_plugin.h"
diff --git a/kernel/rcutree.h b/kernel/rcu/tree.h
similarity index 100%
rename from kernel/rcutree.h
rename to kernel/rcu/tree.h
diff --git a/kernel/rcutree_plugin.h b/kernel/rcu/tree_plugin.h
similarity index 99%
rename from kernel/rcutree_plugin.h
rename to kernel/rcu/tree_plugin.h
index 8d85a5ce093a..3822ac0c4b27 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -28,7 +28,7 @@
 #include <linux/gfp.h>
 #include <linux/oom.h>
 #include <linux/smpboot.h>
-#include "time/tick-internal.h"
+#include "../time/tick-internal.h"
 
 #define RCU_KTHREAD_PRIO 1
 
@@ -1133,7 +1133,7 @@ void exit_rcu(void)
 
 #ifdef CONFIG_RCU_BOOST
 
-#include "rtmutex_common.h"
+#include "../rtmutex_common.h"
 
 #ifdef CONFIG_RCU_TRACE
 
diff --git a/kernel/rcutree_trace.c b/kernel/rcu/tree_trace.c
similarity index 99%
rename from kernel/rcutree_trace.c
rename to kernel/rcu/tree_trace.c
index cf6c17412932..3596797b7e46 100644
--- a/kernel/rcutree_trace.c
+++ b/kernel/rcu/tree_trace.c
@@ -44,7 +44,7 @@
 #include <linux/seq_file.h>
 
 #define RCU_TREE_NONCORE
-#include "rcutree.h"
+#include "tree.h"
 
 static int r_open(struct inode *inode, struct file *file,
 					const struct seq_operations *op)
diff --git a/kernel/rcupdate.c b/kernel/rcu/update.c
similarity index 98%
rename from kernel/rcupdate.c
rename to kernel/rcu/update.c
index c07af1c4e1bb..6cb3dff89e2b 100644
--- a/kernel/rcupdate.c
+++ b/kernel/rcu/update.c
@@ -53,6 +53,12 @@
 
 #include "rcu.h"
 
+MODULE_ALIAS("rcupdate");
+#ifdef MODULE_PARAM_PREFIX
+#undef MODULE_PARAM_PREFIX
+#endif
+#define MODULE_PARAM_PREFIX "rcupdate."
+
 module_param(rcu_expedited, int, 0);
 
 #ifdef CONFIG_PREEMPT_RCU


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-10 17:11                             ` Paul E. McKenney
@ 2013-10-10 17:39                               ` Ingo Molnar
  2013-10-10 18:58                                 ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Ingo Molnar @ 2013-10-10 17:39 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel


* Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

> On Thu, Oct 10, 2013 at 10:05:01AM +0200, Ingo Molnar wrote:
> > 
> > * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> > 
> > > And it now builds, boots, and passes short rcutorture tests, updated 
> > > patch below.
> > > 
> > > One side-effect is the boot parameters, namely that what used to be 
> > > rcutree.blimit=10 is now simply tree.blimit=10.  Not a problem for me, I 
> > > just made my test scripts probe the source tree and generate the 
> > > corresponding format.  But is there some straightforward way to get the 
> > > name of the "rcu" directory involved?  The obvious approach of 
> > > "rcu.tree.blimit=10" does not work -- the kernel happily ignores any 
> > > such parameter.
> > 
> > Hm, that boot option parser attitude is a bit sad - more structure to boot 
> > parameters is IMHO a Good Thing.
> 
> No argument here!
> 
> > Does it accept :
> > 
> > 	rcu/tree/blimit=10
> > 
> > 	rcu/tree.blimit=10
> > 
> > type of structure perhaps?
> 
> Unfortunately, no.  :-(

So, I think this code lives within kernel/params.c. Might be fixable?

> 
> > > It looks like I should be able to do something like the following in 
> > > kernel/rcu/tree.c to get back the old parameter names:
> > > 
> > > MODULE_ALIAS("rcutree");
> > > #ifdef MODULE_PARAM_PREFIX
> > > #undef MODULE_PARAM_PREFIX
> > > #endif
> > > #define MODULE_PARAM_PREFIX "rcutree."
> > 
> > Yeah.
> > 
> > ( To keep it simple, the undef should be unnecessary, it's not like anyone 
> >   can slip in a MODULE_PARAM_PREFIX without you noticing, right? )
> 
> Works with the #undef, trying it without it...  And it works, but I do 
> get the following complaint from the compiler:
> 
> /home/paulmck/public_git/linux-rcu/kernel/rcu/tree.c:66:0: warning: "MODULE_PARAM_PREFIX" redefined [enabled by default]
> /home/paulmck/public_git/linux-rcu/include/linux/moduleparam.h:13:0: note: this is the location of the previous definition
> 
> The problem is that moduleparam.h contains the following:
> 
> #define MODULE_PARAM_PREFIX KBUILD_MODNAME "."
> 
> So the #undef is ugly, but better than the compiler warning.  :-(

agreed.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-10 17:39                               ` Ingo Molnar
@ 2013-10-10 18:58                                 ` Paul E. McKenney
  2013-10-11  7:26                                   ` Ingo Molnar
  0 siblings, 1 reply; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-10 18:58 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Thu, Oct 10, 2013 at 07:39:37PM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> 
> > On Thu, Oct 10, 2013 at 10:05:01AM +0200, Ingo Molnar wrote:
> > > 
> > > * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> > > 
> > > > And it now builds, boots, and passes short rcutorture tests, updated 
> > > > patch below.
> > > > 
> > > > One side-effect is the boot parameters, namely that what used to be 
> > > > rcutree.blimit=10 is now simply tree.blimit=10.  Not a problem for me, I 
> > > > just made my test scripts probe the source tree and generate the 
> > > > corresponding format.  But is there some straightforward way to get the 
> > > > name of the "rcu" directory involved?  The obvious approach of 
> > > > "rcu.tree.blimit=10" does not work -- the kernel happily ignores any 
> > > > such parameter.
> > > 
> > > Hm, that boot option parser attitude is a bit sad - more structure to boot 
> > > parameters is IMHO a Good Thing.
> > 
> > No argument here!
> > 
> > > Does it accept :
> > > 
> > > 	rcu/tree/blimit=10
> > > 
> > > 	rcu/tree.blimit=10
> > > 
> > > type of structure perhaps?
> > 
> > Unfortunately, no.  :-(
> 
> So, I think this code lives within kernel/params.c. Might be fixable?

But of course!  I was just trying to be lazy. ;-)

I could imagine adding a filename field to struct kernel_param that
was initialized with __FILE__, then making something like parameq()
that did the appropriate comparison allowing any match starting after a
"/" and ignoring the trailing ".h" or ".c", and then calling that from
parse_one() along with current parameq().  There doesn't seem to be any
point for doing the same to do_early_param().
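
In userspace terms, the comparison might look something like the
following (a rough standalone sketch, not the kernel's parameq();
the helper name is made up):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Accept a user-supplied prefix if it matches a suffix of the defining
 * file's path that starts at the beginning or right after a '/', with
 * the trailing ".c"/".h" stripped.  So for kernel/rcu/tree.c this
 * accepts "tree", "rcu/tree" and "kernel/rcu/tree", but not "rcutree".
 */
static bool prefix_matches_file(const char *prefix, const char *file)
{
	size_t flen = strlen(file);
	size_t plen = strlen(prefix);
	size_t off;

	if (flen > 2 && file[flen - 2] == '.')
		flen -= 2;		/* drop ".c" or ".h" */

	for (off = 0; off <= flen; off++) {
		if (off && file[off - 1] != '/')
			continue;	/* only start after a '/' (or at 0) */
		if (flen - off == plen && !strncmp(file + off, prefix, plen))
			return true;
	}
	return false;
}

int main(void)
{
	const char *file = "kernel/rcu/tree.c";	/* e.g. from __FILE__ */

	printf("%d %d %d %d\n",
	       prefix_matches_file("tree", file),		/* 1 */
	       prefix_matches_file("rcu/tree", file),		/* 1 */
	       prefix_matches_file("kernel/rcu/tree", file),	/* 1 */
	       prefix_matches_file("rcutree", file));		/* 0 */
	return 0;
}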

There would be a few surprises with this approach, for example,
rcu_idle_gp_delay and rcu_idle_lazy_gp_delay, which are defined
in kernel/rcu/tree_plugin.h, would be:

	tree_plugin.rcu_idle_gp_delay=4
	tree_plugin.rcu_idle_lazy_gp_delay=6000

or:

	rcu/tree_plugin.rcu_idle_gp_delay=4
	rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000

or:

	kernel/rcu/tree_plugin.rcu_idle_gp_delay=4
	kernel/rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000

or I suppose even:

	linux-rcu/kernel/rcu/tree_plugin.rcu_idle_gp_delay=4
	linux-rcu/kernel/rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000

instead of (say):

	kernel/rcu/tree.rcu_idle_gp_delay=4
	kernel/rcu/tree.rcu_idle_lazy_gp_delay=6000

This could of course also be fixed by comparing the filename up to
the last "/" followed by the current parameter name.  Or, as Peter
Zijlstra suggested, by manually expanding kernel/rcu/tree_plugin.h into
kernel/rcu/tree.c.

Or I could use the non-standard __BASE_FILE__ instead of __FILE__, which
expands to .../kernel/rcu/tree.c.  LLVM seems to define this as well,
so should be OK to use.
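
Concretely, the difference the two macros make for the #included plugin
(a sketch; the exact strings depend on how the paths are passed to the
compiler):

/* in kernel/rcu/tree_plugin.h, which tree.c pulls in via #include: */
static const char *param_file      = __FILE__;	     /* ".../rcu/tree_plugin.h" */
static const char *param_base_file = __BASE_FILE__;  /* ".../rcu/tree.c" */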

So it doesn't look too horrible.  (Famous last words...)

Thoughts?

> > > > It looks like I should be able to do something like the following in 
> > > > kernel/rcu/tree.c to get back the old parameter names:
> > > > 
> > > > MODULE_ALIAS("rcutree");
> > > > #ifdef MODULE_PARAM_PREFIX
> > > > #undef MODULE_PARAM_PREFIX
> > > > #endif
> > > > #define MODULE_PARAM_PREFIX "rcutree."
> > > 
> > > Yeah.
> > > 
> > > ( To keep it simple, the undef should be unnecessary, it's not like anyone 
> > >   can slip in a MODULE_PARAM_PREFIX without you noticing, right? )
> > 
> > Works with the #undef, trying it without it...  And it works, but I do 
> > get the following complaint from the compiler:
> > 
> > /home/paulmck/public_git/linux-rcu/kernel/rcu/tree.c:66:0: warning: "MODULE_PARAM_PREFIX" redefined [enabled by default]
> > /home/paulmck/public_git/linux-rcu/include/linux/moduleparam.h:13:0: note: this is the location of the previous definition
> > 
> > The problem is that moduleparam.h contains the following:
> > 
> > #define MODULE_PARAM_PREFIX KBUILD_MODNAME "."
> > 
> > So the #undef is ugly, but better than the compiler warning.  :-(
> 
> agreed.
> 
> Thanks,
> 
> 	Ingo
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-10 18:58                                 ` Paul E. McKenney
@ 2013-10-11  7:26                                   ` Ingo Molnar
  2013-10-11 15:59                                     ` Paul E. McKenney
  0 siblings, 1 reply; 56+ messages in thread
From: Ingo Molnar @ 2013-10-11  7:26 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel


* Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

> > So, I think this code lives within kernel/params.c. Might be fixable?
> 
> But of course!  I was just trying to be lazy. ;-)
> 
> I could imagine adding a filename field to struct kernel_param that was 
> initialized with __FILE__, then making something like parameq() that did 
> the appropriate comparison allowing any match starting after a "/" and 
> ignoring the trailing ".h" or ".c", and then calling that from 
> parse_one() along with current parameq().  There doesn't seem to be any 
> point for doing the same to do_early_param().
> 
> There would be a few surprises with this approach, for example, 
> rcu_idle_gp_delay and rcu_idle_lazy_gp_delay, which are defined in 
> kernel/rcu/tree_plugin.h, would be:
> 
> 	tree_plugin.rcu_idle_gp_delay=4
> 	tree_plugin.rcu_idle_lazy_gp_delay=6000
> 
> or:
> 
> 	rcu/tree_plugin.rcu_idle_gp_delay=4
> 	rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000
> 
> or:
> 
> 	kernel/rcu/tree_plugin.rcu_idle_gp_delay=4
> 	kernel/rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000
> 
> or I suppose even:
> 
> 	linux-rcu/kernel/rcu/tree_plugin.rcu_idle_gp_delay=4
> 	linux-rcu/kernel/rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000
> 
> instead of (say):
> 
> 	kernel/rcu/tree.rcu_idle_gp_delay=4
> 	kernel/rcu/tree.rcu_idle_lazy_gp_delay=6000
> 
> This could of course also be fixed by comparing the filename up to the 
> last "/" followed by the current parameter name.  Or, as Peter Zijlstra 
> suggested, by manually expanding kernel/rcu/tree_plugin.h into 
> kernel/rcu/tree.c.
> 
> Or I could use the non-standard __BASE_FILE__ instead of __FILE__, which 
> expands to .../kernel/rcu/tree.c.  LLVM seems to define this as well, so 
> should be OK to use.
> 
> So it doesn't look too horrible.  (Famous last words...)

Hm, I'm not so sure about the long names, for the following reasons: 
strings like 'kernel/rcu/tree_plugin.' might mean a lot to us kernel 
developers - less to sysadmins and users who would want to utilize them.

There's also a typo danger with overly long parameters and the parameter 
parser is not very intelligent about seeing the intent of the user.

So I think while rcu/tree.val would be useful syntax, going above that, 
especially with auto-generated file names (and file names can change) 
would be overdoing it a bit :-/

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5
  2013-10-11  7:26                                   ` Ingo Molnar
@ 2013-10-11 15:59                                     ` Paul E. McKenney
  0 siblings, 0 replies; 56+ messages in thread
From: Paul E. McKenney @ 2013-10-11 15:59 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Ingo Molnar, Oleg Nesterov, Linus Torvalds,
	Thomas Gleixner, Andrew Morton, linux-kernel

On Fri, Oct 11, 2013 at 09:26:27AM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> 
> > > So, I think this code lives within kernel/params.c. Might be fixable?
> > 
> > But of course!  I was just trying to be lazy. ;-)
> > 
> > I could imagine adding a filename field to struct kernel_param that was 
> > initialized with __FILE__, then making something like parameq() that did 
> > the appropriate comparison allowing any match starting after a "/" and 
> > ignoring the trailing ".h" or ".c", and then calling that from 
> > parse_one() along with current parameq().  There doesn't seem to be any 
> > point for doing the same to do_early_param().
> > 
> > There would be a few surprises with this approach, for example, 
> > rcu_idle_gp_delay and rcu_idle_lazy_gp_delay, which are defined in 
> > kernel/rcu/tree_plugin.h, would be:
> > 
> > 	tree_plugin.rcu_idle_gp_delay=4
> > 	tree_plugin.rcu_idle_lazy_gp_delay=6000
> > 
> > or:
> > 
> > 	rcu/tree_plugin.rcu_idle_gp_delay=4
> > 	rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000
> > 
> > or:
> > 
> > 	kernel/rcu/tree_plugin.rcu_idle_gp_delay=4
> > 	kernel/rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000
> > 
> > or I suppose even:
> > 
> > 	linux-rcu/kernel/rcu/tree_plugin.rcu_idle_gp_delay=4
> > 	linux-rcu/kernel/rcu/tree_plugin.rcu_idle_lazy_gp_delay=6000
> > 
> > instead of (say):
> > 
> > 	kernel/rcu/tree.rcu_idle_gp_delay=4
> > 	kernel/rcu/tree.rcu_idle_lazy_gp_delay=6000
> > 
> > This could of course also be fixed by comparing the filename up to the 
> > last "/" followed by the current parameter name.  Or, as Peter Zijlstra 
> > suggested, by manually expanding kernel/rcu/tree_plugin.h into 
> > kernel/rcu/tree.c.
> > 
> > Or I could use the non-standard __BASE_FILE__ instead of __FILE__, which 
> > expands to .../kernel/rcu/tree.c.  LLVM seems to define this as well, so 
> > should be OK to use.
> > 
> > So it doesn't look too horrible.  (Famous last words...)
> 
> Hm, I'm not so sure about the long names, for the following reasons: 
> strings like 'kernel/rcu/tree_plugin.' might mean a lot to us kernel 
> developers - less to sysadmins and users who would want to utilize them.

Good point!

Plus some people building kernels might not be so happy to have the
full path names to their build trees hard-coded into the kernel.  And
the people wanting tiny kernels wouldn't be happy about that much useless
text appearing in the kernel that many times.

> There's also a typo danger with overly long parameters and the parameter 
> parser is not very intelligent about seeing the intent of the user.
> 
> So I think while rcu/tree.val would be useful syntax, going above that, 
> especially with auto-generated file names (and file names can change) 
> would be overdoing it a bit :-/

Keeping rcutree.val is useful because there are systems that already use
some of those boot parameters.  I could use backslash as a wildcard that
matches "/", "_", or nothingness, thus matching rcutree.val, rcu/tree.val,
and rcu_tree.val, with something like the patch below, but I am not sure
that it is really worthwhile.

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/params.c b/kernel/params.c
index 81c4e78..2711c39 100644
--- a/kernel/params.c
+++ b/kernel/params.c
@@ -70,10 +70,25 @@ static char dash2underscore(char c)
 bool parameqn(const char *a, const char *b, size_t n)
 {
 	size_t i;
+	size_t j = 0;
 
 	for (i = 0; i < n; i++) {
-		if (dash2underscore(a[i]) != dash2underscore(b[i]))
-			return false;
+		if (b[j] == '\\') {
+			j++;
+			if (a[i] == '/' || a[i] == '_')
+				continue;
+			/*
+			 * A backslash is permitted to match nothingness, so
+			 * skip it and try the input character against the next
+			 * pattern character.
+			 */
+			i--;
+			continue;
+		} else if (dash2underscore(a[i]) == dash2underscore(b[j])) {
+			j++;
+			continue;
+		}
+		return false;
 	}
 	return true;
 }
diff --git a/kernel/rcu/torture.c b/kernel/rcu/torture.c
index 69a4ec8..b3ed1e0 100644
--- a/kernel/rcu/torture.c
+++ b/kernel/rcu/torture.c
@@ -56,7 +56,7 @@ MODULE_ALIAS("rcutorture");
 #ifdef MODULE_PARAM_PREFIX
 #undef MODULE_PARAM_PREFIX
 #endif
-#define MODULE_PARAM_PREFIX "rcutorture."
+#define MODULE_PARAM_PREFIX "rcu\\torture."
 
 static int fqs_duration;
 module_param(fqs_duration, int, 0444);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 875f2a0..3dc63ee 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -66,7 +66,7 @@ MODULE_ALIAS("rcutree");
 #ifdef MODULE_PARAM_PREFIX
 #undef MODULE_PARAM_PREFIX
 #endif
-#define MODULE_PARAM_PREFIX "rcutree."
+#define MODULE_PARAM_PREFIX "rcu\\tree."
 
 /* Data structures. */
 
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 6cb3dff..2439516 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -57,7 +57,7 @@ MODULE_ALIAS("rcupdate");
 #ifdef MODULE_PARAM_PREFIX
 #undef MODULE_PARAM_PREFIX
 #endif
-#define MODULE_PARAM_PREFIX "rcupdate."
+#define MODULE_PARAM_PREFIX "rcu\\pdate."
 
 module_param(rcu_expedited, int, 0);
 

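With the prefix spelled "rcu\\tree." as above, all of these command-line
spellings would then name the same parameter (assuming the rest of the
parser passes the name through unchanged):

	rcutree.blimit=10
	rcu/tree.blimit=10
	rcu_tree.blimit=10
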

^ permalink raw reply related	[flat|nested] 56+ messages in thread

end of thread, other threads:[~2013-10-11 15:59 UTC | newest]

Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-10-02  9:22 [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
2013-10-02  9:22 ` [PATCH 01/16] sched/wait: Make the signal_pending() checks consistent Peter Zijlstra
2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 02/16] sched/wait: Change timeout logic Peter Zijlstra
2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 03/16] sched/wait: Change the wait_exclusive control flow Peter Zijlstra
2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 04/16] sched/wait: Introduce ___wait_event() Peter Zijlstra
2013-10-04 17:33   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 05/16] sched/wait: Collapse __wait_event() Peter Zijlstra
2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 06/16] sched/wait: Collapse __wait_event_timeout() Peter Zijlstra
2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 07/16] sched/wait: Collapse __wait_event_interruptible() Peter Zijlstra
2013-10-04 17:34   ` [tip:sched/core] sched/wait: Collapse __wait_event_interruptible( ) tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 08/16] sched/wait: Collapse __wait_event_interruptible_timeout() Peter Zijlstra
2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 09/16] sched/wait: Collapse __wait_event_interruptible_exclusive() Peter Zijlstra
2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 10/16] sched/wait: Collapse __wait_event_lock_irq() Peter Zijlstra
2013-10-04 17:34   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 11/16] sched/wait: Collapse __wait_event_interruptible_lock_irq() Peter Zijlstra
2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 12/16] sched/wait: Collapse __wait_event_interruptible_lock_irq_timeout() Peter Zijlstra
2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 13/16] sched/wait: Collapse __wait_event_interruptible_tty() Peter Zijlstra
2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 14/16] sched/wait: Collapse __wait_event_killable() Peter Zijlstra
2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 15/16] sched/wait: Collapse __wait_event_hrtimeout() Peter Zijlstra
2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-02  9:22 ` [PATCH 16/16] sched/wait: Make the __wait_event*() interface more friendly Peter Zijlstra
2013-10-04 17:35   ` [tip:sched/core] " tip-bot for Peter Zijlstra
2013-10-04 20:44 ` [PATCH 00/16] sched/wait: Collapse __wait_event macros -v5 Peter Zijlstra
2013-10-04 20:44   ` Peter Zijlstra
2013-10-05  8:04     ` Ingo Molnar
2013-10-08  9:59       ` Peter Zijlstra
2013-10-08 10:23         ` Ingo Molnar
2013-10-08 14:16           ` Paul E. McKenney
2013-10-08 19:47             ` Ingo Molnar
2013-10-08 20:01               ` Peter Zijlstra
2013-10-08 20:41                 ` Paul E. McKenney
2013-10-08 21:06                   ` Peter Zijlstra
2013-10-08 21:43                     ` Paul E. McKenney
2013-10-08 20:40               ` Paul E. McKenney
2013-10-09  3:28                 ` Paul E. McKenney
2013-10-09  3:35                   ` Paul E. McKenney
2013-10-09  6:08                     ` Ingo Molnar
2013-10-09 14:21                       ` Paul E. McKenney
2013-10-10  2:59                         ` Paul E. McKenney
2013-10-10  8:05                           ` Ingo Molnar
2013-10-10 17:11                             ` Paul E. McKenney
2013-10-10 17:39                               ` Ingo Molnar
2013-10-10 18:58                                 ` Paul E. McKenney
2013-10-11  7:26                                   ` Ingo Molnar
2013-10-11 15:59                                     ` Paul E. McKenney

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).