* [PATCH tip/core/rcu 0/6] Fixes for 3.8
@ 2012-10-30 16:27 Paul E. McKenney
  2012-10-30 16:27 ` [PATCH tip/core/rcu 1/6] rcu: Accelerate callbacks for CPU initiating a grace period Paul E. McKenney
  0 siblings, 1 reply; 7+ messages in thread
From: Paul E. McKenney @ 2012-10-30 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, patches, oleg

Hello!

This series contains the following fixes:

1.	Reinstate a grace-period acceleration that permits invoking
	the first callback registered on an idle system in one grace
	period rather than two.  The previous version of this acceleration
	was invalidated by the new grace-period kthreads.
2.	Fix an integer-size mismatch that prevented RCU from shifting
	to bulk-callback-invocation mode under overload.  (Courtesy of
	Eric Dumazet.)
3.	Remove list_for_each_continue_rcu(), as it is no longer used.
4.	Update rcutorture's module-parameter printout to include new
	parameters.
5.	Document the memory-ordering properties of RCU's grace-period
	primitives.  Note that the SRCU rewrite weakened these properties
	slightly.
6.	Reduce the RCU CPU stall warning timeout to 21 seconds so that
	it is once again somewhat shorter than the soft-lockup timeout.

							Thanx, Paul


 b/Documentation/RCU/checklist.txt |   17 +++++------
 b/Documentation/RCU/whatisRCU.txt |    4 --
 b/include/linux/rculist.h         |   17 -----------
 b/include/linux/rcupdate.h        |   20 +++++++++++++
 b/kernel/rcutorture.c             |    4 ++
 b/kernel/rcutree.c                |   57 ++++++++++++++++++++++++++++++++------
 b/kernel/rcutree_plugin.h         |    8 +++++
 b/lib/Kconfig.debug               |    2 -
 8 files changed, 90 insertions(+), 39 deletions(-)



* [PATCH tip/core/rcu 1/6] rcu: Accelerate callbacks for CPU initiating a grace period
  2012-10-30 16:27 [PATCH tip/core/rcu 0/6] Fixes for 3.8 Paul E. McKenney
@ 2012-10-30 16:27 ` Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 2/6] rcu: Fix batch-limit size problem Paul E. McKenney
                     ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Paul E. McKenney @ 2012-10-30 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, patches, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Because grace-period initialization is carried out by a separate
kthread, it might happen on a different CPU from the one that
has the callback needing a grace period -- and that CPU is where
the callback acceleration needs to happen.

Fortunately, rcu_start_gp() holds the root rcu_node structure's
->lock, which prevents a new grace period from starting.  This
allows rcu_start_gp() to safely determine that no grace period is
currently in progress, which in turn allows it to fully accelerate
any callbacks that the requesting CPU has pending.  This commit adds
that acceleration.
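
For reference, a simplified sketch of the segmented callback list
that this promotion manipulates (assuming the usual four-segment
->nxttail[] layout of this era; details elided):

	rdp->nxtlist -> [DONE] -> [WAIT] -> [NEXT_READY] -> [NEXT] -> NULL

Each rdp->nxttail[] element points at the ->next pointer of the last
callback in its segment (DONE: ready to invoke, WAIT: waiting for the
current grace period, NEXT_READY: waiting for the next grace period,
NEXT: not yet assigned to any grace period).  The assignment

	rdp->nxttail[RCU_NEXT_READY_TAIL] = rdp->nxttail[RCU_NEXT_TAIL];

therefore merges the NEXT segment into NEXT_READY, and, when this CPU
has already seen the end of the previous grace period
(rdp->completed == rsp->completed), the WAIT segment can likewise
absorb everything, so that all pending callbacks are handled by the
very next grace period.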

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcutree.c |   26 ++++++++++++++++++++++++--
 1 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 74df86b..93d6871 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1404,15 +1404,37 @@ rcu_start_gp(struct rcu_state *rsp, unsigned long flags)
 	    !cpu_needs_another_gp(rsp, rdp)) {
 		/*
 		 * Either we have not yet spawned the grace-period
-		 * task or this CPU does not need another grace period.
+		 * task, this CPU does not need another grace period,
+		 * or a grace period is already in progress.
 		 * Either way, don't start a new grace period.
 		 */
 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
 		return;
 	}
 
+	/*
+	 * Because there is no grace period in progress right now,
+	 * any callbacks we have up to this point will be satisfied
+	 * by the next grace period.  So promote all callbacks to be
+	 * handled after the end of the next grace period.  If the
+	 * CPU is not yet aware of the end of the previous grace period,
+	 * we need to allow for the callback advancement that will
+	 * occur when it does become aware.  Deadlock prevents us from
+	 * making it aware at this point: We cannot acquire a leaf
+	 * rcu_node ->lock while holding the root rcu_node ->lock.
+	 */
+	rdp->nxttail[RCU_NEXT_READY_TAIL] = rdp->nxttail[RCU_NEXT_TAIL];
+	if (rdp->completed == rsp->completed)
+		rdp->nxttail[RCU_WAIT_TAIL] = rdp->nxttail[RCU_NEXT_TAIL];
+
 	rsp->gp_flags = RCU_GP_FLAG_INIT;
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock(&rnp->lock); /* Interrupts remain disabled. */
+
+	/* Ensure that CPU is aware of completion of last grace period. */
+	rcu_process_gp_end(rsp, rdp);
+	local_irq_restore(flags);
+
+	/* Wake up rcu_gp_kthread() to start the grace period. */
 	wake_up(&rsp->gp_wq);
 }
 
-- 
1.7.8



* [PATCH tip/core/rcu 2/6] rcu: Fix batch-limit size problem
  2012-10-30 16:27 ` [PATCH tip/core/rcu 1/6] rcu: Accelerate callbacks for CPU initiating a grace period Paul E. McKenney
@ 2012-10-30 16:27   ` Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 3/6] rcu: Remove list_for_each_continue_rcu() Paul E. McKenney
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Paul E. McKenney @ 2012-10-30 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, patches, Paul E. McKenney, Paul E. McKenney

From: "Paul E. McKenney" <paul.mckenney@linaro.org>

Commit 29c00b4a1d9e27 (rcu: Add event-tracing for RCU callback
invocation) introduced a regression in rcu_do_batch().

Under stress, RCU is supposed to process all items in the queue
instead of a batch of 10 items (blimit), but an integer overflow
makes the effective limit 1.  So, unless there are frequent idle
periods (during which RCU ignores batch limits), RCU can be forced
into a state where it cannot keep up with the callback-generation
rate, eventually resulting in OOM.

This commit therefore converts a few variables in rcu_do_batch() from
int to long to fix this problem, along with the module parameters
controlling the batch limits.
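
The truncation is easy to see in isolation.  A minimal userspace
sketch (illustrative only, not kernel code; conversion behavior as
on common LP64 systems):

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		long blimit = LONG_MAX;	/* the "ignore batch limits" marker */
		int bl = blimit;	/* truncates 0x7fff... to -1 */
		int count = 0;

		/* A batch loop that stops once ++count >= bl stops
		 * after a single callback, because any positive count
		 * already exceeds -1. */
		printf("bl=%d first-pass stop=%d\n", bl, ++count >= bl);
		return 0;
	}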

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org> # 3.2+
---
 kernel/rcutree.c |   15 ++++++++-------
 1 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 93d6871..e4c2192 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -212,13 +212,13 @@ DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
 #endif
 };
 
-static int blimit = 10;		/* Maximum callbacks per rcu_do_batch. */
-static int qhimark = 10000;	/* If this many pending, ignore blimit. */
-static int qlowmark = 100;	/* Once only this many pending, use blimit. */
+static long blimit = 10;	/* Maximum callbacks per rcu_do_batch. */
+static long qhimark = 10000;	/* If this many pending, ignore blimit. */
+static long qlowmark = 100;	/* Once only this many pending, use blimit. */
 
-module_param(blimit, int, 0444);
-module_param(qhimark, int, 0444);
-module_param(qlowmark, int, 0444);
+module_param(blimit, long, 0444);
+module_param(qhimark, long, 0444);
+module_param(qlowmark, long, 0444);
 
 int rcu_cpu_stall_suppress __read_mostly; /* 1 = suppress stall warnings. */
 int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT;
@@ -1791,7 +1791,8 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
 {
 	unsigned long flags;
 	struct rcu_head *next, *list, **tail;
-	int bl, count, count_lazy, i;
+	long bl, count, count_lazy;
+	int i;
 
 	/* If no callbacks are ready, just return.*/
 	if (!cpu_has_callbacks_ready_to_invoke(rdp)) {
-- 
1.7.8



* [PATCH tip/core/rcu 3/6] rcu: Remove list_for_each_continue_rcu()
  2012-10-30 16:27 ` [PATCH tip/core/rcu 1/6] rcu: Accelerate callbacks for CPU initiating a grace period Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 2/6] rcu: Fix batch-limit size problem Paul E. McKenney
@ 2012-10-30 16:27   ` Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 4/6] rcu: Add new rcutorture module parameters to start/end test messages Paul E. McKenney
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Paul E. McKenney @ 2012-10-30 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, patches, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The list_for_each_continue_rcu() macro is no longer used, so this commit
removes it.  The list_for_each_entry_continue_rcu() macro should be
used instead.
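
For anyone converting a remaining out-of-tree user, a minimal sketch
of the entry-based replacement (the structure, list head, and
process() helper are hypothetical):

	struct foo {
		int data;
		struct list_head list;
	};

	/* Resume an RCU-protected traversal after the entry that "pos"
	 * already points to; must run under rcu_read_lock(). */
	rcu_read_lock();
	list_for_each_entry_continue_rcu(pos, &foo_head, list)
		process(pos->data);
	rcu_read_unlock();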

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 Documentation/RCU/checklist.txt |   17 ++++++++---------
 Documentation/RCU/whatisRCU.txt |    4 +---
 include/linux/rculist.h         |   17 -----------------
 3 files changed, 9 insertions(+), 29 deletions(-)

diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index cdb20d4..31ef8fe 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -271,15 +271,14 @@ over a rather long period of time, but improvements are always welcome!
 	The same cautions apply to call_rcu_bh() and call_rcu_sched().
 
 9.	All RCU list-traversal primitives, which include
-	rcu_dereference(), list_for_each_entry_rcu(),
-	list_for_each_continue_rcu(), and list_for_each_safe_rcu(),
-	must be either within an RCU read-side critical section or
-	must be protected by appropriate update-side locks.  RCU
-	read-side critical sections are delimited by rcu_read_lock()
-	and rcu_read_unlock(), or by similar primitives such as
-	rcu_read_lock_bh() and rcu_read_unlock_bh(), in which case
-	the matching rcu_dereference() primitive must be used in order
-	to keep lockdep happy, in this case, rcu_dereference_bh().
+	rcu_dereference(), list_for_each_entry_rcu(), and
+	list_for_each_safe_rcu(), must be either within an RCU read-side
+	critical section or must be protected by appropriate update-side
+	locks.	RCU read-side critical sections are delimited by
+	rcu_read_lock() and rcu_read_unlock(), or by similar primitives
+	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
+	case the matching rcu_dereference() primitive must be used in
+	order to keep lockdep happy, in this case, rcu_dereference_bh().
 
 	The reason that it is permissible to use RCU list-traversal
 	primitives when the update-side lock is held is that doing so
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index bf0f6de..9d30de0 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -789,9 +789,7 @@ RCU list traversal:
 	list_for_each_entry_rcu
 	hlist_for_each_entry_rcu
 	hlist_nulls_for_each_entry_rcu
-
-	list_for_each_continue_rcu	(to be deprecated in favor of new
-					 list_for_each_entry_continue_rcu)
+	list_for_each_entry_continue_rcu
 
 RCU pointer/list update:
 
diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index e0f0fab..c92dd28 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -286,23 +286,6 @@ static inline void list_splice_init_rcu(struct list_head *list,
 		&pos->member != (head); \
 		pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
 
-
-/**
- * list_for_each_continue_rcu
- * @pos:	the &struct list_head to use as a loop cursor.
- * @head:	the head for your list.
- *
- * Iterate over an rcu-protected list, continuing after current point.
- *
- * This list-traversal primitive may safely run concurrently with
- * the _rcu list-mutation primitives such as list_add_rcu()
- * as long as the traversal is guarded by rcu_read_lock().
- */
-#define list_for_each_continue_rcu(pos, head) \
-	for ((pos) = rcu_dereference_raw(list_next_rcu(pos)); \
-		(pos) != (head); \
-		(pos) = rcu_dereference_raw(list_next_rcu(pos)))
-
 /**
  * list_for_each_entry_continue_rcu - continue iteration over list of given type
  * @pos:	the type * to use as a loop cursor.
-- 
1.7.8



* [PATCH tip/core/rcu 4/6] rcu: Add new rcutorture module parameters to start/end test messages
  2012-10-30 16:27 ` [PATCH tip/core/rcu 1/6] rcu: Accelerate callbacks for CPU initiating a grace period Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 2/6] rcu: Fix batch-limit size problem Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 3/6] rcu: Remove list_for_each_continue_rcu() Paul E. McKenney
@ 2012-10-30 16:27   ` Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 5/6] rcu: Clarify memory-ordering properties of grace-period primitives Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 6/6] rcu: Reduce default RCU CPU stall warning timeout Paul E. McKenney
  4 siblings, 0 replies; 7+ messages in thread
From: Paul E. McKenney @ 2012-10-30 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, patches, Paul E. McKenney, Paul E. McKenney

From: "Paul E. McKenney" <paul.mckenney@linaro.org>

Several new rcutorture module parameters have been added, but are not
printed to the console at the beginning and end of tests, which makes
it difficult to reproduce a prior test.  This commit therefore adds
these new module parameters to the list printed at the beginning and
the end of the tests.
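
With this change in place, reproducing a prior run should be a matter
of feeding the printed values back in on the module command line,
along the lines of (values purely illustrative):

	modprobe rcutorture torture_type=rcu stall_cpu=10 stall_cpu_holdoff=30 \
		n_barrier_cbs=4 onoff_interval=3 onoff_holdoff=30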

Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcutorture.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/kernel/rcutorture.c b/kernel/rcutorture.c
index aaa7b9f..7fa184f 100644
--- a/kernel/rcutorture.c
+++ b/kernel/rcutorture.c
@@ -1396,12 +1396,16 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, char *tag)
 		 "fqs_duration=%d fqs_holdoff=%d fqs_stutter=%d "
 		 "test_boost=%d/%d test_boost_interval=%d "
 		 "test_boost_duration=%d shutdown_secs=%d "
+		 "stall_cpu=%d stall_cpu_holdoff=%d "
+		 "n_barrier_cbs=%d "
 		 "onoff_interval=%d onoff_holdoff=%d\n",
 		 torture_type, tag, nrealreaders, nfakewriters,
 		 stat_interval, verbose, test_no_idle_hz, shuffle_interval,
 		 stutter, irqreader, fqs_duration, fqs_holdoff, fqs_stutter,
 		 test_boost, cur_ops->can_boost,
 		 test_boost_interval, test_boost_duration, shutdown_secs,
+		 stall_cpu, stall_cpu_holdoff,
+		 n_barrier_cbs,
 		 onoff_interval, onoff_holdoff);
 }
 
-- 
1.7.8



* [PATCH tip/core/rcu 5/6] rcu: Clarify memory-ordering properties of grace-period primitives
  2012-10-30 16:27 ` [PATCH tip/core/rcu 1/6] rcu: Accelerate callbacks for CPU initiating a grace period Paul E. McKenney
                     ` (2 preceding siblings ...)
  2012-10-30 16:27   ` [PATCH tip/core/rcu 4/6] rcu: Add new rcutorture module parameters to start/end test messages Paul E. McKenney
@ 2012-10-30 16:27   ` Paul E. McKenney
  2012-10-30 16:27   ` [PATCH tip/core/rcu 6/6] rcu: Reduce default RCU CPU stall warning timeout Paul E. McKenney
  4 siblings, 0 replies; 7+ messages in thread
From: Paul E. McKenney @ 2012-10-30 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, patches, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit explicitly states the memory-ordering properties of the
RCU grace-period primitives.  Although these properties were in some
sense implied by the fundamental property of RCU ("a grace period must
wait for all pre-existing RCU read-side critical sections to complete"),
stating them explicitly will be a great labor-saving device.
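
As a sketch-level illustration of what the stated guarantee buys
(the names and the WARN_ON() are illustrative, not part of this
patch):

	struct foo {
		int a;
		struct rcu_head rh;
	};

	static void foo_reclaim(struct rcu_head *rhp)
	{
		struct foo *fp = container_of(rhp, struct foo, rh);

		/* Even if this callback runs on a different CPU than
		 * foo_update(), the full barriers executed by both CPUs
		 * between the call_rcu() and this invocation guarantee
		 * that the pre-call_rcu() store below is visible here. */
		WARN_ON(fp->a != 1);
		kfree(fp);
	}

	static void foo_update(struct foo *fp)
	{
		fp->a = 1;	/* ordered before foo_reclaim() runs */
		call_rcu(&fp->rh, foo_reclaim);
	}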

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcupdate.h |   20 ++++++++++++++++++++
 kernel/rcutree.c         |   16 ++++++++++++++++
 kernel/rcutree_plugin.h  |    8 ++++++++
 3 files changed, 44 insertions(+), 0 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 7c968e4..91d530a 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -90,6 +90,20 @@ extern void do_trace_rcu_torture_read(char *rcutorturename,
  * that started after call_rcu() was invoked.  RCU read-side critical
  * sections are delimited by rcu_read_lock() and rcu_read_unlock(),
  * and may be nested.
+ *
+ * Note that all CPUs must agree that the grace period extended beyond
+ * all pre-existing RCU read-side critical sections.  This means that
+ * on systems with more than one CPU, when "func()" is invoked, each
+ * CPU is guaranteed to have executed a full memory barrier since the
+ * end of its last RCU read-side critical section whose beginning
+ * preceded the call to call_rcu().  Note that this guarantee includes
+ * CPUs that are offline, idle, or executing in user mode, as well as
+ * CPUs that are executing in the kernel.  Furthermore, if CPU A
+ * invoked call_rcu() and CPU B invoked the resulting RCU callback
+ * function "func()", then both CPU A and CPU B are guaranteed to execute
+ * a full memory barrier during the time interval between the call to
+ * call_rcu() and the invocation of "func()" -- even if CPU A and CPU B
+ * are the same CPU (but again only if the system has more than one CPU).
  */
 extern void call_rcu(struct rcu_head *head,
 			      void (*func)(struct rcu_head *head));
@@ -118,6 +132,9 @@ extern void call_rcu(struct rcu_head *head,
  *  OR
  *  - rcu_read_lock_bh() and rcu_read_unlock_bh(), if in process context.
  *  These may be nested.
+ *
+ * See the description of call_rcu() for more detailed information on
+ * memory ordering guarantees.
  */
 extern void call_rcu_bh(struct rcu_head *head,
 			void (*func)(struct rcu_head *head));
@@ -137,6 +154,9 @@ extern void call_rcu_bh(struct rcu_head *head,
  *  OR
  *  anything that disables preemption.
  *  These may be nested.
+ *
+ * See the description of call_rcu() for more detailed information on
+ * memory ordering guarantees.
  */
 extern void call_rcu_sched(struct rcu_head *head,
 			   void (*func)(struct rcu_head *rcu));
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index e4c2192..ca32215 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -2233,6 +2233,19 @@ static inline int rcu_blocking_is_gp(void)
  * softirq handlers will have completed, since in some kernels, these
  * handlers can run in process context, and can block.
  *
+ * Note that this guarantee implies a further memory-ordering guarantee.
+ * On systems with more than one CPU, when synchronize_sched() returns,
+ * each CPU is guaranteed to have executed a full memory barrier since
+ * the end of its last RCU-sched read-side critical section whose beginning
+ * preceded the call to synchronize_sched().  Note that this guarantee
+ * includes CPUs that are offline, idle, or executing in user mode, as
+ * well as CPUs that are executing in the kernel.  Furthermore, if CPU A
+ * invoked synchronize_sched(), which returned to its caller on CPU B,
+ * then both CPU A and CPU B are guaranteed to have executed a full memory
+ * barrier during the execution of synchronize_sched() -- even if CPU A
+ * and CPU B are the same CPU (but again only if the system has more than
+ * one CPU).
+ *
  * This primitive provides the guarantees made by the (now removed)
  * synchronize_kernel() API.  In contrast, synchronize_rcu() only
  * guarantees that rcu_read_lock() sections will have completed.
@@ -2259,6 +2272,9 @@ EXPORT_SYMBOL_GPL(synchronize_sched);
  * read-side critical sections have completed.  RCU read-side critical
  * sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(),
  * and may be nested.
+ *
+ * See the description of synchronize_sched() for more detailed information
+ * on memory ordering guarantees.
  */
 void synchronize_rcu_bh(void)
 {
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index f921154..0f370a8 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -670,6 +670,9 @@ EXPORT_SYMBOL_GPL(kfree_call_rcu);
  * concurrently with new RCU read-side critical sections that began while
  * synchronize_rcu() was waiting.  RCU read-side critical sections are
  * delimited by rcu_read_lock() and rcu_read_unlock(), and may be nested.
+ *
+ * See the description of synchronize_sched() for more detailed information
+ * on memory ordering guarantees.
  */
 void synchronize_rcu(void)
 {
@@ -875,6 +878,11 @@ EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
 
 /**
  * rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.
+ *
+ * Note that this primitive will not always wait for an RCU grace period
+ * to complete.  For example, if there are no RCU callbacks queued anywhere
+ * in the system, then rcu_barrier() is within its rights to return
+ * immediately, without waiting for anything, much less an RCU grace period.
  */
 void rcu_barrier(void)
 {
-- 
1.7.8



* [PATCH tip/core/rcu 6/6] rcu: Reduce default RCU CPU stall warning timeout
  2012-10-30 16:27 ` [PATCH tip/core/rcu 1/6] rcu: Accelerate callbacks for CPU initiating a grace period Paul E. McKenney
                     ` (3 preceding siblings ...)
  2012-10-30 16:27   ` [PATCH tip/core/rcu 5/6] rcu: Clarify memory-ordering properties of grace-period primitives Paul E. McKenney
@ 2012-10-30 16:27   ` Paul E. McKenney
  4 siblings, 0 replies; 7+ messages in thread
From: Paul E. McKenney @ 2012-10-30 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, patches, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The RCU CPU stall warning timeout has defaulted to 60 seconds for
some years, with almost no false positives.  This commit therefore
reduces the default to 21 seconds, slightly shorter than the new
soft-lockup timeout.
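
For anyone needing the old value, it should still be possible to
restore it at runtime via the usual module-parameter plumbing (path
as of this era, worth double-checking), for example:

	echo 60 > /sys/module/rcutree/parameters/rcu_cpu_stall_timeout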

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 lib/Kconfig.debug |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 28e9d6c9..41faf0b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -972,7 +972,7 @@ config RCU_CPU_STALL_TIMEOUT
 	int "RCU CPU stall timeout in seconds"
 	depends on TREE_RCU || TREE_PREEMPT_RCU
 	range 3 300
-	default 60
+	default 21
 	help
 	  If a given RCU grace period extends more than the specified
 	  number of seconds, a CPU stall warning is printed.  If the
-- 
1.7.8

