* [PATCH RFC nohz_full 0/7] v4 Provide infrastructure for full-system idle
@ 2013-07-26 23:18 Paul E. McKenney
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
  0 siblings, 1 reply; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw

Whenever there is at least one non-idle CPU, it is necessary to
periodically update timekeeping information.  Before NO_HZ_FULL, this
updating was carried out by the scheduling-clock tick, which ran on
every non-idle CPU.  With the advent of NO_HZ_FULL, it is possible
to have non-idle CPUs that are not receiving scheduling-clock ticks.
This possibility is handled by assigning a timekeeping CPU that continues
taking scheduling-clock ticks.

Unfortunately, the timekeeping CPU continues taking scheduling-clock
interrupts even when all other CPUs are completely idle, which is
not so good for energy efficiency and battery lifetime.  Clearly, it
would be good to turn off the timekeeping CPU's scheduling-clock tick
when all CPUs are completely idle.  This is conceptually simple, but
we also need good performance and scalability on large systems, which
rules out implementations based on frequently updated global counts of
non-idle CPUs as well as implementations that frequently scan all CPUs.
Nevertheless, we need a single global indicator in order to keep the
overhead of checking acceptably low.

The chosen approach is to enforce hysteresis on the non-idle to
full-system-idle transition, with the amount of hysteresis increasing
linearly with the number of CPUs, thus keeping contention acceptably low.
This approach piggybacks on RCU's existing force-quiescent-state scanning
of idle CPUs, which has the advantage of avoiding the scan entirely on
busy systems that have high levels of multiprogramming.  This scan
takes per-CPU idleness information and feeds it into a state machine
that applies the level of hysteresis required to arrive at a single
full-system-idle indicator.
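
To make the intended progression concrete, here is a simplified,
standalone sketch of the hysteresis state machine.  This is a sketch
only: the function and constant names, the assumed system parameters,
and the single-threaded structure are illustrative, and the real code
in patch 6 instead uses cmpxchg() on a shared state variable to
tolerate races with CPUs going non-idle.

#include <stdio.h>

enum sysidle_state {
	SYSIDLE_NOT,	/* Some non-timekeeping CPU is busy. */
	SYSIDLE_SHORT,	/* All CPUs idle, but not yet for long. */
	SYSIDLE_LONG,	/* Idle long enough; one more check pass. */
	SYSIDLE_FULL,	/* Full-system idle may be declared. */
};

#define HZ 1000				/* Assumed jiffies tick rate. */
static const unsigned long nr_cpus = 64;	/* Assumed system size. */
static const unsigned long leaf_fanout = 16;	/* Assumed rcu_node leaf fanout. */

/* Hysteresis delay in jiffies, growing linearly with the CPU count
 * (mirrors rcu_sysidle_delay() in patch 6). */
static unsigned long sysidle_delay(void)
{
	return (nr_cpus * HZ + leaf_fanout * 1000 - 1) / (leaf_fanout * 1000);
}

/* One scan pass in which every non-timekeeping CPU was seen idle;
 * idle_since is the jiffies time at which the last CPU went idle. */
static enum sysidle_state sysidle_advance(enum sysidle_state s,
					  unsigned long now,
					  unsigned long idle_since)
{
	switch (s) {
	case SYSIDLE_NOT:
		return SYSIDLE_SHORT;	/* First all-idle observation. */
	case SYSIDLE_SHORT:
		return now >= idle_since + sysidle_delay() ?
			SYSIDLE_LONG : SYSIDLE_SHORT;
	case SYSIDLE_LONG:
		return now >= idle_since + sysidle_delay() ?
			SYSIDLE_FULL : SYSIDLE_LONG;
	default:
		return s;		/* Stay put once fully idle. */
	}
}

int main(void)
{
	enum sysidle_state s = SYSIDLE_NOT;
	unsigned long now;

	/* Three all-idle scan passes, ten jiffies apart. */
	for (now = 0; now <= 20; now += 10)
		s = sysidle_advance(s, now, 0);
	printf("state after three passes: %d\n", s);	/* 3 == SYSIDLE_FULL */
	return 0;
}

Any scan that finds a non-idle CPU instead resets the state to
SYSIDLE_NOT; it is the delay on the way up that keeps updates to the
shared state variable rare enough to avoid contention on large systems.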

The individual patches are as follows:

1.	Add a CONFIG_NO_HZ_FULL_SYSIDLE Kconfig parameter to enable
	this feature.  Kernels built with CONFIG_NO_HZ_FULL_SYSIDLE=n
	act exactly as they do today.

2.	Add new fields to the rcu_dynticks structure that track CPU-idle
	information.  These fields consider CPUs running usermode to be
	non-idle, in contrast with the existing fields in that structure.

3.	Track per-CPU idle states.

4.	Add full-system idle states and state variables.

5.	Expand force_qs_rnp(), dyntick_save_progress_counter(), and
	rcu_implicit_dynticks_qs() APIs to enable passing full-system
	idle state information.

6.	Add full-system-idle state machine.

7.	Force RCU's grace-period kthreads onto the timekeeping CPU.

Changes since v3 (https://lkml.org/lkml/2013/7/8/441):

o	Fixed an embarrassing bug that allowed multiple kthreads to be
	executing the state machine concurrently.

Changes since v2 (https://lkml.org/lkml/2013/6/28/610):

o	Completed removing NMI support (thanks to Frederic for spotting
	the remaining cruft).

o	Fixed a state-machine bug, again spotted by Frederic.  See
	http://lists-archives.com/linux-kernel/27865835-nohz_full-add-full-system-idle-state-machine.html
	for the full details of the bug.

o	Updated commit log and comment as suggested by Josh Triplett.

Changes since v1 (https://lkml.org/lkml/2013/6/25/664):

o	Removed NMI support because NMI handlers cannot safely read
	the time anyway (thanks to Thomas Gleixner and Peter Zijlstra).

						Thanx, Paul

------------------------------------------------------------------------

 b/include/linux/rcupdate.h |   18 +
 b/kernel/rcutree.c         |   49 ++++-
 b/kernel/rcutree.h         |   17 +
 b/kernel/rcutree_plugin.h  |  430 ++++++++++++++++++++++++++++++++++++++++++++-
 b/kernel/time/Kconfig      |   23 ++
 5 files changed, 522 insertions(+), 15 deletions(-)



* [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state
  2013-07-26 23:18 [PATCH RFC nohz_full 0/7] v4 Provide infrastructure for full-system idle Paul E. McKenney
@ 2013-07-26 23:19 ` Paul E. McKenney
  2013-07-26 23:19   ` [PATCH RFC nohz_full 2/7] nohz_full: Add rcu_dyntick data " Paul E. McKenney
                     ` (7 more replies)
  0 siblings, 8 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

At least one CPU must keep the scheduling-clock tick running for
timekeeping purposes whenever there is a non-idle CPU.  However, with
the new nohz_full adaptive-idle machinery, it is difficult to distinguish
all CPUs really being idle from all non-idle CPUs being in
adaptive-ticks mode.  This commit therefore adds a Kconfig parameter
as a first step towards enabling scalable detection of the full-system
idle state.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/time/Kconfig | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
index 70f27e8..a613c2a 100644
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -134,6 +134,29 @@ config NO_HZ_FULL_ALL
 	 Note the boot CPU will still be kept outside the range to
 	 handle the timekeeping duty.
 
+config NO_HZ_FULL_SYSIDLE
+	bool "Detect full-system idle state for full dynticks system"
+	depends on NO_HZ_FULL
+	default n
+	help
+	 At least one CPU must keep the scheduling-clock tick running
+	 for timekeeping purposes whenever there is a non-idle CPU,
+	 where "non-idle" includes CPUs with a single runnable task
+	 in adaptive-idle mode.  Because the underlying adaptive-tick
+	 support cannot distinguish between all CPUs being idle and
+	 all CPUs each running a single task in adaptive-idle mode,
+	 the underlying support simply ensures that there is always
+	 a CPU handling the scheduling-clock tick, whether or not all
+	 CPUs are idle.  This Kconfig option enables scalable detection
+	 of the all-CPUs-idle state, thus allowing the scheduling-clock
+	 tick to be disabled when all CPUs are idle.  Note that scalable
+	 detection of the all-CPUs-idle state means that larger systems
+	 will be slower to declare the all-CPUs-idle state.
+
+	 Say Y if you would like to help debug all-CPUs-idle detection.
+
+	 Say N if you are unsure.
+
 config NO_HZ
 	bool "Old Idle dynticks config"
 	depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS
-- 
1.8.1.5



* [PATCH RFC nohz_full 2/7] nohz_full: Add rcu_dyntick data for scalable detection of all-idle state
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
@ 2013-07-26 23:19   ` Paul E. McKenney
  2013-08-05  1:26     ` Frederic Weisbecker
  2013-07-26 23:19   ` [PATCH RFC nohz_full 3/7] nohz_full: Add per-CPU idle-state tracking Paul E. McKenney
                     ` (6 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds fields to the rcu_dyntick structure that are used to
detect idle CPUs.  These new fields differ from the existing ones in
that the existing ones consider a CPU executing in user mode to be idle,
whereas the new ones consider CPUs executing in user mode to be busy.
The handling of these new fields is otherwise quite similar to that for
the existing fields.  This commit also adds the initialization required
for these fields.

So why is usermode execution treated differently, with RCU considering
it a quiescent state equivalent to idle, while the new full-system-idle
detection considers usermode execution to be non-idle?

It turns out that although one of RCU's quiescent states is usermode
execution, it is not a full-system idle state.  This is because the
purpose of the full-system idle state is not RCU, but rather determining
when accurate timekeeping can safely be disabled.  Whenever accurate
timekeeping is required in a CONFIG_NO_HZ_FULL kernel, at least one
CPU must keep the scheduling-clock tick going.  If even one CPU is
executing in user mode, accurate timekeeping is required, particularly for
architectures where gettimeofday() and friends do not enter the kernel.
Only when all CPUs are really and truly idle can accurate timekeeping be
disabled, allowing all CPUs to turn off the scheduling-clock interrupt,
thus greatly improving energy efficiency.

This naturally raises the question "Why is this code in RCU rather than in
timekeeping?", and the answer is that RCU has the data and infrastructure
to efficiently make this determination.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rcutree.c        |  5 +++++
 kernel/rcutree.h        |  9 +++++++++
 kernel/rcutree_plugin.h | 19 +++++++++++++++++++
 3 files changed, 33 insertions(+)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 928cb45..9412726 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -209,6 +209,10 @@ EXPORT_SYMBOL_GPL(rcu_note_context_switch);
 DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
 	.dynticks_nesting = DYNTICK_TASK_EXIT_IDLE,
 	.dynticks = ATOMIC_INIT(1),
+#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
+	.dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE,
+	.dynticks_idle = ATOMIC_INIT(1),
+#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
 };
 
 static long blimit = 10;	/* Maximum callbacks per rcu_do_batch. */
@@ -2902,6 +2906,7 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp, int preemptible)
 	rdp->blimit = blimit;
 	init_callback_list(rdp);  /* Re-enable callbacks on this CPU. */
 	rdp->dynticks->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
+	rcu_sysidle_init_percpu_data(rdp->dynticks);
 	atomic_set(&rdp->dynticks->dynticks,
 		   (atomic_read(&rdp->dynticks->dynticks) & ~0x1) + 1);
 	raw_spin_unlock(&rnp->lock);		/* irqs remain disabled. */
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index b383258..bd99d59 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -88,6 +88,14 @@ struct rcu_dynticks {
 				    /* Process level is worth LLONG_MAX/2. */
 	int dynticks_nmi_nesting;   /* Track NMI nesting level. */
 	atomic_t dynticks;	    /* Even value for idle, else odd. */
+#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
+	long long dynticks_idle_nesting;
+				    /* irq/process nesting level from idle. */
+	atomic_t dynticks_idle;	    /* Even value for idle, else odd. */
+				    /*  "Idle" excludes userspace execution. */
+	unsigned long dynticks_idle_jiffies;
+				    /* End of last non-NMI non-idle period. */
+#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
 #ifdef CONFIG_RCU_FAST_NO_HZ
 	bool all_lazy;		    /* Are all CPU's CBs lazy? */
 	unsigned long nonlazy_posted;
@@ -545,6 +553,7 @@ static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
 static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp);
 static void rcu_kick_nohz_cpu(int cpu);
 static bool init_nocb_callback_list(struct rcu_data *rdp);
+static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
 
 #endif /* #ifndef RCU_TREE_NONCORE */
 
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 769e12e..6937eb6 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -2375,3 +2375,22 @@ static void rcu_kick_nohz_cpu(int cpu)
 		smp_send_reschedule(cpu);
 #endif /* #ifdef CONFIG_NO_HZ_FULL */
 }
+
+
+#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
+
+/*
+ * Initialize dynticks sysidle state for CPUs coming online.
+ */
+static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
+{
+	rdtp->dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE;
+}
+
+#else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
+
+static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
+{
+}
+
+#endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
-- 
1.8.1.5



* [PATCH RFC nohz_full 3/7] nohz_full: Add per-CPU idle-state tracking
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
  2013-07-26 23:19   ` [PATCH RFC nohz_full 2/7] nohz_full: Add rcu_dyntick data " Paul E. McKenney
@ 2013-07-26 23:19   ` Paul E. McKenney
  2013-08-09 15:37     ` Frederic Weisbecker
  2013-07-26 23:19   ` [PATCH RFC nohz_full 4/7] nohz_full: Add full-system idle states and variables Paul E. McKenney
                     ` (5 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds the code that updates the rcu_dyntick structure's
new fields to track the per-CPU idle state based on interrupts and
transitions into and out of the idle loop (NMIs are ignored because NMI
handlers cannot cleanly read out the time anyway).  This code is similar
to the code that maintains RCU's idea of per-CPU idleness, but differs
in that RCU treats CPUs running in user mode as idle, whereas this new
code does not.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rcutree.c        |  4 +++
 kernel/rcutree.h        |  2 ++
 kernel/rcutree_plugin.h | 79 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 9412726..c1f7cf8 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -416,6 +416,7 @@ void rcu_idle_enter(void)
 
 	local_irq_save(flags);
 	rcu_eqs_enter(false);
+	rcu_sysidle_enter(&__get_cpu_var(rcu_dynticks), 0);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(rcu_idle_enter);
@@ -466,6 +467,7 @@ void rcu_irq_exit(void)
 		trace_rcu_dyntick("--=", oldval, rdtp->dynticks_nesting);
 	else
 		rcu_eqs_enter_common(rdtp, oldval, true);
+	rcu_sysidle_enter(rdtp, 1);
 	local_irq_restore(flags);
 }
 
@@ -534,6 +536,7 @@ void rcu_idle_exit(void)
 
 	local_irq_save(flags);
 	rcu_eqs_exit(false);
+	rcu_sysidle_exit(&__get_cpu_var(rcu_dynticks), 0);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(rcu_idle_exit);
@@ -585,6 +588,7 @@ void rcu_irq_enter(void)
 		trace_rcu_dyntick("++=", oldval, rdtp->dynticks_nesting);
 	else
 		rcu_eqs_exit_common(rdtp, oldval, true);
+	rcu_sysidle_exit(rdtp, 1);
 	local_irq_restore(flags);
 }
 
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index bd99d59..1895043 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -553,6 +553,8 @@ static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
 static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp);
 static void rcu_kick_nohz_cpu(int cpu);
 static bool init_nocb_callback_list(struct rcu_data *rdp);
+static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
+static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
 static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
 
 #endif /* #ifndef RCU_TREE_NONCORE */
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 6937eb6..814ff47 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -2380,6 +2380,77 @@ static void rcu_kick_nohz_cpu(int cpu)
 #ifdef CONFIG_NO_HZ_FULL_SYSIDLE
 
 /*
+ * Invoked to note exit from irq or task transition to idle.  Note that
+ * usermode execution does -not- count as idle here!  After all, we want
+ * to detect full-system idle states, not RCU quiescent states and grace
+ * periods.  The caller must have disabled interrupts.
+ */
+static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
+{
+	unsigned long j;
+
+	/* Adjust nesting, check for fully idle. */
+	if (irq) {
+		rdtp->dynticks_idle_nesting--;
+		WARN_ON_ONCE(rdtp->dynticks_idle_nesting < 0);
+		if (rdtp->dynticks_idle_nesting != 0)
+			return;  /* Still not fully idle. */
+	} else {
+		if ((rdtp->dynticks_idle_nesting & DYNTICK_TASK_NEST_MASK) ==
+		    DYNTICK_TASK_NEST_VALUE) {
+			rdtp->dynticks_idle_nesting = 0;
+		} else {
+			rdtp->dynticks_idle_nesting -= DYNTICK_TASK_NEST_VALUE;
+			WARN_ON_ONCE(rdtp->dynticks_idle_nesting < 0);
+			return;  /* Still not fully idle. */
+		}
+	}
+
+	/* Record start of fully idle period. */
+	j = jiffies;
+	ACCESS_ONCE(rdtp->dynticks_idle_jiffies) = j;
+	smp_mb__before_atomic_inc();
+	atomic_inc(&rdtp->dynticks_idle);
+	smp_mb__after_atomic_inc();
+	WARN_ON_ONCE(atomic_read(&rdtp->dynticks_idle) & 0x1);
+}
+
+/*
+ * Invoked to note entry to irq or task transition from idle.  Note that
+ * usermode execution does -not- count as idle here!  The caller must
+ * have disabled interrupts.
+ */
+static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
+{
+	/* Adjust nesting, check for already non-idle. */
+	if (irq) {
+		rdtp->dynticks_idle_nesting++;
+		WARN_ON_ONCE(rdtp->dynticks_idle_nesting <= 0);
+		if (rdtp->dynticks_idle_nesting != 1)
+			return; /* Already non-idle. */
+	} else {
+		/*
+		 * Allow for irq misnesting.  Yes, it really is possible
+		 * to enter an irq handler then never leave it, and maybe
+		 * also vice versa.  Handle both possibilities.
+		 */
+		if (rdtp->dynticks_idle_nesting & DYNTICK_TASK_NEST_MASK) {
+			rdtp->dynticks_idle_nesting += DYNTICK_TASK_NEST_VALUE;
+			WARN_ON_ONCE(rdtp->dynticks_idle_nesting <= 0);
+			return; /* Already non-idle. */
+		} else {
+			rdtp->dynticks_idle_nesting = DYNTICK_TASK_EXIT_IDLE;
+		}
+	}
+
+	/* Record end of idle period. */
+	smp_mb__before_atomic_inc();
+	atomic_inc(&rdtp->dynticks_idle);
+	smp_mb__after_atomic_inc();
+	WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks_idle) & 0x1));
+}
+
+/*
  * Initialize dynticks sysidle state for CPUs coming online.
  */
 static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
@@ -2389,6 +2460,14 @@ static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
 
 #else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
 
+static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
+{
+}
+
+static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
+{
+}
+
 static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
 {
 }
-- 
1.8.1.5



* [PATCH RFC nohz_full 4/7] nohz_full: Add full-system idle states and variables
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
  2013-07-26 23:19   ` [PATCH RFC nohz_full 2/7] nohz_full: Add rcu_dyntick data " Paul E. McKenney
  2013-07-26 23:19   ` [PATCH RFC nohz_full 3/7] nohz_full: Add per-CPU idle-state tracking Paul E. McKenney
@ 2013-07-26 23:19   ` Paul E. McKenney
  2013-08-09 15:44     ` Frederic Weisbecker
  2013-07-26 23:19   ` [PATCH RFC nohz_full 5/7] nohz_full: Add full-system-idle arguments to API Paul E. McKenney
                     ` (4 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds control variables and states for full-system idle.
The system will progress through the states in numerical order when
the system is fully idle (other than the timekeeping CPU), and reset
down to the initial state if any non-timekeeping CPU goes non-idle.
The current state is kept in full_sysidle_state.

An RCU_SYSIDLE_SMALL macro is defined, and systems with this number
of CPUs or fewer move through the states more aggressively.  The idea
is that the resulting memory contention is less of a problem on small
systems.  Architectures can adjust this value (which defaults to 8)
using CONFIG_ARCH_RCU_SYSIDLE_SMALL.

One flavor of RCU will be in charge of driving the state machine,
defined by rcu_sysidle_state.  This should be the busiest flavor of RCU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rcutree_plugin.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 814ff47..3edae39 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -2380,6 +2380,34 @@ static void rcu_kick_nohz_cpu(int cpu)
 #ifdef CONFIG_NO_HZ_FULL_SYSIDLE
 
 /*
+ * Handle small systems specially, accelerating their transition into
+ * full idle state.  Allow arches to override this code's idea of
+ * what constitutes a "small" system.
+ */
+#ifdef CONFIG_ARCH_RCU_SYSIDLE_SMALL
+#define RCU_SYSIDLE_SMALL CONFIG_ARCH_RCU_SYSIDLE_SMALL
+#else /* #ifdef CONFIG_ARCH_RCU_SYSIDLE_SMALL */
+#define RCU_SYSIDLE_SMALL 8
+#endif
+
+/*
+ * Define RCU flavor that holds sysidle state.  This needs to be the
+ * most active flavor of RCU.
+ */
+#ifdef CONFIG_PREEMPT_RCU
+static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_preempt_state;
+#else /* #ifdef CONFIG_PREEMPT_RCU */
+static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_sched_state;
+#endif /* #else #ifdef CONFIG_PREEMPT_RCU */
+
+static int __maybe_unused full_sysidle_state; /* Current system-idle state. */
+#define RCU_SYSIDLE_NOT		0	/* Some CPU is not idle. */
+#define RCU_SYSIDLE_SHORT	1	/* All CPUs idle for brief period. */
+#define RCU_SYSIDLE_LONG	2	/* All CPUs idle for long enough. */
+#define RCU_SYSIDLE_FULL	3	/* All CPUs idle, ready for sysidle. */
+#define RCU_SYSIDLE_FULL_NOTED	4	/* Actually entered sysidle state. */
+
+/*
  * Invoked to note exit from irq or task transition to idle.  Note that
  * usermode execution does -not- count as idle here!  After all, we want
  * to detect full-system idle states, not RCU quiescent states and grace
-- 
1.8.1.5



* [PATCH RFC nohz_full 5/7] nohz_full: Add full-system-idle arguments to API
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
                     ` (2 preceding siblings ...)
  2013-07-26 23:19   ` [PATCH RFC nohz_full 4/7] nohz_full: Add full-system idle states and variables Paul E. McKenney
@ 2013-07-26 23:19   ` Paul E. McKenney
  2013-07-26 23:19   ` [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine Paul E. McKenney
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds isidle and jiffies arguments to force_qs_rnp(),
dyntick_save_progress_counter(), and rcu_implicit_dynticks_qs() to enable
RCU's force-quiescent-state process to check for full-system idle.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rcutree.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index c1f7cf8..725524e 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -231,7 +231,9 @@ module_param(jiffies_till_next_fqs, ulong, 0644);
 
 static void rcu_start_gp_advanced(struct rcu_state *rsp, struct rcu_node *rnp,
 				  struct rcu_data *rdp);
-static void force_qs_rnp(struct rcu_state *rsp, int (*f)(struct rcu_data *));
+static void force_qs_rnp(struct rcu_state *rsp,
+			 int (*f)(struct rcu_data *, bool *, unsigned long *),
+			 bool *isidle, unsigned long *maxj);
 static void force_quiescent_state(struct rcu_state *rsp);
 static int rcu_pending(int cpu);
 
@@ -712,7 +714,8 @@ static int rcu_is_cpu_rrupt_from_idle(void)
  * credit them with an implicit quiescent state.  Return 1 if this CPU
  * is in dynticks idle mode, which is an extended quiescent state.
  */
-static int dyntick_save_progress_counter(struct rcu_data *rdp)
+static int dyntick_save_progress_counter(struct rcu_data *rdp,
+					 bool *isidle, unsigned long *maxj)
 {
 	rdp->dynticks_snap = atomic_add_return(0, &rdp->dynticks->dynticks);
 	return (rdp->dynticks_snap & 0x1) == 0;
@@ -724,7 +727,8 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp)
  * idle state since the last call to dyntick_save_progress_counter()
  * for this same CPU, or by virtue of having been offline.
  */
-static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
+static int rcu_implicit_dynticks_qs(struct rcu_data *rdp,
+				    bool *isidle, unsigned long *maxj)
 {
 	unsigned int curr;
 	unsigned int snap;
@@ -1345,16 +1349,19 @@ static int rcu_gp_init(struct rcu_state *rsp)
 int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
 {
 	int fqs_state = fqs_state_in;
+	bool isidle = 0;
+	unsigned long maxj;
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
 	rsp->n_force_qs++;
 	if (fqs_state == RCU_SAVE_DYNTICK) {
 		/* Collect dyntick-idle snapshots. */
-		force_qs_rnp(rsp, dyntick_save_progress_counter);
+		force_qs_rnp(rsp, dyntick_save_progress_counter,
+			     &isidle, &maxj);
 		fqs_state = RCU_FORCE_QS;
 	} else {
 		/* Handle dyntick-idle and offline CPUs. */
-		force_qs_rnp(rsp, rcu_implicit_dynticks_qs);
+		force_qs_rnp(rsp, rcu_implicit_dynticks_qs, &isidle, &maxj);
 	}
 	/* Clear flag to prevent immediate re-entry. */
 	if (ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) {
@@ -2055,7 +2062,9 @@ void rcu_check_callbacks(int cpu, int user)
  *
  * The caller must have suppressed start of new grace periods.
  */
-static void force_qs_rnp(struct rcu_state *rsp, int (*f)(struct rcu_data *))
+static void force_qs_rnp(struct rcu_state *rsp,
+			 int (*f)(struct rcu_data *, bool *, unsigned long *),
+			 bool *isidle, unsigned long *maxj)
 {
 	unsigned long bit;
 	int cpu;
@@ -2079,7 +2088,7 @@ static void force_qs_rnp(struct rcu_state *rsp, int (*f)(struct rcu_data *))
 		bit = 1;
 		for (; cpu <= rnp->grphi; cpu++, bit <<= 1) {
 			if ((rnp->qsmask & bit) != 0 &&
-			    f(per_cpu_ptr(rsp->rda, cpu)))
+			    f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
 				mask |= bit;
 		}
 		if (mask != 0) {
-- 
1.8.1.5



* [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
                     ` (3 preceding siblings ...)
  2013-07-26 23:19   ` [PATCH RFC nohz_full 5/7] nohz_full: Add full-system-idle arguments to API Paul E. McKenney
@ 2013-07-26 23:19   ` Paul E. McKenney
  2013-07-29  8:19     ` Lai Jiangshan
  2013-08-09 16:20     ` Frederic Weisbecker
  2013-07-26 23:19   ` [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU Paul E. McKenney
                     ` (2 subsequent siblings)
  7 siblings, 2 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds the state machine that takes the per-CPU idle data
as input and produces a full-system-idle indication as output.  This
state machine is driven out of RCU's quiescent-state-forcing
mechanism, which invokes rcu_sysidle_check_cpu() to collect per-CPU
idle state and then rcu_sysidle_report() to drive the state machine.

The full-system-idle state is sampled using rcu_sys_is_idle(), which
also drives the state machine if RCU is idle (and does so by forcing
RCU to become non-idle).  This function returns true if all but the
timekeeping CPU (tick_do_timer_cpu) are idle and have been idle long
enough to avoid memory contention on the full_sysidle_state state
variable.  rcu_sysidle_force_exit() may be called externally
to reset the state machine back to the non-idle state.
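
For context, the intended consumer of rcu_sys_is_idle() is the
adaptive-ticks code running on the timekeeping CPU.  A hypothetical
call site, which is not part of this series and whose function name is
made up, might look like this:

/* Invoked on tick_do_timer_cpu with interrupts disabled. */
static bool timekeeping_can_stop_tick(void)
{
	/*
	 * Returns true only after all other CPUs have been idle long
	 * enough for the state machine to reach RCU_SYSIDLE_FULL_NOTED;
	 * otherwise it nudges the state machine along and returns false.
	 */
	return rcu_sys_is_idle();
}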

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/rcupdate.h |  18 +++
 kernel/rcutree.c         |  16 ++-
 kernel/rcutree.h         |   5 +
 kernel/rcutree_plugin.h  | 284 ++++++++++++++++++++++++++++++++++++++++++++++-
 4 files changed, 316 insertions(+), 7 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 48f1ef9..1aa8d8c 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1011,4 +1011,22 @@ static inline bool rcu_is_nocb_cpu(int cpu) { return false; }
 #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
 
 
+/* Only for use by adaptive-ticks code. */
+#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
+extern bool rcu_sys_is_idle(void);
+extern void rcu_sysidle_force_exit(void);
+#else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
+
+static inline bool rcu_sys_is_idle(void)
+{
+	return false;
+}
+
+static inline void rcu_sysidle_force_exit(void)
+{
+}
+
+#endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
+
+
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 725524e..aa6d96e 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -718,6 +718,7 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp,
 					 bool *isidle, unsigned long *maxj)
 {
 	rdp->dynticks_snap = atomic_add_return(0, &rdp->dynticks->dynticks);
+	rcu_sysidle_check_cpu(rdp, isidle, maxj);
 	return (rdp->dynticks_snap & 0x1) == 0;
 }
 
@@ -1356,11 +1357,17 @@ int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
 	rsp->n_force_qs++;
 	if (fqs_state == RCU_SAVE_DYNTICK) {
 		/* Collect dyntick-idle snapshots. */
+		if (is_sysidle_rcu_state(rsp)) {
+			isidle = 1;
+			maxj = jiffies - ULONG_MAX / 4;
+		}
 		force_qs_rnp(rsp, dyntick_save_progress_counter,
 			     &isidle, &maxj);
+		rcu_sysidle_report_gp(rsp, isidle, maxj);
 		fqs_state = RCU_FORCE_QS;
 	} else {
 		/* Handle dyntick-idle and offline CPUs. */
+		isidle = 0;
 		force_qs_rnp(rsp, rcu_implicit_dynticks_qs, &isidle, &maxj);
 	}
 	/* Clear flag to prevent immediate re-entry. */
@@ -2087,9 +2094,12 @@ static void force_qs_rnp(struct rcu_state *rsp,
 		cpu = rnp->grplo;
 		bit = 1;
 		for (; cpu <= rnp->grphi; cpu++, bit <<= 1) {
-			if ((rnp->qsmask & bit) != 0 &&
-			    f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
-				mask |= bit;
+			if ((rnp->qsmask & bit) != 0) {
+				if ((rnp->qsmaskinit & bit) != 0)
+					*isidle = 0;
+				if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
+					mask |= bit;
+			}
 		}
 		if (mask != 0) {
 
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index 1895043..e0de5dc 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -555,6 +555,11 @@ static void rcu_kick_nohz_cpu(int cpu);
 static bool init_nocb_callback_list(struct rcu_data *rdp);
 static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
 static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
+static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
+				  unsigned long *maxj);
+static bool is_sysidle_rcu_state(struct rcu_state *rsp);
+static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
+				  unsigned long maxj);
 static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
 
 #endif /* #ifndef RCU_TREE_NONCORE */
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 3edae39..ff84bed 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -28,7 +28,7 @@
 #include <linux/gfp.h>
 #include <linux/oom.h>
 #include <linux/smpboot.h>
-#include <linux/tick.h>
+#include "time/tick-internal.h"
 
 #define RCU_KTHREAD_PRIO 1
 
@@ -2395,12 +2395,12 @@ static void rcu_kick_nohz_cpu(int cpu)
  * most active flavor of RCU.
  */
 #ifdef CONFIG_PREEMPT_RCU
-static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_preempt_state;
+static struct rcu_state *rcu_sysidle_state = &rcu_preempt_state;
 #else /* #ifdef CONFIG_PREEMPT_RCU */
-static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_sched_state;
+static struct rcu_state *rcu_sysidle_state = &rcu_sched_state;
 #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
 
-static int __maybe_unused full_sysidle_state; /* Current system-idle state. */
+static int full_sysidle_state;		/* Current system-idle state. */
 #define RCU_SYSIDLE_NOT		0	/* Some CPU is not idle. */
 #define RCU_SYSIDLE_SHORT	1	/* All CPUs idle for brief period. */
 #define RCU_SYSIDLE_LONG	2	/* All CPUs idle for long enough. */
@@ -2444,6 +2444,38 @@ static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
 }
 
 /*
+ * Unconditionally force exit from full system-idle state.  This is
+ * invoked when a normal CPU exits idle, but must be called separately
+ * for the timekeeping CPU (tick_do_timer_cpu).  The reason for this
+ * is that the timekeeping CPU is permitted to take scheduling-clock
+ * interrupts while the system is in system-idle state, and of course
+ * rcu_sysidle_exit() has no way of distinguishing a scheduling-clock
+ * interrupt from any other type of interrupt.
+ */
+void rcu_sysidle_force_exit(void)
+{
+	int oldstate = ACCESS_ONCE(full_sysidle_state);
+	int newoldstate;
+
+	/*
+	 * Each pass through the following loop attempts to exit full
+	 * system-idle state.  If contention proves to be a problem,
+	 * a trylock-based contention tree could be used here.
+	 */
+	while (oldstate > RCU_SYSIDLE_SHORT) {
+		newoldstate = cmpxchg(&full_sysidle_state,
+				      oldstate, RCU_SYSIDLE_NOT);
+		if (oldstate == newoldstate &&
+		    oldstate == RCU_SYSIDLE_FULL_NOTED) {
+			rcu_kick_nohz_cpu(tick_do_timer_cpu);
+			return; /* We cleared it, done! */
+		}
+		oldstate = newoldstate;
+	}
+	smp_mb(); /* Order initial oldstate fetch vs. later non-idle work. */
+}
+
+/*
  * Invoked to note entry to irq or task transition from idle.  Note that
  * usermode execution does -not- count as idle here!  The caller must
  * have disabled interrupts.
@@ -2476,6 +2508,235 @@ static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
 	atomic_inc(&rdtp->dynticks_idle);
 	smp_mb__after_atomic_inc();
 	WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks_idle) & 0x1));
+
+	/*
+	 * If we are the timekeeping CPU, we are permitted to be non-idle
+	 * during a system-idle state.  This must be the case, because
+	 * the timekeeping CPU has to take scheduling-clock interrupts
+	 * during the time that the system is transitioning to full
+	 * system-idle state.  This means that the timekeeping CPU must
+	 * invoke rcu_sysidle_force_exit() directly if it does anything
+	 * more than take a scheduling-clock interrupt.
+	 */
+	if (smp_processor_id() == tick_do_timer_cpu)
+		return;
+
+	/* Update system-idle state: We are clearly no longer fully idle! */
+	rcu_sysidle_force_exit();
+}
+
+/*
+ * Check to see if the current CPU is idle.  Note that usermode execution
+ * does not count as idle.  The caller must have disabled interrupts.
+ */
+static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
+				  unsigned long *maxj)
+{
+	int cur;
+	unsigned long j;
+	struct rcu_dynticks *rdtp = rdp->dynticks;
+
+	/*
+	 * If some other CPU has already reported non-idle, if this is
+	 * not the flavor of RCU that tracks sysidle state, or if this
+	 * is an offline or the timekeeping CPU, nothing to do.
+	 */
+	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
+	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
+		return;
+	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
+
+	/* Pick up current idle and NMI-nesting counter and check. */
+	cur = atomic_read(&rdtp->dynticks_idle);
+	if (cur & 0x1) {
+		*isidle = 0; /* We are not idle! */
+		return;
+	}
+	smp_mb(); /* Read counters before timestamps. */
+
+	/* Pick up timestamps. */
+	j = ACCESS_ONCE(rdtp->dynticks_idle_jiffies);
+	/* If this CPU entered idle more recently, update maxj timestamp. */
+	if (ULONG_CMP_LT(*maxj, j))
+		*maxj = j;
+}
+
+/*
+ * Is this the flavor of RCU that is handling full-system idle?
+ */
+static bool is_sysidle_rcu_state(struct rcu_state *rsp)
+{
+	return rsp == rcu_sysidle_state;
+}
+
+/*
+ * Return a delay in jiffies based on the number of CPUs, rcu_node
+ * leaf fanout, and jiffies tick rate.  The idea is to allow larger
+ * systems more time to transition to full-idle state in order to
+ * avoid the cache thrashing that otherwise occur on the state variable.
+ * Really small systems (less than a couple of tens of CPUs) should
+ * instead use a single global atomically incremented counter, and later
+ * versions of this will automatically reconfigure themselves accordingly.
+ */
+static unsigned long rcu_sysidle_delay(void)
+{
+	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL)
+		return 0;
+	return DIV_ROUND_UP(nr_cpu_ids * HZ, rcu_fanout_leaf * 1000);
+}
+
+/*
+ * Advance the full-system-idle state.  This is invoked when all of
+ * the non-timekeeping CPUs are idle.
+ */
+static void rcu_sysidle(unsigned long j)
+{
+	/* Check the current state. */
+	switch (ACCESS_ONCE(full_sysidle_state)) {
+	case RCU_SYSIDLE_NOT:
+
+		/* First time all are idle, so note a short idle period. */
+		ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_SHORT;
+		break;
+
+	case RCU_SYSIDLE_SHORT:
+
+		/*
+		 * Idle for a bit, time to advance to next state?
+		 * cmpxchg failure means race with non-idle, let them win.
+		 */
+		if (ULONG_CMP_GE(jiffies, j + rcu_sysidle_delay()))
+			(void)cmpxchg(&full_sysidle_state,
+				      RCU_SYSIDLE_SHORT, RCU_SYSIDLE_LONG);
+		break;
+
+	case RCU_SYSIDLE_LONG:
+
+		/*
+		 * Do an additional check pass before advancing to full.
+		 * cmpxchg failure means race with non-idle, let them win.
+		 */
+		if (ULONG_CMP_GE(jiffies, j + rcu_sysidle_delay()))
+			(void)cmpxchg(&full_sysidle_state,
+				      RCU_SYSIDLE_LONG, RCU_SYSIDLE_FULL);
+		break;
+
+	default:
+		break;
+	}
+}
+
+/*
+ * Found a non-idle non-timekeeping CPU, so kick the system-idle state
+ * back to the beginning.
+ */
+static void rcu_sysidle_cancel(void)
+{
+	smp_mb();
+	ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_NOT;
+}
+
+/*
+ * Update the sysidle state based on the results of a force-quiescent-state
+ * scan of the CPUs' dyntick-idle state.
+ */
+static void rcu_sysidle_report(struct rcu_state *rsp, int isidle,
+			       unsigned long maxj, bool gpkt)
+{
+	if (rsp != rcu_sysidle_state)
+		return;  /* Wrong flavor, ignore. */
+	if (isidle) {
+		if (gpkt && nr_cpu_ids > RCU_SYSIDLE_SMALL)
+			rcu_sysidle(maxj);    /* More idle! */
+	} else {
+		rcu_sysidle_cancel(); /* Idle is over. */
+	}
+}
+
+static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
+				  unsigned long maxj)
+{
+	rcu_sysidle_report(rsp, isidle, maxj, true);
+}
+
+/* Callback and function for forcing an RCU grace period. */
+struct rcu_sysidle_head {
+	struct rcu_head rh;
+	int inuse;
+};
+
+static void rcu_sysidle_cb(struct rcu_head *rhp)
+{
+	struct rcu_sysidle_head *rshp;
+
+	smp_mb();  /* grace period precedes setting inuse. */
+	rshp = container_of(rhp, struct rcu_sysidle_head, rh);
+	ACCESS_ONCE(rshp->inuse) = 0;
+}
+
+/*
+ * Check to see if the system is fully idle, other than the timekeeping CPU.
+ * The caller must have disabled interrupts.
+ */
+bool rcu_sys_is_idle(void)
+{
+	static struct rcu_sysidle_head rsh;
+	int rss = ACCESS_ONCE(full_sysidle_state);
+
+	if (WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu))
+		return false;
+
+	/* Handle small-system case by doing a full scan of CPUs. */
+	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL) {
+		int oldrss = rss - 1;
+
+		/*
+		 * One pass to advance to each state up to _FULL.
+		 * Give up if any pass fails to advance the state.
+		 */
+		while (rss < RCU_SYSIDLE_FULL && oldrss < rss) {
+			int cpu;
+			bool isidle = true;
+			unsigned long maxj = jiffies - ULONG_MAX / 4;
+			struct rcu_data *rdp;
+
+			/* Scan all the CPUs looking for nonidle CPUs. */
+			for_each_possible_cpu(cpu) {
+				rdp = per_cpu_ptr(rcu_sysidle_state->rda, cpu);
+				rcu_sysidle_check_cpu(rdp, &isidle, &maxj);
+				if (!isidle)
+					break;
+			}
+			rcu_sysidle_report(rcu_sysidle_state,
+					   isidle, maxj, false);
+			oldrss = rss;
+			rss = ACCESS_ONCE(full_sysidle_state);
+		}
+	}
+
+	/* If this is the first observation of an idle period, record it. */
+	if (rss == RCU_SYSIDLE_FULL) {
+		rss = cmpxchg(&full_sysidle_state,
+			      RCU_SYSIDLE_FULL, RCU_SYSIDLE_FULL_NOTED);
+		return rss == RCU_SYSIDLE_FULL;
+	}
+
+	smp_mb(); /* ensure rss load happens before later caller actions. */
+
+	/* If already fully idle, tell the caller (in case of races). */
+	if (rss == RCU_SYSIDLE_FULL_NOTED)
+		return true;
+
+	/*
+	 * If we aren't there yet, and a grace period is not in flight,
+	 * initiate a grace period.  Either way, tell the caller that
+	 * we are not there yet.
+	 */
+	if (nr_cpu_ids > RCU_SYSIDLE_SMALL &&
+	    !rcu_gp_in_progress(rcu_sysidle_state) &&
+	    !rsh.inuse && xchg(&rsh.inuse, 1) == 0)
+		call_rcu(&rsh.rh, rcu_sysidle_cb);
+	return false;
 }
 
 /*
@@ -2496,6 +2757,21 @@ static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
 {
 }
 
+static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
+				  unsigned long *maxj)
+{
+}
+
+static bool is_sysidle_rcu_state(struct rcu_state *rsp)
+{
+	return false;
+}
+
+static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
+				  unsigned long maxj)
+{
+}
+
 static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
 {
 }
-- 
1.8.1.5



* [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
                     ` (4 preceding siblings ...)
  2013-07-26 23:19   ` [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine Paul E. McKenney
@ 2013-07-26 23:19   ` Paul E. McKenney
  2013-07-29  3:36     ` Lai Jiangshan
  2013-07-29  3:35   ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Lai Jiangshan
  2013-08-05  1:04   ` Frederic Weisbecker
  7 siblings, 1 reply; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-26 23:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, dhowells, edumazet, darren, fweisbec, sbw,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Because RCU's quiescent-state-forcing mechanism is used to drive the
full-system-idle state machine, and because this mechanism is executed
by RCU's grace-period kthreads, this commit forces these kthreads to
run on the timekeeping CPU (tick_do_timer_cpu).  To do otherwise would
mean that the RCU grace-period kthreads would force the system into
non-idle state every time they drove the state machine, which would
be just a bit on the futile side.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rcutree.c        |  1 +
 kernel/rcutree.h        |  1 +
 kernel/rcutree_plugin.h | 20 +++++++++++++++++++-
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index aa6d96e..fe83085 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1286,6 +1286,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
 	struct rcu_data *rdp;
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
+	rcu_bind_gp_kthread();
 	raw_spin_lock_irq(&rnp->lock);
 	rsp->gp_flags = 0; /* Clear all flags: New grace period. */
 
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index e0de5dc..49dac99 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -560,6 +560,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
 static bool is_sysidle_rcu_state(struct rcu_state *rsp);
 static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
 				  unsigned long maxj);
+static void rcu_bind_gp_kthread(void);
 static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
 
 #endif /* #ifndef RCU_TREE_NONCORE */
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index ff84bed..f65d9c2 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -2544,7 +2544,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
 	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
 	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
 		return;
-	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
+	WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
 
 	/* Pick up current idle and NMI-nesting counter and check. */
 	cur = atomic_read(&rdtp->dynticks_idle);
@@ -2570,6 +2570,20 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
 }
 
 /*
+ * Bind the grace-period kthread for the sysidle flavor of RCU to the
+ * timekeeping CPU.
+ */
+static void rcu_bind_gp_kthread(void)
+{
+	int cpu = ACCESS_ONCE(tick_do_timer_cpu);
+
+	if (cpu < 0 || cpu >= nr_cpu_ids)
+		return;
+	if (raw_smp_processor_id() != cpu)
+		set_cpus_allowed_ptr(current, cpumask_of(cpu));
+}
+
+/*
  * Return a delay in jiffies based on the number of CPUs, rcu_node
  * leaf fanout, and jiffies tick rate.  The idea is to allow larger
  * systems more time to transition to full-idle state in order to
@@ -2767,6 +2781,10 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
 	return false;
 }
 
+static void rcu_bind_gp_kthread(void)
+{
+}
+
 static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
 				  unsigned long maxj)
 {
-- 
1.8.1.5



* Re: [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
                     ` (5 preceding siblings ...)
  2013-07-26 23:19   ` [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU Paul E. McKenney
@ 2013-07-29  3:35   ` Lai Jiangshan
  2013-07-29 15:28     ` Paul E. McKenney
  2013-08-05  1:04   ` Frederic Weisbecker
  7 siblings, 1 reply; 26+ messages in thread
From: Lai Jiangshan @ 2013-07-29  3:35 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> At least one CPU must keep the scheduling-clock tick running for
> timekeeping purposes whenever there is a non-idle CPU.  However, with
> the new nohz_full adaptive-idle machinery, it is difficult to distinguish
> all CPUs really being idle from all non-idle CPUs being in
> adaptive-ticks mode.  This commit therefore adds a Kconfig parameter
> as a first step towards enabling scalable detection of the full-system
> idle state.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/time/Kconfig | 23 +++++++++++++++++++++++
>  1 file changed, 23 insertions(+)
> 
> diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
> index 70f27e8..a613c2a 100644
> --- a/kernel/time/Kconfig
> +++ b/kernel/time/Kconfig
> @@ -134,6 +134,29 @@ config NO_HZ_FULL_ALL
>  	 Note the boot CPU will still be kept outside the range to
>  	 handle the timekeeping duty.
>  
> +config NO_HZ_FULL_SYSIDLE
> +	bool "Detect full-system idle state for full dynticks system"
> +	depends on NO_HZ_FULL
> +	default n
> +	help
> +	 At least one CPU must keep the scheduling-clock tick running
> +	 for timekeeping purposes whenever there is a non-idle CPU,
> +	 where "non-idle" includes CPUs with a single runnable task
> +	 in adaptive-idle mode.  Because the underlying adaptive-tick
> +	 support cannot distinguish between all CPUs being idle and
> +	 all CPUs each running a single task in adaptive-idle mode,
> +	 the underlying support simply ensures that there is always
> +	 a CPU handling the scheduling-clock tick, whether or not all
> +	 CPUs are idle.  This Kconfig option enables scalable detection
> +	 of the all-CPUs-idle state, thus allowing the scheduling-clock
> +	 tick to be disabled when all CPUs are idle.  Note that scalable
> +	 detection of the all-CPUs-idle state means that larger systems
> +	 will be slower to declare the all-CPUs-idle state.
> +
> +	 Say Y if you would like to help debug all-CPUs-idle detection.

Is the code needed only for debugging?
I guess not.

> +
> +	 Say N if you are unsure.
> +
>  config NO_HZ
>  	bool "Old Idle dynticks config"
>  	depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS



* Re: [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  2013-07-26 23:19   ` [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU Paul E. McKenney
@ 2013-07-29  3:36     ` Lai Jiangshan
  2013-07-29 16:52       ` Paul E. McKenney
  0 siblings, 1 reply; 26+ messages in thread
From: Lai Jiangshan @ 2013-07-29  3:36 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> Because RCU's quiescent-state-forcing mechanism is used to drive the
> full-system-idle state machine, and because this mechanism is executed
> by RCU's grace-period kthreads, this commit forces these kthreads to
> run on the timekeeping CPU (tick_do_timer_cpu).  To do otherwise would
> mean that the RCU grace-period kthreads would force the system into
> non-idle state every time they drove the state machine, which would
> be just a bit on the futile side.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/rcutree.c        |  1 +
>  kernel/rcutree.h        |  1 +
>  kernel/rcutree_plugin.h | 20 +++++++++++++++++++-
>  3 files changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index aa6d96e..fe83085 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -1286,6 +1286,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
>  	struct rcu_data *rdp;
>  	struct rcu_node *rnp = rcu_get_root(rsp);
>  
> +	rcu_bind_gp_kthread();
>  	raw_spin_lock_irq(&rnp->lock);
>  	rsp->gp_flags = 0; /* Clear all flags: New grace period. */

So the gp thread is bound when handling RCU_GP_FLAG_INIT ...

>  
> diff --git a/kernel/rcutree.h b/kernel/rcutree.h
> index e0de5dc..49dac99 100644
> --- a/kernel/rcutree.h
> +++ b/kernel/rcutree.h
> @@ -560,6 +560,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
>  static bool is_sysidle_rcu_state(struct rcu_state *rsp);
>  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
>  				  unsigned long maxj);
> +static void rcu_bind_gp_kthread(void);
>  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
>  
>  #endif /* #ifndef RCU_TREE_NONCORE */
> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> index ff84bed..f65d9c2 100644
> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -2544,7 +2544,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
>  	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
>  	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
>  		return;
> -	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
> +	WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);


... but rcu_sysidle_check_cpu() is called when handling RCU_GP_FLAG_FQS.

At that time, the thread may not yet be bound to tick_do_timer_cpu,
so the WARN_ON_ONCE() may be wrong.

Is there some other code that I missed that ensures the gp thread is
bound to tick_do_timer_cpu?

>  
>  	/* Pick up current idle and NMI-nesting counter and check. */
>  	cur = atomic_read(&rdtp->dynticks_idle);
> @@ -2570,6 +2570,20 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
>  }
>  
>  /*
> + * Bind the grace-period kthread for the sysidle flavor of RCU to the
> + * timekeeping CPU.
> + */
> +static void rcu_bind_gp_kthread(void)
> +{
> +	int cpu = ACCESS_ONCE(tick_do_timer_cpu);
> +
> +	if (cpu < 0 || cpu >= nr_cpu_ids)
> +		return;
> +	if (raw_smp_processor_id() != cpu)
> +		set_cpus_allowed_ptr(current, cpumask_of(cpu));
> +}
> +
> +/*
>   * Return a delay in jiffies based on the number of CPUs, rcu_node
>   * leaf fanout, and jiffies tick rate.  The idea is to allow larger
>   * systems more time to transition to full-idle state in order to
> @@ -2767,6 +2781,10 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
>  	return false;
>  }
>  
> +static void rcu_bind_gp_kthread(void)
> +{
> +}
> +
>  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
>  				  unsigned long maxj)
>  {



* Re: [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine
  2013-07-26 23:19   ` [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine Paul E. McKenney
@ 2013-07-29  8:19     ` Lai Jiangshan
  2013-07-29 17:43       ` Paul E. McKenney
  2013-08-09 16:20     ` Frederic Weisbecker
  1 sibling, 1 reply; 26+ messages in thread
From: Lai Jiangshan @ 2013-07-29  8:19 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> This commit adds the state machine that takes the per-CPU idle data
> as input and produces a full-system-idle indication as output.  This
> state machine is driven out of RCU's quiescent-state-forcing
> mechanism, which invokes rcu_sysidle_check_cpu() to collect per-CPU
> idle state and then rcu_sysidle_report() to drive the state machine.
> 
> The full-system-idle state is sampled using rcu_sys_is_idle(), which
> also drives the state machine if RCU is idle (and does so by forcing
> RCU to become non-idle).  This function returns true if all but the
> timekeeping CPU (tick_do_timer_cpu) are idle and have been idle long
> enough to avoid memory contention on the full_sysidle_state state
> variable.  The rcu_sysidle_force_exit() may be called externally
> to reset the state machine back into non-idle state.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  include/linux/rcupdate.h |  18 +++
>  kernel/rcutree.c         |  16 ++-
>  kernel/rcutree.h         |   5 +
>  kernel/rcutree_plugin.h  | 284 ++++++++++++++++++++++++++++++++++++++++++++++-
>  4 files changed, 316 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 48f1ef9..1aa8d8c 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -1011,4 +1011,22 @@ static inline bool rcu_is_nocb_cpu(int cpu) { return false; }
>  #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
>  
>  
> +/* Only for use by adaptive-ticks code. */
> +#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
> +extern bool rcu_sys_is_idle(void);
> +extern void rcu_sysidle_force_exit(void);
> +#else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
> +
> +static inline bool rcu_sys_is_idle(void)
> +{
> +	return false;
> +}
> +
> +static inline void rcu_sysidle_force_exit(void)
> +{
> +}
> +
> +#endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
> +
> +
>  #endif /* __LINUX_RCUPDATE_H */
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index 725524e..aa6d96e 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -718,6 +718,7 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp,
>  					 bool *isidle, unsigned long *maxj)
>  {
>  	rdp->dynticks_snap = atomic_add_return(0, &rdp->dynticks->dynticks);
> +	rcu_sysidle_check_cpu(rdp, isidle, maxj);
>  	return (rdp->dynticks_snap & 0x1) == 0;
>  }
>  
> @@ -1356,11 +1357,17 @@ int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
>  	rsp->n_force_qs++;
>  	if (fqs_state == RCU_SAVE_DYNTICK) {
>  		/* Collect dyntick-idle snapshots. */
> +		if (is_sysidle_rcu_state(rsp)) {
> +			isidle = 1;

isidle = true;
The type of isidle is bool.

> +			maxj = jiffies - ULONG_MAX / 4;
> +		}
>  		force_qs_rnp(rsp, dyntick_save_progress_counter,
>  			     &isidle, &maxj);
> +		rcu_sysidle_report_gp(rsp, isidle, maxj);
>  		fqs_state = RCU_FORCE_QS;
>  	} else {
>  		/* Handle dyntick-idle and offline CPUs. */
> +		isidle = 0;

isidle = false;

>  		force_qs_rnp(rsp, rcu_implicit_dynticks_qs, &isidle, &maxj);
>  	}
>  	/* Clear flag to prevent immediate re-entry. */
> @@ -2087,9 +2094,12 @@ static void force_qs_rnp(struct rcu_state *rsp,
>  		cpu = rnp->grplo;
>  		bit = 1;
>  		for (; cpu <= rnp->grphi; cpu++, bit <<= 1) {
> -			if ((rnp->qsmask & bit) != 0 &&
> -			    f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
> -				mask |= bit;
> +			if ((rnp->qsmask & bit) != 0) {
> +				if ((rnp->qsmaskinit & bit) != 0)
> +					*isidle = 0;

*isidle = false;

> +				if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
> +					mask |= bit;
> +			}
>  		}
>  		if (mask != 0) {
>  
> diff --git a/kernel/rcutree.h b/kernel/rcutree.h
> index 1895043..e0de5dc 100644
> --- a/kernel/rcutree.h
> +++ b/kernel/rcutree.h
> @@ -555,6 +555,11 @@ static void rcu_kick_nohz_cpu(int cpu);
>  static bool init_nocb_callback_list(struct rcu_data *rdp);
>  static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
>  static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
> +static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> +				  unsigned long *maxj);
> +static bool is_sysidle_rcu_state(struct rcu_state *rsp);
> +static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> +				  unsigned long maxj);
>  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
>  
>  #endif /* #ifndef RCU_TREE_NONCORE */
> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> index 3edae39..ff84bed 100644
> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -28,7 +28,7 @@
>  #include <linux/gfp.h>
>  #include <linux/oom.h>
>  #include <linux/smpboot.h>
> -#include <linux/tick.h>
> +#include "time/tick-internal.h"
>  
>  #define RCU_KTHREAD_PRIO 1
>  
> @@ -2395,12 +2395,12 @@ static void rcu_kick_nohz_cpu(int cpu)
>   * most active flavor of RCU.
>   */
>  #ifdef CONFIG_PREEMPT_RCU
> -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_preempt_state;
> +static struct rcu_state *rcu_sysidle_state = &rcu_preempt_state;
>  #else /* #ifdef CONFIG_PREEMPT_RCU */
> -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_sched_state;
> +static struct rcu_state *rcu_sysidle_state = &rcu_sched_state;
>  #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
>  
> -static int __maybe_unused full_sysidle_state; /* Current system-idle state. */
> +static int full_sysidle_state;		/* Current system-idle state. */
>  #define RCU_SYSIDLE_NOT		0	/* Some CPU is not idle. */
>  #define RCU_SYSIDLE_SHORT	1	/* All CPUs idle for brief period. */
>  #define RCU_SYSIDLE_LONG	2	/* All CPUs idle for long enough. */
> @@ -2444,6 +2444,38 @@ static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
>  }
>  
>  /*
> + * Unconditionally force exit from full system-idle state.  This is
> + * invoked when a normal CPU exits idle, but must be called separately
> + * for the timekeeping CPU (tick_do_timer_cpu).  The reason for this
> + * is that the timekeeping CPU is permitted to take scheduling-clock
> + * interrupts while the system is in system-idle state, and of course
> + * rcu_sysidle_exit() has no way of distinguishing a scheduling-clock
> + * interrupt from any other type of interrupt.
> + */
> +void rcu_sysidle_force_exit(void)
> +{
> +	int oldstate = ACCESS_ONCE(full_sysidle_state);
> +	int newoldstate;
> +
> +	/*
> +	 * Each pass through the following loop attempts to exit full
> +	 * system-idle state.  If contention proves to be a problem,
> +	 * a trylock-based contention tree could be used here.
> +	 */
> +	while (oldstate > RCU_SYSIDLE_SHORT) {
> +		newoldstate = cmpxchg(&full_sysidle_state,
> +				      oldstate, RCU_SYSIDLE_NOT);
> +		if (oldstate == newoldstate &&
> +		    oldstate == RCU_SYSIDLE_FULL_NOTED) {
> +			rcu_kick_nohz_cpu(tick_do_timer_cpu);
> +			return; /* We cleared it, done! */
> +		}
> +		oldstate = newoldstate;
> +	}
> +	smp_mb(); /* Order initial oldstate fetch vs. later non-idle work. */

Why do we need this mb()?
Which mb() is it paired with?

> +}
> +
> +/*
>   * Invoked to note entry to irq or task transition from idle.  Note that
>   * usermode execution does -not- count as idle here!  The caller must
>   * have disabled interrupts.
> @@ -2476,6 +2508,235 @@ static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
>  	atomic_inc(&rdtp->dynticks_idle);
>  	smp_mb__after_atomic_inc();
>  	WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks_idle) & 0x1));
> +
> +	/*
> +	 * If we are the timekeeping CPU, we are permitted to be non-idle
> +	 * during a system-idle state.  This must be the case, because
> +	 * the timekeeping CPU has to take scheduling-clock interrupts
> +	 * during the time that the system is transitioning to full
> +	 * system-idle state.  This means that the timekeeping CPU must
> +	 * invoke rcu_sysidle_force_exit() directly if it does anything
> +	 * more than take a scheduling-clock interrupt.
> +	 */
> +	if (smp_processor_id() == tick_do_timer_cpu)
> +		return;
> +
> +	/* Update system-idle state: We are clearly no longer fully idle! */
> +	rcu_sysidle_force_exit();
> +}
> +
> +/*
> + * Check to see if the current CPU is idle.  Note that usermode execution
> + * does not count as idle.  The caller must have disabled interrupts.
> + */
> +static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> +				  unsigned long *maxj)
> +{
> +	int cur;
> +	unsigned long j;
> +	struct rcu_dynticks *rdtp = rdp->dynticks;
> +
> +	/*
> +	 * If some other CPU has already reported non-idle, if this is
> +	 * not the flavor of RCU that tracks sysidle state, or if this
> +	 * is an offline or the timekeeping CPU, nothing to do.
> +	 */
> +	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
> +	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
> +		return;
> +	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
> +
> +	/* Pick up current idle and NMI-nesting counter and check. */
> +	cur = atomic_read(&rdtp->dynticks_idle);
> +	if (cur & 0x1) {
> +		*isidle = 0; /* We are not idle! */

*isidle = false;

And likewise in the other places that use "isidle".

> +		return;
> +	}
> +	smp_mb(); /* Read counters before timestamps. */
> +
> +	/* Pick up timestamps. */
> +	j = ACCESS_ONCE(rdtp->dynticks_idle_jiffies);
> +	/* If this CPU entered idle more recently, update maxj timestamp. */
> +	if (ULONG_CMP_LT(*maxj, j))
> +		*maxj = j;
> +}
> +
> +/*
> + * Is this the flavor of RCU that is handling full-system idle?
> + */
> +static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> +{
> +	return rsp == rcu_sysidle_state;
> +}
> +
> +/*
> + * Return a delay in jiffies based on the number of CPUs, rcu_node
> + * leaf fanout, and jiffies tick rate.  The idea is to allow larger
> + * systems more time to transition to full-idle state in order to
> + * avoid the cache thrashing that otherwise occur on the state variable.
> + * Really small systems (less than a couple of tens of CPUs) should
> + * instead use a single global atomically incremented counter, and later
> + * versions of this will automatically reconfigure themselves accordingly.
> + */
> +static unsigned long rcu_sysidle_delay(void)
> +{
> +	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL)
> +		return 0;
> +	return DIV_ROUND_UP(nr_cpu_ids * HZ, rcu_fanout_leaf * 1000);
> +}
> +
> +/*
> + * Advance the full-system-idle state.  This is invoked when all of
> + * the non-timekeeping CPUs are idle.
> + */
> +static void rcu_sysidle(unsigned long j)
> +{
> +	/* Check the current state. */
> +	switch (ACCESS_ONCE(full_sysidle_state)) {
> +	case RCU_SYSIDLE_NOT:
> +
> +		/* First time all are idle, so note a short idle period. */
> +		ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_SHORT;
> +		break;
> +
> +	case RCU_SYSIDLE_SHORT:
> +
> +		/*
> +		 * Idle for a bit, time to advance to next state?
> +		 * cmpxchg failure means race with non-idle, let them win.
> +		 */
> +		if (ULONG_CMP_GE(jiffies, j + rcu_sysidle_delay()))
> +			(void)cmpxchg(&full_sysidle_state,
> +				      RCU_SYSIDLE_SHORT, RCU_SYSIDLE_LONG);
> +		break;

I don't think it can race with anybody.
I think ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_LONG would be enough.

Note:
rcu_sysidle_force_exit() doesn't change full_sysidle_state when it is RCU_SYSIDLE_SHORT.

> +
> +	case RCU_SYSIDLE_LONG:
> +
> +		/*
> +		 * Do an additional check pass before advancing to full.
> +		 * cmpxchg failure means race with non-idle, let them win.
> +		 */
> +		if (ULONG_CMP_GE(jiffies, j + rcu_sysidle_delay()))
> +			(void)cmpxchg(&full_sysidle_state,
> +				      RCU_SYSIDLE_LONG, RCU_SYSIDLE_FULL);
> +		break;
> +
> +	default:
> +		break;
> +	}
> +}
> +
> +/*
> + * Found a non-idle non-timekeeping CPU, so kick the system-idle state
> + * back to the beginning.
> + */
> +static void rcu_sysidle_cancel(void)
> +{
> +	smp_mb();
> +	ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_NOT;
> +}
> +
> +/*
> + * Update the sysidle state based on the results of a force-quiescent-state
> + * scan of the CPUs' dyntick-idle state.
> + */
> +static void rcu_sysidle_report(struct rcu_state *rsp, int isidle,
> +			       unsigned long maxj, bool gpkt)
> +{
> +	if (rsp != rcu_sysidle_state)
> +		return;  /* Wrong flavor, ignore. */
> +	if (isidle) {
> +		if (gpkt && nr_cpu_ids > RCU_SYSIDLE_SMALL)
> +			rcu_sysidle(maxj);    /* More idle! */
> +	} else {
> +		rcu_sysidle_cancel(); /* Idle is over. */
> +	}
> +}

"gpkt" is always equal to "nr_cpu_ids > RCU_SYSIDLE_SMALL",

so we can remove "gpkt" argument and rcu_sysidle_report_gp().
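
That is, something like this (untested):

static void rcu_sysidle_report(struct rcu_state *rsp, int isidle,
			       unsigned long maxj)
{
	if (rsp != rcu_sysidle_state)
		return;  /* Wrong flavor, ignore. */
	if (isidle) {
		if (nr_cpu_ids > RCU_SYSIDLE_SMALL)
			rcu_sysidle(maxj);    /* More idle! */
	} else {
		rcu_sysidle_cancel(); /* Idle is over. */
	}
}

with the callers then calling rcu_sysidle_report() directly.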


> +
> +static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> +				  unsigned long maxj)
> +{
> +	rcu_sysidle_report(rsp, isidle, maxj, true);
> +}
> +
> +/* Callback and function for forcing an RCU grace period. */
> +struct rcu_sysidle_head {
> +	struct rcu_head rh;
> +	int inuse;
> +};
> +
> +static void rcu_sysidle_cb(struct rcu_head *rhp)
> +{
> +	struct rcu_sysidle_head *rshp;
> +
> +	smp_mb();  /* grace period precedes setting inuse. */

Why do we need this mb()?

> +	rshp = container_of(rhp, struct rcu_sysidle_head, rh);
> +	ACCESS_ONCE(rshp->inuse) = 0;
> +}
> +
> +/*
> + * Check to see if the system is fully idle, other than the timekeeping CPU.
> + * The caller must have disabled interrupts.
> + */
> +bool rcu_sys_is_idle(void)
> +{
> +	static struct rcu_sysidle_head rsh;
> +	int rss = ACCESS_ONCE(full_sysidle_state);
> +
> +	if (WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu))
> +		return false;
> +
> +	/* Handle small-system case by doing a full scan of CPUs. */
> +	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL) {
> +		int oldrss = rss - 1;
> +
> +		/*
> +		 * One pass to advance to each state up to _FULL.
> +		 * Give up if any pass fails to advance the state.
> +		 */
> +		while (rss < RCU_SYSIDLE_FULL && oldrss < rss) {
> +			int cpu;
> +			bool isidle = true;
> +			unsigned long maxj = jiffies - ULONG_MAX / 4;
> +			struct rcu_data *rdp;
> +
> +			/* Scan all the CPUs looking for nonidle CPUs. */
> +			for_each_possible_cpu(cpu) {
> +				rdp = per_cpu_ptr(rcu_sysidle_state->rda, cpu);
> +				rcu_sysidle_check_cpu(rdp, &isidle, &maxj);
> +				if (!isidle)
> +					break;
> +			}
> +			rcu_sysidle_report(rcu_sysidle_state,
> +					   isidle, maxj, false);
> +			oldrss = rss;
> +			rss = ACCESS_ONCE(full_sysidle_state);
> +		}
> +	}
> +
> +	/* If this is the first observation of an idle period, record it. */
> +	if (rss == RCU_SYSIDLE_FULL) {
> +		rss = cmpxchg(&full_sysidle_state,
> +			      RCU_SYSIDLE_FULL, RCU_SYSIDLE_FULL_NOTED);
> +		return rss == RCU_SYSIDLE_FULL;
> +	}
> +
> +	smp_mb(); /* ensure rss load happens before later caller actions. */
> +
> +	/* If already fully idle, tell the caller (in case of races). */
> +	if (rss == RCU_SYSIDLE_FULL_NOTED)
> +		return true;
> +
> +	/*
> +	 * If we aren't there yet, and a grace period is not in flight,
> +	 * initiate a grace period.  Either way, tell the caller that
> +	 * we are not there yet.
> +	 */
> +	if (nr_cpu_ids > RCU_SYSIDLE_SMALL &&
> +	    !rcu_gp_in_progress(rcu_sysidle_state) &&
> +	    !rsh.inuse && xchg(&rsh.inuse, 1) == 0)

Why do we need to use xchg()?  Who would it race with?

> +		call_rcu(&rsh.rh, rcu_sysidle_cb);
> +	return false;
>  }
>  
>  /*
> @@ -2496,6 +2757,21 @@ static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
>  {
>  }
>  
> +static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> +				  unsigned long *maxj)
> +{
> +}
> +
> +static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> +{
> +	return false;
> +}
> +
> +static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> +				  unsigned long maxj)
> +{
> +}
> +
>  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
>  {
>  }



* Re: [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state
  2013-07-29  3:35   ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Lai Jiangshan
@ 2013-07-29 15:28     ` Paul E. McKenney
  0 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-29 15:28 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On Mon, Jul 29, 2013 at 11:35:56AM +0800, Lai Jiangshan wrote:
> On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > At least one CPU must keep the scheduling-clock tick running for
> > timekeeping purposes whenever there is a non-idle CPU.  However, with
> > the new nohz_full adaptive-idle machinery, it is difficult to distinguish
> > between all CPUs really being idle as opposed to all non-idle CPUs being
> > in adaptive-ticks mode.  This commit therefore adds a Kconfig parameter
> > as a first step towards enabling a scalable detection of full-system
> > idle state.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> >  kernel/time/Kconfig | 23 +++++++++++++++++++++++
> >  1 file changed, 23 insertions(+)
> > 
> > diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
> > index 70f27e8..a613c2a 100644
> > --- a/kernel/time/Kconfig
> > +++ b/kernel/time/Kconfig
> > @@ -134,6 +134,29 @@ config NO_HZ_FULL_ALL
> >  	 Note the boot CPU will still be kept outside the range to
> >  	 handle the timekeeping duty.
> >  
> > +config NO_HZ_FULL_SYSIDLE
> > +	bool "Detect full-system idle state for full dynticks system"
> > +	depends on NO_HZ_FULL
> > +	default n
> > +	help
> > +	 At least one CPU must keep the scheduling-clock tick running
> > +	 for timekeeping purposes whenever there is a non-idle CPU,
> > +	 where "non-idle" includes CPUs with a single runnable task
> > +	 in adaptive-idle mode.  Because the underlying adaptive-tick
> > +	 support cannot distinguish between all CPUs being idle and
> > +	 all CPUs each running a single task in adaptive-idle mode,
> > +	 the underlying support simply ensures that there is always
> > +	 a CPU handling the scheduling-clock tick, whether or not all
> > +	 CPUs are idle.  This Kconfig option enables scalable detection
> > +	 of the all-CPUs-idle state, thus allowing the scheduling-clock
> > +	 tick to be disabled when all CPUs are idle.  Note that scalable
> > +	 detection of the all-CPUs-idle state means that larger systems
> > +	 will be slower to declare the all-CPUs-idle state.
> > +
> > +	 Say Y if you would like to help debug all-CPUs-idle detection.
> 
> Is the code needed only for debugging?
> I guess not.

The code is not used only for debug, but if you enable it now, you will
likely be helping to debug it.  ;-)

							Thanx, Paul

> > +
> > +	 Say N if you are unsure.
> > +
> >  config NO_HZ
> >  	bool "Old Idle dynticks config"
> >  	depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS
> 



* Re: [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  2013-07-29  3:36     ` Lai Jiangshan
@ 2013-07-29 16:52       ` Paul E. McKenney
  2013-07-29 16:59         ` Frederic Weisbecker
  2013-07-30  1:40         ` Lai Jiangshan
  0 siblings, 2 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-29 16:52 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On Mon, Jul 29, 2013 at 11:36:05AM +0800, Lai Jiangshan wrote:
> On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > Because RCU's quiescent-state-forcing mechanism is used to drive the
> > full-system-idle state machine, and because this mechanism is executed
> > by RCU's grace-period kthreads, this commit forces these kthreads to
> > run on the timekeeping CPU (tick_do_timer_cpu).  To do otherwise would
> > mean that the RCU grace-period kthreads would force the system into
> > non-idle state every time they drove the state machine, which would
> > be just a bit on the futile side.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> >  kernel/rcutree.c        |  1 +
> >  kernel/rcutree.h        |  1 +
> >  kernel/rcutree_plugin.h | 20 +++++++++++++++++++-
> >  3 files changed, 21 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index aa6d96e..fe83085 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -1286,6 +1286,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
> >  	struct rcu_data *rdp;
> >  	struct rcu_node *rnp = rcu_get_root(rsp);
> >  
> > +	rcu_bind_gp_kthread();
> >  	raw_spin_lock_irq(&rnp->lock);
> >  	rsp->gp_flags = 0; /* Clear all flags: New grace period. */
> 
> You bind the gp thread when RCU_GP_FLAG_INIT is set ...
> 
> >  
> > diff --git a/kernel/rcutree.h b/kernel/rcutree.h
> > index e0de5dc..49dac99 100644
> > --- a/kernel/rcutree.h
> > +++ b/kernel/rcutree.h
> > @@ -560,6 +560,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> >  static bool is_sysidle_rcu_state(struct rcu_state *rsp);
> >  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> >  				  unsigned long maxj);
> > +static void rcu_bind_gp_kthread(void);
> >  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
> >  
> >  #endif /* #ifndef RCU_TREE_NONCORE */
> > diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> > index ff84bed..f65d9c2 100644
> > --- a/kernel/rcutree_plugin.h
> > +++ b/kernel/rcutree_plugin.h
> > @@ -2544,7 +2544,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> >  	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
> >  	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
> >  		return;
> > -	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
> > +	WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
> 
> 
> but call rcu_sysidle_check_cpu() when RCU_GP_FLAG_FQS is set.

Yep!  But we don't call rcu_gp_fqs() until the grace period is started,
by which time the kthread will be bound.  Any setting of RCU_GP_FLAG_FQS
while there is no grace period in progress is ignored.
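
That is, the intended order of events, sketching the flow in this series:

	rcu_gp_kthread()
	  rcu_gp_init()              /* Runs at the start of each GP... */
	    rcu_bind_gp_kthread()    /* ...so the kthread is bound here. */
	  ...
	  rcu_gp_fqs()               /* Runs only once the GP has started... */
	    force_qs_rnp()
	      rcu_sysidle_check_cpu()  /* ...so the WARN_ON_ONCE() holds. */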

> At that time, the thread may not be bound to tick_do_timer_cpu,
> so the WARN_ON_ONCE() may fire spuriously.
>
> Does any other code that I missed ensure the gp thread is bound
> to tick_do_timer_cpu?

However, on small systems, rcu_sysidle_check_cpu() can be called from
the timekeeping CPU.  I suppose that this could potentially happen
before the first grace period starts, and in that case, we could
potentially see a spurious warning.  I could imagine a number of ways
to fix this:

1.	Bind the kthread when it is created.

2.	Bind the kthread when it first starts running, rather than just
	after the grace period starts.

3.	Suppress the warning when there is no grace period in progress.

4.	Suppress the warning prior to the first grace period starting.

Seems like #3 is the most straightforward approach.  I just change it to:

	if (rcu_gp_in_progress(rdp->rsp))
		WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);

This still gets a WARN_ON_ONCE() if someone moves the timekeeping CPU,
but Frederic tells me that it never moves.  My WARN_ON_ONCE() has some
probability of complaining should some bug creep in.
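
In context, rcu_sysidle_check_cpu() would then look like this (abbreviated):

	static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
					  unsigned long *maxj)
	{
		...
		if (!*isidle || rdp->rsp != rcu_sysidle_state ||
		    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
			return;
		/* Check affinity, but only once a GP has bound the kthread. */
		if (rcu_gp_in_progress(rdp->rsp))
			WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
		...
	}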

Sound reasonable?

							Thanx, Paul

> >  	/* Pick up current idle and NMI-nesting counter and check. */
> >  	cur = atomic_read(&rdtp->dynticks_idle);
> > @@ -2570,6 +2570,20 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> >  }
> >  
> >  /*
> > + * Bind the grace-period kthread for the sysidle flavor of RCU to the
> > + * timekeeping CPU.
> > + */
> > +static void rcu_bind_gp_kthread(void)
> > +{
> > +	int cpu = ACCESS_ONCE(tick_do_timer_cpu);
> > +
> > +	if (cpu < 0 || cpu >= nr_cpu_ids)
> > +		return;
> > +	if (raw_smp_processor_id() != cpu)
> > +		set_cpus_allowed_ptr(current, cpumask_of(cpu));
> > +}
> > +
> > +/*
> >   * Return a delay in jiffies based on the number of CPUs, rcu_node
> >   * leaf fanout, and jiffies tick rate.  The idea is to allow larger
> >   * systems more time to transition to full-idle state in order to
> > @@ -2767,6 +2781,10 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> >  	return false;
> >  }
> >  
> > +static void rcu_bind_gp_kthread(void)
> > +{
> > +}
> > +
> >  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> >  				  unsigned long maxj)
> >  {
> 



* Re: [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  2013-07-29 16:52       ` Paul E. McKenney
@ 2013-07-29 16:59         ` Frederic Weisbecker
  2013-07-29 17:53           ` Paul E. McKenney
  2013-07-30  1:40         ` Lai Jiangshan
  1 sibling, 1 reply; 26+ messages in thread
From: Frederic Weisbecker @ 2013-07-29 16:59 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, niv, tglx, peterz, rostedt, dhowells,
	edumazet, darren, sbw

On Mon, Jul 29, 2013 at 09:52:53AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 29, 2013 at 11:36:05AM +0800, Lai Jiangshan wrote:
> However, on small systems, rcu_sysidle_check_cpu() can be called from
> the timekeeping CPU.  I suppose that this could potentially happen
> before the first grace period starts, and in that case, we could
> potentially see a spurious warning.  I could imagine a number of ways
> to fix this:
> 
> 1.	Bind the kthread when it is created.
> 
> 2.	Bind the kthread when it first starts running, rather than just
> 	after the grace period starts.
> 
> 3.	Suppress the warning when there is no grace period in progress.
> 
> 4.	Suppress the warning prior to the first grace period starting.
> 
> Seems like #3 is the most straightforward approach.  I just change it to:
> 
> 	if (rcu_gp_in_progress(rdp->rsp))
> 		WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
> 
> This still gets a WARN_ON_ONCE() if someone moves the timekeeping CPU,
> but Frederic tells me that it never moves.  My WARN_ON_ONCE() has some
> probability of complaining should some bug creep in.

It doesn't move for now, but keep in mind that it will probably be able
to move in the future. If we have several non-full-dynticks CPUs, balancing
the timekeeping duty between them, depending on which one runs at a given
time, may improve power savings even further.

But you can ignore that for now. Your patchset is entertaining enough that
we don't need to add more complications yet ;)


* Re: [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine
  2013-07-29  8:19     ` Lai Jiangshan
@ 2013-07-29 17:43       ` Paul E. McKenney
  0 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-29 17:43 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On Mon, Jul 29, 2013 at 04:19:48PM +0800, Lai Jiangshan wrote:
> On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > This commit adds the state machine that takes the per-CPU idle data
> > as input and produces a full-system-idle indication as output.  This
> > state machine is driven out of RCU's quiescent-state-forcing
> > mechanism, which invokes rcu_sysidle_check_cpu() to collect per-CPU
> > idle state and then rcu_sysidle_report() to drive the state machine.
> > 
> > The full-system-idle state is sampled using rcu_sys_is_idle(), which
> > also drives the state machine if RCU is idle (and does so by forcing
> > RCU to become non-idle).  This function returns true if all but the
> > timekeeping CPU (tick_do_timer_cpu) are idle and have been idle long
> > enough to avoid memory contention on the full_sysidle_state state
> > variable.  The rcu_sysidle_force_exit() may be called externally
> > to reset the state machine back into non-idle state.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> >  include/linux/rcupdate.h |  18 +++
> >  kernel/rcutree.c         |  16 ++-
> >  kernel/rcutree.h         |   5 +
> >  kernel/rcutree_plugin.h  | 284 ++++++++++++++++++++++++++++++++++++++++++++++-
> >  4 files changed, 316 insertions(+), 7 deletions(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 48f1ef9..1aa8d8c 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -1011,4 +1011,22 @@ static inline bool rcu_is_nocb_cpu(int cpu) { return false; }
> >  #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
> >  
> >  
> > +/* Only for use by adaptive-ticks code. */
> > +#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
> > +extern bool rcu_sys_is_idle(void);
> > +extern void rcu_sysidle_force_exit(void);
> > +#else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
> > +
> > +static inline bool rcu_sys_is_idle(void)
> > +{
> > +	return false;
> > +}
> > +
> > +static inline void rcu_sysidle_force_exit(void)
> > +{
> > +}
> > +
> > +#endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
> > +
> > +
> >  #endif /* __LINUX_RCUPDATE_H */
> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index 725524e..aa6d96e 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -718,6 +718,7 @@ static int dyntick_save_progress_counter(struct rcu_data *rdp,
> >  					 bool *isidle, unsigned long *maxj)
> >  {
> >  	rdp->dynticks_snap = atomic_add_return(0, &rdp->dynticks->dynticks);
> > +	rcu_sysidle_check_cpu(rdp, isidle, maxj);
> >  	return (rdp->dynticks_snap & 0x1) == 0;
> >  }
> >  
> > @@ -1356,11 +1357,17 @@ int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
> >  	rsp->n_force_qs++;
> >  	if (fqs_state == RCU_SAVE_DYNTICK) {
> >  		/* Collect dyntick-idle snapshots. */
> > +		if (is_sysidle_rcu_state(rsp)) {
> > +			isidle = 1;
> 
> isidle = true;
> The type of isidle is bool.
> 
> > +			maxj = jiffies - ULONG_MAX / 4;
> > +		}
> >  		force_qs_rnp(rsp, dyntick_save_progress_counter,
> >  			     &isidle, &maxj);
> > +		rcu_sysidle_report_gp(rsp, isidle, maxj);
> >  		fqs_state = RCU_FORCE_QS;
> >  	} else {
> >  		/* Handle dyntick-idle and offline CPUs. */
> > +		isidle = 0;
> 
> isidle = false;
> 
> >  		force_qs_rnp(rsp, rcu_implicit_dynticks_qs, &isidle, &maxj);
> >  	}
> >  	/* Clear flag to prevent immediate re-entry. */
> > @@ -2087,9 +2094,12 @@ static void force_qs_rnp(struct rcu_state *rsp,
> >  		cpu = rnp->grplo;
> >  		bit = 1;
> >  		for (; cpu <= rnp->grphi; cpu++, bit <<= 1) {
> > -			if ((rnp->qsmask & bit) != 0 &&
> > -			    f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
> > -				mask |= bit;
> > +			if ((rnp->qsmask & bit) != 0) {
> > +				if ((rnp->qsmaskinit & bit) != 0)
> > +					*isidle = 0;
> 
> *isidle = false;

All good catches, fixed.

> > +				if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
> > +					mask |= bit;
> > +			}
> >  		}
> >  		if (mask != 0) {
> >  
> > diff --git a/kernel/rcutree.h b/kernel/rcutree.h
> > index 1895043..e0de5dc 100644
> > --- a/kernel/rcutree.h
> > +++ b/kernel/rcutree.h
> > @@ -555,6 +555,11 @@ static void rcu_kick_nohz_cpu(int cpu);
> >  static bool init_nocb_callback_list(struct rcu_data *rdp);
> >  static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
> >  static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
> > +static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> > +				  unsigned long *maxj);
> > +static bool is_sysidle_rcu_state(struct rcu_state *rsp);
> > +static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> > +				  unsigned long maxj);
> >  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
> >  
> >  #endif /* #ifndef RCU_TREE_NONCORE */
> > diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> > index 3edae39..ff84bed 100644
> > --- a/kernel/rcutree_plugin.h
> > +++ b/kernel/rcutree_plugin.h
> > @@ -28,7 +28,7 @@
> >  #include <linux/gfp.h>
> >  #include <linux/oom.h>
> >  #include <linux/smpboot.h>
> > -#include <linux/tick.h>
> > +#include "time/tick-internal.h"
> >  
> >  #define RCU_KTHREAD_PRIO 1
> >  
> > @@ -2395,12 +2395,12 @@ static void rcu_kick_nohz_cpu(int cpu)
> >   * most active flavor of RCU.
> >   */
> >  #ifdef CONFIG_PREEMPT_RCU
> > -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_preempt_state;
> > +static struct rcu_state *rcu_sysidle_state = &rcu_preempt_state;
> >  #else /* #ifdef CONFIG_PREEMPT_RCU */
> > -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_sched_state;
> > +static struct rcu_state *rcu_sysidle_state = &rcu_sched_state;
> >  #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
> >  
> > -static int __maybe_unused full_sysidle_state; /* Current system-idle state. */
> > +static int full_sysidle_state;		/* Current system-idle state. */
> >  #define RCU_SYSIDLE_NOT		0	/* Some CPU is not idle. */
> >  #define RCU_SYSIDLE_SHORT	1	/* All CPUs idle for brief period. */
> >  #define RCU_SYSIDLE_LONG	2	/* All CPUs idle for long enough. */
> > @@ -2444,6 +2444,38 @@ static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
> >  }
> >  
> >  /*
> > + * Unconditionally force exit from full system-idle state.  This is
> > + * invoked when a normal CPU exits idle, but must be called separately
> > + * for the timekeeping CPU (tick_do_timer_cpu).  The reason for this
> > + * is that the timekeeping CPU is permitted to take scheduling-clock
> > + * interrupts while the system is in system-idle state, and of course
> > + * rcu_sysidle_exit() has no way of distinguishing a scheduling-clock
> > + * interrupt from any other type of interrupt.
> > + */
> > +void rcu_sysidle_force_exit(void)
> > +{
> > +	int oldstate = ACCESS_ONCE(full_sysidle_state);
> > +	int newoldstate;
> > +
> > +	/*
> > +	 * Each pass through the following loop attempts to exit full
> > +	 * system-idle state.  If contention proves to be a problem,
> > +	 * a trylock-based contention tree could be used here.
> > +	 */
> > +	while (oldstate > RCU_SYSIDLE_SHORT) {
> > +		newoldstate = cmpxchg(&full_sysidle_state,
> > +				      oldstate, RCU_SYSIDLE_NOT);
> > +		if (oldstate == newoldstate &&
> > +		    oldstate == RCU_SYSIDLE_FULL_NOTED) {
> > +			rcu_kick_nohz_cpu(tick_do_timer_cpu);
> > +			return; /* We cleared it, done! */
> > +		}
> > +		oldstate = newoldstate;
> > +	}
> > +	smp_mb(); /* Order initial oldstate fetch vs. later non-idle work. */
> 
> Why do we need this mb()?
> Which mb() is it paired with?

This one is for the case where we didn't do the cmpxchg() above.  The idea
is that if we saw oldstate as RCU_SYSIDLE_SHORT or RCU_SYSIDLE_NOT, then
anyone attempting to update the value who has seen any of our later
non-idle activity also "sees" our load from full_sysidle_state,
which reduces the state space a bit.  The barrier pairs with the various
cmpxchg() operations that advance full_sysidle_state.
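
Roughly (do_non_idle_work() below is just a stand-in for whatever this
CPU does after exiting idle):

	/* CPU exiting idle, in rcu_sysidle_force_exit(): */
	oldstate = ACCESS_ONCE(full_sysidle_state); /* Saw _NOT or _SHORT. */
	smp_mb();            /* Order the fetch above before... */
	do_non_idle_work();  /* ...any subsequent non-idle work. */

	/* Updater, e.g., advancing the state in rcu_sysidle(): */
	cmpxchg(&full_sysidle_state, RCU_SYSIDLE_SHORT, RCU_SYSIDLE_LONG);
	/* cmpxchg() implies full barriers, so an updater that has seen
	 * the non-idle work above also "sees" the load above. */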

							Thanx, Paul

> > +}
> > +
> > +/*
> >   * Invoked to note entry to irq or task transition from idle.  Note that
> >   * usermode execution does -not- count as idle here!  The caller must
> >   * have disabled interrupts.
> > @@ -2476,6 +2508,235 @@ static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
> >  	atomic_inc(&rdtp->dynticks_idle);
> >  	smp_mb__after_atomic_inc();
> >  	WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks_idle) & 0x1));
> > +
> > +	/*
> > +	 * If we are the timekeeping CPU, we are permitted to be non-idle
> > +	 * during a system-idle state.  This must be the case, because
> > +	 * the timekeeping CPU has to take scheduling-clock interrupts
> > +	 * during the time that the system is transitioning to full
> > +	 * system-idle state.  This means that the timekeeping CPU must
> > +	 * invoke rcu_sysidle_force_exit() directly if it does anything
> > +	 * more than take a scheduling-clock interrupt.
> > +	 */
> > +	if (smp_processor_id() == tick_do_timer_cpu)
> > +		return;
> > +
> > +	/* Update system-idle state: We are clearly no longer fully idle! */
> > +	rcu_sysidle_force_exit();
> > +}
> > +
> > +/*
> > + * Check to see if the current CPU is idle.  Note that usermode execution
> > + * does not count as idle.  The caller must have disabled interrupts.
> > + */
> > +static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> > +				  unsigned long *maxj)
> > +{
> > +	int cur;
> > +	unsigned long j;
> > +	struct rcu_dynticks *rdtp = rdp->dynticks;
> > +
> > +	/*
> > +	 * If some other CPU has already reported non-idle, if this is
> > +	 * not the flavor of RCU that tracks sysidle state, or if this
> > +	 * is an offline or the timekeeping CPU, nothing to do.
> > +	 */
> > +	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
> > +	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
> > +		return;
> > +	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
> > +
> > +	/* Pick up current idle and NMI-nesting counter and check. */
> > +	cur = atomic_read(&rdtp->dynticks_idle);
> > +	if (cur & 0x1) {
> > +		*isidle = 0; /* We are not idle! */
> 
> *isidle = false;
> 
> And likewise in the other places that use "isidle".
> 
> > +		return;
> > +	}
> > +	smp_mb(); /* Read counters before timestamps. */
> > +
> > +	/* Pick up timestamps. */
> > +	j = ACCESS_ONCE(rdtp->dynticks_idle_jiffies);
> > +	/* If this CPU entered idle more recently, update maxj timestamp. */
> > +	if (ULONG_CMP_LT(*maxj, j))
> > +		*maxj = j;
> > +}
> > +
> > +/*
> > + * Is this the flavor of RCU that is handling full-system idle?
> > + */
> > +static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> > +{
> > +	return rsp == rcu_sysidle_state;
> > +}
> > +
> > +/*
> > + * Return a delay in jiffies based on the number of CPUs, rcu_node
> > + * leaf fanout, and jiffies tick rate.  The idea is to allow larger
> > + * systems more time to transition to full-idle state in order to
> > + * avoid the cache thrashing that otherwise occur on the state variable.
> > + * Really small systems (less than a couple of tens of CPUs) should
> > + * instead use a single global atomically incremented counter, and later
> > + * versions of this will automatically reconfigure themselves accordingly.
> > + */
> > +static unsigned long rcu_sysidle_delay(void)
> > +{
> > +	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL)
> > +		return 0;
> > +	return DIV_ROUND_UP(nr_cpu_ids * HZ, rcu_fanout_leaf * 1000);
> > +}
> > +
> > +/*
> > + * Advance the full-system-idle state.  This is invoked when all of
> > + * the non-timekeeping CPUs are idle.
> > + */
> > +static void rcu_sysidle(unsigned long j)
> > +{
> > +	/* Check the current state. */
> > +	switch (ACCESS_ONCE(full_sysidle_state)) {
> > +	case RCU_SYSIDLE_NOT:
> > +
> > +		/* First time all are idle, so note a short idle period. */
> > +		ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_SHORT;
> > +		break;
> > +
> > +	case RCU_SYSIDLE_SHORT:
> > +
> > +		/*
> > +		 * Idle for a bit, time to advance to next state?
> > +		 * cmpxchg failure means race with non-idle, let them win.
> > +		 */
> > +		if (ULONG_CMP_GE(jiffies, j + rcu_sysidle_delay()))
> > +			(void)cmpxchg(&full_sysidle_state,
> > +				      RCU_SYSIDLE_SHORT, RCU_SYSIDLE_LONG);
> > +		break;
> 
> I don't think it can race with anybody.
> I think ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_LONG would be enough.
>
> Note:
> rcu_sysidle_force_exit() doesn't change full_sysidle_state when it is RCU_SYSIDLE_SHORT.
> 
> > +
> > +	case RCU_SYSIDLE_LONG:
> > +
> > +		/*
> > +		 * Do an additional check pass before advancing to full.
> > +		 * cmpxchg failure means race with non-idle, let them win.
> > +		 */
> > +		if (ULONG_CMP_GE(jiffies, j + rcu_sysidle_delay()))
> > +			(void)cmpxchg(&full_sysidle_state,
> > +				      RCU_SYSIDLE_LONG, RCU_SYSIDLE_FULL);
> > +		break;
> > +
> > +	default:
> > +		break;
> > +	}
> > +}
> > +
> > +/*
> > + * Found a non-idle non-timekeeping CPU, so kick the system-idle state
> > + * back to the beginning.
> > + */
> > +static void rcu_sysidle_cancel(void)
> > +{
> > +	smp_mb();
> > +	ACCESS_ONCE(full_sysidle_state) = RCU_SYSIDLE_NOT;
> > +}
> > +
> > +/*
> > + * Update the sysidle state based on the results of a force-quiescent-state
> > + * scan of the CPUs' dyntick-idle state.
> > + */
> > +static void rcu_sysidle_report(struct rcu_state *rsp, int isidle,
> > +			       unsigned long maxj, bool gpkt)
> > +{
> > +	if (rsp != rcu_sysidle_state)
> > +		return;  /* Wrong flavor, ignore. */
> > +	if (isidle) {
> > +		if (gpkt && nr_cpu_ids > RCU_SYSIDLE_SMALL)
> > +			rcu_sysidle(maxj);    /* More idle! */
> > +	} else {
> > +		rcu_sysidle_cancel(); /* Idle is over. */
> > +	}
> > +}
> 
> "gpkt" is always equal to "nr_cpu_ids > RCU_SYSIDLE_SMALL",
> 
> so we can remove "gpkt" argument and rcu_sysidle_report_gp().
> 
> 
> > +
> > +static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> > +				  unsigned long maxj)
> > +{
> > +	rcu_sysidle_report(rsp, isidle, maxj, true);
> > +}
> > +
> > +/* Callback and function for forcing an RCU grace period. */
> > +struct rcu_sysidle_head {
> > +	struct rcu_head rh;
> > +	int inuse;
> > +};
> > +
> > +static void rcu_sysidle_cb(struct rcu_head *rhp)
> > +{
> > +	struct rcu_sysidle_head *rshp;
> > +
> > +	smp_mb();  /* grace period precedes setting inuse. */
> 
> Why do we need this mb()?
> 
> > +	rshp = container_of(rhp, struct rcu_sysidle_head, rh);
> > +	ACCESS_ONCE(rshp->inuse) = 0;
> > +}
> > +
> > +/*
> > + * Check to see if the system is fully idle, other than the timekeeping CPU.
> > + * The caller must have disabled interrupts.
> > + */
> > +bool rcu_sys_is_idle(void)
> > +{
> > +	static struct rcu_sysidle_head rsh;
> > +	int rss = ACCESS_ONCE(full_sysidle_state);
> > +
> > +	if (WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu))
> > +		return false;
> > +
> > +	/* Handle small-system case by doing a full scan of CPUs. */
> > +	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL) {
> > +		int oldrss = rss - 1;
> > +
> > +		/*
> > +		 * One pass to advance to each state up to _FULL.
> > +		 * Give up if any pass fails to advance the state.
> > +		 */
> > +		while (rss < RCU_SYSIDLE_FULL && oldrss < rss) {
> > +			int cpu;
> > +			bool isidle = true;
> > +			unsigned long maxj = jiffies - ULONG_MAX / 4;
> > +			struct rcu_data *rdp;
> > +
> > +			/* Scan all the CPUs looking for nonidle CPUs. */
> > +			for_each_possible_cpu(cpu) {
> > +				rdp = per_cpu_ptr(rcu_sysidle_state->rda, cpu);
> > +				rcu_sysidle_check_cpu(rdp, &isidle, &maxj);
> > +				if (!isidle)
> > +					break;
> > +			}
> > +			rcu_sysidle_report(rcu_sysidle_state,
> > +					   isidle, maxj, false);
> > +			oldrss = rss;
> > +			rss = ACCESS_ONCE(full_sysidle_state);
> > +		}
> > +	}
> > +
> > +	/* If this is the first observation of an idle period, record it. */
> > +	if (rss == RCU_SYSIDLE_FULL) {
> > +		rss = cmpxchg(&full_sysidle_state,
> > +			      RCU_SYSIDLE_FULL, RCU_SYSIDLE_FULL_NOTED);
> > +		return rss == RCU_SYSIDLE_FULL;
> > +	}
> > +
> > +	smp_mb(); /* ensure rss load happens before later caller actions. */
> > +
> > +	/* If already fully idle, tell the caller (in case of races). */
> > +	if (rss == RCU_SYSIDLE_FULL_NOTED)
> > +		return true;
> > +
> > +	/*
> > +	 * If we aren't there yet, and a grace period is not in flight,
> > +	 * initiate a grace period.  Either way, tell the caller that
> > +	 * we are not there yet.
> > +	 */
> > +	if (nr_cpu_ids > RCU_SYSIDLE_SMALL &&
> > +	    !rcu_gp_in_progress(rcu_sysidle_state) &&
> > +	    !rsh.inuse && xchg(&rsh.inuse, 1) == 0)
> 
> Why do we need to use xchg()?  Who would it race with?
> 
> > +		call_rcu(&rsh.rh, rcu_sysidle_cb);
> > +	return false;
> >  }
> >  
> >  /*
> > @@ -2496,6 +2757,21 @@ static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
> >  {
> >  }
> >  
> > +static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> > +				  unsigned long *maxj)
> > +{
> > +}
> > +
> > +static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> > +{
> > +	return false;
> > +}
> > +
> > +static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> > +				  unsigned long maxj)
> > +{
> > +}
> > +
> >  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
> >  {
> >  }
> 



* Re: [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  2013-07-29 16:59         ` Frederic Weisbecker
@ 2013-07-29 17:53           ` Paul E. McKenney
  0 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-29 17:53 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, niv, tglx, peterz, rostedt, dhowells,
	edumazet, darren, sbw

On Mon, Jul 29, 2013 at 06:59:46PM +0200, Frederic Weisbecker wrote:
> On Mon, Jul 29, 2013 at 09:52:53AM -0700, Paul E. McKenney wrote:
> > On Mon, Jul 29, 2013 at 11:36:05AM +0800, Lai Jiangshan wrote:
> > However, on small systems, rcu_sysidle_check_cpu() can be called from
> > the timekeeping CPU.  I suppose that this could potentially happen
> > before the first grace period starts, and in that case, we could
> > potentially see a spurious warning.  I could imagine a number of ways
> > to fix this:
> > 
> > 1.	Bind the kthread when it is created.
> > 
> > 2.	Bind the kthread when it first starts running, rather than just
> > 	after the grace period starts.
> > 
> > 3.	Suppress the warning when there is no grace period in progress.
> > 
> > 4.	Suppress the warning prior to the first grace period starting.
> > 
> > Seems like #3 is the most straightforward approach.  I just change it to:
> > 
> > 	if (rcu_gp_in_progress(rdp->rsp))
> > 		WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
> > 
> > This still gets a WARN_ON_ONCE() if someone moves the timekeeping CPU,
> > but Frederic tells me that it never moves.  My WARN_ON_ONCE() has some
> > probability of complaining should some bug creep in.
> 
> It doesn't move for now, but keep in mind that it will probably be able
> to move in the future. If we have several non-full-dynticks CPUs, balancing
> the timekeeping duty between them, depending on which one runs at a given
> time, may improve power savings even further.
> 
> But you can ignore that for now. Your patchset is entertaining enough that
> we don't need to add more complications yet ;)

Yeah, we will need some sort of handshake for that.  Might be as simple
as setting a flag that suppresses the warning, which I clear the next
time I bind the kthread.  Well, it would need to deal with closely-spaced
moves of the timekeeping duty, wouldn't it?  Plus it would need to deal
with the fact that sampling the variable referencing the timekeeping CPU,
sampling the current CPU, and binding the kthread cannot be done as one
big atomic operation.  Which means that there would need to be two calls
in the handshake, one to prepare to move the timekeeping CPU and another
to announce that it had in fact been moved.

Which is not too hard -- I use an irq-disabled lock to guard setting and
clearing an internal-to-RCU flag noting the upcoming change.  The
check and WARN_ON_ONCE() are done while holding this same lock.  The
flag is cleared only after the kthread-bind operation that follows the
last "it has in fact been moved" handshake.  So the flag has three states:
idle, ready to move, and moved.  The possibility of closely spaced moves
of the timekeeping duty is dealt with by transitioning from "moved"
back to "ready to move".  The state goes back to "idle" only after
completion of a kthread-bind operation in the "moved" state.
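
As a rough sketch (untested, and all of the names below are made up):

	/* Hypothetical handshake state, guarded by an irq-disabled lock. */
	enum rcu_tk_move { TK_IDLE, TK_READY_TO_MOVE, TK_MOVED };
	static enum rcu_tk_move rcu_tk_move_state;
	static DEFINE_RAW_SPINLOCK(rcu_tk_move_lock);

	/*
	 * Timekeeping invokes this before moving tick_do_timer_cpu.
	 * A closely spaced second move simply transitions "moved"
	 * back to "ready to move".
	 */
	void rcu_tk_move_prepare(void)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&rcu_tk_move_lock, flags);
		rcu_tk_move_state = TK_READY_TO_MOVE; /* Suppress warning. */
		raw_spin_unlock_irqrestore(&rcu_tk_move_lock, flags);
	}

	/* ...and this once the duty has in fact been moved. */
	void rcu_tk_move_done(void)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&rcu_tk_move_lock, flags);
		rcu_tk_move_state = TK_MOVED;
		raw_spin_unlock_irqrestore(&rcu_tk_move_lock, flags);
	}

The WARN_ON_ONCE() would then be checked under the same lock and
suppressed unless the state is TK_IDLE, and rcu_bind_gp_kthread() would
reset the state to TK_IDLE after rebinding in the TK_MOVED state.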

So agreed, let's defer this one.  ;-)

							Thanx, Paul



* Re: [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  2013-07-29 16:52       ` Paul E. McKenney
  2013-07-29 16:59         ` Frederic Weisbecker
@ 2013-07-30  1:40         ` Lai Jiangshan
  2013-07-30 17:45           ` Paul E. McKenney
  1 sibling, 1 reply; 26+ messages in thread
From: Lai Jiangshan @ 2013-07-30  1:40 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On 07/30/2013 12:52 AM, Paul E. McKenney wrote:
> On Mon, Jul 29, 2013 at 11:36:05AM +0800, Lai Jiangshan wrote:
>> On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
>>> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>>>
>>> Because RCU's quiescent-state-forcing mechanism is used to drive the
>>> full-system-idle state machine, and because this mechanism is executed
>>> by RCU's grace-period kthreads, this commit forces these kthreads to
>>> run on the timekeeping CPU (tick_do_timer_cpu).  To do otherwise would
>>> mean that the RCU grace-period kthreads would force the system into
>>> non-idle state every time they drove the state machine, which would
>>> be just a bit on the futile side.
>>>
>>> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>>> Cc: Frederic Weisbecker <fweisbec@gmail.com>
>>> Cc: Steven Rostedt <rostedt@goodmis.org>
>>> ---
>>>  kernel/rcutree.c        |  1 +
>>>  kernel/rcutree.h        |  1 +
>>>  kernel/rcutree_plugin.h | 20 +++++++++++++++++++-
>>>  3 files changed, 21 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
>>> index aa6d96e..fe83085 100644
>>> --- a/kernel/rcutree.c
>>> +++ b/kernel/rcutree.c
>>> @@ -1286,6 +1286,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
>>>  	struct rcu_data *rdp;
>>>  	struct rcu_node *rnp = rcu_get_root(rsp);
>>>  
>>> +	rcu_bind_gp_kthread();
>>>  	raw_spin_lock_irq(&rnp->lock);
>>>  	rsp->gp_flags = 0; /* Clear all flags: New grace period. */
>>
>> You bind the gp thread when RCU_GP_FLAG_INIT is set ...
>>
>>>  
>>> diff --git a/kernel/rcutree.h b/kernel/rcutree.h
>>> index e0de5dc..49dac99 100644
>>> --- a/kernel/rcutree.h
>>> +++ b/kernel/rcutree.h
>>> @@ -560,6 +560,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
>>>  static bool is_sysidle_rcu_state(struct rcu_state *rsp);
>>>  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
>>>  				  unsigned long maxj);
>>> +static void rcu_bind_gp_kthread(void);
>>>  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
>>>  
>>>  #endif /* #ifndef RCU_TREE_NONCORE */
>>> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
>>> index ff84bed..f65d9c2 100644
>>> --- a/kernel/rcutree_plugin.h
>>> +++ b/kernel/rcutree_plugin.h
>>> @@ -2544,7 +2544,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
>>>  	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
>>>  	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
>>>  		return;
>>> -	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
>>> +	WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
>>
>>
>> but call rcu_sysidle_check_cpu() when RCU_GP_FLAG_FQS is set.
> 
> Yep!  But we don't call rcu_gp_fqs() until the grace period is started,
> by which time the kthread will be bound.  Any setting of RCU_GP_FLAG_FQS
> while there is no grace period in progress is ignored.

tick_do_timer_cpu can change.
When rcu_gp_fqs() is called, tick_do_timer_cpu may be a different CPU.

xxx_thread()
{
	/* bind itself to tick_do_timer_cpu */
	rcu_bind_gp_kthread();
	sleep(); /* tick_do_timer_cpu can change while sleeping */
	/* ...and then uses a stale tick_do_timer_cpu */
}

> 
>> At that time, the thread may not be bound to tick_do_timer_cpu,
>> so the WARN_ON_ONCE() may fire spuriously.
>>
>> Does any other code that I missed ensure the gp thread is bound
>> to tick_do_timer_cpu?
> 
> However, on small systems, rcu_sysidle_check_cpu() can be called from
> the timekeeping CPU.  I suppose that this could potentially happen
> before the first grace period starts, and in that case, we could
> potentially see a spurious warning.  I could imagine a number of ways
> to fix this:
> 
> 1.	Bind the kthread when it is created.
> 
> 2.	Bind the kthread when it first starts running, rather than just
> 	after the grace period starts.
> 
> 3.	Suppress the warning when there is no grace period in progress.
> 
> 4.	Suppress the warning prior to the first grace period starting.
> 
> Seems like #3 is the most straightforward approach.  I just change it to:
> 
> 	if (rcu_gp_in_progress(rdp->rsp))
> 		WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
> 
> This still gets a WARN_ON_ONCE() if someone moves the timekeeping CPU,
> but Frederic tells me that it never moves.  My WARN_ON_ONCE() has some
> probability of complaining should some bug creep in.
> 
> Sound reasonable?
> 
> 							Thanx, Paul
> 
>>>  	/* Pick up current idle and NMI-nesting counter and check. */
>>>  	cur = atomic_read(&rdtp->dynticks_idle);
>>> @@ -2570,6 +2570,20 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
>>>  }
>>>  
>>>  /*
>>> + * Bind the grace-period kthread for the sysidle flavor of RCU to the
>>> + * timekeeping CPU.
>>> + */
>>> +static void rcu_bind_gp_kthread(void)
>>> +{
>>> +	int cpu = ACCESS_ONCE(tick_do_timer_cpu);
>>> +
>>> +	if (cpu < 0 || cpu >= nr_cpu_ids)
>>> +		return;
>>> +	if (raw_smp_processor_id() != cpu)
>>> +		set_cpus_allowed_ptr(current, cpumask_of(cpu));
>>> +}
>>> +
>>> +/*
>>>   * Return a delay in jiffies based on the number of CPUs, rcu_node
>>>   * leaf fanout, and jiffies tick rate.  The idea is to allow larger
>>>   * systems more time to transition to full-idle state in order to
>>> @@ -2767,6 +2781,10 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
>>>  	return false;
>>>  }
>>>  
>>> +static void rcu_bind_gp_kthread(void)
>>> +{
>>> +}
>>> +
>>>  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
>>>  				  unsigned long maxj)
>>>  {
>>
> 
> 



* Re: [PATCH RFC nohz_full 7/7] nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  2013-07-30  1:40         ` Lai Jiangshan
@ 2013-07-30 17:45           ` Paul E. McKenney
  0 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-07-30 17:45 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	niv, tglx, peterz, rostedt, dhowells, edumazet, darren, fweisbec,
	sbw

On Tue, Jul 30, 2013 at 09:40:03AM +0800, Lai Jiangshan wrote:
> On 07/30/2013 12:52 AM, Paul E. McKenney wrote:
> > On Mon, Jul 29, 2013 at 11:36:05AM +0800, Lai Jiangshan wrote:
> >> On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> >>> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> >>>
> >>> Because RCU's quiescent-state-forcing mechanism is used to drive the
> >>> full-system-idle state machine, and because this mechanism is executed
> >>> by RCU's grace-period kthreads, this commit forces these kthreads to
> >>> run on the timekeeping CPU (tick_do_timer_cpu).  To do otherwise would
> >>> mean that the RCU grace-period kthreads would force the system into
> >>> non-idle state every time they drove the state machine, which would
> >>> be just a bit on the futile side.
> >>>
> >>> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >>> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> >>> Cc: Steven Rostedt <rostedt@goodmis.org>
> >>> ---
> >>>  kernel/rcutree.c        |  1 +
> >>>  kernel/rcutree.h        |  1 +
> >>>  kernel/rcutree_plugin.h | 20 +++++++++++++++++++-
> >>>  3 files changed, 21 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> >>> index aa6d96e..fe83085 100644
> >>> --- a/kernel/rcutree.c
> >>> +++ b/kernel/rcutree.c
> >>> @@ -1286,6 +1286,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
> >>>  	struct rcu_data *rdp;
> >>>  	struct rcu_node *rnp = rcu_get_root(rsp);
> >>>  
> >>> +	rcu_bind_gp_kthread();
> >>>  	raw_spin_lock_irq(&rnp->lock);
> >>>  	rsp->gp_flags = 0; /* Clear all flags: New grace period. */
> >>
> >> You bind the gp thread when RCU_GP_FLAG_INIT is set ...
> >>
> >>>  
> >>> diff --git a/kernel/rcutree.h b/kernel/rcutree.h
> >>> index e0de5dc..49dac99 100644
> >>> --- a/kernel/rcutree.h
> >>> +++ b/kernel/rcutree.h
> >>> @@ -560,6 +560,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> >>>  static bool is_sysidle_rcu_state(struct rcu_state *rsp);
> >>>  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> >>>  				  unsigned long maxj);
> >>> +static void rcu_bind_gp_kthread(void);
> >>>  static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
> >>>  
> >>>  #endif /* #ifndef RCU_TREE_NONCORE */
> >>> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> >>> index ff84bed..f65d9c2 100644
> >>> --- a/kernel/rcutree_plugin.h
> >>> +++ b/kernel/rcutree_plugin.h
> >>> @@ -2544,7 +2544,7 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
> >>>  	if (!*isidle || rdp->rsp != rcu_sysidle_state ||
> >>>  	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
> >>>  		return;
> >>> -	/* WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu); */
> >>> +	WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
> >>
> >>
> >> but call rcu_sysidle_check_cpu() when RCU_GP_FLAG_FQS is set.
> > 
> > Yep!  But we don't call rcu_gp_fqs() until the grace period is started,
> > by which time the kthread will be bound.  Any setting of RCU_GP_FLAG_FQS
> > while there is no grace period in progress is ignored.
> 
> tick_do_timer_cpu can change.
> When rcu_gp_fqs() is called, tick_do_timer_cpu may be a different CPU.
> 
> xxx_thread()
> {
> 	/* bind itself to tick_do_timer_cpu */
> 	rcu_bind_gp_kthread();
> 	sleep(); /* tick_do_timer_cpu can change while sleeping */
> 	/* ...and then uses a stale tick_do_timer_cpu */
> }

Yes, but Frederic's patches currently disable changing of tick_do_timer_cpu.
He said that he will re-enable this at some point, and he and I will need
to coordinate that change to allow RCU to tolerate tick_do_timer_cpu
migration.
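
If it helps to picture that coordination, here is a purely illustrative
sketch of a binding that re-checks for migration.  This is not part of
this series; the retry loop and its termination condition are my own
assumption about what tolerating migration might look like:

	static void rcu_bind_gp_kthread(void)
	{
		int cpu;

		for (;;) {
			cpu = ACCESS_ONCE(tick_do_timer_cpu);
			if (cpu < 0 || cpu >= nr_cpu_ids)
				return;	/* No timekeeping CPU designated. */
			if (raw_smp_processor_id() != cpu)
				set_cpus_allowed_ptr(current, cpumask_of(cpu));
			/* Re-read in case timekeeping duty moved meanwhile. */
			if (ACCESS_ONCE(tick_do_timer_cpu) == cpu)
				return;
		}
	}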

							Thanx, Paul

> >> At that time, the thread may not yet be bound to tick_do_timer_cpu,
> >> so the WARN_ON_ONCE() may fire spuriously.
> >>
> >> Is there other code that I missed which ensures that the gp thread
> >> is bound to tick_do_timer_cpu?
> > 
> > However, on small systems, rcu_sysidle_check_cpu() can be called from
> > the timekeeping CPU.  I suppose that this could potentially happen
> > before the first grace period starts, and in that case, we could
> > potentially see a spurious warning.  I could imagine a number of ways
> > to fix this:
> > 
> > 1.	Bind the kthread when it is created.
> > 
> > 2.	Bind the kthread when it first starts running, rather than just
> > 	after the grace period starts.
> > 
> > 3.	Suppress the warning when there is no grace period in progress.
> > 
> > 4.	Suppress the warning prior to the first grace period starting.
> > 
> > Seems like #3 is the most straightforward approach.  I just change it to:
> > 
> > 	if (rcu_gp_in_progress(rdp->rsp))
> > 		WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
> > 
> > This still gets a WARN_ON_ONCE() if someone moves the timekeeping CPU,
> > but Frederic tells me that it never moves.  My WARN_ON_ONCE() has some
> > probability of complaining should some bug creep in.
> > 
> > Sound reasonable?
> > 
> > 							Thanx, Paul
> > 
> >>>  	/* Pick up current idle and NMI-nesting counter and check. */
> >>>  	cur = atomic_read(&rdtp->dynticks_idle);
> >>> @@ -2570,6 +2570,20 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> >>>  }
> >>>  
> >>>  /*
> >>> + * Bind the grace-period kthread for the sysidle flavor of RCU to the
> >>> + * timekeeping CPU.
> >>> + */
> >>> +static void rcu_bind_gp_kthread(void)
> >>> +{
> >>> +	int cpu = ACCESS_ONCE(tick_do_timer_cpu);
> >>> +
> >>> +	if (cpu < 0 || cpu >= nr_cpu_ids)
> >>> +		return;
> >>> +	if (raw_smp_processor_id() != cpu)
> >>> +		set_cpus_allowed_ptr(current, cpumask_of(cpu));
> >>> +}
> >>> +
> >>> +/*
> >>>   * Return a delay in jiffies based on the number of CPUs, rcu_node
> >>>   * leaf fanout, and jiffies tick rate.  The idea is to allow larger
> >>>   * systems more time to transition to full-idle state in order to
> >>> @@ -2767,6 +2781,10 @@ static bool is_sysidle_rcu_state(struct rcu_state *rsp)
> >>>  	return false;
> >>>  }
> >>>  
> >>> +static void rcu_bind_gp_kthread(void)
> >>> +{
> >>> +}
> >>> +
> >>>  static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
> >>>  				  unsigned long maxj)
> >>>  {
> >>
> > 
> > 
> 



* Re: [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state
  2013-07-26 23:19 ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Paul E. McKenney
                     ` (6 preceding siblings ...)
  2013-07-29  3:35   ` [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state Lai Jiangshan
@ 2013-08-05  1:04   ` Frederic Weisbecker
  2013-08-17 23:38     ` Paul E. McKenney
  7 siblings, 1 reply; 26+ messages in thread
From: Frederic Weisbecker @ 2013-08-05  1:04 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, dhowells, edumazet, darren,
	sbw

On Fri, Jul 26, 2013 at 04:19:18PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> At least one CPU must keep the scheduling-clock tick running for
> timekeeping purposes whenever there is a non-idle CPU.  However, with
> the new nohz_full adaptive-idle machinery, it is difficult to distinguish
> between all CPUs really being idle as opposed to all non-idle CPUs being
> in adaptive-ticks mode.  This commit therefore adds a Kconfig parameter
> as a first step towards enabling a scalable detection of full-system
> idle state.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/time/Kconfig | 23 +++++++++++++++++++++++
>  1 file changed, 23 insertions(+)
> 
> diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
> index 70f27e8..a613c2a 100644
> --- a/kernel/time/Kconfig
> +++ b/kernel/time/Kconfig
> @@ -134,6 +134,29 @@ config NO_HZ_FULL_ALL
>  	 Note the boot CPU will still be kept outside the range to
>  	 handle the timekeeping duty.
>  
> +config NO_HZ_FULL_SYSIDLE
> +	bool "Detect full-system idle state for full dynticks system"
> +	depends on NO_HZ_FULL
> +	default n
> +	help
> +	 At least one CPU must keep the scheduling-clock tick running
> +	 for timekeeping purposes whenever there is a non-idle CPU,
> +	 where "non-idle" includes CPUs with a single runnable task
> +	 in adaptive-idle mode.

"adaptive-idle" is particularly confusing here. How about this:

    'where "non-idle" also includes dynticks CPUs as long as they are
    running non-idle tasks.'

> +	 Because the underlying adaptive-tick
> +	 support cannot distinguish between all CPUs being idle and
> +	 all CPUs each running a single task in adaptive-idle mode,

s/adaptive-idle/dynticks

Thanks.

> +	 the underlying support simply ensures that there is always
> +	 a CPU handling the scheduling-clock tick, whether or not all
> +	 CPUs are idle.  This Kconfig option enables scalable detection
> +	 of the all-CPUs-idle state, thus allowing the scheduling-clock
> +	 tick to be disabled when all CPUs are idle.  Note that scalable
> +	 detection of the all-CPUs-idle state means that larger systems
> +	 will be slower to declare the all-CPUs-idle state.
> +
> +	 Say Y if you would like to help debug all-CPUs-idle detection.
> +
> +	 Say N if you are unsure.
> +
>  config NO_HZ
>  	bool "Old Idle dynticks config"
>  	depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS
> -- 
> 1.8.1.5
> 


* Re: [PATCH RFC nohz_full 2/7] nohz_full: Add rcu_dyntick data for scalable detection of all-idle state
  2013-07-26 23:19   ` [PATCH RFC nohz_full 2/7] nohz_full: Add rcu_dyntick data " Paul E. McKenney
@ 2013-08-05  1:26     ` Frederic Weisbecker
  0 siblings, 0 replies; 26+ messages in thread
From: Frederic Weisbecker @ 2013-08-05  1:26 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, dhowells, edumazet, darren,
	sbw

On Fri, Jul 26, 2013 at 04:19:19PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> This commit adds fields to the rcu_dyntick structure that are used to
> detect idle CPUs.  These new fields differ from the existing ones in
> that the existing ones consider a CPU executing in user mode to be idle,
> whereas the new ones consider CPUs executing in user mode to be busy.
> The handling of these new fields is otherwise quite similar to that for
> the existing fields.  This commit also adds the initialization required
> for these fields.
> 
> So, why is usermode execution treated differently, with RCU considering
> it a quiescent state equivalent to idle, while in contrast the new
> full-system idle state detection considers usermode execution to be
> non-idle?
> 
> It turns out that although one of RCU's quiescent states is usermode
> execution, it is not a full-system idle state.  This is because the
> purpose of the full-system idle state is not RCU, but rather determining
> when accurate timekeeping can safely be disabled.  Whenever accurate
> timekeeping is required in a CONFIG_NO_HZ_FULL kernel, at least one
> CPU must keep the scheduling-clock tick going.  If even one CPU is
> executing in user mode, accurate timekeeping is required, particularly for
> architectures where gettimeofday() and friends do not enter the kernel.
> Only when all CPUs are really and truly idle can accurate timekeeping be
> disabled, allowing all CPUs to turn off the scheduling clock interrupt,
> thus greatly improving energy efficiency.
> 
> This naturally raises the question "Why is this code in RCU rather than in
> timekeeping?", and the answer is that RCU has the data and infrastructure
> to efficiently make this determination.

Right, and it's somewhat disturbing that this code lives in RCU, but yes,
the infrastructure is there.

It would perhaps be neater to have a specific RCU flavour whose only
quiescent state is the fully idle system.  But as you said, iterating
over yet another RCU flavour adds overhead, whereas we can
opportunistically reuse the traditional RCU flavour, since it is often
handling callbacks anyway.  Too bad.

Anyway, Acked-by: Frederic Weisbecker <fweisbec@gmail.com>

Thanks.
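
For readers without the diff at hand, a rough sketch of the sort of
fields this commit adds to rcu_dynticks; the exact names, types, and
comments here are my reconstruction from the commit log, not a quote
of the patch:

	struct rcu_dynticks {
		/* ... existing dynticks fields ... */
	#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
		long long dynticks_idle_nesting;
					/* irq/process nesting level, but */
					/*  with usermode counted as busy. */
		atomic_t dynticks_idle;	/* Even value means CPU is idle. */
		unsigned long dynticks_idle_jiffies;
					/* Time of last tick or interrupt. */
	#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
	};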


* Re: [PATCH RFC nohz_full 3/7] nohz_full: Add per-CPU idle-state tracking
  2013-07-26 23:19   ` [PATCH RFC nohz_full 3/7] nohz_full: Add per-CPU idle-state tracking Paul E. McKenney
@ 2013-08-09 15:37     ` Frederic Weisbecker
  0 siblings, 0 replies; 26+ messages in thread
From: Frederic Weisbecker @ 2013-08-09 15:37 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, dhowells, edumazet, darren,
	sbw

On Fri, Jul 26, 2013 at 04:19:20PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> This commit adds the code that updates the rcu_dyntick structure's
> new fields to track the per-CPU idle state based on interrupts and
> transitions into and out of the idle loop (NMIs are ignored because NMI
> handlers cannot cleanly read out the time anyway).  This code is similar
> to the code that maintains RCU's idea of per-CPU idleness, but differs
> in that RCU treats CPUs running in user mode as idle, where this new
> code does not.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
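
For the curious, a simplified sketch of the idle-entry side of that
tracking, assuming the rcu_dynticks fields from patch 2.  The nesting
handling is reduced to its simplest form and elides the series'
task-level nesting constants, so treat this as an approximation:

	static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
	{
		unsigned long j;

		/* Adjust nesting: process-level entry resets the count. */
		if (irq)
			rdtp->dynticks_idle_nesting--;
		else
			rdtp->dynticks_idle_nesting = 0;
		if (rdtp->dynticks_idle_nesting != 0)
			return;	/* Still non-idle at some nesting level. */

		/* Record idle-entry time, then flip the counter to even. */
		j = jiffies;
		ACCESS_ONCE(rdtp->dynticks_idle_jiffies) = j;
		smp_mb__before_atomic_inc();
		atomic_inc(&rdtp->dynticks_idle);
		smp_mb__after_atomic_inc();
		WARN_ON_ONCE(atomic_read(&rdtp->dynticks_idle) & 0x1);
	}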


* Re: [PATCH RFC nohz_full 4/7] nohz_full: Add full-system idle states and variables
  2013-07-26 23:19   ` [PATCH RFC nohz_full 4/7] nohz_full: Add full-system idle states and variables Paul E. McKenney
@ 2013-08-09 15:44     ` Frederic Weisbecker
  0 siblings, 0 replies; 26+ messages in thread
From: Frederic Weisbecker @ 2013-08-09 15:44 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, dhowells, edumazet, darren,
	sbw

On Fri, Jul 26, 2013 at 04:19:21PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> This commit adds control variables and states for full-system idle.
> The system will progress through the states in numerical order when
> the system is fully idle (other than the timekeeping CPU), and reset
> down to the initial state if any non-timekeeping CPU goes non-idle.
> The current state is kept in full_sysidle_state.
> 
> An RCU_SYSIDLE_SMALL macro is defined, and systems with this number
> of CPUs or fewer move through the states more aggressively.  The idea
> is that the resulting memory contention is less of a problem on small
> systems.  Architectures can adjust this value (which defaults to 8)
> using CONFIG_ARCH_RCU_SYSIDLE_SMALL.
> 
> One flavor of RCU will be in charge of driving the state machine,
> defined by rcu_sysidle_state.  This should be the busiest flavor of RCU.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/rcutree_plugin.h | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> index 814ff47..3edae39 100644
> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -2380,6 +2380,34 @@ static void rcu_kick_nohz_cpu(int cpu)
>  #ifdef CONFIG_NO_HZ_FULL_SYSIDLE
>  
>  /*
> + * Handle small systems specially, accelerating their transition into
> + * full idle state.  Allow arches to override this code's idea of
> + * what constitutes a "small" system.
> + */
> +#ifdef CONFIG_ARCH_RCU_SYSIDLE_SMALL
> +#define RCU_SYSIDLE_SMALL CONFIG_ARCH_RCU_SYSIDLE_SMALL
> +#else /* #ifdef CONFIG_ARCH_RCU_SYSIDLE_SMALL */
> +#define RCU_SYSIDLE_SMALL 8
> +#endif
> +
> +/*
> + * Define RCU flavor that holds sysidle state.  This needs to be the
> + * most active flavor of RCU.
> + */
> +#ifdef CONFIG_PREEMPT_RCU
> +static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_preempt_state;
> +#else /* #ifdef CONFIG_PREEMPT_RCU */
> +static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_sched_state;
> +#endif /* #else #ifdef CONFIG_PREEMPT_RCU */

Why the maybe_unused here? Couldn't we get rid of it if those definitions were
under NO_HZ_FULL_SYSIDLE?

> +
> +static int __maybe_unused full_sysidle_state; /* Current system-idle state. */

Ditto here?

> +#define RCU_SYSIDLE_NOT		0	/* Some CPU is not idle. */
> +#define RCU_SYSIDLE_SHORT	1	/* All CPUs idle for brief period. */
> +#define RCU_SYSIDLE_LONG	2	/* All CPUs idle for long enough. */
> +#define RCU_SYSIDLE_FULL	3	/* All CPUs idle, ready for sysidle. */
> +#define RCU_SYSIDLE_FULL_NOTED	4	/* Actually entered sysidle state. */

This may be better as an enum.  That way, the variables that store such
values can carry the type, and review becomes easier.
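
For instance (a sketch of the suggestion, reusing the names and
comments from the patch):

	enum full_sysidle_state {
		RCU_SYSIDLE_NOT = 0,	/* Some CPU is not idle. */
		RCU_SYSIDLE_SHORT,	/* All CPUs idle for brief period. */
		RCU_SYSIDLE_LONG,	/* All CPUs idle for long enough. */
		RCU_SYSIDLE_FULL,	/* All CPUs idle, ready for sysidle. */
		RCU_SYSIDLE_FULL_NOTED,	/* Actually entered sysidle state. */
	};

	static enum full_sysidle_state full_sysidle_state;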

> +
> +/*
>   * Invoked to note exit from irq or task transition to idle.  Note that
>   * usermode execution does -not- count as idle here!  After all, we want
>   * to detect full-system idle states, not RCU quiescent states and grace
> -- 
> 1.8.1.5
> 


* Re: [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine
  2013-07-26 23:19   ` [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine Paul E. McKenney
  2013-07-29  8:19     ` Lai Jiangshan
@ 2013-08-09 16:20     ` Frederic Weisbecker
  2013-08-14  3:07       ` Paul E. McKenney
  1 sibling, 1 reply; 26+ messages in thread
From: Frederic Weisbecker @ 2013-08-09 16:20 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, dhowells, edumazet, darren,
	sbw

On Fri, Jul 26, 2013 at 04:19:23PM -0700, Paul E. McKenney wrote:
> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> index 3edae39..ff84bed 100644
> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -28,7 +28,7 @@
>  #include <linux/gfp.h>
>  #include <linux/oom.h>
>  #include <linux/smpboot.h>
> -#include <linux/tick.h>
> +#include "time/tick-internal.h"
>  
>  #define RCU_KTHREAD_PRIO 1
>  
> @@ -2395,12 +2395,12 @@ static void rcu_kick_nohz_cpu(int cpu)
>   * most active flavor of RCU.
>   */
>  #ifdef CONFIG_PREEMPT_RCU
> -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_preempt_state;
> +static struct rcu_state *rcu_sysidle_state = &rcu_preempt_state;
>  #else /* #ifdef CONFIG_PREEMPT_RCU */
> -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_sched_state;
> +static struct rcu_state *rcu_sysidle_state = &rcu_sched_state;
>  #endif /* #else #ifdef CONFIG_PREEMPT_RCU */

Ah you fixed it here. Ok :)

>  
> -static int __maybe_unused full_sysidle_state; /* Current system-idle state. */
> +static int full_sysidle_state;		/* Current system-idle state. */
>  #define RCU_SYSIDLE_NOT		0	/* Some CPU is not idle. */
>  #define RCU_SYSIDLE_SHORT	1	/* All CPUs idle for brief period. */
>  #define RCU_SYSIDLE_LONG	2	/* All CPUs idle for long enough. */
[...]
> +/*
> + * Check to see if the system is fully idle, other than the timekeeping CPU.
> + * The caller must have disabled interrupts.
> + */
> +bool rcu_sys_is_idle(void)
> +{
> +	static struct rcu_sysidle_head rsh;
> +	int rss = ACCESS_ONCE(full_sysidle_state);
> +
> +	if (WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu))
> +		return false;
> +
> +	/* Handle small-system case by doing a full scan of CPUs. */
> +	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL) {

I don't understand how the nr_cpu_ids > RCU_SYSIDLE_SMALL case is handled.
There don't seem to be any calls of rcu_sysidle_check_cpu() other than for
small systems.


* Re: [PATCH RFC nohz_full 6/7] nohz_full: Add full-system-idle state machine
  2013-08-09 16:20     ` Frederic Weisbecker
@ 2013-08-14  3:07       ` Paul E. McKenney
  0 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-08-14  3:07 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, dhowells, edumazet, darren,
	sbw

On Fri, Aug 09, 2013 at 06:20:59PM +0200, Frederic Weisbecker wrote:
> On Fri, Jul 26, 2013 at 04:19:23PM -0700, Paul E. McKenney wrote:
> > diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> > index 3edae39..ff84bed 100644
> > --- a/kernel/rcutree_plugin.h
> > +++ b/kernel/rcutree_plugin.h
> > @@ -28,7 +28,7 @@
> >  #include <linux/gfp.h>
> >  #include <linux/oom.h>
> >  #include <linux/smpboot.h>
> > -#include <linux/tick.h>
> > +#include "time/tick-internal.h"
> >  
> >  #define RCU_KTHREAD_PRIO 1
> >  
> > @@ -2395,12 +2395,12 @@ static void rcu_kick_nohz_cpu(int cpu)
> >   * most active flavor of RCU.
> >   */
> >  #ifdef CONFIG_PREEMPT_RCU
> > -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_preempt_state;
> > +static struct rcu_state *rcu_sysidle_state = &rcu_preempt_state;
> >  #else /* #ifdef CONFIG_PREEMPT_RCU */
> > -static struct rcu_state __maybe_unused *rcu_sysidle_state = &rcu_sched_state;
> > +static struct rcu_state *rcu_sysidle_state = &rcu_sched_state;
> >  #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
> 
> Ah you fixed it here. Ok :)

Bisectability and all that.  ;-)

> > -static int __maybe_unused full_sysidle_state; /* Current system-idle state. */
> > +static int full_sysidle_state;		/* Current system-idle state. */
> >  #define RCU_SYSIDLE_NOT		0	/* Some CPU is not idle. */
> >  #define RCU_SYSIDLE_SHORT	1	/* All CPUs idle for brief period. */
> >  #define RCU_SYSIDLE_LONG	2	/* All CPUs idle for long enough. */
> [...]
> > +/*
> > + * Check to see if the system is fully idle, other than the timekeeping CPU.
> > + * The caller must have disabled interrupts.
> > + */
> > +bool rcu_sys_is_idle(void)
> > +{
> > +	static struct rcu_sysidle_head rsh;
> > +	int rss = ACCESS_ONCE(full_sysidle_state);
> > +
> > +	if (WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu))
> > +		return false;
> > +
> > +	/* Handle small-system case by doing a full scan of CPUs. */
> > +	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL) {
> 
> I don't understand how the nr_cpu_ids > RCU_SYSIDLE_SMALL case is handled.
> There don't seem to be any calls of rcu_sysidle_check_cpu() other than for
> small systems.

The other calls are in kernel/rcutree.c, in the force-quiescent-state
code.  If we have a big system, we don't check until we have some other
reason to touch the cache lines.  If we have a small system, we just
dig through them on transition to idle.
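
In other words, roughly (a condensed sketch of the small-system path,
eliding the loop that advances the state machine one step per pass;
the call to rcu_sysidle_report_gp() stands in for the actual reporting
step, so take the details as an approximation):

	if (nr_cpu_ids <= RCU_SYSIDLE_SMALL) {
		int cpu;
		bool isidle = true;
		unsigned long maxj = jiffies - ULONG_MAX / 4;
		struct rcu_data *rdp;

		/* Scan all CPUs, looking for any that are non-idle. */
		for_each_possible_cpu(cpu) {
			rdp = per_cpu_ptr(rcu_sysidle_state->rda, cpu);
			rcu_sysidle_check_cpu(rdp, &isidle, &maxj);
			if (!isidle)
				break;
		}
		/* Feed the verdict into the sysidle state machine. */
		rcu_sysidle_report_gp(rcu_sysidle_state, isidle, maxj);
	}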

							Thanx, Paul



* Re: [PATCH RFC nohz_full 1/7] nohz_full: Add Kconfig parameter for scalable detection of all-idle state
  2013-08-05  1:04   ` Frederic Weisbecker
@ 2013-08-17 23:38     ` Paul E. McKenney
  0 siblings, 0 replies; 26+ messages in thread
From: Paul E. McKenney @ 2013-08-17 23:38 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, dhowells, edumazet, darren,
	sbw

On Mon, Aug 05, 2013 at 03:04:55AM +0200, Frederic Weisbecker wrote:
> On Fri, Jul 26, 2013 at 04:19:18PM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > At least one CPU must keep the scheduling-clock tick running for
> > timekeeping purposes whenever there is a non-idle CPU.  However, with
> > the new nohz_full adaptive-idle machinery, it is difficult to distinguish
> > between all CPUs really being idle as opposed to all non-idle CPUs being
> > in adaptive-ticks mode.  This commit therefore adds a Kconfig parameter
> > as a first step towards enabling a scalable detection of full-system
> > idle state.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> >  kernel/time/Kconfig | 23 +++++++++++++++++++++++
> >  1 file changed, 23 insertions(+)
> > 
> > diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
> > index 70f27e8..a613c2a 100644
> > --- a/kernel/time/Kconfig
> > +++ b/kernel/time/Kconfig
> > @@ -134,6 +134,29 @@ config NO_HZ_FULL_ALL
> >  	 Note the boot CPU will still be kept outside the range to
> >  	 handle the timekeeping duty.
> >  
> > +config NO_HZ_FULL_SYSIDLE
> > +	bool "Detect full-system idle state for full dynticks system"
> > +	depends on NO_HZ_FULL
> > +	default n
> > +	help
> > +	 At least one CPU must keep the scheduling-clock tick running
> > +	 for timekeeping purposes whenever there is a non-idle CPU,
> > +	 where "non-idle" includes CPUs with a single runnable task
> > +	 in adaptive-idle mode.
> 
> "adaptive-idle" is particularly confusing here. How about this:
> 
>     'where "non-idle" also includes dynticks CPUs as long as they are
>     running non-idle tasks.'
> 
>           Because the underlying adaptive-tick
> > +	 support cannot distinguish between all CPUs being idle and
> > +	 all CPUs each running a single task in adaptive-idle mode,
> 
> s/adaptive-idle/dynticks
> 
> Thanks.

Good point, fixed.

							Thanx, Paul

> > +	 the underlying support simply ensures that there is always
> > +	 a CPU handling the scheduling-clock tick, whether or not all
> > +	 CPUs are idle.  This Kconfig option enables scalable detection
> > +	 of the all-CPUs-idle state, thus allowing the scheduling-clock
> > +	 tick to be disabled when all CPUs are idle.  Note that scalable
> > +	 detection of the all-CPUs-idle state means that larger systems
> > +	 will be slower to declare the all-CPUs-idle state.
> > +
> > +	 Say Y if you would like to help debug all-CPUs-idle detection.
> > +
> > +	 Say N if you are unsure.
> > +
> >  config NO_HZ
> >  	bool "Old Idle dynticks config"
> >  	depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS
> > -- 
> > 1.8.1.5
> > 
> 



