* [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16
@ 2017-12-01 19:36 Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 01/17] rcu: Avoid ->dynticks_nmi_nesting store tearing Paul E. McKenney
                   ` (16 more replies)
  0 siblings, 17 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg

Hello!

This series includes simplifications of RCU's dyntick-idle processing,
most importantly making use of the NMI style of processing for
interrupts as well as NMIs.  This series provides a net decrease of more
than 100 lines of code.
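
For reference, here is an abridged sketch of the per-CPU state that the
series converges on (field names as in kernel/rcu/tree.h after patch 9;
the comments are paraphrased and the sketch is illustrative only, not
itself part of any patch):

	struct rcu_dynticks {
		long dynticks_nesting;	   /* Process-level nesting; zero means no
					    * process-level reason for RCU to watch. */
		long dynticks_nmi_nesting; /* irq/NMI nesting level. */
		atomic_t dynticks;	   /* Even value for idle, else odd. */
		/* ... remaining fields unchanged by this series ... */
	};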

1.	Avoid ->dynticks_nmi_nesting store tearing.

2.	Reduce dyntick-idle state space by carefully ordering adjustments
	of nesting and overall state.

3.	Move rcu_nmi_{enter,exit}() to prepare for consolidation.

4.	Clamp ->dynticks_nmi_nesting at eqs entry/exit.  This is required
	to handle the half-interrupts featured by some architectures.
	It used to be handled by the interrupt portion of RCU's
	dyntick-idle processing, which this series is eliminating.
	Thus the clamping must move.

5.	Define rcu_irq_{enter,exit}() in terms of rcu_nmi_{enter,exit}().

6.	Make ->dynticks_nesting be a simple counter, given that it now
	handles only process-level reasons why RCU should be watching.

7.	Eliminate rcu_irq_enter_disabled() because the NMI handling
	already handles the possibility of interruption at any point.

8.	Add tracing to irq/NMI dyntick-idle transitions.

9.	Shrink ->dynticks_{nmi_,}nesting from long long to long because
	there had better not be more than a billion process-level reasons
	why RCU should be watching a given CPU.

10.	Add ->dynticks field to rcu_dyntick trace event.

11.	Stop duplicating lockdep checks in RCU's idle-entry code.

12.	Avoid ->dynticks_nesting store tearing.

13.	Fold rcu_eqs_enter_common() into rcu_eqs_enter() because it is
	now only invoked from that one place.

14.	Fold rcu_eqs_exit_common() into rcu_eqs_exit() because it is now
	only invoked from that one place.

15.	Simplify rcu_eqs_{enter,exit}() non-idle task debug code.

16.	Update dyntick-idle design documentation to reflect NMI/irq
	consolidation.

17.	Remove no longer used trace event rcu_prep_idle, courtesy of
	Steven Rostedt.

								Thanx, Paul

------------------------------------------------------------------------

 Documentation/RCU/Design/Data-Structures/Data-Structures.html |   46 
 include/linux/rcutiny.h                                       |    1 
 include/linux/rcutree.h                                       |    1 
 include/linux/tracepoint.h                                    |    5 
 include/trace/events/rcu.h                                    |   87 -
 kernel/rcu/rcu.h                                              |   31 
 kernel/rcu/tree.c                                             |  466 ++++------
 kernel/rcu/tree.h                                             |    5 
 kernel/rcu/tree_plugin.h                                      |    2 
 kernel/trace/trace.c                                          |   11 
 10 files changed, 256 insertions(+), 399 deletions(-)

* [PATCH tip/core/rcu 01/17] rcu: Avoid ->dynticks_nmi_nesting store tearing
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 02/17] rcu: Reduce dyntick-idle state space Paul E. McKenney
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

NMIs can nest, and store tearing could in theory happen on carries
from one byte to the next.  This commit therefore uses WRITE_ONCE()
for the ->dynticks_nmi_nesting stores to prevent such tearing.
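
As an illustrative aside (not itself part of the patch), the before and
after shape of the update, assuming the rdtp and incby locals from
rcu_nmi_enter():

	/* Plain store: the compiler may split ("tear") this into several
	 * narrower stores, and a nested NMI arriving between them could
	 * observe a half-updated counter. */
	rdtp->dynticks_nmi_nesting += incby;

	/* Marked store: WRITE_ONCE() keeps the compiler from tearing the
	 * store, so an observer sees either the old or the new value. */
	WRITE_ONCE(rdtp->dynticks_nmi_nesting,
		   rdtp->dynticks_nmi_nesting + incby);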

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f9c0ca2ccf0c..c5d960f86cf8 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1103,7 +1103,8 @@ void rcu_nmi_enter(void)
 		rcu_dynticks_eqs_exit();
 		incby = 1;
 	}
-	rdtp->dynticks_nmi_nesting += incby;
+	WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* Prevent store tearing. */
+		   rdtp->dynticks_nmi_nesting + incby);
 	barrier();
 }
 
@@ -1135,12 +1136,13 @@ void rcu_nmi_exit(void)
 	 * leave it in non-RCU-idle state.
 	 */
 	if (rdtp->dynticks_nmi_nesting != 1) {
-		rdtp->dynticks_nmi_nesting -= 2;
+		WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* No store tearing. */
+			   rdtp->dynticks_nmi_nesting - 2);
 		return;
 	}
 
 	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
-	rdtp->dynticks_nmi_nesting = 0;
+	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
 	rcu_dynticks_eqs_enter();
 }
 
-- 
2.5.2

* [PATCH tip/core/rcu 02/17] rcu: Reduce dyntick-idle state space
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 01/17] rcu: Avoid ->dynticks_nmi_nesting store tearing Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 03/17] rcu: Move rcu_nmi_{enter,exit}() to prepare for consolidation Paul E. McKenney
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Both extended-quiescent-state entry and exit first update the nesting
counter and then adjust the dyntick-idle state.  This means that there
are four states: (1) Both nesting and dyntick idle indicate idle,
(2) Nesting indicates idle but dyntick idle indicates non-idle,
(3) Nesting indicates non-idle but dyntick idle indicates idle, and
(4) Both nesting and dyntick idle indicate non-idle.  This commit
simplifies the state space by eliminating #3, reversing the order of
updates on exit from extended quiescent state.
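
As an illustrative sketch (not itself part of the patch), the exit-path
reordering that eliminates state #3, with names as in the hunk below:

	/* Before: nesting goes non-idle first, creating a window in which
	 * nesting claims non-idle while ->dynticks still claims idle,
	 * which is state #3 above. */
	rdtp->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
	rcu_eqs_exit_common(oldval, user);	/* ->dynticks made non-idle. */

	/* After: ->dynticks is made non-idle first, so the only transient
	 * combination is state #2, which the entry path already produces. */
	rcu_eqs_exit_common(DYNTICK_TASK_EXIT_IDLE, user);
	rdtp->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;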

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c5d960f86cf8..49f661bb8ffe 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -928,21 +928,21 @@ void rcu_irq_exit_irqson(void)
  * we really have exited idle, and must do the appropriate accounting.
  * The caller must have disabled interrupts.
  */
-static void rcu_eqs_exit_common(long long oldval, int user)
+static void rcu_eqs_exit_common(long long newval, int user)
 {
 	RCU_TRACE(struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);)
 
 	rcu_dynticks_task_exit();
 	rcu_dynticks_eqs_exit();
 	rcu_cleanup_after_idle();
-	trace_rcu_dyntick(TPS("End"), oldval, rdtp->dynticks_nesting);
+	trace_rcu_dyntick(TPS("End"), rdtp->dynticks_nesting, newval);
 	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 	    !user && !is_idle_task(current)) {
 		struct task_struct *idle __maybe_unused =
 			idle_task(smp_processor_id());
 
 		trace_rcu_dyntick(TPS("Error on exit: not idle task"),
-				  oldval, rdtp->dynticks_nesting);
+				  rdtp->dynticks_nesting, newval);
 		rcu_ftrace_dump(DUMP_ORIG);
 		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
 			  current->pid, current->comm,
@@ -967,8 +967,8 @@ static void rcu_eqs_exit(bool user)
 		rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE;
 	} else {
 		__this_cpu_inc(disable_rcu_irq_enter);
+		rcu_eqs_exit_common(DYNTICK_TASK_EXIT_IDLE, user);
 		rdtp->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
-		rcu_eqs_exit_common(oldval, user);
 		__this_cpu_dec(disable_rcu_irq_enter);
 	}
 }
@@ -1037,7 +1037,7 @@ void rcu_user_exit(void)
 void rcu_irq_enter(void)
 {
 	struct rcu_dynticks *rdtp;
-	long long oldval;
+	long long newval;
 
 	lockdep_assert_irqs_disabled();
 	rdtp = this_cpu_ptr(&rcu_dynticks);
@@ -1046,14 +1046,13 @@ void rcu_irq_enter(void)
 	if (rdtp->dynticks_nmi_nesting)
 		return;
 
-	oldval = rdtp->dynticks_nesting;
-	rdtp->dynticks_nesting++;
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     rdtp->dynticks_nesting == 0);
-	if (oldval)
-		trace_rcu_dyntick(TPS("++="), oldval, rdtp->dynticks_nesting);
+	newval = rdtp->dynticks_nesting + 1;
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && newval == 0);
+	if (rdtp->dynticks_nesting)
+		trace_rcu_dyntick(TPS("++="), rdtp->dynticks_nesting, newval);
 	else
-		rcu_eqs_exit_common(oldval, true);
+		rcu_eqs_exit_common(newval, true);
+	rdtp->dynticks_nesting++;
 }
 
 /*
-- 
2.5.2

* [PATCH tip/core/rcu 03/17] rcu: Move rcu_nmi_{enter,exit}() to prepare for consolidation
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 01/17] rcu: Avoid ->dynticks_nmi_nesting store tearing Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 02/17] rcu: Reduce dyntick-idle state space Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 04/17] rcu: Clamp ->dynticks_nmi_nesting at eqs entry/exit Paul E. McKenney
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

This is a code-motion-only commit that prepares to define rcu_irq_enter()
in terms of rcu_nmi_enter() and rcu_irq_exit() in terms of rcu_nmi_exit().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 150 +++++++++++++++++++++++++++---------------------------
 1 file changed, 75 insertions(+), 75 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 49f661bb8ffe..419f3c38e1b6 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -867,6 +867,44 @@ void rcu_user_enter(void)
 #endif /* CONFIG_NO_HZ_FULL */
 
 /**
+ * rcu_nmi_exit - inform RCU of exit from NMI context
+ *
+ * If we are returning from the outermost NMI handler that interrupted an
+ * RCU-idle period, update rdtp->dynticks and rdtp->dynticks_nmi_nesting
+ * to let the RCU grace-period handling know that the CPU is back to
+ * being RCU-idle.
+ *
+ * If you add or remove a call to rcu_nmi_exit(), be sure to test
+ * with CONFIG_RCU_EQS_DEBUG=y.
+ */
+void rcu_nmi_exit(void)
+{
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
+
+	/*
+	 * Check for ->dynticks_nmi_nesting underflow and bad ->dynticks.
+	 * (We are exiting an NMI handler, so RCU better be paying attention
+	 * to us!)
+	 */
+	WARN_ON_ONCE(rdtp->dynticks_nmi_nesting <= 0);
+	WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs());
+
+	/*
+	 * If the nesting level is not 1, the CPU wasn't RCU-idle, so
+	 * leave it in non-RCU-idle state.
+	 */
+	if (rdtp->dynticks_nmi_nesting != 1) {
+		WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* No store tearing. */
+			   rdtp->dynticks_nmi_nesting - 2);
+		return;
+	}
+
+	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
+	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
+	rcu_dynticks_eqs_enter();
+}
+
+/**
  * rcu_irq_exit - inform RCU that current CPU is exiting irq towards idle
  *
  * Exit from an interrupt handler, which might possibly result in entering
@@ -1013,6 +1051,43 @@ void rcu_user_exit(void)
 #endif /* CONFIG_NO_HZ_FULL */
 
 /**
+ * rcu_nmi_enter - inform RCU of entry to NMI context
+ *
+ * If the CPU was idle from RCU's viewpoint, update rdtp->dynticks and
+ * rdtp->dynticks_nmi_nesting to let the RCU grace-period handling know
+ * that the CPU is active.  This implementation permits nested NMIs, as
+ * long as the nesting level does not overflow an int.  (You will probably
+ * run out of stack space first.)
+ *
+ * If you add or remove a call to rcu_nmi_enter(), be sure to test
+ * with CONFIG_RCU_EQS_DEBUG=y.
+ */
+void rcu_nmi_enter(void)
+{
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
+	int incby = 2;
+
+	/* Complain about underflow. */
+	WARN_ON_ONCE(rdtp->dynticks_nmi_nesting < 0);
+
+	/*
+	 * If idle from RCU viewpoint, atomically increment ->dynticks
+	 * to mark non-idle and increment ->dynticks_nmi_nesting by one.
+	 * Otherwise, increment ->dynticks_nmi_nesting by two.  This means
+	 * if ->dynticks_nmi_nesting is equal to one, we are guaranteed
+	 * to be in the outermost NMI handler that interrupted an RCU-idle
+	 * period (observation due to Andy Lutomirski).
+	 */
+	if (rcu_dynticks_curr_cpu_in_eqs()) {
+		rcu_dynticks_eqs_exit();
+		incby = 1;
+	}
+	WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* Prevent store tearing. */
+		   rdtp->dynticks_nmi_nesting + incby);
+	barrier();
+}
+
+/**
  * rcu_irq_enter - inform RCU that current CPU is entering irq away from idle
  *
  * Enter an interrupt handler, which might possibly result in exiting
@@ -1071,81 +1146,6 @@ void rcu_irq_enter_irqson(void)
 }
 
 /**
- * rcu_nmi_enter - inform RCU of entry to NMI context
- *
- * If the CPU was idle from RCU's viewpoint, update rdtp->dynticks and
- * rdtp->dynticks_nmi_nesting to let the RCU grace-period handling know
- * that the CPU is active.  This implementation permits nested NMIs, as
- * long as the nesting level does not overflow an int.  (You will probably
- * run out of stack space first.)
- *
- * If you add or remove a call to rcu_nmi_enter(), be sure to test
- * with CONFIG_RCU_EQS_DEBUG=y.
- */
-void rcu_nmi_enter(void)
-{
-	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
-	int incby = 2;
-
-	/* Complain about underflow. */
-	WARN_ON_ONCE(rdtp->dynticks_nmi_nesting < 0);
-
-	/*
-	 * If idle from RCU viewpoint, atomically increment ->dynticks
-	 * to mark non-idle and increment ->dynticks_nmi_nesting by one.
-	 * Otherwise, increment ->dynticks_nmi_nesting by two.  This means
-	 * if ->dynticks_nmi_nesting is equal to one, we are guaranteed
-	 * to be in the outermost NMI handler that interrupted an RCU-idle
-	 * period (observation due to Andy Lutomirski).
-	 */
-	if (rcu_dynticks_curr_cpu_in_eqs()) {
-		rcu_dynticks_eqs_exit();
-		incby = 1;
-	}
-	WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* Prevent store tearing. */
-		   rdtp->dynticks_nmi_nesting + incby);
-	barrier();
-}
-
-/**
- * rcu_nmi_exit - inform RCU of exit from NMI context
- *
- * If we are returning from the outermost NMI handler that interrupted an
- * RCU-idle period, update rdtp->dynticks and rdtp->dynticks_nmi_nesting
- * to let the RCU grace-period handling know that the CPU is back to
- * being RCU-idle.
- *
- * If you add or remove a call to rcu_nmi_exit(), be sure to test
- * with CONFIG_RCU_EQS_DEBUG=y.
- */
-void rcu_nmi_exit(void)
-{
-	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
-
-	/*
-	 * Check for ->dynticks_nmi_nesting underflow and bad ->dynticks.
-	 * (We are exiting an NMI handler, so RCU better be paying attention
-	 * to us!)
-	 */
-	WARN_ON_ONCE(rdtp->dynticks_nmi_nesting <= 0);
-	WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs());
-
-	/*
-	 * If the nesting level is not 1, the CPU wasn't RCU-idle, so
-	 * leave it in non-RCU-idle state.
-	 */
-	if (rdtp->dynticks_nmi_nesting != 1) {
-		WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* No store tearing. */
-			   rdtp->dynticks_nmi_nesting - 2);
-		return;
-	}
-
-	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
-	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
-	rcu_dynticks_eqs_enter();
-}
-
-/**
  * rcu_is_watching - see if RCU thinks that the current CPU is idle
  *
  * Return true if RCU is watching the running CPU, which means that this
-- 
2.5.2

* [PATCH tip/core/rcu 04/17] rcu: Clamp ->dynticks_nmi_nesting at eqs entry/exit
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (2 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 03/17] rcu: Move rcu_nmi_{enter,exit}() to prepare for consolidation Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 05/17] rcu: Define rcu_irq_{enter,exit}() in terms of rcu_nmi_{enter,exit}() Paul E. McKenney
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

In preparation for merging dyntick-idle irq handling into the NMI
algorithm, clamp the ->dynticks_nmi_nesting value to allow for interrupts
that enter but never leave, and vice versa.

It is important that the clamping happen outside of the extended quiescent
state.  Otherwise, there will be short windows where irqs and NMIs fail
to convince RCU to start watching.
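
As an illustrative placement sketch (the two one-line additions in the
hunks below), both clamps run while RCU is still, or again, watching:

	/* rcu_eqs_enter(): clamp before the EQS is actually entered ... */
	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0);
	/* ... then the existing idle-entry path runs, ending in
	 * rcu_dynticks_eqs_enter(). */

	/* rcu_eqs_exit(): clamp after the EQS has been exited ... */
	/* ... the existing idle-exit path runs first, starting with
	 * rcu_eqs_exit_common() ... */
	WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);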

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/rcu.h  | 2 ++
 kernel/rcu/tree.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 59c471de342a..f4a411964c41 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -56,6 +56,8 @@
 #define DYNTICK_TASK_EXIT_IDLE	   (DYNTICK_TASK_NEST_VALUE + \
 				    DYNTICK_TASK_FLAG)
 
+#define DYNTICK_IRQ_NONIDLE	((INT_MAX / 2) + 1)
+
 
 /*
  * Grace-period counter management.
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 419f3c38e1b6..142cdd4a50c9 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -818,6 +818,7 @@ static void rcu_eqs_enter(bool user)
 	struct rcu_dynticks *rdtp;
 
 	rdtp = this_cpu_ptr(&rcu_dynticks);
+	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0);
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 		     (rdtp->dynticks_nesting & DYNTICK_TASK_NEST_MASK) == 0);
 	if ((rdtp->dynticks_nesting & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE)
@@ -1008,6 +1009,7 @@ static void rcu_eqs_exit(bool user)
 		rcu_eqs_exit_common(DYNTICK_TASK_EXIT_IDLE, user);
 		rdtp->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
 		__this_cpu_dec(disable_rcu_irq_enter);
+		WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
 	}
 }
 
-- 
2.5.2

* [PATCH tip/core/rcu 05/17] rcu: Define rcu_irq_{enter,exit}() in terms of rcu_nmi_{enter,exit}()
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (3 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 04/17] rcu: Clamp ->dynticks_nmi_nesting at eqs entry/exit Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 06/17] rcu: Make ->dynticks_nesting be a simple counter Paul E. McKenney
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

RCU currently uses two different mechanisms for tracking irqs and NMIs.
This is unnecessary complexity: Given that NMIs can nest and given that
RCU's tracking handles such nesting, the NMI tracking mechanism can also
be used to track irqs.  This commit therefore defines rcu_irq_enter()
in terms of rcu_nmi_enter() and rcu_irq_exit() in terms of rcu_nmi_exit().

Unfortunately, callers must still distinguish between the irq and NMI
functions because additional actions are taken when an irq interrupts
idle or nohz_full usermode execution, and these actions cannot always
be taken from NMI handlers.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 59 ++++++++++++++++++++-----------------------------------
 1 file changed, 21 insertions(+), 38 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 142cdd4a50c9..fde0e840563f 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -266,6 +266,7 @@ void rcu_bh_qs(void)
 
 static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
 	.dynticks_nesting = DYNTICK_TASK_EXIT_IDLE,
+	.dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE,
 	.dynticks = ATOMIC_INIT(RCU_DYNTICK_CTRL_CTR),
 };
 
@@ -914,8 +915,8 @@ void rcu_nmi_exit(void)
  *
  * This code assumes that the idle loop never does anything that might
  * result in unbalanced calls to irq_enter() and irq_exit().  If your
- * architecture violates this assumption, RCU will give you what you
- * deserve, good and hard.  But very infrequently and irreproducibly.
+ * architecture's idle loop violates this assumption, RCU will give you what
+ * you deserve, good and hard.  But very infrequently and irreproducibly.
  *
  * Use things like work queues to work around this limitation.
  *
@@ -926,23 +927,14 @@ void rcu_nmi_exit(void)
  */
 void rcu_irq_exit(void)
 {
-	struct rcu_dynticks *rdtp;
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
 
 	lockdep_assert_irqs_disabled();
-	rdtp = this_cpu_ptr(&rcu_dynticks);
-
-	/* Page faults can happen in NMI handlers, so check... */
-	if (rdtp->dynticks_nmi_nesting)
-		return;
-
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     rdtp->dynticks_nesting < 1);
-	if (rdtp->dynticks_nesting <= 1) {
-		rcu_eqs_enter_common(true);
-	} else {
-		trace_rcu_dyntick(TPS("--="), rdtp->dynticks_nesting, rdtp->dynticks_nesting - 1);
-		rdtp->dynticks_nesting--;
-	}
+	if (rdtp->dynticks_nmi_nesting == 1)
+		rcu_prepare_for_idle();
+	rcu_nmi_exit();
+	if (rdtp->dynticks_nmi_nesting == 0)
+		rcu_dynticks_task_enter();
 }
 
 /*
@@ -1097,12 +1089,12 @@ void rcu_nmi_enter(void)
  * sections can occur.  The caller must have disabled interrupts.
  *
  * Note that the Linux kernel is fully capable of entering an interrupt
- * handler that it never exits, for example when doing upcalls to
- * user mode!  This code assumes that the idle loop never does upcalls to
- * user mode.  If your architecture does do upcalls from the idle loop (or
- * does anything else that results in unbalanced calls to the irq_enter()
- * and irq_exit() functions), RCU will give you what you deserve, good
- * and hard.  But very infrequently and irreproducibly.
+ * handler that it never exits, for example when doing upcalls to user mode!
+ * This code assumes that the idle loop never does upcalls to user mode.
+ * If your architecture's idle loop does do upcalls to user mode (or does
+ * anything else that results in unbalanced calls to the irq_enter() and
+ * irq_exit() functions), RCU will give you what you deserve, good and hard.
+ * But very infrequently and irreproducibly.
  *
  * Use things like work queues to work around this limitation.
  *
@@ -1113,23 +1105,14 @@ void rcu_nmi_enter(void)
  */
 void rcu_irq_enter(void)
 {
-	struct rcu_dynticks *rdtp;
-	long long newval;
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
 
 	lockdep_assert_irqs_disabled();
-	rdtp = this_cpu_ptr(&rcu_dynticks);
-
-	/* Page faults can happen in NMI handlers, so check... */
-	if (rdtp->dynticks_nmi_nesting)
-		return;
-
-	newval = rdtp->dynticks_nesting + 1;
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && newval == 0);
-	if (rdtp->dynticks_nesting)
-		trace_rcu_dyntick(TPS("++="), rdtp->dynticks_nesting, newval);
-	else
-		rcu_eqs_exit_common(newval, true);
-	rdtp->dynticks_nesting++;
+	if (rdtp->dynticks_nmi_nesting == 0)
+		rcu_dynticks_task_exit();
+	rcu_nmi_enter();
+	if (rdtp->dynticks_nmi_nesting == 1)
+		rcu_cleanup_after_idle();
 }
 
 /*
-- 
2.5.2

* [PATCH tip/core/rcu 06/17] rcu: Make ->dynticks_nesting be a simple counter
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (4 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 05/17] rcu: Define rcu_irq_{enter,exit}() in terms of rcu_nmi_{enter,exit}() Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 07/17] rcu: Eliminate rcu_irq_enter_disabled() Paul E. McKenney
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Now that ->dynticks_nesting counts only process-level dyntick-idle
entry and exit, there is no need for the elaborate segmented counter
with its guard fields and overflow checking.  This commit therefore
makes ->dynticks_nesting be a simple counter.
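
As an illustrative before/after of a process-level nesting increment
(constants and fields as in the hunks below; not itself part of the
patch):

	/* Before: process-level nesting was encoded in the upper bits of a
	 * long long, separated from the lower bits by guard bits used for
	 * overflow checking. */
	rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE;

	/* After: irq/NMI nesting is tracked separately in
	 * ->dynticks_nmi_nesting, so plain arithmetic suffices, and zero
	 * simply means "no process-level reason for RCU to watch". */
	rdtp->dynticks_nesting++;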

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/rcu.h  | 27 +--------------------------
 kernel/rcu/tree.c | 40 ++++++++++++++++++++--------------------
 kernel/rcu/tree.h |  1 -
 3 files changed, 21 insertions(+), 47 deletions(-)

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index f4a411964c41..afe0559d1867 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -30,32 +30,7 @@
 #define RCU_TRACE(stmt)
 #endif /* #else #ifdef CONFIG_RCU_TRACE */
 
-/*
- * Process-level increment to ->dynticks_nesting field.  This allows for
- * architectures that use half-interrupts and half-exceptions from
- * process context.
- *
- * DYNTICK_TASK_NEST_MASK defines a field of width DYNTICK_TASK_NEST_WIDTH
- * that counts the number of process-based reasons why RCU cannot
- * consider the corresponding CPU to be idle, and DYNTICK_TASK_NEST_VALUE
- * is the value used to increment or decrement this field.
- *
- * The rest of the bits could in principle be used to count interrupts,
- * but this would mean that a negative-one value in the interrupt
- * field could incorrectly zero out the DYNTICK_TASK_NEST_MASK field.
- * We therefore provide a two-bit guard field defined by DYNTICK_TASK_MASK
- * that is set to DYNTICK_TASK_FLAG upon initial exit from idle.
- * The DYNTICK_TASK_EXIT_IDLE value is thus the combined value used upon
- * initial exit from idle.
- */
-#define DYNTICK_TASK_NEST_WIDTH 7
-#define DYNTICK_TASK_NEST_VALUE ((LLONG_MAX >> DYNTICK_TASK_NEST_WIDTH) + 1)
-#define DYNTICK_TASK_NEST_MASK  (LLONG_MAX - DYNTICK_TASK_NEST_VALUE + 1)
-#define DYNTICK_TASK_FLAG	   ((DYNTICK_TASK_NEST_VALUE / 8) * 2)
-#define DYNTICK_TASK_MASK	   ((DYNTICK_TASK_NEST_VALUE / 8) * 3)
-#define DYNTICK_TASK_EXIT_IDLE	   (DYNTICK_TASK_NEST_VALUE + \
-				    DYNTICK_TASK_FLAG)
-
+/* Offset to allow for unmatched rcu_irq_{enter,exit}(). */
 #define DYNTICK_IRQ_NONIDLE	((INT_MAX / 2) + 1)
 
 
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index fde0e840563f..d123474fe829 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -265,7 +265,7 @@ void rcu_bh_qs(void)
 #endif
 
 static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
-	.dynticks_nesting = DYNTICK_TASK_EXIT_IDLE,
+	.dynticks_nesting = 1,
 	.dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE,
 	.dynticks = ATOMIC_INIT(RCU_DYNTICK_CTRL_CTR),
 };
@@ -813,6 +813,10 @@ static void rcu_eqs_enter_common(bool user)
 /*
  * Enter an RCU extended quiescent state, which can be either the
  * idle loop or adaptive-tickless usermode execution.
+ *
+ * We crowbar the ->dynticks_nmi_nesting field to zero to allow for
+ * the possibility of usermode upcalls having messed up our count
+ * of interrupt nesting level during the prior busy period.
  */
 static void rcu_eqs_enter(bool user)
 {
@@ -821,11 +825,11 @@ static void rcu_eqs_enter(bool user)
 	rdtp = this_cpu_ptr(&rcu_dynticks);
 	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0);
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     (rdtp->dynticks_nesting & DYNTICK_TASK_NEST_MASK) == 0);
-	if ((rdtp->dynticks_nesting & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE)
+		     rdtp->dynticks_nesting == 0);
+	if (rdtp->dynticks_nesting == 1)
 		rcu_eqs_enter_common(user);
 	else
-		rdtp->dynticks_nesting -= DYNTICK_TASK_NEST_VALUE;
+		rdtp->dynticks_nesting--;
 }
 
 /**
@@ -836,10 +840,6 @@ static void rcu_eqs_enter(bool user)
  * critical sections can occur in irq handlers in idle, a possibility
  * handled by irq_enter() and irq_exit().)
  *
- * We crowbar the ->dynticks_nesting field to zero to allow for
- * the possibility of usermode upcalls having messed up our count
- * of interrupt nesting level during the prior busy period.
- *
  * If you add or remove a call to rcu_idle_enter(), be sure to test with
  * CONFIG_RCU_EQS_DEBUG=y.
  */
@@ -984,6 +984,10 @@ static void rcu_eqs_exit_common(long long newval, int user)
 /*
  * Exit an RCU extended quiescent state, which can be either the
  * idle loop or adaptive-tickless usermode execution.
+ *
+ * We crowbar the ->dynticks_nmi_nesting field to DYNTICK_IRQ_NONIDLE to
+ * allow for the possibility of usermode upcalls messing up our count of
+ * interrupt nesting level during the busy period that is just now starting.
  */
 static void rcu_eqs_exit(bool user)
 {
@@ -994,12 +998,12 @@ static void rcu_eqs_exit(bool user)
 	rdtp = this_cpu_ptr(&rcu_dynticks);
 	oldval = rdtp->dynticks_nesting;
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0);
-	if (oldval & DYNTICK_TASK_NEST_MASK) {
-		rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE;
+	if (oldval) {
+		rdtp->dynticks_nesting++;
 	} else {
 		__this_cpu_inc(disable_rcu_irq_enter);
-		rcu_eqs_exit_common(DYNTICK_TASK_EXIT_IDLE, user);
-		rdtp->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
+		rcu_eqs_exit_common(1, user);
+		rdtp->dynticks_nesting = 1;
 		__this_cpu_dec(disable_rcu_irq_enter);
 		WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
 	}
@@ -1011,11 +1015,6 @@ static void rcu_eqs_exit(bool user)
  * Exit idle mode, in other words, -enter- the mode in which RCU
  * read-side critical sections can occur.
  *
- * We crowbar the ->dynticks_nesting field to DYNTICK_TASK_NEST to
- * allow for the possibility of usermode upcalls messing up our count
- * of interrupt nesting level during the busy period that is just
- * now starting.
- *
  * If you add or remove a call to rcu_idle_exit(), be sure to test with
  * CONFIG_RCU_EQS_DEBUG=y.
  */
@@ -1219,7 +1218,8 @@ EXPORT_SYMBOL_GPL(rcu_lockdep_current_cpu_online);
  */
 static int rcu_is_cpu_rrupt_from_idle(void)
 {
-	return __this_cpu_read(rcu_dynticks.dynticks_nesting) <= 1;
+	return __this_cpu_read(rcu_dynticks.dynticks_nesting) <= 0 &&
+	       __this_cpu_read(rcu_dynticks.dynticks_nmi_nesting) <= 1;
 }
 
 /*
@@ -3709,7 +3709,7 @@ rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp)
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	rdp->grpmask = leaf_node_cpu_bit(rdp->mynode, cpu);
 	rdp->dynticks = &per_cpu(rcu_dynticks, cpu);
-	WARN_ON_ONCE(rdp->dynticks->dynticks_nesting != DYNTICK_TASK_EXIT_IDLE);
+	WARN_ON_ONCE(rdp->dynticks->dynticks_nesting != 1);
 	WARN_ON_ONCE(rcu_dynticks_in_eqs(rcu_dynticks_snap(rdp->dynticks)));
 	rdp->cpu = cpu;
 	rdp->rsp = rsp;
@@ -3738,7 +3738,7 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp)
 	if (rcu_segcblist_empty(&rdp->cblist) && /* No early-boot CBs? */
 	    !init_nocb_callback_list(rdp))
 		rcu_segcblist_init(&rdp->cblist);  /* Re-enable callbacks. */
-	rdp->dynticks->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
+	rdp->dynticks->dynticks_nesting = 1;
 	rcu_dynticks_eqs_online();
 	raw_spin_unlock_rcu_node(rnp);		/* irqs remain disabled. */
 
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 46a5d1991450..dbd7e3753bed 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -39,7 +39,6 @@
  */
 struct rcu_dynticks {
 	long long dynticks_nesting; /* Track irq/process nesting level. */
-				    /* Process level is worth LLONG_MAX/2. */
 	int dynticks_nmi_nesting;   /* Track NMI nesting level. */
 	atomic_t dynticks;	    /* Even value for idle, else odd. */
 	bool rcu_need_heavy_qs;     /* GP old, need heavy quiescent state. */
-- 
2.5.2

* [PATCH tip/core/rcu 07/17] rcu: Eliminate rcu_irq_enter_disabled()
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (5 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 06/17] rcu: Make ->dynticks_nesting be a simple counter Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 08/17] rcu: Add tracing to irq/NMI dyntick-idle transitions Paul E. McKenney
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Now that the irq path uses the rcu_nmi_{enter,exit}() algorithm,
rcu_irq_enter() and rcu_irq_exit() may be used from any context.  There is
thus no need for rcu_irq_enter_disabled() and for the checks using it.
This commit therefore eliminates rcu_irq_enter_disabled().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcutiny.h    |  1 -
 include/linux/rcutree.h    |  1 -
 include/linux/tracepoint.h |  5 +----
 kernel/rcu/tree.c          | 22 ++--------------------
 kernel/trace/trace.c       | 11 -----------
 5 files changed, 3 insertions(+), 37 deletions(-)

diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index b3dbf9502fd0..ce9beec35e34 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -111,7 +111,6 @@ static inline void rcu_cpu_stall_reset(void) { }
 static inline void rcu_idle_enter(void) { }
 static inline void rcu_idle_exit(void) { }
 static inline void rcu_irq_enter(void) { }
-static inline bool rcu_irq_enter_disabled(void) { return false; }
 static inline void rcu_irq_exit_irqson(void) { }
 static inline void rcu_irq_enter_irqson(void) { }
 static inline void rcu_irq_exit(void) { }
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 37d6fd3b7ff8..fd996cdf1833 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -85,7 +85,6 @@ void rcu_irq_enter(void);
 void rcu_irq_exit(void);
 void rcu_irq_enter_irqson(void);
 void rcu_irq_exit_irqson(void);
-bool rcu_irq_enter_disabled(void);
 
 void exit_rcu(void);
 
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index a26ffbe09e71..c94f466d57ef 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -137,11 +137,8 @@ extern void syscall_unregfunc(void);
 									\
 		if (!(cond))						\
 			return;						\
-		if (rcucheck) {						\
-			if (WARN_ON_ONCE(rcu_irq_enter_disabled()))	\
-				return;					\
+		if (rcucheck)						\
 			rcu_irq_enter_irqson();				\
-		}							\
 		rcu_read_lock_sched_notrace();				\
 		it_func_ptr = rcu_dereference_sched((tp)->funcs);	\
 		if (it_func_ptr) {					\
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index d123474fe829..444aa2b3f24d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -271,20 +271,6 @@ static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
 };
 
 /*
- * There's a few places, currently just in the tracing infrastructure,
- * that uses rcu_irq_enter() to make sure RCU is watching. But there's
- * a small location where that will not even work. In those cases
- * rcu_irq_enter_disabled() needs to be checked to make sure rcu_irq_enter()
- * can be called.
- */
-static DEFINE_PER_CPU(bool, disable_rcu_irq_enter);
-
-bool rcu_irq_enter_disabled(void)
-{
-	return this_cpu_read(disable_rcu_irq_enter);
-}
-
-/*
  * Record entry into an extended quiescent state.  This is only to be
  * called when not already in an extended quiescent state.
  */
@@ -792,10 +778,8 @@ static void rcu_eqs_enter_common(bool user)
 		do_nocb_deferred_wakeup(rdp);
 	}
 	rcu_prepare_for_idle();
-	__this_cpu_inc(disable_rcu_irq_enter);
-	rdtp->dynticks_nesting = 0; /* Breaks tracing momentarily. */
-	rcu_dynticks_eqs_enter(); /* After this, tracing works again. */
-	__this_cpu_dec(disable_rcu_irq_enter);
+	rdtp->dynticks_nesting = 0;
+	rcu_dynticks_eqs_enter();
 	rcu_dynticks_task_enter();
 
 	/*
@@ -1001,10 +985,8 @@ static void rcu_eqs_exit(bool user)
 	if (oldval) {
 		rdtp->dynticks_nesting++;
 	} else {
-		__this_cpu_inc(disable_rcu_irq_enter);
 		rcu_eqs_exit_common(1, user);
 		rdtp->dynticks_nesting = 1;
-		__this_cpu_dec(disable_rcu_irq_enter);
 		WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
 	}
 }
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 73e67b68c53b..dbce1be3bab8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2682,17 +2682,6 @@ void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
 	if (unlikely(in_nmi()))
 		return;
 
-	/*
-	 * It is possible that a function is being traced in a
-	 * location that RCU is not watching. A call to
-	 * rcu_irq_enter() will make sure that it is, but there's
-	 * a few internal rcu functions that could be traced
-	 * where that wont work either. In those cases, we just
-	 * do nothing.
-	 */
-	if (unlikely(rcu_irq_enter_disabled()))
-		return;
-
 	rcu_irq_enter_irqson();
 	__ftrace_trace_stack(buffer, flags, skip, pc, NULL);
 	rcu_irq_exit_irqson();
-- 
2.5.2

* [PATCH tip/core/rcu 08/17] rcu: Add tracing to irq/NMI dyntick-idle transitions
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (6 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 07/17] rcu: Eliminate rcu_irq_enter_disabled() Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 09/17] rcu: Shrink ->dynticks_{nmi_,}nesting from long long to long Paul E. McKenney
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/trace/events/rcu.h | 14 ++++++++------
 kernel/rcu/tree.c          |  6 ++++++
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index 59d40c454aa0..4674b21247f7 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -421,16 +421,18 @@ TRACE_EVENT(rcu_fqs,
 
 /*
  * Tracepoint for dyntick-idle entry/exit events.  These take a string
- * as argument: "Start" for entering dyntick-idle mode, "End" for
- * leaving it, "--=" for events moving towards idle, and "++=" for events
- * moving away from idle.  "Error on entry: not idle task" and "Error on
- * exit: not idle task" indicate that a non-idle task is erroneously
+ * as argument: "Start" for entering dyntick-idle mode, "Startirq" for
+ * entering it from irq/NMI, "End" for leaving it, "Endirq" for leaving it
+ * to irq/NMI, "--=" for events moving towards idle, and "++=" for events
+ * moving away from idle.  "Error on entry: not idle task" and "Error
+ * on exit: not idle task" indicate that a non-idle task is erroneously
  * toying with the idle loop.
  *
  * These events also take a pair of numbers, which indicate the nesting
  * depth before and after the event of interest.  Note that task-related
- * events use the upper bits of each number, while interrupt-related
- * events use the lower bits.
+ * and interrupt-related events use two separate counters, and that the
+ * "++=" and "--=" events for irq/NMI will change the counter by two,
+ * otherwise by one.
  */
 TRACE_EVENT(rcu_dyntick,
 
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 444aa2b3f24d..d069ba2d8412 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -880,12 +880,15 @@ void rcu_nmi_exit(void)
 	 * leave it in non-RCU-idle state.
 	 */
 	if (rdtp->dynticks_nmi_nesting != 1) {
+		trace_rcu_dyntick(TPS("--="), rdtp->dynticks_nmi_nesting,
+				  rdtp->dynticks_nmi_nesting - 2);
 		WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* No store tearing. */
 			   rdtp->dynticks_nmi_nesting - 2);
 		return;
 	}
 
 	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
+	trace_rcu_dyntick(TPS("Startirq"), rdtp->dynticks_nmi_nesting, 0);
 	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
 	rcu_dynticks_eqs_enter();
 }
@@ -1057,6 +1060,9 @@ void rcu_nmi_enter(void)
 		rcu_dynticks_eqs_exit();
 		incby = 1;
 	}
+	trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),
+			  rdtp->dynticks_nmi_nesting,
+			  rdtp->dynticks_nmi_nesting + incby);
 	WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* Prevent store tearing. */
 		   rdtp->dynticks_nmi_nesting + incby);
 	barrier();
-- 
2.5.2

* [PATCH tip/core/rcu 09/17] rcu: Shrink ->dynticks_{nmi_,}nesting from long long to long
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (7 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 08/17] rcu: Add tracing to irq/NMI dyntick-idle transitions Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 10/17] rcu: Add ->dynticks field to rcu_dyntick trace event Paul E. McKenney
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Because the ->dynticks_nesting field now only contains the process-based
nesting level instead of a value encoding both the process nesting level
and the irq "nesting" level, we no longer need a long long, even on
32-bit systems.  This commit therefore changes both the ->dynticks_nesting
and ->dynticks_nmi_nesting fields to long.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/trace/events/rcu.h | 8 ++++----
 kernel/rcu/rcu.h           | 2 +-
 kernel/rcu/tree.c          | 6 +++---
 kernel/rcu/tree.h          | 4 ++--
 kernel/rcu/tree_plugin.h   | 2 +-
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index 4674b21247f7..b0a48231ea0e 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -436,14 +436,14 @@ TRACE_EVENT(rcu_fqs,
  */
 TRACE_EVENT(rcu_dyntick,
 
-	TP_PROTO(const char *polarity, long long oldnesting, long long newnesting),
+	TP_PROTO(const char *polarity, long oldnesting, long newnesting),
 
 	TP_ARGS(polarity, oldnesting, newnesting),
 
 	TP_STRUCT__entry(
 		__field(const char *, polarity)
-		__field(long long, oldnesting)
-		__field(long long, newnesting)
+		__field(long, oldnesting)
+		__field(long, newnesting)
 	),
 
 	TP_fast_assign(
@@ -452,7 +452,7 @@ TRACE_EVENT(rcu_dyntick,
 		__entry->newnesting = newnesting;
 	),
 
-	TP_printk("%s %llx %llx", __entry->polarity,
+	TP_printk("%s %lx %lx", __entry->polarity,
 		  __entry->oldnesting, __entry->newnesting)
 );
 
diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index afe0559d1867..6334f2c1abd0 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -31,7 +31,7 @@
 #endif /* #else #ifdef CONFIG_RCU_TRACE */
 
 /* Offset to allow for unmatched rcu_irq_{enter,exit}(). */
-#define DYNTICK_IRQ_NONIDLE	((INT_MAX / 2) + 1)
+#define DYNTICK_IRQ_NONIDLE	((LONG_MAX / 2) + 1)
 
 
 /*
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index d069ba2d8412..92de3bacda07 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -946,7 +946,7 @@ void rcu_irq_exit_irqson(void)
  * we really have exited idle, and must do the appropriate accounting.
  * The caller must have disabled interrupts.
  */
-static void rcu_eqs_exit_common(long long newval, int user)
+static void rcu_eqs_exit_common(long newval, int user)
 {
 	RCU_TRACE(struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);)
 
@@ -979,7 +979,7 @@ static void rcu_eqs_exit_common(long long newval, int user)
 static void rcu_eqs_exit(bool user)
 {
 	struct rcu_dynticks *rdtp;
-	long long oldval;
+	long oldval;
 
 	lockdep_assert_irqs_disabled();
 	rdtp = this_cpu_ptr(&rcu_dynticks);
@@ -1043,7 +1043,7 @@ void rcu_user_exit(void)
 void rcu_nmi_enter(void)
 {
 	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
-	int incby = 2;
+	long incby = 2;
 
 	/* Complain about underflow. */
 	WARN_ON_ONCE(rdtp->dynticks_nmi_nesting < 0);
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index dbd7e3753bed..6488a3b0e729 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -38,8 +38,8 @@
  * Dynticks per-CPU state.
  */
 struct rcu_dynticks {
-	long long dynticks_nesting; /* Track irq/process nesting level. */
-	int dynticks_nmi_nesting;   /* Track NMI nesting level. */
+	long dynticks_nesting;      /* Track process nesting level. */
+	long dynticks_nmi_nesting;  /* Track irq/NMI nesting level. */
 	atomic_t dynticks;	    /* Even value for idle, else odd. */
 	bool rcu_need_heavy_qs;     /* GP old, need heavy quiescent state. */
 	unsigned long rcu_qs_ctr;   /* Light universal quiescent state ctr. */
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index db85ca3975f1..e94e754464cd 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1687,7 +1687,7 @@ static void print_cpu_stall_info(struct rcu_state *rsp, int cpu)
 	}
 	print_cpu_stall_fast_no_hz(fast_no_hz, cpu);
 	delta = rdp->mynode->gpnum - rdp->rcu_iw_gpnum;
-	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%llx/%d softirq=%u/%u fqs=%ld %s\n",
+	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%ld softirq=%u/%u fqs=%ld %s\n",
 	       cpu,
 	       "O."[!!cpu_online(cpu)],
 	       "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
-- 
2.5.2

* [PATCH tip/core/rcu 10/17] rcu: Add ->dynticks field to rcu_dyntick trace event
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (8 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 09/17] rcu: Shrink ->dynticks_{nmi_,}nesting from long long to long Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 11/17] rcu: Stop duplicating lockdep checks in RCU's idle-entry code Paul E. McKenney
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/trace/events/rcu.h | 13 ++++++++-----
 kernel/rcu/tree.c          | 16 +++++++---------
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index b0a48231ea0e..d103de9f8c10 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -436,24 +436,27 @@ TRACE_EVENT(rcu_fqs,
  */
 TRACE_EVENT(rcu_dyntick,
 
-	TP_PROTO(const char *polarity, long oldnesting, long newnesting),
+	TP_PROTO(const char *polarity, long oldnesting, long newnesting, atomic_t dynticks),
 
-	TP_ARGS(polarity, oldnesting, newnesting),
+	TP_ARGS(polarity, oldnesting, newnesting, dynticks),
 
 	TP_STRUCT__entry(
 		__field(const char *, polarity)
 		__field(long, oldnesting)
 		__field(long, newnesting)
+		__field(int, dynticks)
 	),
 
 	TP_fast_assign(
 		__entry->polarity = polarity;
 		__entry->oldnesting = oldnesting;
 		__entry->newnesting = newnesting;
+		__entry->dynticks = atomic_read(&dynticks);
 	),
 
-	TP_printk("%s %lx %lx", __entry->polarity,
-		  __entry->oldnesting, __entry->newnesting)
+	TP_printk("%s %lx %lx %#3x", __entry->polarity,
+		  __entry->oldnesting, __entry->newnesting,
+		  __entry->dynticks & 0xfff)
 );
 
 /*
@@ -801,7 +804,7 @@ TRACE_EVENT(rcu_barrier,
 					 grplo, grphi, gp_tasks) do { } \
 	while (0)
 #define trace_rcu_fqs(rcuname, gpnum, cpu, qsevent) do { } while (0)
-#define trace_rcu_dyntick(polarity, oldnesting, newnesting) do { } while (0)
+#define trace_rcu_dyntick(polarity, oldnesting, newnesting, dyntick) do { } while (0)
 #define trace_rcu_prep_idle(reason) do { } while (0)
 #define trace_rcu_callback(rcuname, rhp, qlen_lazy, qlen) do { } while (0)
 #define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen_lazy, qlen) \
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 92de3bacda07..5febb76809f6 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -761,13 +761,13 @@ static void rcu_eqs_enter_common(bool user)
 	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
 
 	lockdep_assert_irqs_disabled();
-	trace_rcu_dyntick(TPS("Start"), rdtp->dynticks_nesting, 0);
+	trace_rcu_dyntick(TPS("Start"), rdtp->dynticks_nesting, 0, rdtp->dynticks);
 	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 	    !user && !is_idle_task(current)) {
 		struct task_struct *idle __maybe_unused =
 			idle_task(smp_processor_id());
 
-		trace_rcu_dyntick(TPS("Error on entry: not idle task"), rdtp->dynticks_nesting, 0);
+		trace_rcu_dyntick(TPS("Error on entry: not idle task"), rdtp->dynticks_nesting, 0, rdtp->dynticks);
 		rcu_ftrace_dump(DUMP_ORIG);
 		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
 			  current->pid, current->comm,
@@ -880,15 +880,14 @@ void rcu_nmi_exit(void)
 	 * leave it in non-RCU-idle state.
 	 */
 	if (rdtp->dynticks_nmi_nesting != 1) {
-		trace_rcu_dyntick(TPS("--="), rdtp->dynticks_nmi_nesting,
-				  rdtp->dynticks_nmi_nesting - 2);
+		trace_rcu_dyntick(TPS("--="), rdtp->dynticks_nmi_nesting, rdtp->dynticks_nmi_nesting - 2, rdtp->dynticks);
 		WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* No store tearing. */
 			   rdtp->dynticks_nmi_nesting - 2);
 		return;
 	}
 
 	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
-	trace_rcu_dyntick(TPS("Startirq"), rdtp->dynticks_nmi_nesting, 0);
+	trace_rcu_dyntick(TPS("Startirq"), rdtp->dynticks_nmi_nesting, 0, rdtp->dynticks);
 	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
 	rcu_dynticks_eqs_enter();
 }
@@ -953,14 +952,13 @@ static void rcu_eqs_exit_common(long newval, int user)
 	rcu_dynticks_task_exit();
 	rcu_dynticks_eqs_exit();
 	rcu_cleanup_after_idle();
-	trace_rcu_dyntick(TPS("End"), rdtp->dynticks_nesting, newval);
+	trace_rcu_dyntick(TPS("End"), rdtp->dynticks_nesting, newval, rdtp->dynticks);
 	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
 	    !user && !is_idle_task(current)) {
 		struct task_struct *idle __maybe_unused =
 			idle_task(smp_processor_id());
 
-		trace_rcu_dyntick(TPS("Error on exit: not idle task"),
-				  rdtp->dynticks_nesting, newval);
+		trace_rcu_dyntick(TPS("Error on exit: not idle task"), rdtp->dynticks_nesting, newval, rdtp->dynticks);
 		rcu_ftrace_dump(DUMP_ORIG);
 		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
 			  current->pid, current->comm,
@@ -1062,7 +1060,7 @@ void rcu_nmi_enter(void)
 	}
 	trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),
 			  rdtp->dynticks_nmi_nesting,
-			  rdtp->dynticks_nmi_nesting + incby);
+			  rdtp->dynticks_nmi_nesting + incby, rdtp->dynticks);
 	WRITE_ONCE(rdtp->dynticks_nmi_nesting, /* Prevent store tearing. */
 		   rdtp->dynticks_nmi_nesting + incby);
 	barrier();
-- 
2.5.2

* [PATCH tip/core/rcu 11/17] rcu: Stop duplicating lockdep checks in RCU's idle-entry code
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (9 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 10/17] rcu: Add ->dynticks field to rcu_dyntick trace event Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 12/17] rcu: Avoid ->dynticks_nesting store tearing Paul E. McKenney
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

The three RCU_LOCKDEP_WARN() calls in rcu_eqs_enter_common() are
redundant with other lockdep checks, so this commit removes them.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5febb76809f6..80cada11f544 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -781,17 +781,6 @@ static void rcu_eqs_enter_common(bool user)
 	rdtp->dynticks_nesting = 0;
 	rcu_dynticks_eqs_enter();
 	rcu_dynticks_task_enter();
-
-	/*
-	 * It is illegal to enter an extended quiescent state while
-	 * in an RCU read-side critical section.
-	 */
-	RCU_LOCKDEP_WARN(lock_is_held(&rcu_lock_map),
-			 "Illegal idle entry in RCU read-side critical section.");
-	RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map),
-			 "Illegal idle entry in RCU-bh read-side critical section.");
-	RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map),
-			 "Illegal idle entry in RCU-sched read-side critical section.");
 }
 
 /*
-- 
2.5.2

* [PATCH tip/core/rcu 12/17] rcu: Avoid ->dynticks_nesting store tearing
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (10 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 11/17] rcu: Stop duplicating lockdep checks in RCU's idle-entry code Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 13/17] rcu: Fold rcu_eqs_enter_common() into rcu_eqs_enter() Paul E. McKenney
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Although ->dynticks_nesting is updated only at process level, it is
accessed from hardirq context to check for interrupt-from-idle quiescent
states.  Store tearing is thus possible, so this commit applies
WRITE_ONCE() to ->dynticks_nesting stores.
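
For illustration (not itself part of the patch), the hardirq-time reader
that motivates these marked stores is rcu_is_cpu_rrupt_from_idle(), as
of patch 6 in this series:

	static int rcu_is_cpu_rrupt_from_idle(void)
	{
		return __this_cpu_read(rcu_dynticks.dynticks_nesting) <= 0 &&
		       __this_cpu_read(rcu_dynticks.dynticks_nmi_nesting) <= 1;
	}

	/* That read can interrupt a process-level update at any point, so
	 * the stores are marked, for example: */
	WRITE_ONCE(rdtp->dynticks_nesting, 0);	/* Avoid irq-access tearing. */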

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 80cada11f544..b2ded4d436c6 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -778,7 +778,7 @@ static void rcu_eqs_enter_common(bool user)
 		do_nocb_deferred_wakeup(rdp);
 	}
 	rcu_prepare_for_idle();
-	rdtp->dynticks_nesting = 0;
+	WRITE_ONCE(rdtp->dynticks_nesting, 0); /* Avoid irq-access tearing. */
 	rcu_dynticks_eqs_enter();
 	rcu_dynticks_task_enter();
 }
@@ -976,7 +976,7 @@ static void rcu_eqs_exit(bool user)
 		rdtp->dynticks_nesting++;
 	} else {
 		rcu_eqs_exit_common(1, user);
-		rdtp->dynticks_nesting = 1;
+		WRITE_ONCE(rdtp->dynticks_nesting, 1);
 		WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
 	}
 }
@@ -3713,7 +3713,7 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp)
 	if (rcu_segcblist_empty(&rdp->cblist) && /* No early-boot CBs? */
 	    !init_nocb_callback_list(rdp))
 		rcu_segcblist_init(&rdp->cblist);  /* Re-enable callbacks. */
-	rdp->dynticks->dynticks_nesting = 1;
+	rdp->dynticks->dynticks_nesting = 1;	/* CPU not up, no tearing. */
 	rcu_dynticks_eqs_online();
 	raw_spin_unlock_rcu_node(rnp);		/* irqs remain disabled. */
 
-- 
2.5.2

* [PATCH tip/core/rcu 13/17] rcu: Fold rcu_eqs_enter_common() into rcu_eqs_enter()
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (11 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 12/17] rcu: Avoid ->dynticks_nesting store tearing Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 14/17] rcu: Fold rcu_eqs_exit_common() into rcu_eqs_exit() Paul E. McKenney
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

There is now only one call to rcu_eqs_enter_common() and there is no other
reason to keep it separate.  This commit therefore inlines it into its
sole call site, saving a few lines of code in the process.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 43 ++++++++++++++++---------------------------
 1 file changed, 16 insertions(+), 27 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b2ded4d436c6..5c8a5796c71f 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -749,16 +749,27 @@ cpu_needs_another_gp(struct rcu_state *rsp, struct rcu_data *rdp)
 }
 
 /*
- * rcu_eqs_enter_common - current CPU is entering an extended quiescent state
+ * Enter an RCU extended quiescent state, which can be either the
+ * idle loop or adaptive-tickless usermode execution.
  *
- * Enter idle, doing appropriate accounting.  The caller must have
- * disabled interrupts.
+ * We crowbar the ->dynticks_nmi_nesting field to zero to allow for
+ * the possibility of usermode upcalls having messed up our count
+ * of interrupt nesting level during the prior busy period.
  */
-static void rcu_eqs_enter_common(bool user)
+static void rcu_eqs_enter(bool user)
 {
 	struct rcu_state *rsp;
 	struct rcu_data *rdp;
-	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
+	struct rcu_dynticks *rdtp;
+
+	rdtp = this_cpu_ptr(&rcu_dynticks);
+	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0);
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
+		     rdtp->dynticks_nesting == 0);
+	if (rdtp->dynticks_nesting != 1) {
+		rdtp->dynticks_nesting--;
+		return;
+	}
 
 	lockdep_assert_irqs_disabled();
 	trace_rcu_dyntick(TPS("Start"), rdtp->dynticks_nesting, 0, rdtp->dynticks);
@@ -783,28 +794,6 @@ static void rcu_eqs_enter_common(bool user)
 	rcu_dynticks_task_enter();
 }
 
-/*
- * Enter an RCU extended quiescent state, which can be either the
- * idle loop or adaptive-tickless usermode execution.
- *
- * We crowbar the ->dynticks_nmi_nesting field to zero to allow for
- * the possibility of usermode upcalls having messed up our count
- * of interrupt nesting level during the prior busy period.
- */
-static void rcu_eqs_enter(bool user)
-{
-	struct rcu_dynticks *rdtp;
-
-	rdtp = this_cpu_ptr(&rcu_dynticks);
-	WRITE_ONCE(rdtp->dynticks_nmi_nesting, 0);
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     rdtp->dynticks_nesting == 0);
-	if (rdtp->dynticks_nesting == 1)
-		rcu_eqs_enter_common(user);
-	else
-		rdtp->dynticks_nesting--;
-}
-
 /**
  * rcu_idle_enter - inform RCU that current CPU is entering idle
  *
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH tip/core/rcu 14/17] rcu: Fold rcu_eqs_exit_common() into rcu_eqs_exit()
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (12 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 13/17] rcu: Fold rcu_eqs_enter_common() into rcu_eqs_enter() Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 15/17] rcu: Simplify rcu_eqs_{enter,exit}() non-idle task debug code Paul E. McKenney
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

There is now only one call to rcu_eqs_exit_common() and there is no other
reason to keep it separate.  This commit therefore inlines it into its
sole call site, saving a few lines of code in the process.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 50 ++++++++++++++++++--------------------------------
 1 file changed, 18 insertions(+), 32 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5c8a5796c71f..46a8e06bf03e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -917,34 +917,6 @@ void rcu_irq_exit_irqson(void)
 }
 
 /*
- * rcu_eqs_exit_common - current CPU moving away from extended quiescent state
- *
- * If the new value of the ->dynticks_nesting counter was previously zero,
- * we really have exited idle, and must do the appropriate accounting.
- * The caller must have disabled interrupts.
- */
-static void rcu_eqs_exit_common(long newval, int user)
-{
-	RCU_TRACE(struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);)
-
-	rcu_dynticks_task_exit();
-	rcu_dynticks_eqs_exit();
-	rcu_cleanup_after_idle();
-	trace_rcu_dyntick(TPS("End"), rdtp->dynticks_nesting, newval, rdtp->dynticks);
-	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-	    !user && !is_idle_task(current)) {
-		struct task_struct *idle __maybe_unused =
-			idle_task(smp_processor_id());
-
-		trace_rcu_dyntick(TPS("Error on exit: not idle task"), rdtp->dynticks_nesting, newval, rdtp->dynticks);
-		rcu_ftrace_dump(DUMP_ORIG);
-		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
-			  current->pid, current->comm,
-			  idle->pid, idle->comm); /* must be idle task! */
-	}
-}
-
-/*
  * Exit an RCU extended quiescent state, which can be either the
  * idle loop or adaptive-tickless usermode execution.
  *
@@ -963,11 +935,25 @@ static void rcu_eqs_exit(bool user)
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0);
 	if (oldval) {
 		rdtp->dynticks_nesting++;
-	} else {
-		rcu_eqs_exit_common(1, user);
-		WRITE_ONCE(rdtp->dynticks_nesting, 1);
-		WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
+		return;
+	}
+	rcu_dynticks_task_exit();
+	rcu_dynticks_eqs_exit();
+	rcu_cleanup_after_idle();
+	trace_rcu_dyntick(TPS("End"), rdtp->dynticks_nesting, 1, rdtp->dynticks);
+	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
+	    !user && !is_idle_task(current)) {
+		struct task_struct *idle __maybe_unused =
+			idle_task(smp_processor_id());
+
+		trace_rcu_dyntick(TPS("Error on exit: not idle task"), rdtp->dynticks_nesting, 1, rdtp->dynticks);
+		rcu_ftrace_dump(DUMP_ORIG);
+		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
+			  current->pid, current->comm,
+			  idle->pid, idle->comm); /* must be idle task! */
 	}
+	WRITE_ONCE(rdtp->dynticks_nesting, 1);
+	WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
 }
 
 /**
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH tip/core/rcu 15/17] rcu: Simplify rcu_eqs_{enter,exit}() non-idle task debug code
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (13 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 14/17] rcu: Fold rcu_eqs_exit_common() into rcu_eqs_exit() Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 16/17] doc: Update dyntick-idle design documentation for NMI/irq consolidation Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 17/17] tracing, rcu: Remove no longer used trace event rcu_prep_idle Paul E. McKenney
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

The code that checks for non-idle non-nohz_idle-usermode tasks invoking
rcu_eqs_enter() and rcu_eqs_exit() prints a considerable quantity of
helpful information.  However, these checks fire rarely, so the extra
complexity is no longer worth it.  This commit therefore replaces this
debug code with simple WARN_ON_ONCE() statements.
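
As a rough illustration of that simplification (the condition helper and
its name below are made up; IS_ENABLED(), WARN_ONCE(), and WARN_ON_ONCE()
are the real kernel macros used by the patch):

#include <linux/bug.h>
#include <linux/kconfig.h>
#include <linux/types.h>

static bool example_bad_eqs_state(void)
{
	return false;	/* placeholder condition */
}

/* Verbose form: dump extra state by hand, then warn at most once. */
static void example_verbose_check(void)
{
	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && example_bad_eqs_state()) {
		/* print task details, dump trace buffers, ... */
		WARN_ONCE(1, "unexpected eqs state");
	}
}

/* Simplified form: one statement, still limited to a single splat. */
static void example_simple_check(void)
{
	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
		     example_bad_eqs_state());
}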

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/trace/events/rcu.h | 12 +++++-------
 kernel/rcu/tree.c          | 24 ++----------------------
 2 files changed, 7 insertions(+), 29 deletions(-)

diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index d103de9f8c10..adf47c635c8e 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -424,15 +424,13 @@ TRACE_EVENT(rcu_fqs,
  * as argument: "Start" for entering dyntick-idle mode, "Startirq" for
  * entering it from irq/NMI, "End" for leaving it, "Endirq" for leaving it
  * to irq/NMI, "--=" for events moving towards idle, and "++=" for events
- * moving away from idle.  "Error on entry: not idle task" and "Error
- * on exit: not idle task" indicate that a non-idle task is erroneously
- * toying with the idle loop.
+ * moving away from idle.
  *
  * These events also take a pair of numbers, which indicate the nesting
- * depth before and after the event of interest.  Note that task-related
- * and interrupt-related events use two separate counters, and that the
- * "++=" and "--=" events for irq/NMI will change the counter by two,
- * otherwise by one.
+ * depth before and after the event of interest, and a third number that is
+ * the ->dynticks counter.  Note that task-related and interrupt-related
+ * events use two separate counters, and that the "++=" and "--=" events
+ * for irq/NMI will change the counter by two, otherwise by one.
  */
 TRACE_EVENT(rcu_dyntick,
 
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 46a8e06bf03e..4d374d2bc925 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -773,17 +773,7 @@ static void rcu_eqs_enter(bool user)
 
 	lockdep_assert_irqs_disabled();
 	trace_rcu_dyntick(TPS("Start"), rdtp->dynticks_nesting, 0, rdtp->dynticks);
-	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-	    !user && !is_idle_task(current)) {
-		struct task_struct *idle __maybe_unused =
-			idle_task(smp_processor_id());
-
-		trace_rcu_dyntick(TPS("Error on entry: not idle task"), rdtp->dynticks_nesting, 0, rdtp->dynticks);
-		rcu_ftrace_dump(DUMP_ORIG);
-		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
-			  current->pid, current->comm,
-			  idle->pid, idle->comm); /* must be idle task! */
-	}
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
 	for_each_rcu_flavor(rsp) {
 		rdp = this_cpu_ptr(rsp->rda);
 		do_nocb_deferred_wakeup(rdp);
@@ -941,17 +931,7 @@ static void rcu_eqs_exit(bool user)
 	rcu_dynticks_eqs_exit();
 	rcu_cleanup_after_idle();
 	trace_rcu_dyntick(TPS("End"), rdtp->dynticks_nesting, 1, rdtp->dynticks);
-	if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-	    !user && !is_idle_task(current)) {
-		struct task_struct *idle __maybe_unused =
-			idle_task(smp_processor_id());
-
-		trace_rcu_dyntick(TPS("Error on exit: not idle task"), rdtp->dynticks_nesting, 1, rdtp->dynticks);
-		rcu_ftrace_dump(DUMP_ORIG);
-		WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
-			  current->pid, current->comm,
-			  idle->pid, idle->comm); /* must be idle task! */
-	}
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
 	WRITE_ONCE(rdtp->dynticks_nesting, 1);
 	WRITE_ONCE(rdtp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
 }
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH tip/core/rcu 16/17] doc: Update dyntick-idle design documentation for NMI/irq consolidation
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (14 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 15/17] rcu: Simplify rcu_eqs_{enter,exit}() non-idle task debug code Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  2017-12-01 19:36 ` [PATCH tip/core/rcu 17/17] tracing, rcu: Remove no longer used trace event rcu_prep_idle Paul E. McKenney
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 .../Design/Data-Structures/Data-Structures.html    | 46 +++++++++++++++-------
 1 file changed, 32 insertions(+), 14 deletions(-)

diff --git a/Documentation/RCU/Design/Data-Structures/Data-Structures.html b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
index 38d6d800761f..1ac011de606e 100644
--- a/Documentation/RCU/Design/Data-Structures/Data-Structures.html
+++ b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
@@ -1182,8 +1182,8 @@ CPU (and from tracing) unless otherwise stated.
 Its fields are as follows:
 
 <pre>
-  1   int dynticks_nesting;
-  2   int dynticks_nmi_nesting;
+  1   long dynticks_nesting;
+  2   long dynticks_nmi_nesting;
   3   atomic_t dynticks;
   4   bool rcu_need_heavy_qs;
   5   unsigned long rcu_qs_ctr;
@@ -1191,15 +1191,31 @@ Its fields are as follows:
 </pre>
 
 <p>The <tt>-&gt;dynticks_nesting</tt> field counts the
-nesting depth of normal interrupts.
-In addition, this counter is incremented when exiting dyntick-idle
-mode and decremented when entering it.
+nesting depth of process execution, so that in normal circumstances
+this counter has value zero or one.
+NMIs, irqs, and tracers are counted by the <tt>-&gt;dynticks_nmi_nesting</tt>
+field.
+Because NMIs cannot be masked, changes to this variable have to be
+undertaken carefully using an algorithm provided by Andy Lutomirski.
+The initial transition from idle adds one, and nested transitions
+add two, so that a nesting level of five is represented by a
+<tt>-&gt;dynticks_nmi_nesting</tt> value of nine.
 This counter can therefore be thought of as counting the number
 of reasons why this CPU cannot be permitted to enter dyntick-idle
-mode, aside from non-maskable interrupts (NMIs).
-NMIs are counted by the <tt>-&gt;dynticks_nmi_nesting</tt>
-field, except that NMIs that interrupt non-dyntick-idle execution
-are not counted.
+mode, aside from process-level transitions.
+
+<p>However, it turns out that when running in non-idle kernel context,
+the Linux kernel is fully capable of entering interrupt handlers that
+never exit and perhaps also vice versa.
+Therefore, whenever the <tt>-&gt;dynticks_nesting</tt> field is
+incremented up from zero, the <tt>-&gt;dynticks_nmi_nesting</tt> field
+is set to a large positive number, and whenever the
+<tt>-&gt;dynticks_nesting</tt> field is decremented down to zero,
+the <tt>-&gt;dynticks_nmi_nesting</tt> field is set to zero.
+Assuming that the number of misnested interrupts is not sufficient
+to overflow the counter, this approach corrects the
+<tt>-&gt;dynticks_nmi_nesting</tt> field every time the corresponding
+CPU enters the idle loop from process context.
 
 </p><p>The <tt>-&gt;dynticks</tt> field counts the corresponding
 CPU's transitions to and from dyntick-idle mode, so that this counter
@@ -1231,14 +1247,16 @@ in response.
 <tr><th>&nbsp;</th></tr>
 <tr><th align="left">Quick Quiz:</th></tr>
 <tr><td>
-	Why not just count all NMIs?
-	Wouldn't that be simpler and less error prone?
+	Why not simply combine the <tt>-&gt;dynticks_nesting</tt>
+	and <tt>-&gt;dynticks_nmi_nesting</tt> counters into a
+	single counter that just counts the number of reasons that
+	the corresponding CPU is non-idle?
 </td></tr>
 <tr><th align="left">Answer:</th></tr>
 <tr><td bgcolor="#ffffff"><font color="ffffff">
-	It seems simpler only until you think hard about how to go about
-	updating the <tt>rcu_dynticks</tt> structure's
-	<tt>-&gt;dynticks</tt> field.
+	Because this would fail in the presence of interrupts whose
+	handlers never return and of handlers that manage to return
+	from a made-up interrupt.
 </font></td></tr>
 <tr><td>&nbsp;</td></tr>
 </table>
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 18+ messages in thread
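
To make the nesting arithmetic in the documentation hunk above concrete,
here is a minimal standalone C sketch (userspace, not kernel code) that
assumes only the increments stated there: one for the initial transition
from idle, two for each nested irq/NMI.

#include <stdio.h>

/*
 * Encode an irq/NMI nesting level the way the documentation describes:
 * the first transition from idle adds one, each nested one adds two,
 * so level n maps to 2*n - 1 (and level zero stays zero).
 */
static long encode_nmi_nesting(int level)
{
	return level ? 2L * level - 1 : 0;
}

int main(void)
{
	for (int level = 0; level <= 5; level++)
		printf("nesting level %d -> value %ld\n",
		       level, encode_nmi_nesting(level));
	return 0;	/* prints 0, 1, 3, 5, 7, 9; level five gives nine */
}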

* [PATCH tip/core/rcu 17/17] tracing, rcu: Remove no longer used trace event rcu_prep_idle
  2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
                   ` (15 preceding siblings ...)
  2017-12-01 19:36 ` [PATCH tip/core/rcu 16/17] doc: Update dyntick-idle design documentation for NMI/irq consolidation Paul E. McKenney
@ 2017-12-01 19:36 ` Paul E. McKenney
  16 siblings, 0 replies; 18+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Commit c0f4dfd4f90 ("rcu: Make RCU_FAST_NO_HZ take advantage of
numbered callbacks") removed the only users of trace_rcu_prep_idle,
but did not remove the TRACE_EVENT() that defines it.  Because defined
trace events consume kernel memory even when they are never fired,
keeping this one around wastes space.  Remove the obsolete event.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/trace/events/rcu.h | 40 ----------------------------------------
 1 file changed, 40 deletions(-)

diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index adf47c635c8e..9bafeaf4e0e0 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -458,45 +458,6 @@ TRACE_EVENT(rcu_dyntick,
 );
 
 /*
- * Tracepoint for RCU preparation for idle, the goal being to get RCU
- * processing done so that the current CPU can shut off its scheduling
- * clock and enter dyntick-idle mode.  One way to accomplish this is
- * to drain all RCU callbacks from this CPU, and the other is to have
- * done everything RCU requires for the current grace period.  In this
- * latter case, the CPU will be awakened at the end of the current grace
- * period in order to process the remainder of its callbacks.
- *
- * These tracepoints take a string as argument:
- *
- *	"No callbacks": Nothing to do, no callbacks on this CPU.
- *	"In holdoff": Nothing to do, holding off after unsuccessful attempt.
- *	"Begin holdoff": Attempt failed, don't retry until next jiffy.
- *	"Dyntick with callbacks": Entering dyntick-idle despite callbacks.
- *	"Dyntick with lazy callbacks": Entering dyntick-idle w/lazy callbacks.
- *	"More callbacks": Still more callbacks, try again to clear them out.
- *	"Callbacks drained": All callbacks processed, off to dyntick idle!
- *	"Timer": Timer fired to cause CPU to continue processing callbacks.
- *	"Demigrate": Timer fired on wrong CPU, woke up correct CPU.
- *	"Cleanup after idle": Idle exited, timer canceled.
- */
-TRACE_EVENT(rcu_prep_idle,
-
-	TP_PROTO(const char *reason),
-
-	TP_ARGS(reason),
-
-	TP_STRUCT__entry(
-		__field(const char *, reason)
-	),
-
-	TP_fast_assign(
-		__entry->reason = reason;
-	),
-
-	TP_printk("%s", __entry->reason)
-);
-
-/*
  * Tracepoint for the registration of a single RCU callback function.
  * The first argument is the type of RCU, the second argument is
  * a pointer to the RCU callback itself, the third element is the
@@ -803,7 +764,6 @@ TRACE_EVENT(rcu_barrier,
 	while (0)
 #define trace_rcu_fqs(rcuname, gpnum, cpu, qsevent) do { } while (0)
 #define trace_rcu_dyntick(polarity, oldnesting, newnesting, dyntick) do { } while (0)
-#define trace_rcu_prep_idle(reason) do { } while (0)
 #define trace_rcu_callback(rcuname, rhp, qlen_lazy, qlen) do { } while (0)
 #define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen_lazy, qlen) \
 	do { } while (0)
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2017-12-01 19:41 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-01 19:36 [PATCH tip/core/rcu 0/17] RCU dyntick updates for v4.16 Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 01/17] rcu: Avoid ->dynticks_nmi_nesting store tearing Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 02/17] rcu: Reduce dyntick-idle state space Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 03/17] rcu: Move rcu_nmi_{enter,exit}() to prepare for consolidation Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 04/17] rcu: Clamp ->dynticks_nmi_nesting at eqs entry/exit Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 05/17] rcu: Define rcu_irq_{enter,exit}() in terms of rcu_nmi_{enter,exit}() Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 06/17] rcu: Make ->dynticks_nesting be a simple counter Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 07/17] rcu: Eliminate rcu_irq_enter_disabled() Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 08/17] rcu: Add tracing to irq/NMI dyntick-idle transitions Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 09/17] rcu: Shrink ->dynticks_{nmi_,}nesting from long long to long Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 10/17] rcu: Add ->dynticks field to rcu_dyntick trace event Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 11/17] rcu: Stop duplicating lockdep checks in RCU's idle-entry code Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 12/17] rcu: Avoid ->dynticks_nesting store tearing Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 13/17] rcu: Fold rcu_eqs_enter_common() into rcu_eqs_enter() Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 14/17] rcu: Fold rcu_eqs_exit_common() into rcu_eqs_exit() Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 15/17] rcu: Simplify rcu_eqs_{enter,exit}() non-idle task debug code Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 16/17] doc: Update dyntick-idle design documentation for NMI/irq consolidation Paul E. McKenney
2017-12-01 19:36 ` [PATCH tip/core/rcu 17/17] tracing, rcu: Remove no longer used trace event rcu_prep_idle Paul E. McKenney

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).