* [PATCH tip/core/rcu 0/14] Preemptible-RCU updates for 3.20
@ 2015-01-07 17:32 Paul E. McKenney
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani

Hello!

This series fixes some problems in preemptible RCU.

1.	Protect rcu_boost() lockless accesses with ACCESS_ONCE().

2-11.	Fix a low-probability long-standing bug in preemptible RCU that
	could occur in systems with more than 16 CPUs, but only if a full
	bank of 16 CPUs (as in 0-15, 16-31, 32-47, ...) was taken offline.
	See http://paulmck.livejournal.com/37782.html for more details.

12.	Revert old commit that worked around an rt_mutex bug.  This old
	commit could cause priority inversion, and the rt_mutex bug has
	long since been fixed.  Courtesy of Lai Jiangshan.

13.	Simplify grace-period code, enabled by 2-11 above.

14.	Remove redundant RCU callback list initialization.

							Thanx, Paul

------------------------------------------------------------------------

 b/kernel/rcu/tree.c        |  115 +++++++++++++-------------
 b/kernel/rcu/tree.h        |   24 -----
 b/kernel/rcu/tree_plugin.h |  196 ++++++++++-----------------------------------
 3 files changed, 102 insertions(+), 233 deletions(-)



* [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-07 17:32 [PATCH tip/core/rcu 0/14] Preemptible-RCU updates for 3.20 Paul E. McKenney
@ 2015-01-07 17:32 ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 02/14] rcu: Rename "empty" to "empty_norm" in preparation for boost rework Paul E. McKenney
                     ` (13 more replies)
  0 siblings, 14 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit prevents random compiler optimizations by applying
ACCESS_ONCE() to lockless accesses.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree_plugin.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 3ec85cb5d544..d59913ef8360 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1127,7 +1127,8 @@ static int rcu_boost(struct rcu_node *rnp)
 	struct task_struct *t;
 	struct list_head *tb;
 
-	if (rnp->exp_tasks == NULL && rnp->boost_tasks == NULL)
+	if (ACCESS_ONCE(rnp->exp_tasks) == NULL &&
+	    ACCESS_ONCE(rnp->boost_tasks) == NULL)
 		return 0;  /* Nothing left to boost. */
 
 	raw_spin_lock_irqsave(&rnp->lock, flags);
-- 
1.8.1.5
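
For background, ACCESS_ONCE() is at bottom a volatile cast, so the compiler
must emit exactly one load or store for each marked access and may not
refetch, fuse, or tear it.  A minimal sketch of how that protects the check
above, using the classic include/linux/compiler.h definition of that era
(illustrative fragment, not part of the patch):

    /* Classic definition from include/linux/compiler.h: */
    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    /*
     * Without the volatile casts, the compiler could legally reload
     * rnp->exp_tasks and rnp->boost_tasks after the NULL checks, or
     * fuse repeated loads into one, either of which is unsafe because
     * other CPUs may update these fields concurrently.
     */
    if (ACCESS_ONCE(rnp->exp_tasks) == NULL &&
        ACCESS_ONCE(rnp->boost_tasks) == NULL)
        return 0;  /* Nothing left to boost. */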



* [PATCH tip/core/rcu 02/14] rcu: Rename "empty" to "empty_norm" in preparation for boost rework
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 03/14] rcu: Abstract rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu() Paul E. McKenney
                     ` (12 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit undertakes a simple variable renaming to make way for
some rework of RCU priority boosting.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree_plugin.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index d59913ef8360..3d22f0b2dea9 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -313,8 +313,8 @@ static struct list_head *rcu_next_node_entry(struct task_struct *t,
  */
 void rcu_read_unlock_special(struct task_struct *t)
 {
-	int empty;
 	int empty_exp;
+	int empty_norm;
 	int empty_exp_now;
 	unsigned long flags;
 	struct list_head *np;
@@ -367,7 +367,7 @@ void rcu_read_unlock_special(struct task_struct *t)
 				break;
 			raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
 		}
-		empty = !rcu_preempt_blocked_readers_cgp(rnp);
+		empty_norm = !rcu_preempt_blocked_readers_cgp(rnp);
 		empty_exp = !rcu_preempted_readers_exp(rnp);
 		smp_mb(); /* ensure expedited fastpath sees end of RCU c-s. */
 		np = rcu_next_node_entry(t, rnp);
@@ -393,7 +393,7 @@ void rcu_read_unlock_special(struct task_struct *t)
 		 * so we must take a snapshot of the expedited state.
 		 */
 		empty_exp_now = !rcu_preempted_readers_exp(rnp);
-		if (!empty && !rcu_preempt_blocked_readers_cgp(rnp)) {
+		if (!empty_norm && !rcu_preempt_blocked_readers_cgp(rnp)) {
 			trace_rcu_quiescent_state_report(TPS("preempt_rcu"),
 							 rnp->gpnum,
 							 0, rnp->qsmask,
-- 
1.8.1.5



* [PATCH tip/core/rcu 03/14] rcu: Abstract rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu()
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 02/14] rcu: Rename "empty" to "empty_norm" in preparation for boost rework Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 04/14] rcu: Make rcu_read_unlock_special() propagate ->qsmaskinit bit clearing Paul E. McKenney
                     ` (11 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit abstracts rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu()
in preparation for the rework of RCU priority boosting.  This new function
will be invoked from rcu_read_unlock_special() in the reworked scheme,
which is why rcu_cleanup_dead_rnp() assumes that the leaf rcu_node
structure's ->qsmaskinit field has already been updated.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c        | 67 ++++++++++++++++++++++++++++++++++--------------
 kernel/rcu/tree.h        |  1 +
 kernel/rcu/tree_plugin.h | 17 ++++++++++++
 3 files changed, 66 insertions(+), 19 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4c106fcc0d54..75c6b3301abb 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2227,6 +2227,46 @@ static void rcu_cleanup_dying_cpu(struct rcu_state *rsp)
 }
 
 /*
+ * All CPUs for the specified rcu_node structure have gone offline,
+ * and all tasks that were preempted within an RCU read-side critical
+ * section while running on one of those CPUs have since exited their RCU
+ * read-side critical section.  Some other CPU is reporting this fact with
+ * the specified rcu_node structure's ->lock held and interrupts disabled.
+ * This function therefore goes up the tree of rcu_node structures,
+ * clearing the corresponding bits in the ->qsmaskinit fields.  Note that
+ * the leaf rcu_node structure's ->qsmaskinit field has already been
+ * updated.
+ *
+ * This function does check that the specified rcu_node structure has
+ * all CPUs offline and no blocked tasks, so it is OK to invoke it
+ * prematurely.  That said, invoking it after the fact will cost you
+ * a needless lock acquisition.  So once it has done its work, don't
+ * invoke it again.
+ */
+static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
+{
+	long mask;
+	struct rcu_node *rnp = rnp_leaf;
+
+	if (rnp->qsmaskinit || rcu_preempt_has_tasks(rnp))
+		return;
+	for (;;) {
+		mask = rnp->grpmask;
+		rnp = rnp->parent;
+		if (!rnp)
+			break;
+		raw_spin_lock(&rnp->lock); /* irqs already disabled. */
+		smp_mb__after_unlock_lock(); /* GP memory ordering. */
+		rnp->qsmaskinit &= ~mask;
+		if (rnp->qsmaskinit) {
+			raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
+			return;
+		}
+		raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
+	}
+}
+
+/*
  * The CPU has been completely removed, and some other CPU is reporting
  * this fact from process context.  Do the remainder of the cleanup,
  * including orphaning the outgoing CPU's RCU callbacks, and also
@@ -2236,7 +2276,6 @@ static void rcu_cleanup_dying_cpu(struct rcu_state *rsp)
 static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 {
 	unsigned long flags;
-	unsigned long mask;
 	int need_report = 0;
 	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
 	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
@@ -2252,24 +2291,14 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 	rcu_send_cbs_to_orphanage(cpu, rsp, rnp, rdp);
 	rcu_adopt_orphan_cbs(rsp, flags);
 
-	/* Remove the outgoing CPU from the masks in the rcu_node hierarchy. */
-	mask = rdp->grpmask;	/* rnp->grplo is constant. */
-	do {
-		raw_spin_lock(&rnp->lock);	/* irqs already disabled. */
-		smp_mb__after_unlock_lock();
-		rnp->qsmaskinit &= ~mask;
-		if (rnp->qsmaskinit != 0) {
-			if (rnp != rdp->mynode)
-				raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
-			break;
-		}
-		if (rnp == rdp->mynode)
-			need_report = rcu_preempt_offline_tasks(rsp, rnp, rdp);
-		else
-			raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
-		mask = rnp->grpmask;
-		rnp = rnp->parent;
-	} while (rnp != NULL);
+	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
+	raw_spin_lock(&rnp->lock);	/* irqs already disabled. */
+	smp_mb__after_unlock_lock();	/* Enforce GP memory-order guarantee. */
+	rnp->qsmaskinit &= ~rdp->grpmask;
+	if (rnp->qsmaskinit == 0) {
+		need_report = rcu_preempt_offline_tasks(rsp, rnp, rdp);
+		rcu_cleanup_dead_rnp(rnp);
+	}
 
 	/*
 	 * We still hold the leaf rcu_node structure lock here, and
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 8e7b1843896e..9315477b47d9 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -552,6 +552,7 @@ static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
 #ifdef CONFIG_HOTPLUG_CPU
 static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp,
 				      unsigned long flags);
+static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 static void rcu_print_detail_task_stall(struct rcu_state *rsp);
 static int rcu_print_task_stall(struct rcu_node *rnp);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 3d22f0b2dea9..d044b9cbbd97 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -307,6 +307,15 @@ static struct list_head *rcu_next_node_entry(struct task_struct *t,
 }
 
 /*
+ * Return true if the specified rcu_node structure has tasks that were
+ * preempted within an RCU read-side critical section.
+ */
+static bool rcu_preempt_has_tasks(struct rcu_node *rnp)
+{
+	return !list_empty(&rnp->blkd_tasks);
+}
+
+/*
  * Handle special cases during rcu_read_unlock(), such as needing to
  * notify RCU core processing or task having blocked during the RCU
  * read-side critical section.
@@ -967,6 +976,14 @@ static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp, unsigned long flags)
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 }
 
+/*
+ * Because there is no preemptible RCU, there can be no readers blocked.
+ */
+static bool rcu_preempt_has_tasks(struct rcu_node *rnp)
+{
+	return false;
+}
+
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
 /*
-- 
1.8.1.5



* [PATCH tip/core/rcu 04/14] rcu: Make rcu_read_unlock_special() propagate ->qsmaskinit bit clearing
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 02/14] rcu: Rename "empty" to "empty_norm" in preparation for boost rework Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 03/14] rcu: Abstract rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu() Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 05/14] rcu: Don't migrate blocked tasks even if all corresponding CPUs offline Paul E. McKenney
                     ` (10 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit causes rcu_read_unlock_special() to propagate ->qsmaskinit
bit clearing up the rcu_node tree once a given rcu_node structure's
blkd_tasks list becomes empty.  This is the final commit in preparation
for the rework of RCU priority boosting:  It enables preempted tasks to
remain queued on their rcu_node structure even after all of that rcu_node
structure's CPUs have gone offline.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c        |  4 ++++
 kernel/rcu/tree_plugin.h | 16 +++++++++++++---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 75c6b3301abb..6625a1b5d9a1 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2329,6 +2329,10 @@ static void rcu_cleanup_dying_cpu(struct rcu_state *rsp)
 {
 }
 
+static void __maybe_unused rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
+{
+}
+
 static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 {
 }
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index d044b9cbbd97..8a2b84157d34 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -322,9 +322,10 @@ static bool rcu_preempt_has_tasks(struct rcu_node *rnp)
  */
 void rcu_read_unlock_special(struct task_struct *t)
 {
-	int empty_exp;
-	int empty_norm;
-	int empty_exp_now;
+	bool empty;
+	bool empty_exp;
+	bool empty_norm;
+	bool empty_exp_now;
 	unsigned long flags;
 	struct list_head *np;
 #ifdef CONFIG_RCU_BOOST
@@ -376,6 +377,7 @@ void rcu_read_unlock_special(struct task_struct *t)
 				break;
 			raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
 		}
+		empty = !rcu_preempt_has_tasks(rnp);
 		empty_norm = !rcu_preempt_blocked_readers_cgp(rnp);
 		empty_exp = !rcu_preempted_readers_exp(rnp);
 		smp_mb(); /* ensure expedited fastpath sees end of RCU c-s. */
@@ -396,6 +398,14 @@ void rcu_read_unlock_special(struct task_struct *t)
 #endif /* #ifdef CONFIG_RCU_BOOST */
 
 		/*
+		 * If this was the last task on the list, go see if we
+		 * need to propagate ->qsmaskinit bit clearing up the
+		 * rcu_node tree.
+		 */
+		if (!empty && !rcu_preempt_has_tasks(rnp))
+			rcu_cleanup_dead_rnp(rnp);
+
+		/*
 		 * If this was the last task on the current list, and if
 		 * we aren't waiting on any CPUs, report the quiescent state.
 		 * Note that rcu_report_unblock_qs_rnp() releases rnp->lock,
-- 
1.8.1.5



* [PATCH tip/core/rcu 05/14] rcu: Don't migrate blocked tasks even if all corresponding CPUs offline
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (2 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 04/14] rcu: Make rcu_read_unlock_special() propagate ->qsmaskinit bit clearing Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 06/14] rcu: Shorten irq-disable region in rcu_cleanup_dead_cpu() Paul E. McKenney
                     ` (9 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

When the last CPU associated with a given leaf rcu_node structure
goes offline, something must be done about the tasks queued on that
rcu_node structure.  Each of these tasks has been preempted on one of
the leaf rcu_node structure's CPUs while in an RCU read-side critical
section that it has not yet exited.  Handling these tasks is the job of
rcu_preempt_offline_tasks(), which migrates them from the leaf rcu_node
structure to the root rcu_node structure.

Unfortunately, this migration has to be done one task at a time because
each task's allegiance must be shifted from the original leaf rcu_node to
the root, so that future attempts to deal with these tasks will acquire
the root rcu_node structure's ->lock rather than that of the leaf.
Worse yet, this migration must be done with interrupts disabled, which
is not so good for realtime response, especially given that there is
no bound on the number of tasks on a given rcu_node structure's list.
(OK, OK, there is a bound, it is just that it is unreasonably large,
especially on 64-bit systems.)  This was not considered a problem back
when rcu_preempt_offline_tasks() was first written because realtime
systems were assumed not to do CPU-hotplug operations while real-time
applications were running.  This assumption has proved of dubious validity
given that people are starting to run multiple realtime applications
on a single SMP system and that it is common practice to offline then
online a CPU before starting its real-time application in order to clear
extraneous processing off of that CPU.  So we now need CPU hotplug
operations to avoid undue latencies.

This commit therefore avoids migrating these tasks, instead letting
them be dequeued one by one from the original leaf rcu_node structure
by rcu_read_unlock_special().  This means that the clearing of bits
from the upper-level rcu_node structures must be deferred until the
last such task has been dequeued, because otherwise subsequent grace
periods won't wait on them.  This commit has the beneficial side effect
of simplifying the CPU-hotplug code for TREE_PREEMPT_RCU, especially in
CONFIG_RCU_BOOST builds.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c        |  20 +-------
 kernel/rcu/tree.h        |  18 -------
 kernel/rcu/tree_plugin.h | 126 +----------------------------------------------
 3 files changed, 4 insertions(+), 160 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6625a1b5d9a1..84f16cf05991 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2276,7 +2276,6 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
 static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 {
 	unsigned long flags;
-	int need_report = 0;
 	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
 	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
 
@@ -2295,25 +2294,10 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 	raw_spin_lock(&rnp->lock);	/* irqs already disabled. */
 	smp_mb__after_unlock_lock();	/* Enforce GP memory-order guarantee. */
 	rnp->qsmaskinit &= ~rdp->grpmask;
-	if (rnp->qsmaskinit == 0) {
-		need_report = rcu_preempt_offline_tasks(rsp, rnp, rdp);
+	if (rnp->qsmaskinit == 0 && !rcu_preempt_has_tasks(rnp))
 		rcu_cleanup_dead_rnp(rnp);
-	}
-
-	/*
-	 * We still hold the leaf rcu_node structure lock here, and
-	 * irqs are still disabled.  The reason for this subterfuge is
-	 * because invoking rcu_report_unblock_qs_rnp() with ->orphan_lock
-	 * held leads to deadlock.
-	 */
 	raw_spin_unlock(&rsp->orphan_lock); /* irqs remain disabled. */
-	rnp = rdp->mynode;
-	if (need_report & RCU_OFL_TASKS_NORM_GP)
-		rcu_report_unblock_qs_rnp(rnp, flags);
-	else
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
-	if (need_report & RCU_OFL_TASKS_EXP_GP)
-		rcu_report_exp_rnp(rsp, rnp, true);
+	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	WARN_ONCE(rdp->qlen != 0 || rdp->nxtlist != NULL,
 		  "rcu_cleanup_dead_cpu: Callbacks on offline CPU %d: qlen=%lu, nxtlist=%p\n",
 		  cpu, rdp->qlen, rdp->nxtlist);
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 9315477b47d9..883ebc8e2b6e 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -514,13 +514,6 @@ extern struct list_head rcu_struct_flavors;
 #define for_each_rcu_flavor(rsp) \
 	list_for_each_entry((rsp), &rcu_struct_flavors, flavors)
 
-/* Return values for rcu_preempt_offline_tasks(). */
-
-#define RCU_OFL_TASKS_NORM_GP	0x1		/* Tasks blocking normal */
-						/*  GP were moved to root. */
-#define RCU_OFL_TASKS_EXP_GP	0x2		/* Tasks blocking expedited */
-						/*  GP were moved to root. */
-
 /*
  * RCU implementation internal declarations:
  */
@@ -550,24 +543,13 @@ long rcu_batches_completed(void);
 static void rcu_preempt_note_context_switch(void);
 static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
 #ifdef CONFIG_HOTPLUG_CPU
-static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp,
-				      unsigned long flags);
 static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 static void rcu_print_detail_task_stall(struct rcu_state *rsp);
 static int rcu_print_task_stall(struct rcu_node *rnp);
 static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
-#ifdef CONFIG_HOTPLUG_CPU
-static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
-				     struct rcu_node *rnp,
-				     struct rcu_data *rdp);
-#endif /* #ifdef CONFIG_HOTPLUG_CPU */
 static void rcu_preempt_check_callbacks(void);
 void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
-#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_PREEMPT_RCU)
-static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
-			       bool wake);
-#endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_PREEMPT_RCU) */
 static void __init __rcu_init_preempt(void);
 static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
 static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 8a2b84157d34..d594da48f4b4 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -103,6 +103,8 @@ RCU_STATE_INITIALIZER(rcu_preempt, 'p', call_rcu);
 static struct rcu_state *rcu_state_p = &rcu_preempt_state;
 
 static int rcu_preempted_readers_exp(struct rcu_node *rnp);
+static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
+			       bool wake);
 
 /*
  * Tell them what RCU they are running.
@@ -545,92 +547,6 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
 
 #ifdef CONFIG_HOTPLUG_CPU
 
-/*
- * Handle tasklist migration for case in which all CPUs covered by the
- * specified rcu_node have gone offline.  Move them up to the root
- * rcu_node.  The reason for not just moving them to the immediate
- * parent is to remove the need for rcu_read_unlock_special() to
- * make more than two attempts to acquire the target rcu_node's lock.
- * Returns true if there were tasks blocking the current RCU grace
- * period.
- *
- * Returns 1 if there was previously a task blocking the current grace
- * period on the specified rcu_node structure.
- *
- * The caller must hold rnp->lock with irqs disabled.
- */
-static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
-				     struct rcu_node *rnp,
-				     struct rcu_data *rdp)
-{
-	struct list_head *lp;
-	struct list_head *lp_root;
-	int retval = 0;
-	struct rcu_node *rnp_root = rcu_get_root(rsp);
-	struct task_struct *t;
-
-	if (rnp == rnp_root) {
-		WARN_ONCE(1, "Last CPU thought to be offlined?");
-		return 0;  /* Shouldn't happen: at least one CPU online. */
-	}
-
-	/* If we are on an internal node, complain bitterly. */
-	WARN_ON_ONCE(rnp != rdp->mynode);
-
-	/*
-	 * Move tasks up to root rcu_node.  Don't try to get fancy for
-	 * this corner-case operation -- just put this node's tasks
-	 * at the head of the root node's list, and update the root node's
-	 * ->gp_tasks and ->exp_tasks pointers to those of this node's,
-	 * if non-NULL.  This might result in waiting for more tasks than
-	 * absolutely necessary, but this is a good performance/complexity
-	 * tradeoff.
-	 */
-	if (rcu_preempt_blocked_readers_cgp(rnp) && rnp->qsmask == 0)
-		retval |= RCU_OFL_TASKS_NORM_GP;
-	if (rcu_preempted_readers_exp(rnp))
-		retval |= RCU_OFL_TASKS_EXP_GP;
-	lp = &rnp->blkd_tasks;
-	lp_root = &rnp_root->blkd_tasks;
-	while (!list_empty(lp)) {
-		t = list_entry(lp->next, typeof(*t), rcu_node_entry);
-		raw_spin_lock(&rnp_root->lock); /* irqs already disabled */
-		smp_mb__after_unlock_lock();
-		list_del(&t->rcu_node_entry);
-		t->rcu_blocked_node = rnp_root;
-		list_add(&t->rcu_node_entry, lp_root);
-		if (&t->rcu_node_entry == rnp->gp_tasks)
-			rnp_root->gp_tasks = rnp->gp_tasks;
-		if (&t->rcu_node_entry == rnp->exp_tasks)
-			rnp_root->exp_tasks = rnp->exp_tasks;
-#ifdef CONFIG_RCU_BOOST
-		if (&t->rcu_node_entry == rnp->boost_tasks)
-			rnp_root->boost_tasks = rnp->boost_tasks;
-#endif /* #ifdef CONFIG_RCU_BOOST */
-		raw_spin_unlock(&rnp_root->lock); /* irqs still disabled */
-	}
-
-	rnp->gp_tasks = NULL;
-	rnp->exp_tasks = NULL;
-#ifdef CONFIG_RCU_BOOST
-	rnp->boost_tasks = NULL;
-	/*
-	 * In case root is being boosted and leaf was not.  Make sure
-	 * that we boost the tasks blocking the current grace period
-	 * in this case.
-	 */
-	raw_spin_lock(&rnp_root->lock); /* irqs already disabled */
-	smp_mb__after_unlock_lock();
-	if (rnp_root->boost_tasks != NULL &&
-	    rnp_root->boost_tasks != rnp_root->gp_tasks &&
-	    rnp_root->boost_tasks != rnp_root->exp_tasks)
-		rnp_root->boost_tasks = rnp_root->gp_tasks;
-	raw_spin_unlock(&rnp_root->lock); /* irqs still disabled */
-#endif /* #ifdef CONFIG_RCU_BOOST */
-
-	return retval;
-}
-
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
 /*
@@ -979,13 +895,6 @@ static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp)
 
 #ifdef CONFIG_HOTPLUG_CPU
 
-/* Because preemptible RCU does not exist, no quieting of tasks. */
-static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp, unsigned long flags)
-	__releases(rnp->lock)
-{
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
-}
-
 /*
  * Because there is no preemptible RCU, there can be no readers blocked.
  */
@@ -1023,23 +932,6 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
 	WARN_ON_ONCE(rnp->qsmask);
 }
 
-#ifdef CONFIG_HOTPLUG_CPU
-
-/*
- * Because preemptible RCU does not exist, it never needs to migrate
- * tasks that were blocked within RCU read-side critical sections, and
- * such non-existent tasks cannot possibly have been blocking the current
- * grace period.
- */
-static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
-				     struct rcu_node *rnp,
-				     struct rcu_data *rdp)
-{
-	return 0;
-}
-
-#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-
 /*
  * Because preemptible RCU does not exist, it never has any callbacks
  * to check.
@@ -1058,20 +950,6 @@ void synchronize_rcu_expedited(void)
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
 
-#ifdef CONFIG_HOTPLUG_CPU
-
-/*
- * Because preemptible RCU does not exist, there is never any need to
- * report on tasks preempted in RCU read-side critical sections during
- * expedited RCU grace periods.
- */
-static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
-			       bool wake)
-{
-}
-
-#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-
 /*
  * Because preemptible RCU does not exist, rcu_barrier() is just
  * another name for rcu_barrier_sched().
-- 
1.8.1.5



* [PATCH tip/core/rcu 06/14] rcu: Shorten irq-disable region in rcu_cleanup_dead_cpu()
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (3 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 05/14] rcu: Don't migrate blocked tasks even if all corresponding CPUs offline Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 07/14] rcu: Make use of rcu_preempt_has_tasks() Paul E. McKenney
                     ` (8 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Now that we are not migrating callbacks, there is no need to hold the
->orphan_lock across the ->qsmaskinit bit-clearing process.
This commit therefore releases ->orphan_lock immediately after adopting
the orphaned RCU callbacks.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 84f16cf05991..990b406faf4e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2289,14 +2289,14 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 	/* Orphan the dead CPU's callbacks, and adopt them if appropriate. */
 	rcu_send_cbs_to_orphanage(cpu, rsp, rnp, rdp);
 	rcu_adopt_orphan_cbs(rsp, flags);
+	raw_spin_unlock_irqrestore(&rsp->orphan_lock, flags);
 
 	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
-	raw_spin_lock(&rnp->lock);	/* irqs already disabled. */
+	raw_spin_lock_irqsave(&rnp->lock, flags);
 	smp_mb__after_unlock_lock();	/* Enforce GP memory-order guarantee. */
 	rnp->qsmaskinit &= ~rdp->grpmask;
 	if (rnp->qsmaskinit == 0 && !rcu_preempt_has_tasks(rnp))
 		rcu_cleanup_dead_rnp(rnp);
-	raw_spin_unlock(&rsp->orphan_lock); /* irqs remain disabled. */
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	WARN_ONCE(rdp->qlen != 0 || rdp->nxtlist != NULL,
 		  "rcu_cleanup_dead_cpu: Callbacks on offline CPU %d: qlen=%lu, nxtlist=%p\n",
-- 
1.8.1.5



* [PATCH tip/core/rcu 07/14] rcu: Make use of rcu_preempt_has_tasks()
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (4 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 06/14] rcu: Shorten irq-disable region in rcu_cleanup_dead_cpu() Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 08/14] rcu: Don't spawn rcub kthreads on root rcu_node structure Paul E. McKenney
                     ` (7 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Given that there is now an rcu_preempt_has_tasks() function that checks
whether the ->blkd_tasks list is non-empty, this commit makes use of it.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree_plugin.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index d594da48f4b4..bcc6d9134d47 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -540,7 +540,7 @@ static int rcu_print_task_stall(struct rcu_node *rnp)
 static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
 {
 	WARN_ON_ONCE(rcu_preempt_blocked_readers_cgp(rnp));
-	if (!list_empty(&rnp->blkd_tasks))
+	if (rcu_preempt_has_tasks(rnp))
 		rnp->gp_tasks = rnp->blkd_tasks.next;
 	WARN_ON_ONCE(rnp->qsmask);
 }
@@ -706,7 +706,7 @@ sync_rcu_preempt_exp_init(struct rcu_state *rsp, struct rcu_node *rnp)
 
 	raw_spin_lock_irqsave(&rnp->lock, flags);
 	smp_mb__after_unlock_lock();
-	if (list_empty(&rnp->blkd_tasks)) {
+	if (!rcu_preempt_has_tasks(rnp)) {
 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	} else {
 		rnp->exp_tasks = rnp->blkd_tasks.next;
@@ -985,7 +985,7 @@ void exit_rcu(void)
 
 static void rcu_initiate_boost_trace(struct rcu_node *rnp)
 {
-	if (list_empty(&rnp->blkd_tasks))
+	if (!rcu_preempt_has_tasks(rnp))
 		rnp->n_balk_blkd_tasks++;
 	else if (rnp->exp_tasks == NULL && rnp->gp_tasks == NULL)
 		rnp->n_balk_exp_gp_tasks++;
-- 
1.8.1.5



* [PATCH tip/core/rcu 08/14] rcu: Don't spawn rcub kthreads on root rcu_node structure
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (5 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 07/14] rcu: Make use of rcu_preempt_has_tasks() Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 09/14] rcu: Don't initiate RCU priority boosting on root rcu_node Paul E. McKenney
                     ` (6 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Now that offlining CPUs no longer moves leaf rcu_node structures'
->blkd_tasks lists to the root, there is no way for the root rcu_node
structure's ->blkd_tasks list to be nonempty, unless the root node is also
the sole leaf node.  This commit therefore refrains from creating an rcub
kthread for the root rcu_node structure unless it is also the sole leaf.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree_plugin.h | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index bcc6d9134d47..3e64bb197b1a 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1352,12 +1352,8 @@ static void __init rcu_spawn_boost_kthreads(void)
 	for_each_possible_cpu(cpu)
 		per_cpu(rcu_cpu_has_work, cpu) = 0;
 	BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
-	rnp = rcu_get_root(rcu_state_p);
-	(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
-	if (NUM_RCU_NODES > 1) {
-		rcu_for_each_leaf_node(rcu_state_p, rnp)
-			(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
-	}
+	rcu_for_each_leaf_node(rcu_state_p, rnp)
+		(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
 }
 
 static void rcu_prepare_kthreads(int cpu)
-- 
1.8.1.5



* [PATCH tip/core/rcu 09/14] rcu: Don't initiate RCU priority boosting on root rcu_node
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (6 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 08/14] rcu: Don't spawn rcub kthreads on root rcu_node structure Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 10/14] rcu: Don't bother affinitying rcub kthreads away from offline CPUs Paul E. McKenney
                     ` (5 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Because there are no longer any preempted tasks on the root rcu_node, and
because there is no longer ever an rcub kthread for the root rcu_node,
this commit drops the code in force_qs_rnp() that attempts to awaken
the non-existent root rcub kthread.  This is strictly a performance
enhancement, removing a root rcu_node ->lock acquisition and release
along with some tests in rcu_initiate_boost(), ending with the test that
notes that there is no rcub kthread.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 990b406faf4e..5a0a4c969d38 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2513,12 +2513,6 @@ static void force_qs_rnp(struct rcu_state *rsp,
 		}
 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	}
-	rnp = rcu_get_root(rsp);
-	if (rnp->qsmask == 0) {
-		raw_spin_lock_irqsave(&rnp->lock, flags);
-		smp_mb__after_unlock_lock();
-		rcu_initiate_boost(rnp, flags); /* releases rnp->lock. */
-	}
 }
 
 /*
-- 
1.8.1.5



* [PATCH tip/core/rcu 10/14] rcu: Don't bother affinitying rcub kthreads away from offline CPUs
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (7 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 09/14] rcu: Don't initiate RCU priority boosting on root rcu_node Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 11/14] rcu: Note quiescent state when CPU goes offline Paul E. McKenney
                     ` (4 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

When rcu_boost_kthread_setaffinity() sees that all CPUs for a given
rcu_node structure are now offline, it affinitizes the corresponding
RCU-boost ("rcub") kthread away from those CPUs.  This is pointless
because the kthread cannot run on those offline CPUs in any case.
This commit therefore removes this unneeded code.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree_plugin.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 3e64bb197b1a..1fac68220999 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1322,12 +1322,8 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
 	for (cpu = rnp->grplo; cpu <= rnp->grphi; cpu++, mask >>= 1)
 		if ((mask & 0x1) && cpu != outgoingcpu)
 			cpumask_set_cpu(cpu, cm);
-	if (cpumask_weight(cm) == 0) {
+	if (cpumask_weight(cm) == 0)
 		cpumask_setall(cm);
-		for (cpu = rnp->grplo; cpu <= rnp->grphi; cpu++)
-			cpumask_clear_cpu(cpu, cm);
-		WARN_ON_ONCE(cpumask_weight(cm) == 0);
-	}
 	set_cpus_allowed_ptr(t, cm);
 	free_cpumask_var(cm);
 }
-- 
1.8.1.5



* [PATCH tip/core/rcu 11/14] rcu: Note quiescent state when CPU goes offline
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (8 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 10/14] rcu: Don't bother affinitying rcub kthreads away from offline CPUs Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 12/14] rcu: Revert "Allow post-unlock reference for rt_mutex" to avoid priority-inversion Paul E. McKenney
                     ` (3 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The rcu_cleanup_dead_cpu() function (called after a CPU has gone
completely offline) has not reported a quiescent state because there
was probably at least one synchronize_rcu() between the time the CPU
went offline and the CPU_DEAD notifier, and this would have detected
the CPU's offline state via quiescent-state forcing.  However, the plan
is for CPUs to take themselves offline, at which point it makes sense
for them to report their own quiescent state.  This commit makes this
change in preparation for the new CPU-hotplug setup.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5a0a4c969d38..24d3c67f7be7 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2297,7 +2297,7 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 	rnp->qsmaskinit &= ~rdp->grpmask;
 	if (rnp->qsmaskinit == 0 && !rcu_preempt_has_tasks(rnp))
 		rcu_cleanup_dead_rnp(rnp);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	rcu_report_qs_rnp(rdp->grpmask, rsp, rnp, flags); /* Rlses rnp->lock. */
 	WARN_ONCE(rdp->qlen != 0 || rdp->nxtlist != NULL,
 		  "rcu_cleanup_dead_cpu: Callbacks on offline CPU %d: qlen=%lu, nxtlist=%p\n",
 		  cpu, rdp->qlen, rdp->nxtlist);
-- 
1.8.1.5



* [PATCH tip/core/rcu 12/14] rcu: Revert "Allow post-unlock reference for rt_mutex" to avoid priority-inversion
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (9 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 11/14] rcu: Note quiescent state when CPU goes offline Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 13/14] rcu: Don't scan root rcu_node structure for stalled tasks Paul E. McKenney
                     ` (2 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: Lai Jiangshan <laijs@cn.fujitsu.com>

The patch dfeb9765ce3c ("Allow post-unlock reference for rt_mutex")
made RCU priority boosting safe even when the rt_mutex has a post-unlock
reference.

But an rt_mutex allowing a post-unlock reference is definitely a bug, and
that bug was fixed by commit 27e35715df54 ("rtmutex: Plug slow unlock
race").  This fix made the previous patch (dfeb9765ce3c) unnecessary.

Even worse, the priority inversion introduced by the previous patch
still exists.

rcu_read_unlock_special() {
	rt_mutex_unlock(&rnp->boost_mtx);
	/* Priority inversion:
	 * The current task has just been deboosted, so it may immediately
	 * be preempted as a low-priority task and wait a long time before
	 * it is rescheduled.  Meanwhile, the rcu-booster sleeps in
	 * wait_for_completion(), waiting on this low-priority task, so
	 * this priority inversion keeps the rcu-booster from working as
	 * expected.
	 */
	complete(&rnp->boost_completion);
}

Just revert the patch to avoid it.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.h        | 5 -----
 kernel/rcu/tree_plugin.h | 8 +-------
 2 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 883ebc8e2b6e..95356477d560 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -172,11 +172,6 @@ struct rcu_node {
 				/*  queued on this rcu_node structure that */
 				/*  are blocking the current grace period, */
 				/*  there can be no such task. */
-	struct completion boost_completion;
-				/* Used to ensure that the rt_mutex used */
-				/*  to carry out the boosting is fully */
-				/*  released with no future boostee accesses */
-				/*  before that rt_mutex is re-initialized. */
 	struct rt_mutex boost_mtx;
 				/* Used only for the priority-boosting */
 				/*  side effect, not as a lock. */
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 1fac68220999..625e26040e6b 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -429,10 +429,8 @@ void rcu_read_unlock_special(struct task_struct *t)
 
 #ifdef CONFIG_RCU_BOOST
 		/* Unboost if we were boosted. */
-		if (drop_boost_mutex) {
+		if (drop_boost_mutex)
 			rt_mutex_unlock(&rnp->boost_mtx);
-			complete(&rnp->boost_completion);
-		}
 #endif /* #ifdef CONFIG_RCU_BOOST */
 
 		/*
@@ -1081,15 +1079,11 @@ static int rcu_boost(struct rcu_node *rnp)
 	 */
 	t = container_of(tb, struct task_struct, rcu_node_entry);
 	rt_mutex_init_proxy_locked(&rnp->boost_mtx, t);
-	init_completion(&rnp->boost_completion);
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	/* Lock only for side effect: boosts task t's priority. */
 	rt_mutex_lock(&rnp->boost_mtx);
 	rt_mutex_unlock(&rnp->boost_mtx);  /* Then keep lockdep happy. */
 
-	/* Wait for boostee to be done w/boost_mtx before reinitializing. */
-	wait_for_completion(&rnp->boost_completion);
-
 	return ACCESS_ONCE(rnp->exp_tasks) != NULL ||
 	       ACCESS_ONCE(rnp->boost_tasks) != NULL;
 }
-- 
1.8.1.5



* [PATCH tip/core/rcu 13/14] rcu: Don't scan root rcu_node structure for stalled tasks
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (10 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 12/14] rcu: Revert "Allow post-unlock reference for rt_mutex" to avoid priority-inversion Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-07 17:32   ` [PATCH tip/core/rcu 14/14] rcu: Remove redundant callback-list initialization Paul E. McKenney
  2015-01-08  9:41   ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Peter Zijlstra
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Now that blocked tasks are no longer migrated to the root rcu_node
structure, there is no need to scan the root rcu_node structure for
blocked tasks stalling the current grace period.  This commit therefore
removes this scan.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 24d3c67f7be7..4ed2c2842103 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1107,15 +1107,6 @@ static void print_other_cpu_stall(struct rcu_state *rsp)
 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	}
 
-	/*
-	 * Now rat on any tasks that got kicked up to the root rcu_node
-	 * due to CPU offlining.
-	 */
-	rnp = rcu_get_root(rsp);
-	raw_spin_lock_irqsave(&rnp->lock, flags);
-	ndetected += rcu_print_task_stall(rnp);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
-
 	print_cpu_stall_info_end();
 	for_each_possible_cpu(cpu)
 		totqlen += per_cpu_ptr(rsp->rda, cpu)->qlen;
-- 
1.8.1.5



* [PATCH tip/core/rcu 14/14] rcu: Remove redundant callback-list initialization
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (11 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 13/14] rcu: Don't scan root rcu_node structure for stalled tasks Paul E. McKenney
@ 2015-01-07 17:32   ` Paul E. McKenney
  2015-01-08  9:41   ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Peter Zijlstra
  13 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-07 17:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The RCU callback lists are initialized in both rcu_boot_init_percpu_data()
and rcu_init_percpu_data().  The former is intended for initializing
immutable data, so this commit removes the initialization from
rcu_boot_init_percpu_data() and leaves it in rcu_init_percpu_data().
This change prepares for permitting callbacks to be queued very early
in boot.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4ed2c2842103..febd0f72a3c9 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3419,9 +3419,6 @@ rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp)
 	/* Set up local state, ensuring consistent view of global state. */
 	raw_spin_lock_irqsave(&rnp->lock, flags);
 	rdp->grpmask = 1UL << (cpu - rdp->mynode->grplo);
-	init_callback_list(rdp);
-	rdp->qlen_lazy = 0;
-	ACCESS_ONCE(rdp->qlen) = 0;
 	rdp->dynticks = &per_cpu(rcu_dynticks, cpu);
 	WARN_ON_ONCE(rdp->dynticks->dynticks_nesting != DYNTICK_TASK_EXIT_IDLE);
 	WARN_ON_ONCE(atomic_read(&rdp->dynticks->dynticks) != 1);
-- 
1.8.1.5



* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
                     ` (12 preceding siblings ...)
  2015-01-07 17:32   ` [PATCH tip/core/rcu 14/14] rcu: Remove redundant callback-list initialization Paul E. McKenney
@ 2015-01-08  9:41   ` Peter Zijlstra
  2015-01-08 15:22     ` Paul E. McKenney
  13 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2015-01-08  9:41 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, tglx, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani

On Wed, Jan 07, 2015 at 09:32:20AM -0800, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> This commit prevents random compiler optimizations by applying
> ACCESS_ONCE() to lockless accesses.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
>  kernel/rcu/tree_plugin.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 3ec85cb5d544..d59913ef8360 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -1127,7 +1127,8 @@ static int rcu_boost(struct rcu_node *rnp)
>  	struct task_struct *t;
>  	struct list_head *tb;
>  
> -	if (rnp->exp_tasks == NULL && rnp->boost_tasks == NULL)
> +	if (ACCESS_ONCE(rnp->exp_tasks) == NULL &&
> +	    ACCESS_ONCE(rnp->boost_tasks) == NULL)
>  		return 0;  /* Nothing left to boost. */

Didn't we just obsolete ACCESS_ONCE with that {READ,WRITE}_ONCE stuff?


* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-08  9:41   ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Peter Zijlstra
@ 2015-01-08 15:22     ` Paul E. McKenney
  2015-01-09  6:41       ` Davidlohr Bueso
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-08 15:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, tglx, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani

On Thu, Jan 08, 2015 at 10:41:02AM +0100, Peter Zijlstra wrote:
> On Wed, Jan 07, 2015 at 09:32:20AM -0800, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > This commit prevents random compiler optimizations by applying
> > ACCESS_ONCE() to lockless accesses.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > ---
> >  kernel/rcu/tree_plugin.h | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 3ec85cb5d544..d59913ef8360 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -1127,7 +1127,8 @@ static int rcu_boost(struct rcu_node *rnp)
> >  	struct task_struct *t;
> >  	struct list_head *tb;
> >  
> > -	if (rnp->exp_tasks == NULL && rnp->boost_tasks == NULL)
> > +	if (ACCESS_ONCE(rnp->exp_tasks) == NULL &&
> > +	    ACCESS_ONCE(rnp->boost_tasks) == NULL)
> >  		return 0;  /* Nothing left to boost. */
> 
> Didn't we just obsolete ACCESS_ONCE with that {READ,WRITE}_ONCE stuff?

Indeed we did!  But that was after I did this commit back on October 29th.

I am planning a bulk change to READ_ONCE() and ASSIGN_ONCE() either as
the last patch for 3.20 or as the first one for 3.21.  Probably as the
first for 3.21 to minimize rebasing hassles with any needed 3.20 fixes.

							Thanx, Paul
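
As an illustration of the planned conversion (a sketch only, assuming the
bulk change is a mechanical substitution for loads), the patch-01 check
would presumably become:

    /* Hypothetical READ_ONCE() form of the rcu_boost() check. */
    if (READ_ONCE(rnp->exp_tasks) == NULL &&
        READ_ONCE(rnp->boost_tasks) == NULL)
        return 0;  /* Nothing left to boost. */

READ_ONCE() provides the same volatile-access semantics as ACCESS_ONCE()
for loads, so read-only checks like this one convert without any change
in behavior.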



* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-08 15:22     ` Paul E. McKenney
@ 2015-01-09  6:41       ` Davidlohr Bueso
  2015-01-09 13:49         ` Paul E. McKenney
  0 siblings, 1 reply; 30+ messages in thread
From: Davidlohr Bueso @ 2015-01-09  6:41 UTC (permalink / raw)
  To: paulmck
  Cc: Peter Zijlstra, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, bobby.prani

On Thu, 2015-01-08 at 07:22 -0800, Paul E. McKenney wrote:
> > Didn't we just obsolete ACCESS_ONCE with that {READ,WRITE}_ONCE stuff?
> 
> Indeed we did!  But that was after I did this commit back on October 29th.
> 
> I am planning a bulk change to READ_ONCE() and ASSIGN_ONCE() either as
> the last patch for 3.20 or as the first one for 3.21.  Probably as the
> first for 3.21 to minimize rebasing hassles with any needed 3.20 fixes.

That reminds me, I think the new conversion for stores will most likely
introduce silly arg bugs:

-       ACCESS_ONCE(a) = b;
+       ASSIGN_ONCE(b, a);

Thanks,
Davidlohr



* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-09  6:41       ` Davidlohr Bueso
@ 2015-01-09 13:49         ` Paul E. McKenney
  2015-01-09 13:56           ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-09 13:49 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Peter Zijlstra, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, bobby.prani

On Thu, Jan 08, 2015 at 10:41:54PM -0800, Davidlohr Bueso wrote:
> On Thu, 2015-01-08 at 07:22 -0800, Paul E. McKenney wrote:
> > > Didn't we just obsolete ACCESS_ONCE with that {READ,WRITE}_ONCE stuff?
> > 
> > Indeed we did!  But that was after I did this commit back on October 29th.
> > 
> > I am planning a bulk change to READ_ONCE() and ASSIGN_ONCE() either as
> > the last patch for 3.20 or as the first one for 3.21.  Probably as the
> > first for 3.21 to minimize rebasing hassles with any needed 3.20 fixes.
> 
> That reminds me, I think the new conversion for stores will most likely
> introduce silly arg bugs:
> 
> -       ACCESS_ONCE(a) = b;
> +       ASSIGN_ONCE(b, a);

I was planning to do mine by hand for this sort of reason.

Or are you thinking of something more subtle than the case where
"b" is an unparenthesized comma-separated expression?

							Thanx, Paul


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-09 13:49         ` Paul E. McKenney
@ 2015-01-09 13:56           ` Peter Zijlstra
  2015-01-09 14:07             ` Paul E. McKenney
  2015-01-09 21:58             ` Christian Borntraeger
  0 siblings, 2 replies; 30+ messages in thread
From: Peter Zijlstra @ 2015-01-09 13:56 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Davidlohr Bueso, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, bobby.prani, borntraeger

On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
> > That reminds me, I think the new conversion for stores will most likely
> > introduce silly arg bugs:
> > 
> > -       ACCESS_ONCE(a) = b;
> > +       ASSIGN_ONCE(b, a);
> 
> I was planning to do mine by hand for this sort of reason.
> 
> Or are you thinking of something more subtle than the case where
> "b" is an unparenthesized comma-separated expression?

I think he's referring to the wrong-way-around-ness of the thing.

It's a bit of a mixed bag on assignments, but for instance
rcu_assign_pointer() takes them the right way around, as does
atomic_set().

So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
around.

We could maybe still change it, before it's in too long?
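
For comparison, a quick sketch of the two conventions side by side
(types and variables are placeholders only):

	struct foo __rcu *gp;
	struct foo *newp;
	atomic_t v;
	int x;

	rcu_assign_pointer(gp, newp);	/* destination first */
	atomic_set(&v, 1);		/* destination first */
	ASSIGN_ONCE(1, x);		/* value first: x = 1 */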

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-09 13:56           ` Peter Zijlstra
@ 2015-01-09 14:07             ` Paul E. McKenney
  2015-01-09 16:53               ` Mathieu Desnoyers
  2015-01-09 21:58             ` Christian Borntraeger
  1 sibling, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-09 14:07 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Davidlohr Bueso, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, bobby.prani, borntraeger

On Fri, Jan 09, 2015 at 02:56:14PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
> > > That reminds me, I think the new conversion for stores will most likely
> > > introduce silly arg bugs:
> > > 
> > > -       ACCESS_ONCE(a) = b;
> > > +       ASSIGN_ONCE(b, a);
> > 
> > I was planning to do mine by hand for this sort of reason.
> > 
> > Or are you thinking of something more subtle than the case where
> > "b" is an unparenthesized comma-separated expression?
> 
> I think he's revering to the wrong way around-ness of the thing.
> 
> Its a bit of a mixed bag on assignments, but for instance
> rcu_assign_pointer() takes them the right way around, as does
> atomic_set().
> 
> So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
> around.

Ah, yes, I had forgotten about this point.

> We could maybe still change it, before its in too long ?

I would be in favor.  Any objections?  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-09 14:07             ` Paul E. McKenney
@ 2015-01-09 16:53               ` Mathieu Desnoyers
  0 siblings, 0 replies; 30+ messages in thread
From: Mathieu Desnoyers @ 2015-01-09 16:53 UTC (permalink / raw)
  To: paulmck
  Cc: Peter Zijlstra, Davidlohr Bueso, linux-kernel, mingo, laijs,
	dipankar, akpm, josh, tglx, rostedt, dhowells, edumazet, dvhart,
	fweisbec, oleg, bobby prani, borntraeger

----- Original Message -----
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> To: "Peter Zijlstra" <peterz@infradead.org>
> Cc: "Davidlohr Bueso" <dave@stgolabs.net>, linux-kernel@vger.kernel.org, mingo@kernel.org, laijs@cn.fujitsu.com,
> dipankar@in.ibm.com, akpm@linux-foundation.org, "mathieu desnoyers" <mathieu.desnoyers@efficios.com>,
> josh@joshtriplett.org, tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
> dvhart@linux.intel.com, fweisbec@gmail.com, oleg@redhat.com, "bobby prani" <bobby.prani@gmail.com>,
> borntraeger@de.ibm.com
> Sent: Friday, January 9, 2015 9:07:34 AM
> Subject: Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
> 
> On Fri, Jan 09, 2015 at 02:56:14PM +0100, Peter Zijlstra wrote:
> > On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
> > > > That reminds me, I think the new conversion for stores will most likely
> > > > introduce silly arg bugs:
> > > > 
> > > > -       ACCESS_ONCE(a) = b;
> > > > +       ASSIGN_ONCE(b, a);
> > > 
> > > I was planning to do mine by hand for this sort of reason.
> > > 
> > > Or are you thinking of something more subtle than the case where
> > > "b" is an unparenthesized comma-separated expression?
> > 
> > I think he's revering to the wrong way around-ness of the thing.
> > 
> > Its a bit of a mixed bag on assignments, but for instance
> > rcu_assign_pointer() takes them the right way around, as does
> > atomic_set().
> > 
> > So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
> > around.
> 
> Ah, yes, I had forgotten about this point.
> 
> > We could maybe still change it, before its in too long ?
> 
> I would be in favor.  Any objections?  ;-)

Agreed. The target argument should be the first parameter. Also,
do you plan to make ASSIGN_ONCE() evaluate to something, or to void?
In userland, with liburcu's CMM_STORE_SHARED(), which evaluates to the
value used as input, we hit warnings from static checkers due to the
unused value, which is unfortunate. Not sure whether the checkers should
be taught better, or whether evaluating to void is the right approach there.
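
A rough sketch of the trade-off, with made-up macro names (the first
shape is a value-returning statement expression, the second a void
variant):

#define STORE_SHARED_VAL(x, v) \
	({ typeof(x) __v = (v); *(volatile typeof(x) *)&(x) = __v; __v; })

#define STORE_SHARED_VOID(x, v) \
	do { *(volatile typeof(x) *)&(x) = (v); } while (0)

The first evaluates to the stored value, which is what trips the
unused-value warnings at most call sites; the second cannot warn, but
also cannot be nested inside a larger expression.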

Thanks,

Mathieu


> 
> 							Thanx, Paul
> 
> 

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-09 13:56           ` Peter Zijlstra
  2015-01-09 14:07             ` Paul E. McKenney
@ 2015-01-09 21:58             ` Christian Borntraeger
  2015-01-10  0:27               ` Davidlohr Bueso
  2015-01-12  8:59               ` Peter Zijlstra
  1 sibling, 2 replies; 30+ messages in thread
From: Christian Borntraeger @ 2015-01-09 21:58 UTC (permalink / raw)
  To: Peter Zijlstra, Paul E. McKenney
  Cc: Davidlohr Bueso, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, Pranith Kumar

Am 09.01.2015 um 14:56 schrieb Peter Zijlstra:
> On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
>>> That reminds me, I think the new conversion for stores will most likely
>>> introduce silly arg bugs:
>>>
>>> -       ACCESS_ONCE(a) = b;
>>> +       ASSIGN_ONCE(b, a);
>>
>> I was planning to do mine by hand for this sort of reason.
>>
>> Or are you thinking of something more subtle than the case where
>> "b" is an unparenthesized comma-separated expression?
> 
> I think he's revering to the wrong way around-ness of the thing.
> 
> Its a bit of a mixed bag on assignments, but for instance
> rcu_assign_pointer() takes them the right way around, as does
> atomic_set().
> 
> So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
> around.
> 
> We could maybe still change it, before its in too long ?

Linus' initial proposal was inspired by the put_user() model, which is
(val, ptr), and I took that.
As my focus was on avoiding the volatile bug, all my current conversions
are READ_ONCE(), as no potential ASSIGN_ONCE() conversion was done on a
non-scalar type, so I have no first-hand experience. I am fine with
changing that, though; both ways have pros and cons. Last time I
checked, there was no ASSIGN_ONCE() user in Linus' tree.
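
(Background, as a rough sketch with a made-up type: the volatile bug
is that gcc 4.6/4.7 can drop the volatile qualifier when ACCESS_ONCE()
is applied to a non-scalar type, so the "access once" guarantee is
silently lost; READ_ONCE() instead copies through __read_once_size():)

	typedef struct { unsigned int head, tail; } ticket_t;
	ticket_t shared;

	ticket_t snap1 = ACCESS_ONCE(shared);	/* volatile cast may be dropped */
	ticket_t snap2 = READ_ONCE(shared);	/* handled by __read_once_size() */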

When we talk about changing the parameters, it might make sense to also
think about some comments from George Spelvin and consider a rename to
WRITE_ONCE or STORE_ONCE (READ_ONCE --> LOAD_ONCE). Unfortunately there
doesn't seem to be a variant that is fool-proof (in the sense of Rusty's
guideline that a good interface cannot be used wrong). So any proposal
in that regard would be very welcome.

Christian


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-09 21:58             ` Christian Borntraeger
@ 2015-01-10  0:27               ` Davidlohr Bueso
  2015-01-12  8:59               ` Peter Zijlstra
  1 sibling, 0 replies; 30+ messages in thread
From: Davidlohr Bueso @ 2015-01-10  0:27 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Peter Zijlstra, Paul E. McKenney, linux-kernel, mingo, laijs,
	dipankar, akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells,
	edumazet, dvhart, fweisbec, oleg, Pranith Kumar

On Fri, 2015-01-09 at 22:58 +0100, Christian Borntraeger wrote:
> When we talk about changing the parameters it might make sense to also think about some comments from George Spelvin and consider a rename to WRITE_ONCE or STORE_ONCE (READ_ONCE --> LOAD_ONCE). Unfortunately there doesnt seem to be a variant that is fool proof (in the sense of Rustys guideline that a good interface cannot be used wrong). So any proposal in that regard would be very welcome.

I think either of those two names is a lot better than what we have now.
I'm a bit inclined towards LOAD/STORE, though. That, plus reordering the
arguments for stores, would make it pretty fool-proof imo.

Thanks,
Davidlohr


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-09 21:58             ` Christian Borntraeger
  2015-01-10  0:27               ` Davidlohr Bueso
@ 2015-01-12  8:59               ` Peter Zijlstra
  2015-01-12 22:12                 ` Paul E. McKenney
  1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2015-01-12  8:59 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Paul E. McKenney, Davidlohr Bueso, linux-kernel, mingo, laijs,
	dipankar, akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells,
	edumazet, dvhart, fweisbec, oleg, Pranith Kumar

On Fri, Jan 09, 2015 at 10:58:50PM +0100, Christian Borntraeger wrote:
> Am 09.01.2015 um 14:56 schrieb Peter Zijlstra:
> > On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
> >>> That reminds me, I think the new conversion for stores will most likely
> >>> introduce silly arg bugs:
> >>>
> >>> -       ACCESS_ONCE(a) = b;
> >>> +       ASSIGN_ONCE(b, a);
> >>
> >> I was planning to do mine by hand for this sort of reason.
> >>
> >> Or are you thinking of something more subtle than the case where
> >> "b" is an unparenthesized comma-separated expression?
> > 
> > I think he's revering to the wrong way around-ness of the thing.
> > 
> > Its a bit of a mixed bag on assignments, but for instance
> > rcu_assign_pointer() takes them the right way around, as does
> > atomic_set().
> > 
> > So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
> > around.
> > 
> > We could maybe still change it, before its in too long ?
> 
> Linus initial proposal was inspired by put_user model which is (val,
> ptr) and I took that. 

Yeah, like I said, it's a bit of a mixed bag. We've got plenty of
examples of the wrong way around.

> As my focus was on avoiding the volatile bug,
> all my current conversions are READ_ONCE as no potential ASSIGN_ONCE
> user was done on a non-scalar type, so I have no first hand
> experience. 

So the implication there is that we'd preserve ACCESS_ONCE() for use on
scalar types. I don't think we should do that; I think we should just
convert en masse to {READ,WRITE}/{LOAD,STORE}_ONCE() and kill off
ACCESS_ONCE().
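
Roughly, such an en-masse conversion would turn today's idiom into
something like this (a sketch only, using the READ/WRITE naming for
the sake of the example):

	/* before */
	if (ACCESS_ONCE(rnp->exp_tasks) == NULL)
		return 0;
	ACCESS_ONCE(rsp->gp_flags) = RCU_GP_FLAG_INIT;

	/* after */
	if (READ_ONCE(rnp->exp_tasks) == NULL)
		return 0;
	WRITE_ONCE(rsp->gp_flags, RCU_GP_FLAG_INIT);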

> I am fine with changing that, though, both ways have pros
> and cons. Last time I checked in Linus tree there was no ASSIGN_ONCE
> user.

Right, so Davidlohr just introduced a few in my tree :-), which is how I
came to know we even had this stuff.

> When we talk about changing the parameters it might make sense to also
> think about some comments from George Spelvin and consider a rename to
> WRITE_ONCE or STORE_ONCE (READ_ONCE --> LOAD_ONCE). 

I'd be OK with that.

> Unfortunately
> there doesnt seem to be a variant that is fool proof (in the sense of
> Rustys guideline that a good interface cannot be used wrong). So any
> proposal in that regard would be very welcome.

If you want fool-proof, I think we should discard C ;-) Then again, I've
yet to see a programming language that would not let a human make a
proper idiot out of himself.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-12  8:59               ` Peter Zijlstra
@ 2015-01-12 22:12                 ` Paul E. McKenney
  2015-01-13  8:18                   ` Christian Borntraeger
  0 siblings, 1 reply; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-12 22:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Christian Borntraeger, Davidlohr Bueso, linux-kernel, mingo,
	laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx, rostedt,
	dhowells, edumazet, dvhart, fweisbec, oleg, Pranith Kumar

On Mon, Jan 12, 2015 at 09:59:57AM +0100, Peter Zijlstra wrote:
> On Fri, Jan 09, 2015 at 10:58:50PM +0100, Christian Borntraeger wrote:
> > Am 09.01.2015 um 14:56 schrieb Peter Zijlstra:
> > > On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
> > >>> That reminds me, I think the new conversion for stores will most likely
> > >>> introduce silly arg bugs:
> > >>>
> > >>> -       ACCESS_ONCE(a) = b;
> > >>> +       ASSIGN_ONCE(b, a);
> > >>
> > >> I was planning to do mine by hand for this sort of reason.
> > >>
> > >> Or are you thinking of something more subtle than the case where
> > >> "b" is an unparenthesized comma-separated expression?
> > > 
> > > I think he's revering to the wrong way around-ness of the thing.
> > > 
> > > Its a bit of a mixed bag on assignments, but for instance
> > > rcu_assign_pointer() takes them the right way around, as does
> > > atomic_set().
> > > 
> > > So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
> > > around.
> > > 
> > > We could maybe still change it, before its in too long ?
> > 
> > Linus initial proposal was inspired by put_user model which is (val,
> > ptr) and I took that. 
> 
> Yeah, like I said, its a bit of a mixed bag. We've got plenty examples
> of the wrong way around.
> 
> > As my focus was on avoiding the volatile bug,
> > all my current conversions are READ_ONCE as no potential ASSIGN_ONCE
> > user was done on a non-scalar type, so I have no first hand
> > experience. 
> 
> So the implication there is that we'd preserve ACCESS_ONCE() for use on
> scalar types. I don't think we should do that, I think we should just
> en-mass convert to {READ,WRITE}/{LOAD,STORE}_ONCE() and kill off
> ACCESS_ONCE().

Yep.  For one thing, the proposed replacements work much better with
C11 than does ACCESS_ONCE().
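
As an illustration in userspace C11 terms (not kernel code), the
load/store forms have direct relaxed-atomic analogues, whereas the
ACCESS_ONCE() "lvalue on either side of =" idiom does not:

	#include <stdatomic.h>

	static _Atomic int shared;

	static int peek(void)
	{
		int v = atomic_load_explicit(&shared, memory_order_relaxed);
		atomic_store_explicit(&shared, 1, memory_order_relaxed);
		return v;
	}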

> > I am fine with changing that, though, both ways have pros
> > and cons. Last time I checked in Linus tree there was no ASSIGN_ONCE
> > user.
> 
> Right, so Davidlohr just introduced a few in my tree :-), which is how I
> came to know we even had this stuff..
> 
> > When we talk about changing the parameters it might make sense to also
> > think about some comments from George Spelvin and consider a rename to
> > WRITE_ONCE or STORE_ONCE (READ_ONCE --> LOAD_ONCE). 
> 
> I'd be OK with that.
> 
> > Unfortunately
> > there doesnt seem to be a variant that is fool proof (in the sense of
> > Rustys guideline that a good interface cannot be used wrong). So any
> > proposal in that regard would be very welcome.
> 
> If you want fool proof, I think we should discard C ;-) Then again, I've
> yet to see a programming language that would not let a human make a
> proper idiot out of himself.

Limit NR_CPUS to zero!  It is the only way!!!

						Thanx, Paul


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-12 22:12                 ` Paul E. McKenney
@ 2015-01-13  8:18                   ` Christian Borntraeger
  2015-01-13  9:29                     ` Peter Zijlstra
                                       ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: Christian Borntraeger @ 2015-01-13  8:18 UTC (permalink / raw)
  To: paulmck, Peter Zijlstra
  Cc: Davidlohr Bueso, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, Pranith Kumar, Linus Torvalds

Am 12.01.2015 um 23:12 schrieb Paul E. McKenney:
> On Mon, Jan 12, 2015 at 09:59:57AM +0100, Peter Zijlstra wrote:
>> On Fri, Jan 09, 2015 at 10:58:50PM +0100, Christian Borntraeger wrote:
>>> Am 09.01.2015 um 14:56 schrieb Peter Zijlstra:
>>>> On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
>>>>>> That reminds me, I think the new conversion for stores will most likely
>>>>>> introduce silly arg bugs:
>>>>>>
>>>>>> -       ACCESS_ONCE(a) = b;
>>>>>> +       ASSIGN_ONCE(b, a);
>>>>>
>>>>> I was planning to do mine by hand for this sort of reason.
>>>>>
>>>>> Or are you thinking of something more subtle than the case where
>>>>> "b" is an unparenthesized comma-separated expression?
>>>>
>>>> I think he's revering to the wrong way around-ness of the thing.
>>>>
>>>> Its a bit of a mixed bag on assignments, but for instance
>>>> rcu_assign_pointer() takes them the right way around, as does
>>>> atomic_set().
>>>>
>>>> So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
>>>> around.
>>>>
>>>> We could maybe still change it, before its in too long ?
>>>
>>> Linus initial proposal was inspired by put_user model which is (val,
>>> ptr) and I took that. 
>>
>> Yeah, like I said, its a bit of a mixed bag. We've got plenty examples
>> of the wrong way around.
>>
>>> As my focus was on avoiding the volatile bug,
>>> all my current conversions are READ_ONCE as no potential ASSIGN_ONCE
>>> user was done on a non-scalar type, so I have no first hand
>>> experience. 
>>
>> So the implication there is that we'd preserve ACCESS_ONCE() for use on
>> scalar types. I don't think we should do that, I think we should just
>> en-mass convert to {READ,WRITE}/{LOAD,STORE}_ONCE() and kill off
>> ACCESS_ONCE().
> 
> Yep.  For one thing, the proposed replacements work much better with
> C11 than does ACCESS_ONCE().

As we agreed, there is no perfect interface regarding (val, x) vs.
(x, val). But it seems that there is some consensus that I should push
something like the following (still whitespace-damaged) to Linus for
3.19. Peter, Davidlohr, Paul (maybe Linus), can you ACK/NACK?
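
With the rename below applied, call sites would read roughly like this
(the field is only an example):

	unsigned long mask;

	WRITE_ONCE(rnp->qsmaskinit, 0);		/* destination first, like atomic_set() */
	mask = READ_ONCE(rnp->qsmaskinit);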


Subject: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)

Feedback has shown that WRITE_ONCE(x, val) is easier to use than
ASSIGN_ONCE(val, x). There are no in-tree users yet, so let's change it.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>


diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 84734a7..38865c7 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -215,7 +215,7 @@ static __always_inline void __read_once_size(volatile void *p, void *res, int si
        }
 }
 
-static __always_inline void __assign_once_size(volatile void *p, void *res, int size)
+static __always_inline void __write_once_size(volatile void *p, void *res, int size)
 {
        switch (size) {
        case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
@@ -235,15 +235,15 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
 /*
  * Prevent the compiler from merging or refetching reads or writes. The
  * compiler is also forbidden from reordering successive instances of
- * READ_ONCE, ASSIGN_ONCE and ACCESS_ONCE (see below), but only when the
+ * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
  * compiler is aware of some particular ordering.  One way to make the
  * compiler aware of ordering is to put the two invocations of READ_ONCE,
- * ASSIGN_ONCE or ACCESS_ONCE() in different C statements.
+ * WRITE_ONCE or ACCESS_ONCE() in different C statements.
  *
  * In contrast to ACCESS_ONCE these two macros will also work on aggregate
  * data types like structs or unions. If the size of the accessed data
  * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
- * READ_ONCE() and ASSIGN_ONCE()  will fall back to memcpy and print a
+ * READ_ONCE() and WRITE_ONCE()  will fall back to memcpy and print a
  * compile-time warning.
  *
  * Their two major use cases are: (1) Mediating communication between
@@ -257,8 +257,8 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
 #define READ_ONCE(x) \
        ({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })
 
-#define ASSIGN_ONCE(val, x) \
-       ({ typeof(x) __val; __val = val; __assign_once_size(&x, &__val, sizeof(__val)); __val; })
+#define WRITE_ONCE(x, val) \
+       ({ typeof(x) __val; __val = val; __write_once_size(&x, &__val, sizeof(__val)); __val; })
 
 #endif /* __KERNEL__ */
 
@@ -458,7 +458,7 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
  * with an explicit memory barrier or atomic instruction that provides the
  * required ordering.
  *
- * If possible use READ_ONCE/ASSIGN_ONCE instead.
+ * If possible use READ_ONCE/WRITE_ONCE instead.
  */
 #define __ACCESS_ONCE(x) ({ \
         __maybe_unused typeof(x) __var = (typeof(x)) 0; 




^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-13  8:18                   ` Christian Borntraeger
@ 2015-01-13  9:29                     ` Peter Zijlstra
  2015-01-13 17:47                     ` Paul E. McKenney
  2015-01-13 19:12                     ` Davidlohr Bueso
  2 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2015-01-13  9:29 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: paulmck, Davidlohr Bueso, linux-kernel, mingo, laijs, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, Pranith Kumar, Linus Torvalds

On Tue, Jan 13, 2015 at 09:18:47AM +0100, Christian Borntraeger wrote:
> As we agreed there is no perfect interface regarding val,x vs. x,val.
> But it seems that there is some consensus that I should push something like the following (still whitespace damaged) to Linus for 3.19?
> Peter, Davidlohr, Paul (maybe Linus) can you ACK/NACK?
> 

ACK on this, but I have a git tree with users in it; I'll fix it up, though.

Thanks!

> Subject: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)
> 
> Feedback has shown that WRITE_ONCE(x, val) is easier to use than ASSIGN_ONCE(val,x).
> There are no in-tree users yet, so lets change it.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> 
> 
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 84734a7..38865c7 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -215,7 +215,7 @@ static __always_inline void __read_once_size(volatile void *p, void *res, int si
>         }
>  }
>  
> -static __always_inline void __assign_once_size(volatile void *p, void *res, int size)
> +static __always_inline void __write_once_size(volatile void *p, void *res, int size)
>  {
>         switch (size) {
>         case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
> @@ -235,15 +235,15 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
>  /*
>   * Prevent the compiler from merging or refetching reads or writes. The
>   * compiler is also forbidden from reordering successive instances of
> - * READ_ONCE, ASSIGN_ONCE and ACCESS_ONCE (see below), but only when the
> + * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
>   * compiler is aware of some particular ordering.  One way to make the
>   * compiler aware of ordering is to put the two invocations of READ_ONCE,
> - * ASSIGN_ONCE or ACCESS_ONCE() in different C statements.
> + * WRITE_ONCE or ACCESS_ONCE() in different C statements.
>   *
>   * In contrast to ACCESS_ONCE these two macros will also work on aggregate
>   * data types like structs or unions. If the size of the accessed data
>   * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
> - * READ_ONCE() and ASSIGN_ONCE()  will fall back to memcpy and print a
> + * READ_ONCE() and WRITE_ONCE()  will fall back to memcpy and print a
>   * compile-time warning.
>   *
>   * Their two major use cases are: (1) Mediating communication between
> @@ -257,8 +257,8 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
>  #define READ_ONCE(x) \
>         ({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })
>  
> -#define ASSIGN_ONCE(val, x) \
> -       ({ typeof(x) __val; __val = val; __assign_once_size(&x, &__val, sizeof(__val)); __val; })
> +#define WRITE_ONCE(x, val) \
> +       ({ typeof(x) __val; __val = val; __write_once_size(&x, &__val, sizeof(__val)); __val; })
>  
>  #endif /* __KERNEL__ */
>  
> @@ -458,7 +458,7 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
>   * with an explicit memory barrier or atomic instruction that provides the
>   * required ordering.
>   *
> - * If possible use READ_ONCE/ASSIGN_ONCE instead.
> + * If possible use READ_ONCE/WRITE_ONCE instead.
>   */
>  #define __ACCESS_ONCE(x) ({ \
>          __maybe_unused typeof(x) __var = (typeof(x)) 0; 
> 
> 
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-13  8:18                   ` Christian Borntraeger
  2015-01-13  9:29                     ` Peter Zijlstra
@ 2015-01-13 17:47                     ` Paul E. McKenney
  2015-01-13 19:12                     ` Davidlohr Bueso
  2 siblings, 0 replies; 30+ messages in thread
From: Paul E. McKenney @ 2015-01-13 17:47 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Peter Zijlstra, Davidlohr Bueso, linux-kernel, mingo, laijs,
	dipankar, akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells,
	edumazet, dvhart, fweisbec, oleg, Pranith Kumar, Linus Torvalds

On Tue, Jan 13, 2015 at 09:18:47AM +0100, Christian Borntraeger wrote:
> Am 12.01.2015 um 23:12 schrieb Paul E. McKenney:
> > On Mon, Jan 12, 2015 at 09:59:57AM +0100, Peter Zijlstra wrote:
> >> On Fri, Jan 09, 2015 at 10:58:50PM +0100, Christian Borntraeger wrote:
> >>> Am 09.01.2015 um 14:56 schrieb Peter Zijlstra:
> >>>> On Fri, Jan 09, 2015 at 05:49:54AM -0800, Paul E. McKenney wrote:
> >>>>>> That reminds me, I think the new conversion for stores will most likely
> >>>>>> introduce silly arg bugs:
> >>>>>>
> >>>>>> -       ACCESS_ONCE(a) = b;
> >>>>>> +       ASSIGN_ONCE(b, a);
> >>>>>
> >>>>> I was planning to do mine by hand for this sort of reason.
> >>>>>
> >>>>> Or are you thinking of something more subtle than the case where
> >>>>> "b" is an unparenthesized comma-separated expression?
> >>>>
> >>>> I think he's revering to the wrong way around-ness of the thing.
> >>>>
> >>>> Its a bit of a mixed bag on assignments, but for instance
> >>>> rcu_assign_pointer() takes them the right way around, as does
> >>>> atomic_set().
> >>>>
> >>>> So yes, I think the ASSIGN_ONCE() thing got the arguments the wrong way
> >>>> around.
> >>>>
> >>>> We could maybe still change it, before its in too long ?
> >>>
> >>> Linus initial proposal was inspired by put_user model which is (val,
> >>> ptr) and I took that. 
> >>
> >> Yeah, like I said, its a bit of a mixed bag. We've got plenty examples
> >> of the wrong way around.
> >>
> >>> As my focus was on avoiding the volatile bug,
> >>> all my current conversions are READ_ONCE as no potential ASSIGN_ONCE
> >>> user was done on a non-scalar type, so I have no first hand
> >>> experience. 
> >>
> >> So the implication there is that we'd preserve ACCESS_ONCE() for use on
> >> scalar types. I don't think we should do that, I think we should just
> >> en-mass convert to {READ,WRITE}/{LOAD,STORE}_ONCE() and kill off
> >> ACCESS_ONCE().
> > 
> > Yep.  For one thing, the proposed replacements work much better with
> > C11 than does ACCESS_ONCE().
> 
> As we agreed there is no perfect interface regarding val,x vs. x,val.
> But it seems that there is some consensus that I should push something like the following (still whitespace damaged) to Linus for 3.19?
> Peter, Davidlohr, Paul (maybe Linus) can you ACK/NACK?
> 
> 
> Subject: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)
> 
> Feedback has shown that WRITE_ONCE(x, val) is easier to use than ASSIGN_ONCE(val,x).
> There are no in-tree users yet, so lets change it.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 84734a7..38865c7 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -215,7 +215,7 @@ static __always_inline void __read_once_size(volatile void *p, void *res, int si
>         }
>  }
> 
> -static __always_inline void __assign_once_size(volatile void *p, void *res, int size)
> +static __always_inline void __write_once_size(volatile void *p, void *res, int size)
>  {
>         switch (size) {
>         case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
> @@ -235,15 +235,15 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
>  /*
>   * Prevent the compiler from merging or refetching reads or writes. The
>   * compiler is also forbidden from reordering successive instances of
> - * READ_ONCE, ASSIGN_ONCE and ACCESS_ONCE (see below), but only when the
> + * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
>   * compiler is aware of some particular ordering.  One way to make the
>   * compiler aware of ordering is to put the two invocations of READ_ONCE,
> - * ASSIGN_ONCE or ACCESS_ONCE() in different C statements.
> + * WRITE_ONCE or ACCESS_ONCE() in different C statements.
>   *
>   * In contrast to ACCESS_ONCE these two macros will also work on aggregate
>   * data types like structs or unions. If the size of the accessed data
>   * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
> - * READ_ONCE() and ASSIGN_ONCE()  will fall back to memcpy and print a
> + * READ_ONCE() and WRITE_ONCE()  will fall back to memcpy and print a
>   * compile-time warning.
>   *
>   * Their two major use cases are: (1) Mediating communication between
> @@ -257,8 +257,8 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
>  #define READ_ONCE(x) \
>         ({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })
> 
> -#define ASSIGN_ONCE(val, x) \
> -       ({ typeof(x) __val; __val = val; __assign_once_size(&x, &__val, sizeof(__val)); __val; })
> +#define WRITE_ONCE(x, val) \
> +       ({ typeof(x) __val; __val = val; __write_once_size(&x, &__val, sizeof(__val)); __val; })
> 
>  #endif /* __KERNEL__ */
> 
> @@ -458,7 +458,7 @@ static __always_inline void __assign_once_size(volatile void *p, void *res, int
>   * with an explicit memory barrier or atomic instruction that provides the
>   * required ordering.
>   *
> - * If possible use READ_ONCE/ASSIGN_ONCE instead.
> + * If possible use READ_ONCE/WRITE_ONCE instead.
>   */
>  #define __ACCESS_ONCE(x) ({ \
>          __maybe_unused typeof(x) __var = (typeof(x)) 0; 
> 
> 
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE()
  2015-01-13  8:18                   ` Christian Borntraeger
  2015-01-13  9:29                     ` Peter Zijlstra
  2015-01-13 17:47                     ` Paul E. McKenney
@ 2015-01-13 19:12                     ` Davidlohr Bueso
  2 siblings, 0 replies; 30+ messages in thread
From: Davidlohr Bueso @ 2015-01-13 19:12 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: paulmck, Peter Zijlstra, linux-kernel, mingo, laijs, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, Pranith Kumar, Linus Torvalds

On Tue, 2015-01-13 at 09:18 +0100, Christian Borntraeger wrote:
> Subject: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)
> 
> Feedback has shown that WRITE_ONCE(x, val) is easier to use than ASSIGN_ONCE(val,x).
> There are no in-tree users yet, so lets change it.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>

Acked-by: Davidlohr Bueso <dave@stgolabs.net>


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2015-01-13 19:12 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-07 17:32 [PATCH tip/core/rcu 0/14] Preemptible-RCU updates for 3.20 Paul E. McKenney
2015-01-07 17:32 ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 02/14] rcu: Rename "empty" to "empty_norm" in preparation for boost rework Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 03/14] rcu: Abstract rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu() Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 04/14] rcu: Make rcu_read_unlock_special() propagate ->qsmaskinit bit clearing Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 05/14] rcu: Don't migrate blocked tasks even if all corresponding CPUs offline Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 06/14] rcu: Shorten irq-disable region in rcu_cleanup_dead_cpu() Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 07/14] rcu: Make use of rcu_preempt_has_tasks() Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 08/14] rcu: Don't spawn rcub kthreads on root rcu_node structure Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 09/14] rcu: Don't initiate RCU priority boosting on root rcu_node Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 10/14] rcu: Don't bother affinitying rcub kthreads away from offline CPUs Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 11/14] rcu: Note quiescent state when CPU goes offline Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 12/14] rcu: Revert "Allow post-unlock reference for rt_mutex" to avoid priority-inversion Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 13/14] rcu: Don't scan root rcu_node structure for stalled tasks Paul E. McKenney
2015-01-07 17:32   ` [PATCH tip/core/rcu 14/14] rcu: Remove redundant callback-list initialization Paul E. McKenney
2015-01-08  9:41   ` [PATCH tip/core/rcu 01/14] rcu: Protect rcu_boost() lockless accesses with ACCESS_ONCE() Peter Zijlstra
2015-01-08 15:22     ` Paul E. McKenney
2015-01-09  6:41       ` Davidlohr Bueso
2015-01-09 13:49         ` Paul E. McKenney
2015-01-09 13:56           ` Peter Zijlstra
2015-01-09 14:07             ` Paul E. McKenney
2015-01-09 16:53               ` Mathieu Desnoyers
2015-01-09 21:58             ` Christian Borntraeger
2015-01-10  0:27               ` Davidlohr Bueso
2015-01-12  8:59               ` Peter Zijlstra
2015-01-12 22:12                 ` Paul E. McKenney
2015-01-13  8:18                   ` Christian Borntraeger
2015-01-13  9:29                     ` Peter Zijlstra
2015-01-13 17:47                     ` Paul E. McKenney
2015-01-13 19:12                     ` Davidlohr Bueso
