* [PATCH 00/16] rcu: Some minor fixes and cleanups
@ 2014-07-23  5:09 Pranith Kumar
  2014-07-23  5:09 ` [PATCH 01/16] rcu: Use rcu_num_nodes instead of NUM_RCU_NODES Pranith Kumar
                   ` (16 more replies)
  0 siblings, 17 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: paulmck, Josh Triplett, Steven Rostedt, Mathieu Desnoyers,
	Lai Jiangshan, linux-kernel

Hi Paul,

This is a series of minor fixes and cleanup patches that I found while studying
the code. All my previous pending (but not rejected ;) patches are superseded by
this series, except the rcutorture snprintf changes. I am still waiting for you
to decide on that one :)

These changes have been tested with the kvm rcutorture test setup. Some tests give
me stall warnings but otherwise report SUCCESS in the logs. Those warnings occur
even without these changes on the tip of paul/rcu/dev. Maybe it is because I am
running each test for only 4 minutes?

--
Pranith.

Pranith Kumar (16):
  rcu: Use rcu_num_nodes instead of NUM_RCU_NODES
  rcu: Check return value for cpumask allocation
  rcu: Fix comment for gp_state field values
  rcu: Remove redundant check for an online CPU
  rcu: Add noreturn attribute to boost kthread
  rcu: Clear gp_flags only when actually starting new gp
  rcu: Save and restore irq flags in rcu_gp_cleanup()
  rcu: Clean up rcu_spawn_one_boost_kthread()
  rcu: Remove redundant check for online cpu
  rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup
  rcu: Check for spurious wakeup using return value
  rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads()
  rcu: Spawn nocb kthreads from rcu_prepare_kthreads()
  rcu: Remove redundant checks for rcu_scheduler_fully_active
  rcu: Check for a nocb cpu before trying to spawn nocb threads
  rcu: kvm.sh: Fix error when you pass --cpus argument

 kernel/rcu/tree.c                             | 42 ++++++++++++++-------------
 kernel/rcu/tree.h                             |  4 +--
 kernel/rcu/tree_plugin.h                      | 40 +++++++++++++------------
 tools/testing/selftests/rcutorture/bin/kvm.sh |  4 +--
 4 files changed, 47 insertions(+), 43 deletions(-)

-- 
2.0.0.rc2



* [PATCH 01/16] rcu: Use rcu_num_nodes instead of NUM_RCU_NODES
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23  5:09 ` [PATCH 02/16] rcu: Check return value for cpumask allocation Pranith Kumar
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

NUM_RCU_NODES is set at build time and is usually much larger than needed. The
actual number of rcu_node structures required is calculated at boot time from
nr_cpu_ids in rcu_init_geometry() and stored in rcu_num_nodes. We should use this
variable instead of NUM_RCU_NODES.

This commit changes all such uses of NUM_RCU_NODES to rcu_num_nodes.
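
As a rough illustration, here is a standalone userspace sketch (not the kernel's
actual code; the fanout and CPU counts are made-up stand-ins for CONFIG_RCU_FANOUT
and NR_CPUS) of why the boot-time node count is so much smaller than the
build-time bound:

#include <stdio.h>

#define FANOUT      16      /* stand-in for the rcu_node fanout */
#define NR_CPUS_MAX 4096    /* stand-in for the build-time NR_CPUS */

/* Count the rcu_node-like tree nodes needed to cover ncpus leaf CPUs. */
static int nodes_for(int ncpus)
{
        int nodes = 0;

        do {
                ncpus = (ncpus + FANOUT - 1) / FANOUT;
                nodes += ncpus;
        } while (ncpus > 1);
        return nodes;
}

int main(void)
{
        printf("build-time bound:   %d nodes\n", nodes_for(NR_CPUS_MAX));
        printf("boot-time, 8 CPUs:  %d nodes\n", nodes_for(8));
        return 0;
}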

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree_plugin.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index c31eb28..f07b643 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -885,7 +885,7 @@ void synchronize_rcu_expedited(void)
 	/* Snapshot current state of ->blkd_tasks lists. */
 	rcu_for_each_leaf_node(rsp, rnp)
 		sync_rcu_preempt_exp_init(rsp, rnp);
-	if (NUM_RCU_NODES > 1)
+	if (rcu_num_nodes > 1)
 		sync_rcu_preempt_exp_init(rsp, rcu_get_root(rsp));
 
 	put_online_cpus();
@@ -1475,7 +1475,7 @@ static void __init rcu_spawn_boost_kthreads(void)
 	BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
 	rnp = rcu_get_root(rcu_state_p);
 	(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
-	if (NUM_RCU_NODES > 1) {
+	if (rcu_num_nodes > 1) {
 		rcu_for_each_leaf_node(rcu_state_p, rnp)
 			(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
 	}
-- 
2.0.0.rc2



* [PATCH 02/16] rcu: Check return value for cpumask allocation
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
  2014-07-23  5:09 ` [PATCH 01/16] rcu: Use rcu_num_nodes instead of NUM_RCU_NODES Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:06   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 03/16] rcu: Fix comment for gp_state field values Pranith Kumar
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

This commit adds a check of the return value of zalloc_cpumask_var(), which is
used to allocate the cpumask for rcu_nocb_mask.
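
For illustration only, here is a standalone userspace sketch of the same
fail-soft pattern (the helper name setup_nocb_mask() and the sizes are made up;
this is not the kernel code): if the allocation fails, bail out and leave the
feature disabled rather than dereferencing a NULL mask later.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

static unsigned long *nocb_mask;
static bool have_nocb_mask;

static void setup_nocb_mask(size_t nbits)
{
        /* Analogous to zalloc_cpumask_var(): a zeroed allocation that can fail. */
        nocb_mask = calloc((nbits + 63) / 64, sizeof(*nocb_mask));
        if (!nocb_mask) {
                fprintf(stderr, "nocb mask allocation failed\n");
                return;         /* the feature simply stays off */
        }
        have_nocb_mask = true;
}

int main(void)
{
        setup_nocb_mask(256);
        printf("nocb mask %savailable\n", have_nocb_mask ? "" : "not ");
        return 0;
}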

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree_plugin.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index f07b643..bac9797 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -88,7 +88,10 @@ static void __init rcu_bootup_announce_oddness(void)
 #ifdef CONFIG_RCU_NOCB_CPU
 #ifndef CONFIG_RCU_NOCB_CPU_NONE
 	if (!have_rcu_nocb_mask) {
-		zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
+		if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
+			pr_info("rcu_nocb_mask allocation failed\n");
+			return;
+		}
 		have_rcu_nocb_mask = true;
 	}
 #ifdef CONFIG_RCU_NOCB_CPU_ZERO
-- 
2.0.0.rc2



* [PATCH 03/16] rcu: Fix comment for gp_state field values
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
  2014-07-23  5:09 ` [PATCH 01/16] rcu: Use rcu_num_nodes instead of NUM_RCU_NODES Pranith Kumar
  2014-07-23  5:09 ` [PATCH 02/16] rcu: Check return value for cpumask allocation Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23  5:09 ` [PATCH 04/16] rcu: Remove redundant check for an online CPU Pranith Kumar
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

The comment in the code states that the values are for the gp_flags field, when
they are actually for the gp_state field. This commit fixes the inconsistency.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index f703ea8..9643f10 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -498,7 +498,7 @@ struct rcu_state {
 #define RCU_GP_FLAG_INIT 0x1	/* Need grace-period initialization. */
 #define RCU_GP_FLAG_FQS  0x2	/* Need grace-period quiescent-state forcing. */
 
-/* Values for rcu_state structure's gp_flags field. */
+/* Values for rcu_state structure's gp_state field. */
 #define RCU_GP_WAIT_INIT 0	/* Initial state. */
 #define RCU_GP_WAIT_GPS  1	/* Wait for grace-period start. */
 #define RCU_GP_WAIT_FQS  2	/* Wait for force-quiescent-state time. */
-- 
2.0.0.rc2



* [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (2 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 03/16] rcu: Fix comment for gp_state field values Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:09   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 05/16] rcu: Add noreturn attribute to boost kthread Pranith Kumar
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

rcu_process_callbacks() is the softirq handler for RCU, which is raised from
invoke_rcu_core(), which in turn is called from __call_rcu_core().

Each of these three functions checks whether the CPU is online, so some of these
checks are redundant. This commit removes one of the redundant checks.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 72e0b1f..044946e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2563,8 +2563,6 @@ static void rcu_process_callbacks(struct softirq_action *unused)
 {
 	struct rcu_state *rsp;
 
-	if (cpu_is_offline(smp_processor_id()))
-		return;
 	trace_rcu_utilization(TPS("Start RCU core"));
 	for_each_rcu_flavor(rsp)
 		__rcu_process_callbacks(rsp);
-- 
2.0.0.rc2



* [PATCH 05/16] rcu: Add noreturn attribute to boost kthread
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (3 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 04/16] rcu: Remove redundant check for an online CPU Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23  5:09 ` [PATCH 06/16] rcu: Clear gp_flags only when actually starting new gp Pranith Kumar
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

rcu_boost_kthread() runs in an infinite loop and does not return. This commit
adds the __noreturn attribute to the function.
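
A minimal standalone sketch of the same idea (plain C with the GCC/Clang
noreturn attribute, nothing kernel-specific): marking a function that never
returns documents the fact, lets the compiler drop the dead return path, and
lets it warn if the function can actually return.

#include <stdio.h>
#include <unistd.h>

static __attribute__((noreturn)) void worker(void)
{
        for (;;) {                      /* loops forever, like rcu_boost_kthread() */
                puts("working");
                sleep(1);
        }
        /* No "return 0;" needed here: control never reaches this point. */
}

int main(void)
{
        worker();                       /* never returns */
}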

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree_plugin.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index bac9797..dd53247 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1222,7 +1222,7 @@ static int rcu_boost(struct rcu_node *rnp)
  * Priority-boosting kthread.  One per leaf rcu_node and one for the
  * root rcu_node.
  */
-static int rcu_boost_kthread(void *arg)
+static __noreturn int rcu_boost_kthread(void *arg)
 {
 	struct rcu_node *rnp = (struct rcu_node *)arg;
 	int spincnt = 0;
@@ -1250,7 +1250,6 @@ static int rcu_boost_kthread(void *arg)
 	}
 	/* NOTREACHED */
 	trace_rcu_utilization(TPS("End boost kthread@notreached"));
-	return 0;
 }
 
 /*
-- 
2.0.0.rc2



* [PATCH 06/16] rcu: Clear gp_flags only when actually starting new gp
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (4 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 05/16] rcu: Add noreturn attribute to boost kthread Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:13   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 07/16] rcu: Save and restore irq flags in rcu_gp_cleanup() Pranith Kumar
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

We are clearing gp_flags before we check whether a grace period is in progress.
If a grace period is in progress, we return after incorrectly clearing gp_flags.
The code comments say that this is highly unlikely, but just to be safe this
commit moves the clearing of gp_flags to after the check for a grace period in
progress.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 044946e..b14cecd 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1605,7 +1605,6 @@ static int rcu_gp_init(struct rcu_state *rsp)
 		raw_spin_unlock_irq(&rnp->lock);
 		return 0;
 	}
-	ACCESS_ONCE(rsp->gp_flags) = 0; /* Clear all flags: New grace period. */
 
 	if (WARN_ON_ONCE(rcu_gp_in_progress(rsp))) {
 		/*
@@ -1616,6 +1615,8 @@ static int rcu_gp_init(struct rcu_state *rsp)
 		return 0;
 	}
 
+	ACCESS_ONCE(rsp->gp_flags) = 0; /* Clear all flags: New grace period. */
+
 	/* Advance to a new grace period and initialize state. */
 	record_gp_stall_check_time(rsp);
 	/* Record GP times before starting GP, hence smp_store_release(). */
-- 
2.0.0.rc2



* [PATCH 07/16] rcu: Save and restore irq flags in rcu_gp_cleanup()
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (5 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 06/16] rcu: Clear gp_flags only when actually starting new gp Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:16   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 08/16] rcu: Clean up rcu_spawn_one_boost_kthread() Pranith Kumar
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

We use the raw_spin_lock_irqsave()/restore() family of functions throughout the
code except for two locations. This commit replaces raw_spin_lock_irq()/unlock_irq()
with the irqsave/restore() variants in one such location. This is not strictly
necessary, so I did not change the other location; I will update it if this is
accepted :)

This commit changes raw_spin_lock_irq()/unlock_irq() to lock_irqsave()/restore().
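
For readers less familiar with the two variants, here is a toy userspace model
(purely illustrative; the global flag merely stands in for the CPU's
interrupt-enable state, and this is not the kernel locking API). The
irqsave/restore pair puts the previous interrupt state back, while the plain
_irq pair unconditionally re-enables, so the latter is only correct where
interrupts are known to be enabled on entry.

#include <stdio.h>
#include <stdbool.h>

static bool irqs_enabled = true;        /* toy stand-in for the CPU irq state */

static unsigned long toy_lock_irqsave(void)
{
        unsigned long flags = irqs_enabled;     /* remember the previous state */
        irqs_enabled = false;                   /* "disable interrupts" */
        return flags;
}

static void toy_unlock_irqrestore(unsigned long flags)
{
        irqs_enabled = flags;                   /* restore whatever it was */
}

static void toy_unlock_irq(void)
{
        irqs_enabled = true;                    /* unconditionally re-enable */
}

int main(void)
{
        unsigned long flags;

        irqs_enabled = false;                   /* caller already has irqs off */
        flags = toy_lock_irqsave();
        toy_unlock_irqrestore(flags);
        printf("after irqrestore: irqs %s (caller's state preserved)\n",
               irqs_enabled ? "on" : "off");

        toy_lock_irqsave();
        toy_unlock_irq();
        printf("after unlock_irq:  irqs %s (forced back on)\n",
               irqs_enabled ? "on" : "off");
        return 0;
}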

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b14cecd..5dcbf36 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1706,13 +1706,13 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
  */
 static void rcu_gp_cleanup(struct rcu_state *rsp)
 {
-	unsigned long gp_duration;
+	unsigned long gp_duration, flags;
 	bool needgp = false;
 	int nocb = 0;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
-	raw_spin_lock_irq(&rnp->lock);
+	raw_spin_lock_irqsave(&rnp->lock, flags);
 	smp_mb__after_unlock_lock();
 	gp_duration = jiffies - rsp->gp_start;
 	if (gp_duration > rsp->gp_max)
@@ -1726,7 +1726,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 	 * safe for us to drop the lock in order to mark the grace
 	 * period as completed in all of the rcu_node structures.
 	 */
-	raw_spin_unlock_irq(&rnp->lock);
+	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 
 	/*
 	 * Propagate new ->completed value to rcu_node structures so
@@ -1738,7 +1738,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 	 * grace period is recorded in any of the rcu_node structures.
 	 */
 	rcu_for_each_node_breadth_first(rsp, rnp) {
-		raw_spin_lock_irq(&rnp->lock);
+		raw_spin_lock_irqsave(&rnp->lock, flags);
 		smp_mb__after_unlock_lock();
 		ACCESS_ONCE(rnp->completed) = rsp->gpnum;
 		rdp = this_cpu_ptr(rsp->rda);
@@ -1746,11 +1746,11 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 			needgp = __note_gp_changes(rsp, rnp, rdp) || needgp;
 		/* smp_mb() provided by prior unlock-lock pair. */
 		nocb += rcu_future_gp_cleanup(rsp, rnp);
-		raw_spin_unlock_irq(&rnp->lock);
+		raw_spin_unlock_irqrestore(&rnp->lock, flags);
 		cond_resched();
 	}
 	rnp = rcu_get_root(rsp);
-	raw_spin_lock_irq(&rnp->lock);
+	raw_spin_lock_irqsave(&rnp->lock, flags);
 	smp_mb__after_unlock_lock(); /* Order GP before ->completed update. */
 	rcu_nocb_gp_set(rnp, nocb);
 
@@ -1767,7 +1767,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 				       ACCESS_ONCE(rsp->gpnum),
 				       TPS("newreq"));
 	}
-	raw_spin_unlock_irq(&rnp->lock);
+	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 }
 
 /*
-- 
2.0.0.rc2



* [PATCH 08/16] rcu: Clean up rcu_spawn_one_boost_kthread()
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (6 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 07/16] rcu: Save and restore irq flags in rcu_gp_cleanup() Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23  5:09 ` [PATCH 09/16] rcu: Remove redundant check for online cpu Pranith Kumar
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

Even though we pass on the return value of kthread_create(), we do not use it
anywhere. This commit switches to kthread_run(), adds a BUG_ON() on kthread
creation failure, and changes the return type of this function to void.
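
For background, kthread_create() and kthread_run() report failure through the
kernel's ERR_PTR convention, which is what BUG_ON(IS_ERR(t)) keys off. A
simplified standalone sketch of that convention (the real helpers live in
include/linux/err.h; this userspace version only mimics them):

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
        return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
        return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
        /* What a failed kthread_create()/kthread_run() would hand back. */
        void *t = ERR_PTR(-ENOMEM);

        if (IS_ERR(t))
                printf("creation failed: %ld\n", PTR_ERR(t));
        return 0;
}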

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.h        |  2 +-
 kernel/rcu/tree_plugin.h | 15 ++++++---------
 2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 9643f10..fc21899 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -569,7 +569,7 @@ static void invoke_rcu_callbacks_kthread(void);
 static bool rcu_is_callbacks_kthread(void);
 #ifdef CONFIG_RCU_BOOST
 static void rcu_preempt_do_callbacks(void);
-static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
+static void rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
 						 struct rcu_node *rnp);
 #endif /* #ifdef CONFIG_RCU_BOOST */
 static void __init rcu_spawn_boost_kthreads(void);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index dd53247..4d42791 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1330,7 +1330,7 @@ static void rcu_preempt_boost_start_gp(struct rcu_node *rnp)
  * already exist.  We only create this kthread for preemptible RCU.
  * Returns zero if all is well, a negated errno otherwise.
  */
-static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
+static void rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
 						 struct rcu_node *rnp)
 {
 	int rnp_index = rnp - &rsp->node[0];
@@ -1347,18 +1347,15 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
 	rsp->boost = 1;
 	if (rnp->boost_kthread_task != NULL)
 		return 0;
-	t = kthread_create(rcu_boost_kthread, (void *)rnp,
+	t = kthread_run(rcu_boost_kthread, (void *)rnp,
 			   "rcub/%d", rnp_index);
-	if (IS_ERR(t))
-		return PTR_ERR(t);
+	BUG_ON(IS_ERR(t));
 	raw_spin_lock_irqsave(&rnp->lock, flags);
 	smp_mb__after_unlock_lock();
 	rnp->boost_kthread_task = t;
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	sp.sched_priority = RCU_BOOST_PRIO;
 	sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
-	wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */
-	return 0;
 }
 
 static void rcu_kthread_do_work(void)
@@ -1476,10 +1473,10 @@ static void __init rcu_spawn_boost_kthreads(void)
 		per_cpu(rcu_cpu_has_work, cpu) = 0;
 	BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
 	rnp = rcu_get_root(rcu_state_p);
-	(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
+	rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
 	if (rcu_num_nodes > 1) {
 		rcu_for_each_leaf_node(rcu_state_p, rnp)
-			(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
+			rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
 	}
 }
 
@@ -1490,7 +1487,7 @@ static void rcu_prepare_kthreads(int cpu)
 
 	/* Fire up the incoming CPU's kthread and leaf rcu_node kthread. */
 	if (rcu_scheduler_fully_active)
-		(void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
+		rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
 }
 
 #else /* #ifdef CONFIG_RCU_BOOST */
-- 
2.0.0.rc2



* [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (7 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 08/16] rcu: Clean up rcu_spawn_one_boost_kthread() Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:21   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 10/16] rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup Pranith Kumar
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

There are two checks for an online CPU in two if() conditions. This commit
simplifies this by replacing them with a single check for an online CPU.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5dcbf36..8d598a2 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
 {
 	bool needwake;
 
+	if (!cpu_online(smp_processor_id()))
+		return;
+
 	/*
 	 * If called from an extended quiescent state, invoke the RCU
 	 * core in order to force a re-evaluation of RCU's idleness.
 	 */
-	if (!rcu_is_watching() && cpu_online(smp_processor_id()))
+	if (!rcu_is_watching())
 		invoke_rcu_core();
 
-	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
-	if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
+	/* If interrupts were disabled, don't invoke RCU core. */
+	if (irqs_disabled_flags(flags))
 		return;
 
 	/*
-- 
2.0.0.rc2



* [PATCH 10/16] rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (8 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 09/16] rcu: Remove redundant check for online cpu Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:23   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 11/16] rcu: Check for spurious wakeup using return value Pranith Kumar
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

rcu_gp_init() is called once the gp_kthread wakes up, on the condition that the
init bit is set in gp_flags. To guard against a spurious wakeup we need to check
that this bit is actually set. This commit checks for the RCU_GP_FLAG_INIT bit in
gp_flags rather than for any bit being set.
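
The underlying rule is the usual one for condition waits: a wakeup by itself
proves nothing, so the waiter re-checks the specific predicate it cares about.
A standalone pthread sketch of that pattern (plain POSIX threads standing in
for the kernel's wait_event machinery; FLAG_INIT here plays the role of
RCU_GP_FLAG_INIT):

#include <pthread.h>
#include <stdio.h>

#define FLAG_INIT 0x1

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static unsigned int gp_flags;

static void *gp_thread(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        /* Re-check the predicate: POSIX explicitly allows spurious wakeups. */
        while (!(gp_flags & FLAG_INIT))
                pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        puts("grace-period work starts only once FLAG_INIT is really set");
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, gp_thread, NULL);
        pthread_mutex_lock(&lock);
        gp_flags |= FLAG_INIT;          /* request a new grace period */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
        return 0;
}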

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 8d598a2..f8847d9 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1600,7 +1600,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
 	rcu_bind_gp_kthread();
 	raw_spin_lock_irq(&rnp->lock);
 	smp_mb__after_unlock_lock();
-	if (!ACCESS_ONCE(rsp->gp_flags)) {
+	if (!(ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_INIT)) {
 		/* Spurious wakeup, tell caller to go back to sleep.  */
 		raw_spin_unlock_irq(&rnp->lock);
 		return 0;
-- 
2.0.0.rc2



* [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (9 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 10/16] rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:26   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 12/16] rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads() Pranith Kumar
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

When the gp_kthread wakes up from the wait event, wait_event_interruptible()
returns 0 if the wakeup is due to the condition having been met. This commit
checks this return value to detect a spurious wakeup before calling rcu_gp_init().

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f8847d9..3b20ad2 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1790,11 +1790,11 @@ static int __noreturn rcu_gp_kthread(void *arg)
 					       ACCESS_ONCE(rsp->gpnum),
 					       TPS("reqwait"));
 			rsp->gp_state = RCU_GP_WAIT_GPS;
-			wait_event_interruptible(rsp->gp_wq,
+			ret = wait_event_interruptible(rsp->gp_wq,
 						 ACCESS_ONCE(rsp->gp_flags) &
 						 RCU_GP_FLAG_INIT);
 			/* Locking provides needed memory barrier. */
-			if (rcu_gp_init(rsp))
+			if (!ret && rcu_gp_init(rsp))
 				break;
 			cond_resched();
 			flush_signals(current);
-- 
2.0.0.rc2



* [PATCH 12/16] rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads()
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (10 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 11/16] rcu: Check for spurious wakeup using return value Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23  5:09 ` [PATCH 13/16] rcu: Spawn nocb kthreads from rcu_prepare_kthreads() Pranith Kumar
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

We are currently spawning all of the kthreads from this function. This commit
renames the function so that its name better reflects what it does.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3b20ad2..afd54e2 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3501,9 +3501,10 @@ static int rcu_pm_notify(struct notifier_block *self,
 }
 
 /*
- * Spawn the kthreads that handle each RCU flavor's grace periods.
+ * Spawn the kthreads that handle each RCU flavor's grace periods
+ * and the no-cb and boost kthreads.
  */
-static int __init rcu_spawn_gp_kthread(void)
+static int __init rcu_spawn_kthreads(void)
 {
 	unsigned long flags;
 	struct rcu_node *rnp;
@@ -3523,7 +3524,7 @@ static int __init rcu_spawn_gp_kthread(void)
 	rcu_spawn_boost_kthreads();
 	return 0;
 }
-early_initcall(rcu_spawn_gp_kthread);
+early_initcall(rcu_spawn_kthreads);
 
 /*
  * This function is invoked towards the end of the scheduler's initialization
-- 
2.0.0.rc2



* [PATCH 13/16] rcu: Spawn nocb kthreads from rcu_prepare_kthreads()
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (11 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 12/16] rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads() Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23  5:09 ` [PATCH 14/16] rcu: Remove redundant checks for rcu_scheduler_fully_active Pranith Kumar
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

rcu_prepare_kthreads() is called when a CPU hotplug event happens. We can also
spawn the nocb kthreads from this function. This is preparation for the next
patch, which removes the redundant check for rcu_scheduler_fully_active. This
commit moves the spawning of nocb kthreads to rcu_prepare_kthreads().

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree.c        | 1 -
 kernel/rcu/tree_plugin.h | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index afd54e2..4dc449c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3453,7 +3453,6 @@ static int rcu_cpu_notify(struct notifier_block *self,
 	case CPU_UP_PREPARE_FROZEN:
 		rcu_prepare_cpu(cpu);
 		rcu_prepare_kthreads(cpu);
-		rcu_spawn_all_nocb_kthreads(cpu);
 		break;
 	case CPU_ONLINE:
 	case CPU_DOWN_FAILED:
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 4d42791..9a50e1d 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1488,6 +1488,8 @@ static void rcu_prepare_kthreads(int cpu)
 	/* Fire up the incoming CPU's kthread and leaf rcu_node kthread. */
 	if (rcu_scheduler_fully_active)
 		rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
+
+	rcu_spawn_all_nocb_kthreads(cpu);
 }
 
 #else /* #ifdef CONFIG_RCU_BOOST */
-- 
2.0.0.rc2



* [PATCH 14/16] rcu: Remove redundant checks for rcu_scheduler_fully_active
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (12 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 13/16] rcu: Spawn nocb kthreads from rcu_prepare_kthreads() Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:27   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads Pranith Kumar
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

rcu_scheduler_fully_active is set to true early in the boot process.

The check for this flag is only needed in rcu_prepare_kthreads() as this
function is called before the above flag is set. All other checks are redundant.
Hence this commit removes those redundant checks.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree_plugin.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 9a50e1d..541c776 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1341,7 +1341,7 @@ static void rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
 	if (&rcu_preempt_state != rsp)
 		return 0;
 
-	if (!rcu_scheduler_fully_active || rnp->qsmaskinit == 0)
+	if (rnp->qsmaskinit == 0)
 		return 0;
 
 	rsp->boost = 1;
@@ -1486,9 +1486,10 @@ static void rcu_prepare_kthreads(int cpu)
 	struct rcu_node *rnp = rdp->mynode;
 
 	/* Fire up the incoming CPU's kthread and leaf rcu_node kthread. */
-	if (rcu_scheduler_fully_active)
-		rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
+	if (!rcu_scheduler_fully_active)
+		return;
 
+	rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
 	rcu_spawn_all_nocb_kthreads(cpu);
 }
 
@@ -2508,9 +2509,8 @@ static void rcu_spawn_all_nocb_kthreads(int cpu)
 {
 	struct rcu_state *rsp;
 
-	if (rcu_scheduler_fully_active)
-		for_each_rcu_flavor(rsp)
-			rcu_spawn_one_nocb_kthread(rsp, cpu);
+	for_each_rcu_flavor(rsp)
+		rcu_spawn_one_nocb_kthread(rsp, cpu);
 }
 
 /*
-- 
2.0.0.rc2



* [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (13 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 14/16] rcu: Remove redundant checks for rcu_scheduler_fully_active Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:28   ` Paul E. McKenney
  2014-07-23  5:09 ` [PATCH 16/16] rcu: kvm.sh: Fix error when you pass --cpus argument Pranith Kumar
  2014-07-23 14:45 ` [PATCH 00/16] rcu: Some minor fixes and cleanups Paul E. McKenney
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:READ-COPY UPDATE...

This commit tries to spawn nocb kthreads only when the CPU is marked as a nocb
cpu. This is a minor optimization.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 kernel/rcu/tree_plugin.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 541c776..8abff0a 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1490,7 +1490,8 @@ static void rcu_prepare_kthreads(int cpu)
 		return;
 
 	rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
-	rcu_spawn_all_nocb_kthreads(cpu);
+	if (rcu_is_nocb_cpu(cpu))
+		rcu_spawn_all_nocb_kthreads(cpu);
 }
 
 #else /* #ifdef CONFIG_RCU_BOOST */
-- 
2.0.0.rc2



* [PATCH 16/16] rcu: kvm.sh: Fix error when you pass --cpus argument
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (14 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads Pranith Kumar
@ 2014-07-23  5:09 ` Pranith Kumar
  2014-07-23 12:31   ` Paul E. McKenney
  2014-07-23 14:45 ` [PATCH 00/16] rcu: Some minor fixes and cleanups Paul E. McKenney
  16 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23  5:09 UTC (permalink / raw)
  To: Paul E. McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, open list:RCUTORTURE TEST F...

When you pass the --cpus argument to kvm.sh, it errors out because it assumes you
have all the required CPUs to run a batch. This commit fixes this, along with a
minor comment fix.
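
For context, the loop being patched (see the hunk below) packs test scenarios
into batches that run concurrently. The following is a standalone C sketch of
the existing batching logic (the CPU counts are made up; the real implementation
is the awk END block in kvm.sh): a job joins the current batch while its CPU
requirement still fits, and the nc == ncpus clause lets a scenario that needs
more CPUs than the machine has claim an empty batch to itself.

#include <stdio.h>

int main(void)
{
        int cpus[] = { 8, 4, 2, 16, 1 };        /* per-scenario CPU needs (made up) */
        int njobs = sizeof(cpus) / sizeof(cpus[0]);
        int done[5] = { 0 };
        int ncpus = 8;                          /* CPUs available on this machine */
        int batch = 0;
        int remaining = njobs;

        while (remaining > 0) {
                int nc = ncpus;                 /* CPUs still free in this batch */

                batch++;
                for (int i = 0; i < njobs; i++) {
                        if (done[i])
                                continue;       /* already part of a batch */
                        /* Fits, or the batch is empty and the job is oversized. */
                        if (nc >= cpus[i] || nc == ncpus) {
                                done[i] = batch;
                                nc -= cpus[i];
                                remaining--;
                        }
                }
        }
        for (int i = 0; i < njobs; i++)
                printf("job %d (cpus=%d) -> batch %d\n", i, cpus[i], done[i]);
        return 0;
}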

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 tools/testing/selftests/rcutorture/bin/kvm.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh
index 36534f9..1128d85 100755
--- a/tools/testing/selftests/rcutorture/bin/kvm.sh
+++ b/tools/testing/selftests/rcutorture/bin/kvm.sh
@@ -214,7 +214,7 @@ END {
 	batch = 0;
 	nc = -1;
 
-	# Each pass through the following loop creates on test batch
+	# Each pass through the following loop creates one test batch
 	# that can be executed concurrently given ncpus.  Note that a
 	# given test that requires more than the available CPUs will run in
 	# their own batch.  Such tests just have to make do with what
@@ -228,7 +228,7 @@ END {
 		for (i = 0; i < njobs; i++) {
 			if (done[i])
 				continue; # Already part of a batch.
-			if (nc >= cpus[i] || nc == ncpus) {
+			if (nc >= cpus[i]) {
 
 				# This test fits into the current batch.
 				done[i] = batch;
-- 
2.0.0.rc2



* Re: [PATCH 02/16] rcu: Check return value for cpumask allocation
  2014-07-23  5:09 ` [PATCH 02/16] rcu: Check return value for cpumask allocation Pranith Kumar
@ 2014-07-23 12:06   ` Paul E. McKenney
  2014-07-23 12:49     ` Pranith Kumar
  2014-07-23 17:14     ` Pranith Kumar
  0 siblings, 2 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:06 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:39AM -0400, Pranith Kumar wrote:
> This commit adds a check for return value of zalloc_cpumask_var() used while
> allocating cpumask for rcu_nocb_mask.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> ---
>  kernel/rcu/tree_plugin.h | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index f07b643..bac9797 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -88,7 +88,10 @@ static void __init rcu_bootup_announce_oddness(void)
>  #ifdef CONFIG_RCU_NOCB_CPU
>  #ifndef CONFIG_RCU_NOCB_CPU_NONE
>  	if (!have_rcu_nocb_mask) {
> -		zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
> +		if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
> +			pr_info("rcu_nocb_mask allocation failed\n");
> +			return;

Good catch, but this "return" is an accident waiting to happen.  The
accident will happen as soon as another RCU option appears, and the
person adding it quite naturally adds it at the end of this function.
The cleanest approach is to make an rcu_bootup_announce_oddness_nocb()
as one commit that does -only- code motion, and then make this change
as another commit.

							Thanx, Paul

> +		}
>  		have_rcu_nocb_mask = true;
>  	}
>  #ifdef CONFIG_RCU_NOCB_CPU_ZERO
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23  5:09 ` [PATCH 04/16] rcu: Remove redundant check for an online CPU Pranith Kumar
@ 2014-07-23 12:09   ` Paul E. McKenney
  2014-07-23 13:23     ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:09 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:41AM -0400, Pranith Kumar wrote:
> rcu_process_callbacks() is the softirq handler for RCU which is raised from
> invoke_rcu_core() which is called from __call_rcu_core().
> 
> Each of these three functions checks if the cpu is online. We can remove the
> redundant ones. This commit removes one of these redundant checks.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

Sorry, but no.  There can be a long delay between raise_softirq() and
this function starting, particularly if ksoftirqd gets involved.  The
CPU could easily go offline in the meantime.

							Thanx, Paul

> ---
>  kernel/rcu/tree.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 72e0b1f..044946e 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2563,8 +2563,6 @@ static void rcu_process_callbacks(struct softirq_action *unused)
>  {
>  	struct rcu_state *rsp;
> 
> -	if (cpu_is_offline(smp_processor_id()))
> -		return;
>  	trace_rcu_utilization(TPS("Start RCU core"));
>  	for_each_rcu_flavor(rsp)
>  		__rcu_process_callbacks(rsp);
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 06/16] rcu: Clear gp_flags only when actually starting new gp
  2014-07-23  5:09 ` [PATCH 06/16] rcu: Clear gp_flags only when actually starting new gp Pranith Kumar
@ 2014-07-23 12:13   ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:13 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:43AM -0400, Pranith Kumar wrote:
> We are clearing the gp_flags before we check if a grace period is in progress.
> If a grace period is in progress, we return after incorrectly clearing the
> gp_flags. The code comments say that this is highly unlikely, but just to be
> safe this commit moves the clearing of gp_flags to after the check for grace
> period in progress.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

Sorry, but no.  The WARN_ON_ONCE() indicates an error condition.  It should
not be possible to have a grace period already in progress at this point
in the code.  So the two locations for the clearing of ->gp_flags are
equivalent.

							Thanx, Paul

> ---
>  kernel/rcu/tree.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 044946e..b14cecd 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1605,7 +1605,6 @@ static int rcu_gp_init(struct rcu_state *rsp)
>  		raw_spin_unlock_irq(&rnp->lock);
>  		return 0;
>  	}
> -	ACCESS_ONCE(rsp->gp_flags) = 0; /* Clear all flags: New grace period. */
> 
>  	if (WARN_ON_ONCE(rcu_gp_in_progress(rsp))) {
>  		/*
> @@ -1616,6 +1615,8 @@ static int rcu_gp_init(struct rcu_state *rsp)
>  		return 0;
>  	}
> 
> +	ACCESS_ONCE(rsp->gp_flags) = 0; /* Clear all flags: New grace period. */
> +
>  	/* Advance to a new grace period and initialize state. */
>  	record_gp_stall_check_time(rsp);
>  	/* Record GP times before starting GP, hence smp_store_release(). */
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 07/16] rcu: Save and restore irq flags in rcu_gp_cleanup()
  2014-07-23  5:09 ` [PATCH 07/16] rcu: Save and restore irq flags in rcu_gp_cleanup() Pranith Kumar
@ 2014-07-23 12:16   ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:16 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:44AM -0400, Pranith Kumar wrote:
> We use raw_spin_lock_irqsave/restore() family of functions throughout the code
> but for two locations. This commit replaces raw_spin_lock_irq()/unlock_irq()
> with irqsave/restore() in one such location. This is not strictly necessary,
> so I did not change the other location. I will update the other location if
> this is accepted :)
> 
> This commit changes raw_spin_lock_irq()/unlock_irq() to lock_irqsave()/restore().
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

I sympathize, as I used to take the approach that you are advocating.

The reason that I changed is that we -know- that interrupts are enabled
at this point in the code, so there is no point in incurring the extra
cognitive and machine overhead of the _irqsave() variant.  Plus the
current code has documentation benefits -- it tells you that the author
felt that irqs could not possibly be disabled here.

So sorry, but no.

							Thanx, Paul

> ---
>  kernel/rcu/tree.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index b14cecd..5dcbf36 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1706,13 +1706,13 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
>   */
>  static void rcu_gp_cleanup(struct rcu_state *rsp)
>  {
> -	unsigned long gp_duration;
> +	unsigned long gp_duration, flags;
>  	bool needgp = false;
>  	int nocb = 0;
>  	struct rcu_data *rdp;
>  	struct rcu_node *rnp = rcu_get_root(rsp);
> 
> -	raw_spin_lock_irq(&rnp->lock);
> +	raw_spin_lock_irqsave(&rnp->lock, flags);
>  	smp_mb__after_unlock_lock();
>  	gp_duration = jiffies - rsp->gp_start;
>  	if (gp_duration > rsp->gp_max)
> @@ -1726,7 +1726,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
>  	 * safe for us to drop the lock in order to mark the grace
>  	 * period as completed in all of the rcu_node structures.
>  	 */
> -	raw_spin_unlock_irq(&rnp->lock);
> +	raw_spin_unlock_irqrestore(&rnp->lock, flags);
> 
>  	/*
>  	 * Propagate new ->completed value to rcu_node structures so
> @@ -1738,7 +1738,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
>  	 * grace period is recorded in any of the rcu_node structures.
>  	 */
>  	rcu_for_each_node_breadth_first(rsp, rnp) {
> -		raw_spin_lock_irq(&rnp->lock);
> +		raw_spin_lock_irqsave(&rnp->lock, flags);
>  		smp_mb__after_unlock_lock();
>  		ACCESS_ONCE(rnp->completed) = rsp->gpnum;
>  		rdp = this_cpu_ptr(rsp->rda);
> @@ -1746,11 +1746,11 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
>  			needgp = __note_gp_changes(rsp, rnp, rdp) || needgp;
>  		/* smp_mb() provided by prior unlock-lock pair. */
>  		nocb += rcu_future_gp_cleanup(rsp, rnp);
> -		raw_spin_unlock_irq(&rnp->lock);
> +		raw_spin_unlock_irqrestore(&rnp->lock, flags);
>  		cond_resched();
>  	}
>  	rnp = rcu_get_root(rsp);
> -	raw_spin_lock_irq(&rnp->lock);
> +	raw_spin_lock_irqsave(&rnp->lock, flags);
>  	smp_mb__after_unlock_lock(); /* Order GP before ->completed update. */
>  	rcu_nocb_gp_set(rnp, nocb);
> 
> @@ -1767,7 +1767,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
>  				       ACCESS_ONCE(rsp->gpnum),
>  				       TPS("newreq"));
>  	}
> -	raw_spin_unlock_irq(&rnp->lock);
> +	raw_spin_unlock_irqrestore(&rnp->lock, flags);
>  }
> 
>  /*
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23  5:09 ` [PATCH 09/16] rcu: Remove redundant check for online cpu Pranith Kumar
@ 2014-07-23 12:21   ` Paul E. McKenney
  2014-07-23 12:59     ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:21 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
> There are two checks for an online CPU in two if() conditions. This commit
> simplifies this by replacing it with only one check for the online CPU.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

I admit that it is very early in the morning my time, but I don't see
this change as preserving the semantics in all cases.  Please recheck
your changes to the second check.

							Thanx, Paul

> ---
>  kernel/rcu/tree.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 5dcbf36..8d598a2 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
>  {
>  	bool needwake;
> 
> +	if (!cpu_online(smp_processor_id()))
> +		return;
> +
>  	/*
>  	 * If called from an extended quiescent state, invoke the RCU
>  	 * core in order to force a re-evaluation of RCU's idleness.
>  	 */
> -	if (!rcu_is_watching() && cpu_online(smp_processor_id()))
> +	if (!rcu_is_watching())
>  		invoke_rcu_core();
> 
> -	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> -	if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
> +	/* If interrupts were disabled, don't invoke RCU core. */
> +	if (irqs_disabled_flags(flags))
>  		return;
> 
>  	/*
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 10/16] rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup
  2014-07-23  5:09 ` [PATCH 10/16] rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup Pranith Kumar
@ 2014-07-23 12:23   ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:23 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:47AM -0400, Pranith Kumar wrote:
> rcu_gp_init() is called once the gp_kthread wakes up on the condition that
> the init bit is set in the gp_flags. For a spurious wakeup we need to check that
> it is actually set. This commit ensures that the RCU_GP_FLAG_INIT bit is set in
> gp_flags.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

The trick here is that the other bits have no meaning if we are not yet
in a grace period.  If any get spuriously set, for example, by a race
between force_quiescent_state() and the end of a grace period, we really
do want to clear them all.

So sorry, but no.

							Thanx, Paul

> ---
>  kernel/rcu/tree.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8d598a2..f8847d9 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1600,7 +1600,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
>  	rcu_bind_gp_kthread();
>  	raw_spin_lock_irq(&rnp->lock);
>  	smp_mb__after_unlock_lock();
> -	if (!ACCESS_ONCE(rsp->gp_flags)) {
> +	if (!(ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_INIT)) {
>  		/* Spurious wakeup, tell caller to go back to sleep.  */
>  		raw_spin_unlock_irq(&rnp->lock);
>  		return 0;
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-23  5:09 ` [PATCH 11/16] rcu: Check for spurious wakeup using return value Pranith Kumar
@ 2014-07-23 12:26   ` Paul E. McKenney
  2014-07-24  2:36     ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:26 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:48AM -0400, Pranith Kumar wrote:
> When the gp_kthread wakes up from the wait event, it returns 0 if the wake up is
> due to the condition having been met. This commit checks this return value
> for a spurious wake up before calling rcu_gp_init().
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

How does this added check help?  I don't see that it does.  If the flag
is set, we want to wake up.  If we get a spurious wakeup, but then the
flag gets set before we actually wake up, we still want to wake up.

So sorry, but no.

							Thanx, Paul

PS.  BTW, the ones that I am not immediately responding to are those
     that I will recheck later on and possibly accept.

> ---
>  kernel/rcu/tree.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index f8847d9..3b20ad2 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1790,11 +1790,11 @@ static int __noreturn rcu_gp_kthread(void *arg)
>  					       ACCESS_ONCE(rsp->gpnum),
>  					       TPS("reqwait"));
>  			rsp->gp_state = RCU_GP_WAIT_GPS;
> -			wait_event_interruptible(rsp->gp_wq,
> +			ret = wait_event_interruptible(rsp->gp_wq,
>  						 ACCESS_ONCE(rsp->gp_flags) &
>  						 RCU_GP_FLAG_INIT);
>  			/* Locking provides needed memory barrier. */
> -			if (rcu_gp_init(rsp))
> +			if (!ret && rcu_gp_init(rsp))
>  				break;
>  			cond_resched();
>  			flush_signals(current);
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 14/16] rcu: Remove redundant checks for rcu_scheduler_fully_active
  2014-07-23  5:09 ` [PATCH 14/16] rcu: Remove redundant checks for rcu_scheduler_fully_active Pranith Kumar
@ 2014-07-23 12:27   ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:27 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:51AM -0400, Pranith Kumar wrote:
> rcu_scheduler_fully_active is set to true early in the boot process.
> 
> The check for this flag is only needed in rcu_prepare_kthreads() as this
> function is called before the above flag is set. All other checks are redundant.
> Hence this commit removes those redundant checks.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

This might actually work, but the checks are cheap, don't happen very often,
and make it much easier to modify the code.  So sorry, but no.

							Thanx, Paul

> ---
>  kernel/rcu/tree_plugin.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 9a50e1d..541c776 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -1341,7 +1341,7 @@ static void rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
>  	if (&rcu_preempt_state != rsp)
>  		return 0;
> 
> -	if (!rcu_scheduler_fully_active || rnp->qsmaskinit == 0)
> +	if (rnp->qsmaskinit == 0)
>  		return 0;
> 
>  	rsp->boost = 1;
> @@ -1486,9 +1486,10 @@ static void rcu_prepare_kthreads(int cpu)
>  	struct rcu_node *rnp = rdp->mynode;
> 
>  	/* Fire up the incoming CPU's kthread and leaf rcu_node kthread. */
> -	if (rcu_scheduler_fully_active)
> -		rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
> +	if (!rcu_scheduler_fully_active)
> +		return;
> 
> +	rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
>  	rcu_spawn_all_nocb_kthreads(cpu);
>  }
> 
> @@ -2508,9 +2509,8 @@ static void rcu_spawn_all_nocb_kthreads(int cpu)
>  {
>  	struct rcu_state *rsp;
> 
> -	if (rcu_scheduler_fully_active)
> -		for_each_rcu_flavor(rsp)
> -			rcu_spawn_one_nocb_kthread(rsp, cpu);
> +	for_each_rcu_flavor(rsp)
> +		rcu_spawn_one_nocb_kthread(rsp, cpu);
>  }
> 
>  /*
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads
  2014-07-23  5:09 ` [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads Pranith Kumar
@ 2014-07-23 12:28   ` Paul E. McKenney
  2014-07-23 13:14     ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:28 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:09:52AM -0400, Pranith Kumar wrote:
> This commit tries to spawn nocb kthreads only when the CPU is marked as a nocb
> cpu. This is a minor optimization.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

This only happens once at boot time, so I am happier with the single check
at the bottom.  So sorry, but no.

							Thanx, Paul

> ---
>  kernel/rcu/tree_plugin.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 541c776..8abff0a 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -1490,7 +1490,8 @@ static void rcu_prepare_kthreads(int cpu)
>  		return;
> 
>  	rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
> -	rcu_spawn_all_nocb_kthreads(cpu);
> +	if (rcu_is_nocb_cpu(cpu))
> +		rcu_spawn_all_nocb_kthreads(cpu);
>  }
> 
>  #else /* #ifdef CONFIG_RCU_BOOST */
> -- 
> 2.0.0.rc2
> 



* Re: [PATCH 16/16] rcu: kvm.sh: Fix error when you pass --cpus argument
  2014-07-23  5:09 ` [PATCH 16/16] rcu: kvm.sh: Fix error when you pass --cpus argument Pranith Kumar
@ 2014-07-23 12:31   ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 12:31 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:RCUTORTURE TEST F...

On Wed, Jul 23, 2014 at 01:09:53AM -0400, Pranith Kumar wrote:
> When you pass --cpus argument to kvm.sh, it errors out as it assumes you have
> all the required CPUs to run a batch. This commit fixes this along with a minor
> comment fix.
> 
> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>

The current code does what I want, namely allowing you to run tests
in parallel.  If you are running on a system with less than 16 CPUs,
this will of course not work -- and I -want- it to yell at me if I try
a full parallel run on my laptop instead of my lab machine.  One easy
way to work around this is to use the --configs argument to run the big
tests sequentially, then again to run the smaller tests in parallel.
So sorry, but no.

							Thanx, Paul

> ---
>  tools/testing/selftests/rcutorture/bin/kvm.sh | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh
> index 36534f9..1128d85 100755
> --- a/tools/testing/selftests/rcutorture/bin/kvm.sh
> +++ b/tools/testing/selftests/rcutorture/bin/kvm.sh
> @@ -214,7 +214,7 @@ END {
>  	batch = 0;
>  	nc = -1;
> 
> -	# Each pass through the following loop creates on test batch
> +	# Each pass through the following loop creates one test batch
>  	# that can be executed concurrently given ncpus.  Note that a
>  	# given test that requires more than the available CPUs will run in
>  	# their own batch.  Such tests just have to make do with what
> @@ -228,7 +228,7 @@ END {
>  		for (i = 0; i < njobs; i++) {
>  			if (done[i])
>  				continue; # Already part of a batch.
> -			if (nc >= cpus[i] || nc == ncpus) {
> +			if (nc >= cpus[i]) {
> 
>  				# This test fits into the current batch.
>  				done[i] = batch;
> -- 
> 2.0.0.rc2
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/16] rcu: Check return value for cpumask allocation
  2014-07-23 12:06   ` Paul E. McKenney
@ 2014-07-23 12:49     ` Pranith Kumar
  2014-07-23 17:14     ` Pranith Kumar
  1 sibling, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 12:49 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 8:06 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 01:09:39AM -0400, Pranith Kumar wrote:
>> This commit adds a check for the return value of zalloc_cpumask_var() used while
>> allocating cpumask for rcu_nocb_mask.
>>
>> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> ---
>>  kernel/rcu/tree_plugin.h | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
>> index f07b643..bac9797 100644
>> --- a/kernel/rcu/tree_plugin.h
>> +++ b/kernel/rcu/tree_plugin.h
>> @@ -88,7 +88,10 @@ static void __init rcu_bootup_announce_oddness(void)
>>  #ifdef CONFIG_RCU_NOCB_CPU
>>  #ifndef CONFIG_RCU_NOCB_CPU_NONE
>>       if (!have_rcu_nocb_mask) {
>> -             zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
>> +             if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
>> +                     pr_info("rcu_nocb_mask allocation failed\n");
>> +                     return;
>
> Good catch, but this "return" is an accident waiting to happen.  The
> accident will happen as soon as another RCU option appears, and the
> person adding it quite naturally adds it at the end of this function.
> The cleanest approach is to make an rcu_bootup_announce_oddness_nocb()
> as one commit that does -only- code motion, and then make this change
> as another commit.
>

OK, I will resubmit this change as a 2 patch change.

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 12:21   ` Paul E. McKenney
@ 2014-07-23 12:59     ` Pranith Kumar
  2014-07-23 13:50       ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 12:59 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
>> There are two checks for an online CPU in two if() conditions. This commit
>> simplifies this by replacing it with only one check for the online CPU.
>>
>> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>
> I admit that it is very early in the morning my time, but I don't see
> this change as preserving the semantics in all cases.  Please recheck
> your changes to the second check.
>
>                                                         Thanx, Paul

I guess you must be thrown off by the complementary checks, the first
check is for cpu_online() and second is for cpu_is_offline(). :)

Previously, if a cpu is offline, the first condition is false and the
second condition is true, so we return from the second if() condition.
The same semantics are being preserved.
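
To make that easy to check, here is a small userspace sketch (plain C, nothing
kernel-specific; current_form()/proposed_form() are names made up for the
illustration) that walks all eight combinations of the three inputs and asserts
that both shapes of the check agree on what happens:

/*
 * Model the two forms of the online/offline logic in __call_rcu_core()
 * with plain booleans: does invoke_rcu_core() get called, and do we bail
 * out before the rest of the function?
 */
#include <assert.h>
#include <stdbool.h>

struct outcome {
	bool invoked;	/* invoke_rcu_core() would have been called */
	bool bailed;	/* returned before the later checks */
};

/* Current code: the online test is folded into each compound condition. */
static struct outcome current_form(bool watching, bool online, bool irqs_off)
{
	struct outcome o = { false, false };

	if (!watching && online)
		o.invoked = true;
	if (irqs_off || !online)
		o.bailed = true;
	return o;
}

/* Proposed code: one up-front online check, then the simplified tests. */
static struct outcome proposed_form(bool watching, bool online, bool irqs_off)
{
	struct outcome o = { false, false };

	if (!online) {
		o.bailed = true;
		return o;
	}
	if (!watching)
		o.invoked = true;
	if (irqs_off)
		o.bailed = true;
	return o;
}

int main(void)
{
	for (int w = 0; w < 2; w++)
		for (int on = 0; on < 2; on++)
			for (int irq = 0; irq < 2; irq++) {
				struct outcome a = current_form(w, on, irq);
				struct outcome b = proposed_form(w, on, irq);

				assert(a.invoked == b.invoked);
				assert(a.bailed == b.bailed);
			}
	return 0;
}

Both asserts hold for every combination, which is the equivalence argued above.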

--
Pranith.



>
>> ---
>>  kernel/rcu/tree.c | 9 ++++++---
>>  1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>> index 5dcbf36..8d598a2 100644
>> --- a/kernel/rcu/tree.c
>> +++ b/kernel/rcu/tree.c
>> @@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
>>  {
>>       bool needwake;
>>
>> +     if (!cpu_online(smp_processor_id()))
>> +             return;
>> +
>>       /*
>>        * If called from an extended quiescent state, invoke the RCU
>>        * core in order to force a re-evaluation of RCU's idleness.
>>        */
>> -     if (!rcu_is_watching() && cpu_online(smp_processor_id()))
>> +     if (!rcu_is_watching())
>>               invoke_rcu_core();
>>
>> -     /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
>> -     if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
>> +     /* If interrupts were disabled, don't invoke RCU core. */
>> +     if (irqs_disabled_flags(flags))
>>               return;
>>
>>       /*
>> --
>> 2.0.0.rc2
>>
>



-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads
  2014-07-23 12:28   ` Paul E. McKenney
@ 2014-07-23 13:14     ` Pranith Kumar
  2014-07-23 13:42       ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 13:14 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 8:28 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 01:09:52AM -0400, Pranith Kumar wrote:
>> This commit tries to spawn nocb kthreads only when the CPU is marked as a nocb
>> cpu. This is a minor optimization.
>>
>> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>
> This only happens once at boot time, so I am happier with the single check
> at the bottom.  So sorry, but no.
>


Actually this happens whenever a CPU is coming online, but yes, it is
a minor optimization for hotplug.

>                                                         Thanx, Paul
>
>> ---
>>  kernel/rcu/tree_plugin.h | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
>> index 541c776..8abff0a 100644
>> --- a/kernel/rcu/tree_plugin.h
>> +++ b/kernel/rcu/tree_plugin.h
>> @@ -1490,7 +1490,8 @@ static void rcu_prepare_kthreads(int cpu)
>>               return;
>>
>>       rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
>> -     rcu_spawn_all_nocb_kthreads(cpu);
>> +     if (rcu_is_nocb_cpu(cpu))
>> +             rcu_spawn_all_nocb_kthreads(cpu);
>>  }
>>
>>  #else /* #ifdef CONFIG_RCU_BOOST */
>> --
>> 2.0.0.rc2
>>
>



-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23 12:09   ` Paul E. McKenney
@ 2014-07-23 13:23     ` Pranith Kumar
  2014-07-23 13:41       ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 13:23 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 8:09 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 01:09:41AM -0400, Pranith Kumar wrote:
>> rcu_process_callbacks() is the softirq handler for RCU which is raised from
>> invoke_rcu_core() which is called from __call_rcu_core().
>>
>> Each of these three functions checks if the cpu is online. We can remove the
>> redundant ones. This commit removes one of these redundant checks.
>>
>> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>
> Sorry, but no.  There can be a long delay between raise_softirq() and
> this function starting, particularly if ksoftirqd gets involved.  The
> CPU could easily go offline in the meantime.
>

That makes sense. I guess one of the other two checks in
__call_rcu_core() or invoke_rcu_core() can go then?

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23 13:23     ` Pranith Kumar
@ 2014-07-23 13:41       ` Paul E. McKenney
  2014-07-23 14:01         ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 13:41 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 09:23:47AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 8:09 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 01:09:41AM -0400, Pranith Kumar wrote:
> >> rcu_process_callbacks() is the softirq handler for RCU which is raised from
> >> invoke_rcu_core() which is called from __call_rcu_core().
> >>
> >> Each of these three functions checks if the cpu is online. We can remove the
> >> redundant ones. This commit removes one of these redundant checks.
> >>
> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >
> > Sorry, but no.  There can be a long delay between raise_softirq() and
> > this function starting, particularly if ksoftirqd gets involved.  The
> > CPU could easily go offline in the meantime.
> >
> 
> That makes sense. I guess one of the other two checks in
> __call_rcu_core() or invoke_rcu_core() can go then?

What would have to be the case for this suggested change to be safe?
Does that condition in fact hold?

							Thanx, Paul


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads
  2014-07-23 13:14     ` Pranith Kumar
@ 2014-07-23 13:42       ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 13:42 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 09:14:51AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 8:28 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 01:09:52AM -0400, Pranith Kumar wrote:
> >> This commit tries to spawn nocb kthreads only when the CPU is marked as a nocb
> >> cpu. This is a minor optimization.
> >>
> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >
> > This only happens once at boot time, so I am happier with the single check
> > at the bottom.  So sorry, but no.
> 
> Actually this happens whenever a CPU is coming online, but yes, it is
> a minor optimization for hotplug.

Fair point, but still.

                                                        Thanx, Paul

> >> ---
> >>  kernel/rcu/tree_plugin.h | 3 ++-
> >>  1 file changed, 2 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> >> index 541c776..8abff0a 100644
> >> --- a/kernel/rcu/tree_plugin.h
> >> +++ b/kernel/rcu/tree_plugin.h
> >> @@ -1490,7 +1490,8 @@ static void rcu_prepare_kthreads(int cpu)
> >>               return;
> >>
> >>       rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
> >> -     rcu_spawn_all_nocb_kthreads(cpu);
> >> +     if (rcu_is_nocb_cpu(cpu))
> >> +             rcu_spawn_all_nocb_kthreads(cpu);
> >>  }
> >>
> >>  #else /* #ifdef CONFIG_RCU_BOOST */
> >> --
> >> 2.0.0.rc2
> >>
> >
> 
> 
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 12:59     ` Pranith Kumar
@ 2014-07-23 13:50       ` Paul E. McKenney
  2014-07-23 14:12         ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 13:50 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 08:59:06AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
> >> There are two checks for an online CPU in two if() conditions. This commit
> >> simplifies this by replacing it with only one check for the online CPU.
> >>
> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >
> > I admit that it is very early in the morning my time, but I don't see
> > this change as preserving the semantics in all cases.  Please recheck
> > your changes to the second check.
> >
> >                                                         Thanx, Paul
> 
> I guess you must be thrown off by the complementary checks, the first
> check is for cpu_online() and second is for cpu_is_offline(). :)
> 
> Previously, if a cpu is offline, the first condition is false and the
> second condition is true, so we return from the second if() condition.
> The same semantics are being preserved.

Fair enough!

Nevertheless, I am not seeing this as a simplification.

							Thanx, Paul

> --
> Pranith.
> 
> 
> 
> >
> >> ---
> >>  kernel/rcu/tree.c | 9 ++++++---
> >>  1 file changed, 6 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> index 5dcbf36..8d598a2 100644
> >> --- a/kernel/rcu/tree.c
> >> +++ b/kernel/rcu/tree.c
> >> @@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
> >>  {
> >>       bool needwake;
> >>
> >> +     if (!cpu_online(smp_processor_id()))
> >> +             return;
> >> +
> >>       /*
> >>        * If called from an extended quiescent state, invoke the RCU
> >>        * core in order to force a re-evaluation of RCU's idleness.
> >>        */
> >> -     if (!rcu_is_watching() && cpu_online(smp_processor_id()))
> >> +     if (!rcu_is_watching())
> >>               invoke_rcu_core();
> >>
> >> -     /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> >> -     if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
> >> +     /* If interrupts were disabled, don't invoke RCU core. */
> >> +     if (irqs_disabled_flags(flags))
> >>               return;
> >>
> >>       /*
> >> --
> >> 2.0.0.rc2
> >>
> >
> 
> 
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23 13:41       ` Paul E. McKenney
@ 2014-07-23 14:01         ` Pranith Kumar
  2014-07-23 14:14           ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 14:01 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 9:41 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 09:23:47AM -0400, Pranith Kumar wrote:
>> On Wed, Jul 23, 2014 at 8:09 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > On Wed, Jul 23, 2014 at 01:09:41AM -0400, Pranith Kumar wrote:
>> >> rcu_process_callbacks() is the softirq handler for RCU which is raised from
>> >> invoke_rcu_core() which is called from __call_rcu_core().
>> >>
>> >> Each of these three functions checks if the cpu is online. We can remove the
>> >> redundant ones. This commit removes one of these redundant checks.
>> >>
>> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> >
>> > Sorry, but no.  There can be a long delay between raise_softirq() and
>> > this function starting, particularly if ksoftirqd gets involved.  The
>> > CPU could easily go offline in the meantime.
>> >
>>
>> That makes sense. I guess one of the other two checks in
>> __call_rcu_core() or invoke_rcu_core() can go then?
>
> What would have to be the case for this suggested change to be safe?
> Does that condition in fact hold?
>

The only scenario which is unsafe is when this thread is preempted and
scheduled on a dying CPU. In that case we should turn off interrupts.

Actually, checking the code, I see that interrupts are turned off
already before we call __call_rcu_core(). So I am not sure such a case
will happen. On the other hand invoke_rcu_core() will be a one line
function which might as well be in-lined to avoid the double cpu
online check.

What am I missing?
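
(For context, the helper in question is tiny in kernels of this vintage --
roughly the following, paraphrased rather than quoted from the tree:)

static void invoke_rcu_core(void)
{
	if (cpu_online(smp_processor_id()))
		raise_softirq(RCU_SOFTIRQ);
}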

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 13:50       ` Paul E. McKenney
@ 2014-07-23 14:12         ` Pranith Kumar
  2014-07-23 14:23           ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 14:12 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 9:50 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 08:59:06AM -0400, Pranith Kumar wrote:
>> On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
>> >> There are two checks for an online CPU in two if() conditions. This commit
>> >> simplifies this by replacing it with only one check for the online CPU.
>> >>
>> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> >
>> > I admit that it is very early in the morning my time, but I don't see
>> > this change as preserving the semantics in all cases.  Please recheck
>> > your changes to the second check.
>> >
>> >                                                         Thanx, Paul
>>
>> I guess you must be thrown off by the complementary checks, the first
>> check is for cpu_online() and second is for cpu_is_offline(). :)
>>
>> Previously, if a cpu is offline, the first condition is false and the
>> second condition is true, so we return from the second if() condition.
>> The same semantics are being preserved.
>
> Fair enough!
>
> Nevertheless, I am not seeing this as a simplification.
>

I am not sure what you mean here, do you mean that both the checks are
actually required?


>> --
>> Pranith.
>>
>>
>>
>> >
>> >> ---
>> >>  kernel/rcu/tree.c | 9 ++++++---
>> >>  1 file changed, 6 insertions(+), 3 deletions(-)
>> >>
>> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>> >> index 5dcbf36..8d598a2 100644
>> >> --- a/kernel/rcu/tree.c
>> >> +++ b/kernel/rcu/tree.c
>> >> @@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
>> >>  {
>> >>       bool needwake;
>> >>
>> >> +     if (!cpu_online(smp_processor_id()))
>> >> +             return;
>> >> +
>> >>       /*
>> >>        * If called from an extended quiescent state, invoke the RCU
>> >>        * core in order to force a re-evaluation of RCU's idleness.
>> >>        */
>> >> -     if (!rcu_is_watching() && cpu_online(smp_processor_id()))
>> >> +     if (!rcu_is_watching())
>> >>               invoke_rcu_core();
>> >>
>> >> -     /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
>> >> -     if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
>> >> +     /* If interrupts were disabled, don't invoke RCU core. */
>> >> +     if (irqs_disabled_flags(flags))
>> >>               return;
>> >>
>> >>       /*
>> >> --
>> >> 2.0.0.rc2
>> >>
>> >
>>
>>
>>
>> --
>> Pranith
>>
>



-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23 14:01         ` Pranith Kumar
@ 2014-07-23 14:14           ` Paul E. McKenney
  2014-07-23 15:07             ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 14:14 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 10:01:14AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 9:41 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 09:23:47AM -0400, Pranith Kumar wrote:
> >> On Wed, Jul 23, 2014 at 8:09 AM, Paul E. McKenney
> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> > On Wed, Jul 23, 2014 at 01:09:41AM -0400, Pranith Kumar wrote:
> >> >> rcu_process_callbacks() is the softirq handler for RCU which is raised from
> >> >> invoke_rcu_core() which is called from __call_rcu_core().
> >> >>
> >> >> Each of these three functions checks if the cpu is online. We can remove the
> >> >> redundant ones. This commit removes one of these redundant checks.
> >> >>
> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >> >
> >> > Sorry, but no.  There can be a long delay between raise_softirq() and
> >> > this function starting, particularly if ksoftirqd gets involved.  The
> >> > CPU could easily go offline in the meantime.
> >> >
> >>
> >> That makes sense. I guess one of the other two checks in
> >> __call_rcu_core() or invoke_rcu_core() can go then?
> >
> > What would have to be the case for this suggested change to be safe?
> > Does that condition in fact hold?
> >
> 
> The only scenario which is unsafe is when this thread is preempted and
> scheduled on a dying CPU. In that case we should turn off interrupts.
> 
> Actually, checking the code, I see that interrupts are turned off
> already before we call __call_rcu_core(). So I am not sure such a case
> will happen. On the other hand invoke_rcu_core() will be a one line
> function which might as well be in-lined to avoid the double cpu
> online check.
> 
> What am I missing?

I am not sure.  Perhaps the fact that __call_rcu_core() doesn't call
invoke_rcu_core() unless the condition holds (which means that you
cannot remove the check from __call_rcu_core()) or maybe the fact that
invoke_rcu_core() is called from many other places, which means that
you might not be able to remove the check from invoke_rcu_core().

							Thanx, Paul


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 14:12         ` Pranith Kumar
@ 2014-07-23 14:23           ` Paul E. McKenney
  2014-07-23 15:11             ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 14:23 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 10:12:54AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 9:50 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 08:59:06AM -0400, Pranith Kumar wrote:
> >> On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> > On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
> >> >> There are two checks for an online CPU in two if() conditions. This commit
> >> >> simplifies this by replacing it with only one check for the online CPU.
> >> >>
> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >> >
> >> > I admit that it is very early in the morning my time, but I don't see
> >> > this change as preserving the semantics in all cases.  Please recheck
> >> > your changes to the second check.
> >> >
> >> >                                                         Thanx, Paul
> >>
> >> I guess you must be thrown off by the complementary checks, the first
> >> check is for cpu_online() and second is for cpu_is_offline(). :)
> >>
> >> Previously, if a cpu is offline, the first condition is false and the
> >> second condition is true, so we return from the second if() condition.
> >> The same semantics are being preserved.
> >
> > Fair enough!
> >
> > Nevertheless, I am not seeing this as a simplification.
> 
> I am not sure what you mean here, do you mean that both the checks are
> actually required?

I mean that the current compound tests each mean something.  Pulling out
the offline test adds lines of code and obscures that meaning.   This means
that it is easier (for me, anyway) to see why the current code is correct
than it is to see why your suggested change is correct.

							Thanx, Paul

> >> --
> >> Pranith.
> >>
> >>
> >>
> >> >
> >> >> ---
> >> >>  kernel/rcu/tree.c | 9 ++++++---
> >> >>  1 file changed, 6 insertions(+), 3 deletions(-)
> >> >>
> >> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> >> index 5dcbf36..8d598a2 100644
> >> >> --- a/kernel/rcu/tree.c
> >> >> +++ b/kernel/rcu/tree.c
> >> >> @@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
> >> >>  {
> >> >>       bool needwake;
> >> >>
> >> >> +     if (!cpu_online(smp_processor_id()))
> >> >> +             return;
> >> >> +
> >> >>       /*
> >> >>        * If called from an extended quiescent state, invoke the RCU
> >> >>        * core in order to force a re-evaluation of RCU's idleness.
> >> >>        */
> >> >> -     if (!rcu_is_watching() && cpu_online(smp_processor_id()))
> >> >> +     if (!rcu_is_watching())
> >> >>               invoke_rcu_core();
> >> >>
> >> >> -     /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> >> >> -     if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
> >> >> +     /* If interrupts were disabled, don't invoke RCU core. */
> >> >> +     if (irqs_disabled_flags(flags))
> >> >>               return;
> >> >>
> >> >>       /*
> >> >> --
> >> >> 2.0.0.rc2
> >> >>
> >> >
> >>
> >>
> >>
> >> --
> >> Pranith
> >>
> >
> 
> 
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 00/16] rcu: Some minor fixes and cleanups
  2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
                   ` (15 preceding siblings ...)
  2014-07-23  5:09 ` [PATCH 16/16] rcu: kvm.sh: Fix error when you pass --cpus argument Pranith Kumar
@ 2014-07-23 14:45 ` Paul E. McKenney
  2014-08-27  1:10   ` Pranith Kumar
  16 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 14:45 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	linux-kernel

On Wed, Jul 23, 2014 at 01:09:37AM -0400, Pranith Kumar wrote:
> Hi Paul,
> 
> This is a series of minor fixes and cleanup patches which I found while studying
> the code. All my previous pending (but not rejected ;) patches are superseded by
> this series, expect the rcutorture snprintf changes. I am still waiting for you
> to decide on that one :)
> 
> These changes have been tested by the kvm rcutorture test setup. Some tests give
> me stall warnings, but otherwise have SUCCESS messages in the logs.

For patches 1, 3, 5, 8, 12, and 13, once you get a Reviewed-by from one
of the co-maintainers or designated reviewers, I will queue them.
The other patches I have responded to.

>                                                                     But those
> are occuring even without these changes with the tip paul/rcu/dev. May be it is
> because I am running them for a duration of 4 minutes each only?

When you say "stall warnings", do you mean RCU CPU stall warnings?
I don't see those.  They might be due to your having fewer CPUs than
some of the tests call for.  It might be worth trying to tell the test
to leave one or two of your CPUs alone.  One way to do this is to give
the script a maxcpus= boot argument.

If you instead mean the "BAD SEQ" warning out of the kvm.sh script,
this can happen due to KVM/qemu losing IPIs.  I have been holding off
making RCU tolerate IPI lossage on the grounds that this is a KVM/qemu
bug that needs fixing.

On run duration...  It depends:

o	If you are changing boot-time-only code, it is best to use very
	short test times (such as your four minutes).  If there is any
	non-deterministic aspect to the change, you should of course run
	the test several times.

o	For deterministic changes to code paths, running each test for
	30 minutes might suffice.

o	For changes subject to concurrency effects, the test duration
	depends on the probabilities.  Typical run length might be
	four hours per test.  I have reason to believe that there have
	been some bugs that occurred roughly once per hundred hours
	of test time.

	This means that in general, pure testing is insufficient for RCU.

Of course, for many bugs, the more CPUs you throw at it, the more likely
the bug.  It would be good to enhance rcutorture to make these sorts
of bugs more likely to occur on smaller systems.  Or for the formal
validation guys to raise their game so as to be able to deal with RCU.  ;-)

							Thanx, Paul

> --
> Pranith.
> 
> Pranith Kumar (16):
>   rcu: Use rcu_num_nodes instead of NUM_RCU_NODES
>   rcu: Check return value for cpumask allocation
>   rcu: Fix comment for gp_state field values
>   rcu: Remove redundant check for an online CPU
>   rcu: Add noreturn attribute to boost kthread
>   rcu: Clear gp_flags only when actually starting new gp
>   rcu: Save and restore irq flags in rcu_gp_cleanup()
>   rcu: Clean up rcu_spawn_one_boost_kthread()
>   rcu: Remove redundant check for online cpu
>   rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup
>   rcu: Check for spurious wakeup using return value
>   rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads()
>   rcu: Spawn nocb kthreads from rcu_prepare_kthreads()
>   rcu: Remove redundant checks for rcu_scheduler_fully_active
>   rcu: Check for a nocb cpu before trying to spawn nocb threads
>   rcu: kvm.sh: Fix error when you pass --cpus argument
> 
>  kernel/rcu/tree.c                             | 42 ++++++++++++++-------------
>  kernel/rcu/tree.h                             |  4 +--
>  kernel/rcu/tree_plugin.h                      | 40 +++++++++++++------------
>  tools/testing/selftests/rcutorture/bin/kvm.sh |  4 +--
>  4 files changed, 47 insertions(+), 43 deletions(-)
> 
> -- 
> 2.0.0.rc2
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23 14:14           ` Paul E. McKenney
@ 2014-07-23 15:07             ` Pranith Kumar
  2014-07-23 15:21               ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 15:07 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 10:14 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 10:01:14AM -0400, Pranith Kumar wrote:
>> On Wed, Jul 23, 2014 at 9:41 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > On Wed, Jul 23, 2014 at 09:23:47AM -0400, Pranith Kumar wrote:
>> >> On Wed, Jul 23, 2014 at 8:09 AM, Paul E. McKenney
>> >> <paulmck@linux.vnet.ibm.com> wrote:
>> >> > On Wed, Jul 23, 2014 at 01:09:41AM -0400, Pranith Kumar wrote:
>> >> >> rcu_process_callbacks() is the softirq handler for RCU which is raised from
>> >> >> invoke_rcu_core() which is called from __call_rcu_core().
>> >> >>
>> >> >> Each of these three functions checks if the cpu is online. We can remove the
>> >> >> redundant ones. This commit removes one of these redundant checks.
>> >> >>
>> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> >> >
>> >> > Sorry, but no.  There can be a long delay between raise_softirq() and
>> >> > this function starting, particularly if ksoftirqd gets involved.  The
>> >> > CPU could easily go offline in the meantime.
>> >> >
>> >>
>> >> That makes sense. I guess one of the other two checks in
>> >> __call_rcu_core() or invoke_rcu_core() can go then?
>> >
>> > What would have to be the case for this suggested change to be safe?
>> > Does that condition in fact hold?
>> >
>>
>> The only scenario which is unsafe is when this thread is preempted and
>> scheduled on a dying CPU. In that case we should turn off interrupts.
>>
>> Actually, checking the code, I see that interrupts are turned off
>> already before we call __call_rcu_core(). So I am not sure such a case
>> will happen. On the other hand invoke_rcu_core() will be a one line
>> function which might as well be in-lined to avoid the double cpu
>> online check.
>>
>> What am I missing?
>
> I am not sure.  Perhaps the fact that __call_rcu_core() doesn't call
> invoke_rcu_core() unless the condition holds (which means that you
> cannot remove the check from __call_rcu_core()) or maybe the fact that
> invoke_rcu_core() is called from many other places, which means that
> you might not be able to remove the check from invoke_rcu_core().
>

OK, since invoke_rcu_core() is being called from multiple places, we
cannot remove the check there.

But we can remove the check from __call_rcu_core(). Since we are going
to check again in invoke_rcu_core(), no? We will call
invoke_rcu_core() but we will return if the check is false.

The trade-off here is between a function call and an extra
smp_processor_id(). The only reason I am trying to do this is because
smp_processor_id() disables and enables interrupts which is costly and
we can avoid one such check.

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 14:23           ` Paul E. McKenney
@ 2014-07-23 15:11             ` Pranith Kumar
  2014-07-23 15:30               ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 15:11 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 10:23 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 10:12:54AM -0400, Pranith Kumar wrote:
>> On Wed, Jul 23, 2014 at 9:50 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > On Wed, Jul 23, 2014 at 08:59:06AM -0400, Pranith Kumar wrote:
>> >> On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
>> >> <paulmck@linux.vnet.ibm.com> wrote:
>> >> > On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
>> >> >> There are two checks for an online CPU in two if() conditions. This commit
>> >> >> simplifies this by replacing it with only one check for the online CPU.
>> >> >>
>> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> >> >
>> >> > I admit that it is very early in the morning my time, but I don't see
>> >> > this change as preserving the semantics in all cases.  Please recheck
>> >> > your changes to the second check.
>> >> >
>> >> >                                                         Thanx, Paul
>> >>
>> >> I guess you must be thrown off by the complementary checks, the first
>> >> check is for cpu_online() and second is for cpu_is_offline(). :)
>> >>
>> >> Previously, if a cpu is offline, the first condition is false and the
>> >> second condition is true, so we return from the second if() condition.
>> >> The same semantics are being preserved.
>> >
>> > Fair enough!
>> >
>> > Nevertheless, I am not seeing this as a simplification.
>>
>> I am not sure what you mean here, do you mean that both the checks are
>> actually required?
>
> I mean that the current compound tests each mean something.  Pulling out
> the offline test adds lines of code and obscures that meaning.   This means
> that it is easier (for me, anyway) to see why the current code is correct
> than it is to see why your suggested change is correct.
>

That is a valid point. I did not mean to reduce readability of the
code. Just trying to avoid the overhead of smp_processor_id().

Not sure if you would prefer this, but how about the following?

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f1ba773..3a26008 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2516,15 +2516,16 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
 {
        bool needwake;

+       bool awake = cpu_online(smp_processor_id());
        /*
         * If called from an extended quiescent state, invoke the RCU
         * core in order to force a re-evaluation of RCU's idleness.
         */
-       if (!rcu_is_watching() && cpu_online(smp_processor_id()))
+       if (!rcu_is_watching() && awake)
                invoke_rcu_core();

        /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
-       if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
+       if (irqs_disabled_flags(flags) || !awake)
                return;

        /*


>
>> >> --
>> >> Pranith.
>> >>
>> >>
>> >>
>> >> >
>> >> >> ---
>> >> >>  kernel/rcu/tree.c | 9 ++++++---
>> >> >>  1 file changed, 6 insertions(+), 3 deletions(-)
>> >> >>
>> >> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>> >> >> index 5dcbf36..8d598a2 100644
>> >> >> --- a/kernel/rcu/tree.c
>> >> >> +++ b/kernel/rcu/tree.c
>> >> >> @@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
>> >> >>  {
>> >> >>       bool needwake;
>> >> >>
>> >> >> +     if (!cpu_online(smp_processor_id()))
>> >> >> +             return;
>> >> >> +
>> >> >>       /*
>> >> >>        * If called from an extended quiescent state, invoke the RCU
>> >> >>        * core in order to force a re-evaluation of RCU's idleness.
>> >> >>        */
>> >> >> -     if (!rcu_is_watching() && cpu_online(smp_processor_id()))
>> >> >> +     if (!rcu_is_watching())
>> >> >>               invoke_rcu_core();
>> >> >>
>> >> >> -     /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
>> >> >> -     if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
>> >> >> +     /* If interrupts were disabled, don't invoke RCU core. */
>> >> >> +     if (irqs_disabled_flags(flags))
>> >> >>               return;
>> >> >>
>> >> >>       /*
>> >> >> --
>> >> >> 2.0.0.rc2
>> >> >>
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Pranith
>> >>
>> >
>>
>>
>>
>> --
>> Pranith
>>
>



-- 
Pranith

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH 04/16] rcu: Remove redundant check for an online CPU
  2014-07-23 15:07             ` Pranith Kumar
@ 2014-07-23 15:21               ` Pranith Kumar
  0 siblings, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 15:21 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 11:07 AM, Pranith Kumar <bobby.prani@gmail.com> wrote:
> On Wed, Jul 23, 2014 at 10:14 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
>> On Wed, Jul 23, 2014 at 10:01:14AM -0400, Pranith Kumar wrote:
>>> On Wed, Jul 23, 2014 at 9:41 AM, Paul E. McKenney
>>> <paulmck@linux.vnet.ibm.com> wrote:
>>> > On Wed, Jul 23, 2014 at 09:23:47AM -0400, Pranith Kumar wrote:
>>> >> On Wed, Jul 23, 2014 at 8:09 AM, Paul E. McKenney
>>> >> <paulmck@linux.vnet.ibm.com> wrote:
>>> >> > On Wed, Jul 23, 2014 at 01:09:41AM -0400, Pranith Kumar wrote:
>>> >> >> rcu_process_callbacks() is the softirq handler for RCU which is raised from
>>> >> >> invoke_rcu_core() which is called from __call_rcu_core().
>>> >> >>
>>> >> >> Each of these three functions checks if the cpu is online. We can remove the
>>> >> >> redundant ones. This commit removes one of these redundant checks.
>>> >> >>
>>> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>>> >> >
>>> >> > Sorry, but no.  There can be a long delay between raise_softirq() and
>>> >> > this function starting, particularly if ksoftirqd gets involved.  The
>>> >> > CPU could easily go offline in the meantime.
>>> >> >
>>> >>
>>> >> That makes sense. I guess one of the other two checks in
>>> >> __call_rcu_core() or invoke_rcu_core() can go then?
>>> >
>>> > What would have to be the case for this suggested change to be safe?
>>> > Does that condition in fact hold?
>>> >
>>>
>>> The only scenario which is unsafe is when this thread is preempted and
>>> scheduled on a dying CPU. In that case we should turn off interrupts.
>>>
>>> Actually, checking the code, I see that interrupts are turned off
>>> already before we call __call_rcu_core(). So I am not sure such a case
>>> will happen. On the other hand invoke_rcu_core() will be a one line
>>> function which might as well be in-lined to avoid the double cpu
>>> online check.
>>>
>>> What am I missing?
>>
>> I am not sure.  Perhaps the fact that __call_rcu_core() doesn't call
>> invoke_rcu_core() unless the condition holds (which means that you
>> cannot remove the check from __call_rcu_core()) or maybe the fact that
>> invoke_rcu_core() is called from many other places, which means that
>> you might not be able to remove the check from invoke_rcu_core().
>>
>
> OK, since invoke_rcu_core() is being called from multiple places, we
> cannot remove the check there.
>
> But we can remove the check from __call_rcu_core(). Since we are going
> to check again in invoke_rcu_core(), no? We will call
> invoke_rcu_core() but we will return if the check is false.
>
> The trade-off here is between a function call and an extra
> smp_processor_id(). The only reason I am trying to do this is because
> smp_processor_id() disables and enables interrupts which is costly and
> we can avoid one such check.
>

Ah, I just checked and the above is not true. We don't disable
interrupts, we do a this_cpu_read().
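
(Roughly what that boils down to on x86 with CONFIG_DEBUG_PREEMPT=n --
paraphrased, not quoted from the tree:)

/* arch/x86: a single %gs-relative load of the per-CPU cpu_number */
#define raw_smp_processor_id()	(this_cpu_read(cpu_number))
/* generic, non-debug case: */
#define smp_processor_id()	raw_smp_processor_id()
/* so no irq disable/enable is involved; the disassembly later in this
 * thread shows exactly that one load. */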

How many newbie cards do I have left :)

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 15:11             ` Pranith Kumar
@ 2014-07-23 15:30               ` Paul E. McKenney
  2014-07-23 15:44                 ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 15:30 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 11:11:45AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 10:23 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 10:12:54AM -0400, Pranith Kumar wrote:
> >> On Wed, Jul 23, 2014 at 9:50 AM, Paul E. McKenney
> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> > On Wed, Jul 23, 2014 at 08:59:06AM -0400, Pranith Kumar wrote:
> >> >> On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
> >> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> >> > On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
> >> >> >> There are two checks for an online CPU in two if() conditions. This commit
> >> >> >> simplifies this by replacing it with only one check for the online CPU.
> >> >> >>
> >> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >> >> >
> >> >> > I admit that it is very early in the morning my time, but I don't see
> >> >> > this change as preserving the semantics in all cases.  Please recheck
> >> >> > your changes to the second check.
> >> >> >
> >> >> >                                                         Thanx, Paul
> >> >>
> >> >> I guess you must be thrown off by the complementary checks, the first
> >> >> check is for cpu_online() and second is for cpu_is_offline(). :)
> >> >>
> >> >> Previously, if a cpu is offline, the first condition is false and the
> >> >> second condition is true, so we return from the second if() condition.
> >> >> The same semantics are being preserved.
> >> >
> >> > Fair enough!
> >> >
> >> > Nevertheless, I am not seeing this as a simplification.
> >>
> >> I am not sure what you mean here, do you mean that both the checks are
> >> actually required?
> >
> > I mean that the current compound tests each mean something.  Pulling out
> > the offline test adds lines of code and obscures that meaning.   This means
> > that it is easier (for me, anyway) to see why the current code is correct
> > than it is to see why your suggested change is correct.
> >
> 
> That is a valid point. I did not mean to reduce readability of the
> code. Just trying to avoid the overhead of smp_processor_id().
> 
> Not sure if you would prefer this, but how about the following?

If you change the "awake" to something like "am_online", I could get
behind this one.

							Thanx, Paul

> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index f1ba773..3a26008 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2516,15 +2516,16 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
>  {
>         bool needwake;
> 
> +       bool awake = cpu_online(smp_processor_id());
>         /*
>          * If called from an extended quiescent state, invoke the RCU
>          * core in order to force a re-evaluation of RCU's idleness.
>          */
> -       if (!rcu_is_watching() && cpu_online(smp_processor_id()))
> +       if (!rcu_is_watching() && awake)
>                 invoke_rcu_core();
> 
>         /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> -       if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
> +       if (irqs_disabled_flags(flags) || !awake)
>                 return;
> 
>         /*
> 
> 
> >
> >> >> --
> >> >> Pranith.
> >> >>
> >> >>
> >> >>
> >> >> >
> >> >> >> ---
> >> >> >>  kernel/rcu/tree.c | 9 ++++++---
> >> >> >>  1 file changed, 6 insertions(+), 3 deletions(-)
> >> >> >>
> >> >> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> >> >> index 5dcbf36..8d598a2 100644
> >> >> >> --- a/kernel/rcu/tree.c
> >> >> >> +++ b/kernel/rcu/tree.c
> >> >> >> @@ -2602,15 +2602,18 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
> >> >> >>  {
> >> >> >>       bool needwake;
> >> >> >>
> >> >> >> +     if (!cpu_online(smp_processor_id()))
> >> >> >> +             return;
> >> >> >> +
> >> >> >>       /*
> >> >> >>        * If called from an extended quiescent state, invoke the RCU
> >> >> >>        * core in order to force a re-evaluation of RCU's idleness.
> >> >> >>        */
> >> >> >> -     if (!rcu_is_watching() && cpu_online(smp_processor_id()))
> >> >> >> +     if (!rcu_is_watching())
> >> >> >>               invoke_rcu_core();
> >> >> >>
> >> >> >> -     /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> >> >> >> -     if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
> >> >> >> +     /* If interrupts were disabled, don't invoke RCU core. */
> >> >> >> +     if (irqs_disabled_flags(flags))
> >> >> >>               return;
> >> >> >>
> >> >> >>       /*
> >> >> >> --
> >> >> >> 2.0.0.rc2
> >> >> >>
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Pranith
> >> >>
> >> >
> >>
> >>
> >>
> >> --
> >> Pranith
> >>
> >
> 
> 
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 15:30               ` Paul E. McKenney
@ 2014-07-23 15:44                 ` Pranith Kumar
  2014-07-23 19:15                   ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 15:44 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 11:30 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 11:11:45AM -0400, Pranith Kumar wrote:
>> On Wed, Jul 23, 2014 at 10:23 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > On Wed, Jul 23, 2014 at 10:12:54AM -0400, Pranith Kumar wrote:
>> >> On Wed, Jul 23, 2014 at 9:50 AM, Paul E. McKenney
>> >> <paulmck@linux.vnet.ibm.com> wrote:
>> >> > On Wed, Jul 23, 2014 at 08:59:06AM -0400, Pranith Kumar wrote:
>> >> >> On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
>> >> >> <paulmck@linux.vnet.ibm.com> wrote:
>> >> >> > On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
>> >> >> >> There are two checks for an online CPU in two if() conditions. This commit
>> >> >> >> simplifies this by replacing it with only one check for the online CPU.
>> >> >> >>
>> >> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> >> >> >
>> >> >> > I admit that it is very early in the morning my time, but I don't see
>> >> >> > this change as preserving the semantics in all cases.  Please recheck
>> >> >> > your changes to the second check.
>> >> >> >
>> >> >> >                                                         Thanx, Paul
>> >> >>
>> >> >> I guess you must be thrown off by the complementary checks, the first
>> >> >> check is for cpu_online() and second is for cpu_is_offline(). :)
>> >> >>
>> >> >> Previously, if a cpu is offline, the first condition is false and the
>> >> >> second condition is true, so we return from the second if() condition.
>> >> >> The same semantics are being preserved.
>> >> >
>> >> > Fair enough!
>> >> >
>> >> > Nevertheless, I am not seeing this as a simplification.
>> >>
>> >> I am not sure what you mean here, do you mean that both the checks are
>> >> actually required?
>> >
>> > I mean that the current compound tests each mean something.  Pulling out
>> > the offline test adds lines of code and obscures that meaning.   This means
>> > that it is easier (for me, anyway) to see why the current code is correct
>> > than it is to see why your suggested change is correct.
>> >
>>
>> That is a valid point. I did not mean to reduce readability of the
>> code. Just trying to avoid the overhead of smp_processor_id().
>>
>> Not sure if you would prefer this, but how about the following?
>
> If you change the "awake" to something like "am_online", I could get
> behind this one.
>

OK! I will submit that in the next series (with the zalloc check).

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/16] rcu: Check return value for cpumask allocation
  2014-07-23 12:06   ` Paul E. McKenney
  2014-07-23 12:49     ` Pranith Kumar
@ 2014-07-23 17:14     ` Pranith Kumar
  2014-07-23 18:01       ` Paul E. McKenney
  1 sibling, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 17:14 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 8:06 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
>> --- a/kernel/rcu/tree_plugin.h
>> +++ b/kernel/rcu/tree_plugin.h
>> @@ -88,7 +88,10 @@ static void __init rcu_bootup_announce_oddness(void)
>>  #ifdef CONFIG_RCU_NOCB_CPU
>>  #ifndef CONFIG_RCU_NOCB_CPU_NONE
>>       if (!have_rcu_nocb_mask) {
>> -             zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
>> +             if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
>> +                     pr_info("rcu_nocb_mask allocation failed\n");
>> +                     return;
>
> Good catch, but this "return" is an accident waiting to happen.  The
> accident will happen as soon as another RCU option appears, and the
> person adding it quite naturally adds it at the end of this function.
> The cleanest approach is to make an rcu_bootup_announce_oddness_nocb()
> as one commit that does -only- code motion, and then make this change
> as another commit.
>

Hi Paul,

Looking at this, I think the following is simpler without having to
introduce a new function. What do you think?

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index c31eb28..f2ed342 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -88,8 +88,10 @@ static void __init rcu_bootup_announce_oddness(void)
 #ifdef CONFIG_RCU_NOCB_CPU
 #ifndef CONFIG_RCU_NOCB_CPU_NONE
        if (!have_rcu_nocb_mask) {
-               zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
-               have_rcu_nocb_mask = true;
+               if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL))
+                       pr_info("rcu_nocb_mask allocation failed");
+               else
+                       have_rcu_nocb_mask = true;
        }
 #ifdef CONFIG_RCU_NOCB_CPU_ZERO
        pr_info("\tOffload RCU callbacks from CPU 0\n");


-- 
Pranith

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH 02/16] rcu: Check return value for cpumask allocation
  2014-07-23 17:14     ` Pranith Kumar
@ 2014-07-23 18:01       ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 18:01 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 01:14:19PM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 8:06 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> >> --- a/kernel/rcu/tree_plugin.h
> >> +++ b/kernel/rcu/tree_plugin.h
> >> @@ -88,7 +88,10 @@ static void __init rcu_bootup_announce_oddness(void)
> >>  #ifdef CONFIG_RCU_NOCB_CPU
> >>  #ifndef CONFIG_RCU_NOCB_CPU_NONE
> >>       if (!have_rcu_nocb_mask) {
> >> -             zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
> >> +             if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL)) {
> >> +                     pr_info("rcu_nocb_mask allocation failed\n");
> >> +                     return;
> >
> > Good catch, but this "return" is an accident waiting to happen.  The
> > accident will happen as soon as another RCU option appears, and the
> > person adding it quite naturally adds it at the end of this function.
> > The cleanest approach is to make an rcu_bootup_announce_oddness_nocb()
> > as one commit that does -only- code motion, and then make this change
> > as another commit.
> >
> 
> Hi Paul,
> 
> Looking at this, I think the following is simpler without having to
> introduce a new function. What do you think?
> 
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index c31eb28..f2ed342 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -88,8 +88,10 @@ static void __init rcu_bootup_announce_oddness(void)
>  #ifdef CONFIG_RCU_NOCB_CPU
>  #ifndef CONFIG_RCU_NOCB_CPU_NONE
>         if (!have_rcu_nocb_mask) {
> -               zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL);
> -               have_rcu_nocb_mask = true;
> +               if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL))
> +                       pr_info("rcu_nocb_mask allocation failed");
> +               else
> +                       have_rcu_nocb_mask = true;
>         }
>  #ifdef CONFIG_RCU_NOCB_CPU_ZERO
>         pr_info("\tOffload RCU callbacks from CPU 0\n");

I suspect that if zalloc_cpumask_var() fails, bad things will happen
if we get here, at least in the CONFIG_CPUMASK_OFFSTACK=y case.
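
(Concretely, the failure mode being pointed at -- a sketch assuming
CONFIG_CPUMASK_OFFSTACK=y, where cpumask_var_t is a pointer type; the
unguarded use shown at the end is only an example:)

cpumask_var_t rcu_nocb_mask;	/* stays NULL if the allocation fails */

if (!zalloc_cpumask_var(&rcu_nocb_mask, GFP_KERNEL))
	pr_info("rcu_nocb_mask allocation failed");
/* ...any later use not gated on have_rcu_nocb_mask, e.g. something like
 * the CONFIG_RCU_NOCB_CPU_ZERO handling: */
cpumask_set_cpu(0, rcu_nocb_mask);	/* NULL dereference on that failure path */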

							Thanx, Paul

> 
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 15:44                 ` Pranith Kumar
@ 2014-07-23 19:15                   ` Paul E. McKenney
  2014-07-23 20:01                     ` Pranith Kumar
  2014-07-23 20:16                     ` Pranith Kumar
  0 siblings, 2 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 19:15 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 11:44:43AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 11:30 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 11:11:45AM -0400, Pranith Kumar wrote:
> >> On Wed, Jul 23, 2014 at 10:23 AM, Paul E. McKenney
> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> > On Wed, Jul 23, 2014 at 10:12:54AM -0400, Pranith Kumar wrote:
> >> >> On Wed, Jul 23, 2014 at 9:50 AM, Paul E. McKenney
> >> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> >> > On Wed, Jul 23, 2014 at 08:59:06AM -0400, Pranith Kumar wrote:
> >> >> >> On Wed, Jul 23, 2014 at 8:21 AM, Paul E. McKenney
> >> >> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> >> >> > On Wed, Jul 23, 2014 at 01:09:46AM -0400, Pranith Kumar wrote:
> >> >> >> >> There are two checks for an online CPU in two if() conditions. This commit
> >> >> >> >> simplifies this by replacing it with only one check for the online CPU.
> >> >> >> >>
> >> >> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >> >> >> >
> >> >> >> > I admit that it is very early in the morning my time, but I don't see
> >> >> >> > this change as preserving the semantics in all cases.  Please recheck
> >> >> >> > your changes to the second check.
> >> >> >> >
> >> >> >> >                                                         Thanx, Paul
> >> >> >>
> >> >> >> I guess you must be thrown off by the complementary checks, the first
> >> >> >> check is for cpu_online() and second is for cpu_is_offline(). :)
> >> >> >>
> >> >> >> Previously, if a cpu is offline, the first condition is false and the
> >> >> >> second condition is true, so we return from the second if() condition.
> >> >> >> The same semantics are being preserved.
> >> >> >
> >> >> > Fair enough!
> >> >> >
> >> >> > Nevertheless, I am not seeing this as a simplification.
> >> >>
> >> >> I am not sure what you mean here, do you mean that both the checks are
> >> >> actually required?
> >> >
> >> > I mean that the current compound tests each mean something.  Pulling out
> >> > the offline test adds lines of code and obscures that meaning.   This means
> >> > that it is easier (for me, anyway) to see why the current code is correct
> >> > than it is to see why your suggested change is correct.
> >> >
> >>
> >> That is a valid point. I did not mean to reduce readability of the
> >> code. Just trying to avoid the overhead of smp_processor_id().
> >>
> >> Not sure if you would prefer this, but how about the following?
> >
> > If you change the "awake" to something like "am_online", I could get
> > behind this one.
> 
> OK! I will submit that in the next series(with the zalloc check).

You caught me at a weak moment...  This change just adds an extra
line of code and doesn't really help anything.

So please leave this one out.

							Thanx, Paul


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 19:15                   ` Paul E. McKenney
@ 2014-07-23 20:01                     ` Pranith Kumar
  2014-07-23 20:16                     ` Pranith Kumar
  1 sibling, 0 replies; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 20:01 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 3:15 PM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
>> > If you change the "awake" to something like "am_online", I could get
>> > behind this one.
>>
>> OK! I will submit that in the next series(with the zalloc check).
>
> You caught me at a weak moment...  This change just adds an extra
> line of code and doesn't really help anything.
>
> So please leave this one out.
>

It adds an extra line of code and generates better assembly code. Last
try to convince you before I give up :-)

Before:

    if (!rcu_is_watching() && cpu_online(smp_processor_id()))
    26d3:    83 e2 01                 and    $0x1,%edx
    26d6:    75 1f                    jne    26f7 <__call_rcu+0x1c7>
    26d8:    65 8b 14 25 00 00 00     mov    %gs:0x0,%edx
    26df:    00
            26dc: R_X86_64_32S    cpu_number
    26e0:    48 8b 0d 00 00 00 00     mov    0x0(%rip),%rcx        #
26e7 <__call_rcu+0x1b7>
            26e3: R_X86_64_PC32    cpu_online_mask-0x4
    26e7:    89 d2                    mov    %edx,%edx
    26e9:    48 0f a3 11              bt     %rdx,(%rcx)
    26ed:    19 d2                    sbb    %edx,%edx
    26ef:    85 d2                    test   %edx,%edx
    26f1:    0f 85 29 02 00 00        jne    2920 <__call_rcu+0x3f0>
        invoke_rcu_core();

    /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
    if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
    26f7:    48 f7 45 d0 00 02 00     testq  $0x200,-0x30(%rbp)
    26fe:    00
    26ff:    0f 84 e6 fe ff ff        je     25eb <__call_rcu+0xbb>
    2705:    65 8b 14 25 00 00 00     mov    %gs:0x0,%edx
    270c:    00
            2709: R_X86_64_32S    cpu_number
    270d:    48 8b 0d 00 00 00 00     mov    0x0(%rip),%rcx        #
2714 <__call_rcu+0x1e4>
            2710: R_X86_64_PC32    cpu_online_mask-0x4
    2714:    89 d2                    mov    %edx,%edx
    2716:    48 0f a3 11              bt     %rdx,(%rcx)
    271a:    19 d2                    sbb    %edx,%edx
    271c:    85 d2                    test   %edx,%edx
    271e:    0f 84 c7 fe ff ff        je     25eb <__call_rcu+0xbb>

After:

    bool cpu_up = cpu_online(smp_processor_id());
    26c1:    65 8b 14 25 00 00 00     mov    %gs:0x0,%edx
    26c8:    00
            26c5: R_X86_64_32S    cpu_number
    26c9:    48 8b 0d 00 00 00 00     mov    0x0(%rip),%rcx        #
26d0 <__call_rcu+0x1a0>
            26cc: R_X86_64_PC32    cpu_online_mask-0x4
    26d0:    89 d2                    mov    %edx,%edx
    26d2:    48 0f a3 11              bt     %rdx,(%rcx)
    26d6:    19 d2                    sbb    %edx,%edx
    26d8:    85 d2                    test   %edx,%edx
    26da:    41 0f 95 c4              setne  %r12b

    if (!rcu_is_watching() && cpu_up)
    26f0:    83 e2 01                 and    $0x1,%edx
    26f3:    75 09                    jne    26fe <__call_rcu+0x1ce>
    26f5:    45 84 e4                 test   %r12b,%r12b
    26f8:    0f 85 12 02 00 00        jne    2910 <__call_rcu+0x3e0>
        invoke_rcu_core();

    /* If interrupts were disabled or CPU offline, don't invoke RCU core. */
    if (irqs_disabled_flags(flags) || !cpu_up)
    26fe:    48 f7 45 d0 00 02 00     testq  $0x200,-0x30(%rbp)
    2705:    00
    2706:    0f 84 df fe ff ff        je     25eb <__call_rcu+0xbb>
    270c:    45 84 e4                 test   %r12b,%r12b
    270f:    0f 84 d6 fe ff ff        je     25eb <__call_rcu+0xbb>


-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 19:15                   ` Paul E. McKenney
  2014-07-23 20:01                     ` Pranith Kumar
@ 2014-07-23 20:16                     ` Pranith Kumar
  2014-07-23 20:23                       ` Paul E. McKenney
  1 sibling, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-23 20:16 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 3:15 PM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
>> > If you change the "awake" to something like "am_online", I could get
>> > behind this one.
>>
>> OK! I will submit that in the next series(with the zalloc check).
>
> You caught me at a weak moment...  This change just adds an extra
> line of code and doesn't really help anything.
>
> So please leave this one out.
>

<resending as the assembly was garbled>

It adds an extra line of code and generates better assembly code. Last
try to convince you before I give up :-)

Size:
        text    data     bss     dec     hex filename
before  30664    7844      32   38540    968c kernel/rcu/tree.o
after   30648    7844      32   38524    967c kernel/rcu/tree.o

Assembly:

Before:

	if (!rcu_is_watching() && cpu_online(smp_processor_id()))
    26d3:	83 e2 01             	and    $0x1,%edx
    26d6:	75 1f                	jne    26f7 <__call_rcu+0x1c7>
    26d8:	65 8b 14 25 00 00 00 	mov    %gs:0x0,%edx
    26df:	00 
			26dc: R_X86_64_32S	cpu_number
    26e0:	48 8b 0d 00 00 00 00 	mov    0x0(%rip),%rcx        # 26e7 <__call_rcu+0x1b7>
			26e3: R_X86_64_PC32	cpu_online_mask-0x4
    26e7:	89 d2                	mov    %edx,%edx
    26e9:	48 0f a3 11          	bt     %rdx,(%rcx)
    26ed:	19 d2                	sbb    %edx,%edx
    26ef:	85 d2                	test   %edx,%edx
    26f1:	0f 85 29 02 00 00    	jne    2920 <__call_rcu+0x3f0>
		invoke_rcu_core();

	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
	if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
    26f7:	48 f7 45 d0 00 02 00 	testq  $0x200,-0x30(%rbp)
    26fe:	00 
    26ff:	0f 84 e6 fe ff ff    	je     25eb <__call_rcu+0xbb>
    2705:	65 8b 14 25 00 00 00 	mov    %gs:0x0,%edx
    270c:	00 
			2709: R_X86_64_32S	cpu_number
    270d:	48 8b 0d 00 00 00 00 	mov    0x0(%rip),%rcx        # 2714 <__call_rcu+0x1e4>
			2710: R_X86_64_PC32	cpu_online_mask-0x4
    2714:	89 d2                	mov    %edx,%edx
    2716:	48 0f a3 11          	bt     %rdx,(%rcx)
    271a:	19 d2                	sbb    %edx,%edx
    271c:	85 d2                	test   %edx,%edx
    271e:	0f 84 c7 fe ff ff    	je     25eb <__call_rcu+0xbb>

After:

	bool cpu_up = cpu_online(smp_processor_id());
    26c1:	65 8b 14 25 00 00 00 	mov    %gs:0x0,%edx
    26c8:	00 
			26c5: R_X86_64_32S	cpu_number
    26c9:	48 8b 0d 00 00 00 00 	mov    0x0(%rip),%rcx        # 26d0 <__call_rcu+0x1a0>
			26cc: R_X86_64_PC32	cpu_online_mask-0x4
    26d0:	89 d2                	mov    %edx,%edx
    26d2:	48 0f a3 11          	bt     %rdx,(%rcx)
    26d6:	19 d2                	sbb    %edx,%edx
    26d8:	85 d2                	test   %edx,%edx
    26da:	41 0f 95 c4          	setne  %r12b

	if (!rcu_is_watching() && cpu_up)
    26f0:	83 e2 01             	and    $0x1,%edx
    26f3:	75 09                	jne    26fe <__call_rcu+0x1ce>
    26f5:	45 84 e4             	test   %r12b,%r12b
    26f8:	0f 85 12 02 00 00    	jne    2910 <__call_rcu+0x3e0>
		invoke_rcu_core();

	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
	if (irqs_disabled_flags(flags) || !cpu_up)
    26fe:	48 f7 45 d0 00 02 00 	testq  $0x200,-0x30(%rbp)
    2705:	00 
    2706:	0f 84 df fe ff ff    	je     25eb <__call_rcu+0xbb>
    270c:	45 84 e4             	test   %r12b,%r12b
    270f:	0f 84 d6 fe ff ff    	je     25eb <__call_rcu+0xbb>

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 09/16] rcu: Remove redundant check for online cpu
  2014-07-23 20:16                     ` Pranith Kumar
@ 2014-07-23 20:23                       ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-23 20:23 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 04:16:11PM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 3:15 PM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> >> > If you change the "awake" to something like "am_online", I could get
> >> > behind this one.
> >>
> >> OK! I will submit that in the next series(with the zalloc check).
> >
> > You caught me at a weak moment...  This change just adds an extra
> > line of code and doesn't really help anything.
> >
> > So please leave this one out.
> >
> 
> <resending as the assembly was garbled>
> 
> It adds an extra line of code and generates better assembly code. Last
> try to convince you before I give up :-)

If you got this kind of savings in __rcu_read_lock() or
__rcu_read_unlock(), I might be interested.  Hard to get excited about
__call_rcu_core(), especially given that a smarter compiler might be
able to make this transformation on its own.

							Thanx, Paul

> Size:
>         text    data     bss     dec     hex filename
> before  30664    7844      32   38540    968c kernel/rcu/tree.o
> after   30648    7844      32   38524    967c kernel/rcu/tree.o
> 
> Assembly:
> 
> Before:
> 
> 	if (!rcu_is_watching() && cpu_online(smp_processor_id()))
>     26d3:	83 e2 01             	and    $0x1,%edx
>     26d6:	75 1f                	jne    26f7 <__call_rcu+0x1c7>
>     26d8:	65 8b 14 25 00 00 00 	mov    %gs:0x0,%edx
>     26df:	00 
> 			26dc: R_X86_64_32S	cpu_number
>     26e0:	48 8b 0d 00 00 00 00 	mov    0x0(%rip),%rcx        # 26e7 <__call_rcu+0x1b7>
> 			26e3: R_X86_64_PC32	cpu_online_mask-0x4
>     26e7:	89 d2                	mov    %edx,%edx
>     26e9:	48 0f a3 11          	bt     %rdx,(%rcx)
>     26ed:	19 d2                	sbb    %edx,%edx
>     26ef:	85 d2                	test   %edx,%edx
>     26f1:	0f 85 29 02 00 00    	jne    2920 <__call_rcu+0x3f0>
> 		invoke_rcu_core();
> 
> 	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> 	if (irqs_disabled_flags(flags) || cpu_is_offline(smp_processor_id()))
>     26f7:	48 f7 45 d0 00 02 00 	testq  $0x200,-0x30(%rbp)
>     26fe:	00 
>     26ff:	0f 84 e6 fe ff ff    	je     25eb <__call_rcu+0xbb>
>     2705:	65 8b 14 25 00 00 00 	mov    %gs:0x0,%edx
>     270c:	00 
> 			2709: R_X86_64_32S	cpu_number
>     270d:	48 8b 0d 00 00 00 00 	mov    0x0(%rip),%rcx        # 2714 <__call_rcu+0x1e4>
> 			2710: R_X86_64_PC32	cpu_online_mask-0x4
>     2714:	89 d2                	mov    %edx,%edx
>     2716:	48 0f a3 11          	bt     %rdx,(%rcx)
>     271a:	19 d2                	sbb    %edx,%edx
>     271c:	85 d2                	test   %edx,%edx
>     271e:	0f 84 c7 fe ff ff    	je     25eb <__call_rcu+0xbb>
> 
> After:
> 
> 	bool cpu_up = cpu_online(smp_processor_id());
>     26c1:	65 8b 14 25 00 00 00 	mov    %gs:0x0,%edx
>     26c8:	00 
> 			26c5: R_X86_64_32S	cpu_number
>     26c9:	48 8b 0d 00 00 00 00 	mov    0x0(%rip),%rcx        # 26d0 <__call_rcu+0x1a0>
> 			26cc: R_X86_64_PC32	cpu_online_mask-0x4
>     26d0:	89 d2                	mov    %edx,%edx
>     26d2:	48 0f a3 11          	bt     %rdx,(%rcx)
>     26d6:	19 d2                	sbb    %edx,%edx
>     26d8:	85 d2                	test   %edx,%edx
>     26da:	41 0f 95 c4          	setne  %r12b
> 
> 	if (!rcu_is_watching() && cpu_up)
>     26f0:	83 e2 01             	and    $0x1,%edx
>     26f3:	75 09                	jne    26fe <__call_rcu+0x1ce>
>     26f5:	45 84 e4             	test   %r12b,%r12b
>     26f8:	0f 85 12 02 00 00    	jne    2910 <__call_rcu+0x3e0>
> 		invoke_rcu_core();
> 
> 	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
> 	if (irqs_disabled_flags(flags) || !cpu_up)
>     26fe:	48 f7 45 d0 00 02 00 	testq  $0x200,-0x30(%rbp)
>     2705:	00 
>     2706:	0f 84 df fe ff ff    	je     25eb <__call_rcu+0xbb>
>     270c:	45 84 e4             	test   %r12b,%r12b
>     270f:	0f 84 d6 fe ff ff    	je     25eb <__call_rcu+0xbb>
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-23 12:26   ` Paul E. McKenney
@ 2014-07-24  2:36     ` Pranith Kumar
  2014-07-24  3:43       ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-24  2:36 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 8:26 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 01:09:48AM -0400, Pranith Kumar wrote:
>> When the gp_kthread wakes up from the wait event, it returns 0 if the wake up is
>> due to the condition having been met. This commit checks this return value
>> for a spurious wake up before calling rcu_gp_init().
>>
>> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>
> How does this added check help?  I don't see that it does.  If the flag
> is set, we want to wake up.  If we get a spurious wakeup, but then the
> flag gets set before we actually wake up, we still want to wake up.
>

So I took a look at the docs again, and using the return value is the
recommended way to check for spurious wakeups.

The condition in wait_event_interruptible() is checked when the task
is woken up (either due to stray signals or explicitly) and it returns
true if condition evaluates to true.

In the current scenario, if we get a spurious wakeup, we take the
costly path of checking this condition again (with a barrier and lock)
before going back to wait.

The scenario of getting an actual wakeup after getting a spurious
wakeup exists even today, this is the window after detecting a
spurious wakeup and before going back to wait. I am not sure if using
the return value enlarges that window as we are going back to sleep
immediately.

Thoughts?

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-24  2:36     ` Pranith Kumar
@ 2014-07-24  3:43       ` Paul E. McKenney
  2014-07-24  4:03         ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-24  3:43 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 10:36:19PM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 8:26 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 01:09:48AM -0400, Pranith Kumar wrote:
> >> When the gp_kthread wakes up from the wait event, it returns 0 if the wake up is
> >> due to the condition having been met. This commit checks this return value
> >> for a spurious wake up before calling rcu_gp_init().
> >>
> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >
> > How does this added check help?  I don't see that it does.  If the flag
> > is set, we want to wake up.  If we get a spurious wakeup, but then the
> > flag gets set before we actually wake up, we still want to wake up.
> 
> So I took a look at the docs again, and using the return value is the
> recommended way to check for spurious wakeups.
> 
> The condition in wait_event_interruptible() is checked when the task
> is woken up (either due to stray signals or explicitly) and it returns
> true if condition evaluates to true.
> 
> In the current scenario, if we get a spurious wakeup, we take the
> costly path of checking this condition again (with a barrier and lock)
> before going back to wait.
> 
> The scenario of getting an actual wakeup after getting a spurious
> wakeup exists even today, this is the window after detecting a
> spurious wakeup and before going back to wait. I am not sure if using
> the return value enlarges that window as we are going back to sleep
> immediately.
> 
> Thoughts?

If the flag is set, why should we care whether or not the wakeup was
spurious?  If the flag is not set, why should we care whether or not
wait_event_interruptible() thought that the wakeup was not spurious?

							Thanx, Paul


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-24  3:43       ` Paul E. McKenney
@ 2014-07-24  4:03         ` Pranith Kumar
  2014-07-24 18:12           ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-24  4:03 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Wed, Jul 23, 2014 at 11:43 PM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 10:36:19PM -0400, Pranith Kumar wrote:
>> On Wed, Jul 23, 2014 at 8:26 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > On Wed, Jul 23, 2014 at 01:09:48AM -0400, Pranith Kumar wrote:
>> >> When the gp_kthread wakes up from the wait event, it returns 0 if the wake up is
>> >> due to the condition having been met. This commit checks this return value
>> >> for a spurious wake up before calling rcu_gp_init().
>> >>
>> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> >
>> > How does this added check help?  I don't see that it does.  If the flag
>> > is set, we want to wake up.  If we get a spurious wakeup, but then the
>> > flag gets set before we actually wake up, we still want to wake up.
>>
>> So I took a look at the docs again, and using the return value is the
>> recommended way to check for spurious wakeups.
>>
>> The condition in wait_event_interruptible() is checked when the task
>> is woken up (either due to stray signals or explicitly) and it returns
>> true if condition evaluates to true.

this should be returns '0' if the condition evaluates to true.

>>
>> In the current scenario, if we get a spurious wakeup, we take the
>> costly path of checking this condition again (with a barrier and lock)
>> before going back to wait.
>>
>> The scenario of getting an actual wakeup after getting a spurious
>> wakeup exists even today, this is the window after detecting a
>> spurious wakeup and before going back to wait. I am not sure if using
>> the return value enlarges that window as we are going back to sleep
>> immediately.
>>
>> Thoughts?
>
> If the flag is set, why should we care whether or not the wakeup was
> spurious?  If the flag is not set, why should we care whether or not
> wait_event_interruptible() thought that the wakeup was not spurious?
>

A correction about the return value above: return will be 0 if the
condition is true, in this case if the flag is set.

If the flag is set, ret will be 0 and we will go ahead with
rcu_gp_init(). (no change wrt current behavior)

If the flag is not set, currently we go ahead and call rcu_gp_init()
from where we check if the flag is set (after a lock+barrier) and
return.

If we care about what wait_event_interruptible() returns, we can go
back and wait for an actual wakeup much earlier without the additional
overhead of calling rcu_gp_init().

-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-24  4:03         ` Pranith Kumar
@ 2014-07-24 18:12           ` Paul E. McKenney
  2014-07-24 19:59             ` Pranith Kumar
  0 siblings, 1 reply; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-24 18:12 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...

On Thu, Jul 24, 2014 at 12:03:34AM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 11:43 PM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 10:36:19PM -0400, Pranith Kumar wrote:
> >> On Wed, Jul 23, 2014 at 8:26 AM, Paul E. McKenney
> >> <paulmck@linux.vnet.ibm.com> wrote:
> >> > On Wed, Jul 23, 2014 at 01:09:48AM -0400, Pranith Kumar wrote:
> >> >> When the gp_kthread wakes up from the wait event, it returns 0 if the wake up is
> >> >> due to the condition having been met. This commit checks this return value
> >> >> for a spurious wake up before calling rcu_gp_init().
> >> >>
> >> >> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
> >> >
> >> > How does this added check help?  I don't see that it does.  If the flag
> >> > is set, we want to wake up.  If we get a spurious wakeup, but then the
> >> > flag gets set before we actually wake up, we still want to wake up.
> >>
> >> So I took a look at the docs again, and using the return value is the
> >> recommended way to check for spurious wakeups.
> >>
> >> The condition in wait_event_interruptible() is checked when the task
> >> is woken up (either due to stray signals or explicitly) and it returns
> >> true if condition evaluates to true.
> 
> this should be returns '0' if the condition evaluates to true.

Ah, but if the condition changes while wait_event_interruptible() is in
the process of returning, it is quite possible that the answer will be
different afterwards.  For example, consider this scenario:

o	wait_event_interruptible() decides to return spuriously for
	whatever reason, and thus returns a non-zero value.

o	An invocation of (say) rcu_start_gp_advanced() sets ->gp_flags
	to RCU_GP_FLAG_INIT, thus requesting that a new grace period
	start.

o	At this point, the return value says that we should not start
	a new grace period, but the ->gp_flags value says that we
	should.

Because it is the ->gp_flags value that really knows, the current code
ignores wait_event_interruptible()'s return value and just checks
the ->gp_flags value.

> >> In the current scenario, if we get a spurious wakeup, we take the
> >> costly path of checking this condition again (with a barrier and lock)
> >> before going back to wait.
> >>
> >> The scenario of getting an actual wakeup after getting a spurious
> >> wakeup exists even today, this is the window after detecting a
> >> spurious wakeup and before going back to wait. I am not sure if using
> >> the return value enlarges that window as we are going back to sleep
> >> immediately.
> >>
> >> Thoughts?
> >
> > If the flag is set, why should we care whether or not the wakeup was
> > spurious?  If the flag is not set, why should we care whether or not
> > wait_event_interruptible() thought that the wakeup was not spurious?
> 
> A correction about the return value above: return will be 0 if the
> condition is true, in this case if the flag is set.
> 
> If the flag is set, ret will be 0 and we will go ahead with
> rcu_gp_init(). (no change wrt current behavior)

Sorry, this is not always correct.  RCU is highly concurrent, so you do
need to start thinking in terms of concurrency.  Your reasoning above
is a symptom of single-threaded thinking.  Please see my scenario above
showing how the return can be non-zero even though ->gp_flags is set.

> If the flag is not set, currently we go ahead and call rcu_gp_init()
> from where we check if the flag is set (after a lock+barrier) and
> return.

True enough.  Is that really a problem?  If so, exactly why is it a
problem?

> If we care about what wait_event_interruptible() returns, we can go
> back and wait for an actual wakeup much earlier without the additional
> overhead of calling rcu_gp_init().

The key phrase here is "If we care".  Should we care?  If so, why?

I suggest running some random benchmark and counting how many times
rcu_gp_init() is called and how many times rcu_gp_init() returns
because ->gp_flags is not set.  If rcu_gp_init() returns because
->gp_flags is not set a significant fraction of the time, then this
-might- be worth worrying about.  (Extra credit: Under what conditions
would it be worth worrying about, and how would you go about checking
to see whether those conditions hold?)
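
One hypothetical way to gather those counts (the counter names are invented
for illustration and do not exist in the kernel):

	static atomic_long_t gp_wakeups;		/* every wakeup from the wait */
	static atomic_long_t gp_spurious_wakeups;	/* rcu_gp_init() saw no flag */

	/* ... inside the grace-period kthread's wait loop ... */
	wait_event_interruptible(rsp->gp_wq,
			ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_INIT);
	atomic_long_inc(&gp_wakeups);
	if (rcu_gp_init(rsp))
		break;
	atomic_long_inc(&gp_spurious_wakeups);

The ratio of the two counters after a benchmark run would show whether the
early-return path is hot enough to be worth optimizing.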

							Thanx, Paul


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-24 18:12           ` Paul E. McKenney
@ 2014-07-24 19:59             ` Pranith Kumar
  2014-07-24 20:27               ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-07-24 19:59 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...,
	Peter Zijlstra

Adding peterz to CC as git blames him for wait_event code. :)

(original LKML link: https://lkml.org/lkml/2014/7/23/45)

On Thu, Jul 24, 2014 at 2:12 PM, Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

>> >> > How does this added check help?  I don't see that it does.  If the flag
>> >> > is set, we want to wake up.  If we get a spurious wakeup, but then the
>> >> > flag gets set before we actually wake up, we still want to wake up.
>> >>
>> >> So I took a look at the docs again, and using the return value is the
>> >> recommended way to check for spurious wakeups.
>> >>
>> >> The condition in wait_event_interruptible() is checked when the task
>> >> is woken up (either due to stray signals or explicitly) and it returns
>> >> true if condition evaluates to true.
>>
>> this should be returns '0' if the condition evaluates to true.
>
> Ah, but if the condition changes while wait_event_interruptible() is in
> the process of returning, it is quite possible that the answer will be
> different afterwards.  For example, consider this scenario:
>
> o       wait_event_interruptible() decides to return spuriously for
>         whatever reason, and thus returns a non-zero value.
>
> o       An invocation of (say) rcu_start_gp_advanced() sets ->gp_flags
>         to RCU_GP_FLAG_INIT, thus requesting that a new grace period
>         start.
>
> o       At this point, the return value says that we should not start
>         a new grace period, but the ->gp_flags value says that we
>         should.
>
> Because it is the ->gp_flags value that really knows, the current code
> ignores wait_event_interruptible()'s return value and just checks
> the ->gp_flags value.

OK. Let me simplify what wait_event_interruptible is doing. That will
give you an idea why checking the return value is sufficient.

In highly simplified form wait_event() does the following:

ret = wait_event(wq, condition):

	ret = -ERESTARTSYS;
	if (condition) {
		ret = 0;
		goto out;
	}
	sleep();
	/* wakes up */
	if (condition)
		ret = 0;
out:
	return ret;

Considering this, the above situation will now be:

o       wait_event_interruptible() decides to return spuriously for
        whatever reason, and thus returns a non-zero value.

o       An invocation of (say) rcu_start_gp_advanced() sets ->gp_flags
        to RCU_GP_FLAG_INIT, thus requesting that a new grace period
        start.

o       At this point, the return value says that we should not start
        a new grace period, but the ->gp_flags value says that we
        should.

o	We see that return value is non-zero and go back to wait_event

o	In wait_event, we check that the condition is actually true and 
	return immediately without sleeping and with a return value of 0.

If ->gp_flags is set after returning from wait_event_interruptible() and we
get back to waiting again, we will immediately see that the condition is true
and return and continue with rcu_gp_init().

The basic idea is that in case of a spurious wakeup we return to waiting as
soon as possible, and if the condition becomes true while we are going back to
wait_event(), we return immediately with a value of 0.
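
To make the two shapes concrete, here is a rough, non-authoritative sketch of
the loop as it stands versus the proposed variant (names as used above;
tracing, cond_resched() and signal handling omitted):

	/* Current shape: ignore the return value; rcu_gp_init() re-checks
	 * ->gp_flags under the lock and returns 0 on a spurious wakeup. */
	for (;;) {
		wait_event_interruptible(rsp->gp_wq,
				ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_INIT);
		if (rcu_gp_init(rsp))
			break;
	}

	/* Proposed shape: a spurious wakeup (non-zero return) goes straight
	 * back to waiting, skipping the lock acquisition in rcu_gp_init(). */
	for (;;) {
		int ret = wait_event_interruptible(rsp->gp_wq,
				ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_INIT);
		if (!ret && rcu_gp_init(rsp))
			break;
	}

Either way the window described above (the flag being set between the spurious
return and re-entering the wait) closes itself, because the condition is
re-evaluated before the task sleeps again.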

>
>> >> In the current scenario, if we get a spurious wakeup, we take the
>> >> costly path of checking this condition again (with a barrier and lock)
>> >> before going back to wait.
>> >>
>> >> The scenario of getting an actual wakeup after getting a spurious
>> >> wakeup exists even today, this is the window after detecting a
>> >> spurious wakeup and before going back to wait. I am not sure if using
>> >> the return value enlarges that window as we are going back to sleep
>> >> immediately.
>> >>
>> >> Thoughts?
>> >
>> > If the flag is set, why should we care whether or not the wakeup was
>> > spurious?  If the flag is not set, why should we care whether or not
>> > wait_event_interruptible() thought that the wakeup was not spurious?
>>
>> A correction about the return value above: return will be 0 if the
>> condition is true, in this case if the flag is set.
>>
>> If the flag is set, ret will be 0 and we will go ahead with
>> rcu_gp_init(). (no change wrt current behavior)
>
> Sorry, this is not always correct.  RCU is highly concurrent, so you do
> need to start thinking in terms of concurrency.  Your reasoning above
> is a symptom of single-threaded thinking.  Please see my scenario above
> showing how the return can be non-zero even though ->gp_flags is set.
>
>> If the flag is not set, currently we go ahead and call rcu_gp_init()
>> from where we check if the flag is set (after a lock+barrier) and
>> return.
>
> True enough.  Is that really a problem?  If so, exactly why is it a
> problem?

The problem is that we are detecting a spurious wakeup after unnecessary work
which we can avoid by checking the return value.

>
>> If we care about what wait_event_interruptible() returns, we can go
>> back and wait for an actual wakeup much earlier without the additional
>> overhead of calling rcu_gp_init().
>
> The key phrase here is "If we care".  Should we care?  If so, why?
>
> I suggest running some random benchmark and counting how many times
> rcu_gp_init() is called and how many times rcu_gp_init() returns
> because ->gp_flags is not set.  If rcu_gp_init() returns because
> ->gp_flags is not set a significant fraction of the time, then this
> -might- be worth worrying about.  (Extra credit: Under what conditions

In the grand scheme of things, I agree that minor optimizations may not seem
to be worth much. But when the optimizations are straightforward and 
are _actually_ improving things, even by a small margin, I think they are
worth considering.

Think of the billions of cycles we will save ;-)

> -might- be worth worrying about.  (Extra credit: Under what conditions
> would it be worth worrying about, and how would you go about checking
> to see whether those conditions hold?)
>


-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 11/16] rcu: Check for spurious wakeup using return value
  2014-07-24 19:59             ` Pranith Kumar
@ 2014-07-24 20:27               ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-07-24 20:27 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	open list:READ-COPY UPDATE...,
	Peter Zijlstra

On Thu, Jul 24, 2014 at 03:59:33PM -0400, Pranith Kumar wrote:
> Adding peterz to CC as git blames him for wait_event code. :)
> 
> (original LKML link: https://lkml.org/lkml/2014/7/23/45)
> 
> On Thu, Jul 24, 2014 at 2:12 PM, Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

[ . . . ]

> >> If we care about what wait_event_interruptible() returns, we can go
> >> back and wait for an actual wakeup much earlier without the additional
> >> overhead of calling rcu_gp_init().
> >
> > The key phrase here is "If we care".  Should we care?  If so, why?
> >
> > I suggest running some random benchmark and counting how many times
> > rcu_gp_init() is called and how many times rcu_gp_init() returns
> > because ->gp_flags is not set.  If rcu_gp_init() returns because
> > ->gp_flags is not set a significant fraction of the time, then this
> > -might- be worth worrying about.  (Extra credit: Under what conditions
> 
> In the grand scheme of things, I agree that minor optimizations may not seem
> to be worth much. But when the optimizations are straightforward and 
> are _actually_ improving things, even by a small margin, I think they are
> worth considering.
> 
> Think of the billions of cycles we will save ;-)

If there are significant savings.  You have yet to demonstrate this.
In fact, you have yet to demonstrate that your change doesn't make
things worse.

							Thanx, Paul

> > -might- be worth worrying about.  (Extra credit: Under what conditions
> > would it be worth worrying about, and how would you go about checking
> > to see whether those conditions hold?)
> >
> 
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 00/16] rcu: Some minor fixes and cleanups
  2014-07-23 14:45 ` [PATCH 00/16] rcu: Some minor fixes and cleanups Paul E. McKenney
@ 2014-08-27  1:10   ` Pranith Kumar
  2014-08-27  3:20     ` Paul E. McKenney
  0 siblings, 1 reply; 58+ messages in thread
From: Pranith Kumar @ 2014-08-27  1:10 UTC (permalink / raw)
  To: Paul McKenney
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, LKML

On Wed, Jul 23, 2014 at 10:45 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 23, 2014 at 01:09:37AM -0400, Pranith Kumar wrote:
>> Hi Paul,
>>
>> This is a series of minor fixes and cleanup patches which I found while studying
>> the code. All my previous pending (but not rejected ;) patches are superseded by
>> this series, expect the rcutorture snprintf changes. I am still waiting for you
>> to decide on that one :)
>>
>> These changes have been tested by the kvm rcutorture test setup. Some tests give
>> me stall warnings, but otherwise have SUCCESS messages in the logs.
>
> For patches 1, 3, 5, 8, 12, and 13, once you get a Reviewed-by from one
> of the co-maintainers or designated reviewers, I will queue them.
> The other patches I have responded to.
>

Hi Paul, just a reminder so that these don't get forgotten :)

>>
>> Pranith Kumar (16):
>>   rcu: Use rcu_num_nodes instead of NUM_RCU_NODES
>>   rcu: Check return value for cpumask allocation
>>   rcu: Fix comment for gp_state field values
>>   rcu: Remove redundant check for an online CPU
>>   rcu: Add noreturn attribute to boost kthread
>>   rcu: Clear gp_flags only when actually starting new gp
>>   rcu: Save and restore irq flags in rcu_gp_cleanup()
>>   rcu: Clean up rcu_spawn_one_boost_kthread()
>>   rcu: Remove redundant check for online cpu
>>   rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup
>>   rcu: Check for spurious wakeup using return value
>>   rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads()
>>   rcu: Spawn nocb kthreads from rcu_prepare_kthreads()
>>   rcu: Remove redundant checks for rcu_scheduler_fully_active
>>   rcu: Check for a nocb cpu before trying to spawn nocb threads
>>   rcu: kvm.sh: Fix error when you pass --cpus argument
>>
>>  kernel/rcu/tree.c                             | 42 ++++++++++++++-------------
>>  kernel/rcu/tree.h                             |  4 +--
>>  kernel/rcu/tree_plugin.h                      | 40 +++++++++++++------------
>>  tools/testing/selftests/rcutorture/bin/kvm.sh |  4 +--
>>  4 files changed, 47 insertions(+), 43 deletions(-)
>>
>> --
>> 2.0.0.rc2
>>
>



-- 
Pranith

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH 00/16] rcu: Some minor fixes and cleanups
  2014-08-27  1:10   ` Pranith Kumar
@ 2014-08-27  3:20     ` Paul E. McKenney
  0 siblings, 0 replies; 58+ messages in thread
From: Paul E. McKenney @ 2014-08-27  3:20 UTC (permalink / raw)
  To: Pranith Kumar
  Cc: Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, LKML

On Tue, Aug 26, 2014 at 09:10:10PM -0400, Pranith Kumar wrote:
> On Wed, Jul 23, 2014 at 10:45 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Wed, Jul 23, 2014 at 01:09:37AM -0400, Pranith Kumar wrote:
> >> Hi Paul,
> >>
> >> This is a series of minor fixes and cleanup patches which I found while studying
> >> the code. All my previous pending (but not rejected ;) patches are superseded by
> >> this series, expect the rcutorture snprintf changes. I am still waiting for you
> >> to decide on that one :)
> >>
> >> These changes have been tested by the kvm rcutorture test setup. Some tests give
> >> me stall warnings, but otherwise have SUCCESS messages in the logs.
> >
> > For patches 1, 3, 5, 8, 12, and 13, once you get a Reviewed-by from one
> > of the co-maintainers or designated reviewers, I will queue them.
> > The other patches I have responded to.
> 
> Hi Paul, just a reminder so that these don't get forgotten :)

Hello, Pranith, haven't forgotten them, but also haven't seen any
reviews.

							Thanx, Paul

> >> Pranith Kumar (16):
> >>   rcu: Use rcu_num_nodes instead of NUM_RCU_NODES
> >>   rcu: Check return value for cpumask allocation
> >>   rcu: Fix comment for gp_state field values
> >>   rcu: Remove redundant check for an online CPU
> >>   rcu: Add noreturn attribute to boost kthread
> >>   rcu: Clear gp_flags only when actually starting new gp
> >>   rcu: Save and restore irq flags in rcu_gp_cleanup()
> >>   rcu: Clean up rcu_spawn_one_boost_kthread()
> >>   rcu: Remove redundant check for online cpu
> >>   rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup
> >>   rcu: Check for spurious wakeup using return value
> >>   rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads()
> >>   rcu: Spawn nocb kthreads from rcu_prepare_kthreads()
> >>   rcu: Remove redundant checks for rcu_scheduler_fully_active
> >>   rcu: Check for a nocb cpu before trying to spawn nocb threads
> >>   rcu: kvm.sh: Fix error when you pass --cpus argument
> >>
> >>  kernel/rcu/tree.c                             | 42 ++++++++++++++-------------
> >>  kernel/rcu/tree.h                             |  4 +--
> >>  kernel/rcu/tree_plugin.h                      | 40 +++++++++++++------------
> >>  tools/testing/selftests/rcutorture/bin/kvm.sh |  4 +--
> >>  4 files changed, 47 insertions(+), 43 deletions(-)
> >>
> >> --
> >> 2.0.0.rc2
> >>
> >
> 
> 
> 
> -- 
> Pranith
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

end of thread

Thread overview: 58+ messages
2014-07-23  5:09 [PATCH 00/16] rcu: Some minor fixes and cleanups Pranith Kumar
2014-07-23  5:09 ` [PATCH 01/16] rcu: Use rcu_num_nodes instead of NUM_RCU_NODES Pranith Kumar
2014-07-23  5:09 ` [PATCH 02/16] rcu: Check return value for cpumask allocation Pranith Kumar
2014-07-23 12:06   ` Paul E. McKenney
2014-07-23 12:49     ` Pranith Kumar
2014-07-23 17:14     ` Pranith Kumar
2014-07-23 18:01       ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 03/16] rcu: Fix comment for gp_state field values Pranith Kumar
2014-07-23  5:09 ` [PATCH 04/16] rcu: Remove redundant check for an online CPU Pranith Kumar
2014-07-23 12:09   ` Paul E. McKenney
2014-07-23 13:23     ` Pranith Kumar
2014-07-23 13:41       ` Paul E. McKenney
2014-07-23 14:01         ` Pranith Kumar
2014-07-23 14:14           ` Paul E. McKenney
2014-07-23 15:07             ` Pranith Kumar
2014-07-23 15:21               ` Pranith Kumar
2014-07-23  5:09 ` [PATCH 05/16] rcu: Add noreturn attribute to boost kthread Pranith Kumar
2014-07-23  5:09 ` [PATCH 06/16] rcu: Clear gp_flags only when actually starting new gp Pranith Kumar
2014-07-23 12:13   ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 07/16] rcu: Save and restore irq flags in rcu_gp_cleanup() Pranith Kumar
2014-07-23 12:16   ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 08/16] rcu: Clean up rcu_spawn_one_boost_kthread() Pranith Kumar
2014-07-23  5:09 ` [PATCH 09/16] rcu: Remove redundant check for online cpu Pranith Kumar
2014-07-23 12:21   ` Paul E. McKenney
2014-07-23 12:59     ` Pranith Kumar
2014-07-23 13:50       ` Paul E. McKenney
2014-07-23 14:12         ` Pranith Kumar
2014-07-23 14:23           ` Paul E. McKenney
2014-07-23 15:11             ` Pranith Kumar
2014-07-23 15:30               ` Paul E. McKenney
2014-07-23 15:44                 ` Pranith Kumar
2014-07-23 19:15                   ` Paul E. McKenney
2014-07-23 20:01                     ` Pranith Kumar
2014-07-23 20:16                     ` Pranith Kumar
2014-07-23 20:23                       ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 10/16] rcu: Check for RCU_FLAG_GP_INIT bit in gp_flags for spurious wakeup Pranith Kumar
2014-07-23 12:23   ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 11/16] rcu: Check for spurious wakeup using return value Pranith Kumar
2014-07-23 12:26   ` Paul E. McKenney
2014-07-24  2:36     ` Pranith Kumar
2014-07-24  3:43       ` Paul E. McKenney
2014-07-24  4:03         ` Pranith Kumar
2014-07-24 18:12           ` Paul E. McKenney
2014-07-24 19:59             ` Pranith Kumar
2014-07-24 20:27               ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 12/16] rcu: Rename rcu_spawn_gp_kthread() to rcu_spawn_kthreads() Pranith Kumar
2014-07-23  5:09 ` [PATCH 13/16] rcu: Spawn nocb kthreads from rcu_prepare_kthreads() Pranith Kumar
2014-07-23  5:09 ` [PATCH 14/16] rcu: Remove redundant checks for rcu_scheduler_fully_active Pranith Kumar
2014-07-23 12:27   ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 15/16] rcu: Check for a nocb cpu before trying to spawn nocb threads Pranith Kumar
2014-07-23 12:28   ` Paul E. McKenney
2014-07-23 13:14     ` Pranith Kumar
2014-07-23 13:42       ` Paul E. McKenney
2014-07-23  5:09 ` [PATCH 16/16] rcu: kvm.sh: Fix error when you pass --cpus argument Pranith Kumar
2014-07-23 12:31   ` Paul E. McKenney
2014-07-23 14:45 ` [PATCH 00/16] rcu: Some minor fixes and cleanups Paul E. McKenney
2014-08-27  1:10   ` Pranith Kumar
2014-08-27  3:20     ` Paul E. McKenney
