linux-kernel.vger.kernel.org archive mirror
* [PATCH tip/core/rcu 0/2] Expedited grace-period changes for v4.19
@ 2018-06-25 22:43 Paul E. McKenney
  2018-06-25 22:43 ` [PATCH tip/core/rcu 1/2] rcu: Make expedited grace period use direct call on last leaf Paul E. McKenney
  2018-06-25 22:43 ` [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline Paul E. McKenney
  0 siblings, 2 replies; 11+ messages in thread
From: Paul E. McKenney @ 2018-06-25 22:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg, joel

Hello!

This series contains a couple of changes to RCU expedited grace periods:

1.	Initialize the last rcu_node structure via direct call rather than
	via workqueue.

2.	Make parallel initialization of expedited grace periods handle the
	possibility of CPU 0 being offline, courtesy of Boqun Feng.
	(This is currently marked Not-Yet-Signed-off-by while looking
	into the possibility of a more efficient implementation.)

							Thanx, Paul

------------------------------------------------------------------------

 rcu.h      |    3 +++
 tree_exp.h |   21 +++++++++++++++++----
 2 files changed, 20 insertions(+), 4 deletions(-)


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH tip/core/rcu 1/2] rcu: Make expedited grace period use direct call on last leaf
  2018-06-25 22:43 [PATCH tip/core/rcu 0/2] Expedited grace-period changes for v4.19 Paul E. McKenney
@ 2018-06-25 22:43 ` Paul E. McKenney
  2018-06-25 22:43 ` [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline Paul E. McKenney
  1 sibling, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2018-06-25 22:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg, joel,
	Paul E. McKenney

During expedited grace-period initialization, a work item is scheduled
for each leaf rcu_node structure.  However, that initialization code
is itself (normally) executing from a workqueue, so one of the leaf
rcu_node structures could just as well be handled by that pre-existing
workqueue, and with less overhead.  This commit therefore uses a
shiny new rcu_is_leaf_node() macro to execute the last leaf rcu_node
structure's initialization directly from the pre-existing workqueue.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/rcu.h      | 3 +++
 kernel/rcu/tree_exp.h | 5 +++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 40cea6735c2d..db0870acfdff 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -276,6 +276,9 @@ static inline void rcu_init_levelspread(int *levelspread, const int *levelcnt)
 /* Is this rcu_node a leaf? */
 #define rcu_is_leaf_node(rnp) ((rnp)->level == rcu_num_lvls - 1)
 
+/* Is this rcu_node the last leaf? */
+#define rcu_is_last_leaf_node(rsp, rnp) ((rnp) == &(rsp)->node[rcu_num_nodes - 1])
+
 /*
  * Do a full breadth-first scan of the rcu_node structures for the
  * specified rcu_state structure.
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index d40708e8c5d6..c6385ee1af65 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -486,8 +486,9 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 		rnp->rew.rew_func = func;
 		rnp->rew.rew_rsp = rsp;
 		if (!READ_ONCE(rcu_par_gp_wq) ||
-		    rcu_scheduler_active != RCU_SCHEDULER_RUNNING) {
-			/* No workqueues yet. */
+		    rcu_scheduler_active != RCU_SCHEDULER_RUNNING ||
+		    rcu_is_last_leaf_node(rsp, rnp)) {
+			/* No workqueues yet or last leaf, do direct call. */
 			sync_rcu_exp_select_node_cpus(&rnp->rew.rew_work);
 			continue;
 		}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-25 22:43 [PATCH tip/core/rcu 0/2] Expedited grace-period changes for v4.19 Paul E. McKenney
  2018-06-25 22:43 ` [PATCH tip/core/rcu 1/2] rcu: Make expedited grace period use direct call on last leaf Paul E. McKenney
@ 2018-06-25 22:43 ` Paul E. McKenney
  2018-06-26  9:38   ` Peter Zijlstra
  1 sibling, 1 reply; 11+ messages in thread
From: Paul E. McKenney @ 2018-06-25 22:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg, joel,
	Boqun Feng

From: Boqun Feng <boqun.feng@gmail.com>

Currently, the parallelized initialization of expedited grace periods uses
the workqueue associated with each rcu_node structure's ->grplo field.
This works fine unless that CPU is offline.  This commit therefore
uses the CPU corresponding to the lowest-numbered online CPU, or just
reports the quiescent states if there are no online CPUs on this rcu_node
structure.

Note that this patch uses cpu_is_offline() instead of the usual
approach of checking bits in the rcu_node structure's ->qsmaskinitnext
field.  This is safe because preemption is disabled across both the
cpu_is_offline() check and the call to queue_work_on().

Not-Yet-Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
[ paulmck: Disable preemption to close offline race window. ]
Not-Yet-Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/tree_exp.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index c6385ee1af65..6acac74092cb 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
 static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 				     smp_call_func_t func)
 {
+	int cpu;
 	struct rcu_node *rnp;
 
 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
@@ -493,8 +494,19 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 			continue;
 		}
 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
-		rnp->exp_need_flush = true;
+		preempt_disable();
+		for_each_leaf_node_possible_cpu(rnp, cpu) {
+			if (cpu_is_offline(cpu)) /* Preemption disabled. */
+				continue;
+			queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
+			rnp->exp_need_flush = true;
+			break;
+		}
+		preempt_enable();
+		if (!rnp->exp_need_flush) { /* All offline, report QSes. */
+			queue_work(rcu_par_gp_wq, &rnp->rew.rew_work);
+			rnp->exp_need_flush = true;
+		}
 	}
 
 	/* Wait for workqueue jobs (if any) to complete. */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-25 22:43 ` [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline Paul E. McKenney
@ 2018-06-26  9:38   ` Peter Zijlstra
  2018-06-26 10:44     ` Boqun Feng
  0 siblings, 1 reply; 11+ messages in thread
From: Peter Zijlstra @ 2018-06-26  9:38 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel, Boqun Feng

On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> +		preempt_disable();
> +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> +				continue;

Create for_each_node_online_cpu() instead? Seems a bit pointless to
iterate possible mask only to then check it against the online mask.
Just iterate the online mask directly.

Or better yet, write this as:

	preempt_disable();
	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
	if (cpu > rnp->grphi)
		cpu = WORK_CPU_UNBOUND;
	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
	preempt_enable();

Which is what it appears to be doing.

> +			queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> +			rnp->exp_need_flush = true;
> +			break;
> +		}
> +		preempt_enable();
> +		if (!rnp->exp_need_flush) { /* All offline, report QSes. */
> +			queue_work(rcu_par_gp_wq, &rnp->rew.rew_work);
> +			rnp->exp_need_flush = true;
> +		}
>  	}
>  
>  	/* Wait for workqueue jobs (if any) to complete. */
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-26  9:38   ` Peter Zijlstra
@ 2018-06-26 10:44     ` Boqun Feng
  2018-06-26 11:46       ` Boqun Feng
  0 siblings, 1 reply; 11+ messages in thread
From: Boqun Feng @ 2018-06-26 10:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel

[-- Attachment #1: Type: text/plain, Size: 1359 bytes --]

On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > +		preempt_disable();
> > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > +				continue;
> 
> Create for_each_node_online_cpu() instead? Seems a bit pointless to
> iterate possible mask only to then check it against the online mask.
> Just iterate the online mask directly.
> 
> Or better yet, write this as:
> 
> 	preempt_disable();
> 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> 	if (cpu > rnp->grphi)
> 		cpu = WORK_CPU_UNBOUND;
> 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> 	preempt_enable();
> 
> Which is what it appears to be doing.
> 

Makes sense! Thanks ;-)

Applied this and running a TREE03 rcutorture. If all goes well, I will
send the updated patch.

Regards,
Boqun

> > +			queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > +			rnp->exp_need_flush = true;
> > +			break;
> > +		}
> > +		preempt_enable();
> > +		if (!rnp->exp_need_flush) { /* All offline, report QSes. */
> > +			queue_work(rcu_par_gp_wq, &rnp->rew.rew_work);
> > +			rnp->exp_need_flush = true;
> > +		}
> >  	}
> >  
> >  	/* Wait for workqueue jobs (if any) to complete. */
> > -- 
> > 2.17.1
> > 


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-26 10:44     ` Boqun Feng
@ 2018-06-26 11:46       ` Boqun Feng
  2018-06-26 12:31         ` Paul E. McKenney
  2018-06-26 19:27         ` Paul E. McKenney
  0 siblings, 2 replies; 11+ messages in thread
From: Boqun Feng @ 2018-06-26 11:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel

On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > +		preempt_disable();
> > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > +				continue;
> > 
> > Create for_each_node_online_cpu() instead? Seems a bit pointless to
> > iterate possible mask only to then check it against the online mask.
> > Just iterate the online mask directly.
> > 
> > Or better yet, write this as:
> > 
> > 	preempt_disable();
> > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > 	if (cpu > rnp->grphi)
> > 		cpu = WORK_CPU_UNBOUND;
> > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > 	preempt_enable();
> > 
> > Which is what it appears to be doing.
> > 
> 
> Make sense! Thanks ;-)
> 
> Applied this and running a TREE03 rcutorture. If all go well, I will
> send the updated patch.
> 

So the patch has passed one 30-minute TREE03 rcutorture run. Paul,
if it looks good, could you take it for your next spin or pull request
in the future? Thanks.

Regards,
Boqun

-------------->8
Subject: [PATCH tip/core/rcu v2] rcu: exp: Make expedited GPs handle CPU 0 being offline

Currently, the parallelized initialization of expedited grace periods
uses the workqueue associated with each rcu_node structure's ->grplo
field.  This works fine unless that CPU is offline.  This commit
therefore uses the CPU corresponding to the lowest-numbered online CPU,
or falls back to queueing the work on WORK_CPU_UNBOUND if there are no
online CPUs on this rcu_node structure.

Note that this patch uses cpu_online_mask instead of the usual approach
of checking bits in the rcu_node structure's ->qsmaskinitnext field.
This is safe because preemption is disabled across both the
cpu_online_mask check and the call to queue_work_on().

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
[ paulmck: Disable preemption to close offline race window. ]
Not-Yet-Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
v1 --> v2:
	
*	Replace the for_each_leaf_node_possible_cpu() + cpu_is_offline()
	check loop with a single cpumask_next() as suggested by Peter
	Zijlstra

 kernel/rcu/tree_exp.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index d40708e8c5d6..3bf87fd4bd91 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -473,6 +473,7 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 				     smp_call_func_t func)
 {
 	struct rcu_node *rnp;
+	int cpu;
 
 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
 	sync_exp_reset_tree(rsp);
@@ -492,7 +493,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 			continue;
 		}
 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_disable();
+		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
+		/* All offlines, queue the work on an unbound CPU */
+		if (unlikely(cpu > rnp->grphi))
+			cpu = WORK_CPU_UNBOUND;
+		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_enable();
 		rnp->exp_need_flush = true;
 	}
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-26 11:46       ` Boqun Feng
@ 2018-06-26 12:31         ` Paul E. McKenney
  2018-06-26 19:27         ` Paul E. McKenney
  1 sibling, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2018-06-26 12:31 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel

On Tue, Jun 26, 2018 at 07:46:52PM +0800, Boqun Feng wrote:
> On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> > On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > > +		preempt_disable();
> > > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > > +				continue;
> > > 
> > > Create for_each_node_online_cpu() instead? Seems a bit pointless to
> > > iterate possible mask only to then check it against the online mask.
> > > Just iterate the online mask directly.
> > > 
> > > Or better yet, write this as:
> > > 
> > > 	preempt_disable();
> > > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > 	if (cpu > rnp->grphi)
> > > 		cpu = WORK_CPU_UNBOUND;
> > > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > 	preempt_enable();
> > > 
> > > Which is what it appears to be doing.
> > > 
> > 
> > Make sense! Thanks ;-)
> > 
> > Applied this and running a TREE03 rcutorture. If all go well, I will
> > send the updated patch.
> 
> So the patch has passed one 30 min run for TREE03 rcutorture. Paul,
> if it looks good, could you take it for your next spin or pull request
> in the future? Thanks.

Looks much improved, thank you both!  I will pull this in later
today, Pacific Time.

							Thanx, Paul

> Regards,
> Boqun
> 
> -------------->8
> Subject: [PATCH tip/core/rcu v2] rcu: exp: Make expedited GPs handle CPU 0 being offline
> 
> Currently, the parallelized initialization of expedited grace periods
> uses the workqueue associated with each rcu_node structure's ->grplo
> field.  This works fine unless that CPU is offline.  This commit
> therefore uses the CPU corresponding to the lowest-numbered online CPU,
> or fallback to queue the work on WORK_CPU_UNBOUND if there are no online
> CPUs on this rcu_node structure.
> 
> Note that this patch uses cpu_online_mask instead of the usual approach
> of checking bits in the rcu_node structure's ->qsmaskinitnext field.
> This is safe because preemption is disabled across both the
> cpu_online_mask check and the call to queue_work_on().
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> [ paulmck: Disable preemption to close offline race window. ]
> Not-Yet-Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> v1 --> v2:
> 	
> *	Replace the for_each_leaf_node_possible_cpu() + cpu_is_offline()
> 	check loop with a single cpumask_next() as suggested by Peter
> 	Zijlstra
> 
>  kernel/rcu/tree_exp.h | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> index d40708e8c5d6..3bf87fd4bd91 100644
> --- a/kernel/rcu/tree_exp.h
> +++ b/kernel/rcu/tree_exp.h
> @@ -473,6 +473,7 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  				     smp_call_func_t func)
>  {
>  	struct rcu_node *rnp;
> +	int cpu;
> 
>  	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
>  	sync_exp_reset_tree(rsp);
> @@ -492,7 +493,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  			continue;
>  		}
>  		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
> -		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
> +		preempt_disable();
> +		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> +		/* All offlines, queue the work on an unbound CPU */
> +		if (unlikely(cpu > rnp->grphi))
> +			cpu = WORK_CPU_UNBOUND;
> +		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> +		preempt_enable();
>  		rnp->exp_need_flush = true;
>  	}
> 
> -- 
> 2.17.1
> 


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-26 11:46       ` Boqun Feng
  2018-06-26 12:31         ` Paul E. McKenney
@ 2018-06-26 19:27         ` Paul E. McKenney
  2018-06-27  2:42           ` Boqun Feng
  1 sibling, 1 reply; 11+ messages in thread
From: Paul E. McKenney @ 2018-06-26 19:27 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel

On Tue, Jun 26, 2018 at 07:46:52PM +0800, Boqun Feng wrote:
> On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> > On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > > +		preempt_disable();
> > > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > > +				continue;
> > > 
> > > Create for_each_node_online_cpu() instead? Seems a bit pointless to
> > > iterate possible mask only to then check it against the online mask.
> > > Just iterate the online mask directly.
> > > 
> > > Or better yet, write this as:
> > > 
> > > 	preempt_disable();
> > > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > 	if (cpu > rnp->grphi)
> > > 		cpu = WORK_CPU_UNBOUND;
> > > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > 	preempt_enable();
> > > 
> > > Which is what it appears to be doing.
> > > 
> > 
> > Make sense! Thanks ;-)
> > 
> > Applied this and running a TREE03 rcutorture. If all go well, I will
> > send the updated patch.
> > 
> 
> So the patch has passed one 30 min run for TREE03 rcutorture. Paul,
> if it looks good, could you take it for your next spin or pull request
> in the future? Thanks.

I ended up with the following, mostly just rewording the comment and
adding a one-liner on the change.  Does this work for you?

							Thanx, Paul

------------------------------------------------------------------------

commit ef31fa78032536d594630d7bd315d3faf60d98ca
Author: Boqun Feng <boqun.feng@gmail.com>
Date:   Fri Jun 15 12:06:31 2018 -0700

    rcu: Make expedited GPs handle CPU 0 being offline
    
    Currently, the parallelized initialization of expedited grace periods uses
    the workqueue associated with each rcu_node structure's ->grplo field.
    This works fine unless that CPU is offline.  This commit therefore
    uses the CPU corresponding to the lowest-numbered online CPU, or just
    reports the quiescent states if there are no online CPUs on this rcu_node
    structure.
    
    Note that this patch uses cpu_is_offline() instead of the usual
    approach of checking bits in the rcu_node structure's ->qsmaskinitnext
    field.  This is safe because preemption is disabled across both the
    cpu_is_offline() check and the call to queue_work_on().
    
    Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
    [ paulmck: Disable preemption to close offline race window. ]
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    [ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index c6385ee1af65..b3df3b770afb 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
 static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 				     smp_call_func_t func)
 {
+	int cpu;
 	struct rcu_node *rnp;
 
 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
@@ -493,7 +494,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 			continue;
 		}
 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_disable();
+		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
+		/* If all offline, queue the work on an unbound CPU. */
+		if (unlikely(cpu > rnp->grphi))
+			cpu = WORK_CPU_UNBOUND;
+		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_enable();
 		rnp->exp_need_flush = true;
 	}
 


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-26 19:27         ` Paul E. McKenney
@ 2018-06-27  2:42           ` Boqun Feng
  2018-06-27 16:15             ` Paul E. McKenney
  0 siblings, 1 reply; 11+ messages in thread
From: Boqun Feng @ 2018-06-27  2:42 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel

[-- Attachment #1: Type: text/plain, Size: 4302 bytes --]

On Tue, Jun 26, 2018 at 12:27:47PM -0700, Paul E. McKenney wrote:
> On Tue, Jun 26, 2018 at 07:46:52PM +0800, Boqun Feng wrote:
> > On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> > > On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > > > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > > > +		preempt_disable();
> > > > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > > > +				continue;
> > > > 
> > > > Create for_each_node_online_cpu() instead? Seems a bit pointless to
> > > > iterate possible mask only to then check it against the online mask.
> > > > Just iterate the online mask directly.
> > > > 
> > > > Or better yet, write this as:
> > > > 
> > > > 	preempt_disable();
> > > > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > > 	if (cpu > rnp->grphi)
> > > > 		cpu = WORK_CPU_UNBOUND;
> > > > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > > 	preempt_enable();
> > > > 
> > > > Which is what it appears to be doing.
> > > > 
> > > 
> > > Make sense! Thanks ;-)
> > > 
> > > Applied this and running a TREE03 rcutorture. If all go well, I will
> > > send the updated patch.
> > > 
> > 
> > So the patch has passed one 30 min run for TREE03 rcutorture. Paul,
> > if it looks good, could you take it for your next spin or pull request
> > in the future? Thanks.
> 
> I ended up with the following, mostly just rewording the comment and
> adding a one-liner on the change.  Does this work for you?
> 

Looks good to me. Only one thing I think we need to modify a little,
please see below:

> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> commit ef31fa78032536d594630d7bd315d3faf60d98ca
> Author: Boqun Feng <boqun.feng@gmail.com>
> Date:   Fri Jun 15 12:06:31 2018 -0700
> 
>     rcu: Make expedited GPs handle CPU 0 being offline
>     
>     Currently, the parallelized initialization of expedited grace periods uses
>     the workqueue associated with each rcu_node structure's ->grplo field.
>     This works fine unless that CPU is offline.  This commit therefore
>     uses the CPU corresponding to the lowest-numbered online CPU, or just
>     reports the quiescent states if there are no online CPUs on this rcu_node
>     structure.

Would it be better to write "or just queue the work on WORK_CPU_UNBOUND
if there are no online CPUs on this rcu_node structure"? Because we
currently don't report the QS directly if all CPUs are offline.

Thoughts?

Regards,
Boqun

>     
>     Note that this patch uses cpu_is_offline() instead of the usual
>     approach of checking bits in the rcu_node structure's ->qsmaskinitnext
>     field.  This is safe because preemption is disabled across both the
>     cpu_is_offline() check and the call to queue_work_on().
>     
>     Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>     [ paulmck: Disable preemption to close offline race window. ]
>     Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>     [ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]
> 
> diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> index c6385ee1af65..b3df3b770afb 100644
> --- a/kernel/rcu/tree_exp.h
> +++ b/kernel/rcu/tree_exp.h
> @@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
>  static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  				     smp_call_func_t func)
>  {
> +	int cpu;
>  	struct rcu_node *rnp;
>  
>  	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
> @@ -493,7 +494,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  			continue;
>  		}
>  		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
> -		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
> +		preempt_disable();
> +		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> +		/* If all offline, queue the work on an unbound CPU. */
> +		if (unlikely(cpu > rnp->grphi))
> +			cpu = WORK_CPU_UNBOUND;
> +		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> +		preempt_enable();
>  		rnp->exp_need_flush = true;
>  	}
>  
> 


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-27  2:42           ` Boqun Feng
@ 2018-06-27 16:15             ` Paul E. McKenney
  2018-06-28 16:52               ` Paul E. McKenney
  0 siblings, 1 reply; 11+ messages in thread
From: Paul E. McKenney @ 2018-06-27 16:15 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel

On Wed, Jun 27, 2018 at 10:42:01AM +0800, Boqun Feng wrote:
> On Tue, Jun 26, 2018 at 12:27:47PM -0700, Paul E. McKenney wrote:
> > On Tue, Jun 26, 2018 at 07:46:52PM +0800, Boqun Feng wrote:
> > > On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> > > > On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > > > > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > > > > +		preempt_disable();
> > > > > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > > > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > > > > +				continue;
> > > > > 
> > > > > Create for_each_node_online_cpu() instead? Seems a bit pointless to
> > > > > iterate possible mask only to then check it against the online mask.
> > > > > Just iterate the online mask directly.
> > > > > 
> > > > > Or better yet, write this as:
> > > > > 
> > > > > 	preempt_disable();
> > > > > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > > > 	if (cpu > rnp->grphi)
> > > > > 		cpu = WORK_CPU_UNBOUND;
> > > > > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > > > 	preempt_enable();
> > > > > 
> > > > > Which is what it appears to be doing.
> > > > > 
> > > > 
> > > > Make sense! Thanks ;-)
> > > > 
> > > > Applied this and running a TREE03 rcutorture. If all go well, I will
> > > > send the updated patch.
> > > > 
> > > 
> > > So the patch has passed one 30 min run for TREE03 rcutorture. Paul,
> > > if it looks good, could you take it for your next spin or pull request
> > > in the future? Thanks.
> > 
> > I ended up with the following, mostly just rewording the comment and
> > adding a one-liner on the change.  Does this work for you?
> 
> Looks good to me. Only one thing I think we need to modify a little,
> please see below:
> 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------
> > 
> > commit ef31fa78032536d594630d7bd315d3faf60d98ca
> > Author: Boqun Feng <boqun.feng@gmail.com>
> > Date:   Fri Jun 15 12:06:31 2018 -0700
> > 
> >     rcu: Make expedited GPs handle CPU 0 being offline
> >     
> >     Currently, the parallelized initialization of expedited grace periods uses
> >     the workqueue associated with each rcu_node structure's ->grplo field.
> >     This works fine unless that CPU is offline.  This commit therefore
> >     uses the CPU corresponding to the lowest-numbered online CPU, or just
> >     reports the quiescent states if there are no online CPUs on this rcu_node
> >     structure.
> 
> better write "or just queue the work on WORK_CPU_UNBOUND if there are
> no online CPUs on this rcu_node structure"? Because we currently don't
> report the QS directly if all CPU are offline.
> 
> Thoughts?

Any objections?  If I don't hear any by tomorrow morning (Pacific Time),
I will make this change.

							Thanx, Paul

> Regards,
> Boqun
> 
> >     
> >     Note that this patch uses cpu_is_offline() instead of the usual
> >     approach of checking bits in the rcu_node structure's ->qsmaskinitnext
> >     field.  This is safe because preemption is disabled across both the
> >     cpu_is_offline() check and the call to queue_work_on().
> >     
> >     Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> >     [ paulmck: Disable preemption to close offline race window. ]
> >     Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >     [ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]
> > 
> > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > index c6385ee1af65..b3df3b770afb 100644
> > --- a/kernel/rcu/tree_exp.h
> > +++ b/kernel/rcu/tree_exp.h
> > @@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
> >  static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> >  				     smp_call_func_t func)
> >  {
> > +	int cpu;
> >  	struct rcu_node *rnp;
> >  
> >  	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
> > @@ -493,7 +494,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> >  			continue;
> >  		}
> >  		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
> > -		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
> > +		preempt_disable();
> > +		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > +		/* If all offline, queue the work on an unbound CPU. */
> > +		if (unlikely(cpu > rnp->grphi))
> > +			cpu = WORK_CPU_UNBOUND;
> > +		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > +		preempt_enable();
> >  		rnp->exp_need_flush = true;
> >  	}
> >  
> > 



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
  2018-06-27 16:15             ` Paul E. McKenney
@ 2018-06-28 16:52               ` Paul E. McKenney
  0 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2018-06-28 16:52 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg, joel

On Wed, Jun 27, 2018 at 09:15:31AM -0700, Paul E. McKenney wrote:
> On Wed, Jun 27, 2018 at 10:42:01AM +0800, Boqun Feng wrote:
> > On Tue, Jun 26, 2018 at 12:27:47PM -0700, Paul E. McKenney wrote:
> > > On Tue, Jun 26, 2018 at 07:46:52PM +0800, Boqun Feng wrote:
> > > > On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> > > > > On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > > > > > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > > > > > +		preempt_disable();
> > > > > > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > > > > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > > > > > +				continue;
> > > > > > 
> > > > > > Create for_each_node_online_cpu() instead? Seems a bit pointless to
> > > > > > iterate possible mask only to then check it against the online mask.
> > > > > > Just iterate the online mask directly.
> > > > > > 
> > > > > > Or better yet, write this as:
> > > > > > 
> > > > > > 	preempt_disable();
> > > > > > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > > > > 	if (cpu > rnp->grphi)
> > > > > > 		cpu = WORK_CPU_UNBOUND;
> > > > > > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > > > > 	preempt_enable();
> > > > > > 
> > > > > > Which is what it appears to be doing.
> > > > > > 
> > > > > 
> > > > > Makes sense! Thanks ;-)
> > > > > 
> > > > > Applied this and running a TREE03 rcutorture. If all go well, I will
> > > > > send the updated patch.
> > > > > 
> > > > 
> > > > So the patch has passed a 30-minute TREE03 rcutorture run. Paul,
> > > > if it looks good, could you take it for your next spin or pull
> > > > request? Thanks.
> > > 
> > > I ended up with the following, mostly just rewording the comment and
> > > adding a one-liner on the change.  Does this work for you?
> > 
> > Looks good to me. Only one thing I think we need to modify a little,
> > please see below:
> > 
> > > 							Thanx, Paul
> > > 
> > > ------------------------------------------------------------------------
> > > 
> > > commit ef31fa78032536d594630d7bd315d3faf60d98ca
> > > Author: Boqun Feng <boqun.feng@gmail.com>
> > > Date:   Fri Jun 15 12:06:31 2018 -0700
> > > 
> > >     rcu: Make expedited GPs handle CPU 0 being offline
> > >     
> > >     Currently, the parallelized initialization of expedited grace periods uses
> > >     the workqueue associated with each rcu_node structure's ->grplo field.
> > >     This works fine unless that CPU is offline.  This commit therefore
> > >     uses the CPU corresponding to the lowest-numbered online CPU, or just
> > >     reports the quiescent states if there are no online CPUs on this rcu_node
> > >     structure.
> > 
> > Better to write "or just queue the work on WORK_CPU_UNBOUND if there are
> > no online CPUs on this rcu_node structure"? Because we currently don't
> > report the QS directly if all CPUs are offline.
> > 
> > Thoughts?
> 
> Any objections?  If I don't hear any by tomorrow morning (Pacific Time),
> I will make this change.

Hearing none, I have made this change.

							Thanx, Paul

> > Regards,
> > Boqun
> > 
> > >     
> > >     Note that this patch uses cpu_is_offline() instead of the usual
> > >     approach of checking bits in the rcu_node structure's ->qsmaskinitnext
> > >     field.  This is safe because preemption is disabled across both the
> > >     cpu_is_offline() check and the call to queue_work_on().
> > >     
> > >     Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > >     [ paulmck: Disable preemption to close offline race window. ]
> > >     Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > >     [ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]
> > > 
> > > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > > index c6385ee1af65..b3df3b770afb 100644
> > > --- a/kernel/rcu/tree_exp.h
> > > +++ b/kernel/rcu/tree_exp.h
> > > @@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
> > >  static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> > >  				     smp_call_func_t func)
> > >  {
> > > +	int cpu;
> > >  	struct rcu_node *rnp;
> > >  
> > >  	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
> > > @@ -493,7 +494,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> > >  			continue;
> > >  		}
> > >  		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
> > > -		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > +		preempt_disable();
> > > +		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > +		/* If all offline, queue the work on an unbound CPU. */
> > > +		if (unlikely(cpu > rnp->grphi))
> > > +			cpu = WORK_CPU_UNBOUND;
> > > +		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > +		preempt_enable();
> > >  		rnp->exp_need_flush = true;
> > >  	}
> > >  
> > > 
> 
> 



end of thread

Thread overview: 11+ messages
2018-06-25 22:43 [PATCH tip/core/rcu 0/2] Expedited grace-period changes for v4.19 Paul E. McKenney
2018-06-25 22:43 ` [PATCH tip/core/rcu 1/2] rcu: Make expedited grace period use direct call on last leaf Paul E. McKenney
2018-06-25 22:43 ` [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline Paul E. McKenney
2018-06-26  9:38   ` Peter Zijlstra
2018-06-26 10:44     ` Boqun Feng
2018-06-26 11:46       ` Boqun Feng
2018-06-26 12:31         ` Paul E. McKenney
2018-06-26 19:27         ` Paul E. McKenney
2018-06-27  2:42           ` Boqun Feng
2018-06-27 16:15             ` Paul E. McKenney
2018-06-28 16:52               ` Paul E. McKenney
