[tip/core/rcu,06/12] rcu: Make CPU-hotplug removal operations enable tick

Message ID 20191003013903.13079-6-paulmck@kernel.org
State New
Series
  • NO_HZ fixes for v5.5

Commit Message

Paul E. McKenney Oct. 3, 2019, 1:38 a.m. UTC
From: "Paul E. McKenney" <paulmck@linux.ibm.com>

CPU-hotplug removal operations run the multi_cpu_stop() function, which
relies on the scheduler to gain control from whatever is running on the
various online CPUs, including any nohz_full CPUs running long loops in
kernel-mode code.  Lack of the scheduler-clock interrupt on such CPUs
can delay multi_cpu_stop() for several minutes and can also result in
RCU CPU stall warnings.  This commit therefore causes CPU-hotplug removal
operations to enable the scheduler-clock interrupt on all online CPUs.

[ paulmck: Apply Joel Fernandes TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
 kernel/rcu/tree.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

Comments

Frederic Weisbecker Oct. 3, 2019, 2:34 p.m. UTC | #1
On Wed, Oct 02, 2019 at 06:38:57PM -0700, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@linux.ibm.com>
> 
> CPU-hotplug removal operations run the multi_cpu_stop() function, which
> relies on the scheduler to gain control from whatever is running on the
> various online CPUs, including any nohz_full CPUs running long loops in
> kernel-mode code.  Lack of the scheduler-clock interrupt on such CPUs
> can delay multi_cpu_stop() for several minutes and can also result in
> RCU CPU stall warnings.  This commit therefore causes CPU-hotplug removal
> operations to enable the scheduler-clock interrupt on all online CPUs.

So, like Peter said back then, there must be an issue in the scheduler
such as a missing or mishandled preemption point.

> 
> [ paulmck: Apply Joel Fernandes TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
> ---
>  kernel/rcu/tree.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index f708d54..74bf5c65 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2091,6 +2091,7 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
>   */
>  int rcutree_dead_cpu(unsigned int cpu)
>  {
> +	int c;
>  	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
>  
> @@ -2101,6 +2102,10 @@ int rcutree_dead_cpu(unsigned int cpu)
>  	rcu_boost_kthread_setaffinity(rnp, -1);
>  	/* Do any needed no-CB deferred wakeups from this CPU. */
>  	do_nocb_deferred_wakeup(per_cpu_ptr(&rcu_data, cpu));
> +
> +	// Stop-machine done, so allow nohz_full to disable tick.
> +	for_each_online_cpu(c)
> +		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);

Just use tick_dep_clear() without for_each_online_cpu().

>  	return 0;
>  }
>  
> @@ -3074,6 +3079,7 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoing)
>   */
>  int rcutree_online_cpu(unsigned int cpu)
>  {
> +	int c;
>  	unsigned long flags;
>  	struct rcu_data *rdp;
>  	struct rcu_node *rnp;
> @@ -3087,6 +3093,10 @@ int rcutree_online_cpu(unsigned int cpu)
>  		return 0; /* Too early in boot for scheduler work. */
>  	sync_sched_exp_online_cleanup(cpu);
>  	rcutree_affinity_setting(cpu, -1);
> +
> +	// Stop-machine done, so allow nohz_full to disable tick.
> +	for_each_online_cpu(c)
> +		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);

Same here.

>  	return 0;
>  }
>  
> @@ -3096,6 +3106,7 @@ int rcutree_online_cpu(unsigned int cpu)
>   */
>  int rcutree_offline_cpu(unsigned int cpu)
>  {
> +	int c;
>  	unsigned long flags;
>  	struct rcu_data *rdp;
>  	struct rcu_node *rnp;
> @@ -3107,6 +3118,10 @@ int rcutree_offline_cpu(unsigned int cpu)
>  	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
>  
>  	rcutree_affinity_setting(cpu, cpu);
> +
> +	// nohz_full CPUs need the tick for stop-machine to work quickly
> +	for_each_online_cpu(c)
> +		tick_dep_set_cpu(c, TICK_DEP_BIT_RCU);

And here you only need tick_dep_set() without for_each_online_cpu().

Thanks.

>  	return 0;
>  }
>  
> -- 
> 2.9.5
>
Paul E. McKenney Oct. 5, 2019, 5:17 p.m. UTC | #2
On Thu, Oct 03, 2019 at 04:34:09PM +0200, Frederic Weisbecker wrote:
> On Wed, Oct 02, 2019 at 06:38:57PM -0700, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@linux.ibm.com>
> > 
> > CPU-hotplug removal operations run the multi_cpu_stop() function, which
> > relies on the scheduler to gain control from whatever is running on the
> > various online CPUs, including any nohz_full CPUs running long loops in
> > kernel-mode code.  Lack of the scheduler-clock interrupt on such CPUs
> > can delay multi_cpu_stop() for several minutes and can also result in
> > RCU CPU stall warnings.  This commit therefore causes CPU-hotplug removal
> > operations to enable the scheduler-clock interrupt on all online CPUs.
> 
> So, like Peter said back then, there must be an issue in the scheduler
> such as a missing or mishandled preemption point.

Fair enough, but this is useful in the meantime.

> > [ paulmck: Apply Joel Fernandes TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
> > Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
> > ---
> >  kernel/rcu/tree.c | 15 +++++++++++++++
> >  1 file changed, 15 insertions(+)
> > 
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index f708d54..74bf5c65 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -2091,6 +2091,7 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
> >   */
> >  int rcutree_dead_cpu(unsigned int cpu)
> >  {
> > +	int c;
> >  	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> >  	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
> >  
> > @@ -2101,6 +2102,10 @@ int rcutree_dead_cpu(unsigned int cpu)
> >  	rcu_boost_kthread_setaffinity(rnp, -1);
> >  	/* Do any needed no-CB deferred wakeups from this CPU. */
> >  	do_nocb_deferred_wakeup(per_cpu_ptr(&rcu_data, cpu));
> > +
> > +	// Stop-machine done, so allow nohz_full to disable tick.
> > +	for_each_online_cpu(c)
> > +		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);
> 
> Just use tick_dep_clear() without for_each_online_cpu().
> 
> >  	return 0;
> >  }
> >  
> > @@ -3074,6 +3079,7 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoing)
> >   */
> >  int rcutree_online_cpu(unsigned int cpu)
> >  {
> > +	int c;
> >  	unsigned long flags;
> >  	struct rcu_data *rdp;
> >  	struct rcu_node *rnp;
> > @@ -3087,6 +3093,10 @@ int rcutree_online_cpu(unsigned int cpu)
> >  		return 0; /* Too early in boot for scheduler work. */
> >  	sync_sched_exp_online_cleanup(cpu);
> >  	rcutree_affinity_setting(cpu, -1);
> > +
> > +	// Stop-machine done, so allow nohz_full to disable tick.
> > +	for_each_online_cpu(c)
> > +		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);
> 
> Same here.
> 
> >  	return 0;
> >  }
> >  
> > @@ -3096,6 +3106,7 @@ int rcutree_online_cpu(unsigned int cpu)
> >   */
> >  int rcutree_offline_cpu(unsigned int cpu)
> >  {
> > +	int c;
> >  	unsigned long flags;
> >  	struct rcu_data *rdp;
> >  	struct rcu_node *rnp;
> > @@ -3107,6 +3118,10 @@ int rcutree_offline_cpu(unsigned int cpu)
> >  	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> >  
> >  	rcutree_affinity_setting(cpu, cpu);
> > +
> > +	// nohz_full CPUs need the tick for stop-machine to work quickly
> > +	for_each_online_cpu(c)
> > +		tick_dep_set_cpu(c, TICK_DEP_BIT_RCU);
> 
> And here you only need tick_dep_set() without for_each_online_cpu().

Thank you!  I applied all three simplifications.

							Thanx, Paul

> Thanks.
> 
> >  	return 0;
> >  }
> >  
> > -- 
> > 2.9.5
> >
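
Since all three simplifications were applied, each per-CPU loop in the posted patch collapses to a single call that sets or clears the dependency for all CPUs at once, and the local variable c goes away. A sketch of the adopted shape for the first hunk (the other two follow the same pattern, with tick_dep_set() in rcutree_offline_cpu()), not the exact applied commit:

```diff
 	// Stop-machine done, so allow nohz_full to disable tick.
-	for_each_online_cpu(c)
-		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);
+	tick_dep_clear(TICK_DEP_BIT_RCU);
```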

Patch

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f708d54..74bf5c65 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2091,6 +2091,7 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
  */
 int rcutree_dead_cpu(unsigned int cpu)
 {
+	int c;
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
 
@@ -2101,6 +2102,10 @@ int rcutree_dead_cpu(unsigned int cpu)
 	rcu_boost_kthread_setaffinity(rnp, -1);
 	/* Do any needed no-CB deferred wakeups from this CPU. */
 	do_nocb_deferred_wakeup(per_cpu_ptr(&rcu_data, cpu));
+
+	// Stop-machine done, so allow nohz_full to disable tick.
+	for_each_online_cpu(c)
+		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);
 	return 0;
 }
 
@@ -3074,6 +3079,7 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoing)
  */
 int rcutree_online_cpu(unsigned int cpu)
 {
+	int c;
 	unsigned long flags;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
@@ -3087,6 +3093,10 @@ int rcutree_online_cpu(unsigned int cpu)
 		return 0; /* Too early in boot for scheduler work. */
 	sync_sched_exp_online_cleanup(cpu);
 	rcutree_affinity_setting(cpu, -1);
+
+	// Stop-machine done, so allow nohz_full to disable tick.
+	for_each_online_cpu(c)
+		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);
 	return 0;
 }
 
@@ -3096,6 +3106,7 @@ int rcutree_online_cpu(unsigned int cpu)
  */
 int rcutree_offline_cpu(unsigned int cpu)
 {
+	int c;
 	unsigned long flags;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
@@ -3107,6 +3118,10 @@ int rcutree_offline_cpu(unsigned int cpu)
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 
 	rcutree_affinity_setting(cpu, cpu);
+
+	// nohz_full CPUs need the tick for stop-machine to work quickly
+	for_each_online_cpu(c)
+		tick_dep_set_cpu(c, TICK_DEP_BIT_RCU);
 	return 0;
 }