linux-kernel.vger.kernel.org archive mirror
* [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
@ 2022-11-25 15:54 Zqiang
  2022-11-26  2:27 ` Joel Fernandes
  2022-11-26  5:19 ` Neeraj Upadhyay
  0 siblings, 2 replies; 9+ messages in thread
From: Zqiang @ 2022-11-25 15:54 UTC (permalink / raw)
  To: paulmck, frederic, joel; +Cc: rcu, linux-kernel

Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
returns directly, which signals the end of the current grace period
and allows old data to be released. That is not accurate on an SMP
system: when num_online_cpus() equals one, another CPU that is going
offline (having already invoked __cpu_disable()) may still be inside
a rude RCU-Tasks read-side critical section holding the old data,
which can lead to memory corruption.

Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
around the num_online_cpus() check.

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
---
 kernel/rcu/tasks.h | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 4a991311be9b..08e72c6462d8 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
 {
 }
 
+static DEFINE_PER_CPU(struct work_struct, rude_work);
+
 // Wait for one rude RCU-tasks grace period.
 static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
 {
+	int cpu;
+	struct work_struct *work;
+
+	cpus_read_lock();
 	if (num_online_cpus() <= 1)
-		return;	// Fastpath for only one CPU.
+		goto end;// Fastpath for only one CPU.
 
 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
-	schedule_on_each_cpu(rcu_tasks_be_rude);
+	for_each_online_cpu(cpu) {
+		work = per_cpu_ptr(&rude_work, cpu);
+		INIT_WORK(work, rcu_tasks_be_rude);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_online_cpu(cpu)
+		flush_work(per_cpu_ptr(&rude_work, cpu));
+
+end:
+	cpus_read_unlock();
 }
 
 void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
-- 
2.25.1



* Re: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-25 15:54 [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug Zqiang
@ 2022-11-26  2:27 ` Joel Fernandes
  2022-11-26  2:43   ` Zhang, Qiang1
  2022-11-26  5:19 ` Neeraj Upadhyay
  1 sibling, 1 reply; 9+ messages in thread
From: Joel Fernandes @ 2022-11-26  2:27 UTC (permalink / raw)
  To: Zqiang; +Cc: paulmck, frederic, rcu, linux-kernel

On Fri, Nov 25, 2022 at 11:54:27PM +0800, Zqiang wrote:
> Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
> returns directly, which signals the end of the current grace period
> and allows old data to be released. That is not accurate on an SMP
> system: when num_online_cpus() equals one, another CPU that is going
> offline (having already invoked __cpu_disable()) may still be inside
> a rude RCU-Tasks read-side critical section holding the old data,
> which can lead to memory corruption.
> 
> Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
> around the num_online_cpus() check.

I am not sure this is needed. The only way the scenario you suggest can
happen is if the tasks-RCU-protected data is accessed after the
num_online_cpus() value is decremented for the CPU going offline.

However, the online-CPU count is changed by a CPU other than the
CPU going offline.

So there's no way the CPU going offline can run any code (it is already
dead courtesy of CPUHP_AP_IDLE_DEAD). So a corruption is impossible.

Or, did I miss something?

thanks,

 - Joel



> 
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> ---
>  kernel/rcu/tasks.h | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> index 4a991311be9b..08e72c6462d8 100644
> --- a/kernel/rcu/tasks.h
> +++ b/kernel/rcu/tasks.h
> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
>  {
>  }
>  
> +static DEFINE_PER_CPU(struct work_struct, rude_work);
> +
>  // Wait for one rude RCU-tasks grace period.
>  static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
>  {
> +	int cpu;
> +	struct work_struct *work;
> +
> +	cpus_read_lock();
>  	if (num_online_cpus() <= 1)
> -		return;	// Fastpath for only one CPU.
> +		goto end;// Fastpath for only one CPU.
>  
>  	rtp->n_ipis += cpumask_weight(cpu_online_mask);
> -	schedule_on_each_cpu(rcu_tasks_be_rude);
> +	for_each_online_cpu(cpu) {
> +		work = per_cpu_ptr(&rude_work, cpu);
> +		INIT_WORK(work, rcu_tasks_be_rude);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_online_cpu(cpu)
> +		flush_work(per_cpu_ptr(&rude_work, cpu));
> +
> +end:
> +	cpus_read_unlock();
>  }
>  
>  void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
> -- 
> 2.25.1
> 


* RE: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-26  2:27 ` Joel Fernandes
@ 2022-11-26  2:43   ` Zhang, Qiang1
  2022-11-26  4:34     ` Joel Fernandes
  0 siblings, 1 reply; 9+ messages in thread
From: Zhang, Qiang1 @ 2022-11-26  2:43 UTC (permalink / raw)
  To: Joel Fernandes; +Cc: paulmck, frederic, rcu, linux-kernel

On Fri, Nov 25, 2022 at 11:54:27PM +0800, Zqiang wrote:
> Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
> returns directly, which signals the end of the current grace period
> and allows old data to be released. That is not accurate on an SMP
> system: when num_online_cpus() equals one, another CPU that is going
> offline (having already invoked __cpu_disable()) may still be inside
> a rude RCU-Tasks read-side critical section holding the old data,
> which can lead to memory corruption.
> 
> Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
> around the num_online_cpus() check.


>I am not sure if this is needed. The only way what you suggest can happen is
>if the tasks-RCU protected data is accessed after the num_online_cpus() value is
>decremented on the CPU going offline.
>
>However, the number of online CPUs value is changed on a CPU other than the
>CPU going offline.
>
>So there's no way the CPU going offline can run any code (it is already
>dead courtesy of CPUHP_AP_IDLE_DEAD). So a corruption is impossible.
>
>Or, did I miss something?

Hi Joel,

Suppose the system has two CPUs:

	CPU0                                                                     CPU1
					     cpu_stopper_thread
                                                                                  take_cpu_down
						    __cpu_disable
							dec __num_online_cpus
 rcu_tasks_rude_wait_gp                                      cpuhp_invoke_callback
	num_online_cpus() == 1
		return;

When __num_online_cpus == 1, CPU1 is not yet completely offline.

Thanks
Zqiang

>
>thanks,
>
> - Joel



> 
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> ---
>  kernel/rcu/tasks.h | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> index 4a991311be9b..08e72c6462d8 100644
> --- a/kernel/rcu/tasks.h
> +++ b/kernel/rcu/tasks.h
> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
>  {
>  }
>  
> +static DEFINE_PER_CPU(struct work_struct, rude_work);
> +
>  // Wait for one rude RCU-tasks grace period.
>  static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
>  {
> +	int cpu;
> +	struct work_struct *work;
> +
> +	cpus_read_lock();
>  	if (num_online_cpus() <= 1)
> -		return;	// Fastpath for only one CPU.
> +		goto end;// Fastpath for only one CPU.
>  
>  	rtp->n_ipis += cpumask_weight(cpu_online_mask);
> -	schedule_on_each_cpu(rcu_tasks_be_rude);
> +	for_each_online_cpu(cpu) {
> +		work = per_cpu_ptr(&rude_work, cpu);
> +		INIT_WORK(work, rcu_tasks_be_rude);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_online_cpu(cpu)
> +		flush_work(per_cpu_ptr(&rude_work, cpu));
> +
> +end:
> +	cpus_read_unlock();
>  }
>  
>  void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
> -- 
> 2.25.1
> 


* Re: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-26  2:43   ` Zhang, Qiang1
@ 2022-11-26  4:34     ` Joel Fernandes
  2022-11-26  5:31       ` Neeraj Upadhyay
  0 siblings, 1 reply; 9+ messages in thread
From: Joel Fernandes @ 2022-11-26  4:34 UTC (permalink / raw)
  To: Zhang, Qiang1; +Cc: paulmck, frederic, neeraj.iitr10, rcu, linux-kernel

On Sat, Nov 26, 2022 at 02:43:59AM +0000, Zhang, Qiang1 wrote:
> On Fri, Nov 25, 2022 at 11:54:27PM +0800, Zqiang wrote:
> > Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
> > returns directly, which signals the end of the current grace period
> > and allows old data to be released. That is not accurate on an SMP
> > system: when num_online_cpus() equals one, another CPU that is going
> > offline (having already invoked __cpu_disable()) may still be inside
> > a rude RCU-Tasks read-side critical section holding the old data,
> > which can lead to memory corruption.
> > 
> > Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
> > around the num_online_cpus() check.
> 
> 
> >I am not sure if this is needed. The only way what you suggest can happen is
> >if the tasks-RCU protected data is accessed after the num_online_cpus() value is
> >decremented on the CPU going offline.
> >
> >However, the number of online CPUs value is changed on a CPU other than the
> >CPU going offline.
> >
> >So there's no way the CPU going offline can run any code (it is already
> >dead courtesy of CPUHP_AP_IDLE_DEAD). So a corruption is impossible.
> >
> >Or, did I miss something?
> 
> Hi joel
> 
> Suppose the system has two cpus
> 
> 	CPU0                                                                     CPU1
> 					     cpu_stopper_thread
>                                                                                   take_cpu_down
> 						    __cpu_disable
> 							dec __num_online_cpus 
>  rcu_tasks_rude_wait_gp                                      cpuhp_invoke_callback

Thanks for clarifying!

You are right, this can be a problem for anything running in stop machine
on the CPU going offline between CPUHP_AP_ONLINE and CPUHP_AP_IDLE_DEAD,
during which the code executing on that CPU is not accounted for by
num_online_cpus().

Actually, Neeraj found a similar issue two years ago and, instead of
taking the hotplug lock, added a new field to rcu_state to track the
number of CPUs.

See:
https://lore.kernel.org/r/20200923210313.GS29330@paulmck-ThinkPad-P72
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2317853.html

Could we do something similar?

Of note is the comment in that thread:
  Actually blocking CPU hotplug would not only result in excessive overhead,
  but would also unnecessarily impede CPU-hotplug operations.

Neeraj is also on the thread and could chime in.

Thanks,

 - Joel


> 	num_online_cpus() == 1
> 		return;
>         
> When __num_online_cpus == 1, CPU1 is not yet completely offline.
> 
> Thanks
> Zqiang
> 
> >
> >thanks,
> >
> > - Joel
> 
> 
> 
> > 
> > Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> > ---
> >  kernel/rcu/tasks.h | 20 ++++++++++++++++++--
> >  1 file changed, 18 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > index 4a991311be9b..08e72c6462d8 100644
> > --- a/kernel/rcu/tasks.h
> > +++ b/kernel/rcu/tasks.h
> > @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
> >  {
> >  }
> >  
> > +static DEFINE_PER_CPU(struct work_struct, rude_work);
> > +
> >  // Wait for one rude RCU-tasks grace period.
> >  static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
> >  {
> > +	int cpu;
> > +	struct work_struct *work;
> > +
> > +	cpus_read_lock();
> >  	if (num_online_cpus() <= 1)
> > -		return;	// Fastpath for only one CPU.
> > +		goto end;// Fastpath for only one CPU.
> >  
> >  	rtp->n_ipis += cpumask_weight(cpu_online_mask);
> > -	schedule_on_each_cpu(rcu_tasks_be_rude);
> > +	for_each_online_cpu(cpu) {
> > +		work = per_cpu_ptr(&rude_work, cpu);
> > +		INIT_WORK(work, rcu_tasks_be_rude);
> > +		schedule_work_on(cpu, work);
> > +	}
> > +
> > +	for_each_online_cpu(cpu)
> > +		flush_work(per_cpu_ptr(&rude_work, cpu));
> > +
> > +end:
> > +	cpus_read_unlock();
> >  }
> >  
> >  void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
> > -- 
> > 2.25.1
> > 


* Re: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-25 15:54 [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug Zqiang
  2022-11-26  2:27 ` Joel Fernandes
@ 2022-11-26  5:19 ` Neeraj Upadhyay
  2022-11-26  5:52   ` Zhang, Qiang1
  1 sibling, 1 reply; 9+ messages in thread
From: Neeraj Upadhyay @ 2022-11-26  5:19 UTC (permalink / raw)
  To: Zqiang, paulmck, frederic, joel; +Cc: rcu, linux-kernel

Hi Zqiang,

On 11/25/2022 9:24 PM, Zqiang wrote:
> Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
> returns directly, which signals the end of the current grace period
> and allows old data to be released. That is not accurate on an SMP
> system: when num_online_cpus() equals one, another CPU that is going
> offline (having already invoked __cpu_disable()) may still be inside
> a rude RCU-Tasks read-side critical section holding the old data,
> which can lead to memory corruption.
> 

Was this race seen in your testing? For the outgoing CPU, once that
CPU marks itself offline (and decrements __num_online_cpus), do we
still have tracing active on that CPU, such that
synchronize_rcu_tasks_rude() not waiting for it could potentially
lead to memory corruption?

As per my understanding, given that the outgoing/incoming CPU
decrements/increments the __num_online_cpus value, and
num_online_cpus() is a plain read, the problem could happen when the
incoming CPU updates the __num_online_cpus value but
rcu_tasks_rude_wait_gp()'s num_online_cpus() call does not observe
the increment. So, cpus_read_lock/unlock() seems to be required to
handle this case.



Thanks
Neeraj

> Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
> around the num_online_cpus() check.
> 
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> ---
>   kernel/rcu/tasks.h | 20 ++++++++++++++++++--
>   1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> index 4a991311be9b..08e72c6462d8 100644
> --- a/kernel/rcu/tasks.h
> +++ b/kernel/rcu/tasks.h
> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
>   {
>   }
>   
> +static DEFINE_PER_CPU(struct work_struct, rude_work);
> +
>   // Wait for one rude RCU-tasks grace period.
>   static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
>   {
> +	int cpu;
> +	struct work_struct *work;
> +
> +	cpus_read_lock();
>   	if (num_online_cpus() <= 1)
> -		return;	// Fastpath for only one CPU.
> +		goto end;// Fastpath for only one CPU.
>   
>   	rtp->n_ipis += cpumask_weight(cpu_online_mask);
> -	schedule_on_each_cpu(rcu_tasks_be_rude);
> +	for_each_online_cpu(cpu) {
> +		work = per_cpu_ptr(&rude_work, cpu);
> +		INIT_WORK(work, rcu_tasks_be_rude);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_online_cpu(cpu)
> +		flush_work(per_cpu_ptr(&rude_work, cpu));
> +
> +end:
> +	cpus_read_unlock();
>   }
>   
>   void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);


* Re: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-26  4:34     ` Joel Fernandes
@ 2022-11-26  5:31       ` Neeraj Upadhyay
  0 siblings, 0 replies; 9+ messages in thread
From: Neeraj Upadhyay @ 2022-11-26  5:31 UTC (permalink / raw)
  To: Joel Fernandes, Zhang, Qiang1
  Cc: paulmck, frederic, neeraj.iitr10, rcu, linux-kernel

Hi,


On 11/26/2022 10:04 AM, Joel Fernandes wrote:
> On Sat, Nov 26, 2022 at 02:43:59AM +0000, Zhang, Qiang1 wrote:
>> On Fri, Nov 25, 2022 at 11:54:27PM +0800, Zqiang wrote:
>>> Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
>>> returns directly, which signals the end of the current grace period
>>> and allows old data to be released. That is not accurate on an SMP
>>> system: when num_online_cpus() equals one, another CPU that is going
>>> offline (having already invoked __cpu_disable()) may still be inside
>>> a rude RCU-Tasks read-side critical section holding the old data,
>>> which can lead to memory corruption.
>>>
>>> Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
>>> around the num_online_cpus() check.
>>
>>
>>> I am not sure if this is needed. The only way what you suggest can happen is
>>> if the tasks-RCU protected data is accessed after the num_online_cpus() value is
>>> decremented on the CPU going offline.
>>>
>>> However, the number of online CPUs value is changed on a CPU other than the
>>> CPU going offline.
>>>
>>> So there's no way the CPU going offline can run any code (it is already
>>> dead courtesy of CPUHP_AP_IDLE_DEAD). So a corruption is impossible.
>>>
>>> Or, did I miss something?
>>
>> Hi joel
>>
>> Suppose the system has two cpus
>>
>> 	CPU0                                                                     CPU1
>> 					     cpu_stopper_thread
>>                                                                                    take_cpu_down
>> 						    __cpu_disable
>> 							dec __num_online_cpus
>>   rcu_tasks_rude_wait_gp                                      cpuhp_invoke_callback
> 
> Thanks for clarifying!
> 
> You are right, this can be a problem for anything in the stop machine on the
> CPU going offline from CPUHP_AP_ONLINE to CPUHP_AP_IDLE_DEAD, during which
> the code execute on that CPU is not accounted for in num_online_cpus().
> 
> Actually Neeraj found a similar issue 2 years ago and instead of hotplug
> lock, he added a new attribute to rcu_state to track number of CPUs.
> 
> See:
> https://lore.kernel.org/r/20200923210313.GS29330@paulmck-ThinkPad-P72
> https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2317853.html
> 
> Could we do something similar?
> 
> Off note is the comment in that thread:
>    Actually blocking CPU hotplug would not only result in excessive overhead,
>    but would also unnecessarily impede CPU-hotplug operations.
> 
> Neeraj is also on the thread and could chime in.
> 

I agree that using a counter that is updated on the control CPU
(after the CPU is dead in the offline case, and before the CPU starts
executing kernel code in the online case) optimizes the fast path.
However, given that in the common case (num_online_cpus() > 1) we
would still need to acquire cpus_read_lock(), I am not sure how much
actual impact that optimization will have.


Thanks
Neeraj

> Thanks,
> 
>   - Joel
> 
> 
>> 	num_online_cpus() == 1
>> 		return;
>>          
>> When __num_online_cpus == 1, CPU1 is not yet completely offline.
>>
>> Thanks
>> Zqiang
>>
>>>
>>> thanks,
>>>
>>> - Joel
>>
>>
>>
>>>
>>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>>> ---
>>>   kernel/rcu/tasks.h | 20 ++++++++++++++++++--
>>>   1 file changed, 18 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
>>> index 4a991311be9b..08e72c6462d8 100644
>>> --- a/kernel/rcu/tasks.h
>>> +++ b/kernel/rcu/tasks.h
>>> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
>>>   {
>>>   }
>>>   
>>> +static DEFINE_PER_CPU(struct work_struct, rude_work);
>>> +
>>>   // Wait for one rude RCU-tasks grace period.
>>>   static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
>>>   {
>>> +	int cpu;
>>> +	struct work_struct *work;
>>> +
>>> +	cpus_read_lock();
>>>   	if (num_online_cpus() <= 1)
>>> -		return;	// Fastpath for only one CPU.
>>> +		goto end;// Fastpath for only one CPU.
>>>   
>>>   	rtp->n_ipis += cpumask_weight(cpu_online_mask);
>>> -	schedule_on_each_cpu(rcu_tasks_be_rude);
>>> +	for_each_online_cpu(cpu) {
>>> +		work = per_cpu_ptr(&rude_work, cpu);
>>> +		INIT_WORK(work, rcu_tasks_be_rude);
>>> +		schedule_work_on(cpu, work);
>>> +	}
>>> +
>>> +	for_each_online_cpu(cpu)
>>> +		flush_work(per_cpu_ptr(&rude_work, cpu));
>>> +
>>> +end:
>>> +	cpus_read_unlock();
>>>   }
>>>   
>>>   void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
>>> -- 
>>> 2.25.1
>>>


* RE: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-26  5:19 ` Neeraj Upadhyay
@ 2022-11-26  5:52   ` Zhang, Qiang1
  2022-11-26 14:42     ` Joel Fernandes
  0 siblings, 1 reply; 9+ messages in thread
From: Zhang, Qiang1 @ 2022-11-26  5:52 UTC (permalink / raw)
  To: Neeraj Upadhyay, paulmck, frederic, joel; +Cc: rcu, linux-kernel

>Hi Zqiang,
>
>On 11/25/2022 9:24 PM, Zqiang wrote:
> Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
> returns directly, which signals the end of the current grace period
> and allows old data to be released. That is not accurate on an SMP
> system: when num_online_cpus() equals one, another CPU that is going
> offline (having already invoked __cpu_disable()) may still be inside
> a rude RCU-Tasks read-side critical section holding the old data,
> which can lead to memory corruption.
> 
>
>
>Was this race seen in your testing? For the outgoing CPU, once that
>CPU marks itself offline (and decrements __num_online_cpus), do we
>have tracing active on that CPU, and synchronize_rcu_tasks_rude()
>not waiting for it could potentially lead to memory corruption?

Hi Neeraj,

Indeed, I did not see this race in an actual production environment.
Maybe my commit message is not accurate enough; the scenario is the
one I described with Joel.

If, in cpuhp_invoke_callback, some callback is inside a rude
RCU-Tasks read-side critical section and still holding old data,
synchronize_rcu_tasks_rude() may meanwhile stop waiting and allow the
old data to be released.

Suppose the system has two CPUs:

	CPU0                                                                     CPU1
					     cpu_stopper_thread
                                                                                  take_cpu_down
						    __cpu_disable
							dec __num_online_cpus
 rcu_tasks_rude_wait_gp                                      cpuhp_invoke_callback
	num_online_cpus() == 1
		return;

When __num_online_cpus == 1, CPU1 is not yet completely offline.


>
>As per my understanding, given that outgoing/incoming CPU 
>decrements/increments the __num_online_cpus value, and num_online_cpus()
>is a plain read, problem could happen when the incoming CPU updates the
>__num_online_cpus  value, however, rcu_tasks_rude_wait_gp()'s 
>num_online_cpus() call didn't observe the increment. So, 
>cpus_read_lock/unlock() seems to be required to handle this case.

Yes, the same problem can be encountered when a CPU is coming online,
because __num_online_cpus is accessed without cpus_read_lock/unlock()
protection in rcu_tasks_rude_wait_gp().

Should I update the commit message and send a v2?

Thanks
Zqiang

>
>
>Thanks
>Neeraj
>
> Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
> around the num_online_cpus() check.
> 
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> ---
>   kernel/rcu/tasks.h | 20 ++++++++++++++++++--
>   1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> index 4a991311be9b..08e72c6462d8 100644
> --- a/kernel/rcu/tasks.h
> +++ b/kernel/rcu/tasks.h
> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
>   {
>   }
>   
> +static DEFINE_PER_CPU(struct work_struct, rude_work);
> +
>   // Wait for one rude RCU-tasks grace period.
>   static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
>   {
> +	int cpu;
> +	struct work_struct *work;
> +
> +	cpus_read_lock();
>   	if (num_online_cpus() <= 1)
> -		return;	// Fastpath for only one CPU.
> +		goto end;// Fastpath for only one CPU.
>   
>   	rtp->n_ipis += cpumask_weight(cpu_online_mask);
> -	schedule_on_each_cpu(rcu_tasks_be_rude);
> +	for_each_online_cpu(cpu) {
> +		work = per_cpu_ptr(&rude_work, cpu);
> +		INIT_WORK(work, rcu_tasks_be_rude);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_online_cpu(cpu)
> +		flush_work(per_cpu_ptr(&rude_work, cpu));
> +
> +end:
> +	cpus_read_unlock();
>   }
>   
>   void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);


* Re: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-26  5:52   ` Zhang, Qiang1
@ 2022-11-26 14:42     ` Joel Fernandes
  2022-11-27  9:48       ` Zhang, Qiang1
  0 siblings, 1 reply; 9+ messages in thread
From: Joel Fernandes @ 2022-11-26 14:42 UTC (permalink / raw)
  To: Zhang, Qiang1; +Cc: Neeraj Upadhyay, paulmck, frederic, rcu, linux-kernel



> On Nov 26, 2022, at 12:52 AM, Zhang, Qiang1 <qiang1.zhang@intel.com> wrote:
> 
> 
>> 
>> Hi Zqiang,
>> 
>> On 11/25/2022 9:24 PM, Zqiang wrote:
>> Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
>> returns directly, which signals the end of the current grace period
>> and allows old data to be released. That is not accurate on an SMP
>> system: when num_online_cpus() equals one, another CPU that is going
>> offline (having already invoked __cpu_disable()) may still be inside
>> a rude RCU-Tasks read-side critical section holding the old data,
>> which can lead to memory corruption.
>> 
>> 
>> 
>> Was this race seen in your testing? For the outgoing CPU, once that
>> CPU marks itself offline (and decrements __num_online_cpus), do we
>> have tracing active on that CPU, and synchronize_rcu_tasks_rude()
>> not waiting for it could potentially lead to memory corruption?
> 
> Hi Neeraj
> 
> Indeed, I didn't see race in the actual production environment,
> Maybe my commit information description is not accurate enough,
> like the scene I described with joel.
> 
> If in cpuhp_invoke_callback, some callback is in rude rcu-tasks read ctrical section,
> and still holding old data, but in this time, synchronize_rcu_tasks_rude() not waiting,
> and release old data.
> 
> Suppose the system has two cpus
> 
>    CPU0                                                                     CPU1
>                         cpu_stopper_thread
>                                                                                  take_cpu_down
>                            __cpu_disable
>                            dec __num_online_cpus 
> rcu_tasks_rude_wait_gp                                      cpuhp_invoke_callback  
>    num_online_cpus() == 1
>        return;
> 
> When __num_online_cpus == 1, CPU1 is not yet completely offline.

Agreed with your and Neeraj's assessment.

>> 
>> As per my understanding, given that outgoing/incoming CPU 
>> decrements/increments the __num_online_cpus value, and num_online_cpus()
>> is a plain read, problem could happen when the incoming CPU updates the
>> __num_online_cpus  value, however, rcu_tasks_rude_wait_gp()'s 
>> num_online_cpus() call didn't observe the increment. So, 
>> cpus_read_lock/unlock() seems to be required to handle this case.
> 
> Yes, the same problem will be encountered when going online, due to
> access  __num_online_cpus  that is not protected by cpus_read_lock/unlock() 
> in rcu_tasks_rude_wait_gp().
> 
> Do I need to change the commit information to send v2?

I think so. If you could add the CPU sequence diagram you mentioned, that would be great.

Also, I suggest adding more details about which specific parts of the hotplug process (the ones that run in stop machine only) are susceptible to the issue. That is, only those hotplug callbacks that run in stop machine may have their trampolines prematurely freed from another CPU, right?

Thanks!

  - Joel



> 
> Thanks
> Zqiang
> 
>> 
>> 
>> Thanks
>> Neeraj
>> 
>> Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
>> around the num_online_cpus() check.
>> 
>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>> ---
>>  kernel/rcu/tasks.h | 20 ++++++++++++++++++--
>>  1 file changed, 18 insertions(+), 2 deletions(-)
>> 
>> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
>> index 4a991311be9b..08e72c6462d8 100644
>> --- a/kernel/rcu/tasks.h
>> +++ b/kernel/rcu/tasks.h
>> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
>>  {
>>  }
>> 
>> +static DEFINE_PER_CPU(struct work_struct, rude_work);
>> +
>>  // Wait for one rude RCU-tasks grace period.
>>  static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
>>  {
>> +    int cpu;
>> +    struct work_struct *work;
>> +
>> +    cpus_read_lock();
>>      if (num_online_cpus() <= 1)
>> -        return;    // Fastpath for only one CPU.
>> +        goto end;// Fastpath for only one CPU.
>> 
>>      rtp->n_ipis += cpumask_weight(cpu_online_mask);
>> -    schedule_on_each_cpu(rcu_tasks_be_rude);
>> +    for_each_online_cpu(cpu) {
>> +        work = per_cpu_ptr(&rude_work, cpu);
>> +        INIT_WORK(work, rcu_tasks_be_rude);
>> +        schedule_work_on(cpu, work);
>> +    }
>> +
>> +    for_each_online_cpu(cpu)
>> +        flush_work(per_cpu_ptr(&rude_work, cpu));
>> +
>> +end:
>> +    cpus_read_unlock();
>>  }
>> 
>>  void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);


* RE: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
  2022-11-26 14:42     ` Joel Fernandes
@ 2022-11-27  9:48       ` Zhang, Qiang1
  0 siblings, 0 replies; 9+ messages in thread
From: Zhang, Qiang1 @ 2022-11-27  9:48 UTC (permalink / raw)
  To: Joel Fernandes; +Cc: Neeraj Upadhyay, paulmck, frederic, rcu, linux-kernel


> On Nov 26, 2022, at 12:52 AM, Zhang, Qiang1 <qiang1.zhang@intel.com> wrote:
> 
> 
>> 
>> Hi Zqiang,
>> 
>> On 11/25/2022 9:24 PM, Zqiang wrote:
>> Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp()
>> returns directly, which signals the end of the current grace period
>> and allows old data to be released. That is not accurate on an SMP
>> system: when num_online_cpus() equals one, another CPU that is going
>> offline (having already invoked __cpu_disable()) may still be inside
>> a rude RCU-Tasks read-side critical section holding the old data,
>> which can lead to memory corruption.
>> 
>> 
>> 
>> Was this race seen in your testing? For the outgoing CPU, once that 
>> CPU marks itself offline (and decrements __num_online_cpus), do we 
>> have tracing active on that CPU, and synchronize_rcu_tasks_rude() not 
>> waiting for it could potentially lead to memory corruption?
> 
> Hi Neeraj
> 
> Indeed, I didn't see race in the actual production environment, Maybe 
> my commit information description is not accurate enough, like the 
> scene I described with joel.
> 
> If in cpuhp_invoke_callback, some callback is in rude rcu-tasks read 
> ctrical section, and still holding old data, but in this time, 
> synchronize_rcu_tasks_rude() not waiting, and release old data.
> 
> Suppose the system has two cpus
> 
>    CPU0                                                                     CPU1
>                         cpu_stopper_thread
>                                                                                  take_cpu_down
>                            __cpu_disable
>                            dec __num_online_cpus 
> rcu_tasks_rude_wait_gp                                      cpuhp_invoke_callback  
>    num_online_cpus() == 1
>        return;
> 
> When __num_online_cpus == 1, CPU1 is not yet completely offline.
>
>Agreed with your and Neeraj's assessment.
>
>> 
>> As per my understanding, given that outgoing/incoming CPU 
>> decrements/increments the __num_online_cpus value, and 
>> num_online_cpus() is a plain read, problem could happen when the 
>> incoming CPU updates the __num_online_cpus  value, however, 
>> rcu_tasks_rude_wait_gp()'s
>> num_online_cpus() call didn't observe the increment. So,
>> cpus_read_lock/unlock() seems to be required to handle this case.
> 
> Yes, the same problem will be encountered when going online, due to 
> access  __num_online_cpus  that is not protected by 
> cpus_read_lock/unlock() in rcu_tasks_rude_wait_gp().
> 
> Do I need to change the commit information to send v2?
>
>I think so. If you could add the CPU sequence diagram you mentioned, that would be great.
>
>Also, I suggest adding more details about which specific parts of the hotplug process (the ones that run in stop machine only) are susceptible to the issue. That is, only those hotplug callbacks that run in stop machine may have their trampolines prematurely freed from another CPU, right?

Yes, your description is correct. I will resend.

Thanks
Zqiang

>
>Thanks!
>
>  - Joel
>
>
>
> 
> Thanks
> Zqiang
> 
>> 
>> 
>> Thanks
>> Neeraj
>> 
>> Therefore, this commit adds cpus_read_lock()/cpus_read_unlock()
>> around the num_online_cpus() check.
>> 
>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>> ---
>>  kernel/rcu/tasks.h | 20 ++++++++++++++++++--
>>  1 file changed, 18 insertions(+), 2 deletions(-)
>> 
>> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
>> index 4a991311be9b..08e72c6462d8 100644
>> --- a/kernel/rcu/tasks.h
>> +++ b/kernel/rcu/tasks.h
>> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
>>  {
>>  }
>> 
>> +static DEFINE_PER_CPU(struct work_struct, rude_work);
>> +
>>  // Wait for one rude RCU-tasks grace period.
>>  static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
>>  {
>> +    int cpu;
>> +    struct work_struct *work;
>> +
>> +    cpus_read_lock();
>>      if (num_online_cpus() <= 1)
>> -        return;    // Fastpath for only one CPU.
>> +        goto end;// Fastpath for only one CPU.
>> 
>>      rtp->n_ipis += cpumask_weight(cpu_online_mask);
>> -    schedule_on_each_cpu(rcu_tasks_be_rude);
>> +    for_each_online_cpu(cpu) {
>> +        work = per_cpu_ptr(&rude_work, cpu);
>> +        INIT_WORK(work, rcu_tasks_be_rude);
>> +        schedule_work_on(cpu, work);
>> +    }
>> +
>> +    for_each_online_cpu(cpu)
>> +        flush_work(per_cpu_ptr(&rude_work, cpu));
>> +
>> +end:
>> +    cpus_read_unlock();
>>  }
>> 
>>  void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);

