* [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
@ 2022-05-19 12:53 ` Jing-Ting Wu
  0 siblings, 0 replies; 33+ messages in thread
From: Jing-Ting Wu @ 2022-05-19 12:53 UTC (permalink / raw)
  To: Peter Zijlstra, Daniel Bristot de Oliveira, Valentin Schneider, tglx
  Cc: wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Christian Brauner

Hi all


There is a race condition between the CPU hotplug-off flow and
__sched_setscheduler() which causes a hang in the CPU hotplug-off
flow.

Syndrome:
During the hotplug-off flow on CPU_A, the flow blocks in the
CPUHP_AP_SCHED_WAIT_EMPTY state once it enters rcuwait_wait_event().
At that point CPU_A stays idle and never wakes up the hotplug thread
(cpuhp/A) to continue the CPU_A hotplug-off flow.
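
For reference, the wait in question looks roughly like this (a
paraphrased sketch of balance_hotplug_wait() in kernel/sched/core.c;
the exact wait condition may differ between kernel versions):

static void balance_hotplug_wait(void)
{
	struct rq *rq = this_rq();

	/* Sleeps until balance_push() on this CPU calls
	 * rcuwait_wake_up(&rq->hotplug_wait). */
	rcuwait_wait_event(&rq->hotplug_wait,
			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
			   TASK_UNINTERRUPTIBLE);
}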

Root cause:
The balance_push() callback is stolen by CPU_B while it executes
__sched_setscheduler(). That callback should run in the idle task of
CPU_A and wake up the hotplug thread (cpuhp/A) by calling
rcuwait_wake_up(&rq->hotplug_wait).

Racing flow as below:
CPU_A is about to be hotplugged off and sets rq->balance_callback =
&balance_push_callback, so CPU_A should use balance_push() to push
its tasks out and release the rq lock.
But if CPU_B runs __sched_setscheduler() before CPU_A switches to
swapper/A, CPU_B uses splice_balance_callbacks() to steal
rq->balance_callback and sets CPU_A's rq->balance_callback = NULL.
Because rq->balance_callback is NULL, swapper/A cannot run
balance_push() on CPU_A; and because rq (rq_A) != this_rq (rq_B),
the stolen callback, when CPU_B runs it, never reaches
rcuwait_wake_up().


Racing flow:
-----------------------------------------------------------------------
CPU_A (Hotplug down)                          
-----------------------------------------------------------------------
State: CPUHP_AP_ACTIVE
sched_cpu_deactivate()
-> balance_push_set(cpu, true)
   -> rq_A->balance_callback = &balance_push_callback
      => CPU_A sets rq_A's balance_callback here.
                                
State: CPUHP_AP_SCHED_WAIT_EMPTY
sched_cpu_wait_empty()
-> balance_hotplug_wait()
   -> rcuwait_wait_event(&rq->hotplug_wait)
      => CPU_A loops here, pushing tasks out of CPU_A,
         until swapper/A wakes up cpuhp/A.
      -> schedule()
         -> rq_lock(rq, &rf)
            -> context_switch() 
              -> finish_lock_switch()
                 -> __balance_callbacks(rq_A)
                    -> do_balance_callbacks(rq,
                                      splice_balance_callbacks(rq))
                       -> balance_push(rq_A)
                 -> raw_spin_rq_unlock_irq(rq_A)
                    => CPU_A release rq_A lock.

CPU_A releases the rq_A lock, so CPU_B can take it.

-----------------------------------------------------------------------
CPU_B (runs __sched_setscheduler(), sets rq_A->balance_callback = NULL)
-----------------------------------------------------------------------
__sched_setscheduler(p) => task_rq(p) is rq_A
-> task_rq_lock(rq_A)
  -> splice_balance_callbacks(rq_A)
     -> if (head)
           rq_A->balance_callback = NULL
           => CPU_B steals rq_A->balance_callback.
     -> task_rq_unlock(rq_A)


CPU_B releases the rq_A lock; CPU_A can take it and switch to
swapper/A.

-----------------------------------------------------------------------
CPU_A (Hotplug down)                           
-----------------------------------------------------------------------
switch to swapper/A:
schedule()
-> rq_lock(rq, &rf)
   -> context_switch()                                     
      -> finish_lock_switch()
         -> __balance_callbacks(rq_A)
            -> do_balance_callbacks(rq, NULL) 
               => Because rq_A->balance_callback == NULL,
                  swapper/A never runs balance_push(), and thus
                  never calls rcuwait_wake_up().
         -> raw_spin_rq_unlock_irq(rq_A)   

-----------------------------------------------------------------------
CPU_B (continues __sched_setscheduler(), runs the stolen callback)
-----------------------------------------------------------------------
balance_callbacks(rq_A, head)
-> balance_push(rq_A)
   -> rq->balance_callback = &balance_push_callback;
     -> if (rq != this_rq())
           return;
           => Because rq == rq_A but this_rq == rq_B,
              balance_push() returns early and never calls
              rcuwait_wake_up().

-----------------------------------------------------------------------
CPU_A (Hotplug down)                           
-----------------------------------------------------------------------
rcuwait_wait_event(&rq->hotplug_wait)
=> Since rcuwait_wake_up() is never called, the hotplug thread
   (cpuhp/A) is never woken, and the system never exits the wait
   loop in rcuwait_wait_event().
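
For reference, here is a paraphrased sketch of the two pieces of
unpatched code the flow above refers to (based on kernel/sched/core.c;
simplified, and details vary between kernel versions):

/*
 * Sketch of the unpatched splice: it unconditionally detaches the
 * callback list, which is what lets CPU_B steal
 * &balance_push_callback across the lock-break.
 */
static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
{
	struct callback_head *head = rq->balance_callback;

	lockdep_assert_rq_held(rq);
	if (head)
		rq->balance_callback = NULL;

	return head;
}

/*
 * Sketch of balance_push(): only an invocation on the outgoing CPU
 * itself can reach the rcuwait_wake_up(); when CPU_B runs the stolen
 * callback, the rq != this_rq() test bails out before the wake-up.
 */
static void balance_push(struct rq *rq)
{
	struct task_struct *push_task = rq->curr;

	/* Re-arm: the callback puts itself back on the list each run. */
	rq->balance_callback = &balance_push_callback;

	if (!cpu_dying(rq->cpu) || rq != this_rq())
		return;

	if (kthread_is_per_cpu(push_task) ||
	    is_migration_disabled(push_task)) {
		/* Idle task on the dying CPU: wake the waiting cpuhp/A. */
		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
		    rcuwait_active(&rq->hotplug_wait)) {
			raw_spin_rq_unlock(rq);
			rcuwait_wake_up(&rq->hotplug_wait);
			raw_spin_rq_lock(rq);
		}
		return;
	}

	/* ... otherwise push push_task off this CPU ... */
}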




Do you have any suggestions or a solution for this issue?
Thank you.



Best regards,
Jing-Ting Wu



* Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
  2022-05-19 12:53 ` Jing-Ting Wu
@ 2022-05-19 13:14   ` Peter Zijlstra
  -1 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-05-19 13:14 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Thu, May 19, 2022 at 08:53:15PM +0800, Jing-Ting Wu wrote:
> Hi all
> 
> 
> There is a race condition between CPU hotplug off flow and
> __sched_setscheduler(), which will cause hang-up in CPU hotplug off
> flow.

Oooh, you're using core scheduling and the A/B are SMT siblings?


* Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
  2022-05-19 13:14   ` Peter Zijlstra
@ 2022-05-19 13:19     ` Peter Zijlstra
  -1 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-05-19 13:19 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Thu, May 19, 2022 at 03:14:52PM +0200, Peter Zijlstra wrote:
> On Thu, May 19, 2022 at 08:53:15PM +0800, Jing-Ting Wu wrote:
> > Hi all
> > 
> > 
> > There is a race condition between CPU hotplug off flow and
> > __sched_setscheduler(), which will cause hang-up in CPU hotplug off
> > flow.
> 
> Oooh, you're using core scheduling and the A/B are SMT siblings?

Hmm, no.. lemme try again.


* Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
  2022-05-19 12:53 ` Jing-Ting Wu
@ 2022-05-19 13:47   ` Peter Zijlstra
  -1 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-05-19 13:47 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Thu, May 19, 2022 at 08:53:15PM +0800, Jing-Ting Wu wrote:
> Hi all
> 
> 
> There is a race condition between CPU hotplug off flow and
> __sched_setscheduler(), which will cause hang-up in CPU hotplug off
> flow.

How easy can you reproduce; does the below hack make it better?

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95bac3b094b3..f18ee22b29bc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4763,20 +4763,30 @@ struct callback_head balance_push_callback = {
 	.func = (void (*)(struct callback_head *))balance_push,
 };
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct callback_head *
+__splice_balance_callbacks(struct rq *rq, bool foo)
 {
 	struct callback_head *head = rq->balance_callback;
 
 	lockdep_assert_rq_held(rq);
-	if (head)
-		rq->balance_callback = NULL;
+	if (head) {
+		if (foo && head == &balance_push_callback)
+			head = NULL;
+		else
+			rq->balance_callback = NULL;
+	}
 
 	return head;
 }
 
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+	return __splice_balance_callbacks(rq, true);
+}
+
 static void __balance_callbacks(struct rq *rq)
 {
-	do_balance_callbacks(rq, splice_balance_callbacks(rq));
+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
 }
 
 static inline void balance_callbacks(struct rq *rq, struct callback_head *head)



* Re: [SPAM]Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
  2022-05-19 13:47   ` Peter Zijlstra
@ 2022-05-23  7:12     ` Jing-Ting Wu
  -1 siblings, 0 replies; 33+ messages in thread
From: Jing-Ting Wu @ 2022-05-23  7:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Thu, 2022-05-19 at 15:47 +0200, Peter Zijlstra wrote:
> On Thu, May 19, 2022 at 08:53:15PM +0800, Jing-Ting Wu wrote:
> > Hi all
> > 
> > 
> > There is a race condition between CPU hotplug off flow and
> > __sched_setscheduler(), which will cause hang-up in CPU hotplug off
> > flow.
> 
> How easy can you reproduce; does the below hack make it better?


The issue can be reproduced in about 48 hours of frequent hotplug
up/down cycling.

Thanks for your suggestion.
I think the hack patch keeps rq->balance_callback in place when
rq->balance_callback == &balance_push_callback.
We will add the hack patch to our stability test.
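
For anyone who wants to reproduce this, below is a minimal userspace
stress sketch of the kind of load we run (hypothetical and simplified:
one thread toggles CPU1 via sysfs while another hammers
sched_setscheduler(); our real test is a longer board-level stability
suite, and hitting the race needs the target task's rq to be the dying
CPU's rq, so it can take many hours):

/* Hypothetical reproducer sketch; needs root.
 * Build: cc -O2 -pthread -o repro repro.c */
#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void write_str(const char *path, const char *s)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return;
	if (write(fd, s, strlen(s)) < 0)
		perror(path);
	close(fd);
}

/* Toggle CPU1 offline/online as fast as sysfs allows. */
static void *hotplug_loop(void *arg)
{
	const char *online = "/sys/devices/system/cpu/cpu1/online";

	for (;;) {
		write_str(online, "0");
		usleep(10000);
		write_str(online, "1");
		usleep(10000);
	}
	return NULL;
}

/* Flip this thread between SCHED_FIFO and SCHED_OTHER so that
 * __sched_setscheduler() runs constantly in parallel. */
static void *setsched_loop(void *arg)
{
	struct sched_param fifo = { .sched_priority = 1 };
	struct sched_param other = { .sched_priority = 0 };

	for (;;) {
		sched_setscheduler(0, SCHED_FIFO, &fifo);
		sched_setscheduler(0, SCHED_OTHER, &other);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, hotplug_loop, NULL);
	pthread_create(&b, NULL, setsched_loop, NULL);
	pthread_join(a, NULL);	/* runs until the machine hangs */
	return 0;
}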


> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 95bac3b094b3..f18ee22b29bc 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4763,20 +4763,30 @@ struct callback_head balance_push_callback = {
>  	.func = (void (*)(struct callback_head *))balance_push,
>  };
>  
> -static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
> +static inline struct callback_head *
> +__splice_balance_callbacks(struct rq *rq, bool foo)
>  {
>  	struct callback_head *head = rq->balance_callback;
>  
>  	lockdep_assert_rq_held(rq);
> -	if (head)
> -		rq->balance_callback = NULL;
> +	if (head) {
> +		if (foo && head == &balance_push_callback)
> +			head = NULL;
> +		else
> +			rq->balance_callback = NULL;
> +	}
>  
>  	return head;
>  }
>  
> +static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
> +{
> +	return __splice_balance_callbacks(rq, true);
> +}
> +
>  static void __balance_callbacks(struct rq *rq)
>  {
> -	do_balance_callbacks(rq, splice_balance_callbacks(rq));
> +	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
>  }
>  
>  static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
> 







* Re: [SPAM]Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
  2022-05-23  7:12     ` Jing-Ting Wu
@ 2022-05-26  5:57       ` Jing-Ting Wu
  -1 siblings, 0 replies; 33+ messages in thread
From: Jing-Ting Wu @ 2022-05-26  5:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

Hi Peter


> > How easy can you reproduce; does the below hack make it better?
> 
> 
> The issue can be reproduced in about 48 hours when hotplug up/down
> frequently.
> 
> Thanks for your suggestion.
> I think the hack patch could stay the rq->balance_callback when rq-
> > callback = &balance_push_callback.
> 
> We can add hack patch to the stability test.

With the patch from the previous mail, we passed the stability test
for the past 48 hours and are still testing.

The patch seems to resolve the issue.
Would you suggest upstreaming this patch to mainline?

Thank you.

> > 
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 95bac3b094b3..f18ee22b29bc 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -4763,20 +4763,30 @@ struct callback_head balance_push_callback = {
> >  	.func = (void (*)(struct callback_head *))balance_push,
> >  };
> >  
> > -static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
> > +static inline struct callback_head *
> > +__splice_balance_callbacks(struct rq *rq, bool foo)
> >  {
> >  	struct callback_head *head = rq->balance_callback;
> >  
> >  	lockdep_assert_rq_held(rq);
> > -	if (head)
> > -		rq->balance_callback = NULL;
> > +	if (head) {
> > +		if (foo && head == &balance_push_callback)
> > +			head = NULL;
> > +		else
> > +			rq->balance_callback = NULL;
> > +	}
> >  
> >  	return head;
> >  }
> >  
> > +static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
> > +{
> > +	return __splice_balance_callbacks(rq, true);
> > +}
> > +
> >  static void __balance_callbacks(struct rq *rq)
> >  {
> > -	do_balance_callbacks(rq, splice_balance_callbacks(rq));
> > +	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
> >  }
> >  
> >  static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
> > 
> 

Best Regards,
Jing-Ting Wu





* Re: [SPAM]Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
  2022-05-26  5:57       ` Jing-Ting Wu
@ 2022-06-02 16:15         ` Jing-Ting Wu
  -1 siblings, 0 replies; 33+ messages in thread
From: Jing-Ting Wu @ 2022-06-02 16:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

Hi Peter


> > > How easy can you reproduce; does the below hack make it better?

The patch fixes the syndrome; the stability test has now passed for
over 10 days (as-is: it failed within 48 hours).

Could you help upstream this patch to the mainline Linux kernel?
Thank you.

> 
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index 95bac3b094b3..f18ee22b29bc 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -4763,20 +4763,30 @@ struct callback_head balance_push_callback = {
> > >  	.func = (void (*)(struct callback_head *))balance_push,
> > >  };
> > >  
> > > -static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
> > > +static inline struct callback_head *
> > > +__splice_balance_callbacks(struct rq *rq, bool foo)
> > >  {
> > >  	struct callback_head *head = rq->balance_callback;
> > >  
> > >  	lockdep_assert_rq_held(rq);
> > > -	if (head)
> > > -		rq->balance_callback = NULL;
> > > +	if (head) {
> > > +		if (foo && head == &balance_push_callback)
> > > +			head = NULL;
> > > +		else
> > > +			rq->balance_callback = NULL;
> > > +	}
> > >  
> > >  	return head;
> > >  }
> > >  
> > > +static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
> > > +{
> > > +	return __splice_balance_callbacks(rq, true);
> > > +}
> > > +
> > >  static void __balance_callbacks(struct rq *rq)
> > >  {
> > > -	do_balance_callbacks(rq, splice_balance_callbacks(rq));
> > > +	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
> > >  }
> > >  
> > >  static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
> > > 


Best Regards,
Jing-Ting Wu
> 
> 



* Re: [SPAM]Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
  2022-06-02 16:15         ` Jing-Ting Wu
@ 2022-06-07 20:40           ` Peter Zijlstra
  -1 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-06-07 20:40 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Fri, Jun 03, 2022 at 12:15:51AM +0800, Jing-Ting Wu wrote:
> Hi Peter
> 
> 
> > > > How easy can you reproduce; does the below hack make it better?
> 
> The patch is helpful to the syndrome, passed stability test over 10
> days so far. (as-is: < 48 hours failed)

Excellent, let me go write a Changelog for it, or something.


* [PATCH] sched: Fix balance_push() vs __sched_setscheduler()
  2022-06-07 20:40           ` Peter Zijlstra
@ 2022-06-07 21:39             ` Peter Zijlstra
  -1 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-06-07 21:39 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Tue, Jun 07, 2022 at 10:40:36PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2022 at 12:15:51AM +0800, Jing-Ting Wu wrote:

> > The patch is helpful to the syndrome, passed stability test over 10
> > days so far. (as-is: < 48 hours failed)
> 
> Excellent, let me go write a Changelog for it, or something.

How's this then?

---
Subject: sched: Fix balance_push() vs __sched_setscheduler()
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue Jun 7 22:41:55 CEST 2022

The purpose of balance_push() is to act as a filter on task selection
in the case of CPU hotplug, specifically when taking the CPU out.

It does this by (ab)using the balance callback infrastructure, with
the express purpose of keeping all the unlikely/odd cases in a single
place.

In order to serve its purpose, the balance_push_callback needs to be
(exclusively) on the callback list at all times (noting that the
callback always places itself back on the list the moment it runs,
also noting that when the CPU goes down, regular balancing concerns
are moot, so ignoring them is fine).

And herein lies the problem: __sched_setscheduler()'s use of
splice_balance_callbacks() takes the callbacks off the list across a
lock-break, making it possible for an interleaving __schedule() to
see an empty list and not get filtered.

Reported-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Link: https://lkml.kernel.org/r/20220519134706.GH2578@worktop.programming.kicks-ass.net
---
 kernel/sched/core.c |   36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4798,25 +4798,55 @@ static void do_balance_callbacks(struct
 
 static void balance_push(struct rq *rq);
 
+/*
+ * balance_push_callback is a right abuse of the callback interface and plays
+ * by significantly different rules.
+ *
+ * Where the normal balance_callback's purpose is to be ran in the same context
+ * that queued it (only later, when it's safe to drop rq->lock again),
+ * balance_push_callback is specifically targeted at __schedule().
+ *
+ * This abuse is tolerated because it places all the unlikely/odd cases behind
+ * a single test, namely: rq->balance_callback == NULL.
+ */
 struct callback_head balance_push_callback = {
 	.next = NULL,
 	.func = (void (*)(struct callback_head *))balance_push,
 };
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct callback_head *
+__splice_balance_callbacks(struct rq *rq, bool split)
 {
 	struct callback_head *head = rq->balance_callback;
 
+	if (likely(!head))
+		return NULL;
+
 	lockdep_assert_rq_held(rq);
-	if (head)
+	/*
+	 * Must not take balance_push_callback off the list when
+	 * splice_balance_callbacks() and balance_callbacks() are not
+	 * in the same rq->lock section.
+	 *
+	 * In that case it would be possible for __schedule() to interleave
+	 * and observe the list empty.
+	 */
+	if (split && head == &balance_push_callback)
+		head = NULL;
+	else
 		rq->balance_callback = NULL;
 
 	return head;
 }
 
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+	return __splice_balance_callbacks(rq, true);
+}
+
 static void __balance_callbacks(struct rq *rq)
 {
-	do_balance_callbacks(rq, splice_balance_callbacks(rq));
+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
 }
 
 static inline void balance_callbacks(struct rq *rq, struct callback_head *head)

_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH] sched: Fix balance_push() vs __sched_setscheduler()
@ 2022-06-07 21:39             ` Peter Zijlstra
  0 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-06-07 21:39 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Tue, Jun 07, 2022 at 10:40:36PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2022 at 12:15:51AM +0800, Jing-Ting Wu wrote:

> > The patch is helpful to the syndrome, passed stability test over 10
> > days so far. (as-is: < 48 hours failed)
> 
> Excellent, let me go write a Changelog for it, or something.

How's this then?

---
Subject: sched: Fix balance_push() vs __sched_setscheduler()
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue Jun 7 22:41:55 CEST 2022

The purpose of balance_push() is to act as a filter on task selection
in the case of CPU hotplug, specifically when taking the CPU out.

It does this by (ab)using the balance callback infrastructure, with
the express purpose of keeping all the unlikely/odd cases in a single
place.

In order to serve it's purpose, the balance_push_callback needs to be
(exclusively) on the callback list at all times (noting that the
callback always places itself back on the list the moment it runs,
also noting that when the CPU goes down, regular balancing concerns
are moot, so ignoring them is fine).

And here-in lies the problem, __sched_setscheduler()'s use of
splice_balance_callbacks() takes the callbacks off the list across a
lock-break, making it possible for, an interleaving, __schedule() to
see an empty list and not get filtered.

Reported-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Link: https://lkml.kernel.org/r/20220519134706.GH2578@worktop.programming.kicks-ass.net
---
 kernel/sched/core.c |   36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4798,25 +4798,55 @@ static void do_balance_callbacks(struct
 
 static void balance_push(struct rq *rq);
 
+/*
+ * balance_push_callback is a right abuse of the callback interface and plays
+ * by significantly different rules.
+ *
+ * Where the normal balance_callback's purpose is to be ran in the same context
+ * that queued it (only later, when it's safe to drop rq->lock again),
+ * balance_push_callback is specifically targeted at __schedule().
+ *
+ * This abuse is tolerated because it places all the unlikely/odd cases behind
+ * a single test, namely: rq->balance_callback == NULL.
+ */
 struct callback_head balance_push_callback = {
 	.next = NULL,
 	.func = (void (*)(struct callback_head *))balance_push,
 };
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct callback_head *
+__splice_balance_callbacks(struct rq *rq, bool split)
 {
 	struct callback_head *head = rq->balance_callback;
 
+	if (likely(!head))
+		return NULL;
+
 	lockdep_assert_rq_held(rq);
-	if (head)
+	/*
+	 * Must not take balance_push_callback off the list when
+	 * splace_balance_callbac() and balance_callbacks() are not
+	 * in the same rq->lock section.
+	 *
+	 * In that case it would be possible for __schedule() to interleave
+	 * and observe the list empty.
+	 */
+	if (split && head == &balance_push_callback)
+		head = NULL;
+	else
 		rq->balance_callback = NULL;
 
 	return head;
 }
 
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+	return __splice_balance_callbacks(rq, true);
+}
+
 static void __balance_callbacks(struct rq *rq)
 {
-	do_balance_callbacks(rq, splice_balance_callbacks(rq));
+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
 }
 
 static inline void balance_callbacks(struct rq *rq, struct callback_head *head)

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH] sched: Fix balance_push() vs __sched_setscheduler()
@ 2022-06-07 21:39             ` Peter Zijlstra
  0 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-06-07 21:39 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Tue, Jun 07, 2022 at 10:40:36PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2022 at 12:15:51AM +0800, Jing-Ting Wu wrote:

> > The patch is helpful to the syndrome, passed stability test over 10
> > days so far. (as-is: < 48 hours failed)
> 
> Excellent, let me go write a Changelog for it, or something.

How's this then?

---
Subject: sched: Fix balance_push() vs __sched_setscheduler()
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue Jun 7 22:41:55 CEST 2022

The purpose of balance_push() is to act as a filter on task selection
in the case of CPU hotplug, specifically when taking the CPU out.

It does this by (ab)using the balance callback infrastructure, with
the express purpose of keeping all the unlikely/odd cases in a single
place.

In order to serve it's purpose, the balance_push_callback needs to be
(exclusively) on the callback list at all times (noting that the
callback always places itself back on the list the moment it runs,
also noting that when the CPU goes down, regular balancing concerns
are moot, so ignoring them is fine).

And herein lies the problem: __sched_setscheduler()'s use of
splice_balance_callbacks() takes the callbacks off the list across a
lock-break, making it possible for an interleaving __schedule() to
see an empty list and not get filtered.

Reported-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Link: https://lkml.kernel.org/r/20220519134706.GH2578@worktop.programming.kicks-ass.net
---
 kernel/sched/core.c |   36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4798,25 +4798,55 @@ static void do_balance_callbacks(struct
 
 static void balance_push(struct rq *rq);
 
+/*
+ * balance_push_callback is a right abuse of the callback interface and plays
+ * by significantly different rules.
+ *
+ * Where the normal balance_callback's purpose is to be run in the same context
+ * that queued it (only later, when it's safe to drop rq->lock again),
+ * balance_push_callback is specifically targeted at __schedule().
+ *
+ * This abuse is tolerated because it places all the unlikely/odd cases behind
+ * a single test, namely: rq->balance_callback == NULL.
+ */
 struct callback_head balance_push_callback = {
 	.next = NULL,
 	.func = (void (*)(struct callback_head *))balance_push,
 };
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct callback_head *
+__splice_balance_callbacks(struct rq *rq, bool split)
 {
 	struct callback_head *head = rq->balance_callback;
 
+	if (likely(!head))
+		return NULL;
+
 	lockdep_assert_rq_held(rq);
-	if (head)
+	/*
+	 * Must not take balance_push_callback off the list when
+	 * splace_balance_callbac() and balance_callbacks() are not
+	 * in the same rq->lock section.
+	 *
+	 * In that case it would be possible for __schedule() to interleave
+	 * and observe the list empty.
+	 */
+	if (split && head == &balance_push_callback)
+		head = NULL;
+	else
 		rq->balance_callback = NULL;
 
 	return head;
 }
 
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+	return __splice_balance_callbacks(rq, true);
+}
+
 static void __balance_callbacks(struct rq *rq)
 {
-	do_balance_callbacks(rq, splice_balance_callbacks(rq));
+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
 }
 
 static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
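
To illustrate the caller shape the changelog describes — splice under the lock, drop the lock, run the callbacks later — a hedged user-space sketch follows. The pthread mutex and every name in it (rq_lock, balance_cb_list, setscheduler_like(), ...) are invented stand-ins, not the kernel's API.

#include <pthread.h>
#include <stdio.h>

struct cb { struct cb *next; void (*func)(struct cb *); };

static struct cb push_cb;		/* stand-in for balance_push_callback */
static struct cb *balance_cb_list;	/* stand-in for rq->balance_callback */
static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

/* split: caller intends to drop rq_lock before running the callbacks. */
static struct cb *splice(int split)
{
	struct cb *head = balance_cb_list;
	if (!head)
		return NULL;
	if (split && head == &push_cb)
		return NULL;
	balance_cb_list = NULL;
	return head;
}

static void run_callbacks(struct cb *head)
{
	while (head) {
		struct cb *next = head->next;
		head->next = NULL;
		head->func(head);
		head = next;
	}
}

/* Mimics the __sched_setscheduler() shape: splice .. unlock .. run. */
static void setscheduler_like(void)
{
	struct cb *head;

	pthread_mutex_lock(&rq_lock);
	head = splice(1);		/* push_cb is left queued */
	pthread_mutex_unlock(&rq_lock);	/* lock-break: a __schedule() on the
					   target CPU still sees the filter */
	run_callbacks(head);		/* only genuinely spliced work runs */
}

int main(void)
{
	balance_cb_list = &push_cb;
	setscheduler_like();
	printf("after lock-break, filter %s\n",
	       balance_cb_list == &push_cb ? "intact" : "lost");
	return 0;
}

The design choice the split flag encodes is visible in splice(): since the caller will drop rq_lock before consuming what it spliced, the push callback has to stay on the list so that any interleaving scheduler activity still finds the filter.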

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] sched: Fix balance_push() vs __sched_setscheduler()
  2022-06-07 21:39             ` Peter Zijlstra
  (?)
@ 2022-06-08 14:16               ` Jing-Ting Wu
  -1 siblings, 0 replies; 33+ messages in thread
From: Jing-Ting Wu @ 2022-06-08 14:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

Hi Peter


On Tue, 2022-06-07 at 23:39 +0200, Peter Zijlstra wrote:
> On Tue, Jun 07, 2022 at 10:40:36PM +0200, Peter Zijlstra wrote:
> > On Fri, Jun 03, 2022 at 12:15:51AM +0800, Jing-Ting Wu wrote:
> > > The patch helps with the syndrome; it has passed the stability test
> > > for over 10 days so far. (as-is: failed in < 48 hours)
> > 
> > Excellent, let me go write a Changelog for it, or something.
> 
> How's this then?

I think the description is fine.
Thanks for your help.

[...]
>  
> -static inline struct callback_head *splice_balance_callbacks(struct
> rq *rq)
> +static inline struct callback_head *
> +__splice_balance_callbacks(struct rq *rq, bool split)
>  {
>  	struct callback_head *head = rq->balance_callback;
>  
> +	if (likely(!head))
> +		return NULL;
> +
>  	lockdep_assert_rq_held(rq);
> -	if (head)
> +	/*
> +	 * Must not take balance_push_callback off the list when
> +	 * splace_balance_callbac() and balance_callbacks() are not


Should we change splace_balance_callbac() to splice_balance_callbacks()
here?


> +	 * in the same rq->lock section.
> +	 *
> +	 * In that case it would be possible for __schedule() to
> interleave
> +	 * and observe the list empty.
> +	 */
> +	if (split && head == &balance_push_callback)
> +		head = NULL;
> +	else
>  		rq->balance_callback = NULL;
>  
>  	return head;
>  }
>  
[...]


Best Regards,
Jing-Ting Wu


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] sched: Fix balance_push() vs __sched_setscheduler()
  2022-06-08 14:16               ` Jing-Ting Wu
  (?)
@ 2022-06-08 14:48                 ` Peter Zijlstra
  -1 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-06-08 14:48 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: Daniel Bristot de Oliveira, Valentin Schneider, tglx,
	wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek,
	Jonathan.JMChen, chris.redpath, Dietmar Eggemann,
	Vincent Donnefort, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Steven Rostedt, Ben Segall, Mel Gorman, Christian Brauner

On Wed, Jun 08, 2022 at 10:16:43PM +0800, Jing-Ting Wu wrote:
> > +	/*
> > +	 * Must not take balance_push_callback off the list when
> > +	 * splace_balance_callbac() and balance_callbacks() are not
> 
> 
> Should we change splace_balance_callbac() to splice_balance_callbacks()
> at here?

Pff, typing is so hard.. :-)

I'll also go find me a Fixes tag I suppose.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [tip: sched/urgent] sched: Fix balance_push() vs __sched_setscheduler()
  2022-05-19 13:47   ` Peter Zijlstra
                     ` (2 preceding siblings ...)
  (?)
@ 2022-06-13  8:29   ` tip-bot2 for Peter Zijlstra
  -1 siblings, 0 replies; 33+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2022-06-13  8:29 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Jing-Ting Wu, Peter Zijlstra (Intel), x86, linux-kernel

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     04193d590b390ec7a0592630f46d559ec6564ba1
Gitweb:        https://git.kernel.org/tip/04193d590b390ec7a0592630f46d559ec6564ba1
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Tue, 07 Jun 2022 22:41:55 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Mon, 13 Jun 2022 10:15:07 +02:00

sched: Fix balance_push() vs __sched_setscheduler()

The purpose of balance_push() is to act as a filter on task selection
in the case of CPU hotplug, specifically when taking the CPU out.

It does this by (ab)using the balance callback infrastructure, with
the express purpose of keeping all the unlikely/odd cases in a single
place.

In order to serve its purpose, the balance_push_callback needs to be
(exclusively) on the callback list at all times (noting that the
callback always places itself back on the list the moment it runs,
also noting that when the CPU goes down, regular balancing concerns
are moot, so ignoring them is fine).

And herein lies the problem: __sched_setscheduler()'s use of
splice_balance_callbacks() takes the callbacks off the list across a
lock-break, making it possible for an interleaving __schedule() to
see an empty list and not get filtered.

Fixes: ae7927023243 ("sched: Optimize finish_lock_switch()")
Reported-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
Link: https://lkml.kernel.org/r/20220519134706.GH2578@worktop.programming.kicks-ass.net
---
 kernel/sched/core.c  | 36 +++++++++++++++++++++++++++++++++---
 kernel/sched/sched.h |  5 +++++
 2 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bfa7452..da0bf6f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4798,25 +4798,55 @@ static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
 
 static void balance_push(struct rq *rq);
 
+/*
+ * balance_push_callback is a right abuse of the callback interface and plays
+ * by significantly different rules.
+ *
+ * Where the normal balance_callback's purpose is to be run in the same context
+ * that queued it (only later, when it's safe to drop rq->lock again),
+ * balance_push_callback is specifically targeted at __schedule().
+ *
+ * This abuse is tolerated because it places all the unlikely/odd cases behind
+ * a single test, namely: rq->balance_callback == NULL.
+ */
 struct callback_head balance_push_callback = {
 	.next = NULL,
 	.func = (void (*)(struct callback_head *))balance_push,
 };
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct callback_head *
+__splice_balance_callbacks(struct rq *rq, bool split)
 {
 	struct callback_head *head = rq->balance_callback;
 
+	if (likely(!head))
+		return NULL;
+
 	lockdep_assert_rq_held(rq);
-	if (head)
+	/*
+	 * Must not take balance_push_callback off the list when
+	 * splice_balance_callbacks() and balance_callbacks() are not
+	 * in the same rq->lock section.
+	 *
+	 * In that case it would be possible for __schedule() to interleave
+	 * and observe the list empty.
+	 */
+	if (split && head == &balance_push_callback)
+		head = NULL;
+	else
 		rq->balance_callback = NULL;
 
 	return head;
 }
 
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+	return __splice_balance_callbacks(rq, true);
+}
+
 static void __balance_callbacks(struct rq *rq)
 {
-	do_balance_callbacks(rq, splice_balance_callbacks(rq));
+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
 }
 
 static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0125961..47b89a0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1693,6 +1693,11 @@ queue_balance_callback(struct rq *rq,
 {
 	lockdep_assert_rq_held(rq);
 
+	/*
+	 * Don't (re)queue an already queued item; nor queue anything when
+	 * balance_push() is active, see the comment with
+	 * balance_push_callback.
+	 */
 	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
 		return;
 

^ permalink raw reply related	[flat|nested] 33+ messages in thread
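
The sched.h hunk above is the queue-side half of the same contract: nothing is queued over an active push callback, and an already-queued node is not queued twice. Below is a minimal sketch of that guard, again using invented stand-ins (struct cb, push_cb, balance_cb_list) rather than the real rq types.

#include <stdio.h>
#include <stddef.h>

struct cb { struct cb *next; void (*func)(struct cb *); };

static struct cb push_cb;		/* stand-in for balance_push_callback */
static struct cb *balance_cb_list;	/* stand-in for rq->balance_callback */

/* Mirrors the queue_balance_callback() guard: refuse to (re)queue when the
 * node is already on a list, or when balance_push() owns the list. */
static void queue_cb(struct cb *node, void (*func)(struct cb *))
{
	if (node->next || balance_cb_list == &push_cb)
		return;

	node->func = func;
	node->next = balance_cb_list;
	balance_cb_list = node;
}

static void say_hi(struct cb *unused)
{
	(void)unused;
	printf("callback ran\n");
}

int main(void)
{
	struct cb work = { NULL, NULL };

	balance_cb_list = &push_cb;	/* CPU is being pushed clean */
	queue_cb(&work, say_hi);	/* silently refused */
	printf("queued: %s\n", balance_cb_list == &work ? "yes" : "no");
	return 0;
}

With balance_cb_list pointing at push_cb the queue attempt is silently refused, mirroring the unlikely() test in queue_balance_callback() above.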

* Re: [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler()
       [not found] ` <a34049745934652483e3958e0a030a45b6fcfb40.camel@mediatek.com>
  2022-05-06  7:47     ` Peter Zijlstra
@ 2022-05-06  7:47     ` Peter Zijlstra
  0 siblings, 0 replies; 33+ messages in thread
From: Peter Zijlstra @ 2022-05-06  7:47 UTC (permalink / raw)
  To: Jing-Ting Wu
  Cc: tglx, valentin.schneider, bristot, wsd_upstream, linux-kernel,
	linux-arm-kernel, linux-mediatek, Jonathan.JMChen, Chris.Redpath

On Fri, May 06, 2022 at 12:19:28PM +0800, Jing-Ting Wu wrote:
> Hi all
> Add Chris for status sync.

Please try another posting, that's eminently unreadable garbage due to
line wrapping.

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2022-06-13  8:29 UTC | newest]

Thread overview: 33+ messages
2022-05-19 12:53 [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler() Jing-Ting Wu
2022-05-19 12:53 ` Jing-Ting Wu
2022-05-19 12:53 ` Jing-Ting Wu
2022-05-19 13:14 ` Peter Zijlstra
2022-05-19 13:14   ` Peter Zijlstra
2022-05-19 13:14   ` Peter Zijlstra
2022-05-19 13:19   ` Peter Zijlstra
2022-05-19 13:19     ` Peter Zijlstra
2022-05-19 13:19     ` Peter Zijlstra
2022-05-19 13:47 ` Peter Zijlstra
2022-05-19 13:47   ` Peter Zijlstra
2022-05-19 13:47   ` Peter Zijlstra
2022-05-23  7:12   ` [SPAM]Re: " Jing-Ting Wu
2022-05-23  7:12     ` Jing-Ting Wu
2022-05-26  5:57     ` Jing-Ting Wu
2022-05-26  5:57       ` Jing-Ting Wu
2022-06-02 16:15       ` Jing-Ting Wu
2022-06-02 16:15         ` Jing-Ting Wu
2022-06-07 20:40         ` Peter Zijlstra
2022-06-07 20:40           ` Peter Zijlstra
2022-06-07 21:39           ` [PATCH] sched: Fix balance_push() vs __sched_setscheduler() Peter Zijlstra
2022-06-07 21:39             ` Peter Zijlstra
2022-06-07 21:39             ` Peter Zijlstra
2022-06-08 14:16             ` Jing-Ting Wu
2022-06-08 14:16               ` Jing-Ting Wu
2022-06-08 14:16               ` Jing-Ting Wu
2022-06-08 14:48               ` Peter Zijlstra
2022-06-08 14:48                 ` Peter Zijlstra
2022-06-08 14:48                 ` Peter Zijlstra
2022-06-13  8:29   ` [tip: sched/urgent] " tip-bot2 for Peter Zijlstra
     [not found] <6013fb81cdb18892b75c796ab0e6756ee0e9cf71.camel@mediatek.com>
     [not found] ` <a34049745934652483e3958e0a030a45b6fcfb40.camel@mediatek.com>
2022-05-06  7:47   ` [Bug] Race condition between CPU hotplug off flow and __sched_setscheduler() Peter Zijlstra
2022-05-06  7:47     ` Peter Zijlstra
2022-05-06  7:47     ` Peter Zijlstra
