* [PATCH 0/4] sched: Fix hot-unplug regressions
@ 2021-01-12 14:43 Peter Zijlstra
  2021-01-12 14:43 ` [PATCH 2/4] kthread: Extract KTHREAD_IS_PER_CPU Peter Zijlstra
                   ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-12 14:43 UTC (permalink / raw)
  To: mingo, tglx
  Cc: linux-kernel, jiangshanlai, valentin.schneider, cai,
	vincent.donnefort, decui, paulmck, vincent.guittot, rostedt, tj,
	peterz

Hi,

These 4 patches are the simplest means (barring a revert) of fixing the CPU
hot-unplug problems introduced by commit:

  1cf12e08bc4d ("sched/hotplug: Consolidate task migration on CPU unplug")

Testing here, and by Paul, indicates they survive a pounding.

They restore the previous behaviour of forced affinity-breaking for the class
of kernel threads that happen to have a single-CPU affinity but are not
strictly per-cpu kthreads.
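
For illustration, the difference between the two classes looks roughly
like this (a minimal sketch using the existing kthread API; threadfn,
data and the thread names are placeholders, not code from this series):

  /* Genuine per-cpu kthread: bound at creation for correctness;
   * kthread_bind() pins it to @cpu and sets PF_NO_SETAFFINITY. */
  struct task_struct *t = kthread_create_on_cpu(threadfn, data, cpu,
                                                "example/%u");
  if (!IS_ERR(t))
          wake_up_process(t);

  /* Merely single-CPU affine: nothing prevents the affinity from
   * being changed again later. */
  struct task_struct *t2 = kthread_run(threadfn, data, "example");
  if (!IS_ERR(t2))
          set_cpus_allowed_ptr(t2, cpumask_of(cpu));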




* [PATCH 2/4] kthread: Extract KTHREAD_IS_PER_CPU
  2021-01-12 14:43 [PATCH 0/4] sched: Fix hot-unplug regressions Peter Zijlstra
@ 2021-01-12 14:43 ` Peter Zijlstra
  2021-01-12 14:43 ` [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU Peter Zijlstra
  2021-01-12 14:43 ` [PATCH 4/4] sched: Fix CPU hotplug / tighten is_per_cpu_kthread() Peter Zijlstra
  2 siblings, 0 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-12 14:43 UTC (permalink / raw)
  To: mingo, tglx
  Cc: linux-kernel, jiangshanlai, valentin.schneider, cai,
	vincent.donnefort, decui, paulmck, vincent.guittot, rostedt,
	axboe, tj, peterz

There is a need to distinguish genuine per-cpu kthreads from kthreads
that happen to have a single CPU affinity.

Genuine per-cpu kthreads are kthreads that are CPU-affine for
correctness; these will obviously have PF_KTHREAD set, but must also
have PF_NO_SETAFFINITY set, lest userspace modify their affinity and
ruin things.

However, these two things are not sufficient: PF_NO_SETAFFINITY is
also set on other tasks that have their affinity controlled through
other means, for instance workqueues.

Therefore another bit is needed; it turns out kthread_create_on_cpu()
already has such a bit: KTHREAD_IS_PER_CPU, which is used to make
kthread_park()/kthread_unpark() work correctly.

Expose this flag and remove the implicit setting of it from
kthread_create_on_cpu(); the io_uring usage of it seems dubious at
best.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/kthread.h |    3 +++
 kernel/kthread.c        |   25 ++++++++++++++++++++++++-
 kernel/sched/core.c     |    2 +-
 kernel/sched/sched.h    |    4 ++--
 kernel/smpboot.c        |    1 +
 kernel/workqueue.c      |   11 +++++++++--
 6 files changed, 40 insertions(+), 6 deletions(-)

--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -33,6 +33,9 @@ struct task_struct *kthread_create_on_cp
 					  unsigned int cpu,
 					  const char *namefmt);
 
+void kthread_set_per_cpu(struct task_struct *k, bool set);
+bool kthread_is_per_cpu(struct task_struct *k);
+
 /**
  * kthread_run - create and wake a thread.
  * @threadfn: the function to run until signal_pending(current).
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -493,11 +493,34 @@ struct task_struct *kthread_create_on_cp
 		return p;
 	kthread_bind(p, cpu);
 	/* CPU hotplug need to bind once again when unparking the thread. */
-	set_bit(KTHREAD_IS_PER_CPU, &to_kthread(p)->flags);
 	to_kthread(p)->cpu = cpu;
 	return p;
 }
 
+void kthread_set_per_cpu(struct task_struct *k, bool set)
+{
+	struct kthread *kthread = to_kthread(k);
+	if (!kthread)
+		return;
+
+	if (set) {
+		WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
+		WARN_ON_ONCE(k->nr_cpus_allowed != 1);
+		set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+	} else {
+		clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+	}
+}
+
+bool kthread_is_per_cpu(struct task_struct *k)
+{
+	struct kthread *kthread = to_kthread(k);
+	if (!kthread)
+		return false;
+
+	return test_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+}
+
 /**
  * kthread_unpark - unpark a thread created by kthread_create().
  * @k:		thread created by kthread_create().
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -188,6 +188,7 @@ __smpboot_create_thread(struct smp_hotpl
 		kfree(td);
 		return PTR_ERR(tsk);
 	}
+	kthread_set_per_cpu(tsk, true);
 	/*
 	 * Park the thread so that it could start right on the CPU
 	 * when it is available.
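
With the implicit set_bit() gone, a creator of a genuine per-cpu
kthread now opts in explicitly, as the smpboot hunk above does.  A
minimal sketch of that pattern (threadfn and data are placeholders):

  struct task_struct *tsk;

  tsk = kthread_create_on_cpu(threadfn, data, cpu, "example/%u");
  if (IS_ERR(tsk))
          return PTR_ERR(tsk);

  kthread_set_per_cpu(tsk, true); /* mark it a genuine per-cpu kthread */
  kthread_park(tsk);              /* unpark will rebind it to @cpu */

kthread_unpark() then re-binds a parked KTHREAD_IS_PER_CPU thread to
kthread->cpu, which is what preserves the binding across a park/unpark
cycle.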




* [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-12 14:43 [PATCH 0/4] sched: Fix hot-unplug regressions Peter Zijlstra
  2021-01-12 14:43 ` [PATCH 2/4] kthread: Extract KTHREAD_IS_PER_CPU Peter Zijlstra
@ 2021-01-12 14:43 ` Peter Zijlstra
  2021-01-12 16:36   ` Lai Jiangshan
                     ` (2 more replies)
  2021-01-12 14:43 ` [PATCH 4/4] sched: Fix CPU hotplug / tighten is_per_cpu_kthread() Peter Zijlstra
  2 siblings, 3 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-12 14:43 UTC (permalink / raw)
  To: mingo, tglx
  Cc: linux-kernel, jiangshanlai, valentin.schneider, cai,
	vincent.donnefort, decui, paulmck, vincent.guittot, rostedt, tj,
	peterz

Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.

Workqueues have unfortunate semantics in that per-cpu workers are not
flushed and parked by default during hotplug; however, a subset does a
manual flush on hotplug and hard-relies on them for correctness.

Therefore play silly games..

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/workqueue.c |   11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1861,6 +1861,8 @@ static void worker_attach_to_pool(struct
 	 */
 	if (pool->flags & POOL_DISASSOCIATED)
 		worker->flags |= WORKER_UNBOUND;
+	else
+		kthread_set_per_cpu(worker->task, true);
 
 	list_add_tail(&worker->node, &pool->workers);
 	worker->pool = pool;
@@ -1883,6 +1885,7 @@ static void worker_detach_from_pool(stru
 
 	mutex_lock(&wq_pool_attach_mutex);
 
+	kthread_set_per_cpu(worker->task, false);
 	list_del(&worker->node);
 	worker->pool = NULL;
 
@@ -4919,8 +4922,10 @@ static void unbind_workers(int cpu)
 
 		raw_spin_unlock_irq(&pool->lock);
 
-		for_each_pool_worker(worker, pool)
+		for_each_pool_worker(worker, pool) {
+			kthread_set_per_cpu(worker->task, false);
 			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
+		}
 
 		mutex_unlock(&wq_pool_attach_mutex);
 
@@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
 	 * of all workers first and then clear UNBOUND.  As we're called
 	 * from CPU_ONLINE, the following shouldn't fail.
 	 */
-	for_each_pool_worker(worker, pool)
+	for_each_pool_worker(worker, pool) {
 		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
 						  pool->attrs->cpumask) < 0);
+		kthread_set_per_cpu(worker->task, true);
+	}
 
 	raw_spin_lock_irq(&pool->lock);
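
The "manual flush on hotplug" mentioned above is the pattern where a
subsystem flushes its per-cpu work from a hotplug callback so that the
work is guaranteed to have run on the intended CPU.  Roughly (an
illustrative sketch, not code from this series; all names are made up):

  static DEFINE_PER_CPU(struct work_struct, example_work);

  static int example_cpu_down_prep(unsigned int cpu)
  {
          /* ensure the per-cpu work has finished on the outgoing CPU */
          flush_work(per_cpu_ptr(&example_work, cpu));
          return 0;
  }

Such a callback hard-relies on the flushed work having executed on the
right CPU, which is the correctness requirement the changelog alludes
to and what motivates tagging the bound workers as KTHREAD_IS_PER_CPU
here.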
 




* [PATCH 4/4] sched: Fix CPU hotplug / tighten is_per_cpu_kthread()
  2021-01-12 14:43 [PATCH 0/4] sched: Fix hot-unplug regressions Peter Zijlstra
  2021-01-12 14:43 ` [PATCH 2/4] kthread: Extract KTHREAD_IS_PER_CPU Peter Zijlstra
  2021-01-12 14:43 ` [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU Peter Zijlstra
@ 2021-01-12 14:43 ` Peter Zijlstra
  2 siblings, 0 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-12 14:43 UTC (permalink / raw)
  To: mingo, tglx
  Cc: linux-kernel, jiangshanlai, valentin.schneider, cai,
	vincent.donnefort, decui, paulmck, vincent.guittot, rostedt, tj,
	peterz

Prior to commit 1cf12e08bc4d ("sched/hotplug: Consolidate task
migration on CPU unplug") we'd leave any tasks on the dying CPU, break
their affinity, and force them off at the very end.

This scheme had to change in order to enable migrate_disable(). One
cannot wait for migrate_disable() to complete while stuck in
stop_machine(). Furthermore, since we need, at the very least, the
idle, hotplug and stop threads at any point before stop_machine(), we
can't break their affinity and/or push them away.

Under the assumption that all per-cpu kthreads are sanely handled by
CPU hotplug, the new code no longer breaks affinity or migrates any of
them (which then includes the critical ones above).

However, there's an important difference between genuine per-cpu
kthreads and kthreads that merely happen to have a single-CPU affinity,
and that distinction is now lost. The latter class very much relies on
the forced affinity-breaking and migration semantics previously
provided.

Use the new kthread_is_per_cpu() infrastructure to tighten
is_per_cpu_kthread() and fix the hot-unplug problems stemming from the
change.

Fixes: 1cf12e08bc4d ("sched/hotplug: Consolidate task migration on CPU unplug")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/sched/core.c  |    8 +++++++-
 kernel/sched/sched.h |   12 ++++++++++--
 2 files changed, 17 insertions(+), 3 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7276,8 +7276,14 @@ static void balance_push(struct rq *rq)
 	/*
 	 * Both the cpu-hotplug and stop task are in this case and are
 	 * required to complete the hotplug process.
+	 *
+	 * XXX: the idle task does not match is_per_cpu_kthread() due to
+	 * histerical raisins.
 	 */
-	if (is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) {
+	if (rq->idle == push_task ||
+	    is_per_cpu_kthread(push_task) ||
+	    is_migration_disabled(push_task)) {
+
 		/*
 		 * If this is the idle task on the outgoing CPU try to wake
 		 * up the hotplug control thread which might wait for the
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2692,15 +2692,23 @@ static inline void membarrier_switch_mm(
 #endif
 
 #ifdef CONFIG_SMP
+/*
+ * Match geniune per-cpu kthreads; threads that are bound to a single CPU for
+ * correctness, not kernel threads that happen to have a single CPU affinity.
+ *
+ * Such threads will have PF_NO_SETAFFINITY to ensure userspace cannot
+ * accidentally place them elsewhere -- this also filters out 'early' kthreads
+ * that have PF_KTHREAD set but do not have a struct kthread.
+ */
 static inline bool is_per_cpu_kthread(struct task_struct *p)
 {
 	if (!(p->flags & PF_KTHREAD))
 		return false;
 
-	if (p->nr_cpus_allowed != 1)
+	if (!(p->flags & PF_NO_SETAFFINITY))
 		return false;
 
-	return true;
+	return kthread_is_per_cpu(p);
 }
 #endif
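
Combined with the previous patches, the resulting classification is
roughly as follows (an editorial summary, not part of the patch):

  PF_KTHREAD + PF_NO_SETAFFINITY + KTHREAD_IS_PER_CPU    -> true
      (smpboot threads; bound workqueue workers after patch 3)
  PF_KTHREAD + PF_NO_SETAFFINITY, no KTHREAD_IS_PER_CPU  -> false
      (e.g. unbound workqueue workers)
  PF_KTHREAD without PF_NO_SETAFFINITY                   -> false
      (a kthread that merely narrowed its own affinity; these used to
       match, which was the problem)
  no PF_KTHREAD                                          -> false
      (user tasks)

The idle task is handled separately in balance_push(), as the XXX
comment above notes.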
 




* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-12 14:43 ` [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU Peter Zijlstra
@ 2021-01-12 16:36   ` Lai Jiangshan
  2021-01-13 11:43     ` Peter Zijlstra
  2021-01-12 17:57   ` Valentin Schneider
  2021-01-13 13:28   ` Lai Jiangshan
  2 siblings, 1 reply; 23+ messages in thread
From: Lai Jiangshan @ 2021-01-12 16:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Thomas Gleixner, LKML, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.
>
> Workqueues have unfortunate semantics in that per-cpu workers are not
> default flushed and parked during hotplug, however a subset does
> manual flush on hotplug and hard relies on them for correctness.
>
> Therefore play silly games..
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Tested-by: Paul E. McKenney <paulmck@kernel.org>
> ---

Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>

I like this patchset in that the scheduler takes care of the
affinities of the tasks when we don't want them to be per-cpu.

>  kernel/workqueue.c |   11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1861,6 +1861,8 @@ static void worker_attach_to_pool(struct
>          */
>         if (pool->flags & POOL_DISASSOCIATED)
>                 worker->flags |= WORKER_UNBOUND;
> +       else
> +               kthread_set_per_cpu(worker->task, true);
>
>         list_add_tail(&worker->node, &pool->workers);
>         worker->pool = pool;
> @@ -1883,6 +1885,7 @@ static void worker_detach_from_pool(stru
>
>         mutex_lock(&wq_pool_attach_mutex);
>
> +       kthread_set_per_cpu(worker->task, false);
>         list_del(&worker->node);
>         worker->pool = NULL;
>
> @@ -4919,8 +4922,10 @@ static void unbind_workers(int cpu)
>
>                 raw_spin_unlock_irq(&pool->lock);
>
> -               for_each_pool_worker(worker, pool)
> +               for_each_pool_worker(worker, pool) {
> +                       kthread_set_per_cpu(worker->task, false);
>                         WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
> +               }
>
>                 mutex_unlock(&wq_pool_attach_mutex);
>
> @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
>          * of all workers first and then clear UNBOUND.  As we're called
>          * from CPU_ONLINE, the following shouldn't fail.
>          */
> -       for_each_pool_worker(worker, pool)
> +       for_each_pool_worker(worker, pool) {
>                 WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
>                                                   pool->attrs->cpumask) < 0);
> +               kthread_set_per_cpu(worker->task, true);
> +       }
>
>         raw_spin_lock_irq(&pool->lock);
>
>
>


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-12 14:43 ` [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU Peter Zijlstra
  2021-01-12 16:36   ` Lai Jiangshan
@ 2021-01-12 17:57   ` Valentin Schneider
  2021-01-13 13:28   ` Lai Jiangshan
  2 siblings, 0 replies; 23+ messages in thread
From: Valentin Schneider @ 2021-01-12 17:57 UTC (permalink / raw)
  To: Peter Zijlstra, mingo, tglx
  Cc: linux-kernel, jiangshanlai, cai, vincent.donnefort, decui,
	paulmck, vincent.guittot, rostedt, tj, peterz

On 12/01/21 15:43, Peter Zijlstra wrote:
> @@ -4919,8 +4922,10 @@ static void unbind_workers(int cpu)
>
>               raw_spin_unlock_irq(&pool->lock);
>
> -		for_each_pool_worker(worker, pool)
> +		for_each_pool_worker(worker, pool) {
> +			kthread_set_per_cpu(worker->task, false);
>                       WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
> +		}

Doesn't this supersede patch 1? With patch 4 on top, the BALANCE_PUSH
stuff should start resetting the affinity of the kworkers for which we
are removing the IS_PER_CPU flag.

It's the only nit I have, the rest looks good to me so:

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

I'll go frob that sched_cpu_dying() warning.


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-12 16:36   ` Lai Jiangshan
@ 2021-01-13 11:43     ` Peter Zijlstra
  0 siblings, 0 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-13 11:43 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Ingo Molnar, Thomas Gleixner, LKML, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Wed, Jan 13, 2021 at 12:36:55AM +0800, Lai Jiangshan wrote:
> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.
> >
> > Workqueues have unfortunate semantics in that per-cpu workers are not
> > default flushed and parked during hotplug, however a subset does
> > manual flush on hotplug and hard relies on them for correctness.
> >
> > Therefore play silly games..
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > Tested-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> 
> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
> 
> I like this patchset in that the scheduler takes care of the
> affinities of the tasks when we don't want it to be per-cpu.

Thanks! A possibly even simpler approach would be to have
rebind_workers() kill all workers and have create_worker() spawn us new
ones.

That avoids ever having to use set_cpus_allowed_ptr() on per-cpu
kthreads.... with the exception of rescuer.. still pondering that.


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-12 14:43 ` [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU Peter Zijlstra
  2021-01-12 16:36   ` Lai Jiangshan
  2021-01-12 17:57   ` Valentin Schneider
@ 2021-01-13 13:28   ` Lai Jiangshan
  2021-01-13 14:16     ` Valentin Schneider
  2021-01-14 13:12     ` Peter Zijlstra
  2 siblings, 2 replies; 23+ messages in thread
From: Lai Jiangshan @ 2021-01-13 13:28 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Thomas Gleixner, LKML, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.
>
> Workqueues have unfortunate semantics in that per-cpu workers are not
> default flushed and parked during hotplug, however a subset does
> manual flush on hotplug and hard relies on them for correctness.
>
> Therefore play silly games..
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Tested-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>  kernel/workqueue.c |   11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1861,6 +1861,8 @@ static void worker_attach_to_pool(struct
>          */
>         if (pool->flags & POOL_DISASSOCIATED)
>                 worker->flags |= WORKER_UNBOUND;
> +       else
> +               kthread_set_per_cpu(worker->task, true);
>
>         list_add_tail(&worker->node, &pool->workers);
>         worker->pool = pool;
> @@ -1883,6 +1885,7 @@ static void worker_detach_from_pool(stru
>
>         mutex_lock(&wq_pool_attach_mutex);
>
> +       kthread_set_per_cpu(worker->task, false);
>         list_del(&worker->node);
>         worker->pool = NULL;
>
> @@ -4919,8 +4922,10 @@ static void unbind_workers(int cpu)
>
>                 raw_spin_unlock_irq(&pool->lock);
>
> -               for_each_pool_worker(worker, pool)
> +               for_each_pool_worker(worker, pool) {
> +                       kthread_set_per_cpu(worker->task, false);
>                         WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
> +               }
>
>                 mutex_unlock(&wq_pool_attach_mutex);
>
> @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
>          * of all workers first and then clear UNBOUND.  As we're called
>          * from CPU_ONLINE, the following shouldn't fail.
>          */
> -       for_each_pool_worker(worker, pool)
> +       for_each_pool_worker(worker, pool) {
>                 WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
>                                                   pool->attrs->cpumask) < 0);
> +               kthread_set_per_cpu(worker->task, true);

Will the scheduler break affinity in the middle of these two lines, due
to patch 4 allowing it, and result in Paul's reported splat?

> +       }
>
>         raw_spin_lock_irq(&pool->lock);
>
>
>


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-13 13:28   ` Lai Jiangshan
@ 2021-01-13 14:16     ` Valentin Schneider
  2021-01-13 17:52       ` Paul E. McKenney
  2021-01-14 13:12     ` Peter Zijlstra
  1 sibling, 1 reply; 23+ messages in thread
From: Valentin Schneider @ 2021-01-13 14:16 UTC (permalink / raw)
  To: Lai Jiangshan, Peter Zijlstra
  Cc: Ingo Molnar, Thomas Gleixner, LKML, Qian Cai, Vincent Donnefort,
	Dexuan Cui, Paul E. McKenney, Vincent Guittot, Steven Rostedt,
	Tejun Heo

On 13/01/21 21:28, Lai Jiangshan wrote:
> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
>> @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
>>          * of all workers first and then clear UNBOUND.  As we're called
>>          * from CPU_ONLINE, the following shouldn't fail.
>>          */
>> -       for_each_pool_worker(worker, pool)
>> +       for_each_pool_worker(worker, pool) {
>>                 WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
>>                                                   pool->attrs->cpumask) < 0);
>> +               kthread_set_per_cpu(worker->task, true);
>
> Will the schedule break affinity in the middle of these two lines due to
> patch4 allowing it and result in Paul's reported splat.
>

You might be right; at this point we would still have BALANCE_PUSH set,
so something like the below could happen

  rebind_workers()
    set_cpus_allowed_ptr()
      affine_move_task()
        task_running() => stop_one_cpu()

  ... // Stopper migrates the kworker here in the meantime

  switch_to(<pcpu kworker>) // Both cpuhp thread and kworker should be enqueued
                            // here, so one or the other could be picked
  balance_switch()
    balance_push()
    ^-- no KTHREAD_IS_PER_CPU !

This should however trigger the WARN_ON_ONCE() in kthread_set_per_cpu()
*before* the one in process_one_work(), which I haven't seen in Paul's
mails.

>> +       }
>>
>>         raw_spin_lock_irq(&pool->lock);
>>
>>
>>


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-13 14:16     ` Valentin Schneider
@ 2021-01-13 17:52       ` Paul E. McKenney
  2021-01-13 18:43         ` Valentin Schneider
  0 siblings, 1 reply; 23+ messages in thread
From: Paul E. McKenney @ 2021-01-13 17:52 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Lai Jiangshan, Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
	LKML, Qian Cai, Vincent Donnefort, Dexuan Cui, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Wed, Jan 13, 2021 at 02:16:10PM +0000, Valentin Schneider wrote:
> On 13/01/21 21:28, Lai Jiangshan wrote:
> > On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >> @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
> >>          * of all workers first and then clear UNBOUND.  As we're called
> >>          * from CPU_ONLINE, the following shouldn't fail.
> >>          */
> >> -       for_each_pool_worker(worker, pool)
> >> +       for_each_pool_worker(worker, pool) {
> >>                 WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> >>                                                   pool->attrs->cpumask) < 0);
> >> +               kthread_set_per_cpu(worker->task, true);
> >
> > Will the schedule break affinity in the middle of these two lines due to
> > patch4 allowing it and result in Paul's reported splat.
> >
> 
> You might be right; at this point we would still have BALANCE_PUSH set,
> so something like the below could happen
> 
>   rebind_workers()
>     set_cpus_allowed_ptr()
>       affine_move_task()
>         task_running() => stop_one_cpu()
> 
>   ... // Stopper migrates the kworker here in the meantime
> 
>   switch_to(<pcpu kworker>) // Both cpuhp thread and kworker should be enqueued
>                             // here, so one or the other could be picked
>   balance_switch()
>     balance_push()
>     ^-- no KTHREAD_IS_PER_CPU !
> 
> This should however trigger the WARN_ON_ONCE() in kthread_set_per_cpu()
> *before* the one in process_one_work(), which I haven't seen in Paul's
> mails.

The 56 instances of one-hour SRCU-P scenarios hit the WARN_ON_ONCE()
in process_one_work() once, but there is no sign of a WARN_ON_ONCE()
from kthread_set_per_cpu().  But to your point, this does appear to be
a rather low-probability race condition, once per some tens of hours
of SRCU-P.

Is there a more focused check for the race condition above?

							Thanx, Paul


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-13 17:52       ` Paul E. McKenney
@ 2021-01-13 18:43         ` Valentin Schneider
  2021-01-13 18:59           ` Paul E. McKenney
  0 siblings, 1 reply; 23+ messages in thread
From: Valentin Schneider @ 2021-01-13 18:43 UTC (permalink / raw)
  To: paulmck
  Cc: Lai Jiangshan, Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
	LKML, Qian Cai, Vincent Donnefort, Dexuan Cui, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On 13/01/21 09:52, Paul E. McKenney wrote:
> On Wed, Jan 13, 2021 at 02:16:10PM +0000, Valentin Schneider wrote:
>> You might be right; at this point we would still have BALANCE_PUSH set,
>> so something like the below could happen
>>
>>   rebind_workers()
>>     set_cpus_allowed_ptr()
>>       affine_move_task()
>>         task_running() => stop_one_cpu()
>>
>>   ... // Stopper migrates the kworker here in the meantime
>>
>>   switch_to(<pcpu kworker>) // Both cpuhp thread and kworker should be enqueued
>>                             // here, so one or the other could be picked
>>   balance_switch()
>>     balance_push()
>>     ^-- no KTHREAD_IS_PER_CPU !
>>
>> This should however trigger the WARN_ON_ONCE() in kthread_set_per_cpu()
>> *before* the one in process_one_work(), which I haven't seen in Paul's
>> mails.
>
> The 56 instances of one-hour SRCU-P scenarios hit the WARN_ON_ONCE()
> in process_one_work() once, but there is no sign of a WARN_ON_ONCE()
> from kthread_set_per_cpu().

This does make me doubt the above :/ At the same time, the
process_one_work() warning hinges on POOL_DISASSOCIATED being unset,
which implies having gone through rebind_workers(), which implies
kthread_set_per_cpu(), which implies me being quite confused...

> But to your point, this does appear to be
> a rather low-probability race condition, once per some tens of hours
> of SRCU-P.
>
> Is there a more focused check for the race condition above?
>

Not that I'm aware of. I'm thinking that if the pcpu kworker were an RT
task, then this would guarantee it would get picked in favor of the cpuhp
thread upon switching out of the stopper, but that still requires the
kworker to be running on some CPU (for some reason) during rebind_workers().



>                                                       Thanx, Paul


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-13 18:43         ` Valentin Schneider
@ 2021-01-13 18:59           ` Paul E. McKenney
  0 siblings, 0 replies; 23+ messages in thread
From: Paul E. McKenney @ 2021-01-13 18:59 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Lai Jiangshan, Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
	LKML, Qian Cai, Vincent Donnefort, Dexuan Cui, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Wed, Jan 13, 2021 at 06:43:57PM +0000, Valentin Schneider wrote:
> On 13/01/21 09:52, Paul E. McKenney wrote:
> > On Wed, Jan 13, 2021 at 02:16:10PM +0000, Valentin Schneider wrote:
> >> You might be right; at this point we would still have BALANCE_PUSH set,
> >> so something like the below could happen
> >>
> >>   rebind_workers()
> >>     set_cpus_allowed_ptr()
> >>       affine_move_task()
> >>         task_running() => stop_one_cpu()
> >>
> >>   ... // Stopper migrates the kworker here in the meantime
> >>
> >>   switch_to(<pcpu kworker>) // Both cpuhp thread and kworker should be enqueued
> >>                             // here, so one or the other could be picked
> >>   balance_switch()
> >>     balance_push()
> >>     ^-- no KTHREAD_IS_PER_CPU !
> >>
> >> This should however trigger the WARN_ON_ONCE() in kthread_set_per_cpu()
> >> *before* the one in process_one_work(), which I haven't seen in Paul's
> >> mails.
> >
> > The 56 instances of one-hour SRCU-P scenarios hit the WARN_ON_ONCE()
> > in process_one_work() once, but there is no sign of a WARN_ON_ONCE()
> > from kthread_set_per_cpu().
> 
> This does make me doubt the above :/ At the same time, the
> process_one_work() warning hinges on POOL_DISASSOCIATED being unset,
> which implies having gone through rebind_workers(), which implies
> kthread_set_per_cpu(), which implies me being quite confused...
> 
> > But to your point, this does appear to be
> > a rather low-probability race condition, once per some tens of hours
> > of SRCU-P.
> >
> > Is there a more focused check for the race condition above?
> 
> Not that I'm aware of. I'm thinking that if the pcpu kworker were an RT
> task, then this would guarantee it would get picked in favor of the cpuhp
> thread upon switching out of the stopper, but that still requires the
> kworker running on some CPU (for some reason) during rebind_workers().

Well, I did use the rcutree.use_softirq=0 boot parameter, which creates
per-CPU rcuc kthreads to do what RCU_SOFTIRQ normally does.  But these
rcuc kthreads use the normal park/unpark discipline, so should be safe,
for some value of "should".

							Thanx, Paul


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-13 13:28   ` Lai Jiangshan
  2021-01-13 14:16     ` Valentin Schneider
@ 2021-01-14 13:12     ` Peter Zijlstra
  2021-01-14 13:21       ` Valentin Schneider
  1 sibling, 1 reply; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-14 13:12 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Ingo Molnar, Thomas Gleixner, LKML, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Wed, Jan 13, 2021 at 09:28:13PM +0800, Lai Jiangshan wrote:
> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
> >          * of all workers first and then clear UNBOUND.  As we're called
> >          * from CPU_ONLINE, the following shouldn't fail.
> >          */
> > -       for_each_pool_worker(worker, pool)
> > +       for_each_pool_worker(worker, pool) {
> >                 WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> >                                                   pool->attrs->cpumask) < 0);
> > +               kthread_set_per_cpu(worker->task, true);
> 
> Will the schedule break affinity in the middle of these two lines due to
> patch4 allowing it and result in Paul's reported splat.

So something like the below _should_ work, except I'm seeing odd WARNs.
I'll prod at it some more.

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2371,6 +2371,7 @@ static int worker_thread(void *__worker)
 	/* tell the scheduler that this is a workqueue worker */
 	set_pf_worker(true);
 woke_up:
+	kthread_parkme();
 	raw_spin_lock_irq(&pool->lock);
 
 	/* am I supposed to die? */
@@ -2428,6 +2429,7 @@ static int worker_thread(void *__worker)
 			move_linked_works(work, &worker->scheduled, NULL);
 			process_scheduled_works(worker);
 		}
+		kthread_parkme();
 	} while (keep_working(pool));
 
 	worker_set_flags(worker, WORKER_PREP);
@@ -4978,9 +4980,9 @@ static void rebind_workers(struct worker
 	 * from CPU_ONLINE, the following shouldn't fail.
 	 */
 	for_each_pool_worker(worker, pool) {
-		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
-						  pool->attrs->cpumask) < 0);
+		kthread_park(worker->task);
 		kthread_set_per_cpu(worker->task, true);
+		kthread_unpark(worker->task);
 	}
 
 	raw_spin_lock_irq(&pool->lock);


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-14 13:12     ` Peter Zijlstra
@ 2021-01-14 13:21       ` Valentin Schneider
  2021-01-14 15:34         ` Peter Zijlstra
  0 siblings, 1 reply; 23+ messages in thread
From: Valentin Schneider @ 2021-01-14 13:21 UTC (permalink / raw)
  To: Peter Zijlstra, Lai Jiangshan
  Cc: Ingo Molnar, Thomas Gleixner, LKML, Qian Cai, Vincent Donnefort,
	Dexuan Cui, Paul E. McKenney, Vincent Guittot, Steven Rostedt,
	Tejun Heo

On 14/01/21 14:12, Peter Zijlstra wrote:
> On Wed, Jan 13, 2021 at 09:28:13PM +0800, Lai Jiangshan wrote:
>> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
>> > @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
>> >          * of all workers first and then clear UNBOUND.  As we're called
>> >          * from CPU_ONLINE, the following shouldn't fail.
>> >          */
>> > -       for_each_pool_worker(worker, pool)
>> > +       for_each_pool_worker(worker, pool) {
>> >                 WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
>> >                                                   pool->attrs->cpumask) < 0);
>> > +               kthread_set_per_cpu(worker->task, true);
>>
>> Will the schedule break affinity in the middle of these two lines due to
>> patch4 allowing it and result in Paul's reported splat.
>
> So something like the below _should_ work, except i'm seeing odd WARNs.
> I'll prod at it some more.
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2371,6 +2371,7 @@ static int worker_thread(void *__worker)
>       /* tell the scheduler that this is a workqueue worker */
>       set_pf_worker(true);
>  woke_up:
> +	kthread_parkme();
>       raw_spin_lock_irq(&pool->lock);
>
>       /* am I supposed to die? */
> @@ -2428,6 +2429,7 @@ static int worker_thread(void *__worker)
>                       move_linked_works(work, &worker->scheduled, NULL);
>                       process_scheduled_works(worker);
>               }
> +		kthread_parkme();
>       } while (keep_working(pool));
>
>       worker_set_flags(worker, WORKER_PREP);
> @@ -4978,9 +4980,9 @@ static void rebind_workers(struct worker
>        * from CPU_ONLINE, the following shouldn't fail.
>        */
>       for_each_pool_worker(worker, pool) {
> -		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> -						  pool->attrs->cpumask) < 0);
> +		kthread_park(worker->task);

Don't we still need an affinity change here, to undo what was done in
unbind_workers()?

Would something like

  __kthread_bind_mask(worker->task, pool->attrs->cpumask, TASK_PARKED)

even work?

>               kthread_set_per_cpu(worker->task, true);
> +		kthread_unpark(worker->task);
>       }
>
>       raw_spin_lock_irq(&pool->lock);


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-14 13:21       ` Valentin Schneider
@ 2021-01-14 15:34         ` Peter Zijlstra
  2021-01-16  6:27           ` Lai Jiangshan
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-14 15:34 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Lai Jiangshan, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Thu, Jan 14, 2021 at 01:21:26PM +0000, Valentin Schneider wrote:
> On 14/01/21 14:12, Peter Zijlstra wrote:

> > -		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> > -						  pool->attrs->cpumask) < 0);
> > +		kthread_park(worker->task);
> 
> Don't we still need an affinity change here, to undo what was done in
> unbind_workers()?
> 
> Would something like
> 
>   __kthread_bind_mask(worker->task, pool->attrs->cpumask, TASK_PARKED)
> 
> even work?
> 
> >               kthread_set_per_cpu(worker->task, true);
> > +		kthread_unpark(worker->task);

Nope, look at what kthread_unpark() does; what was missing was
assigning kthread->cpu, though.

The below seems to actually work. Rescuer is still a problem though.

---
 include/linux/kthread.h |  2 +-
 kernel/kthread.c        | 14 ++++++++------
 kernel/sched/core.c     | 19 ++++++++++++++++++-
 kernel/smpboot.c        |  2 +-
 kernel/workqueue.c      | 22 +++++++++++++---------
 5 files changed, 41 insertions(+), 18 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index fdd5a52e35d8..2484ed97e72f 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -33,7 +33,7 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
 					  unsigned int cpu,
 					  const char *namefmt);
 
-void kthread_set_per_cpu(struct task_struct *k, bool set);
+void kthread_set_per_cpu(struct task_struct *k, int cpu);
 bool kthread_is_per_cpu(struct task_struct *k);
 
 /**
diff --git a/kernel/kthread.c b/kernel/kthread.c
index bead90275d2b..e0e4a423f184 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -497,19 +497,21 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
 	return p;
 }
 
-void kthread_set_per_cpu(struct task_struct *k, bool set)
+void kthread_set_per_cpu(struct task_struct *k, int cpu)
 {
 	struct kthread *kthread = to_kthread(k);
 	if (!kthread)
 		return;
 
-	if (set) {
-		WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
-		WARN_ON_ONCE(k->nr_cpus_allowed != 1);
-		set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
-	} else {
+	WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
+
+	if (cpu < 0) {
 		clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+		return;
 	}
+
+	kthread->cpu = cpu;
+	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
 }
 
 bool kthread_is_per_cpu(struct task_struct *k)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 60b257d845fa..c2fdeeb6af2b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7589,7 +7589,24 @@ int sched_cpu_dying(unsigned int cpu)
 	sched_tick_stop(cpu);
 
 	rq_lock_irqsave(rq, &rf);
-	BUG_ON(rq->nr_running != 1 || rq_has_pinned_tasks(rq));
+	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
+		struct task_struct *g, *p;
+
+		pr_crit("CPU%d nr_running=%d\n", cpu, rq->nr_running);
+		rcu_read_lock();
+		for_each_process_thread(g, p) {
+			if (task_cpu(p) != cpu)
+				continue;
+
+			if (!task_on_rq_queued(p))
+				continue;
+
+			pr_crit("\tp=%s\n", p->comm);
+		}
+		rcu_read_unlock();
+
+		WARN_ON_ONCE(1);
+	}
 	rq_unlock_irqrestore(rq, &rf);
 
 	calc_load_migrate(rq);
diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index b0abe575a524..f25208e8df83 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -188,7 +188,7 @@ __smpboot_create_thread(struct smp_hotplug_thread *ht, unsigned int cpu)
 		kfree(td);
 		return PTR_ERR(tsk);
 	}
-	kthread_set_per_cpu(tsk, true);
+	kthread_set_per_cpu(tsk, cpu);
 	/*
 	 * Park the thread so that it could start right on the CPU
 	 * when it is available.
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ec0771e4a3fb..b518fd67a792 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1862,7 +1862,7 @@ static void worker_attach_to_pool(struct worker *worker,
 	if (pool->flags & POOL_DISASSOCIATED)
 		worker->flags |= WORKER_UNBOUND;
 	else
-		kthread_set_per_cpu(worker->task, true);
+		kthread_set_per_cpu(worker->task, pool->cpu);
 
 	list_add_tail(&worker->node, &pool->workers);
 	worker->pool = pool;
@@ -1885,7 +1885,7 @@ static void worker_detach_from_pool(struct worker *worker)
 
 	mutex_lock(&wq_pool_attach_mutex);
 
-	kthread_set_per_cpu(worker->task, false);
+	kthread_set_per_cpu(worker->task, -1);
 	list_del(&worker->node);
 	worker->pool = NULL;
 
@@ -2371,6 +2371,7 @@ static int worker_thread(void *__worker)
 	/* tell the scheduler that this is a workqueue worker */
 	set_pf_worker(true);
 woke_up:
+	kthread_parkme();
 	raw_spin_lock_irq(&pool->lock);
 
 	/* am I supposed to die? */
@@ -2428,7 +2429,7 @@ static int worker_thread(void *__worker)
 			move_linked_works(work, &worker->scheduled, NULL);
 			process_scheduled_works(worker);
 		}
-	} while (keep_working(pool));
+	} while (keep_working(pool) && !kthread_should_park());
 
 	worker_set_flags(worker, WORKER_PREP);
 sleep:
@@ -2440,9 +2441,12 @@ static int worker_thread(void *__worker)
 	 * event.
 	 */
 	worker_enter_idle(worker);
-	__set_current_state(TASK_IDLE);
+	set_current_state(TASK_IDLE);
 	raw_spin_unlock_irq(&pool->lock);
-	schedule();
+
+	if (!kthread_should_park())
+		schedule();
+
 	goto woke_up;
 }
 
@@ -4923,7 +4927,7 @@ static void unbind_workers(int cpu)
 		raw_spin_unlock_irq(&pool->lock);
 
 		for_each_pool_worker(worker, pool) {
-			kthread_set_per_cpu(worker->task, false);
+			kthread_set_per_cpu(worker->task, -1);
 			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
 		}
 
@@ -4978,9 +4982,9 @@ static void rebind_workers(struct worker_pool *pool)
 	 * from CPU_ONLINE, the following shouldn't fail.
 	 */
 	for_each_pool_worker(worker, pool) {
-		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
-						  pool->attrs->cpumask) < 0);
-		kthread_set_per_cpu(worker->task, true);
+		WARN_ON_ONCE(kthread_park(worker->task) < 0);
+		kthread_set_per_cpu(worker->task, pool->cpu);
+		kthread_unpark(worker->task);
 	}
 
 	raw_spin_lock_irq(&pool->lock);


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-14 15:34         ` Peter Zijlstra
@ 2021-01-16  6:27           ` Lai Jiangshan
  2021-01-16 12:45             ` Peter Zijlstra
  0 siblings, 1 reply; 23+ messages in thread
From: Lai Jiangshan @ 2021-01-16  6:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Thu, Jan 14, 2021 at 11:35 PM Peter Zijlstra <peterz@infradead.org> wrote:

>
> -void kthread_set_per_cpu(struct task_struct *k, bool set)
> +void kthread_set_per_cpu(struct task_struct *k, int cpu)
>  {
>         struct kthread *kthread = to_kthread(k);
>         if (!kthread)
>                 return;
>
> -       if (set) {
> -               WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> -               WARN_ON_ONCE(k->nr_cpus_allowed != 1);
> -               set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> -       } else {
> +       WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> +
> +       if (cpu < 0) {
>                 clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> +               return;
>         }
> +
> +       kthread->cpu = cpu;
> +       set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
>  }
>

I don't see the code that sets the task's cpumask to the CPU, since
set_cpus_allowed_ptr() was removed from rebind_workers().

Is it somewhere I missed?

>
> @@ -2371,6 +2371,7 @@ static int worker_thread(void *__worker)
>         /* tell the scheduler that this is a workqueue worker */
>         set_pf_worker(true);
>  woke_up:
> +       kthread_parkme();
>         raw_spin_lock_irq(&pool->lock);
>
>         /* am I supposed to die? */
> @@ -2428,7 +2429,7 @@ static int worker_thread(void *__worker)
>                         move_linked_works(work, &worker->scheduled, NULL);
>                         process_scheduled_works(worker);
>                 }
> -       } while (keep_working(pool));
> +       } while (keep_working(pool) && !kthread_should_park());
>
>         worker_set_flags(worker, WORKER_PREP);
>  sleep:
> @@ -2440,9 +2441,12 @@ static int worker_thread(void *__worker)
>          * event.
>          */
>         worker_enter_idle(worker);
> -       __set_current_state(TASK_IDLE);
> +       set_current_state(TASK_IDLE);
>         raw_spin_unlock_irq(&pool->lock);
> -       schedule();
> +
> +       if (!kthread_should_park())
> +               schedule();
> +
>         goto woke_up;
>  }
>
> @@ -4923,7 +4927,7 @@ static void unbind_workers(int cpu)
>                 raw_spin_unlock_irq(&pool->lock);
>
>                 for_each_pool_worker(worker, pool) {
> -                       kthread_set_per_cpu(worker->task, false);
> +                       kthread_set_per_cpu(worker->task, -1);
>                         WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
>                 }
>
> @@ -4978,9 +4982,9 @@ static void rebind_workers(struct worker_pool *pool)
>          * from CPU_ONLINE, the following shouldn't fail.
>          */
>         for_each_pool_worker(worker, pool) {
> -               WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> -                                                 pool->attrs->cpumask) < 0);
> -               kthread_set_per_cpu(worker->task, true);
> +               WARN_ON_ONCE(kthread_park(worker->task) < 0);
> +               kthread_set_per_cpu(worker->task, pool->cpu);
> +               kthread_unpark(worker->task);

I feel nervous about using kthread_park() here and kthread_parkme() in
the worker thread.  And adding kthread_should_park() to the fast path
also daunts me.

How about using a new KTHREAD_XXXX instead of KTHREAD_IS_PER_CPU,
so that we can set and clear KTHREAD_XXXX freely, especially before
set_cpus_allowed_ptr().

For example, we can add a new KTHREAD_ACTIVE_MASK_ONLY, which means
that even when
  is_per_cpu_kthread() && the_cpu_is_online() &&
  the_cpu_is_not_active() && KTHREAD_ACTIVE_MASK_ONLY
we should still break the affinity.

That way we can easily set KTHREAD_ACTIVE_MASK_ONLY in unbind_workers()
and clear it here, and avoid adding new synchronization like
kthread_park().
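
A rough illustration of that idea (entirely hypothetical; the flag, its
accessor and where it would be checked are assumptions, not code from
this thread):

  /* hypothetical: a tagged per-cpu kthread may have its affinity
   * broken while its CPU is online but not active */
  bool kthread_active_mask_only(struct task_struct *k);

  /* then, in the scheduler's push path, roughly: */
  int cpu = task_cpu(p);

  if (is_per_cpu_kthread(p) && cpu_online(cpu) && !cpu_active(cpu) &&
      kthread_active_mask_only(p)) {
          /* break the affinity and push the task away */
  }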

>         }
>
>         raw_spin_lock_irq(&pool->lock);


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-16  6:27           ` Lai Jiangshan
@ 2021-01-16 12:45             ` Peter Zijlstra
  2021-01-16 14:45               ` Lai Jiangshan
  2021-01-16 15:13               ` Peter Zijlstra
  0 siblings, 2 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-16 12:45 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Sat, Jan 16, 2021 at 02:27:09PM +0800, Lai Jiangshan wrote:
> On Thu, Jan 14, 2021 at 11:35 PM Peter Zijlstra <peterz@infradead.org> wrote:
> 
> >
> > -void kthread_set_per_cpu(struct task_struct *k, bool set)
> > +void kthread_set_per_cpu(struct task_struct *k, int cpu)
> >  {
> >         struct kthread *kthread = to_kthread(k);
> >         if (!kthread)
> >                 return;
> >
> > -       if (set) {
> > -               WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> > -               WARN_ON_ONCE(k->nr_cpus_allowed != 1);
> > -               set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> > -       } else {
> > +       WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> > +
> > +       if (cpu < 0) {
> >                 clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> > +               return;
> >         }
> > +
> > +       kthread->cpu = cpu;
> > +       set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> >  }
> >
> 
> I don't see the code to set the mask of the cpu to the task
> since set_cpus_allowed_ptr() is removed from rebind_worker().
> 
> Is it somewhere I missed?

kthread_unpark().

> > @@ -4978,9 +4982,9 @@ static void rebind_workers(struct worker_pool *pool)
> >          * from CPU_ONLINE, the following shouldn't fail.
> >          */
> >         for_each_pool_worker(worker, pool) {
> > -               WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> > -                                                 pool->attrs->cpumask) < 0);
> > -               kthread_set_per_cpu(worker->task, true);
> > +               WARN_ON_ONCE(kthread_park(worker->task) < 0);
> > +               kthread_set_per_cpu(worker->task, pool->cpu);
> > +               kthread_unpark(worker->task);
> 
> I feel nervous to use kthread_park() here and kthread_parkme() in
> worker thread.  And adding kthread_should_park() to the fast path
> also daunt me.

Is that really such a hot path that an additional load is problematic?

> How about using a new KTHREAD_XXXX instead of KTHREAD_IS_PER_CPU,
> so that we can set and clear KTHREAD_XXXX freely, especially before
> set_cpus_allowed_ptr().

KTHREAD_IS_PER_CPU is exactly what we need, why make another flag?

The above sequence is nice in that it restores both the
KTHREAD_IS_PER_CPU flag and affinity while the task is frozen, so there
are no races where one is observed and not the other.

It is also the exact sequence normal per-cpu threads (smpboot) use to
preserve affinity.


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-16 12:45             ` Peter Zijlstra
@ 2021-01-16 14:45               ` Lai Jiangshan
  2021-01-16 15:16                 ` Peter Zijlstra
  2021-01-16 15:13               ` Peter Zijlstra
  1 sibling, 1 reply; 23+ messages in thread
From: Lai Jiangshan @ 2021-01-16 14:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Sat, Jan 16, 2021 at 8:45 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Sat, Jan 16, 2021 at 02:27:09PM +0800, Lai Jiangshan wrote:
> > On Thu, Jan 14, 2021 at 11:35 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > >
> > > -void kthread_set_per_cpu(struct task_struct *k, bool set)
> > > +void kthread_set_per_cpu(struct task_struct *k, int cpu)
> > >  {
> > >         struct kthread *kthread = to_kthread(k);
> > >         if (!kthread)
> > >                 return;
> > >
> > > -       if (set) {
> > > -               WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> > > -               WARN_ON_ONCE(k->nr_cpus_allowed != 1);
> > > -               set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> > > -       } else {
> > > +       WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> > > +
> > > +       if (cpu < 0) {
> > >                 clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> > > +               return;
> > >         }
> > > +
> > > +       kthread->cpu = cpu;
> > > +       set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> > >  }
> > >
> >
> > I don't see the code to set the mask of the cpu to the task
> > since set_cpus_allowed_ptr() is removed from rebind_worker().
> >
> > Is it somewhere I missed?
>
> kthread_unpark().
>
> > > @@ -4978,9 +4982,9 @@ static void rebind_workers(struct worker_pool *pool)
> > >          * from CPU_ONLINE, the following shouldn't fail.
> > >          */
> > >         for_each_pool_worker(worker, pool) {
> > > -               WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> > > -                                                 pool->attrs->cpumask) < 0);
> > > -               kthread_set_per_cpu(worker->task, true);
> > > +               WARN_ON_ONCE(kthread_park(worker->task) < 0);
> > > +               kthread_set_per_cpu(worker->task, pool->cpu);
> > > +               kthread_unpark(worker->task);
> >
> > I feel nervous to use kthread_park() here and kthread_parkme() in
> > worker thread.  And adding kthread_should_park() to the fast path
> > also daunt me.
>
> Is that really such a hot path that an additional load is problematic?
>
> > How about using a new KTHREAD_XXXX instead of KTHREAD_IS_PER_CPU,
> > so that we can set and clear KTHREAD_XXXX freely, especially before
> > set_cpus_allowed_ptr().
>
> KTHREAD_IS_PER_CPU is exactly what we need, why make another flag?
>
> The above sequence is nice in that it restores both the
> KTHREAD_IS_PER_CPU flag and affinity while the task is frozen, so there
> are no races where one is observed and not the other.
>
> It is also the exact sequence normal per-cpu threads (smpboot) use to
> preserve affinity.

Other per-cpu threads normally do short-lived work. A wq work item can
be lengthy, CPU-intensive, heavy on lock acquisition, or even call
get_online_cpus(), which might result in a deadlock with
kthread_park().


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-16 12:45             ` Peter Zijlstra
  2021-01-16 14:45               ` Lai Jiangshan
@ 2021-01-16 15:13               ` Peter Zijlstra
  1 sibling, 0 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-16 15:13 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Sat, Jan 16, 2021 at 01:45:23PM +0100, Peter Zijlstra wrote:
> On Sat, Jan 16, 2021 at 02:27:09PM +0800, Lai Jiangshan wrote:

> > I feel nervous to use kthread_park() here and kthread_parkme() in
> > worker thread.  And adding kthread_should_park() to the fast path
> > also daunt me.
> 
> Is that really such a hot path that an additional load is problematic?

I think we can remove it. It would mean the kthread_park() from the
online callback will take a bit longer, as it will have to wait for all
the works to complete, but that should not be a fundamental problem.


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-16 14:45               ` Lai Jiangshan
@ 2021-01-16 15:16                 ` Peter Zijlstra
  2021-01-16 16:14                   ` Lai Jiangshan
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-16 15:16 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Sat, Jan 16, 2021 at 10:45:04PM +0800, Lai Jiangshan wrote:
> On Sat, Jan 16, 2021 at 8:45 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > It is also the exact sequence normal per-cpu threads (smpboot) use to
> > preserve affinity.
> 
> Other per-cpu threads normally do short-live works. wq's work can be
> lengthy, cpu-intensive, heavy-lock-acquiring or even call
> get_online_cpus() which might result in a deadlock with kthread_park().

kthread_park() is called by the migration thread running the
workqueue_online_cpu() callback.

kthread_parkme() is called by the worker thread, after it completes a
work and has no locks held from that context.




* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-16 15:16                 ` Peter Zijlstra
@ 2021-01-16 16:14                   ` Lai Jiangshan
  2021-01-16 18:46                     ` Peter Zijlstra
  0 siblings, 1 reply; 23+ messages in thread
From: Lai Jiangshan @ 2021-01-16 16:14 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Sat, Jan 16, 2021 at 11:16 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Sat, Jan 16, 2021 at 10:45:04PM +0800, Lai Jiangshan wrote:
> > On Sat, Jan 16, 2021 at 8:45 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > > It is also the exact sequence normal per-cpu threads (smpboot) use to
> > > preserve affinity.
> >
> > Other per-cpu threads normally do short-live works. wq's work can be
> > lengthy, cpu-intensive, heavy-lock-acquiring or even call
> > get_online_cpus() which might result in a deadlock with kthread_park().
>
> kthread_park() is called by the migration thread running the
> workqueue_online_cpu() callback.
>
> kthread_parkme() is called by the worker thread, after it completes a
> work and has no locks held from that context.
>
>

BP:                 AP:                  worker:
cpus_write_lock()
bringup_cpu()                            work_item_func()
  bringup_wait_for_ap                      get_online_cpus()
                    kthread_park(worker)


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-16 16:14                   ` Lai Jiangshan
@ 2021-01-16 18:46                     ` Peter Zijlstra
  2021-01-17  9:54                       ` Peter Zijlstra
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-16 18:46 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Sun, Jan 17, 2021 at 12:14:34AM +0800, Lai Jiangshan wrote:

> BP:                 AP:                  worker:
> cpus_write_lock()
> bringup_cpu()                            work_item_func()
>   bringup_wait_for_ap                      get_online_cpus()
>                     kthread_park(worker)

Thanks, pictures are easier. Agreed, that's a problem.

I think I've also found another problem: rescuer_thread becomes part
of for_each_pool_worker() between worker_attach_to_pool() and
worker_detach_from_pool(), so it would try to do kthread_park() on the
rescuer when things align. And rescuer_thread() doesn't have a
kthread_parkme().

And we already rely on this 'ugly' thing of first doing
kthread_set_per_cpu() and fixing up the affinity later for the rescuer.

Let me restart the SRCU-P testing with the below delta applied.

---
 kernel/workqueue.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1db769b116a1..894bb885b40b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2368,7 +2368,6 @@ static int worker_thread(void *__worker)
 	/* tell the scheduler that this is a workqueue worker */
 	set_pf_worker(true);
 woke_up:
-	kthread_parkme();
 	raw_spin_lock_irq(&pool->lock);
 
 	/* am I supposed to die? */
@@ -2426,7 +2425,7 @@ static int worker_thread(void *__worker)
 			move_linked_works(work, &worker->scheduled, NULL);
 			process_scheduled_works(worker);
 		}
-	} while (keep_working(pool) && !kthread_should_park());
+	} while (keep_working(pool));
 
 	worker_set_flags(worker, WORKER_PREP);
 sleep:
@@ -2438,12 +2437,9 @@ static int worker_thread(void *__worker)
 	 * event.
 	 */
 	worker_enter_idle(worker);
-	set_current_state(TASK_IDLE);
+	__set_current_state(TASK_IDLE);
 	raw_spin_unlock_irq(&pool->lock);
-
-	if (!kthread_should_park())
-		schedule();
-
+	schedule();
 	goto woke_up;
 }
 
@@ -4979,9 +4975,9 @@ static void rebind_workers(struct worker_pool *pool)
 	 * from CPU_ONLINE, the following shouldn't fail.
 	 */
 	for_each_pool_worker(worker, pool) {
-		WARN_ON_ONCE(kthread_park(worker->task) < 0);
 		kthread_set_per_cpu(worker->task, pool->cpu);
-		kthread_unpark(worker->task);
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
+						  pool->attrs->cpumask) < 0);
 	}
 
 	raw_spin_lock_irq(&pool->lock);


* Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
  2021-01-16 18:46                     ` Peter Zijlstra
@ 2021-01-17  9:54                       ` Peter Zijlstra
  0 siblings, 0 replies; 23+ messages in thread
From: Peter Zijlstra @ 2021-01-17  9:54 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Valentin Schneider, Ingo Molnar, Thomas Gleixner, LKML, Qian Cai,
	Vincent Donnefort, Dexuan Cui, Paul E. McKenney, Vincent Guittot,
	Steven Rostedt, Tejun Heo

On Sat, Jan 16, 2021 at 07:46:47PM +0100, Peter Zijlstra wrote:
> On Sun, Jan 17, 2021 at 12:14:34AM +0800, Lai Jiangshan wrote:
> 
> > BP:                 AP:                  worker:
> > cpus_write_lock()
> > bringup_cpu()                            work_item_func()
> >   bringup_wait_for_ap                      get_online_cpus()
> >                     kthread_park(worker)
> 
> Thanks, pictures are easier. Agreed, that a problem.
> 
> I've also found another problem I think.  rescuer_thread becomes part of
> for_each_pool_worker() between worker_attach_to_pool() and
> worker_detach_from_pool(), so it would try and do kthread_park() on
> rescuer, when things align. And rescuer_thread() doesn't have a
> kthread_parkme().
> 
> And we already rely on this 'ugly' thing of first doing
> kthread_set_per_cpu() and fixing up the affinity later for the rescuer.
> 
> Let me restart the SRCU-P testing with the below delta applied.
> 
> ---
>  kernel/workqueue.c | 14 +++++---------
>  1 file changed, 5 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 1db769b116a1..894bb885b40b 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2368,7 +2368,6 @@ static int worker_thread(void *__worker)
>  	/* tell the scheduler that this is a workqueue worker */
>  	set_pf_worker(true);
>  woke_up:
> -	kthread_parkme();
>  	raw_spin_lock_irq(&pool->lock);
>  
>  	/* am I supposed to die? */
> @@ -2426,7 +2425,7 @@ static int worker_thread(void *__worker)
>  			move_linked_works(work, &worker->scheduled, NULL);
>  			process_scheduled_works(worker);
>  		}
> -	} while (keep_working(pool) && !kthread_should_park());
> +	} while (keep_working(pool));
>  
>  	worker_set_flags(worker, WORKER_PREP);
>  sleep:
> @@ -2438,12 +2437,9 @@ static int worker_thread(void *__worker)
>  	 * event.
>  	 */
>  	worker_enter_idle(worker);
> -	set_current_state(TASK_IDLE);
> +	__set_current_state(TASK_IDLE);
>  	raw_spin_unlock_irq(&pool->lock);
> -
> -	if (!kthread_should_park())
> -		schedule();
> -
> +	schedule();
>  	goto woke_up;
>  }
>  
> @@ -4979,9 +4975,9 @@ static void rebind_workers(struct worker_pool *pool)
>  	 * from CPU_ONLINE, the following shouldn't fail.
>  	 */
>  	for_each_pool_worker(worker, pool) {
> -		WARN_ON_ONCE(kthread_park(worker->task) < 0);
>  		kthread_set_per_cpu(worker->task, pool->cpu);
> -		kthread_unpark(worker->task);
> +		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> +						  pool->attrs->cpumask) < 0);
>  	}
>  
>  	raw_spin_lock_irq(&pool->lock);

In the roughly 80 instances of 18*SRCU-P since sending this, I've got
one sched_cpu_dying splat about a stray kworker, so somthing isn't
right.

My intention was to not think today, so I'll delay that until tomorrow.

