[tip:,sched/urgent] workqueue: Restrict affinity change to rescuer

Message ID 161133729498.414.8534909860979779052.tip-bot2@tip-bot2
State New, archived

Commit Message

irqchip-bot for Marc Zyngier Jan. 22, 2021, 5:41 p.m. UTC
The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     640f17c82460e9724fd256f0a1f5d99e7ff0bda4
Gitweb:        https://git.kernel.org/tip/640f17c82460e9724fd256f0a1f5d99e7ff0bda4
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Fri, 15 Jan 2021 19:08:36 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 22 Jan 2021 15:09:43 +01:00

workqueue: Restrict affinity change to rescuer

create_worker() will already have set the right affinity using
kthread_bind_mask(), which means only the rescuer needs to change
its affinity.

However, during cpu-hot-unplug a regular task is not allowed to run
on a CPU that is online && !active, as it would be pushed away quite
aggressively. We need KTHREAD_IS_PER_CPU to survive in that
environment.

Therefore set the affinity after getting that magic flag.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20210121103506.826629830@infradead.org
---
 kernel/workqueue.c |  9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

Patch

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index cce3433..894bb88 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1849,12 +1849,6 @@  static void worker_attach_to_pool(struct worker *worker,
 	mutex_lock(&wq_pool_attach_mutex);
 
 	/*
-	 * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
-	 * online CPUs.  It'll be re-applied when any of the CPUs come up.
-	 */
-	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
-
-	/*
 	 * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains
 	 * stable across this function.  See the comments above the flag
 	 * definition for details.
@@ -1864,6 +1858,9 @@  static void worker_attach_to_pool(struct worker *worker,
 	else
 		kthread_set_per_cpu(worker->task, pool->cpu);
 
+	if (worker->rescue_wq)
+		set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
+
 	list_add_tail(&worker->node, &pool->workers);
 	worker->pool = pool;
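
For readability, here is a sketch of how worker_attach_to_pool() reads with
the patch applied, reconstructed from the hunks above (simplified and
abbreviated, not the verbatim kernel/workqueue.c source):

```c
static void worker_attach_to_pool(struct worker *worker,
				  struct worker_pool *pool)
{
	mutex_lock(&wq_pool_attach_mutex);

	/*
	 * Set KTHREAD_IS_PER_CPU first: during hot-unplug a task may only
	 * keep running on an online && !active CPU if this flag is set.
	 * wq_pool_attach_mutex keeps %POOL_DISASSOCIATED stable here.
	 */
	if (pool->flags & POOL_DISASSOCIATED)
		worker->flags |= WORKER_UNBOUND;
	else
		kthread_set_per_cpu(worker->task, pool->cpu);

	/*
	 * Only the rescuer changes affinity here; regular workers were
	 * already bound via kthread_bind_mask() in create_worker().
	 */
	if (worker->rescue_wq)
		set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);

	list_add_tail(&worker->node, &pool->workers);
	worker->pool = pool;

	mutex_unlock(&wq_pool_attach_mutex);
}
```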