[tip:,sched/urgent] workqueue: Use cpu_possible_mask instead of cpu_active_mask to break affinity

Message ID 161133729644.414.3800664867137502999.tip-bot2@tip-bot2
State Accepted
Commit 547a77d02f8cfb345631ce23b5b548d27afa0fc4

Commit Message

tip-bot2 for Jiri Slaby Jan. 22, 2021, 5:41 p.m. UTC
The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     547a77d02f8cfb345631ce23b5b548d27afa0fc4
Gitweb:        https://git.kernel.org/tip/547a77d02f8cfb345631ce23b5b548d27afa0fc4
Author:        Lai Jiangshan <laijs@linux.alibaba.com>
AuthorDate:    Mon, 11 Jan 2021 23:26:33 +08:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 22 Jan 2021 15:09:41 +01:00

workqueue: Use cpu_possible_mask instead of cpu_active_mask to break affinity

The scheduler no longer breaks affinity for us, so the workqueue code has
to emulate that behavior itself: change the worker's cpumask to
cpu_possible_mask.

Additionally, other CPUs may come online later while the worker is still
running with pending work items. The worker should be allowed to use those
later-onlined CPUs, as before, and process the work items as soon as
possible. cpu_active_mask cannot achieve this, because it only contains
the CPUs that are online at unbind time; cpu_possible_mask can.

Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20210111152638.2417-4-jiangshanlai@gmail.com
---
 kernel/workqueue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9880b6c..1646331 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4920,7 +4920,7 @@  static void unbind_workers(int cpu)
 		raw_spin_unlock_irq(&pool->lock);
 
 		for_each_pool_worker(worker, pool)
-			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
 
 		mutex_unlock(&wq_pool_attach_mutex);