* [PATCH 00/10] workqueue: break affinity initiatively
@ 2020-12-14 15:54 Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 01/10] workqueue: restore unbound_workers' cpumask correctly Lai Jiangshan
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Hillf Danton, Valentin Schneider, Qian Cai,
	Peter Zijlstra, Vincent Donnefort, Tejun Heo

From: Lai Jiangshan <laijs@linux.alibaba.com>

Commit 06249738a41a ("workqueue: Manually break affinity on hotplug")
said that the scheduler will no longer force-break affinity for us.

But workqueue highly depends on the old behavior. Many parts of the code
rely on it, so 06249738a41a ("workqueue: Manually break affinity on
hotplug") alone is not enough to change it, and the commit has flaws of
its own as well.

We need to thoroughly update the way workqueue handles affinity during
cpu hot[un]plug.  That is what this patchset intends to do; it also
replaces Valentin Schneider's patch [1].

Patch 1 fixes a flaw reported by Hillf Danton <hdanton@sina.com>.
I have to include this fix because later patches depend on it.

The patchset is based on tip/master rather than the workqueue tree,
because the patchset is a complement to 06249738a41a ("workqueue:
Manually break affinity on hotplug"), which is only in tip/master for now.

[1]: https://lore.kernel.org/r/ff62e3ee994efb3620177bf7b19fab16f4866845.camel@redhat.com

Cc: Hillf Danton <hdanton@sina.com>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Qian Cai <cai@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Donnefort <vincent.donnefort@arm.com>
Cc: Tejun Heo <tj@kernel.org>

Lai Jiangshan (10):
  workqueue: restore unbound_workers' cpumask correctly
  workqueue: use cpu_possible_mask instead of cpu_active_mask to break
    affinity
  workqueue: Manually break affinity on pool detachment
  workqueue: don't set the worker's cpumask when kthread_bind_mask()
  workqueue: introduce wq_online_cpumask
  workqueue: use wq_online_cpumask in restore_unbound_workers_cpumask()
  workqueue: Manually break affinity on hotplug for unbound pool
  workqueue: reorganize workqueue_online_cpu()
  workqueue: reorganize workqueue_offline_cpu() unbind_workers()
  workqueue: Fix affinity of kworkers when attaching into pool

 kernel/workqueue.c | 212 +++++++++++++++++++++++++++------------------
 1 file changed, 130 insertions(+), 82 deletions(-)

-- 
2.19.1.6.gb485710b



* [PATCH 01/10] workqueue: restore unbound_workers' cpumask correctly
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity Lai Jiangshan
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Lai Jiangshan, Hillf Danton, Tejun Heo, Lai Jiangshan

From: Lai Jiangshan <laijs@linux.alibaba.com>

When we restore workers' cpumask, we should restore it to the designed
pool->attrs->cpumask, and we need to do so only when the first CPU of
the pool's cpumask comes online.

Cc: Hillf Danton <hdanton@sina.com>
Reported-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c71da2a59e12..aba71ab359dd 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5031,9 +5031,13 @@ static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
 
 	cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);
 
+	/* is @cpu the first one onlined for the @pool? */
+	if (cpumask_weight(&cpumask) > 1)
+		return;
+
 	/* as we're called from CPU_ONLINE, the following shouldn't fail */
 	for_each_pool_worker(worker, pool)
-		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, &cpumask) < 0);
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0);
 }
 
 int workqueue_prepare_cpu(unsigned int cpu)
-- 
2.19.1.6.gb485710b



* [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 01/10] workqueue: restore unbound_workers' cpumask correctly Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-14 17:25   ` Peter Zijlstra
  2020-12-16 14:32   ` Tejun Heo
  2020-12-14 15:54 ` [PATCH 03/10] workqueue: Manually break affinity on pool detachment Lai Jiangshan
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Tejun Heo, Lai Jiangshan, Peter Zijlstra,
	Valentin Schneider, Daniel Bristot de Oliveira

From: Lai Jiangshan <laijs@linux.alibaba.com>

There might be other CPUs online. Workers that lose the binding to
their CPU should have a chance to work on CPUs that are onlined later.

Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index aba71ab359dd..1f5b8385c0cf 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4909,8 +4909,9 @@ static void unbind_workers(int cpu)
 
 		raw_spin_unlock_irq(&pool->lock);
 
+		/* don't rely on the scheduler to force break affinity for us. */
 		for_each_pool_worker(worker, pool)
-			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
 
 		mutex_unlock(&wq_pool_attach_mutex);
 
-- 
2.19.1.6.gb485710b



* [PATCH 03/10] workqueue: Manually break affinity on pool detachment
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 01/10] workqueue: restore unbound_workers' cpumask correctly Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 04/10] workqueue: don't set the worker's cpumask when kthread_bind_mask() Lai Jiangshan
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Tejun Heo, Lai Jiangshan, Peter Zijlstra,
	Valentin Schneider, Daniel Bristot de Oliveira

From: Lai Jiangshan <laijs@linux.alibaba.com>

Don't rely on the scheduler to force break affinity for us -- it will
stop doing that for per-cpu-kthreads.

Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1f5b8385c0cf..1f6cb83e0bc5 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1885,6 +1885,16 @@ static void worker_detach_from_pool(struct worker *worker)
 
 	if (list_empty(&pool->workers))
 		detach_completion = pool->detach_completion;
+
+	/*
+	 * The cpus of pool->attrs->cpumask might all go offline after
+	 * detachment, and the scheduler may not force break affinity
+	 * for us, so we do it on our own and unbind this worker which
+	 * can't be unbound by workqueue_offline_cpu() since it doesn't
+	 * belong to any pool after it.
+	 */
+	set_cpus_allowed_ptr(worker->task, cpu_possible_mask);
+
 	mutex_unlock(&wq_pool_attach_mutex);
 
 	/* clear leftover flags without pool->lock after it is detached */
-- 
2.19.1.6.gb485710b



* [PATCH 04/10] workqueue: don't set the worker's cpumask when kthread_bind_mask()
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (2 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 03/10] workqueue: Manually break affinity on pool detachment Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-16 14:39   ` Tejun Heo
  2020-12-14 15:54 ` [PATCH 05/10] workqueue: introduce wq_online_cpumask Lai Jiangshan
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Lai Jiangshan, Peter Zijlstra, Tejun Heo, Lai Jiangshan

From: Lai Jiangshan <laijs@linux.alibaba.com>

There might be no online CPU in pool->attrs->cpumask.  The worker's
cpumask will be set properly later in worker_attach_to_pool().

Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1f6cb83e0bc5..f679c599a70b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1945,7 +1945,15 @@ static struct worker *create_worker(struct worker_pool *pool)
 		goto fail;
 
 	set_user_nice(worker->task, pool->attrs->nice);
-	kthread_bind_mask(worker->task, pool->attrs->cpumask);
+
+	/*
+	 * Set PF_NO_SETAFFINITY via kthread_bind_mask().  We use
+	 * cpu_possible_mask other than pool->attrs->cpumask, because
+	 * there might be no online cpu in the pool->attrs->cpumask.
+	 * The cpumask of the worker will be set properly later in
+	 * worker_attach_to_pool().
+	 */
+	kthread_bind_mask(worker->task, cpu_possible_mask);
 
 	/* successful, attach the worker to the pool */
 	worker_attach_to_pool(worker, pool);
-- 
2.19.1.6.gb485710b



* [PATCH 05/10] workqueue: introduce wq_online_cpumask
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (3 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 04/10] workqueue: don't set the worker's cpumask when kthread_bind_mask() Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 06/10] workqueue: use wq_online_cpumask in restore_unbound_workers_cpumask() Lai Jiangshan
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Lai Jiangshan, Tejun Heo, Lai Jiangshan

From: Lai Jiangshan <laijs@linux.alibaba.com>

wq_online_cpumask is the cached result of cpu_online_mask with the
going-down cpu cleared.  It is needed by later patches for setting the
correct cpumask for workers and for breaking affinity initiatively.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f679c599a70b..8aca3afc88aa 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -310,6 +310,9 @@ static bool workqueue_freezing;		/* PL: have wqs started freezing? */
 /* PL: allowable cpus for unbound wqs and work items */
 static cpumask_var_t wq_unbound_cpumask;
 
+/* PL: online cpus (cpu_online_mask with the going-down cpu cleared) */
+static cpumask_var_t wq_online_cpumask;
+
 /* CPU where unbound work was last round robin scheduled from this CPU */
 static DEFINE_PER_CPU(int, wq_rr_cpu_last);
 
@@ -3830,12 +3833,10 @@ static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
  * wq_calc_node_cpumask - calculate a wq_attrs' cpumask for the specified node
  * @attrs: the wq_attrs of the default pwq of the target workqueue
  * @node: the target NUMA node
- * @cpu_going_down: if >= 0, the CPU to consider as offline
  * @cpumask: outarg, the resulting cpumask
  *
- * Calculate the cpumask a workqueue with @attrs should use on @node.  If
- * @cpu_going_down is >= 0, that cpu is considered offline during
- * calculation.  The result is stored in @cpumask.
+ * Calculate the cpumask a workqueue with @attrs should use on @node.
+ * The result is stored in @cpumask.
  *
  * If NUMA affinity is not enabled, @attrs->cpumask is always used.  If
  * enabled and @node has online CPUs requested by @attrs, the returned
@@ -3849,15 +3850,14 @@ static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
  * %false if equal.
  */
 static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
-				 int cpu_going_down, cpumask_t *cpumask)
+				 cpumask_t *cpumask)
 {
 	if (!wq_numa_enabled || attrs->no_numa)
 		goto use_dfl;
 
 	/* does @node have any online CPUs @attrs wants? */
 	cpumask_and(cpumask, cpumask_of_node(node), attrs->cpumask);
-	if (cpu_going_down >= 0)
-		cpumask_clear_cpu(cpu_going_down, cpumask);
+	cpumask_and(cpumask, cpumask, wq_online_cpumask);
 
 	if (cpumask_empty(cpumask))
 		goto use_dfl;
@@ -3966,7 +3966,7 @@ apply_wqattrs_prepare(struct workqueue_struct *wq,
 		goto out_free;
 
 	for_each_node(node) {
-		if (wq_calc_node_cpumask(new_attrs, node, -1, tmp_attrs->cpumask)) {
+		if (wq_calc_node_cpumask(new_attrs, node, tmp_attrs->cpumask)) {
 			ctx->pwq_tbl[node] = alloc_unbound_pwq(wq, tmp_attrs);
 			if (!ctx->pwq_tbl[node])
 				goto out_free;
@@ -4091,7 +4091,6 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
  * wq_update_unbound_numa - update NUMA affinity of a wq for CPU hot[un]plug
  * @wq: the target workqueue
  * @cpu: the CPU coming up or going down
- * @online: whether @cpu is coming up or going down
  *
  * This function is to be called from %CPU_DOWN_PREPARE, %CPU_ONLINE and
  * %CPU_DOWN_FAILED.  @cpu is being hot[un]plugged, update NUMA affinity of
@@ -4109,11 +4108,9 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
  * affinity, it's the user's responsibility to flush the work item from
  * CPU_DOWN_PREPARE.
  */
-static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu,
-				   bool online)
+static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu)
 {
 	int node = cpu_to_node(cpu);
-	int cpu_off = online ? -1 : cpu;
 	struct pool_workqueue *old_pwq = NULL, *pwq;
 	struct workqueue_attrs *target_attrs;
 	cpumask_t *cpumask;
@@ -4141,7 +4138,7 @@ static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu,
 	 * and create a new one if they don't match.  If the target cpumask
 	 * equals the default pwq's, the default pwq should be used.
 	 */
-	if (wq_calc_node_cpumask(wq->dfl_pwq->pool->attrs, node, cpu_off, cpumask)) {
+	if (wq_calc_node_cpumask(wq->dfl_pwq->pool->attrs, node, cpumask)) {
 		if (cpumask_equal(cpumask, pwq->pool->attrs->cpumask))
 			return;
 	} else {
@@ -5079,6 +5076,7 @@ int workqueue_online_cpu(unsigned int cpu)
 	int pi;
 
 	mutex_lock(&wq_pool_mutex);
+	cpumask_set_cpu(cpu, wq_online_cpumask);
 
 	for_each_pool(pool, pi) {
 		mutex_lock(&wq_pool_attach_mutex);
@@ -5093,7 +5091,7 @@ int workqueue_online_cpu(unsigned int cpu)
 
 	/* update NUMA affinity of unbound workqueues */
 	list_for_each_entry(wq, &workqueues, list)
-		wq_update_unbound_numa(wq, cpu, true);
+		wq_update_unbound_numa(wq, cpu);
 
 	mutex_unlock(&wq_pool_mutex);
 	return 0;
@@ -5111,8 +5109,9 @@ int workqueue_offline_cpu(unsigned int cpu)
 
 	/* update NUMA affinity of unbound workqueues */
 	mutex_lock(&wq_pool_mutex);
+	cpumask_clear_cpu(cpu, wq_online_cpumask);
 	list_for_each_entry(wq, &workqueues, list)
-		wq_update_unbound_numa(wq, cpu, false);
+		wq_update_unbound_numa(wq, cpu);
 	mutex_unlock(&wq_pool_mutex);
 
 	return 0;
@@ -5949,6 +5948,9 @@ void __init workqueue_init_early(void)
 
 	BUILD_BUG_ON(__alignof__(struct pool_workqueue) < __alignof__(long long));
 
+	BUG_ON(!alloc_cpumask_var(&wq_online_cpumask, GFP_KERNEL));
+	cpumask_copy(wq_online_cpumask, cpu_online_mask);
+
 	BUG_ON(!alloc_cpumask_var(&wq_unbound_cpumask, GFP_KERNEL));
 	cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(hk_flags));
 
@@ -6045,7 +6047,7 @@ void __init workqueue_init(void)
 	}
 
 	list_for_each_entry(wq, &workqueues, list) {
-		wq_update_unbound_numa(wq, smp_processor_id(), true);
+		wq_update_unbound_numa(wq, smp_processor_id());
 		WARN(init_rescuer(wq),
 		     "workqueue: failed to create early rescuer for %s",
 		     wq->name);
-- 
2.19.1.6.gb485710b



* [PATCH 06/10] workqueue: use wq_online_cpumask in restore_unbound_workers_cpumask()
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (4 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 05/10] workqueue: introduce wq_online_cpumask Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 07/10] workqueue: Manually break affinity on hotplug for unbound pool Lai Jiangshan
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Lai Jiangshan, Tejun Heo, Lai Jiangshan

From: Lai Jiangshan <laijs@linux.alibaba.com>

restore_unbound_workers_cpumask() is called during CPU_ONLINE, where
wq_online_cpumask equals cpu_online_mask, so there is no functional
change.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 8aca3afc88aa..878ed83e5908 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5039,13 +5039,14 @@ static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
 	static cpumask_t cpumask;
 	struct worker *worker;
 
+	lockdep_assert_held(&wq_pool_mutex);
 	lockdep_assert_held(&wq_pool_attach_mutex);
 
 	/* is @cpu allowed for @pool? */
 	if (!cpumask_test_cpu(cpu, pool->attrs->cpumask))
 		return;
 
-	cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);
+	cpumask_and(&cpumask, pool->attrs->cpumask, wq_online_cpumask);
 
 	/* is @cpu the first one onlined for the @pool? */
 	if (cpumask_weight(&cpumask) > 1)
-- 
2.19.1.6.gb485710b



* [PATCH 07/10] workqueue: Manually break affinity on hotplug for unbound pool
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (5 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 06/10] workqueue: use wq_online_cpumask in restore_unbound_workers_cpumask() Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-16 14:50   ` Tejun Heo
  2020-12-14 15:54 ` [PATCH 08/10] workqueue: reorganize workqueue_online_cpu() Lai Jiangshan
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Tejun Heo, Lai Jiangshan, Peter Zijlstra,
	Valentin Schneider, Daniel Bristot de Oliveira

From: Lai Jiangshan <laijs@linux.alibaba.com>

When all CPUs of an unbound pool go down, the scheduler currently
breaks affinity on its workers for us.  Do it on our own instead and
don't rely on the scheduler to force-break affinity for us.

Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 49 ++++++++++++++++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 15 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 878ed83e5908..eea58f77a37b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5025,16 +5025,16 @@ static void rebind_workers(struct worker_pool *pool)
 }
 
 /**
- * restore_unbound_workers_cpumask - restore cpumask of unbound workers
+ * update_unbound_workers_cpumask - update cpumask of unbound workers
  * @pool: unbound pool of interest
- * @cpu: the CPU which is coming up
+ * @cpu: the CPU which is coming up or going down
  *
  * An unbound pool may end up with a cpumask which doesn't have any online
- * CPUs.  When a worker of such pool get scheduled, the scheduler resets
- * its cpus_allowed.  If @cpu is in @pool's cpumask which didn't have any
- * online CPU before, cpus_allowed of all its workers should be restored.
+ * CPUs.  We have to reset workers' cpus_allowed of such pool.  And we
+ * restore the workers' cpus_allowed when the pool's cpumask has online
+ * CPU at the first time after reset.
  */
-static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
+static void update_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
 {
 	static cpumask_t cpumask;
 	struct worker *worker;
@@ -5048,13 +5048,19 @@ static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
 
 	cpumask_and(&cpumask, pool->attrs->cpumask, wq_online_cpumask);
 
-	/* is @cpu the first one onlined for the @pool? */
-	if (cpumask_weight(&cpumask) > 1)
-		return;
-
-	/* as we're called from CPU_ONLINE, the following shouldn't fail */
-	for_each_pool_worker(worker, pool)
-		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0);
+	switch (cpumask_weight(&cpumask)) {
+	case 0: /* @cpu is the last one going down for the @pool. */
+		for_each_pool_worker(worker, pool)
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
+		break;
+	case 1: /* @cpu is the first one onlined for the @pool. */
+		/* as we're called from CPU_ONLINE, the following shouldn't fail */
+		for_each_pool_worker(worker, pool)
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0);
+		break;
+	default: /* other cases, nothing to do */
+		break;
+	}
 }
 
 int workqueue_prepare_cpu(unsigned int cpu)
@@ -5085,7 +5091,7 @@ int workqueue_online_cpu(unsigned int cpu)
 		if (pool->cpu == cpu)
 			rebind_workers(pool);
 		else if (pool->cpu < 0)
-			restore_unbound_workers_cpumask(pool, cpu);
+			update_unbound_workers_cpumask(pool, cpu);
 
 		mutex_unlock(&wq_pool_attach_mutex);
 	}
@@ -5100,7 +5106,9 @@ int workqueue_online_cpu(unsigned int cpu)
 
 int workqueue_offline_cpu(unsigned int cpu)
 {
+	struct worker_pool *pool;
 	struct workqueue_struct *wq;
+	int pi;
 
 	/* unbinding per-cpu workers should happen on the local CPU */
 	if (WARN_ON(cpu != smp_processor_id()))
@@ -5108,9 +5116,20 @@ int workqueue_offline_cpu(unsigned int cpu)
 
 	unbind_workers(cpu);
 
-	/* update NUMA affinity of unbound workqueues */
 	mutex_lock(&wq_pool_mutex);
 	cpumask_clear_cpu(cpu, wq_online_cpumask);
+
+	/* update CPU affinity of workers of unbound pools */
+	for_each_pool(pool, pi) {
+		mutex_lock(&wq_pool_attach_mutex);
+
+		if (pool->cpu < 0)
+			update_unbound_workers_cpumask(pool, cpu);
+
+		mutex_unlock(&wq_pool_attach_mutex);
+	}
+
+	/* update NUMA affinity of unbound workqueues */
 	list_for_each_entry(wq, &workqueues, list)
 		wq_update_unbound_numa(wq, cpu);
 	mutex_unlock(&wq_pool_mutex);
-- 
2.19.1.6.gb485710b



* [PATCH 08/10] workqueue: reorganize workqueue_online_cpu()
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (6 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 07/10] workqueue: Manually break affinity on hotplug for unbound pool Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 09/10] workqueue: reorganize workqueue_offline_cpu() unbind_workers() Lai Jiangshan
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Lai Jiangshan, Tejun Heo, Lai Jiangshan

From: Lai Jiangshan <laijs@linux.alibaba.com>

Just code movement, no functional change.

It prepares for a later patch that protects wq_online_cpumask
with wq_pool_attach_mutex.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index eea58f77a37b..fa29b7a083a6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5085,12 +5085,17 @@ int workqueue_online_cpu(unsigned int cpu)
 	mutex_lock(&wq_pool_mutex);
 	cpumask_set_cpu(cpu, wq_online_cpumask);
 
+	for_each_cpu_worker_pool(pool, cpu) {
+		mutex_lock(&wq_pool_attach_mutex);
+		rebind_workers(pool);
+		mutex_unlock(&wq_pool_attach_mutex);
+	}
+
+	/* update CPU affinity of workers of unbound pools */
 	for_each_pool(pool, pi) {
 		mutex_lock(&wq_pool_attach_mutex);
 
-		if (pool->cpu == cpu)
-			rebind_workers(pool);
-		else if (pool->cpu < 0)
+		if (pool->cpu < 0)
 			update_unbound_workers_cpumask(pool, cpu);
 
 		mutex_unlock(&wq_pool_attach_mutex);
-- 
2.19.1.6.gb485710b



* [PATCH 09/10] workqueue: reorganize workqueue_offline_cpu() unbind_workers()
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (7 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 08/10] workqueue: reorganize workqueue_online_cpu() Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-14 15:54 ` [PATCH 10/10] workqueue: Fix affinity of kworkers when attaching into pool Lai Jiangshan
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Lai Jiangshan, Tejun Heo, Lai Jiangshan

From: Lai Jiangshan <laijs@linux.alibaba.com>

Just code movement, no functional change.  Only the region protected
by wq_pool_attach_mutex becomes a little larger.

It prepares for a later patch that protects wq_online_cpumask
with wq_pool_attach_mutex.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 92 +++++++++++++++++++++++-----------------------
 1 file changed, 46 insertions(+), 46 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index fa29b7a083a6..5ef41c567c2b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4901,62 +4901,58 @@ void wq_worker_comm(char *buf, size_t size, struct task_struct *task)
  * cpu comes back online.
  */
 
-static void unbind_workers(int cpu)
+static void unbind_workers(struct worker_pool *pool)
 {
-	struct worker_pool *pool;
 	struct worker *worker;
 
-	for_each_cpu_worker_pool(pool, cpu) {
-		mutex_lock(&wq_pool_attach_mutex);
-		raw_spin_lock_irq(&pool->lock);
+	lockdep_assert_held(&wq_pool_attach_mutex);
 
-		/*
-		 * We've blocked all attach/detach operations. Make all workers
-		 * unbound and set DISASSOCIATED.  Before this, all workers
-		 * except for the ones which are still executing works from
-		 * before the last CPU down must be on the cpu.  After
-		 * this, they may become diasporas.
-		 */
-		for_each_pool_worker(worker, pool)
-			worker->flags |= WORKER_UNBOUND;
+	raw_spin_lock_irq(&pool->lock);
 
-		pool->flags |= POOL_DISASSOCIATED;
+	/*
+	 * We've blocked all attach/detach operations. Make all workers
+	 * unbound and set DISASSOCIATED.  Before this, all workers
+	 * except for the ones which are still executing works from
+	 * before the last CPU down must be on the cpu.  After
+	 * this, they may become diasporas.
+	 */
+	for_each_pool_worker(worker, pool)
+		worker->flags |= WORKER_UNBOUND;
 
-		raw_spin_unlock_irq(&pool->lock);
+	pool->flags |= POOL_DISASSOCIATED;
 
-		/* don't rely on the scheduler to force break affinity for us. */
-		for_each_pool_worker(worker, pool)
-			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
+	raw_spin_unlock_irq(&pool->lock);
 
-		mutex_unlock(&wq_pool_attach_mutex);
+	/* don't rely on the scheduler to force break affinity for us. */
+	for_each_pool_worker(worker, pool)
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
 
-		/*
-		 * Call schedule() so that we cross rq->lock and thus can
-		 * guarantee sched callbacks see the %WORKER_UNBOUND flag.
-		 * This is necessary as scheduler callbacks may be invoked
-		 * from other cpus.
-		 */
-		schedule();
+	/*
+	 * Call schedule() so that we cross rq->lock and thus can
+	 * guarantee sched callbacks see the %WORKER_UNBOUND flag.
+	 * This is necessary as scheduler callbacks may be invoked
+	 * from other cpus.
+	 */
+	schedule();
 
-		/*
-		 * Sched callbacks are disabled now.  Zap nr_running.
-		 * After this, nr_running stays zero and need_more_worker()
-		 * and keep_working() are always true as long as the
-		 * worklist is not empty.  This pool now behaves as an
-		 * unbound (in terms of concurrency management) pool which
-		 * are served by workers tied to the pool.
-		 */
-		atomic_set(&pool->nr_running, 0);
+	/*
+	 * Sched callbacks are disabled now.  Zap nr_running.
+	 * After this, nr_running stays zero and need_more_worker()
+	 * and keep_working() are always true as long as the
+	 * worklist is not empty.  This pool now behaves as an
+	 * unbound (in terms of concurrency management) pool which
+	 * are served by workers tied to the pool.
+	 */
+	atomic_set(&pool->nr_running, 0);
 
-		/*
-		 * With concurrency management just turned off, a busy
-		 * worker blocking could lead to lengthy stalls.  Kick off
-		 * unbound chain execution of currently pending work items.
-		 */
-		raw_spin_lock_irq(&pool->lock);
-		wake_up_worker(pool);
-		raw_spin_unlock_irq(&pool->lock);
-	}
+	/*
+	 * With concurrency management just turned off, a busy
+	 * worker blocking could lead to lengthy stalls.  Kick off
+	 * unbound chain execution of currently pending work items.
+	 */
+	raw_spin_lock_irq(&pool->lock);
+	wake_up_worker(pool);
+	raw_spin_unlock_irq(&pool->lock);
 }
 
 /**
@@ -5119,7 +5115,11 @@ int workqueue_offline_cpu(unsigned int cpu)
 	if (WARN_ON(cpu != smp_processor_id()))
 		return -1;
 
-	unbind_workers(cpu);
+	for_each_cpu_worker_pool(pool, cpu) {
+		mutex_lock(&wq_pool_attach_mutex);
+		unbind_workers(pool);
+		mutex_unlock(&wq_pool_attach_mutex);
+	}
 
 	mutex_lock(&wq_pool_mutex);
 	cpumask_clear_cpu(cpu, wq_online_cpumask);
-- 
2.19.1.6.gb485710b



* [PATCH 10/10] workqueue: Fix affinity of kworkers when attaching into pool
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (8 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 09/10] workqueue: reorganize workqueue_offline_cpu() unbind_workers() Lai Jiangshan
@ 2020-12-14 15:54 ` Lai Jiangshan
  2020-12-15 15:03   ` Valentin Schneider
  2020-12-14 17:36 ` [PATCH 00/10] workqueue: break affinity initiatively Peter Zijlstra
  2020-12-16 14:30 ` Tejun Heo
  11 siblings, 1 reply; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-14 15:54 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Valentin Schneider, Qian Cai, Peter Zijlstra,
	Vincent Donnefort, Tejun Heo, Lai Jiangshan

From: Lai Jiangshan <laijs@linux.alibaba.com>

When worker_attach_to_pool() is called, we should not set the workers'
cpumask to pool->attrs->cpumask when no CPU in it is online.

We have to use wq_online_cpumask, rather than cpu_online_mask or
cpu_active_mask, in worker_attach_to_pool() to check whether
pool->attrs->cpumask is usable, due to the gaps between stages in
cpu hot[un]plug.

To use wq_online_cpumask in worker_attach_to_pool(), we need to protect
wq_online_cpumask with wq_pool_attach_mutex, so workqueue_online_cpu()
and workqueue_offline_cpu() are modified to enlarge the region protected
by wq_pool_attach_mutex.  We also put the update of wq_online_cpumask
and [re|un]bind_workers() in the same wq_pool_attach_mutex protected
region so that the update for the percpu workqueues is atomic.

Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Qian Cai <cai@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Donnefort <vincent.donnefort@arm.com>
Link: https://lore.kernel.org/lkml/20201210163830.21514-3-valentin.schneider@arm.com/
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 kernel/workqueue.c | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5ef41c567c2b..7a04cef90c1c 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -310,7 +310,7 @@ static bool workqueue_freezing;		/* PL: have wqs started freezing? */
 /* PL: allowable cpus for unbound wqs and work items */
 static cpumask_var_t wq_unbound_cpumask;
 
-/* PL: online cpus (cpu_online_mask with the going-down cpu cleared) */
+/* PL&A: online cpus (cpu_online_mask with the going-down cpu cleared) */
 static cpumask_var_t wq_online_cpumask;
 
 /* CPU where unbound work was last round robin scheduled from this CPU */
@@ -1848,11 +1848,11 @@ static void worker_attach_to_pool(struct worker *worker,
 {
 	mutex_lock(&wq_pool_attach_mutex);
 
-	/*
-	 * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
-	 * online CPUs.  It'll be re-applied when any of the CPUs come up.
-	 */
-	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
+	/* Is there any cpu in pool->attrs->cpumask online? */
+	if (cpumask_any_and(pool->attrs->cpumask, wq_online_cpumask) < nr_cpu_ids)
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0);
+	else
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
 
 	/*
 	 * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains
@@ -5079,13 +5079,12 @@ int workqueue_online_cpu(unsigned int cpu)
 	int pi;
 
 	mutex_lock(&wq_pool_mutex);
-	cpumask_set_cpu(cpu, wq_online_cpumask);
 
-	for_each_cpu_worker_pool(pool, cpu) {
-		mutex_lock(&wq_pool_attach_mutex);
+	mutex_lock(&wq_pool_attach_mutex);
+	cpumask_set_cpu(cpu, wq_online_cpumask);
+	for_each_cpu_worker_pool(pool, cpu)
 		rebind_workers(pool);
-		mutex_unlock(&wq_pool_attach_mutex);
-	}
+	mutex_unlock(&wq_pool_attach_mutex);
 
 	/* update CPU affinity of workers of unbound pools */
 	for_each_pool(pool, pi) {
@@ -5115,14 +5114,13 @@ int workqueue_offline_cpu(unsigned int cpu)
 	if (WARN_ON(cpu != smp_processor_id()))
 		return -1;
 
-	for_each_cpu_worker_pool(pool, cpu) {
-		mutex_lock(&wq_pool_attach_mutex);
-		unbind_workers(pool);
-		mutex_unlock(&wq_pool_attach_mutex);
-	}
-
 	mutex_lock(&wq_pool_mutex);
+
+	mutex_lock(&wq_pool_attach_mutex);
 	cpumask_clear_cpu(cpu, wq_online_cpumask);
+	for_each_cpu_worker_pool(pool, cpu)
+		unbind_workers(pool);
+	mutex_unlock(&wq_pool_attach_mutex);
 
 	/* update CPU affinity of workers of unbound pools */
 	for_each_pool(pool, pi) {
-- 
2.19.1.6.gb485710b



* Re: [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity
  2020-12-14 15:54 ` [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity Lai Jiangshan
@ 2020-12-14 17:25   ` Peter Zijlstra
  2020-12-15  8:33     ` Lai Jiangshan
  2020-12-15  8:40     ` Peter Zijlstra
  2020-12-16 14:32   ` Tejun Heo
  1 sibling, 2 replies; 25+ messages in thread
From: Peter Zijlstra @ 2020-12-14 17:25 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Lai Jiangshan, Tejun Heo, Valentin Schneider,
	Daniel Bristot de Oliveira

On Mon, Dec 14, 2020 at 11:54:49PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@linux.alibaba.com>
> 
> There might be other CPUs coming online later. The workers losing the
> binding to their CPU should have a chance to work on those later-onlined
> CPUs.
> 
> Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
> Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
> ---
>  kernel/workqueue.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index aba71ab359dd..1f5b8385c0cf 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -4909,8 +4909,9 @@ static void unbind_workers(int cpu)
>  
>  		raw_spin_unlock_irq(&pool->lock);
>  
> +		/* don't rely on the scheduler to force break affinity for us. */
>  		for_each_pool_worker(worker, pool)
> -			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
> +			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);

Please explain this one.. it's not making sense. Also the Changelog
doesn't seem remotely related to the actual change.

Afaict this is actively wrong.

Also, can you please not Cc me parts of a series? That's bloody
annoying.


* Re: [PATCH 00/10] workqueue: break affinity initiatively
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (9 preceding siblings ...)
  2020-12-14 15:54 ` [PATCH 10/10] workqueue: Fix affinity of kworkers when attaching into pool Lai Jiangshan
@ 2020-12-14 17:36 ` Peter Zijlstra
  2020-12-15  5:44   ` Lai Jiangshan
  2020-12-16 14:30 ` Tejun Heo
  11 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2020-12-14 17:36 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Lai Jiangshan, Hillf Danton, Valentin Schneider,
	Qian Cai, Vincent Donnefort, Tejun Heo

On Mon, Dec 14, 2020 at 11:54:47PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@linux.alibaba.com>
> 
> 06249738a41a ("workqueue: Manually break affinity on hotplug")
> said that scheduler will not force break affinity for us.
> 
> But workqueue highly depends on the old behavior. Many parts of the code
> rely on it; 06249738a41a ("workqueue: Manually break affinity on hotplug")
> is not enough to change that, and the commit has flaws in itself too.
> 
> We need to thoroughly update the way workqueue handles affinity in cpu
> hot[un]plug, which is what this patchset intends to do; it also replaces
> Valentin Schneider's patch [1].

So the actual problem is with per-cpu kthreads, the new assumption is
that hot-un-plug will make all per-cpu kthreads for the dying CPU go
away.

Workqueues violated that. I fixed the obvious site, and Valentin's patch
avoids workqueues from quickly creating new ones while we're not
looking.

What other problems did you find?


* Re: [PATCH 00/10] workqueue: break affinity initiatively
  2020-12-14 17:36 ` [PATCH 00/10] workqueue: break affinity initiatively Peter Zijlstra
@ 2020-12-15  5:44   ` Lai Jiangshan
  2020-12-15  7:50     ` Peter Zijlstra
  0 siblings, 1 reply; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-15  5:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Lai Jiangshan, Hillf Danton, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Tejun Heo

On Tue, Dec 15, 2020 at 1:36 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Dec 14, 2020 at 11:54:47PM +0800, Lai Jiangshan wrote:
> > From: Lai Jiangshan <laijs@linux.alibaba.com>
> >
> > 06249738a41a ("workqueue: Manually break affinity on hotplug")
> > said that scheduler will not force break affinity for us.
> >
> > But workqueue highly depends on the old behavior. Many parts of the code
> > rely on it; 06249738a41a ("workqueue: Manually break affinity on hotplug")
> > is not enough to change that, and the commit has flaws in itself too.
> >
> > We need to thoroughly update the way workqueue handles affinity in cpu
> > hot[un]plug, which is what this patchset intends to do; it also replaces
> > Valentin Schneider's patch [1].
>
> So the actual problem is with per-cpu kthreads, the new assumption is
> that hot-un-plug will make all per-cpu kthreads for the dying CPU go
> away.

Hello, Peter

The "new assumption" is what everything needs to be aligned with. I
haven't read that code yet, but I understood it well enough to know
that workqueue does violate it.

Workqueue does not break affinity for all per-cpu kthreads in several
cases, such as hot-un-plug and workers detaching from their pool (those
workers are no longer searchable from the pools and should be handled
the same way as in hot-un-plug).

But workqueue has not only per-cpu kthreads but also per-node workers,
and a per-node worker may be bound to multiple CPUs or to a single CPU.
I don't know how the scheduler distinguishes all these different cases
under the "new assumption", but workqueue at least handles these
different cases in the same few places.  Since workqueue has to "break
affinity" for per-cpu kthreads, it can also "break affinity" for the
other cases.  Making workqueue not rely on the scheduler to "break
affinity" at all is worth doing, since we have to do it for most of the
cases anyway.

I haven't read the code behind the "new assumption"; if possible, I'll
first try to find out how the scheduler handles these cases:

If a per-node worker is bound to only CPU 4, does workqueue need to
"break affinity" for it when that CPU goes down?

If a per-node worker is bound to only CPUs 41 and 42, does workqueue
need to "break affinity" for it when both go down?

Thanks
Lai

>
> Workqueues violated that. I fixed the obvious site, and Valentin's patch
> avoids workqueues from quickly creating new ones while we're not
> looking.
>
> What other problems did you find?


* Re: [PATCH 00/10] workqueue: break affinity initiatively
  2020-12-15  5:44   ` Lai Jiangshan
@ 2020-12-15  7:50     ` Peter Zijlstra
  2020-12-15  8:14       ` Lai Jiangshan
  0 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2020-12-15  7:50 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, Lai Jiangshan, Hillf Danton, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Tejun Heo

On Tue, Dec 15, 2020 at 01:44:53PM +0800, Lai Jiangshan wrote:
> I don't know how the scheduler distinguishes all these
> different cases under the "new assumption".

The special case is:

  (p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1




* Re: [PATCH 00/10] workqueue: break affinity initiatively
  2020-12-15  7:50     ` Peter Zijlstra
@ 2020-12-15  8:14       ` Lai Jiangshan
  2020-12-15  8:49         ` Peter Zijlstra
  0 siblings, 1 reply; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-15  8:14 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Lai Jiangshan, Hillf Danton, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Tejun Heo

On Tue, Dec 15, 2020 at 3:50 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Dec 15, 2020 at 01:44:53PM +0800, Lai Jiangshan wrote:
> > I don't know how the scheduler distinguishes all these
> > different cases under the "new assumption".
>
> The special case is:
>
>   (p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1
>
>

So unbound per-node workers can possibly match this test, which means
code is needed to handle unbound workers/pools; that is what this
patchset does.

Is this the test in is_per_cpu_kthread()? I think I should also have
used this function in workqueue, and not break affinity for unbound
workers that have more than one CPU.


* Re: [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity
  2020-12-14 17:25   ` Peter Zijlstra
@ 2020-12-15  8:33     ` Lai Jiangshan
  2020-12-15  8:40     ` Peter Zijlstra
  1 sibling, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-15  8:33 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Lai Jiangshan, Tejun Heo, Valentin Schneider,
	Daniel Bristot de Oliveira

On Tue, Dec 15, 2020 at 1:25 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Dec 14, 2020 at 11:54:49PM +0800, Lai Jiangshan wrote:
> > From: Lai Jiangshan <laijs@linux.alibaba.com>
> >
> > There might be other CPUs coming online later. The workers losing the
> > binding to their CPU should have a chance to work on those later-onlined
> > CPUs.
> >
> > Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
> > Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
> > ---
> >  kernel/workqueue.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index aba71ab359dd..1f5b8385c0cf 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -4909,8 +4909,9 @@ static void unbind_workers(int cpu)
> >
> >               raw_spin_unlock_irq(&pool->lock);
> >
> > +             /* don't rely on the scheduler to force break affinity for us. */
> >               for_each_pool_worker(worker, pool)
> > -                     WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
> > +                     WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
>
> Please explain this one.. it's not making sense. Also the Changelog
> doesn't seem remotely related to the actual change.

If the scheduler doesn't break affinity for us any more, I hope that we
can "emulate" the previous behavior from when the scheduler did break
affinity for us.  That behavior was "changing the cpumask to
cpu_possible_mask".

And there might be other CPUs coming online later while the worker is
still running with pending work items.  I hope the worker can also use
those later-onlined CPUs, as before.  If we use cpu_active_mask here, we
can't achieve this.  This is what the changelog meant.  I don't know
which wording is better; I will combine both if this reasoning stands.


>
> Afaict this is actively wrong.
>
> Also, can you please not Cc me parts of a series? That's bloody
> annoying.


Sorry about it.  I was taught "don't always send the whole series to
everyone" and very probably I misremembered the conditions around that.
In this case, I think I should Cc you the whole series.  May I?

Thanks
Lai


* Re: [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity
  2020-12-14 17:25   ` Peter Zijlstra
  2020-12-15  8:33     ` Lai Jiangshan
@ 2020-12-15  8:40     ` Peter Zijlstra
  1 sibling, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2020-12-15  8:40 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Lai Jiangshan, Tejun Heo, Valentin Schneider,
	Daniel Bristot de Oliveira

On Mon, Dec 14, 2020 at 06:25:34PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 14, 2020 at 11:54:49PM +0800, Lai Jiangshan wrote:
> > From: Lai Jiangshan <laijs@linux.alibaba.com>
> > 
> > There might be other CPUs coming online later. The workers losing the
> > binding to their CPU should have a chance to work on those later-onlined
> > CPUs.
> > 
> > Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
> > Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
> > ---
> >  kernel/workqueue.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index aba71ab359dd..1f5b8385c0cf 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -4909,8 +4909,9 @@ static void unbind_workers(int cpu)
> >  
> >  		raw_spin_unlock_irq(&pool->lock);
> >  
> > +		/* don't rely on the scheduler to force break affinity for us. */
> >  		for_each_pool_worker(worker, pool)
> > -			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
> > +			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
> 
> Please explain this one.. it's not making sense. Also the Changelog
> doesn't seem remotely related to the actual change.
> 
> Afaict this is actively wrong.

I think I was too tired, I see what you're doing now and it should work
fine, I still think the changelog could use help though.


* Re: [PATCH 00/10] workqueue: break affinity initiatively
  2020-12-15  8:14       ` Lai Jiangshan
@ 2020-12-15  8:49         ` Peter Zijlstra
  2020-12-15  9:46           ` Lai Jiangshan
  0 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2020-12-15  8:49 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, Lai Jiangshan, Hillf Danton, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Tejun Heo

On Tue, Dec 15, 2020 at 04:14:26PM +0800, Lai Jiangshan wrote:
> On Tue, Dec 15, 2020 at 3:50 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Tue, Dec 15, 2020 at 01:44:53PM +0800, Lai Jiangshan wrote:
> > > I don't know how the scheduler distinguishes all these
> > > different cases under the "new assumption".
> >
> > The special case is:
> >
> >   (p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1
> >
> >
> 
> So unbound per-node workers can possibly match this test, which means
> code is needed to handle unbound workers/pools; that is what this
> patchset does.

Curious; how could a per-node worker match this? Only if the node is a
single CPU, or otherwise too?

> Is this the test in is_per_cpu_kthread()? I think I should also have
> used this function in workqueue, and not break affinity for unbound
> workers that have more than one CPU.

Yes, that function captures it. If you want to use it, feel free to move
it to include/linux/sched.h.

This class of threads is 'special', since it needs to violate the
regular hotplug rules, and migrate_disable() made it just this little
bit more special. It basically comes down to how we need certain per-cpu
kthreads to run on a CPU while it's brought up, before userspace is
allowed on, and similarly they need to run on the CPU after userspace is
no longer allowed on in order to bring it down.

(IOW, they must be allowed to violate the active mask)

Due to migrate_disable() we had to move the migration code from the very
last cpu-down stage, to earlier. This in turn brought the expectation
(which is normally met) that per-cpu kthreads will stop/park or
otherwise make themselves scarce when the CPU goes down. We can no
longer force migrate them.

Workqueues are the sole exception to that, they've got some really
'dodgy' hotplug behaviour.



* Re: [PATCH 00/10] workqueue: break affinity initiatively
  2020-12-15  8:49         ` Peter Zijlstra
@ 2020-12-15  9:46           ` Lai Jiangshan
  0 siblings, 0 replies; 25+ messages in thread
From: Lai Jiangshan @ 2020-12-15  9:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Lai Jiangshan, Hillf Danton, Valentin Schneider, Qian Cai,
	Vincent Donnefort, Tejun Heo

On Tue, Dec 15, 2020 at 4:49 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Dec 15, 2020 at 04:14:26PM +0800, Lai Jiangshan wrote:
> > On Tue, Dec 15, 2020 at 3:50 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Tue, Dec 15, 2020 at 01:44:53PM +0800, Lai Jiangshan wrote:
> > > > I don't know how the scheduler distinguishes all these
> > > > different cases under the "new assumption".
> > >
> > > The special case is:
> > >
> > >   (p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1
> > >
> > >
> >
> > So unbound per-node workers can possibly match this test. So there is code
> > needed to handle for unbound workers/pools which is done by this patchset.
>
> Curious; how could a per-node worker match this? Only if the node is a
> single CPU, or otherwise too?

We have /sys/devices/virtual/workqueue/cpumask, which can be read or
written to access wq_unbound_cpumask.

A per-node worker's cpumask is
wq_unbound_cpumask & possible_cpumask_of_the_node.  Since
wq_unbound_cpumask can be changed by the system admin, a per-node
worker's cpumask can end up being a single CPU.

wq_unbound_cpumask is used when a system admin wants to isolate some
CPUs from unbound workqueues, and I think it is a rare case for the
admin to reduce a per-node worker's cpumask to a single CPU.

Even though it is a rare case, we have to handle it.

>
> > Is this the code of is_per_cpu_kthread()? I think I should have also
> > used this function in workqueue and don't break affinity for unbound
> > workers have more than 1 cpu.
>
> Yes, that function captures it. If you want to use it, feel free to move
> it to include/linux/sched.h.

I will.  A "single CPU" for unbound workers/pools is the rare case, but
it is enough to require the code that breaks affinity for unbound
workers.  If we optimize for the common case (multiple CPUs for unbound
workers), the optimization amounts to additional code that only matters
in the slow path (hotunplug).

I will try it and see whether it is worth it.

>
> This class of threads is 'special', since it needs to violate the
> regular hotplug rules, and migrate_disable() made it just this little
> bit more special. It basically comes down to how we need certain per-cpu
> kthreads to run on a CPU while it's brought up, before userspace is
> allowed on, and similarly they need to run on the CPU after userspace is
> no longer allowed on in order to bring it down.
>
> (IOW, they must be allowed to violate the active mask)
>
> Due to migrate_disable() we had to move the migration code from the very
> last cpu-down stage, to earlier. This in turn brought the expectation
> (which is normally met) that per-cpu kthreads will stop/park or
> otherwise make themselves scarce when the CPU goes down. We can no
> longer force migrate them.

Thanks for explaining the rationale.

>
> Workqueues are the sole exception to that, they've got some really
> 'dodgy' hotplug behaviour.
>

Indeed.  No one wants to wait for workqueue when hotunplugging, so we
have to do something after the fact.


* Re: [PATCH 10/10] workqueue: Fix affinity of kworkers when attaching into pool
  2020-12-14 15:54 ` [PATCH 10/10] workqueue: Fix affinity of kworkers when attaching into pool Lai Jiangshan
@ 2020-12-15 15:03   ` Valentin Schneider
  0 siblings, 0 replies; 25+ messages in thread
From: Valentin Schneider @ 2020-12-15 15:03 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Lai Jiangshan, Qian Cai, Peter Zijlstra,
	Vincent Donnefort, Tejun Heo


On 14/12/20 15:54, Lai Jiangshan wrote:
> @@ -1848,11 +1848,11 @@ static void worker_attach_to_pool(struct worker *worker,
>  {
>       mutex_lock(&wq_pool_attach_mutex);
>
> -	/*
> -	 * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
> -	 * online CPUs.  It'll be re-applied when any of the CPUs come up.
> -	 */
> -	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
> +	/* Is there any cpu in pool->attrs->cpumask online? */
> +	if (cpumask_any_and(pool->attrs->cpumask, wq_online_cpumask) < nr_cpu_ids)

  if (cpumask_intersects(pool->attrs->cpumask, wq_online_cpumask))

> +		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0);
> +	else
> +		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);

So for that late-spawned per-CPU kworker case: the outgoing CPU should have
already been cleared from wq_online_cpumask, so it gets its affinity reset
to the possible mask and the subsequent wakeup will ensure it's put on an
active CPU.

Seems alright to me.

>
>       /*
>        * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains


* Re: [PATCH 00/10] workqueue: break affinity initiatively
  2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
                   ` (10 preceding siblings ...)
  2020-12-14 17:36 ` [PATCH 00/10] workqueue: break affinity initiatively Peter Zijlstra
@ 2020-12-16 14:30 ` Tejun Heo
  11 siblings, 0 replies; 25+ messages in thread
From: Tejun Heo @ 2020-12-16 14:30 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Lai Jiangshan, Hillf Danton, Valentin Schneider,
	Qian Cai, Peter Zijlstra, Vincent Donnefort

On Mon, Dec 14, 2020 at 11:54:47PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@linux.alibaba.com>
> 
> 06249738a41a ("workqueue: Manually break affinity on hotplug")
> said that scheduler will not force break affinity for us.
> 
> But workqueue highly depends on the old behavior. Many parts of the code
> rely on it; 06249738a41a ("workqueue: Manually break affinity on hotplug")
> is not enough to change that, and the commit has flaws in itself too.
> 
> We need to thoroughly update the way workqueue handles affinity in cpu
> hot[un]plug, which is what this patchset intends to do; it also replaces
> Valentin Schneider's patch [1].
> 
> Patch 1 fixes a flaw reported by Hillf Danton <hdanton@sina.com>.
> I have to include this fix because later patches depend on it.
> 
> The patchset is based on tip/master rather than workqueue tree,
> because the patchset is a complement for 06249738a41a ("workqueue:
> Manually break affinity on hotplug") which is only in tip/master by now.
> 
> [1]: https://lore.kernel.org/r/ff62e3ee994efb3620177bf7b19fab16f4866845.camel@redhat.com

Generally looks good to me. Please feel free to add

 Acked-by: Tejun Heo <tj@kernel.org>

and route the series through tip.

Thanks.

-- 
tejun


* Re: [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity
  2020-12-14 15:54 ` [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity Lai Jiangshan
  2020-12-14 17:25   ` Peter Zijlstra
@ 2020-12-16 14:32   ` Tejun Heo
  1 sibling, 0 replies; 25+ messages in thread
From: Tejun Heo @ 2020-12-16 14:32 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Lai Jiangshan, Peter Zijlstra, Valentin Schneider,
	Daniel Bristot de Oliveira

Hello,

On Mon, Dec 14, 2020 at 11:54:49PM +0800, Lai Jiangshan wrote:
> @@ -4909,8 +4909,9 @@ static void unbind_workers(int cpu)
>  
>  		raw_spin_unlock_irq(&pool->lock);
>  
> +		/* don't rely on the scheduler to force break affinity for us. */

I'm not sure this comment is helpful. The comment may make sense right now
while the scheduler behavior is changing but down the line it's not gonna
make whole lot of sense.

>  		for_each_pool_worker(worker, pool)
> -			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
> +			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
>  
>  		mutex_unlock(&wq_pool_attach_mutex);

Thanks.

-- 
tejun


* Re: [PATCH 04/10] workqueue: don't set the worker's cpumask when kthread_bind_mask()
  2020-12-14 15:54 ` [PATCH 04/10] workqueue: don't set the worker's cpumask when kthread_bind_mask() Lai Jiangshan
@ 2020-12-16 14:39   ` Tejun Heo
  0 siblings, 0 replies; 25+ messages in thread
From: Tejun Heo @ 2020-12-16 14:39 UTC (permalink / raw)
  To: Lai Jiangshan; +Cc: linux-kernel, Lai Jiangshan, Peter Zijlstra

On Mon, Dec 14, 2020 at 11:54:51PM +0800, Lai Jiangshan wrote:
> @@ -1945,7 +1945,15 @@ static struct worker *create_worker(struct worker_pool *pool)
>  		goto fail;
>  
>  	set_user_nice(worker->task, pool->attrs->nice);
> -	kthread_bind_mask(worker->task, pool->attrs->cpumask);
> +
> +	/*
> +	 * Set PF_NO_SETAFFINITY via kthread_bind_mask().  We use
> +	 * cpu_possible_mask other than pool->attrs->cpumask, because
                             ^
                             instead of

> +	 * there might be no online cpu in the pool->attrs->cpumask.
                 ^
                 might not be any

> +	 * The cpumask of the worker will be set properly later in
> +	 * worker_attach_to_pool().
> +	 */
> +	kthread_bind_mask(worker->task, cpu_possible_mask);

This is a bit ugly but not the end of the world. Maybe we can move it to the
start of worker_thread() but that'd require an extra handshake. Oh well...

Thanks.

-- 
tejun


* Re: [PATCH 07/10] workqueue: Manually break affinity on hotplug for unbound pool
  2020-12-14 15:54 ` [PATCH 07/10] workqueue: Manually break affinity on hotplug for unbound pool Lai Jiangshan
@ 2020-12-16 14:50   ` Tejun Heo
  0 siblings, 0 replies; 25+ messages in thread
From: Tejun Heo @ 2020-12-16 14:50 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Lai Jiangshan, Peter Zijlstra, Valentin Schneider,
	Daniel Bristot de Oliveira

On Mon, Dec 14, 2020 at 11:54:54PM +0800, Lai Jiangshan wrote:
>   * An unbound pool may end up with a cpumask which doesn't have any online
> - * CPUs.  When a worker of such pool get scheduled, the scheduler resets
> - * its cpus_allowed.  If @cpu is in @pool's cpumask which didn't have any
> - * online CPU before, cpus_allowed of all its workers should be restored.
> + * CPUs.  We have to reset workers' cpus_allowed of such pool.  And we
> + * restore the workers' cpus_allowed when the pool's cpumask has online
> + * CPU at the first time after reset.
          ^
          for the first time

-- 
tejun


end of thread, other threads:[~2020-12-16 14:51 UTC | newest]

Thread overview: 25+ messages
2020-12-14 15:54 [PATCH 00/10] workqueue: break affinity initiatively Lai Jiangshan
2020-12-14 15:54 ` [PATCH 01/10] workqueue: restore unbound_workers' cpumask correctly Lai Jiangshan
2020-12-14 15:54 ` [PATCH 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity Lai Jiangshan
2020-12-14 17:25   ` Peter Zijlstra
2020-12-15  8:33     ` Lai Jiangshan
2020-12-15  8:40     ` Peter Zijlstra
2020-12-16 14:32   ` Tejun Heo
2020-12-14 15:54 ` [PATCH 03/10] workqueue: Manually break affinity on pool detachment Lai Jiangshan
2020-12-14 15:54 ` [PATCH 04/10] workqueue: don't set the worker's cpumask when kthread_bind_mask() Lai Jiangshan
2020-12-16 14:39   ` Tejun Heo
2020-12-14 15:54 ` [PATCH 05/10] workqueue: introduce wq_online_cpumask Lai Jiangshan
2020-12-14 15:54 ` [PATCH 06/10] workqueue: use wq_online_cpumask in restore_unbound_workers_cpumask() Lai Jiangshan
2020-12-14 15:54 ` [PATCH 07/10] workqueue: Manually break affinity on hotplug for unbound pool Lai Jiangshan
2020-12-16 14:50   ` Tejun Heo
2020-12-14 15:54 ` [PATCH 08/10] workqueue: reorganize workqueue_online_cpu() Lai Jiangshan
2020-12-14 15:54 ` [PATCH 09/10] workqueue: reorganize workqueue_offline_cpu() unbind_workers() Lai Jiangshan
2020-12-14 15:54 ` [PATCH 10/10] workqueue: Fix affinity of kworkers when attaching into pool Lai Jiangshan
2020-12-15 15:03   ` Valentin Schneider
2020-12-14 17:36 ` [PATCH 00/10] workqueue: break affinity initiatively Peter Zijlstra
2020-12-15  5:44   ` Lai Jiangshan
2020-12-15  7:50     ` Peter Zijlstra
2020-12-15  8:14       ` Lai Jiangshan
2020-12-15  8:49         ` Peter Zijlstra
2020-12-15  9:46           ` Lai Jiangshan
2020-12-16 14:30 ` Tejun Heo
