LKML Archive on lore.kernel.org
From: Dexuan Cui <decui@microsoft.com>
To: 'Lai Jiangshan' <jiangshanlai@gmail.com>,
	'Dexuan-Linux Cui' <dexuan.linux@gmail.com>
Cc: 'Linux Kernel Mailing List' <linux-kernel@vger.kernel.org>,
	'Valentin Schneider' <valentin.schneider@arm.com>,
	'Peter Zijlstra' <peterz@infradead.org>,
	'Qian Cai' <cai@redhat.com>,
	'Vincent Donnefort' <vincent.donnefort@arm.com>,
	'Lai Jiangshan' <laijs@linux.alibaba.com>,
	'Hillf Danton' <hdanton@sina.com>, 'Tejun Heo' <tj@kernel.org>
Subject: RE: [PATCH -tip V2 00/10] workqueue: break affinity initiatively
Date: Wed, 23 Dec 2020 20:39:53 +0000
Message-ID: <SJ0PR21MB1872CFBAFEA8152CE3B362BDBFDE9@SJ0PR21MB1872.namprd21.prod.outlook.com> (raw)
In-Reply-To: <SJ0PR21MB1872C6BB2800A55BD1CDE6B8BFDE9@SJ0PR21MB1872.namprd21.prod.outlook.com>

> From: Dexuan Cui
> Sent: Wednesday, December 23, 2020 12:27 PM
> ...
> The warning only repros if there are more than 1 node, and it only prints once
> for the first vCPU of the second node (i.e. node #1).

A correction: if I configure the 32 vCPUs evenly into 4 nodes, I get the warning
once for each of nodes #1, #2, and #3.

Thanks,
-- Dexuan

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2376,9 +2376,14 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
                 * For kernel threads that do indeed end up on online &&
                 * !active we want to ensure they are strict per-CPU threads.
                 */
-               WARN_ON(cpumask_intersects(new_mask, cpu_online_mask) &&
+               WARN(cpumask_intersects(new_mask, cpu_online_mask) &&
                        !cpumask_intersects(new_mask, cpu_active_mask) &&
-                       p->nr_cpus_allowed != 1);
+                       p->nr_cpus_allowed != 1, "%*pbl, %*pbl, %*pbl, %d\n",
+                       cpumask_pr_args(new_mask),
+                       cpumask_pr_args(cpu_online_mask),
+                       cpumask_pr_args(cpu_active_mask),
+                       p->nr_cpus_allowed
+                       );
        }

[    1.791611] smp: Bringing up secondary CPUs ...
[    1.795225] x86: Booting SMP configuration:
[    1.798964] .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7
[    1.807068] .... node  #1, CPUs:    #8
[    1.094226] smpboot: CPU 8 Converting physical 0 to logical die 1
[    1.895211] ------------[ cut here ]------------
[    1.899058] 8-15, 0-8, 0-7, 8
[    1.899058] WARNING: CPU: 8 PID: 50 at kernel/sched/core.c:2386 __set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.899058] CPU: 8 PID: 50 Comm: cpuhp/8 Not tainted 5.10.0+ #4
[    1.899058] RIP: 0010:__set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.899058] Call Trace:
[    1.899058]  worker_attach_to_pool+0x53/0xd0
[    1.899058]  create_worker+0xf9/0x190
[    1.899058]  alloc_unbound_pwq+0x3a5/0x3b0
[    1.899058]  wq_update_unbound_numa+0x112/0x1c0
[    1.899058]  workqueue_online_cpu+0x1d0/0x220
[    1.899058]  cpuhp_invoke_callback+0x82/0x4a0
[    1.899058]  cpuhp_thread_fun+0xb8/0x120
[    1.899058]  smpboot_thread_fn+0x198/0x230
[    1.899058]  kthread+0x13d/0x160
[    1.899058]  ret_from_fork+0x22/0x30
[    1.903058]   #9 #10 #11 #12 #13 #14 #15
[    1.907092] .... node  #2, CPUs:   #16
[    1.094226] smpboot: CPU 16 Converting physical 0 to logical die 2
[    1.995205] ------------[ cut here ]------------
[    1.999058] 16-23, 0-16, 0-15, 8
[    1.999058] WARNING: CPU: 16 PID: 91 at kernel/sched/core.c:2386 __set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.999058] CPU: 16 PID: 91 Comm: cpuhp/16 Tainted: G        W         5.10.0+ #4
[    1.999058] RIP: 0010:__set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.999058] Call Trace:
[    1.999058]  worker_attach_to_pool+0x53/0xd0
[    1.999058]  create_worker+0xf9/0x190
[    1.999058]  alloc_unbound_pwq+0x3a5/0x3b0
[    1.999058]  wq_update_unbound_numa+0x112/0x1c0
[    1.999058]  workqueue_online_cpu+0x1d0/0x220
[    1.999058]  cpuhp_invoke_callback+0x82/0x4a0
[    1.999058]  cpuhp_thread_fun+0xb8/0x120
[    1.999058]  smpboot_thread_fn+0x198/0x230
[    1.999058]  kthread+0x13d/0x160
[    1.999058]  ret_from_fork+0x22/0x30
[    2.003058]  #17 #18 #19 #20 #21 #22 #23
[    2.007092] .... node  #3, CPUs:   #24
[    1.094226] smpboot: CPU 24 Converting physical 0 to logical die 3
[    2.095220] ------------[ cut here ]------------
[    2.099058] 24-31, 0-24, 0-23, 8
[    2.099058] WARNING: CPU: 24 PID: 132 at kernel/sched/core.c:2386 __set_cpus_allowed_ptr+0x1c7/0x1e0
[    2.099058] CPU: 24 PID: 132 Comm: cpuhp/24 Tainted: G        W         5.10.0+ #4
[    2.099058] Call Trace:
[    2.099058]  worker_attach_to_pool+0x53/0xd0
[    2.099058]  create_worker+0xf9/0x190
[    2.099058]  alloc_unbound_pwq+0x3a5/0x3b0
[    2.099058]  wq_update_unbound_numa+0x112/0x1c0
[    2.099058]  workqueue_online_cpu+0x1d0/0x220
[    2.099058]  cpuhp_invoke_callback+0x82/0x4a0
[    2.099058]  cpuhp_thread_fun+0xb8/0x120
[    2.099058]  smpboot_thread_fn+0x198/0x230
[    2.099058]  kthread+0x13d/0x160
[    2.099058]  ret_from_fork+0x22/0x30
[    2.103058]  #25 #26 #27 #28 #29 #30 #31
[    2.108091] smp: Brought up 4 nodes, 32 CPUs
[    2.115065] smpboot: Max logical packages: 4
[    2.119067] smpboot: Total of 32 processors activated (146992.31 BogoMIPS)

