linux-kernel.vger.kernel.org archive mirror
From: Dexuan Cui <decui@microsoft.com>
To: Lai Jiangshan <jiangshanlai@gmail.com>,
	Dexuan-Linux Cui <dexuan.linux@gmail.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Peter Zijlstra <peterz@infradead.org>, Qian Cai <cai@redhat.com>,
	Vincent Donnefort <vincent.donnefort@arm.com>,
	Lai Jiangshan <laijs@linux.alibaba.com>,
	Hillf Danton <hdanton@sina.com>, Tejun Heo <tj@kernel.org>
Subject: RE: [PATCH -tip V2 00/10] workqueue: break affinity initiatively
Date: Wed, 23 Dec 2020 20:27:18 +0000
Message-ID: <SJ0PR21MB1872C6BB2800A55BD1CDE6B8BFDE9@SJ0PR21MB1872.namprd21.prod.outlook.com>
In-Reply-To: <CAJhGHyDWswu4gzT0qJzR3vBC6ESm1yui+JHFHfueXan95i0NUg@mail.gmail.com>

> From: Lai Jiangshan <jiangshanlai@gmail.com>
> Sent: Wednesday, December 23, 2020 7:02 AM
> >
> > Hi,
> > I tested this patchset on today's tip.git's master branch
> > (981316394e35 ("Merge branch 'locking/urgent'")).
> >
> > Every time the kernel boots with 32 CPUs (I'm running the Linux VM on
> > Hyper-V), I get the below warning.
> > (BTW, with 8 or 16 CPUs, I don't see the warning).
> > By printing the cpumasks with "%*pbl", I know the warning happens
> > because:
> > new_mask = 16-31
> > cpu_online_mask= 0-16
> > cpu_active_mask= 0-15
> > p->nr_cpus_allowed=16
> >
> 
> Hello, Dexuan
> 
> Could you omit patch4 of the patchset and test it again, please?
> ("workqueue: don't set the worker's cpumask when kthread_bind_mask()")
> 
> kthread_bind_mask() sets the worker task's cpumask to the pool's cpumask
> without any check. set_cpus_allowed_ptr() then finds that the task's cpumask
> is unchanged (it was already set by kthread_bind_mask()) and skips all the checks.
> 
> And I found that numa=fake=2U seems to break cpumask_of_node() on my
> box.
> 
> Thanks,
> Lai
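
A minimal userspace sketch of the interaction described in the quote above: the
cpumasks are modeled as plain 64-bit words, and the two helpers only mirror the
control flow attributed to kthread_bind_mask() and set_cpus_allowed_ptr() in this
thread; they are not the kernel implementations. Because the bind step already
stored the pool's cpumask, the later "set allowed" step sees an unchanged mask
and returns before any active-mask validation runs.

/* Userspace sketch (not kernel code): bit N of a uint64_t models CPU N. */
#include <stdio.h>
#include <stdint.h>

struct task { uint64_t cpus_mask; };

/* Models the kthread_bind_mask() step: store the mask, no validation. */
static void bind_mask(struct task *t, uint64_t mask)
{
	t->cpus_mask = mask;
}

/* Models the later set_cpus_allowed_ptr() step as described above:
 * if the mask is unchanged, return before any active-mask check runs.
 */
static int set_allowed(struct task *t, uint64_t new_mask, uint64_t active)
{
	if (t->cpus_mask == new_mask)
		return 0;			/* early exit: checks skipped */
	if (!(new_mask & active))
		return -1;			/* would otherwise be caught here */
	t->cpus_mask = new_mask;
	return 0;
}

int main(void)
{
	uint64_t pool_mask = 0xffff0000ULL;	/* CPUs 16-31 (node #1) */
	uint64_t active    = 0x0000ffffULL;	/* CPUs 0-15 */
	struct task worker = { 0 };

	bind_mask(&worker, pool_mask);		/* mask set without any check */
	printf("set_allowed() -> %d\n",		/* prints 0: mask unchanged,  */
	       set_allowed(&worker, pool_mask, active));	/* checks skipped */
	return 0;
}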

Looks like your analysis is correct: the warning doesn't repro if I configure all
32 vCPUs into a single virtual NUMA node (and in that case I don't see the message
"smpboot: CPU 16 Converting physical 0 to logical die 1"):

[    1.495440] smp: Bringing up secondary CPUs ...
[    1.499207] x86: Booting SMP configuration:
[    1.503038] .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7 
#8  #9 #10 #11 #12 #13 #14 #15 #16 #17 #18 #19 #20 #21 #22 #23 #24 #25 #26
#27 #28 #29 #30 #31
[    1.531930] smp: Brought up 1 node, 32 CPUs
[    1.538779] smpboot: Max logical packages: 1
[    1.539041] smpboot: Total of 32 processors activated (146859.90 BogoMIPS)

The warning only repros if there is more than one NUMA node, and it only prints
once, for the first vCPU of the second node (i.e. node #1).

With more than one node, the warning does not repro if I drop patch 4.
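
For reference, plugging the values from the report above (new_mask = 16-31,
cpu_online_mask = 0-16, cpu_active_mask = 0-15, p->nr_cpus_allowed = 16) into my
reading of the kernel-thread check in __set_cpus_allowed_ptr() (a hedged sketch;
the exact in-kernel condition may differ) shows why exactly this combination
warns: the new mask still intersects the online mask through CPU 16, no longer
intersects the active mask, and the worker is not a strict per-CPU thread.

/* Sketch with the reported values; the condition mirrors my reading of the
 * PF_KTHREAD warning in __set_cpus_allowed_ptr(), not the verbatim kernel code.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t new_mask = 0xffff0000ULL;	/* CPUs 16-31 */
	uint64_t online   = 0x0001ffffULL;	/* CPUs 0-16  */
	uint64_t active   = 0x0000ffffULL;	/* CPUs 0-15  */
	int nr_cpus_allowed = 16;

	int warns = (new_mask & online) &&	/* intersects online (CPU 16) */
		    !(new_mask & active) &&	/* but not active             */
		    nr_cpus_allowed != 1;	/* and not a per-CPU kthread  */

	printf("warning fires: %s\n", warns ? "yes" : "no");	/* "yes" */
	return 0;
}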

Thanks,
-- Dexuan

Thread overview: 23+ messages
2020-12-18 17:09 [PATCH -tip V2 00/10] workqueue: break affinity initiatively Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 01/10] workqueue: restore unbound_workers' cpumask correctly Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 02/10] workqueue: use cpu_possible_mask instead of cpu_active_mask to break affinity Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 03/10] workqueue: Manually break affinity on pool detachment Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 04/10] workqueue: don't set the worker's cpumask when kthread_bind_mask() Lai Jiangshan
2020-12-24  6:33   ` [workqueue] 6094661b16: WARNING:at_kernel/sched/core.c:#__set_cpus_allowed_ptr kernel test robot
2020-12-18 17:09 ` [PATCH -tip V2 05/10] workqueue: introduce wq_online_cpumask Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 06/10] workqueue: use wq_online_cpumask in restore_unbound_workers_cpumask() Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 07/10] workqueue: Manually break affinity on hotplug for unbound pool Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 08/10] workqueue: reorganize workqueue_online_cpu() Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 09/10] workqueue: reorganize workqueue_offline_cpu() unbind_workers() Lai Jiangshan
2020-12-18 17:09 ` [PATCH -tip V2 10/10] workqueue: Fix affinity of kworkers when attaching into pool Lai Jiangshan
2020-12-18 17:59   ` Valentin Schneider
2020-12-19  1:11     ` Lai Jiangshan
2020-12-22 21:39 ` [PATCH -tip V2 00/10] workqueue: break affinity initiatively Dexuan-Linux Cui
2020-12-23 11:32   ` Lai Jiangshan
2020-12-23 15:01   ` Lai Jiangshan
2020-12-23 20:27     ` Dexuan Cui [this message]
2020-12-23 20:39       ` Dexuan Cui
2020-12-23 19:49 ` Paul E. McKenney
     [not found] ` <20201226103421.6616-1-hdanton@sina.com>
2020-12-26 14:52   ` Paul E. McKenney
2020-12-27 14:08     ` Lai Jiangshan
2020-12-27 16:02       ` Paul E. McKenney
