From: Gu Zheng <guz.fnst@cn.fujitsu.com>
To: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>, <linux-kernel@vger.kernel.org>,
<laijs@cn.fujitsu.com>, <isimatu.yasuaki@jp.fujitsu.com>,
<tangchen@cn.fujitsu.com>, <izumi.taku@jp.fujitsu.com>
Subject: Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed
Date: Mon, 30 Mar 2015 17:49:31 +0800
Message-ID: <55191C2B.1050402@cn.fujitsu.com>
In-Reply-To: <551436F2.5020804@jp.fujitsu.com>
Hi Kame-san,
On 03/27/2015 12:42 AM, Kamezawa Hiroyuki wrote:
> On 2015/03/27 0:18, Tejun Heo wrote:
>> Hello,
>>
>> On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote:
>>> wq generates the NUMA affinity (pool->node) for every possible CPU's
>>> per-CPU workqueue at the init stage, which means the affinity of
>>> currently not-present CPUs may be incorrect. So we need to update
>>> pool->node for a newly added CPU to the correct node when it prepares
>>> to come online; otherwise, if node hotplug has occurred, we will try
>>> to create a worker on an invalid node.
>>
>> If the mapping is gonna be static once the cpus show up, any chance we
>> can initialize that for all possible cpus during boot?
>>
>
> I think the kernel can define the complete
>
> cpuid <-> lapicid <-> pxm <-> nodeid
>
> mapping at boot using firmware table information.
Could you explain this in more detail?
>
> One concern is the current x86 logic for memory-less nodes vs. memory hotplug
> (as I explained before).
>
> My idea is:
>
> step 1. Build the complete cpuid <-> apicid <-> pxm <-> node id mapping at boot.
>
> But this may be overwritten by x86's memory-less-node logic. So,
>
> step 2. Check whether the node is online before calling kmalloc; if it is
> offline, use -1 (NUMA_NO_NODE) rather than updating the workqueue's attributes.
>
> Thanks,
> -Kame