From: Peter Zijlstra <peterz@infradead.org>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: catalin.marinas@arm.com, will@kernel.org, mingo@redhat.com,
bp@alien8.de, rth@twiddle.net, ink@jurassic.park.msu.ru,
mattst88@gmail.com, benh@kernel.crashing.org, paulus@samba.org,
mpe@ellerman.id.au, heiko.carstens@de.ibm.com, gor@linux.ibm.com,
borntraeger@de.ibm.com, ysato@users.sourceforge.jp,
dalias@libc.org, davem@davemloft.net, ralf@linux-mips.org,
paul.burton@mips.com, jhogan@kernel.org, jiaxun.yang@flygoat.com,
chenhc@lemote.com, akpm@linux-foundation.org, rppt@linux.ibm.com,
anshuman.khandual@arm.com, tglx@linutronix.de, cai@lca.pw,
robin.murphy@arm.com, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, hpa@zytor.com, x86@kernel.org,
dave.hansen@linux.intel.com, luto@kernel.org,
len.brown@intel.com, axboe@kernel.dk, dledford@redhat.com,
jeffrey.t.kirsher@intel.com, linux-alpha@vger.kernel.org,
naveen.n.rao@linux.vnet.ibm.com, mwb@linux.vnet.ibm.com,
linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
tbogendoerfer@suse.de, linux-mips@vger.kernel.org,
rafael@kernel.org, mhocko@kernel.org, gregkh@linuxfoundation.org
Subject: Re: [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware
Date: Mon, 23 Sep 2019 17:15:19 +0200
Message-ID: <20190923151519.GE2369@hirez.programming.kicks-ass.net>
In-Reply-To: <1568724534-146242-1-git-send-email-linyunsheng@huawei.com>
On Tue, Sep 17, 2019 at 08:48:54PM +0800, Yunsheng Lin wrote:
> When the return value of dev_to_node() is passed to cpumask_of_node()
> without checking whether the device's node id is NUMA_NO_NODE, KASAN
> detects a global-out-of-bounds access.
>
> From the discussion [1], NUMA_NO_NODE really means no node affinity,
> which also means all cpus should be usable. So cpumask_of_node()
> should always return all online cpus when the caller passes
> NUMA_NO_NODE as the node id, matching the semantics the page
> allocator already uses for NUMA_NO_NODE.
>
> But we cannot simply copy the page allocator logic, because the page
> allocator does not enforce node affinity: it only takes the node as a
> preferred one and is free to fall back to any other NUMA node. That
> is not the case here; node_to_cpumask_map() restricts callers to the
> particular node's cpus, which would give non-deterministic behavior
> depending on where the code is executed. So in fact we really want to
> return cpu_online_mask for NUMA_NO_NODE.
>
> There is also a debugging version of node_to_cpumask_map() for x86
> and arm64, used only when CONFIG_DEBUG_PER_CPU_MAPS is defined; this
> patch changes it to handle NUMA_NO_NODE the same way as the normal
> node_to_cpumask_map().
>
> [1] https://lore.kernel.org/patchwork/patch/1125789/
That is bloody unusable, don't do that. Use:
https://lkml.kernel.org/r/$MSGID
if anything. Then I can find it in my local mbox without having to
resort to touching a mouse and shitty browser software.
(also patchwork is absolute crap for reading email threads)
Anyway, I found it -- I think; I refused to click the link. I replied
there.
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> Suggested-by: Michal Hocko <mhocko@kernel.org>
> Acked-by: Michal Hocko <mhocko@suse.com>
> diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> index 4123100e..9859acb 100644
> --- a/arch/x86/mm/numa.c
> +++ b/arch/x86/mm/numa.c
> @@ -861,6 +861,9 @@ void numa_remove_cpu(int cpu)
> */
> const struct cpumask *cpumask_of_node(int node)
> {
> + if (node == NUMA_NO_NODE)
> + return cpu_online_mask;
This mandates that the caller holds cpus_read_lock() or something; I'm
pretty sure that if I put:
	lockdep_assert_cpus_held();
here, it comes apart real quick. Without holding the cpu hotplug lock,
the online mask is gibberish.
> +
> if ((unsigned)node >= nr_node_ids) {
> printk(KERN_WARNING
> "cpumask_of_node(%d): (unsigned)node >= nr_node_ids(%u)\n",
I still think this makes absolutely no sense what so ever.