From: Peter Zijlstra <peterz@infradead.org>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: catalin.marinas@arm.com, will@kernel.org, mingo@redhat.com,
bp@alien8.de, rth@twiddle.net, ink@jurassic.park.msu.ru,
mattst88@gmail.com, benh@kernel.crashing.org, paulus@samba.org,
mpe@ellerman.id.au, heiko.carstens@de.ibm.com, gor@linux.ibm.com,
borntraeger@de.ibm.com, ysato@users.sourceforge.jp,
dalias@libc.org, davem@davemloft.net, ralf@linux-mips.org,
paul.burton@mips.com, jhogan@kernel.org, jiaxun.yang@flygoat.com,
chenhc@lemote.com, akpm@linux-foundation.org, rppt@linux.ibm.com,
anshuman.khandual@arm.com, tglx@linutronix.de, cai@lca.pw,
robin.murphy@arm.com, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, hpa@zytor.com, x86@kernel.org,
dave.hansen@linux.intel.com, luto@kernel.org,
len.brown@intel.com, axboe@kernel.dk, dledford@redhat.com,
jeffrey.t.kirsher@intel.com, linux-alpha@vger.kernel.org,
naveen.n.rao@linux.vnet.ibm.com, mwb@linux.vnet.ibm.com,
linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
tbogendoerfer@suse.de, linux-mips@vger.kernel.org,
rafael@kernel.org, mhocko@kernel.org, gregkh@linuxfoundation.org,
bhelgaas@google.com, linux-pci@vger.kernel.org,
rjw@rjwysocki.net, lenb@kernel.org, linux-acpi@vger.kernel.org
Subject: Re: [PATCH v7] numa: make node_to_cpumask_map() NUMA_NO_NODE aware
Date: Wed, 30 Oct 2019 11:14:49 +0100 [thread overview]
Message-ID: <20191030101449.GW4097@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <1572428068-180880-1-git-send-email-linyunsheng@huawei.com>

On Wed, Oct 30, 2019 at 05:34:28PM +0800, Yunsheng Lin wrote:
> When the return value of dev_to_node() is passed to cpumask_of_node()
> without checking whether the device's node id is NUMA_NO_NODE, KASAN
> detects a global out-of-bounds access.
>
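For context, a minimal illustration of the problematic pattern; the
helper names are real kernel APIs, but the snippet itself is editorial,
not taken from the patch:

	/* A device with no NUMA affinity reports NUMA_NO_NODE (-1). */
	int node = dev_to_node(dev);

	/*
	 * Without a NUMA_NO_NODE check, this indexes
	 * node_to_cpumask_map[-1] on x86, a global out-of-bounds
	 * read that KASAN reports.
	 */
	const struct cpumask *mask = cpumask_of_node(node);
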
> From the discussion [1], NUMA_NO_NODE really means no node affinity,
> which also means all cpus should be usable. So cpumask_of_node()
> should always return all online cpus when the caller passes
> NUMA_NO_NODE as the node id, matching the semantics with which the
> page allocator handles NUMA_NO_NODE.
>
> But we cannot simply copy the page allocator logic, because the page
> allocator does not enforce node affinity: it only treats the given
> node as preferred and is free to fall back to any other NUMA node.
> That is not the case here; node_to_cpumask_map() restricts callers to
> the particular node's cpus, which would give truly non-deterministic
> behavior depending on where the code happens to run. So in fact we
> really want to return cpu_online_mask for NUMA_NO_NODE.
>
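A minimal sketch of the behavior being argued for, modeled on the x86
inline (!CONFIG_DEBUG_PER_CPU_MAPS) variant; the patch's actual diff
may differ in detail:

	static inline const struct cpumask *cpumask_of_node(int node)
	{
		/* No node affinity: any online cpu will do. */
		if (node == NUMA_NO_NODE)
			return cpu_online_mask;

		return node_to_cpumask_map[node];
	}
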
> There is also a debugging version of node_to_cpumask_map() for x86
> and arm64, used only when CONFIG_DEBUG_PER_CPU_MAPS is defined. This
> patch changes it to handle NUMA_NO_NODE the same way as the normal
> node_to_cpumask_map().
>
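And a sketch of the debug-side change: on x86 with
CONFIG_DEBUG_PER_CPU_MAPS, cpumask_of_node() is an out-of-line function
that sanity-checks its argument, so the NUMA_NO_NODE case would be
handled before the bounds check. The warning text and surrounding
checks here are illustrative, not the literal patch:

	const struct cpumask *cpumask_of_node(int node)
	{
		/* Same semantics as the non-debug version. */
		if (node == NUMA_NO_NODE)
			return cpu_online_mask;

		if ((unsigned int)node >= nr_node_ids) {
			pr_warn("cpumask_of_node(%d): node >= nr_node_ids(%u)\n",
				node, nr_node_ids);
			dump_stack();
			return cpu_none_mask;
		}
		return node_to_cpumask_map[node];
	}
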
> [1] https://lkml.org/lkml/2019/9/11/66
>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> Suggested-by: Michal Hocko <mhocko@kernel.org>
> Acked-by: Michal Hocko <mhocko@suse.com>
> Acked-by: Paul Burton <paul.burton@mips.com> # MIPS bits

Still:
Nacked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Thread overview: 8+ messages
2019-10-30 9:34 [PATCH v7] numa: make node_to_cpumask_map() NUMA_NO_NODE aware Yunsheng Lin
2019-10-30 10:14 ` Peter Zijlstra [this message]
2019-10-30 10:22 ` Michal Hocko
2019-10-30 10:28 ` Peter Zijlstra
2019-10-30 11:33 ` Michal Hocko
2019-10-30 12:28 ` Qian Cai
2019-10-31 3:26 ` Yunsheng Lin
2019-10-30 10:20 ` Michal Hocko