From: Peter Zijlstra <peterz@infradead.org>
To: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Rich Felker <dalias@libc.org>,
Linux-sh list <linux-sh@vger.kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Heiko Carstens <heiko.carstens@de.ibm.com>,
jiaxun.yang@flygoat.com, Michal Hocko <mhocko@kernel.org>,
Michael Bringmann <mwb@linux.vnet.ibm.com>,
Paul Mackerras <paulus@samba.org>,
"H. Peter Anvin" <hpa@zytor.com>,
sparclinux <sparclinux@vger.kernel.org>,
Huacai Chen <chenhc@lemote.com>, Will Deacon <will@kernel.org>,
Qian Cai <cai@lca.pw>, linux-s390 <linux-s390@vger.kernel.org>,
Yoshinori Sato <ysato@users.sourceforge.jp>,
the arch/x86 maintainers <x86@kernel.org>,
Yunsheng Lin <linyunsheng@huawei.com>,
Mike Rapoport <rppt@linux.ibm.com>,
Christian Borntraeger <borntraeger@de.ibm.com>,
Doug Ledford <dledford@redhat.com>,
Ingo Molnar <mingo@redhat.com>,
Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
James Hogan <jhogan@kernel.org>, Matt Turner <mattst88@gmail.com>,
linux-mips@vger.kernel.org, Len Brown <len.brown@intel.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Greg KH <gregkh@linuxfoundation.org>,
Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
naveen.n.rao@linux.vnet.ibm.com,
Linux ARM <linux-arm-kernel@lists.infradead.org>,
Richard Henderson <rth@twiddle.net>, Jens Axboe <axboe@kernel.dk>,
linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Ralf Baechle <ralf@linux-mips.org>,
Thomas Bogendoerfer <tbogendoerfer@suse.de>,
Paul Burton <paul.burton@mips.com>,
alpha <linux-alpha@vger.kernel.org>,
"Rafael J. Wysocki" <rafael@kernel.org>,
Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
Andrew Morton <akpm@linux-foundation.org>,
Robin Murphy <robin.murphy@arm.com>,
"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware
Date: Thu, 26 Sep 2019 14:24:06 +0200 [thread overview]
Message-ID: <20190926122406.GB4519@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <CAMuHMdVJ6RbEbKc8s_rhJaUBNnA8sOByq9cJ3KH-qmcqQrm_UQ@mail.gmail.com>
On Thu, Sep 26, 2019 at 01:45:53PM +0200, Geert Uytterhoeven wrote:
> Hi Peter,
>
> On Thu, Sep 26, 2019 at 11:42 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > On Wed, Sep 25, 2019 at 03:25:44PM +0200, Michal Hocko wrote:
> > > I am sorry but I still do not understand why you consider this
> > > whack-a-mole better than simply living with the fact that NUMA_NO_NODE
> > > is a reality and that using the full cpu mask is a reasonable answer to that.
> >
> > Because it doesn't make physical sense. A device _cannot_ be local to
> > all CPUs in a NUMA system.
>
> While it cannot be local to all CPUs, it can be at a uniform (equal) distance
> to each CPU node, can't it?
Only in some really narrow cases; and I'm not sure those are realistic,
nor, if they are, whether not providing NUMA info is the best way to
describe them.

I suppose it is possible to have a PCI bridge shared between two nodes,
such that the PCI devices behind it are equidistant; esp. if that all
lives in a single package. But the moment you scale this out, you either
get devices that are 'local' to a package while having multiple
packages, or, if you maintain a single bridge in a big system, things
become so slow it all doesn't matter anyway (try having an equidistant
device in a 16-node system).

I'm saying that assigning a node (one of the shared ones) is, in the
generic case of multiple packages, the better solution over assigning
all nodes. The other solution is migrating the device model over to a
node mask instead of a single node. But as I said, I'm not sure anybody
has actually built something like this, so I'm not sure it matters.

OTOH, allowing NUMA topology to go undescribed has led to a whole host
of crap, which will only get worse if we don't become stricter.
Thread overview: 57+ messages
2019-09-17 12:48 [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware Yunsheng Lin
2019-09-21 22:38 ` Paul Burton
2019-09-23 2:31 ` Yunsheng Lin
2019-09-23 15:15 ` Peter Zijlstra
2019-09-23 15:28 ` Michal Hocko
2019-09-23 15:48 ` Peter Zijlstra
2019-09-23 16:52 ` Michal Hocko
2019-09-23 20:34 ` Peter Zijlstra
2019-09-24 1:29 ` Yunsheng Lin
2019-09-24 9:25 ` Peter Zijlstra
2019-09-24 11:07 ` Yunsheng Lin
2019-09-24 11:28 ` Peter Zijlstra
2019-09-24 11:44 ` Yunsheng Lin
2019-09-24 11:58 ` Peter Zijlstra
2019-09-24 12:09 ` Yunsheng Lin
2019-09-24 7:47 ` Michal Hocko
2019-09-24 9:17 ` Peter Zijlstra
2019-09-24 10:56 ` Michal Hocko
2019-09-24 11:23 ` Peter Zijlstra
2019-09-24 11:54 ` Michal Hocko
2019-09-24 12:09 ` Peter Zijlstra
2019-09-24 12:25 ` Michal Hocko
2019-09-24 12:43 ` Peter Zijlstra
2019-09-24 12:59 ` Peter Zijlstra
2019-09-24 13:19 ` Michal Hocko
2019-09-25 9:14 ` Yunsheng Lin
2019-09-25 10:41 ` Peter Zijlstra
2019-10-08 8:38 ` Yunsheng Lin
2019-10-09 12:25 ` Robin Murphy
2019-10-10 6:07 ` Yunsheng Lin
2019-10-10 7:32 ` Michal Hocko
2019-10-11 3:27 ` Yunsheng Lin
2019-10-11 11:15 ` Peter Zijlstra
2019-10-12 6:17 ` Yunsheng Lin
2019-10-12 7:40 ` Greg KH
2019-10-12 9:47 ` Yunsheng Lin
2019-10-12 10:40 ` Greg KH
2019-10-12 10:47 ` Greg KH
2019-10-14 8:00 ` Yunsheng Lin
2019-10-14 9:25 ` Greg KH
2019-10-14 9:49 ` Peter Zijlstra
2019-10-14 10:04 ` Greg KH
2019-10-15 10:40 ` Yunsheng Lin
2019-10-15 16:58 ` Greg KH
2019-10-16 12:07 ` Yunsheng Lin
2019-10-28 9:20 ` Yunsheng Lin
2019-10-29 8:53 ` Michal Hocko
2019-10-30 1:58 ` Yunsheng Lin
2019-10-10 8:56 ` Peter Zijlstra
2019-09-25 10:40 ` Peter Zijlstra
2019-09-25 13:25 ` Michal Hocko
2019-09-25 16:31 ` Peter Zijlstra
2019-09-25 21:45 ` Peter Zijlstra
2019-09-26 9:05 ` Peter Zijlstra
2019-09-26 12:10 ` Peter Zijlstra
2019-09-26 11:45 ` Geert Uytterhoeven
2019-09-26 12:24 ` Peter Zijlstra [this message]