From: Peter Zijlstra <peterz@infradead.org>
To: Michal Hocko <mhocko@kernel.org>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
catalin.marinas@arm.com, will@kernel.org, mingo@redhat.com,
bp@alien8.de, rth@twiddle.net, ink@jurassic.park.msu.ru,
mattst88@gmail.com, benh@kernel.crashing.org, paulus@samba.org,
mpe@ellerman.id.au, heiko.carstens@de.ibm.com, gor@linux.ibm.com,
borntraeger@de.ibm.com, ysato@users.sourceforge.jp,
dalias@libc.org, davem@davemloft.net, ralf@linux-mips.org,
paul.burton@mips.com, jhogan@kernel.org, jiaxun.yang@flygoat.com,
chenhc@lemote.com, akpm@linux-foundation.org, rppt@linux.ibm.com,
anshuman.khandual@arm.com, tglx@linutronix.de, cai@lca.pw,
robin.murphy@arm.com, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, hpa@zytor.com, x86@kernel.org,
dave.hansen@linux.intel.com, luto@kernel.org,
len.brown@intel.com, axboe@kernel.dk, dledford@redhat.com,
jeffrey.t.kirsher@intel.com, linux-alpha@vger.kernel.org,
naveen.n.rao@linux.vnet.ibm.com, mwb@linux.vnet.ibm.com,
linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
tbogendoerfer@suse.de, linux-mips@vger.kernel.org,
rafael@kernel.org, gregkh@linuxfoundation.org
Subject: Re: [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware
Date: Wed, 25 Sep 2019 18:31:54 +0200 [thread overview]
Message-ID: <20190925163154.GF4553@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20190925132544.GL23050@dhcp22.suse.cz>
On Wed, Sep 25, 2019 at 03:25:44PM +0200, Michal Hocko wrote:
> I am sorry but I still do not understand why you consider this whack-a-mole
> better than simply living with the fact that NUMA_NO_NODE is a
> reality and that using the full cpu mask is a reasonable answer to that.
Because it doesn't make physical sense. A device _cannot_ be local to
all CPUs in a NUMA system.
Thread overview: 53+ messages
2019-09-17 12:48 [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware Yunsheng Lin
2019-09-23 15:15 ` Peter Zijlstra
2019-09-23 15:28 ` Michal Hocko
2019-09-23 15:48 ` Peter Zijlstra
2019-09-23 16:52 ` Michal Hocko
2019-09-23 20:34 ` Peter Zijlstra
2019-09-24 1:29 ` Yunsheng Lin
2019-09-24 9:25 ` Peter Zijlstra
2019-09-24 11:07 ` Yunsheng Lin
2019-09-24 11:28 ` Peter Zijlstra
2019-09-24 11:44 ` Yunsheng Lin
2019-09-24 11:58 ` Peter Zijlstra
2019-09-24 12:09 ` Yunsheng Lin
2019-09-24 7:47 ` Michal Hocko
2019-09-24 9:17 ` Peter Zijlstra
2019-09-24 10:56 ` Michal Hocko
2019-09-24 11:23 ` Peter Zijlstra
2019-09-24 11:54 ` Michal Hocko
2019-09-24 12:09 ` Peter Zijlstra
2019-09-24 12:25 ` Michal Hocko
2019-09-24 12:43 ` Peter Zijlstra
2019-09-24 12:59 ` Peter Zijlstra
2019-09-24 13:19 ` Michal Hocko
2019-09-25 9:14 ` Yunsheng Lin
2019-09-25 10:41 ` Peter Zijlstra
2019-10-08 8:38 ` Yunsheng Lin
2019-10-09 12:25 ` Robin Murphy
2019-10-10 6:07 ` Yunsheng Lin
2019-10-10 7:32 ` Michal Hocko
2019-10-11 3:27 ` Yunsheng Lin
2019-10-11 11:15 ` Peter Zijlstra
2019-10-12 6:17 ` Yunsheng Lin
2019-10-12 7:40 ` Greg KH
2019-10-12 9:47 ` Yunsheng Lin
2019-10-12 10:40 ` Greg KH
2019-10-12 10:47 ` Greg KH
2019-10-14 8:00 ` Yunsheng Lin
2019-10-14 9:25 ` Greg KH
2019-10-14 9:49 ` Peter Zijlstra
2019-10-14 10:04 ` Greg KH
2019-10-15 10:40 ` Yunsheng Lin
2019-10-15 16:58 ` Greg KH
2019-10-16 12:07 ` Yunsheng Lin
2019-10-28 9:20 ` Yunsheng Lin
2019-10-29 8:53 ` Michal Hocko
2019-10-30 1:58 ` Yunsheng Lin
2019-10-10 8:56 ` Peter Zijlstra
2019-09-25 10:40 ` Peter Zijlstra
2019-09-25 13:25 ` Michal Hocko
2019-09-25 16:31 ` Peter Zijlstra [this message]
2019-09-25 21:45 ` Peter Zijlstra
2019-09-26 9:05 ` Peter Zijlstra
2019-09-26 12:10 ` Peter Zijlstra