From: Keith Busch <keith.busch@intel.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
linux-mm@kvack.org,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Rafael Wysocki <rafael@kernel.org>,
Dave Hansen <dave.hansen@intel.com>,
Dan Williams <dan.j.williams@intel.com>
Subject: Re: [PATCH 1/7] node: Link memory nodes to their compute nodes
Date: Fri, 16 Nov 2018 11:32:54 -0700 [thread overview]
Message-ID: <20181116183254.GD14630@localhost.localdomain> (raw)
In-Reply-To: <20181115203654.GA28246@bombadil.infradead.org>
On Thu, Nov 15, 2018 at 12:36:54PM -0800, Matthew Wilcox wrote:
> On Thu, Nov 15, 2018 at 07:59:20AM -0700, Keith Busch wrote:
> > On Thu, Nov 15, 2018 at 05:57:10AM -0800, Matthew Wilcox wrote:
> > > On Wed, Nov 14, 2018 at 03:49:14PM -0700, Keith Busch wrote:
> > > > Memory-only nodes will often have affinity to a compute node, and
> > > > platforms have ways to express that locality relationship.
> > > >
> > > > A node containing CPUs or other DMA devices that can initiate
> > > > memory access is referred to as a "memory initiator". A "memory
> > > > target" is a node that provides at least one physical address
> > > > range accessible to a memory initiator.
> > >
> > > I think I may be confused here. If there is _no_ link from node X to
> > > node Y, does that mean that node X's CPUs cannot access the memory on
> > > node Y? In my mind, all nodes can access all memory in the system,
> > > just not with uniform bandwidth/latency.
> >
> > The link is just about which nodes are "local". It's like how nodes
> > have a cpulist. CPUs not in the node's list can still access that
> > node's memory, but the ones in the mask are local, and provide useful
> > optimization hints.
>
> So ... let's imagine a hypothetical system (I've never seen one built like
> this, but it doesn't seem too implausible). Connect four CPU sockets in
> a square, each of which has some regular DIMMs attached to it. CPU A is
> 0 hops to Memory A, one hop to Memory B and Memory C, and two hops from
> Memory D (each CPU only has two "QPI" links). Then maybe there's some
> special memory extender device attached on the PCIe bus. Now there's
> Memory B1 and B2 that's attached to CPU B and it's local to CPU B, but
> not as local as Memory B is ... and we'd probably _prefer_ to allocate
> memory for CPU A from Memory B1 than from Memory D. But ... *mumble*,
> this seems hard.
Indeed, that particular example is out of scope for this series. The
first objective is to aid a process running on node B's CPUs to allocate
memory in B1. Anything that crosses QPI links is on its own for now.
> I understand you're trying to reflect what the HMAT table is telling you,
> I'm just really fuzzy on who's ultimately consuming this information
> and what decisions they're trying to drive from it.
Intended consumers include processes using numa_alloc_onnode() and
mbind(). Consider a system with faster DRAM and slower persistent
memory. Such a system may group the DRAM in a different proximity domain
than the persistent memory, and both may be local to yet another
proximity domain that contains the CPUs. HMAT provides a way to express
that relationship, and this patch provides a user-facing abstraction for
that information.