linux-mm.kvack.org archive mirror
From: Wei Xu <weixugc@google.com>
To: "ying.huang@intel.com" <ying.huang@intel.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
	Alistair Popple <apopple@nvidia.com>,
	 Yang Shi <shy828301@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	 Linux MM <linux-mm@kvack.org>, Greg Thelen <gthelen@google.com>,
	 Jagdish Gediya <jvgediya@linux.ibm.com>,
	 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Davidlohr Bueso <dave@stgolabs.net>,
	 Michal Hocko <mhocko@kernel.org>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	 Brice Goglin <brice.goglin@gmail.com>,
	Feng Tang <feng.tang@intel.com>,
	 Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: RFC: Memory Tiering Kernel Interfaces
Date: Wed, 11 May 2022 23:24:47 -0700	[thread overview]
Message-ID: <CAAPL-u8=n4V-WYgZx4GJpxD17yFuyrz5N07W0uyLmHxn_4zzCw@mail.gmail.com> (raw)
In-Reply-To: <be3b9f239fa46e968b333291910b2afd3e38bcba.camel@intel.com>

On Wed, May 11, 2022 at 8:14 PM ying.huang@intel.com
<ying.huang@intel.com> wrote:
>
> On Wed, 2022-05-11 at 19:39 -0700, Wei Xu wrote:
> > On Wed, May 11, 2022 at 6:42 PM ying.huang@intel.com
> > <ying.huang@intel.com> wrote:
> > >
> > > On Wed, 2022-05-11 at 10:07 -0700, Wei Xu wrote:
> > > > On Wed, May 11, 2022 at 12:49 AM ying.huang@intel.com
> > > > <ying.huang@intel.com> wrote:
> > > > >
> > > > > On Tue, 2022-05-10 at 22:30 -0700, Wei Xu wrote:
> > > > > > On Tue, May 10, 2022 at 4:38 AM Aneesh Kumar K.V
> > > > > > <aneesh.kumar@linux.ibm.com> wrote:
> > > > > > >
> > > > > > > Alistair Popple <apopple@nvidia.com> writes:
> > > > > > >
> > > > > > > > Wei Xu <weixugc@google.com> writes:
> > > > > > > >
> > > > > > > > > On Thu, May 5, 2022 at 5:19 PM Alistair Popple <apopple@nvidia.com> wrote:
> > > > > > > > > >
> > > > > > > > > > Wei Xu <weixugc@google.com> writes:
> > > > > > > > > >
> > > > > > > > > > [...]
> > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > Tiering Hierarchy Initialization
> > > > > > > > > > > > > `=============================='
> > > > > > > > > > > > >
> > > > > > > > > > > > > By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
> > > > > > > > > > > > >
> > > > > > > > > > > > > A device driver can remove its memory nodes from the top tier, e.g.
> > > > > > > > > > > > > a dax driver can remove PMEM nodes from the top tier.
> > > > > > > > > > > >
> > > > > > > > > > > > With the topology built by firmware we should not need this.
> > > > > > > > > >
> > > > > > > > > > I agree that in an ideal world the hierarchy should be built by firmware based
> > > > > > > > > > on something like the HMAT. But I also think being able to override this will be
> > > > > > > > > > useful in getting there. Therefore a way of overriding the generated hierarchy
> > > > > > > > > > would be good, either via sysfs or kernel boot parameter if we don't want to
> > > > > > > > > > commit to a particular user interface now.
> > > > > > > > > >
> > > > > > > > > > However I'm less sure letting device-drivers override this is a good idea. How,
> > > > > > > > > > for example, would a GPU driver make sure its node is in the top tier? By moving
> > > > > > > > > > every node that the driver does not know about out of N_TOPTIER_MEMORY? That
> > > > > > > > > > could get messy if say there were two drivers both of which wanted their node to
> > > > > > > > > > be in the top tier.
> > > > > > > > >
> > > > > > > > > The suggestion is to allow a device driver to opt out its memory
> > > > > > > > > devices from the top-tier, not the other way around.
> > > > > > > >
> > > > > > > > So how would demotion work in the case of accelerators then? In that
> > > > > > > > case we would want GPU memory to demote to DRAM, but that won't happen
> > > > > > > > if both DRAM and GPU memory are in N_TOPTIER_MEMORY and it seems the
> > > > > > > > only override available with this proposal would move GPU memory into a
> > > > > > > > lower tier, which is the opposite of what's needed there.
> > > > > > >
> > > > > > > How about we do 3 tiers for now?  dax kmem devices can be registered to
> > > > > > > tier 3.  By default, all NUMA nodes are registered at tier 2, and HBM or
> > > > > > > GPU nodes can be registered at tier 1.
> > > > > >
> > > > > > This makes sense.  I will send an updated RFC based on the discussions so far.
> > > > >
> > > > > Are these tier numbers fixed?  If so, it appears strange that the
> > > > > smallest tier number is 0 on some machines, but 1 on other
> > > > > machines.
> > > >
> > > > When the kernel is configured to allow 3 tiers, we can always show all
> > > > the 3 tiers. It is just that some tiers (e.g. tier 0) may be empty on
> > > > some machines.
> > >
> > > I still think that it's better to have no empty tiers among the
> > > memory tiers auto-generated by the kernel.  Yes, the tier numbers
> > > will not be absolutely stable, but that only happens during system
> > > bootup in practice, so it's not a big issue IMHO.
> >
> > It should not be hard to hide empty tiers (e.g. tier-0) if we prefer.
> > But even if tier-0 is empty, we should still keep this tier in the
> > kernel and not move DRAM nodes into this tier.  One reason is that an
> > HBM node might be hot-added into tier-0 at a later time.
> >
>
> Yes.  The in-kernel representation and the user space interface could be
> different.
>
> I have thought about something like the below.  We always make the main
> memory (DRAM here, CPU local) tier 0.  Then the slower memory will be
> positive, tier 1, 2, 3, ..., and the faster memory will be negative,
> tier -1, -2, -3, ....  Then, a GPU driver can register its memory as
> tier -1.  And the tier numbers would be more stable.  But I'm not sure
> whether users will be happy with negative tier numbers.
>
> > > And, I still think it's better to make only N-1 tiers writable
> > > (or even readable) out of N total tiers.  If "tier0" is written,
> > > how do we deal with nodes that were in "tier0" before the write but
> > > not after it?  One possible way is to put them into "tierN".  And
> > > while a user is customizing the tiers, the union of the "N tiers"
> > > may be incomplete.
> >
> > The sysfs interfaces that I have in mind now are:
> >
> > * /sys/devices/system/memtier/memtierN/nodelist (N=0, 1, 2)
> >
> > This is read-only to list the memory nodes for a specific tier.
> >
> > * /sys/devices/system/node/nodeN/memtier (N=0, 1, ...)
> >
> > This is a read-write interface. When written, the kernel moves the
> > node into the user-specified tier.  No other nodes are affected.
> >
> > This interface should be able to avoid the above issue.
>
> Yes.  This works too.

FYI, I have just sent out an updated RFC with the above sysfs interfaces.
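
To make the intended usage concrete, here is a shell sketch of how
userspace might drive these interfaces.  Note the paths are only the
RFC's proposal, not an existing kernel ABI, so the snippet mocks the
sysfs layout under a scratch directory instead of touching /sys:

```shell
# Mock the proposed sysfs layout, since /sys/devices/system/memtier
# does not exist in any released kernel.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/devices/system/memtier/memtier1" \
         "$SYSFS/devices/system/node/node2"
echo "0-1" > "$SYSFS/devices/system/memtier/memtier1/nodelist"  # DRAM nodes
echo 1 > "$SYSFS/devices/system/node/node2/memtier"             # node 2 in tier 1

# Read-only view: list the nodes in tier 1.
cat "$SYSFS/devices/system/memtier/memtier1/nodelist"

# Proposed read-write knob: move node 2 into tier 2.  With the real
# interface the kernel would update the per-tier nodelists; the mock
# only records the write itself.
echo 2 > "$SYSFS/devices/system/node/node2/memtier"
cat "$SYSFS/devices/system/node/node2/memtier"
```

The key property of this interface is visible in the sketch: a write to
nodeN/memtier affects only that node, so no other node can be moved to
an unintended tier as a side effect.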

> Best Regards,
> Huang, Ying
>
> > > > BTW, the userspace should not assume a specific meaning of a
> > > > particular tier id because it can change depending on the number of
> > > > tiers that the kernel is configured with.  For example, the userspace
> > > > should not assume that tier-2 always means PMEM nodes.  In a system
> > > > with 4 tiers, PMEM nodes may be in tier-3, not tier-2.
> > >
> > > Yes.  This sounds good.
> > >
> > > Best Regards,
> > > Huang, Ying
> > >
>
>


Thread overview: 57+ messages
2022-04-30  2:10 RFC: Memory Tiering Kernel Interfaces Wei Xu
2022-04-30  3:59 ` Yang Shi
2022-04-30  6:37   ` Wei Xu
2022-05-06  0:01     ` Alistair Popple
2022-05-10  4:32       ` Wei Xu
2022-05-10  5:37         ` Alistair Popple
2022-05-10 11:38           ` Aneesh Kumar K.V
2022-05-11  5:30             ` Wei Xu
2022-05-11  7:34               ` Alistair Popple
2022-05-11  7:49               ` ying.huang
2022-05-11 17:07                 ` Wei Xu
2022-05-12  1:42                   ` ying.huang
2022-05-12  2:39                     ` Wei Xu
2022-05-12  3:13                       ` ying.huang
2022-05-12  3:37                         ` Wei Xu
2022-05-12  6:24                         ` Wei Xu [this message]
2022-05-06 18:56     ` Yang Shi
2022-05-09 14:32       ` Hesham Almatary
2022-05-10  3:24         ` Yang Shi
2022-05-10  9:59           ` Hesham Almatary
2022-05-10 12:10             ` Aneesh Kumar K V
2022-05-11  5:42               ` Wei Xu
2022-05-11  7:12                 ` Alistair Popple
2022-05-11  9:05                   ` Hesham Almatary
2022-05-12  3:02                     ` ying.huang
2022-05-12  4:40                   ` Aneesh Kumar K V
2022-05-12  4:49                     ` Wei Xu
2022-05-10  4:22         ` Wei Xu
2022-05-10 10:01           ` Hesham Almatary
2022-05-10 11:44           ` Aneesh Kumar K.V
2022-05-01 18:35   ` Dan Williams
2022-05-03  6:36     ` Wei Xu
2022-05-06 19:05     ` Yang Shi
2022-05-07  7:56     ` ying.huang
2022-05-01 17:58 ` Davidlohr Bueso
2022-05-02  1:04   ` David Rientjes
2022-05-02  7:23   ` Aneesh Kumar K.V
2022-05-03  2:07   ` Baolin Wang
2022-05-03  6:06   ` Wei Xu
2022-05-03 17:14   ` Alistair Popple
2022-05-03 17:47     ` Dave Hansen
2022-05-03 22:35       ` Alistair Popple
2022-05-03 23:54         ` Dave Hansen
2022-05-04  1:31           ` Wei Xu
2022-05-04 17:02             ` Dave Hansen
2022-05-05  6:35               ` Wei Xu
2022-05-05 14:24                 ` Dave Hansen
2022-05-10  4:43                   ` Wei Xu
2022-05-02  6:25 ` Aneesh Kumar K.V
2022-05-03  7:02   ` Wei Xu
2022-05-02 15:20 ` Dave Hansen
2022-05-03  7:19   ` Wei Xu
2022-05-03 19:12 ` Tim Chen
2022-05-05  7:02   ` Wei Xu
2022-05-05  8:57 ` ying.huang
2022-05-05 23:57 ` Alistair Popple
2022-05-06  0:25   ` Alistair Popple
