Linux-api Archive on lore.kernel.org
From: Michal Hocko <mhocko@kernel.org>
To: Ben Widawsky <ben.widawsky@intel.com>
Cc: linux-mm <linux-mm@kvack.org>, Andi Kleen <ak@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Hildenbrand <david@redhat.com>,
	David Rientjes <rientjes@google.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Kuppuswamy Sathyanarayanan 
	<sathyanarayanan.kuppuswamy@linux.intel.com>,
	Lee Schermerhorn <lee.schermerhorn@hp.com>,
	Li Xinhai <lixinhai.lxh@gmail.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Mina Almasry <almasrymina@google.com>, Tejun Heo <tj@kernel.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	linux-api@vger.kernel.org
Subject: Re: [PATCH 00/18] multiple preferred nodes
Date: Tue, 23 Jun 2020 13:20:48 +0200
Message-ID: <20200623112048.GR31426@dhcp22.suse.cz> (raw)
In-Reply-To: <20200622070957.GB31426@dhcp22.suse.cz>

On Mon 22-06-20 09:10:00, Michal Hocko wrote:
[...]
> > The goal of the new mode is to enable some use cases for tiered memory
> > usage models, which I've lovingly named:
> > 1a. The Hare - The interconnect is fast enough to meet bandwidth and latency
> > requirements, allowing preference to be given to all nodes with "fast" memory.
> > 1b. The Indiscriminate Hare - An application knows it wants fast memory (or
> > perhaps slow memory), but doesn't care which node it runs on. The application
> > can prefer a set of nodes and then xpu-bind to the local node (cpu, accelerator,
> > etc). This reverses how nodes are chosen today, where the kernel attempts to use
> > memory local to the CPU whenever possible. This will instead attempt to use the
> > accelerator local to the memory.
> > 2. The Tortoise - The administrator (or the application itself) is aware it only
> > needs slow memory, and so can prefer that.
> >
> > Much of this is almost achievable with the bind interface, but the bind
> > interface suffers from an inability to fall back to another set of nodes
> > if allocations fail on all nodes in the nodemask.

Yes, and it is probably worth mentioning explicitly that this might lead
to an OOM killer invocation, so a failure would be disruptive to any
workload which is allowed to allocate from the given node mask (so even
to tasks without any mempolicy).
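
For example, with the current strict semantics (a sketch, assuming
nodes 0-1 on an 8 node system):

	/* Strict binding: no escape from the mask. */
	unsigned long nodes = 0x3;	/* nodes 0 and 1 */

	set_mempolicy(MPOL_BIND, &nodes, 8);
	/*
	 * If nodes 0-1 cannot satisfy an allocation, the kernel reclaims
	 * and potentially OOM-kills within that mask rather than falling
	 * back to the remaining nodes.
	 */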

> > Like MPOL_BIND, a nodemask is given. Inherently, this removes ordering
> > from the preference.
> > 
> > > /* Set the first two nodes as preferred in an 8 node system. */
> > > const unsigned long nodes = 0x3;
> > > set_mempolicy(MPOL_PREFER_MANY, &nodes, 8);
> > 
> > > /* Mimic the interleave policy, but with a fallback. */
> > > const unsigned long nodes = 0xaa;
> > > set_mempolicy(MPOL_PREFER_MANY, &nodes, 8);
> > 
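
For completeness, the above as a minimal self-contained program (a
sketch only: MPOL_PREFER_MANY is merely proposed by this series, so
the numeric value below is a placeholder for illustration; link with
-lnuma):

	#include <numaif.h>
	#include <stdio.h>

	#ifndef MPOL_PREFER_MANY
	#define MPOL_PREFER_MANY 5	/* placeholder; the series defines the real value */
	#endif

	int main(void)
	{
		/* Prefer the first two nodes of an 8 node system. */
		unsigned long nodes = 0x3;

		if (set_mempolicy(MPOL_PREFER_MANY, &nodes, 8) < 0)
			perror("set_mempolicy");

		return 0;
	}
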
> > Some internal discussion took place around the interface. There are two
> > alternatives which we have discussed, plus one I stuck in:
> > 1. Ordered list of nodes. Currently it's believed that the added complexity is
> >    not needed for the expected use cases.

There is no ordering in MPOL_BIND either, and even though NUMA APIs
tend to be screwed up in multiple respects, this is not a problem I
have ever stumbled over.

> > 2. A flag for bind to allow falling back to other nodes. This confuses the
> >    notion of binding and is less flexible than the current solution.

Agreed.

> > 3. Create flags or new modes that help with some ordering. This offers both a
> >    friendlier API and a solution for more customized usage. It's unknown
> >    whether it's worth the complexity to support this. Here is sample code for
> >    how this might work:
> > 
> > > // Default
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
> > > // which is the same as
> > > set_mempolicy(MPOL_DEFAULT, NULL, 0);

OK

> > > // The Hare
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);
> > >
> > > // The Tortoise
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);
> > >
> > > // Prefer the fast memory of the first two sockets
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);
> > >
> > > // Prefer specific nodes for something wacky
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE_CUSTOM, 0x17c, 1024);

I am not so sure about these though. It would be much easier to
start without additional modifiers and provide MPOL_PREFER_MANY without
any additional restrictions first (btw. I would like MPOL_PREFER_MASK
more, but I do understand that naming is not the top priority now).

It would also be great to provide a high-level semantic description
here. I have only quickly glanced through the patches, and with many
incremental steps they are not trivial to follow, so the higher-level
intention is easily lost.

Do I get it right that the default semantic is essentially the
following (see the sketch below)?
	- allocate the page from the given nodemask (with __GFP_RETRY_MAYFAIL
	  semantics)
	- fall back to an unrestricted allocation with the default
	  NUMA policy on failure
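
In kernel terms, something like this sketch (my reading of the intent,
not the code from the series; the current __alloc_pages_nodemask
signature assumed):

	static struct page *alloc_pages_prefer_many(gfp_t gfp, unsigned int order,
						    nodemask_t *prefmask)
	{
		struct page *page;

		/* First pass: restrict to the preferred mask, suppress OOM. */
		page = __alloc_pages_nodemask(gfp | __GFP_RETRY_MAYFAIL, order,
					      numa_node_id(), prefmask);
		if (page)
			return page;

		/* Second pass: unrestricted, default policy, can OOM. */
		return __alloc_pages_nodemask(gfp, order, numa_node_id(), NULL);
	}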

Or are there any use cases that need to tune how hard the preference
is kept before falling back?
-- 
Michal Hocko
SUSE Labs
