From: Feng Tang <feng.tang@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@kernel.org>,
Andrea Arcangeli <aarcange@redhat.com>,
David Rientjes <rientjes@google.com>,
Mel Gorman <mgorman@techsingularity.net>,
Mike Kravetz <mike.kravetz@oracle.com>,
Randy Dunlap <rdunlap@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>,
Dave Hansen <dave.hansen@intel.com>,
Ben Widawsky <ben.widawsky@intel.com>,
Andi Kleen <ak@linux.intel.com>,
Dan Williams <dan.j.williams@intel.com>,
Feng Tang <feng.tang@intel.com>
Subject: [PATCH v3 00/14] Introduce multi-preference mempolicy
Date: Wed, 3 Mar 2021 18:20:44 +0800
Message-ID: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
This patch series introduces the concept of the MPOL_PREFERRED_MANY mempolicy.
This mempolicy mode can be used with either the set_mempolicy(2) or mbind(2)
interfaces. Like MPOL_PREFERRED, it allows an application to set a preference
for nodes which will fulfil memory allocation requests; unlike MPOL_PREFERRED,
it takes a set of nodes rather than a single node. Like MPOL_BIND, it works
over a set of nodes; unlike MPOL_BIND, it will not cause a SIGSEGV or invoke
the OOM killer if the preferred nodes are not available, and will instead fall
back to other nodes.
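As a minimal sketch of the mbind(2) side (illustrative only; MPOL_PREFERRED_MANY
comes from the patched uapi header, and the mapping size and node values here
are hypothetical):
> #include <sys/mman.h>
> #include <numaif.h>   /* mbind(2) wrapper; link with -lnuma */
>
> /* Prefer nodes 0 and 1 for a new mapping; other nodes remain a fallback. */
> unsigned long nodes = 0x3;
> size_t len = 2 * 1024 * 1024;
> void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
>                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> mbind(addr, len, MPOL_PREFERRED_MANY, &nodes, 8, 0);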
Along with these patches are patches for libnuma, numactl, numademo, and memhog.
They still need some polish, but can be found here:
https://gitlab.com/bwidawsk/numactl/-/tree/prefer-many
This allows a new usage: `numactl -P 0,3,4`
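For example, a hypothetical invocation against that branch could be
`numactl -P 0,3,4 memhog 1G`, which runs memhog with allocations preferring
nodes 0, 3 and 4 while still allowing fallback to the other nodes.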
The goal of the new mode is to enable some use-cases when using tiered memory,
via usage models which I've lovingly named:
1a. The Hare - The interconnect is fast enough to meet bandwidth and latency
requirements allowing preference to be given to all nodes with "fast" memory.
1b. The Indiscriminate Hare - An application knows it wants fast memory (or
perhaps slow memory), but doesn't care which node it runs on. The application
can prefer a set of nodes and then xPU-bind to the local node (CPU, accelerator,
etc.). This reverses how nodes are chosen today, where the kernel attempts to
use memory local to the CPU whenever possible; instead, this attempts to use
the accelerator local to the memory.
2. The Tortoise - The administrator (or the application itself) is aware it only
needs slow memory, and so can prefer that.
Much of this is almost achievable with the bind interface, but the bind
interface suffers from an inability to fall back to another set of nodes if
binding fails for all nodes in the nodemask.
Like MPOL_BIND, a nodemask is given. Inherently, this removes any ordering from
the preference.
> /* Set first two nodes as preferred in an 8 node system. */
> const unsigned long nodes = 0x3;
> set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);
> /* Mimic interleave policy, but have fallback. */
> const unsigned long nodes = 0xaa;
> set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);
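Putting the above together, a self-contained sketch with error checking (the
MPOL_PREFERRED_MANY value is an assumption taken from the series' uapi change
and must match the kernel these patches are applied to):
> #include <stdio.h>
> #include <string.h>
> #include <errno.h>
> #include <numaif.h>   /* set_mempolicy(2) wrapper; link with -lnuma */
>
> #ifndef MPOL_PREFERRED_MANY
> #define MPOL_PREFERRED_MANY 5   /* assumed value; verify against the patched header */
> #endif
>
> int main(void)
> {
>         /* Prefer nodes 0 and 1 in an 8 node system; others are a fallback. */
>         const unsigned long nodes = 0x3;
>
>         if (set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8) < 0) {
>                 fprintf(stderr, "set_mempolicy: %s\n", strerror(errno));
>                 return 1;
>         }
>         /* All subsequent allocations in this task now prefer nodes 0-1. */
>         return 0;
> }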
Some internal discussion took place around the interface. There are two
alternatives which we have discussed, plus one I stuck in:
1. Ordered list of nodes. Currently it's believed that the added complexity is
not needed for the expected use-cases.
2. A flag for bind to allow falling back to other nodes. This confuses the
notion of binding and is less flexible than the current solution.
3. Create flags or new modes that help with some ordering. This offers both a
friendlier API as well as a solution for more customized usage. It's unknown
if it's worth the complexity to support this. Here is sample code for how
this might work:
> // Prefer specific nodes for something wacky
> set_mempolicy(MPOL_PREFERRED_MANY, 0x17c, 1024);
>
> // Default
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
> // which is the same as
> set_mempolicy(MPOL_DEFAULT, NULL, 0);
>
> // The Hare
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);
>
> // The Tortoise
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);
>
> // Prefer the fast memory of the first two sockets
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);
In v1, Andi Kleen brought up reusing MPOL_PREFERRED as the mode for the API.
There wasn't consensus around this, so I've left the existing API as it was. I'm
open to more feedback here, but my slight preference is to use a new API as it
ensures if people are using it, they are entirely aware of what they're doing
and not accidentally misusing the old interface. (In a similar way to how
MPOL_LOCAL was introduced).
In v1, Michal also brought up renaming this to MPOL_PREFERRED_MASK. I'm equally
fine with that change, but I hadn't heard much emphatic support for one way or
another, so I've left that too.
Changelog:
Since v2:
* Rebased against v5.11
* Fix a stack-overflow-related panic, and a kernel warning (Feng)
* Some code cleanup (Feng)
* One RFC patch to speed up memory allocation in some cases (Feng)
Since v1:
* Dropped patch to replace numa_node_id in some places (mhocko)
* Dropped all the page allocation patches in favor of new mechanism to
use fallbacks. (mhocko)
* Dropped the special snowflake preferred node algorithm (bwidawsk)
* If the preferred node fails, ALL nodes are rechecked instead of just
the non-preferred nodes.
v3 Summary:
1: Random fix I found along the way
2-5: Represent node preference as a mask internally
6-7: Treat many preferred like bind
8-11: Handle page allocation for the new policy
12: Enable the uapi
13: unify 2 functions
14: RFC optimization patch
Thanks,
Ben/Dave/Feng
Ben Widawsky (8):
mm/mempolicy: Add comment for missing LOCAL
mm/mempolicy: kill v.preferred_nodes
mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND
mm/mempolicy: Create a page allocator for policy
mm/mempolicy: Thread allocation for many preferred
mm/mempolicy: VMA allocation for many preferred
mm/mempolicy: huge-page allocation for many preferred
mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
Dave Hansen (4):
mm/mempolicy: convert single preferred_node to full nodemask
mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
mm/mempolicy: allow preferred code to take a nodemask
mm/mempolicy: refactor rebind code for PREFERRED_MANY
Feng Tang (2):
mem/mempolicy: unify mpol_new_preferred() and
mpol_new_preferred_many()
mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH
gfp bit
.../admin-guide/mm/numa_memory_policy.rst | 22 +-
include/linux/gfp.h | 9 +-
include/linux/mempolicy.h | 6 +-
include/uapi/linux/mempolicy.h | 6 +-
mm/hugetlb.c | 22 +-
mm/mempolicy.c | 266 ++++++++++++++-------
mm/page_alloc.c | 2 +-
7 files changed, 224 insertions(+), 109 deletions(-)
--
2.7.4