linux-mm.kvack.org archive mirror
* [PATCH v3 00/14] Introduced multi-preference mempolicy
@ 2021-03-03 10:20 Feng Tang
  2021-03-03 10:20 ` [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL Feng Tang
                   ` (13 more replies)
  0 siblings, 14 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang

This patch series introduces the concept of the MPOL_PREFERRED_MANY mempolicy.
This mempolicy mode can be used with either the set_mempolicy(2) or mbind(2)
interfaces. Like the MPOL_PREFERRED interface, it allows an application to set a
preference for nodes which will fulfil memory allocation requests. Unlike the
MPOL_PREFERRED mode, it takes a set of nodes. Like the MPOL_BIND interface, it
works over a set of nodes. Unlike MPOL_BIND, it will not cause a SIGSEGV or
invoke the OOM killer if those preferred nodes are not available.

Along with these patches are patches for libnuma, numactl, numademo, and memhog.
They still need some polish, but can be found here:
https://gitlab.com/bwidawsk/numactl/-/tree/prefer-many
It allows new usage: `numactl -P 0,3,4`

The goal of the new mode is to enable some use cases in tiered memory
usage models, which I've lovingly named:
1a. The Hare - The interconnect is fast enough to meet bandwidth and latency
requirements, allowing preference to be given to all nodes with "fast" memory.
1b. The Indiscriminate Hare - An application knows it wants fast memory (or
perhaps slow memory), but doesn't care which node it runs on. The application
can prefer a set of nodes and then bind its xPU (cpu, accelerator, etc.) to the
node local to that memory. This reverses how nodes are chosen today, where the
kernel attempts to use memory local to the CPU whenever possible; instead, the
accelerator local to the memory is used.
2. The Tortoise - The administrator (or the application itself) is aware it only
needs slow memory, and so can prefer that.

Much of this is almost achievable with the bind interface, but the bind
interface suffers from an inability to fall back to another set of nodes if
allocation fails on all nodes in the nodemask.

Like MPOL_BIND, a nodemask is given. This inherently removes any ordering from
the preference.

> /* Set the first two nodes as preferred in an 8 node system. */
> const unsigned long nodes = 0x3;
> set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);

> /* Mimic the interleave policy, but have a fallback. */
> const unsigned long nodes = 0xaa;
> set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);
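
For VMA-scoped use, mbind(2) takes the same mode; a minimal sketch (addr and
len are placeholders for an existing mapping, not part of the series):

> /* Prefer the first two nodes for an existing mapping. */
> const unsigned long nodes = 0x3;
> mbind(addr, len, MPOL_PREFERRED_MANY, &nodes, 8, 0);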

Some internal discussion took place around the interface. There are two
alternatives which we have discussed, plus one I stuck in:
1. Ordered list of nodes. Currently it's believed that the added complexity is
   not needed for the expected use cases.
2. A flag for bind to allow falling back to other nodes. This confuses the
   notion of binding and is less flexible than the current solution.
3. Create flags or new modes that help with some ordering. This offers both a
   friendlier API as well as a solution for more customized usage. It's unknown
   if it's worth the complexity to support this. Here is sample code for how
   this might work:

> // Prefer specific nodes for something wacky
> set_mempolicy(MPOL_PREFERRED_MANY, 0x17c, 1024);
>
> // Default
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
> // which is the same as
> set_mempolicy(MPOL_DEFAULT, NULL, 0);
>
> // The Hare
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);
>
> // The Tortoise
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);
>
> // Prefer the fast memory of the first two sockets
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);
>

In v1, Andi Kleen brought up reusing MPOL_PREFERRED as the mode for the API.
There wasn't consensus around this, so I've left the existing API as it was. I'm
open to more feedback here, but my slight preference is to use a new API as it
ensures if people are using it, they are entirely aware of what they're doing
and not accidentally misusing the old interface. (In a similar way to how
MPOL_LOCAL was introduced).

In v1, Michal also brought up renaming this to MPOL_PREFERRED_MASK. I'm equally
fine with that change, but I hadn't heard much emphatic support for one way or
another, so I've left that too.

Changelog: 

  Since v2:
  * Rebased against v5.11
  * Fix a stack overflow related panic, and a kernel warning (Feng)
  * Some code cleanup (Feng)
  * One RFC patch to speed up mem alloc in some cases (Feng)

  Since v1:
  * Dropped patch to replace numa_node_id in some places (mhocko)
  * Dropped all the page allocation patches in favor of new mechanism to
    use fallbacks. (mhocko)
  * Dropped the special snowflake preferred node algorithm (bwidawsk)
  * If the preferred node fails, ALL nodes are rechecked instead of just
    the non-preferred nodes.

v3 Summary:
1: Random fix I found along the way
2-5: Represent node preference as a mask internally
6-7: Treat many preferred like bind
8-11: Handle page allocation for the new policy
12: Enable the uapi
13: unify 2 functions
14: RFC optimization patch

Thanks,
Ben/Dave/Feng

Ben Widawsky (8):
  mm/mempolicy: Add comment for missing LOCAL
  mm/mempolicy: kill v.preferred_nodes
  mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND
  mm/mempolicy: Create a page allocator for policy
  mm/mempolicy: Thread allocation for many preferred
  mm/mempolicy: VMA allocation for many preferred
  mm/mempolicy: huge-page allocation for many preferred
  mm/mempolicy: Advertise new MPOL_PREFERRED_MANY

Dave Hansen (4):
  mm/mempolicy: convert single preferred_node to full nodemask
  mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
  mm/mempolicy: allow preferred code to take a nodemask
  mm/mempolicy: refactor rebind code for PREFERRED_MANY

Feng Tang (2):
  mem/mempolicy: unify mpol_new_preferred() and
    mpol_new_preferred_many()
  mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH
    gfp bit

 .../admin-guide/mm/numa_memory_policy.rst          |  22 +-
 include/linux/gfp.h                                |   9 +-
 include/linux/mempolicy.h                          |   6 +-
 include/uapi/linux/mempolicy.h                     |   6 +-
 mm/hugetlb.c                                       |  22 +-
 mm/mempolicy.c                                     | 266 ++++++++++++++-------
 mm/page_alloc.c                                    |   2 +-
 7 files changed, 224 insertions(+), 109 deletions(-)

-- 
2.7.4



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-10  6:27   ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 02/14] mm/mempolicy: convert single preferred_node to full nodemask Feng Tang
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

MPOL_LOCAL is a bit weird because it is simply a different name for an
existing behavior (preferred policy with no node mask). It has been this
way since it was added here:
commit 479e2802d09f ("mm: mempolicy: Make MPOL_LOCAL a real policy")

It is, in fact, so similar to MPOL_PREFERRED that when the policy is
created in mpol_new, the mode is set to MPOL_PREFERRED, and no internal
state representing LOCAL exists.
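
For reference, the relevant handling in mpol_new() looks roughly like this
(paraphrased from the v5.11-era code; not part of this patch):

	} else if (mode == MPOL_LOCAL) {
		if (!nodes_empty(*nodes) ||
		    (flags & MPOL_F_STATIC_NODES) ||
		    (flags & MPOL_F_RELATIVE_NODES))
			return ERR_PTR(-EINVAL);
		/* LOCAL is silently turned into an empty PREFERRED */
		mode = MPOL_PREFERRED;
	} else if (nodes_empty(*nodes))
		return ERR_PTR(-EINVAL);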

To prevent future explorers from scratching their heads as to why
MPOL_LOCAL isn't defined in the mpol_ops table, add a small comment
explaining the situation.

v2:
Change comment to refer to mpol_new (Michal)

Link: https://lore.kernel.org/r/20200630212517.308045-2-ben.widawsky@intel.com
#Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2c3a865..5730fc1 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -427,6 +427,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 		.create = mpol_new_bind,
 		.rebind = mpol_rebind_nodemask,
 	},
+	/* [MPOL_LOCAL] - see mpol_new() */
 };
 
 static int migrate_page_add(struct page *page, struct list_head *pagelist,
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 02/14] mm/mempolicy: convert single preferred_node to full nodemask
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
  2021-03-03 10:20 ` [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 03/14] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang

From: Dave Hansen <dave.hansen@linux.intel.com>

The NUMA APIs currently allow passing in a "preferred node" as a
single bit set in a nodemask.  If more than one bit is set, bits
after the first are ignored.  Internally, this is implemented as
a single integer: mempolicy->preferred_node.

This single node is generally OK for location-based NUMA where
memory being allocated will eventually be operated on by a single
CPU.  However, in systems with multiple memory types, folks want
to target a *type* of memory instead of a location.  For instance,
someone might want some high-bandwidth memory but not care about
the CPU next to which it is allocated.  Or, they want a cheap,
high capacity allocation and want to target all NUMA nodes which
have persistent memory in volatile mode.  In both of these cases,
the application wants to target a *set* of nodes, but does not
want strict MPOL_BIND behavior, as that could lead to the OOM killer or a
SIGSEGV.

To get that behavior, an MPOL_PREFERRED-like mode is desirable, but one
that honors multiple nodes set in the nodemask.

The first step in that direction is to be able to internally store
multiple preferred nodes, which is implemented in this patch.

This should not introduce any functional changes; it just switches the
internal representation of mempolicy->preferred_node from an
integer to a nodemask called 'mempolicy->preferred_nodes'.
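
The conversion at each use site is mechanical; roughly (mirroring the hunks
below, with first_node() still picking out the single preferred node):

	/* before: a single preferred node id */
	pol->v.preferred_node = first_node(*nodes);

	/* after: a nodemask holding that single node */
	pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes));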

This is not a pie-in-the-sky dream for an API.  This was a response to a
specific ask of more than one group at Intel.  Specifically:

1. There are existing libraries that target memory types such as
   https://github.com/memkind/memkind.  These are known to suffer
   from SIGSEGVs when memory is low on targeted memory "kinds" that
   span more than one node.  The MCDRAM on a Xeon Phi in "Cluster on
   Die" mode is an example of this.
2. Volatile-use persistent memory users want to have a memory policy
   which is targeted at either "cheap and slow" (PMEM) or "expensive and
   fast" (DRAM).  However, they do not want to experience allocation
   failures when the targeted type is unavailable.
3. Allocate-then-run.  Generally, we let the process scheduler decide
   on which physical CPU to run a task.  That location provides a
   default allocation policy, and memory availability is not generally
   considered when placing tasks.  For situations where memory is
   valuable and constrained, some users want to allocate memory first,
   *then* allocate close compute resources to the allocation.  This is
   the reverse of the normal (CPU) model.  Accelerators such as GPUs
   that operate on core-mm-managed memory are interested in this model.

v2:
Fix spelling errors in commit message. (Ben)
clang-format. (Ben)
Integrated bit from another patch. (Ben)
Update the docs to reflect the internal data structure change (Ben)
Don't advertise MPOL_PREFERRED_MANY in UAPI until we can handle it (Ben)
Added more to the commit message (Dave)

Link: https://lore.kernel.org/r/20200630212517.308045-3-ben.widawsky@intel.com
Co-developed-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 .../admin-guide/mm/numa_memory_policy.rst          |  6 ++--
 include/linux/mempolicy.h                          |  4 +--
 mm/mempolicy.c                                     | 40 ++++++++++++----------
 3 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
index 067a90a..1ad020c 100644
--- a/Documentation/admin-guide/mm/numa_memory_policy.rst
+++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
@@ -205,9 +205,9 @@ MPOL_PREFERRED
 	of increasing distance from the preferred node based on
 	information provided by the platform firmware.
 
-	Internally, the Preferred policy uses a single node--the
-	preferred_node member of struct mempolicy.  When the internal
-	mode flag MPOL_F_LOCAL is set, the preferred_node is ignored
+	Internally, the Preferred policy uses a nodemask--the
+	preferred_nodes member of struct mempolicy.  When the internal
+	mode flag MPOL_F_LOCAL is set, the preferred_nodes are ignored
 	and the policy is interpreted as local allocation.  "Local"
 	allocation policy can be viewed as a Preferred policy that
 	starts at the node containing the cpu where the allocation
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5f1c74d..23ee105 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -47,8 +47,8 @@ struct mempolicy {
 	unsigned short mode; 	/* See MPOL_* above */
 	unsigned short flags;	/* See set_mempolicy() MPOL_F_* above */
 	union {
-		short 		 preferred_node; /* preferred */
-		nodemask_t	 nodes;		/* interleave/bind */
+		nodemask_t preferred_nodes; /* preferred */
+		nodemask_t nodes; /* interleave/bind */
 		/* undefined for default */
 	} v;
 	union {
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 5730fc1..8f4a32a 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -205,7 +205,7 @@ static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
 	else if (nodes_empty(*nodes))
 		return -EINVAL;			/*  no allowed nodes */
 	else
-		pol->v.preferred_node = first_node(*nodes);
+		pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes));
 	return 0;
 }
 
@@ -345,22 +345,26 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
 						const nodemask_t *nodes)
 {
 	nodemask_t tmp;
+	nodemask_t preferred_node;
+
+	/* MPOL_PREFERRED uses only the first node in the mask */
+	preferred_node = nodemask_of_node(first_node(*nodes));
 
 	if (pol->flags & MPOL_F_STATIC_NODES) {
 		int node = first_node(pol->w.user_nodemask);
 
 		if (node_isset(node, *nodes)) {
-			pol->v.preferred_node = node;
+			pol->v.preferred_nodes = nodemask_of_node(node);
 			pol->flags &= ~MPOL_F_LOCAL;
 		} else
 			pol->flags |= MPOL_F_LOCAL;
 	} else if (pol->flags & MPOL_F_RELATIVE_NODES) {
 		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
-		pol->v.preferred_node = first_node(tmp);
+		pol->v.preferred_nodes = tmp;
 	} else if (!(pol->flags & MPOL_F_LOCAL)) {
-		pol->v.preferred_node = node_remap(pol->v.preferred_node,
-						   pol->w.cpuset_mems_allowed,
-						   *nodes);
+		nodes_remap(tmp, pol->v.preferred_nodes,
+			    pol->w.cpuset_mems_allowed, preferred_node);
+		pol->v.preferred_nodes = tmp;
 		pol->w.cpuset_mems_allowed = *nodes;
 	}
 }
@@ -912,7 +916,7 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 		break;
 	case MPOL_PREFERRED:
 		if (!(p->flags & MPOL_F_LOCAL))
-			node_set(p->v.preferred_node, *nodes);
+			*nodes = p->v.preferred_nodes;
 		/* else return empty node mask for local allocation */
 		break;
 	default:
@@ -1881,9 +1885,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 /* Return the node id preferred by the given mempolicy, or the given id */
 static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 {
-	if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL))
-		nd = policy->v.preferred_node;
-	else {
+	if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) {
+		nd = first_node(policy->v.preferred_nodes);
+	} else {
 		/*
 		 * __GFP_THISNODE shouldn't even be used with the bind policy
 		 * because we might easily break the expectation to stay on the
@@ -1928,7 +1932,7 @@ unsigned int mempolicy_slab_node(void)
 		/*
 		 * handled MPOL_F_LOCAL above
 		 */
-		return policy->v.preferred_node;
+		return first_node(policy->v.preferred_nodes);
 
 	case MPOL_INTERLEAVE:
 		return interleave_nodes(policy);
@@ -2062,7 +2066,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 		if (mempolicy->flags & MPOL_F_LOCAL)
 			nid = numa_node_id();
 		else
-			nid = mempolicy->v.preferred_node;
+			nid = first_node(mempolicy->v.preferred_nodes);
 		init_nodemask_of_node(mask, nid);
 		break;
 
@@ -2200,7 +2204,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		 * node in its nodemask, we allocate the standard way.
 		 */
 		if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
-			hpage_node = pol->v.preferred_node;
+			hpage_node = first_node(pol->v.preferred_nodes);
 
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2339,7 +2343,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 		/* a's ->flags is the same as b's */
 		if (a->flags & MPOL_F_LOCAL)
 			return true;
-		return a->v.preferred_node == b->v.preferred_node;
+		return nodes_equal(a->v.preferred_nodes, b->v.preferred_nodes);
 	default:
 		BUG();
 		return false;
@@ -2483,7 +2487,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		if (pol->flags & MPOL_F_LOCAL)
 			polnid = numa_node_id();
 		else
-			polnid = pol->v.preferred_node;
+			polnid = first_node(pol->v.preferred_nodes);
 		break;
 
 	case MPOL_BIND:
@@ -2800,7 +2804,7 @@ void __init numa_policy_init(void)
 			.refcnt = ATOMIC_INIT(1),
 			.mode = MPOL_PREFERRED,
 			.flags = MPOL_F_MOF | MPOL_F_MORON,
-			.v = { .preferred_node = nid, },
+			.v = { .preferred_nodes = nodemask_of_node(nid), },
 		};
 	}
 
@@ -2966,7 +2970,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
 	if (mode != MPOL_PREFERRED)
 		new->v.nodes = nodes;
 	else if (nodelist)
-		new->v.preferred_node = first_node(nodes);
+		new->v.preferred_nodes = nodemask_of_node(first_node(nodes));
 	else
 		new->flags |= MPOL_F_LOCAL;
 
@@ -3019,7 +3023,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 		if (flags & MPOL_F_LOCAL)
 			mode = MPOL_LOCAL;
 		else
-			node_set(pol->v.preferred_node, nodes);
+			nodes_or(nodes, nodes, pol->v.preferred_nodes);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 03/14] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
  2021-03-03 10:20 ` [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL Feng Tang
  2021-03-03 10:20 ` [PATCH v3 02/14] mm/mempolicy: convert single preferred_node to full nodemask Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask Feng Tang
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang

From: Dave Hansen <dave.hansen@linux.intel.com>

MPOL_PREFERRED honors only a single node set in the nodemask.  Add the
bare define for a new mode which will allow more than one.

The patch does all the plumbing without actually adding the new policy
type.

v2:
Plumb most MPOL_PREFERRED_MANY without exposing UAPI (Ben)
Fixes for checkpatch (Ben)

Link: https://lore.kernel.org/r/20200630212517.308045-4-ben.widawsky@intel.com
Co-developed-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 46 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8f4a32a..79258b2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -31,6 +31,9 @@
  *                but useful to set in a VMA when you have a non default
  *                process policy.
  *
+ * preferred many Try a set of nodes first before normal fallback. This is
+ *                similar to preferred without the special case.
+ *
  * default        Allocate on the local node first, or when on a VMA
  *                use the process policy. This is what Linux always did
  *		  in a NUMA aware kernel and still does by, ahem, default.
@@ -105,6 +108,8 @@
 
 #include "internal.h"
 
+#define MPOL_PREFERRED_MANY MPOL_MAX
+
 /* Internal flags */
 #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0)	/* Skip checks for continuous vmas */
 #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1)		/* Invert check for nodemask */
@@ -175,7 +180,7 @@ struct mempolicy *get_task_policy(struct task_struct *p)
 static const struct mempolicy_operations {
 	int (*create)(struct mempolicy *pol, const nodemask_t *nodes);
 	void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes);
-} mpol_ops[MPOL_MAX];
+} mpol_ops[MPOL_MAX + 1];
 
 static inline int mpol_store_user_nodemask(const struct mempolicy *pol)
 {
@@ -415,7 +420,7 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 	mmap_write_unlock(mm);
 }
 
-static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
+static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = {
 	[MPOL_DEFAULT] = {
 		.rebind = mpol_rebind_default,
 	},
@@ -432,6 +437,10 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 		.rebind = mpol_rebind_nodemask,
 	},
 	/* [MPOL_LOCAL] - see mpol_new() */
+	[MPOL_PREFERRED_MANY] = {
+		.create = NULL,
+		.rebind = NULL,
+	},
 };
 
 static int migrate_page_add(struct page *page, struct list_head *pagelist,
@@ -914,6 +923,9 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	case MPOL_INTERLEAVE:
 		*nodes = p->v.nodes;
 		break;
+	case MPOL_PREFERRED_MANY:
+		*nodes = p->v.preferred_nodes;
+		break;
 	case MPOL_PREFERRED:
 		if (!(p->flags & MPOL_F_LOCAL))
 			*nodes = p->v.preferred_nodes;
@@ -1885,7 +1897,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 /* Return the node id preferred by the given mempolicy, or the given id */
 static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 {
-	if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) {
+	if ((policy->mode == MPOL_PREFERRED ||
+	     policy->mode == MPOL_PREFERRED_MANY) &&
+	    !(policy->flags & MPOL_F_LOCAL)) {
 		nd = first_node(policy->v.preferred_nodes);
 	} else {
 		/*
@@ -1928,6 +1942,7 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	switch (policy->mode) {
+	case MPOL_PREFERRED_MANY:
 	case MPOL_PREFERRED:
 		/*
 		 * handled MPOL_F_LOCAL above
@@ -2062,6 +2077,9 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	task_lock(current);
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
+	case MPOL_PREFERRED_MANY:
+		*mask = mempolicy->v.preferred_nodes;
+		break;
 	case MPOL_PREFERRED:
 		if (mempolicy->flags & MPOL_F_LOCAL)
 			nid = numa_node_id();
@@ -2116,6 +2134,9 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 		 * nodes in mask.
 		 */
 		break;
+	case MPOL_PREFERRED_MANY:
+		ret = nodes_intersects(mempolicy->v.preferred_nodes, *mask);
+		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
 		ret = nodes_intersects(mempolicy->v.nodes, *mask);
@@ -2200,10 +2221,13 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		 * node and don't fall back to other nodes, as the cost of
 		 * remote accesses would likely offset THP benefits.
 		 *
-		 * If the policy is interleave, or does not allow the current
-		 * node in its nodemask, we allocate the standard way.
+		 * If the policy is interleave or multiple preferred nodes, or
+		 * does not allow the current node in its nodemask, we allocate
+		 * the standard way.
 		 */
-		if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
+		if ((pol->mode == MPOL_PREFERRED ||
+		     pol->mode == MPOL_PREFERRED_MANY) &&
+		    !(pol->flags & MPOL_F_LOCAL))
 			hpage_node = first_node(pol->v.preferred_nodes);
 
 		nmask = policy_nodemask(gfp, pol);
@@ -2339,6 +2363,9 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
 		return !!nodes_equal(a->v.nodes, b->v.nodes);
+	case MPOL_PREFERRED_MANY:
+		return !!nodes_equal(a->v.preferred_nodes,
+				     b->v.preferred_nodes);
 	case MPOL_PREFERRED:
 		/* a's ->flags is the same as b's */
 		if (a->flags & MPOL_F_LOCAL)
@@ -2507,6 +2534,8 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = zone_to_nid(z->zone);
 		break;
 
+		/* case MPOL_PREFERRED_MANY: */
+
 	default:
 		BUG();
 	}
@@ -2858,6 +2887,7 @@ static const char * const policy_modes[] =
 	[MPOL_BIND]       = "bind",
 	[MPOL_INTERLEAVE] = "interleave",
 	[MPOL_LOCAL]      = "local",
+	[MPOL_PREFERRED_MANY]  = "prefer (many)",
 };
 
 
@@ -2937,6 +2967,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
 		if (!nodelist)
 			err = 0;
 		goto out;
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND:
 		/*
 		 * Insist on a nodelist
@@ -3019,6 +3050,9 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 	switch (mode) {
 	case MPOL_DEFAULT:
 		break;
+	case MPOL_PREFERRED_MANY:
+		WARN_ON(flags & MPOL_F_LOCAL);
+		fallthrough;
 	case MPOL_PREFERRED:
 		if (flags & MPOL_F_LOCAL)
 			mode = MPOL_LOCAL;
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (2 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 03/14] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 05/14] mm/mempolicy: refactor rebind code for PREFERRED_MANY Feng Tang
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang

From: Dave Hansen <dave.hansen@linux.intel.com>

Create a helper function (mpol_new_preferred_many()) which is usable
both by the old, single-node MPOL_PREFERRED and the new
MPOL_PREFERRED_MANY.

Enforce the old single-node MPOL_PREFERRED behavior in the "new"
version of mpol_new_preferred() which calls mpol_new_preferred_many().

v3:
  * fix a stack overflow caused by an empty nodemask (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-5-ben.widawsky@intel.com
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 79258b2..19ec954 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -203,17 +203,34 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
 	return 0;
 }
 
-static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
+static int mpol_new_preferred_many(struct mempolicy *pol,
+				   const nodemask_t *nodes)
 {
 	if (!nodes)
 		pol->flags |= MPOL_F_LOCAL;	/* local allocation */
 	else if (nodes_empty(*nodes))
 		return -EINVAL;			/*  no allowed nodes */
 	else
-		pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes));
+		pol->v.preferred_nodes = *nodes;
 	return 0;
 }
 
+static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
+{
+	if (nodes) {
+		/* MPOL_PREFERRED can only take a single node: */
+		nodemask_t tmp;
+
+		if (nodes_empty(*nodes))
+			return -EINVAL;
+
+		tmp = nodemask_of_node(first_node(*nodes));
+		return mpol_new_preferred_many(pol, &tmp);
+	}
+
+	return mpol_new_preferred_many(pol, NULL);
+}
+
 static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
 {
 	if (nodes_empty(*nodes))
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 05/14] mm/mempolicy: refactor rebind code for PREFERRED_MANY
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (3 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 06/14] mm/mempolicy: kill v.preferred_nodes Feng Tang
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang

From: Dave Hansen <dave.hansen@linux.intel.com>

Again, this extracts the "only one node must be set" behavior of
MPOL_PREFERRED.  It retains virtually all of the existing code so it can
be used by MPOL_PREFERRED_MANY as well.

v2:
Fixed typos in commit message. (Ben)
Merged bits from other patches. (Ben)
annotate mpol_rebind_preferred_many as unused (Ben)

Link: https://lore.kernel.org/r/20200630212517.308045-6-ben.widawsky@intel.com
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 19ec954..0103c20 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -363,14 +363,11 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
 	pol->v.nodes = tmp;
 }
 
-static void mpol_rebind_preferred(struct mempolicy *pol,
-						const nodemask_t *nodes)
+static void mpol_rebind_preferred_common(struct mempolicy *pol,
+					 const nodemask_t *preferred_nodes,
+					 const nodemask_t *nodes)
 {
 	nodemask_t tmp;
-	nodemask_t preferred_node;
-
-	/* MPOL_PREFERRED uses only the first node in the mask */
-	preferred_node = nodemask_of_node(first_node(*nodes));
 
 	if (pol->flags & MPOL_F_STATIC_NODES) {
 		int node = first_node(pol->w.user_nodemask);
@@ -385,12 +382,30 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
 		pol->v.preferred_nodes = tmp;
 	} else if (!(pol->flags & MPOL_F_LOCAL)) {
 		nodes_remap(tmp, pol->v.preferred_nodes,
-			    pol->w.cpuset_mems_allowed, preferred_node);
+			    pol->w.cpuset_mems_allowed, *preferred_nodes);
 		pol->v.preferred_nodes = tmp;
 		pol->w.cpuset_mems_allowed = *nodes;
 	}
 }
 
+/* MPOL_PREFERRED_MANY allows multiple nodes to be set in 'nodes' */
+static void __maybe_unused mpol_rebind_preferred_many(struct mempolicy *pol,
+						      const nodemask_t *nodes)
+{
+	mpol_rebind_preferred_common(pol, nodes, nodes);
+}
+
+static void mpol_rebind_preferred(struct mempolicy *pol,
+				  const nodemask_t *nodes)
+{
+	nodemask_t preferred_node;
+
+	/* MPOL_PREFERRED uses only the first node in 'nodes' */
+	preferred_node = nodemask_of_node(first_node(*nodes));
+
+	mpol_rebind_preferred_common(pol, &preferred_node, nodes);
+}
+
 /*
  * mpol_rebind_policy - Migrate a policy to a different set of nodes
  *
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 06/14] mm/mempolicy: kill v.preferred_nodes
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (4 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 05/14] mm/mempolicy: refactor rebind code for PREFERRED_MANY Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 07/14] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND Feng Tang
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

Now that preferred_nodes is just a mask, and policies are mutually
exclusive, there is no reason to have a separate mask.

This patch is optional. It definitely helps clean up code in future
patches, but there is no functional difference compared to keeping the
previous name. I do believe it helps demonstrate the exclusivity of the
fields.
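
The net effect on struct mempolicy (sketch, matching the hunk below):

	/* before: a union of two masks that are never used together */
	union {
		nodemask_t preferred_nodes;	/* preferred */
		nodemask_t nodes;		/* interleave/bind */
		/* undefined for default */
	} v;

	/* after: one mask for every mask-carrying policy */
	nodemask_t nodes;	/* interleave/bind/many */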

Link: https://lore.kernel.org/r/20200630212517.308045-7-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 include/linux/mempolicy.h |   6 +--
 mm/mempolicy.c            | 112 ++++++++++++++++++++++------------------------
 2 files changed, 55 insertions(+), 63 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 23ee105..ec811c3 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -46,11 +46,7 @@ struct mempolicy {
 	atomic_t refcnt;
 	unsigned short mode; 	/* See MPOL_* above */
 	unsigned short flags;	/* See set_mempolicy() MPOL_F_* above */
-	union {
-		nodemask_t preferred_nodes; /* preferred */
-		nodemask_t nodes; /* interleave/bind */
-		/* undefined for default */
-	} v;
+	nodemask_t nodes;	/* interleave/bind/many */
 	union {
 		nodemask_t cpuset_mems_allowed;	/* relative to these nodes */
 		nodemask_t user_nodemask;	/* nodemask passed by user */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0103c20..fe1d83c 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -199,7 +199,7 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
 {
 	if (nodes_empty(*nodes))
 		return -EINVAL;
-	pol->v.nodes = *nodes;
+	pol->nodes = *nodes;
 	return 0;
 }
 
@@ -211,7 +211,7 @@ static int mpol_new_preferred_many(struct mempolicy *pol,
 	else if (nodes_empty(*nodes))
 		return -EINVAL;			/*  no allowed nodes */
 	else
-		pol->v.preferred_nodes = *nodes;
+		pol->nodes = *nodes;
 	return 0;
 }
 
@@ -235,7 +235,7 @@ static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
 {
 	if (nodes_empty(*nodes))
 		return -EINVAL;
-	pol->v.nodes = *nodes;
+	pol->nodes = *nodes;
 	return 0;
 }
 
@@ -352,15 +352,15 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
 	else if (pol->flags & MPOL_F_RELATIVE_NODES)
 		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
 	else {
-		nodes_remap(tmp, pol->v.nodes,pol->w.cpuset_mems_allowed,
-								*nodes);
+		nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed,
+			    *nodes);
 		pol->w.cpuset_mems_allowed = *nodes;
 	}
 
 	if (nodes_empty(tmp))
 		tmp = *nodes;
 
-	pol->v.nodes = tmp;
+	pol->nodes = tmp;
 }
 
 static void mpol_rebind_preferred_common(struct mempolicy *pol,
@@ -373,17 +373,17 @@ static void mpol_rebind_preferred_common(struct mempolicy *pol,
 		int node = first_node(pol->w.user_nodemask);
 
 		if (node_isset(node, *nodes)) {
-			pol->v.preferred_nodes = nodemask_of_node(node);
+			pol->nodes = nodemask_of_node(node);
 			pol->flags &= ~MPOL_F_LOCAL;
 		} else
 			pol->flags |= MPOL_F_LOCAL;
 	} else if (pol->flags & MPOL_F_RELATIVE_NODES) {
 		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
-		pol->v.preferred_nodes = tmp;
+		pol->nodes = tmp;
 	} else if (!(pol->flags & MPOL_F_LOCAL)) {
-		nodes_remap(tmp, pol->v.preferred_nodes,
-			    pol->w.cpuset_mems_allowed, *preferred_nodes);
-		pol->v.preferred_nodes = tmp;
+		nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed,
+			    *preferred_nodes);
+		pol->nodes = tmp;
 		pol->w.cpuset_mems_allowed = *nodes;
 	}
 }
@@ -953,14 +953,14 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	switch (p->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*nodes = p->v.nodes;
+		*nodes = p->nodes;
 		break;
 	case MPOL_PREFERRED_MANY:
-		*nodes = p->v.preferred_nodes;
+		*nodes = p->nodes;
 		break;
 	case MPOL_PREFERRED:
 		if (!(p->flags & MPOL_F_LOCAL))
-			*nodes = p->v.preferred_nodes;
+			*nodes = p->nodes;
 		/* else return empty node mask for local allocation */
 		break;
 	default:
@@ -1046,7 +1046,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 			*policy = err;
 		} else if (pol == current->mempolicy &&
 				pol->mode == MPOL_INTERLEAVE) {
-			*policy = next_node_in(current->il_prev, pol->v.nodes);
+			*policy = next_node_in(current->il_prev, pol->nodes);
 		} else {
 			err = -EINVAL;
 			goto out;
@@ -1898,14 +1898,14 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
 	BUG_ON(dynamic_policy_zone == ZONE_MOVABLE);
 
 	/*
-	 * if policy->v.nodes has movable memory only,
+	 * if policy->nodes has movable memory only,
 	 * we apply policy when gfp_zone(gfp) = ZONE_MOVABLE only.
 	 *
-	 * policy->v.nodes is intersect with node_states[N_MEMORY].
+	 * policy->nodes is intersect with node_states[N_MEMORY].
 	 * so if the following test faile, it implies
-	 * policy->v.nodes has movable memory only.
+	 * policy->nodes has movable memory only.
 	 */
-	if (!nodes_intersects(policy->v.nodes, node_states[N_HIGH_MEMORY]))
+	if (!nodes_intersects(policy->nodes, node_states[N_HIGH_MEMORY]))
 		dynamic_policy_zone = ZONE_MOVABLE;
 
 	return zone >= dynamic_policy_zone;
@@ -1919,9 +1919,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 {
 	/* Lower zones don't get a nodemask applied for MPOL_BIND */
 	if (unlikely(policy->mode == MPOL_BIND) &&
-			apply_policy_zone(policy, gfp_zone(gfp)) &&
-			cpuset_nodemask_valid_mems_allowed(&policy->v.nodes))
-		return &policy->v.nodes;
+	    apply_policy_zone(policy, gfp_zone(gfp)) &&
+	    cpuset_nodemask_valid_mems_allowed(&policy->nodes))
+		return &policy->nodes;
 
 	return NULL;
 }
@@ -1932,7 +1932,7 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 	if ((policy->mode == MPOL_PREFERRED ||
 	     policy->mode == MPOL_PREFERRED_MANY) &&
 	    !(policy->flags & MPOL_F_LOCAL)) {
-		nd = first_node(policy->v.preferred_nodes);
+		nd = first_node(policy->nodes);
 	} else {
 		/*
 		 * __GFP_THISNODE shouldn't even be used with the bind policy
@@ -1951,7 +1951,7 @@ static unsigned interleave_nodes(struct mempolicy *policy)
 	unsigned next;
 	struct task_struct *me = current;
 
-	next = next_node_in(me->il_prev, policy->v.nodes);
+	next = next_node_in(me->il_prev, policy->nodes);
 	if (next < MAX_NUMNODES)
 		me->il_prev = next;
 	return next;
@@ -1979,7 +1979,7 @@ unsigned int mempolicy_slab_node(void)
 		/*
 		 * handled MPOL_F_LOCAL above
 		 */
-		return first_node(policy->v.preferred_nodes);
+		return first_node(policy->nodes);
 
 	case MPOL_INTERLEAVE:
 		return interleave_nodes(policy);
@@ -1995,7 +1995,7 @@ unsigned int mempolicy_slab_node(void)
 		enum zone_type highest_zoneidx = gfp_zone(GFP_KERNEL);
 		zonelist = &NODE_DATA(node)->node_zonelists[ZONELIST_FALLBACK];
 		z = first_zones_zonelist(zonelist, highest_zoneidx,
-							&policy->v.nodes);
+					 &policy->nodes);
 		return z->zone ? zone_to_nid(z->zone) : node;
 	}
 
@@ -2006,12 +2006,12 @@ unsigned int mempolicy_slab_node(void)
 
 /*
  * Do static interleaving for a VMA with known offset @n.  Returns the n'th
- * node in pol->v.nodes (starting from n=0), wrapping around if n exceeds the
+ * node in pol->nodes (starting from n=0), wrapping around if n exceeds the
  * number of present nodes.
  */
 static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
 {
-	unsigned nnodes = nodes_weight(pol->v.nodes);
+	unsigned nnodes = nodes_weight(pol->nodes);
 	unsigned target;
 	int i;
 	int nid;
@@ -2019,9 +2019,9 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
 	if (!nnodes)
 		return numa_node_id();
 	target = (unsigned int)n % nnodes;
-	nid = first_node(pol->v.nodes);
+	nid = first_node(pol->nodes);
 	for (i = 0; i < target; i++)
-		nid = next_node(nid, pol->v.nodes);
+		nid = next_node(nid, pol->nodes);
 	return nid;
 }
 
@@ -2077,7 +2077,7 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
 	} else {
 		nid = policy_node(gfp_flags, *mpol, numa_node_id());
 		if ((*mpol)->mode == MPOL_BIND)
-			*nodemask = &(*mpol)->v.nodes;
+			*nodemask = &(*mpol)->nodes;
 	}
 	return nid;
 }
@@ -2110,19 +2110,19 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
 	case MPOL_PREFERRED_MANY:
-		*mask = mempolicy->v.preferred_nodes;
+		*mask = mempolicy->nodes;
 		break;
 	case MPOL_PREFERRED:
 		if (mempolicy->flags & MPOL_F_LOCAL)
 			nid = numa_node_id();
 		else
-			nid = first_node(mempolicy->v.preferred_nodes);
+			nid = first_node(mempolicy->nodes);
 		init_nodemask_of_node(mask, nid);
 		break;
 
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*mask =  mempolicy->v.nodes;
+		*mask = mempolicy->nodes;
 		break;
 
 	default:
@@ -2167,11 +2167,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 		 */
 		break;
 	case MPOL_PREFERRED_MANY:
-		ret = nodes_intersects(mempolicy->v.preferred_nodes, *mask);
+		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		ret = nodes_intersects(mempolicy->v.nodes, *mask);
+		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	default:
 		BUG();
@@ -2260,7 +2260,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		if ((pol->mode == MPOL_PREFERRED ||
 		     pol->mode == MPOL_PREFERRED_MANY) &&
 		    !(pol->flags & MPOL_F_LOCAL))
-			hpage_node = first_node(pol->v.preferred_nodes);
+			hpage_node = first_node(pol->nodes);
 
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2394,15 +2394,14 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	switch (a->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		return !!nodes_equal(a->v.nodes, b->v.nodes);
+		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED_MANY:
-		return !!nodes_equal(a->v.preferred_nodes,
-				     b->v.preferred_nodes);
+		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED:
 		/* a's ->flags is the same as b's */
 		if (a->flags & MPOL_F_LOCAL)
 			return true;
-		return nodes_equal(a->v.preferred_nodes, b->v.preferred_nodes);
+		return nodes_equal(a->nodes, b->nodes);
 	default:
 		BUG();
 		return false;
@@ -2546,7 +2545,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		if (pol->flags & MPOL_F_LOCAL)
 			polnid = numa_node_id();
 		else
-			polnid = first_node(pol->v.preferred_nodes);
+			polnid = first_node(pol->nodes);
 		break;
 
 	case MPOL_BIND:
@@ -2557,12 +2556,11 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		 * else select nearest allowed node, if any.
 		 * If no allowed nodes, use current [!misplaced].
 		 */
-		if (node_isset(curnid, pol->v.nodes))
+		if (node_isset(curnid, pol->nodes))
 			goto out;
-		z = first_zones_zonelist(
-				node_zonelist(numa_node_id(), GFP_HIGHUSER),
-				gfp_zone(GFP_HIGHUSER),
-				&pol->v.nodes);
+		z = first_zones_zonelist(node_zonelist(numa_node_id(),
+						       GFP_HIGHUSER),
+					 gfp_zone(GFP_HIGHUSER), &pol->nodes);
 		polnid = zone_to_nid(z->zone);
 		break;
 
@@ -2763,11 +2761,9 @@ int mpol_set_shared_policy(struct shared_policy *info,
 	struct sp_node *new = NULL;
 	unsigned long sz = vma_pages(vma);
 
-	pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n",
-		 vma->vm_pgoff,
-		 sz, npol ? npol->mode : -1,
-		 npol ? npol->flags : -1,
-		 npol ? nodes_addr(npol->v.nodes)[0] : NUMA_NO_NODE);
+	pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n", vma->vm_pgoff, sz,
+		 npol ? npol->mode : -1, npol ? npol->flags : -1,
+		 npol ? nodes_addr(npol->nodes)[0] : NUMA_NO_NODE);
 
 	if (npol) {
 		new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
@@ -2861,11 +2857,11 @@ void __init numa_policy_init(void)
 				     0, SLAB_PANIC, NULL);
 
 	for_each_node(nid) {
-		preferred_node_policy[nid] = (struct mempolicy) {
+		preferred_node_policy[nid] = (struct mempolicy){
 			.refcnt = ATOMIC_INIT(1),
 			.mode = MPOL_PREFERRED,
 			.flags = MPOL_F_MOF | MPOL_F_MORON,
-			.v = { .preferred_nodes = nodemask_of_node(nid), },
+			.nodes = nodemask_of_node(nid),
 		};
 	}
 
@@ -3031,9 +3027,9 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
 	 * for /proc/mounts, /proc/pid/mounts and /proc/pid/mountinfo.
 	 */
 	if (mode != MPOL_PREFERRED)
-		new->v.nodes = nodes;
+		new->nodes = nodes;
 	else if (nodelist)
-		new->v.preferred_nodes = nodemask_of_node(first_node(nodes));
+		new->nodes = nodemask_of_node(first_node(nodes));
 	else
 		new->flags |= MPOL_F_LOCAL;
 
@@ -3089,11 +3085,11 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 		if (flags & MPOL_F_LOCAL)
 			mode = MPOL_LOCAL;
 		else
-			nodes_or(nodes, nodes, pol->v.preferred_nodes);
+			nodes_or(nodes, nodes, pol->nodes);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		nodes = pol->v.nodes;
+		nodes = pol->nodes;
 		break;
 	default:
 		WARN_ON_ONCE(1);
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 07/14] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (5 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 06/14] mm/mempolicy: kill v.preferred_nodes Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 08/14] mm/mempolicy: Create a page allocator for policy Feng Tang
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

Begin the real plumbing for handling this new policy. Now that the
internal representation for preferred nodes and bound nodes is the same,
and we can envision what multiple preferred nodes will behave like,
there are obvious places where we can simply reuse the bind behavior.

In v1 of this series, the moral equivalent was:
"mm: Finish handling MPOL_PREFERRED_MANY". Like that, this attempts to
implement the easiest spots for the new policy. Unlike that, this just
reuses BIND.

Link: https://lore.kernel.org/r/20200630212517.308045-8-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index fe1d83c..80cb554 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -953,8 +953,6 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	switch (p->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*nodes = p->nodes;
-		break;
 	case MPOL_PREFERRED_MANY:
 		*nodes = p->nodes;
 		break;
@@ -1918,7 +1916,8 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
 nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 {
 	/* Lower zones don't get a nodemask applied for MPOL_BIND */
-	if (unlikely(policy->mode == MPOL_BIND) &&
+	if (unlikely(policy->mode == MPOL_BIND ||
+		     policy->mode == MPOL_PREFERRED_MANY) &&
 	    apply_policy_zone(policy, gfp_zone(gfp)) &&
 	    cpuset_nodemask_valid_mems_allowed(&policy->nodes))
 		return &policy->nodes;
@@ -1974,7 +1973,6 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	switch (policy->mode) {
-	case MPOL_PREFERRED_MANY:
 	case MPOL_PREFERRED:
 		/*
 		 * handled MPOL_F_LOCAL above
@@ -1984,6 +1982,7 @@ unsigned int mempolicy_slab_node(void)
 	case MPOL_INTERLEAVE:
 		return interleave_nodes(policy);
 
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND: {
 		struct zoneref *z;
 
@@ -2109,9 +2108,6 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	task_lock(current);
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
-	case MPOL_PREFERRED_MANY:
-		*mask = mempolicy->nodes;
-		break;
 	case MPOL_PREFERRED:
 		if (mempolicy->flags & MPOL_F_LOCAL)
 			nid = numa_node_id();
@@ -2122,6 +2118,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
+	case MPOL_PREFERRED_MANY:
 		*mask = mempolicy->nodes;
 		break;
 
@@ -2165,12 +2162,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 		 * Thus, it's possible for tsk to have allocated memory from
 		 * nodes in mask.
 		 */
-		break;
-	case MPOL_PREFERRED_MANY:
 		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
+	case MPOL_PREFERRED_MANY:
 		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	default:
@@ -2394,7 +2390,6 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	switch (a->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED_MANY:
 		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED:
@@ -2548,6 +2543,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 			polnid = first_node(pol->nodes);
 		break;
 
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND:
 
 		/*
@@ -2564,8 +2560,6 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = zone_to_nid(z->zone);
 		break;
 
-		/* case MPOL_PREFERRED_MANY: */
-
 	default:
 		BUG();
 	}
@@ -3078,15 +3072,13 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 	switch (mode) {
 	case MPOL_DEFAULT:
 		break;
-	case MPOL_PREFERRED_MANY:
-		WARN_ON(flags & MPOL_F_LOCAL);
-		fallthrough;
 	case MPOL_PREFERRED:
 		if (flags & MPOL_F_LOCAL)
 			mode = MPOL_LOCAL;
 		else
 			nodes_or(nodes, nodes, pol->nodes);
 		break;
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
 		nodes = pol->nodes;
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 08/14] mm/mempolicy: Create a page allocator for policy
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (6 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 07/14] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred Feng Tang
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

Add a helper function which takes care of handling multiple preferred
nodes. It will be called by future patches that need to handle this,
specifically VMA-based page allocation and task-based page allocation.
Huge pages don't quite fit the same pattern because they use different
underlying page allocation functions. This consumes the previous
interleave-policy-specific allocation function to make a one-stop shop
for policy-based allocation.

For now, only the interleave policy will use it, so there should be no
functional change yet. However, if bisection points to issues in the
next few commits, this patch is the likely culprit.

Similar functionality is offered via policy_node() and
policy_nodemask(). By themselves, however, neither can achieve this
style of fallback across a set of nodes.
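
For MPOL_PREFERRED_MANY the fallback boils down to two attempts (sketch,
summarizing the hunk below):

	/* first pass: treat the preferred nodes like a bind that may fail */
	page = __alloc_pages_nodemask(gfp | __GFP_RETRY_MAYFAIL | __GFP_NOWARN,
				      order,
				      policy_node(gfp, pol, numa_node_id()),
				      policy_nodemask(gfp, pol));
	/* second pass: unrestricted allocation, starting from the local node */
	if (!page)
		page = __alloc_pages_nodemask(gfp, order, numa_node_id(), NULL);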

v3: add __GFP_NOWARN for first try (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-9-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 61 +++++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 48 insertions(+), 13 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 80cb554..a737e02 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2177,22 +2177,56 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 	return ret;
 }
 
-/* Allocate a page in interleaved policy.
-   Own path because it needs to do special accounting. */
-static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
-					unsigned nid)
+/* Handle page allocation for all but interleaved policies */
+static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp,
+				       unsigned int order, int preferred_nid)
 {
 	struct page *page;
+	gfp_t gfp_mask = gfp;
 
-	page = __alloc_pages(gfp, order, nid);
-	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
-	if (!static_branch_likely(&vm_numa_stat_key))
+	if (pol->mode == MPOL_INTERLEAVE) {
+		page = __alloc_pages(gfp, order, preferred_nid);
+		/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
+		if (!static_branch_likely(&vm_numa_stat_key))
+			return page;
+		if (page && page_to_nid(page) == preferred_nid) {
+			preempt_disable();
+			__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
+			preempt_enable();
+		}
 		return page;
-	if (page && page_to_nid(page) == nid) {
-		preempt_disable();
-		__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
-		preempt_enable();
 	}
+
+	VM_BUG_ON(preferred_nid != NUMA_NO_NODE);
+
+	preferred_nid = numa_node_id();
+
+	/*
+	 * There is a two pass approach implemented here for
+	 * MPOL_PREFERRED_MANY. In the first pass we pretend the preferred nodes
+	 * are bound, but allow the allocation to fail. The below table explains
+	 * how this is achieved.
+	 *
+	 * | Policy                        | preferred nid | nodemask   |
+	 * |-------------------------------|---------------|------------|
+	 * | MPOL_DEFAULT                  | local         | NULL       |
+	 * | MPOL_PREFERRED                | best          | NULL       |
+	 * | MPOL_INTERLEAVE               | ERR           | ERR        |
+	 * | MPOL_BIND                     | local         | pol->nodes |
+	 * | MPOL_PREFERRED_MANY           | best          | pol->nodes |
+	 * | MPOL_PREFERRED_MANY (round 2) | local         | NULL       |
+	 * +-------------------------------+---------------+------------+
+	 */
+	if (pol->mode == MPOL_PREFERRED_MANY)
+		gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+
+	page = __alloc_pages_nodemask(gfp_mask, order,
+				      policy_node(gfp, pol, preferred_nid),
+				      policy_nodemask(gfp, pol));
+
+	if (unlikely(!page && pol->mode == MPOL_PREFERRED_MANY))
+		page = __alloc_pages_nodemask(gfp, order, preferred_nid, NULL);
+
 	return page;
 }
 
@@ -2234,8 +2268,8 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned nid;
 
 		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
+		page = alloc_pages_policy(pol, gfp, order, nid);
 		mpol_cond_put(pol);
-		page = alloc_page_interleave(gfp, order, nid);
 		goto out;
 	}
 
@@ -2319,7 +2353,8 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
+		page = alloc_pages_policy(pol, gfp, order,
+					  interleave_nodes(pol));
 	else
 		page = __alloc_pages_nodemask(gfp, order,
 				policy_node(gfp, pol, numa_node_id()),
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (7 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 08/14] mm/mempolicy: Create a page allocator for policy Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 10/14] mm/mempolicy: VMA " Feng Tang
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi leen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

In order to support MPOL_PREFERRED_MANY as a mode usable via
set_mempolicy(2), alloc_pages_current() needs to handle it. This patch
does that by using the new helper function to allocate properly based
on policy.

All the actual machinery to make this work was part of
("mm/mempolicy: Create a page allocator for policy")

Link: https://lore.kernel.org/r/20200630212517.308045-10-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a737e02..ceee90e 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2343,7 +2343,7 @@ EXPORT_SYMBOL(alloc_pages_vma);
 struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 {
 	struct mempolicy *pol = &default_policy;
-	struct page *page;
+	int nid = NUMA_NO_NODE;
 
 	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
 		pol = get_task_policy(current);
@@ -2353,14 +2353,9 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_pages_policy(pol, gfp, order,
-					  interleave_nodes(pol));
-	else
-		page = __alloc_pages_nodemask(gfp, order,
-				policy_node(gfp, pol, numa_node_id()),
-				policy_nodemask(gfp, pol));
+		nid = interleave_nodes(pol);
 
-	return page;
+	return alloc_pages_policy(pol, gfp, order, nid);
 }
 EXPORT_SYMBOL(alloc_pages_current);
 
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 10/14] mm/mempolicy: VMA allocation for many preferred
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (8 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 11/14] mm/mempolicy: huge-page " Feng Tang
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi leen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

This patch implements MPOL_PREFERRED_MANY for alloc_pages_vma(). Like
alloc_pages_current(), alloc_pages_vma() needs to support policy based
decisions if they've been configured via mbind(2).

The temporary "hack" of treating MPOL_PREFERRED and MPOL_PREFERRED_MANY
identically can now be removed as well.

All the actual machinery to make this work was part of
("mm/mempolicy: Create a page allocator for policy")

Link: https://lore.kernel.org/r/20200630212517.308045-11-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ceee90e..0cb92ab 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2259,8 +2259,6 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 {
 	struct mempolicy *pol;
 	struct page *page;
-	int preferred_nid;
-	nodemask_t *nmask;
 
 	pol = get_vma_policy(vma, addr);
 
@@ -2274,6 +2272,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 	}
 
 	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
+		nodemask_t *nmask;
 		int hpage_node = node;
 
 		/*
@@ -2287,10 +2286,26 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		 * does not allow the current node in its nodemask, we allocate
 		 * the standard way.
 		 */
-		if ((pol->mode == MPOL_PREFERRED ||
-		     pol->mode == MPOL_PREFERRED_MANY) &&
-		    !(pol->flags & MPOL_F_LOCAL))
+		if (pol->mode == MPOL_PREFERRED || !(pol->flags & MPOL_F_LOCAL)) {
 			hpage_node = first_node(pol->nodes);
+		} else if (pol->mode == MPOL_PREFERRED_MANY) {
+			struct zoneref *z;
+
+			/*
+			 * In this policy, with direct reclaim, the normal
+			 * policy based allocation will do the right thing - try
+			 * twice using the preferred nodes first, and all nodes
+			 * second.
+			 */
+			if (gfp & __GFP_DIRECT_RECLAIM) {
+				page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE);
+				goto out;
+			}
+
+			z = first_zones_zonelist(node_zonelist(numa_node_id(), GFP_HIGHUSER),
+						 gfp_zone(GFP_HIGHUSER), &pol->nodes);
+			hpage_node = zone_to_nid(z->zone);
+		}
 
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2316,9 +2331,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		}
 	}
 
-	nmask = policy_nodemask(gfp, pol);
-	preferred_nid = policy_node(gfp, pol, node);
-	page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);
+	page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE);
 	mpol_cond_put(pol);
 out:
 	return page;
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 11/14] mm/mempolicy: huge-page allocation for many preferred
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (9 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 10/14] mm/mempolicy: VMA " Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 12/14] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Feng Tang
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi leen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

Implement the missing huge page allocation functionality while obeying
the preferred node semantics.

This uses a fallback mechanism to try the multiple preferred nodes
first, and then all other nodes. It cannot use the helper function
introduced earlier because huge page allocation already has its own
helpers, and consolidating them would have cost more lines of code and
effort than it saves.

The wrinkle is that MPOL_PREFERRED_MANY can't be referenced by name here
yet, because the define is not part of the exposed UAPI. Instead of
making that define global, the check is simply switched over in the
UAPI patch.
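
Condensed, the pattern applied to both hugetlb paths below looks like
this (sketch only, taken from the dequeue side):

	/* Try the preferred nodes first, without insisting on them. */
	page = dequeue_huge_page_nodemask(h,
			gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN,
			nid, nodemask);
	if (!page)	/* then fall back to all nodes */
		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);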

v3: add __GFP_NOWARN for first try of prefer_many allocation (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/hugetlb.c   | 22 +++++++++++++++++++---
 mm/mempolicy.c |  3 ++-
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4bdb58a..c7c9ef3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1110,7 +1110,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 				unsigned long address, int avoid_reserve,
 				long chg)
 {
-	struct page *page;
+	struct page *page = NULL;
 	struct mempolicy *mpol;
 	gfp_t gfp_mask;
 	nodemask_t *nodemask;
@@ -1131,7 +1131,15 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
+		page = dequeue_huge_page_nodemask(h,
+				gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN,
+				nid, nodemask);
+		if (!page)
+			page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
+	} else {
+		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+	}
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
 		SetPagePrivate(page);
 		h->resv_huge_pages--;
@@ -1935,7 +1943,15 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 	nodemask_t *nodemask;
 
 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
-	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
+	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
+		page = alloc_surplus_huge_page(h,
+				gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN,
+				nid, nodemask);
+		if (!page)
+			page = alloc_surplus_huge_page(h, gfp_mask, nid, NULL);
+	} else {
+		page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
+	}
 	mpol_cond_put(mpol);
 
 	return page;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0cb92ab..f9b2167 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2075,7 +2075,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
 					huge_page_shift(hstate_vma(vma)));
 	} else {
 		nid = policy_node(gfp_flags, *mpol, numa_node_id());
-		if ((*mpol)->mode == MPOL_BIND)
+		if ((*mpol)->mode == MPOL_BIND ||
+		    (*mpol)->mode == MPOL_PREFERRED_MANY)
 			*nodemask = &(*mpol)->nodes;
 	}
 	return nid;
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 12/14] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (10 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 11/14] mm/mempolicy: huge-page " Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 13/14] mem/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many() Feng Tang
  2021-03-03 10:20 ` [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit Feng Tang
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi leen, Dan Williams, Feng Tang

From: Ben Widawsky <ben.widawsky@intel.com>

Adds a new mode to the existing mempolicy modes, MPOL_PREFERRED_MANY.

MPOL_PREFERRED_MANY will be adequately documented in the internal
admin-guide with this patch. Eventually, the man pages for mbind(2),
get_mempolicy(2), set_mempolicy(2) and numactl(8) will also have text
about this mode.  Those shall contain the canonical reference.

NUMA systems continue to become more prevalent. New technologies like
PMEM make finer-grained control over memory access patterns increasingly
desirable. MPOL_PREFERRED_MANY allows userspace to specify a set of
nodes that will be tried first when performing allocations. If those
allocations fail, all remaining nodes will be tried. It's a
straightforward API which solves many of the presumptive needs of system
administrators wanting to optimize workloads on such machines. The mode
works either per VMA or per thread.

Generally speaking, this is similar to the way MPOL_BIND works, except
the user will only get a SIGSEGV if all nodes in the system are unable
to satisfy the allocation request.
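
As a quick sanity check from userspace (a sketch, assuming the mode
value added by this patch is available in the headers), the policy can
be set and read back with:

	#include <numaif.h>

	unsigned long want = 0x5;		/* prefer nodes 0 and 2 */
	unsigned long got[16] = { 0 };		/* room for up to 1024 node bits */
	int mode = -1;

	set_mempolicy(MPOL_PREFERRED_MANY, &want, 8);
	get_mempolicy(&mode, got, sizeof(got) * 8, NULL, 0);
	/* mode should read back as MPOL_PREFERRED_MANY, got[0] as 0x5. */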

v3: fix a typo of checking policy (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 Documentation/admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++----
 include/uapi/linux/mempolicy.h                      |  6 +++---
 mm/hugetlb.c                                        |  4 ++--
 mm/mempolicy.c                                      | 14 ++++++--------
 4 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
index 1ad020c..b69963a 100644
--- a/Documentation/admin-guide/mm/numa_memory_policy.rst
+++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
@@ -245,6 +245,14 @@ MPOL_INTERLEAVED
 	address range or file.  During system boot up, the temporary
 	interleaved system default policy works in this mode.
 
+MPOL_PREFERRED_MANY
+        This mode specifies that the allocation should be attempted from the
+        nodemask specified in the policy. If that allocation fails, the kernel
+        will search other nodes, in order of increasing distance from the first
+        set bit in the nodemask based on information provided by the platform
+        firmware. It is similar to MPOL_PREFERRED with the main exception that
+        it is an error to have an empty nodemask.
+
 NUMA memory policy supports the following optional mode flags:
 
 MPOL_F_STATIC_NODES
@@ -253,10 +261,10 @@ MPOL_F_STATIC_NODES
 	nodes changes after the memory policy has been defined.
 
 	Without this flag, any time a mempolicy is rebound because of a
-	change in the set of allowed nodes, the node (Preferred) or
-	nodemask (Bind, Interleave) is remapped to the new set of
-	allowed nodes.  This may result in nodes being used that were
-	previously undesired.
+        change in the set of allowed nodes, the preferred nodemask (Preferred
+        Many), preferred node (Preferred) or nodemask (Bind, Interleave) is
+        remapped to the new set of allowed nodes.  This may result in nodes
+        being used that were previously undesired.
 
 	With this flag, if the user-specified nodes overlap with the
 	nodes allowed by the task's cpuset, then the memory policy is
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 3354774..ad3eee6 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -16,13 +16,13 @@
  */
 
 /* Policies */
-enum {
-	MPOL_DEFAULT,
+enum { MPOL_DEFAULT,
 	MPOL_PREFERRED,
 	MPOL_BIND,
 	MPOL_INTERLEAVE,
 	MPOL_LOCAL,
-	MPOL_MAX,	/* always last member of enum */
+	MPOL_PREFERRED_MANY,
+	MPOL_MAX, /* always last member of enum */
 };
 
 /* Flags for set_mempolicy */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c7c9ef3..60a0d57 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1131,7 +1131,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
 		page = dequeue_huge_page_nodemask(h,
 				gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN,
 				nid, nodemask);
@@ -1943,7 +1943,7 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 	nodemask_t *nodemask;
 
 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
-	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
 		page = alloc_surplus_huge_page(h,
 				gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN,
 				nid, nodemask);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f9b2167..1438d58 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -108,8 +108,6 @@
 
 #include "internal.h"
 
-#define MPOL_PREFERRED_MANY MPOL_MAX
-
 /* Internal flags */
 #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0)	/* Skip checks for continuous vmas */
 #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1)		/* Invert check for nodemask */
@@ -180,7 +178,7 @@ struct mempolicy *get_task_policy(struct task_struct *p)
 static const struct mempolicy_operations {
 	int (*create)(struct mempolicy *pol, const nodemask_t *nodes);
 	void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes);
-} mpol_ops[MPOL_MAX + 1];
+} mpol_ops[MPOL_MAX];
 
 static inline int mpol_store_user_nodemask(const struct mempolicy *pol)
 {
@@ -389,8 +387,8 @@ static void mpol_rebind_preferred_common(struct mempolicy *pol,
 }
 
 /* MPOL_PREFERRED_MANY allows multiple nodes to be set in 'nodes' */
-static void __maybe_unused mpol_rebind_preferred_many(struct mempolicy *pol,
-						      const nodemask_t *nodes)
+static void mpol_rebind_preferred_many(struct mempolicy *pol,
+				       const nodemask_t *nodes)
 {
 	mpol_rebind_preferred_common(pol, nodes, nodes);
 }
@@ -452,7 +450,7 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 	mmap_write_unlock(mm);
 }
 
-static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = {
+static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 	[MPOL_DEFAULT] = {
 		.rebind = mpol_rebind_default,
 	},
@@ -470,8 +468,8 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = {
 	},
 	/* [MPOL_LOCAL] - see mpol_new() */
 	[MPOL_PREFERRED_MANY] = {
-		.create = NULL,
-		.rebind = NULL,
+		.create = mpol_new_preferred_many,
+		.rebind = mpol_rebind_preferred_many,
 	},
 };
 
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 13/14] mem/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many()
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (11 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 12/14] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 10:20 ` [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit Feng Tang
  13 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi leen, Dan Williams, Feng Tang

Unify mpol_new_preferred() and mpol_new_preferred_many() to reduce code
duplication; the only real difference is that MPOL_PREFERRED keeps just
the first node of the passed-in nodemask.

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 25 +++++++------------------
 1 file changed, 7 insertions(+), 18 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1438d58..d66c1c0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -201,32 +201,21 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
 	return 0;
 }
 
-static int mpol_new_preferred_many(struct mempolicy *pol,
+/* cover both MPOL_PREFERRED and MPOL_PREFERRED_MANY */
+static int mpol_new_preferred(struct mempolicy *pol,
 				   const nodemask_t *nodes)
 {
 	if (!nodes)
 		pol->flags |= MPOL_F_LOCAL;	/* local allocation */
 	else if (nodes_empty(*nodes))
 		return -EINVAL;			/*  no allowed nodes */
-	else
-		pol->nodes = *nodes;
-	return 0;
-}
-
-static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
-{
-	if (nodes) {
+	else {
 		/* MPOL_PREFERRED can only take a single node: */
-		nodemask_t tmp;
+		nodemask_t tmp = nodemask_of_node(first_node(*nodes));
 
-		if (nodes_empty(*nodes))
-			return -EINVAL;
-
-		tmp = nodemask_of_node(first_node(*nodes));
-		return mpol_new_preferred_many(pol, &tmp);
+		pol->nodes = (pol->mode == MPOL_PREFERRED) ? tmp : *nodes;
 	}
-
-	return mpol_new_preferred_many(pol, NULL);
+	return 0;
 }
 
 static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
@@ -468,7 +457,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 	},
 	/* [MPOL_LOCAL] - see mpol_new() */
 	[MPOL_PREFERRED_MANY] = {
-		.create = mpol_new_preferred_many,
+		.create = mpol_new_preferred,
 		.rebind = mpol_rebind_preferred_many,
 	},
 };
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
                   ` (12 preceding siblings ...)
  2021-03-03 10:20 ` [PATCH v3 13/14] mem/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many() Feng Tang
@ 2021-03-03 10:20 ` Feng Tang
  2021-03-03 11:39   ` Michal Hocko
  13 siblings, 1 reply; 35+ messages in thread
From: Feng Tang @ 2021-03-03 10:20 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi leen, Dan Williams, Feng Tang

When doing broader testing, we noticed allocation slowness in one test
case that mallocs memory with a size slightly bigger than the free
memory of the targeted nodes, but much less than the total free memory
of the system.

The reason is that the code enters the slowpath of
__alloc_pages_nodemask(), which takes quite some time. As
alloc_pages_policy() will give it a 2nd try with a NULL nodemask, there
is no need to enter the slowpath for the first try. Add a new gfp bit to
skip the slowpath, so that use cases like this can leverage it.

With it, the malloc in such a case is much faster, as it never enters
the slowpath.

Adding a new gfp_mask bit is generally disliked. Another idea is to add
a second nodemask to struct 'alloc_context', so it has two:
'preferred-nmask' and 'fallback-nmask', which would be tried in turn if
not NULL; with that we could call __alloc_pages_nodemask() only once.
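
For reference, that alternative could look roughly like the sketch below
(the 'fallback_nodemask' field name is made up here for illustration and
is not part of this series):

	/* mm/internal.h, sketch only */
	struct alloc_context {
		struct zonelist *zonelist;
		nodemask_t *nodemask;		/* preferred set, tried first */
		nodemask_t *fallback_nodemask;	/* tried if the first pass fails */
		/* ... existing fields unchanged ... */
	};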

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 include/linux/gfp.h | 9 +++++++--
 mm/mempolicy.c      | 2 +-
 mm/page_alloc.c     | 2 +-
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 6e479e9..81bacbe 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -39,8 +39,9 @@ struct vm_area_struct;
 #define ___GFP_HARDWALL		0x100000u
 #define ___GFP_THISNODE		0x200000u
 #define ___GFP_ACCOUNT		0x400000u
+#define ___GFP_NO_SLOWPATH	0x800000u
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x800000u
+#define ___GFP_NOLOCKDEP	0x1000000u
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
@@ -220,11 +221,15 @@ struct vm_area_struct;
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 
+/* Do not go into the slowpath */
+#define __GFP_NO_SLOWPATH	((__force gfp_t)___GFP_NO_SLOWPATH)
+
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
+
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d66c1c0..e84b56d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2206,7 +2206,7 @@ static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp,
 	 * +-------------------------------+---------------+------------+
 	 */
 	if (pol->mode == MPOL_PREFERRED_MANY)
-		gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+		gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_NO_SLOWPATH;
 
 	page = __alloc_pages_nodemask(gfp_mask, order,
 				      policy_node(gfp, pol, preferred_nid),
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 519a60d..969e3a1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4993,7 +4993,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 
 	/* First allocation attempt */
 	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
-	if (likely(page))
+	if (likely(page) || (gfp_mask & __GFP_NO_SLOWPATH))
 		goto out;
 
 	/*
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 10:20 ` [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit Feng Tang
@ 2021-03-03 11:39   ` Michal Hocko
  2021-03-03 12:07     ` Feng Tang
  0 siblings, 1 reply; 35+ messages in thread
From: Michal Hocko @ 2021-03-03 11:39 UTC (permalink / raw)
  To: Feng Tang
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Dave Hansen, Ben Widawsky, Andi leen,
	Dan Williams

On Wed 03-03-21 18:20:58, Feng Tang wrote:
> When doing broader test, we noticed allocation slowness in one test
> case that malloc memory with size which is slightly bigger than free
> memory of targeted nodes, but much less then the total free memory
> of system.
> 
> The reason is the code enters the slowpath of __alloc_pages_nodemask(),
> which takes quite some time. As alloc_pages_policy() will give it a 2nd
> try with NULL nodemask, so there is no need to enter the slowpath for
> the first try. Add a new gfp bit to skip the slowpath, so that user cases
> like this can leverage.
> 
> With it, the malloc in such case is much accelerated as it never enters
> the slowpath.
> 
> Adding a new gfp_mask bit is generally not liked, and another idea is to
> add another nodemask to struct 'alloc_context', so it has 2: 'preferred-nmask'
> and 'fallback-nmask', and they will be tried in turn if not NULL, with
> it we can call __alloc_pages_nodemask() only once.

Yes, it is very much disliked. Is there any reason why you cannot use
GFP_NOWAIT for that purpose?
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 11:39   ` Michal Hocko
@ 2021-03-03 12:07     ` Feng Tang
  2021-03-03 12:18       ` Feng Tang
  0 siblings, 1 reply; 35+ messages in thread
From: Feng Tang @ 2021-03-03 12:07 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Dave Hansen, Ben Widawsky, Andi leen,
	Dan Williams

Hi Michal,

On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > When doing broader test, we noticed allocation slowness in one test
> > case that malloc memory with size which is slightly bigger than free
> > memory of targeted nodes, but much less then the total free memory
> > of system.
> > 
> > The reason is the code enters the slowpath of __alloc_pages_nodemask(),
> > which takes quite some time. As alloc_pages_policy() will give it a 2nd
> > try with NULL nodemask, so there is no need to enter the slowpath for
> > the first try. Add a new gfp bit to skip the slowpath, so that user cases
> > like this can leverage.
> > 
> > With it, the malloc in such case is much accelerated as it never enters
> > the slowpath.
> > 
> > Adding a new gfp_mask bit is generally not liked, and another idea is to
> > add another nodemask to struct 'alloc_context', so it has 2: 'preferred-nmask'
> > and 'fallback-nmask', and they will be tried in turn if not NULL, with
> > it we can call __alloc_pages_nodemask() only once.
> 
> Yes, it is very much disliked. Is there any reason why you cannot use
> GFP_NOWAIT for that purpose?

I did try that in the first place, but it didn't obviously change the
slowness. I assumed direct reclaim was still involved, as GFP_NOWAIT
only covers kswapd reclaim.

Thanks,
Feng


> -- 
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 12:07     ` Feng Tang
@ 2021-03-03 12:18       ` Feng Tang
  2021-03-03 12:32         ` Michal Hocko
  0 siblings, 1 reply; 35+ messages in thread
From: Feng Tang @ 2021-03-03 12:18 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Widawsky, Ben, Andi leen,
	Williams, Dan J

On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> Hi Michal,
> 
> On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> > On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > > When doing broader test, we noticed allocation slowness in one test
> > > case that malloc memory with size which is slightly bigger than free
> > > memory of targeted nodes, but much less then the total free memory
> > > of system.
> > > 
> > > The reason is the code enters the slowpath of __alloc_pages_nodemask(),
> > > which takes quite some time. As alloc_pages_policy() will give it a 2nd
> > > try with NULL nodemask, so there is no need to enter the slowpath for
> > > the first try. Add a new gfp bit to skip the slowpath, so that user cases
> > > like this can leverage.
> > > 
> > > With it, the malloc in such case is much accelerated as it never enters
> > > the slowpath.
> > > 
> > > Adding a new gfp_mask bit is generally not liked, and another idea is to
> > > add another nodemask to struct 'alloc_context', so it has 2: 'preferred-nmask'
> > > and 'fallback-nmask', and they will be tried in turn if not NULL, with
> > > it we can call __alloc_pages_nodemask() only once.
> > 
> > Yes, it is very much disliked. Is there any reason why you cannot use
> > GFP_NOWAIT for that purpose?
> 
> I did try that at the first place, but it didn't obviously change the slowness.
> I assumed the direct claim was still involved as GFP_NOWAIT only impact kswapd
> reclaim.

One thing I tried which can fix the slowness is:

+	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);

which explicitly clears both kinds of reclaim. I thought it was too
hacky, though, so I didn't mention it in the commit log.

Thanks,
Feng




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 12:18       ` Feng Tang
@ 2021-03-03 12:32         ` Michal Hocko
  2021-03-03 13:18           ` Feng Tang
  0 siblings, 1 reply; 35+ messages in thread
From: Michal Hocko @ 2021-03-03 12:32 UTC (permalink / raw)
  To: Feng Tang
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Widawsky, Ben, Andi leen,
	Williams, Dan J

On Wed 03-03-21 20:18:33, Feng Tang wrote:
> On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> > Hi Michal,
> > 
> > On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> > > On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > > > When doing broader test, we noticed allocation slowness in one test
> > > > case that malloc memory with size which is slightly bigger than free
> > > > memory of targeted nodes, but much less then the total free memory
> > > > of system.
> > > > 
> > > > The reason is the code enters the slowpath of __alloc_pages_nodemask(),
> > > > which takes quite some time. As alloc_pages_policy() will give it a 2nd
> > > > try with NULL nodemask, so there is no need to enter the slowpath for
> > > > the first try. Add a new gfp bit to skip the slowpath, so that user cases
> > > > like this can leverage.
> > > > 
> > > > With it, the malloc in such case is much accelerated as it never enters
> > > > the slowpath.
> > > > 
> > > > Adding a new gfp_mask bit is generally not liked, and another idea is to
> > > > add another nodemask to struct 'alloc_context', so it has 2: 'preferred-nmask'
> > > > and 'fallback-nmask', and they will be tried in turn if not NULL, with
> > > > it we can call __alloc_pages_nodemask() only once.
> > > 
> > > Yes, it is very much disliked. Is there any reason why you cannot use
> > > GFP_NOWAIT for that purpose?
> > 
> > I did try that at the first place, but it didn't obviously change the slowness.
> > I assumed the direct claim was still involved as GFP_NOWAIT only impact kswapd
> > reclaim.

I assume you haven't created the gfp mask correctly. What was the
exact gfp mask you used?

> 
> One thing I tried which can fix the slowness is:
> 
> +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> 
> which explicitly clears the 2 kinds of reclaim. And I thought it's too
> hacky and didn't mention it in the commit log.

Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 12:32         ` Michal Hocko
@ 2021-03-03 13:18           ` Feng Tang
  2021-03-03 13:46             ` Feng Tang
  2021-03-03 13:53             ` Michal Hocko
  0 siblings, 2 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-03 13:18 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Widawsky, Ben, Andi leen,
	Williams, Dan J

On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> > > Hi Michal,
> > > 
> > > On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> > > > On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > > > > When doing broader test, we noticed allocation slowness in one test
> > > > > case that malloc memory with size which is slightly bigger than free
> > > > > memory of targeted nodes, but much less then the total free memory
> > > > > of system.
> > > > > 
> > > > > The reason is the code enters the slowpath of __alloc_pages_nodemask(),
> > > > > which takes quite some time. As alloc_pages_policy() will give it a 2nd
> > > > > try with NULL nodemask, so there is no need to enter the slowpath for
> > > > > the first try. Add a new gfp bit to skip the slowpath, so that user cases
> > > > > like this can leverage.
> > > > > 
> > > > > With it, the malloc in such case is much accelerated as it never enters
> > > > > the slowpath.
> > > > > 
> > > > > Adding a new gfp_mask bit is generally not liked, and another idea is to
> > > > > add another nodemask to struct 'alloc_context', so it has 2: 'preferred-nmask'
> > > > > and 'fallback-nmask', and they will be tried in turn if not NULL, with
> > > > > it we can call __alloc_pages_nodemask() only once.
> > > > 
> > > > Yes, it is very much disliked. Is there any reason why you cannot use
> > > > GFP_NOWAIT for that purpose?
> > > 
> > > I did try that at the first place, but it didn't obviously change the slowness.
> > > I assumed the direct claim was still involved as GFP_NOWAIT only impact kswapd
> > > reclaim.
> 
> I assume you haven't really created gfp mask correctly. What was the
> exact gfp mask you have used?

The testcase is a malloc with a multi-preferred-node policy. IIRC, the
gfp mask is GFP_HIGHUSER_MOVABLE originally, and the code here ORs in
(__GFP_RETRY_MAYFAIL | __GFP_NOWARN).

As GFP_NOWAIT == __GFP_KSWAPD_RECLAIM, that bit is already set in this
test case.
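
For reference, simplified from include/linux/gfp.h:

	#define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)
	#define __GFP_RECLAIM	(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
	/* GFP_HIGHUSER_MOVABLE is built on GFP_USER, which includes __GFP_RECLAIM */

so ORing GFP_NOWAIT into this mask is a no-op; getting "don't wait"
behavior means clearing the direct-reclaim bit explicitly:

	gfp_mask &= ~__GFP_DIRECT_RECLAIM;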

> > 
> > One thing I tried which can fix the slowness is:
> > 
> > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > 
> > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > hacky and didn't mention it in the commit log.
> 
> Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 

When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
be fixed.

Thanks,
Feng

> -- 
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 13:18           ` Feng Tang
@ 2021-03-03 13:46             ` Feng Tang
  2021-03-03 13:59               ` Michal Hocko
  2021-03-03 13:53             ` Michal Hocko
  1 sibling, 1 reply; 35+ messages in thread
From: Feng Tang @ 2021-03-03 13:46 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Widawsky, Ben, Andi leen,
	Williams, Dan J

On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> > > > Hi Michal,
> > > > 
> > > > On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> > > > > On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > > > > > When doing broader test, we noticed allocation slowness in one test
> > > > > > case that malloc memory with size which is slightly bigger than free
> > > > > > memory of targeted nodes, but much less then the total free memory
> > > > > > of system.
> > > > > > 
> > > > > > The reason is the code enters the slowpath of __alloc_pages_nodemask(),
> > > > > > which takes quite some time. As alloc_pages_policy() will give it a 2nd
> > > > > > try with NULL nodemask, so there is no need to enter the slowpath for
> > > > > > the first try. Add a new gfp bit to skip the slowpath, so that user cases
> > > > > > like this can leverage.
> > > > > > 
> > > > > > With it, the malloc in such case is much accelerated as it never enters
> > > > > > the slowpath.
> > > > > > 
> > > > > > Adding a new gfp_mask bit is generally not liked, and another idea is to
> > > > > > add another nodemask to struct 'alloc_context', so it has 2: 'preferred-nmask'
> > > > > > and 'fallback-nmask', and they will be tried in turn if not NULL, with
> > > > > > it we can call __alloc_pages_nodemask() only once.
> > > > > 
> > > > > Yes, it is very much disliked. Is there any reason why you cannot use
> > > > > GFP_NOWAIT for that purpose?
> > > > 
> > > > I did try that at the first place, but it didn't obviously change the slowness.
> > > > I assumed the direct claim was still involved as GFP_NOWAIT only impact kswapd
> > > > reclaim.
> > 
> > I assume you haven't really created gfp mask correctly. What was the
> > exact gfp mask you have used?
> 
> The testcase is a malloc with multi-preferred-node policy, IIRC, the gfp
> mask is HIGHUSER_MOVABLE originally, and code here ORs (__GFP_RETRY_MAYFAIL | __GFP_NOWARN).
> 
> As GFP_WAIT == __GFP_KSWAPD_RECLAIM, in this test case, the bit is already set.
> 
> > > 
> > > One thing I tried which can fix the slowness is:
> > > 
> > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > 
> > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > hacky and didn't mention it in the commit log.
> > 
> > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> 
> When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> be fixed.

I just double checked by rerunning the test: 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
can also accelerate the allocation a lot, though it is still a little
slower than this patch. It seems I mixed up some of my earlier tries;
sorry for the confusion!

Could this be used as the solution? Or the approach of adding another
fallback nodemask? The latter would change the current API quite a bit.

Thanks,
Feng



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 13:18           ` Feng Tang
  2021-03-03 13:46             ` Feng Tang
@ 2021-03-03 13:53             ` Michal Hocko
  1 sibling, 0 replies; 35+ messages in thread
From: Michal Hocko @ 2021-03-03 13:53 UTC (permalink / raw)
  To: Feng Tang
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Widawsky, Ben, Andi leen,
	Williams, Dan J

On Wed 03-03-21 21:18:32, Feng Tang wrote:
> On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> > > > Hi Michal,
> > > > 
> > > > On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> > > > > On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > > > > > When doing broader test, we noticed allocation slowness in one test
> > > > > > case that malloc memory with size which is slightly bigger than free
> > > > > > memory of targeted nodes, but much less then the total free memory
> > > > > > of system.
> > > > > > 
> > > > > > The reason is the code enters the slowpath of __alloc_pages_nodemask(),
> > > > > > which takes quite some time. As alloc_pages_policy() will give it a 2nd
> > > > > > try with NULL nodemask, so there is no need to enter the slowpath for
> > > > > > the first try. Add a new gfp bit to skip the slowpath, so that user cases
> > > > > > like this can leverage.
> > > > > > 
> > > > > > With it, the malloc in such case is much accelerated as it never enters
> > > > > > the slowpath.
> > > > > > 
> > > > > > Adding a new gfp_mask bit is generally not liked, and another idea is to
> > > > > > add another nodemask to struct 'alloc_context', so it has 2: 'preferred-nmask'
> > > > > > and 'fallback-nmask', and they will be tried in turn if not NULL, with
> > > > > > it we can call __alloc_pages_nodemask() only once.
> > > > > 
> > > > > Yes, it is very much disliked. Is there any reason why you cannot use
> > > > > GFP_NOWAIT for that purpose?
> > > > 
> > > > I did try that at the first place, but it didn't obviously change the slowness.
> > > > I assumed the direct claim was still involved as GFP_NOWAIT only impact kswapd
> > > > reclaim.
> > 
> > I assume you haven't really created gfp mask correctly. What was the
> > exact gfp mask you have used?
> 
> The testcase is a malloc with multi-preferred-node policy, IIRC, the gfp
> mask is HIGHUSER_MOVABLE originally, and code here ORs (__GFP_RETRY_MAYFAIL | __GFP_NOWARN).
> 
> As GFP_WAIT == __GFP_KSWAPD_RECLAIM, in this test case, the bit is already set.

Yes, you have to clear the gfp flag for direct reclaim. I can see how
that can be confusing, though.
 
> > > One thing I tried which can fix the slowness is:
> > > 
> > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > 
> > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > hacky and didn't mention it in the commit log.
> > 
> > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> 
> When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> be fixed.

OK, I thought that you wanted to prevent direct reclaim because that is
the usual suspect for a slowdown. If this is not related to direct
reclaim then please try to find out what the actual bottleneck is.
Also, how big of a slowdown are we talking about here?
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 13:46             ` Feng Tang
@ 2021-03-03 13:59               ` Michal Hocko
  2021-03-03 16:31                 ` Ben Widawsky
  0 siblings, 1 reply; 35+ messages in thread
From: Michal Hocko @ 2021-03-03 13:59 UTC (permalink / raw)
  To: Feng Tang
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Widawsky, Ben, Andi leen,
	Williams, Dan J

On Wed 03-03-21 21:46:44, Feng Tang wrote:
> On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
[...]
> > > > One thing I tried which can fix the slowness is:
> > > > 
> > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > 
> > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > hacky and didn't mention it in the commit log.
> > > 
> > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > 
> > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > be fixed.
> 
> I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> can also accelerate the allocation much! though is still a little slower than
> this patch. Seems I've messed some of the tries, and sorry for the confusion!
> 
> Could this be used as the solution? or the adding another fallback_nodemask way?
> but the latter will change the current API quite a bit.

I haven't got to the whole series yet. The real question is whether the
first attempt to enforce the preferred mask is a general win. I would
argue that it resembles the existing single node preferred memory policy
because that one doesn't push heavily on the preferred node either. So
dropping just the direct reclaim mode makes some sense to me.

IIRC this is something I was recommending in an early proposal of the
feature.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 13:59               ` Michal Hocko
@ 2021-03-03 16:31                 ` Ben Widawsky
  2021-03-03 16:48                   ` Dave Hansen
  2021-03-03 17:14                   ` Michal Hocko
  0 siblings, 2 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-03-03 16:31 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Feng Tang, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Hansen, Dave, Andi leen, Williams,
	Dan J

On 21-03-03 14:59:35, Michal Hocko wrote:
> On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> [...]
> > > > > One thing I tried which can fix the slowness is:
> > > > > 
> > > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > 
> > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > hacky and didn't mention it in the commit log.
> > > > 
> > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > > 
> > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > be fixed.
> > 
> > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > can also accelerate the allocation much! though is still a little slower than
> > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > 
> > Could this be used as the solution? or the adding another fallback_nodemask way?
> > but the latter will change the current API quite a bit.
> 
> I haven't got to the whole series yet. The real question is whether the
> first attempt to enforce the preferred mask is a general win. I would
> argue that it resembles the existing single node preferred memory policy
> because that one doesn't push heavily on the preferred node either. So
> dropping just the direct reclaim mode makes some sense to me.
> 
> IIRC this is something I was recommending in an early proposal of the
> feature.

My assumption [FWIW] is that the use cases we've outlined for
multi-preferred would want heavier pushing on the preference mask.
However, maybe the uapi could dictate how hard to try/not to try.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 16:31                 ` Ben Widawsky
@ 2021-03-03 16:48                   ` Dave Hansen
  2021-03-10  5:19                     ` Feng Tang
  2021-03-03 17:14                   ` Michal Hocko
  1 sibling, 1 reply; 35+ messages in thread
From: Dave Hansen @ 2021-03-03 16:48 UTC (permalink / raw)
  To: Michal Hocko, Feng Tang, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Andi leen, Williams, Dan J

On 3/3/21 8:31 AM, Ben Widawsky wrote:
>> I haven't got to the whole series yet. The real question is whether the
>> first attempt to enforce the preferred mask is a general win. I would
>> argue that it resembles the existing single node preferred memory policy
>> because that one doesn't push heavily on the preferred node either. So
>> dropping just the direct reclaim mode makes some sense to me.
>>
>> IIRC this is something I was recommending in an early proposal of the
>> feature.
> My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> would want more heavy pushing on the preference mask. However, maybe the uapi
> could dictate how hard to try/not try.

There are two things that I think are important:

1. MPOL_PREFERRED_MANY fallback away from the preferred nodes should be
   *temporary*, even in the face of the preferred set being full.  That
   means that _some_ reclaim needs to be done.  Kicking off kswapd is
   fine for this.
2. MPOL_PREFERRED_MANY behavior should resemble MPOL_PREFERRED as
   closely as possible.  We're just going to confuse users if they set a
   single node in a MPOL_PREFERRED_MANY mask and get different behavior
   from MPOL_PREFERRED.

While it would be nice, short-term, to steer MPOL_PREFERRED_MANY
behavior toward how we expect it to get used first, I think it's a
mistake if we do it at the cost of long-term divergence from MPOL_PREFERRED.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 16:31                 ` Ben Widawsky
  2021-03-03 16:48                   ` Dave Hansen
@ 2021-03-03 17:14                   ` Michal Hocko
  2021-03-03 17:22                     ` Ben Widawsky
  1 sibling, 1 reply; 35+ messages in thread
From: Michal Hocko @ 2021-03-03 17:14 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: Feng Tang, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Hansen, Dave, Andi leen, Williams,
	Dan J

On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> On 21-03-03 14:59:35, Michal Hocko wrote:
> > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > [...]
> > > > > > One thing I tried which can fix the slowness is:
> > > > > > 
> > > > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > 
> > > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > > hacky and didn't mention it in the commit log.
> > > > > 
> > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > > > 
> > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > be fixed.
> > > 
> > > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > can also accelerate the allocation much! though is still a little slower than
> > > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > > 
> > > Could this be used as the solution? or the adding another fallback_nodemask way?
> > > but the latter will change the current API quite a bit.
> > 
> > I haven't got to the whole series yet. The real question is whether the
> > first attempt to enforce the preferred mask is a general win. I would
> > argue that it resembles the existing single node preferred memory policy
> > because that one doesn't push heavily on the preferred node either. So
> > dropping just the direct reclaim mode makes some sense to me.
> > 
> > IIRC this is something I was recommending in an early proposal of the
> > feature.
> 
> My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> would want more heavy pushing on the preference mask. However, maybe the uapi
> could dictate how hard to try/not try.

What does that mean and what is the expectation from the kernel to be
more or less cast in stone?

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 17:14                   ` Michal Hocko
@ 2021-03-03 17:22                     ` Ben Widawsky
  2021-03-04  8:14                       ` Feng Tang
  2021-03-04 12:57                       ` Michal Hocko
  0 siblings, 2 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-03-03 17:22 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Feng Tang, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Hansen, Dave, Andi leen, Williams,
	Dan J

On 21-03-03 18:14:30, Michal Hocko wrote:
> On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > [...]
> > > > > > > One thing I tried which can fix the slowness is:
> > > > > > > 
> > > > > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > > 
> > > > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > > > hacky and didn't mention it in the commit log.
> > > > > > 
> > > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > > > > 
> > > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > > be fixed.
> > > > 
> > > > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > > can also accelerate the allocation much! though is still a little slower than
> > > > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > > > 
> > > > Could this be used as the solution? or the adding another fallback_nodemask way?
> > > > but the latter will change the current API quite a bit.
> > > 
> > > I haven't got to the whole series yet. The real question is whether the
> > > first attempt to enforce the preferred mask is a general win. I would
> > > argue that it resembles the existing single node preferred memory policy
> > > because that one doesn't push heavily on the preferred node either. So
> > > dropping just the direct reclaim mode makes some sense to me.
> > > 
> > > IIRC this is something I was recommending in an early proposal of the
> > > feature.
> > 
> > My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> > would want more heavy pushing on the preference mask. However, maybe the uapi
> > could dictate how hard to try/not try.
> 
> What does that mean and what is the expectation from the kernel to be
> more or less cast in stone?
> 

(I'm not positive I've understood your question, so correct me if I
misunderstood)

I'm not sure there is a cast-in-stone way to define it, nor that we
should. At the very least though, something in the uapi that has a
general mapping to GFP flags (specifically around reclaim) for the first
round of allocation could make sense.

In my head there are 3 levels of request possible for multiple nodes:
1. BIND: Those nodes or die.
2. Preferred hard: Those nodes and I'm willing to wait. Fallback if impossible.
3. Preferred soft: Those nodes but I don't want to wait.

Current UAPI in the series doesn't define a distinction between 2, and 3. As I
understand the change, Feng is defining the behavior to be #3, which makes #2
not an option. I sort of punted on defining it entirely, in the beginning.
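
A rough sketch of how such a uapi knob could map onto GFP behaviour for
the first (preferred-nodes-only) allocation round is below. The
MPOL_F_PREFER_HARD/MPOL_F_PREFER_SOFT names and bit values are purely
hypothetical (nothing like them exists in this series), while the GFP
flags are the real ones discussed in this thread; kernel context is
assumed.

/* Hypothetical flags, illustration-only names and bit values */
#define MPOL_F_PREFER_HARD	(1 << 3)
#define MPOL_F_PREFER_SOFT	(1 << 4)

static gfp_t preferred_many_first_try_gfp(unsigned short flags, gfp_t gfp)
{
	if (flags & MPOL_F_PREFER_SOFT)
		/* "Preferred soft": do not wait, fall back quickly */
		return gfp & ~__GFP_DIRECT_RECLAIM;
	if (flags & MPOL_F_PREFER_HARD)
		/* "Preferred hard": allow reclaim before any fallback */
		return gfp | __GFP_RETRY_MAYFAIL;
	return gfp;
}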


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 17:22                     ` Ben Widawsky
@ 2021-03-04  8:14                       ` Feng Tang
  2021-03-04 12:59                         ` Michal Hocko
  2021-03-04 12:57                       ` Michal Hocko
  1 sibling, 1 reply; 35+ messages in thread
From: Feng Tang @ 2021-03-04  8:14 UTC (permalink / raw)
  To: Michal Hocko, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Hansen, Dave, Andi Kleen, Williams,
	Dan J

On Wed, Mar 03, 2021 at 09:22:50AM -0800, Ben Widawsky wrote:
> On 21-03-03 18:14:30, Michal Hocko wrote:
> > On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > > [...]
> > > > > > > > One thing I tried which can fix the slowness is:
> > > > > > > > 
> > > > > > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > > > 
> > > > > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > > > > hacky and didn't mention it in the commit log.
> > > > > > > 
> > > > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > > > > > 
> > > > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > > > be fixed.
> > > > > 
> > > > > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > > > can also accelerate the allocation much! though is still a little slower than
> > > > > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > > > > 
> > > > > Could this be used as the solution? or the adding another fallback_nodemask way?
> > > > > but the latter will change the current API quite a bit.
> > > > 
> > > > I haven't got to the whole series yet. The real question is whether the
> > > > first attempt to enforce the preferred mask is a general win. I would
> > > > argue that it resembles the existing single node preferred memory policy
> > > > because that one doesn't push heavily on the preferred node either. So
> > > > dropping just the direct reclaim mode makes some sense to me.
> > > > 
> > > > IIRC this is something I was recommending in an early proposal of the
> > > > feature.
> > > 
> > > My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> > > would want more heavy pushing on the preference mask. However, maybe the uapi
> > > could dictate how hard to try/not try.
> > 
> > What does that mean and what is the expectation from the kernel to be
> > more or less cast in stone?
> > 
> 
> (I'm not positive I've understood your question, so correct me if I
> misunderstood)
> 
> I'm not sure there is a stone-cast way to define it nor should we. At the very
> least though, something in uapi that has a general mapping to GFP flags
> (specifically around reclaim) for the first round of allocation could make
> sense.
> 
> In my head there are 3 levels of request possible for multiple nodes:
> 1. BIND: Those nodes or die.
> 2. Preferred hard: Those nodes and I'm willing to wait. Fallback if impossible.
> 3. Preferred soft: Those nodes but I don't want to wait.
> 
> Current UAPI in the series doesn't define a distinction between 2, and 3. As I
> understand the change, Feng is defining the behavior to be #3, which makes #2
> not an option. I sort of punted on defining it entirely, in the beginning.

As discussed earlier in the thread, one less hacky solution is to clear
__GFP_DIRECT_RECLAIM bit so that it won't go into direct reclaim, but still
wakeup the kswapd of target nodes and retry, which sits now between 'Preferred hard'
and 'Preferred soft' :)

For current MPOL_PREFERRED, its semantic is also 'Preferred hard', that it
will check free memory of other nodes before entering slowpath waiting.
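
For clarity, the two gfp variants that have come up in this thread look
roughly like below (an illustrative sketch, not code from the series;
the variable names are made up, and gfp_mask is the mask of the first,
preferred-nodes-only try):

	/* Variant from the earlier experiment: skips direct reclaim and
	 * also avoids waking kswapd on the preferred nodes */
	gfp_t gfp_no_reclaim = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);

	/* Variant suggested here: skip direct reclaim only; kswapd on the
	 * preferred nodes is still woken before the second try */
	gfp_t gfp_first_try = gfp_mask & ~__GFP_DIRECT_RECLAIM;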

Thanks,
Feng



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 17:22                     ` Ben Widawsky
  2021-03-04  8:14                       ` Feng Tang
@ 2021-03-04 12:57                       ` Michal Hocko
  1 sibling, 0 replies; 35+ messages in thread
From: Michal Hocko @ 2021-03-04 12:57 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: Feng Tang, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Hansen, Dave, Andi Kleen, Williams,
	Dan J

On Wed 03-03-21 09:22:50, Ben Widawsky wrote:
> On 21-03-03 18:14:30, Michal Hocko wrote:
> > On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > > [...]
> > > > > > > > One thing I tried which can fix the slowness is:
> > > > > > > > 
> > > > > > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > > > 
> > > > > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > > > > hacky and didn't mention it in the commit log.
> > > > > > > 
> > > > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > > > > > 
> > > > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > > > be fixed.
> > > > > 
> > > > > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > > > can also accelerate the allocation much! though is still a little slower than
> > > > > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > > > > 
> > > > > Could this be used as the solution? or the adding another fallback_nodemask way?
> > > > > but the latter will change the current API quite a bit.
> > > > 
> > > > I haven't got to the whole series yet. The real question is whether the
> > > > first attempt to enforce the preferred mask is a general win. I would
> > > > argue that it resembles the existing single node preferred memory policy
> > > > because that one doesn't push heavily on the preferred node either. So
> > > > dropping just the direct reclaim mode makes some sense to me.
> > > > 
> > > > IIRC this is something I was recommending in an early proposal of the
> > > > feature.
> > > 
> > > My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> > > would want more heavy pushing on the preference mask. However, maybe the uapi
> > > could dictate how hard to try/not try.
> > 
> > What does that mean and what is the expectation from the kernel to be
> > more or less cast in stone?
> > 
> 
> (I'm not positive I've understood your question, so correct me if I
> misunderstood)
> 
> I'm not sure there is a stone-cast way to define it nor should we.

OK, I thought you wanted the behavior to diverge from the existing
MPOL_PREFERRED, which only prefers the configured node as a default
while the allocator is free to fall back to any other node under
memory pressure. For multiple preferred nodes the same should apply:
only a lightweight attempt on the preferred set before falling back
to the full nodeset. The paragraph I was replying to is not in line
with this, though.

> At the very
> least though, something in uapi that has a general mapping to GFP flags
> (specifically around reclaim) for the first round of allocation could make
> sense.

I do not think this is a good idea.

> In my head there are 3 levels of request possible for multiple nodes:
> 1. BIND: Those nodes or die.
> 2. Preferred hard: Those nodes and I'm willing to wait. Fallback if impossible.
> 3. Preferred soft: Those nodes but I don't want to wait.

I do agree that an intermediate "preference" can be helpful because
binding is just too strict and OOM semantic is far from ideal. But this
would need a new policy.
 
> Current UAPI in the series doesn't define a distinction between 2, and 3. As I
> understand the change, Feng is defining the behavior to be #3, which makes #2
> not an option. I sort of punted on defining it entirely, in the beginning.

I really think it should be in line with the existing preferred policy
behavior.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-04  8:14                       ` Feng Tang
@ 2021-03-04 12:59                         ` Michal Hocko
  2021-03-05  2:21                           ` Feng Tang
  0 siblings, 1 reply; 35+ messages in thread
From: Michal Hocko @ 2021-03-04 12:59 UTC (permalink / raw)
  To: Feng Tang
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Andi Kleen, Williams, Dan J

On Thu 04-03-21 16:14:14, Feng Tang wrote:
> On Wed, Mar 03, 2021 at 09:22:50AM -0800, Ben Widawsky wrote:
> > On 21-03-03 18:14:30, Michal Hocko wrote:
> > > On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > > > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > > > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > > > [...]
> > > > > > > > > One thing I tried which can fix the slowness is:
> > > > > > > > > 
> > > > > > > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > > > > 
> > > > > > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > > > > > hacky and didn't mention it in the commit log.
> > > > > > > > 
> > > > > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > > > > > > 
> > > > > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > > > > be fixed.
> > > > > > 
> > > > > > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > > > > can also accelerate the allocation much! though is still a little slower than
> > > > > > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > > > > > 
> > > > > > Could this be used as the solution? or the adding another fallback_nodemask way?
> > > > > > but the latter will change the current API quite a bit.
> > > > > 
> > > > > I haven't got to the whole series yet. The real question is whether the
> > > > > first attempt to enforce the preferred mask is a general win. I would
> > > > > argue that it resembles the existing single node preferred memory policy
> > > > > because that one doesn't push heavily on the preferred node either. So
> > > > > dropping just the direct reclaim mode makes some sense to me.
> > > > > 
> > > > > IIRC this is something I was recommending in an early proposal of the
> > > > > feature.
> > > > 
> > > > My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> > > > would want more heavy pushing on the preference mask. However, maybe the uapi
> > > > could dictate how hard to try/not try.
> > > 
> > > What does that mean and what is the expectation from the kernel to be
> > > more or less cast in stone?
> > > 
> > 
> > (I'm not positive I've understood your question, so correct me if I
> > misunderstood)
> > 
> > I'm not sure there is a stone-cast way to define it nor should we. At the very
> > least though, something in uapi that has a general mapping to GFP flags
> > (specifically around reclaim) for the first round of allocation could make
> > sense.
> > 
> > In my head there are 3 levels of request possible for multiple nodes:
> > 1. BIND: Those nodes or die.
> > 2. Preferred hard: Those nodes and I'm willing to wait. Fallback if impossible.
> > 3. Preferred soft: Those nodes but I don't want to wait.
> > 
> > Current UAPI in the series doesn't define a distinction between 2, and 3. As I
> > understand the change, Feng is defining the behavior to be #3, which makes #2
> > not an option. I sort of punted on defining it entirely, in the beginning.
> 
> As discussed earlier in the thread, one less hacky solution is to clear
> __GFP_DIRECT_RECLAIM bit so that it won't go into direct reclaim, but still
> wakeup the kswapd of target nodes and retry, which sits now between 'Preferred hard'
> and 'Preferred soft' :)

Yes that is what I've had in mind when talking about a lightweight
attempt.

> For current MPOL_PREFERRED, its semantic is also 'Preferred hard', that it

Did you mean to say prefer soft? Because the direct reclaim is attempted
only when node reclaim is enabled.

> will check free memory of other nodes before entering slowpath waiting.

Yes, hence "soft" semantic.

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-04 12:59                         ` Michal Hocko
@ 2021-03-05  2:21                           ` Feng Tang
  0 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-05  2:21 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Andrew Morton, Andrea Arcangeli,
	David Rientjes, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Hansen, Dave, Andi leen, Williams, Dan J

On Thu, Mar 04, 2021 at 01:59:40PM +0100, Michal Hocko wrote:
> On Thu 04-03-21 16:14:14, Feng Tang wrote:
> > On Wed, Mar 03, 2021 at 09:22:50AM -0800, Ben Widawsky wrote:
> > > On 21-03-03 18:14:30, Michal Hocko wrote:
> > > > On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > > > > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > > > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > > > > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > > > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > > > > [...]
> > > > > > > > > > One thing I tried which can fix the slowness is:
> > > > > > > > > > 
> > > > > > > > > > +	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > > > > > 
> > > > > > > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > > > > > > hacky and didn't mention it in the commit log.
> > > > > > > > > 
> > > > > > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > > > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well? 
> > > > > > > > 
> > > > > > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > > > > > be fixed.
> > > > > > > 
> > > > > > > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > > > > > can also accelerate the allocation much! though is still a little slower than
> > > > > > > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > > > > > > 
> > > > > > > Could this be used as the solution? or the adding another fallback_nodemask way?
> > > > > > > but the latter will change the current API quite a bit.
> > > > > > 
> > > > > > I haven't got to the whole series yet. The real question is whether the
> > > > > > first attempt to enforce the preferred mask is a general win. I would
> > > > > > argue that it resembles the existing single node preferred memory policy
> > > > > > because that one doesn't push heavily on the preferred node either. So
> > > > > > dropping just the direct reclaim mode makes some sense to me.
> > > > > > 
> > > > > > IIRC this is something I was recommending in an early proposal of the
> > > > > > feature.
> > > > > 
> > > > > My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> > > > > would want more heavy pushing on the preference mask. However, maybe the uapi
> > > > > could dictate how hard to try/not try.
> > > > 
> > > > What does that mean and what is the expectation from the kernel to be
> > > > more or less cast in stone?
> > > > 
> > > 
> > > (I'm not positive I've understood your question, so correct me if I
> > > misunderstood)
> > > 
> > > I'm not sure there is a stone-cast way to define it nor should we. At the very
> > > least though, something in uapi that has a general mapping to GFP flags
> > > (specifically around reclaim) for the first round of allocation could make
> > > sense.
> > > 
> > > In my head there are 3 levels of request possible for multiple nodes:
> > > 1. BIND: Those nodes or die.
> > > 2. Preferred hard: Those nodes and I'm willing to wait. Fallback if impossible.
> > > 3. Preferred soft: Those nodes but I don't want to wait.
> > > 
> > > Current UAPI in the series doesn't define a distinction between 2, and 3. As I
> > > understand the change, Feng is defining the behavior to be #3, which makes #2
> > > not an option. I sort of punted on defining it entirely, in the beginning.
> > 
> > As discussed earlier in the thread, one less hacky solution is to clear
> > __GFP_DIRECT_RECLAIM bit so that it won't go into direct reclaim, but still
> > wakeup the kswapd of target nodes and retry, which sits now between 'Preferred hard'
> > and 'Preferred soft' :)
> 
> Yes that is what I've had in mind when talking about a lightweight
> attempt.
> 
> > For current MPOL_PREFERRED, its semantic is also 'Preferred hard', that it
> 
> Did you mean to say prefer soft? Because the direct reclaim is attempted
> only when node reclaim is enabled.
> 
> > will check free memory of other nodes before entering slowpath waiting.
> 
> Yes, hence "soft" semantic.

Yes, it's the #3 item: 'Preferred soft' 

Thanks,
Feng

> -- 
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-03 16:48                   ` Dave Hansen
@ 2021-03-10  5:19                     ` Feng Tang
  2021-03-10  9:44                       ` Michal Hocko
  0 siblings, 1 reply; 35+ messages in thread
From: Feng Tang @ 2021-03-10  5:19 UTC (permalink / raw)
  To: Dave Hansen, Michal Hocko, Ben Widawsky
  Cc: Michal Hocko, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Andi Kleen, Williams, Dan J

On Wed, Mar 03, 2021 at 08:48:58AM -0800, Dave Hansen wrote:
> On 3/3/21 8:31 AM, Ben Widawsky wrote:
> >> I haven't got to the whole series yet. The real question is whether the
> >> first attempt to enforce the preferred mask is a general win. I would
> >> argue that it resembles the existing single node preferred memory policy
> >> because that one doesn't push heavily on the preferred node either. So
> >> dropping just the direct reclaim mode makes some sense to me.
> >>
> >> IIRC this is something I was recommending in an early proposal of the
> >> feature.
> > My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> > would want more heavy pushing on the preference mask. However, maybe the uapi
> > could dictate how hard to try/not try.
> 
> There are two things that I think are important:
> 
> 1. MPOL_PREFERRED_MANY fallback away from the preferred nodes should be
>    *temporary*, even in the face of the preferred set being full.  That
>    means that _some_ reclaim needs to be done.  Kicking off kswapd is
>    fine for this.
> 2. MPOL_PREFERRED_MANY behavior should resemble MPOL_PREFERRED as
>    closely as possible.  We're just going to confuse users if they set a
>    single node in a MPOL_PREFERRED_MANY mask and get different behavior
>    from MPOL_PREFERRED.
> 
> While it would be nice, short-term, to steer MPOL_PREFERRED_MANY
> behavior toward how we expect it to get used first, I think it's a
> mistake if we do it at the cost of long-term divergence from MPOL_PREFERRED.

Hi All,

Based on the discussion, I have updated the patch as below; please review, thanks


From ea9e32fa8b6eff4a64d790b856e044adb30f04b5 Mon Sep 17 00:00:00 2001
From: Feng Tang <feng.tang@intel.com>
Date: Wed, 10 Mar 2021 12:31:24 +0800
Subject: [PATCH] mm/mempolicy: speedup page alloc for MPOL_PREFERRED_MANY

When doing broader testing, we noticed allocation slowness in one test
case where the malloc'ed size is slightly bigger than the free memory
of the targeted nodes, but much less than the total free memory of
the system.

The reason is that the code enters the slowpath of
__alloc_pages_nodemask(), which takes quite some time.

Since alloc_pages_policy() will give it a 2nd try with a NULL nodemask,
we first tried a solution that creates a new gfp_mask bit
__GFP_NO_SLOWPATH to explicitly skip the slowpath on the first try,
but that is brutal and costs one precious gfp mask bit.

Based on the discussion with Michal/Ben/Dave [1], only skip direct
reclaim while still allowing the first try to wake up kswapd, which
fixes the slowness and brings MPOL_PREFERRED_MANY closer to the
semantics of MPOL_PREFERRED, while avoiding a new gfp bit.

[1]. https://lore.kernel.org/lkml/1614766858-90344-15-git-send-email-feng.tang@intel.com/
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d66c1c0..00b19f7 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2205,9 +2205,13 @@ static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp,
 	 * | MPOL_PREFERRED_MANY (round 2) | local         | NULL       |
 	 * +-------------------------------+---------------+------------+
 	 */
-	if (pol->mode == MPOL_PREFERRED_MANY)
+	if (pol->mode == MPOL_PREFERRED_MANY) {
 		gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
 
+		/* Skip direct reclaim, as there will be a second try */
+		gfp_mask &= ~__GFP_DIRECT_RECLAIM;
+	}
+
 	page = __alloc_pages_nodemask(gfp_mask, order,
 				      policy_node(gfp, pol, preferred_nid),
 				      policy_nodemask(gfp, pol));
-- 
2.7.4
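
A rough sketch of the two-round flow this hunk relies on, reconstructed
from the commit message and the table visible in the diff context above
(the retry path lives elsewhere in alloc_pages_policy(), so the exact
shape in the series may differ):

	/* Round 1: preferred nodes only; kswapd may be woken, but direct
	 * reclaim is skipped (see the hunk above) */
	page = __alloc_pages_nodemask(gfp_mask, order,
				      policy_node(gfp, pol, preferred_nid),
				      policy_nodemask(gfp, pol));

	/* Round 2: for MPOL_PREFERRED_MANY, retry with the local node as
	 * preferred node and a NULL nodemask, i.e. the normal fallback */
	if (!page && pol->mode == MPOL_PREFERRED_MANY)
		page = __alloc_pages_nodemask(gfp, order,
					      numa_node_id(), NULL);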




^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL
  2021-03-03 10:20 ` [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL Feng Tang
@ 2021-03-10  6:27   ` Feng Tang
  0 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-10  6:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel, Andrew Morton
  Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
	Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
	Ben Widawsky, Andi Kleen, Dan Williams

On Wed, Mar 03, 2021 at 06:20:45PM +0800, Feng Tang wrote:
> From: Ben Widawsky <ben.widawsky@intel.com>
> 
> MPOL_LOCAL is a bit weird because it is simply a different name for an
> existing behavior (preferred policy with no node mask). It has been this
> way since it was added here:
> commit 479e2802d09f ("mm: mempolicy: Make MPOL_LOCAL a real policy")
> 
> It is so similar to MPOL_PREFERRED in fact that when the policy is
> created in mpol_new, the mode is set as PREFERRED, and an internal state
> representing LOCAL doesn't exist.
> 
> To prevent future explorers from scratching their head as to why
> MPOL_LOCAL isn't defined in the mpol_ops table, add a small comment
> explaining the situations.
> 
> v2:
> Change comment to refer to mpol_new (Michal)
> 
> Link: https://lore.kernel.org/r/20200630212517.308045-2-ben.widawsky@intel.com
> #Acked-by: Michal Hocko <mhocko@suse.com>

This shouldn't be masked:

Acked-by: Michal Hocko <mhocko@suse.com>

I added the mask when sending it out for internal review and forgot to
restore it; sorry for the noise.

Thanks,
Feng

> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
>  mm/mempolicy.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 2c3a865..5730fc1 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -427,6 +427,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
>  		.create = mpol_new_bind,
>  		.rebind = mpol_rebind_nodemask,
>  	},
> +	/* [MPOL_LOCAL] - see mpol_new() */
>  };
>  
>  static int migrate_page_add(struct page *page, struct list_head *pagelist,
> -- 
> 2.7.4
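
As the quoted commit message explains, MPOL_LOCAL is internally created
as a preferred policy with no node mask, so from userspace the two calls
below should end up with the same effective behaviour. This is a sketch
for illustration only, not code from the series; it assumes a kernel and
libnuma new enough to expose MPOL_LOCAL, and is built with -lnuma.

#include <numaif.h>

int main(void)
{
	/* Both ask for "allocate on the node the task is running on" */
	set_mempolicy(MPOL_LOCAL, NULL, 0);
	set_mempolicy(MPOL_PREFERRED, NULL, 0);	/* empty nodemask => local */
	return 0;
}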


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-10  5:19                     ` Feng Tang
@ 2021-03-10  9:44                       ` Michal Hocko
  2021-03-10 11:49                         ` Feng Tang
  0 siblings, 1 reply; 35+ messages in thread
From: Michal Hocko @ 2021-03-10  9:44 UTC (permalink / raw)
  To: Feng Tang
  Cc: Dave Hansen, Ben Widawsky, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Andi Kleen, Williams, Dan J

On Wed 10-03-21 13:19:47, Feng Tang wrote:
[...]
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index d66c1c0..00b19f7 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2205,9 +2205,13 @@ static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp,
>  	 * | MPOL_PREFERRED_MANY (round 2) | local         | NULL       |
>  	 * +-------------------------------+---------------+------------+
>  	 */
> -	if (pol->mode == MPOL_PREFERRED_MANY)
> +	if (pol->mode == MPOL_PREFERRED_MANY) {
>  		gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
>  
> +		/* Skip direct reclaim, as there will be a second try */
> +		gfp_mask &= ~__GFP_DIRECT_RECLAIM;

__GFP_RETRY_MAYFAIL is a reclaim modifier which doesn't make any sense
without __GFP_DIRECT_RECLAIM. Also I think it would be better to have a
proper allocation flags in the initial patch which implements the
fallback.

> +	}
> +
>  	page = __alloc_pages_nodemask(gfp_mask, order,
>  				      policy_node(gfp, pol, preferred_nid),
>  				      policy_nodemask(gfp, pol));
> -- 
> 2.7.4
> 
> 

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
  2021-03-10  9:44                       ` Michal Hocko
@ 2021-03-10 11:49                         ` Feng Tang
  0 siblings, 0 replies; 35+ messages in thread
From: Feng Tang @ 2021-03-10 11:49 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Dave Hansen, Ben Widawsky, linux-mm, linux-kernel, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Andi Kleen, Williams, Dan J

On Wed, Mar 10, 2021 at 10:44:11AM +0100, Michal Hocko wrote:
> On Wed 10-03-21 13:19:47, Feng Tang wrote:
> [...]
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index d66c1c0..00b19f7 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -2205,9 +2205,13 @@ static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp,
> >  	 * | MPOL_PREFERRED_MANY (round 2) | local         | NULL       |
> >  	 * +-------------------------------+---------------+------------+
> >  	 */
> > -	if (pol->mode == MPOL_PREFERRED_MANY)
> > +	if (pol->mode == MPOL_PREFERRED_MANY) {
> >  		gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
> >  
> > +		/* Skip direct reclaim, as there will be a second try */
> > +		gfp_mask &= ~__GFP_DIRECT_RECLAIM;
> 
> __GFP_RETRY_MAYFAIL is a reclaim modifier which doesn't make any sense
> without __GFP_DIRECT_RECLAIM. Also I think it would be better to have a
> proper allocation flags in the initial patch which implements the
> fallback.

Ok, I will remove the __GFP_RETRY_MAYFAIL setting and fold this into the
previous patch (8/14).
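
The folded hunk would then look roughly like this (sketch only; the
exact result will be in the reposted series):

	if (pol->mode == MPOL_PREFERRED_MANY) {
		gfp_mask |= __GFP_NOWARN;

		/* Skip direct reclaim, as there will be a second try */
		gfp_mask &= ~__GFP_DIRECT_RECLAIM;
	}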

Thanks,
Feng

> > +	}
> > +
> >  	page = __alloc_pages_nodemask(gfp_mask, order,
> >  				      policy_node(gfp, pol, preferred_nid),
> >  				      policy_nodemask(gfp, pol));
> > -- 
> > 2.7.4
> > 
> > 
> 
> -- 
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2021-03-10 11:49 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
2021-03-03 10:20 ` [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL Feng Tang
2021-03-10  6:27   ` Feng Tang
2021-03-03 10:20 ` [PATCH v3 02/14] mm/mempolicy: convert single preferred_node to full nodemask Feng Tang
2021-03-03 10:20 ` [PATCH v3 03/14] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang
2021-03-03 10:20 ` [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask Feng Tang
2021-03-03 10:20 ` [PATCH v3 05/14] mm/mempolicy: refactor rebind code for PREFERRED_MANY Feng Tang
2021-03-03 10:20 ` [PATCH v3 06/14] mm/mempolicy: kill v.preferred_nodes Feng Tang
2021-03-03 10:20 ` [PATCH v3 07/14] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND Feng Tang
2021-03-03 10:20 ` [PATCH v3 08/14] mm/mempolicy: Create a page allocator for policy Feng Tang
2021-03-03 10:20 ` [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred Feng Tang
2021-03-03 10:20 ` [PATCH v3 10/14] mm/mempolicy: VMA " Feng Tang
2021-03-03 10:20 ` [PATCH v3 11/14] mm/mempolicy: huge-page " Feng Tang
2021-03-03 10:20 ` [PATCH v3 12/14] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Feng Tang
2021-03-03 10:20 ` [PATCH v3 13/14] mem/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many() Feng Tang
2021-03-03 10:20 ` [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit Feng Tang
2021-03-03 11:39   ` Michal Hocko
2021-03-03 12:07     ` Feng Tang
2021-03-03 12:18       ` Feng Tang
2021-03-03 12:32         ` Michal Hocko
2021-03-03 13:18           ` Feng Tang
2021-03-03 13:46             ` Feng Tang
2021-03-03 13:59               ` Michal Hocko
2021-03-03 16:31                 ` Ben Widawsky
2021-03-03 16:48                   ` Dave Hansen
2021-03-10  5:19                     ` Feng Tang
2021-03-10  9:44                       ` Michal Hocko
2021-03-10 11:49                         ` Feng Tang
2021-03-03 17:14                   ` Michal Hocko
2021-03-03 17:22                     ` Ben Widawsky
2021-03-04  8:14                       ` Feng Tang
2021-03-04 12:59                         ` Michal Hocko
2021-03-05  2:21                           ` Feng Tang
2021-03-04 12:57                       ` Michal Hocko
2021-03-03 13:53             ` Michal Hocko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).