* [PATCH v5 -mm 1/6] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
From: Feng Tang @ 2021-06-18 3:44 UTC
To: linux-mm, Andrew Morton, Michal Hocko, David Rientjes,
Dave Hansen, Ben Widawsky
Cc: linux-kernel, linux-api, Andrea Arcangeli, Mel Gorman,
Mike Kravetz, Randy Dunlap, Vlastimil Babka, Andi Kleen,
Dan Williams, ying.huang, Dave Hansen, Feng Tang
From: Dave Hansen <dave.hansen@linux.intel.com>
The NUMA APIs currently allow passing in a "preferred node" as a
single bit set in a nodemask. If more than one bit is set, bits
after the first are ignored.
This single node is generally OK for location-based NUMA, where
memory being allocated will eventually be operated on by a single
CPU. However, in systems with multiple memory types, folks want
to target a *type* of memory instead of a location. For instance,
someone might want some high-bandwidth memory but not care about
the CPU next to which it is allocated. Or, they want a cheap,
high-capacity allocation and want to target all NUMA nodes which
have persistent memory in volatile mode. In both of these cases,
the application wants to target a *set* of nodes, but does not
want the strict MPOL_BIND behavior, as that could lead to the OOM
killer or a SIGSEGV.
So add an MPOL_PREFERRED_MANY policy to support the multiple
preferred nodes requirement. This is not a pie-in-the-sky dream for
an API: it was a response to a specific ask from more than one group
at Intel. Specifically:
1. There are existing libraries that target memory types such as
https://github.com/memkind/memkind. These are known to suffer
from SIGSEGVs when memory is low on the targeted memory "kinds"
that span more than one node. The MCDRAM on a Xeon Phi in
"Cluster on Die" mode is an example of this.
2. Volatile-use persistent memory users want to have a memory policy
which is targeted at either "cheap and slow" (PMEM) or "expensive and
fast" (DRAM). However, they do not want to experience allocation
failures when the targeted type is unavailable.
3. Allocate-then-run. Generally, we let the process scheduler decide
on which physical CPU to run a task. That location provides a
default allocation policy, and memory availability is not generally
considered when placing tasks. For situations where memory is
valuable and constrained, some users want to allocate memory first,
*then* allocate close compute resources to the allocation. This is
the reverse of the normal (CPU) model. Accelerators such as GPUs
that operate on core-mm-managed memory are interested in this model.
As suggested by Michal Hocko, a check is added in
sanitize_mpol_flags() so that the 'prefer_many' policy cannot be
used for now; it will be removed in a later patch, once all the
pieces of the 'prefer_many' implementation are in place.
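To illustrate the intended usage (an editorial sketch, not part of
this patch), here is a minimal userspace program using
set_mempolicy(2): the node numbers are made up, and
MPOL_PREFERRED_MANY is defined locally on the assumption that the
installed uapi headers predate this series.

#include <stdio.h>
#include <numaif.h>	/* set_mempolicy(), via libnuma */

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value, per the enum below */
#endif

int main(void)
{
	/* Prefer nodes 2 and 3 (say, two PMEM nodes). Unlike
	 * MPOL_BIND, the kernel may still fall back to any other
	 * node once both are exhausted. */
	unsigned long nodemask = (1UL << 2) | (1UL << 3);

	if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
			  sizeof(nodemask) * 8)) {
		perror("set_mempolicy");
		return 1;
	}

	/* Further allocations for this task try nodes 2-3 first. */
	return 0;
}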
Link: https://lore.kernel.org/r/20200630212517.308045-4-ben.widawsky@intel.com
Co-developed-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
include/uapi/linux/mempolicy.h | 1 +
mm/mempolicy.c | 44 +++++++++++++++++++++++++++++++++++++-----
2 files changed, 40 insertions(+), 5 deletions(-)
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 19a00bc7fe86..046d0ccba4cd 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -22,6 +22,7 @@ enum {
MPOL_BIND,
MPOL_INTERLEAVE,
MPOL_LOCAL,
+ MPOL_PREFERRED_MANY,
MPOL_MAX, /* always last member of enum */
};
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e32360e90274..17b5800b7dcc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -31,6 +31,9 @@
* but useful to set in a VMA when you have a non default
* process policy.
*
+ * preferred many Try a set of nodes first before normal fallback. This is
+ * similar to preferred without the special case.
+ *
* default Allocate on the local node first, or when on a VMA
* use the process policy. This is what Linux always did
* in a NUMA aware kernel and still does by, ahem, default.
@@ -207,6 +210,14 @@ static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
return 0;
}
+static int mpol_new_preferred_many(struct mempolicy *pol, const nodemask_t *nodes)
+{
+ if (nodes_empty(*nodes))
+ return -EINVAL;
+ pol->nodes = *nodes;
+ return 0;
+}
+
static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
{
if (nodes_empty(*nodes))
@@ -408,6 +419,10 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
[MPOL_LOCAL] = {
.rebind = mpol_rebind_default,
},
+ [MPOL_PREFERRED_MANY] = {
+ .create = mpol_new_preferred_many,
+ .rebind = mpol_rebind_preferred,
+ },
};
static int migrate_page_add(struct page *page, struct list_head *pagelist,
@@ -900,6 +915,7 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
case MPOL_BIND:
case MPOL_INTERLEAVE:
case MPOL_PREFERRED:
+ case MPOL_PREFERRED_MANY:
*nodes = p->nodes;
break;
case MPOL_LOCAL:
@@ -1446,7 +1462,13 @@ static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
{
*flags = *mode & MPOL_MODE_FLAGS;
*mode &= ~MPOL_MODE_FLAGS;
- if ((unsigned int)(*mode) >= MPOL_MAX)
+
+ /*
+ * The check should be 'mode >= MPOL_MAX', but as 'prefer_many'
+ * is not fully implemented, don't permit it to be used for now,
+ * and the logic will be restored in following patch
+ */
+ if ((unsigned int)(*mode) >= MPOL_PREFERRED_MANY)
return -EINVAL;
if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
return -EINVAL;
@@ -1887,7 +1909,8 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
/* Return the node id preferred by the given mempolicy, or the given id */
static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
{
- if (policy->mode == MPOL_PREFERRED) {
+ if (policy->mode == MPOL_PREFERRED ||
+ policy->mode == MPOL_PREFERRED_MANY) {
nd = first_node(policy->nodes);
} else {
/*
@@ -1931,6 +1954,7 @@ unsigned int mempolicy_slab_node(void)
switch (policy->mode) {
case MPOL_PREFERRED:
+ case MPOL_PREFERRED_MANY:
return first_node(policy->nodes);
case MPOL_INTERLEAVE:
@@ -2063,6 +2087,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
mempolicy = current->mempolicy;
switch (mempolicy->mode) {
case MPOL_PREFERRED:
+ case MPOL_PREFERRED_MANY:
case MPOL_BIND:
case MPOL_INTERLEAVE:
*mask = mempolicy->nodes;
@@ -2173,10 +2198,12 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
* node and don't fall back to other nodes, as the cost of
* remote accesses would likely offset THP benefits.
*
- * If the policy is interleave, or does not allow the current
- * node in its nodemask, we allocate the standard way.
+ * If the policy is interleave or multiple preferred nodes, or
+ * does not allow the current node in its nodemask, we allocate
+ * the standard way.
*/
- if (pol->mode == MPOL_PREFERRED)
+ if ((pol->mode == MPOL_PREFERRED ||
+ pol->mode == MPOL_PREFERRED_MANY))
hpage_node = first_node(pol->nodes);
nmask = policy_nodemask(gfp, pol);
@@ -2311,6 +2338,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
case MPOL_BIND:
case MPOL_INTERLEAVE:
case MPOL_PREFERRED:
+ case MPOL_PREFERRED_MANY:
return !!nodes_equal(a->nodes, b->nodes);
case MPOL_LOCAL:
return true;
@@ -2451,6 +2479,9 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
break;
case MPOL_PREFERRED:
+ case MPOL_PREFERRED_MANY:
+ if (node_isset(curnid, pol->nodes))
+ goto out;
polnid = first_node(pol->nodes);
break;
@@ -2829,6 +2860,7 @@ static const char * const policy_modes[] =
[MPOL_BIND] = "bind",
[MPOL_INTERLEAVE] = "interleave",
[MPOL_LOCAL] = "local",
+ [MPOL_PREFERRED_MANY] = "prefer (many)",
};
@@ -2907,6 +2939,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
if (!nodelist)
err = 0;
goto out;
+ case MPOL_PREFERRED_MANY:
case MPOL_BIND:
/*
* Insist on a nodelist
@@ -2993,6 +3026,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
case MPOL_LOCAL:
break;
case MPOL_PREFERRED:
+ case MPOL_PREFERRED_MANY:
case MPOL_BIND:
case MPOL_INTERLEAVE:
nodes = pol->nodes;
--
2.7.4
* [PATCH v5 -mm 2/6] mm/mempolicy: add page allocation function for MPOL_PREFERRED_MANY policy
From: Feng Tang @ 2021-06-18 3:44 UTC
To: linux-mm, Andrew Morton, Michal Hocko, David Rientjes,
Dave Hansen, Ben Widawsky
Cc: linux-kernel, linux-api, Andrea Arcangeli, Mel Gorman,
Mike Kravetz, Randy Dunlap, Vlastimil Babka, Andi Kleen,
Dan Williams, ying.huang, Feng Tang
The semantics of MPOL_PREFERRED_MANY are similar to MPOL_PREFERRED,
in that it will first try to allocate memory from the preferred
node(s), and fall back to all nodes in the system when the first try
fails. Add a dedicated allocation function for it, just like the
'interleave' policy has.
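As a worked illustration of the first pass's mask surgery (editorial,
not from the patch; assumes the standard kernel GFP definitions):

/*
 * Effect of the first-pass mask on a typical GFP_HIGHUSER_MOVABLE
 * request: __GFP_KSWAPD_RECLAIM survives, so kswapd can still be
 * woken, but the first pass never enters direct reclaim and stays
 * silent (__GFP_NOWARN) when it fails.
 */
gfp_t first_pass_gfp = (GFP_HIGHUSER_MOVABLE | __GFP_NOWARN)
			& ~__GFP_DIRECT_RECLAIM;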
Link: https://lore.kernel.org/r/20200630212517.308045-9-ben.widawsky@intel.com
Suggested-by: Michal Hocko <mhocko@suse.com>
Co-developed-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/mempolicy.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 17b5800b7dcc..d17bf018efcc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2153,6 +2153,25 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
return page;
}
+static struct page *alloc_page_preferred_many(gfp_t gfp, unsigned int order,
+ struct mempolicy *pol)
+{
+ struct page *page;
+
+ /*
+ * This is a two pass approach. The first pass will only try the
+ * preferred nodes but skip the direct reclaim and allow the
+ * allocation to fail, while the second pass will try all the
+ * nodes in system.
+ */
+ page = __alloc_pages(((gfp | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM),
+ order, first_node(pol->nodes), &pol->nodes);
+ if (!page)
+ page = __alloc_pages(gfp, order, numa_node_id(), NULL);
+
+ return page;
+}
+
/**
* alloc_pages_vma - Allocate a page for a VMA.
* @gfp: GFP flags.
--
2.7.4
* [PATCH v5 -mm 3/6] mm/mempolicy: enable page allocation for MPOL_PREFERRED_MANY for general cases
From: Feng Tang @ 2021-06-18 3:44 UTC
To: linux-mm, Andrew Morton, Michal Hocko, David Rientjes,
Dave Hansen, Ben Widawsky
Cc: linux-kernel, linux-api, Andrea Arcangeli, Mel Gorman,
Mike Kravetz, Randy Dunlap, Vlastimil Babka, Andi Kleen,
Dan Williams, ying.huang, Feng Tang
From: Ben Widawsky <ben.widawsky@intel.com>
In order to support MPOL_PREFERRED_MANY, which is used by
set_mempolicy(2) and mbind(2), wire up both alloc_pages() and
alloc_pages_vma() to use alloc_page_preferred_many().
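For illustration (again an editorial sketch, not part of the patch),
the per-VMA flavor via mbind(2) on an anonymous mapping; the node
numbers are arbitrary and MPOL_PREFERRED_MANY is defined locally in
case the installed headers lack it.

#include <stdio.h>
#include <sys/mman.h>
#include <numaif.h>	/* mbind(), via libnuma */

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed uapi value from patch 1/6 */
#endif

int main(void)
{
	size_t len = 64UL << 20;
	unsigned long nodemask = (1UL << 0) | (1UL << 1);
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* Pages faulted into [buf, buf+len) are then allocated via
	 * alloc_pages_vma() -> alloc_page_preferred_many(). */
	if (mbind(buf, len, MPOL_PREFERRED_MANY, &nodemask,
		  sizeof(nodemask) * 8, 0)) {
		perror("mbind");
		return 1;
	}
	return 0;
}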
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/mempolicy.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d17bf018efcc..9dce67fc9bb6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2207,6 +2207,12 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
goto out;
}
+ if (pol->mode == MPOL_PREFERRED_MANY) {
+ page = alloc_page_preferred_many(gfp, order, pol);
+ mpol_cond_put(pol);
+ goto out;
+ }
+
if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
int hpage_node = node;
@@ -2286,6 +2292,8 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
*/
if (pol->mode == MPOL_INTERLEAVE)
page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
+ else if (pol->mode == MPOL_PREFERRED_MANY)
+ page = alloc_page_preferred_many(gfp, order, pol);
else
page = __alloc_pages(gfp, order,
policy_node(gfp, pol, numa_node_id()),
--
2.7.4
* [PATCH v5 -mm 4/6] mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
From: Feng Tang @ 2021-06-18 3:44 UTC
To: linux-mm, Andrew Morton, Michal Hocko, David Rientjes,
Dave Hansen, Ben Widawsky
Cc: linux-kernel, linux-api, Andrea Arcangeli, Mel Gorman,
Mike Kravetz, Randy Dunlap, Vlastimil Babka, Andi Kleen,
Dan Williams, ying.huang, Feng Tang
From: Ben Widawsky <ben.widawsky@intel.com>
Implement the missing huge page allocation functionality while obeying
the preferred node semantics. This is similar to the implementation
for general page allocation, as it uses a fallback mechanism to try
multiple preferred nodes first, and then all other nodes.
[Thanks to the 0day bot for catching the missing #ifdef CONFIG_NUMA issue]
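A hedged userspace sketch (editorial, not from the patch) of how this
path is exercised, assuming 2MB huge pages have been reserved and
reusing the same fallback define for MPOL_PREFERRED_MANY as above:

#include <string.h>
#include <sys/mman.h>
#include <numaif.h>	/* set_mempolicy(), via libnuma */

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed uapi value from patch 1/6 */
#endif

int main(void)
{
	unsigned long nodemask = (1UL << 1) | (1UL << 2);
	size_t len = 2UL << 20;		/* one 2MB huge page */
	void *p;

	set_mempolicy(MPOL_PREFERRED_MANY, &nodemask, sizeof(nodemask) * 8);

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* The first touch faults the huge page in via
	 * dequeue_huge_page_vma(), trying nodes 1-2 before the rest. */
	memset(p, 0, len);
	return 0;
}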
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/hugetlb.c | 27 +++++++++++++++++++++++++--
mm/mempolicy.c | 3 ++-
2 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e4120680e31a..c771debd35a6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1143,7 +1143,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
unsigned long address, int avoid_reserve,
long chg)
{
- struct page *page;
+ struct page *page = NULL;
struct mempolicy *mpol;
gfp_t gfp_mask;
nodemask_t *nodemask;
@@ -1164,7 +1164,18 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
gfp_mask = htlb_alloc_mask(h);
nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+ if (mpol->mode == MPOL_PREFERRED_MANY) {
+ page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+ if (page)
+ goto check_reserve;
+ /* Fallback to all nodes */
+ nodemask = NULL;
+ }
+#endif
page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+
+check_reserve:
if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
SetHPageRestoreReserve(page);
h->resv_huge_pages--;
@@ -2048,9 +2059,21 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
nodemask_t *nodemask;
nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+ if (mpol->mode == MPOL_PREFERRED_MANY) {
+ gfp_t gfp = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;
+
+ page = alloc_surplus_huge_page(h, gfp, nid, nodemask);
+ if (page)
+ goto exit;
+ /* Fallback to all nodes */
+ nodemask = NULL;
+ }
+#endif
page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
- mpol_cond_put(mpol);
+exit:
+ mpol_cond_put(mpol);
return page;
}
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9dce67fc9bb6..93f8789758a7 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2054,7 +2054,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
huge_page_shift(hstate_vma(vma)));
} else {
nid = policy_node(gfp_flags, *mpol, numa_node_id());
- if ((*mpol)->mode == MPOL_BIND)
+ if ((*mpol)->mode == MPOL_BIND ||
+ (*mpol)->mode == MPOL_PREFERRED_MANY)
*nodemask = &(*mpol)->nodes;
}
return nid;
--
2.7.4
* [PATCH v5 -mm 5/6] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
From: Feng Tang @ 2021-06-18 3:44 UTC
To: linux-mm, Andrew Morton, Michal Hocko, David Rientjes,
Dave Hansen, Ben Widawsky
Cc: linux-kernel, linux-api, Andrea Arcangeli, Mel Gorman,
Mike Kravetz, Randy Dunlap, Vlastimil Babka, Andi Kleen,
Dan Williams, ying.huang, Feng Tang
From: Ben Widawsky <ben.widawsky@intel.com>
Add a new mode, MPOL_PREFERRED_MANY, to the existing mempolicy modes.
MPOL_PREFERRED_MANY is documented in the in-kernel admin-guide by
this patch. Eventually, the man pages for mbind(2), get_mempolicy(2),
set_mempolicy(2) and numactl(8) will also have text about this mode;
those shall contain the canonical reference.

NUMA systems continue to become more prevalent. New technologies
like PMEM make finer-grained control over memory access patterns
increasingly desirable. MPOL_PREFERRED_MANY allows userspace to
specify a set of nodes that will be tried first when performing
allocations. If those allocations fail, all remaining nodes will be
tried. It's a straightforward API which solves many of the
presumptive needs of system administrators wanting to optimize
workloads on such machines. The mode works either per VMA or per
thread.
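To illustrate (editorial sketch, not part of the patch), the mode can
be read back with get_mempolicy(2), e.g. to verify a per-thread
policy took effect; the define is again a fallback for older headers.

#include <stdio.h>
#include <numaif.h>	/* get_mempolicy(), via libnuma */

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed uapi value from patch 1/6 */
#endif

int main(void)
{
	int mode;
	unsigned long nodemask = 0;

	/* With flags == 0, this returns the calling thread's policy. */
	if (get_mempolicy(&mode, &nodemask, sizeof(nodemask) * 8,
			  NULL, 0)) {
		perror("get_mempolicy");
		return 1;
	}
	printf("mode=%d%s nodemask=%#lx\n", mode,
	       mode == MPOL_PREFERRED_MANY ? " (prefer many)" : "",
	       nodemask);
	return 0;
}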
Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
Documentation/admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++----
mm/mempolicy.c | 7 +------
2 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
index 067a90a1499c..cd653561e531 100644
--- a/Documentation/admin-guide/mm/numa_memory_policy.rst
+++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
@@ -245,6 +245,14 @@ MPOL_INTERLEAVED
address range or file. During system boot up, the temporary
interleaved system default policy works in this mode.
+MPOL_PREFERRED_MANY
+ This mode specifies that the allocation should be attempted from the
+ nodemask specified in the policy. If that allocation fails, the kernel
+ will search other nodes, in order of increasing distance from the first
+ set bit in the nodemask based on information provided by the platform
+ firmware. It is similar to MPOL_PREFERRED with the main exception
+ that it is an error to have an empty nodemask.
+
NUMA memory policy supports the following optional mode flags:
MPOL_F_STATIC_NODES
@@ -253,10 +261,10 @@ MPOL_F_STATIC_NODES
nodes changes after the memory policy has been defined.
Without this flag, any time a mempolicy is rebound because of a
- change in the set of allowed nodes, the node (Preferred) or
- nodemask (Bind, Interleave) is remapped to the new set of
- allowed nodes. This may result in nodes being used that were
- previously undesired.
+ change in the set of allowed nodes, the preferred nodemask (Preferred
+ Many), preferred node (Preferred) or nodemask (Bind, Interleave) is
+ remapped to the new set of allowed nodes. This may result in nodes
+ being used that were previously undesired.
With this flag, if the user-specified nodes overlap with the
nodes allowed by the task's cpuset, then the memory policy is
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 93f8789758a7..d90247d6a71b 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1463,12 +1463,7 @@ static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
*flags = *mode & MPOL_MODE_FLAGS;
*mode &= ~MPOL_MODE_FLAGS;
- /*
- * The check should be 'mode >= MPOL_MAX', but as 'prefer_many'
- * is not fully implemented, don't permit it to be used for now,
- * and the logic will be restored in following patch
- */
- if ((unsigned int)(*mode) >= MPOL_PREFERRED_MANY)
+ if ((unsigned int)(*mode) >= MPOL_MAX)
return -EINVAL;
if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
return -EINVAL;
--
2.7.4
* [PATCH v5 -mm 6/6] mm/mempolicy: unify the create() func for bind/interleave/prefer-many policies
From: Feng Tang @ 2021-06-18 3:44 UTC
To: linux-mm, Andrew Morton, Michal Hocko, David Rientjes,
Dave Hansen, Ben Widawsky
Cc: linux-kernel, linux-api, Andrea Arcangeli, Mel Gorman,
Mike Kravetz, Randy Dunlap, Vlastimil Babka, Andi Kleen,
Dan Williams, ying.huang, Feng Tang
As they all do the same thing (sanity-check and save the nodemask
info), create one mpol_new_nodemask() to reduce the redundancy.
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/mempolicy.c | 24 ++++--------------------
1 file changed, 4 insertions(+), 20 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d90247d6a71b..e5ce5a7e8d92 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -192,7 +192,7 @@ static void mpol_relative_nodemask(nodemask_t *ret, const nodemask_t *orig,
nodes_onto(*ret, tmp, *rel);
}
-static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
+static int mpol_new_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
{
if (nodes_empty(*nodes))
return -EINVAL;
@@ -210,22 +210,6 @@ static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
return 0;
}
-static int mpol_new_preferred_many(struct mempolicy *pol, const nodemask_t *nodes)
-{
- if (nodes_empty(*nodes))
- return -EINVAL;
- pol->nodes = *nodes;
- return 0;
-}
-
-static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
-{
- if (nodes_empty(*nodes))
- return -EINVAL;
- pol->nodes = *nodes;
- return 0;
-}
-
/*
* mpol_set_nodemask is called after mpol_new() to set up the nodemask, if
* any, for the new policy. mpol_new() has already validated the nodes
@@ -405,7 +389,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
.rebind = mpol_rebind_default,
},
[MPOL_INTERLEAVE] = {
- .create = mpol_new_interleave,
+ .create = mpol_new_nodemask,
.rebind = mpol_rebind_nodemask,
},
[MPOL_PREFERRED] = {
@@ -413,14 +397,14 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
.rebind = mpol_rebind_preferred,
},
[MPOL_BIND] = {
- .create = mpol_new_bind,
+ .create = mpol_new_nodemask,
.rebind = mpol_rebind_nodemask,
},
[MPOL_LOCAL] = {
.rebind = mpol_rebind_default,
},
[MPOL_PREFERRED_MANY] = {
- .create = mpol_new_preferred_many,
+ .create = mpol_new_nodemask,
.rebind = mpol_rebind_preferred,
},
};
--
2.7.4