Subject: + mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch added to -mm tree
From: akpm @ 2021-07-15 0:17 UTC
To: aarcange, ak, ben.widawsky, dan.j.williams, dave.hansen,
feng.tang, mgorman, mhocko, mhocko, mike.kravetz, mm-commits,
rdunlap, rientjes, vbabka, ying.huang
The patch titled
Subject: mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
has been added to the -mm tree. Its filename is
mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Ben Widawsky <ben.widawsky@intel.com>
Subject: mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
Implement the missing huge page allocation functionality while obeying the
preferred node semantics. This is similar to the implementation for
general page allocation, as it uses a fallback mechanism to try multiple
preferred nodes first, and then all other nodes.
[Thanks to 0day bot for catching the missing #ifdef CONFIG_NUMA issue]
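For context, a minimal userspace sketch of how a task would opt into this
policy before faulting in hugetlb pages (illustrative only: it assumes a
kernel carrying this series, and MPOL_PREFERRED_MANY is defined by hand
here because older numaif.h copies predate the uapi value):

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <numaif.h>		/* set_mempolicy(); link with -lnuma */

	#ifndef MPOL_PREFERRED_MANY
	#define MPOL_PREFERRED_MANY 5	/* value from include/uapi/linux/mempolicy.h */
	#endif

	int main(void)
	{
		/* Prefer nodes 0 and 1; the kernel may fall back to any node. */
		unsigned long nodemask = (1UL << 0) | (1UL << 1);
		size_t len = 2UL << 20;		/* one 2MB huge page */
		void *p;

		if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
				  8 * sizeof(nodemask)) < 0) {
			perror("set_mempolicy");	/* EINVAL on kernels without the mode */
			return 1;
		}

		/* Hugetlb faults on this mapping exercise the paths patched below. */
		p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		memset(p, 0, len);		/* touch to fault in the huge page */
		munmap(p, len);
		return 0;
	}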
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
Link: https://lkml.kernel.org/r/1626077374-81682-5-git-send-email-feng.tang@intel.com
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/hugetlb.c | 25 +++++++++++++++++++++++++
mm/mempolicy.c | 3 ++-
2 files changed, 27 insertions(+), 1 deletion(-)
--- a/mm/hugetlb.c~mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many
+++ a/mm/hugetlb.c
@@ -1166,7 +1166,18 @@ static struct page *dequeue_huge_page_vm
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+		if (page)
+			goto check_reserve;
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
 	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+
+check_reserve:
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
 		SetHPageRestoreReserve(page);
 		h->resv_huge_pages--;
 	}
@@ -2147,6 +2158,20 @@ struct page *alloc_buddy_huge_page_with_
 	nodemask_t *nodemask;
 
 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		gfp_t gfp = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;
+
+		page = alloc_surplus_huge_page(h, gfp, nid, nodemask);
+		if (page) {
+			mpol_cond_put(mpol);
+			return page;
+		}
+
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
 	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask, false);
 	mpol_cond_put(mpol);
--- a/mm/mempolicy.c~mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many
+++ a/mm/mempolicy.c
@@ -2054,7 +2054,8 @@ int huge_node(struct vm_area_struct *vma
 					huge_page_shift(hstate_vma(vma)));
 	} else {
 		nid = policy_node(gfp_flags, *mpol, numa_node_id());
-		if ((*mpol)->mode == MPOL_BIND)
+		if ((*mpol)->mode == MPOL_BIND ||
+		    (*mpol)->mode == MPOL_PREFERRED_MANY)
 			*nodemask = &(*mpol)->nodes;
 	}
 	return nid;
_
Patches currently in -mm which might be from ben.widawsky@intel.com are
mm-mempolicy-enable-page-allocation-for-mpol_preferred_many-for-general-cases.patch
mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
mm-mempolicy-advertise-new-mpol_preferred_many.patch
Subject: + mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch added to -mm tree
From: akpm @ 2021-08-03 21:59 UTC
To: mm-commits, ying.huang, vbabka, rientjes, rdunlap, mike.kravetz,
mhocko, mhocko, mgorman, feng.tang, dave.hansen, dan.j.williams,
ak, aarcange, ben.widawsky
The patch titled
Subject: mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
has been added to the -mm tree. Its filename is
mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Ben Widawsky <ben.widawsky@intel.com>
Subject: mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
Implement the missing huge page allocation functionality while obeying the
preferred node semantics. This is similar to the implementation for
general page allocation, as it uses a fallback mechanism to try multiple
preferred nodes first, and then all other nodes.
[akpm: fix compile issue when merging with other hugetlb patch]
[Thanks to 0day bot for catching the missing #ifdef CONFIG_NUMA issue]
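For readers skimming the diff below, the fallback described above distills
to this two-pass shape (an illustrative sketch only; alloc_with_preferred_many()
is an invented name, not a function this patch adds):

	/* Illustrative only: distilled two-pass fallback, name is invented. */
	static struct page *alloc_with_preferred_many(struct hstate *h,
						      gfp_t gfp_mask, int nid,
						      nodemask_t *nodemask)
	{
		/* Pass 1: preferred nodes only, opportunistic -- no direct
		 * reclaim, no allocation-failure warning, no __GFP_NOFAIL. */
		gfp_t gfp = (gfp_mask | __GFP_NOWARN) &
			    ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
		struct page *page = alloc_surplus_huge_page(h, gfp, nid,
							    nodemask, false);

		if (page)
			return page;

		/* Pass 2: a NULL nodemask means "any node"; the caller's
		 * full gfp_mask, including reclaim, applies. */
		return alloc_surplus_huge_page(h, gfp_mask, nid, NULL, false);
	}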
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
Link: https://lkml.kernel.org/r/1627970362-61305-4-git-send-email-feng.tang@intel.com
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/hugetlb.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
--- a/mm/hugetlb.c~mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many
+++ a/mm/hugetlb.c
@@ -1166,7 +1166,20 @@ static struct page *dequeue_huge_page_vm
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+		if (page)
+			goto check_reserve;
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
 	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+
+#ifdef CONFIG_NUMA
+check_reserve:
+#endif
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
 		SetHPageRestoreReserve(page);
 		h->resv_huge_pages--;
 	}
@@ -2147,6 +2160,21 @@ struct page *alloc_buddy_huge_page_with_
 	nodemask_t *nodemask;
 
 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		gfp_t gfp = gfp_mask | __GFP_NOWARN;
+
+		gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
+		page = alloc_surplus_huge_page(h, gfp, nid, nodemask, false);
+		if (page) {
+			mpol_cond_put(mpol);
+			return page;
+		}
+
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
 	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask, false);
 	mpol_cond_put(mpol);
_
Patches currently in -mm which might be from ben.widawsky@intel.com are
mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
mm-mempolicy-advertise-new-mpol_preferred_many.patch