mm-commits.vger.kernel.org archive mirror
* + mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2.patch added to -mm tree
@ 2016-11-18 23:05 akpm
From: akpm @ 2016-11-18 23:05 UTC (permalink / raw)
  To: shijie.huang, aneesh.kumar, catalin.marinas, gerald.schaefer,
	kaly.xin, kirill.shutemov, mhocko, mike.kravetz, n-horiguchi,
	steve.capper, will.deacon, mm-commits


The patch titled
     Subject: mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2
has been added to the -mm tree.  Its filename is
     mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Huang Shijie <shijie.huang@arm.com>
Subject: mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2

Since huge_nodemask() was changed to fill in a caller-supplied nodemask (and
now returns whether it did so), this function must be updated to match.

Link: http://lkml.kernel.org/r/1479279304-31379-1-git-send-email-shijie.huang@arm.com
Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Kaly Xin <kaly.xin@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |   24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff -puN mm/hugetlb.c~mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2 mm/hugetlb.c
--- a/mm/hugetlb.c~mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2
+++ a/mm/hugetlb.c
@@ -1507,19 +1507,19 @@ int dissolve_free_huge_pages(unsigned lo
  *    the gigantic page.
  *
  * 2. The NUMA is enabled, but the vma is NULL.
- *    Create a @nodes_allowed, use alloc_fresh_gigantic_page() to get
+ *    Create a @nodes_allowed, and use alloc_fresh_gigantic_page() to get
  *    the gigantic page.
  *
  * 3. The NUMA is enabled, and the vma is valid.
  *    Use the @vma's memory policy.
- *    Get @nodes_mask by huge_nodemask(), and use alloc_fresh_gigantic_page()
+ *    Get @nodes_allowed by huge_nodemask(), and use alloc_fresh_gigantic_page()
  *    to get the gigantic page.
  */
 static struct page *__hugetlb_alloc_gigantic_page(struct hstate *h,
 		struct vm_area_struct *vma, unsigned long addr, int nid)
 {
-	struct page *page;
-	nodemask_t *nodes_mask;
+	NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);
+	struct page *page = NULL;
 
 	/* Not NUMA */
 	if (!IS_ENABLED(CONFIG_NUMA)) {
@@ -1530,14 +1530,12 @@ static struct page *__hugetlb_alloc_giga
 		if (page)
 			prep_compound_gigantic_page(page, huge_page_order(h));
 
+		NODEMASK_FREE(nodes_allowed);
 		return page;
 	}
 
 	/* NUMA && !vma */
 	if (!vma) {
-		NODEMASK_ALLOC(nodemask_t, nodes_allowed,
-				GFP_KERNEL | __GFP_NORETRY);
-
 		if (nid == NUMA_NO_NODE) {
 			if (!init_nodemask_of_mempolicy(nodes_allowed)) {
 				NODEMASK_FREE(nodes_allowed);
@@ -1558,13 +1556,11 @@ static struct page *__hugetlb_alloc_giga
 	}
 
 	/* NUMA && vma */
-	nodes_mask = huge_nodemask(vma, addr);
-	if (nodes_mask) {
-		page = alloc_fresh_gigantic_page(h, nodes_mask, true);
-		if (page)
-			return page;
-	}
-	return NULL;
+	if (huge_nodemask(vma, addr, nodes_allowed))
+		page = alloc_fresh_gigantic_page(h, nodes_allowed, true);
+
+	NODEMASK_FREE(nodes_allowed);
+	return page;
 }
 
 /*
_

Patches currently in -mm which might be from shijie.huang@arm.com are

mm-hugetlb-rename-some-allocation-functions.patch
mm-hugetlb-add-a-new-parameter-for-some-functions.patch
mm-hugetlb-change-the-return-type-for-alloc_fresh_gigantic_page.patch
mm-mempolicy-intruduce-a-helper-huge_nodemask.patch
mm-mempolicy-intruduce-a-helper-huge_nodemask-v2.patch
mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page.patch
mm-hugetlb-add-a-new-function-to-allocate-a-new-gigantic-page-v2.patch
mm-hugetlb-support-gigantic-surplus-pages.patch

