* [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API
@ 2020-07-31  7:35 js1304
  2020-07-31  7:35 ` [PATCH v3 2/3] mm/hugetlb: make hugetlb migration callback CMA aware js1304
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: js1304 @ 2020-07-31  7:35 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, kernel-team, Vlastimil Babka,
	Christoph Hellwig, Roman Gushchin, Mike Kravetz, Naoya Horiguchi,
	Michal Hocko, Aneesh Kumar K.V, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

We have a well-defined allocation scope API to exclude the CMA
region. Use it rather than manipulating gfp_mask manually. With this
change, we can now restore __GFP_MOVABLE in gfp_mask, as for a usual
migration target allocation, so ZONE_MOVABLE is also searched by the
page allocator. For hugetlb, gfp_mask is redefined since hugetlb has
a regular allocation mask filter for migration targets. __GFP_NOWARN
is added to the hugetlb gfp_mask filter since the new user of the
filter, gup, wants to be silent when an allocation fails.

Note that this can be considered a fix for commit 9a4e9f3b2d73
("mm: update get_user_pages_longterm to migrate pages allocated from
CMA region"). However, a "Fixes" tag isn't added here since the
previous behavior was merely suboptimal and didn't cause any
problem.
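
For reference, a minimal sketch of how the scope API is intended to
bracket an allocation (illustrative only; the real call sites are in
__gup_longterm_locked() and new_non_cma_page() below):

	unsigned int flags;
	struct page *page;

	flags = memalloc_nocma_save();	/* sets PF_MEMALLOC_NOCMA */
	/* movable allocations now skip CMA pageblocks */
	page = alloc_pages(GFP_USER | __GFP_MOVABLE | __GFP_NOWARN, 0);
	memalloc_nocma_restore(flags);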

Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/hugetlb.h |  2 ++
 mm/gup.c                | 17 ++++++++---------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6b9508d..2660b04 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -708,6 +708,8 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 	/* Some callers might want to enfoce node */
 	modified_mask |= (gfp_mask & __GFP_THISNODE);
 
+	modified_mask |= (gfp_mask & __GFP_NOWARN);
+
 	return modified_mask;
 }
 
diff --git a/mm/gup.c b/mm/gup.c
index a55f1ec..3990ddc 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1601,10 +1601,12 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 	 * Trying to allocate a page for migration. Ignore allocation
 	 * failure warnings. We don't force __GFP_THISNODE here because
 	 * this node here is the node where we have CMA reservation and
-	 * in some case these nodes will have really less non movable
+	 * in some case these nodes will have really less non CMA
 	 * allocation memory.
+	 *
+	 * Note that CMA region is prohibited by allocation scope.
 	 */
-	gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
+	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;
 
 	if (PageHighMem(page))
 		gfp_mask |= __GFP_HIGHMEM;
@@ -1612,6 +1614,8 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 #ifdef CONFIG_HUGETLB_PAGE
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(page);
+
+		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
 		/*
 		 * We don't want to dequeue from the pool because pool pages will
 		 * mostly be from the CMA region.
@@ -1626,11 +1630,6 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 		 */
 		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
 
-		/*
-		 * Remove the movable mask so that we don't allocate from
-		 * CMA area again.
-		 */
-		thp_gfpmask &= ~__GFP_MOVABLE;
 		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
 		if (!thp)
 			return NULL;
@@ -1773,7 +1772,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 				     vmas_tmp, NULL, gup_flags);
 
 	if (gup_flags & FOLL_LONGTERM) {
-		memalloc_nocma_restore(flags);
 		if (rc < 0)
 			goto out;
 
@@ -1786,9 +1784,10 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 
 		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 						 vmas_tmp, gup_flags);
+out:
+		memalloc_nocma_restore(flags);
 	}
 
-out:
 	if (vmas_tmp != vmas)
 		kfree(vmas_tmp);
 	return rc;
-- 
2.7.4




* [PATCH v3 2/3] mm/hugetlb: make hugetlb migration callback CMA aware
  2020-07-31  7:35 [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API js1304
@ 2020-07-31  7:35 ` js1304
  2020-07-31  7:35 ` [PATCH v3 3/3] mm/gup: use a standard migration target allocation callback js1304
  2020-08-04 12:05 ` [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API Vlastimil Babka
  2 siblings, 0 replies; 4+ messages in thread
From: js1304 @ 2020-07-31  7:35 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, kernel-team, Vlastimil Babka,
	Christoph Hellwig, Roman Gushchin, Mike Kravetz, Naoya Horiguchi,
	Michal Hocko, Aneesh Kumar K.V, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

new_non_cma_page() in gup.c needs to allocate a new page that is not
in a CMA area. new_non_cma_page() implements this by using the
allocation scope APIs.

However, there is a work-around for hugetlb. The normal hugetlb page
allocation API for migration is alloc_huge_page_nodemask(). It
consists of two steps: first, dequeue a page from the pool; second,
if no page is available on the queue, allocate one with the page
allocator (see the sketch below).
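
A simplified sketch of that two-step flow (locking and reservation
checks elided; based on mm/hugetlb.c of this era):

	/* step 1: try to dequeue a pre-allocated page from the pool */
	page = dequeue_huge_page_nodemask(h, gfp_mask, preferred_nid, nmask);
	if (page)
		return page;

	/* step 2: fall back to allocating a fresh huge page */
	return alloc_migrate_huge_page(h, gfp_mask, preferred_nid, nmask);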

new_non_cma_page() can't use this API since the first step (dequeue)
isn't aware of the scope API that excludes CMA areas. So,
new_non_cma_page() exports the hugetlb-internal function for the
second step, alloc_migrate_huge_page(), to global scope and uses it
directly. This is suboptimal since hugetlb pages on the queue cannot
be utilized.

This patch fixes the situation by making the hugetlb dequeue function
CMA aware: while dequeuing, CMA pages are skipped if the
PF_MEMALLOC_NOCMA flag is set on the current task.

Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/hugetlb.h |  2 --
 mm/gup.c                |  6 +-----
 mm/hugetlb.c            | 11 +++++++++--
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2660b04..fb2b5aa 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -509,8 +509,6 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 				nodemask_t *nmask, gfp_t gfp_mask);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 				unsigned long address);
-struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
-				     int nid, nodemask_t *nmask);
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 			pgoff_t idx);
 
diff --git a/mm/gup.c b/mm/gup.c
index 3990ddc..7b63d72 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1616,11 +1616,7 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 		struct hstate *h = page_hstate(page);
 
 		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
-		/*
-		 * We don't want to dequeue from the pool because pool pages will
-		 * mostly be from the CMA region.
-		 */
-		return alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
+		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);
 	}
 #endif
 	if (PageTransHuge(page)) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4645f14..d1706b7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -30,6 +30,7 @@
 #include <linux/llist.h>
 #include <linux/cma.h>
 #include <linux/migrate.h>
+#include <linux/sched/mm.h>
 
 #include <asm/page.h>
 #include <asm/pgalloc.h>
@@ -1041,10 +1042,16 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
 	struct page *page;
+	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);
+
+	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
+		if (nocma && is_migrate_cma_page(page))
+			continue;
 
-	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
 		if (!PageHWPoison(page))
 			break;
+	}
+
 	/*
 	 * if 'non-isolated free hugepage' not found on the list,
 	 * the allocation fails.
@@ -1973,7 +1980,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 	return page;
 }
 
-struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
+static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
 				     int nid, nodemask_t *nmask)
 {
 	struct page *page;
-- 
2.7.4




* [PATCH v3 3/3] mm/gup: use a standard migration target allocation callback
  2020-07-31  7:35 [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API js1304
  2020-07-31  7:35 ` [PATCH v3 2/3] mm/hugetlb: make hugetlb migration callback CMA aware js1304
@ 2020-07-31  7:35 ` js1304
  2020-08-04 12:05 ` [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API Vlastimil Babka
  2 siblings, 0 replies; 4+ messages in thread
From: js1304 @ 2020-07-31  7:35 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, kernel-team, Vlastimil Babka,
	Christoph Hellwig, Roman Gushchin, Mike Kravetz, Naoya Horiguchi,
	Michal Hocko, Aneesh Kumar K.V, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

There is now a standard migration target allocation callback,
alloc_migration_target(). Use it instead of the gup-specific
new_non_cma_page().
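
For context, a simplified sketch of what the standard callback does
with the passed-in control struct (based on mm/migrate.c of this
era; hugetlb, THP, and highmem handling elided):

	struct page *alloc_migration_target(struct page *page,
						unsigned long private)
	{
		struct migration_target_control *mtc;
		int nid;

		mtc = (struct migration_target_control *)private;
		nid = mtc->nid;
		if (nid == NUMA_NO_NODE)
			nid = page_to_nid(page);	/* stay on the source node */

		return __alloc_pages_node(nid, mtc->gfp_mask, 0);
	}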

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/gup.c | 54 ++++++------------------------------------------------
 1 file changed, 6 insertions(+), 48 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 7b63d72..ae096ea 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1590,52 +1590,6 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }
 
 #ifdef CONFIG_CMA
-static struct page *new_non_cma_page(struct page *page, unsigned long private)
-{
-	/*
-	 * We want to make sure we allocate the new page from the same node
-	 * as the source page.
-	 */
-	int nid = page_to_nid(page);
-	/*
-	 * Trying to allocate a page for migration. Ignore allocation
-	 * failure warnings. We don't force __GFP_THISNODE here because
-	 * this node here is the node where we have CMA reservation and
-	 * in some case these nodes will have really less non CMA
-	 * allocation memory.
-	 *
-	 * Note that CMA region is prohibited by allocation scope.
-	 */
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;
-
-	if (PageHighMem(page))
-		gfp_mask |= __GFP_HIGHMEM;
-
-#ifdef CONFIG_HUGETLB_PAGE
-	if (PageHuge(page)) {
-		struct hstate *h = page_hstate(page);
-
-		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
-		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);
-	}
-#endif
-	if (PageTransHuge(page)) {
-		struct page *thp;
-		/*
-		 * ignore allocation failure warnings
-		 */
-		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
-
-		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
-		if (!thp)
-			return NULL;
-		prep_transhuge_page(thp);
-		return thp;
-	}
-
-	return __alloc_pages_node(nid, gfp_mask, 0);
-}
-
 static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long nr_pages,
@@ -1649,6 +1603,10 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
+	struct migration_target_control mtc = {
+		.nid = NUMA_NO_NODE,
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+	};
 
 check_again:
 	for (i = 0; i < nr_pages;) {
@@ -1694,8 +1652,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, new_non_cma_page,
-				  NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
 			 * without migration.
-- 
2.7.4




* Re: [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API
  2020-07-31  7:35 [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API js1304
  2020-07-31  7:35 ` [PATCH v3 2/3] mm/hugetlb: make hugetlb migration callback CMA aware js1304
  2020-07-31  7:35 ` [PATCH v3 3/3] mm/gup: use a standard migration target allocation callback js1304
@ 2020-08-04 12:05 ` Vlastimil Babka
  2 siblings, 0 replies; 4+ messages in thread
From: Vlastimil Babka @ 2020-08-04 12:05 UTC (permalink / raw)
  To: js1304, Andrew Morton
  Cc: linux-mm, linux-kernel, kernel-team, Christoph Hellwig,
	Roman Gushchin, Mike Kravetz, Naoya Horiguchi, Michal Hocko,
	Aneesh Kumar K.V, Joonsoo Kim

On 7/31/20 9:35 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> We have a well-defined allocation scope API to exclude the CMA
> region. Use it rather than manipulating gfp_mask manually. With this
> change, we can now restore __GFP_MOVABLE in gfp_mask, as for a usual
> migration target allocation, so ZONE_MOVABLE is also searched by the
> page allocator. For hugetlb, gfp_mask is redefined since hugetlb has
> a regular allocation mask filter for migration targets. __GFP_NOWARN
> is added to the hugetlb gfp_mask filter since the new user of the
> filter, gup, wants to be silent when an allocation fails.
> 
> Note that this can be considered a fix for commit 9a4e9f3b2d73
> ("mm: update get_user_pages_longterm to migrate pages allocated from
> CMA region"). However, a "Fixes" tag isn't added here since the
> previous behavior was merely suboptimal and didn't cause any
> problem.
> 
> Suggested-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  include/linux/hugetlb.h |  2 ++
>  mm/gup.c                | 17 ++++++++---------
>  2 files changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 6b9508d..2660b04 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -708,6 +708,8 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
>  	/* Some callers might want to enfoce node */
>  	modified_mask |= (gfp_mask & __GFP_THISNODE);
>  
> +	modified_mask |= (gfp_mask & __GFP_NOWARN);
> +
>  	return modified_mask;
>  }
>  
> diff --git a/mm/gup.c b/mm/gup.c
> index a55f1ec..3990ddc 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1601,10 +1601,12 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
>  	 * Trying to allocate a page for migration. Ignore allocation
>  	 * failure warnings. We don't force __GFP_THISNODE here because
>  	 * this node here is the node where we have CMA reservation and
> -	 * in some case these nodes will have really less non movable
> +	 * in some case these nodes will have really less non CMA
>  	 * allocation memory.
> +	 *
> +	 * Note that CMA region is prohibited by allocation scope.
>  	 */
> -	gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
> +	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;
>  
>  	if (PageHighMem(page))
>  		gfp_mask |= __GFP_HIGHMEM;
> @@ -1612,6 +1614,8 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (PageHuge(page)) {
>  		struct hstate *h = page_hstate(page);
> +
> +		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
>  		/*
>  		 * We don't want to dequeue from the pool because pool pages will
>  		 * mostly be from the CMA region.
> @@ -1626,11 +1630,6 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
>  		 */
>  		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
>  
> -		/*
> -		 * Remove the movable mask so that we don't allocate from
> -		 * CMA area again.
> -		 */
> -		thp_gfpmask &= ~__GFP_MOVABLE;
>  		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
>  		if (!thp)
>  			return NULL;
> @@ -1773,7 +1772,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
>  				     vmas_tmp, NULL, gup_flags);
>  
>  	if (gup_flags & FOLL_LONGTERM) {
> -		memalloc_nocma_restore(flags);
>  		if (rc < 0)
>  			goto out;
>  
> @@ -1786,9 +1784,10 @@ static long __gup_longterm_locked(struct mm_struct *mm,
>  
>  		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
>  						 vmas_tmp, gup_flags);
> +out:
> +		memalloc_nocma_restore(flags);
>  	}
>  
> -out:
>  	if (vmas_tmp != vmas)
>  		kfree(vmas_tmp);
>  	return rc;
> 



