From: js1304@gmail.com
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kernel-team@lge.com, Vlastimil Babka <vbabka@suse.cz>,
Christoph Hellwig <hch@infradead.org>,
Roman Gushchin <guro@fb.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
Michal Hocko <mhocko@suse.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH v3 6/8] mm/gup: use a standard migration target allocation callback
Date: Tue, 23 Jun 2020 15:13:46 +0900
Message-ID: <1592892828-1934-7-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
There is now a well-defined migration target allocation callback,
alloc_migration_target(). It is mostly identical to new_non_cma_page(),
except that it does not avoid CMA pages. This patch adds a CMA
consideration (a skip_cma flag) to the standard migration target
allocation callback and uses it in gup.c.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
mm/gup.c | 57 ++++++++-------------------------------------------------
mm/internal.h | 1 +
mm/migrate.c | 4 +++-
3 files changed, 12 insertions(+), 50 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 15be281..f6124e3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1608,56 +1608,15 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
}
#ifdef CONFIG_CMA
-static struct page *new_non_cma_page(struct page *page, unsigned long private)
+static struct page *alloc_migration_target_non_cma(struct page *page, unsigned long private)
{
- /*
- * We want to make sure we allocate the new page from the same node
- * as the source page.
- */
- int nid = page_to_nid(page);
- /*
- * Trying to allocate a page for migration. Ignore allocation
- * failure warnings. We don't force __GFP_THISNODE here because
- * this node here is the node where we have CMA reservation and
- * in some case these nodes will have really less non movable
- * allocation memory.
- */
- gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
-
- if (PageHighMem(page))
- gfp_mask |= __GFP_HIGHMEM;
-
-#ifdef CONFIG_HUGETLB_PAGE
- if (PageHuge(page)) {
- struct hstate *h = page_hstate(page);
-
- /*
- * We don't want to dequeue from the pool because pool pages will
- * mostly be from the CMA region.
- */
- return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask, true);
- }
-#endif
- if (PageTransHuge(page)) {
- struct page *thp;
- /*
- * ignore allocation failure warnings
- */
- gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
-
- /*
- * Remove the movable mask so that we don't allocate from
- * CMA area again.
- */
- thp_gfpmask &= ~__GFP_MOVABLE;
- thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
- if (!thp)
- return NULL;
- prep_transhuge_page(thp);
- return thp;
- }
+ struct migration_target_control mtc = {
+ .nid = page_to_nid(page),
+ .gfp_mask = GFP_USER | __GFP_NOWARN,
+ .skip_cma = true,
+ };
- return __alloc_pages_node(nid, gfp_mask, 0);
+ return alloc_migration_target(page, (unsigned long)&mtc);
}
static long check_and_migrate_cma_pages(struct task_struct *tsk,
@@ -1719,7 +1678,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
for (i = 0; i < nr_pages; i++)
put_page(pages[i]);
- if (migrate_pages(&cma_page_list, new_non_cma_page,
+ if (migrate_pages(&cma_page_list, alloc_migration_target_non_cma,
NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
/*
* some of the pages failed migration. Do get_user_pages
diff --git a/mm/internal.h b/mm/internal.h
index f725aa8..fb7f7fe 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -619,6 +619,7 @@ struct migration_target_control {
int nid; /* preferred node id */
nodemask_t *nmask;
gfp_t gfp_mask;
+ bool skip_cma;
};
#endif /* __MM_INTERNAL_H */
diff --git a/mm/migrate.c b/mm/migrate.c
index 3afff59..7c4cd74 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1550,7 +1550,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
if (PageHuge(page)) {
return alloc_huge_page_nodemask(
page_hstate(compound_head(page)), mtc->nid,
- mtc->nmask, gfp_mask, false);
+ mtc->nmask, gfp_mask, mtc->skip_cma);
}
if (PageTransHuge(page)) {
@@ -1561,6 +1561,8 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
zidx = zone_idx(page_zone(page));
if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
gfp_mask |= __GFP_HIGHMEM;
+ if (mtc->skip_cma)
+ gfp_mask &= ~__GFP_MOVABLE;
new_page = __alloc_pages_nodemask(gfp_mask, order,
mtc->nid, mtc->nmask);
--
2.7.4