* [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE
@ 2020-12-10  0:43 Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 1/8] mm/gup: perform check_dax_vmas only when FS_DAX is enabled Pavel Tatashin
                   ` (7 more replies)
  0 siblings, 8 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

Changelog
---------
v2
- Addressed all review comments
- Added Reviewed-by's
- Renamed PF_MEMALLOC_NOMOVABLE to PF_MEMALLOC_PIN
- Added is_pinnable_page() to check if a page can be long-term pinned
- Fixed the gup fast path by checking is_pinnable_page()
- Renamed cma_page_list to movable_page_list
- Added an admin-guide note about handling pinned pages in ZONE_MOVABLE,
  and updated the caveat about pinned pages in linux/mmzone.h
- Moved current_gfp_context() to the fast path

---------
When a page is pinned it cannot be moved, and its physical address stays
the same until the page is unpinned.

This functionality is useful: it allows userland to implement DMA
access. For example, it is used by vfio in vfio_pin_pages().

However, this functionality breaks the memory hotplug/hotremove
assumption that pages in ZONE_MOVABLE can always be migrated.

This patch series fixes this issue by forcing new allocations during
page pinning to omit ZONE_MOVABLE, and also to migrate any existing
pages from ZONE_MOVABLE during pinning.

It uses the same logic that is currently used by CMA, and extends the
functionality to all allocations.
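
For illustration, here is a minimal sketch (not the literal series code;
condensed from the diffs in patches 4 and 7) of what the long-term
pinning path looks like once the series is applied:

  unsigned int flags = memalloc_pin_save(); /* new allocations avoid ZONE_MOVABLE */
  long rc;

  rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
                               NULL, gup_flags);
  if (rc > 0)
          /* migrate pages that already reside in an unpinnable zone */
          rc = check_and_migrate_movable_pages(mm, start, rc, pages,
                                               vmas, gup_flags);
  memalloc_pin_restore(flags);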

For more information read the discussion [1] about this problem.
[1] https://lore.kernel.org/lkml/CA+CK2bBffHBxjmb9jmSKacm0fJMinyt3Nhk8Nx6iudcQSj80_w@mail.gmail.com

Previous versions:
v1
https://lore.kernel.org/lkml/20201202052330.474592-1-pasha.tatashin@soleen.com

Pavel Tatashin (8):
  mm/gup: perform check_dax_vmas only when FS_DAX is enabled
  mm/gup: don't pin migrated cma pages in movable zone
  mm/gup: make __gup_longterm_locked common
  mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN
  mm: apply per-task gfp constraints in fast path
  mm: honor PF_MEMALLOC_PIN for all movable pages
  mm/gup: migrate pinned pages out of movable zone
  memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning

 .../admin-guide/mm/memory-hotplug.rst         |  9 ++
 include/linux/migrate.h                       |  1 +
 include/linux/mm.h                            | 11 +++
 include/linux/mmzone.h                        | 11 ++-
 include/linux/sched.h                         |  2 +-
 include/linux/sched/mm.h                      | 27 ++----
 include/trace/events/migrate.h                |  3 +-
 mm/gup.c                                      | 91 ++++++++-----------
 mm/hugetlb.c                                  |  4 +-
 mm/page_alloc.c                               | 32 +++----
 mm/vmscan.c                                   | 10 +-
 11 files changed, 101 insertions(+), 100 deletions(-)

-- 
2.25.1



* [PATCH v2 1/8] mm/gup: perform check_dax_vmas only when FS_DAX is enabled
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  2020-12-10  6:36     ` Pankaj Gupta
  2020-12-10  0:43 ` [PATCH v2 2/8] mm/gup: don't pin migrated cma pages in movable zone Pavel Tatashin
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

There is no need to call check_dax_vmas() and run through its nr_pages
loop of pinned pages if FS_DAX is not enabled.

Add a stub check_dax_vmas() function for the !FS_DAX case.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/gup.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/gup.c b/mm/gup.c
index 98eb8e6d2609..cdb8b9eeb016 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1568,6 +1568,7 @@ struct page *get_dump_page(unsigned long addr)
 #endif /* CONFIG_ELF_CORE */
 
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
+#ifdef CONFIG_FS_DAX
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 {
 	long i;
@@ -1586,6 +1587,12 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 	}
 	return false;
 }
+#else
+static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
+{
+	return false;
+}
+#endif
 
 #ifdef CONFIG_CMA
 static long check_and_migrate_cma_pages(struct mm_struct *mm,
-- 
2.25.1



* [PATCH v2 2/8] mm/gup: don't pin migrated cma pages in movable zone
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 1/8] mm/gup: perform check_dax_vmas only when FS_DAX is enabled Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common Pavel Tatashin
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

In order not to fragment CMA, pinned pages are migrated out of it.
However, they are currently migrated to ZONE_MOVABLE, which also should
not contain pinned pages.

Remove __GFP_MOVABLE, so pages are migrated to zones where pinning is
allowed.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index cdb8b9eeb016..3a76c005a3e2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1610,7 +1610,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 check_again:
-- 
2.25.1



* [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 1/8] mm/gup: perform check_dax_vmas only when FS_DAX is enabled Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 2/8] mm/gup: don't pin migrated cma pages in movable zone Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  2020-12-10  4:06   ` Ira Weiny
  2020-12-10  0:43 ` [PATCH v2 4/8] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN Pavel Tatashin
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

__gup_longterm_locked() has a CMA || FS_DAX version and a common stub
version. In preparation for prohibiting long-term pinning of pages from
the movable zone, make the CMA || FS_DAX version common and delete the
stub version.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/gup.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3a76c005a3e2..0e2de888a8b0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1567,7 +1567,6 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 
-#if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
 #ifdef CONFIG_FS_DAX
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 {
@@ -1757,18 +1756,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		kfree(vmas_tmp);
 	return rc;
 }
-#else /* !CONFIG_FS_DAX && !CONFIG_CMA */
-static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
-						  unsigned long start,
-						  unsigned long nr_pages,
-						  struct page **pages,
-						  struct vm_area_struct **vmas,
-						  unsigned int flags)
-{
-	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
-				       NULL, flags);
-}
-#endif /* CONFIG_FS_DAX || CONFIG_CMA */
 
 static bool is_valid_gup_flags(unsigned int gup_flags)
 {
-- 
2.25.1



* [PATCH v2 4/8] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
                   ` (2 preceding siblings ...)
  2020-12-10  0:43 ` [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 5/8] mm: apply per-task gfp constraints in fast path Pavel Tatashin
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

PF_MEMALLOC_NOCMA is used to guarantee that the allocator will not
return pages that might belong to the CMA region. This is currently used
for long-term gup to make sure that such pins are not done on any CMA
pages.

When PF_MEMALLOC_NOCMA was introduced, we did not realize that it
focuses too narrowly on CMA pages and that there is a larger class of
pages that needs the same treatment. The MOVABLE zone cannot contain any
long-term pins either, so it makes sense to reuse and redefine this flag
for that use case as well. Rename the flag to PF_MEMALLOC_PIN, which
defines an allocation context that can only get pages suitable for
long-term pins.

Also rename:
memalloc_nocma_save()/memalloc_nocma_restore()
to
memalloc_pin_save()/memalloc_pin_restore()
and make the new functions common.
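
A minimal usage sketch of the renamed scope API (condensed from how
__gup_longterm_locked() uses it in this series):

  unsigned int flags;

  flags = memalloc_pin_save();    /* enter pinning context */
  /* allocations here must not return pages unsuitable for long-term pins */
  memalloc_pin_restore(flags);    /* leave pinning context */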

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/sched.h    |  2 +-
 include/linux/sched/mm.h | 21 +++++----------------
 mm/gup.c                 |  4 ++--
 mm/hugetlb.c             |  4 ++--
 mm/page_alloc.c          |  4 ++--
 5 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 76cd21fa5501..5c4bd5e1cbd8 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1548,7 +1548,7 @@ extern struct pid *cad_pid;
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
 #define PF_NO_SETAFFINITY	0x04000000	/* Userland is not allowed to meddle with cpus_mask */
 #define PF_MCE_EARLY		0x08000000      /* Early kill for mce process policy */
-#define PF_MEMALLOC_NOCMA	0x10000000	/* All allocation request will have _GFP_MOVABLE cleared */
+#define PF_MEMALLOC_PIN		0x10000000	/* All allocation request will have _GFP_MOVABLE cleared */
 #define PF_FREEZER_SKIP		0x40000000	/* Freezer should not count it as freezable */
 #define PF_SUSPEND_TASK		0x80000000      /* This thread called freeze_processes() and should not be frozen */
 
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index d5ece7a9a403..a4b5da13d2c6 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -254,29 +254,18 @@ static inline void memalloc_noreclaim_restore(unsigned int flags)
 	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
 }
 
-#ifdef CONFIG_CMA
-static inline unsigned int memalloc_nocma_save(void)
+static inline unsigned int memalloc_pin_save(void)
 {
-	unsigned int flags = current->flags & PF_MEMALLOC_NOCMA;
+	unsigned int flags = current->flags & PF_MEMALLOC_PIN;
 
-	current->flags |= PF_MEMALLOC_NOCMA;
+	current->flags |= PF_MEMALLOC_PIN;
 	return flags;
 }
 
-static inline void memalloc_nocma_restore(unsigned int flags)
+static inline void memalloc_pin_restore(unsigned int flags)
 {
-	current->flags = (current->flags & ~PF_MEMALLOC_NOCMA) | flags;
+	current->flags = (current->flags & ~PF_MEMALLOC_PIN) | flags;
 }
-#else
-static inline unsigned int memalloc_nocma_save(void)
-{
-	return 0;
-}
-
-static inline void memalloc_nocma_restore(unsigned int flags)
-{
-}
-#endif
 
 #ifdef CONFIG_MEMCG
 DECLARE_PER_CPU(struct mem_cgroup *, int_active_memcg);
diff --git a/mm/gup.c b/mm/gup.c
index 0e2de888a8b0..0eb8a85fb704 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1726,7 +1726,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 			if (!vmas_tmp)
 				return -ENOMEM;
 		}
-		flags = memalloc_nocma_save();
+		flags = memalloc_pin_save();
 	}
 
 	rc = __get_user_pages_locked(mm, start, nr_pages, pages,
@@ -1749,7 +1749,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 						 vmas_tmp, gup_flags);
 out:
-		memalloc_nocma_restore(flags);
+		memalloc_pin_restore(flags);
 	}
 
 	if (vmas_tmp != vmas)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 37f15c3c24dc..e797b41998ec 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1033,10 +1033,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
 	struct page *page;
-	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);
+	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (nocma && is_migrate_cma_page(page))
+		if (pin && is_migrate_cma_page(page))
 			continue;
 
 		if (PageHWPoison(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eaa227a479e4..2dea5600f308 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3772,8 +3772,8 @@ static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
 #ifdef CONFIG_CMA
 	unsigned int pflags = current->flags;
 
-	if (!(pflags & PF_MEMALLOC_NOCMA) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (!(pflags & PF_MEMALLOC_PIN) &&
+	    gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
 
 #endif
-- 
2.25.1



* [PATCH v2 5/8] mm: apply per-task gfp constraints in fast path
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
                   ` (3 preceding siblings ...)
  2020-12-10  0:43 ` [PATCH v2 4/8] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 6/8] mm: honor PF_MEMALLOC_PIN for all movable pages Pavel Tatashin
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

The function current_gfp_context() is called after the fast path.
However, soon we will add more constraints that also limit zones based
on context. Move this call into the fast path, and apply the correct
constraints to all allocations.

Also update .reclaim_idx based on the value returned by
current_gfp_context(), because it will soon modify the allowed zones.

Note:
With this patch we do one extra current->flags load during the fast
path, but we already load current->flags there:

__alloc_pages_nodemask()
 prepare_alloc_pages()
  current_alloc_flags(gfp_mask, *alloc_flags);

Later, when we add the zone constraint logic to current_gfp_context(),
we will be able to remove the current->flags load from
current_alloc_flags() and therefore return the fast path to its current
performance level.
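
For illustration, a sketch of the scoped-constraint semantics the fast
path now sees (memalloc_noio_save()/memalloc_noio_restore() are the
existing scope API; the example is illustrative, not code from this
patch):

  unsigned int noio = memalloc_noio_save();    /* enter a GFP_NOIO scope */
  gfp_t gfp = current_gfp_context(GFP_KERNEL); /* __GFP_IO and __GFP_FS cleared */

  /* with this patch the fast path allocates with 'gfp', not the raw mask */
  memalloc_noio_restore(noio);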

Suggested-by: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
 mm/page_alloc.c | 15 ++++++++-------
 mm/vmscan.c     | 10 ++++++----
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2dea5600f308..24c99b3b12af 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4932,6 +4932,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	}
 
 	gfp_mask &= gfp_allowed_mask;
+	/*
+	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
+	 * resp. GFP_NOIO which has to be inherited for all allocation requests
+	 * from a particular context which has been marked by
+	 * memalloc_no{fs,io}_{save,restore}.
+	 */
+	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
 		return NULL;
@@ -4947,13 +4954,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	if (likely(page))
 		goto out;
 
-	/*
-	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
-	 * resp. GFP_NOIO which has to be inherited for all allocation requests
-	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
-	 */
-	alloc_mask = current_gfp_context(gfp_mask);
+	alloc_mask = gfp_mask;
 	ac.spread_dirty_pages = false;
 
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7b4e31eac2cf..f51581e33fe6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3233,11 +3233,12 @@ static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
 unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 				gfp_t gfp_mask, nodemask_t *nodemask)
 {
+	gfp_t current_gfp_mask = current_gfp_context(gfp_mask);
 	unsigned long nr_reclaimed;
 	struct scan_control sc = {
 		.nr_to_reclaim = SWAP_CLUSTER_MAX,
-		.gfp_mask = current_gfp_context(gfp_mask),
-		.reclaim_idx = gfp_zone(gfp_mask),
+		.gfp_mask = current_gfp_mask,
+		.reclaim_idx = gfp_zone(current_gfp_mask),
 		.order = order,
 		.nodemask = nodemask,
 		.priority = DEF_PRIORITY,
@@ -4157,17 +4158,18 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 {
 	/* Minimum pages needed in order to stay on node */
 	const unsigned long nr_pages = 1 << order;
+	gfp_t current_gfp_mask = current_gfp_context(gfp_mask);
 	struct task_struct *p = current;
 	unsigned int noreclaim_flag;
 	struct scan_control sc = {
 		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
-		.gfp_mask = current_gfp_context(gfp_mask),
+		.gfp_mask = current_gfp_mask,
 		.order = order,
 		.priority = NODE_RECLAIM_PRIORITY,
 		.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
 		.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
 		.may_swap = 1,
-		.reclaim_idx = gfp_zone(gfp_mask),
+		.reclaim_idx = gfp_zone(current_gfp_mask),
 	};
 
 	trace_mm_vmscan_node_reclaim_begin(pgdat->node_id, order,
-- 
2.25.1



* [PATCH v2 6/8] mm: honor PF_MEMALLOC_PIN for all movable pages
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
                   ` (4 preceding siblings ...)
  2020-12-10  0:43 ` [PATCH v2 5/8] mm: apply per-task gfp constraints in fast path Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 7/8] mm/gup: migrate pinned pages out of movable zone Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 8/8] memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning Pavel Tatashin
  7 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

PF_MEMALLOC_PIN is currently only honored for CMA pages. Extend this
flag to work for any allocation from ZONE_MOVABLE by removing
__GFP_MOVABLE from gfp_mask when the flag is set in the current context.

Add is_pinnable_page() to return true if a page may be long-term pinned.
A pinnable page is neither in ZONE_MOVABLE nor of MIGRATE_CMA type.
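
For illustration, a sketch (illustrative only, not code from this patch)
of the effect on a movable allocation request:

  unsigned int flags = memalloc_pin_save();
  gfp_t gfp = current_gfp_context(GFP_HIGHUSER_MOVABLE);

  /* gfp == GFP_HIGHUSER: __GFP_MOVABLE was cleared, so ZONE_MOVABLE is
   * no longer eligible for this allocation */
  memalloc_pin_restore(flags);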

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/mm.h       | 11 +++++++++++
 include/linux/sched/mm.h |  6 +++++-
 mm/hugetlb.c             |  2 +-
 mm/page_alloc.c          | 19 ++++++++-----------
 4 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index db6ae4d3fb4e..1105e4aa9472 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1100,6 +1100,17 @@ static inline bool is_zone_device_page(const struct page *page)
 }
 #endif
 
+static inline bool is_zone_movable_page(const struct page *page)
+{
+	return page_zonenum(page) == ZONE_MOVABLE;
+}
+
+/* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */
+static inline bool is_pinnable_page(struct page *page)
+{
+	return !is_zone_movable_page(page) && !is_migrate_cma_page(page);
+}
+
 #ifdef CONFIG_DEV_PAGEMAP_OPS
 void free_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index a4b5da13d2c6..1c7ba35c405a 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -150,12 +150,13 @@ static inline bool in_vfork(struct task_struct *tsk)
  * Applies per-task gfp context to the given allocation flags.
  * PF_MEMALLOC_NOIO implies GFP_NOIO
  * PF_MEMALLOC_NOFS implies GFP_NOFS
+ * PF_MEMALLOC_PIN  implies !GFP_MOVABLE
  */
 static inline gfp_t current_gfp_context(gfp_t flags)
 {
 	unsigned int pflags = READ_ONCE(current->flags);
 
-	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS))) {
+	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS | PF_MEMALLOC_PIN))) {
 		/*
 		 * NOIO implies both NOIO and NOFS and it is a weaker context
 		 * so always make sure it makes precedence
@@ -164,6 +165,9 @@ static inline gfp_t current_gfp_context(gfp_t flags)
 			flags &= ~(__GFP_IO | __GFP_FS);
 		else if (pflags & PF_MEMALLOC_NOFS)
 			flags &= ~__GFP_FS;
+
+		if (pflags & PF_MEMALLOC_PIN)
+			flags &= ~__GFP_MOVABLE;
 	}
 	return flags;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e797b41998ec..79f2643843f3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1036,7 +1036,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (pin && is_migrate_cma_page(page))
+		if (pin && !is_pinnable_page(page))
 			continue;
 
 		if (PageHWPoison(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 24c99b3b12af..c514ad058335 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3766,16 +3766,12 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
 	return alloc_flags;
 }
 
-static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
-					unsigned int alloc_flags)
+static inline unsigned int cma_alloc_flags(gfp_t gfp_mask,
+					   unsigned int alloc_flags)
 {
 #ifdef CONFIG_CMA
-	unsigned int pflags = current->flags;
-
-	if (!(pflags & PF_MEMALLOC_PIN) &&
-	    gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
-
 #endif
 	return alloc_flags;
 }
@@ -4423,7 +4419,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
 		alloc_flags |= ALLOC_HARDER;
 
-	alloc_flags = current_alloc_flags(gfp_mask, alloc_flags);
+	alloc_flags = cma_alloc_flags(gfp_mask, alloc_flags);
 
 	return alloc_flags;
 }
@@ -4725,7 +4721,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
 	if (reserve_flags)
-		alloc_flags = current_alloc_flags(gfp_mask, reserve_flags);
+		alloc_flags = cma_alloc_flags(gfp_mask, reserve_flags);
 
 	/*
 	 * Reset the nodemask and zonelist iterators if memory policies can be
@@ -4894,7 +4890,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 	if (should_fail_alloc_page(gfp_mask, order))
 		return false;
 
-	*alloc_flags = current_alloc_flags(gfp_mask, *alloc_flags);
+	*alloc_flags = cma_alloc_flags(gfp_mask, *alloc_flags);
 
 	/* Dirty zone balancing only done in the fast path */
 	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
@@ -4936,7 +4932,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
 	 * resp. GFP_NOIO which has to be inherited for all allocation requests
 	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
+	 * memalloc_no{fs,io}_{save,restore}. And PF_MEMALLOC_PIN which ensures
+	 * movable zones are not used during allocation.
 	 */
 	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
-- 
2.25.1



* [PATCH v2 7/8] mm/gup: migrate pinned pages out of movable zone
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
                   ` (5 preceding siblings ...)
  2020-12-10  0:43 ` [PATCH v2 6/8] mm: honor PF_MEMALLOC_PIN for all movable pages Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  2020-12-10  0:43 ` [PATCH v2 8/8] memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning Pavel Tatashin
  7 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

We should not pin pages in ZONE_MOVABLE. Currently, we avoid pinning
only movable CMA pages. Generalize the function that migrates CMA pages
so that it migrates all movable pages, and use is_pinnable_page() to
check which pages need to be migrated.
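
A condensed sketch of the resulting migration step (the full change is
in the diff below; the migration target mask without __GFP_MOVABLE comes
from patch 2):

  LIST_HEAD(movable_page_list);
  struct migration_target_control mtc = {
          .nid = NUMA_NO_NODE,
          .gfp_mask = GFP_USER | __GFP_NOWARN,  /* no __GFP_MOVABLE */
  };

  if (!is_pinnable_page(head))  /* ZONE_MOVABLE or MIGRATE_CMA page */
          list_add_tail(&head->lru, &movable_page_list);

  if (!list_empty(&movable_page_list))
          migrate_pages(&movable_page_list, alloc_migration_target, NULL,
                        (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE);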

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/migrate.h        |  1 +
 include/linux/mmzone.h         | 11 ++++--
 include/trace/events/migrate.h |  3 +-
 mm/gup.c                       | 65 ++++++++++++++--------------------
 4 files changed, 37 insertions(+), 43 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 0f8d1583fa8e..00bab23d1ee5 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_LONGTERM_PIN,
 	MR_TYPES
 };
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fb3bf696c05e..87a7321b4252 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -405,9 +405,14 @@ enum zone_type {
 	 * likely to succeed, and to locally limit unmovable allocations - e.g.,
 	 * to increase the number of THP/huge pages. Notable special cases are:
 	 *
-	 * 1. Pinned pages: (long-term) pinning of movable pages might
-	 *    essentially turn such pages unmovable. Memory offlining might
-	 *    retry a long time.
+	 * 1. Pinned pages: (long-term) pinning of movable pages is avoided
+	 *    when pages are pinned and faulted, but it is still possible that
+	 *    address space already has pages in ZONE_MOVABLE at the time when
+	 *    pages are pinned (i.e. the user has touched that memory before
+	 *    pinning). In such case, we try to migrate them to a different
+	 *    zone, but if migration fails the pages can still end up pinned in
+	 *    ZONE_MOVABLE. In that case, memory offlining might retry for a
+	 *    long time and will only succeed once the user application unpins them.
 	 * 2. memblock allocations: kernelcore/movablecore setups might create
 	 *    situations where ZONE_MOVABLE contains unmovable allocations
 	 *    after boot. Memory offlining and allocations fail early.
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..363b54ce104c 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -20,7 +20,8 @@
 	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
 	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
 	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_LONGTERM_PIN,	"longterm_pin")
 
 /*
  * First define the enums in the above macros to be exported to userspace
diff --git a/mm/gup.c b/mm/gup.c
index 0eb8a85fb704..e575237d4c67 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -88,11 +88,12 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
 		int orig_refs = refs;
 
 		/*
-		 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
-		 * path, so fail and let the caller fall back to the slow path.
+		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
+		 * right zone, so fail and let the caller fall back to the slow
+		 * path.
 		 */
-		if (unlikely(flags & FOLL_LONGTERM) &&
-				is_migrate_cma_page(page))
+		if (unlikely((flags & FOLL_LONGTERM) &&
+			     !is_pinnable_page(page)))
 			return NULL;
 
 		/*
@@ -1593,19 +1594,18 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }
 #endif
 
-#ifdef CONFIG_CMA
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	unsigned long i;
 	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
-	LIST_HEAD(cma_page_list);
+	LIST_HEAD(movable_page_list);
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
@@ -1623,13 +1623,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		 */
 		step = compound_nr(head) - (pages[i] - head);
 		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible.
+		 * If we get a movable page, since we are going to be pinning
+		 * these entries, try to move them out if possible.
 		 */
-		if (is_migrate_cma_page(head)) {
+		if (!is_pinnable_page(head)) {
 			if (PageHuge(head))
-				isolate_huge_page(head, &cma_page_list);
+				isolate_huge_page(head, &movable_page_list);
 			else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
@@ -1637,7 +1636,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 				}
 
 				if (!isolate_lru_page(head)) {
-					list_add_tail(&head->lru, &cma_page_list);
+					list_add_tail(&head->lru, &movable_page_list);
 					mod_node_page_state(page_pgdat(head),
 							    NR_ISOLATED_ANON +
 							    page_is_file_lru(head),
@@ -1649,7 +1648,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		i += step;
 	}
 
-	if (!list_empty(&cma_page_list)) {
+	if (!list_empty(&movable_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
 		 */
@@ -1659,7 +1658,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			for (i = 0; i < nr_pages; i++)
 				put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+		if (migrate_pages(&movable_page_list, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
@@ -1667,17 +1666,16 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			 */
 			migrate_allow = false;
 
-			if (!list_empty(&cma_page_list))
-				putback_movable_pages(&cma_page_list);
+			if (!list_empty(&movable_page_list))
+				putback_movable_pages(&movable_page_list);
 		}
 		/*
 		 * We did migrate all the pages, Try to get the page references
-		 * again migrating any new CMA pages which we failed to isolate
-		 * earlier.
+		 * again migrating any pages which we failed to isolate earlier.
 		 */
 		ret = __get_user_pages_locked(mm, start, nr_pages,
-						   pages, vmas, NULL,
-						   gup_flags);
+					      pages, vmas, NULL,
+					      gup_flags);
 
 		if ((ret > 0) && migrate_allow) {
 			nr_pages = ret;
@@ -1688,17 +1686,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 
 	return ret;
 }
-#else
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
-{
-	return nr_pages;
-}
-#endif /* CONFIG_CMA */
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -1746,8 +1733,8 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 			goto out;
 		}
 
-		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
-						 vmas_tmp, gup_flags);
+		rc = check_and_migrate_movable_pages(mm, start, rc, pages,
+						     vmas_tmp, gup_flags);
 out:
 		memalloc_pin_restore(flags);
 	}
-- 
2.25.1



* [PATCH v2 8/8] memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning
  2020-12-10  0:43 [PATCH v2 0/8] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
                   ` (6 preceding siblings ...)
  2020-12-10  0:43 ` [PATCH v2 7/8] mm/gup: migrate pinned pages out of movable zone Pavel Tatashin
@ 2020-12-10  0:43 ` Pavel Tatashin
  7 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10  0:43 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, akpm, vbabka, mhocko,
	david, osalvador, dan.j.williams, sashal, tyhicks,
	iamjoonsoo.kim, mike.kravetz, rostedt, mingo, jgg, peterz,
	mgorman, willy, rientjes, jhubbard, linux-doc

Document the special handling of page pinning when ZONE_MOVABLE is
present.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Suggested-by: David Hildenbrand <david@redhat.com>
---
 Documentation/admin-guide/mm/memory-hotplug.rst | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
index 5c4432c96c4b..c6618f99f765 100644
--- a/Documentation/admin-guide/mm/memory-hotplug.rst
+++ b/Documentation/admin-guide/mm/memory-hotplug.rst
@@ -357,6 +357,15 @@ creates ZONE_MOVABLE as following.
    Unfortunately, there is no information to show which memory block belongs
    to ZONE_MOVABLE. This is TBD.
 
+.. note::
+   Techniques that rely on long-term pinnings of memory (especially, RDMA and
+   vfio) are fundamentally problematic with ZONE_MOVABLE and, therefore, memory
+   hot remove. Pinned pages cannot reside on ZONE_MOVABLE if we want to
+   guarantee that memory can still get hot removed. Be aware that pinning can
+   fail even if there is plenty of free memory in ZONE_MOVABLE. In addition,
+   using ZONE_MOVABLE might make page pinning more expensive, because pages
+   have to be migrated off that zone first.
+
 .. _memory_hotplug_how_to_offline_memory:
 
 How to offline memory
-- 
2.25.1



* Re: [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common
  2020-12-10  0:43 ` [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common Pavel Tatashin
@ 2020-12-10  4:06   ` Ira Weiny
  2020-12-10 13:30       ` Pavel Tatashin
  0 siblings, 1 reply; 22+ messages in thread
From: Ira Weiny @ 2020-12-10  4:06 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: linux-kernel, linux-mm, akpm, vbabka, mhocko, david, osalvador,
	dan.j.williams, sashal, tyhicks, iamjoonsoo.kim, mike.kravetz,
	rostedt, mingo, jgg, peterz, mgorman, willy, rientjes, jhubbard,
	linux-doc

On Wed, Dec 09, 2020 at 07:43:30PM -0500, Pavel Tatashin wrote:
> __gup_longterm_locked() has CMA || FS_DAX version and a common stub
> version. In the preparation of prohibiting longterm pinning of pages from
> movable zone make the CMA || FS_DAX version common, and delete the stub
> version.

I thought Jason sent a patch which got rid of this as well?

Ira

> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  mm/gup.c | 13 -------------
>  1 file changed, 13 deletions(-)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 3a76c005a3e2..0e2de888a8b0 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1567,7 +1567,6 @@ struct page *get_dump_page(unsigned long addr)
>  }
>  #endif /* CONFIG_ELF_CORE */
>  
> -#if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
>  #ifdef CONFIG_FS_DAX
>  static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
>  {
> @@ -1757,18 +1756,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
>  		kfree(vmas_tmp);
>  	return rc;
>  }
> -#else /* !CONFIG_FS_DAX && !CONFIG_CMA */
> -static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
> -						  unsigned long start,
> -						  unsigned long nr_pages,
> -						  struct page **pages,
> -						  struct vm_area_struct **vmas,
> -						  unsigned int flags)
> -{
> -	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
> -				       NULL, flags);
> -}
> -#endif /* CONFIG_FS_DAX || CONFIG_CMA */
>  
>  static bool is_valid_gup_flags(unsigned int gup_flags)
>  {
> -- 
> 2.25.1
> 
> 


* Re: [PATCH v2 1/8] mm/gup: perform check_dax_vmas only when FS_DAX is enabled
  2020-12-10  0:43 ` [PATCH v2 1/8] mm/gup: perform check_dax_vmas only when FS_DAX is enabled Pavel Tatashin
@ 2020-12-10  6:36     ` Pankaj Gupta
  0 siblings, 0 replies; 22+ messages in thread
From: Pankaj Gupta @ 2020-12-10  6:36 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: LKML, Linux MM, Andrew Morton, Vlastimil Babka, Michal Hocko,
	David Hildenbrand, Oscar Salvador, Dan Williams, sashal, tyhicks,
	Joonsoo Kim, mike.kravetz, rostedt, Ingo Molnar,
	Dave Jiang <dave.jiang@intel.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>,
	Johannes Thumshirn <jthumshirn@suse.de>,
	Logan Gunthorpe, Peter Zijlstra, Mel Gorman, Matthew Wilcox,
	David Rientjes, John Hubbard, linux-doc

> There is no need to check_dax_vmas() and run through the npage loop of
> pinned pages if FS_DAX is not enabled.
>
> Add a stub check_dax_vmas() function for no-FS_DAX case.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  mm/gup.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 98eb8e6d2609..cdb8b9eeb016 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1568,6 +1568,7 @@ struct page *get_dump_page(unsigned long addr)
>  #endif /* CONFIG_ELF_CORE */
>
>  #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
> +#ifdef CONFIG_FS_DAX
>  static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
>  {
>         long i;
> @@ -1586,6 +1587,12 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
>         }
>         return false;
>  }
> +#else
> +static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
> +{
> +       return false;
> +}
> +#endif
>
>  #ifdef CONFIG_CMA
>  static long check_and_migrate_cma_pages(struct mm_struct *mm,

Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>

* Re: [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common
  2020-12-10  4:06   ` Ira Weiny
@ 2020-12-10 13:30       ` Pavel Tatashin
  0 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10 13:30 UTC (permalink / raw)
  To: Ira Weiny
  Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, Michal Hocko,
	David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin,
	Tyler Hicks, Joonsoo Kim, mike.kravetz, Steven Rostedt,
	Ingo Molnar, Jason Gunthorpe, Peter Zijlstra, Mel Gorman,
	Matthew Wilcox, David Rientjes, John Hubbard,
	Linux Doc Mailing List

On Wed, Dec 9, 2020 at 11:06 PM Ira Weiny <ira.weiny@intel.com> wrote:
>
> On Wed, Dec 09, 2020 at 07:43:30PM -0500, Pavel Tatashin wrote:
> > __gup_longterm_locked() has CMA || FS_DAX version and a common stub
> > version. In the preparation of prohibiting longterm pinning of pages from
> > movable zone make the CMA || FS_DAX version common, and delete the stub
> > version.
>
> I thought Jason sent a patch which got rid of this as well?

Yes, this series applies on the mainline so it can be easily tested.
The next version, I will sync with linux-next.

Thank you,
Pasha

>
> Ira
>
> >
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> > ---
> >  mm/gup.c | 13 -------------
> >  1 file changed, 13 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 3a76c005a3e2..0e2de888a8b0 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1567,7 +1567,6 @@ struct page *get_dump_page(unsigned long addr)
> >  }
> >  #endif /* CONFIG_ELF_CORE */
> >
> > -#if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
> >  #ifdef CONFIG_FS_DAX
> >  static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
> >  {
> > @@ -1757,18 +1756,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
> >               kfree(vmas_tmp);
> >       return rc;
> >  }
> > -#else /* !CONFIG_FS_DAX && !CONFIG_CMA */
> > -static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
> > -                                               unsigned long start,
> > -                                               unsigned long nr_pages,
> > -                                               struct page **pages,
> > -                                               struct vm_area_struct **vmas,
> > -                                               unsigned int flags)
> > -{
> > -     return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
> > -                                    NULL, flags);
> > -}
> > -#endif /* CONFIG_FS_DAX || CONFIG_CMA */
> >
> >  static bool is_valid_gup_flags(unsigned int gup_flags)
> >  {
> > --
> > 2.25.1
> >
> >

* Re: [PATCH v2 1/8] mm/gup: perform check_dax_vmas only when FS_DAX is enabled
  2020-12-10  6:36     ` Pankaj Gupta
@ 2020-12-10 13:30       ` Pavel Tatashin
  -1 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10 13:30 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: LKML, Linux MM, Andrew Morton, Vlastimil Babka, Michal Hocko,
	David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin,
	Tyler Hicks, Joonsoo Kim, mike.kravetz, Steven Rostedt,
	Ingo Molnar, Dave Jiang <dave.jiang@intel.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>,
	Johannes Thumshirn <jthumshirn@suse.de>,
	Logan Gunthorpe, Peter Zijlstra, Mel Gorman, Matthew Wilcox,
	David Rientjes, John Hubbard, Linux Doc Mailing List

> Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>

Thank you.

* Re: [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common
  2020-12-10 13:30       ` Pavel Tatashin
  (?)
@ 2020-12-10 17:44       ` Ira Weiny
  2020-12-10 18:57           ` Pavel Tatashin
  -1 siblings, 1 reply; 22+ messages in thread
From: Ira Weiny @ 2020-12-10 17:44 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, Michal Hocko,
	David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin,
	Tyler Hicks, Joonsoo Kim, mike.kravetz, Steven Rostedt,
	Ingo Molnar, Jason Gunthorpe, Peter Zijlstra, Mel Gorman,
	Matthew Wilcox, David Rientjes, John Hubbard,
	Linux Doc Mailing List

On Thu, Dec 10, 2020 at 08:30:03AM -0500, Pavel Tatashin wrote:
> On Wed, Dec 9, 2020 at 11:06 PM Ira Weiny <ira.weiny@intel.com> wrote:
> >
> > On Wed, Dec 09, 2020 at 07:43:30PM -0500, Pavel Tatashin wrote:
> > > __gup_longterm_locked() has CMA || FS_DAX version and a common stub
> > > version. In the preparation of prohibiting longterm pinning of pages from
> > > movable zone make the CMA || FS_DAX version common, and delete the stub
> > > version.
> >
> > I thought Jason sent a patch which got rid of this as well?
> 
> Yes, this series applies on the mainline so it can be easily tested.
> The next version, I will sync with linux-next.

Oh yea we wanted this to be back-portable correct?

If so, LGTM

Reviewed-by: Ira Weiny <ira.weiny@intel.com>

Sorry for not keeping up,
Ira


* Re: [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common
  2020-12-10 17:44       ` Ira Weiny
@ 2020-12-10 18:57           ` Pavel Tatashin
  0 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10 18:57 UTC (permalink / raw)
  To: Ira Weiny
  Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, Michal Hocko,
	David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin,
	Tyler Hicks, Joonsoo Kim, mike.kravetz, Steven Rostedt,
	Ingo Molnar, Jason Gunthorpe, Peter Zijlstra, Mel Gorman,
	Matthew Wilcox, David Rientjes, John Hubbard,
	Linux Doc Mailing List

On Thu, Dec 10, 2020 at 12:44 PM Ira Weiny <ira.weiny@intel.com> wrote:
>
> On Thu, Dec 10, 2020 at 08:30:03AM -0500, Pavel Tatashin wrote:
> > On Wed, Dec 9, 2020 at 11:06 PM Ira Weiny <ira.weiny@intel.com> wrote:
> > >
> > > On Wed, Dec 09, 2020 at 07:43:30PM -0500, Pavel Tatashin wrote:
> > > > __gup_longterm_locked() has CMA || FS_DAX version and a common stub
> > > > version. In the preparation of prohibiting longterm pinning of pages from
> > > > movable zone make the CMA || FS_DAX version common, and delete the stub
> > > > version.
> > >
> > > I thought Jason sent a patch which got rid of this as well?
> >
> > Yes, this series applies on the mainline so it can be easily tested.
> > The next version, I will sync with linux-next.
>
> Oh yea we wanted this to be back-portable correct?
>
> If so, LGTM
>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>

Thank you. Yes, this series should be backported, but I am not sure
what to do about Jason's patch. Perhaps, in the next version I will
send out this series together with his patch.

Pasha

>
> Sorry for not keeping up,
> Ira

* Re: [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common
  2020-12-10 18:57           ` Pavel Tatashin
  (?)
@ 2020-12-10 19:53           ` Jason Gunthorpe
  2020-12-10 19:54               ` Pavel Tatashin
  -1 siblings, 1 reply; 22+ messages in thread
From: Jason Gunthorpe @ 2020-12-10 19:53 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: Ira Weiny, LKML, linux-mm, Andrew Morton, Vlastimil Babka,
	Michal Hocko, David Hildenbrand, Oscar Salvador, Dan Williams,
	Sasha Levin, Tyler Hicks, Joonsoo Kim, mike.kravetz,
	Steven Rostedt, Ingo Molnar, Peter Zijlstra, Mel Gorman,
	Matthew Wilcox, David Rientjes, John Hubbard,
	Linux Doc Mailing List

On Thu, Dec 10, 2020 at 01:57:20PM -0500, Pavel Tatashin wrote:

> Thank you. Yes, this series should be backported, but I am not sure
> what to do about Jason's patch. Perhaps, in the next version I will
> send out this series together with his patch.

You need to send out patches that can be applied on top of linux-next,
at this point the window to go to rc kernels is done.

When you eventually want this back ported to stables then suggest they
take my patch as a pre-requisite.

Jason


* Re: [PATCH v2 3/8] mm/gup: make __gup_longterm_locked common
  2020-12-10 19:53           ` Jason Gunthorpe
@ 2020-12-10 19:54               ` Pavel Tatashin
  0 siblings, 0 replies; 22+ messages in thread
From: Pavel Tatashin @ 2020-12-10 19:54 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ira Weiny, LKML, linux-mm, Andrew Morton, Vlastimil Babka,
	Michal Hocko, David Hildenbrand, Oscar Salvador, Dan Williams,
	Sasha Levin, Tyler Hicks, Joonsoo Kim, mike.kravetz,
	Steven Rostedt, Ingo Molnar, Peter Zijlstra, Mel Gorman,
	Matthew Wilcox, David Rientjes, John Hubbard,
	Linux Doc Mailing List

On Thu, Dec 10, 2020 at 2:53 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Thu, Dec 10, 2020 at 01:57:20PM -0500, Pavel Tatashin wrote:
>
> > Thank you. Yes, this series should be backported, but I am not sure
> > what to do about Jason's patch. Perhaps, in the next version I will
> > send out this series together with his patch.
>
> You need to send out patches that can be applied on top of linux-next,
> at this point the window to go to rc kernels is done.
>
> When you eventually want this back ported to stables then suggest they
> take my patch as a pre-requisite.

Sounds good.

Thanks,
Pasha

>
> Jason
