* [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
@ 2021-02-16 17:03 Minchan Kim
From: Minchan Kim @ 2021-02-16 17:03 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, cgoldswo, linux-fsdevel, willy, mhocko, david,
	vbabka, viro, joaodias, Minchan Kim

LRU pagevecs hold a refcount on their pages until the pagevec is
drained. That can prevent migration, because the page's refcount is
then greater than what the migration logic expects. To mitigate the
issue, callers of migrate_pages drain the LRU pagevecs via migrate_prep
or lru_add_drain_all before calling migrate_pages.
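
As a rough illustration (this is not the exact helper in mm/migrate.c,
and the expected_count bookkeeping is simplified here), the failure
mode is the refcount comparison migration performs before it will move
a page:

    /*
     * Illustrative only: migration compares the page's refcount against
     * the references it expects to account for (mapping, isolation, ...).
     * A page still sitting in a per-cpu LRU pagevec carries one extra
     * reference, so the check fails and the page cannot be migrated yet.
     */
    if (page_count(page) != expected_count)
        return -EAGAIN;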

However, that is not enough: pages that enter a pagevec after the
draining call can still sit in the pagevec and keep blocking page
migration. Since some callers of migrate_pages have retry logic with
LRU draining, such a page may migrate on the next try, but this is
still fragile in that it does not close the fundamental race between
pages entering the pagevecs and migration, so the migration failure
can ultimately cause contiguous memory allocation to fail.

The other concern is that migration keeps retrying until the pages in
the pagevecs are drained. During that time, migration repeatedly
allocates a target page, unmaps the source page from the page tables
of processes, learns of the failure, restores the original page into
the page tables and frees the target page, which is also not good.

To solve the issue, this patch tries to close the race rather than
relying on retries and luck. The idea is to introduce a
migration-in-progress tracking count, with an IPI barrier after the
count is updated to keep the read-side overhead minimal.

migrate_prep increases migrate_pending_count under the lock and uses
an IPI call to guarantee that every CPU sees the up-to-date value of
migrate_pending_count, then drains the pagevecs via lru_add_drain_all.
From that point on, no LRU page can reach a pagevec, since the LRU
handling functions check migrate_pending() and skip batching while
migration is in progress (IOW, the pagevecs stay empty until migration
is done). Every caller of migrate_prep should call migrate_finish as
its pair to decrease the migration tracking count.
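
A minimal sketch of the intended pairing, using the interface added by
this patch (the migrate_pages() arguments below are illustrative
placeholders, not taken from any particular caller):

    migrate_prep();     /* bump migrate_pending_count, sync CPUs, drain pagevecs */

    /* isolate and migrate pages; LRU additions now bypass the per-cpu pagevecs */
    err = migrate_pages(&pagelist, alloc_migration_target, NULL,
                        (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE);

    migrate_finish();   /* drop the count so pagevec batching resumes */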

With migrate_pending(), places that are prone to causing migration
failures can detect that migration is in progress and adjust their
behavior to help the migration (e.g., bh_lru_install [1]) in the
future; see the sketch below.

[1] https://lore.kernel.org/linux-mm/c083b0ab6e410e33ca880d639f90ef4f6f3b33ff.1613020616.git.cgoldswo@codeaurora.org/
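
As a hypothetical sketch in the spirit of [1] (not the exact code of
that patch), such a helper could consult migrate_pending() before
taking a long-lived extra reference:

    static void bh_lru_install(struct buffer_head *bh)
    {
        ...
        /*
         * Sketch: while migration is pending, skip caching the
         * buffer_head in the per-cpu BH LRU so that no extra
         * reference pins the underlying page.
         */
        if (migrate_pending())
            return;
        ...
    }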

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/migrate.h |  3 +++
 mm/mempolicy.c          |  6 +++++
 mm/migrate.c            | 55 ++++++++++++++++++++++++++++++++++++++---
 mm/page_alloc.c         |  3 +++
 mm/swap.c               | 24 +++++++++++++-----
 5 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3a389633b68f..047d5358fe0d 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -46,6 +46,8 @@ extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
 extern void migrate_prep(void);
+extern void migrate_finish(void);
+extern bool migrate_pending(void);
 extern void migrate_prep_local(void);
 extern void migrate_page_states(struct page *newpage, struct page *page);
 extern void migrate_page_copy(struct page *newpage, struct page *page);
@@ -67,6 +69,7 @@ static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return -EBUSY; }
 
 static inline int migrate_prep(void) { return -ENOSYS; }
+static inline void migrate_finish(void) {}
 static inline int migrate_prep_local(void) { return -ENOSYS; }
 
 static inline void migrate_page_states(struct page *newpage, struct page *page)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 6961238c7ef5..46d9986c7bf0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1208,6 +1208,8 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 			break;
 	}
 	mmap_read_unlock(mm);
+	migrate_finish();
+
 	if (err < 0)
 		return err;
 	return busy;
@@ -1371,6 +1373,10 @@ static long do_mbind(unsigned long start, unsigned long len,
 	mmap_write_unlock(mm);
 mpol_out:
 	mpol_put(new);
+
+	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+		migrate_finish();
+
 	return err;
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index a69da8aaeccd..d70e113eee04 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,6 +57,22 @@
 
 #include "internal.h"
 
+static DEFINE_SPINLOCK(migrate_pending_lock);
+static unsigned long migrate_pending_count;
+static DEFINE_PER_CPU(struct work_struct, migrate_pending_work);
+
+static void read_migrate_pending(struct work_struct *work)
+{
+	/* TODO : not sure it's needed */
+	unsigned long dummy = __READ_ONCE(migrate_pending_count);
+	(void)dummy;
+}
+
+bool migrate_pending(void)
+{
+	return migrate_pending_count;
+}
+
 /*
  * migrate_prep() needs to be called before we start compiling a list of pages
  * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
@@ -64,11 +80,27 @@
  */
 void migrate_prep(void)
 {
+	unsigned int cpu;
+
+	spin_lock(&migrate_pending_lock);
+	migrate_pending_count++;
+	spin_unlock(&migrate_pending_lock);
+
+	for_each_online_cpu(cpu) {
+		struct work_struct *work = &per_cpu(migrate_pending_work, cpu);
+
+		INIT_WORK(work, read_migrate_pending);
+		queue_work_on(cpu, mm_percpu_wq, work);
+	}
+
+	for_each_online_cpu(cpu)
+		flush_work(&per_cpu(migrate_pending_work, cpu));
+	/*
+	 * From now on, every online cpu will see the up-to-date
+	 * migrate_pending_count.
+	 */
 	/*
 	 * Clear the LRU lists so pages can be isolated.
-	 * Note that pages may be moved off the LRU after we have
-	 * drained them. Those pages will fail to migrate like other
-	 * pages that may be busy.
 	 */
 	lru_add_drain_all();
 }
@@ -79,6 +111,22 @@ void migrate_prep_local(void)
 	lru_add_drain();
 }
 
+void migrate_finish(void)
+{
+	int cpu;
+
+	spin_lock(&migrate_pending_lock);
+	migrate_pending_count--;
+	spin_unlock(&migrate_pending_lock);
+
+	for_each_online_cpu(cpu) {
+		struct work_struct *work = &per_cpu(migrate_pending_work, cpu);
+
+		INIT_WORK(work, read_migrate_pending);
+		queue_work_on(cpu, mm_percpu_wq, work);
+	}
+}
+
 int isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
 	struct address_space *mapping;
@@ -1837,6 +1885,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 	if (err >= 0)
 		err = err1;
 out:
+	migrate_finish();
 	return err;
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6446778cbc6b..e4cb959f64dc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8493,6 +8493,9 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 		ret = migrate_pages(&cc->migratepages, alloc_migration_target,
 				NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
 	}
+
+	migrate_finish();
+
 	if (ret < 0) {
 		putback_movable_pages(&cc->migratepages);
 		return ret;
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..e42c4b4bf2b3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -36,6 +36,7 @@
 #include <linux/hugetlb.h>
 #include <linux/page_idle.h>
 #include <linux/local_lock.h>
+#include <linux/migrate.h>
 
 #include "internal.h"
 
@@ -235,6 +236,17 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 	}
 }
 
+/* return true if pagevec needs flush */
+static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
+{
+	bool ret = false;
+
+	if (!pagevec_add(pvec, page) || PageCompound(page) || migrate_pending())
+		ret = true;
+
+	return ret;
+}
+
 /*
  * Writeback is about to end against a page which has been marked for immediate
  * reclaim.  If it still appears to be reclaimable, move it to the tail of the
@@ -252,7 +264,7 @@ void rotate_reclaimable_page(struct page *page)
 		get_page(page);
 		local_lock_irqsave(&lru_rotate.lock, flags);
 		pvec = this_cpu_ptr(&lru_rotate.pvec);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
 		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
@@ -343,7 +355,7 @@ static void activate_page(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.activate_page);
 		get_page(page);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, __activate_page);
 		local_unlock(&lru_pvecs.lock);
 	}
@@ -458,7 +470,7 @@ void lru_cache_add(struct page *page)
 	get_page(page);
 	local_lock(&lru_pvecs.lock);
 	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
-	if (!pagevec_add(pvec, page) || PageCompound(page))
+	if (pagevec_add_and_need_flush(pvec, page))
 		__pagevec_lru_add(pvec);
 	local_unlock(&lru_pvecs.lock);
 }
@@ -654,7 +666,7 @@ void deactivate_file_page(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file);
 
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
@@ -676,7 +688,7 @@ void deactivate_page(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate);
 		get_page(page);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
@@ -698,7 +710,7 @@ void mark_page_lazyfree(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree);
 		get_page(page);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
-- 
2.30.0.478.g8a0d178c01-goog


