* [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
@ 2021-02-16 17:03 Minchan Kim
  2021-02-16 17:03 ` [RFC 2/2] mm: fs: Invalidate BH LRU during page migration Minchan Kim
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Minchan Kim @ 2021-02-16 17:03 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, cgoldswo, linux-fsdevel, willy, mhocko, david,
	vbabka, viro, joaodias, Minchan Kim

An LRU pagevec holds a refcount on its pages until the pagevec is
drained. That can prevent migration because the refcount of such a page
is greater than what the migration logic expects. To mitigate the issue,
callers of migrate_pages drain the LRU pagevecs via migrate_prep or
lru_add_drain_all before calling migrate_pages.

However, that is not enough: pages that enter a pagevec after the
draining call can still sit there and keep preventing page migration.
Since some callers of migrate_pages have retry logic with LRU draining,
such a page would migrate on the next attempt, but this is still fragile
because it does not close the fundamental race between pages entering
the pagevec and migration, so the migration failure can ultimately cause
a contiguous memory allocation failure.

The other concern is that migration keeps retrying until the pages in
the pagevec are drained. During that time, migration repeatedly
allocates a target page, unmaps the source page from the page tables of
processes, learns of the failure, restores the original page to the page
tables and frees the target page, which is also not good.

To solve the issue, this patch tries to close the race rather than
relying on retries and luck. The idea is to introduce a
migration-in-progress tracking count, with an IPI barrier after the
count is updated atomically, to minimize read-side overhead.

migrate_prep increases migrate_pending_count under the lock and issues
an IPI call to guarantee that every CPU sees the up-to-date value of
migrate_pending_count. Then it drains the pagevecs via
lru_add_drain_all. From that point on, no LRU page can reach a pagevec,
since the LRU handling functions skip the batching while migration is in
progress, as indicated by migrate_pending (IOW, the pagevecs stay empty
until migration is done). Every caller of migrate_prep should call the
paired migrate_finish to decrease the migration tracking count.

With migrate_pending, places that are prone to causing migration
failures can detect a migration in progress and adjust their behaviour
to help the migration (e.g., bh_lru_install [1]) in the future.

[1] https://lore.kernel.org/linux-mm/c083b0ab6e410e33ca880d639f90ef4f6f3b33ff.1613020616.git.cgoldswo@codeaurora.org/
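
For illustration, a migrate_pages caller is expected to bracket the call
roughly like this (untested sketch; the page list, the target allocator
and the mtc argument are placeholders borrowed from existing callers):

	migrate_prep();		/* disable LRU batching and drain pagevecs */
	ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE);
	migrate_finish();	/* drop the count, re-enable LRU batching */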

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/migrate.h |  3 +++
 mm/mempolicy.c          |  6 +++++
 mm/migrate.c            | 55 ++++++++++++++++++++++++++++++++++++++---
 mm/page_alloc.c         |  3 +++
 mm/swap.c               | 24 +++++++++++++-----
 5 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3a389633b68f..047d5358fe0d 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -46,6 +46,8 @@ extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
 extern void migrate_prep(void);
+extern void migrate_finish(void);
+extern bool migrate_pending(void);
 extern void migrate_prep_local(void);
 extern void migrate_page_states(struct page *newpage, struct page *page);
 extern void migrate_page_copy(struct page *newpage, struct page *page);
@@ -67,6 +69,7 @@ static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return -EBUSY; }
 
 static inline int migrate_prep(void) { return -ENOSYS; }
+static inline void migrate_finish(void) {}
 static inline int migrate_prep_local(void) { return -ENOSYS; }
 
 static inline void migrate_page_states(struct page *newpage, struct page *page)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 6961238c7ef5..46d9986c7bf0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1208,6 +1208,8 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 			break;
 	}
 	mmap_read_unlock(mm);
+	migrate_finish();
+
 	if (err < 0)
 		return err;
 	return busy;
@@ -1371,6 +1373,10 @@ static long do_mbind(unsigned long start, unsigned long len,
 	mmap_write_unlock(mm);
 mpol_out:
 	mpol_put(new);
+
+	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+		migrate_finish();
+
 	return err;
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index a69da8aaeccd..d70e113eee04 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,6 +57,22 @@
 
 #include "internal.h"
 
+static DEFINE_SPINLOCK(migrate_pending_lock);
+static unsigned long migrate_pending_count;
+static DEFINE_PER_CPU(struct work_struct, migrate_pending_work);
+
+static void read_migrate_pending(struct work_struct *work)
+{
+	/* TODO : not sure it's needed */
+	unsigned long dummy = __READ_ONCE(migrate_pending_count);
+	(void)dummy;
+}
+
+bool migrate_pending(void)
+{
+	return migrate_pending_count;
+}
+
 /*
  * migrate_prep() needs to be called before we start compiling a list of pages
  * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
@@ -64,11 +80,27 @@
  */
 void migrate_prep(void)
 {
+	unsigned int cpu;
+
+	spin_lock(&migrate_pending_lock);
+	migrate_pending_count++;
+	spin_unlock(&migrate_pending_lock);
+
+	for_each_online_cpu(cpu) {
+		struct work_struct *work = &per_cpu(migrate_pending_work, cpu);
+
+		INIT_WORK(work, read_migrate_pending);
+		queue_work_on(cpu, mm_percpu_wq, work);
+	}
+
+	for_each_online_cpu(cpu)
+		flush_work(&per_cpu(migrate_pending_work, cpu));
+	/*
+	 * From now on, every online cpu will see the up-to-date
+	 * migrate_pending_count.
+	 */
 	/*
 	 * Clear the LRU lists so pages can be isolated.
-	 * Note that pages may be moved off the LRU after we have
-	 * drained them. Those pages will fail to migrate like other
-	 * pages that may be busy.
 	 */
 	lru_add_drain_all();
 }
@@ -79,6 +111,22 @@ void migrate_prep_local(void)
 	lru_add_drain();
 }
 
+void migrate_finish(void)
+{
+	int cpu;
+
+	spin_lock(&migrate_pending_lock);
+	migrate_pending_count--;
+	spin_unlock(&migrate_pending_lock);
+
+	for_each_online_cpu(cpu) {
+		struct work_struct *work = &per_cpu(migrate_pending_work, cpu);
+
+		INIT_WORK(work, read_migrate_pending);
+		queue_work_on(cpu, mm_percpu_wq, work);
+	}
+}
+
 int isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
 	struct address_space *mapping;
@@ -1837,6 +1885,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 	if (err >= 0)
 		err = err1;
 out:
+	migrate_finish();
 	return err;
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6446778cbc6b..e4cb959f64dc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8493,6 +8493,9 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 		ret = migrate_pages(&cc->migratepages, alloc_migration_target,
 				NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
 	}
+
+	migrate_finish();
+
 	if (ret < 0) {
 		putback_movable_pages(&cc->migratepages);
 		return ret;
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..e42c4b4bf2b3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -36,6 +36,7 @@
 #include <linux/hugetlb.h>
 #include <linux/page_idle.h>
 #include <linux/local_lock.h>
+#include <linux/migrate.h>
 
 #include "internal.h"
 
@@ -235,6 +236,17 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 	}
 }
 
+/* return true if pagevec needs flush */
+static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
+{
+	bool ret = false;
+
+	if (!pagevec_add(pvec, page) || PageCompound(page) || migrate_pending())
+		ret = true;
+
+	return ret;
+}
+
 /*
  * Writeback is about to end against a page which has been marked for immediate
  * reclaim.  If it still appears to be reclaimable, move it to the tail of the
@@ -252,7 +264,7 @@ void rotate_reclaimable_page(struct page *page)
 		get_page(page);
 		local_lock_irqsave(&lru_rotate.lock, flags);
 		pvec = this_cpu_ptr(&lru_rotate.pvec);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
 		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
@@ -343,7 +355,7 @@ static void activate_page(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.activate_page);
 		get_page(page);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, __activate_page);
 		local_unlock(&lru_pvecs.lock);
 	}
@@ -458,7 +470,7 @@ void lru_cache_add(struct page *page)
 	get_page(page);
 	local_lock(&lru_pvecs.lock);
 	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
-	if (!pagevec_add(pvec, page) || PageCompound(page))
+	if (pagevec_add_and_need_flush(pvec, page))
 		__pagevec_lru_add(pvec);
 	local_unlock(&lru_pvecs.lock);
 }
@@ -654,7 +666,7 @@ void deactivate_file_page(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file);
 
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
@@ -676,7 +688,7 @@ void deactivate_page(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate);
 		get_page(page);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
@@ -698,7 +710,7 @@ void mark_page_lazyfree(struct page *page)
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree);
 		get_page(page);
-		if (!pagevec_add(pvec, page) || PageCompound(page))
+		if (pagevec_add_and_need_flush(pvec, page))
 			pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
-- 
2.30.0.478.g8a0d178c01-goog




* [RFC 2/2] mm: fs: Invalidate BH LRU during page migration
  2021-02-16 17:03 [RFC 1/2] mm: disable LRU pagevec during the migration temporarily Minchan Kim
@ 2021-02-16 17:03 ` Minchan Kim
  2021-02-16 18:22 ` [RFC 1/2] mm: disable LRU pagevec during the migration temporarily Matthew Wilcox
  2021-02-17  8:59 ` Michal Hocko
  2 siblings, 0 replies; 17+ messages in thread
From: Minchan Kim @ 2021-02-16 17:03 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, cgoldswo, linux-fsdevel, willy, mhocko, david,
	vbabka, viro, joaodias, Minchan Kim

Pages containing buffer_heads that are in one of the per-CPU
buffer_head LRU caches will be pinned and thus cannot be migrated.
This can prevent CMA allocations from succeeding, which are often used
on platforms with co-processors (such as a DSP) that can only use
physically contiguous memory. It can also prevent memory
hot-unplugging from succeeding, which involves migrating at least
MIN_MEMORY_BLOCK_SIZE bytes of memory, which ranges from 8 MiB to 1
GiB based on the architecture in use.

Correspondingly, invalidate the BH LRU caches before a migration
starts and stop any buffer_head from being cached in the LRU caches,
until migration has finished.

Signed-off-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 fs/buffer.c                 | 13 +++++++++++--
 include/linux/buffer_head.h |  2 ++
 mm/swap.c                   |  5 ++++-
 3 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 96c7604f69b3..de62e75d0ed0 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -45,6 +45,7 @@
 #include <linux/mpage.h>
 #include <linux/bit_spinlock.h>
 #include <linux/pagevec.h>
+#include <linux/migrate.h>
 #include <linux/sched/mm.h>
 #include <trace/events/block.h>
 #include <linux/fscrypt.h>
@@ -1301,6 +1302,14 @@ static void bh_lru_install(struct buffer_head *bh)
 	int i;
 
 	check_irqs_on();
+	/*
+	 * A buffer_head in the bh_lru keeps a refcount on its page
+	 * until it is invalidated, causing page migration to fail.
+	 * Skip putting upcoming bh into bh_lru until migration is done.
+	 */
+	if (migrate_pending())
+		return;
+
 	bh_lru_lock();
 
 	b = this_cpu_ptr(&bh_lrus);
@@ -1446,7 +1455,7 @@ EXPORT_SYMBOL(__bread_gfp);
  * This doesn't race because it runs in each cpu either in irq
  * or with preempt disabled.
  */
-static void invalidate_bh_lru(void *arg)
+void invalidate_bh_lru(void *arg)
 {
 	struct bh_lru *b = &get_cpu_var(bh_lrus);
 	int i;
@@ -1458,7 +1467,7 @@ static void invalidate_bh_lru(void *arg)
 	put_cpu_var(bh_lrus);
 }
 
-static bool has_bh_in_lru(int cpu, void *dummy)
+bool has_bh_in_lru(int cpu, void *dummy)
 {
 	struct bh_lru *b = per_cpu_ptr(&bh_lrus, cpu);
 	int i;
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 6b47f94378c5..3d98bdabaac9 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -194,6 +194,8 @@ void __breadahead_gfp(struct block_device *, sector_t block, unsigned int size,
 struct buffer_head *__bread_gfp(struct block_device *,
 				sector_t block, unsigned size, gfp_t gfp);
 void invalidate_bh_lrus(void);
+void invalidate_bh_lru(void *);
+bool has_bh_in_lru(int cpu, void *dummy);
 struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
 void free_buffer_head(struct buffer_head * bh);
 void unlock_buffer(struct buffer_head *bh);
diff --git a/mm/swap.c b/mm/swap.c
index e42c4b4bf2b3..14faf558347b 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -37,6 +37,7 @@
 #include <linux/page_idle.h>
 #include <linux/local_lock.h>
 #include <linux/migrate.h>
+#include <linux/buffer_head.h>
 
 #include "internal.h"
 
@@ -641,6 +642,7 @@ void lru_add_drain_cpu(int cpu)
 		pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
 
 	activate_page_drain(cpu);
+	invalidate_bh_lru(NULL);
 }
 
 /**
@@ -827,7 +829,8 @@ void lru_add_drain_all(void)
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
-		    need_activate_page_drain(cpu)) {
+		    need_activate_page_drain(cpu) ||
+		    has_bh_in_lru(cpu, NULL)) {
 			INIT_WORK(work, lru_add_drain_per_cpu);
 			queue_work_on(cpu, mm_percpu_wq, work);
 			__cpumask_set_cpu(cpu, &has_work);
-- 
2.30.0.478.g8a0d178c01-goog




* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-16 17:03 [RFC 1/2] mm: disable LRU pagevec during the migration temporarily Minchan Kim
  2021-02-16 17:03 ` [RFC 2/2] mm: fs: Invalidate BH LRU during page migration Minchan Kim
@ 2021-02-16 18:22 ` Matthew Wilcox
  2021-02-16 21:30   ` Minchan Kim
  2021-02-17  8:59 ` Michal Hocko
  2 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2021-02-16 18:22 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, LKML, cgoldswo, linux-fsdevel, mhocko,
	david, vbabka, viro, joaodias

On Tue, Feb 16, 2021 at 09:03:47AM -0800, Minchan Kim wrote:
> LRU pagevec holds refcount of pages until the pagevec are drained.
> It could prevent migration since the refcount of the page is greater
> than the expection in migration logic. To mitigate the issue,
> callers of migrate_pages drains LRU pagevec via migrate_prep or
> lru_add_drain_all before migrate_pages call.
> 
> However, it's not enough because pages coming into pagevec after the
> draining call still could stay at the pagevec so it could keep
> preventing page migration. Since some callers of migrate_pages have
> retrial logic with LRU draining, the page would migrate at next trail
> but it is still fragile in that it doesn't close the fundamental race
> between upcoming LRU pages into pagvec and migration so the migration
> failure could cause contiguous memory allocation failure in the end.

Have you been able to gather any numbers on this?  eg does migration
now succeed 5% more often?



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-16 18:22 ` [RFC 1/2] mm: disable LRU pagevec during the migration temporarily Matthew Wilcox
@ 2021-02-16 21:30   ` Minchan Kim
  0 siblings, 0 replies; 17+ messages in thread
From: Minchan Kim @ 2021-02-16 21:30 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, linux-mm, LKML, cgoldswo, linux-fsdevel, mhocko,
	david, vbabka, viro, joaodias

On Tue, Feb 16, 2021 at 06:22:42PM +0000, Matthew Wilcox wrote:
> On Tue, Feb 16, 2021 at 09:03:47AM -0800, Minchan Kim wrote:
> > LRU pagevec holds refcount of pages until the pagevec are drained.
> > It could prevent migration since the refcount of the page is greater
> > than the expection in migration logic. To mitigate the issue,
> > callers of migrate_pages drains LRU pagevec via migrate_prep or
> > lru_add_drain_all before migrate_pages call.
> > 
> > However, it's not enough because pages coming into pagevec after the
> > draining call still could stay at the pagevec so it could keep
> > preventing page migration. Since some callers of migrate_pages have
> > retrial logic with LRU draining, the page would migrate at next trail
> > but it is still fragile in that it doesn't close the fundamental race
> > between upcoming LRU pages into pagvec and migration so the migration
> > failure could cause contiguous memory allocation failure in the end.
> 
> Have you been able to gather any numbers on this?  eg does migration
> now succeed 5% more often?

What I measured was how many times migrate_pages retried in force mode
with the debug code below.
The test was Android app launches with CMA allocation running in the
background. The total CMA allocation count was about 500 during the
entire test, and I saw about 400 retries with the debug code below.
With this patchset (plus a bug fix), the retry count was reduced to
under 30.

The debug code:
diff --git a/mm/migrate.c b/mm/migrate.c
index 04a98bb2f568..caa661be2d16 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1459,6 +1459,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
                                                private, page, pass > 2, mode,
                                                reason);

+                       if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
+                               printk(KERN_ERR "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
+                               dump_page(page, "fail to migrate");
+                       }
+




* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-16 17:03 [RFC 1/2] mm: disable LRU pagevec during the migration temporarily Minchan Kim
  2021-02-16 17:03 ` [RFC 2/2] mm: fs: Invalidate BH LRU during page migration Minchan Kim
  2021-02-16 18:22 ` [RFC 1/2] mm: disable LRU pagevec during the migration temporarily Matthew Wilcox
@ 2021-02-17  8:59 ` Michal Hocko
  2021-02-17  9:50   ` Michal Hocko
  2021-02-17 20:46   ` Minchan Kim
  2 siblings, 2 replies; 17+ messages in thread
From: Michal Hocko @ 2021-02-17  8:59 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, LKML, cgoldswo, linux-fsdevel, willy,
	david, vbabka, viro, joaodias

On Tue 16-02-21 09:03:47, Minchan Kim wrote:
> LRU pagevec holds refcount of pages until the pagevec are drained.
> It could prevent migration since the refcount of the page is greater
> than the expection in migration logic. To mitigate the issue,
> callers of migrate_pages drains LRU pagevec via migrate_prep or
> lru_add_drain_all before migrate_pages call.
> 
> However, it's not enough because pages coming into pagevec after the
> draining call still could stay at the pagevec so it could keep
> preventing page migration. Since some callers of migrate_pages have
> retrial logic with LRU draining, the page would migrate at next trail
> but it is still fragile in that it doesn't close the fundamental race
> between upcoming LRU pages into pagvec and migration so the migration
> failure could cause contiguous memory allocation failure in the end.

Please put some numbers on how often this happens here.

> The other concern is migration keeps retrying until pages in pagevec
> are drained. During the time, migration repeatedly allocates target
> page, unmap source page from page table of processes and then get to
> know the failure, restore the original page to pagetable of processes,
> free target page, which is also not good.

This is not good for performance you mean, right?
 
> To solve the issue, this patch tries to close the race rather than
> relying on retrial and luck. The idea is to introduce
> migration-in-progress tracking count with introducing IPI barrier
> after atomic updating the count to minimize read-side overhead.
>
> The migrate_prep increases migrate_pending_count under the lock
> and IPI call to guarantee every CPU see the uptodate value
> of migrate_pending_count. Then, drain pagevec via lru_add_drain_all.
> >From now on, no LRU pages could reach pagevec since LRU handling
> functions skips the batching if migration is in progress with checking
> migrate_pedning(IOW, pagevec should be empty until migration is done).
> Every migrate_prep's caller should call migrate_finish in pair to
> decrease the migration tracking count.

migrate_prep already schedules draining on each cpu which has pages
queued. Why isn't it enough to disable the pcp lru caches right before
draining in migrate_prep? More on the IPI side below
 
[...]
> +static DEFINE_SPINLOCK(migrate_pending_lock);
> +static unsigned long migrate_pending_count;
> +static DEFINE_PER_CPU(struct work_struct, migrate_pending_work);
> +
> +static void read_migrate_pending(struct work_struct *work)
> +{
> +	/* TODO : not sure it's needed */
> +	unsigned long dummy = __READ_ONCE(migrate_pending_count);
> +	(void)dummy;

What are you trying to achieve here? Are you just trying to enforce a
read memory barrier here?

> +}
> +
> +bool migrate_pending(void)
> +{
> +	return migrate_pending_count;
> +}
> +
>  /*
>   * migrate_prep() needs to be called before we start compiling a list of pages
>   * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
> @@ -64,11 +80,27 @@
>   */
>  void migrate_prep(void)
>  {
> +	unsigned int cpu;
> +
> +	spin_lock(&migrate_pending_lock);
> +	migrate_pending_count++;
> +	spin_unlock(&migrate_pending_lock);

I suspect you do not want to add atomic_read inside hot paths, right? Is
this really something that we have to microoptimize for? atomic_read is
a simple READ_ONCE on many archs.

> +
> +	for_each_online_cpu(cpu) {
> +		struct work_struct *work = &per_cpu(migrate_pending_work, cpu);
> +
> +		INIT_WORK(work, read_migrate_pending);
> +		queue_work_on(cpu, mm_percpu_wq, work);
> +	}
> +
> +	for_each_online_cpu(cpu)
> +		flush_work(&per_cpu(migrate_pending_work, cpu));

I also do not follow this scheme. Where is the IPI you are mentioning
above?

> +	/*
> +	 * From now on, every online cpu will see uptodate
> +	 * migarte_pending_work.
> +	 */
>  	/*
>  	 * Clear the LRU lists so pages can be isolated.
> -	 * Note that pages may be moved off the LRU after we have
> -	 * drained them. Those pages will fail to migrate like other
> -	 * pages that may be busy.
>  	 */
>  	lru_add_drain_all();

Overall, this looks rather heavyweight for my taste. Have you tried to
play with a simple atomic counter approach? atomic_read when adding to
the cache and atomic_inc inside migrate_prep followed by lru_add_drain.
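
Something like the following completely untested sketch is what I have
in mind (reusing the names from your patch; cpu hotplug and error
handling are ignored):

static atomic_t migrate_pending_count = ATOMIC_INIT(0);

bool migrate_pending(void)
{
	/* plain atomic_read on the hot path, no lock */
	return atomic_read(&migrate_pending_count) != 0;
}

void migrate_prep(void)
{
	atomic_inc(&migrate_pending_count);
	/* Clear the LRU lists so pages can be isolated */
	lru_add_drain_all();
}

void migrate_finish(void)
{
	atomic_dec(&migrate_pending_count);
}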
-- 
Michal Hocko
SUSE Labs



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17  8:59 ` Michal Hocko
@ 2021-02-17  9:50   ` Michal Hocko
  2021-02-17 20:51     ` Minchan Kim
  2021-02-17 20:46   ` Minchan Kim
  1 sibling, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2021-02-17  9:50 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, LKML, cgoldswo, linux-fsdevel, willy,
	david, vbabka, viro, joaodias

On Wed 17-02-21 09:59:54, Michal Hocko wrote:
> On Tue 16-02-21 09:03:47, Minchan Kim wrote:
[...]
> >  /*
> >   * migrate_prep() needs to be called before we start compiling a list of pages
> >   * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
> > @@ -64,11 +80,27 @@
> >   */
> >  void migrate_prep(void)
> >  {
> > +	unsigned int cpu;
> > +
> > +	spin_lock(&migrate_pending_lock);
> > +	migrate_pending_count++;
> > +	spin_unlock(&migrate_pending_lock);
> 
> I suspect you do not want to add atomic_read inside hot paths, right? Is
> this really something that we have to microoptimize for? atomic_read is
> a simple READ_ONCE on many archs.

Or rather, you wanted to avoid a read memory barrier while still
enforcing the ordering.

> > +
> > +	for_each_online_cpu(cpu) {
> > +		struct work_struct *work = &per_cpu(migrate_pending_work, cpu);
> > +
> > +		INIT_WORK(work, read_migrate_pending);
> > +		queue_work_on(cpu, mm_percpu_wq, work);
> > +	}
> > +
> > +	for_each_online_cpu(cpu)
> > +		flush_work(&per_cpu(migrate_pending_work, cpu));
> 
> I also do not follow this scheme. Where is the IPI you are mentioning
> above?

Thinking about it some more I think you mean the rescheduling IPI here?

> > +	/*
> > +	 * From now on, every online cpu will see uptodate
> > +	 * migarte_pending_work.
> > +	 */
> >  	/*
> >  	 * Clear the LRU lists so pages can be isolated.
> > -	 * Note that pages may be moved off the LRU after we have
> > -	 * drained them. Those pages will fail to migrate like other
> > -	 * pages that may be busy.
> >  	 */
> >  	lru_add_drain_all();
> 
> Overall, this looks rather heavy weight to my taste. Have you tried to
> play with a simple atomic counter approach? atomic_read when adding to
> the cache and atomic_inc inside migrate_prep followed by lrdu_add_drain.

If you really want a strong ordering then it should be sufficient to
simply alter lru_add_drain_all to force draining all CPUs. This will
make sure no new pages are added to the pcp lists and you will also sync
up anything that has accumulated because of a race between atomic_read
and inc:
diff --git a/mm/swap.c b/mm/swap.c
index 2cca7141470c..91600d7bb7a8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -745,7 +745,7 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
  * Calling this function with cpu hotplug locks held can actually lead
  * to obscure indirect dependencies via WQ context.
  */
-void lru_add_drain_all(void)
+void lru_add_drain_all(bool force_all_cpus)
 {
 	/*
 	 * lru_drain_gen - Global pages generation number
@@ -820,7 +820,8 @@ void lru_add_drain_all(void)
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
-		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
+		if (force_all_cpus ||
+		    pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
 		    data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
-- 
Michal Hocko
SUSE Labs



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17  8:59 ` Michal Hocko
  2021-02-17  9:50   ` Michal Hocko
@ 2021-02-17 20:46   ` Minchan Kim
  2021-02-17 21:16     ` Matthew Wilcox
  1 sibling, 1 reply; 17+ messages in thread
From: Minchan Kim @ 2021-02-17 20:46 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, linux-mm, LKML, cgoldswo, linux-fsdevel, willy,
	david, vbabka, viro, joaodias

On Wed, Feb 17, 2021 at 09:59:54AM +0100, Michal Hocko wrote:
> On Tue 16-02-21 09:03:47, Minchan Kim wrote:
> > LRU pagevec holds refcount of pages until the pagevec are drained.
> > It could prevent migration since the refcount of the page is greater
> > than the expection in migration logic. To mitigate the issue,
> > callers of migrate_pages drains LRU pagevec via migrate_prep or
> > lru_add_drain_all before migrate_pages call.
> > 
> > However, it's not enough because pages coming into pagevec after the
> > draining call still could stay at the pagevec so it could keep
> > preventing page migration. Since some callers of migrate_pages have
> > retrial logic with LRU draining, the page would migrate at next trail
> > but it is still fragile in that it doesn't close the fundamental race
> > between upcoming LRU pages into pagvec and migration so the migration
> > failure could cause contiguous memory allocation failure in the end.
> 
> Please put some numbers on how often this happens here.

Actually, it's hard to get a real number for CMA allocation failures
since we haven't introduced any CMA allocation statistics yet.
I am working on that to make the failures more visible:
https://lore.kernel.org/linux-mm/20210217170025.512704-1-minchan@kernel.org/

That said, CMA allocation failure is one of the most common bug reports
I get periodically from field testing. I can't say they all come from
the pagevec issue, since it's *really hard* to find out who was holding
the additional refcount for so long at that moment. However, I have kept
tracking down pinners in drivers and resolving them, yet the problem is
still reported, and the pagevec is one of the common places that hold an
additional refcount.

The data below is probably not exactly what you want to see, but it
shows how I could measure how effectively this patch reduces the
vulnerability.

Quoting my earlier reply to Matthew's question:
````
What I measured was how many times migrate_pages retried with force mode
below debug code.
The test was android apps launching with cma allocation in background.
Total cma allocation count was about 500 during the entire testing
and have seen about 400 retrial with below debug code.
With this patchset(with bug fix), the retrial count was reduced under 30.

What I measured was how many times the migrate_pages
diff --git a/mm/migrate.c b/mm/migrate.c
index 04a98bb2f568..caa661be2d16 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1459,6 +1459,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
                                                private, page, pass > 2, mode,
                                                reason);

+                       if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
+                               printk(KERN_ERR, "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
+                               dump_page(page, "fail to migrate");
+                       }
+

```

IMO, rather than relying on retries, it would be better to close such a
race from the beginning, like the free-memory isolation logic in
start_isolate_page_range does. Furthermore, the migrate_pending function
will give other vulnerable places a way to detect migration-in-progress
in the future.

> 
> > The other concern is migration keeps retrying until pages in pagevec
> > are drained. During the time, migration repeatedly allocates target
> > page, unmap source page from page table of processes and then get to
> > know the failure, restore the original page to pagetable of processes,
> > free target page, which is also not good.
> 
> This is not good for performance you mean, rigth?

It's not good for allocation latency as well as performance, but that's
not the main motivation for this work. The goal is mostly to close the
race and thereby reduce the potential sites of migration failure for
CMA.

>  
> > To solve the issue, this patch tries to close the race rather than
> > relying on retrial and luck. The idea is to introduce
> > migration-in-progress tracking count with introducing IPI barrier
> > after atomic updating the count to minimize read-side overhead.
> >
> > The migrate_prep increases migrate_pending_count under the lock
> > and IPI call to guarantee every CPU see the uptodate value
> > of migrate_pending_count. Then, drain pagevec via lru_add_drain_all.
> > >From now on, no LRU pages could reach pagevec since LRU handling
> > functions skips the batching if migration is in progress with checking
> > migrate_pedning(IOW, pagevec should be empty until migration is done).
> > Every migrate_prep's caller should call migrate_finish in pair to
> > decrease the migration tracking count.
> 
> migrate_prep already does schedule draining on each cpu which has pages
> queued. Why isn't it enough to disable pcp lru caches right before
> draining in migrate_prep? More on IPI side below

Yub, with a simple tweak of lru_add_drain_all(bool disable) to queue the
work on every CPU, it would work.

>  
> [...]
> > +static DEFINE_SPINLOCK(migrate_pending_lock);
> > +static unsigned long migrate_pending_count;
> > +static DEFINE_PER_CPU(struct work_struct, migrate_pending_work);
> > +
> > +static void read_migrate_pending(struct work_struct *work)
> > +{
> > +	/* TODO : not sure it's needed */
> > +	unsigned long dummy = __READ_ONCE(migrate_pending_count);
> > +	(void)dummy;
> 
> What are you trying to achieve here? Are you just trying to enforce read
> memory barrier here?

I thought the IPI from migrate_pending_work would guarantee the memory
ordering, so it wouldn't be needed, but I wanted to get attention from
reviewers. If it's not needed, I will drop it.

> 
> > +}
> > +
> > +bool migrate_pending(void)
> > +{
> > +	return migrate_pending_count;
> > +}
> > +
> >  /*
> >   * migrate_prep() needs to be called before we start compiling a list of pages
> >   * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
> > @@ -64,11 +80,27 @@
> >   */
> >  void migrate_prep(void)
> >  {
> > +	unsigned int cpu;
> > +
> > +	spin_lock(&migrate_pending_lock);
> > +	migrate_pending_count++;
> > +	spin_unlock(&migrate_pending_lock);
> 
> I suspect you do not want to add atomic_read inside hot paths, right? Is
> this really something that we have to microoptimize for? atomic_read is
> a simple READ_ONCE on many archs.

It's also spin_lock_irqsave in some arch. If the new synchronization
were heavily complicated, an atomic would be better as a simple start,
but I thought this locking scheme is simple enough that there is no need
to add an atomic operation on the read side.



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17  9:50   ` Michal Hocko
@ 2021-02-17 20:51     ` Minchan Kim
  2021-02-17 21:11       ` Matthew Wilcox
  0 siblings, 1 reply; 17+ messages in thread
From: Minchan Kim @ 2021-02-17 20:51 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, linux-mm, LKML, cgoldswo, linux-fsdevel, willy,
	david, vbabka, viro, joaodias

On Wed, Feb 17, 2021 at 10:50:55AM +0100, Michal Hocko wrote:
> On Wed 17-02-21 09:59:54, Michal Hocko wrote:
> > On Tue 16-02-21 09:03:47, Minchan Kim wrote:
> [...]
> > >  /*
> > >   * migrate_prep() needs to be called before we start compiling a list of pages
> > >   * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
> > > @@ -64,11 +80,27 @@
> > >   */
> > >  void migrate_prep(void)
> > >  {
> > > +	unsigned int cpu;
> > > +
> > > +	spin_lock(&migrate_pending_lock);
> > > +	migrate_pending_count++;
> > > +	spin_unlock(&migrate_pending_lock);
> > 
> > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > this really something that we have to microoptimize for? atomic_read is
> > a simple READ_ONCE on many archs.
> 
> Or you rather wanted to prevent from read memory barrier to enfore the
> ordering.

Yub.

> 
> > > +
> > > +	for_each_online_cpu(cpu) {
> > > +		struct work_struct *work = &per_cpu(migrate_pending_work, cpu);
> > > +
> > > +		INIT_WORK(work, read_migrate_pending);
> > > +		queue_work_on(cpu, mm_percpu_wq, work);
> > > +	}
> > > +
> > > +	for_each_online_cpu(cpu)
> > > +		flush_work(&per_cpu(migrate_pending_work, cpu));
> > 
> > I also do not follow this scheme. Where is the IPI you are mentioning
> > above?
> 
> Thinking about it some more I think you mean the rescheduling IPI here?

True.

> 
> > > +	/*
> > > +	 * From now on, every online cpu will see uptodate
> > > +	 * migarte_pending_work.
> > > +	 */
> > >  	/*
> > >  	 * Clear the LRU lists so pages can be isolated.
> > > -	 * Note that pages may be moved off the LRU after we have
> > > -	 * drained them. Those pages will fail to migrate like other
> > > -	 * pages that may be busy.
> > >  	 */
> > >  	lru_add_drain_all();
> > 
> > Overall, this looks rather heavy weight to my taste. Have you tried to
> > play with a simple atomic counter approach? atomic_read when adding to
> > the cache and atomic_inc inside migrate_prep followed by lrdu_add_drain.

I'd like to avoid atomic operation if we could.

> 
> If you really want a strong ordering then it should be sufficient to
> simply alter lru_add_drain_all to force draining all CPUs. This will
> make sure no new pages are added to the pcp lists and you will also sync
> up anything that has accumulated because of a race between atomic_read
> and inc:
> diff --git a/mm/swap.c b/mm/swap.c
> index 2cca7141470c..91600d7bb7a8 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -745,7 +745,7 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>   * Calling this function with cpu hotplug locks held can actually lead
>   * to obscure indirect dependencies via WQ context.
>   */
> -void lru_add_drain_all(void)
> +void lru_add_drain_all(bool force_all_cpus)
>  {
>  	/*
>  	 * lru_drain_gen - Global pages generation number
> @@ -820,7 +820,8 @@ void lru_add_drain_all(void)
>  	for_each_online_cpu(cpu) {
>  		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
>  
> -		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
> +		if (force_all_cpus ||
> +		    pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
>  		    data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
>  		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
>  		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||

Yub, that's an idea.
How about this?

diff --git a/mm/migrate.c b/mm/migrate.c
index a69da8aaeccd..2531642dd9ce 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,6 +57,14 @@
 
 #include "internal.h"
 
+static DEFINE_SPINLOCK(migrate_pending_lock);
+static unsigned long migrate_pending_count;
+
+bool migrate_pending(void)
+{
+	return migrate_pending_count;
+}
+
 /*
  * migrate_prep() needs to be called before we start compiling a list of pages
  * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
@@ -64,13 +72,20 @@
  */
 void migrate_prep(void)
 {
+	unsigned int cpu;
+
+	/*
+	 * lru_add_drain_all's IPI will make sure no new pages are added
+	 * to the pcp lists and drain them all.
+	 */
+	spin_lock(&migrate_pending_lock);
+	migrate_pending_count++;
+	spin_unlock(&migrate_pending_lock);
+
 	/*
 	 * Clear the LRU lists so pages can be isolated.
-	 * Note that pages may be moved off the LRU after we have
-	 * drained them. Those pages will fail to migrate like other
-	 * pages that may be busy.
 	 */
-	lru_add_drain_all();
+	lru_add_drain_all(true);
 }
 
 /* Do the necessary work of migrate_prep but not if it involves other CPUs */
@@ -79,6 +94,15 @@ void migrate_prep_local(void)
 	lru_add_drain();
 }
 
+void migrate_finish(void)
+{
+	int cpu;
+
+	spin_lock(&migrate_pending_lock);
+	migrate_pending_count--;
+	spin_unlock(&migrate_pending_lock);
+}

One oddity here is that there is no barrier in migrate_finish for the
update of migrate_pending_count, so other CPUs will see a stale value
until they encounter a barrier by chance. However, it wouldn't be a big
deal, IMHO.

What do you think?



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17 20:51     ` Minchan Kim
@ 2021-02-17 21:11       ` Matthew Wilcox
  2021-02-17 21:22         ` Minchan Kim
  0 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2021-02-17 21:11 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Michal Hocko, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Wed, Feb 17, 2021 at 12:51:25PM -0800, Minchan Kim wrote:
> I'd like to avoid atomic operation if we could.

Why do you think that the spinlock is better?




* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17 20:46   ` Minchan Kim
@ 2021-02-17 21:16     ` Matthew Wilcox
  2021-02-17 21:32       ` Minchan Kim
  0 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2021-02-17 21:16 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Michal Hocko, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > this really something that we have to microoptimize for? atomic_read is
> > a simple READ_ONCE on many archs.
> 
> It's also spin_lock_irq_save in some arch. If the new synchonization is
> heavily compilcated, atomic would be better for simple start but I thought
> this locking scheme is too simple so no need to add atomic operation in
> readside.

What arch uses a spinlock for atomic_read()?  I just had a quick grep and
didn't see any.



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17 21:11       ` Matthew Wilcox
@ 2021-02-17 21:22         ` Minchan Kim
  0 siblings, 0 replies; 17+ messages in thread
From: Minchan Kim @ 2021-02-17 21:22 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Michal Hocko, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Wed, Feb 17, 2021 at 09:11:43PM +0000, Matthew Wilcox wrote:
> On Wed, Feb 17, 2021 at 12:51:25PM -0800, Minchan Kim wrote:
> > I'd like to avoid atomic operation if we could.
> 
> Why do you think that the spinlock is better?
> 

Michal suggested atomic_inc on the write side and atomic_read on the
read side. I wanted to avoid an atomic operation in the read-side path.

Are you suggesting an atomic op on the write side and a plain read (not
atomic_read) on the read side?



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17 21:16     ` Matthew Wilcox
@ 2021-02-17 21:32       ` Minchan Kim
  2021-02-18  8:17         ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Minchan Kim @ 2021-02-17 21:32 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Michal Hocko, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > > this really something that we have to microoptimize for? atomic_read is
> > > a simple READ_ONCE on many archs.
> > 
> > It's also spin_lock_irq_save in some arch. If the new synchonization is
> > heavily compilcated, atomic would be better for simple start but I thought
> > this locking scheme is too simple so no need to add atomic operation in
> > readside.
> 
> What arch uses a spinlock for atomic_read()?  I just had a quick grep and
> didn't see any.

Ah, my bad. I was confused with the update side.
Okay, let's use an atomic op to make it simple.

Thanks for the review, Matthew and Michal.



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-17 21:32       ` Minchan Kim
@ 2021-02-18  8:17         ` Michal Hocko
  2021-02-18  8:24           ` David Hildenbrand
  2021-02-18 15:52           ` Minchan Kim
  0 siblings, 2 replies; 17+ messages in thread
From: Michal Hocko @ 2021-02-18  8:17 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Wed 17-02-21 13:32:05, Minchan Kim wrote:
> On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> > On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > > > this really something that we have to microoptimize for? atomic_read is
> > > > a simple READ_ONCE on many archs.
> > > 
> > > It's also spin_lock_irq_save in some arch. If the new synchonization is
> > > heavily compilcated, atomic would be better for simple start but I thought
> > > this locking scheme is too simple so no need to add atomic operation in
> > > readside.
> > 
> > What arch uses a spinlock for atomic_read()?  I just had a quick grep and
> > didn't see any.
> 
> Ah, my bad. I was confused with update side.
> Okay, let's use atomic op to make it simple.

Thanks. This should make the code much simpler. Before you send another
version for review, I have another thing to consider. You are kind of
wiring this into the migration code, but control over the lru pcp caches
can be used in other paths as well. Memory offlining would be another
user. We already disable the page allocator pcp caches to prevent
regular draining. We could do the same with the lru pcp caches.
-- 
Michal Hocko
SUSE Labs



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-18  8:17         ` Michal Hocko
@ 2021-02-18  8:24           ` David Hildenbrand
  2021-02-18 15:52           ` Minchan Kim
  1 sibling, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2021-02-18  8:24 UTC (permalink / raw)
  To: Michal Hocko, Minchan Kim
  Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, vbabka, viro, joaodias

On 18.02.21 09:17, Michal Hocko wrote:
> On Wed 17-02-21 13:32:05, Minchan Kim wrote:
>> On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
>>> On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
>>>>> I suspect you do not want to add atomic_read inside hot paths, right? Is
>>>>> this really something that we have to microoptimize for? atomic_read is
>>>>> a simple READ_ONCE on many archs.
>>>>
>>>> It's also spin_lock_irq_save in some arch. If the new synchonization is
>>>> heavily compilcated, atomic would be better for simple start but I thought
>>>> this locking scheme is too simple so no need to add atomic operation in
>>>> readside.
>>>
>>> What arch uses a spinlock for atomic_read()?  I just had a quick grep and
>>> didn't see any.
>>
>> Ah, my bad. I was confused with update side.
>> Okay, let's use atomic op to make it simple.
> 
> Thanks. This should make the code much more simple. Before you send
> another version for the review I have another thing to consider. You are
> kind of wiring this into the migration code but control over lru pcp
> caches can be used in other paths as well. Memory offlining would be
> another user. We already disable page allocator pcp caches to prevent
> regular draining. We could do the same with lru pcp caches.
> 

Agreed. And dealing with the PCP more reliably might also be of
interest in the context of a more reliable alloc_contig_range().

-- 
Thanks,

David / dhildenb




* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-18  8:17         ` Michal Hocko
  2021-02-18  8:24           ` David Hildenbrand
@ 2021-02-18 15:52           ` Minchan Kim
  2021-02-18 16:08             ` Michal Hocko
  1 sibling, 1 reply; 17+ messages in thread
From: Minchan Kim @ 2021-02-18 15:52 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Thu, Feb 18, 2021 at 09:17:02AM +0100, Michal Hocko wrote:
> On Wed 17-02-21 13:32:05, Minchan Kim wrote:
> > On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> > > On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > > > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > > > > this really something that we have to microoptimize for? atomic_read is
> > > > > a simple READ_ONCE on many archs.
> > > > 
> > > > It's also spin_lock_irq_save in some arch. If the new synchonization is
> > > > heavily compilcated, atomic would be better for simple start but I thought
> > > > this locking scheme is too simple so no need to add atomic operation in
> > > > readside.
> > > 
> > > What arch uses a spinlock for atomic_read()?  I just had a quick grep and
> > > didn't see any.
> > 
> > Ah, my bad. I was confused with update side.
> > Okay, let's use atomic op to make it simple.
> 
> Thanks. This should make the code much more simple. Before you send
> another version for the review I have another thing to consider. You are
> kind of wiring this into the migration code but control over lru pcp
> caches can be used in other paths as well. Memory offlining would be
> another user. We already disable page allocator pcp caches to prevent
> regular draining. We could do the same with lru pcp caches.

I didn't catch your point here. If memory offlining is interested in
disabling the lru pcp caches, it could call migrate_prep and
migrate_finish like the other callers do. Are you suggesting something
like this?

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a969463bdda4..0ec1c13bfe32 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1425,8 +1425,12 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
                node_clear(mtc.nid, nmask);
                if (nodes_empty(nmask))
                        node_set(mtc.nid, nmask);
+
+               migrate_prep();
                ret = migrate_pages(&source, alloc_migration_target, NULL,
                        (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+
+               migrate_finish();
                if (ret) {
                        list_for_each_entry(page, &source, lru) {
                                pr_warn("migrating pfn %lx failed ret:%d ",




* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-18 15:52           ` Minchan Kim
@ 2021-02-18 16:08             ` Michal Hocko
  2021-02-18 16:21               ` Minchan Kim
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2021-02-18 16:08 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Thu 18-02-21 07:52:25, Minchan Kim wrote:
> On Thu, Feb 18, 2021 at 09:17:02AM +0100, Michal Hocko wrote:
> > On Wed 17-02-21 13:32:05, Minchan Kim wrote:
> > > On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> > > > On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > > > > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > > > > > this really something that we have to microoptimize for? atomic_read is
> > > > > > a simple READ_ONCE on many archs.
> > > > > 
> > > > > It's also spin_lock_irq_save in some arch. If the new synchonization is
> > > > > heavily compilcated, atomic would be better for simple start but I thought
> > > > > this locking scheme is too simple so no need to add atomic operation in
> > > > > readside.
> > > > 
> > > > What arch uses a spinlock for atomic_read()?  I just had a quick grep and
> > > > didn't see any.
> > > 
> > > Ah, my bad. I was confused with update side.
> > > Okay, let's use atomic op to make it simple.
> > 
> > Thanks. This should make the code much more simple. Before you send
> > another version for the review I have another thing to consider. You are
> > kind of wiring this into the migration code but control over lru pcp
> > caches can be used in other paths as well. Memory offlining would be
> > another user. We already disable page allocator pcp caches to prevent
> > regular draining. We could do the same with lru pcp caches.
> 
> I didn't catch your point here. If memory offlining is interested on
> disabling lru pcp, it could call migrate_prep and migrate_finish
> like other places. Are you suggesting this one?

What I meant to say is that you can look at this not as an integral
part of the migration code but rather as common functionality that
migration and others can use. So instead of being an implicit part of
migrate_prep, this would become disable_lru_cache, and migrate_finish
would become lru_cache_enable. See my point?

An advantage of that would be that this would match the pcp page
allocator disabling, and we could have it in place for the whole
operation to make the page state more stable wrt. the LRU state
(PageLRU).
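
Roughly, as an untested sketch (the names and the lru_disable_count
counter are just placeholders):

static atomic_t lru_disable_count = ATOMIC_INIT(0);

void disable_lru_cache(void)
{
	atomic_inc(&lru_disable_count);
	/* assumes the force_all_cpus variant proposed earlier */
	lru_add_drain_all(true);
}

void lru_cache_enable(void)
{
	atomic_dec(&lru_disable_count);
}

The pcp lru code would then test atomic_read(&lru_disable_count) instead
of migrate_pending(), and migration, memory offlining and
alloc_contig_range could simply bracket the whole operation with these
two calls.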
 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index a969463bdda4..0ec1c13bfe32 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1425,8 +1425,12 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>                 node_clear(mtc.nid, nmask);
>                 if (nodes_empty(nmask))
>                         node_set(mtc.nid, nmask);
> +
> +               migrate_prep();
>                 ret = migrate_pages(&source, alloc_migration_target, NULL,
>                         (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> +
> +               migrate_finish();
>                 if (ret) {
>                         list_for_each_entry(page, &source, lru) {
>                                 pr_warn("migrating pfn %lx failed ret:%d ",
> 

-- 
Michal Hocko
SUSE Labs



* Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
  2021-02-18 16:08             ` Michal Hocko
@ 2021-02-18 16:21               ` Minchan Kim
  0 siblings, 0 replies; 17+ messages in thread
From: Minchan Kim @ 2021-02-18 16:21 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, cgoldswo,
	linux-fsdevel, david, vbabka, viro, joaodias

On Thu, Feb 18, 2021 at 05:08:58PM +0100, Michal Hocko wrote:
> On Thu 18-02-21 07:52:25, Minchan Kim wrote:
> > On Thu, Feb 18, 2021 at 09:17:02AM +0100, Michal Hocko wrote:
> > > On Wed 17-02-21 13:32:05, Minchan Kim wrote:
> > > > On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> > > > > On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > > > > > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > > > > > > this really something that we have to microoptimize for? atomic_read is
> > > > > > > a simple READ_ONCE on many archs.
> > > > > > 
> > > > > > It's also spin_lock_irq_save in some arch. If the new synchonization is
> > > > > > heavily compilcated, atomic would be better for simple start but I thought
> > > > > > this locking scheme is too simple so no need to add atomic operation in
> > > > > > readside.
> > > > > 
> > > > > What arch uses a spinlock for atomic_read()?  I just had a quick grep and
> > > > > didn't see any.
> > > > 
> > > > Ah, my bad. I was confused with update side.
> > > > Okay, let's use atomic op to make it simple.
> > > 
> > > Thanks. This should make the code much more simple. Before you send
> > > another version for the review I have another thing to consider. You are
> > > kind of wiring this into the migration code but control over lru pcp
> > > caches can be used in other paths as well. Memory offlining would be
> > > another user. We already disable page allocator pcp caches to prevent
> > > regular draining. We could do the same with lru pcp caches.
> > 
> > I didn't catch your point here. If memory offlining is interested on
> > disabling lru pcp, it could call migrate_prep and migrate_finish
> > like other places. Are you suggesting this one?
> 
> What I meant to say is that you can have a look at this not as an
> integral part of the migration code but rather a common functionality
> that migration and others can use. So instead of an implicit part of
> migrate_prep this would become disable_lru_cache and migrate_finish
> would become lruc_cache_enable. See my point? 
> 
> An advantage of that would be that this would match the pcp page
> allocator disabling and we could have it in place for the whole
> operation to make the page state more stable wrt. LRU state (PageLRU).

Understood. Thanks for the clarification.


