* [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
@ 2021-03-09 5:16 Minchan Kim
From: Minchan Kim @ 2021-03-09 5:16 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, LKML, joaodias, surenb, cgoldswo, willy, mhocko, david,
vbabka, linux-fsdevel, Minchan Kim
A LRU pagevec holds a refcount on each of its pages until the pagevec
is drained. That can prevent migration, because the page's refcount is
then greater than what the migration logic expects. To mitigate this,
callers of migrate_pages() drain the LRU pagevecs via migrate_prep() or
lru_add_drain_all() before calling migrate_pages().
However, that is not enough: pages that enter a pagevec after the
draining call can still sit there and keep blocking page migration.
Some callers of migrate_pages() have retry logic with LRU draining, so
the page would be migrated on the next try, but this is still fragile
in that it does not close the fundamental race between pages entering a
pagevec and the migration, so the migration failure can ultimately
cause a contiguous memory allocation failure.
To close the race, this patch disables the LRU caches (i.e., pagevecs)
while a migration is in progress, until the migration is done.
Since the race is hard to reproduce directly, I instead measured how
many times migrate_pages() retried, using the debug code below:
int migrate_pages(struct list_head *from, new_page_t get_new_page,
..
..
if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
printk(KERN_ERR "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
dump_page(page, "fail to migrate");
}
The test repeatedly launched Android apps while performing a CMA
allocation in the background every five seconds. The total CMA
allocation count was about 500 over the test run. With this patch, the
dump_page() count was reduced from 400 to 30.
This is also useful for memory hotplug.
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
* from v1 - https://lore.kernel.org/lkml/20210302210949.2440120-1-minchan@kernel.org/
* introduce __lru_add_drain_all to minimize changes - mhocko
* use lru_cache_disable for memory-hotplug
* schedule for every cpu at force_all_cpus
* from RFC - http://lore.kernel.org/linux-mm/20210216170348.1513483-1-minchan@kernel.org
* use atomic and lru_add_drain_all for strict ordering - mhocko
* lru_cache_disable/enable - mhocko
include/linux/migrate.h | 6 ++-
include/linux/swap.h | 2 +
mm/memory_hotplug.c | 3 +-
mm/mempolicy.c | 6 +++
mm/migrate.c | 13 ++++---
mm/page_alloc.c | 3 ++
mm/swap.c | 82 +++++++++++++++++++++++++++++++++--------
7 files changed, 91 insertions(+), 24 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3a389633b68f..6a23174ea081 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -47,6 +47,7 @@ extern void putback_movable_page(struct page *page);
extern void migrate_prep(void);
extern void migrate_prep_local(void);
+extern void migrate_finish(void);
extern void migrate_page_states(struct page *newpage, struct page *page);
extern void migrate_page_copy(struct page *newpage, struct page *page);
extern int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -66,8 +67,9 @@ static inline struct page *alloc_migration_target(struct page *page,
static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
{ return -EBUSY; }
-static inline int migrate_prep(void) { return -ENOSYS; }
-static inline int migrate_prep_local(void) { return -ENOSYS; }
+static inline void migrate_prep(void) { return -ENOSYS; }
+static inline void migrate_prep_local(void) { return -ENOSYS; }
+static inline void migrate_done(void) {}
static inline void migrate_page_states(struct page *newpage, struct page *page)
{
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 71166bc10d17..aaa6b9cc3f8a 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -339,6 +339,8 @@ extern void lru_note_cost(struct lruvec *lruvec, bool file,
extern void lru_note_cost_page(struct page *);
extern void lru_cache_add(struct page *);
extern void mark_page_accessed(struct page *);
+extern void lru_cache_disable(void);
+extern void lru_cache_enable(void);
extern void lru_add_drain(void);
extern void lru_add_drain_cpu(int cpu);
extern void lru_add_drain_cpu_zone(struct zone *zone);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a969463bdda4..9a5af00802f9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1571,6 +1571,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
* in a way that pages from isolated pageblock are left on pcplists.
*/
zone_pcp_disable(zone);
+ lru_cache_disable();
/* set above range as isolated */
ret = start_isolate_page_range(start_pfn, end_pfn,
@@ -1602,7 +1603,6 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
}
cond_resched();
- lru_add_drain_all();
ret = scan_movable_pages(pfn, end_pfn, &pfn);
if (!ret) {
@@ -1647,6 +1647,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
zone->nr_isolate_pageblock -= nr_pages / pageblock_nr_pages;
spin_unlock_irqrestore(&zone->lock, flags);
+ lru_cache_enable();
zone_pcp_enable(zone);
/* removal success */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 6961238c7ef5..46d9986c7bf0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1208,6 +1208,8 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
break;
}
mmap_read_unlock(mm);
+ migrate_finish();
+
if (err < 0)
return err;
return busy;
@@ -1371,6 +1373,10 @@ static long do_mbind(unsigned long start, unsigned long len,
mmap_write_unlock(mm);
mpol_out:
mpol_put(new);
+
+ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+ migrate_finish();
+
return err;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index a69da8aaeccd..3de67c5f70bc 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -65,12 +65,9 @@
void migrate_prep(void)
{
/*
- * Clear the LRU lists so pages can be isolated.
- * Note that pages may be moved off the LRU after we have
- * drained them. Those pages will fail to migrate like other
- * pages that may be busy.
+ * Clear the LRU pcp lists so pages can be isolated.
*/
- lru_add_drain_all();
+ lru_cache_disable();
}
/* Do the necessary work of migrate_prep but not if it involves other CPUs */
@@ -79,6 +76,11 @@ void migrate_prep_local(void)
lru_add_drain();
}
+void migrate_finish(void)
+{
+ lru_cache_enable();
+}
+
int isolate_movable_page(struct page *page, isolate_mode_t mode)
{
struct address_space *mapping;
@@ -1837,6 +1839,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
if (err >= 0)
err = err1;
out:
+ migrate_finish();
return err;
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9811663fcf0b..6c0b5c6a4779 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8491,6 +8491,9 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
ret = migrate_pages(&cc->migratepages, alloc_migration_target,
NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
}
+
+ migrate_finish();
+
if (ret < 0) {
putback_movable_pages(&cc->migratepages);
return ret;
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..fc8acccb882b 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -235,6 +235,18 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
}
}
+/* return true if the pagevec needs to be drained */
+static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
+{
+ bool ret = false;
+
+ if (!pagevec_add(pvec, page) || PageCompound(page) ||
+ lru_cache_disabled())
+ ret = true;
+
+ return ret;
+}
+
/*
* Writeback is about to end against a page which has been marked for immediate
* reclaim. If it still appears to be reclaimable, move it to the tail of the
@@ -252,7 +264,7 @@ void rotate_reclaimable_page(struct page *page)
get_page(page);
local_lock_irqsave(&lru_rotate.lock, flags);
pvec = this_cpu_ptr(&lru_rotate.pvec);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
local_unlock_irqrestore(&lru_rotate.lock, flags);
}
@@ -343,7 +355,7 @@ static void activate_page(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.activate_page);
get_page(page);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, __activate_page);
local_unlock(&lru_pvecs.lock);
}
@@ -458,7 +470,7 @@ void lru_cache_add(struct page *page)
get_page(page);
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_add);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
__pagevec_lru_add(pvec);
local_unlock(&lru_pvecs.lock);
}
@@ -654,7 +666,7 @@ void deactivate_file_page(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
local_unlock(&lru_pvecs.lock);
}
@@ -676,7 +688,7 @@ void deactivate_page(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate);
get_page(page);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, lru_deactivate_fn);
local_unlock(&lru_pvecs.lock);
}
@@ -698,7 +710,7 @@ void mark_page_lazyfree(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree);
get_page(page);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
local_unlock(&lru_pvecs.lock);
}
@@ -728,14 +740,7 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
lru_add_drain();
}
-/*
- * Doesn't need any cpu hotplug locking because we do rely on per-cpu
- * kworkers being shut down before our page_alloc_cpu_dead callback is
- * executed on the offlined cpu.
- * Calling this function with cpu hotplug locks held can actually lead
- * to obscure indirect dependencies via WQ context.
- */
-void lru_add_drain_all(void)
+void __lru_add_drain_all(bool force_all_cpus)
{
/*
* lru_drain_gen - Global pages generation number
@@ -780,7 +785,7 @@ void lru_add_drain_all(void)
* (C) Exit the draining operation if a newer generation, from another
* lru_add_drain_all(), was already scheduled for draining. Check (A).
*/
- if (unlikely(this_gen != lru_drain_gen))
+ if (unlikely(this_gen != lru_drain_gen && !force_all_cpus))
goto done;
/*
@@ -810,7 +815,8 @@ void lru_add_drain_all(void)
for_each_online_cpu(cpu) {
struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
- if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
+ if (force_all_cpus ||
+ pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
@@ -828,6 +834,18 @@ void lru_add_drain_all(void)
done:
mutex_unlock(&lock);
}
+
+/*
+ * Doesn't need any cpu hotplug locking because we do rely on per-cpu
+ * kworkers being shut down before our page_alloc_cpu_dead callback is
+ * executed on the offlined cpu.
+ * Calling this function with cpu hotplug locks held can actually lead
+ * to obscure indirect dependencies via WQ context.
+ */
+void lru_add_drain_all(void)
+{
+ __lru_add_drain_all(false);
+}
#else
void lru_add_drain_all(void)
{
@@ -835,6 +853,38 @@ void lru_add_drain_all(void)
}
#endif /* CONFIG_SMP */
+static atomic_t lru_disable_count = ATOMIC_INIT(0);
+
+bool lru_cache_disabled(void)
+{
+ return atomic_read(&lru_disable_count);
+}
+
+void lru_cache_enable(void)
+{
+ atomic_dec(&lru_disable_count);
+}
+/*
+ * Clear the LRU lists so pages can be isolated.
+ */
+void lru_cache_disable(void)
+{
+ atomic_inc(&lru_disable_count);
+#ifdef CONFIG_SMP
+ /*
+ * lru_add_drain_all in the force mode will schedule draining on
+ * all online CPUs so any calls of lru_cache_disabled wrapped by
+ * local_lock or preemption disabled would be ordered by that.
+ * The atomic operation doesn't need to have stronger ordering
+ * requirements because that is enforced by the scheduling
+ * guarantees.
+ */
+ __lru_add_drain_all(true);
+#else
+ lru_add_drain();
+#endif
+}
+
/**
* release_pages - batched put_page()
* @pages: array of pages to release
--
2.30.1.766.gb4fecdf3b7-goog
* [PATCH v2 2/2] mm: fs: Invalidate BH LRU during page migration
From: Minchan Kim @ 2021-03-09 5:16 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, LKML, joaodias, surenb, cgoldswo, willy, mhocko, david,
vbabka, linux-fsdevel, Minchan Kim
Pages containing buffer_heads that are in one of the per-CPU
buffer_head LRU caches will be pinned and thus cannot be migrated.
This can prevent CMA allocations from succeeding; CMA is often used on
platforms with co-processors (such as a DSP) that can only use
physically contiguous memory. It can also prevent memory hot-unplugging
from succeeding, which requires migrating at least
MIN_MEMORY_BLOCK_SIZE bytes of memory (8 MiB to 1 GiB, depending on
the architecture).
Correspondingly, invalidate the BH LRU caches before a migration
starts, and prevent any buffer_head from being cached in them until
the migration has finished.
Signed-off-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
fs/buffer.c | 12 ++++++++++--
include/linux/buffer_head.h | 3 +++
include/linux/swap.h | 1 +
mm/swap.c | 5 ++++-
4 files changed, 18 insertions(+), 3 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 96c7604f69b3..4492e9d4c9d3 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1301,6 +1301,14 @@ static void bh_lru_install(struct buffer_head *bh)
int i;
check_irqs_on();
+ /*
+ * A buffer_head in the bh_lru holds a refcount on its page until it
+ * is invalidated, which causes page migration failures. Skip adding
+ * new bhs to the bh_lru until migration is done.
+ */
+ if (lru_cache_disabled())
+ return;
+
bh_lru_lock();
b = this_cpu_ptr(&bh_lrus);
@@ -1446,7 +1454,7 @@ EXPORT_SYMBOL(__bread_gfp);
* This doesn't race because it runs in each cpu either in irq
* or with preempt disabled.
*/
-static void invalidate_bh_lru(void *arg)
+void invalidate_bh_lru(void *arg)
{
struct bh_lru *b = &get_cpu_var(bh_lrus);
int i;
@@ -1458,7 +1466,7 @@ static void invalidate_bh_lru(void *arg)
put_cpu_var(bh_lrus);
}
-static bool has_bh_in_lru(int cpu, void *dummy)
+bool has_bh_in_lru(int cpu, void *dummy)
{
struct bh_lru *b = per_cpu_ptr(&bh_lrus, cpu);
int i;
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 6b47f94378c5..3ae62f3f788e 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -194,6 +194,8 @@ void __breadahead_gfp(struct block_device *, sector_t block, unsigned int size,
struct buffer_head *__bread_gfp(struct block_device *,
sector_t block, unsigned size, gfp_t gfp);
void invalidate_bh_lrus(void);
+void invalidate_bh_lru(void *arg);
+bool has_bh_in_lru(int cpu, void *dummy);
struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
void free_buffer_head(struct buffer_head * bh);
void unlock_buffer(struct buffer_head *bh);
@@ -406,6 +408,7 @@ static inline int inode_has_buffers(struct inode *inode) { return 0; }
static inline void invalidate_inode_buffers(struct inode *inode) {}
static inline int remove_inode_buffers(struct inode *inode) { return 1; }
static inline int sync_mapping_buffers(struct address_space *mapping) { return 0; }
+static inline void invalidate_bh_lru(void *arg) {}
#define buffer_heads_over_limit 0
#endif /* CONFIG_BLOCK */
diff --git a/include/linux/swap.h b/include/linux/swap.h
index aaa6b9cc3f8a..5386cce1a26d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -341,6 +341,7 @@ extern void lru_cache_add(struct page *);
extern void mark_page_accessed(struct page *);
extern void lru_cache_disable(void);
extern void lru_cache_enable(void);
+extern bool lru_cache_disabled(void);
extern void lru_add_drain(void);
extern void lru_add_drain_cpu(int cpu);
extern void lru_add_drain_cpu_zone(struct zone *zone);
diff --git a/mm/swap.c b/mm/swap.c
index fc8acccb882b..d599d6449154 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -36,6 +36,7 @@
#include <linux/hugetlb.h>
#include <linux/page_idle.h>
#include <linux/local_lock.h>
+#include <linux/buffer_head.h>
#include "internal.h"
@@ -641,6 +642,7 @@ void lru_add_drain_cpu(int cpu)
pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
activate_page_drain(cpu);
+ invalidate_bh_lru(NULL);
}
/**
@@ -821,7 +823,8 @@ void __lru_add_drain_all(bool force_all_cpus)
pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
- need_activate_page_drain(cpu)) {
+ need_activate_page_drain(cpu) ||
+ has_bh_in_lru(cpu, NULL)) {
INIT_WORK(work, lru_add_drain_per_cpu);
queue_work_on(cpu, mm_percpu_wq, work);
__cpumask_set_cpu(cpu, &has_work);
--
2.30.1.766.gb4fecdf3b7-goog
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
From: kernel test robot @ 2021-03-09 7:56 UTC (permalink / raw)
To: Minchan Kim, Andrew Morton
Cc: kbuild-all, clang-built-linux, Linux Memory Management List,
LKML, joaodias, surenb, cgoldswo, willy, mhocko, david, vbabka
Hi Minchan,
I love your patch! Perhaps something to improve:
[auto build test WARNING on linux/master]
[also build test WARNING on linus/master v5.12-rc2 next-20210309]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 144c79ef33536b4ecb4951e07dbc1f2b7fa99d32
config: arm64-randconfig-r023-20210308 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 820f508b08d7c94b2dd7847e9710d2bc36d3dd45)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm64 cross compiling tool for clang build
# apt-get install binutils-aarch64-linux-gnu
# https://github.com/0day-ci/linux/commit/e746db1a2ab13441890fa2cad8604bbec190b401
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
git checkout e746db1a2ab13441890fa2cad8604bbec190b401
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm64
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
mm/swap.c:244:4: error: implicit declaration of function 'lru_cache_disabled' [-Werror,-Wimplicit-function-declaration]
lru_cache_disabled())
^
mm/swap.c:244:4: note: did you mean 'lru_cache_disable'?
include/linux/swap.h:342:13: note: 'lru_cache_disable' declared here
extern void lru_cache_disable(void);
^
>> mm/swap.c:743:6: warning: no previous prototype for function '__lru_add_drain_all' [-Wmissing-prototypes]
void __lru_add_drain_all(bool force_all_cpus)
^
mm/swap.c:743:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void __lru_add_drain_all(bool force_all_cpus)
^
static
mm/swap.c:858:6: error: conflicting types for 'lru_cache_disabled'
bool lru_cache_disabled(void)
^
mm/swap.c:244:4: note: previous implicit declaration is here
lru_cache_disabled())
^
1 warning and 2 errors generated.
vim +/__lru_add_drain_all +743 mm/swap.c
742
> 743 void __lru_add_drain_all(bool force_all_cpus)
744 {
745 /*
746 * lru_drain_gen - Global pages generation number
747 *
748 * (A) Definition: global lru_drain_gen = x implies that all generations
749 * 0 < n <= x are already *scheduled* for draining.
750 *
751 * This is an optimization for the highly-contended use case where a
752 * user space workload keeps constantly generating a flow of pages for
753 * each CPU.
754 */
755 static unsigned int lru_drain_gen;
756 static struct cpumask has_work;
757 static DEFINE_MUTEX(lock);
758 unsigned cpu, this_gen;
759
760 /*
761 * Make sure nobody triggers this path before mm_percpu_wq is fully
762 * initialized.
763 */
764 if (WARN_ON(!mm_percpu_wq))
765 return;
766
767 /*
768 * Guarantee pagevec counter stores visible by this CPU are visible to
769 * other CPUs before loading the current drain generation.
770 */
771 smp_mb();
772
773 /*
774 * (B) Locally cache global LRU draining generation number
775 *
776 * The read barrier ensures that the counter is loaded before the mutex
777 * is taken. It pairs with smp_mb() inside the mutex critical section
778 * at (D).
779 */
780 this_gen = smp_load_acquire(&lru_drain_gen);
781
782 mutex_lock(&lock);
783
784 /*
785 * (C) Exit the draining operation if a newer generation, from another
786 * lru_add_drain_all(), was already scheduled for draining. Check (A).
787 */
788 if (unlikely(this_gen != lru_drain_gen && !force_all_cpus))
789 goto done;
790
791 /*
792 * (D) Increment global generation number
793 *
794 * Pairs with smp_load_acquire() at (B), outside of the critical
795 * section. Use a full memory barrier to guarantee that the new global
796 * drain generation number is stored before loading pagevec counters.
797 *
798 * This pairing must be done here, before the for_each_online_cpu loop
799 * below which drains the page vectors.
800 *
801 * Let x, y, and z represent some system CPU numbers, where x < y < z.
802 * Assume CPU #z is in the middle of the for_each_online_cpu loop
803 * below and has already reached CPU #y's per-cpu data. CPU #x comes
804 * along, adds some pages to its per-cpu vectors, then calls
805 * lru_add_drain_all().
806 *
807 * If the paired barrier is done at any later step, e.g. after the
808 * loop, CPU #x will just exit at (C) and miss flushing out all of its
809 * added pages.
810 */
811 WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
812 smp_mb();
813
814 cpumask_clear(&has_work);
815 for_each_online_cpu(cpu) {
816 struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
817
818 if (force_all_cpus ||
819 pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
820 data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
821 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
822 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
823 pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
824 need_activate_page_drain(cpu)) {
825 INIT_WORK(work, lru_add_drain_per_cpu);
826 queue_work_on(cpu, mm_percpu_wq, work);
827 __cpumask_set_cpu(cpu, &has_work);
828 }
829 }
830
831 for_each_cpu(cpu, &has_work)
832 flush_work(&per_cpu(lru_add_drain_work, cpu));
833
834 done:
835 mutex_unlock(&lock);
836 }
837
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
From: kernel test robot @ 2021-03-09 9:43 UTC (permalink / raw)
To: Minchan Kim, Andrew Morton
Cc: kbuild-all, Linux Memory Management List, LKML, joaodias, surenb,
cgoldswo, willy, mhocko, david, vbabka
Hi Minchan,
I love your patch! Yet something to improve:
[auto build test ERROR on linux/master]
[also build test ERROR on linus/master v5.12-rc2 next-20210309]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 144c79ef33536b4ecb4951e07dbc1f2b7fa99d32
config: powerpc-skiroot_defconfig (attached as .config)
compiler: powerpc64le-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/e746db1a2ab13441890fa2cad8604bbec190b401
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
git checkout e746db1a2ab13441890fa2cad8604bbec190b401
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
In file included from mm/mempolicy.c:92:
include/linux/migrate.h: In function 'migrate_prep':
include/linux/migrate.h:70:48: error: 'return' with a value, in function returning void [-Werror=return-type]
70 | static inline void migrate_prep(void) { return -ENOSYS; }
| ^
include/linux/migrate.h:70:20: note: declared here
70 | static inline void migrate_prep(void) { return -ENOSYS; }
| ^~~~~~~~~~~~
include/linux/migrate.h: In function 'migrate_prep_local':
include/linux/migrate.h:71:54: error: 'return' with a value, in function returning void [-Werror=return-type]
71 | static inline void migrate_prep_local(void) { return -ENOSYS; }
| ^
include/linux/migrate.h:71:20: note: declared here
71 | static inline void migrate_prep_local(void) { return -ENOSYS; }
| ^~~~~~~~~~~~~~~~~~
mm/mempolicy.c: In function 'do_mbind':
>> mm/mempolicy.c:1378:3: error: implicit declaration of function 'migrate_finish'; did you mean 'migrate_done'? [-Werror=implicit-function-declaration]
1378 | migrate_finish();
| ^~~~~~~~~~~~~~
| migrate_done
cc1: some warnings being treated as errors
vim +1378 mm/mempolicy.c
1277
1278 static long do_mbind(unsigned long start, unsigned long len,
1279 unsigned short mode, unsigned short mode_flags,
1280 nodemask_t *nmask, unsigned long flags)
1281 {
1282 struct mm_struct *mm = current->mm;
1283 struct mempolicy *new;
1284 unsigned long end;
1285 int err;
1286 int ret;
1287 LIST_HEAD(pagelist);
1288
1289 if (flags & ~(unsigned long)MPOL_MF_VALID)
1290 return -EINVAL;
1291 if ((flags & MPOL_MF_MOVE_ALL) && !capable(CAP_SYS_NICE))
1292 return -EPERM;
1293
1294 if (start & ~PAGE_MASK)
1295 return -EINVAL;
1296
1297 if (mode == MPOL_DEFAULT)
1298 flags &= ~MPOL_MF_STRICT;
1299
1300 len = (len + PAGE_SIZE - 1) & PAGE_MASK;
1301 end = start + len;
1302
1303 if (end < start)
1304 return -EINVAL;
1305 if (end == start)
1306 return 0;
1307
1308 new = mpol_new(mode, mode_flags, nmask);
1309 if (IS_ERR(new))
1310 return PTR_ERR(new);
1311
1312 if (flags & MPOL_MF_LAZY)
1313 new->flags |= MPOL_F_MOF;
1314
1315 /*
1316 * If we are using the default policy then operation
1317 * on discontinuous address spaces is okay after all
1318 */
1319 if (!new)
1320 flags |= MPOL_MF_DISCONTIG_OK;
1321
1322 pr_debug("mbind %lx-%lx mode:%d flags:%d nodes:%lx\n",
1323 start, start + len, mode, mode_flags,
1324 nmask ? nodes_addr(*nmask)[0] : NUMA_NO_NODE);
1325
1326 if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
1327
1328 migrate_prep();
1329 }
1330 {
1331 NODEMASK_SCRATCH(scratch);
1332 if (scratch) {
1333 mmap_write_lock(mm);
1334 err = mpol_set_nodemask(new, nmask, scratch);
1335 if (err)
1336 mmap_write_unlock(mm);
1337 } else
1338 err = -ENOMEM;
1339 NODEMASK_SCRATCH_FREE(scratch);
1340 }
1341 if (err)
1342 goto mpol_out;
1343
1344 ret = queue_pages_range(mm, start, end, nmask,
1345 flags | MPOL_MF_INVERT, &pagelist);
1346
1347 if (ret < 0) {
1348 err = ret;
1349 goto up_out;
1350 }
1351
1352 err = mbind_range(mm, start, end, new);
1353
1354 if (!err) {
1355 int nr_failed = 0;
1356
1357 if (!list_empty(&pagelist)) {
1358 WARN_ON_ONCE(flags & MPOL_MF_LAZY);
1359 nr_failed = migrate_pages(&pagelist, new_page, NULL,
1360 start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
1361 if (nr_failed)
1362 putback_movable_pages(&pagelist);
1363 }
1364
1365 if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT)))
1366 err = -EIO;
1367 } else {
1368 up_out:
1369 if (!list_empty(&pagelist))
1370 putback_movable_pages(&pagelist);
1371 }
1372
1373 mmap_write_unlock(mm);
1374 mpol_out:
1375 mpol_put(new);
1376
1377 if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
> 1378 migrate_finish();
1379
1380 return err;
1381 }
1382
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
From: kernel test robot @ 2021-03-09 10:07 UTC (permalink / raw)
To: Minchan Kim, Andrew Morton
Cc: kbuild-all, Linux Memory Management List, LKML, joaodias, surenb,
cgoldswo, willy, mhocko, david, vbabka
Hi Minchan,
I love your patch! Perhaps something to improve:
[auto build test WARNING on linux/master]
[also build test WARNING on linus/master v5.12-rc2 next-20210309]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 144c79ef33536b4ecb4951e07dbc1f2b7fa99d32
config: openrisc-randconfig-r026-20210308 (attached as .config)
compiler: or1k-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/e746db1a2ab13441890fa2cad8604bbec190b401
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
git checkout e746db1a2ab13441890fa2cad8604bbec190b401
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=openrisc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
mm/swap.c: In function 'pagevec_add_and_need_flush':
mm/swap.c:244:4: error: implicit declaration of function 'lru_cache_disabled'; did you mean 'lru_cache_disable'? [-Werror=implicit-function-declaration]
244 | lru_cache_disabled())
| ^~~~~~~~~~~~~~~~~~
| lru_cache_disable
mm/swap.c: At top level:
>> mm/swap.c:743:6: warning: no previous prototype for '__lru_add_drain_all' [-Wmissing-prototypes]
743 | void __lru_add_drain_all(bool force_all_cpus)
| ^~~~~~~~~~~~~~~~~~~
mm/swap.c:858:6: warning: no previous prototype for 'lru_cache_disabled' [-Wmissing-prototypes]
858 | bool lru_cache_disabled(void)
| ^~~~~~~~~~~~~~~~~~
mm/swap.c:858:6: error: conflicting types for 'lru_cache_disabled'
mm/swap.c:244:4: note: previous implicit declaration of 'lru_cache_disabled' was here
244 | lru_cache_disabled())
| ^~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
vim +/__lru_add_drain_all +743 mm/swap.c
742
> 743 void __lru_add_drain_all(bool force_all_cpus)
744 {
745 /*
746 * lru_drain_gen - Global pages generation number
747 *
748 * (A) Definition: global lru_drain_gen = x implies that all generations
749 * 0 < n <= x are already *scheduled* for draining.
750 *
751 * This is an optimization for the highly-contended use case where a
752 * user space workload keeps constantly generating a flow of pages for
753 * each CPU.
754 */
755 static unsigned int lru_drain_gen;
756 static struct cpumask has_work;
757 static DEFINE_MUTEX(lock);
758 unsigned cpu, this_gen;
759
760 /*
761 * Make sure nobody triggers this path before mm_percpu_wq is fully
762 * initialized.
763 */
764 if (WARN_ON(!mm_percpu_wq))
765 return;
766
767 /*
768 * Guarantee pagevec counter stores visible by this CPU are visible to
769 * other CPUs before loading the current drain generation.
770 */
771 smp_mb();
772
773 /*
774 * (B) Locally cache global LRU draining generation number
775 *
776 * The read barrier ensures that the counter is loaded before the mutex
777 * is taken. It pairs with smp_mb() inside the mutex critical section
778 * at (D).
779 */
780 this_gen = smp_load_acquire(&lru_drain_gen);
781
782 mutex_lock(&lock);
783
784 /*
785 * (C) Exit the draining operation if a newer generation, from another
786 * lru_add_drain_all(), was already scheduled for draining. Check (A).
787 */
788 if (unlikely(this_gen != lru_drain_gen && !force_all_cpus))
789 goto done;
790
791 /*
792 * (D) Increment global generation number
793 *
794 * Pairs with smp_load_acquire() at (B), outside of the critical
795 * section. Use a full memory barrier to guarantee that the new global
796 * drain generation number is stored before loading pagevec counters.
797 *
798 * This pairing must be done here, before the for_each_online_cpu loop
799 * below which drains the page vectors.
800 *
801 * Let x, y, and z represent some system CPU numbers, where x < y < z.
 802  * Assume CPU #z is in the middle of the for_each_online_cpu loop
803 * below and has already reached CPU #y's per-cpu data. CPU #x comes
804 * along, adds some pages to its per-cpu vectors, then calls
805 * lru_add_drain_all().
806 *
807 * If the paired barrier is done at any later step, e.g. after the
808 * loop, CPU #x will just exit at (C) and miss flushing out all of its
809 * added pages.
810 */
811 WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
812 smp_mb();
813
814 cpumask_clear(&has_work);
815 for_each_online_cpu(cpu) {
816 struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
817
818 if (force_all_cpus ||
819 pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
820 data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
821 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
822 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
823 pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
824 need_activate_page_drain(cpu)) {
825 INIT_WORK(work, lru_add_drain_per_cpu);
826 queue_work_on(cpu, mm_percpu_wq, work);
827 __cpumask_set_cpu(cpu, &has_work);
828 }
829 }
830
831 for_each_cpu(cpu, &has_work)
832 flush_work(&per_cpu(lru_add_drain_work, cpu));
833
834 done:
835 mutex_unlock(&lock);
836 }
837
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
2021-03-09 5:16 [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily Minchan Kim
` (3 preceding siblings ...)
2021-03-09 10:07 ` kernel test robot
@ 2021-03-09 11:03 ` Michal Hocko
2021-03-09 16:29 ` Minchan Kim
4 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2021-03-09 11:03 UTC (permalink / raw)
To: Minchan Kim
Cc: Andrew Morton, linux-mm, LKML, joaodias, surenb, cgoldswo, willy,
david, vbabka, linux-fsdevel
On Mon 08-03-21 21:16:27, Minchan Kim wrote:
> LRU pagevec holds a refcount on pages until the pagevec is drained.
> It could prevent migration since the refcount of the page is greater
> than the expectation in migration logic. To mitigate the issue,
> callers of migrate_pages drain LRU pagevec via migrate_prep or
> lru_add_drain_all before the migrate_pages call.
>
> However, it's not enough because pages coming into pagevec after the
> draining call could still stay in the pagevec, so they could keep
> preventing page migration. Since some callers of migrate_pages have
> retry logic with LRU draining, the page would migrate at the next trial,
> but it is still fragile in that it doesn't close the fundamental race
> between pages entering the pagevec and migration, so the migration
> failure could cause contiguous memory allocation failure in the end.
>
> To close the race, this patch disables LRU caches (i.e., pagevec)
> during ongoing migration until migration is done.
>
> Since it's really hard to reproduce, I measured how many times
> migrate_pages retried in force mode with the debug code below.
It would be better to explicitly state that this is about a fallback to
a sync migration.
> int migrate_pages(struct list_head *from, new_page_t get_new_page,
> ..
> ..
>
> if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
> printk(KERN_ERR, "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
> dump_page(page, "fail to migrate");
> }
>
> The test was repeating android apps launching with cma allocation
> in background every five seconds. Total cma allocation count was
> about 500 during the testing. With this patch, the dump_page count
> was reduced from 400 to 30.
I still find these results hard to argue about because they have really no
relation to any measurable effect for those apps you are mentioning. I
would expect sync migration to lead to a performance difference. Is
there any?
> It would be also useful for memory-hotplug.
This is a statement that would deserve some explanation.
"
The new interface is also useful for memory hotplug, which currently
drains lru pcp caches after each migration failure. This is rather
suboptimal as it has to disrupt others running during the operation.
With the new interface the operation happens only once. This is also in
line with pcp allocator caches, which are disabled for the offlining as
well.
"
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
> * from v1 - https://lore.kernel.org/lkml/20210302210949.2440120-1-minchan@kernel.org/
> * introduce __lru_add_drain_all to minimize changes - mhocko
> * use lru_cache_disable for memory-hotplug
> * schedule for every cpu at force_all_cpus
>
> * from RFC - http://lore.kernel.org/linux-mm/20210216170348.1513483-1-minchan@kernel.org
> * use atomic and lru_add_drain_all for strict ordering - mhocko
> * lru_cache_disable/enable - mhocko
>
> include/linux/migrate.h | 6 ++-
> include/linux/swap.h | 2 +
> mm/memory_hotplug.c | 3 +-
> mm/mempolicy.c | 6 +++
> mm/migrate.c | 13 ++++---
> mm/page_alloc.c | 3 ++
> mm/swap.c | 82 +++++++++++++++++++++++++++++++++--------
> 7 files changed, 91 insertions(+), 24 deletions(-)
Sorry for nit picking but I think the additional abstraction for
migrate_prep is not really needed and we can remove some more code.
Maybe we should even get rid of migrate_prep_local which only has a
single caller and open coding lru draining with a comment would be
better from code reading POV IMO.
include/linux/migrate.h | 4 +--
include/linux/swap.h | 2 ++
mm/memory_hotplug.c | 3 +-
mm/mempolicy.c | 10 ++++--
mm/migrate.c | 19 ++----------
mm/page_alloc.c | 5 ++-
mm/swap.c | 82 +++++++++++++++++++++++++++++++++++++++----------
7 files changed, 85 insertions(+), 40 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 4594838a0f7c..dbd6311f23be 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -45,7 +45,6 @@ extern struct page *alloc_migration_target(struct page *page, unsigned long priv
extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
extern void putback_movable_page(struct page *page);
-extern void migrate_prep(void);
extern void migrate_prep_local(void);
extern void migrate_page_states(struct page *newpage, struct page *page);
extern void migrate_page_copy(struct page *newpage, struct page *page);
@@ -66,8 +65,7 @@ static inline struct page *alloc_migration_target(struct page *page,
static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
{ return -EBUSY; }
-static inline int migrate_prep(void) { return -ENOSYS; }
-static inline int migrate_prep_local(void) { return -ENOSYS; }
+static inline void migrate_prep_local(void) { }
static inline void migrate_page_states(struct page *newpage, struct page *page)
{
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 596bc2f4d9b0..c11d92582378 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -339,6 +339,8 @@ extern void lru_note_cost(struct lruvec *lruvec, bool file,
extern void lru_note_cost_page(struct page *);
extern void lru_cache_add(struct page *);
extern void mark_page_accessed(struct page *);
+extern void lru_cache_disable(void);
+extern void lru_cache_enable(void);
extern void lru_add_drain(void);
extern void lru_add_drain_cpu(int cpu);
extern void lru_add_drain_cpu_zone(struct zone *zone);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f9d57b9be8c7..d86868dd2d78 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1496,6 +1496,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
* in a way that pages from isolated pageblock are left on pcplists.
*/
zone_pcp_disable(zone);
+ lru_cache_disable();
/* set above range as isolated */
ret = start_isolate_page_range(start_pfn, end_pfn,
@@ -1527,7 +1528,6 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
}
cond_resched();
- lru_add_drain_all();
ret = scan_movable_pages(pfn, end_pfn, &pfn);
if (!ret) {
@@ -1572,6 +1572,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
zone->nr_isolate_pageblock -= nr_pages / pageblock_nr_pages;
spin_unlock_irqrestore(&zone->lock, flags);
+ lru_cache_enable();
zone_pcp_enable(zone);
/* removal success */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2c3a86502053..c9f45a5e2714 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1114,7 +1114,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
int err = 0;
nodemask_t tmp;
- migrate_prep();
+ lru_cache_disable();
mmap_read_lock(mm);
@@ -1198,6 +1198,8 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
break;
}
mmap_read_unlock(mm);
+ lru_cache_enable();
+
if (err < 0)
return err;
return busy;
@@ -1313,7 +1315,7 @@ static long do_mbind(unsigned long start, unsigned long len,
if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
- migrate_prep();
+ lru_cache_disable();
}
{
NODEMASK_SCRATCH(scratch);
@@ -1361,6 +1363,10 @@ static long do_mbind(unsigned long start, unsigned long len,
mmap_write_unlock(mm);
mpol_out:
mpol_put(new);
+
+ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+ lru_cache_enable();
+
return err;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index 20ca887ea769..ec50966e9a80 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,22 +57,6 @@
#include "internal.h"
-/*
- * migrate_prep() needs to be called before we start compiling a list of pages
- * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
- * undesirable, use migrate_prep_local()
- */
-void migrate_prep(void)
-{
- /*
- * Clear the LRU lists so pages can be isolated.
- * Note that pages may be moved off the LRU after we have
- * drained them. Those pages will fail to migrate like other
- * pages that may be busy.
- */
- lru_add_drain_all();
-}
-
/* Do the necessary work of migrate_prep but not if it involves other CPUs */
void migrate_prep_local(void)
{
@@ -1763,7 +1747,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
int start, i;
int err = 0, err1;
- migrate_prep();
+ lru_cache_disable();
for (i = start = 0; i < nr_pages; i++) {
const void __user *p;
@@ -1832,6 +1816,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
if (err >= 0)
err = err1;
out:
+ lru_cache_enable();
return err;
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 519a60d5b6f7..692d298aff31 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8482,7 +8482,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
};
- migrate_prep();
+ lru_cache_disable();
while (pfn < end || !list_empty(&cc->migratepages)) {
if (fatal_signal_pending(current)) {
@@ -8510,6 +8510,9 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
ret = migrate_pages(&cc->migratepages, alloc_migration_target,
NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
}
+
+ lru_cache_enable();
+
if (ret < 0) {
putback_movable_pages(&cc->migratepages);
return ret;
diff --git a/mm/swap.c b/mm/swap.c
index 2cca7141470c..eb4aafd1685d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -236,6 +236,18 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
}
}
+/* return true if pagevec needs to drain */
+static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
+{
+ bool ret = false;
+
+ if (!pagevec_add(pvec, page) || PageCompound(page) ||
+ lru_cache_disabled())
+ ret = true;
+
+ return ret;
+}
+
/*
* Writeback is about to end against a page which has been marked for immediate
* reclaim. If it still appears to be reclaimable, move it to the tail of the
@@ -253,7 +265,7 @@ void rotate_reclaimable_page(struct page *page)
get_page(page);
local_lock_irqsave(&lru_rotate.lock, flags);
pvec = this_cpu_ptr(&lru_rotate.pvec);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
local_unlock_irqrestore(&lru_rotate.lock, flags);
}
@@ -346,7 +358,7 @@ static void activate_page(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.activate_page);
get_page(page);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, __activate_page);
local_unlock(&lru_pvecs.lock);
}
@@ -461,7 +473,7 @@ void lru_cache_add(struct page *page)
get_page(page);
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_add);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
__pagevec_lru_add(pvec);
local_unlock(&lru_pvecs.lock);
}
@@ -664,7 +676,7 @@ void deactivate_file_page(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
local_unlock(&lru_pvecs.lock);
}
@@ -686,7 +698,7 @@ void deactivate_page(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate);
get_page(page);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, lru_deactivate_fn);
local_unlock(&lru_pvecs.lock);
}
@@ -708,7 +720,7 @@ void mark_page_lazyfree(struct page *page)
local_lock(&lru_pvecs.lock);
pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree);
get_page(page);
- if (!pagevec_add(pvec, page) || PageCompound(page))
+ if (pagevec_add_and_need_flush(pvec, page))
pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
local_unlock(&lru_pvecs.lock);
}
@@ -738,14 +750,7 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
lru_add_drain();
}
-/*
- * Doesn't need any cpu hotplug locking because we do rely on per-cpu
- * kworkers being shut down before our page_alloc_cpu_dead callback is
- * executed on the offlined cpu.
- * Calling this function with cpu hotplug locks held can actually lead
- * to obscure indirect dependencies via WQ context.
- */
-void lru_add_drain_all(void)
+void __lru_add_drain_all(bool force_all_cpus)
{
/*
* lru_drain_gen - Global pages generation number
@@ -790,7 +795,7 @@ void lru_add_drain_all(void)
* (C) Exit the draining operation if a newer generation, from another
* lru_add_drain_all(), was already scheduled for draining. Check (A).
*/
- if (unlikely(this_gen != lru_drain_gen))
+ if (unlikely(this_gen != lru_drain_gen && !force_all_cpus))
goto done;
/*
@@ -820,7 +825,8 @@ void lru_add_drain_all(void)
for_each_online_cpu(cpu) {
struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
- if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
+ if (force_all_cpus ||
+ pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
@@ -838,6 +844,18 @@ void lru_add_drain_all(void)
done:
mutex_unlock(&lock);
}
+
+/*
+ * Doesn't need any cpu hotplug locking because we do rely on per-cpu
+ * kworkers being shut down before our page_alloc_cpu_dead callback is
+ * executed on the offlined cpu.
+ * Calling this function with cpu hotplug locks held can actually lead
+ * to obscure indirect dependencies via WQ context.
+ */
+void lru_add_drain_all(void)
+{
+ __lru_add_drain_all(false);
+}
#else
void lru_add_drain_all(void)
{
@@ -845,6 +863,38 @@ void lru_add_drain_all(void)
}
#endif /* CONFIG_SMP */
+static atomic_t lru_disable_count = ATOMIC_INIT(0);
+
+bool lru_cache_disabled(void)
+{
+ return atomic_read(&lru_disable_count);
+}
+
+void lru_cache_enable(void)
+{
+ atomic_dec(&lru_disable_count);
+}
+/*
+ * Clear the LRU lists so pages can be isolated.
+ */
+void lru_cache_disable(void)
+{
+ atomic_inc(&lru_disable_count);
+#ifdef CONFIG_SMP
+ /*
+ * lru_add_drain_all in the force mode will schedule draining on
+ * all online CPUs so any calls of lru_cache_disabled wrapped by
+ * local_lock or preemption disabled would be ordered by that.
+ * The atomic operation doesn't need to have stronger ordering
+ * requirements because that is enforced by the scheduling
+ * guarantees.
+ */
+ __lru_add_drain_all(true);
+#else
+ lru_add_drain();
+#endif
+}
+
/**
* release_pages - batched put_page()
* @pages: array of pages to release
--
Michal Hocko
SUSE Labs
* Re: [PATCH v2 2/2] mm: fs: Invalidate BH LRU during page migration
2021-03-09 5:16 ` [PATCH v2 2/2] mm: fs: Invalidate BH LRU during page migration Minchan Kim
@ 2021-03-09 11:11 ` kernel test robot
0 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2021-03-09 11:11 UTC (permalink / raw)
To: Minchan Kim, Andrew Morton
Cc: kbuild-all, Linux Memory Management List, LKML, joaodias, surenb,
cgoldswo, willy, mhocko, david, vbabka
Hi Minchan,
I love your patch! Yet something to improve:
[auto build test ERROR on linux/master]
[also build test ERROR on linus/master v5.12-rc2 next-20210309]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 144c79ef33536b4ecb4951e07dbc1f2b7fa99d32
config: openrisc-randconfig-r026-20210308 (attached as .config)
compiler: or1k-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/dfca8699b8fb8cf3bed2297e261fca53c0fc523c
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
git checkout dfca8699b8fb8cf3bed2297e261fca53c0fc523c
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=openrisc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
mm/swap.c:745:6: warning: no previous prototype for '__lru_add_drain_all' [-Wmissing-prototypes]
745 | void __lru_add_drain_all(bool force_all_cpus)
| ^~~~~~~~~~~~~~~~~~~
mm/swap.c: In function '__lru_add_drain_all':
>> mm/swap.c:827:7: error: implicit declaration of function 'has_bh_in_lru' [-Werror=implicit-function-declaration]
827 | has_bh_in_lru(cpu, NULL)) {
| ^~~~~~~~~~~~~
cc1: some warnings being treated as errors
vim +/has_bh_in_lru +827 mm/swap.c
744
745 void __lru_add_drain_all(bool force_all_cpus)
746 {
747 /*
748 * lru_drain_gen - Global pages generation number
749 *
750 * (A) Definition: global lru_drain_gen = x implies that all generations
751 * 0 < n <= x are already *scheduled* for draining.
752 *
753 * This is an optimization for the highly-contended use case where a
754 * user space workload keeps constantly generating a flow of pages for
755 * each CPU.
756 */
757 static unsigned int lru_drain_gen;
758 static struct cpumask has_work;
759 static DEFINE_MUTEX(lock);
760 unsigned cpu, this_gen;
761
762 /*
763 * Make sure nobody triggers this path before mm_percpu_wq is fully
764 * initialized.
765 */
766 if (WARN_ON(!mm_percpu_wq))
767 return;
768
769 /*
770 * Guarantee pagevec counter stores visible by this CPU are visible to
771 * other CPUs before loading the current drain generation.
772 */
773 smp_mb();
774
775 /*
776 * (B) Locally cache global LRU draining generation number
777 *
778 * The read barrier ensures that the counter is loaded before the mutex
779 * is taken. It pairs with smp_mb() inside the mutex critical section
780 * at (D).
781 */
782 this_gen = smp_load_acquire(&lru_drain_gen);
783
784 mutex_lock(&lock);
785
786 /*
787 * (C) Exit the draining operation if a newer generation, from another
788 * lru_add_drain_all(), was already scheduled for draining. Check (A).
789 */
790 if (unlikely(this_gen != lru_drain_gen && !force_all_cpus))
791 goto done;
792
793 /*
794 * (D) Increment global generation number
795 *
796 * Pairs with smp_load_acquire() at (B), outside of the critical
797 * section. Use a full memory barrier to guarantee that the new global
798 * drain generation number is stored before loading pagevec counters.
799 *
800 * This pairing must be done here, before the for_each_online_cpu loop
801 * below which drains the page vectors.
802 *
803 * Let x, y, and z represent some system CPU numbers, where x < y < z.
 804  * Assume CPU #z is in the middle of the for_each_online_cpu loop
805 * below and has already reached CPU #y's per-cpu data. CPU #x comes
806 * along, adds some pages to its per-cpu vectors, then calls
807 * lru_add_drain_all().
808 *
809 * If the paired barrier is done at any later step, e.g. after the
810 * loop, CPU #x will just exit at (C) and miss flushing out all of its
811 * added pages.
812 */
813 WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
814 smp_mb();
815
816 cpumask_clear(&has_work);
817 for_each_online_cpu(cpu) {
818 struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
819
820 if (force_all_cpus ||
821 pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
822 data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
823 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
824 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
825 pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
826 need_activate_page_drain(cpu) ||
> 827 has_bh_in_lru(cpu, NULL)) {
828 INIT_WORK(work, lru_add_drain_per_cpu);
829 queue_work_on(cpu, mm_percpu_wq, work);
830 __cpumask_set_cpu(cpu, &has_work);
831 }
832 }
833
834 for_each_cpu(cpu, &has_work)
835 flush_work(&per_cpu(lru_add_drain_work, cpu));
836
837 done:
838 mutex_unlock(&lock);
839 }
840
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
2021-03-09 11:03 ` Michal Hocko
@ 2021-03-09 16:29 ` Minchan Kim
2021-03-09 16:31 ` David Hildenbrand
2021-03-09 17:54 ` Michal Hocko
0 siblings, 2 replies; 11+ messages in thread
From: Minchan Kim @ 2021-03-09 16:29 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, linux-mm, LKML, joaodias, surenb, cgoldswo, willy,
david, vbabka, linux-fsdevel
On Tue, Mar 09, 2021 at 12:03:08PM +0100, Michal Hocko wrote:
> On Mon 08-03-21 21:16:27, Minchan Kim wrote:
> > LRU pagevec holds a refcount on pages until the pagevec is drained.
> > It could prevent migration since the refcount of the page is greater
> > than the expectation in migration logic. To mitigate the issue,
> > callers of migrate_pages drain LRU pagevec via migrate_prep or
> > lru_add_drain_all before the migrate_pages call.
> >
> > However, it's not enough because pages coming into pagevec after the
> > draining call could still stay in the pagevec, so they could keep
> > preventing page migration. Since some callers of migrate_pages have
> > retry logic with LRU draining, the page would migrate at the next trial,
> > but it is still fragile in that it doesn't close the fundamental race
> > between pages entering the pagevec and migration, so the migration
> > failure could cause contiguous memory allocation failure in the end.
> >
> > To close the race, this patch disables LRU caches (i.e., pagevec)
> > during ongoing migration until migration is done.
> >
> > Since it's really hard to reproduce, I measured how many times
> > migrate_pages retried in force mode with the debug code below.
>
> It would be better to explicitly state that this is about a fallback to
> a sync migration.
>
>
> > int migrate_pages(struct list_head *from, new_page_t get_new_page,
> > ..
> > ..
> >
> > if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
> > printk(KERN_ERR, "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
> > dump_page(page, "fail to migrate");
> > }
> >
> > The test was repeating android apps launching with cma allocation
> > in background every five seconds. Total cma allocation count was
> > about 500 during the testing. With this patch, the dump_page count
> > was reduced from 400 to 30.
>
> I still find these results hard to argue about because it has really no
> relation to any measurable effect for those apps you are mentioning. I
> would expect sync migration would lead to performance difference. Is
> there any?
Think about migrating a 300MB range: that is 76800 4KB pages to migrate.
It means the page migration work (unmap + copy + map) is dominant.
>
> > It would be also useful for memory-hotplug.
>
> This is a statement that would deserve some explanation.
> "
> The new interface is also useful for memory hotplug, which currently
> drains lru pcp caches after each migration failure. This is rather
> suboptimal as it has to disrupt others running during the operation.
> With the new interface the operation happens only once. This is also in
> line with pcp allocator caches, which are disabled for the offlining as
> well.
> "
Much better. Thanks.
>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> > * from v1 - https://lore.kernel.org/lkml/20210302210949.2440120-1-minchan@kernel.org/
> > * introduce __lru_add_drain_all to minimize changes - mhocko
> > * use lru_cache_disable for memory-hotplug
> > * schedule for every cpu at force_all_cpus
> >
> > * from RFC - http://lore.kernel.org/linux-mm/20210216170348.1513483-1-minchan@kernel.org
> > * use atomic and lru_add_drain_all for strict ordering - mhocko
> > * lru_cache_disable/enable - mhocko
> >
> > include/linux/migrate.h | 6 ++-
> > include/linux/swap.h | 2 +
> > mm/memory_hotplug.c | 3 +-
> > mm/mempolicy.c | 6 +++
> > mm/migrate.c | 13 ++++---
> > mm/page_alloc.c | 3 ++
> > mm/swap.c | 82 +++++++++++++++++++++++++++++++++--------
> > 7 files changed, 91 insertions(+), 24 deletions(-)
>
> Sorry for nit picking but I think the additional abstraction for
> migrate_prep is not really needed and we can remove some more code.
> Maybe we should even get rid of migrate_prep_local which only has a
> single caller and open coding lru draining with a comment would be
> better from code reading POV IMO.
Thanks for the code. I agree with you.
However, for the moment, let's go with this one until we conclude.
The removal of migrate_prep could be easily done after that.
I am happy to work on it.
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
2021-03-09 16:29 ` Minchan Kim
@ 2021-03-09 16:31 ` David Hildenbrand
2021-03-09 17:15 ` Minchan Kim
2021-03-09 17:54 ` Michal Hocko
1 sibling, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2021-03-09 16:31 UTC (permalink / raw)
To: Minchan Kim, Michal Hocko
Cc: Andrew Morton, linux-mm, LKML, joaodias, surenb, cgoldswo, willy,
vbabka, linux-fsdevel
>>> Signed-off-by: Minchan Kim <minchan@kernel.org>
>>> ---
>>> * from v1 - https://lore.kernel.org/lkml/20210302210949.2440120-1-minchan@kernel.org/
>>> * introduce __lru_add_drain_all to minimize changes - mhocko
>>> * use lru_cache_disable for memory-hotplug
>>> * schedule for every cpu at force_all_cpus
>>>
>>> * from RFC - http://lore.kernel.org/linux-mm/20210216170348.1513483-1-minchan@kernel.org
>>> * use atomic and lru_add_drain_all for strict ordering - mhocko
>>> * lru_cache_disable/enable - mhocko
>>>
>>> include/linux/migrate.h | 6 ++-
>>> include/linux/swap.h | 2 +
>>> mm/memory_hotplug.c | 3 +-
>>> mm/mempolicy.c | 6 +++
>>> mm/migrate.c | 13 ++++---
>>> mm/page_alloc.c | 3 ++
>>> mm/swap.c | 82 +++++++++++++++++++++++++++++++++--------
>>> 7 files changed, 91 insertions(+), 24 deletions(-)
>>
>> Sorry for nit picking but I think the additional abstraction for
>> migrate_prep is not really needed and we can remove some more code.
>> Maybe we should even get rid of migrate_prep_local which only has a
>> single caller and open coding lru draining with a comment would be
>> better from code reading POV IMO.
>
> Thanks for the code. I agree with you.
> However, for the moment, let's go with this one until we reach a conclusion.
> The removal of migrate_prep can easily be done after that.
> I am happy to work on it.
Can you prepare + send along these cleanups so we can have a look at the
end result?
(either cleanups before or after your changes - doing cleanups before
might be cleaner as we are not dealing with a fix here that we want to
backport)
--
Thanks,
David / dhildenb
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
2021-03-09 16:31 ` David Hildenbrand
@ 2021-03-09 17:15 ` Minchan Kim
0 siblings, 0 replies; 11+ messages in thread
From: Minchan Kim @ 2021-03-09 17:15 UTC (permalink / raw)
To: David Hildenbrand
Cc: Michal Hocko, Andrew Morton, linux-mm, LKML, joaodias, surenb,
cgoldswo, willy, vbabka, linux-fsdevel
On Tue, Mar 09, 2021 at 05:31:09PM +0100, David Hildenbrand wrote:
>
> > > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > > ---
> > > > * from v1 - https://lore.kernel.org/lkml/20210302210949.2440120-1-minchan@kernel.org/
> > > > * introduce __lru_add_drain_all to minimize changes - mhocko
> > > > * use lru_cache_disable for memory-hotplug
> > > > * schedule for every cpu at force_all_cpus
> > > >
> > > > * from RFC - http://lore.kernel.org/linux-mm/20210216170348.1513483-1-minchan@kernel.org
> > > > * use atomic and lru_add_drain_all for strict ordering - mhocko
> > > > * lru_cache_disable/enable - mhocko
> > > >
> > > > include/linux/migrate.h | 6 ++-
> > > > include/linux/swap.h | 2 +
> > > > mm/memory_hotplug.c | 3 +-
> > > > mm/mempolicy.c | 6 +++
> > > > mm/migrate.c | 13 ++++---
> > > > mm/page_alloc.c | 3 ++
> > > > mm/swap.c | 82 +++++++++++++++++++++++++++++++++--------
> > > > 7 files changed, 91 insertions(+), 24 deletions(-)
> > >
> > > Sorry for nitpicking but I think the additional abstraction for
> > > migrate_prep is not really needed and we can remove some more code.
> > > Maybe we should even get rid of migrate_prep_local which only has a
> > > single caller, and open-coding the LRU draining with a comment would be
> > > better from a code-readability POV, IMO.
> >
> > Thanks for the code. I agree with you.
> > However, for the moment, let's go with this one until we reach a conclusion.
> > The removal of migrate_prep can easily be done after that.
> > I am happy to work on it.
>
> Can you prepare + send along these cleanups so we can have a look at the end
> result?
>
> (either cleanups before or after your changes - doing cleanups before might
> be cleaner as we are not dealing with a fix here that we want to backport)
Okay, let me try one more time.
* Re: [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily
2021-03-09 16:29 ` Minchan Kim
2021-03-09 16:31 ` David Hildenbrand
@ 2021-03-09 17:54 ` Michal Hocko
1 sibling, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2021-03-09 17:54 UTC (permalink / raw)
To: Minchan Kim
Cc: Andrew Morton, linux-mm, LKML, joaodias, surenb, cgoldswo, willy,
david, vbabka, linux-fsdevel
On Tue 09-03-21 08:29:21, Minchan Kim wrote:
> On Tue, Mar 09, 2021 at 12:03:08PM +0100, Michal Hocko wrote:
[...]
> > Sorry for nitpicking but I think the additional abstraction for
> > migrate_prep is not really needed and we can remove some more code.
> > Maybe we should even get rid of migrate_prep_local which only has a
> > single caller, and open-coding the LRU draining with a comment would be
> > better from a code-readability POV, IMO.
>
> Thanks for the code. I agree with you.
> However, for the moment, let's go with this one until we reach a conclusion.
> The removal of migrate_prep can easily be done after that.
> I am happy to work on it.
I will leave that up to you, but I find it a bit pointless to add
migrate_finish just to remove it in the next patch.
Btw. you should also move lru_cache_disabled up in swap.c to fix the
compilation issues reported by the 0-day bot. I didn't do that in my version.
--
Michal Hocko
SUSE Labs
end of thread, other threads:[~2021-03-09 17:55 UTC | newest]
Thread overview: 11+ messages
2021-03-09 5:16 [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily Minchan Kim
2021-03-09 5:16 ` [PATCH v2 2/2] mm: fs: Invalidate BH LRU during page migration Minchan Kim
2021-03-09 11:11 ` kernel test robot
2021-03-09 7:56 ` [PATCH v2 1/2] mm: disable LRU pagevec during the migration temporarily kernel test robot
2021-03-09 9:43 ` kernel test robot
2021-03-09 10:07 ` kernel test robot
2021-03-09 11:03 ` Michal Hocko
2021-03-09 16:29 ` Minchan Kim
2021-03-09 16:31 ` David Hildenbrand
2021-03-09 17:15 ` Minchan Kim
2021-03-09 17:54 ` Michal Hocko