* [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page()
@ 2024-03-27 14:10 Kefeng Wang
  2024-03-27 14:10 ` [PATCH 1/6] mm: migrate: add isolate_movable_folio() Kefeng Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang, Kefeng Wang

Convert isolate_lru_page() to folio_isolate_lru() and
isolate_movable_page() to isolate_movable_folio().
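
For callers, the conversion is mechanical; for example, the hunk in
do_migrate_range() (patch 2) becomes:

  -	if (PageLRU(page))
  -		isolated = isolate_lru_page(page);
  -	else
  -		isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
  +	if (folio_test_lru(folio))
  +		isolated = folio_isolate_lru(folio);
  +	else
  +		isolated = isolate_movable_folio(folio, ISOLATE_UNEVICTABLE);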

Kefeng Wang (6):
  mm: migrate: add isolate_movable_folio()
  mm: memory_hotplug: use more folio in do_migrate_range()
  mm: remove isolate_lru_page()
  mm: compaction: use isolate_movable_folio() in
    isolate_migratepages_block()
  mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio()
  mm: migrate: remove isolate_movable_page()

 Documentation/mm/page_migration.rst           |  6 ++--
 .../translations/zh_CN/mm/page_migration.rst  |  6 ++--
 include/linux/migrate.h                       |  4 +--
 mm/compaction.c                               | 30 ++++++++---------
 mm/filemap.c                                  |  2 +-
 mm/folio-compat.c                             |  7 ----
 mm/internal.h                                 |  1 -
 mm/khugepaged.c                               |  8 ++---
 mm/memory-failure.c                           |  4 +--
 mm/memory_hotplug.c                           | 30 ++++++++---------
 mm/migrate.c                                  | 33 +++++++++----------
 mm/migrate_device.c                           |  2 +-
 mm/swap.c                                     |  2 +-
 13 files changed, 62 insertions(+), 73 deletions(-)

-- 
2.27.0



* [PATCH 1/6] mm: migrate: add isolate_movable_folio()
  2024-03-27 14:10 [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
@ 2024-03-27 14:10 ` Kefeng Wang
  2024-03-27 14:29   ` Zi Yan
  2024-03-27 18:59   ` Vishal Moola
  2024-03-27 14:10 ` [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range() Kefeng Wang
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang, Kefeng Wang

Like isolate_lru_page(), make isolate_movable_page() a wrapper
around isolate_movable_folio(). Since isolate_movable_page() always
fails on a tail page, warn on tail pages and return immediately.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  3 +++
 mm/migrate.c            | 41 +++++++++++++++++++++++------------------
 2 files changed, 26 insertions(+), 18 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f9d92482d117..a6c38ee7246a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
@@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
+	{ return false; }
 
 static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 				  struct folio *dst, struct folio *src)
diff --git a/mm/migrate.c b/mm/migrate.c
index 2228ca681afb..b2195b6ff32c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,31 +57,29 @@
 
 #include "internal.h"
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
 {
-	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
 
 	/*
-	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * Avoid burning cycles with folios that are yet under __free_pages(),
 	 * or just got freed under us.
 	 *
-	 * In case we 'win' a race for a movable page being freed under us and
+	 * In case we 'win' a race for a movable folio being freed under us and
 	 * raise its refcount preventing __free_pages() from doing its job
-	 * the put_page() at the end of this block will take care of
-	 * release this page, thus avoiding a nasty leakage.
+	 * the folio_put() at the end of this block will take care of
+	 * release this folio, thus avoiding a nasty leakage.
 	 */
-	if (!folio)
-		goto out;
+	folio_get(folio);
 
 	if (unlikely(folio_test_slab(folio)))
 		goto out_putfolio;
 	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
 	smp_rmb();
 	/*
-	 * Check movable flag before taking the page lock because
-	 * we use non-atomic bitops on newly allocated page flags so
-	 * unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the folio lock because
+	 * we use non-atomic bitops on newly allocated folio flags so
+	 * unconditionally grabbing the lock ruins folio's owner side.
 	 */
 	if (unlikely(!__folio_test_movable(folio)))
 		goto out_putfolio;
@@ -91,13 +89,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 		goto out_putfolio;
 
 	/*
-	 * As movable pages are not isolated from LRU lists, concurrent
-	 * compaction threads can race against page migration functions
-	 * as well as race against the releasing a page.
+	 * As movable folios are not isolated from LRU lists, concurrent
+	 * compaction threads can race against folio migration functions
+	 * as well as race against the releasing a folio.
 	 *
-	 * In order to avoid having an already isolated movable page
+	 * In order to avoid having an already isolated movable folio
 	 * being (wrongly) re-isolated while it is under migration,
-	 * or to avoid attempting to isolate pages being released,
+	 * or to avoid attempting to isolate folios being released,
 	 * lets be sure we have the page lock
 	 * before proceeding with the movable page isolation steps.
 	 */
@@ -113,7 +111,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	if (!mops->isolate_page(&folio->page, mode))
 		goto out_no_isolated;
 
-	/* Driver shouldn't use PG_isolated bit of page->flags */
+	/* Driver shouldn't use PG_isolated bit of folio->flags */
 	WARN_ON_ONCE(folio_test_isolated(folio));
 	folio_set_isolated(folio);
 	folio_unlock(folio);
@@ -124,10 +122,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	folio_unlock(folio);
 out_putfolio:
 	folio_put(folio);
-out:
 	return false;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
+		return false;
+
+	return isolate_movable_folio((struct folio *)page, mode);
+}
+
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
-- 
2.27.0



* [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range()
  2024-03-27 14:10 [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
  2024-03-27 14:10 ` [PATCH 1/6] mm: migrate: add isolate_movable_folio() Kefeng Wang
@ 2024-03-27 14:10 ` Kefeng Wang
  2024-03-27 14:45   ` Zi Yan
  2024-03-27 14:10 ` [PATCH 3/6] mm: remove isolate_lru_page() Kefeng Wang
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang, Kefeng Wang

With isolate_movable_folio() and folio_isolate_lru(), let's use
more folios in do_migrate_range() to save compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory_hotplug.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a444e2d7dd2b..bd207772c619 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1774,14 +1774,14 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 
 static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
+	struct folio *folio;
 	unsigned long pfn;
-	struct page *page, *head;
 	LIST_HEAD(source);
 	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
-		struct folio *folio;
+		struct page *page, *head;
 		bool isolated;
 
 		if (!pfn_valid(pfn))
@@ -1818,15 +1818,15 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		 * We can skip free pages. And we can deal with pages on
 		 * LRU and non-lru movable pages.
 		 */
-		if (PageLRU(page))
-			isolated = isolate_lru_page(page);
+		if (folio_test_lru(folio))
+			isolated = folio_isolate_lru(folio);
 		else
-			isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
+			isolated = isolate_movable_folio(folio, ISOLATE_UNEVICTABLE);
 		if (isolated) {
-			list_add_tail(&page->lru, &source);
-			if (!__PageMovable(page))
-				inc_node_page_state(page, NR_ISOLATED_ANON +
-						    page_is_file_lru(page));
+			list_add_tail(&folio->lru, &source);
+			if (!__folio_test_movable(folio))
+				node_stat_add_folio(folio, NR_ISOLATED_ANON +
+						    folio_is_file_lru(folio));
 
 		} else {
 			if (__ratelimit(&migrate_rs)) {
@@ -1834,7 +1834,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 				dump_page(page, "isolation failed");
 			}
 		}
-		put_page(page);
+		folio_put(folio);
 	}
 	if (!list_empty(&source)) {
 		nodemask_t nmask = node_states[N_MEMORY];
@@ -1846,9 +1846,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 
 		/*
 		 * We have checked that migration range is on a single zone so
-		 * we can use the nid of the first page to all the others.
+		 * we can use the nid of the first folio to all the others.
 		 */
-		mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
+		mtc.nid = folio_nid(list_first_entry(&source, struct folio, lru));
 
 		/*
 		 * try to allocate from a different node but reuse this node
@@ -1861,11 +1861,11 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		ret = migrate_pages(&source, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG, NULL);
 		if (ret) {
-			list_for_each_entry(page, &source, lru) {
+			list_for_each_entry(folio, &source, lru) {
 				if (__ratelimit(&migrate_rs)) {
 					pr_warn("migrating pfn %lx failed ret:%d\n",
-						page_to_pfn(page), ret);
-					dump_page(page, "migration failure");
+						folio_pfn(folio), ret);
+					dump_page(&folio->page, "migration failure");
 				}
 			}
 			putback_movable_pages(&source);
-- 
2.27.0



* [PATCH 3/6] mm: remove isolate_lru_page()
  2024-03-27 14:10 [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
  2024-03-27 14:10 ` [PATCH 1/6] mm: migrate: add isolate_movable_folio() Kefeng Wang
  2024-03-27 14:10 ` [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range() Kefeng Wang
@ 2024-03-27 14:10 ` Kefeng Wang
  2024-03-28 12:22   ` kernel test robot
  2024-03-28 15:33   ` kernel test robot
  2024-03-27 14:10 ` [PATCH 4/6] mm: compaction: use isolate_movable_folio() in isolate_migratepages_block() Kefeng Wang
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang, Kefeng Wang

There are no more callers of isolate_lru_page(); remove it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 Documentation/mm/page_migration.rst                    | 6 +++---
 Documentation/translations/zh_CN/mm/page_migration.rst | 6 +++---
 mm/filemap.c                                           | 2 +-
 mm/folio-compat.c                                      | 7 -------
 mm/internal.h                                          | 1 -
 mm/khugepaged.c                                        | 8 ++++----
 mm/migrate_device.c                                    | 2 +-
 mm/swap.c                                              | 2 +-
 8 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index f1ce67a26615..0046bbbdc65d 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -67,8 +67,8 @@ In kernel use of migrate_pages()
 
    Lists of pages to be migrated are generated by scanning over
    pages and moving them into lists. This is done by
-   calling isolate_lru_page().
-   Calling isolate_lru_page() increases the references to the page
+   calling folio_isolate_lru().
+   Calling folio_isolate_lru() increases the references to the page
    so that it cannot vanish while the page migration occurs.
    It also prevents the swapper or other scans from encountering
    the page.
@@ -86,7 +86,7 @@ How migrate_pages() works
 
 migrate_pages() does several passes over its list of pages. A page is moved
 if all references to a page are removable at the time. The page has
-already been removed from the LRU via isolate_lru_page() and the refcount
+already been removed from the LRU via folio_isolate_lru() and the refcount
 is increased so that the page cannot be freed while page migration occurs.
 
 Steps:
diff --git a/Documentation/translations/zh_CN/mm/page_migration.rst b/Documentation/translations/zh_CN/mm/page_migration.rst
index f95063826a15..8c8461c6cb9f 100644
--- a/Documentation/translations/zh_CN/mm/page_migration.rst
+++ b/Documentation/translations/zh_CN/mm/page_migration.rst
@@ -50,8 +50,8 @@ mbind()设置一个新的内存策略。一个进程的页面也可以通过sys_
 
 1. 从LRU中移除页面。
 
-   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 isolate_lru_page()
-   来完成的。调用isolate_lru_page()增加了对该页的引用,这样在页面迁移发生时它就不会
+   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 folio_isolate_lru()
+   来完成的。调用folio_isolate_lru()增加了对该页的引用,这样在页面迁移发生时它就不会
    消失。它还可以防止交换器或其他扫描器遇到该页。
 
 
@@ -65,7 +65,7 @@ migrate_pages()如何工作
 =======================
 
 migrate_pages()对它的页面列表进行了多次处理。如果当时对一个页面的所有引用都可以被移除,
-那么这个页面就会被移动。该页已经通过isolate_lru_page()从LRU中移除,并且refcount被
+那么这个页面就会被移动。该页已经通过folio_isolate_lru()从LRU中移除,并且refcount被
 增加,以便在页面迁移发生时不释放该页。
 
 步骤:
diff --git a/mm/filemap.c b/mm/filemap.c
index 7437b2bd75c1..2a03fbbf413a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -113,7 +113,7 @@
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
  *    ->lruvec->lru_lock	(follow_page->mark_page_accessed)
- *    ->lruvec->lru_lock	(check_pte_range->isolate_lru_page)
+ *    ->lruvec->lru_lock	(check_pte_range->folio_isolate_lru)
  *    ->private_lock		(folio_remove_rmap_pte->set_page_dirty)
  *    ->i_pages lock		(folio_remove_rmap_pte->set_page_dirty)
  *    bdi.wb->list_lock		(folio_remove_rmap_pte->set_page_dirty)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 50412014f16f..95ad426b296a 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -105,13 +105,6 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping,
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
 
-bool isolate_lru_page(struct page *page)
-{
-	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
-		return false;
-	return folio_isolate_lru((struct folio *)page);
-}
-
 void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
diff --git a/mm/internal.h b/mm/internal.h
index 7e486f2c502c..7cdf7d3d83ea 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -292,7 +292,6 @@ extern unsigned long highest_memmap_pfn;
 /*
  * in mm/vmscan.c:
  */
-bool isolate_lru_page(struct page *page);
 bool folio_isolate_lru(struct folio *folio);
 void putback_lru_page(struct page *page);
 void folio_putback_lru(struct folio *folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 38830174608f..e9b8b368f655 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -607,7 +607,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		}
 
 		/*
-		 * We can do it before isolate_lru_page because the
+		 * We can do it before folio_isolate_lru because the
 		 * page can't be freed from under us. NOTE: PG_lock
 		 * is needed to serialize against split_huge_page
 		 * when invoked from the VM.
@@ -1867,7 +1867,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 					result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
-				/* drain lru cache to help isolate_lru_page() */
+				/* drain lru cache to help folio_isolate_lru() */
 				lru_add_drain();
 				page = folio_file_page(folio, index);
 			} else if (trylock_page(page)) {
@@ -1883,7 +1883,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				page_cache_sync_readahead(mapping, &file->f_ra,
 							  file, index,
 							  end - index);
-				/* drain lru cache to help isolate_lru_page() */
+				/* drain lru cache to help folio_isolate_lru() */
 				lru_add_drain();
 				page = find_lock_page(mapping, index);
 				if (unlikely(page == NULL)) {
@@ -1990,7 +1990,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		 * We control three references to the page:
 		 *  - we hold a pin on it;
 		 *  - one reference from page cache;
-		 *  - one from isolate_lru_page;
+		 *  - one from folio_isolate_lru;
 		 * If those are the only references, then any new usage of the
 		 * page will have to fetch it from the page cache. That requires
 		 * locking the page to handle truncate, so any new usage will be
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index c0547271eaaa..3a42624bb590 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -326,7 +326,7 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 {
 	/*
 	 * One extra ref because caller holds an extra reference, either from
-	 * isolate_lru_page() for a regular page, or migrate_vma_collect() for
+	 * folio_isolate_lru() for a regular page, or migrate_vma_collect() for
 	 * a device page.
 	 */
 	int extra = 1 + (page == fault_page);
diff --git a/mm/swap.c b/mm/swap.c
index 500a09a48dfd..decd6d44b7ac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -930,7 +930,7 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
 
 /*
  * lru_cache_disable() needs to be called before we start compiling
- * a list of pages to be migrated using isolate_lru_page().
+ * a list of pages to be migrated using folio_isolate_lru().
  * It drains pages on LRU cache and then disable on all cpus until
  * lru_cache_enable is called.
  *
-- 
2.27.0



* [PATCH 4/6] mm: compaction: use isolate_movable_folio() in isolate_migratepages_block()
  2024-03-27 14:10 [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (2 preceding siblings ...)
  2024-03-27 14:10 ` [PATCH 3/6] mm: remove isolate_lru_page() Kefeng Wang
@ 2024-03-27 14:10 ` Kefeng Wang
  2024-03-27 18:49   ` Vishal Moola
  2024-03-27 14:10 ` [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio() Kefeng Wang
  2024-03-27 14:10 ` [PATCH 6/6] mm: migrate: remove isolate_movable_page() Kefeng Wang
  5 siblings, 1 reply; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang, Kefeng Wang

This moves folio_get_nontail_page() before the non-LRU movable page
check and directly calls isolate_movable_folio() to save
compound_head() calls. Since the reference count of the non-LRU
movable page is increased, a folio_put() is needed whether the folio
is isolated or not.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/compaction.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 807b58e6eb68..74ac65daaed1 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1097,41 +1097,41 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			}
 		}
 
+		/*
+		 * Be careful not to clear PageLRU until after we're
+		 * sure the page is not being freed elsewhere -- the
+		 * page release code relies on it.
+		 */
+		folio = folio_get_nontail_page(page);
+		if (unlikely(!folio))
+			goto isolate_fail;
+
 		/*
 		 * Check may be lockless but that's ok as we recheck later.
 		 * It's possible to migrate LRU and non-lru movable pages.
 		 * Skip any other type of page
 		 */
-		if (!PageLRU(page)) {
+		if (!folio_test_lru(folio)) {
 			/*
 			 * __PageMovable can return false positive so we need
 			 * to verify it under page_lock.
 			 */
-			if (unlikely(__PageMovable(page)) &&
-					!PageIsolated(page)) {
+			if (unlikely(__folio_test_movable(folio)) &&
+					!folio_test_isolated(folio)) {
 				if (locked) {
 					unlock_page_lruvec_irqrestore(locked, flags);
 					locked = NULL;
 				}
 
-				if (isolate_movable_page(page, mode)) {
-					folio = page_folio(page);
+				if (isolate_movable_folio(folio, mode)) {
+					folio_put(folio);
 					goto isolate_success;
 				}
 			}
 
-			goto isolate_fail;
+			goto isolate_fail_put;
 		}
 
-		/*
-		 * Be careful not to clear PageLRU until after we're
-		 * sure the page is not being freed elsewhere -- the
-		 * page release code relies on it.
-		 */
-		folio = folio_get_nontail_page(page);
-		if (unlikely(!folio))
-			goto isolate_fail;
-
 		/*
 		 * Migration will fail if an anonymous page is pinned in memory,
 		 * so avoid taking lru_lock and isolating it unnecessarily in an
-- 
2.27.0



* [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio()
  2024-03-27 14:10 [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (3 preceding siblings ...)
  2024-03-27 14:10 ` [PATCH 4/6] mm: compaction: use isolate_movable_folio() in isolate_migratepages_block() Kefeng Wang
@ 2024-03-27 14:10 ` Kefeng Wang
  2024-03-27 15:12   ` Zi Yan
  2024-03-28 16:57   ` kernel test robot
  2024-03-27 14:10 ` [PATCH 6/6] mm: migrate: remove isolate_movable_page() Kefeng Wang
  5 siblings, 2 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang, Kefeng Wang

Directly use the isolate_movable_folio() helper in mf_isolate_folio().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory-failure.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9349948f1abf..6f47776df0e1 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2628,8 +2628,8 @@ static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
 		if (lru)
 			isolated = folio_isolate_lru(folio);
 		else
-			isolated = isolate_movable_page(&folio->page,
-							ISOLATE_UNEVICTABLE);
+			isolated = isolate_movable_folio(folio,
+							 ISOLATE_UNEVICTABLE);
 
 		if (isolated) {
 			list_add(&folio->lru, pagelist);
-- 
2.27.0



* [PATCH 6/6] mm: migrate: remove isolate_movable_page()
  2024-03-27 14:10 [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (4 preceding siblings ...)
  2024-03-27 14:10 ` [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio() Kefeng Wang
@ 2024-03-27 14:10 ` Kefeng Wang
  5 siblings, 0 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang, Kefeng Wang

There are no more callers of isolate_movable_page(); remove it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h | 3 ---
 mm/migrate.c            | 8 --------
 2 files changed, 11 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index a6c38ee7246a..2b680b939020 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -69,7 +69,6 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned long private, enum migrate_mode mode, int reason,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
-bool isolate_movable_page(struct page *page, isolate_mode_t mode);
 bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -90,8 +89,6 @@ static inline int migrate_pages(struct list_head *l, new_folio_t new,
 static inline struct folio *alloc_migration_target(struct folio *src,
 		unsigned long private)
 	{ return NULL; }
-static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
-	{ return false; }
 static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
 	{ return false; }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index b2195b6ff32c..f50ed046ede3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -125,14 +125,6 @@ bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
 	return false;
 }
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
-{
-	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
-		return false;
-
-	return isolate_movable_folio((struct folio *)page, mode);
-}
-
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
-- 
2.27.0



* Re: [PATCH 1/6] mm: migrate: add isolate_movable_folio()
  2024-03-27 14:10 ` [PATCH 1/6] mm: migrate: add isolate_movable_folio() Kefeng Wang
@ 2024-03-27 14:29   ` Zi Yan
  2024-03-27 14:36     ` Kefeng Wang
  2024-03-27 18:59   ` Vishal Moola
  1 sibling, 1 reply; 24+ messages in thread
From: Zi Yan @ 2024-03-27 14:29 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Hugh Dickins, Jonathan Corbet,
	linux-mm, linux-doc, Baolin Wang


On 27 Mar 2024, at 10:10, Kefeng Wang wrote:

> Like isolate_lru_page(), make isolate_movable_page() a wrapper
> around isolate_movable_folio(). Since isolate_movable_page() always
> fails on a tail page, warn on tail pages and return immediately.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  include/linux/migrate.h |  3 +++
>  mm/migrate.c            | 41 +++++++++++++++++++++++------------------
>  2 files changed, 26 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index f9d92482d117..a6c38ee7246a 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>  		  unsigned int *ret_succeeded);
>  struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>  bool isolate_movable_page(struct page *page, isolate_mode_t mode);
> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
>
>  int migrate_huge_page_move_mapping(struct address_space *mapping,
>  		struct folio *dst, struct folio *src);
> @@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
>  	{ return NULL; }
>  static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	{ return false; }
> +static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
> +	{ return false; }
>
>  static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>  				  struct folio *dst, struct folio *src)
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2228ca681afb..b2195b6ff32c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -57,31 +57,29 @@
>
>  #include "internal.h"
>
> -bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
>  {
> -	struct folio *folio = folio_get_nontail_page(page);
>  	const struct movable_operations *mops;
>
>  	/*
> -	 * Avoid burning cycles with pages that are yet under __free_pages(),
> +	 * Avoid burning cycles with folios that are yet under __free_pages(),
>  	 * or just got freed under us.
>  	 *
> -	 * In case we 'win' a race for a movable page being freed under us and
> +	 * In case we 'win' a race for a movable folio being freed under us and
>  	 * raise its refcount preventing __free_pages() from doing its job
> -	 * the put_page() at the end of this block will take care of
> -	 * release this page, thus avoiding a nasty leakage.
> +	 * the folio_put() at the end of this block will take care of
> +	 * release this folio, thus avoiding a nasty leakage.
>  	 */
> -	if (!folio)
> -		goto out;
> +	folio_get(folio);

You need folio_try_get() instead, since folio_get_nontail_page()
calls get_page_unless_zero() first.
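
Untested, but I would expect the top of the helper to look roughly
like this (rest unchanged):

bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
{
	const struct movable_operations *mops;

	/* Only take a reference if the folio is not already being freed. */
	if (unlikely(!folio_try_get(folio)))
		return false;

	if (unlikely(folio_test_slab(folio)))
		goto out_putfolio;
	...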

>
>  	if (unlikely(folio_test_slab(folio)))
>  		goto out_putfolio;
>  	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
>  	smp_rmb();
>  	/*
> -	 * Check movable flag before taking the page lock because
> -	 * we use non-atomic bitops on newly allocated page flags so
> -	 * unconditionally grabbing the lock ruins page's owner side.
> +	 * Check movable flag before taking the folio lock because
> +	 * we use non-atomic bitops on newly allocated folio flags so
> +	 * unconditionally grabbing the lock ruins folio's owner side.
>  	 */
>  	if (unlikely(!__folio_test_movable(folio)))
>  		goto out_putfolio;
> @@ -91,13 +89,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  		goto out_putfolio;
>
>  	/*
> -	 * As movable pages are not isolated from LRU lists, concurrent
> -	 * compaction threads can race against page migration functions
> -	 * as well as race against the releasing a page.
> +	 * As movable folios are not isolated from LRU lists, concurrent
> +	 * compaction threads can race against folio migration functions
> +	 * as well as race against the releasing a folio.
>  	 *
> -	 * In order to avoid having an already isolated movable page
> +	 * In order to avoid having an already isolated movable folio
>  	 * being (wrongly) re-isolated while it is under migration,
> -	 * or to avoid attempting to isolate pages being released,
> +	 * or to avoid attempting to isolate folios being released,
>  	 * lets be sure we have the page lock
>  	 * before proceeding with the movable page isolation steps.
>  	 */
> @@ -113,7 +111,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	if (!mops->isolate_page(&folio->page, mode))
>  		goto out_no_isolated;
>
> -	/* Driver shouldn't use PG_isolated bit of page->flags */
> +	/* Driver shouldn't use PG_isolated bit of folio->flags */
>  	WARN_ON_ONCE(folio_test_isolated(folio));
>  	folio_set_isolated(folio);
>  	folio_unlock(folio);
> @@ -124,10 +122,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	folio_unlock(folio);
>  out_putfolio:
>  	folio_put(folio);
> -out:
>  	return false;
>  }
>
> +bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +{
> +	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
> +		return false;

Why bother adding a warning here? There was no warning before. Also,
after this series, isolate_movable_page() will be gone.

> +
> +	return isolate_movable_folio((struct folio *)page, mode);
> +}
> +
>  static void putback_movable_folio(struct folio *folio)
>  {
>  	const struct movable_operations *mops = folio_movable_ops(folio);
> -- 
> 2.27.0


--
Best Regards,
Yan, Zi



* Re: [PATCH 1/6] mm: migrate: add isolate_movable_folio()
  2024-03-27 14:29   ` Zi Yan
@ 2024-03-27 14:36     ` Kefeng Wang
  0 siblings, 0 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-27 14:36 UTC (permalink / raw)
  To: Zi Yan
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Hugh Dickins, Jonathan Corbet,
	linux-mm, linux-doc, Baolin Wang



On 2024/3/27 22:29, Zi Yan wrote:
> On 27 Mar 2024, at 10:10, Kefeng Wang wrote:
> 
>> Like isolate_lru_page(), make isolate_movable_page() a wrapper
>> around isolate_movable_folio(). Since isolate_movable_page() always
>> fails on a tail page, warn on tail pages and return immediately.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   include/linux/migrate.h |  3 +++
>>   mm/migrate.c            | 41 +++++++++++++++++++++++------------------
>>   2 files changed, 26 insertions(+), 18 deletions(-)
>>
>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>> index f9d92482d117..a6c38ee7246a 100644
>> --- a/include/linux/migrate.h
>> +++ b/include/linux/migrate.h
>> @@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>>   		  unsigned int *ret_succeeded);
>>   struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>>   bool isolate_movable_page(struct page *page, isolate_mode_t mode);
>> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
>>
>>   int migrate_huge_page_move_mapping(struct address_space *mapping,
>>   		struct folio *dst, struct folio *src);
>> @@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
>>   	{ return NULL; }
>>   static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   	{ return false; }
>> +static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
>> +	{ return false; }
>>
>>   static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>>   				  struct folio *dst, struct folio *src)
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 2228ca681afb..b2195b6ff32c 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -57,31 +57,29 @@
>>
>>   #include "internal.h"
>>
>> -bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
>>   {
>> -	struct folio *folio = folio_get_nontail_page(page);
>>   	const struct movable_operations *mops;
>>
>>   	/*
>> -	 * Avoid burning cycles with pages that are yet under __free_pages(),
>> +	 * Avoid burning cycles with folios that are yet under __free_pages(),
>>   	 * or just got freed under us.
>>   	 *
>> -	 * In case we 'win' a race for a movable page being freed under us and
>> +	 * In case we 'win' a race for a movable folio being freed under us and
>>   	 * raise its refcount preventing __free_pages() from doing its job
>> -	 * the put_page() at the end of this block will take care of
>> -	 * release this page, thus avoiding a nasty leakage.
>> +	 * the folio_put() at the end of this block will take care of
>> +	 * release this folio, thus avoiding a nasty leakage.
>>   	 */
>> -	if (!folio)
>> -		goto out;
>> +	folio_get(folio);
> 
> You need folio_try_get() instead. Since folio_get_nontail_page() calls
> get_page_unless_zero() first.
Oh, indeed, will fix.
> 
>>
>>   	if (unlikely(folio_test_slab(folio)))
>>   		goto out_putfolio;
>>   	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
>>   	smp_rmb();
>>   	/*
>> -	 * Check movable flag before taking the page lock because
>> -	 * we use non-atomic bitops on newly allocated page flags so
>> -	 * unconditionally grabbing the lock ruins page's owner side.
>> +	 * Check movable flag before taking the folio lock because
>> +	 * we use non-atomic bitops on newly allocated folio flags so
>> +	 * unconditionally grabbing the lock ruins folio's owner side.
>>   	 */
>>   	if (unlikely(!__folio_test_movable(folio)))
>>   		goto out_putfolio;
>> @@ -91,13 +89,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   		goto out_putfolio;
>>
>>   	/*
>> -	 * As movable pages are not isolated from LRU lists, concurrent
>> -	 * compaction threads can race against page migration functions
>> -	 * as well as race against the releasing a page.
>> +	 * As movable folios are not isolated from LRU lists, concurrent
>> +	 * compaction threads can race against folio migration functions
>> +	 * as well as race against the releasing a folio.
>>   	 *
>> -	 * In order to avoid having an already isolated movable page
>> +	 * In order to avoid having an already isolated movable folio
>>   	 * being (wrongly) re-isolated while it is under migration,
>> -	 * or to avoid attempting to isolate pages being released,
>> +	 * or to avoid attempting to isolate folios being released,
>>   	 * lets be sure we have the page lock
>>   	 * before proceeding with the movable page isolation steps.
>>   	 */
>> @@ -113,7 +111,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   	if (!mops->isolate_page(&folio->page, mode))
>>   		goto out_no_isolated;
>>
>> -	/* Driver shouldn't use PG_isolated bit of page->flags */
>> +	/* Driver shouldn't use PG_isolated bit of folio->flags */
>>   	WARN_ON_ONCE(folio_test_isolated(folio));
>>   	folio_set_isolated(folio);
>>   	folio_unlock(folio);
>> @@ -124,10 +122,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   	folio_unlock(folio);
>>   out_putfolio:
>>   	folio_put(folio);
>> -out:
>>   	return false;
>>   }
>>
>> +bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> +{
>> +	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
>> +		return false;
> 
> Why bother adding a warning here? There was no warning before. Also,
> after this series, isolate_movable_page() will be gone.

I copied it from isolate_lru_page(), but as you said, it seems
useless; I will remove it.

Thanks.
> 
>> +
>> +	return isolate_movable_folio((struct folio *)page, mode);
>> +}
>> +
>>   static void putback_movable_folio(struct folio *folio)
>>   {
>>   	const struct movable_operations *mops = folio_movable_ops(folio);
>> -- 
>> 2.27.0
> 
> 
> --
> Best Regards,
> Yan, Zi


* Re: [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range()
  2024-03-27 14:10 ` [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range() Kefeng Wang
@ 2024-03-27 14:45   ` Zi Yan
  2024-03-27 14:54     ` Matthew Wilcox
  0 siblings, 1 reply; 24+ messages in thread
From: Zi Yan @ 2024-03-27 14:45 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Hugh Dickins, Jonathan Corbet,
	linux-mm, linux-doc, Baolin Wang


On 27 Mar 2024, at 10:10, Kefeng Wang wrote:

> With isolate_movable_folio() and folio_isolate_lru(), let's use
> more folios in do_migrate_range() to save compound_head() calls.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/memory_hotplug.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index a444e2d7dd2b..bd207772c619 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1774,14 +1774,14 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>
>  static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  {
> +	struct folio *folio;
>  	unsigned long pfn;
> -	struct page *page, *head;
>  	LIST_HEAD(source);
>  	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
>  				      DEFAULT_RATELIMIT_BURST);
>
>  	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> -		struct folio *folio;
> +		struct page *page, *head;

You could get rid of head too. It is only used to calculate next pfn,
so pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1 would work.

And the PageHuge(page) and PageTransHuge(page) can be simplified, since
their pfn calculations are the same. Something like:

if (folio_test_large(folio)) {
	pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1;
	if (folio_test_hugetlb(folio)) {
		isolate_hugetlb(folio, &source);
		continue;
	}
}



--
Best Regards,
Yan, Zi



* Re: [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range()
  2024-03-27 14:45   ` Zi Yan
@ 2024-03-27 14:54     ` Matthew Wilcox
  2024-03-27 15:10       ` Zi Yan
  0 siblings, 1 reply; 24+ messages in thread
From: Matthew Wilcox @ 2024-03-27 14:54 UTC (permalink / raw)
  To: Zi Yan
  Cc: Kefeng Wang, Andrew Morton, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Hugh Dickins, Jonathan Corbet,
	linux-mm, linux-doc, Baolin Wang

On Wed, Mar 27, 2024 at 10:45:42AM -0400, Zi Yan wrote:
> >  	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> > -		struct folio *folio;
> > +		struct page *page, *head;
> 
> You could get rid of head too. It is only used to calculate next pfn,
> so pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1 would work.
> 
> And the PageHuge(page) and PageTransHuge(page) can be simplified, since
> their pfn calculations are the same. Something like:
> 
> if (folio_test_large(folio)) {
> 	pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1;
> 	if (folio_test_hugetlb(folio)) {
> 		isolate_hugetlb(folio, &source);
> 		continue;
> 	}
> }

How much of this is safe without a refcount on the folio?




* Re: [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range()
  2024-03-27 14:54     ` Matthew Wilcox
@ 2024-03-27 15:10       ` Zi Yan
  2024-03-27 15:58         ` Matthew Wilcox
  2024-03-28  5:06         ` Kefeng Wang
  0 siblings, 2 replies; 24+ messages in thread
From: Zi Yan @ 2024-03-27 15:10 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Kefeng Wang, Andrew Morton, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Hugh Dickins, Jonathan Corbet,
	linux-mm, linux-doc, Baolin Wang


On 27 Mar 2024, at 10:54, Matthew Wilcox wrote:

> On Wed, Mar 27, 2024 at 10:45:42AM -0400, Zi Yan wrote:
>>>  	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>>> -		struct folio *folio;
>>> +		struct page *page, *head;
>>
>> You could get rid of head too. It is only used to calculate next pfn,
>> so pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1 would work.
>>
>> And the PageHuge(page) and PageTransHuge(page) can be simplified, since
>> their pfn calculations are the same. Something like:
>>
>> if (folio_test_large(folio)) {
>> 	pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1;
>> 	if (folio_test_hugetlb(folio)) {
>> 		isolate_hugetlb(folio, &source);
>> 		continue;
>> 	}
>> }
>
> How much of this is safe without a refcount on the folio?

folio_to_pfn() should be fine, and isolate_hugetlb() checks the folio
under hugetlb_lock, but folio_nr_pages() might return a bogus number
that makes pfn go beyond end_pfn and end the for loop early. The code
below increases the refcount, so it might be better to move this
part of the code to after the refcount is increased.
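
Roughly what I had in mind (untested), once the refcount is held:

if (!get_page_unless_zero(page))
	continue;
folio = page_folio(page);

if (folio_test_large(folio)) {
	pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
	if (folio_test_hugetlb(folio)) {
		/* isolate_hugetlb() takes its own reference under hugetlb_lock */
		isolate_hugetlb(folio, &source);
		folio_put(folio);
		continue;
	}
}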

--
Best Regards,
Yan, Zi



* Re: [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio()
  2024-03-27 14:10 ` [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio() Kefeng Wang
@ 2024-03-27 15:12   ` Zi Yan
  2024-03-28 16:57   ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: Zi Yan @ 2024-03-27 15:12 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Hugh Dickins, Jonathan Corbet,
	linux-mm, linux-doc, Baolin Wang


On 27 Mar 2024, at 10:10, Kefeng Wang wrote:

> Directly use the isolate_movable_folio() helper in mf_isolate_folio().
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/memory-failure.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi



* Re: [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range()
  2024-03-27 15:10       ` Zi Yan
@ 2024-03-27 15:58         ` Matthew Wilcox
  2024-03-28  5:30           ` Kefeng Wang
  2024-03-28  5:06         ` Kefeng Wang
  1 sibling, 1 reply; 24+ messages in thread
From: Matthew Wilcox @ 2024-03-27 15:58 UTC (permalink / raw)
  To: Zi Yan
  Cc: Kefeng Wang, Andrew Morton, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Hugh Dickins, Jonathan Corbet,
	linux-mm, linux-doc, Baolin Wang

On Wed, Mar 27, 2024 at 11:10:48AM -0400, Zi Yan wrote:
> On 27 Mar 2024, at 10:54, Matthew Wilcox wrote:
> > How much of this is safe without a refcount on the folio?
> 
> folio_to_pfn() should be fine, isolate_hugetlb() checks the folio
> under hugetlb_lock, but folio_nr_pages() might return a bogus number
> that makes pfn go beyond end_pfn and ends for loop early. The code
> below increases the refcount, so it might be better to move this
> part of the code after refcount is increased.

I really want to instill a little bit of fear in Kefeng.

This is all really subtle code because it's running without a refcount.
I've mostly stayed away from it because I know that I don't know what
I'm doing.  Kefeng has no idea that he doesn't know what he's doing.

And so we get these patches, and they're sometimes subtly wrong, and it
takes a lot of arguing and revision and thinking and Kefeng is doing
very little of the thinking part!

Kefeng, please stick to working on code that you understand.  Or take
the time to learn code you don't understand before sending patches to
it.  This kind of Leeroy Jenkins approach to development is not good.


* Re: [PATCH 4/6] mm: compaction: use isolate_movable_folio() in isolate_migratepages_block()
  2024-03-27 14:10 ` [PATCH 4/6] mm: compaction: use isolate_movable_folio() in isolate_migratepages_block() Kefeng Wang
@ 2024-03-27 18:49   ` Vishal Moola
  2024-03-28 12:49     ` Kefeng Wang
  0 siblings, 1 reply; 24+ messages in thread
From: Vishal Moola @ 2024-03-27 18:49 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Zi Yan, Hugh Dickins,
	Jonathan Corbet, linux-mm, linux-doc, Baolin Wang

On Wed, Mar 27, 2024 at 10:10:32PM +0800, Kefeng Wang wrote:
> This moves folio_get_nontail_page() before the non-LRU movable page
> check and directly calls isolate_movable_folio() to save
> compound_head() calls. Since the reference count of the non-LRU
> movable page is increased, a folio_put() is needed whether the folio
> is isolated or not.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/compaction.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 807b58e6eb68..74ac65daaed1 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1097,41 +1097,41 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			}
>  		}
>  
> +		/*
> +		 * Be careful not to clear PageLRU until after we're
> +		 * sure the page is not being freed elsewhere -- the
> +		 * page release code relies on it.
> +		 */
> +		folio = folio_get_nontail_page(page);
> +		if (unlikely(!folio))
> +			goto isolate_fail;
> +

If you wanted to move this, I think this should be part of your first
patch (or prior to it). It would make your first patch more sensible as
is. You could then also consider making isolate_movable_folio() more
similar to folio_isolate_lru() if you really wanted to.
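
For instance, isolate_movable_folio() could expect the caller to hold
a reference and only assert it, the way folio_isolate_lru() does; a
rough, untested sketch:

bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
{
	/* Caller holds a reference, as with folio_isolate_lru(). */
	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
	...
}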

>  		/*
>  		 * Check may be lockless but that's ok as we recheck later.
>  		 * It's possible to migrate LRU and non-lru movable pages.
>  		 * Skip any other type of page
>  		 */
> -		if (!PageLRU(page)) {
> +		if (!folio_test_lru(folio)) {
>  			/*
>  			 * __PageMovable can return false positive so we need
>  			 * to verify it under page_lock.
>  			 */
> -			if (unlikely(__PageMovable(page)) &&
> -					!PageIsolated(page)) {
> +			if (unlikely(__folio_test_movable(folio)) &&
> +					!folio_test_isolated(folio)) {
>  				if (locked) {
>  					unlock_page_lruvec_irqrestore(locked, flags);
>  					locked = NULL;
>  				}
>  
> -				if (isolate_movable_page(page, mode)) {
> -					folio = page_folio(page);
> +				if (isolate_movable_folio(folio, mode)) {
> +					folio_put(folio);
>  					goto isolate_success;
>  				}
>  			}
>  
> -			goto isolate_fail;
> +			goto isolate_fail_put;
>  		}
>  
> -		/*
> -		 * Be careful not to clear PageLRU until after we're
> -		 * sure the page is not being freed elsewhere -- the
> -		 * page release code relies on it.
> -		 */
> -		folio = folio_get_nontail_page(page);
> -		if (unlikely(!folio))
> -			goto isolate_fail;
> -
>  		/*
>  		 * Migration will fail if an anonymous page is pinned in memory,
>  		 * so avoid taking lru_lock and isolating it unnecessarily in an
> -- 
> 2.27.0
> 
> 


* Re: [PATCH 1/6] mm: migrate: add isolate_movable_folio()
  2024-03-27 14:10 ` [PATCH 1/6] mm: migrate: add isolate_movable_folio() Kefeng Wang
  2024-03-27 14:29   ` Zi Yan
@ 2024-03-27 18:59   ` Vishal Moola
  2024-03-28  5:08     ` Kefeng Wang
  1 sibling, 1 reply; 24+ messages in thread
From: Vishal Moola @ 2024-03-27 18:59 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Zi Yan, Hugh Dickins,
	Jonathan Corbet, linux-mm, linux-doc, Baolin Wang

On Wed, Mar 27, 2024 at 10:10:29PM +0800, Kefeng Wang wrote:
> Like isolate_lru_page(), make isolate_movable_page() a wrapper
> around isolate_movable_folio(). Since isolate_movable_page() always
> fails on a tail page, warn on tail pages and return immediately.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  include/linux/migrate.h |  3 +++
>  mm/migrate.c            | 41 +++++++++++++++++++++++------------------
>  2 files changed, 26 insertions(+), 18 deletions(-)
> 
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index f9d92482d117..a6c38ee7246a 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>  		  unsigned int *ret_succeeded);
>  struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>  bool isolate_movable_page(struct page *page, isolate_mode_t mode);
> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
>  
>  int migrate_huge_page_move_mapping(struct address_space *mapping,
>  		struct folio *dst, struct folio *src);
> @@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
>  	{ return NULL; }
>  static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	{ return false; }
> +static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
> +	{ return false; }

Wrong argument here.

>  
>  static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>  				  struct folio *dst, struct folio *src)
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2228ca681afb..b2195b6ff32c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -57,31 +57,29 @@
>  
>  #include "internal.h"
>  
> -bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
>  {
> -	struct folio *folio = folio_get_nontail_page(page);
>  	const struct movable_operations *mops;
>  
>  	/*
> -	 * Avoid burning cycles with pages that are yet under __free_pages(),
> +	 * Avoid burning cycles with folios that are yet under __free_pages(),
>  	 * or just got freed under us.
>  	 *
> -	 * In case we 'win' a race for a movable page being freed under us and
> +	 * In case we 'win' a race for a movable folio being freed under us and
>  	 * raise its refcount preventing __free_pages() from doing its job
> -	 * the put_page() at the end of this block will take care of
> -	 * release this page, thus avoiding a nasty leakage.
> +	 * the folio_put() at the end of this block will take care of
> +	 * release this folio, thus avoiding a nasty leakage.
>  	 */
> -	if (!folio)
> -		goto out;
> +	folio_get(folio);
>  
>  	if (unlikely(folio_test_slab(folio)))
>  		goto out_putfolio;
>  	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
>  	smp_rmb();
>  	/*
> -	 * Check movable flag before taking the page lock because
> -	 * we use non-atomic bitops on newly allocated page flags so
> -	 * unconditionally grabbing the lock ruins page's owner side.
> +	 * Check movable flag before taking the folio lock because
> +	 * we use non-atomic bitops on newly allocated folio flags so
> +	 * unconditionally grabbing the lock ruins folio's owner side.
>  	 */
>  	if (unlikely(!__folio_test_movable(folio)))
>  		goto out_putfolio;
> @@ -91,13 +89,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  		goto out_putfolio;
>  
>  	/*
> -	 * As movable pages are not isolated from LRU lists, concurrent
> -	 * compaction threads can race against page migration functions
> -	 * as well as race against the releasing a page.
> +	 * As movable folios are not isolated from LRU lists, concurrent
> +	 * compaction threads can race against folio migration functions
> +	 * as well as race against the releasing a folio.
>  	 *
> -	 * In order to avoid having an already isolated movable page
> +	 * In order to avoid having an already isolated movable folio
>  	 * being (wrongly) re-isolated while it is under migration,
> -	 * or to avoid attempting to isolate pages being released,
> +	 * or to avoid attempting to isolate folios being released,
>  	 * lets be sure we have the page lock
>  	 * before proceeding with the movable page isolation steps.
>  	 */
> @@ -113,7 +111,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	if (!mops->isolate_page(&folio->page, mode))
>  		goto out_no_isolated;
>  
> -	/* Driver shouldn't use PG_isolated bit of page->flags */
> +	/* Driver shouldn't use PG_isolated bit of folio->flags */
>  	WARN_ON_ONCE(folio_test_isolated(folio));
>  	folio_set_isolated(folio);
>  	folio_unlock(folio);
> @@ -124,10 +122,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	folio_unlock(folio);
>  out_putfolio:
>  	folio_put(folio);
> -out:
>  	return false;
>  }
>  
> +bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +{
> +	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
> +		return false;

This warning doesn't make sense. As of now, we still want isolate_movable_page()
to be able to take in a tail page; we just don't want to operate on it.
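
e.g. a minimal sketch without the warning (illustrative only; it just
keeps the old behaviour from folio_get_nontail_page(), where a tail page
simply fails isolation):

bool isolate_movable_page(struct page *page, isolate_mode_t mode)
{
	/* Tail pages may still be passed in; they simply fail isolation. */
	if (PageTail(page))
		return false;

	return isolate_movable_folio((struct folio *)page, mode);
}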

> +	return isolate_movable_folio((struct folio *)page, mode);
> +}
> +
>  static void putback_movable_folio(struct folio *folio)
>  {
>  	const struct movable_operations *mops = folio_movable_ops(folio);
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range()
  2024-03-27 15:10       ` Zi Yan
  2024-03-27 15:58         ` Matthew Wilcox
@ 2024-03-28  5:06         ` Kefeng Wang
  1 sibling, 0 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-28  5:06 UTC (permalink / raw)
  To: Zi Yan, Matthew Wilcox
  Cc: Andrew Morton, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang



On 2024/3/27 23:10, Zi Yan wrote:
> On 27 Mar 2024, at 10:54, Matthew Wilcox wrote:
> 
>> On Wed, Mar 27, 2024 at 10:45:42AM -0400, Zi Yan wrote:
>>>>   	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>>>> -		struct folio *folio;
>>>> +		struct page *page, *head;
>>>
>>> You could get rid of head too. It is only used to calculate next pfn,
>>> so pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1 would work.
>>>
>>> And the PageHuge(page) and PageTransHuge(page) can be simplified, since
>>> their pfn calculations are the same. Something like:
>>>
>>> if (folio_test_large(folio)) {
>>> 	pfn = folio_to_pfn(folio) + folio_nr_pages(folio) - 1;
>>> 	if (folio_test_hugetlb(folio)) {
>>> 		isolate_hugetlb(folio, &source);
>>> 		continue;
>>> 	}
>>> }
>>
>> How much of this is safe without a refcount on the folio?
> 
> folio_to_pfn() should be fine, isolate_hugetlb() checks the folio
> under hugetlb_lock, but folio_nr_pages() might return a bogus number
> that makes pfn go beyond end_pfn and ends for loop early. The code
> below increases the refcount, so it might be better to move this
> part of the code after refcount is increased.

The PageHWPoison() check is per-page; for hugetlb we directly try to
isolate and skip the PageHWPoison check. I'm not sure about moving the
PageHWPoison check ahead: we would need to take i_mmap_lock_write and
pass TTU_RMAP_LOCKED for hugetlb pages in shared mappings when calling
try_to_unmap(), whereas currently hugetlb pages are not unmapped when a
poisoned page is present. If get_page_unless_zero() is moved ahead, the
logic needs to change quite a lot, and this minimal change only aims to
remove isolate_lru/movable_page. So could we keep it simple? As you
suggested, we could do more optimization of do_migrate_range() in a
follow-up.

Thanks.


> 
> --
> Best Regards,
> Yan, Zi

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/6] mm: migrate: add isolate_movable_folio()
  2024-03-27 18:59   ` Vishal Moola
@ 2024-03-28  5:08     ` Kefeng Wang
  0 siblings, 0 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-28  5:08 UTC (permalink / raw)
  To: Vishal Moola
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Zi Yan, Hugh Dickins,
	Jonathan Corbet, linux-mm, linux-doc, Baolin Wang



On 2024/3/28 2:59, Vishal Moola wrote:
> On Wed, Mar 27, 2024 at 10:10:29PM +0800, Kefeng Wang wrote:
>> Like isolate_lru_page(), make isolate_movable_page() as a wrapper
>> around isolate_lru_folio(), since isolate_movable_page() always
>> fails on a tail page, add a warn for tail page and return immediately.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   include/linux/migrate.h |  3 +++
>>   mm/migrate.c            | 41 +++++++++++++++++++++++------------------
>>   2 files changed, 26 insertions(+), 18 deletions(-)
>>
>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>> index f9d92482d117..a6c38ee7246a 100644
>> --- a/include/linux/migrate.h
>> +++ b/include/linux/migrate.h
>> @@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>>   		  unsigned int *ret_succeeded);
>>   struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>>   bool isolate_movable_page(struct page *page, isolate_mode_t mode);
>> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
>>   
>>   int migrate_huge_page_move_mapping(struct address_space *mapping,
>>   		struct folio *dst, struct folio *src);
>> @@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
>>   	{ return NULL; }
>>   static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   	{ return false; }
>> +static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
>> +	{ return false; }
> 
> Wrong argument here.
Mistake, will fix.
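
For the !CONFIG_MIGRATION case the stub presumably just needs to take a
folio, matching the declaration above:

static inline bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
	{ return false; }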

> 
>>   
>>   static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>>   				  struct folio *dst, struct folio *src)
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 2228ca681afb..b2195b6ff32c 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -57,31 +57,29 @@
>>   
>>   #include "internal.h"
>>   
>> -bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
>>   {
>> -	struct folio *folio = folio_get_nontail_page(page);
>>   	const struct movable_operations *mops;
>>   
>>   	/*
>> -	 * Avoid burning cycles with pages that are yet under __free_pages(),
>> +	 * Avoid burning cycles with folios that are yet under __free_pages(),
>>   	 * or just got freed under us.
>>   	 *
>> -	 * In case we 'win' a race for a movable page being freed under us and
>> +	 * In case we 'win' a race for a movable folio being freed under us and
>>   	 * raise its refcount preventing __free_pages() from doing its job
>> -	 * the put_page() at the end of this block will take care of
>> -	 * release this page, thus avoiding a nasty leakage.
>> +	 * the folio_put() at the end of this block will take care of
>> +	 * release this folio, thus avoiding a nasty leakage.
>>   	 */
>> -	if (!folio)
>> -		goto out;
>> +	folio_get(folio);
>>   
>>   	if (unlikely(folio_test_slab(folio)))
>>   		goto out_putfolio;
>>   	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
>>   	smp_rmb();
>>   	/*
>> -	 * Check movable flag before taking the page lock because
>> -	 * we use non-atomic bitops on newly allocated page flags so
>> -	 * unconditionally grabbing the lock ruins page's owner side.
>> +	 * Check movable flag before taking the folio lock because
>> +	 * we use non-atomic bitops on newly allocated folio flags so
>> +	 * unconditionally grabbing the lock ruins folio's owner side.
>>   	 */
>>   	if (unlikely(!__folio_test_movable(folio)))
>>   		goto out_putfolio;
>> @@ -91,13 +89,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   		goto out_putfolio;
>>   
>>   	/*
>> -	 * As movable pages are not isolated from LRU lists, concurrent
>> -	 * compaction threads can race against page migration functions
>> -	 * as well as race against the releasing a page.
>> +	 * As movable folios are not isolated from LRU lists, concurrent
>> +	 * compaction threads can race against folio migration functions
>> +	 * as well as race against the releasing a folio.
>>   	 *
>> -	 * In order to avoid having an already isolated movable page
>> +	 * In order to avoid having an already isolated movable folio
>>   	 * being (wrongly) re-isolated while it is under migration,
>> -	 * or to avoid attempting to isolate pages being released,
>> +	 * or to avoid attempting to isolate folios being released,
>>   	 * lets be sure we have the page lock
>>   	 * before proceeding with the movable page isolation steps.
>>   	 */
>> @@ -113,7 +111,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   	if (!mops->isolate_page(&folio->page, mode))
>>   		goto out_no_isolated;
>>   
>> -	/* Driver shouldn't use PG_isolated bit of page->flags */
>> +	/* Driver shouldn't use PG_isolated bit of folio->flags */
>>   	WARN_ON_ONCE(folio_test_isolated(folio));
>>   	folio_set_isolated(folio);
>>   	folio_unlock(folio);
>> @@ -124,10 +122,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>>   	folio_unlock(folio);
>>   out_putfolio:
>>   	folio_put(folio);
>> -out:
>>   	return false;
>>   }
>>   
>> +bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> +{
>> +	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
>> +		return false;
> 
> This warning doesn't make sense. As of now, we still want isolate_movable_page()
> to be able to take in a tail page; we just don't want to operate on it.
Zi replied about this too; I will remove it.

Thanks.
> 
>> +	return isolate_movable_folio((struct folio *)page, mode);
>> +}
>> +
>>   static void putback_movable_folio(struct folio *folio)
>>   {
>>   	const struct movable_operations *mops = folio_movable_ops(folio);
>> -- 
>> 2.27.0
>>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range()
  2024-03-27 15:58         ` Matthew Wilcox
@ 2024-03-28  5:30           ` Kefeng Wang
  0 siblings, 0 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-28  5:30 UTC (permalink / raw)
  To: Matthew Wilcox, Zi Yan
  Cc: Andrew Morton, Miaohe Lin, Naoya Horiguchi, David Hildenbrand,
	Oscar Salvador, Hugh Dickins, Jonathan Corbet, linux-mm,
	linux-doc, Baolin Wang



On 2024/3/27 23:58, Matthew Wilcox wrote:
> On Wed, Mar 27, 2024 at 11:10:48AM -0400, Zi Yan wrote:
>> On 27 Mar 2024, at 10:54, Matthew Wilcox wrote:
>>> How much of this is safe without a refcount on the folio?
>>
>> folio_to_pfn() should be fine, isolate_hugetlb() checks the folio
>> under hugetlb_lock, but folio_nr_pages() might return a bogus number
>> that makes pfn go beyond end_pfn and ends for loop early. The code
>> below increases the refcount, so it might be better to move this
>> part of the code after refcount is increased.
> 
> I really want to instill a little bit of fear in Kefeng.
> 
> This is all really subtle code because it's running without a refcount.
> I've mostly stayed away from it because I know that I don't know what
> I'm doing.  Kefeng has no idea that he doesn't know what he's doing.
> 
> And so we get these patches, and they're sometimes subtly wrong, and it
> takes a lot of arguing and revision and thinking and Kefeng is doing
> very little of the thinking part!
> 
> Kefeng, please stick to working on code that you understand.  Or take
> the time to learn code you don't understand before sending patches to
> it.  This kind of Leeroy Jenkins approach to development is not good.

Oh, I remember your reminder; I do treat these changes with caution and
try to think them through. For this one, I only convert the page to a
folio after the refcount has been raised with get_page_unless_zero(),
and, as I replied to Zi, the PageHWPoison part needs more consideration.
This series only aims to remove isolate_lru/movable_page, so it does not
touch the page before get_page_unless_zero().

Thanks for your review and help, as always.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 3/6] mm: remove isolate_lru_page()
  2024-03-27 14:10 ` [PATCH 3/6] mm: remove isolate_lru_page() Kefeng Wang
@ 2024-03-28 12:22   ` kernel test robot
  2024-03-28 12:56     ` Kefeng Wang
  2024-03-28 15:33   ` kernel test robot
  1 sibling, 1 reply; 24+ messages in thread
From: kernel test robot @ 2024-03-28 12:22 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: llvm, oe-kbuild-all, Linux Memory Management List, willy,
	Miaohe Lin, Naoya Horiguchi, David Hildenbrand, Oscar Salvador,
	Zi Yan, Hugh Dickins, Jonathan Corbet, linux-doc, Baolin Wang,
	Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v6.9-rc1]
[cannot apply to akpm-mm/mm-everything next-20240328]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-migrate-add-isolate_movable_folio/20240327-221513
base:   linus/master
patch link:    https://lore.kernel.org/r/20240327141034.3712697-4-wangkefeng.wang%40huawei.com
patch subject: [PATCH 3/6] mm: remove isolate_lru_page()
config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403282057.pIA3kJoz-lkp@intel.com/

All errors (new ones prefixed by >>):

>> mm/migrate_device.c:388:9: error: call to undeclared function 'isolate_lru_page'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     388 |                         if (!isolate_lru_page(page)) {
         |                              ^
   mm/migrate_device.c:388:9: note: did you mean '__isolate_free_page'?
   mm/internal.h:487:12: note: '__isolate_free_page' declared here
     487 | extern int __isolate_free_page(struct page *page, unsigned int order);
         |            ^
   1 error generated.


vim +/isolate_lru_page +388 mm/migrate_device.c

76cbbead253ddc Christoph Hellwig       2022-02-16  355  
76cbbead253ddc Christoph Hellwig       2022-02-16  356  /*
44af0b45d58d7b Alistair Popple         2022-11-11  357   * Unmaps pages for migration. Returns number of source pfns marked as
44af0b45d58d7b Alistair Popple         2022-11-11  358   * migrating.
76cbbead253ddc Christoph Hellwig       2022-02-16  359   */
241f6885965683 Alistair Popple         2022-09-28  360  static unsigned long migrate_device_unmap(unsigned long *src_pfns,
241f6885965683 Alistair Popple         2022-09-28  361  					  unsigned long npages,
241f6885965683 Alistair Popple         2022-09-28  362  					  struct page *fault_page)
76cbbead253ddc Christoph Hellwig       2022-02-16  363  {
76cbbead253ddc Christoph Hellwig       2022-02-16  364  	unsigned long i, restore = 0;
76cbbead253ddc Christoph Hellwig       2022-02-16  365  	bool allow_drain = true;
241f6885965683 Alistair Popple         2022-09-28  366  	unsigned long unmapped = 0;
76cbbead253ddc Christoph Hellwig       2022-02-16  367  
76cbbead253ddc Christoph Hellwig       2022-02-16  368  	lru_add_drain();
76cbbead253ddc Christoph Hellwig       2022-02-16  369  
76cbbead253ddc Christoph Hellwig       2022-02-16  370  	for (i = 0; i < npages; i++) {
241f6885965683 Alistair Popple         2022-09-28  371  		struct page *page = migrate_pfn_to_page(src_pfns[i]);
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  372) 		struct folio *folio;
76cbbead253ddc Christoph Hellwig       2022-02-16  373  
44af0b45d58d7b Alistair Popple         2022-11-11  374  		if (!page) {
44af0b45d58d7b Alistair Popple         2022-11-11  375  			if (src_pfns[i] & MIGRATE_PFN_MIGRATE)
44af0b45d58d7b Alistair Popple         2022-11-11  376  				unmapped++;
76cbbead253ddc Christoph Hellwig       2022-02-16  377  			continue;
44af0b45d58d7b Alistair Popple         2022-11-11  378  		}
76cbbead253ddc Christoph Hellwig       2022-02-16  379  
76cbbead253ddc Christoph Hellwig       2022-02-16  380  		/* ZONE_DEVICE pages are not on LRU */
76cbbead253ddc Christoph Hellwig       2022-02-16  381  		if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig       2022-02-16  382  			if (!PageLRU(page) && allow_drain) {
1fec6890bf2247 Matthew Wilcox (Oracle  2023-06-21  383) 				/* Drain CPU's lru cache */
76cbbead253ddc Christoph Hellwig       2022-02-16  384  				lru_add_drain_all();
76cbbead253ddc Christoph Hellwig       2022-02-16  385  				allow_drain = false;
76cbbead253ddc Christoph Hellwig       2022-02-16  386  			}
76cbbead253ddc Christoph Hellwig       2022-02-16  387  
f7f9c00dfafffd Baolin Wang             2023-02-15 @388  			if (!isolate_lru_page(page)) {
241f6885965683 Alistair Popple         2022-09-28  389  				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig       2022-02-16  390  				restore++;
76cbbead253ddc Christoph Hellwig       2022-02-16  391  				continue;
76cbbead253ddc Christoph Hellwig       2022-02-16  392  			}
76cbbead253ddc Christoph Hellwig       2022-02-16  393  
76cbbead253ddc Christoph Hellwig       2022-02-16  394  			/* Drop the reference we took in collect */
76cbbead253ddc Christoph Hellwig       2022-02-16  395  			put_page(page);
76cbbead253ddc Christoph Hellwig       2022-02-16  396  		}
76cbbead253ddc Christoph Hellwig       2022-02-16  397  
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  398) 		folio = page_folio(page);
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  399) 		if (folio_mapped(folio))
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  400) 			try_to_migrate(folio, 0);
76cbbead253ddc Christoph Hellwig       2022-02-16  401  
16ce101db85db6 Alistair Popple         2022-09-28  402  		if (page_mapped(page) ||
241f6885965683 Alistair Popple         2022-09-28  403  		    !migrate_vma_check_page(page, fault_page)) {
76cbbead253ddc Christoph Hellwig       2022-02-16  404  			if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig       2022-02-16  405  				get_page(page);
76cbbead253ddc Christoph Hellwig       2022-02-16  406  				putback_lru_page(page);
76cbbead253ddc Christoph Hellwig       2022-02-16  407  			}
76cbbead253ddc Christoph Hellwig       2022-02-16  408  
241f6885965683 Alistair Popple         2022-09-28  409  			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig       2022-02-16  410  			restore++;
76cbbead253ddc Christoph Hellwig       2022-02-16  411  			continue;
76cbbead253ddc Christoph Hellwig       2022-02-16  412  		}
241f6885965683 Alistair Popple         2022-09-28  413  
241f6885965683 Alistair Popple         2022-09-28  414  		unmapped++;
76cbbead253ddc Christoph Hellwig       2022-02-16  415  	}
76cbbead253ddc Christoph Hellwig       2022-02-16  416  
76cbbead253ddc Christoph Hellwig       2022-02-16  417  	for (i = 0; i < npages && restore; i++) {
241f6885965683 Alistair Popple         2022-09-28  418  		struct page *page = migrate_pfn_to_page(src_pfns[i]);
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  419) 		struct folio *folio;
76cbbead253ddc Christoph Hellwig       2022-02-16  420  
241f6885965683 Alistair Popple         2022-09-28  421  		if (!page || (src_pfns[i] & MIGRATE_PFN_MIGRATE))
76cbbead253ddc Christoph Hellwig       2022-02-16  422  			continue;
76cbbead253ddc Christoph Hellwig       2022-02-16  423  
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  424) 		folio = page_folio(page);
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  425) 		remove_migration_ptes(folio, folio, false);
76cbbead253ddc Christoph Hellwig       2022-02-16  426  
241f6885965683 Alistair Popple         2022-09-28  427  		src_pfns[i] = 0;
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  428) 		folio_unlock(folio);
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  429) 		folio_put(folio);
76cbbead253ddc Christoph Hellwig       2022-02-16  430  		restore--;
76cbbead253ddc Christoph Hellwig       2022-02-16  431  	}
241f6885965683 Alistair Popple         2022-09-28  432  
241f6885965683 Alistair Popple         2022-09-28  433  	return unmapped;
241f6885965683 Alistair Popple         2022-09-28  434  }
241f6885965683 Alistair Popple         2022-09-28  435  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 4/6] mm: compaction: use isolate_movable_folio() in isolate_migratepages_block()
  2024-03-27 18:49   ` Vishal Moola
@ 2024-03-28 12:49     ` Kefeng Wang
  0 siblings, 0 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-28 12:49 UTC (permalink / raw)
  To: Vishal Moola
  Cc: Andrew Morton, willy, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Zi Yan, Hugh Dickins,
	Jonathan Corbet, linux-mm, linux-doc, Baolin Wang



On 2024/3/28 2:49, Vishal Moola wrote:
> On Wed, Mar 27, 2024 at 10:10:32PM +0800, Kefeng Wang wrote:
>> This moves folio_get_nontail_page() before the non-lru movable pages check
>> and directly calls isolate_movable_folio() to save compound_head() calls.
>> Since the reference count of the non-lru movable page is increased, a
>> folio_put() is needed whether the folio is isolated or not.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   mm/compaction.c | 30 +++++++++++++++---------------
>>   1 file changed, 15 insertions(+), 15 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 807b58e6eb68..74ac65daaed1 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1097,41 +1097,41 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>   			}
>>   		}
>>   
>> +		/*
>> +		 * Be careful not to clear PageLRU until after we're
>> +		 * sure the page is not being freed elsewhere -- the
>> +		 * page release code relies on it.
>> +		 */
>> +		folio = folio_get_nontail_page(page);
>> +		if (unlikely(!folio))
>> +			goto isolate_fail;
>> +
> 
> If you wanted to move this, I think this should be part of your first
> patch (or prior to it). It would make your first patch more sensible as

ok, will re-order the patches.

> is. You could then also consider making isolate_movable_folio() more similar
> to folio_isolate_lru() if you really wanted to.

Maybe just rename it to folio_isolate_movable() with no other changes for now.
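
i.e. something along these lines (folio_isolate_movable() is only a
naming proposal in this thread, not an existing function):

bool folio_isolate_lru(struct folio *folio);
bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode);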

Thanks.

> 
>>   		/*
>>   		 * Check may be lockless but that's ok as we recheck later.
>>   		 * It's possible to migrate LRU and non-lru movable pages.
>>   		 * Skip any other type of page
>>   		 */
>> -		if (!PageLRU(page)) {
>> +		if (!folio_test_lru(folio)) {
>>   			/*
>>   			 * __PageMovable can return false positive so we need
>>   			 * to verify it under page_lock.
>>   			 */
>> -			if (unlikely(__PageMovable(page)) &&
>> -					!PageIsolated(page)) {
>> +			if (unlikely(__folio_test_movable(folio)) &&
>> +					!folio_test_isolated(folio)) {
>>   				if (locked) {
>>   					unlock_page_lruvec_irqrestore(locked, flags);
>>   					locked = NULL;
>>   				}
>>   
>> -				if (isolate_movable_page(page, mode)) {
>> -					folio = page_folio(page);
>> +				if (isolate_movable_folio(folio, mode)) {
>> +					folio_put(folio);
>>   					goto isolate_success;
>>   				}
>>   			}
>>   
>> -			goto isolate_fail;
>> +			goto isolate_fail_put;
>>   		}
>>   
>> -		/*
>> -		 * Be careful not to clear PageLRU until after we're
>> -		 * sure the page is not being freed elsewhere -- the
>> -		 * page release code relies on it.
>> -		 */
>> -		folio = folio_get_nontail_page(page);
>> -		if (unlikely(!folio))
>> -			goto isolate_fail;
>> -
>>   		/*
>>   		 * Migration will fail if an anonymous page is pinned in memory,
>>   		 * so avoid taking lru_lock and isolating it unnecessarily in an
>> -- 
>> 2.27.0
>>
>>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 3/6] mm: remove isolate_lru_page()
  2024-03-28 12:22   ` kernel test robot
@ 2024-03-28 12:56     ` Kefeng Wang
  0 siblings, 0 replies; 24+ messages in thread
From: Kefeng Wang @ 2024-03-28 12:56 UTC (permalink / raw)
  To: kernel test robot, Andrew Morton
  Cc: llvm, oe-kbuild-all, Linux Memory Management List, willy,
	Miaohe Lin, Naoya Horiguchi, David Hildenbrand, Oscar Salvador,
	Zi Yan, Hugh Dickins, Jonathan Corbet, linux-doc, Baolin Wang



On 2024/3/28 20:22, kernel test robot wrote:
> Hi Kefeng,
> 
> kernel test robot noticed the following build errors:
> 
> [auto build test ERROR on linus/master]
> [also build test ERROR on v6.9-rc1]
> [cannot apply to akpm-mm/mm-everything next-20240328]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-migrate-add-isolate_movable_folio/20240327-221513
> base:   linus/master
> patch link:    https://lore.kernel.org/r/20240327141034.3712697-4-wangkefeng.wang%40huawei.com
> patch subject: [PATCH 3/6] mm: remove isolate_lru_page()
> config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/config)
> compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/reproduce)
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202403282057.pIA3kJoz-lkp@intel.com/
> 
> All errors (new ones prefixed by >>):
This was changed locally, but I missed it when rebasing onto the new branch; will fix.
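
The missing conversion in migrate_device_unmap() would presumably look
something like this (a sketch only; the actual fixup was not posted in
this thread):

			if (!folio_isolate_lru(page_folio(page))) {
				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
				restore++;
				continue;
			}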
> 
>>> mm/migrate_device.c:388:9: error: call to undeclared function 'isolate_lru_page'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
>       388 |                         if (!isolate_lru_page(page)) {
>           |                              ^
>     mm/migrate_device.c:388:9: note: did you mean '__isolate_free_page'?
>     mm/internal.h:487:12: note: '__isolate_free_page' declared here
>       487 | extern int __isolate_free_page(struct page *page, unsigned int order);
>           |            ^
>     1 error generated.
> 
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 3/6] mm: remove isolate_lru_page()
  2024-03-27 14:10 ` [PATCH 3/6] mm: remove isolate_lru_page() Kefeng Wang
  2024-03-28 12:22   ` kernel test robot
@ 2024-03-28 15:33   ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2024-03-28 15:33 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, willy, Miaohe Lin,
	Naoya Horiguchi, David Hildenbrand, Oscar Salvador, Zi Yan,
	Hugh Dickins, Jonathan Corbet, linux-doc, Baolin Wang,
	Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v6.9-rc1]
[cannot apply to akpm-mm/mm-everything next-20240328]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-migrate-add-isolate_movable_folio/20240327-221513
base:   linus/master
patch link:    https://lore.kernel.org/r/20240327141034.3712697-4-wangkefeng.wang%40huawei.com
patch subject: [PATCH 3/6] mm: remove isolate_lru_page()
config: x86_64-randconfig-013-20240328 (https://download.01.org/0day-ci/archive/20240328/202403282357.bFSsmYuH-lkp@intel.com/config)
compiler: gcc-10 (Ubuntu 10.5.0-1ubuntu1) 10.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403282357.bFSsmYuH-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403282357.bFSsmYuH-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/migrate_device.c: In function 'migrate_device_unmap':
>> mm/migrate_device.c:388:9: error: implicit declaration of function 'isolate_lru_page' [-Werror=implicit-function-declaration]
     388 |    if (!isolate_lru_page(page)) {
         |         ^~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +/isolate_lru_page +388 mm/migrate_device.c

76cbbead253ddc Christoph Hellwig       2022-02-16  355  
76cbbead253ddc Christoph Hellwig       2022-02-16  356  /*
44af0b45d58d7b Alistair Popple         2022-11-11  357   * Unmaps pages for migration. Returns number of source pfns marked as
44af0b45d58d7b Alistair Popple         2022-11-11  358   * migrating.
76cbbead253ddc Christoph Hellwig       2022-02-16  359   */
241f6885965683 Alistair Popple         2022-09-28  360  static unsigned long migrate_device_unmap(unsigned long *src_pfns,
241f6885965683 Alistair Popple         2022-09-28  361  					  unsigned long npages,
241f6885965683 Alistair Popple         2022-09-28  362  					  struct page *fault_page)
76cbbead253ddc Christoph Hellwig       2022-02-16  363  {
76cbbead253ddc Christoph Hellwig       2022-02-16  364  	unsigned long i, restore = 0;
76cbbead253ddc Christoph Hellwig       2022-02-16  365  	bool allow_drain = true;
241f6885965683 Alistair Popple         2022-09-28  366  	unsigned long unmapped = 0;
76cbbead253ddc Christoph Hellwig       2022-02-16  367  
76cbbead253ddc Christoph Hellwig       2022-02-16  368  	lru_add_drain();
76cbbead253ddc Christoph Hellwig       2022-02-16  369  
76cbbead253ddc Christoph Hellwig       2022-02-16  370  	for (i = 0; i < npages; i++) {
241f6885965683 Alistair Popple         2022-09-28  371  		struct page *page = migrate_pfn_to_page(src_pfns[i]);
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  372) 		struct folio *folio;
76cbbead253ddc Christoph Hellwig       2022-02-16  373  
44af0b45d58d7b Alistair Popple         2022-11-11  374  		if (!page) {
44af0b45d58d7b Alistair Popple         2022-11-11  375  			if (src_pfns[i] & MIGRATE_PFN_MIGRATE)
44af0b45d58d7b Alistair Popple         2022-11-11  376  				unmapped++;
76cbbead253ddc Christoph Hellwig       2022-02-16  377  			continue;
44af0b45d58d7b Alistair Popple         2022-11-11  378  		}
76cbbead253ddc Christoph Hellwig       2022-02-16  379  
76cbbead253ddc Christoph Hellwig       2022-02-16  380  		/* ZONE_DEVICE pages are not on LRU */
76cbbead253ddc Christoph Hellwig       2022-02-16  381  		if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig       2022-02-16  382  			if (!PageLRU(page) && allow_drain) {
1fec6890bf2247 Matthew Wilcox (Oracle  2023-06-21  383) 				/* Drain CPU's lru cache */
76cbbead253ddc Christoph Hellwig       2022-02-16  384  				lru_add_drain_all();
76cbbead253ddc Christoph Hellwig       2022-02-16  385  				allow_drain = false;
76cbbead253ddc Christoph Hellwig       2022-02-16  386  			}
76cbbead253ddc Christoph Hellwig       2022-02-16  387  
f7f9c00dfafffd Baolin Wang             2023-02-15 @388  			if (!isolate_lru_page(page)) {
241f6885965683 Alistair Popple         2022-09-28  389  				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig       2022-02-16  390  				restore++;
76cbbead253ddc Christoph Hellwig       2022-02-16  391  				continue;
76cbbead253ddc Christoph Hellwig       2022-02-16  392  			}
76cbbead253ddc Christoph Hellwig       2022-02-16  393  
76cbbead253ddc Christoph Hellwig       2022-02-16  394  			/* Drop the reference we took in collect */
76cbbead253ddc Christoph Hellwig       2022-02-16  395  			put_page(page);
76cbbead253ddc Christoph Hellwig       2022-02-16  396  		}
76cbbead253ddc Christoph Hellwig       2022-02-16  397  
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  398) 		folio = page_folio(page);
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  399) 		if (folio_mapped(folio))
4b8554c527f3cf Matthew Wilcox (Oracle  2022-01-28  400) 			try_to_migrate(folio, 0);
76cbbead253ddc Christoph Hellwig       2022-02-16  401  
16ce101db85db6 Alistair Popple         2022-09-28  402  		if (page_mapped(page) ||
241f6885965683 Alistair Popple         2022-09-28  403  		    !migrate_vma_check_page(page, fault_page)) {
76cbbead253ddc Christoph Hellwig       2022-02-16  404  			if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig       2022-02-16  405  				get_page(page);
76cbbead253ddc Christoph Hellwig       2022-02-16  406  				putback_lru_page(page);
76cbbead253ddc Christoph Hellwig       2022-02-16  407  			}
76cbbead253ddc Christoph Hellwig       2022-02-16  408  
241f6885965683 Alistair Popple         2022-09-28  409  			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig       2022-02-16  410  			restore++;
76cbbead253ddc Christoph Hellwig       2022-02-16  411  			continue;
76cbbead253ddc Christoph Hellwig       2022-02-16  412  		}
241f6885965683 Alistair Popple         2022-09-28  413  
241f6885965683 Alistair Popple         2022-09-28  414  		unmapped++;
76cbbead253ddc Christoph Hellwig       2022-02-16  415  	}
76cbbead253ddc Christoph Hellwig       2022-02-16  416  
76cbbead253ddc Christoph Hellwig       2022-02-16  417  	for (i = 0; i < npages && restore; i++) {
241f6885965683 Alistair Popple         2022-09-28  418  		struct page *page = migrate_pfn_to_page(src_pfns[i]);
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  419) 		struct folio *folio;
76cbbead253ddc Christoph Hellwig       2022-02-16  420  
241f6885965683 Alistair Popple         2022-09-28  421  		if (!page || (src_pfns[i] & MIGRATE_PFN_MIGRATE))
76cbbead253ddc Christoph Hellwig       2022-02-16  422  			continue;
76cbbead253ddc Christoph Hellwig       2022-02-16  423  
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  424) 		folio = page_folio(page);
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  425) 		remove_migration_ptes(folio, folio, false);
76cbbead253ddc Christoph Hellwig       2022-02-16  426  
241f6885965683 Alistair Popple         2022-09-28  427  		src_pfns[i] = 0;
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  428) 		folio_unlock(folio);
4eecb8b9163df8 Matthew Wilcox (Oracle  2022-01-28  429) 		folio_put(folio);
76cbbead253ddc Christoph Hellwig       2022-02-16  430  		restore--;
76cbbead253ddc Christoph Hellwig       2022-02-16  431  	}
241f6885965683 Alistair Popple         2022-09-28  432  
241f6885965683 Alistair Popple         2022-09-28  433  	return unmapped;
241f6885965683 Alistair Popple         2022-09-28  434  }
241f6885965683 Alistair Popple         2022-09-28  435  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio()
  2024-03-27 14:10 ` [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio() Kefeng Wang
  2024-03-27 15:12   ` Zi Yan
@ 2024-03-28 16:57   ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2024-03-28 16:57 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, willy, Miaohe Lin,
	Naoya Horiguchi, David Hildenbrand, Oscar Salvador, Zi Yan,
	Hugh Dickins, Jonathan Corbet, linux-doc, Baolin Wang,
	Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v6.9-rc1]
[cannot apply to akpm-mm/mm-everything next-20240328]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-migrate-add-isolate_movable_folio/20240327-221513
base:   linus/master
patch link:    https://lore.kernel.org/r/20240327141034.3712697-6-wangkefeng.wang%40huawei.com
patch subject: [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio()
config: parisc-randconfig-r051-20240328 (https://download.01.org/0day-ci/archive/20240329/202403290000.hSRD3CAB-lkp@intel.com/config)
compiler: hppa-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240329/202403290000.hSRD3CAB-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403290000.hSRD3CAB-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/memory-failure.c: In function 'mf_isolate_folio':
>> mm/memory-failure.c:2631:58: error: passing argument 1 of 'isolate_movable_folio' from incompatible pointer type [-Werror=incompatible-pointer-types]
    2631 |                         isolated = isolate_movable_folio(folio,
         |                                                          ^~~~~
         |                                                          |
         |                                                          struct folio *
   In file included from mm/memory-failure.c:51:
   include/linux/migrate.h:98:55: note: expected 'struct page *' but argument is of type 'struct folio *'
      98 | static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
         |                                          ~~~~~~~~~~~~~^~~~
   cc1: some warnings being treated as errors


vim +/isolate_movable_folio +2631 mm/memory-failure.c

  2618	
  2619	static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
  2620	{
  2621		bool isolated = false;
  2622	
  2623		if (folio_test_hugetlb(folio)) {
  2624			isolated = isolate_hugetlb(folio, pagelist);
  2625		} else {
  2626			bool lru = !__folio_test_movable(folio);
  2627	
  2628			if (lru)
  2629				isolated = folio_isolate_lru(folio);
  2630			else
> 2631				isolated = isolate_movable_folio(folio,
  2632								 ISOLATE_UNEVICTABLE);
  2633	
  2634			if (isolated) {
  2635				list_add(&folio->lru, pagelist);
  2636				if (lru)
  2637					node_stat_add_folio(folio, NR_ISOLATED_ANON +
  2638							    folio_is_file_lru(folio));
  2639			}
  2640		}
  2641	
  2642		/*
  2643		 * If we succeed to isolate the folio, we grabbed another refcount on
  2644		 * the folio, so we can safely drop the one we got from get_any_page().
  2645		 * If we failed to isolate the folio, it means that we cannot go further
  2646		 * and we will return an error, so drop the reference we got from
  2647		 * get_any_page() as well.
  2648		 */
  2649		folio_put(folio);
  2650		return isolated;
  2651	}
  2652	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2024-03-28 16:58 UTC | newest]

Thread overview: 24+ messages
2024-03-27 14:10 [PATCH 0/6] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
2024-03-27 14:10 ` [PATCH 1/6] mm: migrate: add isolate_movable_folio() Kefeng Wang
2024-03-27 14:29   ` Zi Yan
2024-03-27 14:36     ` Kefeng Wang
2024-03-27 18:59   ` Vishal Moola
2024-03-28  5:08     ` Kefeng Wang
2024-03-27 14:10 ` [PATCH 2/6] mm: memory_hotplug: use more folio in do_migrate_range() Kefeng Wang
2024-03-27 14:45   ` Zi Yan
2024-03-27 14:54     ` Matthew Wilcox
2024-03-27 15:10       ` Zi Yan
2024-03-27 15:58         ` Matthew Wilcox
2024-03-28  5:30           ` Kefeng Wang
2024-03-28  5:06         ` Kefeng Wang
2024-03-27 14:10 ` [PATCH 3/6] mm: remove isolate_lru_page() Kefeng Wang
2024-03-28 12:22   ` kernel test robot
2024-03-28 12:56     ` Kefeng Wang
2024-03-28 15:33   ` kernel test robot
2024-03-27 14:10 ` [PATCH 4/6] mm: compaction: use isolate_movable_folio() in isolate_migratepages_block() Kefeng Wang
2024-03-27 18:49   ` Vishal Moola
2024-03-28 12:49     ` Kefeng Wang
2024-03-27 14:10 ` [PATCH 5/6] mm: memory-failure: use isolate_movable_folio() in mf_isolate_folio() Kefeng Wang
2024-03-27 15:12   ` Zi Yan
2024-03-28 16:57   ` kernel test robot
2024-03-27 14:10 ` [PATCH 6/6] mm: migrate: remove isolate_movable_page() Kefeng Wang
