linux-mm.kvack.org archive mirror
* [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page()
@ 2024-04-25  8:40 Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
                   ` (9 more replies)
  0 siblings, 10 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Rework to unify Huge/LRU/non-LRU folio isolation in do_migrate_range()
and isolate_migratepages_block(), then convert isolate_movable_page() and
isolate_lru_page() to folio_isolate_movable() and folio_isolate_lru(), and
finally remove isolate_movable_page() and isolate_lru_page().
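
As an overview, the conversion pattern applied throughout the series looks
roughly like this (the caller shown is illustrative, not a specific call
site):

	/* Before: page-based isolation */
	if (PageLRU(page))
		isolated = isolate_lru_page(page);
	else
		isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);

	/* After: folio-based isolation, the folio already referenced */
	if (folio_test_lru(folio))
		isolated = folio_isolate_lru(folio);
	else
		isolated = folio_isolate_movable(folio, ISOLATE_UNEVICTABLE);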

v2:
- rename isolate_movable_folio() to folio_isolate_movable() and address
  the issues with it, per Zi Yan and Vishal Moola
- check for hwpoisoned pages first in do_migrate_range(), which helps
  to unify hugepage/LRU/non-LRU movable folio isolation, suggested by
  Zi Yan
- re-order the patches, per Vishal Moola.

Kefeng Wang (10):
  mm: memory_hotplug: check hwpoisoned page firstly in
    do_migrate_range()
  mm: add isolate_folio_to_list()
  mm: memory_hotplug: unify Huge/LRU/non-LRU folio isolation
  mm: compaction: try get reference before non-lru movable folio
    migration
  mm: migrate: add folio_isolate_movable()
  mm: compaction: use folio_isolate_movable()
  mm: migrate: use folio_isolate_movable()
  mm: migrate: remove isolate_movable_page()
  mm: migrate_device: use more folio in migrate_device_unmap()
  mm: remove isolate_lru_page()

 Documentation/mm/page_migration.rst           |  6 +-
 Documentation/mm/unevictable-lru.rst          |  2 +-
 .../translations/zh_CN/mm/page_migration.rst  |  6 +-
 include/linux/migrate.h                       |  5 +-
 mm/compaction.c                               | 38 ++++----
 mm/filemap.c                                  |  2 +-
 mm/folio-compat.c                             |  7 --
 mm/internal.h                                 |  3 +-
 mm/khugepaged.c                               |  6 +-
 mm/memory-failure.c                           | 21 +---
 mm/memory_hotplug.c                           | 95 +++++++++++--------
 mm/migrate.c                                  | 60 ++++++++----
 mm/migrate_device.c                           | 18 ++--
 mm/swap.c                                     |  2 +-
 14 files changed, 145 insertions(+), 126 deletions(-)

-- 
2.27.0



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-27  3:57   ` kernel test robot
                     ` (2 more replies)
  2024-04-25  8:40 ` [PATCH v2 02/10] mm: add isolate_folio_to_list() Kefeng Wang
                   ` (8 subsequent siblings)
  9 siblings, 3 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Commit b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned
pages to be offlined") doesn't handle hugetlb pages, so the dead loop
still occurs when offlining a hwpoisoned hugetlb page. Luckily, with
commit e591ef7d96d6 ("mm,hwpoison,hugetlb,memory_hotplug: hotremove
memory section with hwpoisoned hugepage"), the HPageMigratable flag of
a hwpoisoned hugetlb page is cleared and the page is skipped in
scan_movable_pages(), so the dead loop issue is fixed.

However, if the HPageMigratable() check passes (it is made without a
reference or lock held), the hugetlb page may still become hwpoisoned.
This doesn't cause a problem, since the hwpoisoned page is handled
correctly in the next scan_movable_pages() loop: it is isolated in
do_migrate_range() but then fails to migrate.

In order to avoid this pointless isolation and to unify all hwpoisoned
page handling, let's check for hwpoison first unconditionally. If it is
a hwpoisoned hugetlb page, try to unmap it as the catch-all safety net,
just like for a normal page, and warn when the folio is still mapped
afterwards.
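
The resulting order of checks in do_migrate_range() is then, in sketch
form (see the diff below for the details):

	if (PageHWPoison(page)) {	/* new: handled first, for any folio type */
		isolate_and_unmap_hwposion_folio(folio);
		continue;
	}
	if (PageHuge(page)) {		/* hugetlb isolation, as before */
		isolate_hugetlb(folio, &source);
		continue;
	}
	/* then: grab a reference and isolate the LRU/non-LRU movable folio */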

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory_hotplug.c | 62 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 48 insertions(+), 14 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 431b1f6753c0..1985caf73e5a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1772,6 +1772,35 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 	return 0;
 }
 
+static bool isolate_and_unmap_hwposion_folio(struct folio *folio)
+{
+	if (WARN_ON(folio_test_lru(folio)))
+		folio_isolate_lru(folio);
+
+	if (!folio_mapped(folio))
+		return false;
+
+	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
+		struct address_space *mapping;
+
+		mapping = hugetlb_page_mapping_lock_write(&folio->page);
+		if (mapping) {
+			/*
+			 * In shared mappings, try_to_unmap could potentially
+			 * call huge_pmd_unshare.  Because of this, take
+			 * semaphore in write mode here and set TTU_RMAP_LOCKED
+			 * to let lower levels know we have taken the lock.
+			 */
+			try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_RMAP_LOCKED);
+			i_mmap_unlock_write(mapping);
+		}
+	} else {
+		try_to_unmap(folio, TTU_IGNORE_MLOCK);
+	}
+
+	return folio_mapped(folio);
+}
+
 static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
@@ -1790,28 +1819,33 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		folio = page_folio(page);
 		head = &folio->page;
 
-		if (PageHuge(page)) {
-			pfn = page_to_pfn(head) + compound_nr(head) - 1;
-			isolate_hugetlb(folio, &source);
-			continue;
-		} else if (PageTransHuge(page))
-			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
-
 		/*
 		 * HWPoison pages have elevated reference counts so the migration would
 		 * fail on them. It also doesn't make any sense to migrate them in the
 		 * first place. Still try to unmap such a page in case it is still mapped
-		 * (e.g. current hwpoison implementation doesn't unmap KSM pages but keep
-		 * the unmap as the catch all safety net).
+		 * (keep the unmap as the catch all safety net).
 		 */
-		if (PageHWPoison(page)) {
-			if (WARN_ON(folio_test_lru(folio)))
-				folio_isolate_lru(folio);
-			if (folio_mapped(folio))
-				try_to_unmap(folio, TTU_IGNORE_MLOCK);
+		if (unlikely(PageHWPoison(page))) {
+			folio = page_folio(page);
+			if (isolate_and_unmap_hwposion_folio(folio)) {
+				if (__ratelimit(&migrate_rs)) {
+					pr_warn("%#lx: failed to unmap hwpoison folio\n",
+						pfn);
+				}
+			}
+
+			if (folio_test_large(folio))
+				pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 			continue;
 		}
 
+		if (PageHuge(page)) {
+			pfn = page_to_pfn(head) + compound_nr(head) - 1;
+			isolate_hugetlb(folio, &source);
+			continue;
+		} else if (PageTransHuge(page))
+			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
+
 		if (!get_page_unless_zero(page))
 			continue;
 		/*
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 02/10] mm: add isolate_folio_to_list()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 03/10] mm: memory_hotplug: unify Huge/LRU/non-LRU folio isolation Kefeng Wang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Add isolate_folio_to_list() to try to isolate hugetlb, non-LRU movable
and LRU folios to a list, which will be reused by do_migrate_range()
from memory hotplug soon.
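
For illustration, a caller is expected to look roughly like this (a
sketch, not a call site added by this patch):

	LIST_HEAD(pagelist);

	/* The caller must already hold a reference on the folio. */
	if (isolate_folio_to_list(folio, &pagelist)) {
		/* The folio is on @pagelist with an extra refcount taken
		 * by the isolation and can be fed to migrate_pages(). */
	} else {
		/* Isolation failed; the folio is left where it was. */
	}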

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/internal.h       |  2 ++
 mm/memory-failure.c | 21 +--------------------
 mm/migrate.c        | 27 +++++++++++++++++++++++++++
 3 files changed, 30 insertions(+), 20 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 2adc3f616b71..e3968061010b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -378,6 +378,8 @@ extern unsigned long highest_memmap_pfn;
  */
 #define MAX_RECLAIM_RETRIES 16
 
+bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
+
 /*
  * in mm/vmscan.c:
  */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 68e1fe1c0b72..793c1cf02bd9 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2620,26 +2620,7 @@ EXPORT_SYMBOL(unpoison_memory);
 
 static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
 {
-	bool isolated = false;
-
-	if (folio_test_hugetlb(folio)) {
-		isolated = isolate_hugetlb(folio, pagelist);
-	} else {
-		bool lru = !__folio_test_movable(folio);
-
-		if (lru)
-			isolated = folio_isolate_lru(folio);
-		else
-			isolated = isolate_movable_page(&folio->page,
-							ISOLATE_UNEVICTABLE);
-
-		if (isolated) {
-			list_add(&folio->lru, pagelist);
-			if (lru)
-				node_stat_add_folio(folio, NR_ISOLATED_ANON +
-						    folio_is_file_lru(folio));
-		}
-	}
+	bool isolated = isolate_folio_to_list(folio, pagelist);
 
 	/*
 	 * If we succeed to isolate the folio, we grabbed another refcount on
diff --git a/mm/migrate.c b/mm/migrate.c
index c7692f303fa7..788747dd5225 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -177,6 +177,33 @@ void putback_movable_pages(struct list_head *l)
 	}
 }
 
+/* Must be called with an elevated refcount on the folio */
+bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
+{
+	bool isolated = false;
+
+	if (folio_test_hugetlb(folio)) {
+		isolated = isolate_hugetlb(folio, list);
+	} else {
+		bool lru = !__folio_test_movable(folio);
+
+		if (lru)
+			isolated = folio_isolate_lru(folio);
+		else
+			isolated = isolate_movable_page(&folio->page,
+							ISOLATE_UNEVICTABLE);
+
+		if (isolated) {
+			list_add(&folio->lru, list);
+			if (lru)
+				node_stat_add_folio(folio, NR_ISOLATED_ANON +
+						    folio_is_file_lru(folio));
+		}
+	}
+
+	return isolated;
+}
+
 /*
  * Restore a potential migration pte to a working pte entry
  */
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 03/10] mm: memory_hotplug: unify Huge/LRU/non-LRU folio isolation
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 02/10] mm: add isolate_folio_to_list() Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 04/10] mm: compaction: try get reference before non-lru movable folio migration Kefeng Wang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Move isolate_hugetlb() to after grabbing a reference, and use
isolate_folio_to_list() to unify hugetlb/LRU/non-LRU folio isolation,
which cleans up the code a bit and saves a few calls to
compound_head().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory_hotplug.c | 45 ++++++++++++++-------------------------------
 1 file changed, 14 insertions(+), 31 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1985caf73e5a..0e40ee39aa88 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1804,20 +1804,17 @@ static bool isolate_and_unmap_hwposion_folio(struct folio *folio)
 static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
-	struct page *page, *head;
 	LIST_HEAD(source);
+	struct folio *folio;
 	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
-		struct folio *folio;
-		bool isolated;
+		struct page *page;
 
 		if (!pfn_valid(pfn))
 			continue;
 		page = pfn_to_page(pfn);
-		folio = page_folio(page);
-		head = &folio->page;
 
 		/*
 		 * HWPoison pages have elevated reference counts so the migration would
@@ -1839,36 +1836,21 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			continue;
 		}
 
-		if (PageHuge(page)) {
-			pfn = page_to_pfn(head) + compound_nr(head) - 1;
-			isolate_hugetlb(folio, &source);
+		folio = folio_get_nontail_page(page);
+		if (!folio)
 			continue;
-		} else if (PageTransHuge(page))
-			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
 
-		if (!get_page_unless_zero(page))
-			continue;
-		/*
-		 * We can skip free pages. And we can deal with pages on
-		 * LRU and non-lru movable pages.
-		 */
-		if (PageLRU(page))
-			isolated = isolate_lru_page(page);
-		else
-			isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
-		if (isolated) {
-			list_add_tail(&page->lru, &source);
-			if (!__PageMovable(page))
-				inc_node_page_state(page, NR_ISOLATED_ANON +
-						    page_is_file_lru(page));
+		if (folio_test_large(folio))
+			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
-		} else {
+		/* Skip free folios, deal with hugetlb, LRU and non-lru movable folios. */
+		if (!isolate_folio_to_list(folio, &source)) {
 			if (__ratelimit(&migrate_rs)) {
 				pr_warn("failed to isolate pfn %lx\n", pfn);
 				dump_page(page, "isolation failed");
 			}
 		}
-		put_page(page);
+		folio_put(folio);
 	}
 	if (!list_empty(&source)) {
 		nodemask_t nmask = node_states[N_MEMORY];
@@ -1883,7 +1865,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		 * We have checked that migration range is on a single zone so
 		 * we can use the nid of the first page to all the others.
 		 */
-		mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
+		mtc.nid = folio_nid(list_first_entry(&source, struct folio, lru));
 
 		/*
 		 * try to allocate from a different node but reuse this node
@@ -1896,11 +1878,12 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		ret = migrate_pages(&source, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG, NULL);
 		if (ret) {
-			list_for_each_entry(page, &source, lru) {
+			list_for_each_entry(folio, &source, lru) {
 				if (__ratelimit(&migrate_rs)) {
 					pr_warn("migrating pfn %lx failed ret:%d\n",
-						page_to_pfn(page), ret);
-					dump_page(page, "migration failure");
+						folio_pfn(folio), ret);
+					dump_page(&folio->page,
+						  "migration failure");
 				}
 			}
 			putback_movable_pages(&source);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 04/10] mm: compaction: try get reference before non-lru movable folio migration
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (2 preceding siblings ...)
  2024-04-25  8:40 ` [PATCH v2 03/10] mm: memory_hotplug: unify Huge/LRU/non-LRU folio isolation Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 05/10] mm: migrate: add folio_isolate_movable() Kefeng Wang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Non-LRU movable folio migration will fail if folio_get_nontail_page()
fails (see isolate_movable_page()), and the same is true for LRU folios,
so folio_get_nontail_page() can be called first to unify the handling of
non-LRU movable and LRU folio migration a bit. This also prepares for
converting isolate_movable_page() to take a folio. Since the reference
count of the non-LRU movable folio is increased, a folio_put() is needed
whether the folio is isolated or not.
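
The reference flow in isolate_migratepages_block() then becomes, in
simplified sketch form (locking and the PageIsolated check omitted):

	folio = folio_get_nontail_page(page);	/* reference taken up front */
	if (unlikely(!folio))
		goto isolate_fail;

	if (!folio_test_lru(folio)) {
		if (__folio_test_movable(folio) &&
		    isolate_movable_page(&folio->page, mode)) {
			folio_put(folio);	/* isolation holds its own ref */
			goto isolate_success;
		}
		goto isolate_fail_put;		/* drops the reference as well */
	}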

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/compaction.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..4fc5a19b06ad 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1097,41 +1097,41 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			}
 		}
 
+		/*
+		 * Be careful not to clear LRU flag until after we're
+		 * sure the folio is not being freed elsewhere -- the
+		 * folio release code relies on it.
+		 */
+		folio = folio_get_nontail_page(page);
+		if (unlikely(!folio))
+			goto isolate_fail;
+
 		/*
 		 * Check may be lockless but that's ok as we recheck later.
-		 * It's possible to migrate LRU and non-lru movable pages.
-		 * Skip any other type of page
+		 * It's possible to migrate LRU and non-lru movable folios.
+		 * Skip any other type of folio
 		 */
-		if (!PageLRU(page)) {
+		if (!folio_test_lru(folio)) {
 			/*
-			 * __PageMovable can return false positive so we need
-			 * to verify it under page_lock.
+			 * __folio_test_movable can return false positive so
+			 * we need to verify it under folio lock.
 			 */
-			if (unlikely(__PageMovable(page)) &&
-					!PageIsolated(page)) {
+			if (unlikely(__folio_test_movable(folio)) &&
+					!folio_test_isolated(folio)) {
 				if (locked) {
 					unlock_page_lruvec_irqrestore(locked, flags);
 					locked = NULL;
 				}
 
-				if (isolate_movable_page(page, mode)) {
-					folio = page_folio(page);
+				if (isolate_movable_page(&folio->page, mode)) {
+					folio_put(folio);
 					goto isolate_success;
 				}
 			}
 
-			goto isolate_fail;
+			goto isolate_fail_put;
 		}
 
-		/*
-		 * Be careful not to clear PageLRU until after we're
-		 * sure the page is not being freed elsewhere -- the
-		 * page release code relies on it.
-		 */
-		folio = folio_get_nontail_page(page);
-		if (unlikely(!folio))
-			goto isolate_fail;
-
 		/*
 		 * Migration will fail if an anonymous page is pinned in memory,
 		 * so avoid taking lru_lock and isolating it unnecessarily in an
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 05/10] mm: migrate: add folio_isolate_movable()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (3 preceding siblings ...)
  2024-04-25  8:40 ` [PATCH v2 04/10] mm: compaction: try get reference before non-lru movable folio migration Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 06/10] mm: compaction: use folio_isolate_movable() Kefeng Wang
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Like isolate_lru_page(), make isolate_movable_page() a wrapper around
folio_isolate_movable(). Since isolate_movable_page() always fails on a
tail page, return immediately for a tail page in the wrapper; a non-tail
page is always the head of its folio, so the cast in the wrapper is safe.
The wrapper will be removed once all callers are converted to
folio_isolate_movable().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  4 ++++
 mm/migrate.c            | 41 ++++++++++++++++++++++++-----------------
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 2ce13e8a309b..4f1bad4379d3 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -72,6 +72,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
@@ -94,6 +95,9 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool folio_isolate_movable(struct folio *folio,
+					 isolate_mode_t mode)
+	{ return false; }
 
 static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 				  struct folio *dst, struct folio *src)
diff --git a/mm/migrate.c b/mm/migrate.c
index 788747dd5225..8041a6acaf01 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,21 +57,20 @@
 
 #include "internal.h"
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode)
 {
-	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
 
 	/*
-	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * Avoid burning cycles with folios that are yet under __free_pages(),
 	 * or just got freed under us.
 	 *
-	 * In case we 'win' a race for a movable page being freed under us and
+	 * In case we 'win' a race for a movable folio being freed under us and
 	 * raise its refcount preventing __free_pages() from doing its job
-	 * the put_page() at the end of this block will take care of
-	 * release this page, thus avoiding a nasty leakage.
+	 * the folio_put() at the end of this block will take care of
+	 * releasing this folio, thus avoiding a nasty leakage.
 	 */
-	if (!folio)
+	if (!folio_try_get(folio))
 		goto out;
 
 	if (unlikely(folio_test_slab(folio)))
@@ -79,9 +78,9 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
 	smp_rmb();
 	/*
-	 * Check movable flag before taking the page lock because
-	 * we use non-atomic bitops on newly allocated page flags so
-	 * unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the folio lock because
+	 * we use non-atomic bitops on newly allocated folio flags so
+	 * unconditionally grabbing the lock ruins folio's owner side.
 	 */
 	if (unlikely(!__folio_test_movable(folio)))
 		goto out_putfolio;
@@ -91,15 +90,15 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 		goto out_putfolio;
 
 	/*
-	 * As movable pages are not isolated from LRU lists, concurrent
-	 * compaction threads can race against page migration functions
-	 * as well as race against the releasing a page.
+	 * As movable folios are not isolated from LRU lists, concurrent
+	 * compaction threads can race against folio migration functions
+	 * as well as race against the release of a folio.
 	 *
-	 * In order to avoid having an already isolated movable page
+	 * In order to avoid having an already isolated movable folio
 	 * being (wrongly) re-isolated while it is under migration,
-	 * or to avoid attempting to isolate pages being released,
-	 * lets be sure we have the page lock
-	 * before proceeding with the movable page isolation steps.
+	 * or to avoid attempting to isolate folios being released,
+	 * let's be sure we have the folio lock
+	 * before proceeding with the movable folio isolation steps.
 	 */
 	if (unlikely(!folio_trylock(folio)))
 		goto out_putfolio;
@@ -128,6 +127,14 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	return false;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	if (PageTail(page))
+		return false;
+
+	return folio_isolate_movable((struct folio *)page, mode);
+}
+
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 06/10] mm: compaction: use folio_isolate_movable()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (4 preceding siblings ...)
  2024-04-25  8:40 ` [PATCH v2 05/10] mm: migrate: add folio_isolate_movable() Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 07/10] mm: migrate: " Kefeng Wang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Directly use the folio_isolate_movable() helper in isolate_migratepages_block().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/compaction.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 4fc5a19b06ad..65a98968f681 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1123,7 +1123,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 					locked = NULL;
 				}
 
-				if (isolate_movable_page(&folio->page, mode)) {
+				if (folio_isolate_movable(folio, mode)) {
 					folio_put(folio);
 					goto isolate_success;
 				}
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 07/10] mm: migrate: use folio_isolate_movable()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (5 preceding siblings ...)
  2024-04-25  8:40 ` [PATCH v2 06/10] mm: compaction: use folio_isolate_movable() Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 08/10] mm: migrate: remove isolate_movable_page() Kefeng Wang
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

Directly use the folio_isolate_movable() helper in isolate_folio_to_list().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 8041a6acaf01..3d56604594bb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -197,8 +197,8 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
 		if (lru)
 			isolated = folio_isolate_lru(folio);
 		else
-			isolated = isolate_movable_page(&folio->page,
-							ISOLATE_UNEVICTABLE);
+			isolated = folio_isolate_movable(folio,
+							 ISOLATE_UNEVICTABLE);
 
 		if (isolated) {
 			list_add(&folio->lru, list);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 08/10] mm: migrate: remove isolate_movable_page()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (6 preceding siblings ...)
  2024-04-25  8:40 ` [PATCH v2 07/10] mm: migrate: " Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 09/10] mm: migrate_device: use more folio in migrate_device_unmap() Kefeng Wang
  2024-04-25  8:40 ` [PATCH v2 10/10] mm: remove isolate_lru_page() Kefeng Wang
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

There are no more callers of isolate_movable_page(), so remove it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h | 3 ---
 mm/migrate.c            | 8 --------
 2 files changed, 11 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 4f1bad4379d3..938efa2fd6d7 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -71,7 +71,6 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned long private, enum migrate_mode mode, int reason,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
-bool isolate_movable_page(struct page *page, isolate_mode_t mode);
 bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -93,8 +92,6 @@ static inline int migrate_pages(struct list_head *l, new_folio_t new,
 static inline struct folio *alloc_migration_target(struct folio *src,
 		unsigned long private)
 	{ return NULL; }
-static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
-	{ return false; }
 static inline bool folio_isolate_movable(struct folio *folio,
 					 isolate_mode_t mode)
 	{ return false; }
diff --git a/mm/migrate.c b/mm/migrate.c
index 3d56604594bb..4fbc7ff39da3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -127,14 +127,6 @@ bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode)
 	return false;
 }
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
-{
-	if (PageTail(page))
-		return false;
-
-	return folio_isolate_movable((struct folio *)page, mode);
-}
-
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 09/10] mm: migrate_device: use more folio in migrate_device_unmap()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (7 preceding siblings ...)
  2024-04-25  8:40 ` [PATCH v2 08/10] mm: migrate: remove isolate_movable_page() Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  2024-04-25  9:31   ` David Hildenbrand
  2024-04-25  8:40 ` [PATCH v2 10/10] mm: remove isolate_lru_page() Kefeng Wang
  9 siblings, 1 reply; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

The page for migrate_device_unmap() already has a reference, so it is
safe to convert the page to a folio to save a few calls to compound_head(),
which removes the last isolate_lru_page() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate_device.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index a68616c1965f..423d71ad736a 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -379,33 +379,33 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 			continue;
 		}
 
+		folio = page_folio(page);
 		/* ZONE_DEVICE pages are not on LRU */
-		if (!is_zone_device_page(page)) {
-			if (!PageLRU(page) && allow_drain) {
+		if (!folio_is_zone_device(folio)) {
+			if (!folio_test_lru(folio) && allow_drain) {
 				/* Drain CPU's lru cache */
 				lru_add_drain_all();
 				allow_drain = false;
 			}
 
-			if (!isolate_lru_page(page)) {
+			if (!folio_isolate_lru(folio)) {
 				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 				restore++;
 				continue;
 			}
 
 			/* Drop the reference we took in collect */
-			put_page(page);
+			folio_put(folio);
 		}
 
-		folio = page_folio(page);
 		if (folio_mapped(folio))
 			try_to_migrate(folio, 0);
 
 		if (page_mapped(page) ||
 		    !migrate_vma_check_page(page, fault_page)) {
-			if (!is_zone_device_page(page)) {
-				get_page(page);
-				putback_lru_page(page);
+			if (!folio_is_zone_device(folio)) {
+				folio_get(folio);
+				folio_putback_lru(folio);
 			}
 
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 10/10] mm: remove isolate_lru_page()
  2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
                   ` (8 preceding siblings ...)
  2024-04-25  8:40 ` [PATCH v2 09/10] mm: migrate_device: use more folio in migrate_device_unmap() Kefeng Wang
@ 2024-04-25  8:40 ` Kefeng Wang
  9 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, David Hildenbrand, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Zi Yan, Hugh Dickins, Jonathan Corbet, linux-mm,
	Vishal Moola, Kefeng Wang

There are no more callers of isolate_lru_page(), so remove it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 Documentation/mm/page_migration.rst                    | 6 +++---
 Documentation/mm/unevictable-lru.rst                   | 2 +-
 Documentation/translations/zh_CN/mm/page_migration.rst | 6 +++---
 mm/filemap.c                                           | 2 +-
 mm/folio-compat.c                                      | 7 -------
 mm/internal.h                                          | 1 -
 mm/khugepaged.c                                        | 6 +++---
 mm/migrate_device.c                                    | 2 +-
 mm/swap.c                                              | 2 +-
 9 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index f1ce67a26615..0046bbbdc65d 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -67,8 +67,8 @@ In kernel use of migrate_pages()
 
    Lists of pages to be migrated are generated by scanning over
    pages and moving them into lists. This is done by
-   calling isolate_lru_page().
-   Calling isolate_lru_page() increases the references to the page
+   calling folio_isolate_lru().
+   Calling folio_isolate_lru() increases the references to the page
    so that it cannot vanish while the page migration occurs.
    It also prevents the swapper or other scans from encountering
    the page.
@@ -86,7 +86,7 @@ How migrate_pages() works
 
 migrate_pages() does several passes over its list of pages. A page is moved
 if all references to a page are removable at the time. The page has
-already been removed from the LRU via isolate_lru_page() and the refcount
+already been removed from the LRU via folio_isolate_lru() and the refcount
 is increased so that the page cannot be freed while page migration occurs.
 
 Steps:
diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index b6a07a26b10d..de3511c8d82d 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -230,7 +230,7 @@ In Nick's patch, he used one of the struct page LRU list link fields as a count
 of VM_LOCKED VMAs that map the page (Rik van Riel had the same idea three years
 earlier).  But this use of the link field for a count prevented the management
 of the pages on an LRU list, and thus mlocked pages were not migratable as
-isolate_lru_page() could not detect them, and the LRU list link field was not
+folio_isolate_lru() could not detect them, and the LRU list link field was not
 available to the migration subsystem.
 
 Nick resolved this by putting mlocked pages back on the LRU list before
diff --git a/Documentation/translations/zh_CN/mm/page_migration.rst b/Documentation/translations/zh_CN/mm/page_migration.rst
index f95063826a15..8c8461c6cb9f 100644
--- a/Documentation/translations/zh_CN/mm/page_migration.rst
+++ b/Documentation/translations/zh_CN/mm/page_migration.rst
@@ -50,8 +50,8 @@ mbind()设置一个新的内存策略。一个进程的页面也可以通过sys_
 
 1. 从LRU中移除页面。
 
-   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 isolate_lru_page()
-   来完成的。调用isolate_lru_page()增加了对该页的引用,这样在页面迁移发生时它就不会
+   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 folio_isolate_lru()
+   来完成的。调用folio_isolate_lru()增加了对该页的引用,这样在页面迁移发生时它就不会
    消失。它还可以防止交换器或其他扫描器遇到该页。
 
 
@@ -65,7 +65,7 @@ migrate_pages()如何工作
 =======================
 
 migrate_pages()对它的页面列表进行了多次处理。如果当时对一个页面的所有引用都可以被移除,
-那么这个页面就会被移动。该页已经通过isolate_lru_page()从LRU中移除,并且refcount被
+那么这个页面就会被移动。该页已经通过folio_isolate_lru()从LRU中移除,并且refcount被
 增加,以便在页面迁移发生时不释放该页。
 
 步骤:
diff --git a/mm/filemap.c b/mm/filemap.c
index fc784259f278..a28d05c54dd4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -113,7 +113,7 @@
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
  *    ->lruvec->lru_lock	(follow_page->mark_page_accessed)
- *    ->lruvec->lru_lock	(check_pte_range->isolate_lru_page)
+ *    ->lruvec->lru_lock	(check_pte_range->folio_isolate_lru)
  *    ->private_lock		(folio_remove_rmap_pte->set_page_dirty)
  *    ->i_pages lock		(folio_remove_rmap_pte->set_page_dirty)
  *    bdi.wb->list_lock		(folio_remove_rmap_pte->set_page_dirty)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index f31e0ce65b11..3e72bec05415 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -99,13 +99,6 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping,
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
 
-bool isolate_lru_page(struct page *page)
-{
-	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
-		return false;
-	return folio_isolate_lru((struct folio *)page);
-}
-
 void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
diff --git a/mm/internal.h b/mm/internal.h
index e3968061010b..2a6c451cab97 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -383,7 +383,6 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
 /*
  * in mm/vmscan.c:
  */
-bool isolate_lru_page(struct page *page);
 bool folio_isolate_lru(struct folio *folio);
 void putback_lru_page(struct page *page);
 void folio_putback_lru(struct folio *folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2f73d2aa9ae8..bdd926fac5f4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -607,7 +607,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		}
 
 		/*
-		 * We can do it before isolate_lru_page because the
+		 * We can do it before folio_isolate_lru because the
 		 * page can't be freed from under us. NOTE: PG_lock
 		 * is needed to serialize against split_huge_page
 		 * when invoked from the VM.
@@ -1847,7 +1847,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 					result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
-				/* drain lru cache to help isolate_lru_page() */
+				/* drain lru cache to help folio_isolate_lru() */
 				lru_add_drain();
 			} else if (folio_trylock(folio)) {
 				folio_get(folio);
@@ -1862,7 +1862,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				page_cache_sync_readahead(mapping, &file->f_ra,
 							  file, index,
 							  end - index);
-				/* drain lru cache to help isolate_lru_page() */
+				/* drain lru cache to help folio_isolate_lru() */
 				lru_add_drain();
 				folio = filemap_lock_folio(mapping, index);
 				if (IS_ERR(folio)) {
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 423d71ad736a..a625f4694b56 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -328,7 +328,7 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 
 	/*
 	 * One extra ref because caller holds an extra reference, either from
-	 * isolate_lru_page() for a regular page, or migrate_vma_collect() for
+	 * folio_isolate_lru() for a regular page, or migrate_vma_collect() for
 	 * a device page.
 	 */
 	int extra = 1 + (page == fault_page);
diff --git a/mm/swap.c b/mm/swap.c
index f0d478eee292..b298af23d713 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -918,7 +918,7 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
 
 /*
  * lru_cache_disable() needs to be called before we start compiling
- * a list of pages to be migrated using isolate_lru_page().
+ * a list of pages to be migrated using folio_isolate_lru().
  * It drains pages on LRU cache and then disable on all cpus until
  * lru_cache_enable is called.
  *
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 09/10] mm: migrate_device: use more folio in migrate_device_unmap()
  2024-04-25  8:40 ` [PATCH v2 09/10] mm: migrate_device: use more folio in migrate_device_unmap() Kefeng Wang
@ 2024-04-25  9:31   ` David Hildenbrand
  2024-04-25 11:05     ` Kefeng Wang
  0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2024-04-25  9:31 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Zi Yan,
	Hugh Dickins, Jonathan Corbet, linux-mm, Vishal Moola

On 25.04.24 10:40, Kefeng Wang wrote:
> The page for migrate_device_unmap() already has a reference, so it is
> safe to convert the page to a folio to save a few calls to compound_head(),
> which removes the last isolate_lru_page() call.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>   mm/migrate_device.c | 16 ++++++++--------
>   1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index a68616c1965f..423d71ad736a 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -379,33 +379,33 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
>   			continue;
>   		}
>   
> +		folio = page_folio(page);
>   		/* ZONE_DEVICE pages are not on LRU */
> -		if (!is_zone_device_page(page)) {
> -			if (!PageLRU(page) && allow_drain) {
> +		if (!folio_is_zone_device(folio)) {
> +			if (!folio_test_lru(folio) && allow_drain) {
>   				/* Drain CPU's lru cache */
>   				lru_add_drain_all();
>   				allow_drain = false;
>   			}
>   
> -			if (!isolate_lru_page(page)) {
> +			if (!folio_isolate_lru(folio)) {
>   				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
>   				restore++;
>   				continue;
>   			}
>   
>   			/* Drop the reference we took in collect */
> -			put_page(page);
> +			folio_put(folio);
>   		}
>   
> -		folio = page_folio(page);
>   		if (folio_mapped(folio))
>   			try_to_migrate(folio, 0);
>   
>   		if (page_mapped(page) ||

folio_mapped(), just as above

-- 
Cheers,

David / dhildenb



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 09/10] mm: migrate_device: use more folio in migrate_device_unmap()
  2024-04-25  9:31   ` David Hildenbrand
@ 2024-04-25 11:05     ` Kefeng Wang
  0 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-25 11:05 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton
  Cc: willy, Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Zi Yan,
	Hugh Dickins, Jonathan Corbet, linux-mm, Vishal Moola



On 2024/4/25 17:31, David Hildenbrand wrote:
> On 25.04.24 10:40, Kefeng Wang wrote:
>> The page for migrate_device_unmap() already has a reference, so it is
>> safe to convert the page to a folio to save a few calls to compound_head(),
>> which removes the last isolate_lru_page() call.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   mm/migrate_device.c | 16 ++++++++--------
>>   1 file changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>> index a68616c1965f..423d71ad736a 100644
>> --- a/mm/migrate_device.c
>> +++ b/mm/migrate_device.c
>> @@ -379,33 +379,33 @@ static unsigned long 
>> migrate_device_unmap(unsigned long *src_pfns,
>>               continue;
>>           }
>> +        folio = page_folio(page);
>>           /* ZONE_DEVICE pages are not on LRU */
>> -        if (!is_zone_device_page(page)) {
>> -            if (!PageLRU(page) && allow_drain) {
>> +        if (!folio_is_zone_device(folio)) {
>> +            if (!folio_test_lru(folio) && allow_drain) {
>>                   /* Drain CPU's lru cache */
>>                   lru_add_drain_all();
>>                   allow_drain = false;
>>               }
>> -            if (!isolate_lru_page(page)) {
>> +            if (!folio_isolate_lru(folio)) {
>>                   src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
>>                   restore++;
>>                   continue;
>>               }
>>               /* Drop the reference we took in collect */
>> -            put_page(page);
>> +            folio_put(folio);
>>           }
>> -        folio = page_folio(page);
>>           if (folio_mapped(folio))
>>               try_to_migrate(folio, 0);
>>           if (page_mapped(page) ||
> 
> folio_mapped(), just as above

ah, don't know why I missed this one, will fix, thanks.
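
The fix, sketched (presumably what v3 will carry):

-		if (page_mapped(page) ||
+		if (folio_mapped(folio) ||
 		    !migrate_vma_check_page(page, fault_page)) {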

> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
  2024-04-25  8:40 ` [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
@ 2024-04-27  3:57   ` kernel test robot
  2024-04-28  0:49     ` Kefeng Wang
  2024-04-27  5:40   ` kernel test robot
  2024-04-27  7:23   ` kernel test robot
  2 siblings, 1 reply; 17+ messages in thread
From: kernel test robot @ 2024-04-27  3:57 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, willy,
	David Hildenbrand, Miaohe Lin, Naoya Horiguchi, Oscar Salvador,
	Zi Yan, Hugh Dickins, Jonathan Corbet, Vishal Moola, Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-memory_hotplug-check-hwpoisoned-page-firstly-in-do_migrate_range/20240425-164317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20240425084028.3888403-2-wangkefeng.wang%40huawei.com
patch subject: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
config: arm64-defconfig (https://download.01.org/0day-ci/archive/20240427/202404271110.2fxPtHNB-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240427/202404271110.2fxPtHNB-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202404271110.2fxPtHNB-lkp@intel.com/

All warnings (new ones prefixed by >>):

   mm/memory_hotplug.c: In function 'isolate_and_unmap_hwposion_folio':
   mm/memory_hotplug.c:1786:27: error: implicit declaration of function 'hugetlb_page_mapping_lock_write'; did you mean 'hugetlb_folio_mapping_lock_write'? [-Werror=implicit-function-declaration]
    1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
         |                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         |                           hugetlb_folio_mapping_lock_write
>> mm/memory_hotplug.c:1786:25: warning: assignment to 'struct address_space *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
         |                         ^
   cc1: some warnings being treated as errors


vim +1786 mm/memory_hotplug.c

  1774	
  1775	static bool isolate_and_unmap_hwposion_folio(struct folio *folio)
  1776	{
  1777		if (WARN_ON(folio_test_lru(folio)))
  1778			folio_isolate_lru(folio);
  1779	
  1780		if (!folio_mapped(folio))
  1781			return false;
  1782	
  1783		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
  1784			struct address_space *mapping;
  1785	
> 1786			mapping = hugetlb_page_mapping_lock_write(&folio->page);
  1787			if (mapping) {
  1788				/*
  1789				 * In shared mappings, try_to_unmap could potentially
  1790				 * call huge_pmd_unshare.  Because of this, take
  1791				 * semaphore in write mode here and set TTU_RMAP_LOCKED
  1792				 * to let lower levels know we have taken the lock.
  1793				 */
  1794				try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_RMAP_LOCKED);
  1795				i_mmap_unlock_write(mapping);
  1796			}
  1797		} else {
  1798			try_to_unmap(folio, TTU_IGNORE_MLOCK);
  1799		}
  1800	
  1801		return folio_mapped(folio);
  1802	}
  1803	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
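
Assuming the renamed helper in mm-everything takes a folio, as the
compiler's suggestion implies, the fixup is presumably a one-line change
(untested sketch):

-		mapping = hugetlb_page_mapping_lock_write(&folio->page);
+		mapping = hugetlb_folio_mapping_lock_write(folio);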


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
  2024-04-25  8:40 ` [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
  2024-04-27  3:57   ` kernel test robot
@ 2024-04-27  5:40   ` kernel test robot
  2024-04-27  7:23   ` kernel test robot
  2 siblings, 0 replies; 17+ messages in thread
From: kernel test robot @ 2024-04-27  5:40 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, willy,
	David Hildenbrand, Miaohe Lin, Naoya Horiguchi, Oscar Salvador,
	Zi Yan, Hugh Dickins, Jonathan Corbet, Vishal Moola, Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-memory_hotplug-check-hwpoisoned-page-firstly-in-do_migrate_range/20240425-164317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20240425084028.3888403-2-wangkefeng.wang%40huawei.com
patch subject: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
config: arm64-defconfig (https://download.01.org/0day-ci/archive/20240427/202404271311.KpDy4akD-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240427/202404271311.KpDy4akD-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202404271311.KpDy4akD-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/memory_hotplug.c: In function 'isolate_and_unmap_hwposion_folio':
>> mm/memory_hotplug.c:1786:27: error: implicit declaration of function 'hugetlb_page_mapping_lock_write'; did you mean 'hugetlb_folio_mapping_lock_write'? [-Werror=implicit-function-declaration]
    1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
         |                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         |                           hugetlb_folio_mapping_lock_write
   mm/memory_hotplug.c:1786:25: warning: assignment to 'struct address_space *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
         |                         ^
   cc1: some warnings being treated as errors


vim +1786 mm/memory_hotplug.c

  1774	
  1775	static bool isolate_and_unmap_hwposion_folio(struct folio *folio)
  1776	{
  1777		if (WARN_ON(folio_test_lru(folio)))
  1778			folio_isolate_lru(folio);
  1779	
  1780		if (!folio_mapped(folio))
  1781			return false;
  1782	
  1783		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
  1784			struct address_space *mapping;
  1785	
> 1786			mapping = hugetlb_page_mapping_lock_write(&folio->page);
  1787			if (mapping) {
  1788				/*
  1789				 * In shared mappings, try_to_unmap could potentially
  1790				 * call huge_pmd_unshare.  Because of this, take
  1791				 * semaphore in write mode here and set TTU_RMAP_LOCKED
  1792				 * to let lower levels know we have taken the lock.
  1793				 */
  1794				try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_RMAP_LOCKED);
  1795				i_mmap_unlock_write(mapping);
  1796			}
  1797		} else {
  1798			try_to_unmap(folio, TTU_IGNORE_MLOCK);
  1799		}
  1800	
  1801		return folio_mapped(folio);
  1802	}
  1803	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
  2024-04-25  8:40 ` [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
  2024-04-27  3:57   ` kernel test robot
  2024-04-27  5:40   ` kernel test robot
@ 2024-04-27  7:23   ` kernel test robot
  2 siblings, 0 replies; 17+ messages in thread
From: kernel test robot @ 2024-04-27  7:23 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: llvm, oe-kbuild-all, Linux Memory Management List, willy,
	David Hildenbrand, Miaohe Lin, Naoya Horiguchi, Oscar Salvador,
	Zi Yan, Hugh Dickins, Jonathan Corbet, Vishal Moola, Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-memory_hotplug-check-hwpoisoned-page-firstly-in-do_migrate_range/20240425-164317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20240425084028.3888403-2-wangkefeng.wang%40huawei.com
patch subject: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
config: s390-defconfig (https://download.01.org/0day-ci/archive/20240427/202404271547.obYnCedG-lkp@intel.com/config)
compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project 5ef5eb66fb428aaf61fb51b709f065c069c11242)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240427/202404271547.obYnCedG-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202404271547.obYnCedG-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from mm/memory_hotplug.c:9:
   In file included from include/linux/mm.h:2253:
   include/linux/vmstat.h:500:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
     500 |         return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~ ^
     501 |                            item];
         |                            ~~~~
   include/linux/vmstat.h:507:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
     507 |         return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~ ^
     508 |                            NR_VM_NUMA_EVENT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~~
   include/linux/vmstat.h:514:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
     514 |         return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
         |                               ~~~~~~~~~~~ ^ ~~~
   include/linux/vmstat.h:519:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
     519 |         return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~ ^
     520 |                            NR_VM_NUMA_EVENT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~~
   include/linux/vmstat.h:528:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
     528 |         return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~ ^
     529 |                            NR_VM_NUMA_EVENT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~~
   In file included from mm/memory_hotplug.c:30:
   include/linux/mm_inline.h:47:41: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
      47 |         __mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
         |                                    ~~~~~~~~~~~ ^ ~~~
   include/linux/mm_inline.h:49:22: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
      49 |                                 NR_ZONE_LRU_BASE + lru, nr_pages);
         |                                 ~~~~~~~~~~~~~~~~ ^ ~~~
   In file included from mm/memory_hotplug.c:34:
   In file included from include/linux/memblock.h:13:
   In file included from arch/s390/include/asm/dma.h:5:
   In file included from include/linux/io.h:14:
   In file included from arch/s390/include/asm/io.h:78:
   include/asm-generic/io.h:548:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     548 |         val = __raw_readb(PCI_IOBASE + addr);
         |                           ~~~~~~~~~~ ^
   include/asm-generic/io.h:561:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     561 |         val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
         |                                                         ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:37:59: note: expanded from macro '__le16_to_cpu'
      37 | #define __le16_to_cpu(x) __swab16((__force __u16)(__le16)(x))
         |                                                           ^
   include/uapi/linux/swab.h:102:54: note: expanded from macro '__swab16'
     102 | #define __swab16(x) (__u16)__builtin_bswap16((__u16)(x))
         |                                                      ^
   In file included from mm/memory_hotplug.c:34:
   In file included from include/linux/memblock.h:13:
   In file included from arch/s390/include/asm/dma.h:5:
   In file included from include/linux/io.h:14:
   In file included from arch/s390/include/asm/io.h:78:
   include/asm-generic/io.h:574:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     574 |         val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
         |                                                         ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:35:59: note: expanded from macro '__le32_to_cpu'
      35 | #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
         |                                                           ^
   include/uapi/linux/swab.h:115:54: note: expanded from macro '__swab32'
     115 | #define __swab32(x) (__u32)__builtin_bswap32((__u32)(x))
         |                                                      ^
   In file included from mm/memory_hotplug.c:34:
   In file included from include/linux/memblock.h:13:
   In file included from arch/s390/include/asm/dma.h:5:
   In file included from include/linux/io.h:14:
   In file included from arch/s390/include/asm/io.h:78:
   include/asm-generic/io.h:585:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     585 |         __raw_writeb(value, PCI_IOBASE + addr);
         |                             ~~~~~~~~~~ ^
   include/asm-generic/io.h:595:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     595 |         __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
         |                                                       ~~~~~~~~~~ ^
   include/asm-generic/io.h:605:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     605 |         __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
         |                                                       ~~~~~~~~~~ ^
   include/asm-generic/io.h:693:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     693 |         readsb(PCI_IOBASE + addr, buffer, count);
         |                ~~~~~~~~~~ ^
   include/asm-generic/io.h:701:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     701 |         readsw(PCI_IOBASE + addr, buffer, count);
         |                ~~~~~~~~~~ ^
   include/asm-generic/io.h:709:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     709 |         readsl(PCI_IOBASE + addr, buffer, count);
         |                ~~~~~~~~~~ ^
   include/asm-generic/io.h:718:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     718 |         writesb(PCI_IOBASE + addr, buffer, count);
         |                 ~~~~~~~~~~ ^
   include/asm-generic/io.h:727:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     727 |         writesw(PCI_IOBASE + addr, buffer, count);
         |                 ~~~~~~~~~~ ^
   include/asm-generic/io.h:736:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     736 |         writesl(PCI_IOBASE + addr, buffer, count);
         |                 ~~~~~~~~~~ ^
>> mm/memory_hotplug.c:1786:13: error: call to undeclared function 'hugetlb_page_mapping_lock_write'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
         |                           ^
   mm/memory_hotplug.c:1786:13: note: did you mean 'hugetlb_folio_mapping_lock_write'?
   include/linux/hugetlb.h:181:23: note: 'hugetlb_folio_mapping_lock_write' declared here
     181 | struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio);
         |                       ^
>> mm/memory_hotplug.c:1786:11: error: incompatible integer to pointer conversion assigning to 'struct address_space *' from 'int' [-Wint-conversion]
    1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
         |                         ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   19 warnings and 2 errors generated.


vim +/hugetlb_page_mapping_lock_write +1786 mm/memory_hotplug.c

  1774	
  1775	static bool isolate_and_unmap_hwposion_folio(struct folio *folio)
  1776	{
  1777		if (WARN_ON(folio_test_lru(folio)))
  1778			folio_isolate_lru(folio);
  1779	
  1780		if (!folio_mapped(folio))
  1781			return true;
  1782	
  1783		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
  1784			struct address_space *mapping;
  1785	
> 1786			mapping = hugetlb_page_mapping_lock_write(&folio->page);
  1787			if (mapping) {
  1788				/*
  1789				 * In shared mappings, try_to_unmap could potentially
  1790				 * call huge_pmd_unshare.  Because of this, take
  1791				 * semaphore in write mode here and set TTU_RMAP_LOCKED
  1792				 * to let lower levels know we have taken the lock.
  1793				 */
  1794				try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_RMAP_LOCKED);
  1795				i_mmap_unlock_write(mapping);
  1796			}
  1797		} else {
  1798			try_to_unmap(folio, TTU_IGNORE_MLOCK);
  1799		}
  1800	
  1801		return folio_mapped(folio);
  1802	}
  1803	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
  2024-04-27  3:57   ` kernel test robot
@ 2024-04-28  0:49     ` Kefeng Wang
  0 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2024-04-28  0:49 UTC (permalink / raw)
  To: kernel test robot, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, willy,
	David Hildenbrand, Miaohe Lin, Naoya Horiguchi, Oscar Salvador,
	Zi Yan, Hugh Dickins, Jonathan Corbet, Vishal Moola



On 2024/4/27 11:57, kernel test robot wrote:
> Hi Kefeng,
> 
> kernel test robot noticed the following build warnings:
> 
> [auto build test WARNING on akpm-mm/mm-everything]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-memory_hotplug-check-hwpoisoned-page-firstly-in-do_migrate_range/20240425-164317
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link:    https://lore.kernel.org/r/20240425084028.3888403-2-wangkefeng.wang%40huawei.com
> patch subject: [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range()
> config: arm64-defconfig (https://download.01.org/0day-ci/archive/20240427/202404271110.2fxPtHNB-lkp@intel.com/config)
> compiler: aarch64-linux-gcc (GCC) 13.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240427/202404271110.2fxPtHNB-lkp@intel.com/reproduce)
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add the following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202404271110.2fxPtHNB-lkp@intel.com/
> 
> All warnings (new ones prefixed by >>):
> 
>     mm/memory_hotplug.c: In function 'isolate_and_unmap_hwposion_folio':
>     mm/memory_hotplug.c:1786:27: error: implicit declaration of function 'hugetlb_page_mapping_lock_write'; did you mean 'hugetlb_folio_mapping_lock_write'? [-Werror=implicit-function-declaration]
>      1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
>           |                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>           |                           hugetlb_folio_mapping_lock_write
>>> mm/memory_hotplug.c:1786:25: warning: assignment to 'struct address_space *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
>      1786 |                 mapping = hugetlb_page_mapping_lock_write(&folio->page);
>           |                         ^
>     cc1: some warnings being treated as errors


The patch is based on next-20240425; next-20240426 picked up
"mm: convert hugetlb_page_mapping_lock_write to folio", so this is easy to
fix by converting the call to the hugetlb_folio_mapping_lock_write() API
(sketched below). I will keep the patch as-is for a while to see whether
there are any new comments on this patchset.
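
For reference, a minimal sketch of the conversion, written against the
hugetlb_folio_mapping_lock_write() declaration quoted in the robot report
above (untested):

-		mapping = hugetlb_page_mapping_lock_write(&folio->page);
+		mapping = hugetlb_folio_mapping_lock_write(folio);

Since the folio API takes the folio directly, this also drops the
&folio->page round-trip.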


Thanks.

> 
> 
> vim +1786 mm/memory_hotplug.c
> 
>    1774	
>    1775	static bool isolate_and_unmap_hwposion_folio(struct folio *folio)
>    1776	{
>    1777		if (WARN_ON(folio_test_lru(folio)))
>    1778			folio_isolate_lru(folio);
>    1779	
>    1780		if (!folio_mapped(folio))
>    1781			return true;
>    1782	
>    1783		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
>    1784			struct address_space *mapping;
>    1785	
>> 1786			mapping = hugetlb_page_mapping_lock_write(&folio->page);
>    1787			if (mapping) {
>    1788				/*
>    1789				 * In shared mappings, try_to_unmap could potentially
>    1790				 * call huge_pmd_unshare.  Because of this, take
>    1791				 * semaphore in write mode here and set TTU_RMAP_LOCKED
>    1792				 * to let lower levels know we have taken the lock.
>    1793				 */
>    1794				try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_RMAP_LOCKED);
>    1795				i_mmap_unlock_write(mapping);
>    1796			}
>    1797		} else {
>    1798			try_to_unmap(folio, TTU_IGNORE_MLOCK);
>    1799		}
>    1800	
>    1801		return folio_mapped(folio);
>    1802	}
>    1803	
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2024-04-28  0:50 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-04-25  8:40 [PATCH v2 00/10] mm: remove isolate_lru_page() and isolate_movable_page() Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 01/10] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
2024-04-27  3:57   ` kernel test robot
2024-04-28  0:49     ` Kefeng Wang
2024-04-27  5:40   ` kernel test robot
2024-04-27  7:23   ` kernel test robot
2024-04-25  8:40 ` [PATCH v2 02/10] mm: add isolate_folio_to_list() Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 03/10] mm: memory_hotplug: unify Huge/LRU/non-LRU folio isolation Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 04/10] mm: compaction: try get reference before non-lru movable folio migration Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 05/10] mm: migrate: add folio_isolate_movable() Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 06/10] mm: compaction: use folio_isolate_movable() Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 07/10] mm: migrate: " Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 08/10] mm: migrate: remove isolate_movable_page() Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 09/10] mm: migrate_device: use more folio in migrate_device_unmap() Kefeng Wang
2024-04-25  9:31   ` David Hildenbrand
2024-04-25 11:05     ` Kefeng Wang
2024-04-25  8:40 ` [PATCH v2 10/10] mm: remove isolate_lru_page() Kefeng Wang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).