* [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio
@ 2024-03-21  3:27 Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Folio migration is widely used in the kernel (memory compaction, memory
hotplug, soft offline, NUMA balancing, memory demotion/promotion, etc),
but if a poisoned source folio is accessed during migration, the kernel
will panic.

There is a mechanism in the kernel to recover from uncorrectable memory
errors, ARCH_HAS_COPY_MC (Machine Check Safe Memory Copy), which is already
used in NVDIMM and core-mm paths (e.g. CoW, khugepaged, coredump, ksm copy);
see the copy_mc_to_{user,kernel} and copy_mc_{user_}highpage callers.
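
At the page level this boils down to copy_mc_highpage(), which returns 0 on
success and a nonzero value when it hits poisoned source data, instead of
panicking. A minimal sketch of that contract (the wrapper function below is
purely illustrative and not part of the series):

  /* Illustrative only: fail gracefully instead of panicking on poison. */
  static int copy_one_page_mc(struct page *dst, struct page *src)
  {
  	if (copy_mc_highpage(dst, src))
  		return -EFAULT;	/* poisoned source page, let the caller bail out */
  	return 0;
  }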

This series adds that recovery mechanism to the folio copy step of the
widely used folio migration. Please note that, because folio migration is
never guaranteed to succeed, we can choose to make it tolerant of memory
failures: folio_mc_copy() is added as a #MC-safe version of folio_copy(),
and once a poisoned source folio is accessed we return an error and make
the folio migration fail, which avoids panics like the one shown below.

  CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
  pc : copy_page+0x10/0xc0
  lr : copy_highpage+0x38/0x50
  ...
  Call trace:
   copy_page+0x10/0xc0
   folio_copy+0x78/0x90
   migrate_folio_extra+0x54/0xa0
   move_to_new_folio+0xd8/0x1f0
   migrate_folio_move+0xb8/0x300
   migrate_pages_batch+0x528/0x788
   migrate_pages_sync+0x8c/0x258
   migrate_pages+0x440/0x528
   soft_offline_in_use_page+0x2ec/0x3c0
   soft_offline_page+0x238/0x310
   soft_offline_page_store+0x6c/0xc0
   dev_attr_store+0x20/0x40
   sysfs_kf_write+0x4c/0x68
   kernfs_fop_write_iter+0x130/0x1c8
   new_sync_write+0xa4/0x138
   vfs_write+0x238/0x2d8
   ksys_write+0x74/0x110

v1:
- no change, resend and rebased on 6.9-rc1

rfcv2:
- Separate __migrate_device_pages() cleanup from patch "remove 
  migrate_folio_extra()", suggested by Matthew
- Split folio_migrate_mapping(), move refcount check/freeze out
  of folio_migrate_mapping(), suggested by Matthew
- add RB

Kefeng Wang (11):
  mm: migrate: simplify __buffer_migrate_folio()
  mm: migrate_device: use more folio in __migrate_device_pages()
  mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY
  mm: migrate: remove migrate_folio_extra()
  mm: remove MIGRATE_SYNC_NO_COPY mode
  mm: migrate: split folio_migrate_mapping()
  mm: add folio_mc_copy()
  mm: migrate: support poisoned recover from migrate folio
  fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio()
  mm: migrate: remove folio_migrate_copy()
  fs: aio: add explicit check for large folio in aio_migrate_folio()

 fs/aio.c                     |  15 ++--
 fs/hugetlbfs/inode.c         |   5 +-
 include/linux/migrate.h      |   3 -
 include/linux/migrate_mode.h |   5 --
 include/linux/mm.h           |   1 +
 mm/balloon_compaction.c      |   8 --
 mm/migrate.c                 | 157 +++++++++++++++++------------------
 mm/migrate_device.c          |  28 +++----
 mm/util.c                    |  20 +++++
 mm/zsmalloc.c                |   8 --
 10 files changed, 115 insertions(+), 135 deletions(-)

-- 
2.27.0



* [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-04-01 17:54   ` Vishal Moola
  2024-04-16  9:28   ` Miaohe Lin
  2024-03-21  3:27 ` [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages() Kefeng Wang
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Use filemap_migrate_folio() helper to simplify __buffer_migrate_folio().

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 73a052a382f1..cb4cbaa42a35 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -777,24 +777,16 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 		}
 	}
 
-	rc = folio_migrate_mapping(mapping, dst, src, 0);
+	rc = filemap_migrate_folio(mapping, dst, src, mode);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		goto unlock_buffers;
 
-	folio_attach_private(dst, folio_detach_private(src));
-
 	bh = head;
 	do {
 		folio_set_bh(bh, dst, bh_offset(bh));
 		bh = bh->b_this_page;
 	} while (bh != head);
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
-
-	rc = MIGRATEPAGE_SUCCESS;
 unlock_buffers:
 	if (check_refs)
 		spin_unlock(&mapping->i_private_lock);
-- 
2.27.0



* [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-04-01 18:22   ` Vishal Moola
  2024-04-16 12:13   ` Miaohe Lin
  2024-03-21  3:27 ` [PATCH v1 03/11] mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY Kefeng Wang
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Use newfolio/folio for migrate_folio_extra()/migrate_folio() to
save four compound_head() calls.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate_device.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index b6c27c76e1a0..ee4d60951670 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -694,6 +694,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct address_space *mapping;
+		struct folio *newfolio, *folio;
 		int r;
 
 		if (!newpage) {
@@ -728,14 +729,13 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			continue;
 		}
 
-		mapping = page_mapping(page);
+		newfolio = page_folio(newpage);
+		folio = page_folio(page);
+		mapping = folio_mapping(folio);
 
-		if (is_device_private_page(newpage) ||
-		    is_device_coherent_page(newpage)) {
+		if (folio_is_device_private(newfolio) ||
+		    folio_is_device_coherent(newfolio)) {
 			if (mapping) {
-				struct folio *folio;
-
-				folio = page_folio(page);
 
 				/*
 				 * For now only support anonymous memory migrating to
@@ -749,7 +749,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 					continue;
 				}
 			}
-		} else if (is_zone_device_page(newpage)) {
+		} else if (folio_is_zone_device(newfolio)) {
 			/*
 			 * Other types of ZONE_DEVICE page are not supported.
 			 */
@@ -758,12 +758,11 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 		}
 
 		if (migrate && migrate->fault_page == page)
-			r = migrate_folio_extra(mapping, page_folio(newpage),
-						page_folio(page),
+			r = migrate_folio_extra(mapping, newfolio, folio,
 						MIGRATE_SYNC_NO_COPY, 1);
 		else
-			r = migrate_folio(mapping, page_folio(newpage),
-					page_folio(page), MIGRATE_SYNC_NO_COPY);
+			r = migrate_folio(mapping, newfolio, folio,
+					  MIGRATE_SYNC_NO_COPY);
 		if (r != MIGRATEPAGE_SUCCESS)
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 	}
-- 
2.27.0



* [PATCH v1 03/11] mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages() Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra() Kefeng Wang
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

__migrate_device_pages() does not copy the page contents itself, so it
passes MIGRATE_SYNC_NO_COPY into migrate_folio()/migrate_folio_extra().
An easier way is to call folio_migrate_mapping()/folio_migrate_flags()
directly; convert to that to unify and simplify the device page migration,
which also removes the only place that passes MIGRATE_SYNC_NO_COPY.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate_device.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index ee4d60951670..c0547271eaaa 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -695,7 +695,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct address_space *mapping;
 		struct folio *newfolio, *folio;
-		int r;
+		int r, extra_cnt = 0;
 
 		if (!newpage) {
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
@@ -757,14 +757,15 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			continue;
 		}
 
+		BUG_ON(folio_test_writeback(folio));
+
 		if (migrate && migrate->fault_page == page)
-			r = migrate_folio_extra(mapping, newfolio, folio,
-						MIGRATE_SYNC_NO_COPY, 1);
-		else
-			r = migrate_folio(mapping, newfolio, folio,
-					  MIGRATE_SYNC_NO_COPY);
+			extra_cnt = 1;
+		r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
 		if (r != MIGRATEPAGE_SUCCESS)
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+		else
+			folio_migrate_flags(newfolio, folio);
 	}
 
 	if (notified)
-- 
2.27.0



* [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (2 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 03/11] mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-04-16 12:40   ` Miaohe Lin
  2024-03-21  3:27 ` [PATCH v1 05/11] mm: remove MIGRATE_SYNC_NO_COPY mode Kefeng Wang
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

migrate_folio_extra() is only called within migrate.c now; convert it
into a static function and add a new src_private argument which can
be shared by migrate_folio() and filemap_migrate_folio() to simplify
the code a bit.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  2 --
 mm/migrate.c            | 33 +++++++++++----------------------
 2 files changed, 11 insertions(+), 24 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 2ce13e8a309b..517f70b70620 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -63,8 +63,6 @@ extern const char *migrate_reason_names[MR_TYPES];
 #ifdef CONFIG_MIGRATION
 
 void putback_movable_pages(struct list_head *l);
-int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
-		struct folio *src, enum migrate_mode mode, int extra_count);
 int migrate_folio(struct address_space *mapping, struct folio *dst,
 		struct folio *src, enum migrate_mode mode);
 int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
diff --git a/mm/migrate.c b/mm/migrate.c
index cb4cbaa42a35..c006b0b44013 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -658,18 +658,19 @@ EXPORT_SYMBOL(folio_migrate_copy);
  *                    Migration functions
  ***********************************************************/
 
-int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
-		struct folio *src, enum migrate_mode mode, int extra_count)
+static int __migrate_folio(struct address_space *mapping, struct folio *dst,
+			   struct folio *src, void *src_private,
+			   enum migrate_mode mode)
 {
 	int rc;
 
-	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
-
-	rc = folio_migrate_mapping(mapping, dst, src, extra_count);
-
+	rc = folio_migrate_mapping(mapping, dst, src, 0);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
 
+	if (src_private)
+		folio_attach_private(dst, folio_detach_private(src));
+
 	if (mode != MIGRATE_SYNC_NO_COPY)
 		folio_migrate_copy(dst, src);
 	else
@@ -690,9 +691,10 @@ int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
  * Folios are locked upon entry and exit.
  */
 int migrate_folio(struct address_space *mapping, struct folio *dst,
-		struct folio *src, enum migrate_mode mode)
+		  struct folio *src, enum migrate_mode mode)
 {
-	return migrate_folio_extra(mapping, dst, src, mode, 0);
+	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
+	return __migrate_folio(mapping, dst, src, NULL, mode);
 }
 EXPORT_SYMBOL(migrate_folio);
 
@@ -846,20 +848,7 @@ EXPORT_SYMBOL_GPL(buffer_migrate_folio_norefs);
 int filemap_migrate_folio(struct address_space *mapping,
 		struct folio *dst, struct folio *src, enum migrate_mode mode)
 {
-	int ret;
-
-	ret = folio_migrate_mapping(mapping, dst, src, 0);
-	if (ret != MIGRATEPAGE_SUCCESS)
-		return ret;
-
-	if (folio_get_private(src))
-		folio_attach_private(dst, folio_detach_private(src));
-
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
-	return MIGRATEPAGE_SUCCESS;
+	return __migrate_folio(mapping, dst, src, folio_get_private(src), mode);
 }
 EXPORT_SYMBOL_GPL(filemap_migrate_folio);
 
-- 
2.27.0



* [PATCH v1 05/11] mm: remove MIGRATE_SYNC_NO_COPY mode
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (3 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra() Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 06/11] mm: migrate: split folio_migrate_mapping() Kefeng Wang
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Commit 2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
introduced MIGRATE_SYNC_NO_COPY to allow offloading the copy to a device
DMA engine. It is only used by __migrate_device_pages() to decide whether
or not to copy the old page, and the mode is only set in the HMM path.
Since the previous cleanup removed the last place that sets
MIGRATE_SYNC_NO_COPY, the now-unnecessary mode can be removed.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                     | 12 +-----------
 fs/hugetlbfs/inode.c         |  5 +----
 include/linux/migrate_mode.h |  5 -----
 mm/balloon_compaction.c      |  8 --------
 mm/migrate.c                 |  8 +-------
 mm/zsmalloc.c                |  8 --------
 6 files changed, 3 insertions(+), 43 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 9cdaa2faa536..e36849a38f13 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -409,17 +409,7 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	struct kioctx *ctx;
 	unsigned long flags;
 	pgoff_t idx;
-	int rc;
-
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the ctx->completion_lock. That does not work with the
-	 * migration workflow of MIGRATE_SYNC_NO_COPY.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
-	rc = 0;
+	int rc = 0;
 
 	/* mapping->i_private_lock here protects against the kioctx teardown.  */
 	spin_lock(&mapping->i_private_lock);
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 6502c7e776d1..d0c496af8d43 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1131,10 +1131,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index f37cc03f9369..9fb482bb7323 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -7,16 +7,11 @@
  *	on most operations but not ->writepage as the potential stall time
  *	is too significant
  * MIGRATE_SYNC will block when migrating pages
- * MIGRATE_SYNC_NO_COPY will block when migrating pages but will not copy pages
- *	with the CPU. Instead, page copy happens outside the migratepage()
- *	callback and is likely using a DMA engine. See migrate_vma() and HMM
- *	(mm/hmm.c) for users of this mode.
  */
 enum migrate_mode {
 	MIGRATE_ASYNC,
 	MIGRATE_SYNC_LIGHT,
 	MIGRATE_SYNC,
-	MIGRATE_SYNC_NO_COPY,
 };
 
 enum migrate_reason {
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c96fed70b5..6597ebea8ae2 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -234,14 +234,6 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
 {
 	struct balloon_dev_info *balloon = balloon_page_device(page);
 
-	/*
-	 * We can not easily support the no copy case here so ignore it as it
-	 * is unlikely to be used with balloon pages. See include/linux/hmm.h
-	 * for a user of the MIGRATE_SYNC_NO_COPY mode.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index c006b0b44013..669c6c2a1868 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -671,10 +671,7 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
 
@@ -903,7 +900,6 @@ static int fallback_migrate_folio(struct address_space *mapping,
 		/* Only writeback folios in full synchronous migration */
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			return -EBUSY;
@@ -1161,7 +1157,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		 */
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			rc = -EBUSY;
@@ -1372,7 +1367,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 			goto out;
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			goto out;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7d7cb3eaabe0..4467cdb1f565 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1752,14 +1752,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the zs lock, which does not work with
-	 * MIGRATE_SYNC_NO_COPY workflow.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	/* The page is locked, so this pointer must remain valid */
-- 
2.27.0



* [PATCH v1 06/11] mm: migrate: split folio_migrate_mapping()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (4 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 05/11] mm: remove MIGRATE_SYNC_NO_COPY mode Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 07/11] mm: add folio_mc_copy() Kefeng Wang
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Split folio_migrate_mapping() into two parts, folio_refs_check_and_freeze()
and folio_replace_mapping_and_unfreeze(), and update the comments from page
to folio.

Note that folio_ref_freeze() is moved out of xas_lock_irq(); since the folio
is already isolated and locked during migration, this should cause no
functional change.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 74 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 42 insertions(+), 32 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 669c6c2a1868..59c7d66aacba 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -393,50 +393,49 @@ static int folio_expected_refs(struct address_space *mapping,
 }
 
 /*
- * Replace the page in the mapping.
- *
  * The number of remaining references must be:
- * 1 for anonymous pages without a mapping
- * 2 for pages with a mapping
- * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
+ * 1 for anonymous folios without a mapping
+ * 2 for folios with a mapping
+ * 3 for folios with a mapping and PagePrivate/PagePrivate2 set.
  */
-int folio_migrate_mapping(struct address_space *mapping,
-		struct folio *newfolio, struct folio *folio, int extra_count)
+static int folio_refs_check_and_freeze(struct address_space *mapping,
+				       struct folio *folio, int expected_cnt)
+{
+	if (!mapping) {
+		if (folio_ref_count(folio) != expected_cnt)
+			return -EAGAIN;
+	} else {
+		if (!folio_ref_freeze(folio, expected_cnt))
+			return -EAGAIN;
+	}
+
+	return 0;
+}
+
+/* The folio refcount must be frozen if the folio has a mapping */
+static void folio_replace_mapping_and_unfreeze(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int expected_cnt)
 {
 	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
 	struct zone *oldzone, *newzone;
-	int dirty;
-	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
 	long entries, i;
+	int dirty;
 
 	if (!mapping) {
-		/* Anonymous page without mapping */
-		if (folio_ref_count(folio) != expected_count)
-			return -EAGAIN;
-
-		/* No turning back from here */
+		/* Anonymous folio without mapping */
 		newfolio->index = folio->index;
 		newfolio->mapping = folio->mapping;
 		if (folio_test_swapbacked(folio))
 			__folio_set_swapbacked(newfolio);
-
-		return MIGRATEPAGE_SUCCESS;
+		return;
 	}
 
 	oldzone = folio_zone(folio);
 	newzone = folio_zone(newfolio);
 
+	/* Now we know that no one else is looking at the folio */
 	xas_lock_irq(&xas);
-	if (!folio_ref_freeze(folio, expected_count)) {
-		xas_unlock_irq(&xas);
-		return -EAGAIN;
-	}
-
-	/*
-	 * Now we know that no one else is looking at the folio:
-	 * no turning back from here.
-	 */
 	newfolio->index = folio->index;
 	newfolio->mapping = folio->mapping;
 	folio_ref_add(newfolio, nr); /* add cache reference */
@@ -452,7 +451,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 		entries = 1;
 	}
 
-	/* Move dirty while page refs frozen and newpage not yet exposed */
+	/* Move dirty while folio refs frozen and newfolio not yet exposed */
 	dirty = folio_test_dirty(folio);
 	if (dirty) {
 		folio_clear_dirty(folio);
@@ -466,22 +465,22 @@ int folio_migrate_mapping(struct address_space *mapping,
 	}
 
 	/*
-	 * Drop cache reference from old page by unfreezing
-	 * to one less reference.
+	 * Since the old folio's refcount is frozen, now drop the cache reference
+	 * from the old folio by unfreezing to one less reference.
 	 * We know this isn't the last reference.
 	 */
-	folio_ref_unfreeze(folio, expected_count - nr);
+	folio_ref_unfreeze(folio, expected_cnt - nr);
 
 	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
 
 	/*
 	 * If moved to a different zone then also account
-	 * the page for that zone. Other VM counters will be
+	 * the folio for that zone. Other VM counters will be
 	 * taken care of when we establish references to the
-	 * new page and drop references to the old page.
+	 * new folio and drop references to the old folio.
 	 *
-	 * Note that anonymous pages are accounted for
+	 * Note that anonymous folios are accounted for
 	 * via NR_FILE_PAGES and NR_ANON_MAPPED if they
 	 * are mapped to swap space.
 	 */
@@ -518,7 +517,18 @@ int folio_migrate_mapping(struct address_space *mapping,
 		}
 	}
 	local_irq_enable();
+}
+
+int folio_migrate_mapping(struct address_space *mapping, struct folio *newfolio,
+			  struct folio *folio, int extra_count)
+{
+	int ret, expected = folio_expected_refs(mapping, folio) + extra_count;
+
+	ret = folio_refs_check_and_freeze(mapping, folio, expected);
+	if (ret)
+		return ret;
 
+	folio_replace_mapping_and_unfreeze(mapping, newfolio, folio, expected);
 	return MIGRATEPAGE_SUCCESS;
 }
 EXPORT_SYMBOL(folio_migrate_mapping);
-- 
2.27.0
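
The point of the split shows up at the caller level: a fallible,
machine-check-safe copy can now sit between the refcount freeze and the
mapping replacement, which folio_migrate_mapping() alone could not
accommodate. Roughly (an illustrative sketch only; the real version lands
in patch 08, and folio_mc_copy() is introduced in patch 07):

  static int sketch_migrate_folio(struct address_space *mapping,
  				  struct folio *dst, struct folio *src)
  {
  	int expected = folio_expected_refs(mapping, src);
  	int rc;

  	rc = folio_refs_check_and_freeze(mapping, src, expected);
  	if (rc)
  		return rc;

  	rc = folio_mc_copy(dst, src);	/* may fail on a poisoned source */
  	if (rc) {
  		if (mapping)
  			folio_ref_unfreeze(src, expected);
  		return rc;
  	}

  	folio_replace_mapping_and_unfreeze(mapping, dst, src, expected);
  	folio_migrate_flags(dst, src);
  	return MIGRATEPAGE_SUCCESS;
  }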



* [PATCH v1 07/11] mm: add folio_mc_copy()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (5 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 06/11] mm: migrate: split folio_migrate_mapping() Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 08/11] mm: migrate: support poisoned recover from migrate folio Kefeng Wang
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Add a variant of folio_copy() which uses copy_mc_highpage() to support
machine-check-safe copying of a folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h |  1 +
 mm/util.c          | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0436b919f1c7..bd1bc29c38e2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1312,6 +1312,7 @@ void put_pages_list(struct list_head *pages);
 
 void split_page(struct page *page, unsigned int order);
 void folio_copy(struct folio *dst, struct folio *src);
+int folio_mc_copy(struct folio *dst, struct folio *src);
 
 unsigned long nr_free_buffer_pages(void);
 
diff --git a/mm/util.c b/mm/util.c
index 669397235787..b186d84b28b6 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -828,6 +828,26 @@ void folio_copy(struct folio *dst, struct folio *src)
 }
 EXPORT_SYMBOL(folio_copy);
 
+int folio_mc_copy(struct folio *dst, struct folio *src)
+{
+	long nr = folio_nr_pages(src);
+	long i = 0;
+	int ret = 0;
+
+	for (;;) {
+		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i))) {
+			ret = -EFAULT;
+			break;
+		}
+		if (++i == nr)
+			break;
+		cond_resched();
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(folio_mc_copy);
+
 int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS;
 int sysctl_overcommit_ratio __read_mostly = 50;
 unsigned long sysctl_overcommit_kbytes __read_mostly;
-- 
2.27.0
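
Two notes for callers of folio_mc_copy(): unlike folio_copy() it can fail,
so the return value must be checked, and since it may call cond_resched()
between the pages of a large folio it should not be called under a spinlock
(which is what the aio discussion on patch 11 below is about). A hedged
usage sketch (the caller function is hypothetical):

  static int try_copy_folio(struct folio *dst, struct folio *src)
  {
  	int rc;

  	might_sleep();	/* folio_mc_copy() may cond_resched() for large folios */
  	rc = folio_mc_copy(dst, src);
  	if (rc)		/* -EFAULT: hit a poisoned source page */
  		pr_warn("folio copy aborted due to hwpoison\n");
  	return rc;
  }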



* [PATCH v1 08/11] mm: migrate: support poisoned recover from migrate folio
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (6 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 07/11] mm: add folio_mc_copy() Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 09/11] fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio() Kefeng Wang
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Folio migration is widely used in the kernel (memory compaction, memory
hotplug, soft offline, NUMA balancing, memory demotion/promotion, etc),
but if a poisoned source folio is accessed during migration, the kernel
will panic.

There is a mechanism in the kernel to recover from uncorrectable memory
errors, ARCH_HAS_COPY_MC, which is already used in other core-mm paths,
e.g. CoW, khugepaged, coredump and ksm copy; see the copy_mc_to_{user,kernel}
and copy_mc_{user_}highpage callers.

To support recovery from a poisoned folio copy during folio migration, make
folio migration tolerant of memory failures and return an error instead;
since folio migration is never guaranteed to succeed anyway, this avoids
panics like the one shown below.

  CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
  pc : copy_page+0x10/0xc0
  lr : copy_highpage+0x38/0x50
  ...
  Call trace:
   copy_page+0x10/0xc0
   folio_copy+0x78/0x90
   migrate_folio_extra+0x54/0xa0
   move_to_new_folio+0xd8/0x1f0
   migrate_folio_move+0xb8/0x300
   migrate_pages_batch+0x528/0x788
   migrate_pages_sync+0x8c/0x258
   migrate_pages+0x440/0x528
   soft_offline_in_use_page+0x2ec/0x3c0
   soft_offline_page+0x238/0x310
   soft_offline_page_store+0x6c/0xc0
   dev_attr_store+0x20/0x40
   sysfs_kf_write+0x4c/0x68
   kernfs_fop_write_iter+0x130/0x1c8
   new_sync_write+0xa4/0x138
   vfs_write+0x238/0x2d8
   ksys_write+0x74/0x110

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 59c7d66aacba..a31dd2cfc646 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -672,16 +672,25 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 			   struct folio *src, void *src_private,
 			   enum migrate_mode mode)
 {
-	int rc;
+	int rc, expected_cnt = folio_expected_refs(mapping, src);
 
-	rc = folio_migrate_mapping(mapping, dst, src, 0);
-	if (rc != MIGRATEPAGE_SUCCESS)
+	rc = folio_refs_check_and_freeze(mapping, src, expected_cnt);
+	if (rc)
 		return rc;
 
+	rc = folio_mc_copy(dst, src);
+	if (rc) {
+		if (mapping)
+			folio_ref_unfreeze(src, expected_cnt);
+		return rc;
+	}
+
+	folio_replace_mapping_and_unfreeze(mapping, dst, src, expected_cnt);
+
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	folio_migrate_copy(dst, src);
+	folio_migrate_flags(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
 
-- 
2.27.0



* [PATCH v1 09/11] fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (7 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 08/11] mm: migrate: support poisoned recover from migrate folio Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 10/11] mm: migrate: remove folio_migrate_copy() Kefeng Wang
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Similar to __migrate_folio(), use folio_mc_copy() in HugeTLB folio
migration to avoid a panic when copying from a poisoned folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/hugetlbfs/inode.c |  2 +-
 mm/migrate.c         | 14 +++++++++-----
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index d0c496af8d43..e6733f3b6bf1 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1131,7 +1131,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	folio_migrate_copy(dst, src);
+	folio_migrate_flags(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index a31dd2cfc646..c0e2a26df30b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -541,15 +541,19 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct folio *dst, struct folio *src)
 {
 	XA_STATE(xas, &mapping->i_pages, folio_index(src));
-	int expected_count;
+	int rc, expected_count = folio_expected_refs(mapping, src);
 
-	xas_lock_irq(&xas);
-	expected_count = folio_expected_refs(mapping, src);
-	if (!folio_ref_freeze(src, expected_count)) {
-		xas_unlock_irq(&xas);
+	if (!folio_ref_freeze(src, expected_count))
 		return -EAGAIN;
+
+	rc = folio_mc_copy(dst, src);
+	if (rc) {
+		folio_ref_unfreeze(src, expected_count);
+		return rc;
 	}
 
+	xas_lock_irq(&xas);
+
 	dst->index = src->index;
 	dst->mapping = src->mapping;
 
-- 
2.27.0



* [PATCH v1 10/11] mm: migrate: remove folio_migrate_copy()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (8 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 09/11] fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio() Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:27 ` [PATCH v1 11/11] fs: aio: add explicit check for large folio in aio_migrate_folio() Kefeng Wang
  2024-03-28 13:30 ` [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

folio_migrate_copy() is just a wrapper around folio_copy() and
folio_migrate_flags(); it is simple and only aio uses it now, so open-code
it there and remove folio_migrate_copy().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                | 3 ++-
 include/linux/migrate.h | 1 -
 mm/migrate.c            | 7 -------
 3 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index e36849a38f13..9783bb5d81e7 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -454,7 +454,8 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	 * events from being lost.
 	 */
 	spin_lock_irqsave(&ctx->completion_lock, flags);
-	folio_migrate_copy(dst, src);
+	folio_copy(dst, src);
+	folio_migrate_flags(dst, src);
 	BUG_ON(ctx->ring_pages[idx] != &src->page);
 	ctx->ring_pages[idx] = &dst->page;
 	spin_unlock_irqrestore(&ctx->completion_lock, flags);
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 517f70b70620..f9d92482d117 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -76,7 +76,6 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 void migration_entry_wait_on_locked(swp_entry_t entry, spinlock_t *ptl)
 		__releases(ptl);
 void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
-void folio_migrate_copy(struct folio *newfolio, struct folio *folio);
 int folio_migrate_mapping(struct address_space *mapping,
 		struct folio *newfolio, struct folio *folio, int extra_count);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index c0e2a26df30b..2228ca681afb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -661,13 +661,6 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 }
 EXPORT_SYMBOL(folio_migrate_flags);
 
-void folio_migrate_copy(struct folio *newfolio, struct folio *folio)
-{
-	folio_copy(newfolio, folio);
-	folio_migrate_flags(newfolio, folio);
-}
-EXPORT_SYMBOL(folio_migrate_copy);
-
 /************************************************************
  *                    Migration functions
  ***********************************************************/
-- 
2.27.0



* [PATCH v1 11/11] fs: aio: add explicit check for large folio in aio_migrate_folio()
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (9 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 10/11] mm: migrate: remove folio_migrate_copy() Kefeng Wang
@ 2024-03-21  3:27 ` Kefeng Wang
  2024-03-21  3:35   ` Matthew Wilcox
  2024-03-28 13:30 ` [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
  11 siblings, 1 reply; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  3:27 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Kefeng Wang

Since copying a large folio can take a long time and involves a
cond_resched(), aio cannot support migrating large folios, as the copy
is done under a spinlock. Add an explicit check for large folios and
return an error directly.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/aio.c b/fs/aio.c
index 9783bb5d81e7..0391ef58c564 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -411,6 +411,10 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	pgoff_t idx;
 	int rc = 0;
 
+	/* Large folios aren't supported */
+	if (folio_test_large(src))
+		return -EINVAL;
+
 	/* mapping->i_private_lock here protects against the kioctx teardown.  */
 	spin_lock(&mapping->i_private_lock);
 	ctx = mapping->i_private_data;
-- 
2.27.0



* Re: [PATCH v1 11/11] fs: aio: add explicit check for large folio in aio_migrate_folio()
  2024-03-21  3:27 ` [PATCH v1 11/11] fs: aio: add explicit check for large folio in aio_migrate_folio() Kefeng Wang
@ 2024-03-21  3:35   ` Matthew Wilcox
  2024-03-21  5:40     ` Kefeng Wang
  0 siblings, 1 reply; 25+ messages in thread
From: Matthew Wilcox @ 2024-03-21  3:35 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, linux-mm, Tony Luck, Naoya Horiguchi, Miaohe Lin,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins

On Thu, Mar 21, 2024 at 11:27:47AM +0800, Kefeng Wang wrote:
> Since copying a large folio can take a long time and involves a
> cond_resched(), aio cannot support migrating large folios, as the copy
> is done under a spinlock. Add an explicit check for large folios and
> return an error directly.

This is unnecessary.  aio only allocates order-0 folios (it uses
find_or_create_page() to do it).

If you want to take on converting aio to use folios instead of pages,
that'd be a worthwhile project.
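
For anyone picking that up, a rough idea of what the allocation side might
look like with the folio API (a sketch only, inside the existing
aio_setup_ring() loop; flags approximate, error handling simplified, and
not part of this series):

  struct folio *folio;

  folio = __filemap_get_folio(file->f_mapping, i,
  			      FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
  			      GFP_USER | __GFP_ZERO);
  if (IS_ERR(folio))
  	break;	/* error handling simplified */
  folio_unlock(folio);
  /* ring_pages[] still stores struct page pointers for now */
  ctx->ring_pages[i] = folio_page(folio, 0);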


* Re: [PATCH v1 11/11] fs: aio: add explicit check for large folio in aio_migrate_folio()
  2024-03-21  3:35   ` Matthew Wilcox
@ 2024-03-21  5:40     ` Kefeng Wang
  0 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-21  5:40 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, linux-mm, Tony Luck, Naoya Horiguchi, Miaohe Lin,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins



On 2024/3/21 11:35, Matthew Wilcox wrote:
> On Thu, Mar 21, 2024 at 11:27:47AM +0800, Kefeng Wang wrote:
>> Since copying a large folio can take a long time and involves a
>> cond_resched(), aio cannot support migrating large folios, as the copy
>> is done under a spinlock. Add an explicit check for large folios and
>> return an error directly.
> 
> This is unnecessary.  aio only allocates order-0 folios (it uses
> find_or_create_page() to do it).

Yes, only order-0 now, I will drop it.

> 
> If you want to take on converting aio to use folios instead of pages,
> that'd be a worthwhile project.

OK, will try, thanks for your review.
> 


* Re: [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio
  2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (10 preceding siblings ...)
  2024-03-21  3:27 ` [PATCH v1 11/11] fs: aio: add explicit check for large folio in aio_migrate_folio() Kefeng Wang
@ 2024-03-28 13:30 ` Kefeng Wang
  11 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-03-28 13:30 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins

Hi, there have been no more changes since rfcv2, so this is a kindly ping.
Any comments? Thanks all.

On 2024/3/21 11:27, Kefeng Wang wrote:
> Folio migration is widely used in the kernel (memory compaction, memory
> hotplug, soft offline, NUMA balancing, memory demotion/promotion, etc),
> but if a poisoned source folio is accessed during migration, the kernel
> will panic.
>
> There is a mechanism in the kernel to recover from uncorrectable memory
> errors, ARCH_HAS_COPY_MC (Machine Check Safe Memory Copy), which is already
> used in NVDIMM and core-mm paths (e.g. CoW, khugepaged, coredump, ksm copy);
> see the copy_mc_to_{user,kernel} and copy_mc_{user_}highpage callers.
>
> This series adds that recovery mechanism to the folio copy step of the
> widely used folio migration. Please note that, because folio migration is
> never guaranteed to succeed, we can choose to make it tolerant of memory
> failures: folio_mc_copy() is added as a #MC-safe version of folio_copy(),
> and once a poisoned source folio is accessed we return an error and make
> the folio migration fail, which avoids panics like the one shown below.
> 
>    CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
>    pc : copy_page+0x10/0xc0
>    lr : copy_highpage+0x38/0x50
>    ...
>    Call trace:
>     copy_page+0x10/0xc0
>     folio_copy+0x78/0x90
>     migrate_folio_extra+0x54/0xa0
>     move_to_new_folio+0xd8/0x1f0
>     migrate_folio_move+0xb8/0x300
>     migrate_pages_batch+0x528/0x788
>     migrate_pages_sync+0x8c/0x258
>     migrate_pages+0x440/0x528
>     soft_offline_in_use_page+0x2ec/0x3c0
>     soft_offline_page+0x238/0x310
>     soft_offline_page_store+0x6c/0xc0
>     dev_attr_store+0x20/0x40
>     sysfs_kf_write+0x4c/0x68
>     kernfs_fop_write_iter+0x130/0x1c8
>     new_sync_write+0xa4/0x138
>     vfs_write+0x238/0x2d8
>     ksys_write+0x74/0x110
> 
> v1:
> - no change, resend and rebased on 6.9-rc1
> 
> rfcv2:
> - Separate __migrate_device_pages() cleanup from patch "remove
>    migrate_folio_extra()", suggested by Matthew
> - Split folio_migrate_mapping(), move refcount check/freeze out
>    of folio_migrate_mapping(), suggested by Matthew
> - add RB
> 
> Kefeng Wang (11):
>    mm: migrate: simplify __buffer_migrate_folio()
>    mm: migrate_device: use more folio in __migrate_device_pages()
>    mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY
>    mm: migrate: remove migrate_folio_extra()
>    mm: remove MIGRATE_SYNC_NO_COPY mode
>    mm: migrate: split folio_migrate_mapping()
>    mm: add folio_mc_copy()
>    mm: migrate: support poisoned recover from migrate folio
>    fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio()
>    mm: migrate: remove folio_migrate_copy()
>    fs: aio: add explicit check for large folio in aio_migrate_folio()
> 
>   fs/aio.c                     |  15 ++--
>   fs/hugetlbfs/inode.c         |   5 +-
>   include/linux/migrate.h      |   3 -
>   include/linux/migrate_mode.h |   5 --
>   include/linux/mm.h           |   1 +
>   mm/balloon_compaction.c      |   8 --
>   mm/migrate.c                 | 157 +++++++++++++++++------------------
>   mm/migrate_device.c          |  28 +++----
>   mm/util.c                    |  20 +++++
>   mm/zsmalloc.c                |   8 --
>   10 files changed, 115 insertions(+), 135 deletions(-)
> 


* Re: [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio()
  2024-03-21  3:27 ` [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
@ 2024-04-01 17:54   ` Vishal Moola
  2024-04-16  9:28   ` Miaohe Lin
  1 sibling, 0 replies; 25+ messages in thread
From: Vishal Moola @ 2024-04-01 17:54 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, linux-mm, Tony Luck, Naoya Horiguchi, Miaohe Lin,
	Matthew Wilcox, David Hildenbrand, Muchun Song, Benjamin LaHaise,
	jglisse, linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan,
	Hugh Dickins

On Thu, Mar 21, 2024 at 11:27:37AM +0800, Kefeng Wang wrote:
> Use filemap_migrate_folio() helper to simplify __buffer_migrate_folio().
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>


* Re: [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages()
  2024-03-21  3:27 ` [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages() Kefeng Wang
@ 2024-04-01 18:22   ` Vishal Moola
  2024-04-02  6:21     ` Kefeng Wang
  2024-04-16 12:13   ` Miaohe Lin
  1 sibling, 1 reply; 25+ messages in thread
From: Vishal Moola @ 2024-04-01 18:22 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, linux-mm, Tony Luck, Naoya Horiguchi, Miaohe Lin,
	Matthew Wilcox, David Hildenbrand, Muchun Song, Benjamin LaHaise,
	jglisse, linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan,
	Hugh Dickins

On Thu, Mar 21, 2024 at 11:27:38AM +0800, Kefeng Wang wrote:
>  
>  		if (!newpage) {
> @@ -728,14 +729,13 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>  			continue;
>  		}
>  
> -		mapping = page_mapping(page);
> +		newfolio = page_folio(newpage);

You could save another compound_head() call by passing the folio through
to migrate_vma_insert_page() and make it migrate_vma_insert_folio(),
since its already converted to use folios.

> +		folio = page_folio(page);
> +		mapping = folio_mapping(folio);
>  
 


* Re: [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages()
  2024-04-01 18:22   ` Vishal Moola
@ 2024-04-02  6:21     ` Kefeng Wang
  2024-04-02 15:54       ` Vishal Moola
  0 siblings, 1 reply; 25+ messages in thread
From: Kefeng Wang @ 2024-04-02  6:21 UTC (permalink / raw)
  To: Vishal Moola
  Cc: Andrew Morton, linux-mm, Tony Luck, Naoya Horiguchi, Miaohe Lin,
	Matthew Wilcox, David Hildenbrand, Muchun Song, Benjamin LaHaise,
	jglisse, linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan,
	Hugh Dickins



On 2024/4/2 2:22, Vishal Moola wrote:
> On Thu, Mar 21, 2024 at 11:27:38AM +0800, Kefeng Wang wrote:
>>   
>>   		if (!newpage) {
>> @@ -728,14 +729,13 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>>   			continue;
>>   		}
>>   
>> -		mapping = page_mapping(page);
>> +		newfolio = page_folio(newpage);
> 
> You could save another compound_head() call by passing the folio through
> to migrate_vma_insert_page() and make it migrate_vma_insert_folio(),
> since its already converted to use folios.

Sure, but let's do that later; we could convert more functions in
migrate_device.c to use folios. Thanks for your review. Would you
mind helping to review the other patches? I hope the poison recovery
for migrate folio can be merged first.

> 
>> +		folio = page_folio(page);
>> +		mapping = folio_mapping(folio);
>>   
>   


* Re: [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages()
  2024-04-02  6:21     ` Kefeng Wang
@ 2024-04-02 15:54       ` Vishal Moola
  2024-04-03  1:23         ` Kefeng Wang
  0 siblings, 1 reply; 25+ messages in thread
From: Vishal Moola @ 2024-04-02 15:54 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, linux-mm, Tony Luck, Naoya Horiguchi, Miaohe Lin,
	Matthew Wilcox, David Hildenbrand, Muchun Song, Benjamin LaHaise,
	jglisse, linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan,
	Hugh Dickins

On Mon, Apr 1, 2024 at 11:21 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
>
>
> On 2024/4/2 2:22, Vishal Moola wrote:
> > On Thu, Mar 21, 2024 at 11:27:38AM +0800, Kefeng Wang wrote:
> >>
> >>              if (!newpage) {
> >> @@ -728,14 +729,13 @@ static void __migrate_device_pages(unsigned long *src_pfns,
> >>                      continue;
> >>              }
> >>
> >> -            mapping = page_mapping(page);
> >> +            newfolio = page_folio(newpage);
> >
> > You could save another compound_head() call by passing the folio through
> > to migrate_vma_insert_page() and make it migrate_vma_insert_folio(),
> > since its already converted to use folios.
>
> Sure, but let's do that later; we could convert more functions in
> migrate_device.c to use folios. Thanks for your review. Would you

Makes sense to me. This patch looks fine to me:
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

> mind helping to review the other patches? I hope the poison recovery
> for migrate folio can be merged first.

I'll take a look at it, I'm not too familiar with how that code works just
yet.

> >
> >> +            folio = page_folio(page);
> >> +            mapping = folio_mapping(folio);
> >>
> >


* Re: [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages()
  2024-04-02 15:54       ` Vishal Moola
@ 2024-04-03  1:23         ` Kefeng Wang
  0 siblings, 0 replies; 25+ messages in thread
From: Kefeng Wang @ 2024-04-03  1:23 UTC (permalink / raw)
  To: Vishal Moola
  Cc: Andrew Morton, linux-mm, Tony Luck, Naoya Horiguchi, Miaohe Lin,
	Matthew Wilcox, David Hildenbrand, Muchun Song, Benjamin LaHaise,
	jglisse, linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan,
	Hugh Dickins



On 2024/4/2 23:54, Vishal Moola wrote:
> On Mon, Apr 1, 2024 at 11:21 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>
>>
>>
>> On 2024/4/2 2:22, Vishal Moola wrote:
>>> On Thu, Mar 21, 2024 at 11:27:38AM +0800, Kefeng Wang wrote:
>>>>
>>>>               if (!newpage) {
>>>> @@ -728,14 +729,13 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>>>>                       continue;
>>>>               }
>>>>
>>>> -            mapping = page_mapping(page);
>>>> +            newfolio = page_folio(newpage);
>>>
>>> You could save another compound_head() call by passing the folio through
>>> to migrate_vma_insert_page() and make it migrate_vma_insert_folio(),
>>> since its already converted to use folios.
>>
>> Sure, but let's do it later, we could convert more functions in
>> migrate_device.c to use folios, thanks for your review, do you
> 
> Makes sense to me. This patch looks fine to me:
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> 

Thanks,

>> mind helping to review the other patches? I hope the poison recovery
>> for migrate folio can be merged first.
> 
> I'll take a look at it, I'm not too familiar with how that code works just
> yet.

That's great.

> 
>>>
>>>> +            folio = page_folio(page);
>>>> +            mapping = folio_mapping(folio);
>>>>
>>>


* Re: [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio()
  2024-03-21  3:27 ` [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
  2024-04-01 17:54   ` Vishal Moola
@ 2024-04-16  9:28   ` Miaohe Lin
  1 sibling, 0 replies; 25+ messages in thread
From: Miaohe Lin @ 2024-04-16  9:28 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Tony Luck, Naoya Horiguchi, Matthew Wilcox, David Hildenbrand,
	Muchun Song, Benjamin LaHaise, jglisse, linux-aio, linux-fsdevel,
	Zi Yan, Jiaqi Yan, Hugh Dickins, Andrew Morton, linux-mm

On 2024/3/21 11:27, Kefeng Wang wrote:
> Use filemap_migrate_folio() helper to simplify __buffer_migrate_folio().
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Thanks.
.


* Re: [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages()
  2024-03-21  3:27 ` [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages() Kefeng Wang
  2024-04-01 18:22   ` Vishal Moola
@ 2024-04-16 12:13   ` Miaohe Lin
  1 sibling, 0 replies; 25+ messages in thread
From: Miaohe Lin @ 2024-04-16 12:13 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Tony Luck, Naoya Horiguchi, Matthew Wilcox, David Hildenbrand,
	Muchun Song, Benjamin LaHaise, jglisse, linux-aio, linux-fsdevel,
	Zi Yan, Jiaqi Yan, Hugh Dickins, Andrew Morton, linux-mm

On 2024/3/21 11:27, Kefeng Wang wrote:
> Use newfolio/folio for migrate_folio_extra()/migrate_folio() to
> save four compound_head() calls.
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Thanks.
.


* Re: [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra()
  2024-03-21  3:27 ` [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra() Kefeng Wang
@ 2024-04-16 12:40   ` Miaohe Lin
  2024-04-17  1:43     ` Kefeng Wang
  0 siblings, 1 reply; 25+ messages in thread
From: Miaohe Lin @ 2024-04-16 12:40 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Tony Luck, Naoya Horiguchi, Matthew Wilcox, David Hildenbrand,
	Muchun Song, Benjamin LaHaise, jglisse, linux-aio, linux-fsdevel,
	Zi Yan, Jiaqi Yan, Hugh Dickins, Andrew Morton, linux-mm

On 2024/3/21 11:27, Kefeng Wang wrote:
> migrate_folio_extra() is only called within migrate.c now; convert it
> into a static function and add a new src_private argument which can
> be shared by migrate_folio() and filemap_migrate_folio() to simplify
> the code a bit.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  include/linux/migrate.h |  2 --
>  mm/migrate.c            | 33 +++++++++++----------------------
>  2 files changed, 11 insertions(+), 24 deletions(-)
> 
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 2ce13e8a309b..517f70b70620 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -63,8 +63,6 @@ extern const char *migrate_reason_names[MR_TYPES];
>  #ifdef CONFIG_MIGRATION
>  
>  void putback_movable_pages(struct list_head *l);
> -int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
> -		struct folio *src, enum migrate_mode mode, int extra_count);
>  int migrate_folio(struct address_space *mapping, struct folio *dst,
>  		struct folio *src, enum migrate_mode mode);
>  int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
> diff --git a/mm/migrate.c b/mm/migrate.c
> index cb4cbaa42a35..c006b0b44013 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -658,18 +658,19 @@ EXPORT_SYMBOL(folio_migrate_copy);
>   *                    Migration functions
>   ***********************************************************/
>  
> -int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
> -		struct folio *src, enum migrate_mode mode, int extra_count)
> +static int __migrate_folio(struct address_space *mapping, struct folio *dst,
> +			   struct folio *src, void *src_private,
> +			   enum migrate_mode mode)
>  {
>  	int rc;
>  
> -	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
> -
> -	rc = folio_migrate_mapping(mapping, dst, src, extra_count);
> -
> +	rc = folio_migrate_mapping(mapping, dst, src, 0);
>  	if (rc != MIGRATEPAGE_SUCCESS)
>  		return rc;
>  
> +	if (src_private)

src_private seems unneeded. Can't it be replaced with folio_get_private(src)?
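
(To make the suggestion concrete, roughly something like the following inside
__migrate_folio(), instead of taking the extra argument -- my own illustration,
not code from the series:)

	if (folio_get_private(src))
		folio_attach_private(dst, folio_detach_private(src));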

Thanks.
.

* Re: [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra()
  2024-04-16 12:40   ` Miaohe Lin
@ 2024-04-17  1:43     ` Kefeng Wang
  2024-04-18  2:32       ` Miaohe Lin
  0 siblings, 1 reply; 25+ messages in thread
From: Kefeng Wang @ 2024-04-17  1:43 UTC (permalink / raw)
  To: Miaohe Lin
  Cc: Tony Luck, Naoya Horiguchi, Matthew Wilcox, David Hildenbrand,
	Muchun Song, Benjamin LaHaise, jglisse, linux-aio, linux-fsdevel,
	Zi Yan, Jiaqi Yan, Hugh Dickins, Andrew Morton, linux-mm



On 2024/4/16 20:40, Miaohe Lin wrote:
> On 2024/3/21 11:27, Kefeng Wang wrote:
>> migrate_folio_extra() is only called in migrate.c now, so convert it to a
>> static function and add a new src_private argument that can be shared by
>> migrate_folio() and filemap_migrate_folio() to simplify the code a bit.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   include/linux/migrate.h |  2 --
>>   mm/migrate.c            | 33 +++++++++++----------------------
>>   2 files changed, 11 insertions(+), 24 deletions(-)
>>
>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>> index 2ce13e8a309b..517f70b70620 100644
>> --- a/include/linux/migrate.h
>> +++ b/include/linux/migrate.h
>> @@ -63,8 +63,6 @@ extern const char *migrate_reason_names[MR_TYPES];
>>   #ifdef CONFIG_MIGRATION
>>   
>>   void putback_movable_pages(struct list_head *l);
>> -int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
>> -		struct folio *src, enum migrate_mode mode, int extra_count);
>>   int migrate_folio(struct address_space *mapping, struct folio *dst,
>>   		struct folio *src, enum migrate_mode mode);
>>   int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index cb4cbaa42a35..c006b0b44013 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -658,18 +658,19 @@ EXPORT_SYMBOL(folio_migrate_copy);
>>    *                    Migration functions
>>    ***********************************************************/
>>   
>> -int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
>> -		struct folio *src, enum migrate_mode mode, int extra_count)
>> +static int __migrate_folio(struct address_space *mapping, struct folio *dst,
>> +			   struct folio *src, void *src_private,
>> +			   enum migrate_mode mode)
>>   {
>>   	int rc;
>>   
>> -	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
>> -
>> -	rc = folio_migrate_mapping(mapping, dst, src, extra_count);
>> -
>> +	rc = folio_migrate_mapping(mapping, dst, src, 0);
>>   	if (rc != MIGRATEPAGE_SUCCESS)
>>   		return rc;
>>   
>> +	if (src_private)
> 
> src_private seems unneeded. Can't it be replaced with folio_get_private(src)?
> 

__migrate_folio() is used by both migrate_folio() and filemap_migrate_folio(),
but migrate_folio() is for LRU folios. For a swapcache folio, folio->private
is already handled in folio_migrate_mapping(), so we should not call
folio_detach_private()/folio_attach_private() on it.
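
To make the split concrete, a minimal sketch of the helper and its two callers
(simplified and illustrative -- the caller bodies and the attach/detach line are
how I read the patch rather than verbatim code, and the MIGRATE_SYNC_NO_COPY
handling that still exists at this point in the series is omitted):

static int __migrate_folio(struct address_space *mapping, struct folio *dst,
			   struct folio *src, void *src_private,
			   enum migrate_mode mode)
{
	int rc;

	rc = folio_migrate_mapping(mapping, dst, src, 0);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	/*
	 * Only filemap callers pass a non-NULL src_private; for an LRU
	 * (possibly swapcache) folio, ->private is already dealt with by
	 * folio_migrate_mapping() above.
	 */
	if (src_private)
		folio_attach_private(dst, folio_detach_private(src));

	folio_migrate_copy(dst, src);
	return MIGRATEPAGE_SUCCESS;
}

/* LRU folios: never touch folio->private here */
int migrate_folio(struct address_space *mapping, struct folio *dst,
		  struct folio *src, enum migrate_mode mode)
{
	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
	return __migrate_folio(mapping, dst, src, NULL, mode);
}

/* Filemap folios: their private data must be moved by hand */
int filemap_migrate_folio(struct address_space *mapping, struct folio *dst,
			  struct folio *src, enum migrate_mode mode)
{
	return __migrate_folio(mapping, dst, src, folio_get_private(src), mode);
}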

> Thanks.
> .

* Re: [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra()
  2024-04-17  1:43     ` Kefeng Wang
@ 2024-04-18  2:32       ` Miaohe Lin
  0 siblings, 0 replies; 25+ messages in thread
From: Miaohe Lin @ 2024-04-18  2:32 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Tony Luck, Naoya Horiguchi, Matthew Wilcox, David Hildenbrand,
	Muchun Song, Benjamin LaHaise, jglisse, linux-aio, linux-fsdevel,
	Zi Yan, Jiaqi Yan, Hugh Dickins, Andrew Morton, linux-mm

On 2024/4/17 9:43, Kefeng Wang wrote:
> 
> 
> On 2024/4/16 20:40, Miaohe Lin wrote:
>> On 2024/3/21 11:27, Kefeng Wang wrote:
>>> migrate_folio_extra() is only called in migrate.c now, so convert it to a
>>> static function and add a new src_private argument that can be shared by
>>> migrate_folio() and filemap_migrate_folio() to simplify the code a bit.
>>>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> ---
>>>   include/linux/migrate.h |  2 --
>>>   mm/migrate.c            | 33 +++++++++++----------------------
>>>   2 files changed, 11 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>>> index 2ce13e8a309b..517f70b70620 100644
>>> --- a/include/linux/migrate.h
>>> +++ b/include/linux/migrate.h
>>> @@ -63,8 +63,6 @@ extern const char *migrate_reason_names[MR_TYPES];
>>>   #ifdef CONFIG_MIGRATION
>>>     void putback_movable_pages(struct list_head *l);
>>> -int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
>>> -        struct folio *src, enum migrate_mode mode, int extra_count);
>>>   int migrate_folio(struct address_space *mapping, struct folio *dst,
>>>           struct folio *src, enum migrate_mode mode);
>>>   int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index cb4cbaa42a35..c006b0b44013 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -658,18 +658,19 @@ EXPORT_SYMBOL(folio_migrate_copy);
>>>    *                    Migration functions
>>>    ***********************************************************/
>>>   -int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
>>> -        struct folio *src, enum migrate_mode mode, int extra_count)
>>> +static int __migrate_folio(struct address_space *mapping, struct folio *dst,
>>> +               struct folio *src, void *src_private,
>>> +               enum migrate_mode mode)
>>>   {
>>>       int rc;
>>>   -    BUG_ON(folio_test_writeback(src));    /* Writeback must be complete */
>>> -
>>> -    rc = folio_migrate_mapping(mapping, dst, src, extra_count);
>>> -
>>> +    rc = folio_migrate_mapping(mapping, dst, src, 0);
>>>       if (rc != MIGRATEPAGE_SUCCESS)
>>>           return rc;
>>>   +    if (src_private)
>>
>> src_private seems unneeded. Can't it be replaced with folio_get_private(src)?
>>
> 
> __migrate_folio() is used by both migrate_folio() and filemap_migrate_folio(),
> but migrate_folio() is for LRU folios. For a swapcache folio, folio->private
> is already handled in folio_migrate_mapping(), so we should not call
> folio_detach_private()/folio_attach_private() on it.

I see. A swapcache folio uses the private field without setting
PagePrivate/PagePrivate2, so we can't handle that case if src_private is removed.
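
(For reference, the hand-off mentioned above happens inside
folio_migrate_mapping(), roughly along these lines -- paraphrased from my
reading, not an exact quote of the tree:)

	if (folio_test_swapbacked(folio)) {
		__folio_set_swapbacked(newfolio);
		if (folio_test_swapcache(folio)) {
			folio_set_swapcache(newfolio);
			/* the swap entry sits in ->private; PG_private is not set */
			newfolio->private = folio_get_private(folio);
		}
	}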
Thanks.
.

> 
>> Thanks.
>> .
> .


end of thread, other threads:[~2024-04-18  2:32 UTC | newest]

Thread overview: 25+ messages
2024-03-21  3:27 [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 01/11] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
2024-04-01 17:54   ` Vishal Moola
2024-04-16  9:28   ` Miaohe Lin
2024-03-21  3:27 ` [PATCH v1 02/11] mm: migrate_device: use more folio in __migrate_device_pages() Kefeng Wang
2024-04-01 18:22   ` Vishal Moola
2024-04-02  6:21     ` Kefeng Wang
2024-04-02 15:54       ` Vishal Moola
2024-04-03  1:23         ` Kefeng Wang
2024-04-16 12:13   ` Miaohe Lin
2024-03-21  3:27 ` [PATCH v1 03/11] mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 04/11] mm: migrate: remove migrate_folio_extra() Kefeng Wang
2024-04-16 12:40   ` Miaohe Lin
2024-04-17  1:43     ` Kefeng Wang
2024-04-18  2:32       ` Miaohe Lin
2024-03-21  3:27 ` [PATCH v1 05/11] mm: remove MIGRATE_SYNC_NO_COPY mode Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 06/11] mm: migrate: split folio_migrate_mapping() Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 07/11] mm: add folio_mc_copy() Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 08/11] mm: migrate: support poisoned recover from migrate folio Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 09/11] fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio() Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 10/11] mm: migrate: remove folio_migrate_copy() Kefeng Wang
2024-03-21  3:27 ` [PATCH v1 11/11] fs: aio: add explicit check for large folio in aio_migrate_folio() Kefeng Wang
2024-03-21  3:35   ` Matthew Wilcox
2024-03-21  5:40     ` Kefeng Wang
2024-03-28 13:30 ` [PATCH v1 00/11] mm: migrate: support poison recover from migrate folio Kefeng Wang
