* [PATCH] mm: Migrate high-order folios in swap cache correctly
@ 2023-12-14  4:58 Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2023-12-14  4:58 UTC (permalink / raw)
  To: Andrew Morton, linux-mm; +Cc: Charan Teja Kalla, Matthew Wilcox

From: Charan Teja Kalla <quic_charante@quicinc.com>

Large folios occupy N consecutive entries in the swap cache
instead of using multi-index entries like the page cache.
However, if a large folio is re-added to the LRU list, it can
be migrated.  The migration code was not aware of the difference
between the swap cache and the page cache and assumed that a single
xas_store() would be sufficient.

This leaves potentially many stale pointers to the now-migrated folio
in the swap cache, which can lead to almost arbitrary data corruption
in the future.  This can also manifest as infinite loops with the
RCU read lock held.
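
For reference, the difference looks roughly like this: the page cache
stores a large folio as a single multi-index entry, so one store is
enough to replace it, while the swap cache fills one slot per page.
A simplified sketch (loosely modelled on __filemap_add_folio() and
add_to_swap_cache(); locking, shadow entries and error handling are
omitted, and mapping/swap_mapping/index/entry are placeholder names):

	XA_STATE(pcache, &mapping->i_pages, index);
	XA_STATE(scache, &swap_mapping->i_pages, swp_offset(entry));
	long i, nr = folio_nr_pages(folio);

	/* Page cache: one multi-index entry covers the whole folio */
	xas_set_order(&pcache, index, folio_order(folio));
	xas_store(&pcache, folio);

	/* Swap cache: one slot per page, nr consecutive entries */
	for (i = 0; i < nr; i++) {
		xas_store(&scache, folio);
		xas_next(&scache);
	}

A migration path that issues a single xas_store() therefore replaces
only the first of those nr swap cache slots and leaves the remaining
nr - 1 slots pointing at the old folio, which is what the loop added
below fixes.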

Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
[modified the changelog & tweaked the fix]
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/migrate.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index d9d2b9432e81..2d67ca47d2e2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -405,6 +405,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 	int dirty;
 	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
+	long entries, i;
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -442,8 +443,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 			folio_set_swapcache(newfolio);
 			newfolio->private = folio_get_private(folio);
 		}
+		entries = nr;
 	} else {
 		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+		entries = 1;
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
@@ -453,7 +456,11 @@ int folio_migrate_mapping(struct address_space *mapping,
 		folio_set_dirty(newfolio);
 	}
 
-	xas_store(&xas, newfolio);
+	/* Swap cache still stores N entries instead of a high-order entry */
+	for (i = 0; i < entries; i++) {
+		xas_store(&xas, newfolio);
+		xas_next(&xas);
+	}
 
 	/*
 	 * Drop cache reference from old page by unfreezing
-- 
2.42.0


