linux-mm.kvack.org archive mirror
* [PATCH 00/59] MM folio changes for 6.1
@ 2022-08-08 19:33 Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 01/59] mm: Fix VM_BUG_ON in __delete_from_swap_cache() Matthew Wilcox (Oracle)
                   ` (59 more replies)
  0 siblings, 60 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

I hope the first three patches are merged into 6.0 before release (and
that the one marked cc:stable gets backported to 5.19).

My focus this round has been on shmem.  I believe it is now fully
converted to folios.  Of course, shmem interacts heavily with the swap
cache and other parts of the kernel, so there are patches all over the MM.

This patch series survives a round of xfstests on tmpfs, which is nice,
but hardly an exhaustive test.

Matthew Wilcox (Oracle) (59):
  mm: Fix VM_BUG_ON in __delete_from_swap_cache()
  shmem: Update folio if shmem_replace_page() updates the page
  vmscan: Check folio_test_private(), not folio_get_private()
  mm/vmscan: Fix a lot of comments
  mm: Add the first tail page to struct folio
  mm: Reimplement folio_order() and folio_nr_pages()
  mm: Add split_folio()
  mm: Add folio_add_lru_vma()
  shmem: Convert shmem_writepage() to use a folio throughout
  shmem: Convert shmem_delete_from_page_cache() to take a folio
  shmem: Convert shmem_replace_page() to use folios throughout
  mm/swapfile: Remove page_swapcount()
  mm/swapfile: Convert try_to_free_swap() to folio_free_swap()
  mm/swap: Convert __read_swap_cache_async() to use a folio
  mm/swap: Convert add_to_swap_cache() to take a folio
  mm/swap: Convert put_swap_page() to put_swap_folio()
  mm: Convert do_swap_page() to use a folio
  mm: Convert do_swap_page()'s swapcache variable to a folio
  memcg: Convert mem_cgroup_swapin_charge_page() to
    mem_cgroup_swapin_charge_folio()
  shmem: Convert shmem_mfill_atomic_pte() to use a folio
  shmem: Convert shmem_replace_page() to shmem_replace_folio()
  swap: Add swap_cache_get_folio()
  shmem: Eliminate struct page from shmem_swapin_folio()
  shmem: Convert shmem_getpage_gfp() to shmem_get_folio_gfp()
  shmem: Convert shmem_fault() to use shmem_get_folio_gfp()
  shmem: Convert shmem_read_mapping_page_gfp() to use
    shmem_get_folio_gfp()
  shmem: Add shmem_get_folio()
  shmem: Convert shmem_get_partial_folio() to use shmem_get_folio()
  shmem: Convert shmem_write_begin() to use shmem_get_folio()
  shmem: Convert shmem_file_read_iter() to use shmem_get_folio()
  shmem: Convert shmem_fallocate() to use a folio
  shmem: Convert shmem_symlink() to use a folio
  shmem: Convert shmem_get_link() to use a folio
  khugepaged: Call shmem_get_folio()
  userfaultfd: Convert mcontinue_atomic_pte() to use a folio
  shmem: Remove shmem_getpage()
  swapfile: Convert try_to_unuse() to use a folio
  swapfile: Convert __try_to_reclaim_swap() to use a folio
  swapfile: Convert unuse_pte_range() to use a folio
  mm: Convert do_swap_page() to use swap_cache_get_folio()
  mm: Remove lookup_swap_cache()
  swap_state: Convert free_swap_cache() to use a folio
  swap: Convert swap_writepage() to use a folio
  mm: Convert do_wp_page() to use a folio
  huge_memory: Convert do_huge_pmd_wp_page() to use a folio
  madvise: Convert madvise_free_pte_range() to use a folio
  uprobes: Use folios more widely in __replace_page()
  ksm: Use a folio in replace_page()
  mm: Convert do_swap_page() to use folio_free_swap()
  memcg: Convert mem_cgroup_swap_full() to take a folio
  mm: Remove try_to_free_swap()
  rmap: Convert page_move_anon_rmap() to use a folio
  migrate: Convert __unmap_and_move() to use folios
  migrate: Convert unmap_and_move_huge_page() to use folios
  huge_memory: Convert split_huge_page_to_list() to use a folio
  huge_memory: Convert unmap_page() to unmap_folio()
  mm: Convert page_get_anon_vma() to folio_get_anon_vma()
  rmap: Remove page_unlock_anon_vma_read()
  uprobes: Use new_folio in __replace_page()

 include/linux/huge_mm.h    |   5 +
 include/linux/memcontrol.h |   4 +-
 include/linux/mm.h         |  12 +-
 include/linux/mm_types.h   |  30 ++-
 include/linux/rmap.h       |   7 +-
 include/linux/shmem_fs.h   |   6 +-
 include/linux/swap.h       |  35 ++--
 kernel/events/uprobes.c    |  28 +--
 mm/folio-compat.c          |   6 +
 mm/huge_memory.c           |  95 +++++-----
 mm/khugepaged.c            |   7 +-
 mm/ksm.c                   |   8 +-
 mm/madvise.c               |  49 ++---
 mm/memcontrol.c            |  21 +--
 mm/memory-failure.c        |   2 +-
 mm/memory.c                | 151 ++++++++-------
 mm/migrate.c               | 107 ++++++-----
 mm/page_io.c               |  21 ++-
 mm/rmap.c                  |  33 ++--
 mm/shmem.c                 | 374 ++++++++++++++++++-------------------
 mm/swap.c                  |  19 +-
 mm/swap.h                  |  16 +-
 mm/swap_slots.c            |   2 +-
 mm/swap_state.c            | 113 +++++------
 mm/swapfile.c              | 159 ++++++++--------
 mm/truncate.c              |   2 +-
 mm/userfaultfd.c           |  14 +-
 mm/vmscan.c                | 263 +++++++++++++-------------
 28 files changed, 810 insertions(+), 779 deletions(-)

-- 
2.35.1




* [PATCH 01/59] mm: Fix VM_BUG_ON in __delete_from_swap_cache()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 02/59] shmem: Update folio if shmem_replace_page() updates the page Matthew Wilcox (Oracle)
                   ` (58 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Commit ceff9d3354e9 changed the VM_BUG_ON() to dump the folio we're
storing instead of the entry we retrieved.  This was a mistake; the
entry we retrieved is the more interesting page to dump.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swap_state.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index e166051566f4..41afa6d45b23 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -151,7 +151,7 @@ void __delete_from_swap_cache(struct folio *folio,
 
 	for (i = 0; i < nr; i++) {
 		void *entry = xas_store(&xas, shadow);
-		VM_BUG_ON_FOLIO(entry != folio, folio);
+		VM_BUG_ON_PAGE(entry != folio, entry);
 		set_page_private(folio_page(folio, i), 0);
 		xas_next(&xas);
 	}
-- 
2.35.1




* [PATCH 02/59] shmem: Update folio if shmem_replace_page() updates the page
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 01/59] mm: Fix VM_BUG_ON in __delete_from_swap_cache() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 03/59] vmscan: Check folio_test_private(), not folio_get_private() Matthew Wilcox (Oracle)
                   ` (57 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd, stable, William Kucharski

If we allocate a new page, we need to make sure that our folio matches
that new page.  If we don't, we store the wrong folio in the shmem page
cache, which will lead to data corruption.  This problem will be solved
by changing shmem_replace_page() to shmem_replace_folio(), but this
patch is the minimal fix.

Fixes: da08e9b79323 ("mm/shmem: convert shmem_swapin_page() to shmem_swapin_folio()")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
---
 mm/shmem.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index e975fcd9d2e1..4ae43cffeda3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1780,6 +1780,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 
 	if (shmem_should_replace_folio(folio, gfp)) {
 		error = shmem_replace_page(&page, gfp, info, index);
+		folio = page_folio(page);
 		if (error)
 			goto failed;
 	}
-- 
2.35.1




* [PATCH 03/59] vmscan: Check folio_test_private(), not folio_get_private()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 01/59] mm: Fix VM_BUG_ON in __delete_from_swap_cache() Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 02/59] shmem: Update folio if shmem_replace_page() updates the page Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 04/59] mm/vmscan: Fix a lot of comments Matthew Wilcox (Oracle)
                   ` (56 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

These two predicates are the same for file pages, but are not the
same for anonymous pages.
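
A minimal sketch of why the two checks disagree for anonymous memory
(illustrative only, not part of this patch; the helper name is made up):
a folio in the swap cache keeps its swp_entry_t in folio->private
without setting PG_private, so folio_get_private() returns non-NULL even
though no filesystem private data (such as buffer heads) is attached,
and the latter is all the buffer_heads_over_limit path cares about.

#include <linux/mm.h>
#include <linux/page-flags.h>

/* Illustrative helper, not in the kernel tree. */
static bool example_has_fs_private_data(struct folio *folio)
{
        /*
         * folio_test_private() tests the PG_private flag, which is set
         * only when a filesystem attaches private data (e.g. buffer
         * heads).  folio_get_private() merely reads folio->private,
         * which is also non-NULL for an anonymous folio in the swap
         * cache, where it holds the swap entry.
         */
        return folio_test_private(folio);
}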

Reported-by: Hugh Dickins <hughd@google.com>
Fixes: 07f67a8dedc0 ("mm/vmscan: convert shrink_active_list() to use a folio")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b2b1431352dc..382dbe97329f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2550,8 +2550,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		}
 
 		if (unlikely(buffer_heads_over_limit)) {
-			if (folio_get_private(folio) && folio_trylock(folio)) {
-				if (folio_get_private(folio))
+			if (folio_test_private(folio) && folio_trylock(folio)) {
+				if (folio_test_private(folio))
 					filemap_release_folio(folio, 0);
 				folio_unlock(folio);
 			}
-- 
2.35.1




* [PATCH 04/59] mm/vmscan: Fix a lot of comments
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 03/59] vmscan: Check folio_test_private(), not folio_get_private() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 05/59] mm: Add the first tail page to struct folio Matthew Wilcox (Oracle)
                   ` (55 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

A lot of comments mention pages when they should say folios.
Fix them up.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 252 ++++++++++++++++++++++++++--------------------------
 1 file changed, 125 insertions(+), 127 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 382dbe97329f..93b1087fc969 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -85,7 +85,7 @@ struct scan_control {
 	unsigned long	anon_cost;
 	unsigned long	file_cost;
 
-	/* Can active pages be deactivated as part of reclaim? */
+	/* Can active folios be deactivated as part of reclaim? */
 #define DEACTIVATE_ANON 1
 #define DEACTIVATE_FILE 2
 	unsigned int may_deactivate:2;
@@ -95,10 +95,10 @@ struct scan_control {
 	/* Writepage batching in laptop mode; RECLAIM_WRITE */
 	unsigned int may_writepage:1;
 
-	/* Can mapped pages be reclaimed? */
+	/* Can mapped folios be reclaimed? */
 	unsigned int may_unmap:1;
 
-	/* Can pages be swapped as part of reclaim? */
+	/* Can folios be swapped as part of reclaim? */
 	unsigned int may_swap:1;
 
 	/* Proactive reclaim invoked by userspace through memory.reclaim */
@@ -123,7 +123,7 @@ struct scan_control {
 	/* There is easily reclaimable cold cache in the current node */
 	unsigned int cache_trim_mode:1;
 
-	/* The file pages on the current node are dangerously low */
+	/* The file folios on the current node are dangerously low */
 	unsigned int file_is_tiny:1;
 
 	/* Always discard instead of demoting to lower tier memory */
@@ -135,7 +135,7 @@ struct scan_control {
 	/* Scan (total_size >> priority) pages at once */
 	s8 priority;
 
-	/* The highest zone to isolate pages for reclaim from */
+	/* The highest zone to isolate folios for reclaim from */
 	s8 reclaim_idx;
 
 	/* This context's GFP mask */
@@ -443,7 +443,7 @@ static bool cgroup_reclaim(struct scan_control *sc)
  *
  * The normal page dirty throttling mechanism in balance_dirty_pages() is
  * completely broken with the legacy memcg and direct stalling in
- * shrink_page_list() is used for throttling instead, which lacks all the
+ * shrink_folio_list() is used for throttling instead, which lacks all the
  * niceties such as fairness, adaptive pausing, bandwidth proportional
  * allocation and configurability.
  *
@@ -564,9 +564,9 @@ static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
 }
 
 /*
- * This misses isolated pages which are not accounted for to save counters.
+ * This misses isolated folios which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
- * not expected that isolated pages will be a dominating factor.
+ * not expected that isolated folios will be a dominating factor.
  */
 unsigned long zone_reclaimable_pages(struct zone *zone)
 {
@@ -1039,9 +1039,9 @@ void drop_slab(void)
 static inline int is_page_cache_freeable(struct folio *folio)
 {
 	/*
-	 * A freeable page cache page is referenced only by the caller
-	 * that isolated the page, the page cache and optional buffer
-	 * heads at page->private.
+	 * A freeable page cache folio is referenced only by the caller
+	 * that isolated the folio, the page cache and optional filesystem
+	 * private data at folio->private.
 	 */
 	return folio_ref_count(folio) - folio_test_private(folio) ==
 		1 + folio_nr_pages(folio);
@@ -1081,8 +1081,8 @@ static bool skip_throttle_noprogress(pg_data_t *pgdat)
 		return true;
 
 	/*
-	 * If there are a lot of dirty/writeback pages then do not
-	 * throttle as throttling will occur when the pages cycle
+	 * If there are a lot of dirty/writeback folios then do not
+	 * throttle as throttling will occur when the folios cycle
 	 * towards the end of the LRU if still under writeback.
 	 */
 	for (i = 0; i < MAX_NR_ZONES; i++) {
@@ -1125,7 +1125,7 @@ void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
 	 * short. Failing to make progress or waiting on writeback are
 	 * potentially long-lived events so use a longer timeout. This is shaky
 	 * logic as a failure to make progress could be due to anything from
-	 * writeback to a slow device to excessive references pages at the tail
+	 * writeback to a slow device to excessive referenced folios at the tail
 	 * of the inactive LRU.
 	 */
 	switch(reason) {
@@ -1171,8 +1171,8 @@ void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
 }
 
 /*
- * Account for pages written if tasks are throttled waiting on dirty
- * pages to clean. If enough pages have been cleaned since throttling
+ * Account for folios written if tasks are throttled waiting on dirty
+ * folios to clean. If enough folios have been cleaned since throttling
  * started then wakeup the throttled tasks.
  */
 void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
@@ -1198,18 +1198,18 @@ void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
 
 /* possible outcome of pageout() */
 typedef enum {
-	/* failed to write page out, page is locked */
+	/* failed to write folio out, folio is locked */
 	PAGE_KEEP,
-	/* move page to the active list, page is locked */
+	/* move folio to the active list, folio is locked */
 	PAGE_ACTIVATE,
-	/* page has been sent to the disk successfully, page is unlocked */
+	/* folio has been sent to the disk successfully, folio is unlocked */
 	PAGE_SUCCESS,
-	/* page is clean and locked */
+	/* folio is clean and locked */
 	PAGE_CLEAN,
 } pageout_t;
 
 /*
- * pageout is called by shrink_page_list() for each dirty page.
+ * pageout is called by shrink_folio_list() for each dirty folio.
  * Calls ->writepage().
  */
 static pageout_t pageout(struct folio *folio, struct address_space *mapping,
@@ -1283,7 +1283,7 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping,
 }
 
 /*
- * Same as remove_mapping, but if the page is removed from the mapping, it
+ * Same as remove_mapping, but if the folio is removed from the mapping, it
  * gets returned with a refcount of 0.
  */
 static int __remove_mapping(struct address_space *mapping, struct folio *folio,
@@ -1299,34 +1299,34 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 		spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
 	/*
-	 * The non racy check for a busy page.
+	 * The non racy check for a busy folio.
 	 *
 	 * Must be careful with the order of the tests. When someone has
-	 * a ref to the page, it may be possible that they dirty it then
-	 * drop the reference. So if PageDirty is tested before page_count
-	 * here, then the following race may occur:
+	 * a ref to the folio, it may be possible that they dirty it then
+	 * drop the reference. So if the dirty flag is tested before the
+	 * refcount here, then the following race may occur:
 	 *
 	 * get_user_pages(&page);
 	 * [user mapping goes away]
 	 * write_to(page);
-	 *				!PageDirty(page)    [good]
-	 * SetPageDirty(page);
-	 * put_page(page);
-	 *				!page_count(page)   [good, discard it]
+	 *				!folio_test_dirty(folio)    [good]
+	 * folio_set_dirty(folio);
+	 * folio_put(folio);
+	 *				!refcount(folio)   [good, discard it]
 	 *
 	 * [oops, our write_to data is lost]
 	 *
 	 * Reversing the order of the tests ensures such a situation cannot
-	 * escape unnoticed. The smp_rmb is needed to ensure the page->flags
-	 * load is not satisfied before that of page->_refcount.
+	 * escape unnoticed. The smp_rmb is needed to ensure the folio->flags
+	 * load is not satisfied before that of folio->_refcount.
 	 *
-	 * Note that if SetPageDirty is always performed via set_page_dirty,
+	 * Note that if the dirty flag is always set via folio_mark_dirty,
 	 * and thus under the i_pages lock, then this ordering is not required.
 	 */
 	refcount = 1 + folio_nr_pages(folio);
 	if (!folio_ref_freeze(folio, refcount))
 		goto cannot_free;
-	/* note: atomic_cmpxchg in page_ref_freeze provides the smp_rmb */
+	/* note: atomic_cmpxchg in folio_ref_freeze provides the smp_rmb */
 	if (unlikely(folio_test_dirty(folio))) {
 		folio_ref_unfreeze(folio, refcount);
 		goto cannot_free;
@@ -1355,7 +1355,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 		 * back.
 		 *
 		 * We also don't store shadows for DAX mappings because the
-		 * only page cache pages found in these are zero pages
+		 * only page cache folios found in these are zero pages
 		 * covering holes, and because we don't want to mix DAX
 		 * exceptional entries and shadow exceptional entries in the
 		 * same address_space.
@@ -1423,14 +1423,14 @@ void folio_putback_lru(struct folio *folio)
 	folio_put(folio);		/* drop ref from isolate */
 }
 
-enum page_references {
-	PAGEREF_RECLAIM,
-	PAGEREF_RECLAIM_CLEAN,
-	PAGEREF_KEEP,
-	PAGEREF_ACTIVATE,
+enum folio_references {
+	FOLIOREF_RECLAIM,
+	FOLIOREF_RECLAIM_CLEAN,
+	FOLIOREF_KEEP,
+	FOLIOREF_ACTIVATE,
 };
 
-static enum page_references folio_check_references(struct folio *folio,
+static enum folio_references folio_check_references(struct folio *folio,
 						  struct scan_control *sc)
 {
 	int referenced_ptes, referenced_folio;
@@ -1445,11 +1445,11 @@ static enum page_references folio_check_references(struct folio *folio,
 	 * Let the folio, now marked Mlocked, be moved to the unevictable list.
 	 */
 	if (vm_flags & VM_LOCKED)
-		return PAGEREF_ACTIVATE;
+		return FOLIOREF_ACTIVATE;
 
 	/* rmap lock contention: rotate */
 	if (referenced_ptes == -1)
-		return PAGEREF_KEEP;
+		return FOLIOREF_KEEP;
 
 	if (referenced_ptes) {
 		/*
@@ -1469,34 +1469,34 @@ static enum page_references folio_check_references(struct folio *folio,
 		folio_set_referenced(folio);
 
 		if (referenced_folio || referenced_ptes > 1)
-			return PAGEREF_ACTIVATE;
+			return FOLIOREF_ACTIVATE;
 
 		/*
 		 * Activate file-backed executable folios after first usage.
 		 */
 		if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio))
-			return PAGEREF_ACTIVATE;
+			return FOLIOREF_ACTIVATE;
 
-		return PAGEREF_KEEP;
+		return FOLIOREF_KEEP;
 	}
 
 	/* Reclaim if clean, defer dirty folios to writeback */
 	if (referenced_folio && folio_is_file_lru(folio))
-		return PAGEREF_RECLAIM_CLEAN;
+		return FOLIOREF_RECLAIM_CLEAN;
 
-	return PAGEREF_RECLAIM;
+	return FOLIOREF_RECLAIM;
 }
 
-/* Check if a page is dirty or under writeback */
+/* Check if a folio is dirty or under writeback */
 static void folio_check_dirty_writeback(struct folio *folio,
 				       bool *dirty, bool *writeback)
 {
 	struct address_space *mapping;
 
 	/*
-	 * Anonymous pages are not handled by flushers and must be written
+	 * Anonymous folios are not handled by flushers and must be written
 	 * from reclaim context. Do not stall reclaim based on them.
-	 * MADV_FREE anonymous pages are put into inactive file list too.
+	 * MADV_FREE anonymous folios are put into inactive file list too.
 	 * They could be mistakenly treated as file lru. So further anon
 	 * test is needed.
 	 */
@@ -1538,24 +1538,24 @@ static struct page *alloc_demote_page(struct page *page, unsigned long node)
 }
 
 /*
- * Take pages on @demote_list and attempt to demote them to
- * another node.  Pages which are not demoted are left on
- * @demote_pages.
+ * Take folios on @demote_folios and attempt to demote them to
+ * another node.  Folios which are not demoted are left on
+ * @demote_folios.
  */
-static unsigned int demote_page_list(struct list_head *demote_pages,
+static unsigned int demote_folio_list(struct list_head *demote_folios,
 				     struct pglist_data *pgdat)
 {
 	int target_nid = next_demotion_node(pgdat->node_id);
 	unsigned int nr_succeeded;
 
-	if (list_empty(demote_pages))
+	if (list_empty(demote_folios))
 		return 0;
 
 	if (target_nid == NUMA_NO_NODE)
 		return 0;
 
 	/* Demotion ignores all cpuset and mempolicy settings */
-	migrate_pages(demote_pages, alloc_demote_page, NULL,
+	migrate_pages(demote_folios, alloc_demote_page, NULL,
 			    target_nid, MIGRATE_ASYNC, MR_DEMOTION,
 			    &nr_succeeded);
 
@@ -1584,17 +1584,15 @@ static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
 }
 
 /*
- * shrink_page_list() returns the number of reclaimed pages
+ * shrink_folio_list() returns the number of reclaimed pages
  */
-static unsigned int shrink_page_list(struct list_head *page_list,
-				     struct pglist_data *pgdat,
-				     struct scan_control *sc,
-				     struct reclaim_stat *stat,
-				     bool ignore_references)
+static unsigned int shrink_folio_list(struct list_head *folio_list,
+		struct pglist_data *pgdat, struct scan_control *sc,
+		struct reclaim_stat *stat, bool ignore_references)
 {
-	LIST_HEAD(ret_pages);
-	LIST_HEAD(free_pages);
-	LIST_HEAD(demote_pages);
+	LIST_HEAD(ret_folios);
+	LIST_HEAD(free_folios);
+	LIST_HEAD(demote_folios);
 	unsigned int nr_reclaimed = 0;
 	unsigned int pgactivate = 0;
 	bool do_demote_pass;
@@ -1605,16 +1603,16 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 	do_demote_pass = can_demote(pgdat->node_id, sc);
 
 retry:
-	while (!list_empty(page_list)) {
+	while (!list_empty(folio_list)) {
 		struct address_space *mapping;
 		struct folio *folio;
-		enum page_references references = PAGEREF_RECLAIM;
+		enum folio_references references = FOLIOREF_RECLAIM;
 		bool dirty, writeback;
 		unsigned int nr_pages;
 
 		cond_resched();
 
-		folio = lru_to_folio(page_list);
+		folio = lru_to_folio(folio_list);
 		list_del(&folio->lru);
 
 		if (!folio_trylock(folio))
@@ -1733,7 +1731,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				folio_unlock(folio);
 				folio_wait_writeback(folio);
 				/* then go back and try same folio again */
-				list_add_tail(&folio->lru, page_list);
+				list_add_tail(&folio->lru, folio_list);
 				continue;
 			}
 		}
@@ -1742,13 +1740,13 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			references = folio_check_references(folio, sc);
 
 		switch (references) {
-		case PAGEREF_ACTIVATE:
+		case FOLIOREF_ACTIVATE:
 			goto activate_locked;
-		case PAGEREF_KEEP:
+		case FOLIOREF_KEEP:
 			stat->nr_ref_keep += nr_pages;
 			goto keep_locked;
-		case PAGEREF_RECLAIM:
-		case PAGEREF_RECLAIM_CLEAN:
+		case FOLIOREF_RECLAIM:
+		case FOLIOREF_RECLAIM_CLEAN:
 			; /* try to reclaim the folio below */
 		}
 
@@ -1758,7 +1756,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		 */
 		if (do_demote_pass &&
 		    (thp_migration_supported() || !folio_test_large(folio))) {
-			list_add(&folio->lru, &demote_pages);
+			list_add(&folio->lru, &demote_folios);
 			folio_unlock(folio);
 			continue;
 		}
@@ -1785,7 +1783,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 					 */
 					if (!folio_entire_mapcount(folio) &&
 					    split_folio_to_list(folio,
-								page_list))
+								folio_list))
 						goto activate_locked;
 				}
 				if (!add_to_swap(folio)) {
@@ -1793,7 +1791,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 						goto activate_locked_split;
 					/* Fallback to swap normal pages */
 					if (split_folio_to_list(folio,
-								page_list))
+								folio_list))
 						goto activate_locked;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 					count_vm_event(THP_SWPOUT_FALLBACK);
@@ -1805,7 +1803,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		} else if (folio_test_swapbacked(folio) &&
 			   folio_test_large(folio)) {
 			/* Split shmem folio */
-			if (split_folio_to_list(folio, page_list))
+			if (split_folio_to_list(folio, folio_list))
 				goto keep_locked;
 		}
 
@@ -1870,7 +1868,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				goto activate_locked;
 			}
 
-			if (references == PAGEREF_RECLAIM_CLEAN)
+			if (references == FOLIOREF_RECLAIM_CLEAN)
 				goto keep_locked;
 			if (!may_enter_fs(folio, sc->gfp_mask))
 				goto keep_locked;
@@ -1983,13 +1981,13 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		nr_reclaimed += nr_pages;
 
 		/*
-		 * Is there need to periodically free_page_list? It would
+		 * Is there need to periodically free_folio_list? It would
 		 * appear not as the counts should be low
 		 */
 		if (unlikely(folio_test_large(folio)))
 			destroy_large_folio(folio);
 		else
-			list_add(&folio->lru, &free_pages);
+			list_add(&folio->lru, &free_folios);
 		continue;
 
 activate_locked_split:
@@ -2017,29 +2015,29 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 keep_locked:
 		folio_unlock(folio);
 keep:
-		list_add(&folio->lru, &ret_pages);
+		list_add(&folio->lru, &ret_folios);
 		VM_BUG_ON_FOLIO(folio_test_lru(folio) ||
 				folio_test_unevictable(folio), folio);
 	}
-	/* 'page_list' is always empty here */
+	/* 'folio_list' is always empty here */
 
 	/* Migrate folios selected for demotion */
-	nr_reclaimed += demote_page_list(&demote_pages, pgdat);
-	/* Folios that could not be demoted are still in @demote_pages */
-	if (!list_empty(&demote_pages)) {
-		/* Folios which weren't demoted go back on @page_list for retry: */
-		list_splice_init(&demote_pages, page_list);
+	nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
+	/* Folios that could not be demoted are still in @demote_folios */
+	if (!list_empty(&demote_folios)) {
+		/* Folios which weren't demoted go back on @folio_list for retry: */
+		list_splice_init(&demote_folios, folio_list);
 		do_demote_pass = false;
 		goto retry;
 	}
 
 	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
 
-	mem_cgroup_uncharge_list(&free_pages);
+	mem_cgroup_uncharge_list(&free_folios);
 	try_to_unmap_flush();
-	free_unref_page_list(&free_pages);
+	free_unref_page_list(&free_folios);
 
-	list_splice(&ret_pages, page_list);
+	list_splice(&ret_folios, folio_list);
 	count_vm_events(PGACTIVATE, pgactivate);
 
 	if (plug)
@@ -2048,7 +2046,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 }
 
 unsigned int reclaim_clean_pages_from_list(struct zone *zone,
-					    struct list_head *folio_list)
+					   struct list_head *folio_list)
 {
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
@@ -2076,7 +2074,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	 * change in the future.
 	 */
 	noreclaim_flag = memalloc_noreclaim_save();
-	nr_reclaimed = shrink_page_list(&clean_folios, zone->zone_pgdat, &sc,
+	nr_reclaimed = shrink_folio_list(&clean_folios, zone->zone_pgdat, &sc,
 					&stat, true);
 	memalloc_noreclaim_restore(noreclaim_flag);
 
@@ -2135,7 +2133,7 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
  *
  * returns how many pages were moved onto *@dst.
  */
-static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
+static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		struct lruvec *lruvec, struct list_head *dst,
 		unsigned long *nr_scanned, struct scan_control *sc,
 		enum lru_list lru)
@@ -2242,8 +2240,8 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
  *
  * Context:
  *
- * (1) Must be called with an elevated refcount on the page. This is a
- *     fundamental difference from isolate_lru_pages() (which is called
+ * (1) Must be called with an elevated refcount on the folio. This is a
+ *     fundamental difference from isolate_lru_folios() (which is called
  *     without a stable reference).
  * (2) The lru_lock must not be held.
  * (3) Interrupts must be enabled.
@@ -2315,13 +2313,13 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 }
 
 /*
- * move_pages_to_lru() moves folios from private @list to appropriate LRU list.
+ * move_folios_to_lru() moves folios from private @list to appropriate LRU list.
  * On return, @list is reused as a list of folios to be freed by the caller.
  *
  * Returns the number of pages moved to the given lruvec.
  */
-static unsigned int move_pages_to_lru(struct lruvec *lruvec,
-				      struct list_head *list)
+static unsigned int move_folios_to_lru(struct lruvec *lruvec,
+		struct list_head *list)
 {
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(folios_to_free);
@@ -2341,7 +2339,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 		/*
 		 * The folio_set_lru needs to be kept here for list integrity.
 		 * Otherwise:
-		 *   #0 move_pages_to_lru             #1 release_pages
+		 *   #0 move_folios_to_lru             #1 release_pages
 		 *   if (!folio_put_testzero())
 		 *				      if (folio_put_testzero())
 		 *				        !lru //skip lru_lock
@@ -2398,11 +2396,11 @@ static int current_may_throttle(void)
  * shrink_inactive_list() is a helper for shrink_node().  It returns the number
  * of reclaimed pages
  */
-static unsigned long
-shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
-		     struct scan_control *sc, enum lru_list lru)
+static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
+		struct lruvec *lruvec, struct scan_control *sc,
+		enum lru_list lru)
 {
-	LIST_HEAD(page_list);
+	LIST_HEAD(folio_list);
 	unsigned long nr_scanned;
 	unsigned int nr_reclaimed = 0;
 	unsigned long nr_taken;
@@ -2429,7 +2427,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 	spin_lock_irq(&lruvec->lru_lock);
 
-	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list,
+	nr_taken = isolate_lru_folios(nr_to_scan, lruvec, &folio_list,
 				     &nr_scanned, sc, lru);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
@@ -2444,10 +2442,10 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	if (nr_taken == 0)
 		return 0;
 
-	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
+	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
 
 	spin_lock_irq(&lruvec->lru_lock);
-	move_pages_to_lru(lruvec, &page_list);
+	move_folios_to_lru(lruvec, &folio_list);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
@@ -2458,16 +2456,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	spin_unlock_irq(&lruvec->lru_lock);
 
 	lru_note_cost(lruvec, file, stat.nr_pageout);
-	mem_cgroup_uncharge_list(&page_list);
-	free_unref_page_list(&page_list);
+	mem_cgroup_uncharge_list(&folio_list);
+	free_unref_page_list(&folio_list);
 
 	/*
-	 * If dirty pages are scanned that are not queued for IO, it
+	 * If dirty folios are scanned that are not queued for IO, it
 	 * implies that flushers are not doing their job. This can
-	 * happen when memory pressure pushes dirty pages to the end of
+	 * happen when memory pressure pushes dirty folios to the end of
 	 * the LRU before the dirty limits are breached and the dirty
 	 * data has expired. It can also happen when the proportion of
-	 * dirty pages grows not through writes but through memory
+	 * dirty folios grows not through writes but through memory
 	 * pressure reclaiming all the clean cache. And in some cases,
 	 * the flushers simply cannot keep up with the allocation
 	 * rate. Nudge the flusher threads in case they are asleep.
@@ -2526,7 +2524,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 	spin_lock_irq(&lruvec->lru_lock);
 
-	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &l_hold,
+	nr_taken = isolate_lru_folios(nr_to_scan, lruvec, &l_hold,
 				     &nr_scanned, sc, lru);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
@@ -2586,8 +2584,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	spin_lock_irq(&lruvec->lru_lock);
 
-	nr_activate = move_pages_to_lru(lruvec, &l_active);
-	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+	nr_activate = move_folios_to_lru(lruvec, &l_active);
+	nr_deactivate = move_folios_to_lru(lruvec, &l_inactive);
 	/* Keep all free folios in l_active list */
 	list_splice(&l_inactive, &l_active);
 
@@ -2603,7 +2601,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			nr_deactivate, nr_rotated, sc->priority, file);
 }
 
-static unsigned int reclaim_page_list(struct list_head *page_list,
+static unsigned int reclaim_folio_list(struct list_head *folio_list,
 				      struct pglist_data *pgdat)
 {
 	struct reclaim_stat dummy_stat;
@@ -2617,9 +2615,9 @@ static unsigned int reclaim_page_list(struct list_head *page_list,
 		.no_demotion = 1,
 	};
 
-	nr_reclaimed = shrink_page_list(page_list, pgdat, &sc, &dummy_stat, false);
-	while (!list_empty(page_list)) {
-		folio = lru_to_folio(page_list);
+	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, false);
+	while (!list_empty(folio_list)) {
+		folio = lru_to_folio(folio_list);
 		list_del(&folio->lru);
 		folio_putback_lru(folio);
 	}
@@ -2649,11 +2647,11 @@ unsigned long reclaim_pages(struct list_head *folio_list)
 			continue;
 		}
 
-		nr_reclaimed += reclaim_page_list(&node_folio_list, NODE_DATA(nid));
+		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
 		nid = folio_nid(lru_to_folio(folio_list));
 	} while (!list_empty(folio_list));
 
-	nr_reclaimed += reclaim_page_list(&node_folio_list, NODE_DATA(nid));
+	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
 
 	memalloc_noreclaim_restore(noreclaim_flag);
 
@@ -2683,13 +2681,13 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
  * but large enough to avoid thrashing the aggregate readahead window.
  *
  * Both inactive lists should also be large enough that each inactive
- * page has a chance to be referenced again before it is reclaimed.
+ * folio has a chance to be referenced again before it is reclaimed.
  *
  * If that fails and refaulting is observed, the inactive list grows.
  *
- * The inactive_ratio is the target ratio of ACTIVE to INACTIVE pages
+ * The inactive_ratio is the target ratio of ACTIVE to INACTIVE folios
  * on this LRU, maintained by the pageout code. An inactive_ratio
- * of 3 means 3:1 or 25% of the pages are kept on the inactive list.
+ * of 3 means 3:1 or 25% of the folios are kept on the inactive list.
  *
  * total     target    max
  * memory    ratio     inactive
@@ -2732,8 +2730,8 @@ enum scan_balance {
  * Determine how aggressively the anon and file LRU lists should be
  * scanned.
  *
- * nr[0] = anon inactive pages to scan; nr[1] = anon active pages to scan
- * nr[2] = file inactive pages to scan; nr[3] = file active pages to scan
+ * nr[0] = anon inactive folios to scan; nr[1] = anon active folios to scan
+ * nr[2] = file inactive folios to scan; nr[3] = file active folios to scan
  */
 static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 			   unsigned long *nr)
@@ -2748,7 +2746,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	unsigned long ap, fp;
 	enum lru_list lru;
 
-	/* If we have no swap space, do not bother scanning anon pages. */
+	/* If we have no swap space, do not bother scanning anon folios. */
 	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) {
 		scan_balance = SCAN_FILE;
 		goto out;
-- 
2.35.1




* [PATCH 05/59] mm: Add the first tail page to struct folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 04/59] mm/vmscan: Fix a lot of comments Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 06/59] mm: Reimplement folio_order() and folio_nr_pages() Matthew Wilcox (Oracle)
                   ` (54 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Some of the static checkers get confused by extracting the page from
the folio and referring to fields in the first tail page.  Adding these
fields to struct folio lets us avoid doing that.  It has the risk that
people will refer to those fields without checking that the folio is
actually a large folio, so prefix them with underscores and document
the preferred function to use instead.
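
As an aside, the layout rule the new assertions below enforce can be shown
with a tiny self-contained sketch (plain C with invented struct names, not
kernel code): a field in the folio view that is meant to alias a field of
the first tail page must sit exactly sizeof(struct page) past the start of
the folio.

#include <stddef.h>
#include <assert.h>

/* Stand-ins for struct page and struct folio; not the kernel definitions. */
struct pg  { unsigned long flags; unsigned long head; };
struct fol { struct pg page; unsigned long _flags_1; unsigned long __head; };

/* The overlay holds iff each folio field lands one whole struct pg later. */
static_assert(offsetof(struct fol, _flags_1) ==
              offsetof(struct pg, flags) + sizeof(struct pg),
              "_flags_1 must overlay the flags word of the first tail page");
static_assert(offsetof(struct fol, __head) ==
              offsetof(struct pg, head) + sizeof(struct pg),
              "__head must overlay the compound_head word");

int main(void) { return 0; }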

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm_types.h | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index cf97f3884fda..8a9ee9d24973 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -244,6 +244,13 @@ struct page {
  * @_refcount: Do not access this member directly.  Use folio_ref_count()
  *    to find how many references there are to this folio.
  * @memcg_data: Memory Control Group data.
+ * @_flags_1: For large folios, additional page flags.
+ * @__head: Points to the folio.  Do not use.
+ * @_folio_dtor: Which destructor to use for this folio.
+ * @_folio_order: Do not use directly, call folio_order().
+ * @_total_mapcount: Do not use directly, call folio_entire_mapcount().
+ * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
+ * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
  *
  * A folio is a physically, virtually and logically contiguous set
  * of bytes.  It is a power-of-two in size, and it is aligned to that
@@ -282,9 +289,17 @@ struct folio {
 		};
 		struct page page;
 	};
+	unsigned long _flags_1;
+	unsigned long __head;
+	unsigned char _folio_dtor;
+	unsigned char _folio_order;
+	atomic_t _total_mapcount;
+	atomic_t _pincount;
+#ifdef CONFIG_64BIT
+	unsigned int _folio_nr_pages;
+#endif
 };
 
-static_assert(sizeof(struct page) == sizeof(struct folio));
 #define FOLIO_MATCH(pg, fl)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct folio, fl))
 FOLIO_MATCH(flags, flags);
@@ -299,6 +314,19 @@ FOLIO_MATCH(_refcount, _refcount);
 FOLIO_MATCH(memcg_data, memcg_data);
 #endif
 #undef FOLIO_MATCH
+#define FOLIO_MATCH(pg, fl)						\
+	static_assert(offsetof(struct folio, fl) ==			\
+			offsetof(struct page, pg) + sizeof(struct page))
+FOLIO_MATCH(flags, _flags_1);
+FOLIO_MATCH(compound_head, __head);
+FOLIO_MATCH(compound_dtor, _folio_dtor);
+FOLIO_MATCH(compound_order, _folio_order);
+FOLIO_MATCH(compound_mapcount, _total_mapcount);
+FOLIO_MATCH(compound_pincount, _pincount);
+#ifdef CONFIG_64BIT
+FOLIO_MATCH(compound_nr, _folio_nr_pages);
+#endif
+#undef FOLIO_MATCH
 
 static inline atomic_t *folio_mapcount_ptr(struct folio *folio)
 {
-- 
2.35.1




* [PATCH 06/59] mm: Reimplement folio_order() and folio_nr_pages()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (4 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 05/59] mm: Add the first tail page to struct folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 07/59] mm: Add split_folio() Matthew Wilcox (Oracle)
                   ` (53 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Instead of calling compound_order() and compound_nr(), use the
folio directly.  Saves 1905 bytes from mm/filemap.o due to
folio_test_large() now being a cheaper check than PageHead().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 18e01474cf6b..7ee34dcf1aa3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,7 +697,9 @@ static inline unsigned int compound_order(struct page *page)
  */
 static inline unsigned int folio_order(struct folio *folio)
 {
-	return compound_order(&folio->page);
+	if (!folio_test_large(folio))
+		return 0;
+	return folio->_folio_order;
 }
 
 #include <linux/huge_mm.h>
@@ -1590,7 +1592,13 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
  */
 static inline long folio_nr_pages(struct folio *folio)
 {
-	return compound_nr(&folio->page);
+	if (!folio_test_large(folio))
+		return 1;
+#ifdef CONFIG_64BIT
+	return folio->_folio_nr_pages;
+#else
+	return 1L << folio->_folio_order;
+#endif
 }
 
 /**
-- 
2.35.1




* [PATCH 07/59] mm: Add split_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (5 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 06/59] mm: Reimplement folio_order() and folio_nr_pages() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 08/59] mm: Add folio_add_lru_vma() Matthew Wilcox (Oracle)
                   ` (52 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

This wrapper removes the need to use split_huge_page(&folio->page).
Convert two callers.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h | 5 +++++
 mm/shmem.c              | 2 +-
 mm/truncate.c           | 2 +-
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 768e5261fdae..aa0a427284aa 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -435,6 +435,11 @@ static inline int split_folio_to_list(struct folio *folio,
 	return split_huge_page_to_list(&folio->page, list);
 }
 
+static inline int split_folio(struct folio *folio)
+{
+	return split_folio_to_list(folio, NULL);
+}
+
 /*
  * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
  * limitations in the implementation like arm64 MTE can override this to
diff --git a/mm/shmem.c b/mm/shmem.c
index 4ae43cffeda3..de267a23248d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -629,7 +629,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 			goto move_back;
 		}
 
-		ret = split_huge_page(&folio->page);
+		ret = split_folio(folio);
 		folio_unlock(folio);
 		folio_put(folio);
 
diff --git a/mm/truncate.c b/mm/truncate.c
index 0b0708bf935f..c0be77e5c008 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -240,7 +240,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
-	if (split_huge_page(&folio->page) == 0)
+	if (split_folio(folio) == 0)
 		return true;
 	if (folio_test_dirty(folio))
 		return false;
-- 
2.35.1




* [PATCH 08/59] mm: Add folio_add_lru_vma()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (6 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 07/59] mm: Add split_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 09/59] shmem: Convert shmem_writepage() to use a folio throughout Matthew Wilcox (Oracle)
                   ` (51 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Convert lru_cache_add_inactive_or_unevictable() to folio_add_lru_vma()
and add a compatibility wrapper.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
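
As a rough illustration of the migration path (hypothetical caller, not
part of this patch): callers that still only have a struct page keep
going through the folio-compat wrapper below, while folio-aware callers
switch to the new helper directly.

#include <linux/mm.h>
#include <linux/swap.h>

/* Hypothetical caller, for illustration only. */
static void example_add_new_anon_folio(struct folio *folio,
                                       struct vm_area_struct *vma)
{
        /*
         * An unconverted caller would still pass &folio->page to
         * lru_cache_add_inactive_or_unevictable(); a converted caller
         * hands the folio straight to the new API.
         */
        folio_add_lru_vma(folio, vma);
}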
---
 include/linux/swap.h | 10 +++++-----
 mm/folio-compat.c    |  6 ++++++
 mm/swap.c            | 19 +++++++++----------
 3 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 43150b9bbc5c..333d5588dc2d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -375,11 +375,11 @@ extern unsigned long totalreserve_pages;
 
 
 /* linux/mm/swap.c */
-extern void lru_note_cost(struct lruvec *lruvec, bool file,
-			  unsigned int nr_pages);
-extern void lru_note_cost_folio(struct folio *);
-extern void folio_add_lru(struct folio *);
-extern void lru_cache_add(struct page *);
+void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages);
+void lru_note_cost_folio(struct folio *);
+void folio_add_lru(struct folio *);
+void folio_add_lru_vma(struct folio *, struct vm_area_struct *);
+void lru_cache_add(struct page *);
 void mark_page_accessed(struct page *);
 void folio_mark_accessed(struct folio *);
 
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 458618c7302c..e1e23b4947d7 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -88,6 +88,12 @@ void lru_cache_add(struct page *page)
 }
 EXPORT_SYMBOL(lru_cache_add);
 
+void lru_cache_add_inactive_or_unevictable(struct page *page,
+		struct vm_area_struct *vma)
+{
+	folio_add_lru_vma(page_folio(page), vma);
+}
+
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 		pgoff_t index, gfp_t gfp)
 {
diff --git a/mm/swap.c b/mm/swap.c
index 9cee7f6a3809..6525011b715e 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -493,22 +493,21 @@ void folio_add_lru(struct folio *folio)
 EXPORT_SYMBOL(folio_add_lru);
 
 /**
- * lru_cache_add_inactive_or_unevictable
- * @page:  the page to be added to LRU
- * @vma:   vma in which page is mapped for determining reclaimability
+ * folio_add_lru_vma() - Add a folio to the appropriate LRU list for this VMA.
+ * @folio: The folio to be added to the LRU.
+ * @vma: VMA in which the folio is mapped.
  *
- * Place @page on the inactive or unevictable LRU list, depending on its
- * evictability.
+ * If the VMA is mlocked, @folio is added to the unevictable list.
+ * Otherwise, it is treated the same way as folio_add_lru().
  */
-void lru_cache_add_inactive_or_unevictable(struct page *page,
-					 struct vm_area_struct *vma)
+void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
 {
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
 	if (unlikely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED))
-		mlock_new_page(page);
+		mlock_new_page(&folio->page);
 	else
-		lru_cache_add(page);
+		folio_add_lru(folio);
 }
 
 /*
-- 
2.35.1




* [PATCH 09/59] shmem: Convert shmem_writepage() to use a folio throughout
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (7 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 08/59] mm: Add folio_add_lru_vma() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 10/59] shmem: Convert shmem_delete_from_page_cache() to take a folio Matthew Wilcox (Oracle)
                   ` (50 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Even though we will split any large folio that comes in, write the
code to handle large folios so as not to leave a trap for whoever
later tries to support large folios in the swap cache.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 47 ++++++++++++++++++++++++-----------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index de267a23248d..bcd3644eb9c3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1328,17 +1328,18 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
 	 * and its shmem_writeback() needs them to be split when swapping.
 	 */
-	if (PageTransCompound(page)) {
+	if (folio_test_large(folio)) {
 		/* Ensure the subpages are still dirty */
-		SetPageDirty(page);
+		folio_test_set_dirty(folio);
 		if (split_huge_page(page) < 0)
 			goto redirty;
-		ClearPageDirty(page);
+		folio = page_folio(page);
+		folio_clear_dirty(folio);
 	}
 
-	BUG_ON(!PageLocked(page));
-	mapping = page->mapping;
-	index = page->index;
+	BUG_ON(!folio_test_locked(folio));
+	mapping = folio->mapping;
+	index = folio->index;
 	inode = mapping->host;
 	info = SHMEM_I(inode);
 	if (info->flags & VM_LOCKED)
@@ -1361,15 +1362,15 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	/*
 	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
 	 * value into swapfile.c, the only way we can correctly account for a
-	 * fallocated page arriving here is now to initialize it and write it.
+	 * fallocated folio arriving here is now to initialize it and write it.
 	 *
-	 * That's okay for a page already fallocated earlier, but if we have
+	 * That's okay for a folio already fallocated earlier, but if we have
 	 * not yet completed the fallocation, then (a) we want to keep track
-	 * of this page in case we have to undo it, and (b) it may not be a
+	 * of this folio in case we have to undo it, and (b) it may not be a
 	 * good idea to continue anyway, once we're pushing into swap.  So
-	 * reactivate the page, and let shmem_fallocate() quit when too many.
+	 * reactivate the folio, and let shmem_fallocate() quit when too many.
 	 */
-	if (!PageUptodate(page)) {
+	if (!folio_test_uptodate(folio)) {
 		if (inode->i_private) {
 			struct shmem_falloc *shmem_falloc;
 			spin_lock(&inode->i_lock);
@@ -1385,9 +1386,9 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 			if (shmem_falloc)
 				goto redirty;
 		}
-		clear_highpage(page);
-		flush_dcache_page(page);
-		SetPageUptodate(page);
+		folio_zero_range(folio, 0, folio_size(folio));
+		flush_dcache_folio(folio);
+		folio_mark_uptodate(folio);
 	}
 
 	swap = folio_alloc_swap(folio);
@@ -1396,7 +1397,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 
 	/*
 	 * Add inode to shmem_unuse()'s list of swapped-out inodes,
-	 * if it's not already there.  Do it now before the page is
+	 * if it's not already there.  Do it now before the folio is
 	 * moved to swap cache, when its pagelock no longer protects
 	 * the inode from eviction.  But don't unlock the mutex until
 	 * we've incremented swapped, because shmem_unuse_inode() will
@@ -1406,7 +1407,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	if (list_empty(&info->swaplist))
 		list_add(&info->swaplist, &shmem_swaplist);
 
-	if (add_to_swap_cache(page, swap,
+	if (add_to_swap_cache(&folio->page, swap,
 			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
 			NULL) == 0) {
 		spin_lock_irq(&info->lock);
@@ -1415,21 +1416,21 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		spin_unlock_irq(&info->lock);
 
 		swap_shmem_alloc(swap);
-		shmem_delete_from_page_cache(page, swp_to_radix_entry(swap));
+		shmem_delete_from_page_cache(&folio->page, swp_to_radix_entry(swap));
 
 		mutex_unlock(&shmem_swaplist_mutex);
-		BUG_ON(page_mapped(page));
-		swap_writepage(page, wbc);
+		BUG_ON(folio_mapped(folio));
+		swap_writepage(&folio->page, wbc);
 		return 0;
 	}
 
 	mutex_unlock(&shmem_swaplist_mutex);
-	put_swap_page(page, swap);
+	put_swap_page(&folio->page, swap);
 redirty:
-	set_page_dirty(page);
+	folio_mark_dirty(folio);
 	if (wbc->for_reclaim)
-		return AOP_WRITEPAGE_ACTIVATE;	/* Return with page locked */
-	unlock_page(page);
+		return AOP_WRITEPAGE_ACTIVATE;	/* Return with folio locked */
+	folio_unlock(folio);
 	return 0;
 }
 
-- 
2.35.1




* [PATCH 10/59] shmem: Convert shmem_delete_from_page_cache() to take a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (8 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 09/59] shmem: Convert shmem_writepage() to use a folio throughout Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 11/59] shmem: Convert shmem_replace_page() to use folios throughout Matthew Wilcox (Oracle)
                   ` (49 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Remove the assertion that the page is not Compound as this function
now handles large folios correctly.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index bcd3644eb9c3..f561f6e7f53b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -763,23 +763,22 @@ static int shmem_add_to_page_cache(struct folio *folio,
 }
 
 /*
- * Like delete_from_page_cache, but substitutes swap for page.
+ * Like delete_from_page_cache, but substitutes swap for @folio.
  */
-static void shmem_delete_from_page_cache(struct page *page, void *radswap)
+static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
 {
-	struct address_space *mapping = page->mapping;
+	struct address_space *mapping = folio->mapping;
+	long nr = folio_nr_pages(folio);
 	int error;
 
-	VM_BUG_ON_PAGE(PageCompound(page), page);
-
 	xa_lock_irq(&mapping->i_pages);
-	error = shmem_replace_entry(mapping, page->index, page, radswap);
-	page->mapping = NULL;
-	mapping->nrpages--;
-	__dec_lruvec_page_state(page, NR_FILE_PAGES);
-	__dec_lruvec_page_state(page, NR_SHMEM);
+	error = shmem_replace_entry(mapping, folio->index, folio, radswap);
+	folio->mapping = NULL;
+	mapping->nrpages -= nr;
+	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
+	__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
 	xa_unlock_irq(&mapping->i_pages);
-	put_page(page);
+	folio_put(folio);
 	BUG_ON(error);
 }
 
@@ -1416,7 +1415,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		spin_unlock_irq(&info->lock);
 
 		swap_shmem_alloc(swap);
-		shmem_delete_from_page_cache(&folio->page, swp_to_radix_entry(swap));
+		shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap));
 
 		mutex_unlock(&shmem_swaplist_mutex);
 		BUG_ON(folio_mapped(folio));
-- 
2.35.1




* [PATCH 11/59] shmem: Convert shmem_replace_page() to use folios throughout
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (9 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 10/59] shmem: Convert shmem_delete_from_page_cache() to take a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 12/59] mm/swapfile: Remove page_swapcount() Matthew Wilcox (Oracle)
                   ` (48 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Introduce folio_set_swap_entry() to abstract how a swp_entry_t is stored
in folio->private.  Use swap_address_space() directly instead of
indirecting through folio_mapping().  Include an assertion that the old
folio is not large as we only allocate a single-page folio to replace it.
Use folio_put_refs() instead of calling folio_put() twice.
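
A minimal sketch (illustrative only, not part of the patch) of how the new
setter pairs with the existing folio_swap_entry() getter, so that callers
never touch folio->private directly:

	/* store the swap entry in the folio ... */
	folio_set_swap_entry(folio, entry);	/* folio->private = (void *)entry.val */
	/* ... and read it back later */
	entry = folio_swap_entry(folio);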

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h |  5 ++++
 mm/shmem.c           | 63 +++++++++++++++++++++-----------------------
 2 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 333d5588dc2d..afcb76bbd141 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -351,6 +351,11 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio)
 	return entry;
 }
 
+static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
+{
+	folio->private = (void *)entry.val;
+}
+
 /* linux/mm/workingset.c */
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
diff --git a/mm/shmem.c b/mm/shmem.c
index f561f6e7f53b..eec32307984d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1560,12 +1560,6 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
 	return folio;
 }
 
-static struct page *shmem_alloc_page(gfp_t gfp,
-			struct shmem_inode_info *info, pgoff_t index)
-{
-	return &shmem_alloc_folio(gfp, info, index)->page;
-}
-
 static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
 		pgoff_t index, bool huge)
 {
@@ -1617,49 +1611,47 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
 static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 				struct shmem_inode_info *info, pgoff_t index)
 {
-	struct page *oldpage, *newpage;
 	struct folio *old, *new;
 	struct address_space *swap_mapping;
 	swp_entry_t entry;
 	pgoff_t swap_index;
 	int error;
 
-	oldpage = *pagep;
-	entry.val = page_private(oldpage);
+	old = page_folio(*pagep);
+	entry = folio_swap_entry(old);
 	swap_index = swp_offset(entry);
-	swap_mapping = page_mapping(oldpage);
+	swap_mapping = swap_address_space(entry);
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
 	 * limit chance of success by further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	newpage = shmem_alloc_page(gfp, info, index);
-	if (!newpage)
+	VM_BUG_ON_FOLIO(folio_test_large(old), old);
+	new = shmem_alloc_folio(gfp, info, index);
+	if (!new)
 		return -ENOMEM;
 
-	get_page(newpage);
-	copy_highpage(newpage, oldpage);
-	flush_dcache_page(newpage);
+	folio_get(new);
+	folio_copy(new, old);
+	flush_dcache_folio(new);
 
-	__SetPageLocked(newpage);
-	__SetPageSwapBacked(newpage);
-	SetPageUptodate(newpage);
-	set_page_private(newpage, entry.val);
-	SetPageSwapCache(newpage);
+	__folio_set_locked(new);
+	__folio_set_swapbacked(new);
+	folio_mark_uptodate(new);
+	folio_set_swap_entry(new, entry);
+	folio_set_swapcache(new);
 
 	/*
 	 * Our caller will very soon move newpage out of swapcache, but it's
 	 * a nice clean interface for us to replace oldpage by newpage there.
 	 */
 	xa_lock_irq(&swap_mapping->i_pages);
-	error = shmem_replace_entry(swap_mapping, swap_index, oldpage, newpage);
+	error = shmem_replace_entry(swap_mapping, swap_index, old, new);
 	if (!error) {
-		old = page_folio(oldpage);
-		new = page_folio(newpage);
 		mem_cgroup_migrate(old, new);
-		__inc_lruvec_page_state(newpage, NR_FILE_PAGES);
-		__dec_lruvec_page_state(oldpage, NR_FILE_PAGES);
+		__lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1);
+		__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
 	}
 	xa_unlock_irq(&swap_mapping->i_pages);
 
@@ -1669,18 +1661,17 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 		 * both PageSwapCache and page_private after getting page lock;
 		 * but be defensive.  Reverse old to newpage for clear and free.
 		 */
-		oldpage = newpage;
+		old = new;
 	} else {
-		lru_cache_add(newpage);
-		*pagep = newpage;
+		folio_add_lru(new);
+		*pagep = &new->page;
 	}
 
-	ClearPageSwapCache(oldpage);
-	set_page_private(oldpage, 0);
+	folio_clear_swapcache(old);
+	old->private = NULL;
 
-	unlock_page(oldpage);
-	put_page(oldpage);
-	put_page(oldpage);
+	folio_unlock(old);
+	folio_put_refs(old, 2);
 	return error;
 }
 
@@ -2362,6 +2353,12 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir,
 }
 
 #ifdef CONFIG_USERFAULTFD
+static struct page *shmem_alloc_page(gfp_t gfp,
+			struct shmem_inode_info *info, pgoff_t index)
+{
+	return &shmem_alloc_folio(gfp, info, index)->page;
+}
+
 int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 			   pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 12/59] mm/swapfile: Remove page_swapcount()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (10 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 11/59] shmem: Convert shmem_replace_page() to use folios throughout Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 13/59] mm/swapfile: Convert try_to_free_swap() to folio_free_swap() Matthew Wilcox (Oracle)
                   ` (47 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Restructure folio_swapped() so that it can use swap_swapcount() instead
of page_swapcount().  The result is even a little more efficient.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swapfile.c | 46 +++++++++++++---------------------------------
 1 file changed, 13 insertions(+), 33 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1fdccd2f1422..c042fd71de02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1427,30 +1427,6 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
 		spin_unlock(&p->lock);
 }
 
-/*
- * How many references to page are currently swapped out?
- * This does not give an exact answer when swap count is continued,
- * but does include the high COUNT_CONTINUED flag to allow for that.
- */
-static int page_swapcount(struct page *page)
-{
-	int count = 0;
-	struct swap_info_struct *p;
-	struct swap_cluster_info *ci;
-	swp_entry_t entry;
-	unsigned long offset;
-
-	entry.val = page_private(page);
-	p = _swap_info_get(entry);
-	if (p) {
-		offset = swp_offset(entry);
-		ci = lock_cluster_or_swap_info(p, offset);
-		count = swap_count(p->swap_map[offset]);
-		unlock_cluster_or_swap_info(p, ci);
-	}
-	return count;
-}
-
 int __swap_count(swp_entry_t entry)
 {
 	struct swap_info_struct *si;
@@ -1465,11 +1441,16 @@ int __swap_count(swp_entry_t entry)
 	return count;
 }
 
+/*
+ * How many references to @entry are currently swapped out?
+ * This does not give an exact answer when swap count is continued,
+ * but does include the high COUNT_CONTINUED flag to allow for that.
+ */
 static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
 {
-	int count = 0;
 	pgoff_t offset = swp_offset(entry);
 	struct swap_cluster_info *ci;
+	int count;
 
 	ci = lock_cluster_or_swap_info(si, offset);
 	count = swap_count(si->swap_map[offset]);
@@ -1570,17 +1551,16 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 
 static bool folio_swapped(struct folio *folio)
 {
-	swp_entry_t entry;
-	struct swap_info_struct *si;
+	swp_entry_t entry = folio_swap_entry(folio);
+	struct swap_info_struct *si = _swap_info_get(entry);
+
+	if (!si)
+		return false;
 
 	if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!folio_test_large(folio)))
-		return page_swapcount(&folio->page) != 0;
+		return swap_swapcount(si, entry) != 0;
 
-	entry = folio_swap_entry(folio);
-	si = _swap_info_get(entry);
-	if (si)
-		return swap_page_trans_huge_swapped(si, entry);
-	return false;
+	return swap_page_trans_huge_swapped(si, entry);
 }
 
 /*
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 13/59] mm/swapfile: Convert try_to_free_swap() to folio_free_swap()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (11 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 12/59] mm/swapfile: Remove page_swapcount() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 14/59] mm/swap: Convert __read_swap_cache_async() to use a folio Matthew Wilcox (Oracle)
                   ` (46 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Add kernel-doc for folio_free_swap() and make it return bool.
Add a try_to_free_swap() compatibility wrapper.
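
A sketch of the new calling convention (illustrative only; the folio must
be locked, as folio_free_swap() asserts):

	bool freed = folio_free_swap(folio);	/* returns bool rather than int */

	/* page-based callers keep working through the folio-compat wrapper */
	freed = try_to_free_swap(page);		/* == folio_free_swap(page_folio(page)) */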

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h |  6 ++++++
 mm/folio-compat.c    |  7 +++++++
 mm/swapfile.c        | 32 ++++++++++++++++++--------------
 mm/vmscan.c          |  2 +-
 4 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index afcb76bbd141..4595cbc1cb02 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -486,6 +486,7 @@ static inline long get_nr_swap_pages(void)
 
 extern void si_swapinfo(struct sysinfo *);
 swp_entry_t folio_alloc_swap(struct folio *folio);
+bool folio_free_swap(struct folio *folio);
 extern void put_swap_page(struct page *page, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
 extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
@@ -602,6 +603,11 @@ static inline swp_entry_t folio_alloc_swap(struct folio *folio)
 	return entry;
 }
 
+static inline bool folio_free_swap(struct folio *folio)
+{
+	return false;
+}
+
 static inline int add_swap_extent(struct swap_info_struct *sis,
 				  unsigned long start_page,
 				  unsigned long nr_pages, sector_t start_block)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index e1e23b4947d7..06d47f00609b 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -146,3 +146,10 @@ void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
 }
+
+#ifdef CONFIG_SWAP
+int try_to_free_swap(struct page *page)
+{
+	return folio_free_swap(page_folio(page));
+}
+#endif
diff --git a/mm/swapfile.c b/mm/swapfile.c
index c042fd71de02..880871f4c6d4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1563,43 +1563,47 @@ static bool folio_swapped(struct folio *folio)
 	return swap_page_trans_huge_swapped(si, entry);
 }
 
-/*
- * If swap is getting full, or if there are no more mappings of this page,
- * then try_to_free_swap is called to free its swap space.
+/**
+ * folio_free_swap() - Free the swap space used for this folio.
+ * @folio: The folio to remove.
+ *
+ * If swap is getting full, or if there are no more mappings of this folio,
+ * then call folio_free_swap to free its swap space.
+ *
+ * Return: true if we were able to release the swap space.
  */
-int try_to_free_swap(struct page *page)
+bool folio_free_swap(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
 	if (!folio_test_swapcache(folio))
-		return 0;
+		return false;
 	if (folio_test_writeback(folio))
-		return 0;
+		return false;
 	if (folio_swapped(folio))
-		return 0;
+		return false;
 
 	/*
 	 * Once hibernation has begun to create its image of memory,
-	 * there's a danger that one of the calls to try_to_free_swap()
+	 * there's a danger that one of the calls to folio_free_swap()
 	 * - most probably a call from __try_to_reclaim_swap() while
 	 * hibernation is allocating its own swap pages for the image,
 	 * but conceivably even a call from memory reclaim - will free
-	 * the swap from a page which has already been recorded in the
-	 * image as a clean swapcache page, and then reuse its swap for
+	 * the swap from a folio which has already been recorded in the
+	 * image as a clean swapcache folio, and then reuse its swap for
 	 * another page of the image.  On waking from hibernation, the
-	 * original page might be freed under memory pressure, then
+	 * original folio might be freed under memory pressure, then
 	 * later read back in from swap, now with the wrong data.
 	 *
 	 * Hibernation suspends storage while it is writing the image
 	 * to disk so check that here.
 	 */
 	if (pm_suspended_storage())
-		return 0;
+		return false;
 
 	delete_from_swap_cache(folio);
 	folio_set_dirty(folio);
-	return 1;
+	return true;
 }
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 93b1087fc969..d3e26712dbc1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2004,7 +2004,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		if (folio_test_swapcache(folio) &&
 		    (mem_cgroup_swap_full(&folio->page) ||
 		     folio_test_mlocked(folio)))
-			try_to_free_swap(&folio->page);
+			folio_free_swap(folio);
 		VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
 		if (!folio_test_mlocked(folio)) {
 			int type = folio_is_file_lru(folio);
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 14/59] mm/swap: Convert __read_swap_cache_async() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (12 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 13/59] mm/swapfile: Convert try_to_free_swap() to folio_free_swap() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 15/59] mm/swap: Convert add_to_swap_cache() to take " Matthew Wilcox (Oracle)
                   ` (45 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Remove a few hidden (and one visible) calls to compound_head().
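
The heart of the conversion, pulled out for illustration (not part of the
patch): the lookup works on a folio and only produces a precise struct page
at the boundary where one is still required.

	/* was: page = find_get_page(...), which hides a compound_head() call */
	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
	if (folio)
		return folio_file_page(folio, swp_offset(entry));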

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swap_state.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 41afa6d45b23..b1e181fc5268 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -411,7 +411,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			bool *new_page_allocated)
 {
 	struct swap_info_struct *si;
-	struct page *page;
+	struct folio *folio;
 	void *shadow = NULL;
 
 	*new_page_allocated = false;
@@ -426,11 +426,11 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		si = get_swap_device(entry);
 		if (!si)
 			return NULL;
-		page = find_get_page(swap_address_space(entry),
-				     swp_offset(entry));
+		folio = filemap_get_folio(swap_address_space(entry),
+						swp_offset(entry));
 		put_swap_device(si);
-		if (page)
-			return page;
+		if (folio)
+			return folio_file_page(folio, swp_offset(entry));
 
 		/*
 		 * Just skip read ahead for unused swap slot.
@@ -448,8 +448,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
 		 * cause any racers to loop around until we add it to cache.
 		 */
-		page = alloc_page_vma(gfp_mask, vma, addr);
-		if (!page)
+		folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false);
+		if (!folio)
 			return NULL;
 
 		/*
@@ -459,7 +459,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		if (!err)
 			break;
 
-		put_page(page);
+		folio_put(folio);
 		if (err != -EEXIST)
 			return NULL;
 
@@ -477,30 +477,30 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	 * The swap entry is ours to swap in. Prepare the new page.
 	 */
 
-	__SetPageLocked(page);
-	__SetPageSwapBacked(page);
+	__folio_set_locked(folio);
+	__folio_set_swapbacked(folio);
 
-	if (mem_cgroup_swapin_charge_page(page, NULL, gfp_mask, entry))
+	if (mem_cgroup_swapin_charge_page(&folio->page, NULL, gfp_mask, entry))
 		goto fail_unlock;
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (add_to_swap_cache(page, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
+	if (add_to_swap_cache(&folio->page, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
 		goto fail_unlock;
 
 	mem_cgroup_swapin_uncharge_swap(entry);
 
 	if (shadow)
-		workingset_refault(page_folio(page), shadow);
+		workingset_refault(folio, shadow);
 
-	/* Caller will initiate read into locked page */
-	lru_cache_add(page);
+	/* Caller will initiate read into locked folio */
+	folio_add_lru(folio);
 	*new_page_allocated = true;
-	return page;
+	return &folio->page;
 
 fail_unlock:
-	put_swap_page(page, entry);
-	unlock_page(page);
-	put_page(page);
+	put_swap_page(&folio->page, entry);
+	folio_unlock(folio);
+	folio_put(folio);
 	return NULL;
 }
 
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 15/59] mm/swap: Convert add_to_swap_cache() to take a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (13 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 14/59] mm/swap: Convert __read_swap_cache_async() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 16/59] mm/swap: Convert put_swap_page() to put_swap_folio() Matthew Wilcox (Oracle)
                   ` (44 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

With all callers using folios, we can convert add_to_swap_cache()
to take a folio and use it throughout.
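
A minimal caller sketch (illustrative only; the gfp flags are the ones the
shmem caller below uses).  The folio must be locked, swap-backed and not
yet in the swap cache:

	void *shadow = NULL;
	int err;

	err = add_to_swap_cache(folio, swap,
			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN, &shadow);
	if (err)	/* e.g. -ENOMEM if XArray node allocation failed */
		return err;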

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c      |  2 +-
 mm/swap.h       |  4 ++--
 mm/swap_state.c | 34 +++++++++++++++++-----------------
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index eec32307984d..49c3f59b5b76 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1406,7 +1406,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	if (list_empty(&info->swaplist))
 		list_add(&info->swaplist, &shmem_swaplist);
 
-	if (add_to_swap_cache(&folio->page, swap,
+	if (add_to_swap_cache(folio, swap,
 			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
 			NULL) == 0) {
 		spin_lock_irq(&info->lock);
diff --git a/mm/swap.h b/mm/swap.h
index 17936e068c1c..0e023765e110 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -34,7 +34,7 @@ extern struct address_space *swapper_spaces[];
 void show_swap_cache_info(void);
 bool add_to_swap(struct folio *folio);
 void *get_shadow_from_swap_cache(swp_entry_t entry);
-int add_to_swap_cache(struct page *page, swp_entry_t entry,
+int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 		      gfp_t gfp, void **shadowp);
 void __delete_from_swap_cache(struct folio *folio,
 			      swp_entry_t entry, void *shadow);
@@ -124,7 +124,7 @@ static inline void *get_shadow_from_swap_cache(swp_entry_t entry)
 	return NULL;
 }
 
-static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
+static inline int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 					gfp_t gfp_mask, void **shadowp)
 {
 	return -1;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b1e181fc5268..ecf1accc2fb1 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -85,21 +85,21 @@ void *get_shadow_from_swap_cache(swp_entry_t entry)
  * add_to_swap_cache resembles filemap_add_folio on swapper_space,
  * but sets SwapCache flag and private instead of mapping and index.
  */
-int add_to_swap_cache(struct page *page, swp_entry_t entry,
+int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 			gfp_t gfp, void **shadowp)
 {
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
-	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
-	unsigned long i, nr = thp_nr_pages(page);
+	XA_STATE_ORDER(xas, &address_space->i_pages, idx, folio_order(folio));
+	unsigned long i, nr = folio_nr_pages(folio);
 	void *old;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageSwapCache(page), page);
-	VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio);
 
-	page_ref_add(page, nr);
-	SetPageSwapCache(page);
+	folio_ref_add(folio, nr);
+	folio_set_swapcache(folio);
 
 	do {
 		xas_lock_irq(&xas);
@@ -107,19 +107,19 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
 		if (xas_error(&xas))
 			goto unlock;
 		for (i = 0; i < nr; i++) {
-			VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
+			VM_BUG_ON_FOLIO(xas.xa_index != idx + i, folio);
 			old = xas_load(&xas);
 			if (xa_is_value(old)) {
 				if (shadowp)
 					*shadowp = old;
 			}
-			set_page_private(page + i, entry.val + i);
-			xas_store(&xas, page);
+			set_page_private(folio_page(folio, i), entry.val + i);
+			xas_store(&xas, folio);
 			xas_next(&xas);
 		}
 		address_space->nrpages += nr;
-		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
-		__mod_lruvec_page_state(page, NR_SWAPCACHE, nr);
+		__node_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+		__lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr);
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -127,8 +127,8 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
 	if (!xas_error(&xas))
 		return 0;
 
-	ClearPageSwapCache(page);
-	page_ref_sub(page, nr);
+	folio_clear_swapcache(folio);
+	folio_ref_sub(folio, nr);
 	return xas_error(&xas);
 }
 
@@ -194,7 +194,7 @@ bool add_to_swap(struct folio *folio)
 	/*
 	 * Add it to the swap cache.
 	 */
-	err = add_to_swap_cache(&folio->page, entry,
+	err = add_to_swap_cache(folio, entry,
 			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);
 	if (err)
 		/*
@@ -484,7 +484,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		goto fail_unlock;
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (add_to_swap_cache(&folio->page, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
+	if (add_to_swap_cache(folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
 		goto fail_unlock;
 
 	mem_cgroup_swapin_uncharge_swap(entry);
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 16/59] mm/swap: Convert put_swap_page() to put_swap_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (14 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 15/59] mm/swap: Convert add_to_swap_cache() to take " Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 17/59] mm: Convert do_swap_page() to use a folio Matthew Wilcox (Oracle)
                   ` (43 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

With all callers now using a folio, we can convert this function.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h | 4 ++--
 mm/shmem.c           | 2 +-
 mm/swap_slots.c      | 2 +-
 mm/swap_state.c      | 6 +++---
 mm/swapfile.c        | 4 ++--
 mm/vmscan.c          | 2 +-
 6 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4595cbc1cb02..f16c9af6bf32 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -487,7 +487,7 @@ static inline long get_nr_swap_pages(void)
 extern void si_swapinfo(struct sysinfo *);
 swp_entry_t folio_alloc_swap(struct folio *folio);
 bool folio_free_swap(struct folio *folio);
-extern void put_swap_page(struct page *page, swp_entry_t entry);
+void put_swap_folio(struct folio *folio, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
 extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
 extern int add_swap_count_continuation(swp_entry_t, gfp_t);
@@ -572,7 +572,7 @@ static inline void swap_free(swp_entry_t swp)
 {
 }
 
-static inline void put_swap_page(struct page *page, swp_entry_t swp)
+static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
 {
 }
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 49c3f59b5b76..4693edb33648 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1424,7 +1424,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	}
 
 	mutex_unlock(&shmem_swaplist_mutex);
-	put_swap_page(&folio->page, swap);
+	put_swap_folio(folio, swap);
 redirty:
 	folio_mark_dirty(folio);
 	if (wbc->for_reclaim)
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 10b94d64cc25..0bec1f705f8e 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -343,7 +343,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
 	get_swap_pages(1, &entry, 1);
 out:
 	if (mem_cgroup_try_charge_swap(folio, entry)) {
-		put_swap_page(&folio->page, entry);
+		put_swap_folio(folio, entry);
 		entry.val = 0;
 	}
 	return entry;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ecf1accc2fb1..ea354efd3735 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -218,7 +218,7 @@ bool add_to_swap(struct folio *folio)
 	return true;
 
 fail:
-	put_swap_page(&folio->page, entry);
+	put_swap_folio(folio, entry);
 	return false;
 }
 
@@ -237,7 +237,7 @@ void delete_from_swap_cache(struct folio *folio)
 	__delete_from_swap_cache(folio, entry, NULL);
 	xa_unlock_irq(&address_space->i_pages);
 
-	put_swap_page(&folio->page, entry);
+	put_swap_folio(folio, entry);
 	folio_ref_sub(folio, folio_nr_pages(folio));
 }
 
@@ -498,7 +498,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	return &folio->page;
 
 fail_unlock:
-	put_swap_page(&folio->page, entry);
+	put_swap_folio(folio, entry);
 	folio_unlock(folio);
 	folio_put(folio);
 	return NULL;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 880871f4c6d4..186511a8ef4f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1328,7 +1328,7 @@ void swap_free(swp_entry_t entry)
 /*
  * Called after dropping swapcache to decrease refcnt to swap entries.
  */
-void put_swap_page(struct page *page, swp_entry_t entry)
+void put_swap_folio(struct folio *folio, swp_entry_t entry)
 {
 	unsigned long offset = swp_offset(entry);
 	unsigned long idx = offset / SWAPFILE_CLUSTER;
@@ -1337,7 +1337,7 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 	unsigned char *map;
 	unsigned int i, free_entries = 0;
 	unsigned char val;
-	int size = swap_entry_size(thp_nr_pages(page));
+	int size = swap_entry_size(folio_nr_pages(folio));
 
 	si = _swap_info_get(entry);
 	if (!si)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d3e26712dbc1..ac7f6f77e28a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1339,7 +1339,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 			shadow = workingset_eviction(folio, target_memcg);
 		__delete_from_swap_cache(folio, swap, shadow);
 		xa_unlock_irq(&mapping->i_pages);
-		put_swap_page(&folio->page, swap);
+		put_swap_folio(folio, swap);
 	} else {
 		void (*free_folio)(struct folio *);
 
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 17/59] mm: Convert do_swap_page() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (15 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 16/59] mm/swap: Convert put_swap_page() to put_swap_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 18/59] mm: Convert do_swap_page()'s swapcache variable to " Matthew Wilcox (Oracle)
                   ` (42 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Removes quite a lot of calls to compound_head().
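
Illustrative only (not part of the patch): the two directions of the
page <-> folio pairing that this conversion leans on.

	/* freshly allocated: start from the folio, derive the page */
	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address, false);
	page = &folio->page;

	/* swap cache hit: start from the page, derive the folio */
	page = lookup_swap_cache(entry, vma, vmf->address);
	if (page)
		folio = page_folio(page);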

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 57 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4ba73f5aa8bb..f172b148e29b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3718,6 +3718,7 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
 	struct page *page = NULL, *swapcache;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
@@ -3762,19 +3763,23 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 	page = lookup_swap_cache(entry, vma, vmf->address);
 	swapcache = page;
+	if (page)
+		folio = page_folio(page);
 
 	if (!page) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
-			page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
-							vmf->address);
-			if (page) {
-				__SetPageLocked(page);
-				__SetPageSwapBacked(page);
+			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
+						vma, vmf->address, false);
+			page = &folio->page;
+			if (folio) {
+				__folio_set_locked(folio);
+				__folio_set_swapbacked(folio);
 
 				if (mem_cgroup_swapin_charge_page(page,
-					vma->vm_mm, GFP_KERNEL, entry)) {
+							vma->vm_mm, GFP_KERNEL,
+							entry)) {
 					ret = VM_FAULT_OOM;
 					goto out_page;
 				}
@@ -3782,20 +3787,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 				shadow = get_shadow_from_swap_cache(entry);
 				if (shadow)
-					workingset_refault(page_folio(page),
-								shadow);
+					workingset_refault(folio, shadow);
 
-				lru_cache_add(page);
+				folio_add_lru(folio);
 
 				/* To provide entry to swap_readpage() */
-				set_page_private(page, entry.val);
+				folio_set_swap_entry(folio, entry);
 				swap_readpage(page, true, NULL);
-				set_page_private(page, 0);
+				folio->private = NULL;
 			}
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
 			swapcache = page;
+			if (page)
+				folio = page_folio(page);
 		}
 
 		if (!page) {
@@ -3838,7 +3844,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * swapcache, we need to check that the page's swap has not
 		 * changed.
 		 */
-		if (unlikely(!PageSwapCache(page) ||
+		if (unlikely(!folio_test_swapcache(folio) ||
 			     page_private(page) != entry.val))
 			goto out_page;
 
@@ -3853,6 +3859,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			page = swapcache;
 			goto out_page;
 		}
+		folio = page_folio(page);
 
 		/*
 		 * If we want to map a page that's in the swapcache writable, we
@@ -3861,7 +3868,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * pagevecs if required.
 		 */
 		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
-		    !PageKsm(page) && !PageLRU(page))
+		    !folio_test_ksm(folio) && !folio_test_lru(folio))
 			lru_add_drain();
 	}
 
@@ -3875,7 +3882,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
 		goto out_nomap;
 
-	if (unlikely(!PageUptodate(page))) {
+	if (unlikely(!folio_test_uptodate(folio))) {
 		ret = VM_FAULT_SIGBUS;
 		goto out_nomap;
 	}
@@ -3888,14 +3895,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * check after taking the PT lock and making sure that nobody
 	 * concurrently faulted in this page and set PG_anon_exclusive.
 	 */
-	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
-	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
+	BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio));
+	BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page));
 
 	/*
 	 * Check under PT lock (to protect against concurrent fork() sharing
 	 * the swap entry concurrently) for certainly exclusive pages.
 	 */
-	if (!PageKsm(page)) {
+	if (!folio_test_ksm(folio)) {
 		/*
 		 * Note that pte_swp_exclusive() == false for architectures
 		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
@@ -3907,7 +3914,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			 * swapcache -> certainly exclusive.
 			 */
 			exclusive = true;
-		} else if (exclusive && PageWriteback(page) &&
+		} else if (exclusive && folio_test_writeback(folio) &&
 			  data_race(si->flags & SWP_STABLE_WRITES)) {
 			/*
 			 * This is tricky: not all swap backends support
@@ -3950,7 +3957,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * exposing them to the swapcache or because the swap entry indicates
 	 * exclusivity.
 	 */
-	if (!PageKsm(page) && (exclusive || page_count(page) == 1)) {
+	if (!folio_test_ksm(folio) &&
+	    (exclusive || folio_ref_count(folio) == 1)) {
 		if (vmf->flags & FAULT_FLAG_WRITE) {
 			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 			vmf->flags &= ~FAULT_FLAG_WRITE;
@@ -3970,16 +3978,17 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	/* ksm created a completely new copy */
 	if (unlikely(page != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		folio_add_lru_vma(folio, vma);
 	} else {
 		page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
 	}
 
-	VM_BUG_ON(!PageAnon(page) || (pte_write(pte) && !PageAnonExclusive(page)));
+	VM_BUG_ON(!folio_test_anon(folio) ||
+			(pte_write(pte) && !PageAnonExclusive(page)));
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
-	unlock_page(page);
+	folio_unlock(folio);
 	if (page != swapcache && swapcache) {
 		/*
 		 * Hold the lock to avoid the swap entry to be reused
@@ -4011,9 +4020,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 out_nomap:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out_page:
-	unlock_page(page);
+	folio_unlock(folio);
 out_release:
-	put_page(page);
+	folio_put(folio);
 	if (page != swapcache && swapcache) {
 		unlock_page(swapcache);
 		put_page(swapcache);
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 18/59] mm: Convert do_swap_page()'s swapcache variable to a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (16 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 17/59] mm: Convert do_swap_page() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-11  2:28   ` Hugh Dickins
  2022-08-08 19:33 ` [PATCH 19/59] memcg: Convert mem_cgroup_swapin_charge_page() to mem_cgroup_swapin_charge_folio() Matthew Wilcox (Oracle)
                   ` (41 subsequent siblings)
  59 siblings, 1 reply; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

The 'swapcache' variable is used to track whether the page is from the
swapcache or not.  It can do this equally well by being the folio of
the page rather than the page itself, and this saves a number of calls
to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f172b148e29b..471102f0cbf2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3718,8 +3718,8 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct folio *folio;
-	struct page *page = NULL, *swapcache;
+	struct folio *swapcache, *folio = NULL;
+	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool exclusive = false;
@@ -3762,11 +3762,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out;
 
 	page = lookup_swap_cache(entry, vma, vmf->address);
-	swapcache = page;
 	if (page)
 		folio = page_folio(page);
+	swapcache = folio;
 
-	if (!page) {
+	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
@@ -3799,12 +3799,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
-			swapcache = page;
 			if (page)
 				folio = page_folio(page);
+			swapcache = folio;
 		}
 
-		if (!page) {
+		if (!folio) {
 			/*
 			 * Back out if somebody else faulted in this pte
 			 * while we released the pte lock.
@@ -3856,10 +3856,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		page = ksm_might_need_to_copy(page, vma, vmf->address);
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
-			page = swapcache;
 			goto out_page;
 		}
 		folio = page_folio(page);
+		swapcache = folio;
 
 		/*
 		 * If we want to map a page that's in the swapcache writable, we
@@ -3867,7 +3867,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * owner. Try removing the extra reference from the local LRU
 		 * pagevecs if required.
 		 */
-		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
+		if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
 		    !folio_test_ksm(folio) && !folio_test_lru(folio))
 			lru_add_drain();
 	}
@@ -3908,7 +3908,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
 		 */
 		exclusive = pte_swp_exclusive(vmf->orig_pte);
-		if (page != swapcache) {
+		if (folio != swapcache) {
 			/*
 			 * We have a fresh page that is not exposed to the
 			 * swapcache -> certainly exclusive.
@@ -3976,7 +3976,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	vmf->orig_pte = pte;
 
 	/* ksm created a completely new copy */
-	if (unlikely(page != swapcache && swapcache)) {
+	if (unlikely(folio != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address);
 		folio_add_lru_vma(folio, vma);
 	} else {
@@ -3989,7 +3989,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
 	folio_unlock(folio);
-	if (page != swapcache && swapcache) {
+	if (folio != swapcache && swapcache) {
 		/*
 		 * Hold the lock to avoid the swap entry to be reused
 		 * until we take the PT lock for the pte_same() check
@@ -3998,8 +3998,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * so that the swap count won't change under a
 		 * parallel locked swapcache.
 		 */
-		unlock_page(swapcache);
-		put_page(swapcache);
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 
 	if (vmf->flags & FAULT_FLAG_WRITE) {
@@ -4023,9 +4023,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	folio_unlock(folio);
 out_release:
 	folio_put(folio);
-	if (page != swapcache && swapcache) {
-		unlock_page(swapcache);
-		put_page(swapcache);
+	if (folio != swapcache && swapcache) {
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 	if (si)
 		put_swap_device(si);
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 19/59] memcg: Convert mem_cgroup_swapin_charge_page() to mem_cgroup_swapin_charge_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (17 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 18/59] mm: Convert do_swap_page()'s swapcache variable to " Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 20/59] shmem: Convert shmem_mfill_atomic_pte() to use a folio Matthew Wilcox (Oracle)
                   ` (40 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

All callers now have a folio, so pass it in here and remove an unnecessary
call to page_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h |  4 ++--
 mm/memcontrol.c            | 13 ++++++-------
 mm/memory.c                |  2 +-
 mm/swap_state.c            |  2 +-
 4 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4d31ce55b1c0..5d7e834f37ba 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -689,7 +689,7 @@ static inline int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm,
 	return __mem_cgroup_charge(folio, mm, gfp);
 }
 
-int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
+int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 
@@ -1227,7 +1227,7 @@ static inline int mem_cgroup_charge(struct folio *folio,
 	return 0;
 }
 
-static inline int mem_cgroup_swapin_charge_page(struct page *page,
+static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
 			struct mm_struct *mm, gfp_t gfp, swp_entry_t entry)
 {
 	return 0;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b69979c9ced5..f65833efe90d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6813,21 +6813,20 @@ int __mem_cgroup_charge(struct folio *folio, struct mm_struct *mm, gfp_t gfp)
 }
 
 /**
- * mem_cgroup_swapin_charge_page - charge a newly allocated page for swapin
- * @page: page to charge
+ * mem_cgroup_swapin_charge_folio - Charge a newly allocated folio for swapin.
+ * @folio: folio to charge.
  * @mm: mm context of the victim
  * @gfp: reclaim mode
- * @entry: swap entry for which the page is allocated
+ * @entry: swap entry for which the folio is allocated
  *
- * This function charges a page allocated for swapin. Please call this before
- * adding the page to the swapcache.
+ * This function charges a folio allocated for swapin. Please call this before
+ * adding the folio to the swapcache.
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
+int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry)
 {
-	struct folio *folio = page_folio(page);
 	struct mem_cgroup *memcg;
 	unsigned short id;
 	int ret;
diff --git a/mm/memory.c b/mm/memory.c
index 471102f0cbf2..23b164bf3c70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3777,7 +3777,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 				__folio_set_locked(folio);
 				__folio_set_swapbacked(folio);
 
-				if (mem_cgroup_swapin_charge_page(page,
+				if (mem_cgroup_swapin_charge_folio(folio,
 							vma->vm_mm, GFP_KERNEL,
 							entry)) {
 					ret = VM_FAULT_OOM;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ea354efd3735..a7e0438902dd 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -480,7 +480,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	__folio_set_locked(folio);
 	__folio_set_swapbacked(folio);
 
-	if (mem_cgroup_swapin_charge_page(&folio->page, NULL, gfp_mask, entry))
+	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
 		goto fail_unlock;
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 20/59] shmem: Convert shmem_mfill_atomic_pte() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (18 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 19/59] memcg: Convert mem_cgroup_swapin_charge_page() to mem_cgroup_swapin_charge_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 21/59] shmem: Convert shmem_replace_page() to shmem_replace_folio() Matthew Wilcox (Oracle)
                   ` (39 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Assert that this is a single-page folio as there are several assumptions
in here that it's exactly PAGE_SIZE bytes large.  Saves several calls
to compound_head() and removes the last caller of shmem_alloc_page().
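
Illustrative only: the helpers in the copy path only touch the first page
of the folio, which is why the new single-page assertion is enough.

	VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
	page_kaddr = kmap_local_folio(folio, 0);	/* maps the first page only */
	ret = copy_from_user(page_kaddr, (const void __user *)src_addr, PAGE_SIZE);
	kunmap_local(page_kaddr);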

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 45 +++++++++++++++++++--------------------------
 1 file changed, 19 insertions(+), 26 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4693edb33648..e7fd1dfb2895 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2353,12 +2353,6 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir,
 }
 
 #ifdef CONFIG_USERFAULTFD
-static struct page *shmem_alloc_page(gfp_t gfp,
-			struct shmem_inode_info *info, pgoff_t index)
-{
-	return &shmem_alloc_folio(gfp, info, index)->page;
-}
-
 int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 			   pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
@@ -2374,7 +2368,6 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
 	void *page_kaddr;
 	struct folio *folio;
-	struct page *page;
 	int ret;
 	pgoff_t max_off;
 
@@ -2393,53 +2386,53 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 
 	if (!*pagep) {
 		ret = -ENOMEM;
-		page = shmem_alloc_page(gfp, info, pgoff);
-		if (!page)
+		folio = shmem_alloc_folio(gfp, info, pgoff);
+		if (!folio)
 			goto out_unacct_blocks;
 
 		if (!zeropage) {	/* COPY */
-			page_kaddr = kmap_atomic(page);
+			page_kaddr = kmap_local_folio(folio, 0);
 			ret = copy_from_user(page_kaddr,
 					     (const void __user *)src_addr,
 					     PAGE_SIZE);
-			kunmap_atomic(page_kaddr);
+			kunmap_local(page_kaddr);
 
 			/* fallback to copy_from_user outside mmap_lock */
 			if (unlikely(ret)) {
-				*pagep = page;
+				*pagep = &folio->page;
 				ret = -ENOENT;
 				/* don't free the page */
 				goto out_unacct_blocks;
 			}
 
-			flush_dcache_page(page);
+			flush_dcache_folio(folio);
 		} else {		/* ZEROPAGE */
-			clear_user_highpage(page, dst_addr);
+			clear_user_highpage(&folio->page, dst_addr);
 		}
 	} else {
-		page = *pagep;
+		folio = page_folio(*pagep);
+		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
 		*pagep = NULL;
 	}
 
-	VM_BUG_ON(PageLocked(page));
-	VM_BUG_ON(PageSwapBacked(page));
-	__SetPageLocked(page);
-	__SetPageSwapBacked(page);
-	__SetPageUptodate(page);
+	VM_BUG_ON(folio_test_locked(folio));
+	VM_BUG_ON(folio_test_swapbacked(folio));
+	__folio_set_locked(folio);
+	__folio_set_swapbacked(folio);
+	__folio_mark_uptodate(folio);
 
 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(pgoff >= max_off))
 		goto out_release;
 
-	folio = page_folio(page);
 	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL,
 				      gfp & GFP_RECLAIM_MASK, dst_mm);
 	if (ret)
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-				       page, true, wp_copy);
+				       &folio->page, true, wp_copy);
 	if (ret)
 		goto out_delete_from_cache;
 
@@ -2449,13 +2442,13 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
 
-	unlock_page(page);
+	folio_unlock(folio);
 	return 0;
 out_delete_from_cache:
-	delete_from_page_cache(page);
+	filemap_remove_folio(folio);
 out_release:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 out_unacct_blocks:
 	shmem_inode_unacct_blocks(inode, 1);
 	return ret;
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 21/59] shmem: Convert shmem_replace_page() to shmem_replace_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (19 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 20/59] shmem: Convert shmem_mfill_atomic_pte() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 22/59] swap: Add swap_cache_get_folio() Matthew Wilcox (Oracle)
                   ` (38 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

The caller has a folio, so convert the calling convention and rename
the function.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e7fd1dfb2895..ac2b2ebfc9c9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1608,7 +1608,7 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
 	return folio_zonenum(folio) > gfp_zone(gfp);
 }
 
-static int shmem_replace_page(struct page **pagep, gfp_t gfp,
+static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 				struct shmem_inode_info *info, pgoff_t index)
 {
 	struct folio *old, *new;
@@ -1617,7 +1617,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 	pgoff_t swap_index;
 	int error;
 
-	old = page_folio(*pagep);
+	old = *foliop;
 	entry = folio_swap_entry(old);
 	swap_index = swp_offset(entry);
 	swap_mapping = swap_address_space(entry);
@@ -1664,7 +1664,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 		old = new;
 	} else {
 		folio_add_lru(new);
-		*pagep = &new->page;
+		*foliop = new;
 	}
 
 	folio_clear_swapcache(old);
@@ -1770,8 +1770,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	arch_swap_restore(swap, folio);
 
 	if (shmem_should_replace_folio(folio, gfp)) {
-		error = shmem_replace_page(&page, gfp, info, index);
-		folio = page_folio(page);
+		error = shmem_replace_folio(&folio, gfp, info, index);
 		if (error)
 			goto failed;
 	}
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 22/59] swap: Add swap_cache_get_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (20 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 21/59] shmem: Convert shmem_replace_page() to shmem_replace_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 23/59] shmem: Eliminate struct page from shmem_swapin_folio() Matthew Wilcox (Oracle)
                   ` (37 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Convert lookup_swap_cache() into swap_cache_get_folio() and add a
lookup_swap_cache() wrapper around it.
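
A sketch of the relationship (illustrative only; the wrapper body is in the
diff below): folio-aware callers move to swap_cache_get_folio(), while
lookup_swap_cache() keeps returning the precise page.

	folio = swap_cache_get_folio(entry, vma, addr);
	if (!folio)
		return NULL;
	return folio_file_page(folio, swp_offset(entry));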

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swap.h       |  2 ++
 mm/swap_state.c | 32 +++++++++++++++++++++-----------
 2 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 0e023765e110..f70ff34dab82 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -41,6 +41,8 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
+struct folio *swap_cache_get_folio(swp_entry_t entry,
+		struct vm_area_struct *vma, unsigned long addr);
 struct page *lookup_swap_cache(swp_entry_t entry,
 			       struct vm_area_struct *vma,
 			       unsigned long addr);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index a7e0438902dd..b96bf4ec8b5b 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -317,24 +317,24 @@ static inline bool swap_use_vma_readahead(void)
 }
 
 /*
- * Lookup a swap entry in the swap cache. A found page will be returned
+ * Lookup a swap entry in the swap cache. A found folio will be returned
  * unlocked and with its refcount incremented - we rely on the kernel
- * lock getting page table operations atomic even if we drop the page
+ * lock getting page table operations atomic even if we drop the folio
  * lock before returning.
  */
-struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
-			       unsigned long addr)
+struct folio *swap_cache_get_folio(swp_entry_t entry,
+		struct vm_area_struct *vma, unsigned long addr)
 {
-	struct page *page;
+	struct folio *folio;
 	struct swap_info_struct *si;
 
 	si = get_swap_device(entry);
 	if (!si)
 		return NULL;
-	page = find_get_page(swap_address_space(entry), swp_offset(entry));
+	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
 	put_swap_device(si);
 
-	if (page) {
+	if (folio) {
 		bool vma_ra = swap_use_vma_readahead();
 		bool readahead;
 
@@ -342,10 +342,10 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
 		 * At the moment, we don't support PG_readahead for anon THP
 		 * so let's bail out rather than confusing the readahead stat.
 		 */
-		if (unlikely(PageTransCompound(page)))
-			return page;
+		if (unlikely(folio_test_large(folio)))
+			return folio;
 
-		readahead = TestClearPageReadahead(page);
+		readahead = folio_test_clear_readahead(folio);
 		if (vma && vma_ra) {
 			unsigned long ra_val;
 			int win, hits;
@@ -366,7 +366,17 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
 		}
 	}
 
-	return page;
+	return folio;
+}
+
+struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
+			       unsigned long addr)
+{
+	struct folio *folio = swap_cache_get_folio(entry, vma, addr);
+
+	if (!folio)
+		return NULL;
+	return folio_file_page(folio, swp_offset(entry));
 }
 
 /**
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 23/59] shmem: Eliminate struct page from shmem_swapin_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (21 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 22/59] swap: Add swap_cache_get_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 24/59] shmem: Convert shmem_getpage_gfp() to shmem_get_folio_gfp() Matthew Wilcox (Oracle)
                   ` (36 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Convert shmem_swapin() to return a folio and use swap_cache_get_folio(),
removing all uses of struct page in this function.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index ac2b2ebfc9c9..a83b49a6e1d8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1486,7 +1486,7 @@ static void shmem_pseudo_vma_destroy(struct vm_area_struct *vma)
 	mpol_cond_put(vma->vm_policy);
 }
 
-static struct page *shmem_swapin(swp_entry_t swap, gfp_t gfp,
+static struct folio *shmem_swapin(swp_entry_t swap, gfp_t gfp,
 			struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
@@ -1499,7 +1499,9 @@ static struct page *shmem_swapin(swp_entry_t swap, gfp_t gfp,
 	page = swap_cluster_readahead(swap, gfp, &vmf);
 	shmem_pseudo_vma_destroy(&pvma);
 
-	return page;
+	if (!page)
+		return NULL;
+	return page_folio(page);
 }
 
 /*
@@ -1719,7 +1721,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
-	struct page *page;
 	struct folio *folio = NULL;
 	swp_entry_t swap;
 	int error;
@@ -1732,8 +1733,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		return -EIO;
 
 	/* Look it up and read it in.. */
-	page = lookup_swap_cache(swap, NULL, 0);
-	if (!page) {
+	folio = swap_cache_get_folio(swap, NULL, 0);
+	if (!folio) {
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
 			*fault_type |= VM_FAULT_MAJOR;
@@ -1741,13 +1742,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			count_memcg_event_mm(charge_mm, PGMAJFAULT);
 		}
 		/* Here we actually start the io */
-		page = shmem_swapin(swap, gfp, info, index);
-		if (!page) {
+		folio = shmem_swapin(swap, gfp, info, index);
+		if (!folio) {
 			error = -ENOMEM;
 			goto failed;
 		}
 	}
-	folio = page_folio(page);
 
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
-- 
2.35.1
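Until swap_cluster_readahead() and friends return folios themselves, the bridge at such a boundary has the general shape below; example_swapin_folio() is a made-up wrapper name, not code from the patch:

/* Convert exactly once, at the API boundary; callers only ever see a folio. */
static struct folio *example_swapin_folio(swp_entry_t entry, gfp_t gfp,
					   struct vm_fault *vmf)
{
	struct page *page = swapin_readahead(entry, gfp, vmf);

	return page ? page_folio(page) : NULL;
}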



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 24/59] shmem: Convert shmem_getpage_gfp() to shmem_get_folio_gfp()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (22 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 23/59] shmem: Eliminate struct page from shmem_swapin_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 25/59] shmem: Convert shmem_fault() to use shmem_get_folio_gfp() Matthew Wilcox (Oracle)
                   ` (35 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Convert shmem_getpage_gfp() to shmem_get_folio_gfp() and add a
shmem_getpage_gfp() wrapper for compatibility with current users.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 70 ++++++++++++++++++++++++++++++++----------------------
 1 file changed, 41 insertions(+), 29 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index a83b49a6e1d8..e41214dcb137 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -139,17 +139,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			     struct folio **foliop, enum sgp_type sgp,
 			     gfp_t gfp, struct vm_area_struct *vma,
 			     vm_fault_t *fault_type);
-static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
-		struct page **pagep, enum sgp_type sgp,
-		gfp_t gfp, struct vm_area_struct *vma,
-		struct vm_fault *vmf, vm_fault_t *fault_type);
-
-int shmem_getpage(struct inode *inode, pgoff_t index,
-		struct page **pagep, enum sgp_type sgp)
-{
-	return shmem_getpage_gfp(inode, index, pagep, sgp,
-		mapping_gfp_mask(inode->i_mapping), NULL, NULL, NULL);
-}
 
 static inline struct shmem_sb_info *SHMEM_SB(struct super_block *sb)
 {
@@ -1595,7 +1584,7 @@ static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
 
 /*
  * When a page is moved from swapcache to shmem filecache (either by the
- * usual swapin of shmem_getpage_gfp(), or by the less common swapoff of
+ * usual swapin of shmem_get_folio_gfp(), or by the less common swapoff of
  * shmem_unuse_inode()), it may have been read in earlier from swap, in
  * ignorance of the mapping it belongs to.  If that mapping has special
  * constraints (like the gma500 GEM driver, which requires RAM below 4GB),
@@ -1810,7 +1799,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 }
 
 /*
- * shmem_getpage_gfp - find page in cache, or get from swap, or allocate
+ * shmem_get_folio_gfp - find page in cache, or get from swap, or allocate
  *
  * If we allocate a new one we do not mark it dirty. That's up to the
  * vm. If we swap it in we mark it dirty since we also free the swap
@@ -1819,10 +1808,10 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
  * vma, vmf, and fault_type are only supplied by shmem_fault:
  * otherwise they are NULL.
  */
-static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
-	struct page **pagep, enum sgp_type sgp, gfp_t gfp,
-	struct vm_area_struct *vma, struct vm_fault *vmf,
-			vm_fault_t *fault_type)
+static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
+		struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
+		struct vm_area_struct *vma, struct vm_fault *vmf,
+		vm_fault_t *fault_type)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -1862,7 +1851,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		if (error == -EEXIST)
 			goto repeat;
 
-		*pagep = &folio->page;
+		*foliop = folio;
 		return error;
 	}
 
@@ -1872,7 +1861,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 			folio_mark_accessed(folio);
 		if (folio_test_uptodate(folio))
 			goto out;
-		/* fallocated page */
+		/* fallocated folio */
 		if (sgp != SGP_READ)
 			goto clear;
 		folio_unlock(folio);
@@ -1880,10 +1869,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 	/*
-	 * SGP_READ: succeed on hole, with NULL page, letting caller zero.
-	 * SGP_NOALLOC: fail on hole, with NULL page, letting caller fail.
+	 * SGP_READ: succeed on hole, with NULL folio, letting caller zero.
+	 * SGP_NOALLOC: fail on hole, with NULL folio, letting caller fail.
 	 */
-	*pagep = NULL;
+	*foliop = NULL;
 	if (sgp == SGP_READ)
 		return 0;
 	if (sgp == SGP_NOALLOC)
@@ -1916,7 +1905,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		if (error != -ENOSPC)
 			goto unlock;
 		/*
-		 * Try to reclaim some space by splitting a huge page
+		 * Try to reclaim some space by splitting a large folio
 		 * beyond i_size on the filesystem.
 		 */
 		while (retry--) {
@@ -1952,9 +1941,9 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 
 	if (folio_test_pmd_mappable(folio) &&
 	    DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) <
-			hindex + HPAGE_PMD_NR - 1) {
+					folio_next_index(folio) - 1) {
 		/*
-		 * Part of the huge page is beyond i_size: subject
+		 * Part of the large folio is beyond i_size: subject
 		 * to shrink under memory pressure.
 		 */
 		spin_lock(&sbinfo->shrinklist_lock);
@@ -1971,14 +1960,14 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 	/*
-	 * Let SGP_FALLOC use the SGP_WRITE optimization on a new page.
+	 * Let SGP_FALLOC use the SGP_WRITE optimization on a new folio.
 	 */
 	if (sgp == SGP_FALLOC)
 		sgp = SGP_WRITE;
 clear:
 	/*
-	 * Let SGP_WRITE caller clear ends if write does not fill page;
-	 * but SGP_FALLOC on a page fallocated earlier must initialize
+	 * Let SGP_WRITE caller clear ends if write does not fill folio;
+	 * but SGP_FALLOC on a folio fallocated earlier must initialize
 	 * it now, lest undo on failure cancel our earlier guarantee.
 	 */
 	if (sgp != SGP_WRITE && !folio_test_uptodate(folio)) {
@@ -2004,7 +1993,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		goto unlock;
 	}
 out:
-	*pagep = folio_page(folio, index - hindex);
+	*foliop = folio;
 	return 0;
 
 	/*
@@ -2034,6 +2023,29 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	return error;
 }
 
+static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
+		struct page **pagep, enum sgp_type sgp,
+		gfp_t gfp, struct vm_area_struct *vma,
+		struct vm_fault *vmf, vm_fault_t *fault_type)
+{
+	struct folio *folio = NULL;
+	int ret = shmem_get_folio_gfp(inode, index, &folio, sgp, gfp, vma,
+			vmf, fault_type);
+
+	if (folio)
+		*pagep = folio_file_page(folio, index);
+	else
+		*pagep = NULL;
+	return ret;
+}
+
+int shmem_getpage(struct inode *inode, pgoff_t index,
+		struct page **pagep, enum sgp_type sgp)
+{
+	return shmem_getpage_gfp(inode, index, pagep, sgp,
+		mapping_gfp_mask(inode->i_mapping), NULL, NULL, NULL);
+}
+
 /*
  * This is like autoremove_wake_function, but it removes the wait queue
  * entry unconditionally - even if something else had already woken the
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 25/59] shmem: Convert shmem_fault() to use shmem_get_folio_gfp()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (23 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 24/59] shmem: Convert shmem_getpage_gfp() to shmem_get_folio_gfp() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 26/59] shmem: Convert shmem_read_mapping_page_gfp() " Matthew Wilcox (Oracle)
                   ` (34 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

No particular advantage for this function, but necessary to remove
shmem_getpage_gfp().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e41214dcb137..f9654008950e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2063,6 +2063,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct inode *inode = file_inode(vma->vm_file);
 	gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
+	struct folio *folio;
 	int err;
 	vm_fault_t ret = VM_FAULT_LOCKED;
 
@@ -2125,10 +2126,11 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 		spin_unlock(&inode->i_lock);
 	}
 
-	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, SGP_CACHE,
+	err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
 				  gfp, vma, vmf, &ret);
 	if (err)
 		return vmf_error(err);
+	vmf->page = folio_file_page(folio, vmf->pgoff);
 	return ret;
 }
 
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 26/59] shmem: Convert shmem_read_mapping_page_gfp() to use shmem_get_folio_gfp()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (24 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 25/59] shmem: Convert shmem_fault() to use shmem_get_folio_gfp() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 27/59] shmem: Add shmem_get_folio() Matthew Wilcox (Oracle)
                   ` (33 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves a couple of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f9654008950e..eaaadb7d31b7 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -4258,18 +4258,20 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 {
 #ifdef CONFIG_SHMEM
 	struct inode *inode = mapping->host;
+	struct folio *folio;
 	struct page *page;
 	int error;
 
 	BUG_ON(!shmem_mapping(mapping));
-	error = shmem_getpage_gfp(inode, index, &page, SGP_CACHE,
+	error = shmem_get_folio_gfp(inode, index, &folio, SGP_CACHE,
 				  gfp, NULL, NULL, NULL);
 	if (error)
 		return ERR_PTR(error);
 
-	unlock_page(page);
+	folio_unlock(folio);
+	page = folio_file_page(folio, index);
 	if (PageHWPoison(page)) {
-		put_page(page);
+		folio_put(folio);
 		return ERR_PTR(-EIO);
 	}
 
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 27/59] shmem: Add shmem_get_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (25 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 26/59] shmem: Convert shmem_read_mapping_page_gfp() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 28/59] shmem: Convert shmem_get_partial_folio() to use shmem_get_folio() Matthew Wilcox (Oracle)
                   ` (32 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

With no remaining callers of shmem_getpage_gfp(), add shmem_get_folio()
and reimplement shmem_getpage() as a call to shmem_get_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/shmem_fs.h |  2 ++
 mm/shmem.c               | 23 ++++++++++-------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 1b6c4013f691..0549929261f2 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -118,6 +118,8 @@ enum sgp_type {
 
 extern int shmem_getpage(struct inode *inode, pgoff_t index,
 		struct page **pagep, enum sgp_type sgp);
+int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
+		enum sgp_type sgp);
 
 static inline struct page *shmem_read_mapping_page(
 				struct address_space *mapping, pgoff_t index)
diff --git a/mm/shmem.c b/mm/shmem.c
index eaaadb7d31b7..75d9304b964e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2023,14 +2023,18 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	return error;
 }
 
-static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
-		struct page **pagep, enum sgp_type sgp,
-		gfp_t gfp, struct vm_area_struct *vma,
-		struct vm_fault *vmf, vm_fault_t *fault_type)
+int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
+		enum sgp_type sgp)
+{
+	return shmem_get_folio_gfp(inode, index, foliop, sgp,
+			mapping_gfp_mask(inode->i_mapping), NULL, NULL, NULL);
+}
+
+int shmem_getpage(struct inode *inode, pgoff_t index,
+		struct page **pagep, enum sgp_type sgp)
 {
 	struct folio *folio = NULL;
-	int ret = shmem_get_folio_gfp(inode, index, &folio, sgp, gfp, vma,
-			vmf, fault_type);
+	int ret = shmem_get_folio(inode, index, &folio, sgp);
 
 	if (folio)
 		*pagep = folio_file_page(folio, index);
@@ -2039,13 +2043,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	return ret;
 }
 
-int shmem_getpage(struct inode *inode, pgoff_t index,
-		struct page **pagep, enum sgp_type sgp)
-{
-	return shmem_getpage_gfp(inode, index, pagep, sgp,
-		mapping_gfp_mask(inode->i_mapping), NULL, NULL, NULL);
-}
-
 /*
  * This is like autoremove_wake_function, but it removes the wait queue
  * entry unconditionally - even if something else had already woken the
-- 
2.35.1
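A minimal sketch of a new-style caller; example_read_index() is hypothetical, but the calling convention is the one established above: the folio comes back locked with a reference held, and SGP_READ reports a hole as a NULL folio with a 0 return value.

static int example_read_index(struct inode *inode, pgoff_t index)
{
	struct folio *folio;
	int error;

	error = shmem_get_folio(inode, index, &folio, SGP_READ);
	if (error)
		return error;
	if (!folio)
		return 0;	/* hole: caller treats it as zeroes */

	folio_unlock(folio);
	/* read via folio_address() or folio_file_page(folio, index) here */
	folio_mark_accessed(folio);
	folio_put(folio);
	return 0;
}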



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 28/59] shmem: Convert shmem_get_partial_folio() to use shmem_get_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (26 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 27/59] shmem: Add shmem_get_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 29/59] shmem: Convert shmem_write_begin() " Matthew Wilcox (Oracle)
                   ` (31 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Get rid of an unnecessary folio->page->folio conversion.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 75d9304b964e..b26ab3167040 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -874,10 +874,9 @@ void shmem_unlock_mapping(struct address_space *mapping)
 static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
 {
 	struct folio *folio;
-	struct page *page;
 
 	/*
-	 * At first avoid shmem_getpage(,,,SGP_READ): that fails
+	 * At first avoid shmem_get_folio(,,,SGP_READ): that fails
 	 * beyond i_size, and reports fallocated pages as holes.
 	 */
 	folio = __filemap_get_folio(inode->i_mapping, index,
@@ -888,9 +887,9 @@ static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
 	 * But read a page back from swap if any of it is within i_size
 	 * (although in some cases this is just a waste of time).
 	 */
-	page = NULL;
-	shmem_getpage(inode, index, &page, SGP_READ);
-	return page ? page_folio(page) : NULL;
+	folio = NULL;
+	shmem_get_folio(inode, index, &folio, SGP_READ);
+	return folio;
 }
 
 /*
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 29/59] shmem: Convert shmem_write_begin() to use shmem_get_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (27 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 28/59] shmem: Convert shmem_get_partial_folio() to use shmem_get_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 30/59] shmem: Convert shmem_file_read_iter() " Matthew Wilcox (Oracle)
                   ` (30 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Use a folio throughout this function, saving a couple of calls to
compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index b26ab3167040..45212001effd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2482,6 +2482,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	pgoff_t index = pos >> PAGE_SHIFT;
+	struct folio *folio;
 	int ret = 0;
 
 	/* i_rwsem is held by caller */
@@ -2493,14 +2494,15 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 			return -EPERM;
 	}
 
-	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
+	ret = shmem_get_folio(inode, index, &folio, SGP_WRITE);
 
 	if (ret)
 		return ret;
 
+	*pagep = folio_file_page(folio, index);
 	if (PageHWPoison(*pagep)) {
-		unlock_page(*pagep);
-		put_page(*pagep);
+		folio_unlock(folio);
+		folio_put(folio);
 		*pagep = NULL;
 		return -EIO;
 	}
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 30/59] shmem: Convert shmem_file_read_iter() to use shmem_get_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (28 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 29/59] shmem: Convert shmem_write_begin() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:33 ` [PATCH 31/59] shmem: Convert shmem_fallocate() to use a folio Matthew Wilcox (Oracle)
                   ` (29 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Use a folio throughout, saving five calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 45212001effd..51a1934f8353 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2561,6 +2561,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	offset = *ppos & ~PAGE_MASK;
 
 	for (;;) {
+		struct folio *folio = NULL;
 		struct page *page = NULL;
 		pgoff_t end_index;
 		unsigned long nr, ret;
@@ -2575,17 +2576,18 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 				break;
 		}
 
-		error = shmem_getpage(inode, index, &page, SGP_READ);
+		error = shmem_get_folio(inode, index, &folio, SGP_READ);
 		if (error) {
 			if (error == -EINVAL)
 				error = 0;
 			break;
 		}
-		if (page) {
-			unlock_page(page);
+		if (folio) {
+			folio_unlock(folio);
 
+			page = folio_file_page(folio, index);
 			if (PageHWPoison(page)) {
-				put_page(page);
+				folio_put(folio);
 				error = -EIO;
 				break;
 			}
@@ -2601,14 +2603,14 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		if (index == end_index) {
 			nr = i_size & ~PAGE_MASK;
 			if (nr <= offset) {
-				if (page)
-					put_page(page);
+				if (folio)
+					folio_put(folio);
 				break;
 			}
 		}
 		nr -= offset;
 
-		if (page) {
+		if (folio) {
 			/*
 			 * If users can be writing to this page using arbitrary
 			 * virtual addresses, take care about potential aliasing
@@ -2620,13 +2622,13 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 			 * Mark the page accessed if we read the beginning.
 			 */
 			if (!offset)
-				mark_page_accessed(page);
+				folio_mark_accessed(folio);
 			/*
 			 * Ok, we have the page, and it's up-to-date, so
 			 * now we can copy it to user space...
 			 */
 			ret = copy_page_to_iter(page, offset, nr, to);
-			put_page(page);
+			folio_put(folio);
 
 		} else if (iter_is_iovec(to)) {
 			/*
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 31/59] shmem: Convert shmem_fallocate() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (29 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 30/59] shmem: Convert shmem_file_read_iter() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:33 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 32/59] shmem: Convert shmem_symlink() " Matthew Wilcox (Oracle)
                   ` (28 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:33 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Call shmem_get_folio() and use the folio APIs instead of the page APIs.
Saves several calls to compound_head() and removes assumptions about
the size of a large folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 36 +++++++++++++++++-------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 51a1934f8353..fda56776b510 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2771,7 +2771,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		info->fallocend = end;
 
 	for (index = start; index < end; ) {
-		struct page *page;
+		struct folio *folio;
 
 		/*
 		 * Good, the fallocate(2) manpage permits EINTR: we may have
@@ -2782,10 +2782,11 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
 			error = -ENOMEM;
 		else
-			error = shmem_getpage(inode, index, &page, SGP_FALLOC);
+			error = shmem_get_folio(inode, index, &folio,
+						SGP_FALLOC);
 		if (error) {
 			info->fallocend = undo_fallocend;
-			/* Remove the !PageUptodate pages we added */
+			/* Remove the !uptodate folios we added */
 			if (index > start) {
 				shmem_undo_range(inode,
 				    (loff_t)start << PAGE_SHIFT,
@@ -2794,37 +2795,34 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto undone;
 		}
 
-		index++;
 		/*
 		 * Here is a more important optimization than it appears:
-		 * a second SGP_FALLOC on the same huge page will clear it,
-		 * making it PageUptodate and un-undoable if we fail later.
+		 * a second SGP_FALLOC on the same large folio will clear it,
+		 * making it uptodate and un-undoable if we fail later.
 		 */
-		if (PageTransCompound(page)) {
-			index = round_up(index, HPAGE_PMD_NR);
-			/* Beware 32-bit wraparound */
-			if (!index)
-				index--;
-		}
+		index = folio_next_index(folio);
+		/* Beware 32-bit wraparound */
+		if (!index)
+			index--;
 
 		/*
 		 * Inform shmem_writepage() how far we have reached.
 		 * No need for lock or barrier: we have the page lock.
 		 */
-		if (!PageUptodate(page))
+		if (!folio_test_uptodate(folio))
 			shmem_falloc.nr_falloced += index - shmem_falloc.next;
 		shmem_falloc.next = index;
 
 		/*
-		 * If !PageUptodate, leave it that way so that freeable pages
+		 * If !uptodate, leave it that way so that freeable folios
 		 * can be recognized if we need to rollback on error later.
-		 * But set_page_dirty so that memory pressure will swap rather
-		 * than free the pages we are allocating (and SGP_CACHE pages
+		 * But mark it dirty so that memory pressure will swap rather
+		 * than free the folios we are allocating (and SGP_CACHE folios
 		 * might still be clean: we now need to mark those dirty too).
 		 */
-		set_page_dirty(page);
-		unlock_page(page);
-		put_page(page);
+		folio_mark_dirty(folio);
+		folio_unlock(folio);
+		folio_put(folio);
 		cond_resched();
 	}
 
-- 
2.35.1
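Stripped of the fallocate bookkeeping, the index-advance idiom above is the generic way to walk a range in folio-sized steps.  A sketch, with a hypothetical helper name:

static void example_walk_range(struct inode *inode, pgoff_t start, pgoff_t end)
{
	pgoff_t index = start;

	while (index < end) {
		struct folio *folio;

		if (shmem_get_folio(inode, index, &folio, SGP_CACHE))
			break;
		/*
		 * Advance by however many pages this folio spans, with no
		 * HPAGE_PMD_NR assumption.  (The fallocate loop above also
		 * guards against 32-bit wraparound of the index here.)
		 */
		index = folio_next_index(folio);
		folio_unlock(folio);
		folio_put(folio);
	}
}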



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 32/59] shmem: Convert shmem_symlink() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (30 preceding siblings ...)
  2022-08-08 19:33 ` [PATCH 31/59] shmem: Convert shmem_fallocate() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 33/59] shmem: Convert shmem_get_link() " Matthew Wilcox (Oracle)
                   ` (27 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

While symlinks will always be < PAGE_SIZE, using the folio APIs gets
rid of unnecessary calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index fda56776b510..9af2292e257b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3076,7 +3076,7 @@ static int shmem_symlink(struct user_namespace *mnt_userns, struct inode *dir,
 	int error;
 	int len;
 	struct inode *inode;
-	struct page *page;
+	struct folio *folio;
 
 	len = strlen(symname) + 1;
 	if (len > PAGE_SIZE)
@@ -3104,18 +3104,18 @@ static int shmem_symlink(struct user_namespace *mnt_userns, struct inode *dir,
 		inode->i_op = &shmem_short_symlink_operations;
 	} else {
 		inode_nohighmem(inode);
-		error = shmem_getpage(inode, 0, &page, SGP_WRITE);
+		error = shmem_get_folio(inode, 0, &folio, SGP_WRITE);
 		if (error) {
 			iput(inode);
 			return error;
 		}
 		inode->i_mapping->a_ops = &shmem_aops;
 		inode->i_op = &shmem_symlink_inode_operations;
-		memcpy(page_address(page), symname, len);
-		SetPageUptodate(page);
-		set_page_dirty(page);
-		unlock_page(page);
-		put_page(page);
+		memcpy(folio_address(folio), symname, len);
+		folio_mark_uptodate(folio);
+		folio_mark_dirty(folio);
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 	dir->i_size += BOGO_DIRENT_SIZE;
 	dir->i_ctime = dir->i_mtime = current_time(dir);
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 33/59] shmem: Convert shmem_get_link() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (31 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 32/59] shmem: Convert shmem_symlink() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 34/59] khugepaged: Call shmem_get_folio() Matthew Wilcox (Oracle)
                   ` (26 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Symlinks will never use a large folio, but using the folio API removes
a lot of unnecessary folio->page->folio conversions.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 9af2292e257b..128970a74aa2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3126,40 +3126,41 @@ static int shmem_symlink(struct user_namespace *mnt_userns, struct inode *dir,
 
 static void shmem_put_link(void *arg)
 {
-	mark_page_accessed(arg);
-	put_page(arg);
+	folio_mark_accessed(arg);
+	folio_put(arg);
 }
 
 static const char *shmem_get_link(struct dentry *dentry,
 				  struct inode *inode,
 				  struct delayed_call *done)
 {
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	int error;
+
 	if (!dentry) {
-		page = find_get_page(inode->i_mapping, 0);
-		if (!page)
+		folio = filemap_get_folio(inode->i_mapping, 0);
+		if (!folio)
 			return ERR_PTR(-ECHILD);
-		if (PageHWPoison(page) ||
-		    !PageUptodate(page)) {
-			put_page(page);
+		if (PageHWPoison(&folio->page) ||
+		    !folio_test_uptodate(folio)) {
+			folio_put(folio);
 			return ERR_PTR(-ECHILD);
 		}
 	} else {
-		error = shmem_getpage(inode, 0, &page, SGP_READ);
+		error = shmem_get_folio(inode, 0, &folio, SGP_READ);
 		if (error)
 			return ERR_PTR(error);
-		if (!page)
+		if (!folio)
 			return ERR_PTR(-ECHILD);
-		if (PageHWPoison(page)) {
-			unlock_page(page);
-			put_page(page);
+		if (PageHWPoison(&folio->page)) {
+			folio_unlock(folio);
+			folio_put(folio);
 			return ERR_PTR(-ECHILD);
 		}
-		unlock_page(page);
+		folio_unlock(folio);
 	}
-	set_delayed_call(done, shmem_put_link, page);
-	return page_address(page);
+	set_delayed_call(done, shmem_put_link, folio);
+	return folio_address(folio);
 }
 
 #ifdef CONFIG_TMPFS_XATTR
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 34/59] khugepaged: Call shmem_get_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (32 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 33/59] shmem: Convert shmem_get_link() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 35/59] userfaultfd: Convert mcontinue_atomic_pte() to use a folio Matthew Wilcox (Oracle)
                   ` (25 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

shmem_getpage() is being removed, so call its replacement and find the
precise page ourselves.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/khugepaged.c | 7 +++++--
 mm/shmem.c      | 4 ++--
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 01f71786d530..6cf4bbd22fc9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1682,13 +1682,16 @@ static void collapse_file(struct mm_struct *mm,
 			}
 
 			if (xa_is_value(page) || !PageUptodate(page)) {
+				struct folio *folio;
+
 				xas_unlock_irq(&xas);
 				/* swap in or instantiate fallocated page */
-				if (shmem_getpage(mapping->host, index, &page,
-						  SGP_NOALLOC)) {
+				if (shmem_get_folio(mapping->host, index,
+						&folio, SGP_NOALLOC)) {
 					result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
+				page = folio_file_page(folio, index);
 			} else if (trylock_page(page)) {
 				get_page(page);
 				xas_unlock_irq(&xas);
diff --git a/mm/shmem.c b/mm/shmem.c
index 128970a74aa2..da7ce4039430 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3141,7 +3141,7 @@ static const char *shmem_get_link(struct dentry *dentry,
 		folio = filemap_get_folio(inode->i_mapping, 0);
 		if (!folio)
 			return ERR_PTR(-ECHILD);
-		if (PageHWPoison(&folio->page) ||
+		if (PageHWPoison(folio_page(folio, 0)) ||
 		    !folio_test_uptodate(folio)) {
 			folio_put(folio);
 			return ERR_PTR(-ECHILD);
@@ -3152,7 +3152,7 @@ static const char *shmem_get_link(struct dentry *dentry,
 			return ERR_PTR(error);
 		if (!folio)
 			return ERR_PTR(-ECHILD);
-		if (PageHWPoison(&folio->page)) {
+		if (PageHWPoison(folio_page(folio, 0))) {
 			folio_unlock(folio);
 			folio_put(folio);
 			return ERR_PTR(-ECHILD);
-- 
2.35.1
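The pattern for callers that still traffic in struct page is the same throughout the series: take the folio, then derive the precise subpage only where a page is genuinely needed.  A sketch (hypothetical helper; the folio is returned locked with a reference, which the caller must still drop):

static struct page *example_find_subpage(struct address_space *mapping,
					 pgoff_t index)
{
	struct folio *folio;

	if (shmem_get_folio(mapping->host, index, &folio, SGP_NOALLOC))
		return NULL;
	/* the subpage of the (possibly large) folio backing this index */
	return folio_file_page(folio, index);
}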



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 35/59] userfaultfd: Convert mcontinue_atomic_pte() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (33 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 34/59] khugepaged: Call shmem_get_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 36/59] shmem: Remove shmem_getpage() Matthew Wilcox (Oracle)
                   ` (24 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

shmem_getpage() is being replaced by shmem_get_folio(), so use a folio
throughout this function.  Saves several calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/userfaultfd.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 07d3befc80e4..823f2da7aba5 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -243,20 +243,22 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
 {
 	struct inode *inode = file_inode(dst_vma->vm_file);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
+	struct folio *folio;
 	struct page *page;
 	int ret;
 
-	ret = shmem_getpage(inode, pgoff, &page, SGP_NOALLOC);
-	/* Our caller expects us to return -EFAULT if we failed to find page. */
+	ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC);
+	/* Our caller expects us to return -EFAULT if we failed to find folio */
 	if (ret == -ENOENT)
 		ret = -EFAULT;
 	if (ret)
 		goto out;
-	if (!page) {
+	if (!folio) {
 		ret = -EFAULT;
 		goto out;
 	}
 
+	page = folio_file_page(folio, pgoff);
 	if (PageHWPoison(page)) {
 		ret = -EIO;
 		goto out_release;
@@ -267,13 +269,13 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release;
 
-	unlock_page(page);
+	folio_unlock(folio);
 	ret = 0;
 out:
 	return ret;
 out_release:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 	goto out;
 }
 
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 36/59] shmem: Remove shmem_getpage()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (34 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 35/59] userfaultfd: Convert mcontinue_atomic_pte() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 37/59] swapfile: Convert try_to_unuse() to use a folio Matthew Wilcox (Oracle)
                   ` (23 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

With all callers removed, remove this wrapper function.  The flags
are now mysteriously called SGP, but I think we can live with that.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/shmem_fs.h |  4 +---
 mm/shmem.c               | 15 +--------------
 2 files changed, 2 insertions(+), 17 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 0549929261f2..5c8cc23fcefe 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -107,7 +107,7 @@ extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
 extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 						pgoff_t start, pgoff_t end);
 
-/* Flag allocation requirements to shmem_getpage */
+/* Flag allocation requirements to shmem_get_folio */
 enum sgp_type {
 	SGP_READ,	/* don't exceed i_size, don't allocate page */
 	SGP_NOALLOC,	/* similar, but fail on hole or use fallocated page */
@@ -116,8 +116,6 @@ enum sgp_type {
 	SGP_FALLOC,	/* like SGP_WRITE, but make existing page Uptodate */
 };
 
-extern int shmem_getpage(struct inode *inode, pgoff_t index,
-		struct page **pagep, enum sgp_type sgp);
 int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
 		enum sgp_type sgp);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index da7ce4039430..3cdfb061b2ff 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -179,7 +179,7 @@ static inline int shmem_reacct_size(unsigned long flags,
 /*
  * ... whereas tmpfs objects are accounted incrementally as
  * pages are allocated, in order to allow large sparse files.
- * shmem_getpage reports shmem_acct_block failure as -ENOSPC not -ENOMEM,
+ * shmem_get_folio reports shmem_acct_block failure as -ENOSPC not -ENOMEM,
  * so that a failure on a sparse tmpfs mapping will give SIGBUS not OOM.
  */
 static inline int shmem_acct_block(unsigned long flags, long pages)
@@ -2029,19 +2029,6 @@ int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
 			mapping_gfp_mask(inode->i_mapping), NULL, NULL, NULL);
 }
 
-int shmem_getpage(struct inode *inode, pgoff_t index,
-		struct page **pagep, enum sgp_type sgp)
-{
-	struct folio *folio = NULL;
-	int ret = shmem_get_folio(inode, index, &folio, sgp);
-
-	if (folio)
-		*pagep = folio_file_page(folio, index);
-	else
-		*pagep = NULL;
-	return ret;
-}
-
 /*
  * This is like autoremove_wake_function, but it removes the wait queue
  * entry unconditionally - even if something else had already woken the
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 37/59] swapfile: Convert try_to_unuse() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (35 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 36/59] shmem: Remove shmem_getpage() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 38/59] swapfile: Convert __try_to_reclaim_swap() " Matthew Wilcox (Oracle)
                   ` (22 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves five calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swapfile.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 186511a8ef4f..df85eb73f34e 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2026,7 +2026,7 @@ static int try_to_unuse(unsigned int type)
 	struct list_head *p;
 	int retval = 0;
 	struct swap_info_struct *si = swap_info[type];
-	struct page *page;
+	struct folio *folio;
 	swp_entry_t entry;
 	unsigned int i;
 
@@ -2076,21 +2076,21 @@ static int try_to_unuse(unsigned int type)
 	       (i = find_next_to_unuse(si, i)) != 0) {
 
 		entry = swp_entry(type, i);
-		page = find_get_page(swap_address_space(entry), i);
-		if (!page)
+		folio = filemap_get_folio(swap_address_space(entry), i);
+		if (!folio)
 			continue;
 
 		/*
-		 * It is conceivable that a racing task removed this page from
-		 * swap cache just before we acquired the page lock. The page
+		 * It is conceivable that a racing task removed this folio from
+		 * swap cache just before we acquired the page lock. The folio
 		 * might even be back in swap cache on another swap area. But
-		 * that is okay, try_to_free_swap() only removes stale pages.
+		 * that is okay, folio_free_swap() only removes stale folios.
 		 */
-		lock_page(page);
-		wait_on_page_writeback(page);
-		try_to_free_swap(page);
-		unlock_page(page);
-		put_page(page);
+		folio_lock(folio);
+		folio_wait_writeback(folio);
+		folio_free_swap(folio);
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 
 	/*
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 38/59] swapfile: Convert __try_to_reclaim_swap() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (36 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 37/59] swapfile: Convert try_to_unuse() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 39/59] swapfile: Convert unuse_pte_range() " Matthew Wilcox (Oracle)
                   ` (21 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves five calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swapfile.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index df85eb73f34e..ce538c3f9161 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -128,27 +128,27 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 				 unsigned long offset, unsigned long flags)
 {
 	swp_entry_t entry = swp_entry(si->type, offset);
-	struct page *page;
+	struct folio *folio;
 	int ret = 0;
 
-	page = find_get_page(swap_address_space(entry), offset);
-	if (!page)
+	folio = filemap_get_folio(swap_address_space(entry), offset);
+	if (!folio)
 		return 0;
 	/*
 	 * When this function is called from scan_swap_map_slots() and it's
-	 * called by vmscan.c at reclaiming pages. So, we hold a lock on a page,
+	 * called by vmscan.c at reclaiming folios. So we hold a folio lock
 	 * here. We have to use trylock for avoiding deadlock. This is a special
-	 * case and you should use try_to_free_swap() with explicit lock_page()
+	 * case and you should use folio_free_swap() with explicit folio_lock()
 	 * in usual operations.
 	 */
-	if (trylock_page(page)) {
+	if (folio_trylock(folio)) {
 		if ((flags & TTRS_ANYWAY) ||
-		    ((flags & TTRS_UNMAPPED) && !page_mapped(page)) ||
-		    ((flags & TTRS_FULL) && mem_cgroup_swap_full(page)))
-			ret = try_to_free_swap(page);
-		unlock_page(page);
+		    ((flags & TTRS_UNMAPPED) && !folio_mapped(folio)) ||
+		    ((flags & TTRS_FULL) && mem_cgroup_swap_full(&folio->page)))
+			ret = folio_free_swap(folio);
+		folio_unlock(folio);
 	}
-	put_page(page);
+	folio_put(folio);
 	return ret;
 }
 
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 39/59] swapfile: Convert unuse_pte_range() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (37 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 38/59] swapfile: Convert __try_to_reclaim_swap() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 40/59] mm: Convert do_swap_page() to use swap_cache_get_folio() Matthew Wilcox (Oracle)
                   ` (20 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Delay fetching the precise page from the folio until we're in unuse_pte().
Saves many calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swapfile.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index ce538c3f9161..9ee42a12cffc 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1754,8 +1754,9 @@ static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
  * force COW, vm_page_prot omits write permission from any private vma.
  */
 static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, swp_entry_t entry, struct page *page)
+		unsigned long addr, swp_entry_t entry, struct folio *folio)
 {
+	struct page *page = folio_file_page(folio, swp_offset(entry));
 	struct page *swapcache;
 	spinlock_t *ptl;
 	pte_t *pte, new_pte;
@@ -1827,17 +1828,18 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, unsigned long end,
 			unsigned int type)
 {
-	struct page *page;
 	swp_entry_t entry;
 	pte_t *pte;
 	struct swap_info_struct *si;
-	unsigned long offset;
 	int ret = 0;
 	volatile unsigned char *swap_map;
 
 	si = swap_info[type];
 	pte = pte_offset_map(pmd, addr);
 	do {
+		struct folio *folio;
+		unsigned long offset;
+
 		if (!is_swap_pte(*pte))
 			continue;
 
@@ -1848,8 +1850,9 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		offset = swp_offset(entry);
 		pte_unmap(pte);
 		swap_map = &si->swap_map[offset];
-		page = lookup_swap_cache(entry, vma, addr);
-		if (!page) {
+		folio = swap_cache_get_folio(entry, vma, addr);
+		if (!folio) {
+			struct page *page;
 			struct vm_fault vmf = {
 				.vma = vma,
 				.address = addr,
@@ -1859,25 +1862,27 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						&vmf);
+			if (page)
+				folio = page_folio(page);
 		}
-		if (!page) {
+		if (!folio) {
 			if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
 				goto try_next;
 			return -ENOMEM;
 		}
 
-		lock_page(page);
-		wait_on_page_writeback(page);
-		ret = unuse_pte(vma, pmd, addr, entry, page);
+		folio_lock(folio);
+		folio_wait_writeback(folio);
+		ret = unuse_pte(vma, pmd, addr, entry, folio);
 		if (ret < 0) {
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			goto out;
 		}
 
-		try_to_free_swap(page);
-		unlock_page(page);
-		put_page(page);
+		folio_free_swap(folio);
+		folio_unlock(folio);
+		folio_put(folio);
 try_next:
 		pte = pte_offset_map(pmd, addr);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 40/59] mm: Convert do_swap_page() to use swap_cache_get_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (38 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 39/59] swapfile: Convert unuse_pte_range() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 41/59] mm: Remove lookup_swap_cache() Matthew Wilcox (Oracle)
                   ` (19 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves a folio->page->folio conversion.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 23b164bf3c70..5465037c237c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3761,9 +3761,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(!si))
 		goto out;
 
-	page = lookup_swap_cache(entry, vma, vmf->address);
-	if (page)
-		folio = page_folio(page);
+	folio = swap_cache_get_folio(entry, vma, vmf->address);
+	if (folio)
+		page = folio_file_page(folio, swp_offset(entry));
 	swapcache = folio;
 
 	if (!folio) {
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 41/59] mm: Remove lookup_swap_cache()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (39 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 40/59] mm: Convert do_swap_page() to use swap_cache_get_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 42/59] swap_state: Convert free_swap_cache() to use a folio Matthew Wilcox (Oracle)
                   ` (18 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

All callers have now been converted to swap_cache_get_folio(), so
we can remove this wrapper.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memcontrol.c |  2 +-
 mm/swap.h       | 10 ----------
 mm/swap_state.c | 12 +-----------
 3 files changed, 2 insertions(+), 22 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f65833efe90d..3b0698faf1ee 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5561,7 +5561,7 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
 		return NULL;
 
 	/*
-	 * Because lookup_swap_cache() updates some statistics counter,
+	 * Because swap_cache_get_folio() updates some statistics counter,
 	 * we call find_get_page() with swapper_space directly.
 	 */
 	page = find_get_page(swap_address_space(ent), swp_offset(ent));
diff --git a/mm/swap.h b/mm/swap.h
index f70ff34dab82..9551d7815c37 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -43,9 +43,6 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr);
-struct page *lookup_swap_cache(swp_entry_t entry,
-			       struct vm_area_struct *vma,
-			       unsigned long addr);
 struct page *find_get_incore_page(struct address_space *mapping, pgoff_t index);
 
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
@@ -103,13 +100,6 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 	return 0;
 }
 
-static inline struct page *lookup_swap_cache(swp_entry_t swp,
-					     struct vm_area_struct *vma,
-					     unsigned long addr)
-{
-	return NULL;
-}
-
 static inline
 struct page *find_get_incore_page(struct address_space *mapping, pgoff_t index)
 {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b96bf4ec8b5b..4af135a7b53c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -369,16 +369,6 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
 	return folio;
 }
 
-struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
-			       unsigned long addr)
-{
-	struct folio *folio = swap_cache_get_folio(entry, vma, addr);
-
-	if (!folio)
-		return NULL;
-	return folio_file_page(folio, swp_offset(entry));
-}
-
 /**
  * find_get_incore_page - Find and get a page from the page or swap caches.
  * @mapping: The address_space to search.
@@ -430,7 +420,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		int err;
 		/*
 		 * First check the swap cache.  Since this is normally
-		 * called after lookup_swap_cache() failed, re-calling
+		 * called after swap_cache_get_folio() failed, re-calling
 		 * that would confuse statistics.
 		 */
 		si = get_swap_device(entry);
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 42/59] swap_state: Convert free_swap_cache() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (40 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 41/59] mm: Remove lookup_swap_cache() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 43/59] swap: Convert swap_writepage() " Matthew Wilcox (Oracle)
                   ` (17 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves several calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/swap_state.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 4af135a7b53c..438d0676c5be 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -272,16 +272,19 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
 /* 
  * If we are the only user, then try to free up the swap cache. 
  * 
- * Its ok to check for PageSwapCache without the page lock
+ * It's ok to check the swapcache flag without the folio lock
  * here because we are going to recheck again inside
- * try_to_free_swap() _with_ the lock.
+ * folio_free_swap() _with_ the lock.
  * 					- Marcelo
  */
 void free_swap_cache(struct page *page)
 {
-	if (PageSwapCache(page) && !page_mapped(page) && trylock_page(page)) {
-		try_to_free_swap(page);
-		unlock_page(page);
+	struct folio *folio = page_folio(page);
+
+	if (folio_test_swapcache(folio) && !folio_mapped(folio) &&
+	    folio_trylock(folio)) {
+		folio_free_swap(folio);
+		folio_unlock(folio);
 	}
 }
 
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 43/59] swap: Convert swap_writepage() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (41 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 42/59] swap_state: Convert free_swap_cache() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 44/59] mm: Convert do_wp_page() " Matthew Wilcox (Oracle)
                   ` (16 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Removes many calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_io.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 68318134dc92..2a3e3e0a6497 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -180,29 +180,30 @@ int generic_swapfile_activate(struct swap_info_struct *sis,
  */
 int swap_writepage(struct page *page, struct writeback_control *wbc)
 {
+	struct folio *folio = page_folio(page);
 	int ret = 0;
 
-	if (try_to_free_swap(page)) {
-		unlock_page(page);
+	if (folio_free_swap(folio)) {
+		folio_unlock(folio);
 		goto out;
 	}
 	/*
 	 * Arch code may have to preserve more data than just the page
 	 * contents, e.g. memory tags.
 	 */
-	ret = arch_prepare_to_swap(page);
+	ret = arch_prepare_to_swap(&folio->page);
 	if (ret) {
-		set_page_dirty(page);
-		unlock_page(page);
+		folio_mark_dirty(folio);
+		folio_unlock(folio);
 		goto out;
 	}
-	if (frontswap_store(page) == 0) {
-		set_page_writeback(page);
-		unlock_page(page);
-		end_page_writeback(page);
+	if (frontswap_store(&folio->page) == 0) {
+		folio_start_writeback(folio);
+		folio_unlock(folio);
+		folio_end_writeback(folio);
 		goto out;
 	}
-	ret = __swap_writepage(page, wbc, end_swap_bio_write);
+	ret = __swap_writepage(&folio->page, wbc, end_swap_bio_write);
 out:
 	return ret;
 }
-- 
2.35.1



^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 44/59] mm: Convert do_wp_page() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (42 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 43/59] swap: Convert swap_writepage() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 45/59] huge_memory: Convert do_huge_pmd_wp_page() " Matthew Wilcox (Oracle)
                   ` (15 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves many calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5465037c237c..43432b877447 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3362,6 +3362,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
 
 	VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE));
 	VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE));
@@ -3408,48 +3409,47 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	 * Take out anonymous pages first, anonymous shared vmas are
 	 * not dirty accountable.
 	 */
-	if (PageAnon(vmf->page)) {
-		struct page *page = vmf->page;
-
+	folio = page_folio(vmf->page);
+	if (folio_test_anon(folio)) {
 		/*
 		 * If the page is exclusive to this process we must reuse the
 		 * page without further checks.
 		 */
-		if (PageAnonExclusive(page))
+		if (PageAnonExclusive(vmf->page))
 			goto reuse;
 
 		/*
-		 * We have to verify under page lock: these early checks are
-		 * just an optimization to avoid locking the page and freeing
+		 * We have to verify under folio lock: these early checks are
+		 * just an optimization to avoid locking the folio and freeing
 		 * the swapcache if there is little hope that we can reuse.
 		 *
-		 * PageKsm() doesn't necessarily raise the page refcount.
+		 * KSM doesn't necessarily raise the folio refcount.
 		 */
-		if (PageKsm(page) || page_count(page) > 3)
+		if (folio_test_ksm(folio) || folio_ref_count(folio) > 3)
 			goto copy;
-		if (!PageLRU(page))
+		if (!folio_test_lru(folio))
 			/*
 			 * Note: We cannot easily detect+handle references from
-			 * remote LRU pagevecs or references to PageLRU() pages.
+			 * remote LRU pagevecs or references to LRU folios.
 			 */
 			lru_add_drain();
-		if (page_count(page) > 1 + PageSwapCache(page))
+		if (folio_ref_count(folio) > 1 + folio_test_swapcache(folio))
 			goto copy;
-		if (!trylock_page(page))
+		if (!folio_trylock(folio))
 			goto copy;
-		if (PageSwapCache(page))
-			try_to_free_swap(page);
-		if (PageKsm(page) || page_count(page) != 1) {
-			unlock_page(page);
+		if (folio_test_swapcache(folio))
+			folio_free_swap(folio);
+		if (folio_test_ksm(folio) || folio_ref_count(folio) != 1) {
+			folio_unlock(folio);
 			goto copy;
 		}
 		/*
-		 * Ok, we've got the only page reference from our mapping
-		 * and the page is locked, it's dark out, and we're wearing
+		 * Ok, we've got the only folio reference from our mapping
+		 * and the folio is locked, it's dark out, and we're wearing
 		 * sunglasses. Hit it.
 		 */
-		page_move_anon_rmap(page, vma);
-		unlock_page(page);
+		page_move_anon_rmap(vmf->page, vma);
+		folio_unlock(folio);
 reuse:
 		if (unlikely(unshare)) {
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
-- 
2.35.1




* [PATCH 45/59] huge_memory: Convert do_huge_pmd_wp_page() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (43 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 44/59] mm: Convert do_wp_page() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 46/59] madvise: Convert madvise_free_pte_range() " Matthew Wilcox (Oracle)
                   ` (14 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Removes many calls to compound_head().  Does not remove the assumption
that a folio cannot be larger than a PMD.
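
For context, a minimal sketch of the reuse heuristic this feeds into (not
part of the diff below), assuming the folio is mapped by a single PMD and
that the swap cache pins one reference per subpage:

	#include <linux/mm.h>

	/* Sketch only: mirrors the refcount check in the hunk below. */
	static bool sketch_thp_exclusively_ours(struct folio *folio)
	{
		long expected = 1;		/* our PMD mapping */

		if (folio_test_swapcache(folio))
			expected += folio_nr_pages(folio);	/* one per subpage */

		return folio_ref_count(folio) <= expected;
	}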

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8a7c1b344abe..7b998f2083aa 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1313,6 +1313,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	pmd_t orig_pmd = vmf->orig_pmd;
@@ -1334,46 +1335,48 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	}
 
 	page = pmd_page(orig_pmd);
+	folio = page_folio(page);
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
 	/* Early check when only holding the PT lock. */
 	if (PageAnonExclusive(page))
 		goto reuse;
 
-	if (!trylock_page(page)) {
-		get_page(page);
+	if (!folio_trylock(folio)) {
+		folio_get(folio);
 		spin_unlock(vmf->ptl);
-		lock_page(page);
+		folio_lock(folio);
 		spin_lock(vmf->ptl);
 		if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
 			spin_unlock(vmf->ptl);
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			return 0;
 		}
-		put_page(page);
+		folio_put(folio);
 	}
 
 	/* Recheck after temporarily dropping the PT lock. */
 	if (PageAnonExclusive(page)) {
-		unlock_page(page);
+		folio_unlock(folio);
 		goto reuse;
 	}
 
 	/*
-	 * See do_wp_page(): we can only reuse the page exclusively if there are
-	 * no additional references. Note that we always drain the LRU
-	 * pagevecs immediately after adding a THP.
+	 * See do_wp_page(): we can only reuse the folio exclusively if
+	 * there are no additional references. Note that we always drain
+	 * the LRU pagevecs immediately after adding a THP.
 	 */
-	if (page_count(page) > 1 + PageSwapCache(page) * thp_nr_pages(page))
+	if (folio_ref_count(folio) >
+			1 + folio_test_swapcache(folio) * folio_nr_pages(folio))
 		goto unlock_fallback;
-	if (PageSwapCache(page))
-		try_to_free_swap(page);
-	if (page_count(page) == 1) {
+	if (folio_test_swapcache(folio))
+		folio_free_swap(folio);
+	if (folio_ref_count(folio) == 1) {
 		pmd_t entry;
 
 		page_move_anon_rmap(page, vma);
-		unlock_page(page);
+		folio_unlock(folio);
 reuse:
 		if (unlikely(unshare)) {
 			spin_unlock(vmf->ptl);
@@ -1388,7 +1391,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	}
 
 unlock_fallback:
-	unlock_page(page);
+	folio_unlock(folio);
 	spin_unlock(vmf->ptl);
 fallback:
 	__split_huge_pmd(vma, vmf->pmd, vmf->address, false, NULL);
-- 
2.35.1




* [PATCH 46/59] madvise: Convert madvise_free_pte_range() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (44 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 45/59] huge_memory: Convert do_huge_pmd_wp_page() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 47/59] uprobes: Use folios more widely in __replace_page() Matthew Wilcox (Oracle)
                   ` (13 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves a lot of calls to compound_head().
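
One substitution worth noting: split_huge_page() becomes split_folio(),
which was added earlier in this series.  A hedged sketch of the assumed
relationship (not the literal definition):

	#include <linux/mm.h>
	#include <linux/huge_mm.h>

	/* Assumption: split_folio() is the folio analogue of split_huge_page(). */
	static int sketch_split_folio(struct folio *folio)
	{
		return split_huge_page_to_list(&folio->page, NULL);
	}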

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/madvise.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 5f0f0948a50e..05de79d269e8 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -597,6 +597,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	struct vm_area_struct *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *orig_pte, *pte, ptent;
+	struct folio *folio;
 	struct page *page;
 	int nr_swap = 0;
 	unsigned long next;
@@ -641,56 +642,56 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		page = vm_normal_page(vma, addr, ptent);
 		if (!page || is_zone_device_page(page))
 			continue;
+		folio = page_folio(page);
 
 		/*
-		 * If pmd isn't transhuge but the page is THP and
+		 * If pmd isn't transhuge but the folio is large and
 		 * is owned by only this process, split it and
 		 * deactivate all pages.
 		 */
-		if (PageTransCompound(page)) {
-			if (page_mapcount(page) != 1)
+		if (folio_test_large(folio)) {
+			if (folio_mapcount(folio) != 1)
 				goto out;
-			get_page(page);
-			if (!trylock_page(page)) {
-				put_page(page);
+			folio_get(folio);
+			if (!folio_trylock(folio)) {
+				folio_put(folio);
 				goto out;
 			}
 			pte_unmap_unlock(orig_pte, ptl);
-			if (split_huge_page(page)) {
-				unlock_page(page);
-				put_page(page);
+			if (split_folio(folio)) {
+				folio_unlock(folio);
+				folio_put(folio);
 				orig_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 				goto out;
 			}
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 			pte--;
 			addr -= PAGE_SIZE;
 			continue;
 		}
 
-		VM_BUG_ON_PAGE(PageTransCompound(page), page);
-
-		if (PageSwapCache(page) || PageDirty(page)) {
-			if (!trylock_page(page))
+		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
+			if (!folio_trylock(folio))
 				continue;
 			/*
-			 * If page is shared with others, we couldn't clear
-			 * PG_dirty of the page.
+			 * If folio is shared with others, we mustn't clear
+			 * the folio's dirty flag.
 			 */
-			if (page_mapcount(page) != 1) {
-				unlock_page(page);
+			if (folio_mapcount(folio) != 1) {
+				folio_unlock(folio);
 				continue;
 			}
 
-			if (PageSwapCache(page) && !try_to_free_swap(page)) {
-				unlock_page(page);
+			if (folio_test_swapcache(folio) &&
+			    !folio_free_swap(folio)) {
+				folio_unlock(folio);
 				continue;
 			}
 
-			ClearPageDirty(page);
-			unlock_page(page);
+			folio_clear_dirty(folio);
+			folio_unlock(folio);
 		}
 
 		if (pte_young(ptent) || pte_dirty(ptent)) {
@@ -708,7 +709,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			set_pte_at(mm, addr, pte, ptent);
 			tlb_remove_tlb_entry(tlb, pte, addr);
 		}
-		mark_page_lazyfree(page);
+		mark_page_lazyfree(&folio->page);
 	}
 out:
 	if (nr_swap) {
-- 
2.35.1




* [PATCH 47/59] uprobes: Use folios more widely in __replace_page()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (45 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 46/59] madvise: Convert madvise_free_pte_range() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 48/59] ksm: Use a folio in replace_page() Matthew Wilcox (Oracle)
                   ` (12 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Remove a few hidden calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 kernel/events/uprobes.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 2eaa327f8158..9722c4587c48 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -19,7 +19,7 @@
 #include <linux/export.h>
 #include <linux/rmap.h>		/* anon_vma_prepare */
 #include <linux/mmu_notifier.h>	/* set_pte_at_notify */
-#include <linux/swap.h>		/* try_to_free_swap */
+#include <linux/swap.h>		/* folio_free_swap */
 #include <linux/ptrace.h>	/* user_enable_single_step */
 #include <linux/kdebug.h>	/* notifier mechanism */
 #include "../../mm/internal.h"	/* munlock_vma_page */
@@ -154,8 +154,9 @@ static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
 static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 				struct page *old_page, struct page *new_page)
 {
+	struct folio *old_folio = page_folio(old_page);
 	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, page_folio(old_page), vma, addr, 0);
+	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
 	int err;
 	struct mmu_notifier_range range;
 
@@ -169,8 +170,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 			return err;
 	}
 
-	/* For try_to_free_swap() below */
-	lock_page(old_page);
+	/* For folio_free_swap() below */
+	folio_lock(old_folio);
 
 	mmu_notifier_invalidate_range_start(&range);
 	err = -EAGAIN;
@@ -186,7 +187,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		/* no new page, just dec_mm_counter for old_page */
 		dec_mm_counter(mm, MM_ANONPAGES);
 
-	if (!PageAnon(old_page)) {
+	if (!folio_test_anon(old_folio)) {
 		dec_mm_counter(mm, mm_counter_file(old_page));
 		inc_mm_counter(mm, MM_ANONPAGES);
 	}
@@ -198,15 +199,15 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 				  mk_pte(new_page, vma->vm_page_prot));
 
 	page_remove_rmap(old_page, vma, false);
-	if (!page_mapped(old_page))
-		try_to_free_swap(old_page);
+	if (!folio_mapped(old_folio))
+		folio_free_swap(old_folio);
 	page_vma_mapped_walk_done(&pvmw);
-	put_page(old_page);
+	folio_put(old_folio);
 
 	err = 0;
  unlock:
 	mmu_notifier_invalidate_range_end(&range);
-	unlock_page(old_page);
+	folio_unlock(old_folio);
 	return err;
 }
 
-- 
2.35.1




* [PATCH 48/59] ksm: Use a folio in replace_page()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (46 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 47/59] uprobes: Use folios more widely in __replace_page() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 49/59] mm: Convert do_swap_page() to use folio_free_swap() Matthew Wilcox (Oracle)
                   ` (11 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Replace three calls to compound_head() with one.
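
For the curious, a sketch of where the three lookups were, assuming each
page-based helper resolves the folio internally (try_to_free_swap() became
such a wrapper earlier in this series):

	#include <linux/mm.h>
	#include <linux/swap.h>

	/* Before: three implicit page_folio()/compound_head() lookups. */
	static void sketch_before(struct page *page)
	{
		if (!page_mapped(page))		/* lookup 1 */
			try_to_free_swap(page);	/* lookup 2 */
		put_page(page);			/* lookup 3 */
	}

After the patch, page_folio() runs once and folio_mapped(),
folio_free_swap() and folio_put() are called directly.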

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/ksm.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 42ab153335a2..322652e7e6fe 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1133,6 +1133,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 			struct page *kpage, pte_t orig_pte)
 {
 	struct mm_struct *mm = vma->vm_mm;
+	struct folio *folio;
 	pmd_t *pmd;
 	pte_t *ptep;
 	pte_t newpte;
@@ -1191,10 +1192,11 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	ptep_clear_flush(vma, addr, ptep);
 	set_pte_at_notify(mm, addr, ptep, newpte);
 
+	folio = page_folio(page);
 	page_remove_rmap(page, vma, false);
-	if (!page_mapped(page))
-		try_to_free_swap(page);
-	put_page(page);
+	if (!folio_mapped(folio))
+		folio_free_swap(folio);
+	folio_put(folio);
 
 	pte_unmap_unlock(ptep, ptl);
 	err = 0;
-- 
2.35.1




* [PATCH 49/59] mm: Convert do_swap_page() to use folio_free_swap()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (47 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 48/59] ksm: Use a folio in replace_page() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 50/59] memcg: Convert mem_cgroup_swap_full() to take a folio Matthew Wilcox (Oracle)
                   ` (10 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Also convert should_try_to_free_swap() to use a folio.  This removes a
few calls to compound_head().
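
A sketch of the refcount reasoning in the converted check, assuming that
at this point the fault path holds one reference from the swapin lookup
and the swap cache holds the other:

	#include <linux/mm.h>

	/* Sketch only: two references means nobody else is using the page. */
	static bool sketch_probably_exclusive(struct folio *folio)
	{
		return folio_ref_count(folio) == 2;	/* ours + swap cache */
	}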

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 43432b877447..5b440045d306 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3635,14 +3635,14 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	return 0;
 }
 
-static inline bool should_try_to_free_swap(struct page *page,
+static inline bool should_try_to_free_swap(struct folio *folio,
 					   struct vm_area_struct *vma,
 					   unsigned int fault_flags)
 {
-	if (!PageSwapCache(page))
+	if (!folio_test_swapcache(folio))
 		return false;
-	if (mem_cgroup_swap_full(page) || (vma->vm_flags & VM_LOCKED) ||
-	    PageMlocked(page))
+	if (mem_cgroup_swap_full(&folio->page) || (vma->vm_flags & VM_LOCKED) ||
+	    folio_test_mlocked(folio))
 		return true;
 	/*
 	 * If we want to map a page that's in the swapcache writable, we
@@ -3650,8 +3650,8 @@ static inline bool should_try_to_free_swap(struct page *page,
 	 * user. Try freeing the swapcache to get rid of the swapcache
 	 * reference only in case it's likely that we'll be the exlusive user.
 	 */
-	return (fault_flags & FAULT_FLAG_WRITE) && !PageKsm(page) &&
-		page_count(page) == 2;
+	return (fault_flags & FAULT_FLAG_WRITE) && !folio_test_ksm(folio) &&
+		folio_ref_count(folio) == 2;
 }
 
 static vm_fault_t pte_marker_clear(struct vm_fault *vmf)
@@ -3944,8 +3944,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * yet.
 	 */
 	swap_free(entry);
-	if (should_try_to_free_swap(page, vma, vmf->flags))
-		try_to_free_swap(page);
+	if (should_try_to_free_swap(folio, vma, vmf->flags))
+		folio_free_swap(folio);
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
-- 
2.35.1




* [PATCH 50/59] memcg: Convert mem_cgroup_swap_full() to take a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (48 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 49/59] mm: Convert do_swap_page() to use folio_free_swap() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 51/59] mm: Remove try_to_free_swap() Matthew Wilcox (Oracle)
                   ` (9 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

All callers now have a folio, so convert the function to take a folio.
Saves a couple of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h | 4 ++--
 mm/memcontrol.c      | 6 +++---
 mm/memory.c          | 2 +-
 mm/swapfile.c        | 2 +-
 mm/vmscan.c          | 3 +--
 5 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index f16c9af6bf32..8178ec471fe3 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -688,7 +688,7 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_p
 }
 
 extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
-extern bool mem_cgroup_swap_full(struct page *page);
+extern bool mem_cgroup_swap_full(struct folio *folio);
 #else
 static inline void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 {
@@ -710,7 +710,7 @@ static inline long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	return get_nr_swap_pages();
 }
 
-static inline bool mem_cgroup_swap_full(struct page *page)
+static inline bool mem_cgroup_swap_full(struct folio *folio)
 {
 	return vm_swap_full();
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3b0698faf1ee..557579f43e7b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7375,18 +7375,18 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	return nr_swap_pages;
 }
 
-bool mem_cgroup_swap_full(struct page *page)
+bool mem_cgroup_swap_full(struct folio *folio)
 {
 	struct mem_cgroup *memcg;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
 	if (vm_swap_full())
 		return true;
 	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return false;
 
-	memcg = page_memcg(page);
+	memcg = folio_memcg(folio);
 	if (!memcg)
 		return false;
 
diff --git a/mm/memory.c b/mm/memory.c
index 5b440045d306..b6f2ccda4f45 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3641,7 +3641,7 @@ static inline bool should_try_to_free_swap(struct folio *folio,
 {
 	if (!folio_test_swapcache(folio))
 		return false;
-	if (mem_cgroup_swap_full(&folio->page) || (vma->vm_flags & VM_LOCKED) ||
+	if (mem_cgroup_swap_full(folio) || (vma->vm_flags & VM_LOCKED) ||
 	    folio_test_mlocked(folio))
 		return true;
 	/*
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 9ee42a12cffc..74929a8cbf88 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -144,7 +144,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 	if (folio_trylock(folio)) {
 		if ((flags & TTRS_ANYWAY) ||
 		    ((flags & TTRS_UNMAPPED) && !folio_mapped(folio)) ||
-		    ((flags & TTRS_FULL) && mem_cgroup_swap_full(&folio->page)))
+		    ((flags & TTRS_FULL) && mem_cgroup_swap_full(folio)))
 			ret = folio_free_swap(folio);
 		folio_unlock(folio);
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ac7f6f77e28a..85af57bbfd81 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2002,8 +2002,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 activate_locked:
 		/* Not a candidate for swapping, so reclaim swap space. */
 		if (folio_test_swapcache(folio) &&
-		    (mem_cgroup_swap_full(&folio->page) ||
-		     folio_test_mlocked(folio)))
+		    (mem_cgroup_swap_full(folio) || folio_test_mlocked(folio)))
 			folio_free_swap(folio);
 		VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
 		if (!folio_test_mlocked(folio)) {
-- 
2.35.1




* [PATCH 51/59] mm: Remove try_to_free_swap()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (49 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 50/59] memcg: Convert mem_cgroup_swap_full() to take a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 52/59] rmap: Convert page_move_anon_rmap() to use a folio Matthew Wilcox (Oracle)
                   ` (8 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

All callers have now been converted to folio_free_swap() and we can
remove this wrapper.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h | 6 ------
 mm/folio-compat.c    | 7 -------
 mm/memory.c          | 2 +-
 3 files changed, 1 insertion(+), 14 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8178ec471fe3..3be59affca63 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -506,7 +506,6 @@ extern int __swp_swapcount(swp_entry_t entry);
 extern int swp_swapcount(swp_entry_t entry);
 extern struct swap_info_struct *page_swap_info(struct page *);
 extern struct swap_info_struct *swp_swap_info(swp_entry_t entry);
-extern int try_to_free_swap(struct page *);
 struct backing_dev_info;
 extern int init_swap_address_space(unsigned int type, unsigned long nr_pages);
 extern void exit_swap_address_space(unsigned int type);
@@ -591,11 +590,6 @@ static inline int swp_swapcount(swp_entry_t entry)
 	return 0;
 }
 
-static inline int try_to_free_swap(struct page *page)
-{
-	return 0;
-}
-
 static inline swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 06d47f00609b..e1e23b4947d7 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -146,10 +146,3 @@ void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
 }
-
-#ifdef CONFIG_SWAP
-int try_to_free_swap(struct page *page)
-{
-	return folio_free_swap(page_folio(page));
-}
-#endif
diff --git a/mm/memory.c b/mm/memory.c
index b6f2ccda4f45..76b0e67399a1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3838,7 +3838,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 	if (swapcache) {
 		/*
-		 * Make sure try_to_free_swap or swapoff did not release the
+		 * Make sure folio_free_swap() or swapoff did not release the
 		 * swapcache from under us.  The page pin, and pte_same test
 		 * below, are not enough to exclude that.  Even if it is still
 		 * swapcache, we need to check that the page's swap has not
-- 
2.35.1




* [PATCH 52/59] rmap: Convert page_move_anon_rmap() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (50 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 51/59] mm: Remove try_to_free_swap() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 53/59] migrate: Convert __unmap_and_move() to use folios Matthew Wilcox (Oracle)
                   ` (7 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Removes one call to compound_head() and a reference to page->mapping.
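
The pointer arithmetic below relies on how anonymous folios encode their
anon_vma: ->mapping holds the anon_vma pointer with PAGE_MAPPING_ANON set
rather than an address_space pointer.  A minimal decoding sketch (locking
and READ_ONCE elided):

	#include <linux/mm.h>
	#include <linux/rmap.h>
	#include <linux/page-flags.h>

	static struct anon_vma *sketch_folio_anon_vma(struct folio *folio)
	{
		unsigned long mapping = (unsigned long)folio->mapping;

		/* KSM and movable folios use the other low bits; skip them. */
		if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
			return NULL;
		return (struct anon_vma *)(mapping - PAGE_MAPPING_ANON);
	}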

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/rmap.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index edc06c52bc82..aaab1c1078b4 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1098,22 +1098,20 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
  */
 void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 {
-	struct anon_vma *anon_vma = vma->anon_vma;
-	struct page *subpage = page;
-
-	page = compound_head(page);
+	void *anon_vma = vma->anon_vma;
+	struct folio *folio = page_folio(page);
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_VMA(!anon_vma, vma);
 
-	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
+	anon_vma += PAGE_MAPPING_ANON;
 	/*
 	 * Ensure that anon_vma and the PAGE_MAPPING_ANON bit are written
 	 * simultaneously, so a concurrent reader (eg folio_referenced()'s
 	 * folio_test_anon()) will not see one without the other.
 	 */
-	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
-	SetPageAnonExclusive(subpage);
+	WRITE_ONCE(folio->mapping, anon_vma);
+	SetPageAnonExclusive(page);
 }
 
 /**
-- 
2.35.1




* [PATCH 53/59] migrate: Convert __unmap_and_move() to use folios
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (51 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 52/59] rmap: Convert page_move_anon_rmap() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 54/59] migrate: Convert unmap_and_move_huge_page() " Matthew Wilcox (Oracle)
                   ` (6 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Removes a lot of calls to compound_head().  Also removes a VM_BUG_ON that
can never trigger, as the PageAnon bit is the bottom bit of page->mapping.
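
A one-line sketch of why that assertion is dead code: the anon test is
just a bit test on ->mapping (essentially how folio_test_anon() is
defined), so a NULL mapping can never look anonymous.

	#include <linux/mm.h>
	#include <linux/page-flags.h>

	/* Sketch: a NULL ->mapping has bit 0 clear, so this cannot be true. */
	static bool sketch_folio_test_anon(struct folio *folio)
	{
		return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
	}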

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/migrate.c | 75 ++++++++++++++++++++++++++--------------------------
 1 file changed, 37 insertions(+), 38 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..3cd1fdb5f572 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -976,17 +976,15 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 	return rc;
 }
 
-static int __unmap_and_move(struct page *page, struct page *newpage,
+static int __unmap_and_move(struct folio *src, struct folio *dst,
 				int force, enum migrate_mode mode)
 {
-	struct folio *folio = page_folio(page);
-	struct folio *dst = page_folio(newpage);
 	int rc = -EAGAIN;
 	bool page_was_mapped = false;
 	struct anon_vma *anon_vma = NULL;
-	bool is_lru = !__PageMovable(page);
+	bool is_lru = !__PageMovable(&src->page);
 
-	if (!trylock_page(page)) {
+	if (!folio_trylock(src)) {
 		if (!force || mode == MIGRATE_ASYNC)
 			goto out;
 
@@ -1006,10 +1004,10 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		if (current->flags & PF_MEMALLOC)
 			goto out;
 
-		lock_page(page);
+		folio_lock(src);
 	}
 
-	if (PageWriteback(page)) {
+	if (folio_test_writeback(src)) {
 		/*
 		 * Only in the case of a full synchronous migration is it
 		 * necessary to wait for PageWriteback. In the async case,
@@ -1026,12 +1024,12 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		}
 		if (!force)
 			goto out_unlock;
-		wait_on_page_writeback(page);
+		folio_wait_writeback(src);
 	}
 
 	/*
-	 * By try_to_migrate(), page->mapcount goes down to 0 here. In this case,
-	 * we cannot notice that anon_vma is freed while we migrates a page.
+	 * By try_to_migrate(), src->mapcount goes down to 0 here. In this case,
+	 * we cannot notice that anon_vma is freed while we migrate a page.
 	 * This get_anon_vma() delays freeing anon_vma pointer until the end
 	 * of migration. File cache pages are no problem because of page_lock()
 	 * File Caches may use write_page() or lock_page() in migration, then,
@@ -1043,22 +1041,22 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	 * because that implies that the anon page is no longer mapped
 	 * (and cannot be remapped so long as we hold the page lock).
 	 */
-	if (PageAnon(page) && !PageKsm(page))
-		anon_vma = page_get_anon_vma(page);
+	if (folio_test_anon(src) && !folio_test_ksm(src))
+		anon_vma = page_get_anon_vma(&src->page);
 
 	/*
 	 * Block others from accessing the new page when we get around to
 	 * establishing additional references. We are usually the only one
-	 * holding a reference to newpage at this point. We used to have a BUG
-	 * here if trylock_page(newpage) fails, but would like to allow for
-	 * cases where there might be a race with the previous use of newpage.
+	 * holding a reference to dst at this point. We used to have a BUG
+	 * here if folio_trylock(dst) fails, but would like to allow for
+	 * cases where there might be a race with the previous use of dst.
 	 * This is much like races on refcount of oldpage: just don't BUG().
 	 */
-	if (unlikely(!trylock_page(newpage)))
+	if (unlikely(!folio_trylock(dst)))
 		goto out_unlock;
 
 	if (unlikely(!is_lru)) {
-		rc = move_to_new_folio(dst, folio, mode);
+		rc = move_to_new_folio(dst, src, mode);
 		goto out_unlock_both;
 	}
 
@@ -1066,7 +1064,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	 * Corner case handling:
 	 * 1. When a new swap-cache page is read into, it is added to the LRU
 	 * and treated as swapcache but it has no rmap yet.
-	 * Calling try_to_unmap() against a page->mapping==NULL page will
+	 * Calling try_to_unmap() against a src->mapping==NULL page will
 	 * trigger a BUG.  So handle it here.
 	 * 2. An orphaned page (see truncate_cleanup_page) might have
 	 * fs-private metadata. The page can be picked up due to memory
@@ -1074,57 +1072,56 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	 * invisible to the vm, so the page can not be migrated.  So try to
 	 * free the metadata, so the page can be freed.
 	 */
-	if (!page->mapping) {
-		VM_BUG_ON_PAGE(PageAnon(page), page);
-		if (page_has_private(page)) {
-			try_to_free_buffers(folio);
+	if (!src->mapping) {
+		if (folio_test_private(src)) {
+			try_to_free_buffers(src);
 			goto out_unlock_both;
 		}
-	} else if (page_mapped(page)) {
+	} else if (folio_mapped(src)) {
 		/* Establish migration ptes */
-		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
-				page);
-		try_to_migrate(folio, 0);
+		VM_BUG_ON_FOLIO(folio_test_anon(src) &&
+			       !folio_test_ksm(src) && !anon_vma, src);
+		try_to_migrate(src, 0);
 		page_was_mapped = true;
 	}
 
-	if (!page_mapped(page))
-		rc = move_to_new_folio(dst, folio, mode);
+	if (!folio_mapped(src))
+		rc = move_to_new_folio(dst, src, mode);
 
 	/*
-	 * When successful, push newpage to LRU immediately: so that if it
+	 * When successful, push dst to LRU immediately: so that if it
 	 * turns out to be an mlocked page, remove_migration_ptes() will
-	 * automatically build up the correct newpage->mlock_count for it.
+	 * automatically build up the correct dst->mlock_count for it.
 	 *
 	 * We would like to do something similar for the old page, when
 	 * unsuccessful, and other cases when a page has been temporarily
 	 * isolated from the unevictable LRU: but this case is the easiest.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		lru_cache_add(newpage);
+		folio_add_lru(dst);
 		if (page_was_mapped)
 			lru_add_drain();
 	}
 
 	if (page_was_mapped)
-		remove_migration_ptes(folio,
-			rc == MIGRATEPAGE_SUCCESS ? dst : folio, false);
+		remove_migration_ptes(src,
+			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
 
 out_unlock_both:
-	unlock_page(newpage);
+	folio_unlock(dst);
 out_unlock:
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)
 		put_anon_vma(anon_vma);
-	unlock_page(page);
+	folio_unlock(src);
 out:
 	/*
-	 * If migration is successful, decrease refcount of the newpage,
+	 * If migration is successful, decrease refcount of dst,
 	 * which will not free the page because new page owner increased
 	 * refcounter.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS)
-		put_page(newpage);
+		folio_put(dst);
 
 	return rc;
 }
@@ -1140,6 +1137,7 @@ static int unmap_and_move(new_page_t get_new_page,
 				   enum migrate_reason reason,
 				   struct list_head *ret)
 {
+	struct folio *dst, *src = page_folio(page);
 	int rc = MIGRATEPAGE_SUCCESS;
 	struct page *newpage = NULL;
 
@@ -1157,9 +1155,10 @@ static int unmap_and_move(new_page_t get_new_page,
 	newpage = get_new_page(page, private);
 	if (!newpage)
 		return -ENOMEM;
+	dst = page_folio(newpage);
 
 	newpage->private = 0;
-	rc = __unmap_and_move(page, newpage, force, mode);
+	rc = __unmap_and_move(src, dst, force, mode);
 	if (rc == MIGRATEPAGE_SUCCESS)
 		set_page_owner_migrate_reason(newpage, reason);
 
-- 
2.35.1




* [PATCH 54/59] migrate: Convert unmap_and_move_huge_page() to use folios
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (52 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 53/59] migrate: Convert __unmap_and_move() to use folios Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 55/59] huge_memory: Convert split_huge_page_to_list() to use a folio Matthew Wilcox (Oracle)
                   ` (5 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves several calls to compound_head() and removes a couple of uses of
page->lru.
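
Replacing &hpage->lru with &src->lru is safe because struct folio
deliberately overlays struct page; the layout is already asserted in
mm_types.h via FOLIO_MATCH().  A sketch of the invariant being relied on:

	#include <linux/mm_types.h>
	#include <linux/stddef.h>
	#include <linux/build_bug.h>

	/* Sketch: folio->lru and the head page's lru are the same storage. */
	static_assert(offsetof(struct folio, lru) == offsetof(struct page, lru));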

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/migrate.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 3cd1fdb5f572..7b338ddd011a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1244,11 +1244,11 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	 * kicking migration.
 	 */
 	if (!hugepage_migration_supported(page_hstate(hpage))) {
-		list_move_tail(&hpage->lru, ret);
+		list_move_tail(&src->lru, ret);
 		return -ENOSYS;
 	}
 
-	if (page_count(hpage) == 1) {
+	if (folio_ref_count(src) == 1) {
 		/* page was freed from under us. So we are done. */
 		putback_active_hugepage(hpage);
 		return MIGRATEPAGE_SUCCESS;
@@ -1259,7 +1259,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		return -ENOMEM;
 	dst = page_folio(new_hpage);
 
-	if (!trylock_page(hpage)) {
+	if (!folio_trylock(src)) {
 		if (!force)
 			goto out;
 		switch (mode) {
@@ -1269,29 +1269,29 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		default:
 			goto out;
 		}
-		lock_page(hpage);
+		folio_lock(src);
 	}
 
 	/*
 	 * Check for pages which are in the process of being freed.  Without
-	 * page_mapping() set, hugetlbfs specific move page routine will not
+	 * folio_mapping() set, hugetlbfs specific move page routine will not
 	 * be called and we could leak usage counts for subpools.
 	 */
-	if (hugetlb_page_subpool(hpage) && !page_mapping(hpage)) {
+	if (hugetlb_page_subpool(hpage) && !folio_mapping(src)) {
 		rc = -EBUSY;
 		goto out_unlock;
 	}
 
-	if (PageAnon(hpage))
-		anon_vma = page_get_anon_vma(hpage);
+	if (folio_test_anon(src))
+		anon_vma = page_get_anon_vma(&src->page);
 
-	if (unlikely(!trylock_page(new_hpage)))
+	if (unlikely(!folio_trylock(dst)))
 		goto put_anon;
 
-	if (page_mapped(hpage)) {
+	if (folio_mapped(src)) {
 		enum ttu_flags ttu = 0;
 
-		if (!PageAnon(hpage)) {
+		if (!folio_test_anon(src)) {
 			/*
 			 * In shared mappings, try_to_unmap could potentially
 			 * call huge_pmd_unshare.  Because of this, take
@@ -1312,7 +1312,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 			i_mmap_unlock_write(mapping);
 	}
 
-	if (!page_mapped(hpage))
+	if (!folio_mapped(src))
 		rc = move_to_new_folio(dst, src, mode);
 
 	if (page_was_mapped)
@@ -1320,7 +1320,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
 
 unlock_put_anon:
-	unlock_page(new_hpage);
+	folio_unlock(dst);
 
 put_anon:
 	if (anon_vma)
@@ -1332,12 +1332,12 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	}
 
 out_unlock:
-	unlock_page(hpage);
+	folio_unlock(src);
 out:
 	if (rc == MIGRATEPAGE_SUCCESS)
 		putback_active_hugepage(hpage);
 	else if (rc != -EAGAIN)
-		list_move_tail(&hpage->lru, ret);
+		list_move_tail(&src->lru, ret);
 
 	/*
 	 * If migration was not successful and there's a freeing callback, use
-- 
2.35.1




* [PATCH 55/59] huge_memory: Convert split_huge_page_to_list() to use a folio
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (53 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 54/59] migrate: Convert unmap_and_move_huge_page() " Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 56/59] huge_memory: Convert unmap_page() to unmap_folio() Matthew Wilcox (Oracle)
                   ` (4 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves many calls to compound_head().
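
Two accessors used heavily below, folio_order() and folio_nr_pages(), were
reimplemented earlier in this series; for reading this hunk the assumed
relationship is simply:

	#include <linux/mm.h>

	/* Sketch: an order-N folio covers 2^N base pages (order 0 when small). */
	static long sketch_folio_nr_pages(struct folio *folio)
	{
		return 1L << folio_order(folio);
	}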

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 49 ++++++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7b998f2083aa..431a3b7078c7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2592,27 +2592,26 @@ bool can_split_folio(struct folio *folio, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct deferred_split *ds_queue = get_deferred_split_queue(head);
-	XA_STATE(xas, &head->mapping->i_pages, head->index);
+	struct deferred_split *ds_queue = get_deferred_split_queue(&folio->page);
+	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int extra_pins, ret;
 	pgoff_t end;
 	bool is_hzp;
 
-	VM_BUG_ON_PAGE(!PageLocked(head), head);
-	VM_BUG_ON_PAGE(!PageCompound(head), head);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
-	is_hzp = is_huge_zero_page(head);
-	VM_WARN_ON_ONCE_PAGE(is_hzp, head);
+	is_hzp = is_huge_zero_page(&folio->page);
+	VM_WARN_ON_ONCE_FOLIO(is_hzp, folio);
 	if (is_hzp)
 		return -EBUSY;
 
-	if (PageWriteback(head))
+	if (folio_test_writeback(folio))
 		return -EBUSY;
 
-	if (PageAnon(head)) {
+	if (folio_test_anon(folio)) {
 		/*
 		 * The caller does not necessarily hold an mmap_lock that would
 		 * prevent the anon_vma disappearing so we first we take a
@@ -2621,7 +2620,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		 * is taken to serialise against parallel split or collapse
 		 * operations.
 		 */
-		anon_vma = page_get_anon_vma(head);
+		anon_vma = page_get_anon_vma(&folio->page);
 		if (!anon_vma) {
 			ret = -EBUSY;
 			goto out;
@@ -2630,7 +2629,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
-		mapping = head->mapping;
+		mapping = folio->mapping;
 
 		/* Truncated ? */
 		if (!mapping) {
@@ -2638,7 +2637,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			goto out;
 		}
 
-		xas_split_alloc(&xas, head, compound_order(head),
+		xas_split_alloc(&xas, folio, folio_order(folio),
 				mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
 		if (xas_error(&xas)) {
 			ret = xas_error(&xas);
@@ -2653,7 +2652,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		 * but on 32-bit, i_size_read() takes an irq-unsafe seqlock,
 		 * which cannot be nested inside the page tree lock. So note
 		 * end now: i_size itself may be changed at any moment, but
-		 * head page lock is good enough to serialize the trimming.
+		 * folio lock is good enough to serialize the trimming.
 		 */
 		end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
 		if (shmem_mapping(mapping))
@@ -2669,38 +2668,38 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	unmap_page(head);
+	unmap_page(&folio->page);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
 	if (mapping) {
 		/*
-		 * Check if the head page is present in page cache.
-		 * We assume all tail are present too, if head is there.
+		 * Check if the folio is present in page cache.
+		 * We assume all tail are present too, if folio is there.
 		 */
 		xas_lock(&xas);
 		xas_reset(&xas);
-		if (xas_load(&xas) != head)
+		if (xas_load(&xas) != folio)
 			goto fail;
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
 	spin_lock(&ds_queue->split_queue_lock);
-	if (page_ref_freeze(head, 1 + extra_pins)) {
-		if (!list_empty(page_deferred_list(head))) {
+	if (folio_ref_freeze(folio, 1 + extra_pins)) {
+		if (!list_empty(page_deferred_list(&folio->page))) {
 			ds_queue->split_queue_len--;
-			list_del(page_deferred_list(head));
+			list_del(page_deferred_list(&folio->page));
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
-			int nr = thp_nr_pages(head);
+			int nr = folio_nr_pages(folio);
 
-			xas_split(&xas, head, thp_order(head));
-			if (PageSwapBacked(head)) {
-				__mod_lruvec_page_state(head, NR_SHMEM_THPS,
+			xas_split(&xas, folio, folio_order(folio));
+			if (folio_test_swapbacked(folio)) {
+				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
 							-nr);
 			} else {
-				__mod_lruvec_page_state(head, NR_FILE_THPS,
+				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
 							-nr);
 				filemap_nr_thps_dec(mapping);
 			}
-- 
2.35.1




* [PATCH 56/59] huge_memory: Convert unmap_page() to unmap_folio()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (54 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 55/59] huge_memory: Convert split_huge_page_to_list() to use a folio Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 57/59] mm: Convert page_get_anon_vma() to folio_get_anon_vma() Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Remove a folio->page->folio conversion.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 431a3b7078c7..2b0f8787c7ed 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2326,13 +2326,12 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 	}
 }
 
-static void unmap_page(struct page *page)
+static void unmap_folio(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
 		TTU_SYNC;
 
-	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
 	/*
 	 * Anon pages need migration entries to preserve them, but file
@@ -2349,7 +2348,7 @@ static void remap_page(struct folio *folio, unsigned long nr)
 {
 	int i = 0;
 
-	/* If unmap_page() uses try_to_migrate() on file, remove this check */
+	/* If unmap_folio() uses try_to_migrate() on file, remove this check */
 	if (!folio_test_anon(folio))
 		return;
 	for (;;) {
@@ -2399,7 +2398,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * for example lock_page() which set PG_waiters.
 	 *
 	 * Note that for mapped sub-pages of an anonymous THP,
-	 * PG_anon_exclusive has been cleared in unmap_page() and is stored in
+	 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
 	 * the migration entry instead from where remap_page() will restore it.
 	 * We can still have PG_anon_exclusive set on effectively unmapped and
 	 * unreferenced sub-pages of an anonymous THP: we can simply drop
@@ -2660,7 +2659,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	/*
-	 * Racy check if we can split the page, before unmap_page() will
+	 * Racy check if we can split the page, before unmap_folio() will
 	 * split PMDs
 	 */
 	if (!can_split_folio(folio, &extra_pins)) {
@@ -2668,7 +2667,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	unmap_page(&folio->page);
+	unmap_folio(folio);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
-- 
2.35.1




* [PATCH 57/59] mm: Convert page_get_anon_vma() to folio_get_anon_vma()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (55 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 56/59] huge_memory: Convert unmap_page() to unmap_folio() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 58/59] rmap: Remove page_unlock_anon_vma_read() Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

All callers now have a folio, so rename the function and convert them to
pass the folio directly.  Removes a couple of calls to compound_head() and
a reference to page->mapping.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rmap.h |  2 +-
 mm/huge_memory.c     |  2 +-
 mm/migrate.c         |  6 +++---
 mm/rmap.c            | 14 +++++++-------
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index bf80adca980b..4fde1cf5a5e8 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -163,7 +163,7 @@ static inline void anon_vma_merge(struct vm_area_struct *vma,
 	unlink_anon_vmas(next);
 }
 
-struct anon_vma *page_get_anon_vma(struct page *page);
+struct anon_vma *folio_get_anon_vma(struct folio *folio);
 
 /* RMAP flags, currently only relevant for some anon rmap operations. */
 typedef int __bitwise rmap_t;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b0f8787c7ed..44a843f12fb3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2619,7 +2619,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		 * is taken to serialise against parallel split or collapse
 		 * operations.
 		 */
-		anon_vma = page_get_anon_vma(&folio->page);
+		anon_vma = folio_get_anon_vma(folio);
 		if (!anon_vma) {
 			ret = -EBUSY;
 			goto out;
diff --git a/mm/migrate.c b/mm/migrate.c
index 7b338ddd011a..c688319ffd46 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1035,14 +1035,14 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
 	 * File Caches may use write_page() or lock_page() in migration, then,
 	 * just care Anon page here.
 	 *
-	 * Only page_get_anon_vma() understands the subtleties of
+	 * Only folio_get_anon_vma() understands the subtleties of
 	 * getting a hold on an anon_vma from outside one of its mms.
 	 * But if we cannot get anon_vma, then we won't need it anyway,
 	 * because that implies that the anon page is no longer mapped
 	 * (and cannot be remapped so long as we hold the page lock).
 	 */
 	if (folio_test_anon(src) && !folio_test_ksm(src))
-		anon_vma = page_get_anon_vma(&src->page);
+		anon_vma = folio_get_anon_vma(src);
 
 	/*
 	 * Block others from accessing the new page when we get around to
@@ -1283,7 +1283,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	}
 
 	if (folio_test_anon(src))
-		anon_vma = page_get_anon_vma(&src->page);
+		anon_vma = folio_get_anon_vma(src);
 
 	if (unlikely(!folio_trylock(dst)))
 		goto put_anon;
diff --git a/mm/rmap.c b/mm/rmap.c
index aaab1c1078b4..dcb856150295 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -486,16 +486,16 @@ void __init anon_vma_init(void)
  * if there is a mapcount, we can dereference the anon_vma after observing
  * those.
  */
-struct anon_vma *page_get_anon_vma(struct page *page)
+struct anon_vma *folio_get_anon_vma(struct folio *folio)
 {
 	struct anon_vma *anon_vma = NULL;
 	unsigned long anon_mapping;
 
 	rcu_read_lock();
-	anon_mapping = (unsigned long)READ_ONCE(page->mapping);
+	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
 		goto out;
-	if (!page_mapped(page))
+	if (!folio_mapped(folio))
 		goto out;
 
 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
@@ -505,13 +505,13 @@ struct anon_vma *page_get_anon_vma(struct page *page)
 	}
 
 	/*
-	 * If this page is still mapped, then its anon_vma cannot have been
+	 * If this folio is still mapped, then its anon_vma cannot have been
 	 * freed.  But if it has been unmapped, we have no security against the
 	 * anon_vma structure being freed and reused (for another anon_vma:
 	 * SLAB_TYPESAFE_BY_RCU guarantees that - so the atomic_inc_not_zero()
 	 * above cannot corrupt).
 	 */
-	if (!page_mapped(page)) {
+	if (!folio_mapped(folio)) {
 		rcu_read_unlock();
 		put_anon_vma(anon_vma);
 		return NULL;
@@ -523,11 +523,11 @@ struct anon_vma *page_get_anon_vma(struct page *page)
 }
 
 /*
- * Similar to page_get_anon_vma() except it locks the anon_vma.
+ * Similar to folio_get_anon_vma() except it locks the anon_vma.
  *
  * Its a little more complex as it tries to keep the fast path to a single
  * atomic op -- the trylock. If we fail the trylock, we fall back to getting a
- * reference like with page_get_anon_vma() and then block on the mutex
+ * reference like with folio_get_anon_vma() and then block on the mutex
  * on !rwc->try_lock case.
  */
 struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
-- 
2.35.1




* [PATCH 58/59] rmap: Remove page_unlock_anon_vma_read()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (56 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 57/59] mm: Convert page_get_anon_vma() to folio_get_anon_vma() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-08 19:34 ` [PATCH 59/59] uprobes: Use new_folio in __replace_page() Matthew Wilcox (Oracle)
  2022-08-11  2:17 ` [PATCH 00/59] MM folio changes for 6.1 Hugh Dickins
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

This has been simply an alias for anon_vma_unlock_read() since 2011.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rmap.h | 5 -----
 mm/memory-failure.c  | 2 +-
 mm/rmap.c            | 5 -----
 3 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 4fde1cf5a5e8..1c6cc96925a4 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -402,13 +402,8 @@ struct rmap_walk_control {
 
 void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc);
 void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc);
-
-/*
- * Called by memory-failure.c to kill processes.
- */
 struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 					  struct rmap_walk_control *rwc);
-void page_unlock_anon_vma_read(struct anon_vma *anon_vma);
 
 #else	/* !CONFIG_MMU */
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9a7a228ad04a..794b111b327f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -521,7 +521,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 		}
 	}
 	read_unlock(&tasklist_lock);
-	page_unlock_anon_vma_read(av);
+	anon_vma_unlock_read(av);
 }
 
 /*
diff --git a/mm/rmap.c b/mm/rmap.c
index dcb856150295..d025ad00e63c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -599,11 +599,6 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	return anon_vma;
 }
 
-void page_unlock_anon_vma_read(struct anon_vma *anon_vma)
-{
-	anon_vma_unlock_read(anon_vma);
-}
-
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
-- 
2.35.1




* [PATCH 59/59] uprobes: Use new_folio in __replace_page()
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (57 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 58/59] rmap: Remove page_unlock_anon_vma_read() Matthew Wilcox (Oracle)
@ 2022-08-08 19:34 ` Matthew Wilcox (Oracle)
  2022-08-11  2:17 ` [PATCH 00/59] MM folio changes for 6.1 Hugh Dickins
  59 siblings, 0 replies; 62+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-08-08 19:34 UTC (permalink / raw)
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), hughd

Saves several calls to compound_head().
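
One substitution here goes beyond renaming: folio_add_lru_vma(), added
earlier in this series, stands in for lru_cache_add_inactive_or_unevictable().
A hedged sketch of the assumed relationship:

	#include <linux/mm.h>
	#include <linux/swap.h>

	/* Assumption: the old helper is (or becomes) a thin wrapper like this. */
	static void sketch_lru_cache_add(struct page *page,
					 struct vm_area_struct *vma)
	{
		folio_add_lru_vma(page_folio(page), vma);
	}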

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 kernel/events/uprobes.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 9722c4587c48..7179298808c2 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -155,6 +155,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 				struct page *old_page, struct page *new_page)
 {
 	struct folio *old_folio = page_folio(old_page);
+	struct folio *new_folio;
 	struct mm_struct *mm = vma->vm_mm;
 	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
 	int err;
@@ -164,8 +165,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 				addr + PAGE_SIZE);
 
 	if (new_page) {
-		err = mem_cgroup_charge(page_folio(new_page), vma->vm_mm,
-					GFP_KERNEL);
+		new_folio = page_folio(new_page);
+		err = mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL);
 		if (err)
 			return err;
 	}
@@ -180,9 +181,9 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
 
 	if (new_page) {
-		get_page(new_page);
+		folio_get(new_folio);
 		page_add_new_anon_rmap(new_page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(new_page, vma);
+		folio_add_lru_vma(new_folio, vma);
 	} else
 		/* no new page, just dec_mm_counter for old_page */
 		dec_mm_counter(mm, MM_ANONPAGES);
-- 
2.35.1




* Re: [PATCH 00/59] MM folio changes for 6.1
  2022-08-08 19:33 [PATCH 00/59] MM folio changes for 6.1 Matthew Wilcox (Oracle)
                   ` (58 preceding siblings ...)
  2022-08-08 19:34 ` [PATCH 59/59] uprobes: Use new_folio in __replace_page() Matthew Wilcox (Oracle)
@ 2022-08-11  2:17 ` Hugh Dickins
  59 siblings, 0 replies; 62+ messages in thread
From: Hugh Dickins @ 2022-08-11  2:17 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, hughd, Andrew Morton

On Mon, 8 Aug 2022, Matthew Wilcox (Oracle) wrote:

> The first three patches I hope are added into 6.0 before release (and
> the one cc:stable gets backported to 5.19).
> 
> My focus this round has been on shmem.  I believe it is now fully
> converted to folios.  Of course, shmem interacts with a lot of the swap
> cache and other parts of the kernel, so there are patches all over the MM.
> 
> This patch series survives a round of xfstests on tmpfs, which is nice,
> but hardly an exhaustive test.

Yes, xfstests fine on tmpfs (64-bit or 32-bit, huge or not).  Just
one line wrong, in do_swap_page(): I'll respond to 18/59 on that.

I have not looked into the 59 patches at all, but did search through
the total diff, to find the cause for that problem; and having got that
far, thought I'd best look through the remainder too: I found nothing
more that needs to be changed.

Hugh

> 
> Matthew Wilcox (Oracle) (59):
>   mm: Fix VM_BUG_ON in __delete_from_swap_cache()
>   shmem: Update folio if shmem_replace_page() updates the page
>   vmscan: Check folio_test_private(), not folio_get_private()
>   mm/vmscan: Fix a lot of comments
>   mm: Add the first tail page to struct folio
>   mm: Reimplement folio_order() and folio_nr_pages()
>   mm: Add split_folio()
>   mm: Add folio_add_lru_vma()
>   shmem: Convert shmem_writepage() to use a folio throughout
>   shmem: Convert shmem_delete_from_page_cache() to take a folio
>   shmem: Convert shmem_replace_page() to use folios throughout
>   mm/swapfile: Remove page_swapcount()
>   mm/swapfile: Convert try_to_free_swap() to folio_free_swap()
>   mm/swap: Convert __read_swap_cache_async() to use a folio
>   mm/swap: Convert add_to_swap_cache() to take a folio
>   mm/swap: Convert put_swap_page() to put_swap_folio()
>   mm: Convert do_swap_page() to use a folio
>   mm: Convert do_swap_page()'s swapcache variable to a folio
>   memcg: Convert mem_cgroup_swapin_charge_page() to
>     mem_cgroup_swapin_charge_folio()
>   shmem: Convert shmem_mfill_atomic_pte() to use a folio
>   shmem: Convert shmem_replace_page() to shmem_replace_folio()
>   swap: Add swap_cache_get_folio()
>   shmem: Eliminate struct page from shmem_swapin_folio()
>   shmem: Convert shmem_getpage_gfp() to shmem_get_folio_gfp()
>   shmem: Convert shmem_fault() to use shmem_get_folio_gfp()
>   shmem: Convert shmem_read_mapping_page_gfp() to use
>     shmem_get_folio_gfp()
>   shmem: Add shmem_get_folio()
>   shmem: Convert shmem_get_partial_folio() to use shmem_get_folio()
>   shmem: Convert shmem_write_begin() to use shmem_get_folio()
>   shmem: Convert shmem_file_read_iter() to use shmem_get_folio()
>   shmem: Convert shmem_fallocate() to use a folio
>   shmem: Convert shmem_symlink() to use a folio
>   shmem: Convert shmem_get_link() to use a folio
>   khugepaged: Call shmem_get_folio()
>   userfaultfd: Convert mcontinue_atomic_pte() to use a folio
>   shmem: Remove shmem_getpage()
>   swapfile: Convert try_to_unuse() to use a folio
>   swapfile: Convert __try_to_reclaim_swap() to use a folio
>   swapfile: Convert unuse_pte_range() to use a folio
>   mm: Convert do_swap_page() to use swap_cache_get_folio()
>   mm: Remove lookup_swap_cache()
>   swap_state: Convert free_swap_cache() to use a folio
>   swap: Convert swap_writepage() to use a folio
>   mm: Convert do_wp_page() to use a folio
>   huge_memory: Convert do_huge_pmd_wp_page() to use a folio
>   madvise: Convert madvise_free_pte_range() to use a folio
>   uprobes: Use folios more widely in __replace_page()
>   ksm: Use a folio in replace_page()
>   mm: Convert do_swap_page() to use folio_free_swap()
>   memcg: Convert mem_cgroup_swap_full() to take a folio
>   mm: Remove try_to_free_swap()
>   rmap: Convert page_move_anon_rmap() to use a folio
>   migrate: Convert __unmap_and_move() to use folios
>   migrate: Convert unmap_and_move_huge_page() to use folios
>   huge_memory: Convert split_huge_page_to_list() to use a folio
>   huge_memory: Convert unmap_page() to unmap_folio()
>   mm: Convert page_get_anon_vma() to folio_get_anon_vma()
>   rmap: Remove page_unlock_anon_vma_read()
>   uprobes: Use new_folio in __replace_page()
> 
>  include/linux/huge_mm.h    |   5 +
>  include/linux/memcontrol.h |   4 +-
>  include/linux/mm.h         |  12 +-
>  include/linux/mm_types.h   |  30 ++-
>  include/linux/rmap.h       |   7 +-
>  include/linux/shmem_fs.h   |   6 +-
>  include/linux/swap.h       |  35 ++--
>  kernel/events/uprobes.c    |  28 +--
>  mm/folio-compat.c          |   6 +
>  mm/huge_memory.c           |  95 +++++-----
>  mm/khugepaged.c            |   7 +-
>  mm/ksm.c                   |   8 +-
>  mm/madvise.c               |  49 ++---
>  mm/memcontrol.c            |  21 +--
>  mm/memory-failure.c        |   2 +-
>  mm/memory.c                | 151 ++++++++-------
>  mm/migrate.c               | 107 ++++++-----
>  mm/page_io.c               |  21 ++-
>  mm/rmap.c                  |  33 ++--
>  mm/shmem.c                 | 374 ++++++++++++++++++-------------------
>  mm/swap.c                  |  19 +-
>  mm/swap.h                  |  16 +-
>  mm/swap_slots.c            |   2 +-
>  mm/swap_state.c            | 113 +++++------
>  mm/swapfile.c              | 159 ++++++++--------
>  mm/truncate.c              |   2 +-
>  mm/userfaultfd.c           |  14 +-
>  mm/vmscan.c                | 263 +++++++++++++-------------
>  28 files changed, 810 insertions(+), 779 deletions(-)
> 
> -- 
> 2.35.1



* Re: [PATCH 18/59] mm: Convert do_swap_page()'s swapcache variable to a folio
  2022-08-08 19:33 ` [PATCH 18/59] mm: Convert do_swap_page()'s swapcache variable to " Matthew Wilcox (Oracle)
@ 2022-08-11  2:28   ` Hugh Dickins
  0 siblings, 0 replies; 62+ messages in thread
From: Hugh Dickins @ 2022-08-11  2:28 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, hughd, Andrew Morton

On Mon, 8 Aug 2022, Matthew Wilcox (Oracle) wrote:

> The 'swapcache' variable is used to track whether the page is from the
> swapcache or not.  It can do this equally well by being the folio of
> the page rather than the page itself, and this saves a number of calls
> to compound_head().
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/memory.c | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index f172b148e29b..471102f0cbf2 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3718,8 +3718,8 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
>  vm_fault_t do_swap_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> -	struct folio *folio;
> -	struct page *page = NULL, *swapcache;
> +	struct folio *swapcache, *folio = NULL;
> +	struct page *page;
>  	struct swap_info_struct *si = NULL;
>  	rmap_t rmap_flags = RMAP_NONE;
>  	bool exclusive = false;
> @@ -3762,11 +3762,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		goto out;
>  
>  	page = lookup_swap_cache(entry, vma, vmf->address);
> -	swapcache = page;
>  	if (page)
>  		folio = page_folio(page);
> +	swapcache = folio;
>  
> -	if (!page) {
> +	if (!folio) {
>  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>  		    __swap_count(entry) == 1) {
>  			/* skip swapcache */
> @@ -3799,12 +3799,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		} else {
>  			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
>  						vmf);
> -			swapcache = page;
>  			if (page)
>  				folio = page_folio(page);
> +			swapcache = folio;
>  		}
>  
> -		if (!page) {
> +		if (!folio) {
>  			/*
>  			 * Back out if somebody else faulted in this pte
>  			 * while we released the pte lock.
> @@ -3856,10 +3856,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		page = ksm_might_need_to_copy(page, vma, vmf->address);
>  		if (unlikely(!page)) {
>  			ret = VM_FAULT_OOM;
> -			page = swapcache;
>  			goto out_page;
>  		}
>  		folio = page_folio(page);
> +		swapcache = folio;

I couldn't get further than one iteration into my swapping loads:
processes hung waiting for the folio lock.

Delete that "swapcache = folio;" line: here is (one place) where
swapcache and folio may diverge, and they will then need to be unlocked
and put separately.  With that assignment, folio and swapcache can no
longer differ, so the "folio != swapcache" cleanup at the end never runs
and the original swapcache folio is left locked (and its reference
leaked).  All has been working okay since I deleted that line; a sketch
of the corrected flow follows the quoted patch below.

Hugh

>  
>  		/*
>  		 * If we want to map a page that's in the swapcache writable, we
> @@ -3867,7 +3867,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		 * owner. Try removing the extra reference from the local LRU
>  		 * pagevecs if required.
>  		 */
> -		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
> +		if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
>  		    !folio_test_ksm(folio) && !folio_test_lru(folio))
>  			lru_add_drain();
>  	}
> @@ -3908,7 +3908,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>  		 */
>  		exclusive = pte_swp_exclusive(vmf->orig_pte);
> -		if (page != swapcache) {
> +		if (folio != swapcache) {
>  			/*
>  			 * We have a fresh page that is not exposed to the
>  			 * swapcache -> certainly exclusive.
> @@ -3976,7 +3976,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	vmf->orig_pte = pte;
>  
>  	/* ksm created a completely new copy */
> -	if (unlikely(page != swapcache && swapcache)) {
> +	if (unlikely(folio != swapcache && swapcache)) {
>  		page_add_new_anon_rmap(page, vma, vmf->address);
>  		folio_add_lru_vma(folio, vma);
>  	} else {
> @@ -3989,7 +3989,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>  
>  	folio_unlock(folio);
> -	if (page != swapcache && swapcache) {
> +	if (folio != swapcache && swapcache) {
>  		/*
>  		 * Hold the lock to avoid the swap entry to be reused
>  		 * until we take the PT lock for the pte_same() check
> @@ -3998,8 +3998,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		 * so that the swap count won't change under a
>  		 * parallel locked swapcache.
>  		 */
> -		unlock_page(swapcache);
> -		put_page(swapcache);
> +		folio_unlock(swapcache);
> +		folio_put(swapcache);
>  	}
>  
>  	if (vmf->flags & FAULT_FLAG_WRITE) {
> @@ -4023,9 +4023,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	folio_unlock(folio);
>  out_release:
>  	folio_put(folio);
> -	if (page != swapcache && swapcache) {
> -		unlock_page(swapcache);
> -		put_page(swapcache);
> +	if (folio != swapcache && swapcache) {
> +		folio_unlock(swapcache);
> +		folio_put(swapcache);
>  	}
>  	if (si)
>  		put_swap_device(si);
> -- 
> 2.35.1
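
With the "swapcache = folio;" assignment deleted as Hugh suggests, the two
variables are allowed to diverge after ksm_might_need_to_copy(), and both
folios are released at the end.  A sketch of the corrected flow (assembled
from the hunks above, not the final committed code):

	page = ksm_might_need_to_copy(page, vma, vmf->address);
	if (unlikely(!page)) {
		ret = VM_FAULT_OOM;
		goto out_page;
	}
	folio = page_folio(page);	/* may now be a fresh KSM copy... */
	/* ...while swapcache still refers to the original swapcache folio */

	...

	folio_unlock(folio);
	if (folio != swapcache && swapcache) {
		/* KSM made a private copy: unlock and drop the swapcache folio too */
		folio_unlock(swapcache);
		folio_put(swapcache);
	}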



