* [PATCH 0/5] Convert much of vmscan to folios
@ 2022-06-17 15:42 Matthew Wilcox (Oracle)
  2022-06-17 15:42 ` [PATCH 1/5] mm/vmscan: Convert reclaim_clean_pages_from_list() " Matthew Wilcox (Oracle)
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-06-17 15:42 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

vmscan always operates on folios since it puts the pages on the LRU list.
Switching all of these functions from pages to folios saves 1483 bytes
of text by removing all the baggage around calling compound_head()
and similar functions.  Applies cleanly to next-20220617 and passes an
xfstests run.
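
Most of the savings come from the page flag accessors, which hide a
call to compound_head() behind each test.  As a simplified sketch
(not the exact kernel definitions), the two forms look roughly like:

	/* page variant: must look up the head page first */
	static inline int PageDirty(struct page *page)
	{
		return test_bit(PG_dirty, &compound_head(page)->flags);
	}

	/* folio variant: the caller already holds the head */
	static inline bool folio_test_dirty(struct folio *folio)
	{
		return test_bit(PG_dirty, folio_flags(folio, 0));
	}

so each converted test drops one hidden pointer chase per call site.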

Matthew Wilcox (Oracle) (5):
  mm/vmscan: Convert reclaim_clean_pages_from_list() to folios
  mm/vmscan: Convert isolate_lru_pages() to use a folio
  mm/vmscan: Convert move_pages_to_lru() to use a folio
  mm/vmscan: Convert shrink_active_list() to use a folio
  mm/vmscan: Convert reclaim_pages() to use a folio

 include/linux/page-flags.h |   6 +
 mm/vmscan.c                | 228 ++++++++++++++++++-------------------
 2 files changed, 118 insertions(+), 116 deletions(-)

-- 
2.35.1




* [PATCH 1/5] mm/vmscan: Convert reclaim_clean_pages_from_list() to folios
  2022-06-17 15:42 [PATCH 0/5] Convert much of vmscan to folios Matthew Wilcox (Oracle)
@ 2022-06-17 15:42 ` Matthew Wilcox (Oracle)
  2022-06-19  6:38   ` Christoph Hellwig
  2022-06-17 15:42 ` [PATCH 2/5] mm/vmscan: Convert isolate_lru_pages() to use a folio Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-06-17 15:42 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

This is a straightforward conversion which removes several hidden calls
to compound_head, saving 330 bytes of kernel text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
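For context, the new __folio_test_movable() mirrors the existing
__PageMovable(): non-LRU movable memory is identified by tag bits
stored in the bottom of the mapping pointer.  Per the definitions of
the time (reproduced here for reference, simplified):

	#define PAGE_MAPPING_ANON	0x1
	#define PAGE_MAPPING_MOVABLE	0x2
	#define PAGE_MAPPING_KSM	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
	#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

so masking with PAGE_MAPPING_FLAGS and comparing against
PAGE_MAPPING_MOVABLE picks out a movable, non-LRU folio.
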
 include/linux/page-flags.h |  6 ++++++
 mm/vmscan.c                | 22 +++++++++++-----------
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e66f7aa3191d..f32aade2a6e0 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -670,6 +670,12 @@ static __always_inline bool PageAnon(struct page *page)
 	return folio_test_anon(page_folio(page));
 }
 
+static __always_inline bool __folio_test_movable(const struct folio *folio)
+{
+	return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
+			PAGE_MAPPING_MOVABLE;
+}
+
 static __always_inline int __PageMovable(struct page *page)
 {
 	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f7d9a683e3a7..6c7184f333bf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1987,7 +1987,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 }
 
 unsigned int reclaim_clean_pages_from_list(struct zone *zone,
-					    struct list_head *page_list)
+					    struct list_head *folio_list)
 {
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
@@ -1995,16 +1995,16 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	};
 	struct reclaim_stat stat;
 	unsigned int nr_reclaimed;
-	struct page *page, *next;
-	LIST_HEAD(clean_pages);
+	struct folio *folio, *next;
+	LIST_HEAD(clean_folios);
 	unsigned int noreclaim_flag;
 
-	list_for_each_entry_safe(page, next, page_list, lru) {
-		if (!PageHuge(page) && page_is_file_lru(page) &&
-		    !PageDirty(page) && !__PageMovable(page) &&
-		    !PageUnevictable(page)) {
-			ClearPageActive(page);
-			list_move(&page->lru, &clean_pages);
+	list_for_each_entry_safe(folio, next, folio_list, lru) {
+		if (!folio_test_hugetlb(folio) && folio_is_file_lru(folio) &&
+		    !folio_test_dirty(folio) && !__folio_test_movable(folio) &&
+		    !folio_test_unevictable(folio)) {
+			folio_clear_active(folio);
+			list_move(&folio->lru, &clean_folios);
 		}
 	}
 
@@ -2015,11 +2015,11 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	 * change in the future.
 	 */
 	noreclaim_flag = memalloc_noreclaim_save();
-	nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
+	nr_reclaimed = shrink_page_list(&clean_folios, zone->zone_pgdat, &sc,
 					&stat, true);
 	memalloc_noreclaim_restore(noreclaim_flag);
 
-	list_splice(&clean_pages, page_list);
+	list_splice(&clean_folios, folio_list);
 	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE,
 			    -(long)nr_reclaimed);
 	/*
-- 
2.35.1




* [PATCH 2/5] mm/vmscan: Convert isolate_lru_pages() to use a folio
  2022-06-17 15:42 [PATCH 0/5] Convert much of vmscan to folios Matthew Wilcox (Oracle)
  2022-06-17 15:42 ` [PATCH 1/5] mm/vmscan: Convert reclaim_clean_pages_from_list() " Matthew Wilcox (Oracle)
@ 2022-06-17 15:42 ` Matthew Wilcox (Oracle)
  2022-06-19  6:39   ` Christoph Hellwig
  2022-06-17 15:42 ` [PATCH 3/5] mm/vmscan: Convert move_pages_to_lru() " Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-06-17 15:42 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

Remove a few hidden calls to compound_head, saving 279 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
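The folio calls used below are direct analogues of the page calls
they replace.  Sketched from the definitions as they stood (hedged,
simplified):

	/* both refuse to take a reference on something already free */
	get_page_unless_zero(page)	/* ~ page_ref_add_unless(page, 1, 0) */
	folio_try_get(folio)		/* ~ folio_ref_add_unless(folio, 1, 0) */

	/* and lru_to_folio() walks the list just like lru_to_page(): */
	#define lru_to_folio(head) \
		(list_entry((head)->prev, struct folio, lru))
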
 mm/vmscan.c | 66 ++++++++++++++++++++++++++---------------------------
 1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6c7184f333bf..81d61139bbb1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -160,17 +160,17 @@ struct scan_control {
 };
 
 #ifdef ARCH_HAS_PREFETCHW
-#define prefetchw_prev_lru_page(_page, _base, _field)			\
+#define prefetchw_prev_lru_folio(_folio, _base, _field)			\
 	do {								\
-		if ((_page)->lru.prev != _base) {			\
-			struct page *prev;				\
+		if ((_folio)->lru.prev != _base) {			\
+			struct folio *prev;				\
 									\
-			prev = lru_to_page(&(_page->lru));		\
+			prev = lru_to_folio(&(_folio->lru));		\
 			prefetchw(&prev->_field);			\
 		}							\
 	} while (0)
 #else
-#define prefetchw_prev_lru_page(_page, _base, _field) do { } while (0)
+#define prefetchw_prev_lru_folio(_folio, _base, _field) do { } while (0)
 #endif
 
 /*
@@ -2085,72 +2085,72 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 	unsigned long nr_skipped[MAX_NR_ZONES] = { 0, };
 	unsigned long skipped = 0;
 	unsigned long scan, total_scan, nr_pages;
-	LIST_HEAD(pages_skipped);
+	LIST_HEAD(folios_skipped);
 
 	total_scan = 0;
 	scan = 0;
 	while (scan < nr_to_scan && !list_empty(src)) {
 		struct list_head *move_to = src;
-		struct page *page;
+		struct folio *folio;
 
-		page = lru_to_page(src);
-		prefetchw_prev_lru_page(page, src, flags);
+		folio = lru_to_folio(src);
+		prefetchw_prev_lru_folio(folio, src, flags);
 
-		nr_pages = compound_nr(page);
+		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (page_zonenum(page) > sc->reclaim_idx) {
-			nr_skipped[page_zonenum(page)] += nr_pages;
-			move_to = &pages_skipped;
+		if (folio_zonenum(folio) > sc->reclaim_idx) {
+			nr_skipped[folio_zonenum(folio)] += nr_pages;
+			move_to = &folios_skipped;
 			goto move;
 		}
 
 		/*
-		 * Do not count skipped pages because that makes the function
-		 * return with no isolated pages if the LRU mostly contains
-		 * ineligible pages.  This causes the VM to not reclaim any
-		 * pages, triggering a premature OOM.
-		 * Account all tail pages of THP.
+		 * Do not count skipped folios because that makes the function
+		 * return with no isolated folios if the LRU mostly contains
+		 * ineligible folios.  This causes the VM to not reclaim any
+		 * folios, triggering a premature OOM.
+		 * Account all pages in a folio.
 		 */
 		scan += nr_pages;
 
-		if (!PageLRU(page))
+		if (!folio_test_lru(folio))
 			goto move;
-		if (!sc->may_unmap && page_mapped(page))
+		if (!sc->may_unmap && folio_mapped(folio))
 			goto move;
 
 		/*
-		 * Be careful not to clear PageLRU until after we're
-		 * sure the page is not being freed elsewhere -- the
-		 * page release code relies on it.
+		 * Be careful not to clear the lru flag until after we're
+		 * sure the folio is not being freed elsewhere -- the
+		 * folio release code relies on it.
 		 */
-		if (unlikely(!get_page_unless_zero(page)))
+		if (unlikely(!folio_try_get(folio)))
 			goto move;
 
-		if (!TestClearPageLRU(page)) {
-			/* Another thread is already isolating this page */
-			put_page(page);
+		if (!folio_test_clear_lru(folio)) {
+			/* Another thread is already isolating this folio */
+			folio_put(folio);
 			goto move;
 		}
 
 		nr_taken += nr_pages;
-		nr_zone_taken[page_zonenum(page)] += nr_pages;
+		nr_zone_taken[folio_zonenum(folio)] += nr_pages;
 		move_to = dst;
 move:
-		list_move(&page->lru, move_to);
+		list_move(&folio->lru, move_to);
 	}
 
 	/*
-	 * Splice any skipped pages to the start of the LRU list. Note that
+	 * Splice any skipped folios to the start of the LRU list. Note that
 	 * this disrupts the LRU order when reclaiming for lower zones but
 	 * we cannot splice to the tail. If we did then the SWAP_CLUSTER_MAX
-	 * scanning would soon rescan the same pages to skip and waste lots
+	 * scanning would soon rescan the same folios to skip and waste lots
 	 * of cpu cycles.
 	 */
-	if (!list_empty(&pages_skipped)) {
+	if (!list_empty(&folios_skipped)) {
 		int zid;
 
-		list_splice(&pages_skipped, src);
+		list_splice(&folios_skipped, src);
 		for (zid = 0; zid < MAX_NR_ZONES; zid++) {
 			if (!nr_skipped[zid])
 				continue;
-- 
2.35.1




* [PATCH 3/5] mm/vmscan: Convert move_pages_to_lru() to use a folio
  2022-06-17 15:42 [PATCH 0/5] Convert much of vmscan to folios Matthew Wilcox (Oracle)
  2022-06-17 15:42 ` [PATCH 1/5] mm/vmscan: Convert reclaim_clean_pages_from_list() " Matthew Wilcox (Oracle)
  2022-06-17 15:42 ` [PATCH 2/5] mm/vmscan: Convert isolate_lru_pages() to use a folio Matthew Wilcox (Oracle)
@ 2022-06-17 15:42 ` Matthew Wilcox (Oracle)
  2022-06-19  6:39   ` Christoph Hellwig
  2022-06-17 15:42 ` [PATCH 4/5] mm/vmscan: Convert shrink_active_list() " Matthew Wilcox (Oracle)
  2022-06-17 15:42 ` [PATCH 5/5] mm/vmscan: Convert reclaim_pages() " Matthew Wilcox (Oracle)
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-06-17 15:42 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

Remove a few hidden calls to compound_head, saving 387 bytes of text on
my test configuration.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
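One detail: destroy_compound_page() still takes a struct page, so the
conversion passes &folio->page.  That is safe because struct folio
overlays the head page; roughly (simplified from mm_types.h):

	struct folio {
		union {
			struct {
				unsigned long flags;
				struct list_head lru;
				/* ... */
			};
			struct page page;	/* aliases the head page */
		};
	};

so &folio->page is always the compound page's head.
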
 mm/vmscan.c | 54 ++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 81d61139bbb1..f8ec446041c3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2254,8 +2254,8 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 }
 
 /*
- * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
- * On return, @list is reused as a list of pages to be freed by the caller.
+ * move_pages_to_lru() moves folios from private @list to appropriate LRU list.
+ * On return, @list is reused as a list of folios to be freed by the caller.
  *
  * Returns the number of pages moved to the given lruvec.
  */
@@ -2263,42 +2263,42 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 				      struct list_head *list)
 {
 	int nr_pages, nr_moved = 0;
-	LIST_HEAD(pages_to_free);
-	struct page *page;
+	LIST_HEAD(folios_to_free);
 
 	while (!list_empty(list)) {
-		page = lru_to_page(list);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		list_del(&page->lru);
-		if (unlikely(!page_evictable(page))) {
+		struct folio *folio = lru_to_folio(list);
+
+		VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
+		list_del(&folio->lru);
+		if (unlikely(!folio_evictable(folio))) {
 			spin_unlock_irq(&lruvec->lru_lock);
-			putback_lru_page(page);
+			folio_putback_lru(folio);
 			spin_lock_irq(&lruvec->lru_lock);
 			continue;
 		}
 
 		/*
-		 * The SetPageLRU needs to be kept here for list integrity.
+		 * The folio_set_lru needs to be kept here for list integrity.
 		 * Otherwise:
 		 *   #0 move_pages_to_lru             #1 release_pages
-		 *   if !put_page_testzero
-		 *				      if (put_page_testzero())
-		 *				        !PageLRU //skip lru_lock
-		 *     SetPageLRU()
-		 *     list_add(&page->lru,)
-		 *                                        list_add(&page->lru,)
+		 *   if (!folio_put_testzero())
+		 *				      if (folio_put_testzero())
+		 *				        !lru //skip lru_lock
+		 *     folio_set_lru()
+		 *     list_add(&folio->lru,)
+		 *                                        list_add(&folio->lru,)
 		 */
-		SetPageLRU(page);
+		folio_set_lru(folio);
 
-		if (unlikely(put_page_testzero(page))) {
-			__clear_page_lru_flags(page);
+		if (unlikely(folio_put_testzero(folio))) {
+			__folio_clear_lru_flags(folio);
 
-			if (unlikely(PageCompound(page))) {
+			if (unlikely(folio_test_large(folio))) {
 				spin_unlock_irq(&lruvec->lru_lock);
-				destroy_compound_page(page);
+				destroy_compound_page(&folio->page);
 				spin_lock_irq(&lruvec->lru_lock);
 			} else
-				list_add(&page->lru, &pages_to_free);
+				list_add(&folio->lru, &folios_to_free);
 
 			continue;
 		}
@@ -2307,18 +2307,18 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 		 * All pages were isolated from the same lruvec (and isolation
 		 * inhibits memcg migration).
 		 */
-		VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
-		add_page_to_lru_list(page, lruvec);
-		nr_pages = thp_nr_pages(page);
+		VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+		lruvec_add_folio(lruvec, folio);
+		nr_pages = folio_nr_pages(folio);
 		nr_moved += nr_pages;
-		if (PageActive(page))
+		if (folio_test_active(folio))
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
-	list_splice(&pages_to_free, list);
+	list_splice(&folios_to_free, list);
 
 	return nr_moved;
 }
-- 
2.35.1




* [PATCH 4/5] mm/vmscan: Convert shrink_active_list() to use a folio
  2022-06-17 15:42 [PATCH 0/5] Convert much of vmscan to folios Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2022-06-17 15:42 ` [PATCH 3/5] mm/vmscan: Convert move_pages_to_lru() " Matthew Wilcox (Oracle)
@ 2022-06-17 15:42 ` Matthew Wilcox (Oracle)
  2022-06-19  6:40   ` Christoph Hellwig
  2022-07-11 20:53   ` Matthew Wilcox
  2022-06-17 15:42 ` [PATCH 5/5] mm/vmscan: Convert reclaim_pages() " Matthew Wilcox (Oracle)
  4 siblings, 2 replies; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-06-17 15:42 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

Remove a few hidden calls to compound_head, saving 411 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
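Two replacements below deserve a note: filemap_release_folio() is the
folio successor to try_to_release_page(), and folio_get_private()
stands in for page_has_private().  The latter two are not quite the
same test -- per the definitions of the day (simplified):

	/* tests the PG_private / PG_private_2 flag bits */
	static inline int page_has_private(struct page *page)
	{
		return !!(page->flags & PAGE_FLAGS_PRIVATE);
	}

	/* reads the private field directly */
	static inline void *folio_get_private(struct folio *folio)
	{
		return folio->private;
	}

which matters for swapcache folios; see the fix later in this thread.
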
 mm/vmscan.c | 61 +++++++++++++++++++++++++----------------------------
 1 file changed, 29 insertions(+), 32 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f8ec446041c3..0a0e013a3457 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -26,8 +26,7 @@
 #include <linux/file.h>
 #include <linux/writeback.h>
 #include <linux/blkdev.h>
-#include <linux/buffer_head.h>	/* for try_to_release_page(),
-					buffer_heads_over_limit */
+#include <linux/buffer_head.h>	/* for buffer_heads_over_limit */
 #include <linux/mm_inline.h>
 #include <linux/backing-dev.h>
 #include <linux/rmap.h>
@@ -2429,21 +2428,21 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 }
 
 /*
- * shrink_active_list() moves pages from the active LRU to the inactive LRU.
+ * shrink_active_list() moves folios from the active LRU to the inactive LRU.
  *
- * We move them the other way if the page is referenced by one or more
+ * We move them the other way if the folio is referenced by one or more
  * processes.
  *
- * If the pages are mostly unmapped, the processing is fast and it is
+ * If the folios are mostly unmapped, the processing is fast and it is
  * appropriate to hold lru_lock across the whole operation.  But if
- * the pages are mapped, the processing is slow (folio_referenced()), so
- * we should drop lru_lock around each page.  It's impossible to balance
- * this, so instead we remove the pages from the LRU while processing them.
- * It is safe to rely on PG_active against the non-LRU pages in here because
- * nobody will play with that bit on a non-LRU page.
+ * the folios are mapped, the processing is slow (folio_referenced()), so
+ * we should drop lru_lock around each folio.  It's impossible to balance
+ * this, so instead we remove the folios from the LRU while processing them.
+ * It is safe to rely on the active flag against the non-LRU folios in here
+ * because nobody will play with that bit on a non-LRU folio.
  *
- * The downside is that we have to touch page->_refcount against each page.
- * But we had to alter page->flags anyway.
+ * The downside is that we have to touch folio->_refcount against each folio.
+ * But we had to alter folio->flags anyway.
  */
 static void shrink_active_list(unsigned long nr_to_scan,
 			       struct lruvec *lruvec,
@@ -2453,7 +2452,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	unsigned long nr_taken;
 	unsigned long nr_scanned;
 	unsigned long vm_flags;
-	LIST_HEAD(l_hold);	/* The pages which were snipped off */
+	LIST_HEAD(l_hold);	/* The folios which were snipped off */
 	LIST_HEAD(l_active);
 	LIST_HEAD(l_inactive);
 	unsigned nr_deactivate, nr_activate;
@@ -2478,23 +2477,21 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 	while (!list_empty(&l_hold)) {
 		struct folio *folio;
-		struct page *page;
 
 		cond_resched();
 		folio = lru_to_folio(&l_hold);
 		list_del(&folio->lru);
-		page = &folio->page;
 
-		if (unlikely(!page_evictable(page))) {
-			putback_lru_page(page);
+		if (unlikely(!folio_evictable(folio))) {
+			folio_putback_lru(folio);
 			continue;
 		}
 
 		if (unlikely(buffer_heads_over_limit)) {
-			if (page_has_private(page) && trylock_page(page)) {
-				if (page_has_private(page))
-					try_to_release_page(page, 0);
-				unlock_page(page);
+			if (folio_get_private(folio) && folio_trylock(folio)) {
+				if (folio_get_private(folio))
+					filemap_release_folio(folio, 0);
+				folio_unlock(folio);
 			}
 		}
 
@@ -2502,34 +2499,34 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		if (folio_referenced(folio, 0, sc->target_mem_cgroup,
 				     &vm_flags) != 0) {
 			/*
-			 * Identify referenced, file-backed active pages and
+			 * Identify referenced, file-backed active folios and
 			 * give them one more trip around the active list. So
 			 * that executable code get better chances to stay in
-			 * memory under moderate memory pressure.  Anon pages
+			 * memory under moderate memory pressure.  Anon folios
 			 * are not likely to be evicted by use-once streaming
-			 * IO, plus JVM can create lots of anon VM_EXEC pages,
+			 * IO, plus JVM can create lots of anon VM_EXEC folios,
 			 * so we ignore them here.
 			 */
-			if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
-				nr_rotated += thp_nr_pages(page);
-				list_add(&page->lru, &l_active);
+			if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio)) {
+				nr_rotated += folio_nr_pages(folio);
+				list_add(&folio->lru, &l_active);
 				continue;
 			}
 		}
 
-		ClearPageActive(page);	/* we are de-activating */
-		SetPageWorkingset(page);
-		list_add(&page->lru, &l_inactive);
+		folio_clear_active(folio);	/* we are de-activating */
+		folio_set_workingset(folio);
+		list_add(&folio->lru, &l_inactive);
 	}
 
 	/*
-	 * Move pages back to the lru list.
+	 * Move folios back to the lru list.
 	 */
 	spin_lock_irq(&lruvec->lru_lock);
 
 	nr_activate = move_pages_to_lru(lruvec, &l_active);
 	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
-	/* Keep all free pages in l_active list */
+	/* Keep all free folios in l_active list */
 	list_splice(&l_inactive, &l_active);
 
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
-- 
2.35.1




* [PATCH 5/5] mm/vmscan: Convert reclaim_pages() to use a folio
  2022-06-17 15:42 [PATCH 0/5] Convert much of vmscan to folios Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2022-06-17 15:42 ` [PATCH 4/5] mm/vmscan: Convert shrink_active_list() " Matthew Wilcox (Oracle)
@ 2022-06-17 15:42 ` Matthew Wilcox (Oracle)
  2022-06-19  6:40   ` Christoph Hellwig
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-06-17 15:42 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

Remove a few hidden calls to compound_head, saving 76 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
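The loop below batches folios by node before handing each batch to
reclaim_page_list(), since reclaim runs against a single pgdat at a
time.  The control flow, sketched as pseudocode:

	nid = node of the first folio on the list;
	while (list not empty) {
		take the first folio;
		if (it lives on node nid)
			move it to node_folio_list and continue;
		/* different node: flush the batch, then rebatch */
		reclaim node_folio_list against NODE_DATA(nid);
		nid = node of the new first folio;
	}
	reclaim the final batch against NODE_DATA(nid);
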
 mm/vmscan.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0a0e013a3457..50b3edfe815b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2565,34 +2565,33 @@ static unsigned int reclaim_page_list(struct list_head *page_list,
 	return nr_reclaimed;
 }
 
-unsigned long reclaim_pages(struct list_head *page_list)
+unsigned long reclaim_pages(struct list_head *folio_list)
 {
 	int nid;
 	unsigned int nr_reclaimed = 0;
-	LIST_HEAD(node_page_list);
-	struct page *page;
+	LIST_HEAD(node_folio_list);
 	unsigned int noreclaim_flag;
 
-	if (list_empty(page_list))
+	if (list_empty(folio_list))
 		return nr_reclaimed;
 
 	noreclaim_flag = memalloc_noreclaim_save();
 
-	nid = page_to_nid(lru_to_page(page_list));
+	nid = folio_nid(lru_to_folio(folio_list));
 	do {
-		page = lru_to_page(page_list);
+		struct folio *folio = lru_to_folio(folio_list);
 
-		if (nid == page_to_nid(page)) {
-			ClearPageActive(page);
-			list_move(&page->lru, &node_page_list);
+		if (nid == folio_nid(folio)) {
+			folio_clear_active(folio);
+			list_move(&folio->lru, &node_folio_list);
 			continue;
 		}
 
-		nr_reclaimed += reclaim_page_list(&node_page_list, NODE_DATA(nid));
-		nid = page_to_nid(lru_to_page(page_list));
-	} while (!list_empty(page_list));
+		nr_reclaimed += reclaim_page_list(&node_folio_list, NODE_DATA(nid));
+		nid = folio_nid(lru_to_folio(folio_list));
+	} while (!list_empty(folio_list));
 
-	nr_reclaimed += reclaim_page_list(&node_page_list, NODE_DATA(nid));
+	nr_reclaimed += reclaim_page_list(&node_folio_list, NODE_DATA(nid));
 
 	memalloc_noreclaim_restore(noreclaim_flag);
 
-- 
2.35.1




* Re: [PATCH 1/5] mm/vmscan: Convert reclaim_clean_pages_from_list() to folios
  2022-06-17 15:42 ` [PATCH 1/5] mm/vmscan: Convert reclaim_clean_pages_from_list() " Matthew Wilcox (Oracle)
@ 2022-06-19  6:38   ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2022-06-19  6:38 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Andrew Morton

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 2/5] mm/vmscan: Convert isolate_lru_pages() to use a folio
  2022-06-17 15:42 ` [PATCH 2/5] mm/vmscan: Convert isolate_lru_pages() to use a folio Matthew Wilcox (Oracle)
@ 2022-06-19  6:39   ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2022-06-19  6:39 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Andrew Morton

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 3/5] mm/vmscan: Convert move_pages_to_lru() to use a folio
  2022-06-17 15:42 ` [PATCH 3/5] mm/vmscan: Convert move_pages_to_lru() " Matthew Wilcox (Oracle)
@ 2022-06-19  6:39   ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2022-06-19  6:39 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Andrew Morton

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 4/5] mm/vmscan: Convert shrink_active_list() to use a folio
  2022-06-17 15:42 ` [PATCH 4/5] mm/vmscan: Convert shrink_active_list() " Matthew Wilcox (Oracle)
@ 2022-06-19  6:40   ` Christoph Hellwig
  2022-07-11 20:53   ` Matthew Wilcox
  1 sibling, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2022-06-19  6:40 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Andrew Morton

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 5/5] mm/vmscan: Convert reclaim_pages() to use a folio
  2022-06-17 15:42 ` [PATCH 5/5] mm/vmscan: Convert reclaim_pages() " Matthew Wilcox (Oracle)
@ 2022-06-19  6:40   ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2022-06-19  6:40 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Andrew Morton

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>




* Re: [PATCH 4/5] mm/vmscan: Convert shrink_active_list() to use a folio
  2022-06-17 15:42 ` [PATCH 4/5] mm/vmscan: Convert shrink_active_list() " Matthew Wilcox (Oracle)
  2022-06-19  6:40   ` Christoph Hellwig
@ 2022-07-11 20:53   ` Matthew Wilcox
  2022-07-13 18:13     ` Andrew Morton
  1 sibling, 1 reply; 13+ messages in thread
From: Matthew Wilcox @ 2022-07-11 20:53 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Hugh Dickins

On Fri, Jun 17, 2022 at 04:42:47PM +0100, Matthew Wilcox (Oracle) wrote:
> @@ -2478,23 +2477,21 @@ static void shrink_active_list(unsigned long nr_to_scan,
>  
>  	while (!list_empty(&l_hold)) {
>  		struct folio *folio;
> -		struct page *page;
>  
>  		cond_resched();
>  		folio = lru_to_folio(&l_hold);
>  		list_del(&folio->lru);
> -		page = &folio->page;
>  
> -		if (unlikely(!page_evictable(page))) {
> -			putback_lru_page(page);
> +		if (unlikely(!folio_evictable(folio))) {
> +			folio_putback_lru(folio);
>  			continue;
>  		}
>  
>  		if (unlikely(buffer_heads_over_limit)) {
> -			if (page_has_private(page) && trylock_page(page)) {
> -				if (page_has_private(page))
> -					try_to_release_page(page, 0);
> -				unlock_page(page);
> +			if (folio_get_private(folio) && folio_trylock(folio)) {
> +				if (folio_get_private(folio))
> +					filemap_release_folio(folio, 0);
> +				folio_unlock(folio);
>  			}
>  		}
>  

Hi Andrew.  Hugh points out that the above is not an equivalent
transformation for pages which are in the swapcache.  Can you apply
this fix, or would you like a full patch?

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0070c7fb600a..7e34f4c8d956 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2544,8 +2544,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		}
 
 		if (unlikely(buffer_heads_over_limit)) {
-			if (folio_get_private(folio) && folio_trylock(folio)) {
-				if (folio_get_private(folio))
+			if (folio_test_private(folio) && folio_trylock(folio)) {
+				if (folio_test_private(folio))
 					filemap_release_folio(folio, 0);
 				folio_unlock(folio);
 			}
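
The reason the original transformation was not equivalent: a
swapcache folio stores its swp_entry_t in folio->private without
setting the PG_private flag, so folio_get_private() is non-NULL for
it even though page_has_private() would have returned false.
folio_test_private() tests the flag bit and restores the original
behaviour.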



* Re: [PATCH 4/5] mm/vmscan: Convert shrink_active_list() to use a folio
  2022-07-11 20:53   ` Matthew Wilcox
@ 2022-07-13 18:13     ` Andrew Morton
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2022-07-13 18:13 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-mm, Hugh Dickins

On Mon, 11 Jul 2022 21:53:04 +0100 Matthew Wilcox <willy@infradead.org> wrote:

> On Fri, Jun 17, 2022 at 04:42:47PM +0100, Matthew Wilcox (Oracle) wrote:
> > @@ -2478,23 +2477,21 @@ static void shrink_active_list(unsigned long nr_to_scan,
> >  
> >  	while (!list_empty(&l_hold)) {
> >  		struct folio *folio;
> > -		struct page *page;
> >  
> >  		cond_resched();
> >  		folio = lru_to_folio(&l_hold);
> >  		list_del(&folio->lru);
> > -		page = &folio->page;
> >  
> > -		if (unlikely(!page_evictable(page))) {
> > -			putback_lru_page(page);
> > +		if (unlikely(!folio_evictable(folio))) {
> > +			folio_putback_lru(folio);
> >  			continue;
> >  		}
> >  
> >  		if (unlikely(buffer_heads_over_limit)) {
> > -			if (page_has_private(page) && trylock_page(page)) {
> > -				if (page_has_private(page))
> > -					try_to_release_page(page, 0);
> > -				unlock_page(page);
> > +			if (folio_get_private(folio) && folio_trylock(folio)) {
> > +				if (folio_get_private(folio))
> > +					filemap_release_folio(folio, 0);
> > +				folio_unlock(folio);
> >  			}
> >  		}
> >  
> 
> Hi Andrew.  Hugh points out that the above is not an equivalent
> transformation for pages which are in the swapcache.  Can you apply
> this fix, or would you like a full patch?
> 
> ...

The original is in mm-stable and rebasing that is bad (sigh) so I'll
add this as a standalone patch.  So yes, please send along the real
thing.




