From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Vlastimil Babka <vbabka@suse.cz>,
	William Kucharski <william.kucharski@oracle.com>,
	Christoph Hellwig <hch@lst.de>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH v13 23/32] mm/swap: Add folio_rotate_reclaimable()
Date: Mon, 12 Jul 2021 20:01:55 +0100
Message-ID: <20210712190204.80979-24-willy@infradead.org>
In-Reply-To: <20210712190204.80979-1-willy@infradead.org>

Convert rotate_reclaimable_page() to folio_rotate_reclaimable().  This
eliminates all five of the calls to compound_head() in this function,
saving 75 bytes at the cost of adding 15 bytes to its one caller,
end_page_writeback().  We also save 36 bytes from pagevec_move_tail_fn()
due to using folios there.  Net saving: 96 bytes (75 - 15 + 36).
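
For a rough picture of where those compound_head() calls come from, here
is a user-space toy model (not kernel code; the structures are stripped
down for illustration): every Page*() test has to resolve the head page
before it can look at the flags, while a struct folio is known to be a
head page, so the lookup happens once, in page_folio().

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the kernel structures. */
struct page { unsigned long flags; struct page *head; };
struct folio { struct page page; };

#define PG_dirty 0

static struct page *compound_head(struct page *page)
{
	/* Tail pages point at their head page; head pages have no pointer here. */
	return page->head ? page->head : page;
}

static bool PageDirty(struct page *page)
{
	return compound_head(page)->flags & (1UL << PG_dirty);	/* head lookup on every test */
}

static struct folio *page_folio(struct page *page)
{
	return (struct folio *)compound_head(page);		/* resolve the head once */
}

static bool folio_dirty(struct folio *folio)
{
	return folio->page.flags & (1UL << PG_dirty);		/* no head lookup needed */
}

int main(void)
{
	struct folio head = { .page = { .flags = 1UL << PG_dirty } };
	struct page tail = { .head = &head.page };

	printf("PageDirty(tail)               = %d\n", PageDirty(&tail));
	printf("folio_dirty(page_folio(tail)) = %d\n", folio_dirty(page_folio(&tail)));
	return 0;
}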

Also move its declaration to mm/internal.h as it's only used by filemap.c.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/swap.h |  1 -
 mm/filemap.c         |  3 ++-
 mm/internal.h        |  1 +
 mm/page_io.c         |  4 ++--
 mm/swap.c            | 30 ++++++++++++++++--------------
 5 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 3d3d85354026..8394716a002b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -371,7 +371,6 @@ extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
-extern void rotate_reclaimable_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
 extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
diff --git a/mm/filemap.c b/mm/filemap.c
index 1dab6c126c7a..3ebccf9dd7e8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1529,8 +1529,9 @@ void end_page_writeback(struct page *page)
 	 * ever page writeback.
 	 */
 	if (PageReclaim(page)) {
+		struct folio *folio = page_folio(page);
 		ClearPageReclaim(page);
-		rotate_reclaimable_page(page);
+		folio_rotate_reclaimable(folio);
 	}
 
 	/*
diff --git a/mm/internal.h b/mm/internal.h
index 31ff935b2547..1a8851b73031 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -35,6 +35,7 @@
 void page_writeback_init(void);
 
 vm_fault_t do_swap_page(struct vm_fault *vmf);
+void folio_rotate_reclaimable(struct folio *folio);
 
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
diff --git a/mm/page_io.c b/mm/page_io.c
index c493ce9ebcf5..d597bc6e6e45 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -38,7 +38,7 @@ void end_swap_bio_write(struct bio *bio)
 		 * Also print a dire warning that things will go BAD (tm)
 		 * very quickly.
 		 *
-		 * Also clear PG_reclaim to avoid rotate_reclaimable_page()
+		 * Also clear PG_reclaim to avoid folio_rotate_reclaimable()
 		 */
 		set_page_dirty(page);
 		pr_alert_ratelimited("Write-error on swap-device (%u:%u:%llu)\n",
@@ -317,7 +317,7 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc,
 			 * temporary failure if the system has limited
 			 * memory for allocating transmit buffers.
 			 * Mark the page dirty and avoid
-			 * rotate_reclaimable_page but rate-limit the
+			 * folio_rotate_reclaimable but rate-limit the
 			 * messages but do not flag PageError like
 			 * the normal direct-to-bio case as it could
 			 * be temporary.
diff --git a/mm/swap.c b/mm/swap.c
index 19600430e536..6d4696eb2d43 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -228,11 +228,13 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 
 static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (!PageUnevictable(page)) {
-		del_page_from_lru_list(page, lruvec);
-		ClearPageActive(page);
-		add_page_to_lru_list_tail(page, lruvec);
-		__count_vm_events(PGROTATED, thp_nr_pages(page));
+	struct folio *folio = page_folio(page);
+
+	if (!folio_unevictable(folio)) {
+		folio_del_from_lru_list(folio, lruvec);
+		folio_clear_active_flag(folio);
+		folio_add_to_lru_list_tail(folio, lruvec);
+		__count_vm_events(PGROTATED, folio_nr_pages(folio));
 	}
 }
 
@@ -249,23 +251,23 @@ static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
 }
 
 /*
- * Writeback is about to end against a page which has been marked for immediate
- * reclaim.  If it still appears to be reclaimable, move it to the tail of the
- * inactive list.
+ * Writeback is about to end against a folio which has been marked for
+ * immediate reclaim.  If it still appears to be reclaimable, move it
+ * to the tail of the inactive list.
  *
- * rotate_reclaimable_page() must disable IRQs, to prevent nasty races.
+ * folio_rotate_reclaimable() must disable IRQs, to prevent nasty races.
  */
-void rotate_reclaimable_page(struct page *page)
+void folio_rotate_reclaimable(struct folio *folio)
 {
-	if (!PageLocked(page) && !PageDirty(page) &&
-	    !PageUnevictable(page) && PageLRU(page)) {
+	if (!folio_locked(folio) && !folio_dirty(folio) &&
+	    !folio_unevictable(folio) && folio_lru(folio)) {
 		struct pagevec *pvec;
 		unsigned long flags;
 
-		get_page(page);
+		folio_get(folio);
 		local_lock_irqsave(&lru_rotate.lock, flags);
 		pvec = this_cpu_ptr(&lru_rotate.pvec);
-		if (pagevec_add_and_need_flush(pvec, page))
+		if (pagevec_add_and_need_flush(pvec, &folio->page))
 			pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
 		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
-- 
2.30.2
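
A note on the mechanism in the mm/swap.c hunk above: folios are not
rotated one at a time.  folio_rotate_reclaimable() takes a reference,
queues the folio on the per-CPU lru_rotate pagevec under
lru_rotate.lock with IRQs off, and only when the batch fills does
pagevec_lru_move_fn() walk it, calling pagevec_move_tail_fn() to move
each folio to the tail of the inactive list.  The sketch below is a
user-space toy model of that flush-when-full batching, not the kernel
implementation; the names are illustrative, and the batch size of 15 is
only an assumption standing in for PAGEVEC_SIZE.

#include <stdbool.h>
#include <stdio.h>

#define BATCH_SIZE 15	/* assumed stand-in for PAGEVEC_SIZE */

struct batch {
	int nr;
	int items[BATCH_SIZE];
};

/* Add one entry; tell the caller to drain when the batch is full
 * (simplified: the kernel also drains for compound pages and when the
 * LRU cache is disabled). */
static bool batch_add_and_need_flush(struct batch *b, int item)
{
	b->items[b->nr++] = item;
	return b->nr == BATCH_SIZE;
}

static void batch_drain(struct batch *b)
{
	printf("rotating %d folios to the inactive tail\n", b->nr);
	b->nr = 0;
}

int main(void)
{
	struct batch b = { 0 };

	/* Pretend 40 writeback completions each asked for a rotation. */
	for (int i = 0; i < 40; i++)
		if (batch_add_and_need_flush(&b, i))
			batch_drain(&b);

	if (b.nr)
		batch_drain(&b);	/* leftover partial batch, drained later in the kernel */
	return 0;
}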


Thread overview: 33+ messages
2021-07-12 19:01 [PATCH v13 00/32] Memory folios Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 01/32] mm: Convert get_page_unless_zero() to return bool Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 02/32] mm: Introduce struct folio Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 03/32] mm: Add folio_pgdat(), folio_zone() and folio_zonenum() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 04/32] mm/vmstat: Add functions to account folio statistics Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 05/32] mm/debug: Add VM_BUG_ON_FOLIO() and VM_WARN_ON_ONCE_FOLIO() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 06/32] mm: Add folio reference count functions Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 07/32] mm: Add folio_put() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 08/32] mm: Add folio_get() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 09/32] mm: Add folio_try_get_rcu() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 10/32] mm: Add folio flag manipulation functions Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 11/32] mm/lru: Add folio LRU functions Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 12/32] mm: Handle per-folio private data Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 13/32] mm/filemap: Add folio_index(), folio_file_page() and folio_contains() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 14/32] mm/filemap: Add folio_next_index() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 15/32] mm/filemap: Add folio_pos() and folio_file_pos() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 16/32] mm/util: Add folio_mapping() and folio_file_mapping() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 17/32] mm/filemap: Add folio_unlock() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 18/32] mm/filemap: Add folio_lock() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 19/32] mm/filemap: Add folio_lock_killable() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 20/32] mm/filemap: Add __folio_lock_async() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 21/32] mm/filemap: Add folio_wait_locked() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 22/32] mm/filemap: Add __folio_lock_or_retry() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 23/32] mm/swap: Add folio_rotate_reclaimable() Matthew Wilcox (Oracle) [this message]
2021-07-12 19:01 ` [PATCH v13 24/32] mm/filemap: Add folio_end_writeback() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 25/32] mm/writeback: Add folio_wait_writeback() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 26/32] mm/writeback: Add folio_wait_stable() Matthew Wilcox (Oracle)
2021-07-12 19:01 ` [PATCH v13 27/32] mm/filemap: Add folio_wait_bit() Matthew Wilcox (Oracle)
2021-07-12 19:02 ` [PATCH v13 28/32] mm/filemap: Add folio_wake_bit() Matthew Wilcox (Oracle)
2021-07-12 19:02 ` [PATCH v13 29/32] mm/filemap: Convert page wait queues to be folios Matthew Wilcox (Oracle)
2021-07-12 19:02 ` [PATCH v13 30/32] mm/filemap: Add folio private_2 functions Matthew Wilcox (Oracle)
2021-07-12 19:02 ` [PATCH v13 31/32] fs/netfs: Add folio fscache functions Matthew Wilcox (Oracle)
2021-07-12 19:02 ` [PATCH v13 32/32] mm: Add folio_mapped() Matthew Wilcox (Oracle)
