From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org, David Hildenbrand <david@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Muchun Song <muchun.song@linux.dev>,
	Oscar Salvador <osalvador@suse.de>
Subject: [PATCH 3/9] mm: Remove folio_prep_large_rmappable()
Date: Thu, 21 Mar 2024 14:24:41 +0000
Message-ID: <20240321142448.1645400-4-willy@infradead.org>
In-Reply-To: <20240321142448.1645400-1-willy@infradead.org>

Now that prep_compound_page() initialises folio->_deferred_list,
folio_prep_large_rmappable()'s only purpose is to set the large_rmappable
flag, so inline it into the two callers.  Take the opportunity to convert
the large_rmappable definition from PAGEFLAG to FOLIO_FLAG and remove
the existance of PageTestLargeRmappable and friends.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h    | 3 ---
 include/linux/page-flags.h | 4 ++--
 mm/huge_memory.c           | 9 +--------
 mm/internal.h              | 3 ++-
 4 files changed, 5 insertions(+), 14 deletions(-)
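
A note below the fold, so it stays out of the commit log: roughly
speaking, the old PAGEFLAG(LargeRmappable, large_rmappable, PF_SECOND)
form emitted both the folio_{test,set,clear}_large_rmappable()
accessors and the Page*-style wrappers, while
FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE) keeps only the folio
accessors, which should be all that callers need after this patch.
The standalone userspace sketch below illustrates just that accessor
naming; the stand-in struct, the bit number and main() are invented
for the example, and the FOLIO_SECOND_PAGE indirection onto the second
page's flags word is deliberately ignored.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct folio; the real one lives in include/linux/mm_types.h */
struct folio {
	unsigned long flags;
};

#define PG_large_rmappable	3	/* bit number chosen only for this sketch */

/*
 * Roughly what FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE) provides,
 * ignoring the indirection onto the second page's flags word:
 */
static inline bool folio_test_large_rmappable(const struct folio *folio)
{
	return folio->flags & (1UL << PG_large_rmappable);
}

static inline void folio_set_large_rmappable(struct folio *folio)
{
	folio->flags |= 1UL << PG_large_rmappable;
}

static inline void folio_clear_large_rmappable(struct folio *folio)
{
	folio->flags &= ~(1UL << PG_large_rmappable);
}

int main(void)
{
	struct folio f = { .flags = 0 };

	folio_set_large_rmappable(&f);
	printf("after set:   %d\n", folio_test_large_rmappable(&f));
	folio_clear_large_rmappable(&f);
	printf("after clear: %d\n", folio_test_large_rmappable(&f));
	return 0;
}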

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index de0c89105076..0e16451adaba 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,6 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 
-void folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order);
@@ -411,8 +410,6 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return 0;
 }
 
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
-
 #define transparent_hugepage_flags 0UL
 
 #define thp_get_unmapped_area	NULL
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index dc1607f1415e..8d0e6ce25ca2 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -869,9 +869,9 @@ static inline void ClearPageCompound(struct page *page)
 	BUG_ON(!PageHead(page));
 	ClearPageHead(page);
 }
-PAGEFLAG(LargeRmappable, large_rmappable, PF_SECOND)
+FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
 #else
-TESTPAGEFLAG_FALSE(LargeRmappable, large_rmappable)
+FOLIO_FLAG_FALSE(large_rmappable)
 #endif
 
 #define PG_head_mask ((1UL << PG_head))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 04fb994a7b0b..5cb025341d52 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,13 +788,6 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
 }
 #endif
 
-void folio_prep_large_rmappable(struct folio *folio)
-{
-	if (!folio || !folio_test_large(folio))
-		return;
-	folio_set_large_rmappable(folio);
-}
-
 static inline bool is_transparent_hugepage(struct folio *folio)
 {
 	if (!folio_test_large(folio))
@@ -2861,7 +2854,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	clear_compound_head(page_tail);
 	if (new_order) {
 		prep_compound_page(page_tail, new_order);
-		folio_prep_large_rmappable(new_folio);
+		folio_set_large_rmappable(new_folio);
 	}
 
 	/* Finally unfreeze refcount. Additional reference from page cache. */
diff --git a/mm/internal.h b/mm/internal.h
index 10895ec52546..ee669963db15 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -513,7 +513,8 @@ static inline struct folio *page_rmappable_folio(struct page *page)
 {
 	struct folio *folio = (struct folio *)page;
 
-	folio_prep_large_rmappable(folio);
+	if (folio && folio_test_large(folio))
+		folio_set_large_rmappable(folio);
 	return folio;
 }
 
-- 
2.43.0



Thread overview: 45+ messages
2024-03-21 14:24 [PATCH 0/9] Various significant MM patches Matthew Wilcox (Oracle)
2024-03-21 14:24 ` [PATCH 1/9] mm: Always initialise folio->_deferred_list Matthew Wilcox (Oracle)
2024-03-22  8:23   ` Miaohe Lin
2024-03-22 13:00     ` Matthew Wilcox
2024-04-01  3:14       ` Miaohe Lin
2024-03-22  9:30   ` Vlastimil Babka
2024-03-22 12:49   ` David Hildenbrand
2024-03-21 14:24 ` [PATCH 2/9] mm: Create FOLIO_FLAG_FALSE and FOLIO_TYPE_OPS macros Matthew Wilcox (Oracle)
2024-03-22  9:33   ` Vlastimil Babka
2024-03-21 14:24 ` Matthew Wilcox (Oracle) [this message]
2024-03-22  9:37   ` [PATCH 3/9] mm: Remove folio_prep_large_rmappable() Vlastimil Babka
2024-03-22 12:51   ` David Hildenbrand
2024-03-21 14:24 ` [PATCH 4/9] mm: Support page_mapcount() on page_has_type() pages Matthew Wilcox (Oracle)
2024-03-22  9:43   ` Vlastimil Babka
2024-03-22 12:43     ` Matthew Wilcox
2024-03-22 15:04   ` David Hildenbrand
2024-03-21 14:24 ` [PATCH 5/9] mm: Turn folio_test_hugetlb into a PageType Matthew Wilcox (Oracle)
2024-03-22 10:19   ` Vlastimil Babka
2024-03-22 15:06     ` David Hildenbrand
2024-03-23  3:24     ` Matthew Wilcox
2024-03-25  7:57   ` Vlastimil Babka
2024-03-25 18:48     ` Andrew Morton
2024-03-25 20:41       ` Matthew Wilcox
2024-03-25 20:47         ` Vlastimil Babka
2024-03-25 15:14   ` Matthew Wilcox
2024-03-25 15:18     ` Matthew Wilcox
2024-03-25 15:33       ` Matthew Wilcox
2024-03-21 14:24 ` [PATCH 6/9] mm: Remove a call to compound_head() from is_page_hwpoison() Matthew Wilcox (Oracle)
2024-03-22 10:28   ` Vlastimil Babka
2024-03-21 14:24 ` [PATCH 7/9] mm: Free up PG_slab Matthew Wilcox (Oracle)
2024-03-22  9:20   ` Miaohe Lin
2024-03-22 10:41     ` Vlastimil Babka
2024-04-01  3:38       ` Miaohe Lin
2024-03-22 15:09   ` David Hildenbrand
2024-03-25 15:19   ` Matthew Wilcox
2024-03-31 15:11   ` kernel test robot
2024-03-31 15:11     ` [LTP] " kernel test robot
2024-04-02  5:26     ` Matthew Wilcox
2024-04-02  5:26       ` [LTP] " Matthew Wilcox
2024-03-21 14:24 ` [PATCH 8/9] mm: Improve dumping of mapcount and page_type Matthew Wilcox (Oracle)
2024-03-22 11:05   ` Vlastimil Babka
2024-03-22 15:10   ` David Hildenbrand
2024-03-21 14:24 ` [PATCH 9/9] hugetlb: Remove mention of destructors Matthew Wilcox (Oracle)
2024-03-22 11:08   ` Vlastimil Babka
2024-03-22 15:13   ` David Hildenbrand