From: Hugh Dickins <hughd@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Andi Kleen <ak@linux.intel.com>, Christoph Lameter <cl@linux.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	David Hildenbrand <david@redhat.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Yang Shi <shy828301@gmail.com>,
	Sidhartha Kumar <sidhartha.kumar@oracle.com>,
	Vishal Moola <vishal.moola@gmail.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Tejun Heo <tj@kernel.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@suse.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 09/12] mm: add page_rmappable_folio() wrapper
Date: Mon, 25 Sep 2023 01:32:02 -0700 (PDT)	[thread overview]
Message-ID: <f4dc7bb6-be3a-c1b-c30-37c4e0c16e4d@google.com> (raw)
In-Reply-To: <2d872cef-7787-a7ca-10e-9d45a64c80b4@google.com>

folio_prep_large_rmappable() is being used repeatedly along with a
conversion from page to folio, a check for non-NULL, and a check for
order > 1: wrap it all up into struct folio *page_rmappable_folio(struct page *).

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 include/linux/huge_mm.h | 13 +++++++++++++
 mm/mempolicy.c          | 17 +++--------------
 mm/page_alloc.c         |  8 ++------
 3 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index fa0350b0812a..58e7662a8a62 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -141,6 +141,15 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 
 void folio_prep_large_rmappable(struct folio *folio);
+static inline struct folio *page_rmappable_folio(struct page *page)
+{
+	struct folio *folio = (struct folio *)page;
+
+	if (folio && folio_order(folio) > 1)
+		folio_prep_large_rmappable(folio);
+	return folio;
+}
+
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
@@ -281,6 +290,10 @@ static inline bool hugepage_vma_check(struct vm_area_struct *vma,
 }
 
 static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *page_rmappable_folio(struct page *page)
+{
+	return (struct folio *)page;
+}
 
 #define transparent_hugepage_flags 0UL
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 7ab6102d7da4..4c3b3f535630 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2137,10 +2137,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		mpol_cond_put(pol);
 		gfp |= __GFP_COMP;
 		page = alloc_page_interleave(gfp, order, nid);
-		folio = (struct folio *)page;
-		if (folio && order > 1)
-			folio_prep_large_rmappable(folio);
-		goto out;
+		return page_rmappable_folio(page);
 	}
 
 	if (pol->mode == MPOL_PREFERRED_MANY) {
@@ -2150,10 +2147,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		gfp |= __GFP_COMP;
 		page = alloc_pages_preferred_many(gfp, order, node, pol);
 		mpol_cond_put(pol);
-		folio = (struct folio *)page;
-		if (folio && order > 1)
-			folio_prep_large_rmappable(folio);
-		goto out;
+		return page_rmappable_folio(page);
 	}
 
 	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
@@ -2247,12 +2241,7 @@ EXPORT_SYMBOL(alloc_pages);
 
 struct folio *folio_alloc(gfp_t gfp, unsigned order)
 {
-	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
-	struct folio *folio = (struct folio *)page;
-
-	if (folio && order > 1)
-		folio_prep_large_rmappable(folio);
-	return folio;
+	return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order));
 }
 EXPORT_SYMBOL(folio_alloc);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95546f376302..5b1707d9025a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4456,12 +4456,8 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask)
 {
 	struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
-			preferred_nid, nodemask);
-	struct folio *folio = (struct folio *)page;
-
-	if (folio && order > 1)
-		folio_prep_large_rmappable(folio);
-	return folio;
+					  preferred_nid, nodemask);
+	return page_rmappable_folio(page);
 }
 EXPORT_SYMBOL(__folio_alloc);
 
-- 
2.35.3


Thread overview: 31+ messages
2023-09-25  8:17 [PATCH 00/12] mempolicy: cleanups leading to NUMA mpol without vma Hugh Dickins
2023-09-25  8:21 ` [PATCH 01/12] hugetlbfs: drop shared NUMA mempolicy pretence Hugh Dickins
2023-09-25 22:09   ` Matthew Wilcox
2023-09-25 22:46   ` Andi Kleen
2023-09-26 22:26     ` Hugh Dickins
2023-09-25  8:22 ` [PATCH 02/12] kernfs: drop shared NUMA mempolicy hooks Hugh Dickins
2023-09-25 22:10   ` Matthew Wilcox
2023-09-25  8:24 ` [PATCH 03/12] mempolicy: fix migrate_pages(2) syscall return nr_failed Hugh Dickins
2023-09-25 22:22   ` Matthew Wilcox
2023-09-26 20:47     ` Hugh Dickins
2023-09-27  8:02   ` Huang, Ying
2023-09-30  4:20     ` Hugh Dickins
2023-09-25  8:25 ` [PATCH 04/12] mempolicy trivia: delete those ancient pr_debug()s Hugh Dickins
2023-09-25 22:23   ` Matthew Wilcox
2023-09-25  8:26 ` [PATCH 05/12] mempolicy trivia: slightly more consistent naming Hugh Dickins
2023-09-25 22:28   ` Matthew Wilcox
2023-09-25  8:28 ` [PATCH 06/12] mempolicy trivia: use pgoff_t in shared mempolicy tree Hugh Dickins
2023-09-25 22:31   ` Matthew Wilcox
2023-09-25 22:38     ` Matthew Wilcox
2023-09-26 21:19     ` Hugh Dickins
2023-09-25  8:29 ` [PATCH 07/12] mempolicy: mpol_shared_policy_init() without pseudo-vma Hugh Dickins
2023-09-25 22:50   ` Matthew Wilcox
2023-09-26 21:36     ` Hugh Dickins
2023-09-25  8:30 ` [PATCH 08/12] mempolicy: remove confusing MPOL_MF_LAZY dead code Hugh Dickins
2023-09-25 22:52   ` Matthew Wilcox
2023-09-25  8:32 ` Hugh Dickins [this message]
2023-09-25 22:58   ` [PATCH 09/12] mm: add page_rmappable_folio() wrapper Matthew Wilcox
2023-09-26 21:58     ` Hugh Dickins
2023-09-25  8:33 ` [PATCH 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma Hugh Dickins
2023-09-25  8:35 ` [PATCH 11/12] mempolicy: mmap_lock is not needed while migrating folios Hugh Dickins
2023-09-25  8:36 ` [PATCH 12/12] mempolicy: migration attempt to match interleave nodes Hugh Dickins
