From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	John Hubbard <jhubbard@nvidia.com>,
	Christoph Hellwig <hch@infradead.org>,
	William Kucharski <william.kucharski@oracle.com>,
	linux-kernel@vger.kernel.org, Jason Gunthorpe <jgg@ziepe.ca>
Subject: [PATCH v2 15/28] gup: Add try_get_folio() and try_grab_folio()
Date: Mon, 10 Jan 2022 04:23:53 +0000
Message-ID: <20220110042406.499429-16-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

Convert try_get_compound_head() into try_get_folio() and convert
try_grab_compound_head() into try_grab_folio().  Also convert
page_pincount_add() to folio_pincount_add().  Add a temporary
try_grab_compound_head() wrapper around try_grab_folio() to
let us convert callers individually.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
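For reference, a caller converted away from the temporary
try_grab_compound_head() wrapper ends up working with the folio
directly.  A minimal sketch of such a conversion (the function name
below is purely illustrative and not added by this patch):

	static int example_grab(struct page *page, int refs, unsigned int flags)
	{
		/* Previously: struct page *head = try_grab_compound_head(page, refs, flags); */
		struct folio *folio = try_grab_folio(page, refs, flags);

		if (!folio)
			return 0;
		/* ... operate on the folio rather than the head page ... */
		return 1;
	}
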
 mm/gup.c      | 104 +++++++++++++++++++++++++-------------------------
 mm/internal.h |   5 +++
 2 files changed, 56 insertions(+), 53 deletions(-)
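
The pin accounting described in try_grab_folio()'s kernel-doc below
works out as in this worked example (GUP_PIN_COUNTING_BIAS is 1024 in
the current tree; the numbers are illustrative only):

	/*
	 * FOLL_PIN, single-page folio, refs = 1:
	 *	try_get_folio()      adds 1 to the folio refcount
	 *	folio_pincount_add() adds 1 * (GUP_PIN_COUNTING_BIAS - 1) = 1023
	 *	net effect:          refcount += 1024
	 *
	 * FOLL_PIN, large folio, refs = 3:
	 *	try_get_folio()      adds 3 to the folio refcount
	 *	folio_pincount_add() adds 3 to compound_pincount
	 */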

diff --git a/mm/gup.c b/mm/gup.c
index 1282d29357b7..9e581201d679 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,21 +30,19 @@ struct follow_page_context {
 };
 
 /*
- * When pinning a compound page, use an exact count to track it, via
- * page_pincount_add/_sub().
+ * When pinning a large folio, use an exact count to track it.
  *
- * However, be sure to *also* increment the normal page refcount field
- * at least once, so that the page really is pinned.  That's why the
- * refcount from the earlier try_get_compound_head() is left intact.
+ * However, be sure to *also* increment the normal folio refcount
+ * field at least once, so that the folio really is pinned.
+ * That's why the refcount from the earlier
+ * try_get_folio() is left intact.
  */
-static void page_pincount_add(struct page *page, int refs)
+static void folio_pincount_add(struct folio *folio, int refs)
 {
-	VM_BUG_ON_PAGE(page != compound_head(page), page);
-
-	if (PageHead(page))
-		atomic_add(refs, compound_pincount_ptr(page));
+	if (folio_test_large(folio))
+		atomic_add(refs, folio_pincount_ptr(folio));
 	else
-		page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
+		folio_ref_add(folio, refs * (GUP_PIN_COUNTING_BIAS - 1));
 }
 
 static int page_pincount_sub(struct page *page, int refs)
@@ -76,75 +74,70 @@ static void put_page_refs(struct page *page, int refs)
 }
 
 /*
- * Return the compound head page with ref appropriately incremented,
+ * Return the folio with ref appropriately incremented,
  * or NULL if that failed.
  */
-static inline struct page *try_get_compound_head(struct page *page, int refs)
+static inline struct folio *try_get_folio(struct page *page, int refs)
 {
-	struct page *head;
+	struct folio *folio;
 
 retry:
-	head = compound_head(page);
-
-	if (WARN_ON_ONCE(page_ref_count(head) < 0))
+	folio = page_folio(page);
+	if (WARN_ON_ONCE(folio_ref_count(folio) < 0))
 		return NULL;
-	if (unlikely(!page_cache_add_speculative(head, refs)))
+	if (unlikely(!folio_ref_try_add_rcu(folio, refs)))
 		return NULL;
 
 	/*
-	 * At this point we have a stable reference to the head page; but it
-	 * could be that between the compound_head() lookup and the refcount
-	 * increment, the compound page was split, in which case we'd end up
-	 * holding a reference on a page that has nothing to do with the page
+	 * At this point we have a stable reference to the folio; but it
+	 * could be that between calling page_folio() and the refcount
+	 * increment, the folio was split, in which case we'd end up
+	 * holding a reference on a folio that has nothing to do with the page
 	 * we were given anymore.
-	 * So now that the head page is stable, recheck that the pages still
-	 * belong together.
+	 * So now that the folio is stable, recheck that the page still
+	 * belongs to this folio.
 	 */
-	if (unlikely(compound_head(page) != head)) {
-		put_page_refs(head, refs);
+	if (unlikely(page_folio(page) != folio)) {
+		folio_put_refs(folio, refs);
 		goto retry;
 	}
 
-	return head;
+	return folio;
 }
 
 /**
- * try_grab_compound_head() - attempt to elevate a page's refcount, by a
- * flags-dependent amount.
- *
- * Even though the name includes "compound_head", this function is still
- * appropriate for callers that have a non-compound @page to get.
- *
+ * try_grab_folio() - Attempt to get or pin a folio.
  * @page:  pointer to page to be grabbed
- * @refs:  the value to (effectively) add to the page's refcount
+ * @refs:  the value to (effectively) add to the folio's refcount
  * @flags: gup flags: these are the FOLL_* flag values.
  *
  * "grab" names in this file mean, "look at flags to decide whether to use
- * FOLL_PIN or FOLL_GET behavior, when incrementing the page's refcount.
+ * FOLL_PIN or FOLL_GET behavior, when incrementing the folio's refcount.
  *
  * Either FOLL_PIN or FOLL_GET (or neither) must be set, but not both at the
  * same time. (That's true throughout the get_user_pages*() and
  * pin_user_pages*() APIs.) Cases:
  *
- *    FOLL_GET: page's refcount will be incremented by @refs.
+ *    FOLL_GET: folio's refcount will be incremented by @refs.
  *
- *    FOLL_PIN on compound pages: page's refcount will be incremented by
- *    @refs, and page[1].compound_pincount will be incremented by @refs.
+ *    FOLL_PIN on large folios: folio's refcount will be incremented by
+ *    @refs, and its compound_pincount will be incremented by @refs.
  *
- *    FOLL_PIN on normal pages: page's refcount will be incremented by
+ *    FOLL_PIN on single-page folios: folio's refcount will be incremented by
  *    @refs * GUP_PIN_COUNTING_BIAS.
  *
- * Return: head page (with refcount appropriately incremented) for success, or
- * NULL upon failure. If neither FOLL_GET nor FOLL_PIN was set, that's
- * considered failure, and furthermore, a likely bug in the caller, so a warning
- * is also emitted.
+ * Return: The folio containing @page (with refcount appropriately
+ * incremented) for success, or NULL upon failure. If neither FOLL_GET
+ * nor FOLL_PIN was set, that's considered failure, and furthermore,
+ * a likely bug in the caller, so a warning is also emitted.
  */
-struct page *try_grab_compound_head(struct page *page,
-				    int refs, unsigned int flags)
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
 	if (flags & FOLL_GET)
-		return try_get_compound_head(page, refs);
+		return try_get_folio(page, refs);
 	else if (flags & FOLL_PIN) {
+		struct folio *folio;
+
 		/*
 		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
 		 * right zone, so fail and let the caller fall back to the slow
@@ -158,21 +151,26 @@ struct page *try_grab_compound_head(struct page *page,
 		 * CAUTION: Don't use compound_head() on the page before this
 		 * point, the result won't be stable.
 		 */
-		page = try_get_compound_head(page, refs);
-		if (!page)
+		folio = try_get_folio(page, refs);
+		if (!folio)
 			return NULL;
 
-		page_pincount_add(page, refs);
-		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED,
-				    refs);
+		folio_pincount_add(folio, refs);
+		node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
 
-		return page;
+		return folio;
 	}
 
 	WARN_ON_ONCE(1);
 	return NULL;
 }
 
+struct page *try_grab_compound_head(struct page *page,
+		int refs, unsigned int flags)
+{
+	return &try_grab_folio(page, refs, flags)->page;
+}
+
 static void put_compound_head(struct page *page, int refs, unsigned int flags)
 {
 	if (flags & FOLL_PIN) {
@@ -196,7 +194,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
  * @flags:   gup flags: these are the FOLL_* flag values.
  *
  * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the same
- * time. Cases: please see the try_grab_compound_head() documentation, with
+ * time. Cases: please see the try_grab_folio() documentation, with
  * "refs=1".
  *
  * Return: true for success, or if no action was required (if neither FOLL_PIN
diff --git a/mm/internal.h b/mm/internal.h
index 26af8a5a5be3..9a72d1ecdab4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -723,4 +723,9 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 		      unsigned long addr, int page_nid, int *flags);
 
+/*
+ * mm/gup.c
+ */
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
+
 #endif	/* __MM_INTERNAL_H */
-- 
2.33.0



Thread overview: 96+ messages
2022-01-10  4:23 [PATCH v2 00/28] Convert GUP to folios Matthew Wilcox (Oracle)
2022-01-10  4:23 ` [PATCH v2 01/28] gup: Remove for_each_compound_range() Matthew Wilcox (Oracle)
2022-01-10  8:22   ` Christoph Hellwig
2022-01-11  1:07   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 02/28] gup: Remove for_each_compound_head() Matthew Wilcox (Oracle)
2022-01-10  8:23   ` Christoph Hellwig
2022-01-11  1:11   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 03/28] gup: Change the calling convention for compound_range_next() Matthew Wilcox (Oracle)
2022-01-10  8:25   ` Christoph Hellwig
2022-01-11  1:14   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 04/28] gup: Optimise compound_range_next() Matthew Wilcox (Oracle)
2022-01-10  8:26   ` Christoph Hellwig
2022-01-11  1:16   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 05/28] gup: Change the calling convention for compound_next() Matthew Wilcox (Oracle)
2022-01-10  8:27   ` Christoph Hellwig
2022-01-10 13:28     ` Matthew Wilcox
2022-01-11  1:18   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 06/28] gup: Fix some contiguous memmap assumptions Matthew Wilcox (Oracle)
2022-01-10  8:29   ` Christoph Hellwig
2022-01-10 13:37     ` Matthew Wilcox
2022-01-10 19:05       ` [External] : " Mike Kravetz
2022-01-11  1:47   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 07/28] gup: Remove an assumption of a contiguous memmap Matthew Wilcox (Oracle)
2022-01-10  8:30   ` Christoph Hellwig
2022-01-11  3:27   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 08/28] gup: Handle page split race more efficiently Matthew Wilcox (Oracle)
2022-01-10  8:31   ` Christoph Hellwig
2022-01-11  3:30   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 09/28] gup: Turn hpage_pincount_add() into page_pincount_add() Matthew Wilcox (Oracle)
2022-01-10  8:31   ` Christoph Hellwig
2022-01-11  3:35   ` John Hubbard
2022-01-11  4:32   ` John Hubbard
2022-01-11 13:46     ` Matthew Wilcox
2022-01-10  4:23 ` [PATCH v2 10/28] gup: Turn hpage_pincount_sub() into page_pincount_sub() Matthew Wilcox (Oracle)
2022-01-10  8:32   ` Christoph Hellwig
2022-01-11  6:40   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 11/28] mm: Make compound_pincount always available Matthew Wilcox (Oracle)
2022-01-11  4:06   ` John Hubbard
2022-01-11  4:38     ` Matthew Wilcox
2022-01-11  5:10       ` John Hubbard
2022-01-20  9:15   ` Christoph Hellwig
2022-01-10  4:23 ` [PATCH v2 12/28] mm: Add folio_put_refs() Matthew Wilcox (Oracle)
2022-01-10  8:32   ` Christoph Hellwig
2022-01-11  4:14   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 13/28] mm: Add folio_pincount_ptr() Matthew Wilcox (Oracle)
2022-01-10  8:33   ` Christoph Hellwig
2022-01-11  4:22   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 14/28] mm: Convert page_maybe_dma_pinned() to use a folio Matthew Wilcox (Oracle)
2022-01-10  8:33   ` Christoph Hellwig
2022-01-11  4:27   ` John Hubbard
2022-01-10  4:23 ` Matthew Wilcox (Oracle) [this message]
2022-01-10  8:34   ` [PATCH v2 15/28] gup: Add try_get_folio() and try_grab_folio() Christoph Hellwig
2022-01-11  5:00   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 16/28] mm: Remove page_cache_add_speculative() and page_cache_get_speculative() Matthew Wilcox (Oracle)
2022-01-10  8:35   ` Christoph Hellwig
2022-01-11  5:14   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 17/28] gup: Add gup_put_folio() Matthew Wilcox (Oracle)
2022-01-10  8:35   ` Christoph Hellwig
2022-01-11  6:44   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 18/28] hugetlb: Use try_grab_folio() instead of try_grab_compound_head() Matthew Wilcox (Oracle)
2022-01-10  8:36   ` Christoph Hellwig
2022-01-11  6:47   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 19/28] gup: Convert try_grab_page() to call try_grab_folio() Matthew Wilcox (Oracle)
2022-01-10  8:36   ` Christoph Hellwig
2022-01-11  7:01   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 20/28] gup: Convert gup_pte_range() to use a folio Matthew Wilcox (Oracle)
2022-01-10  8:37   ` Christoph Hellwig
2022-01-11  7:06   ` John Hubbard
2022-01-10  4:23 ` [PATCH v2 21/28] gup: Convert gup_hugepte() " Matthew Wilcox (Oracle)
2022-01-10  8:37   ` Christoph Hellwig
2022-01-11  7:33   ` John Hubbard
2022-01-10  4:24 ` [PATCH v2 22/28] gup: Convert gup_huge_pmd() " Matthew Wilcox (Oracle)
2022-01-10  8:37   ` Christoph Hellwig
2022-01-11  7:36   ` John Hubbard
2022-01-10  4:24 ` [PATCH v2 23/28] gup: Convert gup_huge_pud() " Matthew Wilcox (Oracle)
2022-01-10  8:38   ` Christoph Hellwig
2022-01-11  7:38   ` John Hubbard
2022-01-10  4:24 ` [PATCH v2 24/28] gup: Convert gup_huge_pgd() " Matthew Wilcox (Oracle)
2022-01-10  8:38   ` Christoph Hellwig
2022-01-11  7:38   ` John Hubbard
2022-01-10  4:24 ` [PATCH v2 25/28] gup: Convert compound_next() to gup_folio_next() Matthew Wilcox (Oracle)
2022-01-10  8:39   ` Christoph Hellwig
2022-01-11  7:41   ` John Hubbard
2022-01-10  4:24 ` [PATCH v2 26/28] gup: Convert compound_range_next() to gup_folio_range_next() Matthew Wilcox (Oracle)
2022-01-10  8:41   ` Christoph Hellwig
2022-01-10 13:41     ` Matthew Wilcox
2022-01-11  7:44   ` John Hubbard
2022-01-10  4:24 ` [PATCH v2 27/28] mm: Add isolate_lru_folio() Matthew Wilcox (Oracle)
2022-01-10  8:42   ` Christoph Hellwig
2022-01-11  7:49   ` John Hubbard
2022-01-10  4:24 ` [PATCH v2 28/28] gup: Convert check_and_migrate_movable_pages() to use a folio Matthew Wilcox (Oracle)
2022-01-10  8:42   ` Christoph Hellwig
2022-01-11  7:52   ` John Hubbard
2022-01-10 15:31 ` [PATCH v2 00/28] Convert GUP to folios Jason Gunthorpe
2022-01-10 16:09   ` Matthew Wilcox
2022-01-10 17:26 ` William Kucharski
