From: Jason Gunthorpe <jgg@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>,
	David Hildenbrand <david@redhat.com>,
	David Howells <dhowells@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	John Hubbard <jhubbard@nvidia.com>,
	linux-mm@kvack.org, "Mike Rapoport (IBM)" <rppt@kernel.org>
Subject: [PATCH v2 12/13] mm/gup: move gup_must_unshare() to mm/internal.h
Date: Tue, 24 Jan 2023 16:34:33 -0400
Message-ID: <12-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>
In-Reply-To: <0-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>

This function is only used in gup.c and closely related code. It touches
FOLL_PIN, which the next patch makes private to mm/internal.h, so it must
be moved first.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
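For context (not part of the commit message): the only callers are the
gup.c page-table walkers. A simplified, hypothetical sketch of the
caller shape -- the real code lives in follow_page_pte() and its
huge-page siblings, and the helper name below is invented for
illustration:

	/*
	 * Sketch only: when a FOLL_PIN is requested on a write-protected
	 * page that must not be shared, refuse the pin so the caller can
	 * retry the fault with FAULT_FLAG_UNSHARE.
	 */
	static int pin_one_pte(struct vm_area_struct *vma, pte_t pte,
			       struct page *page, unsigned int flags)
	{
		if (!pte_write(pte) && gup_must_unshare(vma, flags, page))
			return -EMLINK;	/* faultin_page() retries with UNSHARE */
		return try_grab_page(page, flags);
	}
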
 include/linux/mm.h | 65 ----------------------------------------------
 mm/internal.h      | 65 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+), 65 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a47a6e8a9c78be..e0bacf9f2c5ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3087,71 +3087,6 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
 	return 0;
 }
 
-/*
- * Indicates whether, for pages that are write-protected in the page table,
- * GUP has to trigger unsharing via FAULT_FLAG_UNSHARE such that the
- * GUP pin will remain consistent with the pages mapped into the page tables
- * of the MM.
- *
- * Temporary unmapping of PageAnonExclusive() pages or clearing of
- * PageAnonExclusive() has to protect against concurrent GUP:
- * * Ordinary GUP: Using the PT lock
- * * GUP-fast and fork(): mm->write_protect_seq
- * * GUP-fast and KSM or temporary unmapping (swap, migration): see
- *    page_try_share_anon_rmap()
- *
- * Must be called with the (sub)page that's actually referenced via the
- * page table entry, which might not necessarily be the head page for a
- * PTE-mapped THP.
- *
- * If the vma is NULL, we're coming from the GUP-fast path and might have
- * to fall back to the slow path just to look up the vma.
- */
-static inline bool gup_must_unshare(struct vm_area_struct *vma,
-				    unsigned int flags, struct page *page)
-{
-	/*
-	 * FOLL_WRITE is implicitly handled correctly as the page table entry
-	 * has to be writable -- and if it references (part of) an anonymous
-	 * folio, that part is required to be marked exclusive.
-	 */
-	if ((flags & (FOLL_WRITE | FOLL_PIN)) != FOLL_PIN)
-		return false;
-	/*
-	 * Note: PageAnon(page) is stable until the page is actually getting
-	 * freed.
-	 */
-	if (!PageAnon(page)) {
-		/*
-		 * We only care about R/O long-term pinning: R/O short-term
-		 * pinning does not have the semantics to observe successive
-		 * changes through the process page tables.
-		 */
-		if (!(flags & FOLL_LONGTERM))
-			return false;
-
-		/* We really need the vma ... */
-		if (!vma)
-			return true;
-
-		/*
-		 * ... because we only care about writable private ("COW")
-		 * mappings where we have to break COW early.
-		 */
-		return is_cow_mapping(vma->vm_flags);
-	}
-
-	/* Paired with a memory barrier in page_try_share_anon_rmap(). */
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
-		smp_rmb();
-
-	/*
-	 * Note that PageKsm() pages cannot be exclusive, and consequently,
-	 * cannot get pinned.
-	 */
-	return !PageAnonExclusive(page);
-}
-
 /*
  * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
  * a (NUMA hinting) fault is required.
diff --git a/mm/internal.h b/mm/internal.h
index 0f035bcaf133f5..5c1310b98db64d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -854,6 +854,71 @@ int migrate_device_coherent_page(struct page *page);
 struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
 int __must_check try_grab_page(struct page *page, unsigned int flags);
 
+/*
+ * Indicates whether, for pages that are write-protected in the page table,
+ * GUP has to trigger unsharing via FAULT_FLAG_UNSHARE such that the
+ * GUP pin will remain consistent with the pages mapped into the page tables
+ * of the MM.
+ *
+ * Temporary unmapping of PageAnonExclusive() pages or clearing of
+ * PageAnonExclusive() has to protect against concurrent GUP:
+ * * Ordinary GUP: Using the PT lock
+ * * GUP-fast and fork(): mm->write_protect_seq
+ * * GUP-fast and KSM or temporary unmapping (swap, migration): see
+ *    page_try_share_anon_rmap()
+ *
+ * Must be called with the (sub)page that's actually referenced via the
+ * page table entry, which might not necessarily be the head page for a
+ * PTE-mapped THP.
+ *
+ * If the vma is NULL, we're coming from the GUP-fast path and might have
+ * to fall back to the slow path just to look up the vma.
+ */
+static inline bool gup_must_unshare(struct vm_area_struct *vma,
+				    unsigned int flags, struct page *page)
+{
+	/*
+	 * FOLL_WRITE is implicitly handled correctly as the page table entry
+	 * has to be writable -- and if it references (part of) an anonymous
+	 * folio, that part is required to be marked exclusive.
+	 */
+	if ((flags & (FOLL_WRITE | FOLL_PIN)) != FOLL_PIN)
+		return false;
+	/*
+	 * Note: PageAnon(page) is stable until the page is actually getting
+	 * freed.
+	 */
+	if (!PageAnon(page)) {
+		/*
+		 * We only care about R/O long-term pinning: R/O short-term
+		 * pinning does not have the semantics to observe successive
+		 * changes through the process page tables.
+		 */
+		if (!(flags & FOLL_LONGTERM))
+			return false;
+
+		/* We really need the vma ... */
+		if (!vma)
+			return true;
+
+		/*
+		 * ... because we only care about writable private ("COW")
+		 * mappings where we have to break COW early.
+		 */
+		return is_cow_mapping(vma->vm_flags);
+	}
+
+	/* Paired with a memory barrier in page_try_share_anon_rmap(). */
+	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
+		smp_rmb();
+
+	/*
+	 * Note that PageKsm() pages cannot be exclusive, and consequently,
+	 * cannot get pinned.
+	 */
+	return !PageAnonExclusive(page);
+}
+
 extern bool mirrored_kernelcore;
 
 static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
-- 
2.39.0
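
For reference, the smp_rmb() in gup_must_unshare() pairs with the
barriers in page_try_share_anon_rmap(). A conceptual sketch of the two
sides of that handshake, condensed from the comments around those
helpers upstream (not code added by this patch):

	/* Unshare side, page_try_share_anon_rmap(), simplified: */
	smp_mb();			/* order PTE change vs. pin check */
	if (page_maybe_dma_pinned(page))
		return -EBUSY;		/* pinned: must not clear the flag */
	ClearPageAnonExclusive(page);
	smp_mb__after_atomic();		/* publish the cleared flag */

	/* GUP-fast side, gup_must_unshare(), simplified: */
	/* ... the FOLL_PIN reference was already taken ... */
	smp_rmb();			/* order pin vs. flag read */
	if (!PageAnonExclusive(page))
		return true;		/* unpin, fall back to the slow path */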



