* [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages
@ 2022-02-24 12:26 David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed David Hildenbrand
                   ` (13 more replies)
  0 siblings, 14 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand, Khalid Aziz

This series is the result of the discussion on the previous approach [2].
More information on the general COW issues can be found there. It is based
on [1], which resides in -mm and -next:
	[PATCH v3 0/9] mm: COW fixes part 1: fix the COW security issue for
	THP and swap

I keep the latest state, including some hacky selftests, at:
	https://github.com/davidhildenbrand/linux/tree/cow_fixes_part_2

This series fixes memory corruptions when a GUP pin (FOLL_PIN) is taken
on an anonymous page and the COW logic fails to detect exclusivity of the
page, then replaces the anonymous page by a copy in the page table: the
GUP pin loses synchronicity with the page mapped into the page tables.

This issue, including other related COW issues, has been summarized in [3]
under 3):
"
  3. Intra Process Memory Corruptions due to Wrong COW (FOLL_PIN)

  page_maybe_dma_pinned() is used to check if a page may be pinned for
  DMA (using FOLL_PIN instead of FOLL_GET). While false positives are
  tolerable, false negatives are problematic: pages that are pinned for
  DMA must not be added to the swapcache. If it happens, the (now pinned)
  page could be faulted back from the swapcache into page tables
  read-only. Future write-access would detect the pinning and COW the
  page, losing synchronicity. For the interested reader, this is nicely
  documented in feb889fb40fa ("mm: don't put pinned pages into the swap
  cache").

  Peter reports [8] that page_maybe_dma_pinned() as used is racy in some
  cases and can result in a violation of the documented semantics:
  giving false negatives because of the race.

  There are cases where we call it without properly taking a per-process
  sequence lock, turning the usage of page_maybe_dma_pinned() racy. While
  one case (clear_refs SOFTDIRTY tracking, see below) seems to be easy to
  handle, there is especially one rmap case (shrink_page_list) that's hard
  to fix: in the rmap world, we're not limited to a single process.

  The shrink_page_list() issue is really subtle. If we race with
  someone pinning a page, we can trigger the same issue as in the FOLL_GET
  case. See the detail section at the end of this mail for a discussion of
  how badly this can bite us with VFIO or other FOLL_PIN users.

  It's harder to reproduce, but I managed to modify the O_DIRECT
  reproducer to use io_uring fixed buffers [15] instead, which ends up
  using FOLL_PIN | FOLL_WRITE | FOLL_LONGTERM to pin buffer pages and can
  similarly trigger a loss of synchronicity and consequently a memory
  corruption.

  Again, the root issue is that a write-fault on a page that has
  additional references results in a COW and thereby a loss of
  synchronicity and consequently a memory corruption if two parties
  believe they are referencing the same page.
"

This series makes GUP pins (R/O and R/W) on anonymous pages fully
reliable, also taking care of concurrent pinning via GUP-fast, and thereby
fully fixes a recently reported issue regarding NUMA balancing [4]. While
doing that, it further reduces "unnecessary COWs", especially when we
don't fork()/KSM and don't swap out, and fixes the COW security issue for
hugetlb with FOLL_PIN.

In summary, we track via a pageflag (PG_anon_exclusive) whether a mapped
anonymous page is exclusive. Exclusive anonymous pages that are mapped
R/O can directly be mapped R/W by the COW logic in the write fault handler.
Before an exclusive anonymous page can be shared (fork(), KSM), it first
has to be marked shared -- which will fail if there are GUP pins on the
page. GUP is only allowed to take a pin on anonymous pages that are
exclusive. The PT lock is the primary mechanism to synchronize
modifications of PG_anon_exclusive. GUP-fast is synchronized either via the
src_mm->write_protect_seq or via clear/invalidate+flush of the relevant
page table entry.
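
As a conceptual sketch only -- gup_can_pin() is a made-up name for
illustration, PageAnonExclusive() is introduced in patch #8, and the real
GUP changes in #9-#13 are more involved -- the pinning rule boils down to:

	/* Only exclusive anonymous pages may be pinned reliably. */
	static bool gup_can_pin(struct page *page)
	{
		if (PageAnon(page))
			return PageAnonExclusive(page);
		return true;	/* !anon pages are not affected */
	}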

Special care has to be taken about swap, migration, and THPs (whereby a
PMD mapping can be converted to a PTE mapping and we have to track
information for subpages). Besides that, we let the rmap code handle most
of the magic. For reliable R/O pins of anonymous pages, we need the
FAULT_FLAG_UNSHARE logic from our previous approach [2]; however, it's now
100% mapcount free and I further simplified it a bit.
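
For R/O pins this roughly translates to the following check in GUP -- a
simplified sketch of patches #11/#12 (the label and control flow are
illustrative; the real code retries via the page fault handler):

	/*
	 * A R/O FOLL_PIN of a possibly-shared anonymous page must not pin.
	 * Instead, trigger FAULT_FLAG_UNSHARE to map an exclusive anonymous
	 * page first, then retry the lookup.
	 */
	if ((flags & FOLL_PIN) && !(flags & FOLL_WRITE) &&
	    PageAnon(page) && !PageAnonExclusive(page))
		goto unshare_and_retry;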

  #1 is a fix
  #2 prepares hugetlb fork() for concurrent GUP-fast via
   src_mm->write_protect_seq
  #3-#7 are mostly rmap preparations for PG_anon_exclusive handling
  #8 introduces PG_anon_exclusive
  #9 uses PG_anon_exclusive and makes R/W pins of anonymous pages
   reliable
  #10 is a preparation for reliable R/O pins
  #11 and #12 reuse/modify GUP-triggered unsharing to make R/O pins of
   anonymous pages reliable
  #13 adds sanity checks when (un)pinning anonymous pages

I'm not proud of patch #8, suggestions welcome. Patch #9 contains
excessive explanations and the main logic for R/W pins. #11 and #12
resemble what we proposed in the previous approach [2]. I consider the
general approach of #13 very nice and helpful, and I remember Linus even
envisioning something like that for finding BUGs, although we might
eventually want to implement the sanity checks differently.

It passes my growing set of tests for "wrong COW" and "missed COW",
including the ones in [3] -- I'd really appreciate some experienced eyes
taking a close look at corner cases. Only tested on x86_64; testing with
CONT-mapped hugetlb pages on arm64 might be interesting.

Once we convert relevant users of FOLL_GET (e.g., O_DIRECT) to FOLL_PIN,
the issue described in [3] under 2) will be fixed as well. Further, once
that's in place, we can streamline our COW logic for hugetlb to rely on
page_count() as well and fix any possible COW security issues.
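
Such a conversion essentially means switching from the FOLL_GET-based API
to the FOLL_PIN-based one -- an illustrative pattern only, the details
depend on the individual user:

	-	get_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);
	+	pin_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);
	...
	-	put_page(pages[i]);
	+	unpin_user_page(pages[i]);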

Don't be scared by the diff stats: they include a lot of comments, and
the PG_anon_exclusive introduction is a bit "code consuming" due to
PG_slab.

[1] https://lkml.kernel.org/r/20220131162940.210846-1-david@redhat.com
[2] https://lkml.kernel.org/r/20211217113049.23850-1-david@redhat.com
[3] https://lore.kernel.org/r/3ae33b08-d9ef-f846-56fb-645e3b9b4c66@redhat.com
[4] https://bugzilla.kernel.org/show_bug.cgi?id=215616

David Hildenbrand (13):
  mm/rmap: fix missing swap_free() in try_to_unmap() after
    arch_unmap_one() failed
  mm/hugetlb: take src_mm->write_protect_seq in
    copy_hugetlb_page_range()
  mm/memory: slightly simplify copy_present_pte()
  mm/rmap: split page_dup_rmap() into page_dup_file_rmap() and
    page_try_dup_anon_rmap()
  mm/rmap: remove do_page_add_anon_rmap()
  mm/rmap: pass rmap flags to hugepage_add_anon_rmap()
  mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon()
    page exclusively
  mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages
  mm: remember exclusively mapped anonymous pages with PG_anon_exclusive
  mm/gup: disallow follow_page(FOLL_PIN)
  mm: support GUP-triggered unsharing of anonymous pages
  mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared
    anonymous page
  mm/gup: sanity-check with CONFIG_DEBUG_VM that anonymous pages are
    exclusive when (un)pinning

 fs/proc/page.c                 |   3 +-
 include/linux/mm.h             |  46 +++++++-
 include/linux/mm_types.h       |   8 ++
 include/linux/page-flags.h     | 124 ++++++++++++++++++++-
 include/linux/rmap.h           |  87 ++++++++++++++-
 include/linux/swap.h           |  15 ++-
 include/linux/swapops.h        |  25 +++++
 include/trace/events/mmflags.h |   2 +-
 mm/gup.c                       |  96 +++++++++++++++-
 mm/huge_memory.c               | 125 ++++++++++++++++-----
 mm/hugetlb.c                   | 133 +++++++++++++++-------
 mm/ksm.c                       |  15 ++-
 mm/memory-failure.c            |  24 +++-
 mm/memory.c                    | 197 ++++++++++++++++++++-------------
 mm/memremap.c                  |  11 ++
 mm/migrate.c                   |  46 ++++++--
 mm/mprotect.c                  |   8 +-
 mm/page_alloc.c                |  13 +++
 mm/rmap.c                      |  75 +++++++++----
 mm/swapfile.c                  |   2 +-
 20 files changed, 849 insertions(+), 206 deletions(-)

-- 
2.35.1


* [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 16:26   ` Khalid Aziz
  2022-02-24 12:26 ` [PATCH RFC 02/13] mm/hugetlb: take src_mm->write_protect_seq in copy_hugetlb_page_range() David Hildenbrand
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand, Khalid Aziz

In case arch_unmap_one() fails, we already did a swap_duplicate(). Let's
undo that properly via swap_free().

Fixes: ca827d55ebaa ("mm, swap: Add infrastructure for saving page metadata on swap")
Cc: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 6a1e8c7f6213..f825aeef61ca 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1625,6 +1625,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				break;
 			}
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+				swap_free(entry);
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
-- 
2.35.1


* [PATCH RFC 02/13] mm/hugetlb: take src_mm->write_protect_seq in copy_hugetlb_page_range()
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte() David Hildenbrand
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

Let's do it just like copy_page_range(): take the seqlock and make
sure the mmap_lock is held in write mode.

This allows adding a VM_BUG_ON() to page_needs_cow_for_dma() and
properly synchronizes concurrent fork() with GUP-fast of hugetlb pages,
which will be relevant for further changes.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 4 ++++
 mm/hugetlb.c       | 8 ++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a12291cfe5dd..b29cb8bacd59 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1323,6 +1323,8 @@ static inline bool is_cow_mapping(vm_flags_t flags)
 /*
  * This should most likely only be called during fork() to see whether we
  * should break the cow immediately for a page on the src mm.
+ *
+ * The caller has to hold the PT lock and the vma->vm_mm->write_protect_seq.
  */
 static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 					  struct page *page)
@@ -1330,6 +1332,8 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 	if (!is_cow_mapping(vma->vm_flags))
 		return false;
 
+	VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1));
+
 	if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
 		return false;
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 61895cc01d09..e2e2d0f3f259 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4710,6 +4710,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 					vma->vm_start,
 					vma->vm_end);
 		mmu_notifier_invalidate_range_start(&range);
+		mmap_assert_write_locked(src);
+		raw_write_seqcount_begin(&src->write_protect_seq);
 	} else {
 		/*
 		 * For shared mappings i_mmap_rwsem must be held to call
@@ -4842,10 +4844,12 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		spin_unlock(dst_ptl);
 	}
 
-	if (cow)
+	if (cow) {
+		raw_write_seqcount_end(&src->write_protect_seq);
 		mmu_notifier_invalidate_range_end(&range);
-	else
+	} else {
 		i_mmap_unlock_read(mapping);
+	}
 
 	return ret;
 }
-- 
2.35.1


* [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte()
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 02/13] mm/hugetlb: take src_mm->write_protect_seq in copy_hugetlb_page_range() David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-25  5:15   ` Hillf Danton
  2022-02-24 12:26 ` [PATCH RFC 04/13] mm/rmap: split page_dup_rmap() into page_dup_file_rmap() and page_try_dup_anon_rmap() David Hildenbrand
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

Let's move the pinning check into the caller, to simplify the return
code logic and prepare for further changes: relocating the
page_needs_cow_for_dma() check into rmap handling code.

While at it, remove the unused pte parameter and simplify the comments a
bit.

No functional change intended.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory.c | 53 ++++++++++++++++-------------------------------------
 1 file changed, 16 insertions(+), 37 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c6177d897964..accb72a3343d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -865,19 +865,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 /*
- * Copy a present and normal page if necessary.
+ * Copy a present and normal page.
  *
- * NOTE! The usual case is that this doesn't need to do
- * anything, and can just return a positive value. That
- * will let the caller know that it can just increase
- * the page refcount and re-use the pte the traditional
- * way.
- *
- * But _if_ we need to copy it because it needs to be
- * pinned in the parent (and the child should get its own
- * copy rather than just a reference to the same page),
- * we'll do that here and return zero to let the caller
- * know we're done.
+ * NOTE! The usual case is that this isn't required;
+ * instead, the caller can just increase the page refcount
+ * and re-use the pte the traditional way.
  *
  * And if we need a pre-allocated page but don't yet have
  * one, return a negative error to let the preallocation
@@ -887,25 +879,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 static inline int
 copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
-		  struct page **prealloc, pte_t pte, struct page *page)
+		  struct page **prealloc, struct page *page)
 {
 	struct page *new_page;
-
-	/*
-	 * What we want to do is to check whether this page may
-	 * have been pinned by the parent process.  If so,
-	 * instead of wrprotect the pte on both sides, we copy
-	 * the page immediately so that we'll always guarantee
-	 * the pinned page won't be randomly replaced in the
-	 * future.
-	 *
-	 * The page pinning checks are just "has this mm ever
-	 * seen pinning", along with the (inexact) check of
-	 * the page count. That might give false positives for
-	 * for pinning, but it will work correctly.
-	 */
-	if (likely(!page_needs_cow_for_dma(src_vma, page)))
-		return 1;
+	pte_t pte;
 
 	new_page = *prealloc;
 	if (!new_page)
@@ -947,14 +924,16 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	struct page *page;
 
 	page = vm_normal_page(src_vma, addr, pte);
-	if (page) {
-		int retval;
-
-		retval = copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
-					   addr, rss, prealloc, pte, page);
-		if (retval <= 0)
-			return retval;
-
+	if (page && unlikely(page_needs_cow_for_dma(src_vma, page))) {
+		/*
+		 * If this page may have been pinned by the parent process,
+		 * copy the page immediately for the child so that we'll always
+		 * guarantee the pinned page won't be randomly replaced in the
+		 * future.
+		 */
+		return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
+					 addr, rss, prealloc, page);
+	} else if (page) {
 		get_page(page);
 		page_dup_rmap(page, false);
 		rss[mm_counter(page)]++;
-- 
2.35.1


* [PATCH RFC 04/13] mm/rmap: split page_dup_rmap() into page_dup_file_rmap() and page_try_dup_anon_rmap()
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (2 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte() David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap() David Hildenbrand
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

... and move the special check for pinned pages into
page_try_dup_anon_rmap() to prepare for tracking exclusive anonymous
pages via a new pageflag, clearing it only after making sure that there
are no GUP pins on the anonymous page.

We really only care about pins on anonymous pages, because they are
prone to getting replaced in the COW handler once mapped R/O. For !anon
pages in cow-mappings (!VM_SHARED && VM_MAYWRITE) we shouldn't really
have to care, at least I could not come up with an example.

Let's drop the is_cow_mapping() check from page_needs_cow_for_dma(), as we
know we're dealing with anonymous pages. Also, drop the handling of
pinned pages from copy_huge_pud() and add a comment if ever supporting
anonymous pages on the PUD level.

This is a preparation for tracking exclusivity of anonymous pages in
the rmap code, and disallowing marking a page shared (-> failing to
duplicate) if there are GUP pins on a page.

RFC notes: if I'm missing something important for !anon pages, we could
           similarly handle it via page_try_dup_file_rmap().

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h   |  5 +----
 include/linux/rmap.h | 48 +++++++++++++++++++++++++++++++++++++++++++-
 mm/huge_memory.c     | 27 ++++++++-----------------
 mm/hugetlb.c         | 16 ++++++++-------
 mm/memory.c          | 17 +++++++++++-----
 mm/migrate.c         |  2 +-
 6 files changed, 78 insertions(+), 37 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b29cb8bacd59..ffeb7f5fa32b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1322,16 +1322,13 @@ static inline bool is_cow_mapping(vm_flags_t flags)
 
 /*
  * This should most likely only be called during fork() to see whether we
- * should break the cow immediately for a page on the src mm.
+ * should break the cow immediately for an anon page on the src mm.
  *
 * The caller has to hold the PT lock and the vma->vm_mm->write_protect_seq.
  */
 static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 					  struct page *page)
 {
-	if (!is_cow_mapping(vma->vm_flags))
-		return false;
-
 	VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1));
 
 	if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e704b1a4c06c..92c3585b8c6a 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -180,11 +180,57 @@ void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
 void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 				unsigned long);
 
-static inline void page_dup_rmap(struct page *page, bool compound)
+static inline void __page_dup_rmap(struct page *page, bool compound)
 {
 	atomic_inc(compound ? compound_mapcount_ptr(page) : &page->_mapcount);
 }
 
+static inline void page_dup_file_rmap(struct page *page, bool compound)
+{
+	__page_dup_rmap(page, compound);
+}
+
+/**
+ * page_try_dup_anon_rmap - try duplicating a mapping of an already mapped
+ *			    anonymous page
+ * @page: the page to duplicate the mapping for
+ * @compound: the page is mapped as compound or as a small page
+ * @vma: the source vma
+ *
+ * The caller needs to hold the PT lock and the vma->vm_mm->write_protect_seq.
+ *
+ * Duplicating the mapping can only fail if the page may be pinned; device
+ * private pages cannot get pinned, so for them this function cannot fail.
+ *
+ * If duplicating the mapping succeeds, the page has to be mapped R/O into
+ * the parent and the child. It must *not* get mapped writable after this call.
+ *
+ * Returns 0 if duplicating the mapping succeeded. Returns -EBUSY otherwise.
+ */
+static inline int page_try_dup_anon_rmap(struct page *page, bool compound,
+					 struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(!PageAnon(page), page);
+
+	/*
+	 * If this page may have been pinned by the parent process,
+	 * don't allow to duplicate the mapping but instead require to e.g.,
+	 * copy the page immediately for the child so that we'll always
+	 * guarantee the pinned page won't be randomly replaced in the
+	 * future on write faults.
+	 */
+	if (likely(!is_device_private_page(page)) &&
+	    unlikely(page_needs_cow_for_dma(vma, page)))
+		return -EBUSY;
+
+	/*
+	 * It's okay to share the anon page between both processes, mapping
+	 * the page R/O into both processes.
+	 */
+	__page_dup_rmap(page, compound);
+	return 0;
+}
+
 /*
  * Called from mm/vmscan.c to handle paging out
  */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cda88d8ac1bd..c126d728b8de 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1097,23 +1097,16 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	src_page = pmd_page(pmd);
 	VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
 
-	/*
-	 * If this page is a potentially pinned page, split and retry the fault
-	 * with smaller page size.  Normally this should not happen because the
-	 * userspace should use MADV_DONTFORK upon pinned regions.  This is a
-	 * best effort that the pinned pages won't be replaced by another
-	 * random page during the coming copy-on-write.
-	 */
-	if (unlikely(page_needs_cow_for_dma(src_vma, src_page))) {
+	get_page(src_page);
+	if (unlikely(page_try_dup_anon_rmap(src_page, true, src_vma))) {
+		/* Page may be pinned: split and retry the fault on PTEs. */
+		put_page(src_page);
 		pte_free(dst_mm, pgtable);
 		spin_unlock(src_ptl);
 		spin_unlock(dst_ptl);
 		__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
 		return -EAGAIN;
 	}
-
-	get_page(src_page);
-	page_dup_rmap(src_page, true);
 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 out_zero_page:
 	mm_inc_nr_ptes(dst_mm);
@@ -1217,14 +1210,10 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		/* No huge zero pud yet */
 	}
 
-	/* Please refer to comments in copy_huge_pmd() */
-	if (unlikely(page_needs_cow_for_dma(vma, pud_page(pud)))) {
-		spin_unlock(src_ptl);
-		spin_unlock(dst_ptl);
-		__split_huge_pud(vma, src_pud, addr);
-		return -EAGAIN;
-	}
-
+	/*
+	 * TODO: once we support anonymous pages, use page_try_dup_anon_rmap()
+	 * and split if duplicating fails.
+	 */
 	pudp_set_wrprotect(src_mm, addr, src_pud);
 	pud = pud_mkold(pud_wrprotect(pud));
 	set_pud_at(dst_mm, addr, dst_pud, pud);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e2e2d0f3f259..6b928e720e83 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4781,15 +4781,18 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			get_page(ptepage);
 
 			/*
-			 * This is a rare case where we see pinned hugetlb
-			 * pages while they're prone to COW.  We need to do the
-			 * COW earlier during fork.
+			 * Failing to duplicate the anon rmap is a rare case
+			 * where we see pinned hugetlb pages while they're
+			 * prone to COW. We need to do the COW earlier during
+			 * fork.
 			 *
 			 * When pre-allocating the page or copying data, we
 			 * need to be without the pgtable locks since we could
 			 * sleep during the process.
 			 */
-			if (unlikely(page_needs_cow_for_dma(vma, ptepage))) {
+			if (!PageAnon(ptepage)) {
+				page_dup_file_rmap(ptepage, true);
+			} else if (page_try_dup_anon_rmap(ptepage, true, vma)) {
 				pte_t src_pte_old = entry;
 				struct page *new;
 
@@ -4836,7 +4839,6 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				entry = huge_pte_wrprotect(entry);
 			}
 
-			page_dup_rmap(ptepage, true);
 			set_huge_pte_at(dst, addr, dst_pte, entry);
 			hugetlb_count_add(npages, dst);
 		}
@@ -5515,7 +5517,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 		ClearHPageRestoreReserve(page);
 		hugepage_add_new_anon_rmap(page, vma, haddr);
 	} else
-		page_dup_rmap(page, true);
+		page_dup_file_rmap(page, true);
 	new_pte = make_huge_pte(vma, page, ((vma->vm_flags & VM_WRITE)
 				&& (vma->vm_flags & VM_SHARED)));
 	set_huge_pte_at(mm, haddr, ptep, new_pte);
@@ -5875,7 +5877,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 		goto out_release_unlock;
 
 	if (vm_shared) {
-		page_dup_rmap(page, true);
+		page_dup_file_rmap(page, true);
 	} else {
 		ClearHPageRestoreReserve(page);
 		hugepage_add_new_anon_rmap(page, dst_vma, dst_addr);
diff --git a/mm/memory.c b/mm/memory.c
index accb72a3343d..b9602d41d907 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -828,7 +828,8 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		 */
 		get_page(page);
 		rss[mm_counter(page)]++;
-		page_dup_rmap(page, false);
+		/* Cannot fail as these pages cannot get pinned. */
+		BUG_ON(page_try_dup_anon_rmap(page, false, src_vma));
 
 		/*
 		 * We do not preserve soft-dirty information, because so
@@ -924,18 +925,24 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	struct page *page;
 
 	page = vm_normal_page(src_vma, addr, pte);
-	if (page && unlikely(page_needs_cow_for_dma(src_vma, page))) {
+	if (page && PageAnon(page)) {
 		/*
 		 * If this page may have been pinned by the parent process,
 		 * copy the page immediately for the child so that we'll always
 		 * guarantee the pinned page won't be randomly replaced in the
 		 * future.
 		 */
-		return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
-					 addr, rss, prealloc, page);
+		get_page(page);
+		if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
+			/* Page may be pinned, we have to copy. */
+			put_page(page);
+			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
+						 addr, rss, prealloc, page);
+		}
+		rss[mm_counter(page)]++;
 	} else if (page) {
 		get_page(page);
-		page_dup_rmap(page, false);
+		page_dup_file_rmap(page, false);
 		rss[mm_counter(page)]++;
 	}
 
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..524c2648ab36 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 			if (PageAnon(new))
 				hugepage_add_anon_rmap(new, vma, pvmw.address);
 			else
-				page_dup_rmap(new, true);
+				page_dup_file_rmap(new, true);
 			set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
 		} else
 #endif
-- 
2.35.1


* [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap()
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (3 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 04/13] mm/rmap: split page_dup_rmap() into page_dup_file_rmap() and page_try_dup_anon_rmap() David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 17:29   ` Linus Torvalds
  2022-02-24 12:26 ` [PATCH RFC 06/13] mm/rmap: pass rmap flags to hugepage_add_anon_rmap() David Hildenbrand
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

... and instead convert page_add_anon_rmap() to accept flags.

Passing flags instead of bools is usually nicer either way, and we want
to more often also pass RMAP_EXCLUSIVE in follow-up patches when
detecting that an anonymous page is exclusive: for example, when
restoring an anonymous page from a writable migration entry.

This is a preparation for marking an anonymous page inside
page_add_anon_rmap() as exclusive when RMAP_EXCLUSIVE is passed.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/rmap.h |  4 +---
 mm/huge_memory.c     |  2 +-
 mm/ksm.c             |  2 +-
 mm/memory.c          |  2 +-
 mm/migrate.c         |  2 +-
 mm/rmap.c            | 17 +++--------------
 mm/swapfile.c        |  2 +-
 7 files changed, 9 insertions(+), 22 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 92c3585b8c6a..f230e86b4587 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -167,9 +167,7 @@ struct anon_vma *page_get_anon_vma(struct page *page);
  */
 void page_move_anon_rmap(struct page *, struct vm_area_struct *);
 void page_add_anon_rmap(struct page *, struct vm_area_struct *,
-		unsigned long, bool);
-void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
-			   unsigned long, int);
+		unsigned long, int);
 void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long, bool);
 void page_add_file_rmap(struct page *, bool);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c126d728b8de..2ca137e01e84 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3115,7 +3115,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
 	if (PageAnon(new))
-		page_add_anon_rmap(new, vma, mmun_start, true);
+		page_add_anon_rmap(new, vma, mmun_start, RMAP_COMPOUND);
 	else
 		page_add_file_rmap(new, true);
 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
diff --git a/mm/ksm.c b/mm/ksm.c
index c20bd4d9a0d9..e897577c54c5 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1153,7 +1153,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	 */
 	if (!is_zero_pfn(page_to_pfn(kpage))) {
 		get_page(kpage);
-		page_add_anon_rmap(kpage, vma, addr, false);
+		page_add_anon_rmap(kpage, vma, addr, 0);
 		newpte = mk_pte(kpage, vma->vm_page_prot);
 	} else {
 		newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
diff --git a/mm/memory.c b/mm/memory.c
index b9602d41d907..c2f1c2428303 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3709,7 +3709,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	} else {
-		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
+		page_add_anon_rmap(page, vma, vmf->address, exclusive);
 	}
 
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
diff --git a/mm/migrate.c b/mm/migrate.c
index 524c2648ab36..d4d72a15224c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -246,7 +246,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 #endif
 		{
 			if (PageAnon(new))
-				page_add_anon_rmap(new, vma, pvmw.address, false);
+				page_add_anon_rmap(new, vma, pvmw.address, 0);
 			else
 				page_add_file_rmap(new, false);
 			set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
diff --git a/mm/rmap.c b/mm/rmap.c
index f825aeef61ca..902ebf99d147 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1129,26 +1129,15 @@ static void __page_check_anon_rmap(struct page *page,
  * @page:	the page to add the mapping to
  * @vma:	the vm area in which the mapping is added
  * @address:	the user virtual address mapped
- * @compound:	charge the page as compound or small page
+ * @flags:	the rmap flags
  *
  * The caller needs to hold the pte lock, and the page must be locked in
  * the anon_vma case: to serialize mapping,index checking after setting,
  * and to ensure that PageAnon is not being upgraded racily to PageKsm
  * (but PageKsm is never downgraded to PageAnon).
  */
-void page_add_anon_rmap(struct page *page,
-	struct vm_area_struct *vma, unsigned long address, bool compound)
-{
-	do_page_add_anon_rmap(page, vma, address, compound ? RMAP_COMPOUND : 0);
-}
-
-/*
- * Special version of the above for do_swap_page, which often runs
- * into pages that are exclusively owned by the current process.
- * Everybody else should continue to use page_add_anon_rmap above.
- */
-void do_page_add_anon_rmap(struct page *page,
-	struct vm_area_struct *vma, unsigned long address, int flags)
+void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
+			unsigned long address, int flags)
 {
 	bool compound = flags & RMAP_COMPOUND;
 	bool first;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a5183315dc58..eacda7ca7ac9 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1800,7 +1800,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
 	get_page(page);
 	if (page == swapcache) {
-		page_add_anon_rmap(page, vma, addr, false);
+		page_add_anon_rmap(page, vma, addr, 0);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr, false);
 		lru_cache_add_inactive_or_unevictable(page, vma);
-- 
2.35.1


* [PATCH RFC 06/13] mm/rmap: pass rmap flags to hugepage_add_anon_rmap()
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (4 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap() David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 17:37   ` Linus Torvalds
  2022-02-24 12:26 ` [PATCH RFC 07/13] mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon() page exclusively David Hildenbrand
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

Let's prepare for passing RMAP_EXCLUSIVE, similarly as we do for
page_add_anon_rmap() now. RMAP_COMPOUND is implicit for hugetlb
pages and ignored.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/rmap.h | 2 +-
 mm/migrate.c         | 2 +-
 mm/rmap.c            | 8 +++++---
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index f230e86b4587..593a4566420f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -174,7 +174,7 @@ void page_add_file_rmap(struct page *, bool);
 void page_remove_rmap(struct page *, bool);
 
 void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
-			    unsigned long);
+			    unsigned long, int);
 void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 				unsigned long);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index d4d72a15224c..709cb11d5b81 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -238,7 +238,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 			pte = pte_mkhuge(pte);
 			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
 			if (PageAnon(new))
-				hugepage_add_anon_rmap(new, vma, pvmw.address);
+				hugepage_add_anon_rmap(new, vma, pvmw.address, 0);
 			else
 				page_dup_file_rmap(new, true);
 			set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
diff --git a/mm/rmap.c b/mm/rmap.c
index 902ebf99d147..bafdb7f70cec 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2409,9 +2409,11 @@ void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc)
  * The following two functions are for anonymous (private mapped) hugepages.
  * Unlike common anonymous pages, anonymous hugepages have no accounting code
  * and no lru code, because we handle hugepages differently from common pages.
+ *
+ * RMAP_COMPOUND is ignored.
  */
-void hugepage_add_anon_rmap(struct page *page,
-			    struct vm_area_struct *vma, unsigned long address)
+void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
+			    unsigned long address, int flags)
 {
 	struct anon_vma *anon_vma = vma->anon_vma;
 	int first;
@@ -2421,7 +2423,7 @@ void hugepage_add_anon_rmap(struct page *page,
 	/* address might be in next vma when migration races vma_adjust */
 	first = atomic_inc_and_test(compound_mapcount_ptr(page));
 	if (first)
-		__page_set_anon_rmap(page, vma, address, 0);
+		__page_set_anon_rmap(page, vma, address, flags & RMAP_EXCLUSIVE);
 }
 
 void hugepage_add_new_anon_rmap(struct page *page,
-- 
2.35.1


* [PATCH RFC 07/13] mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon() page exclusively
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (5 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 06/13] mm/rmap: pass rmap flags to hugepage_add_anon_rmap() David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 08/13] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages David Hildenbrand
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

We want to mark anonymous pages exclusive, and when using
page_move_anon_rmap() we know that we are the exclusive user, as
properly documented. This is a preparation for marking anonymous pages
exclusive in page_move_anon_rmap().

In both instances, we're holding the page lock and are sure that we're the
exclusive owner (page_count() == 1). hugetlb already properly uses
page_move_anon_rmap() in the write fault handler.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 2 ++
 mm/memory.c      | 1 +
 2 files changed, 3 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2ca137e01e84..7b9e76ba3f2f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1317,6 +1317,8 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		try_to_free_swap(page);
 	if (page_count(page) == 1) {
 		pmd_t entry;
+
+		page_move_anon_rmap(page, vma);
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
diff --git a/mm/memory.c b/mm/memory.c
index c2f1c2428303..1ab0dfbec288 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3307,6 +3307,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		 * and the page is locked, it's dark out, and we're wearing
 		 * sunglasses. Hit it.
 		 */
+		page_move_anon_rmap(page, vma);
 		unlock_page(page);
 		wp_page_reuse(vmf);
 		return VM_FAULT_WRITE;
-- 
2.35.1


* [PATCH RFC 08/13] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (6 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 07/13] mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon() page exclusively David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 09/13] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive David Hildenbrand
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

The basic question we would like to have a reliable and efficient answer
to is: is this anonymous page exclusive to a single process or might it
be shared?

In an ideal world, we'd have a spare pageflag. Unfortunately, pageflags
don't grow on trees, so we have to get a little creative for the time
being.

Introduce a way to mark an anonymous page as exclusive, with the
ultimate goal of teaching our COW logic to not do "wrong COWs", whereby
GUP pins lose consistency with the pages mapped into the page table,
resulting in reported memory corruptions.

Most pageflags already have semantics for anonymous pages, so we're left
with reusing PG_slab for our purpose: for PageAnon() pages, PG_slab now
translates to PG_anon_exclusive; teach some in-kernel code that manually
handles PG_slab about that.

Add a spoiler on the semantics of PG_anon_exclusive as documentation. More
documentation will be contained in the code that actually makes use of
PG_anon_exclusive.

We won't be clearing PG_anon_exclusive on destructive unmapping (i.e.,
zapping) of page table entries; page freeing code will handle that when
it also invalidates page->mapping to no longer indicate PageAnon().
Letting information about exclusivity stick around will be an important
property when adding sanity checks to unpinning code.

RFC notes: in-tree tools/cgroup/memcg_slabinfo.py looks like it might need
	   some care. We'd have to look up the head page and check if
	   PageAnon() is set. Similarly, tools living outside the kernel
	   repository like crash and makedumpfile might need adaptions.

Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 fs/proc/page.c                 |   3 +-
 include/linux/page-flags.h     | 124 ++++++++++++++++++++++++++++++++-
 include/trace/events/mmflags.h |   2 +-
 mm/hugetlb.c                   |   4 ++
 mm/memory-failure.c            |  24 +++++--
 mm/memremap.c                  |  11 +++
 mm/page_alloc.c                |  13 ++++
 7 files changed, 170 insertions(+), 11 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 9f1077d94cde..61187568e5f7 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -182,8 +182,7 @@ u64 stable_page_flags(struct page *page)
 
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
-	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
-	if (PageTail(page) && PageSlab(compound_head(page)))
+	if (PageSlab(page) || PageSlab(compound_head(page)))
 		u |= 1 << KPF_SLAB;
 
 	u |= kpf_copy_bit(k, KPF_ERROR,		PG_error);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 1c3b6e5c8bfd..b3c2cf71467e 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -107,7 +107,7 @@ enum pageflags {
 	PG_workingset,
 	PG_waiters,		/* Page has waiters, check its waitqueue. Must be bit #7 and in the same byte as "PG_locked" */
 	PG_error,
-	PG_slab,
+	PG_slab,		/* Slab page if !PageAnon() */
 	PG_owner_priv_1,	/* Owner use. If pagecache, fs may use*/
 	PG_arch_1,
 	PG_reserved,
@@ -142,6 +142,63 @@ enum pageflags {
 
 	PG_readahead = PG_reclaim,
 
+	/*
+	 * Depending on the way an anonymous folio can be mapped into a page
+	 * table (e.g., single PMD/PUD/CONT of the head page vs. PTE-mapped
+	 * THP), PG_anon_exclusive may be set only for the head page or for
+	 * subpages of an anonymous folio.
+	 *
+	 * PG_anon_exclusive is *usually* only expressive in combination with a
+	 * page table entry. Depending on the page table entry type it might
+	 * store the following information:
+	 *
+	 *	Is what's mapped via this page table entry exclusive to the
+	 *	single process and can be mapped writable without further
+	 *	checks? If not, it might be shared and we might have to COW.
+	 *
+	 * For now, we only expect PTE-mapped THPs to make use of
+	 * PG_anon_exclusive in subpages. For other anonymous compound
+	 * folios (i.e., hugetlb), only the head page is logically mapped and
+	 * holds this information.
+	 *
+	 * For example, an exclusive, PMD-mapped THP only has PG_anon_exclusive
+	 * set on the head page. When replacing the PMD by a page table full
+	 * of PTEs, PG_anon_exclusive, if set on the head page, will be set on
+	 * all tail pages accordingly. Note that converting from a PTE-mapping
+	 * to a PMD mapping using the same compound page is currently not
+	 * possible and consequently doesn't require care.
+	 *
+	 * If GUP wants to take a reliable pin (FOLL_PIN) on an anonymous page,
+	 * it should only pin if the relevant PG_anon_exclusive bit is set. In that case,
+	 * the pin will be fully reliable and stay consistent with the pages
+	 * mapped into the page table, as the bit cannot get cleared (e.g., by
+	 * fork(), KSM) while the page is pinned. For anonymous pages that
+	 * are mapped R/W, PG_anon_exclusive can be assumed to always be set
+	 * because such pages cannot possibly be shared.
+	 *
+	 * The page table lock protecting the page table entry is the primary
+	 * synchronization mechanism for PG_anon_exclusive; GUP-fast that does
+	 * not take the PT lock needs special care when trying to clear the
+	 * flag.
+	 *
+	 * Page table entry types and PG_anon_exclusive:
+	 * * Present: PG_anon_exclusive applies.
+	 * * Swap: the information is lost. PG_anon_exclusive was cleared.
+	 * * Migration: the entry holds this information instead.
+	 *		PG_anon_exclusive was cleared.
+	 * * Device private: PG_anon_exclusive applies.
+	 * * Device exclusive: PG_anon_exclusive applies.
+	 * * HW Poison: PG_anon_exclusive is stale and not changed.
+	 *
+	 * If the page may be pinned (FOLL_PIN), clearing PG_anon_exclusive is
+	 * not allowed and the flag will stick around until the page is freed
+	 * and folio->mapping is cleared.
+	 *
+	 * Before clearing folio->mapping, PG_anon_exclusive has to be cleared
+	 * to not result in PageSlab() false positives.
+	 */
+	PG_anon_exclusive = PG_slab,
+
 	/* Filesystems */
 	PG_checked = PG_owner_priv_1,
 
@@ -425,7 +482,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
 	TESTCLEARFLAG(Active, active, PF_HEAD)
 PAGEFLAG(Workingset, workingset, PF_HEAD)
 	TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
-__PAGEFLAG(Slab, slab, PF_NO_TAIL)
 __PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL)
 PAGEFLAG(Checked, checked, PF_NO_COMPOUND)	   /* Used by some filesystems */
 
@@ -920,6 +976,70 @@ extern bool is_free_buddy_page(struct page *page);
 
 __PAGEFLAG(Isolated, isolated, PF_ANY);
 
+static __always_inline bool folio_test_slab(struct folio *folio)
+{
+	return !folio_test_anon(folio) &&
+	       test_bit(PG_slab, folio_flags(folio, FOLIO_PF_NO_TAIL));
+}
+
+static __always_inline int PageSlab(struct page *page)
+{
+	return !PageAnon(page) &&
+		test_bit(PG_slab, &PF_NO_TAIL(page, 0)->flags);
+}
+
+static __always_inline void __folio_set_slab(struct folio *folio)
+{
+	VM_BUG_ON(folio_test_anon(folio));
+	__set_bit(PG_slab, folio_flags(folio, FOLIO_PF_NO_TAIL));
+}
+
+static __always_inline void __SetPageSlab(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(PageAnon(page), page);
+	__set_bit(PG_slab, &PF_NO_TAIL(page, 1)->flags);
+}
+
+static __always_inline void __folio_clear_slab(struct folio *folio)
+{
+	VM_BUG_ON(folio_test_anon(folio));
+	__clear_bit(PG_slab, folio_flags(folio, FOLIO_PF_NO_TAIL));
+}
+
+static __always_inline void __ClearPageSlab(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(PageAnon(page), page);
+	__clear_bit(PG_slab, &PF_NO_TAIL(page, 1)->flags);
+}
+
+static __always_inline int PageAnonExclusive(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
+	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
+	return test_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+}
+
+static __always_inline void SetPageAnonExclusive(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
+	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
+	set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+}
+
+static __always_inline void ClearPageAnonExclusive(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
+	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
+	clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+}
+
+static __always_inline void __ClearPageAnonExclusive(struct page *page)
+{
+	VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
+	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
+	__clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+}
+
 #ifdef CONFIG_MMU
 #define __PG_MLOCKED		(1UL << PG_mlocked)
 #else
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 116ed4d5d0f8..5662f74e027f 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -103,7 +103,7 @@
 	{1UL << PG_lru,			"lru"		},		\
 	{1UL << PG_active,		"active"	},		\
 	{1UL << PG_workingset,		"workingset"	},		\
-	{1UL << PG_slab,		"slab"		},		\
+	{1UL << PG_slab,		"slab/anon_exclusive" },	\
 	{1UL << PG_owner_priv_1,	"owner_priv_1"	},		\
 	{1UL << PG_arch_1,		"arch_1"	},		\
 	{1UL << PG_reserved,		"reserved"	},		\
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6b928e720e83..a4b769f3d832 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1669,6 +1669,10 @@ void free_huge_page(struct page *page)
 	VM_BUG_ON_PAGE(page_mapcount(page), page);
 
 	hugetlb_set_page_subpool(page, NULL);
+	if (PageAnon(page)) {
+		__ClearPageAnonExclusive(page);
+		wmb(); /* avoid PageSlab() false positives */
+	}
 	page->mapping = NULL;
 	restore_reserve = HPageRestoreReserve(page);
 	ClearHPageRestoreReserve(page);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 97a9ed8f87a9..176fa2fbd699 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1470,17 +1470,29 @@ static int identify_page_state(unsigned long pfn, struct page *p,
 	 * The first check uses the current page flags which may not have any
 	 * relevant information. The second check with the saved page flags is
 	 * carried out only if the first check can't determine the page status.
+	 *
+	 * Note that PG_slab is also used as PG_anon_exclusive for PageAnon()
+	 * pages. Most of these pages should have been handled previously,
+	 * however, let's play safe and verify via PageAnon().
 	 */
-	for (ps = error_states;; ps++)
-		if ((p->flags & ps->mask) == ps->res)
-			break;
+	for (ps = error_states;; ps++) {
+		if ((p->flags & ps->mask) != ps->res)
+			continue;
+		if ((ps->type == MF_MSG_SLAB) && PageAnon(p))
+			continue;
+		break;
+	}
 
 	page_flags |= (p->flags & (1UL << PG_dirty));
 
 	if (!ps->mask)
-		for (ps = error_states;; ps++)
-			if ((page_flags & ps->mask) == ps->res)
-				break;
+		for (ps = error_states;; ps++) {
+			if ((page_flags & ps->mask) != ps->res)
+				continue;
+			if ((ps->type == MF_MSG_SLAB) && PageAnon(p))
+				continue;
+			break;
+		}
 	return page_action(ps, p, pfn);
 }
 
diff --git a/mm/memremap.c b/mm/memremap.c
index 6aa5f0c2d11f..a6b82ae8b3e6 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -478,6 +478,17 @@ void free_devmap_managed_page(struct page *page)
 
 	mem_cgroup_uncharge(page_folio(page));
 
+	/*
+	 * Note: we don't expect anonymous compound pages yet. Once supported
+	 * and we could PTE-map them similar to THP, we'd have to clear
+	 * PG_anon_exclusive on all tail pages.
+	 */
+	VM_BUG_ON_PAGE(PageAnon(page) && PageCompound(page), page);
+	if (PageAnon(page)) {
+		__ClearPageAnonExclusive(page);
+		wmb(); /* avoid PageSlab() false positives */
+	}
+
 	/*
 	 * When a device_private page is freed, the page->mapping field
 	 * may still contain a (stale) mapping value. For example, the
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3589febc6d31..e59e68a61d1a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1297,6 +1297,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 {
 	int bad = 0;
 	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
+	const bool anon = PageAnon(page);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1329,6 +1330,14 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			ClearPageHasHWPoisoned(page);
 		}
 		for (i = 1; i < (1 << order); i++) {
+			/*
+			 * Freeing a previously PTE-mapped THP can have
+			 * PG_anon_exclusive set on tail pages. Clear it
+			 * manually as it's overloaded with PG_slab that we
+			 * want to catch in check_free_page().
+			 */
+			if (anon)
+				__ClearPageAnonExclusive(page + i);
 			if (compound)
 				bad += free_tail_pages_check(page, page + i);
 			if (unlikely(check_free_page(page + i))) {
@@ -1338,6 +1347,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
+	if (anon) {
+		__ClearPageAnonExclusive(page);
+		wmb(); /* avoid PageSlab() false positives */
+	}
 	if (PageMappingFlags(page))
 		page->mapping = NULL;
 	if (memcg_kmem_enabled() && PageMemcgKmem(page))
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH RFC 09/13] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (7 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 08/13] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 10/13] mm/gup: disallow follow_page(FOLL_PIN) David Hildenbrand
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

Let's mark exclusively mapped anonymous pages with PG_anon_exclusive as
exclusive, and use that information to make GUP pins reliable and stay
consistent with the page mapped into the page table even if the
page table entry gets write-protected.

With that information at hand, we can extend our COW logic to always
reuse anonymous pages that are exclusive. For anonymous pages that
might be shared, the existing logic applies.

As already documented, PG_anon_exclusive is usually only expressive in
combination with a page table entry. Especially PTE- vs. PMD-mapped
anonymous pages require more thought; some examples: due to mremap() we
can easily have a single compound page PTE-mapped into multiple page tables
exclusively in a single process -- multiple page table locks apply.
Further, due to MADV_WIPEONFORK we might not necessarily write-protect
all PTEs, and only some subpages might be pinned. Long story short: once
PTE-mapped, we have to track information about exclusivity per sub-page,
but until then, we can just track it for the compound page in the head
page, avoiding the need to update a whole bunch of subpages all of the
time for a simple PMD mapping of a THP.

For simplicity, this commit mostly talks about "anonymous pages", while
for THP that actually means "the part of an anonymous folio referenced
via a page table entry".

To avoid spilling PG_anon_exclusive code all over the mm code-base, we
let the anon rmap code handle all the PG_anon_exclusive logic it can
easily handle.

If a writable, present page table entry points at an anonymous (sub)page,
that (sub)page must be PG_anon_exclusive. If GUP wants to take a reliable
pin (FOLL_PIN) on an anonymous page referenced via a present
page table entry, it must only pin if PG_anon_exclusive is set for the
mapped (sub)page.

This commit doesn't adjust GUP, so this is only implicitly handled for
FOLL_WRITE; follow-up commits will teach GUP to also respect it for
FOLL_PIN without FOLL_WRITE, to make all GUP pins of anonymous pages
fully reliable.

Whenever an anonymous page is to be shared (fork(), KSM), or when
temporarily unmapping an anonymous page (swap, migration), the relevant
PG_anon_exclusive bit has to be cleared to mark the anonymous page
possibly shared. Clearing will fail if there are GUP pins on the page:
* For fork(), this means having to copy the page and not being able to
  share it. fork() protects against concurrent GUP using the PT lock and
  the src_mm->write_protect_seq.
* For KSM, this means sharing will fail. For swap, this means unmapping
  will fail. For migration, this means migration will fail early. All
  three cases protect against concurrent GUP using the PT lock and a
  proper clear/invalidate+flush of the relevant page table entry.

This fixes memory corruptions reported for FOLL_PIN | FOLL_WRITE, when a
pinned page gets mapped R/O and the successive write fault ends up
replacing the page instead of reusing it. It improves the situation for
O_DIRECT/vmsplice/... that still use FOLL_GET instead of FOLL_PIN,
if fork() is *not* involved; however, swapout and fork() are still
problematic. Properly using FOLL_PIN instead of FOLL_GET for these
GUP users will fix the issue for them.
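
To make the contract concrete: the new page_try_share_anon_rmap() helper
(introduced below) is what all of these paths use. A minimal sketch of a
caller, assuming the PT lock is held and the page table entry has been
cleared/invalidated+flushed (not verbatim kernel code):

    if (page_try_share_anon_rmap(page)) {
        /* -EBUSY: the page may be pinned (GUP); back off and keep the
         * page mapped and exclusive. */
    } else {
        /* Success: PG_anon_exclusive was cleared; the page is now
         * marked possibly shared and can be shared/unmapped. */
    }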

I. Details about basic handling

I.1. Fresh anonymous pages

page_add_new_anon_rmap() and hugepage_add_new_anon_rmap() will mark the
given page exclusive via __page_set_anon_rmap(exclusive=1). As that is
the mechanism fresh anonymous pages come into life (besides migration
code where we copy the page->mapping), all fresh anonymous pages will
start out as exclusive.
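
A minimal sketch of that path, loosely following do_anonymous_page()
(simplified; allocation failure handling omitted):

    page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
    page_add_new_anon_rmap(page, vma, vmf->address, false);
    /* __page_set_anon_rmap(..., exclusive=1) ran: */
    VM_BUG_ON_PAGE(!PageAnonExclusive(page), page);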

I.2. COW reuse handling of anonymous pages

When a COW handler stumbles over a (sub)page that's marked exclusive, it
simply reuses it. Otherwise, the handler tries harder under page lock to
detect if the (sub)page is exclusive and can be reused. If exclusive,
page_move_anon_rmap() will mark the given (sub)page exclusive.

Note that hugetlb code does not yet check for PageAnonExclusive(), as it
still uses the old COW logic that is prone to the COW security issue
because hugetlb code cannot really tolerate unnecessary/wrong COW as
huge pages are a scarce resource.
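
Heavily simplified, the reuse logic in do_wp_page() after this patch
looks as follows (the real exclusivity check in the hunk below is more
involved, e.g., regarding the swapcache):

    if (PageAnonExclusive(page))
        goto reuse;                       /* certainly exclusive */
    if (trylock_page(page)) {
        if (page_count(page) == 1) {      /* simplified exclusivity check */
            page_move_anon_rmap(page, vma); /* re-mark exclusive */
            unlock_page(page);
            goto reuse;
        }
        unlock_page(page);
    }
    /* Otherwise: copy the page via wp_page_copy(). */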

I.3. Migration handling

try_to_migrate() has to try marking an exclusive anonymous page shared
via page_try_share_anon_rmap(). If it fails because there are GUP pins
on the page, unmap fails. migrate_vma_collect_pmd() and
__split_huge_pmd_locked() are handled similarly.

Writable migration entries implicitly point at exclusive anonymous pages:
only an exclusive anonymous page can be mapped writable in the first
place. For readable migration entries, that information is stored via a
new "readable-exclusive" migration entry, specific to anonymous pages.

When restoring a migration entry in remove_migration_pte(), information
about exclusivity is detected via the migration entry type, and
RMAP_EXCLUSIVE is set accordingly for
page_add_anon_rmap()/hugepage_add_anon_rmap() to restore that
information.
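
The migration entry selection added to try_to_migrate_one() (and
similarly to the PMD paths) below then reads:

    if (pte_write(pteval))
        entry = make_writable_migration_entry(page_to_pfn(subpage));
    else if (anon_exclusive)
        entry = make_readable_exclusive_migration_entry(page_to_pfn(subpage));
    else
        entry = make_readable_migration_entry(page_to_pfn(subpage));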

I.4. Swapout handling

try_to_unmap() has to try marking the mapped page possibly shared via
page_try_share_anon_rmap(). If it fails because there are GUP pins on the
page, unmap fails. For now, information about exclusivity is lost. In the
future, we might want to remember that information in the swap entry in
some cases; however, that requires more thought, care, and a way to store
that information in swap entries.
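
The relevant back-off in try_to_unmap_one(), as added below:

    if (anon_exclusive && page_try_share_anon_rmap(subpage)) {
        swap_free(entry);
        set_pte_at(mm, address, pvmw.pte, pteval); /* restore the PTE */
        ret = false;                               /* unmapping failed */
        page_vma_mapped_walk_done(&pvmw);
        break;
    }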

I.5. Swapin handling

do_swap_page() will never stumble over exclusive anonymous pages in the
swap cache, as try_to_unmap() prohibits that. do_swap_page() always has
to detect manually if an anonymous page is exclusive and has to set
RMAP_EXCLUSIVE for page_add_anon_rmap() accordingly.
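
Simplified from the do_swap_page() hunk below:

    exclusive = 0;
    if (!PageKsm(page) && (page != swapcache || page_count(page) == 1))
        /* Certainly exclusive: freshly allocated, or we hold the only
         * reference to the swapcache page. */
        exclusive = RMAP_EXCLUSIVE;
    ...
    page_add_anon_rmap(page, vma, vmf->address, exclusive);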

I.6. THP handling

__split_huge_pmd_locked() has to move the information about exclusivity
from the PMD to the PTEs.

a) In case we have a readable-exclusive PMD migration entry, simply insert
readable-exclusive PTE migration entries.

b) In case we have a present PMD entry and we don't want to freeze
("convert to migration entries"), simply forward PG_anon_exclusive to
all sub-pages, no need to temporarily clear the bit.

c) In case we have a present PMD entry and want to freeze, handle it
similar to try_to_migrate(): try marking the page shared first. In case
we fail, we ignore the "freeze" instruction and simply split ordinarily.
try_to_migrate() will properly fail because the THP is still mapped via
PTEs.

When splitting a compound anonymous folio (THP), the information about
exclusivity is implicitly handled via the migration entries: no need to
replicate PG_anon_exclusive manually.
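
In __split_huge_pmd_locked(), as added below, c) boils down to:

    anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
    if (freeze && page_try_share_anon_rmap(page))
        freeze = false; /* pinned: just split, try_to_migrate() will fail */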

I.7. fork() handling

fork() handling is relatively easy, because PG_anon_exclusive is only
expressive for some page table entry types.

a) Present anonymous pages

page_try_dup_anon_rmap() will mark the given subpage shared -- which
will fail if the page is pinned. If it failed, we have to copy (or
PTE-map a PMD to handle it on the PTE level).

Note that device exclusive entries are just a pointer at a PageAnon()
page. fork() will first convert a device exclusive entry to a present
page table and handle it just like present anonymous pages.

b) Device private entry

Device private entries point at PageAnon() pages that cannot be mapped
directly and, therefore, cannot get pinned.

page_try_dup_anon_rmap() will mark the given subpage shared, which
cannot fail because they cannot get pinned.

c) HW poison entries

PG_anon_exclusive will remain untouched and is stale -- the page table
entry is just a placeholder after all.

d) Migration entries

Writable and readable-exclusive entries are converted to readable
entries: possibly shared.
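
For d), the conversion in copy_nonpresent_pte() (and, similarly, in
copy_huge_pmd()) as changed below:

    if (!is_readable_migration_entry(entry) && is_cow_mapping(vm_flags)) {
        /* Writable/readable-exclusive -> readable: possibly shared. */
        entry = make_readable_migration_entry(swp_offset(entry));
        ...
    }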

I.8. mprotect() handling

mprotect() only has to properly handle the new readable-exclusive
migration entry:

When write-protecting a migration entry that points at an anonymous
page, remember the information about exclusivity via the
"readable-exclusive" migration entry type.

II. Migration and GUP-fast

Whenever replacing a present page table entry that maps an exclusive
anonymous page by a migration entry, we have to mark the page possibly
shared and synchronize against GUP-fast by a proper
clear/invalidate+flush to make the following scenario impossible:

1. try_to_migrate() places a migration entry after checking for GUP pins
   and marks the page possibly shared.
2. GUP-fast pins the page due to lack of synchronization
3. fork() converts the "writable/readable-exclusive" migration entry into a
   readable migration entry
4. Migration fails due to the GUP pin (failing to freeze the refcount)
5. Migration entries are restored. PG_anon_exclusive is lost

-> We have a pinned page that is not marked exclusive anymore.

Note that we move information about exclusivity from the page to the
migration entry as it otherwise highly overcomplicates fork() and
PTE-mapping a THP.
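
The synchronization added to migrate_vma_collect_pmd() below illustrates
the required pattern (simplified):

    if (anon_exclusive) {
        flush_cache_page(vma, addr, pte_pfn(*ptep));
        ptep_clear_flush(vma, addr, ptep); /* sync against GUP-fast */
        if (page_try_share_anon_rmap(page)) {
            set_pte_at(mm, addr, ptep, pte); /* back off, remap the page */
            /* ... skip this page ... */
        }
    }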

III. Swapout and GUP-fast

Whenever replacing a present page table entry that maps an exclusive
anonymous page by a swap entry, we have to mark the page possibly
shared and synchronize against GUP-fast by a proper
clear/invalidate+flush to make the following scenario impossible:

1. try_to_unmap() places a swap entry after checking for GUP pins and
   clears exclusivity information on the page.
2. GUP-fast pins the page due to lack of synchronization.

-> We have a pinned page that is not marked exclusive anymore.

If we'd ever store information about exclusivity in the swap entry,
similar to migration handling, the same considerations as in II would
apply. This is future work.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/rmap.h    | 33 +++++++++++++++++++
 include/linux/swap.h    | 15 ++++++---
 include/linux/swapops.h | 25 ++++++++++++++
 mm/huge_memory.c        | 73 +++++++++++++++++++++++++++++++++++++----
 mm/hugetlb.c            | 13 +++++---
 mm/ksm.c                | 13 +++++++-
 mm/memory.c             | 33 ++++++++++++++-----
 mm/migrate.c            | 44 +++++++++++++++++++++----
 mm/mprotect.c           |  8 +++--
 mm/rmap.c               | 49 ++++++++++++++++++++++++---
 10 files changed, 268 insertions(+), 38 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 593a4566420f..c4957bac4868 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -221,6 +221,7 @@ static inline int page_try_dup_anon_rmap(struct page *page, bool compound,
 	    unlikely(page_needs_cow_for_dma(vma, page))))
 		return -EBUSY;
 
+	ClearPageAnonExclusive(page);
 	/*
 	 * It's okay to share the anon page between both processes, mapping
 	 * the page R/O into both processes.
@@ -229,6 +230,38 @@ static inline int page_try_dup_anon_rmap(struct page *page, bool compound,
 	return 0;
 }
 
+/**
+ * page_try_share_anon_rmap - try marking an exclusive anonymous page possibly
+ *			      shared to prepare for KSM or temporary unmapping
+ * @page: the exclusive anonymous page to try marking possibly shared
+ *
+ * The caller needs to hold the PT lock and has to have the page table entry
+ * cleared/invalidated+flushed, to properly sync against GUP-fast.
+ *
+ * This is similar to page_try_dup_anon_rmap(), however, not used during fork()
+ * to duplicate a mapping, but instead to prepare for KSM or temporarily
+ * unmapping a page (swap, migration) via page_remove_rmap().
+ *
+ * Marking the page shared can only fail if the page may be pinned; device
+ * private pages cannot get pinned and consequently this function cannot fail.
+ *
+ * Returns 0 if marking the page possibly shared succeeded. Returns -EBUSY
+ * otherwise.
+ */
+static inline int page_try_share_anon_rmap(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageAnon(page) || !PageAnonExclusive(page), page);
+
+	/* See page_try_dup_anon_rmap(). */
+	if (likely(!is_device_private_page(page) &&
+	    unlikely(page_maybe_dma_pinned(page))))
+		return -EBUSY;
+
+	ClearPageAnonExclusive(page);
+	return 0;
+}
+
+
 /*
  * Called from mm/vmscan.c to handle paging out
  */
diff --git a/include/linux/swap.h b/include/linux/swap.h
index b546e4bd5c5a..422765d1141c 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -78,12 +78,19 @@ static inline int current_is_kswapd(void)
 #endif
 
 /*
- * NUMA node memory migration support
+ * Page migration support.
+ *
+ * SWP_MIGRATION_READ_EXCLUSIVE is only applicable to anonymous pages and
+ * indicates that the referenced (part of) an anonymous page is exclusive to
+ * a single process. For SWP_MIGRATION_WRITE, that information is implicit:
+ * (part of) an anonymous page that is mapped writable is exclusive to a
+ * single process.
  */
 #ifdef CONFIG_MIGRATION
-#define SWP_MIGRATION_NUM 2
-#define SWP_MIGRATION_READ	(MAX_SWAPFILES + SWP_HWPOISON_NUM)
-#define SWP_MIGRATION_WRITE	(MAX_SWAPFILES + SWP_HWPOISON_NUM + 1)
+#define SWP_MIGRATION_NUM 3
+#define SWP_MIGRATION_READ (MAX_SWAPFILES + SWP_HWPOISON_NUM)
+#define SWP_MIGRATION_READ_EXCLUSIVE (MAX_SWAPFILES + SWP_HWPOISON_NUM + 1)
+#define SWP_MIGRATION_WRITE (MAX_SWAPFILES + SWP_HWPOISON_NUM + 2)
 #else
 #define SWP_MIGRATION_NUM 0
 #endif
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index d356ab4047f7..06280fc1c99b 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -194,6 +194,7 @@ static inline bool is_writable_device_exclusive_entry(swp_entry_t entry)
 static inline int is_migration_entry(swp_entry_t entry)
 {
 	return unlikely(swp_type(entry) == SWP_MIGRATION_READ ||
+			swp_type(entry) == SWP_MIGRATION_READ_EXCLUSIVE ||
 			swp_type(entry) == SWP_MIGRATION_WRITE);
 }
 
@@ -202,11 +203,26 @@ static inline int is_writable_migration_entry(swp_entry_t entry)
 	return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE);
 }
 
+static inline int is_readable_migration_entry(swp_entry_t entry)
+{
+	return unlikely(swp_type(entry) == SWP_MIGRATION_READ);
+}
+
+static inline int is_readable_exclusive_migration_entry(swp_entry_t entry)
+{
+	return unlikely(swp_type(entry) == SWP_MIGRATION_READ_EXCLUSIVE);
+}
+
 static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 {
 	return swp_entry(SWP_MIGRATION_READ, offset);
 }
 
+static inline swp_entry_t make_readable_exclusive_migration_entry(pgoff_t offset)
+{
+	return swp_entry(SWP_MIGRATION_READ_EXCLUSIVE, offset);
+}
+
 static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
 {
 	return swp_entry(SWP_MIGRATION_WRITE, offset);
@@ -224,6 +240,11 @@ static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 	return swp_entry(0, 0);
 }
 
+static inline swp_entry_t make_readable_exclusive_migration_entry(pgoff_t offset)
+{
+	return swp_entry(0, 0);
+}
+
 static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
 {
 	return swp_entry(0, 0);
@@ -244,6 +265,10 @@ static inline int is_writable_migration_entry(swp_entry_t entry)
 {
 	return 0;
 }
+static inline int is_readable_migration_entry(swp_entry_t entry)
+{
+	return 0;
+}
 
 #endif
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7b9e76ba3f2f..6f2ee41cd423 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1054,7 +1054,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		swp_entry_t entry = pmd_to_swp_entry(pmd);
 
 		VM_BUG_ON(!is_pmd_migration_entry(pmd));
-		if (is_writable_migration_entry(entry)) {
+		if (!is_readable_migration_entry(entry)) {
 			entry = make_readable_migration_entry(
 							swp_offset(entry));
 			pmd = swp_entry_to_pmd(entry);
@@ -1292,6 +1292,17 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	page = pmd_page(orig_pmd);
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
+	if (PageAnonExclusive(page)) {
+		pmd_t entry;
+
+		entry = pmd_mkyoung(orig_pmd);
+		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
+			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
+		spin_unlock(vmf->ptl);
+		return VM_FAULT_WRITE;
+	}
+
 	if (!trylock_page(page)) {
 		get_page(page);
 		spin_unlock(vmf->ptl);
@@ -1741,6 +1752,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (is_swap_pmd(*pmd)) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
+		struct page *page = pfn_swap_entry_to_page(entry);
 
 		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
 		if (is_writable_migration_entry(entry)) {
@@ -1749,8 +1761,10 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			 * A protection check is difficult so
 			 * just be safe and disable write
 			 */
-			entry = make_readable_migration_entry(
-							swp_offset(entry));
+			if (PageAnon(page))
+				entry = make_readable_exclusive_migration_entry(swp_offset(entry));
+			else
+				entry = make_readable_migration_entry(swp_offset(entry));
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
@@ -1959,6 +1973,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pgtable_t pgtable;
 	pmd_t old_pmd, _pmd;
 	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
+	bool anon_exclusive = false;
 	unsigned long addr;
 	int i;
 
@@ -2040,6 +2055,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_swap_entry_to_page(entry);
 		write = is_writable_migration_entry(entry);
+		if (PageAnon(page))
+			anon_exclusive = is_readable_exclusive_migration_entry(entry);
 		young = false;
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
@@ -2051,6 +2068,23 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		young = pmd_young(old_pmd);
 		soft_dirty = pmd_soft_dirty(old_pmd);
 		uffd_wp = pmd_uffd_wp(old_pmd);
+
+		/*
+		 * Without "freeze", we'll simply split the PMD, propagating the
+		 * PageAnonExclusive() flag for each PTE by setting it for
+		 * each subpage -- no need to (temporarily) clear.
+		 *
+		 * With "freeze" we want to replace mapped pages by
+		 * migration entries right away. This is only possible if we
+		 * managed to clear PageAnonExclusive() -- see
+		 * set_pmd_migration_entry().
+		 *
+		 * In case we cannot clear PageAnonExclusive(), split the PMD
+		 * only and let try_to_migrate_one() fail later.
+		 */
+		anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
+		if (freeze && page_try_share_anon_rmap(page))
+			freeze = false;
 	}
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	page_ref_add(page, HPAGE_PMD_NR - 1);
@@ -2074,6 +2108,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			if (write)
 				swp_entry = make_writable_migration_entry(
 							page_to_pfn(page + i));
+			else if (anon_exclusive)
+				swp_entry = make_readable_exclusive_migration_entry(
+							page_to_pfn(page + i));
 			else
 				swp_entry = make_readable_migration_entry(
 							page_to_pfn(page + i));
@@ -2085,6 +2122,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		} else {
 			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
 			entry = maybe_mkwrite(entry, vma);
+			if (anon_exclusive)
+				SetPageAnonExclusive(page + i);
 			if (!write)
 				entry = pte_wrprotect(entry);
 			if (!young)
@@ -2315,8 +2354,11 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 *
 	 * After successful get_page_unless_zero() might follow flags change,
 	 * for example lock_page() which set PG_waiters.
+	 *
+	 * Keep PG_anon_exclusive information already maintained for all
+	 * subpages of a compound page untouched.
 	 */
-	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+	page_tail->flags &= ~(PAGE_FLAGS_CHECK_AT_PREP & ~PG_anon_exclusive);
 	page_tail->flags |= (head->flags &
 			((1L << PG_referenced) |
 			 (1L << PG_swapbacked) |
@@ -3070,6 +3112,7 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 	struct vm_area_struct *vma = pvmw->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address = pvmw->address;
+	bool anon_exclusive;
 	pmd_t pmdval;
 	swp_entry_t entry;
 	pmd_t pmdswp;
@@ -3079,10 +3122,19 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 
 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
 	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
+
+	anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
+	if (anon_exclusive && page_try_share_anon_rmap(page)) {
+		set_pmd_at(mm, address, pvmw->pmd, pmdval);
+		return;
+	}
+
 	if (pmd_dirty(pmdval))
 		set_page_dirty(page);
 	if (pmd_write(pmdval))
 		entry = make_writable_migration_entry(page_to_pfn(page));
+	else if (anon_exclusive)
+		entry = make_readable_exclusive_migration_entry(page_to_pfn(page));
 	else
 		entry = make_readable_migration_entry(page_to_pfn(page));
 	pmdswp = swp_entry_to_pmd(entry);
@@ -3116,10 +3168,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
 
 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
-	if (PageAnon(new))
-		page_add_anon_rmap(new, vma, mmun_start, RMAP_COMPOUND);
-	else
+	if (PageAnon(new)) {
+		int rmap_flags = RMAP_COMPOUND;
+
+		if (!is_readable_migration_entry(entry))
+			rmap_flags |= RMAP_EXCLUSIVE;
+
+		page_add_anon_rmap(new, vma, mmun_start, rmap_flags);
+	} else {
 		page_add_file_rmap(new, true);
+	}
+	VM_BUG_ON(pmd_write(pmde) && PageAnon(new) && !PageAnonExclusive(new));
 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
 	if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new))
 		mlock_vma_page(new);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a4b769f3d832..0c2f3d500487 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4767,7 +4767,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				    is_hugetlb_entry_hwpoisoned(entry))) {
 			swp_entry_t swp_entry = pte_to_swp_entry(entry);
 
-			if (is_writable_migration_entry(swp_entry) && cow) {
+			if (!is_readable_migration_entry(swp_entry) && cow) {
 				/*
 				 * COW mappings require pages in both
 				 * parent and child to be set to read.
@@ -6163,12 +6163,17 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		}
 		if (unlikely(is_hugetlb_entry_migration(pte))) {
 			swp_entry_t entry = pte_to_swp_entry(pte);
+			struct page *page = pfn_swap_entry_to_page(entry);
 
-			if (is_writable_migration_entry(entry)) {
+			if (!is_readable_migration_entry(entry)) {
 				pte_t newpte;
 
-				entry = make_readable_migration_entry(
-							swp_offset(entry));
+				if (PageAnon(page))
+					entry = make_readable_exclusive_migration_entry(
+								swp_offset(entry));
+				else
+					entry = make_readable_migration_entry(
+								swp_offset(entry));
 				newpte = swp_entry_to_pte(entry);
 				set_huge_swap_pte_at(mm, address, ptep,
 						     newpte, huge_page_size(h));
diff --git a/mm/ksm.c b/mm/ksm.c
index e897577c54c5..c380c36f3ebd 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -866,6 +866,7 @@ static inline struct stable_node *page_stable_node(struct page *page)
 static inline void set_page_stable_node(struct page *page,
 					struct stable_node *stable_node)
 {
+	VM_BUG_ON_PAGE(PageAnon(page) && PageAnonExclusive(page), page);
 	page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
 }
 
@@ -1041,6 +1042,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	int swapped;
 	int err = -EFAULT;
 	struct mmu_notifier_range range;
+	bool anon_exclusive;
 
 	pvmw.address = page_address_in_vma(page, vma);
 	if (pvmw.address == -EFAULT)
@@ -1058,9 +1060,10 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	if (WARN_ONCE(!pvmw.pte, "Unexpected PMD mapping?"))
 		goto out_unlock;
 
+	anon_exclusive = PageAnonExclusive(page);
 	if (pte_write(*pvmw.pte) || pte_dirty(*pvmw.pte) ||
 	    (pte_protnone(*pvmw.pte) && pte_savedwrite(*pvmw.pte)) ||
-						mm_tlb_flush_pending(mm)) {
+	    anon_exclusive || mm_tlb_flush_pending(mm)) {
 		pte_t entry;
 
 		swapped = PageSwapCache(page);
@@ -1088,6 +1091,12 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 			set_pte_at(mm, pvmw.address, pvmw.pte, entry);
 			goto out_unlock;
 		}
+
+		if (anon_exclusive && page_try_share_anon_rmap(page)) {
+			set_pte_at(mm, pvmw.address, pvmw.pte, entry);
+			goto out_unlock;
+		}
+
 		if (pte_dirty(entry))
 			set_page_dirty(page);
 
@@ -1146,6 +1155,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 		pte_unmap_unlock(ptep, ptl);
 		goto out_mn;
 	}
+	VM_BUG_ON_PAGE(PageAnonExclusive(page), page);
+	VM_BUG_ON_PAGE(PageAnon(kpage) && PageAnonExclusive(kpage), kpage);
 
 	/*
 	 * No need to check ksm_use_zero_pages here: we can only have a
diff --git a/mm/memory.c b/mm/memory.c
index 1ab0dfbec288..11577ee015be 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -720,6 +720,8 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
 	else if (is_writable_device_exclusive_entry(entry))
 		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 
+	VM_BUG_ON(pte_write(pte) && !(PageAnon(page) && PageAnonExclusive(page)));
+
 	/*
 	 * No need to take a page reference as one was already
 	 * created when the swap entry was made.
@@ -799,11 +801,12 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 		rss[mm_counter(page)]++;
 
-		if (is_writable_migration_entry(entry) &&
+		if (!is_readable_migration_entry(entry) &&
 				is_cow_mapping(vm_flags)) {
 			/*
-			 * COW mappings require pages in both
-			 * parent and child to be set to read.
+			 * COW mappings require pages in both parent and child
+			 * to be set to read. A previously exclusive entry is
+			 * now shared.
 			 */
 			entry = make_readable_migration_entry(
 							swp_offset(entry));
@@ -954,6 +957,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		ptep_set_wrprotect(src_mm, addr, src_pte);
 		pte = pte_wrprotect(pte);
 	}
+	VM_BUG_ON(page && PageAnon(page) && PageAnonExclusive(page));
 
 	/*
 	 * If it's a shared mapping, mark it clean in
@@ -2943,6 +2947,9 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct page *page = vmf->page;
 	pte_t entry;
+
+	VM_BUG_ON(PageAnon(page) && !PageAnonExclusive(page));
+
 	/*
 	 * Clear the pages cpupid information as the existing
 	 * information potentially belongs to a now completely
@@ -3277,6 +3284,13 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	if (PageAnon(vmf->page)) {
 		struct page *page = vmf->page;
 
+		/*
+		 * If the page is exclusive to this process we must reuse the
+		 * page without further checks.
+		 */
+		if (PageAnonExclusive(page))
+			goto reuse;
+
 		/*
 		 * We have to verify under page lock: these early checks are
 		 * just an optimization to avoid locking the page and freeing
@@ -3309,6 +3323,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		 */
 		page_move_anon_rmap(page, vma);
 		unlock_page(page);
+reuse:
 		wp_page_reuse(vmf);
 		return VM_FAULT_WRITE;
 	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
@@ -3689,11 +3704,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * that are certainly not shared because we just allocated them without
 	 * exposing them to the swapcache.
 	 */
-	if ((vmf->flags & FAULT_FLAG_WRITE) && !PageKsm(page) &&
-	    (page != swapcache || page_count(page) == 1)) {
-		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
-		vmf->flags &= ~FAULT_FLAG_WRITE;
-		ret |= VM_FAULT_WRITE;
+	if (!PageKsm(page) && (page != swapcache || page_count(page) == 1)) {
+		if (vmf->flags & FAULT_FLAG_WRITE) {
+			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+			vmf->flags &= ~FAULT_FLAG_WRITE;
+			ret |= VM_FAULT_WRITE;
+		}
 		exclusive = RMAP_EXCLUSIVE;
 	}
 	flush_icache_page(vma, page);
@@ -3713,6 +3729,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		page_add_anon_rmap(page, vma, vmf->address, exclusive);
 	}
 
+	VM_BUG_ON(!PageAnon(page) || (pte_write(pte) && !PageAnonExclusive(page)));
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 709cb11d5b81..63a3241041e1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -188,6 +188,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	while (page_vma_mapped_walk(&pvmw)) {
+		int rmap_flags = 0;
+
 		if (PageKsm(page))
 			new = page;
 		else
@@ -217,6 +219,9 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		else if (pte_swp_uffd_wp(*pvmw.pte))
 			pte = pte_mkuffd_wp(pte);
 
+		if (PageAnon(new) && !is_readable_migration_entry(entry))
+			rmap_flags |= RMAP_EXCLUSIVE;
+
 		if (unlikely(is_device_private_page(new))) {
 			if (pte_write(pte))
 				entry = make_writable_device_private_entry(
@@ -238,7 +243,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 			pte = pte_mkhuge(pte);
 			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
 			if (PageAnon(new))
-				hugepage_add_anon_rmap(new, vma, pvmw.address, 0);
+				hugepage_add_anon_rmap(new, vma, pvmw.address,
+						       rmap_flags);
 			else
 				page_dup_file_rmap(new, true);
 			set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
@@ -246,7 +252,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 #endif
 		{
 			if (PageAnon(new))
-				page_add_anon_rmap(new, vma, pvmw.address, 0);
+				page_add_anon_rmap(new, vma, pvmw.address,
+						   rmap_flags);
 			else
 				page_add_file_rmap(new, false);
 			set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
@@ -377,6 +384,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 		/* No turning back from here */
 		newfolio->index = folio->index;
 		newfolio->mapping = folio->mapping;
+		/*
+		 * Note: PG_anon_exclusive is always migrated via migration
+		 * entries.
+		 */
 		if (folio_test_swapbacked(folio))
 			__folio_set_swapbacked(newfolio);
 
@@ -2300,15 +2311,34 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		 * set up a special migration page table entry now.
 		 */
 		if (trylock_page(page)) {
+			bool anon_exclusive;
 			pte_t swp_pte;
 
+			anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
+			if (anon_exclusive) {
+				flush_cache_page(vma, addr, pte_pfn(*ptep));
+				ptep_clear_flush(vma, addr, ptep);
+
+				if (page_try_share_anon_rmap(page)) {
+					set_pte_at(mm, addr, ptep, pte);
+					unlock_page(page);
+					put_page(page);
+					mpfn = 0;
+					goto next;
+				}
+			} else {
+				ptep_get_and_clear(mm, addr, ptep);
+			}
+
 			migrate->cpages++;
-			ptep_get_and_clear(mm, addr, ptep);
 
 			/* Setup special migration page table entry */
 			if (mpfn & MIGRATE_PFN_WRITE)
 				entry = make_writable_migration_entry(
 							page_to_pfn(page));
+			else if (anon_exclusive)
+				entry = make_readable_exclusive_migration_entry(
+							page_to_pfn(page));
 			else
 				entry = make_readable_migration_entry(
 							page_to_pfn(page));
@@ -2327,12 +2357,12 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			set_pte_at(mm, addr, ptep, swp_pte);
 
 			/*
-			 * This is like regular unmap: we remove the rmap and
-			 * drop page refcount. Page won't be freed, as we took
-			 * a reference just above.
+			 * This is like regular unmap: we remove the
+			 * rmap and drop page refcount. Page won't be
+			 * freed, as we took a reference just above.
 			 */
-			page_remove_rmap(page, false);
 			put_page(page);
+			page_remove_rmap(page, false);
 
 			if (pte_present(pte))
 				unmapped++;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 5ca3fbcb1495..fd8713f877dd 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -141,6 +141,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
+			struct page *page = pfn_swap_entry_to_page(entry);
 			pte_t newpte;
 
 			if (is_writable_migration_entry(entry)) {
@@ -148,8 +149,11 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				 * A protection check is difficult so
 				 * just be safe and disable write
 				 */
-				entry = make_readable_migration_entry(
-							swp_offset(entry));
+				if (PageAnon(page))
+					entry = make_readable_exclusive_migration_entry(
+							     swp_offset(entry));
+				else
+					entry = make_readable_migration_entry(swp_offset(entry));
 				newpte = swp_entry_to_pte(entry);
 				if (pte_swp_soft_dirty(oldpte))
 					newpte = pte_swp_mksoft_dirty(newpte);
diff --git a/mm/rmap.c b/mm/rmap.c
index bafdb7f70cec..cc54bd1cce61 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1048,6 +1048,7 @@ EXPORT_SYMBOL_GPL(folio_mkclean);
 void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 {
 	struct anon_vma *anon_vma = vma->anon_vma;
+	struct page *subpage = page;
 
 	page = compound_head(page);
 
@@ -1061,6 +1062,7 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 	 * PageAnon()) will not see one without the other.
 	 */
 	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
+	SetPageAnonExclusive(subpage);
 }
 
 /**
@@ -1078,7 +1080,7 @@ static void __page_set_anon_rmap(struct page *page,
 	BUG_ON(!anon_vma);
 
 	if (PageAnon(page))
-		return;
+		goto out;
 
 	/*
 	 * If the page isn't exclusively mapped into this vma,
@@ -1097,6 +1099,9 @@ static void __page_set_anon_rmap(struct page *page,
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
 	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
 	page->index = linear_page_index(vma, address);
+out:
+	if (exclusive)
+		SetPageAnonExclusive(page);
 }
 
 /**
@@ -1156,6 +1161,8 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	} else {
 		first = atomic_inc_and_test(&page->_mapcount);
 	}
+	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
+	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
 
 	if (first) {
 		int nr = compound ? thp_nr_pages(page) : 1;
@@ -1419,7 +1426,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	};
 	pte_t pteval;
 	struct page *subpage;
-	bool ret = true;
+	bool anon_exclusive, ret = true;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
@@ -1482,6 +1489,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 
 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
 		address = pvmw.address;
+		anon_exclusive = PageAnon(page) && PageAnonExclusive(subpage);
 
 		if (PageHuge(page) && !PageAnon(page)) {
 			/*
@@ -1517,9 +1525,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			}
 		}
 
-		/* Nuke the page table entry. */
+		/*
+		 * Nuke the page table entry. When having to clear
+		 * PageAnonExclusive(), we always have to flush.
+		 */
 		flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
-		if (should_defer_flush(mm, flags)) {
+		if (should_defer_flush(mm, flags) && !anon_exclusive) {
 			/*
 			 * We clear the PTE but do not flush so potentially
 			 * a remote CPU could still be writing to the page.
@@ -1620,6 +1631,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
+			if (anon_exclusive &&
+			    page_try_share_anon_rmap(subpage)) {
+				swap_free(entry);
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
 				if (list_empty(&mm->mmlist))
@@ -1720,7 +1739,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 	};
 	pte_t pteval;
 	struct page *subpage;
-	bool ret = true;
+	bool anon_exclusive, ret = true;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
@@ -1779,6 +1798,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 
 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
 		address = pvmw.address;
+		anon_exclusive = PageAnon(page) && PageAnonExclusive(subpage);
 
 		if (PageHuge(page) && !PageAnon(page)) {
 			/*
@@ -1830,6 +1850,9 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 			swp_entry_t entry;
 			pte_t swp_pte;
 
+			if (anon_exclusive)
+				BUG_ON(page_try_share_anon_rmap(subpage));
+
 			/*
 			 * Store the pfn of the page in a special migration
 			 * pte. do_swap_page() will wait until the migration
@@ -1838,6 +1861,8 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 			entry = pte_to_swp_entry(pteval);
 			if (is_writable_device_private_entry(entry))
 				entry = make_writable_migration_entry(pfn);
+			else if (anon_exclusive)
+				entry = make_readable_exclusive_migration_entry(pfn);
 			else
 				entry = make_readable_migration_entry(pfn);
 			swp_pte = swp_entry_to_pte(entry);
@@ -1900,6 +1925,15 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
+			VM_BUG_ON_PAGE(pte_write(pteval) && PageAnon(page) &&
+				       !anon_exclusive, page);
+			if (anon_exclusive &&
+			    page_try_share_anon_rmap(subpage)) {
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
 
 			/*
 			 * Store the pfn of the page in a special migration
@@ -1909,6 +1943,9 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 			if (pte_write(pteval))
 				entry = make_writable_migration_entry(
 							page_to_pfn(subpage));
+			else if (anon_exclusive)
+				entry = make_readable_exclusive_migration_entry(
+							page_to_pfn(subpage));
 			else
 				entry = make_readable_migration_entry(
 							page_to_pfn(subpage));
@@ -2422,6 +2459,8 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	BUG_ON(!anon_vma);
 	/* address might be in next vma when migration races vma_adjust */
 	first = atomic_inc_and_test(compound_mapcount_ptr(page));
+	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
+	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
 	if (first)
 		__page_set_anon_rmap(page, vma, address, flags & RMAP_EXCLUSIVE);
 }
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH RFC 10/13] mm/gup: disallow follow_page(FOLL_PIN)
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (8 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 09/13] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 11/13] mm: support GUP-triggered unsharing of anonymous pages David Hildenbrand
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

We want to change the way we handle R/O pins on anonymous pages that
might be shared: if we detect a possibly shared anonymous page --
mapped R/O and !PageAnonExclusive() -- we want to trigger unsharing
via a page fault, resulting in an exclusive anonymous page that can be
pinned reliably without getting replaced via COW on the next write
fault.

However, the required page fault will be problematic for follow_page():
in contrast to ordinary GUP, follow_page() doesn't trigger faults
internally. So we would end up failing a R/O pin via
follow_page(), although there is something mapped R/O into the page
table, which might be rather surprising.

We don't seem to have follow_page(FOLL_PIN) users, and it's a purely
internal MM function. Let's just make our life easier and the semantics of
follow_page() clearer by just disallowing FOLL_PIN for follow_page()
completely.
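
To illustrate the changed semantics for a (hypothetical) follow_page()
caller -- a sketch, not code from this patch:

    struct page *page;

    page = follow_page(vma, addr, FOLL_GET); /* unchanged */
    page = follow_page(vma, addr, FOLL_PIN); /* now always returns NULL */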

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/gup.c     | 3 +++
 mm/hugetlb.c | 8 +++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a9d4d724aef7..7127e789d210 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -875,6 +875,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	if (vma_is_secretmem(vma))
 		return NULL;
 
+	if (foll_flags & FOLL_PIN)
+		return NULL;
+
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
 	if (ctx.pgmap)
 		put_dev_pagemap(ctx.pgmap);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0c2f3d500487..8aaca8becf97 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6693,9 +6693,11 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 	spinlock_t *ptl;
 	pte_t pte;
 
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
-			 (FOLL_PIN | FOLL_GET)))
+	/*
+	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
+	 * follow_hugetlb_page().
+	 */
+	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
 retry:
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH RFC 11/13] mm: support GUP-triggered unsharing of anonymous pages
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (9 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 10/13] mm/gup: disallow follow_page(FOLL_PIN) David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-02-24 12:26 ` [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page David Hildenbrand
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

Whenever GUP currently ends up taking a R/O pin on an anonymous page that
might be shared -- mapped R/O and !PageAnonExclusive() -- any write fault
on the page table entry will end up replacing the mapped anonymous page
due to COW, resulting in the GUP pin no longer being consistent with the
page actually mapped into the page table.

The possible ways to deal with this situation are:
 (1) Ignore and pin -- what we do right now.
 (2) Fail to pin -- which would be rather surprising to callers and
     could break user space.
 (3) Trigger unsharing and pin the now exclusive page -- reliable R/O
     pins.

We want to implement 3) because it provides the clearest semantics and
allows for checking in unpin_user_pages() and friends for possible BUGs:
when trying to unpin a page that's no longer exclusive, clearly
something went very wrong and might result in memory corruptions that
might be hard to debug. So we better have a nice way to spot such
issues.

To implement 3), we need a way for GUP to trigger unsharing:
FAULT_FLAG_UNSHARE. FAULT_FLAG_UNSHARE is only applicable to R/O mapped
anonymous pages and resembles COW logic during a write fault. However, in
contrast to a write fault, GUP-triggered unsharing will, for example, still
maintain the write protection.

Let's implement FAULT_FLAG_UNSHARE by hooking into the existing write fault
handlers for all applicable anonymous page types: ordinary pages, THP and
hugetlb.

* If FAULT_FLAG_UNSHARE finds a R/O-mapped anonymous page that has been
  marked exclusive in the meantime by someone else, there is nothing to do.
* If FAULT_FLAG_UNSHARE finds a R/O-mapped anonymous page that's not
  marked exclusive, it will try detecting if the process is the exclusive
  owner. If exclusive, it can be set exclusive similar to reuse logic
  during write faults via page_move_anon_rmap() and there is nothing
  else to do; otherwise, we either have to copy and map a fresh,
  anonymous exclusive page R/O (ordinary pages, hugetlb), or split the
  THP.
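
As a hypothetical sketch of how GUP will trigger unsharing (the actual
wiring happens in the next patch of this series):

    vm_fault_t ret;

    /* Note: FAULT_FLAG_UNSHARE without FAULT_FLAG_WRITE. */
    ret = handle_mm_fault(vma, address, FAULT_FLAG_UNSHARE, NULL);
    /* On success (ret == 0), the PTE maps an exclusive anonymous page,
     * still mapped R/O. */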

This commit is heavily based on patches by Andrea.

Co-developed-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm_types.h |  8 ++++
 mm/huge_memory.c         | 17 ++++++-
 mm/hugetlb.c             | 56 ++++++++++++++---------
 mm/memory.c              | 97 +++++++++++++++++++++++++++-------------
 4 files changed, 127 insertions(+), 51 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5140e5feb486..315a8d121382 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -791,6 +791,9 @@ typedef struct {
  * @FAULT_FLAG_REMOTE: The fault is not for current task/mm.
  * @FAULT_FLAG_INSTRUCTION: The fault was during an instruction fetch.
  * @FAULT_FLAG_INTERRUPTIBLE: The fault can be interrupted by non-fatal signals.
+ * @FAULT_FLAG_UNSHARE: The fault is an unsharing request to unshare (and mark
+ *                      exclusive) a possibly shared anonymous page that is
+ *                      mapped R/O.
  *
  * About @FAULT_FLAG_ALLOW_RETRY and @FAULT_FLAG_TRIED: we can specify
  * whether we would allow page faults to retry by specifying these two
@@ -810,6 +813,10 @@ typedef struct {
  * continuous faults with flags (b).  We should always try to detect pending
  * signals before a retry to make sure the continuous page faults can still be
  * interrupted if necessary.
+ *
+ * The combination FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE is illegal.
+ * FAULT_FLAG_UNSHARE is ignored and treated like an ordinary read fault when
+ * no existing R/O-mapped anonymous page is encountered.
  */
 enum fault_flag {
 	FAULT_FLAG_WRITE =		1 << 0,
@@ -822,6 +829,7 @@ enum fault_flag {
 	FAULT_FLAG_REMOTE =		1 << 7,
 	FAULT_FLAG_INSTRUCTION =	1 << 8,
 	FAULT_FLAG_INTERRUPTIBLE =	1 << 9,
+	FAULT_FLAG_UNSHARE =		1 << 10,
 };
 
 #endif /* _LINUX_MM_TYPES_H */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6f2ee41cd423..2cb8fa31377a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1271,6 +1271,7 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
 
 vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
+	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	struct vm_area_struct *vma = vmf->vma;
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
@@ -1279,6 +1280,9 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
 	VM_BUG_ON_VMA(!vma->anon_vma, vma);
 
+	VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE));
+	VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE));
+
 	if (is_huge_zero_pmd(orig_pmd))
 		goto fallback;
 
@@ -1295,6 +1299,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	if (PageAnonExclusive(page)) {
 		pmd_t entry;
 
+		/* R/O and exclusive, nothing to do. */
+		if (unlikely(unshare)) {
+			spin_unlock(vmf->ptl);
+			return 0;
+		}
+
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
@@ -1318,7 +1328,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	}
 
 	/*
-	 * See do_wp_page(): we can only map the page writable if there are
+	 * See do_wp_page(): we can only reuse the page exclusively if there are
 	 * no additional references. Note that we always drain the LRU
 	 * pagevecs immediately after adding a THP.
 	 */
@@ -1330,6 +1340,11 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		pmd_t entry;
 
 		page_move_anon_rmap(page, vma);
+		if (unlikely(unshare)) {
+			unlock_page(page);
+			spin_unlock(vmf->ptl);
+			return 0;
+		}
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8aaca8becf97..ebcde7e9f704 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5140,15 +5140,16 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 }
 
 /*
- * Hugetlb_cow() should be called with page lock of the original hugepage held.
+ * hugetlb_wp() should be called with page lock of the original hugepage held.
  * Called with hugetlb_fault_mutex_table held and pte_page locked so we
  * cannot race with other handlers or page migration.
  * Keep the pte_same checks anyway to make transition from the mutex easier.
  */
-static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
-		       unsigned long address, pte_t *ptep,
+static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
+		       unsigned long address, pte_t *ptep, unsigned int flags,
 		       struct page *pagecache_page, spinlock_t *ptl)
 {
+	const bool unshare = flags & FAULT_FLAG_UNSHARE;
 	pte_t pte;
 	struct hstate *h = hstate_vma(vma);
 	struct page *old_page, *new_page;
@@ -5157,17 +5158,26 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	unsigned long haddr = address & huge_page_mask(h);
 	struct mmu_notifier_range range;
 
+	VM_BUG_ON(unshare && (flags & FAULT_FLAG_WRITE));
+	VM_BUG_ON(!unshare && !(flags & FAULT_FLAG_WRITE));
+
 	pte = huge_ptep_get(ptep);
 	old_page = pte_page(pte);
 
 retry_avoidcopy:
-	/* If no-one else is actually using this page, avoid the copy
-	 * and just make the page writable */
+	/*
+	 * If no-one else is actually using this page, we're the exclusive
+	 * owner and can reuse this page.
+	 */
 	if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
+		if (unlikely(unshare) && PageAnonExclusive(old_page))
+			return 0;
 		page_move_anon_rmap(old_page, vma);
-		set_huge_ptep_writable(vma, haddr, ptep);
+		if (likely(!unshare))
+			set_huge_ptep_writable(vma, haddr, ptep);
 		return 0;
 	}
+	VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page), old_page);
 
 	/*
 	 * If the process that created a MAP_PRIVATE mapping is about to
@@ -5266,13 +5276,13 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) {
 		ClearHPageRestoreReserve(new_page);
 
-		/* Break COW */
+		/* Break COW or unshare */
 		huge_ptep_clear_flush(vma, haddr, ptep);
 		mmu_notifier_invalidate_range(mm, range.start, range.end);
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
 		set_huge_pte_at(mm, haddr, ptep,
-				make_huge_pte(vma, new_page, 1));
+				make_huge_pte(vma, new_page, !unshare));
 		SetHPageMigratable(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
@@ -5280,7 +5290,10 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
 out_release_all:
-	/* No restore in case of successful pagetable update (Break COW) */
+	/*
+	 * No restore in case of successful pagetable update (Break COW or
+	 * unshare)
+	 */
 	if (new_page != old_page)
 		restore_reserve_on_error(h, vma, haddr, new_page);
 	put_page(new_page);
@@ -5403,7 +5416,8 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	/*
 	 * Currently, we are forced to kill the process in the event the
 	 * original mapper has unmapped pages from the child due to a failed
-	 * COW. Warn that such a situation has occurred as it may not be obvious
+	 * COW/unsharing. Warn that such a situation has occurred as it may not
+	 * be obvious.
 	 */
 	if (is_vma_resv_set(vma, HPAGE_RESV_UNMAPPED)) {
 		pr_warn_ratelimited("PID %d killed due to inadequate hugepage pool\n",
@@ -5529,7 +5543,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	hugetlb_count_add(pages_per_huge_page(h), mm);
 	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
 		/* Optimization, do the COW without a second fault */
-		ret = hugetlb_cow(mm, vma, address, ptep, page, ptl);
+		ret = hugetlb_wp(mm, vma, address, ptep, flags, page, ptl);
 	}
 
 	spin_unlock(ptl);
@@ -5659,14 +5673,15 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_mutex;
 
 	/*
-	 * If we are going to COW the mapping later, we examine the pending
-	 * reservations for this page now. This will ensure that any
+	 * If we are going to COW/unshare the mapping later, we examine the
+	 * pending reservations for this page now. This will ensure that any
 	 * allocations necessary to record that reservation occur outside the
 	 * spinlock. For private mappings, we also lookup the pagecache
 	 * page now as it is used to determine if a reservation has been
 	 * consumed.
 	 */
-	if ((flags & FAULT_FLAG_WRITE) && !huge_pte_write(entry)) {
+	if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
+	    !huge_pte_write(entry)) {
 		if (vma_needs_reservation(h, vma, haddr) < 0) {
 			ret = VM_FAULT_OOM;
 			goto out_mutex;
@@ -5681,12 +5696,12 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	ptl = huge_pte_lock(h, mm, ptep);
 
-	/* Check for a racing update before calling hugetlb_cow */
+	/* Check for a racing update before calling hugetlb_wp() */
 	if (unlikely(!pte_same(entry, huge_ptep_get(ptep))))
 		goto out_ptl;
 
 	/*
-	 * hugetlb_cow() requires page locks of pte_page(entry) and
+	 * hugetlb_wp() requires page locks of pte_page(entry) and
 	 * pagecache_page, so here we need take the former one
 	 * when page != pagecache_page or !pagecache_page.
 	 */
@@ -5699,13 +5714,14 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	get_page(page);
 
-	if (flags & FAULT_FLAG_WRITE) {
+	if (flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
 		if (!huge_pte_write(entry)) {
-			ret = hugetlb_cow(mm, vma, address, ptep,
-					  pagecache_page, ptl);
+			ret = hugetlb_wp(mm, vma, address, ptep, flags,
+					 pagecache_page, ptl);
 			goto out_put_page;
+		} else if (likely(flags & FAULT_FLAG_WRITE)) {
+			entry = huge_pte_mkdirty(entry);
 		}
-		entry = huge_pte_mkdirty(entry);
 	}
 	entry = pte_mkyoung(entry);
 	if (huge_ptep_set_access_flags(vma, haddr, ptep, entry,
diff --git a/mm/memory.c b/mm/memory.c
index 11577ee015be..33d046c34819 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2739,8 +2739,8 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 	return same;
 }
 
-static inline bool cow_user_page(struct page *dst, struct page *src,
-				 struct vm_fault *vmf)
+static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
+				       struct vm_fault *vmf)
 {
 	bool ret;
 	void *kaddr;
@@ -2968,7 +2968,8 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 }
 
 /*
- * Handle the case of a page which we actually need to copy to a new page.
+ * Handle the case of a page which we actually need to copy to a new page,
+ * either due to COW or unsharing.
  *
  * Called with mmap_lock locked and the old page referenced, but
  * without the ptl held.
@@ -2985,6 +2986,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
  */
 static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 {
+	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	struct vm_area_struct *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	struct page *old_page = vmf->page;
@@ -3007,7 +3009,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		if (!new_page)
 			goto oom;
 
-		if (!cow_user_page(new_page, old_page, vmf)) {
+		if (!__wp_page_copy_user(new_page, old_page, vmf)) {
 			/*
 			 * COW failed, if the fault was solved by other,
 			 * it's fine. If not, userspace would re-fault on
@@ -3049,7 +3051,14 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 		entry = mk_pte(new_page, vma->vm_page_prot);
 		entry = pte_sw_mkyoung(entry);
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		if (unlikely(unshare)) {
+			if (pte_soft_dirty(vmf->orig_pte))
+				entry = pte_mksoft_dirty(entry);
+			if (pte_uffd_wp(vmf->orig_pte))
+				entry = pte_mkuffd_wp(entry);
+		} else {
+			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		}
 
 		/*
 		 * Clear the pte entry and flush it first, before updating the
@@ -3066,6 +3075,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		 * mmu page tables (such as kvm shadow page tables), we want the
 		 * new page to be mapped directly into the secondary page table.
 		 */
+		BUG_ON(unshare && pte_write(entry));
 		set_pte_at_notify(mm, vmf->address, vmf->pte, entry);
 		update_mmu_cache(vma, vmf->address, vmf->pte);
 		if (old_page) {
@@ -3125,7 +3135,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 			free_swap_cache(old_page);
 		put_page(old_page);
 	}
-	return page_copied ? VM_FAULT_WRITE : 0;
+	return page_copied && !unshare ? VM_FAULT_WRITE : 0;
 oom_free_new:
 	put_page(new_page);
 oom:
@@ -3225,18 +3235,22 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf)
 }
 
 /*
- * This routine handles present pages, when users try to write
- * to a shared page. It is done by copying the page to a new address
- * and decrementing the shared-page counter for the old page.
+ * This routine handles present pages, when
+ * * users try to write to a shared page (FAULT_FLAG_WRITE)
+ * * GUP wants to take a R/O pin on a possibly shared anonymous page
+ *   (FAULT_FLAG_UNSHARE)
+ *
+ * It is done by copying the page to a new address and decrementing the
+ * shared-page counter for the old page.
  *
  * Note that this routine assumes that the protection checks have been
  * done by the caller (the low-level page fault routine in most cases).
- * Thus we can safely just mark it writable once we've done any necessary
- * COW.
+ * Thus, with FAULT_FLAG_WRITE, we can safely just mark it writable once we've
+ * done any necessary COW.
  *
- * We also mark the page dirty at this point even though the page will
- * change only once the write actually happens. This avoids a few races,
- * and potentially makes it more efficient.
+ * In case of FAULT_FLAG_WRITE, we also mark the page dirty at this point even
+ * though the page will change only once the write actually happens. This
+ * avoids a few races, and potentially makes it more efficient.
  *
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), with pte both mapped and locked.
@@ -3245,23 +3259,35 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf)
 static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	__releases(vmf->ptl)
 {
+	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	struct vm_area_struct *vma = vmf->vma;
 
-	if (userfaultfd_pte_wp(vma, *vmf->pte)) {
-		pte_unmap_unlock(vmf->pte, vmf->ptl);
-		return handle_userfault(vmf, VM_UFFD_WP);
-	}
+	VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE));
+	VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE));
 
-	/*
-	 * Userfaultfd write-protect can defer flushes. Ensure the TLB
-	 * is flushed in this case before copying.
-	 */
-	if (unlikely(userfaultfd_wp(vmf->vma) &&
-		     mm_tlb_flush_pending(vmf->vma->vm_mm)))
-		flush_tlb_page(vmf->vma, vmf->address);
+	if (likely(!unshare)) {
+		if (userfaultfd_pte_wp(vma, *vmf->pte)) {
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			return handle_userfault(vmf, VM_UFFD_WP);
+		}
+
+		/*
+		 * Userfaultfd write-protect can defer flushes. Ensure the TLB
+		 * is flushed in this case before copying.
+		 */
+		if (unlikely(userfaultfd_wp(vmf->vma) &&
+			     mm_tlb_flush_pending(vmf->vma->vm_mm)))
+			flush_tlb_page(vmf->vma, vmf->address);
+	}
 
 	vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
 	if (!vmf->page) {
+		/* No anonymous page -> nothing to do. */
+		if (unlikely(unshare)) {
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			return 0;
+		}
+
 		/*
 		 * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a
 		 * VM_PFNMAP VMA.
@@ -3324,8 +3350,16 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		page_move_anon_rmap(page, vma);
 		unlock_page(page);
 reuse:
+		if (unlikely(unshare)) {
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			return 0;
+		}
 		wp_page_reuse(vmf);
 		return VM_FAULT_WRITE;
+	} else if (unshare) {
+		/* No anonymous page -> nothing to do. */
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return 0;
 	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
 					(VM_WRITE|VM_SHARED))) {
 		return wp_page_shared(vmf);
@@ -4506,8 +4540,11 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
 /* `inline' is required to avoid gcc 4.1.2 build error */
 static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
 {
+	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
+
 	if (vma_is_anonymous(vmf->vma)) {
-		if (userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd))
+		if (likely(!unshare) &&
+		    userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd))
 			return handle_userfault(vmf, VM_UFFD_WP);
 		return do_huge_pmd_wp_page(vmf);
 	}
@@ -4642,7 +4679,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		goto unlock;
 	}
-	if (vmf->flags & FAULT_FLAG_WRITE) {
+	if (vmf->flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
 		if (!pte_write(entry))
 			return do_wp_page(vmf);
 		entry = pte_mkdirty(entry);
@@ -4685,7 +4722,6 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		.pgoff = linear_page_index(vma, address),
 		.gfp_mask = __get_fault_gfp_mask(vma),
 	};
-	unsigned int dirty = flags & FAULT_FLAG_WRITE;
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4712,7 +4748,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 
 			/* NUMA case for anonymous PUDs would go here */
 
-			if (dirty && !pud_write(orig_pud)) {
+			if ((flags & FAULT_FLAG_WRITE) && !pud_write(orig_pud)) {
 				ret = wp_huge_pud(&vmf, orig_pud);
 				if (!(ret & VM_FAULT_FALLBACK))
 					return ret;
@@ -4750,7 +4786,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 			if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
 				return do_huge_pmd_numa_page(&vmf);
 
-			if (dirty && !pmd_write(vmf.orig_pmd)) {
+			if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
+			    !pmd_write(vmf.orig_pmd)) {
 				ret = wp_huge_pmd(&vmf);
 				if (!(ret & VM_FAULT_FALLBACK))
 					return ret;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (10 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 11/13] mm: support GUP-triggered unsharing of anonymous pages David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-03-02 16:55   ` Jason Gunthorpe
  2022-02-24 12:26 ` [PATCH RFC 13/13] mm/gup: sanity-check with CONFIG_DEBUG_VM that anonymous pages are exclusive when (un)pinning David Hildenbrand
  2022-03-01  8:24 ` [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
  13 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

Whenever GUP currently ends up taking a R/O pin on an anonymous page that
might be shared -- mapped R/O and !PageAnonExclusive() -- any write fault
on the page table entry will end up replacing the mapped anonymous page
due to COW, resulting in the GUP pin no longer being consistent with the
page actually mapped into the page table.

The possible ways to deal with this situation are:
 (1) Ignore and pin -- what we do right now.
 (2) Fail to pin -- which would be rather surprising to callers and
     could break user space.
 (3) Trigger unsharing and pin the now exclusive page -- reliable R/O
     pins.

Let's implement 3) because it provides the clearest semantics and
allows for checking in unpin_user_pages() and friends for possible BUGs:
when trying to unpin a page that's no longer exclusive, clearly
something went very wrong and might result in memory corruptions that
might be hard to debug. So we better have a nice way to spot such
issues.

This change implies that whenever user space *wrote* to a private
mapping (IOW, we have an anonymous page mapped), GUP pins will
always remain consistent: reliable R/O GUP pins of anonymous pages.
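
For illustration, a reliable R/O pin from kernel code could then look
like the following minimal sketch (error handling reduced; "addr" is a
page-aligned user space address; pin_user_pages() sets FOLL_PIN
internally):

	struct page *page;
	long ret;

	/* R/O pin: unsharing is triggered transparently where required */
	ret = pin_user_pages(addr, 1, FOLL_LONGTERM, &page, NULL);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;
	/* ... read from the page, e.g., via DMA ... */
	unpin_user_page(page);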

As a side note, this commit fixes the COW security issue for hugetlb with
FOLL_PIN as documented in:
  https://lore.kernel.org/r/3ae33b08-d9ef-f846-56fb-645e3b9b4c66@redhat.com
The vmsplice reproducer still applies, because vmsplice uses FOLL_GET
instead of FOLL_PIN.

Note that follow_huge_pmd() doesn't apply because we cannot end up in
there with FOLL_PIN.

This commit is heavily based on prototype patches by Andrea.

Co-developed-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 39 +++++++++++++++++++++++++++++++++++++++
 mm/gup.c           | 32 +++++++++++++++++++++++++++++---
 mm/huge_memory.c   |  3 +++
 mm/hugetlb.c       | 25 ++++++++++++++++++++++---
 4 files changed, 93 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ffeb7f5fa32b..48ce9b7f903d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3003,6 +3003,45 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
 	return 0;
 }
 
+/*
+ * Indicates for which pages that are write-protected in the page table,
+ * whether GUP has to trigger unsharing via FAULT_FLAG_UNSHARE such that the
+ * GUP pin will remain consistent with the pages mapped into the page tables
+ * of the MM.
+ *
+ * Temporary unmapping of PageAnonExclusive() pages or clearing of
+ * PageAnonExclusive() has to protect against concurrent GUP:
+ * * Ordinary GUP: Using the PT lock
+ * * GUP-fast and fork(): mm->write_protect_seq
+ * * GUP-fast and KSM or temporary unmapping (swap, migration):
+ *   clear/invalidate+flush of the page table entry
+ *
+ * Must be called with the (sub)page that's actually referenced via the
+ * page table entry, which might not necessarily be the head page for a
+ * PTE-mapped THP.
+ */
+static inline bool gup_must_unshare(unsigned int flags, struct page *page)
+{
+	/*
+	 * FOLL_WRITE is implicitly handled correctly as the page table entry
+	 * has to be writable -- and if it references (part of) an anonymous
+	 * folio, that part is required to be marked exclusive.
+	 */
+	if ((flags & (FOLL_WRITE | FOLL_PIN)) != FOLL_PIN)
+		return false;
+	/*
+	 * Note: PageAnon(page) is stable until the page is actually getting
+	 * freed.
+	 */
+	if (!PageAnon(page))
+		return false;
+	/*
+	 * Note that PageKsm() pages cannot be exclusive, and consequently,
+	 * cannot get pinned.
+	 */
+	return !PageAnonExclusive(page);
+}
+
 typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
diff --git a/mm/gup.c b/mm/gup.c
index 7127e789d210..2cb7cbb1fc1f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -568,6 +568,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		}
 	}
 
+	if (!pte_write(pte) && gup_must_unshare(flags, page)) {
+		page = ERR_PTR(-EMLINK);
+		goto out;
+	}
 	/* try_grab_page() does nothing unless FOLL_GET or FOLL_PIN is set. */
 	if (unlikely(!try_grab_page(page, flags))) {
 		page = ERR_PTR(-ENOMEM);
@@ -820,6 +824,11 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
  * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches
  * the device's dev_pagemap metadata to avoid repeating expensive lookups.
  *
+ * When getting an anonymous page and the caller has to trigger unsharing
+ * of a shared anonymous page first, -EMLINK is returned. The caller should
+ * trigger a fault with FAULT_FLAG_UNSHARE set. Note that unsharing is only
+ * relevant with FOLL_PIN and !FOLL_WRITE.
+ *
  * On output, the @ctx->page_mask is set according to the size of the page.
  *
  * Return: the mapped (struct page *), %NULL if no mapping exists, or
@@ -943,7 +952,8 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
  * is, *@locked will be set to 0 and -EBUSY returned.
  */
 static int faultin_page(struct vm_area_struct *vma,
-		unsigned long address, unsigned int *flags, int *locked)
+		unsigned long address, unsigned int *flags, bool unshare,
+		int *locked)
 {
 	unsigned int fault_flags = 0;
 	vm_fault_t ret;
@@ -968,6 +978,11 @@ static int faultin_page(struct vm_area_struct *vma,
 		 */
 		fault_flags |= FAULT_FLAG_TRIED;
 	}
+	if (unshare) {
+		fault_flags |= FAULT_FLAG_UNSHARE;
+		/* FAULT_FLAG_WRITE and FAULT_FLAG_UNSHARE are incompatible */
+		VM_BUG_ON(fault_flags & FAULT_FLAG_WRITE);
+	}
 
 	ret = handle_mm_fault(vma, address, fault_flags, NULL);
 	if (ret & VM_FAULT_ERROR) {
@@ -1189,8 +1204,9 @@ static long __get_user_pages(struct mm_struct *mm,
 		cond_resched();
 
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
-		if (!page) {
-			ret = faultin_page(vma, start, &foll_flags, locked);
+		if (!page || PTR_ERR(page) == -EMLINK) {
+			ret = faultin_page(vma, start, &foll_flags,
+					   PTR_ERR(page) == -EMLINK, locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -2346,6 +2362,11 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 			goto pte_unmap;
 		}
 
+		if (!pte_write(pte) && gup_must_unshare(flags, page)) {
+			put_compound_head(head, 1, flags);
+			goto pte_unmap;
+		}
+
 		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 
 		/*
@@ -2589,6 +2610,11 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		return 0;
 	}
 
+	if (!pmd_write(orig) && gup_must_unshare(flags, head)) {
+		put_compound_head(head, refs, flags);
+		return 0;
+	}
+
 	*nr += refs;
 	SetPageReferenced(head);
 	return 1;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2cb8fa31377a..334cd9381635 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1396,6 +1396,9 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 	page = pmd_page(*pmd);
 	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
 
+	if (!pmd_write(*pmd) && gup_must_unshare(flags, page))
+		return ERR_PTR(-EMLINK);
+
 	if (!try_grab_page(page, flags))
 		return ERR_PTR(-ENOMEM);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ebcde7e9f704..6c2a7dd8a48d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5957,6 +5957,23 @@ static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
 	}
 }
 
+static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte,
+					       bool *unshare)
+{
+	pte_t pteval = huge_ptep_get(pte);
+
+	*unshare = false;
+	if (is_swap_pte(pteval))
+		return true;
+	if (huge_pte_write(pteval))
+		return false;
+	if (flags & FOLL_WRITE)
+		return true;
+	if (gup_must_unshare(flags, pte_page(pteval)))
+		*unshare = true;
+	return *unshare;
+}
+
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
@@ -5971,6 +5988,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	while (vaddr < vma->vm_end && remainder) {
 		pte_t *pte;
 		spinlock_t *ptl = NULL;
+		bool unshare;
 		int absent;
 		struct page *page;
 
@@ -6021,9 +6039,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * both cases, and because we can't follow correct pages
 		 * directly from any kind of swap entries.
 		 */
-		if (absent || is_swap_pte(huge_ptep_get(pte)) ||
-		    ((flags & FOLL_WRITE) &&
-		      !huge_pte_write(huge_ptep_get(pte)))) {
+		if (absent ||
+		    __follow_hugetlb_must_fault(flags, pte, &unshare)) {
 			vm_fault_t ret;
 			unsigned int fault_flags = 0;
 
@@ -6031,6 +6048,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				spin_unlock(ptl);
 			if (flags & FOLL_WRITE)
 				fault_flags |= FAULT_FLAG_WRITE;
+			else if (unshare)
+				fault_flags |= FAULT_FLAG_UNSHARE;
 			if (locked)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
 					FAULT_FLAG_KILLABLE;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH RFC 13/13] mm/gup: sanity-check with CONFIG_DEBUG_VM that anonymous pages are exclusive when (un)pinning
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (11 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page David Hildenbrand
@ 2022-02-24 12:26 ` David Hildenbrand
  2022-03-01  8:24 ` [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 12:26 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm,
	David Hildenbrand

Let's verify when (un)pinning anonymous pages that we always deal with
exclusive anonymous pages, which guarantees that we'll have a reliable
PIN, meaning that we cannot end up with the GUP pin being inconsistent
with the pages mapped into the page tables due to a COW triggered
by a write fault.

When pinning pages, after conditionally triggering GUP unsharing of
possibly shared anonymous pages, we should always only see exclusive
anonymous pages. Note that anonymous pages that are mapped writable
must be marked exclusive, otherwise we'd have a BUG.

When pinning during ordinary GUP, simply add a check after our
conditional GUP-triggered unsharing checks. As we know exactly how the
page is mapped, we know exactly in which page we have to check for
PageAnonExclusive().

When pinning via GUP-fast we have to be careful, because we can race with
fork(): we verify that we didn't end up pinning a possibly shared
anonymous page only after the seqcount check confirms that we didn't
race with a concurrent fork().
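
Condensed, that GUP-fast flow looks roughly like the sketch below;
gup_fast_walk() is a made-up stand-in for the actual lockless page
table walk, while the unpin/sanity-check calls are the ones added by
this patch:

	seq = raw_read_seqcount(&current->mm->write_protect_seq);
	nr_pinned = gup_fast_walk(start, nr_pages, gup_flags, pages);
	if (gup_flags & FOLL_PIN) {
		if (read_seqcount_retry(&current->mm->write_protect_seq, seq)) {
			/* raced with fork(): pages might now be shared */
			unpin_user_pages_lockless(pages, nr_pinned);
			return 0;
		}
		/* no race: pinned anonymous pages must be exclusive */
		sanity_check_pinned_pages(pages, nr_pinned);
	}
	return nr_pinned;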

Similarly, when unpinning, verify that the pages are still marked as
exclusive: otherwise something turned the pages possibly shared, which
can result in random memory corruptions, which we really want to catch.

With only the pinned pages at hand and not the actual page table entries
we have to be a bit careful: hugetlb pages are always mapped via a
single logical page table entry referencing the head page and
PG_anon_exclusive of the head page applies. Anon THP are a bit more
complicated, because we might have obtained the page reference either via
a PMD or a PTE -- depending on the mapping type, either PageAnonExclusive
of the head page (PMD-mapped THP) or of the tail page (PTE-mapped THP)
applies: as we don't know which, and to make our life easier, we check
that either is set.

Take care to not verify in case we're unpinning during GUP-fast because
we detected concurrent fork(): we might stumble over an anonymous page
that is now shared.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/gup.c         | 61 +++++++++++++++++++++++++++++++++++++++++++++++-
 mm/huge_memory.c |  3 +++
 mm/hugetlb.c     |  3 +++
 3 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 2cb7cbb1fc1f..df1ae29a1f5f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -45,6 +45,41 @@ static void hpage_pincount_sub(struct page *page, int refs)
 	atomic_sub(refs, compound_pincount_ptr(page));
 }
 
+static inline void sanity_check_pinned_pages(struct page **pages,
+					     unsigned long npages)
+{
+#ifdef CONFIG_DEBUG_VM
+	/*
+	 * We expect to only succeed in pinning anonymous pages if they are
+	 * exclusive. Once pinned, we can no longer mark them shared and the
+	 * PageAnonExclusive() will stick around until the page is freed
+	 * after it was unmapped.
+	 *
+	 * We'd like to verify that our pinned anonymous pages are mapped
+	 * exclusively. The issue with anon THP is that we don't know how
+	 * they are/were mapped when pinning them. However, for anon
+	 * THP we can assume that either the given page (PTE-mapped THP) or
+	 * the head page (PMD-mapped THP) should be PageAnonExclusive(). If
+	 * neither is the case there is certainly something wrong.
+	 */
+	for (; npages; npages--, pages++) {
+		struct page *page = *pages;
+		struct page *head = compound_head(page);
+
+		if (!PageAnon(page))
+			continue;
+		if (!PageCompound(page))
+			VM_BUG_ON_PAGE(!PageAnonExclusive(page), page);
+		else if (PageHuge(page))
+			VM_BUG_ON_PAGE(!PageAnonExclusive(head), page);
+		else
+			/* Either PTE-mapped or PMD-mapped, we don't know. */
+			VM_BUG_ON_PAGE(!PageAnonExclusive(head) &&
+				       !PageAnonExclusive(page), page);
+	}
+#endif /* CONFIG_DEBUG_VM */
+}
+
 /* Equivalent to calling put_page() @refs times. */
 static void put_page_refs(struct page *page, int refs)
 {
@@ -250,6 +285,7 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags)
  */
 void unpin_user_page(struct page *page)
 {
+	sanity_check_pinned_pages(&page, 1);
 	put_compound_head(compound_head(page), 1, FOLL_PIN);
 }
 EXPORT_SYMBOL(unpin_user_page);
@@ -340,6 +376,7 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 		return;
 	}
 
+	sanity_check_pinned_pages(pages, npages);
 	for_each_compound_head(index, pages, npages, head, ntails) {
 		/*
 		 * Checking PageDirty at this point may race with
@@ -404,6 +441,21 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 }
 EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
 
+static void unpin_user_pages_lockless(struct page **pages, unsigned long npages)
+{
+	unsigned long index;
+	struct page *head;
+	unsigned int ntails;
+
+	/*
+	 * Don't perform any sanity checks because we might have raced with
+	 * fork() and some anonymous pages might now actually be shared --
+	 * which is why we're unpinning after all.
+	 */
+	for_each_compound_head(index, pages, npages, head, ntails)
+		put_compound_head(head, ntails, FOLL_PIN);
+}
+
 /**
  * unpin_user_pages() - release an array of gup-pinned pages.
  * @pages:  array of pages to be marked dirty and released.
@@ -426,6 +478,7 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 	 */
 	if (WARN_ON(IS_ERR_VALUE(npages)))
 		return;
+	sanity_check_pinned_pages(pages, npages);
 
 	for_each_compound_head(index, pages, npages, head, ntails)
 		put_compound_head(head, ntails, FOLL_PIN);
@@ -572,6 +625,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		page = ERR_PTR(-EMLINK);
 		goto out;
 	}
+
+	VM_BUG_ON((flags & FOLL_PIN) && PageAnon(page) &&
+		  !PageAnonExclusive(page));
+
 	/* try_grab_page() does nothing unless FOLL_GET or FOLL_PIN is set. */
 	if (unlikely(!try_grab_page(page, flags))) {
 		page = ERR_PTR(-ENOMEM);
@@ -2885,8 +2942,10 @@ static unsigned long lockless_pages_from_mm(unsigned long start,
 	 */
 	if (gup_flags & FOLL_PIN) {
 		if (read_seqcount_retry(&current->mm->write_protect_seq, seq)) {
-			unpin_user_pages(pages, nr_pinned);
+			unpin_user_pages_lockless(pages, nr_pinned);
 			return 0;
+		} else {
+			sanity_check_pinned_pages(pages, nr_pinned);
 		}
 	}
 	return nr_pinned;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 334cd9381635..72ed42d0ef3c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1399,6 +1399,9 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 	if (!pmd_write(*pmd) && gup_must_unshare(flags, page))
 		return ERR_PTR(-EMLINK);
 
+	VM_BUG_ON((flags & FOLL_PIN) && PageAnon(page) &&
+		  !PageAnonExclusive(page));
+
 	if (!try_grab_page(page, flags))
 		return ERR_PTR(-ENOMEM);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c2a7dd8a48d..e1f7fa2b0292 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6091,6 +6091,9 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
 		page = pte_page(huge_ptep_get(pte));
 
+		VM_BUG_ON((flags & FOLL_PIN) && PageAnon(page) &&
+			  !PageAnonExclusive(page));
+
 		/*
 		 * If subpage information not requested, update counters
 		 * and skip the same_page loop below.
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed
  2022-02-24 12:26 ` [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed David Hildenbrand
@ 2022-02-24 16:26   ` Khalid Aziz
  0 siblings, 0 replies; 30+ messages in thread
From: Khalid Aziz @ 2022-02-24 16:26 UTC (permalink / raw)
  To: David Hildenbrand, linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm

On 2/24/22 05:26, David Hildenbrand wrote:
> In case arch_unmap_one() fails, we already did a swap_duplicate(). Let's
> undo that properly via swap_free().
> 
> Fixes: ca827d55ebaa ("mm, swap: Add infrastructure for saving page metadata on swap")
> Cc: Khalid Aziz <khalid.aziz@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   mm/rmap.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 6a1e8c7f6213..f825aeef61ca 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1625,6 +1625,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>   				break;
>   			}
>   			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
> +				swap_free(entry);
>   				set_pte_at(mm, address, pvmw.pte, pteval);
>   				ret = false;
>   				page_vma_mapped_walk_done(&pvmw);

That looks reasonable.

Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>

--
Khalid

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap()
  2022-02-24 12:26 ` [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap() David Hildenbrand
@ 2022-02-24 17:29   ` Linus Torvalds
  2022-02-24 17:41     ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2022-02-24 17:29 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux Kernel Mailing List, Andrew Morton, Hugh Dickins,
	David Rientjes, Shakeel Butt, John Hubbard, Jason Gunthorpe,
	Mike Kravetz, Mike Rapoport, Yang Shi, Kirill A . Shutemov,
	Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
	Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
	Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
	Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay, Linux-MM

On Thu, Feb 24, 2022 at 4:29 AM David Hildenbrand <david@redhat.com> wrote:
>
> ... and instead convert page_add_anon_rmap() to accept flags.

Can you fix the comment above the RMAP_xyz definitions? That one still says

  /* bitflags for do_page_add_anon_rmap() */

that now no longer exists.

Also, while this kind of code isn't unusual, I think it's still confusing:

> +               page_add_anon_rmap(page, vma, addr, 0);

because when reading that, at least I go "what does 0 mean? Is it a
page offset, or what?"

It might be a good idea to simply add a

  #define RMAP_PAGE 0x00

or something like that, just to have the callers all make it obvious
that we're talking about the RMAP_xyz bits - even if some of them may
be default.

(Then using an enum of a special type is something we do if we want to
add extra clarity or sparse testing, I don't think there are enough
users for that to make sense)

               Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 06/13] mm/rmap: pass rmap flags to hugepage_add_anon_rmap()
  2022-02-24 12:26 ` [PATCH RFC 06/13] mm/rmap: pass rmap flags to hugepage_add_anon_rmap() David Hildenbrand
@ 2022-02-24 17:37   ` Linus Torvalds
  0 siblings, 0 replies; 30+ messages in thread
From: Linus Torvalds @ 2022-02-24 17:37 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux Kernel Mailing List, Andrew Morton, Hugh Dickins,
	David Rientjes, Shakeel Butt, John Hubbard, Jason Gunthorpe,
	Mike Kravetz, Mike Rapoport, Yang Shi, Kirill A . Shutemov,
	Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
	Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
	Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
	Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay, Linux-MM

On Thu, Feb 24, 2022 at 4:30 AM David Hildenbrand <david@redhat.com> wrote:
>
> -                               hugepage_add_anon_rmap(new, vma, pvmw.address);
> +                               hugepage_add_anon_rmap(new, vma, pvmw.address, 0);

This now has the same "what does 0 mean?" issue.

(And grepping for RMAP_, I notice that we have a namespace clash, with
xfs and kvm also using "RMAP_xyz" for their own things. Oh well, at
least they don't look like they'd ever mix).

            Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap()
  2022-02-24 17:29   ` Linus Torvalds
@ 2022-02-24 17:41     ` David Hildenbrand
  2022-02-24 17:48       ` Linus Torvalds
  0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2022-02-24 17:41 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux Kernel Mailing List, Andrew Morton, Hugh Dickins,
	David Rientjes, Shakeel Butt, John Hubbard, Jason Gunthorpe,
	Mike Kravetz, Mike Rapoport, Yang Shi, Kirill A . Shutemov,
	Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
	Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
	Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
	Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay, Linux-MM

On 24.02.22 18:29, Linus Torvalds wrote:
> On Thu, Feb 24, 2022 at 4:29 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> ... and instead convert page_add_anon_rmap() to accept flags.
> 
> Can you fix the comment above the RMAP_xyz definitions? That one still says
> 
>   /* bitflags for do_page_add_anon_rmap() */
> 
> that tnow no longer exists.

Oh, yes sure.

> 
> Also, while this kind of code isn't unusual, I think it's still confusing:
> 
>> +               page_add_anon_rmap(page, vma, addr, 0);
> 
> because when reading that, at least I go "what does 0 mean? Is it a
> page offset, or what?"

Yes, I agree.

> 
> It might be a good idea to simply add a
> 
>   #define RMAP_PAGE 0x00
> 
> or something like that, just to have the callers all make it obvious
> that we're talking about that RMAP_xyz bits - even if some of them may
> be default.
> 
> (Then using an enum of a special type is something we do if we want to
> add extra clarity or sparse testing, I don't think there are enough
> users for that to make sense)
> 

Actually, I thought about doing it similarly to what I did in
page_alloc.c with fpi_t:

typedef int __bitwise fpi_t;

#define FPI_NONE		((__force fpi_t)0)


I can do something similar here.
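
i.e., something along these lines (just a sketch, the final names are
not settled yet):

	/* bitflags for page_add_anon_rmap() and friends */
	typedef int __bitwise rmap_t;

	/* no special request */
	#define RMAP_NONE		((__force rmap_t)0)
	/* the (sub)page is exclusive to a single process */
	#define RMAP_EXCLUSIVE		((__force rmap_t)BIT(0))

	void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
				unsigned long address, rmap_t flags);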

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap()
  2022-02-24 17:41     ` David Hildenbrand
@ 2022-02-24 17:48       ` Linus Torvalds
  2022-02-25  9:01         ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2022-02-24 17:48 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux Kernel Mailing List, Andrew Morton, Hugh Dickins,
	David Rientjes, Shakeel Butt, John Hubbard, Jason Gunthorpe,
	Mike Kravetz, Mike Rapoport, Yang Shi, Kirill A . Shutemov,
	Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
	Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
	Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
	Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay, Linux-MM

On Thu, Feb 24, 2022 at 9:41 AM David Hildenbrand <david@redhat.com> wrote:
>
> Actually, I thought about doing it similarly to what I did in
> page_alloc.c with fpi_t:
>
> typedef int __bitwise fpi_t;
>
> #define FPI_NONE                ((__force fpi_t)0)
>
> I can do something similar here.

Yeah, that looks good. And then the relevant function declarations and
definitions also have that explicit type there instead of 'int', which
adds a bit more documentation for people grepping for use.

Thanks,
               Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte()
  2022-02-24 12:26 ` [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte() David Hildenbrand
@ 2022-02-25  5:15   ` Hillf Danton
  2022-02-25  8:01     ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: Hillf Danton @ 2022-02-25  5:15 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Linus Torvalds, John Hubbard, Oded Gabbay

On Thu, 24 Feb 2022 13:26:04 +0100 David Hildenbrand wrote:
> Let's move the pinning check into the caller, to simplify return code
> logic and prepare for further changes: relocating the
> page_needs_cow_for_dma() into rmap handling code.
> 
> While at it, remove the unused pte parameter and simplify the comments a
> bit.
> 
> No functional change intended.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/memory.c | 53 ++++++++++++++++-------------------------------------
>  1 file changed, 16 insertions(+), 37 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index c6177d897964..accb72a3343d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -865,19 +865,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  }
>  
>  /*
> - * Copy a present and normal page if necessary.
> + * Copy a present and normal page.
>   *
> - * NOTE! The usual case is that this doesn't need to do
> - * anything, and can just return a positive value. That
> - * will let the caller know that it can just increase
> - * the page refcount and re-use the pte the traditional
> - * way.
> - *
> - * But _if_ we need to copy it because it needs to be
> - * pinned in the parent (and the child should get its own
> - * copy rather than just a reference to the same page),
> - * we'll do that here and return zero to let the caller
> - * know we're done.
> + * NOTE! The usual case is that this isn't required;
> + * instead, the caller can just increase the page refcount
> + * and re-use the pte the traditional way.
>   *
>   * And if we need a pre-allocated page but don't yet have
>   * one, return a negative error to let the preallocation
> @@ -887,25 +879,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  static inline int
>  copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
> -		  struct page **prealloc, pte_t pte, struct page *page)
> +		  struct page **prealloc, struct page *page)
>  {
>  	struct page *new_page;
> -
> -	/*
> -	 * What we want to do is to check whether this page may
> -	 * have been pinned by the parent process.  If so,
> -	 * instead of wrprotect the pte on both sides, we copy
> -	 * the page immediately so that we'll always guarantee
> -	 * the pinned page won't be randomly replaced in the
> -	 * future.
> -	 *
> -	 * The page pinning checks are just "has this mm ever
> -	 * seen pinning", along with the (inexact) check of
> -	 * the page count. That might give false positives for
> -	 * for pinning, but it will work correctly.
> -	 */
> -	if (likely(!page_needs_cow_for_dma(src_vma, page)))
> -		return 1;
> +	pte_t pte;
>  
>  	new_page = *prealloc;
>  	if (!new_page)
> @@ -947,14 +924,16 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  	struct page *page;
>  
>  	page = vm_normal_page(src_vma, addr, pte);
> -	if (page) {
> -		int retval;
> -
> -		retval = copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
> -					   addr, rss, prealloc, pte, page);
> -		if (retval <= 0)
> -			return retval;
> -
> +	if (page && unlikely(page_needs_cow_for_dma(src_vma, page))) {
> +		/*
> +		 * If this page may have been pinned by the parent process,
> +		 * copy the page immediately for the child so that we'll always
> +		 * guarantee the pinned page won't be randomly replaced in the
> +		 * future.
> +		 */
> +		return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
> +					 addr, rss, prealloc, page);

Off-topic question: is it likely that the DMA transfer from the device,
in parallel to copying the page, ends up with different data between
parent and child?

Hillf

> +	} else if (page) {
>  		get_page(page);
>  		page_dup_rmap(page, false);
>  		rss[mm_counter(page)]++;
> -- 
> 2.35.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte()
  2022-02-25  5:15   ` Hillf Danton
@ 2022-02-25  8:01     ` David Hildenbrand
  0 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-25  8:01 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-kernel, linux-mm, Linus Torvalds, John Hubbard, Oded Gabbay

On 25.02.22 06:15, Hillf Danton wrote:
> On Thu, 24 Feb 2022 13:26:04 +0100 David Hildenbrand wrote:
>> Let's move the pinning check into the caller, to simplify return code
>> logic and prepare for further changes: relocating the
>> page_needs_cow_for_dma() into rmap handling code.
>>
>> While at it, remove the unused pte parameter and simplify the comments a
>> bit.
>>
>> No functional change intended.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  mm/memory.c | 53 ++++++++++++++++-------------------------------------
>>  1 file changed, 16 insertions(+), 37 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c6177d897964..accb72a3343d 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -865,19 +865,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>  }
>>  
>>  /*
>> - * Copy a present and normal page if necessary.
>> + * Copy a present and normal page.
>>   *
>> - * NOTE! The usual case is that this doesn't need to do
>> - * anything, and can just return a positive value. That
>> - * will let the caller know that it can just increase
>> - * the page refcount and re-use the pte the traditional
>> - * way.
>> - *
>> - * But _if_ we need to copy it because it needs to be
>> - * pinned in the parent (and the child should get its own
>> - * copy rather than just a reference to the same page),
>> - * we'll do that here and return zero to let the caller
>> - * know we're done.
>> + * NOTE! The usual case is that this isn't required;
>> + * instead, the caller can just increase the page refcount
>> + * and re-use the pte the traditional way.
>>   *
>>   * And if we need a pre-allocated page but don't yet have
>>   * one, return a negative error to let the preallocation
>> @@ -887,25 +879,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>  static inline int
>>  copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>  		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
>> -		  struct page **prealloc, pte_t pte, struct page *page)
>> +		  struct page **prealloc, struct page *page)
>>  {
>>  	struct page *new_page;
>> -
>> -	/*
>> -	 * What we want to do is to check whether this page may
>> -	 * have been pinned by the parent process.  If so,
>> -	 * instead of wrprotect the pte on both sides, we copy
>> -	 * the page immediately so that we'll always guarantee
>> -	 * the pinned page won't be randomly replaced in the
>> -	 * future.
>> -	 *
>> -	 * The page pinning checks are just "has this mm ever
>> -	 * seen pinning", along with the (inexact) check of
>> -	 * the page count. That might give false positives for
>> -	 * for pinning, but it will work correctly.
>> -	 */
>> -	if (likely(!page_needs_cow_for_dma(src_vma, page)))
>> -		return 1;
>> +	pte_t pte;
>>  
>>  	new_page = *prealloc;
>>  	if (!new_page)
>> @@ -947,14 +924,16 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>  	struct page *page;
>>  
>>  	page = vm_normal_page(src_vma, addr, pte);
>> -	if (page) {
>> -		int retval;
>> -
>> -		retval = copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>> -					   addr, rss, prealloc, pte, page);
>> -		if (retval <= 0)
>> -			return retval;
>> -
>> +	if (page && unlikely(page_needs_cow_for_dma(src_vma, page))) {
>> +		/*
>> +		 * If this page may have been pinned by the parent process,
>> +		 * copy the page immediately for the child so that we'll always
>> +		 * guarantee the pinned page won't be randomly replaced in the
>> +		 * future.
>> +		 */
>> +		return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>> +					 addr, rss, prealloc, page);
> 
> Off topic question, is it likely that the DMA tranfer from device in parallel
> to copying page ends up with different data between parent and kid?

If the parent has a GUP pin on the page before fork(), the parent will
keep that page mapped exclusively and the child will receive a copy.

It's pretty much undefined which content that copy will have if there is
concurrent DMA via that GUP pin: we'll snapshot that page at some point
in time.

It's fully under the parent process control when to start/stop I/O via a
GUP pin and when to call fork().

So yes, if there is fork() with concurrent DMA via a GUP pin modifying
the page, the page content isn't well defined: could be the content
before DMA, mid DMA or post DMA.
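
As a purely illustrative userspace timeline -- start_dma_read_into() is
a made-up placeholder for whatever driver API takes the GUP pin:

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	buf[0] = 1;			/* fault in an anonymous page */
	start_dma_read_into(buf);	/* device writes buf via a GUP pin */
	fork();				/* the child's copy snapshots the page
					 * at an undefined point of the DMA */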


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap()
  2022-02-24 17:48       ` Linus Torvalds
@ 2022-02-25  9:01         ` David Hildenbrand
  0 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-02-25  9:01 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux Kernel Mailing List, Andrew Morton, Hugh Dickins,
	David Rientjes, Shakeel Butt, John Hubbard, Jason Gunthorpe,
	Mike Kravetz, Mike Rapoport, Yang Shi, Kirill A . Shutemov,
	Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
	Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
	Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
	Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay, Linux-MM

On 24.02.22 18:48, Linus Torvalds wrote:
> On Thu, Feb 24, 2022 at 9:41 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> Actually, I thought about doing it similarly to what I did in
>> page_alloc.c with fpi_t:
>>
>> typedef int __bitwise fpi_t;
>>
>> #define FPI_NONE                ((__force fpi_t)0)
>>
>> I can do something similar here.
> 
> Yeah, that looks good. And then the relevant function declarations and
> definitions also have that explicit type there instead of 'int', which
> adds a bit more documentation for people grepping for use.
> 

While at it already, I'll also add a patch to drop the "bool compound" parameter
from page_add_new_anon_rmap()

(description only)

Author: David Hildenbrand <david@redhat.com>
Date:   Fri Feb 25 09:38:11 2022 +0100

    mm/rmap: drop "compound" parameter from page_add_new_anon_rmap()
    
    New anonymous pages are always mapped natively: only THP/khugepaged code
    maps a new compound anonymous page and passes "true". Otherwise, we're
    just dealing with simple, non-compound pages.
    
    Let's give the interface clearer semantics and document these.
    
    Signed-off-by: David Hildenbrand <david@redhat.com>


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages
  2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
                   ` (12 preceding siblings ...)
  2022-02-24 12:26 ` [PATCH RFC 13/13] mm/gup: sanity-check with CONFIG_DEBUG_VM that anonymous pages are exclusive when (un)pinning David Hildenbrand
@ 2022-03-01  8:24 ` David Hildenbrand
  13 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-03-01  8:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
	Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm, Khalid Aziz

On 24.02.22 13:26, David Hildenbrand wrote:
> This series is the result of the discussion on the previous approach [2].
> More information on the general COW issues can be found there. It is based
> on [1], which resides in -mm and -next:
> 	[PATCH v3 0/9] mm: COW fixes part 1: fix the COW security issue for
> 	THP and swap
> 
> I keep the latest state, including some hacky selftest on:
> 	https://github.com/davidhildenbrand/linux/tree/cow_fixes_part_2
> 
> This series fixes memory corruptions when a GUP pin (FOLL_PIN) was taken
> on an anonymous page and COW logic fails to detect exclusivity of the page
> to then replacing the anonymous page by a copy in the page table: The
> GUP pin lost synchronicity with the pages mapped into the page tables.
> 
> This issue, including other related COW issues, has been summarized in [3]
> under 3):
> "
>   3. Intra Process Memory Corruptions due to Wrong COW (FOLL_PIN)
> 
>   page_maybe_dma_pinned() is used to check if a page may be pinned for
>   DMA (using FOLL_PIN instead of FOLL_GET). While false positives are
>   tolerable, false negatives are problematic: pages that are pinned for
>   DMA must not be added to the swapcache. If it happens, the (now pinned)
>   page could be faulted back from the swapcache into page tables
>   read-only. Future write-access would detect the pinning and COW the
>   page, losing synchronicity. For the interested reader, this is nicely
>   documented in feb889fb40fa ("mm: don't put pinned pages into the swap
>   cache").
> 
>   Peter reports [8] that page_maybe_dma_pinned() as used is racy in some
>   cases and can result in a violation of the documented semantics:
>   giving false negatives because of the race.
> 
>   There are cases where we call it without properly taking a per-process
>   sequence lock, turning the usage of page_maybe_dma_pinned() racy. While
>   one case (clear_refs SOFTDIRTY tracking, see below) seems to be easy to
>   handle, there is especially one rmap case (shrink_page_list) that's hard
>   to fix: in the rmap world, we're not limited to a single process.
> 
>   The shrink_page_list() issue is really subtle. If we race with
>   someone pinning a page, we can trigger the same issue as in the FOLL_GET
>   case. See the detail section at the end of this mail on a discussion how
>   bad this can bite us with VFIO or other FOLL_PIN user.
> 
>   It's harder to reproduce, but I managed to modify the O_DIRECT
>   reproducer to use io_uring fixed buffers [15] instead, which ends up
>   using FOLL_PIN | FOLL_WRITE | FOLL_LONGTERM to pin buffer pages and can
>   similarly trigger a loss of synchronicity and consequently a memory
>   corruption.
> 
>   Again, the root issue is that a write-fault on a page that has
>   additional references results in a COW and thereby a loss of
>   synchronicity and consequently a memory corruption if two parties
>   believe they are referencing the same page.
> "
> 
> This series makes GUP pins (R/O and R/W) on anonymous pages fully reliable,
> especially also taking care of concurrent pinning via GUP-fast,
> for example, also fully fixing an issue reported regarding NUMA
> balancing [4] recently. While doing that, it further reduces "unnecessary
> COWs", especially when we don't fork()/KSM and don't swapout, and fixes the
> COW security for hugetlb for FOLL_PIN.
> 
> In summary, we track via a pageflag (PG_anon_exclusive) whether a mapped
> anonymous page is exclusive. Exclusive anonymous pages that are mapped
> R/O can directly be mapped R/W by the COW logic in the write fault handler.
> Exclusive anonymous pages that want to be shared (fork(), KSM) first have
> to mark a mapped anonymous page shared -- which will fail if there are
> GUP pins on the page. GUP is only allowed to take a pin on anonymous pages
> that is exclusive. The PT lock is the primary mechanism to synchronize
> modifications of PG_anon_exclusive. GUP-fast is synchronized either via the
> src_mm->write_protect_seq or via clear/invalidate+flush of the relevant
> page table entry.
> 
> Special care has to be taken about swap, migration, and THPs (whereby a
> PMD-mapping can be converted to a PTE mapping and we have to track
> information for subpages). Besides these, we let the rmap code handle most
> magic. For reliable R/O pins of anonymous pages, we need FAULT_FLAG_UNSHARE
> logic as part of our previous approach [2], however, it's now 100% mapcount
> free and I further simplified it a bit.
> 
>   #1 is a fix
>   #3-#7 are mostly rmap preparations for PG_anon_exclusive handling
>   #8 introduces PG_anon_exclusive
>   #9 uses PG_anon_exclusive and make R/W pins of anonymous pages
>    reliable
>   #10 is a preparation for reliable R/O pins
>   #11 and #12 are reused/modified GUP-triggered unsharing for R/O GUP pins
>    and make R/O pins of anonymous pages reliable
>   #13 adds sanity check when (un)pinning anonymous pages
> 
> I'm not proud about patch #8, suggestions welcome. Patch #9 contains
> excessive explanations and the main logic for R/W pins. #11 and #12
> resemble what we proposed in the previous approach [2]. I consider the
> general approach of #13 very nice and helpful, and I remember Linus even
> envisioning something like that for finding BUGs, although we might want to
> implement the sanity checks eventually differently.
> 
> It passes my growing set of tests for "wrong COW" and "missed COW",
> including the ones in [3] -- I'd really appreciate some experienced eyes
> to take a close look at corner cases. Only tested on x86_64, testing with
> CONT-mapped hugetlb pages on arm64 might be interesting.
> 
> Once we converted relevant users of FOLL_GET (e.g., O_DIRECT) to FOLL_PIN,
> the issue described in [3] under 2) will be fixed as well. Further, once
> that's in place we can streamline our COW logic for hugetlb to rely on
> page_count() as well and fix any possible COW security issues.

Hi,

I did extensive tests on aarch64 with CONT hugetlb pages yesterday and
didn't find any surprises, at least not in these changes. [1]

While thinking on how to get O_DIRECT fixed immediately, I realized the
following:
(1) This series makes FOLL_PIN on anonymous pages completely reliable,
    for any case of GUP+fork (GUP before fork, GUP during fork, GUP
    after fork in child/parent), which is pretty nice IMHO.
(2) For O_DIRECT and friends (FOLL_GET) we primarily care about fixing
    short-term FOLL_WRITE *without* fork(), which are the memory
    corruptions we're experiencing. Long-term FOLL_GET (meaning, even
    staying reliable after fork()) already has a bad smell to it.

Especially, (2) never worked reliably with fork() involved. I came to
the conclusion that this series pretty much fixes (2) already *except*
the swapout case. I might be wrong, but I think it should be possible to
handle that as well, meaning that: a FOLL_GET|FOLL_WRITE on an anonymous
page will be reliable as long as we don't fork().

I'm planning on sending a part3 to cover that, so we don't have to wait
for the FOLL_PIN conversion to get our O_DIRECT reproducers fixed. Stay
tuned.

If there isn't any more feedback, I'll be sending out a v1 soonish.


[1]
https://lkml.kernel.org/r/811c5c8e-b3a2-85d2-049c-717f17c3a03a@redhat.com


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-02-24 12:26 ` [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page David Hildenbrand
@ 2022-03-02 16:55   ` Jason Gunthorpe
  2022-03-02 20:38     ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: Jason Gunthorpe @ 2022-03-02 16:55 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, Andrew Morton, Hugh Dickins, Linus Torvalds,
	David Rientjes, Shakeel Butt, John Hubbard, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm

On Thu, Feb 24, 2022 at 01:26:13PM +0100, David Hildenbrand wrote:
> Whenever GUP currently ends up taking a R/O pin on an anonymous page that
> might be shared -- mapped R/O and !PageAnonExclusive() -- any write fault
> on the page table entry will end up replacing the mapped anonymous page
> due to COW, resulting in the GUP pin no longer being consistent with the
> page actually mapped into the page table.
> 
> The possible ways to deal with this situation are:
>  (1) Ignore and pin -- what we do right now.
>  (2) Fail to pin -- which would be rather surprising to callers and
>      could break user space.
>  (3) Trigger unsharing and pin the now exclusive page -- reliable R/O
>      pins.

How does this mesh with the common FOLL_FORCE|FOLL_WRITE|FOLL_PIN
pattern used for requesting read access? Can they be converted to
just FOLL_WRITE|FOLL_PIN after this?

Jason

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-03-02 16:55   ` Jason Gunthorpe
@ 2022-03-02 20:38     ` David Hildenbrand
  2022-03-02 20:59       ` Jason Gunthorpe
  2022-03-03  1:47       ` John Hubbard
  0 siblings, 2 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-03-02 20:38 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-kernel, Andrew Morton, Hugh Dickins, Linus Torvalds,
	David Rientjes, Shakeel Butt, John Hubbard, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm

On 02.03.22 17:55, Jason Gunthorpe wrote:
> On Thu, Feb 24, 2022 at 01:26:13PM +0100, David Hildenbrand wrote:
>> Whenever GUP currently ends up taking a R/O pin on an anonymous page that
>> might be shared -- mapped R/O and !PageAnonExclusive() -- any write fault
>> on the page table entry will end up replacing the mapped anonymous page
>> due to COW, resulting in the GUP pin no longer being consistent with the
>> page actually mapped into the page table.
>>
>> The possible ways to deal with this situation are:
>>  (1) Ignore and pin -- what we do right now.
>>  (2) Fail to pin -- which would be rather surprising to callers and
>>      could break user space.
>>  (3) Trigger unsharing and pin the now exclusive page -- reliable R/O
>>      pins.
> 

Hi Jason,

> How does this mesh with the common FOLL_FORCE|FOLL_WRITE|FOLL_PIN
> pattern used for requesting read access? Can they be converted to
> just FOLL_WRITE|FOLL_PIN after this?

Interesting question; I haven't thought about this in detail yet, so
let me give it a try:


IIRC, the sole purpose of FOLL_FORCE in the context of R/O pins is to
enforce the eventual COW -- meaning we COW (via FOLL_WRITE) even if we
don't have permission to write (via FOLL_FORCE), to make sure we most
certainly have an exclusive anonymous page in a MAP_PRIVATE mapping.

Dropping only the FOLL_FORCE would make the FOLL_WRITE request fail if
the mapping is currently !VM_WRITE (but is VM_MAYWRITE), so that
wouldn't work.

I recall that we don't allow pinning the zero page ("special pte",
!vm_normal_page()). So if you have an ordinary MAP_PRIVATE|MAP_ANON
mapping, you would now only need a "FOLL_READ" to get a reliable pin,
even without previously writing to every page.


It would be different with other MAP_PRIVATE file mappings, as I recall:

With FOLL_FORCE|FOLL_WRITE|FOLL_PIN we'd force placement of an anonymous
page, resulting in the R/O (long-term?) pin not observing subsequent
file changes. With a pure "FOLL_READ" we'd still observe file changes,
as we don't trigger a write fault.

BUT, once we actually write to the private mapping via the page table,
the GUP pin would go out of sync with the now-anonymous page mapped into
the page table. However, I'm having a hard time answering what's
actually expected?

It's really hard to tell what the user wants with MAP_PRIVATE file
mappings when GUP stumbles over a !anon page (no modifications so far):

(a) I want a R/O pin to observe file modifications.
(b) I want the R/O pin to *not* observe file modifications but observe
    my (eventual? if any) private modifications,

Of course, if we already wrote to that page and now have an anon page,
it's easy: we are already no longer following file changes.

Maybe FOLL_PIN as it is now would already do what we'd expect from a
R/O pin -- (a) -- maybe not. I'm wondering if FOLL_LONGTERM could give
us an indication of whether (a) or (b) applies.
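
To make the divergence concrete (register_for_dma() is a made-up
stand-in for whatever ends up taking the R/O FOLL_PIN):

   int fd = open("some_file", O_RDWR);
   char *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE, fd, 0);

   register_for_dma(map, 4096);    /* R/O pin of the pagecache page */
   map[0] = 1;     /* write fault -> COW -> fresh anon page in the PT */

   /* The pin still references the pagecache page: DMA keeps observing
    * file content -- (a) -- and not the private modification -- (b). */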

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-03-02 20:38     ` David Hildenbrand
@ 2022-03-02 20:59       ` Jason Gunthorpe
  2022-03-03  8:07         ` David Hildenbrand
  2022-03-03  1:47       ` John Hubbard
  1 sibling, 1 reply; 30+ messages in thread
From: Jason Gunthorpe @ 2022-03-02 20:59 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, Andrew Morton, Hugh Dickins, Linus Torvalds,
	David Rientjes, Shakeel Butt, John Hubbard, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm

On Wed, Mar 02, 2022 at 09:38:09PM +0100, David Hildenbrand wrote:

> (a) I want a R/O pin to observe file modifications.
> (b) I want the R/O pin to *not* observe file modifications but observe
>     my (eventual? if any) private modifications,

A scenario I know that motivated this is fairly straightforward:

   static char data[] = { 7 };

   ibv_reg_mr(data, READ_ONLY)
   data[0] = 1
   .. go touch data via DMA ..

We want to reliably observe the '1'.

What is happening under the covers is that 'data' is placed in the
.data segment (non-zero initializer) and becomes a file-backed
MAP_PRIVATE page. The write COWs that page.

It would work OK if it were in .bss instead.

I think the FOLL_FORCE is there because otherwise the trick doesn't
work on true RO pages and the API becomes broken.

Jason

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-03-02 20:38     ` David Hildenbrand
  2022-03-02 20:59       ` Jason Gunthorpe
@ 2022-03-03  1:47       ` John Hubbard
  2022-03-03  8:06         ` David Hildenbrand
  1 sibling, 1 reply; 30+ messages in thread
From: John Hubbard @ 2022-03-03  1:47 UTC (permalink / raw)
  To: David Hildenbrand, Jason Gunthorpe
  Cc: linux-kernel, Andrew Morton, Hugh Dickins, Linus Torvalds,
	David Rientjes, Shakeel Butt, Mike Kravetz, Mike Rapoport,
	Yang Shi, Kirill A . Shutemov, Matthew Wilcox, Vlastimil Babka,
	Jann Horn, Michal Hocko, Nadav Amit, Rik van Riel,
	Roman Gushchin, Andrea Arcangeli, Peter Xu, Donald Dutile,
	Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang,
	Pedro Gomes, Oded Gabbay, linux-mm

On 3/2/22 12:38, David Hildenbrand wrote:
...
> BUT, once we actually write to the private mapping via the page table,
> the GUP pin would go out of sync with the now-anonymous page mapped into
> the page table. However, I'm having a hard time answering what's
> actually expected?
> 
> It's really hard to tell what the user wants with MAP_PRIVATE file
> mappings when GUP stumbles over a !anon page (no modifications so far):
> 
> (a) I want a R/O pin to observe file modifications.
> (b) I want the R/O pin to *not* observe file modifications but observe
>      my (eventual? if any) private modifications,
> 

On this aspect, I think it is easier than trying to discern user
intentions. Because it is less a question of what the user wants, and
more a question of how mmap(2) is specified. And the man page clearly
indicates that the user has no right to expect to see file
modifications. Here's the excerpt:

"MAP_PRIVATE
	
Create  a private copy-on-write mapping.  Updates to the mapping are not
visible to other processes mapping the same file, and are not carried
through to the underlying file.  It is unspecified whether changes  made
to the file after the mmap() call are visible in the mapped region.
"

> Of course, if we already wrote to that page and now have an anon page,
> it's easy: we are already no longer following file changes.

Yes, and in fact, I've always thought that the way this was written
means that it should be treated as a snapshot of the file contents,
and no longer reliably connected in either direction to the page(s).


thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-03-03  1:47       ` John Hubbard
@ 2022-03-03  8:06         ` David Hildenbrand
  2022-03-09  7:37           ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2022-03-03  8:06 UTC (permalink / raw)
  To: John Hubbard, Jason Gunthorpe
  Cc: linux-kernel, Andrew Morton, Hugh Dickins, Linus Torvalds,
	David Rientjes, Shakeel Butt, Mike Kravetz, Mike Rapoport,
	Yang Shi, Kirill A . Shutemov, Matthew Wilcox, Vlastimil Babka,
	Jann Horn, Michal Hocko, Nadav Amit, Rik van Riel,
	Roman Gushchin, Andrea Arcangeli, Peter Xu, Donald Dutile,
	Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang,
	Pedro Gomes, Oded Gabbay, linux-mm

On 03.03.22 02:47, John Hubbard wrote:
> On 3/2/22 12:38, David Hildenbrand wrote:
> ...
>> BUT, once we actually write to the private mapping via the page table,
>> the GUP pin would go out of sync with the now-anonymous page mapped into
>> the page table. However, I'm having a hard time answering what's
>> actually expected?
>>
>> It's really hard to tell what the user wants with MAP_PRIVATE file
>> mappings when GUP stumbles over a !anon page (no modifications so far):
>>
>> (a) I want a R/O pin to observe file modifications.
>> (b) I want the R/O pin to *not* observe file modifications but observe
>>      my (eventual? if any) private modifications,
>>
> 
> On this aspect, I think it is easier than trying to discern user
> intentions. Because it is less a question of what the user wants, and
> more a question of how mmap(2) is specified. And the man page clearly
> indicates that the user has no right to expect to see file
> modifications. Here's the excerpt:
> 
> "MAP_PRIVATE
> 	
> Create  a private copy-on-write mapping.  Updates to the mapping are not
> visible to other processes mapping the same file, and are not carried
> through to the underlying file.  It is unspecified whether changes  made
> to the file after the mmap() call are visible in the mapped region.
> "
> 
>> Of course, if we already wrote to that page and now have an anon page,
>> it's easy: we are already no longer following file changes.
> 
> Yes, and in fact, I've always thought that the way this was written
> means that it should be treated as a snapshot of the file contents,
> and no longer reliably connected in either direction to the page(s).

Thanks John, that's extremely helpful. I forgot about these MAP_PRIVATE
mmap() details -- they help a lot to clarify which semantics to provide.

So what we could do is:

a) Extend FAULT_FLAG_UNSHARE to also unshare a !anon page in
   a MAP_PRIVATE mapping, replacing it with an (exclusive) anon page.
   R/O PTE permissions are maintained, just like unsharing in the
   context of this series.

b) Similarly trigger FAULT_FLAG_UNSHARE from GUP when trying to take a
   R/O pin (FOLL_PIN) on a R/O-mapped !anon page in a MAP_PRIVATE
   mapping.

c) Make R/O pins consistently use "FOLL_PIN" instead, getting rid of
   FOLL_FORCE|FOLL_WRITE.


Of course, we can't detect MAP_PRIVATE vs. MAP_SHARED in GUP-fast (no
VMA), so we'd always have to fall back from GUP-fast in case we intend
to FOLL_PIN a R/O-mapped !anon page. That would imply that essentially
any R/O pin (FOLL_PIN) would have to fall back to ordinary GUP. BUT, we
already require FOLL_FORCE|FOLL_WRITE right now, which is not any
different, so ...

One optimization would be to trigger b) only for FOLL_LONGTERM. For
!FOLL_LONGTERM there are "in theory" absolutely no guarantees which
data will be observed if we modify memory concurrently with e.g.
O_DIRECT IMHO. But that would require some more thought.

Of course, that's all material for another journey, although it should
be mostly straightforward.
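
Very roughly, for b) -- a sketch only, nothing like this exists in the
series, and PageAnonExclusive() is what this series introduces:

   /* GUP-slow only (we have the VMA): unshare before a R/O FOLL_PIN? */
   static bool gup_ro_pin_must_unshare(struct vm_area_struct *vma,
                                       struct page *page)
   {
           if (vma->vm_flags & VM_SHARED)
                   /* MAP_SHARED: pinning the file page is fine */
                   return false;
           if (PageAnon(page))
                   /* anon page: reliable once exclusive */
                   return !PageAnonExclusive(page);
           /* !anon page in a MAP_PRIVATE mapping is never exclusive */
           return true;
   }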

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-03-02 20:59       ` Jason Gunthorpe
@ 2022-03-03  8:07         ` David Hildenbrand
  0 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-03-03  8:07 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-kernel, Andrew Morton, Hugh Dickins, Linus Torvalds,
	David Rientjes, Shakeel Butt, John Hubbard, Mike Kravetz,
	Mike Rapoport, Yang Shi, Kirill A . Shutemov, Matthew Wilcox,
	Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit,
	Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
	Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
	Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm

On 02.03.22 21:59, Jason Gunthorpe wrote:
> On Wed, Mar 02, 2022 at 09:38:09PM +0100, David Hildenbrand wrote:
> 
>> (a) I want a R/O pin to observe file modifications.
>> (b) I want the R/O pin to *not* observe file modifications but observe
>>     my (eventual? if any) private modifications,
> 
> A scenario I know that motivated this is fairly straightforward:
> 
>    static char data[] = { 7 };
> 
>    ibv_reg_mr(data, READ_ONLY)
>    data[0] = 1
>    .. go touch data via DMA ..
> 
> We want to reliably observe the '1'.
> 
> What is happening under the covers is that 'data' is placed in the
> .data segment (non-zero initializer) and becomes a file-backed
> MAP_PRIVATE page. The write COWs that page.
> 
> It would work OK if it were in .bss instead.
> 
> I think the FOLL_FORCE is there because otherwise the trick doesn't
> work on true RO pages and the API becomes broken.

Thanks for the nice example, it matches what John brought up in respect
to MAP_PRIVATE semantics.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
  2022-03-03  8:06         ` David Hildenbrand
@ 2022-03-09  7:37           ` David Hildenbrand
  0 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2022-03-09  7:37 UTC (permalink / raw)
  To: John Hubbard, Jason Gunthorpe
  Cc: linux-kernel, Andrew Morton, Hugh Dickins, Linus Torvalds,
	David Rientjes, Shakeel Butt, Mike Kravetz, Mike Rapoport,
	Yang Shi, Kirill A . Shutemov, Matthew Wilcox, Vlastimil Babka,
	Jann Horn, Michal Hocko, Nadav Amit, Rik van Riel,
	Roman Gushchin, Andrea Arcangeli, Peter Xu, Donald Dutile,
	Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang,
	Pedro Gomes, Oded Gabbay, linux-mm

On 03.03.22 09:06, David Hildenbrand wrote:
> On 03.03.22 02:47, John Hubbard wrote:
>> On 3/2/22 12:38, David Hildenbrand wrote:
>> ...
>>> BUT, once we actually write to the private mapping via the page table,
>>> the GUP pin would go out of sync with the now-anonymous page mapped into
>>> the page table. However, I'm having a hard time answering what's
>>> actually expected?
>>>
>>> It's really hard to tell what the user wants with MAP_PRIVATE file
>>> mappings when GUP stumbles over a !anon page (no modifications so far):
>>>
>>> (a) I want a R/O pin to observe file modifications.
>>> (b) I want the R/O pin to *not* observe file modifications but observe
>>>      my (eventual? if any) private modifications,
>>>
>>
>> On this aspect, I think it is easier than trying to discern user
>> intentions. Because it is less a question of what the user wants, and
>> more a question of how mmap(2) is specified. And the man page clearly
>> indicates that the user has no right to expect to see file
>> modifications. Here's the excerpt:
>>
>> "MAP_PRIVATE
>> 	
>> Create  a private copy-on-write mapping.  Updates to the mapping are not
>> visible to other processes mapping the same file, and are not carried
>> through to the underlying file.  It is unspecified whether changes  made
>> to the file after the mmap() call are visible in the mapped region.
>> "
>>
>>> Of course, if we already wrote to that page and now have an anon page,
>>> it's easy: we are already no longer following file changes.
>>
>> Yes, and in fact, I've always thought that the way this was written
>> means that it should be treated as a snapshot of the file contents,
>> and no longer reliably connected in either direction to the page(s).
> 
> Thanks John, that's extremely helpful. I forgot about these MAP_PRIVATE
> mmap() details -- they help a lot to clarify which semantics to provide.
> 
> So what we could do is:
> 
> a) Extend FAULT_FLAG_UNSHARE to also unshare a !anon page in
>    a MAP_PRIVATE mapping, replacing it with an (exclusive) anon page.
>    R/O PTE permissions are maintained, just like unsharing in the
>    context of this series.
> 
> b) Similarly trigger FAULT_FLAG_UNSHARE from GUP when trying to take a
>    R/O pin (FOLL_PIN) on a R/O-mapped !anon page in a MAP_PRIVATE
>    mapping.
> 
> c) Make R/O pins consistently use "FOLL_PIN" instead, getting rid of
>    FOLL_FORCE|FOLL_WRITE.
> 
> 
> Of course, we can't detect MAP_PRIVATE vs. MAP_SHARED in GUP-fast (no
> VMA), so we'd always have to fall back from GUP-fast in case we intend
> to FOLL_PIN a R/O-mapped !anon page. That would imply that essentially
> any R/O pin (FOLL_PIN) would have to fall back to ordinary GUP. BUT, we
> already require FOLL_FORCE|FOLL_WRITE right now, which is not any
> different, so ...
> 
> One optimization would be to trigger b) only for FOLL_LONGTERM. For
> !FOLL_LONGTERM there are "in theory" absolutely no guarantees which
> data will be observed if we modify memory concurrently with e.g.
> O_DIRECT IMHO. But that would require some more thought.
> 
> Of course, that's all material for another journey, although it should
> be mostly straightforward.
> 

Just a slight clarification after stumbling over the shared zeropage
code in follow_page_pte(): we do seem to support pinning the shared
zeropage, at least on the GUP-slow path. While I haven't played with it,
I assume we'd have to implement+trigger unsharing in case we want to
take a R/O pin on the shared zeropage.

Of course, similar to file-backed MAP_PRIVATE handling, this is out of
the scope of this series ("This change implies that whenever user space
wrote to a private mapping (IOW, we have an anonymous page mapped), that
GUP pins will always remain consistent: reliable R/O GUP pins of
anonymous pages.").
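
For completeness, how one would end up with such a pin (sketch; any
R/O FOLL_PIN user hitting the GUP-slow path would do):

   char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
   char c = p[0];  /* read fault maps the shared zeropage R/O */

   /* A R/O FOLL_PIN taken now pins the shared zeropage itself; a later
    * write fault replaces it with a fresh anon page, losing
    * synchronicity just the same unless we unshare first. */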

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2022-03-09  7:37 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-24 12:26 [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed David Hildenbrand
2022-02-24 16:26   ` Khalid Aziz
2022-02-24 12:26 ` [PATCH RFC 02/13] mm/hugetlb: take src_mm->write_protect_seq in copy_hugetlb_page_range() David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte() David Hildenbrand
2022-02-25  5:15   ` Hillf Danton
2022-02-25  8:01     ` David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 04/13] mm/rmap: split page_dup_rmap() into page_dup_file_rmap() and page_try_dup_anon_rmap() David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap() David Hildenbrand
2022-02-24 17:29   ` Linus Torvalds
2022-02-24 17:41     ` David Hildenbrand
2022-02-24 17:48       ` Linus Torvalds
2022-02-25  9:01         ` David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 06/13] mm/rmap: pass rmap flags to hugepage_add_anon_rmap() David Hildenbrand
2022-02-24 17:37   ` Linus Torvalds
2022-02-24 12:26 ` [PATCH RFC 07/13] mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon() page exclusively David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 08/13] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 09/13] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 10/13] mm/gup: disallow follow_page(FOLL_PIN) David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 11/13] mm: support GUP-triggered unsharing of anonymous pages David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page David Hildenbrand
2022-03-02 16:55   ` Jason Gunthorpe
2022-03-02 20:38     ` David Hildenbrand
2022-03-02 20:59       ` Jason Gunthorpe
2022-03-03  8:07         ` David Hildenbrand
2022-03-03  1:47       ` John Hubbard
2022-03-03  8:06         ` David Hildenbrand
2022-03-09  7:37           ` David Hildenbrand
2022-02-24 12:26 ` [PATCH RFC 13/13] mm/gup: sanity-check with CONFIG_DEBUG_VM that anonymous pages are exclusive when (un)pinning David Hildenbrand
2022-03-01  8:24 ` [PATCH RFC 00/13] mm: COW fixes part 2: reliable GUP pins of anonymous pages David Hildenbrand
