* [PATCH v5 0/6] Per-VMA lock support for swap and userfaults
@ 2023-06-28 17:25 Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 1/6] swap: remove remnants of polling from read_swap_cache_async Suren Baghdasaryan
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-28 17:25 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, hdanton, apopple, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	surenb, linux-mm, linux-fsdevel, linux-kernel, kernel-team

When per-VMA locks were introduced in [1], several types of page faults
still fell back to mmap_lock to keep the patchset simple; swap and
userfault pages are among them. The main reason for skipping those cases
was that mmap_lock could be dropped while handling these faults, which
required additional logic.
This series implements the mechanism that allows the per-VMA lock to be
dropped in these cases.
First, change handle_mm_fault to drop the per-VMA lock when returning
VM_FAULT_RETRY or VM_FAULT_COMPLETED, consistent with the way mmap_lock
is handled. Then change folio_lock_or_retry to accept a vm_fault and
return vm_fault_t, which simplifies the later patches. Finally, allow
swap and uffd page faults to be handled under per-VMA locks by dropping
the per-VMA lock and retrying, the same way it is done under mmap_lock.
Naturally, once the VMA lock is dropped the VMA should be assumed
unstable and must not be used.
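
As a rough illustration, the resulting caller contract, condensed from the
arch hunks in patch 3/6 (label names and event accounting vary slightly
between architectures), looks like this:

	vma = lock_vma_under_rcu(mm, address);
	if (!vma)
		goto lock_mmap;
	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
	/* on RETRY/COMPLETED the per-VMA lock has already been dropped */
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);
	if (!(fault & VM_FAULT_RETRY)) {
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		goto done;	/* fault handled under the per-VMA lock */
	}
	count_vm_vma_lock_event(VMA_LOCK_RETRY);
lock_mmap:
	/* fall back to taking mmap_lock and retrying the fault */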

Changes since v4 posted at [2]
- 5/6 changed setting VM_FAULT_RETRY bit to an assignment, per Peter Xu
- 5/6 moved release_fault_lock to its final place, per Peter Xu
- 6/6 removed mm parameter in assert_fault_locked, per Peter Xu
- 6/6 replaced BUG_ON with WARN_ON_ONCE and moved the check for
FAULT_FLAG_RETRY_NOWAIT && FAULT_FLAG_VMA_LOCK into sanitize_fault_flags,
per Peter Xu

Note: patch 3/6 will cause a trivial merge conflict in arch/arm64/mm/fault.c
when applied over the mm-unstable branch due to a patch from the ARM64
tree [3] which is missing in mm-unstable.

[1] https://lore.kernel.org/all/20230227173632.3292573-1-surenb@google.com/
[2] https://lore.kernel.org/all/20230628071800.544800-1-surenb@google.com/
[3] https://lore.kernel.org/all/20230524131305.2808-1-jszhang@kernel.org/

Suren Baghdasaryan (6):
  swap: remove remnants of polling from read_swap_cache_async
  mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED
  mm: drop per-VMA lock when returning VM_FAULT_RETRY or
    VM_FAULT_COMPLETED
  mm: change folio_lock_or_retry to use vm_fault directly
  mm: handle swap page faults under per-VMA lock
  mm: handle userfaults under VMA lock

 arch/arm64/mm/fault.c    |  3 ++-
 arch/powerpc/mm/fault.c  |  3 ++-
 arch/s390/mm/fault.c     |  3 ++-
 arch/x86/mm/fault.c      |  3 ++-
 fs/userfaultfd.c         | 34 ++++++++++++----------------
 include/linux/mm.h       | 39 ++++++++++++++++++++++++++++++++
 include/linux/mm_types.h |  3 ++-
 include/linux/pagemap.h  |  9 ++++----
 mm/filemap.c             | 37 +++++++++++++++---------------
 mm/madvise.c             |  4 ++--
 mm/memory.c              | 49 ++++++++++++++++++++++------------------
 mm/swap.h                |  1 -
 mm/swap_state.c          | 12 ++++------
 13 files changed, 120 insertions(+), 80 deletions(-)

-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH v5 1/6] swap: remove remnants of polling from read_swap_cache_async
  2023-06-28 17:25 [PATCH v5 0/6] Per-VMA lock support for swap and userfaults Suren Baghdasaryan
@ 2023-06-28 17:25 ` Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 2/6] mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED Suren Baghdasaryan
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-28 17:25 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, hdanton, apopple, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	surenb, linux-mm, linux-fsdevel, linux-kernel, kernel-team,
	Christoph Hellwig

Commit [1] introduced IO polling support during swapin to reduce swap read
latency for block devices that can be polled. However the later commit [2]
removed polling support. Therefore it seems safe to remove the do_poll
parameter of read_swap_cache_async and always call swap_readpage with
synchronous=false, waiting for IO completion in folio_lock_or_retry.

[1] commit 23955622ff8d ("swap: add block io poll in swapin path")
[2] commit 9650b453a3d4 ("block: ignore RWF_HIPRI hint for sync dio")
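
For reference, the simplified interface after this patch (see the mm/swap.h
hunk below):

	struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
					   struct vm_area_struct *vma,
					   unsigned long addr,
					   struct swap_iocb **plug);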

Suggested-by: "Huang, Ying" <ying.huang@intel.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 mm/madvise.c    |  4 ++--
 mm/swap.h       |  1 -
 mm/swap_state.c | 12 +++++-------
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index b5ffbaf616f5..b1e8adf1234e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -215,7 +215,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 			continue;
 
 		page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
-					     vma, index, false, &splug);
+					     vma, index, &splug);
 		if (page)
 			put_page(page);
 	}
@@ -252,7 +252,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		rcu_read_unlock();
 
 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
-					     NULL, 0, false, &splug);
+					     NULL, 0, &splug);
 		if (page)
 			put_page(page);
 
diff --git a/mm/swap.h b/mm/swap.h
index 7c033d793f15..8a3c7a0ace4f 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -46,7 +46,6 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
 				   unsigned long addr,
-				   bool do_poll,
 				   struct swap_iocb **plug);
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b76a65ac28b3..a3839de71f3f 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -517,15 +517,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
-				   unsigned long addr, bool do_poll,
-				   struct swap_iocb **plug)
+				   unsigned long addr, struct swap_iocb **plug)
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
 			vma, addr, &page_was_allocated);
 
 	if (page_was_allocated)
-		swap_readpage(retpage, do_poll, plug);
+		swap_readpage(retpage, false, plug);
 
 	return retpage;
 }
@@ -620,7 +619,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
-	bool do_poll = true, page_allocated;
+	bool page_allocated;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 
@@ -628,7 +627,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	if (!mask)
 		goto skip;
 
-	do_poll = false;
 	/* Read a page_cluster sized and aligned cluster around offset. */
 	start_offset = offset & ~mask;
 	end_offset = offset | mask;
@@ -660,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll, NULL);
+	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -825,7 +823,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 skip:
 	/* The page was likely read above, so no need for plugging here */
 	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     ra_info.win == 1, NULL);
+				     NULL);
 }
 
 /**
-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH v5 2/6] mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED
  2023-06-28 17:25 [PATCH v5 0/6] Per-VMA lock support for swap and userfaults Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 1/6] swap: remove remnants of polling from read_swap_cache_async Suren Baghdasaryan
@ 2023-06-28 17:25 ` Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 3/6] mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED Suren Baghdasaryan
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-28 17:25 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, hdanton, apopple, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	surenb, linux-mm, linux-fsdevel, linux-kernel, kernel-team

VM_FAULT_RESULT_TRACE should contain an element for every vm_fault_reason
so that it can be used as the flag_array inside trace_print_flags_seq().
The element for VM_FAULT_COMPLETED is missing; add it.
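
For context, the table is consumed via __print_flags(), which ends up in
trace_print_flags_seq(); a trace event prints a fault result roughly like
the following sketch (modeled on the existing dax fault trace events):

	TP_printk("... result %s",
		  __print_flags(__entry->result, "|", VM_FAULT_RESULT_TRACE))

Without the new entry, a VM_FAULT_COMPLETED result would be printed as a
raw hex value instead of the "COMPLETED" name.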

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm_types.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..79765e3dd8f3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1070,7 +1070,8 @@ enum vm_fault_reason {
 	{ VM_FAULT_RETRY,               "RETRY" },	\
 	{ VM_FAULT_FALLBACK,            "FALLBACK" },	\
 	{ VM_FAULT_DONE_COW,            "DONE_COW" },	\
-	{ VM_FAULT_NEEDDSYNC,           "NEEDDSYNC" }
+	{ VM_FAULT_NEEDDSYNC,           "NEEDDSYNC" },	\
+	{ VM_FAULT_COMPLETED,           "COMPLETED" }
 
 struct vm_special_mapping {
 	const char *name;	/* The name, e.g. "[vdso]". */
-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH v5 3/6] mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED
  2023-06-28 17:25 [PATCH v5 0/6] Per-VMA lock support for swap and userfaults Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 1/6] swap: remove remnants of polling from read_swap_cache_async Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 2/6] mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED Suren Baghdasaryan
@ 2023-06-28 17:25 ` Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 4/6] mm: change folio_lock_or_retry to use vm_fault directly Suren Baghdasaryan
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-28 17:25 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, hdanton, apopple, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	surenb, linux-mm, linux-fsdevel, linux-kernel, kernel-team

When handle_mm_fault returns VM_FAULT_RETRY or VM_FAULT_COMPLETED it means
mmap_lock has been released. However, with per-VMA locks the behavior is
different: the caller still has to release the per-VMA lock. To make the
rules consistent for the caller, drop the per-VMA lock when returning
VM_FAULT_RETRY or VM_FAULT_COMPLETED. Currently the only path that returns
VM_FAULT_RETRY under per-VMA locks is do_swap_page, and no path returns
VM_FAULT_COMPLETED yet.
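
Inside the fault handler this means any path that returns VM_FAULT_RETRY
while FAULT_FLAG_VMA_LOCK is set must now drop the lock itself, as the
do_swap_page() hunk below does:

	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
		ret = VM_FAULT_RETRY;
		vma_end_read(vma);	/* drop the per-VMA lock before returning RETRY */
		goto out;
	}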

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/mm/fault.c   | 3 ++-
 arch/powerpc/mm/fault.c | 3 ++-
 arch/s390/mm/fault.c    | 3 ++-
 arch/x86/mm/fault.c     | 3 ++-
 mm/memory.c             | 1 +
 5 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c85b6d70b222..9c06c53a9ff3 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -612,7 +612,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 531177a4ee08..4697c5dca31c 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -494,7 +494,8 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	}
 
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index b65144c392b0..cccefe41038b 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -418,7 +418,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
 		goto out;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e4399983c50c..d69c85c1c04e 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1347,7 +1347,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/mm/memory.c b/mm/memory.c
index f69fbc251198..f14d45957b83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3713,6 +3713,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
 		ret = VM_FAULT_RETRY;
+		vma_end_read(vma);
 		goto out;
 	}
 
-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH v5 4/6] mm: change folio_lock_or_retry to use vm_fault directly
  2023-06-28 17:25 [PATCH v5 0/6] Per-VMA lock support for swap and userfaults Suren Baghdasaryan
                   ` (2 preceding siblings ...)
  2023-06-28 17:25 ` [PATCH v5 3/6] mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED Suren Baghdasaryan
@ 2023-06-28 17:25 ` Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock Suren Baghdasaryan
  2023-06-28 17:25 ` [PATCH v5 6/6] mm: handle userfaults under VMA lock Suren Baghdasaryan
  5 siblings, 0 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-28 17:25 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, hdanton, apopple, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	surenb, linux-mm, linux-fsdevel, linux-kernel, kernel-team

Change folio_lock_or_retry to accept a vm_fault struct and return a
vm_fault_t directly.
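
A condensed before/after of a call site (see the do_swap_page() hunk
below):

	/* before */
	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
	if (!locked) {
		ret |= VM_FAULT_RETRY;
		goto out_release;
	}

	/* after */
	ret |= folio_lock_or_retry(folio, vmf);
	if (ret & VM_FAULT_RETRY)
		goto out_release;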

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Xu <peterx@redhat.com>
---
 include/linux/pagemap.h |  9 ++++-----
 mm/filemap.c            | 22 ++++++++++++----------
 mm/memory.c             | 14 ++++++--------
 3 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..59d070c55c97 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -896,8 +896,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 
 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-				unsigned int flags);
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
 
@@ -1001,11 +1000,11 @@ static inline int folio_lock_killable(struct folio *folio)
  * Return value and mmap_lock implications depend on flags; see
  * __folio_lock_or_retry().
  */
-static inline bool folio_lock_or_retry(struct folio *folio,
-		struct mm_struct *mm, unsigned int flags)
+static inline vm_fault_t folio_lock_or_retry(struct folio *folio,
+					     struct vm_fault *vmf)
 {
 	might_sleep();
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	return folio_trylock(folio) ? 0 : __folio_lock_or_retry(folio, vmf);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 00f01d8ead47..52bcf12dcdbf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1701,32 +1701,34 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 
 /*
  * Return values:
- * true - folio is locked; mmap_lock is still held.
- * false - folio is not locked.
+ * 0 - folio is locked.
+ * VM_FAULT_RETRY - folio is not locked.
  *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
  *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
  *     which case mmap_lock is still held.
  *
- * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
+ * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
  * with the folio locked and the mmap_lock unperturbed.
  */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			 unsigned int flags)
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
+	struct mm_struct *mm = vmf->vma->vm_mm;
+	unsigned int flags = vmf->flags;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return 0.
+		 * even though return VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
-			return false;
+			return VM_FAULT_RETRY;
 
 		mmap_read_unlock(mm);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
 			folio_wait_locked(folio);
-		return false;
+		return VM_FAULT_RETRY;
 	}
 	if (flags & FAULT_FLAG_KILLABLE) {
 		bool ret;
@@ -1734,13 +1736,13 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
 		ret = __folio_lock_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
-			return false;
+			return VM_FAULT_RETRY;
 		}
 	} else {
 		__folio_lock(folio);
 	}
 
-	return true;
+	return 0;
 }
 
 /**
diff --git a/mm/memory.c b/mm/memory.c
index f14d45957b83..345080052003 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3568,6 +3568,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct folio *folio = page_folio(vmf->page);
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
+	vm_fault_t ret;
 
 	/*
 	 * We need a reference to lock the folio because we don't hold
@@ -3580,9 +3581,10 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_try_get(folio))
 		return 0;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+	ret = folio_lock_or_retry(folio, vmf);
+	if (ret) {
 		folio_put(folio);
-		return VM_FAULT_RETRY;
+		return ret;
 	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
@@ -3704,7 +3706,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
-	int locked;
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
@@ -3826,12 +3827,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_release;
 	}
 
-	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
-
-	if (!locked) {
-		ret |= VM_FAULT_RETRY;
+	ret |= folio_lock_or_retry(folio, vmf);
+	if (ret & VM_FAULT_RETRY)
 		goto out_release;
-	}
 
 	if (swapcache) {
 		/*
-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock
  2023-06-28 17:25 [PATCH v5 0/6] Per-VMA lock support for swap and userfaults Suren Baghdasaryan
                   ` (3 preceding siblings ...)
  2023-06-28 17:25 ` [PATCH v5 4/6] mm: change folio_lock_or_retry to use vm_fault directly Suren Baghdasaryan
@ 2023-06-28 17:25 ` Suren Baghdasaryan
  2023-06-29  6:04   ` Alistair Popple
  2023-06-28 17:25 ` [PATCH v5 6/6] mm: handle userfaults under VMA lock Suren Baghdasaryan
  5 siblings, 1 reply; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-28 17:25 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, hdanton, apopple, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	surenb, linux-mm, linux-fsdevel, linux-kernel, kernel-team

When page fault is handled under per-VMA lock protection, all swap page
faults are retried with mmap_lock because folio_lock_or_retry has to drop
and reacquire mmap_lock if folio could not be immediately locked.
Follow the same pattern as mmap_lock to drop per-VMA lock when waiting
for folio and retrying once folio is available.
With this obstacle removed, enable do_swap_page to operate under
per-VMA lock protection. Drivers implementing ops->migrate_to_ram might
still rely on mmap_lock, therefore we have to fall back to mmap_lock in
that particular case.
Note that the only time do_swap_page calls synchronous swap_readpage
is when SWP_SYNCHRONOUS_IO is set, which is only set for
QUEUE_FLAG_SYNCHRONOUS devices: brd, zram and nvdimms (both btt and
pmem). Therefore we don't sleep in this path, and there's no need to
drop the mmap or per-VMA lock.
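
Condensed from the hunks below, the wait path for a contended folio under
the per-VMA lock becomes the following sketch (assuming
FAULT_FLAG_RETRY_NOWAIT is not set, which never happens together with
FAULT_FLAG_VMA_LOCK):

	/* __folio_lock_or_retry(), called with FAULT_FLAG_VMA_LOCK in vmf->flags */
	release_fault_lock(vmf);	/* vma_end_read() instead of mmap_read_unlock() */
	folio_wait_locked(folio);	/* or folio_wait_locked_killable() */
	return VM_FAULT_RETRY;		/* the arch code then retries under mmap_lock */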

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm.h | 13 +++++++++++++
 mm/filemap.c       | 17 ++++++++---------
 mm/memory.c        | 16 ++++++++++------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fec149585985..bbaec479bf98 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -723,6 +723,14 @@ static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_end_read(vmf->vma);
+	else
+		mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_init_lock(struct vm_area_struct *vma) {}
@@ -736,6 +744,11 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma,
 				     bool detached) {}
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 52bcf12dcdbf..d4d8f474e0c5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1703,27 +1703,26 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
  * Return values:
  * 0 - folio is locked.
  * VM_FAULT_RETRY - folio is not locked.
- *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
- *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
- *     which case mmap_lock is still held.
+ *     mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
+ *     vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
+ *     FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
  *
  * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
- * with the folio locked and the mmap_lock unperturbed.
+ * with the folio locked and the mmap_lock/per-VMA lock is left unperturbed.
  */
 vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
-	struct mm_struct *mm = vmf->vma->vm_mm;
 	unsigned int flags = vmf->flags;
 
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
-		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return VM_FAULT_RETRY.
+		 * CAUTION! In this case, mmap_lock/per-VMA lock is not
+		 * released even though returning VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
 			return VM_FAULT_RETRY;
 
-		mmap_read_unlock(mm);
+		release_fault_lock(vmf);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
@@ -1735,7 +1734,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 
 		ret = __folio_lock_killable(folio);
 		if (ret) {
-			mmap_read_unlock(mm);
+			release_fault_lock(vmf);
 			return VM_FAULT_RETRY;
 		}
 	} else {
diff --git a/mm/memory.c b/mm/memory.c
index 345080052003..4fb8ecfc6d13 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3712,12 +3712,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		ret = VM_FAULT_RETRY;
-		vma_end_read(vma);
-		goto out;
-	}
-
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
@@ -3727,6 +3721,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->page = pfn_swap_entry_to_page(entry);
 			ret = remove_device_exclusive_entry(vmf);
 		} else if (is_device_private_entry(entry)) {
+			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+				/*
+				 * migrate_to_ram is not yet ready to operate
+				 * under VMA lock.
+				 */
+				vma_end_read(vma);
+				ret = VM_FAULT_RETRY;
+				goto out;
+			}
+
 			vmf->page = pfn_swap_entry_to_page(entry);
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH v5 6/6] mm: handle userfaults under VMA lock
  2023-06-28 17:25 [PATCH v5 0/6] Per-VMA lock support for swap and userfaults Suren Baghdasaryan
                   ` (4 preceding siblings ...)
  2023-06-28 17:25 ` [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock Suren Baghdasaryan
@ 2023-06-28 17:25 ` Suren Baghdasaryan
  2023-06-28 17:32   ` Peter Xu
  5 siblings, 1 reply; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-28 17:25 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, hdanton, apopple, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	surenb, linux-mm, linux-fsdevel, linux-kernel, kernel-team

Enable handle_userfault to operate under VMA lock by releasing VMA lock
instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
should never be used when handling faults under per-VMA lock protection
because that would break the assumption that lock is dropped on retry.
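
Condensed from the fs/userfaultfd.c hunk below, the core change in
handle_userfault() is that it now releases whichever lock the fault is
holding before waiting for userspace:

	must_wait = userfaultfd_must_wait(ctx, vmf, reason);	/* or the hugetlb variant */
	release_fault_lock(vmf);	/* vma_end_read() under FAULT_FLAG_VMA_LOCK,
					 * mmap_read_unlock() otherwise */
	/* ... wait for the userfaultfd handler, then return VM_FAULT_RETRY ... */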

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 fs/userfaultfd.c   | 34 ++++++++++++++--------------------
 include/linux/mm.h | 26 ++++++++++++++++++++++++++
 mm/memory.c        | 20 +++++++++++---------
 3 files changed, 51 insertions(+), 29 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 4e800bb7d2ab..9d61e3e7da7b 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -277,17 +277,16 @@ static inline struct uffd_msg userfault_msg(unsigned long address,
  * hugepmd ranges.
  */
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					 struct vm_area_struct *vma,
-					 unsigned long address,
-					 unsigned long flags,
-					 unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
+	struct vm_area_struct *vma = vmf->vma;
 	pte_t *ptep, pte;
 	bool ret = true;
 
-	mmap_assert_locked(ctx->mm);
+	assert_fault_locked(vmf);
 
-	ptep = hugetlb_walk(vma, address, vma_mmu_pagesize(vma));
+	ptep = hugetlb_walk(vma, vmf->address, vma_mmu_pagesize(vma));
 	if (!ptep)
 		goto out;
 
@@ -308,10 +307,8 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 }
 #else
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					 struct vm_area_struct *vma,
-					 unsigned long address,
-					 unsigned long flags,
-					 unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
 	return false;	/* should never get here */
 }
@@ -325,11 +322,11 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
  * threads.
  */
 static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
-					 unsigned long address,
-					 unsigned long flags,
+					 struct vm_fault *vmf,
 					 unsigned long reason)
 {
 	struct mm_struct *mm = ctx->mm;
+	unsigned long address = vmf->address;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -337,7 +334,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	pte_t *pte;
 	bool ret = true;
 
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -445,7 +442,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * Coredumping runs without mmap_lock so we can only check that
 	 * the mmap_lock is held, if PF_DUMPCORE was not set.
 	 */
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	ctx = vma->vm_userfaultfd_ctx.ctx;
 	if (!ctx)
@@ -561,15 +558,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
 
 	if (!is_vm_hugetlb_page(vma))
-		must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags,
-						  reason);
+		must_wait = userfaultfd_must_wait(ctx, vmf, reason);
 	else
-		must_wait = userfaultfd_huge_must_wait(ctx, vma,
-						       vmf->address,
-						       vmf->flags, reason);
+		must_wait = userfaultfd_huge_must_wait(ctx, vmf, reason);
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
-	mmap_read_unlock(mm);
+	release_fault_lock(vmf);
 
 	if (likely(must_wait && !READ_ONCE(ctx->released))) {
 		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bbaec479bf98..cd5389338def 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -705,6 +705,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma)
 	return true;
 }
 
+static inline void vma_assert_locked(struct vm_area_struct *vma)
+{
+	int mm_lock_seq;
+
+	if (__is_vma_write_locked(vma, &mm_lock_seq))
+		return;
+
+	lockdep_assert_held(&vma->vm_lock->lock);
+	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
@@ -731,6 +742,15 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 		mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline
+void assert_fault_locked(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_assert_locked(vmf->vma);
+	else
+		mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_init_lock(struct vm_area_struct *vma) {}
@@ -749,6 +769,12 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 	mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline
+void assert_fault_locked(struct vm_fault *vmf)
+{
+	mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 4fb8ecfc6d13..672f7383a622 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5202,6 +5202,17 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
 				 !is_cow_mapping(vma->vm_flags)))
 			return VM_FAULT_SIGSEGV;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/*
+	 * Per-VMA locks can't be used with FAULT_FLAG_RETRY_NOWAIT because of
+	 * the assumption that lock is dropped on VM_FAULT_RETRY.
+	 */
+	if (WARN_ON_ONCE((*flags &
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)) ==
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)))
+		return VM_FAULT_SIGSEGV;
+#endif
+
 	return 0;
 }
 
@@ -5294,15 +5305,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
 
-	/*
-	 * Due to the possibility of userfault handler dropping mmap_lock, avoid
-	 * it for now and fall back to page fault handling under mmap_lock.
-	 */
-	if (userfaultfd_armed(vma)) {
-		vma_end_read(vma);
-		goto inval;
-	}
-
 	/* Check since vm_start/vm_end might change before we lock the VMA */
 	if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
 		vma_end_read(vma);
-- 
2.41.0.162.gfafddb0af9-goog



* Re: [PATCH v5 6/6] mm: handle userfaults under VMA lock
  2023-06-28 17:25 ` [PATCH v5 6/6] mm: handle userfaults under VMA lock Suren Baghdasaryan
@ 2023-06-28 17:32   ` Peter Xu
  2023-06-29  0:19     ` Suren Baghdasaryan
  0 siblings, 1 reply; 14+ messages in thread
From: Peter Xu @ 2023-06-28 17:32 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: akpm, willy, hannes, mhocko, josef, jack, ldufour,
	laurent.dufour, michel, liam.howlett, jglisse, vbabka, minchan,
	dave, punit.agrawal, lstoakes, hdanton, apopple, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	linux-mm, linux-fsdevel, linux-kernel, kernel-team

On Wed, Jun 28, 2023 at 10:25:29AM -0700, Suren Baghdasaryan wrote:
> Enable handle_userfault to operate under VMA lock by releasing VMA lock
> instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
> should never be used when handling faults under per-VMA lock protection
> because that would break the assumption that lock is dropped on retry.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Maybe the sanitize_fault_flags() changes suite more in patch 3, but not a
big deal I guess.

Acked-by: Peter Xu <peterx@redhat.com>

Thanks!

-- 
Peter Xu



* Re: [PATCH v5 6/6] mm: handle userfaults under VMA lock
  2023-06-28 17:32   ` Peter Xu
@ 2023-06-29  0:19     ` Suren Baghdasaryan
  2023-06-29 16:32       ` Peter Xu
  0 siblings, 1 reply; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-29  0:19 UTC (permalink / raw)
  To: Peter Xu
  Cc: akpm, willy, hannes, mhocko, josef, jack, ldufour,
	laurent.dufour, michel, liam.howlett, jglisse, vbabka, minchan,
	dave, punit.agrawal, lstoakes, hdanton, apopple, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	linux-mm, linux-fsdevel, linux-kernel, kernel-team

On Wed, Jun 28, 2023 at 10:32 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Jun 28, 2023 at 10:25:29AM -0700, Suren Baghdasaryan wrote:
> > Enable handle_userfault to operate under VMA lock by releasing VMA lock
> > instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
> > should never be used when handling faults under per-VMA lock protection
> > because that would break the assumption that lock is dropped on retry.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>
> Maybe the sanitize_fault_flags() changes suite more in patch 3, but not a
> big deal I guess.

IIUC FAULT_FLAG_RETRY_NOWAIT comes into play in this patchset only in
the context of uffds, therefore that check seems to be needed when we
enable per-VMA lock uffd support, which is this patch. Does that make
sense?

>
> Acked-by: Peter Xu <peterx@redhat.com>

Thanks!

>
> Thanks!
>
> --
> Peter Xu
>


* Re: [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock
  2023-06-28 17:25 ` [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock Suren Baghdasaryan
@ 2023-06-29  6:04   ` Alistair Popple
  2023-06-30  1:30     ` Suren Baghdasaryan
  0 siblings, 1 reply; 14+ messages in thread
From: Alistair Popple @ 2023-06-29  6:04 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: akpm, willy, hannes, mhocko, josef, jack, ldufour,
	laurent.dufour, michel, liam.howlett, jglisse, vbabka, minchan,
	dave, punit.agrawal, lstoakes, hdanton, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	linux-mm, linux-fsdevel, linux-kernel, kernel-team


Looks good and passed the HMM selftests. So:

Tested-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>

Suren Baghdasaryan <surenb@google.com> writes:

> When page fault is handled under per-VMA lock protection, all swap page
> faults are retried with mmap_lock because folio_lock_or_retry has to drop
> and reacquire mmap_lock if folio could not be immediately locked.
> Follow the same pattern as mmap_lock to drop per-VMA lock when waiting
> for folio and retrying once folio is available.
> With this obstacle removed, enable do_swap_page to operate under
> per-VMA lock protection. Drivers implementing ops->migrate_to_ram might
> still rely on mmap_lock, therefore we have to fall back to mmap_lock in
> that particular case.
> Note that the only time do_swap_page calls synchronous swap_readpage
> is when SWP_SYNCHRONOUS_IO is set, which is only set for
> QUEUE_FLAG_SYNCHRONOUS devices: brd, zram and nvdimms (both btt and
> pmem). Therefore we don't sleep in this path, and there's no need to
> drop the mmap or per-VMA lock.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Acked-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/mm.h | 13 +++++++++++++
>  mm/filemap.c       | 17 ++++++++---------
>  mm/memory.c        | 16 ++++++++++------
>  3 files changed, 31 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index fec149585985..bbaec479bf98 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -723,6 +723,14 @@ static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
>  struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
>  					  unsigned long address);
>  
> +static inline void release_fault_lock(struct vm_fault *vmf)
> +{
> +	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
> +		vma_end_read(vmf->vma);
> +	else
> +		mmap_read_unlock(vmf->vma->vm_mm);
> +}
> +
>  #else /* CONFIG_PER_VMA_LOCK */
>  
>  static inline void vma_init_lock(struct vm_area_struct *vma) {}
> @@ -736,6 +744,11 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
>  static inline void vma_mark_detached(struct vm_area_struct *vma,
>  				     bool detached) {}
>  
> +static inline void release_fault_lock(struct vm_fault *vmf)
> +{
> +	mmap_read_unlock(vmf->vma->vm_mm);
> +}
> +
>  #endif /* CONFIG_PER_VMA_LOCK */
>  
>  /*
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 52bcf12dcdbf..d4d8f474e0c5 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1703,27 +1703,26 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
>   * Return values:
>   * 0 - folio is locked.
>   * VM_FAULT_RETRY - folio is not locked.
> - *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
> - *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
> - *     which case mmap_lock is still held.
> + *     mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
> + *     vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
> + *     FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
>   *
>   * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
> - * with the folio locked and the mmap_lock unperturbed.
> + * with the folio locked and the mmap_lock/per-VMA lock is left unperturbed.
>   */
>  vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
>  {
> -	struct mm_struct *mm = vmf->vma->vm_mm;
>  	unsigned int flags = vmf->flags;
>  
>  	if (fault_flag_allow_retry_first(flags)) {
>  		/*
> -		 * CAUTION! In this case, mmap_lock is not released
> -		 * even though return VM_FAULT_RETRY.
> +		 * CAUTION! In this case, mmap_lock/per-VMA lock is not
> +		 * released even though returning VM_FAULT_RETRY.
>  		 */
>  		if (flags & FAULT_FLAG_RETRY_NOWAIT)
>  			return VM_FAULT_RETRY;
>  
> -		mmap_read_unlock(mm);
> +		release_fault_lock(vmf);
>  		if (flags & FAULT_FLAG_KILLABLE)
>  			folio_wait_locked_killable(folio);
>  		else
> @@ -1735,7 +1734,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
>  
>  		ret = __folio_lock_killable(folio);
>  		if (ret) {
> -			mmap_read_unlock(mm);
> +			release_fault_lock(vmf);
>  			return VM_FAULT_RETRY;
>  		}
>  	} else {
> diff --git a/mm/memory.c b/mm/memory.c
> index 345080052003..4fb8ecfc6d13 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3712,12 +3712,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	if (!pte_unmap_same(vmf))
>  		goto out;
>  
> -	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> -		ret = VM_FAULT_RETRY;
> -		vma_end_read(vma);
> -		goto out;
> -	}
> -
>  	entry = pte_to_swp_entry(vmf->orig_pte);
>  	if (unlikely(non_swap_entry(entry))) {
>  		if (is_migration_entry(entry)) {
> @@ -3727,6 +3721,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			vmf->page = pfn_swap_entry_to_page(entry);
>  			ret = remove_device_exclusive_entry(vmf);
>  		} else if (is_device_private_entry(entry)) {
> +			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> +				/*
> +				 * migrate_to_ram is not yet ready to operate
> +				 * under VMA lock.
> +				 */
> +				vma_end_read(vma);
> +				ret = VM_FAULT_RETRY;
> +				goto out;
> +			}
> +
>  			vmf->page = pfn_swap_entry_to_page(entry);
>  			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
>  					vmf->address, &vmf->ptl);



* Re: [PATCH v5 6/6] mm: handle userfaults under VMA lock
  2023-06-29  0:19     ` Suren Baghdasaryan
@ 2023-06-29 16:32       ` Peter Xu
  2023-06-29 16:39         ` Suren Baghdasaryan
  0 siblings, 1 reply; 14+ messages in thread
From: Peter Xu @ 2023-06-29 16:32 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: akpm, willy, hannes, mhocko, josef, jack, ldufour,
	laurent.dufour, michel, liam.howlett, jglisse, vbabka, minchan,
	dave, punit.agrawal, lstoakes, hdanton, apopple, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	linux-mm, linux-fsdevel, linux-kernel, kernel-team

On Wed, Jun 28, 2023 at 05:19:31PM -0700, Suren Baghdasaryan wrote:
> On Wed, Jun 28, 2023 at 10:32 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Wed, Jun 28, 2023 at 10:25:29AM -0700, Suren Baghdasaryan wrote:
> > > Enable handle_userfault to operate under VMA lock by releasing VMA lock
> > > instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
> > > should never be used when handling faults under per-VMA lock protection
> > > because that would break the assumption that lock is dropped on retry.
> > >
> > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> >
> > Maybe the sanitize_fault_flags() changes suite more in patch 3, but not a
> > big deal I guess.
> 
> IIUC FAULT_FLAG_RETRY_NOWAIT comes into play in this patchset only in
> the context of uffds, therefore that check seems to be needed when we
> enable per-VMA lock uffd support, which is this patch. Does that make
> sense?

I don't see why uffd is special in this regard, as e.g. swap also checks
NOWAIT when folio_lock_or_retry() so I assume it's also used there.

IMHO the "NOWAIT should never apply with VMA_LOCK so far" assumption starts
from patch 3 where it conditionally releases the vma lock when
!(RETRY|COMPLETE); that is the real place where it can start to go wrong if
anyone breaks the assumption.

Thanks,

-- 
Peter Xu



* Re: [PATCH v5 6/6] mm: handle userfaults under VMA lock
  2023-06-29 16:32       ` Peter Xu
@ 2023-06-29 16:39         ` Suren Baghdasaryan
  2023-06-30  2:07           ` Suren Baghdasaryan
  0 siblings, 1 reply; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-29 16:39 UTC (permalink / raw)
  To: Peter Xu
  Cc: akpm, willy, hannes, mhocko, josef, jack, ldufour,
	laurent.dufour, michel, liam.howlett, jglisse, vbabka, minchan,
	dave, punit.agrawal, lstoakes, hdanton, apopple, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	linux-mm, linux-fsdevel, linux-kernel, kernel-team

On Thu, Jun 29, 2023 at 9:33 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Jun 28, 2023 at 05:19:31PM -0700, Suren Baghdasaryan wrote:
> > On Wed, Jun 28, 2023 at 10:32 AM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > On Wed, Jun 28, 2023 at 10:25:29AM -0700, Suren Baghdasaryan wrote:
> > > > Enable handle_userfault to operate under VMA lock by releasing VMA lock
> > > > instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
> > > > should never be used when handling faults under per-VMA lock protection
> > > > because that would break the assumption that lock is dropped on retry.
> > > >
> > > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > >
> > > Maybe the sanitize_fault_flags() changes suite more in patch 3, but not a
> > > big deal I guess.
> >
> > IIUC FAULT_FLAG_RETRY_NOWAIT comes into play in this patchset only in
> > the context of uffds, therefore that check seems to be needed when we
> > enable per-VMA lock uffd support, which is this patch. Does that make
> > sense?
>
> I don't see why uffd is special in this regard, as e.g. swap also checks
> NOWAIT when folio_lock_or_retry() so I assume it's also used there.
>
> IMHO the "NOWAIT should never apply with VMA_LOCK so far" assumption starts
> from patch 3 where it conditionally releases the vma lock when
> !(RETRY|COMPLETE); that is the real place where it can start to go wrong if
> anyone breaks the assumption.

Um, yes, you are right as usual. It was clear to me from the code that
NOWAIT is not used with swap under VMA_LOCK, that's why I didn't
consider this check earlier. Yeah, patch 3 seems like a more
appropriate place for it. I'll move it and post a new patchset later
today or tomorrow morning with your Acks.
Thanks,
Suren.

>
> Thanks,
>
> --
> Peter Xu
>


* Re: [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock
  2023-06-29  6:04   ` Alistair Popple
@ 2023-06-30  1:30     ` Suren Baghdasaryan
  0 siblings, 0 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-30  1:30 UTC (permalink / raw)
  To: Alistair Popple
  Cc: akpm, willy, hannes, mhocko, josef, jack, ldufour,
	laurent.dufour, michel, liam.howlett, jglisse, vbabka, minchan,
	dave, punit.agrawal, lstoakes, hdanton, peterx, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	linux-mm, linux-fsdevel, linux-kernel, kernel-team

On Wed, Jun 28, 2023 at 11:06 PM Alistair Popple <apopple@nvidia.com> wrote:
>
>
> Looks good and passed the HMM selftests. So:
>
> Tested-by: Alistair Popple <apopple@nvidia.com>
> Reviewed-by: Alistair Popple <apopple@nvidia.com>

Thanks!

>
> Suren Baghdasaryan <surenb@google.com> writes:
>
> > When page fault is handled under per-VMA lock protection, all swap page
> > faults are retried with mmap_lock because folio_lock_or_retry has to drop
> > and reacquire mmap_lock if folio could not be immediately locked.
> > Follow the same pattern as mmap_lock to drop per-VMA lock when waiting
> > for folio and retrying once folio is available.
> > With this obstacle removed, enable do_swap_page to operate under
> > per-VMA lock protection. Drivers implementing ops->migrate_to_ram might
> > still rely on mmap_lock, therefore we have to fall back to mmap_lock in
> > that particular case.
> > Note that the only time do_swap_page calls synchronous swap_readpage
> > is when SWP_SYNCHRONOUS_IO is set, which is only set for
> > QUEUE_FLAG_SYNCHRONOUS devices: brd, zram and nvdimms (both btt and
> > pmem). Therefore we don't sleep in this path, and there's no need to
> > drop the mmap or per-VMA lock.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Acked-by: Peter Xu <peterx@redhat.com>
> > ---
> >  include/linux/mm.h | 13 +++++++++++++
> >  mm/filemap.c       | 17 ++++++++---------
> >  mm/memory.c        | 16 ++++++++++------
> >  3 files changed, 31 insertions(+), 15 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index fec149585985..bbaec479bf98 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -723,6 +723,14 @@ static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
> >  struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> >                                         unsigned long address);
> >
> > +static inline void release_fault_lock(struct vm_fault *vmf)
> > +{
> > +     if (vmf->flags & FAULT_FLAG_VMA_LOCK)
> > +             vma_end_read(vmf->vma);
> > +     else
> > +             mmap_read_unlock(vmf->vma->vm_mm);
> > +}
> > +
> >  #else /* CONFIG_PER_VMA_LOCK */
> >
> >  static inline void vma_init_lock(struct vm_area_struct *vma) {}
> > @@ -736,6 +744,11 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
> >  static inline void vma_mark_detached(struct vm_area_struct *vma,
> >                                    bool detached) {}
> >
> > +static inline void release_fault_lock(struct vm_fault *vmf)
> > +{
> > +     mmap_read_unlock(vmf->vma->vm_mm);
> > +}
> > +
> >  #endif /* CONFIG_PER_VMA_LOCK */
> >
> >  /*
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 52bcf12dcdbf..d4d8f474e0c5 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -1703,27 +1703,26 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
> >   * Return values:
> >   * 0 - folio is locked.
> >   * VM_FAULT_RETRY - folio is not locked.
> > - *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
> > - *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
> > - *     which case mmap_lock is still held.
> > + *     mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
> > + *     vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
> > + *     FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
> >   *
> >   * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
> > - * with the folio locked and the mmap_lock unperturbed.
> > + * with the folio locked and the mmap_lock/per-VMA lock is left unperturbed.
> >   */
> >  vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
> >  {
> > -     struct mm_struct *mm = vmf->vma->vm_mm;
> >       unsigned int flags = vmf->flags;
> >
> >       if (fault_flag_allow_retry_first(flags)) {
> >               /*
> > -              * CAUTION! In this case, mmap_lock is not released
> > -              * even though return VM_FAULT_RETRY.
> > +              * CAUTION! In this case, mmap_lock/per-VMA lock is not
> > +              * released even though returning VM_FAULT_RETRY.
> >                */
> >               if (flags & FAULT_FLAG_RETRY_NOWAIT)
> >                       return VM_FAULT_RETRY;
> >
> > -             mmap_read_unlock(mm);
> > +             release_fault_lock(vmf);
> >               if (flags & FAULT_FLAG_KILLABLE)
> >                       folio_wait_locked_killable(folio);
> >               else
> > @@ -1735,7 +1734,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
> >
> >               ret = __folio_lock_killable(folio);
> >               if (ret) {
> > -                     mmap_read_unlock(mm);
> > +                     release_fault_lock(vmf);
> >                       return VM_FAULT_RETRY;
> >               }
> >       } else {
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 345080052003..4fb8ecfc6d13 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3712,12 +3712,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >       if (!pte_unmap_same(vmf))
> >               goto out;
> >
> > -     if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > -             ret = VM_FAULT_RETRY;
> > -             vma_end_read(vma);
> > -             goto out;
> > -     }
> > -
> >       entry = pte_to_swp_entry(vmf->orig_pte);
> >       if (unlikely(non_swap_entry(entry))) {
> >               if (is_migration_entry(entry)) {
> > @@ -3727,6 +3721,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >                       vmf->page = pfn_swap_entry_to_page(entry);
> >                       ret = remove_device_exclusive_entry(vmf);
> >               } else if (is_device_private_entry(entry)) {
> > +                     if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > +                             /*
> > +                              * migrate_to_ram is not yet ready to operate
> > +                              * under VMA lock.
> > +                              */
> > +                             vma_end_read(vma);
> > +                             ret = VM_FAULT_RETRY;
> > +                             goto out;
> > +                     }
> > +
> >                       vmf->page = pfn_swap_entry_to_page(entry);
> >                       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> >                                       vmf->address, &vmf->ptl);
>


* Re: [PATCH v5 6/6] mm: handle userfaults under VMA lock
  2023-06-29 16:39         ` Suren Baghdasaryan
@ 2023-06-30  2:07           ` Suren Baghdasaryan
  0 siblings, 0 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-06-30  2:07 UTC (permalink / raw)
  To: Peter Xu
  Cc: akpm, willy, hannes, mhocko, josef, jack, ldufour,
	laurent.dufour, michel, liam.howlett, jglisse, vbabka, minchan,
	dave, punit.agrawal, lstoakes, hdanton, apopple, ying.huang,
	david, yuzhao, dhowells, hughd, viro, brauner, pasha.tatashin,
	linux-mm, linux-fsdevel, linux-kernel, kernel-team

On Thu, Jun 29, 2023 at 9:39 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Thu, Jun 29, 2023 at 9:33 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Wed, Jun 28, 2023 at 05:19:31PM -0700, Suren Baghdasaryan wrote:
> > > On Wed, Jun 28, 2023 at 10:32 AM Peter Xu <peterx@redhat.com> wrote:
> > > >
> > > > On Wed, Jun 28, 2023 at 10:25:29AM -0700, Suren Baghdasaryan wrote:
> > > > > Enable handle_userfault to operate under VMA lock by releasing VMA lock
> > > > > instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
> > > > > should never be used when handling faults under per-VMA lock protection
> > > > > because that would break the assumption that lock is dropped on retry.
> > > > >
> > > > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > >
> > > > Maybe the sanitize_fault_flags() changes suite more in patch 3, but not a
> > > > big deal I guess.
> > >
> > > IIUC FAULT_FLAG_RETRY_NOWAIT comes into play in this patchset only in
> > > the context of uffds, therefore that check seems to be needed when we
> > > enable per-VMA lock uffd support, which is this patch. Does that make
> > > sense?
> >
> > I don't see why uffd is special in this regard, as e.g. swap also checks
> > NOWAIT when folio_lock_or_retry() so I assume it's also used there.
> >
> > IMHO the "NOWAIT should never apply with VMA_LOCK so far" assumption starts
> > from patch 3 where it conditionally releases the vma lock when
> > !(RETRY|COMPLETE); that is the real place where it can start to go wrong if
> > anyone breaks the assumption.
>
> Um, yes, you are right as usual. It was clear to me from the code that
> NOWAIT is not used with swap under VMA_LOCK, that's why I didn't
> consider this check earlier. Yeah, patch 3 seems like a more
> appropriate place for it. I'll move it and post a new patchset later
> today or tomorrow morning with your Acks.

Posted v6 at https://lore.kernel.org/all/20230630020436.1066016-1-surenb@google.com/

> Thanks,
> Suren.
>
> >
> > Thanks,
> >
> > --
> > Peter Xu
> >


