[PATCH 1/3] mm: change vma_start_read to fail if VMA got detached from under it
From: Suren Baghdasaryan @ 2023-06-20 23:57 UTC
  To: akpm
  Cc: willy, torvalds, vegard.nossum, mpe, Liam.Howlett, lrh2000,
	mgorman, linux-mm, linux-kernel, kernel-team, surenb

The current implementation of vma_start_read() checks whether the VMA is
write-locked before taking vma->vm_lock and then repeats that check after
the lock is obtained. This mechanism fails to detect the case when the
VMA gets write-locked, modified and unlocked after the first check but
before vma->vm_lock is obtained. While this is not strictly a problem
(vma_start_read() would not produce a false unlocked result), it allows
vma_start_read() to successfully lock a VMA which got detached from the
VMA tree while vma_start_read() was locking it.
The new condition checks for any change in vma->vm_lock_seq after we
obtain vma->vm_lock and causes vma_start_read() to fail if the above
race occurs.

Fixes: 5e31275cc997 ("mm: add per-VMA lock and helper functions to control it")
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
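
For reference, a minimal sketch of the racy interleaving this change
guards against. The helpers named below exist in the per-VMA locking
code; the timeline itself is simplified and illustrative only:

/*
 * reader: vma_start_read()          writer: holds mmap_lock for write
 * --------------------------------  ----------------------------------
 * reads vma->vm_lock_seq,
 *   != mm->mm_lock_seq -> proceed
 *                                    vma_start_write()    [write-lock,
 *                                      vm_lock_seq = mm_lock_seq]
 *                                    modify and detach the VMA
 *                                    vma_end_write_all()  [mm_lock_seq++]
 * down_read_trylock(vm_lock) -> ok
 * old recheck: vm_lock_seq == mm_lock_seq?  no -> reader keeps a read
 *   lock on a VMA that was already detached from the tree
 * new recheck: vm_lock_seq != value read initially -> yes -> fail
 */
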
 include/linux/mm.h | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)
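
For reference, a sketch of how a fault-handling caller consumes the new
failure mode. lock_vma_under_rcu(), vma_end_read() and mmap_read_lock()
are existing interfaces; do_fault_speculative(), handle_fault_locked()
and handle_fault_mmap_locked() are hypothetical names used here only for
illustration:

static vm_fault_t do_fault_speculative(struct mm_struct *mm,
				       unsigned long address,
				       unsigned int flags)
{
	struct vm_area_struct *vma;

	/*
	 * lock_vma_under_rcu() read-locks the VMA via vma_start_read();
	 * with this change it also fails when the VMA raced with a
	 * writer and may have been detached.
	 */
	vma = lock_vma_under_rcu(mm, address);
	if (vma) {
		vm_fault_t ret;

		ret = handle_fault_locked(vma, address, flags);
		vma_end_read(vma);
		return ret;
	}

	/*
	 * Per-VMA lock unavailable (contended, write-locked or the race
	 * above): fall back to the mmap_lock protected path.
	 */
	mmap_read_lock(mm);
	return handle_fault_mmap_locked(mm, address, flags);
}
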

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27ce77080c79..8410da79c570 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -639,23 +639,24 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
  */
 static inline bool vma_start_read(struct vm_area_struct *vma)
 {
-	/* Check before locking. A race might cause false locked result. */
-	if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
+	int vm_lock_seq = READ_ONCE(vma->vm_lock_seq);
+
+	/*
+	 * Check if VMA is locked before taking vma->vm_lock. A race or
+	 * mm_lock_seq overflow might cause false locked result.
+	 */
+	if (vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
 		return false;
 
 	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
 		return false;
 
-	/*
-	 * Overflow might produce false locked result.
-	 * False unlocked result is impossible because we modify and check
-	 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
-	 * modification invalidates all existing locks.
-	 */
-	if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
+	/* Fail if VMA was write-locked after we checked it earlier */
+	if (unlikely(vm_lock_seq != READ_ONCE(vma->vm_lock_seq))) {
 		up_read(&vma->vm_lock->lock);
 		return false;
 	}
+
 	return true;
 }
 
-- 
2.41.0.162.gfafddb0af9-goog


