* [PATCH RFC v2 00/12] mm/hugetlb: Make huge_pte_offset() thread-safe for pmd unshare
From: Peter Xu @ 2022-11-18  1:10 UTC
  To: linux-mm, linux-kernel
  Cc: Rik van Riel, Muchun Song, Andrew Morton, peterx, James Houghton,
	Nadav Amit, Andrea Arcangeli, David Hildenbrand, Miaohe Lin,
	Mike Kravetz

Based on latest mm-unstable (96aa38b69507).

This can be seen as a follow-up to Mike's recent hugetlb vma lock series
for pmd unsharing, so this series also depends on that one.  Hopefully
this series makes the resolution for pmd unsharing more complete.

PS: so far no one has strongly ACKed this, so let me keep the RFC tag.
But I'm already more confident in this one than in many of the RFCs I've
posted.

PS2: there are a lot of changes compared to rfcv1, so I'm not including a
changelog.  The whole idea is still the same, though.

Problem
=======

huge_pte_offset() is a major helper used by hugetlb code paths to walk a
hugetlb pgtable.  It's used almost everywhere, since the walk is needed
even before taking the pgtable lock.

huge_pte_offset() is always called with the mmap lock held, either for
read or for write.

For normal memory types that's sufficient, since any pgtable removal
requires the mmap write lock (e.g. munmap or mm destruction).  However,
hugetlb has the pmd unshare feature, which means the pgtable page we're
walking can be freed from under us in the middle of the walk (after an
unshare, the freed page can only be the huge PUD page which contains 512
huge pmd entries, with the vma mapped VM_SHARED).  This is possible
because even though freeing the pgtable page requires the mmap write
lock, that write lock belongs to the pgtable owner's mm: it doesn't help
when we're walking another mm's pgtable, so we're still at risk even
while holding current->mm's mmap lock.
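
For example, nothing in a pattern like the following stops the race
(illustrative only; "other_mm", "addr" and "h" are made up for the
example):

	pte_t *pte;
	pte_t entry;

	/*
	 * Walk another mm's hugetlb pgtable.  We only hold current->mm's
	 * mmap lock, which does not pin other_mm's pgtable pages.
	 */
	pte = huge_pte_offset(other_mm, addr, huge_page_size(h));
	if (pte)
		/* The shared pmd page may already be freed here */
		entry = huge_ptep_get(pte);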

The recent work from Mike on the vma lock can resolve most of this
already.  It works by forbidding pmd unsharing while the lock is held, so
there's no further risk of the pgtable page being freed.  It means that
if we can take the vma lock around all huge_pte_offset() callers, the
walks will be safe.
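
As a minimal sketch of that pattern (hugetlb_vma_lock_read() and
hugetlb_vma_unlock_read() come from Mike's series; error handling
omitted):

	hugetlb_vma_lock_read(vma);
	pte = huge_pte_offset(vma->vm_mm, addr, huge_page_size(h));
	if (pte) {
		ptl = huge_pte_lock(h, vma->vm_mm, pte);
		/*
		 * pmd unshare is forbidden while the vma lock is held,
		 * so "pte" stays valid here.
		 */
		spin_unlock(ptl);
	}
	hugetlb_vma_unlock_read(vma);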

There are already a bunch of callers converted that way in the latest
mm-unstable, but also quite a few that we didn't convert, for various
reasons.  E.g., the vma lock may not be applicable in contexts that are
not allowed to sleep, like FOLL_NOWAIT.  Or take huge_pmd_share(): it's a
tricky user of huge_pte_offset(), because even if we took the vma lock,
we'd be walking another mm's vma!  Taking the vma lock on all those vmas
is probably not going to work.

I have no report showing that such a race can actually be triggered, but
code-wise I see nothing that stops it from happening.  This series tries
to resolve that problem.

Resolution
==========

What this series proposes, besides using the vma lock, is that we can
also use other ways to protect the pgtable page from being freed from
under us in the huge_pte_offset() context.  The idea is similar to RCU
fast-gup.  Note that fast-gup is safe regarding pmd unsharing even
without the vma lock, because fast-gup relies on RCU to protect walking
any pgtable page, including another mm's.  So fast-gup will never hit a
freed page even if pmd sharing is possible.

Applying the same idea to huge_pte_offset() means that, with proper RCU
protection, the pte_t * pointer returned from huge_pte_offset() will also
always be safe to access and dereference, along with the pgtable lock
that is bound to the pgtable page.  Note that RCU only works to protect
pgtables if MMU_GATHER_RCU_TABLE_FREE=y; otherwise we need to disable
irqs instead.  Of course, the whole locking is not needed at all if pmd
sharing is not possible, e.g. on private hugetlb mappings.
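
As a sketch of patch 4 (the interface names match the series; take the
bodies as illustrative, the real patch is what counts):

	static inline void hugetlb_walker_lock(void)
	{
		if (IS_ENABLED(CONFIG_MMU_GATHER_RCU_TABLE_FREE))
			rcu_read_lock();
		else
			local_irq_disable();
	}

	static inline void hugetlb_walker_unlock(void)
	{
		if (IS_ENABLED(CONFIG_MMU_GATHER_RCU_TABLE_FREE))
			rcu_read_unlock();
		else
			local_irq_enable();
	}

Either way, any pgtable page observed inside the critical section cannot
be freed and reused until the section ends.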

Patch Layout
============

Patch 1-3:         cleanup, or dependency of the follow up patches
Patch 4:           the core patch to introduce hugetlb walker lock
Patch 5-11:        each patch resolves one possible race condition
Patch 12:          introduce hugetlb_walk() to replace huge_pte_offset()
                   (sketched below)
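
For reference, hugetlb_walk() from patch 12 boils down to a wrapper like
the following (lock assertions simplified into a comment; see the patch
for the real checks):

	static inline pte_t *
	hugetlb_walk(struct vm_area_struct *vma, unsigned long addr,
		     unsigned long sz)
	{
		/*
		 * The caller must hold either the vma lock or the
		 * hugetlb walker lock across the walk.
		 */
		return huge_pte_offset(vma->vm_mm, addr, sz);
	}

Funneling all the walkers through one helper also gives a single place to
assert that locking rule.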

Tests
=====

So far only lightly tested with the hugetlb kselftests, including uffd.

Comments welcome, thanks.

Peter Xu (12):
  mm/hugetlb: Let vma_offset_start() to return start
  mm/hugetlb: Move swap entry handling into vma lock for fault
  mm/hugetlb: Don't wait for migration entry during follow page
  mm/hugetlb: Add pgtable walker lock
  mm/hugetlb: Make userfaultfd_huge_must_wait() safe to pmd unshare
  mm/hugetlb: Protect huge_pmd_share() with walker lock
  mm/hugetlb: Use hugetlb walker lock in hugetlb_follow_page_mask()
  mm/hugetlb: Use hugetlb walker lock in follow_hugetlb_page()
  mm/hugetlb: Use hugetlb walker lock in hugetlb_vma_maps_page()
  mm/hugetlb: Use hugetlb walker lock in walk_hugetlb_range()
  mm/hugetlb: Use hugetlb walker lock in page_vma_mapped_walk()
  mm/hugetlb: Introduce hugetlb_walk()

 arch/s390/mm/gmap.c      |   2 +
 fs/hugetlbfs/inode.c     |  41 +++++++-------
 fs/proc/task_mmu.c       |   2 +
 fs/userfaultfd.c         |  24 ++++++---
 include/linux/hugetlb.h  | 112 +++++++++++++++++++++++++++++++++++++++
 include/linux/pagewalk.h |   9 +++-
 include/linux/rmap.h     |   4 ++
 mm/hugetlb.c             |  97 +++++++++++++++++----------------
 mm/page_vma_mapped.c     |   7 ++-
 mm/pagewalk.c            |   6 +--
 10 files changed, 224 insertions(+), 80 deletions(-)

-- 
2.37.3

