* [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction
@ 2022-11-28 18:02 Jann Horn
  2022-11-28 18:02 ` [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI Jann Horn
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Jann Horn @ 2022-11-28 18:02 UTC (permalink / raw)
  To: security, Andrew Morton
  Cc: Yang Shi, David Hildenbrand, Peter Xu, John Hubbard,
	linux-kernel, linux-mm

Page table walks on address ranges mapped by VMAs can be done under the mmap
lock, the lock of an anon_vma attached to the VMA, or the lock of the VMA's
address_space. Only one of these needs to be held, and it does not need to
be held in exclusive mode.

Under those circumstances, the rules for concurrent access to page table
entries are:

 - Terminal page table entries (entries that don't point to another page
   table) can be arbitrarily changed under the page table lock, with the
   exception that they always need to be consistent for
   hardware page table walks and lockless_pages_from_mm().
   In particular, they can be changed into non-terminal entries.
 - Non-terminal page table entries (which point to another page table)
   cannot be modified; readers are allowed to READ_ONCE() an entry, verify
   that it is non-terminal, and then assume that its value will stay as-is.

Retracting a page table involves modifying a non-terminal entry, so
page-table-level locks are insufficient to protect against concurrent
page table traversal; it requires taking all the higher-level locks under
which it is possible to start a page walk in the relevant range in
exclusive mode.

The collapse_huge_page() path for anonymous THP already follows this rule,
but the shmem/file THP path was getting it wrong, making it possible for
concurrent rmap-based operations to cause corruption.
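
Schematically, the race being closed here looks roughly like this (a
simplified illustration, not literal kernel code; the exact entry points
and variable handling differ):

    /* CPU 0: rmap-based walk (e.g. an unmap path), holding the
     * i_mmap lock only in *read* mode */
    pmd_t pmdval = READ_ONCE(*pmdp);  /* sees a non-terminal entry */
    /* per the rules above, assumes the PTE table stays valid: */
    pte = pte_offset_map_lock(mm, pmdp, addr, &ptl);

    /* CPU 1: khugepaged retraction, which so far took only the pmd
     * spinlock -- not enough to exclude CPU 0 */
    pmd = pmdp_collapse_flush(vma, addr, pmdp);
    pte_free(mm, pmd_pgtable(pmd));   /* frees the PTE table that CPU 0
                                       * may still be dereferencing */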

Cc: stable@kernel.org
Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Jann Horn <jannh@google.com>
---
v4: added ack by David Hildenbrand

 mm/khugepaged.c | 55 +++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 51 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4734315f79407..674b111a24fa7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1384,16 +1384,37 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 	return SCAN_SUCCEED;
 }
 
+/*
+ * A note about locking:
+ * Trying to take the page table spinlocks would be useless here because those
+ * are only used to synchronize:
+ *
+ *  - modifying terminal entries (ones that point to a data page, not to another
+ *    page table)
+ *  - installing *new* non-terminal entries
+ *
+ * Instead, we need roughly the same kind of protection as free_pgtables() or
+ * mm_take_all_locks() (but only for a single VMA):
+ * The mmap lock together with this VMA's rmap locks covers all paths towards
+ * the page table entries we're messing with here, except for hardware page
+ * table walks and lockless_pages_from_mm().
+ */
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 				  unsigned long addr, pmd_t *pmdp)
 {
-	spinlock_t *ptl;
 	pmd_t pmd;
 
 	mmap_assert_write_locked(mm);
-	ptl = pmd_lock(vma->vm_mm, pmdp);
+	if (vma->vm_file)
+		lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
+	/*
+	 * All anon_vmas attached to the VMA have the same root and are
+	 * therefore locked by the same lock.
+	 */
+	if (vma->anon_vma)
+		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
+
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
-	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
@@ -1444,6 +1465,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
 		return SCAN_VMA_CHECK;
 
+	/*
+	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+	 * that got written to. Without this, we'd have to also lock the
+	 * anon_vma if one exists.
+	 */
+	if (vma->anon_vma)
+		return SCAN_VMA_CHECK;
+
 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
 	if (userfaultfd_wp(vma))
 		return SCAN_PTE_UFFD_WP;
@@ -1477,6 +1506,20 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
+	/*
+	 * We need to lock the mapping so that from here on, only GUP-fast and
+	 * hardware page walks can access the parts of the page tables that
+	 * we're operating on.
+	 * See collapse_and_free_pmd().
+	 */
+	i_mmap_lock_write(vma->vm_file->f_mapping);
+
+	/*
+	 * This spinlock should be unnecessary: Nobody else should be accessing
+	 * the page tables under spinlock protection here, only
+	 * lockless_pages_from_mm() and the hardware page walker can access page
+	 * tables while all the high-level locks are held in write mode.
+	 */
 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
 	result = SCAN_FAIL;
 
@@ -1531,6 +1574,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	/* step 4: remove pte entries */
 	collapse_and_free_pmd(mm, vma, haddr, pmd);
 
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
+
 maybe_install_pmd:
 	/* step 5: install pmd entry */
 	result = install_pmd
@@ -1544,6 +1589,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 
 abort:
 	pte_unmap_unlock(start_pte, ptl);
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
 	goto drop_hpage;
 }
 
@@ -1600,7 +1646,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
 		 * An alternative would be drop the check, but check that page
 		 * table is clear before calling pmdp_collapse_flush() under
 		 * ptl. It has higher chance to recover THP for the VMA, but
-		 * has higher cost too.
+		 * has higher cost too. It would also probably require locking
+		 * the anon_vma.
 		 */
 		if (vma->anon_vma) {
 			result = SCAN_PAGE_ANON;

base-commit: eb7081409f94a9a8608593d0fb63a1aa3d6f95d8
-- 
2.38.1.584.g0f3c55d4c2-goog



* [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 18:02 [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction Jann Horn
@ 2022-11-28 18:02 ` Jann Horn
  2022-11-28 19:54   ` Yang Shi
  2022-11-28 18:02 ` [PATCH v4 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths Jann Horn
  2022-11-28 19:48 ` [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction Yang Shi
  2 siblings, 1 reply; 13+ messages in thread
From: Jann Horn @ 2022-11-28 18:02 UTC (permalink / raw)
  To: security, Andrew Morton
  Cc: Yang Shi, David Hildenbrand, Peter Xu, John Hubbard,
	linux-kernel, linux-mm

Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
ensure that the page table was not removed by khugepaged in between.

However, lockless_pages_from_mm() still requires that the page table is not
concurrently freed or reused to store non-PTE data. Otherwise, problems
can occur because:

 - deposited page tables can be freed when a THP page somewhere in the
   mm is removed
 - some architectures store non-PTE information inside deposited page
   tables (see radix__pgtable_trans_huge_deposit())

Additionally, lockless_pages_from_mm() is also somewhat brittle with
regards to page tables being repeatedly moved back and forth, but
that shouldn't be an issue in practice.

Fix it by sending IPIs (if the architecture uses
semi-RCU-style page table freeing) before freeing/reusing page tables.

As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
there are two possible cases:

 1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
    tlb_remove_table_sync_one() to send an IPI to synchronize with
    lockless_pages_from_mm().
 2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
    TLB flushes are already guaranteed to send IPIs.
    tlb_remove_table_sync_one() will do nothing, but we've already
    run pmdp_collapse_flush(), which did a TLB flush, which must have
    involved IPIs.
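
Both cases rely on lockless_pages_from_mm() running with interrupts
disabled, so an IPI cannot be handled by a CPU until it has left the
fastpath. A rough sketch of the two sides (simplified from mm/gup.c and
from this patch, not the literal code):

    /* GUP-fast side (simplified): */
    unsigned long flags;

    local_irq_save(flags);
    /* walk the page tables and pin pages; an IPI cannot be serviced
     * until interrupts are re-enabled below */
    local_irq_restore(flags);

    /* khugepaged side, after this patch: */
    pmd = pmdp_collapse_flush(vma, addr, pmdp); /* clear PMD + TLB flush */
    tlb_remove_table_sync_one();    /* case 1: send IPI; case 2: no-op,
                                     * the flush above already used IPIs */
    pte_free(mm, pmd_pgtable(pmd)); /* safe: no CPU can still be inside
                                     * the fastpath using this table */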

Cc: stable@kernel.org
Fixes: ba76149f47d8 ("thp: khugepaged")
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Jann Horn <jannh@google.com>
---
v4:
 - added ack from David Hildenbrand
 - made commit message more verbose

 include/asm-generic/tlb.h | 4 ++++
 mm/khugepaged.c           | 2 ++
 mm/mmu_gather.c           | 4 +---
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 492dce43236ea..cab7cfebf40bd 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -222,12 +222,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 #define tlb_needs_table_invalidate() (true)
 #endif
 
+void tlb_remove_table_sync_one(void);
+
 #else
 
 #ifdef tlb_needs_table_invalidate
 #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
 #endif
 
+static inline void tlb_remove_table_sync_one(void) { }
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 674b111a24fa7..c3d3ce596bff7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1057,6 +1057,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 	mmu_notifier_invalidate_range_end(&range);
+	tlb_remove_table_sync_one();
 
 	spin_lock(pte_ptl);
 	result =  __collapse_huge_page_isolate(vma, address, pte, cc,
@@ -1415,6 +1416,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
+	tlb_remove_table_sync_one();
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index add4244e5790d..3a2c3f8cad2fe 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -153,7 +153,7 @@ static void tlb_remove_table_smp_sync(void *arg)
 	/* Simply deliver the interrupt */
 }
 
-static void tlb_remove_table_sync_one(void)
+void tlb_remove_table_sync_one(void)
 {
 	/*
 	 * This isn't an RCU grace period and hence the page-tables cannot be
@@ -177,8 +177,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
 
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-static void tlb_remove_table_sync_one(void) { }
-
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
 	__tlb_remove_table_free(batch);
-- 
2.38.1.584.g0f3c55d4c2-goog



* [PATCH v4 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths
  2022-11-28 18:02 [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction Jann Horn
  2022-11-28 18:02 ` [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI Jann Horn
@ 2022-11-28 18:02 ` Jann Horn
  2022-11-28 18:08   ` David Hildenbrand
  2022-11-28 19:56   ` Yang Shi
  2022-11-28 19:48 ` [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction Yang Shi
  2 siblings, 2 replies; 13+ messages in thread
From: Jann Horn @ 2022-11-28 18:02 UTC (permalink / raw)
  To: security, Andrew Morton
  Cc: Yang Shi, David Hildenbrand, Peter Xu, John Hubbard,
	linux-kernel, linux-mm

Any codepath that zaps page table entries must invoke MMU notifiers to
ensure that secondary MMUs (like KVM) don't keep accessing pages which
aren't mapped anymore. Secondary MMUs don't hold their own references to
pages that are mirrored over, so failing to notify them can lead to page
use-after-free.
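
The pattern such a zapping path has to follow is the usual
invalidate_range_start/end bracket. As a generic sketch (the concrete
version for collapse_and_free_pmd() is in the diff below):

    struct mmu_notifier_range range;

    mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
                            start, end);
    mmu_notifier_invalidate_range_start(&range);
    /* ... zap page table entries in [start, end) ... */
    mmu_notifier_invalidate_range_end(&range);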

I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
the security impact of this only came in commit 27e1f8273113 ("khugepaged:
enable collapse pmd for pte-mapped THP"), which actually omitted flushes
for the removal of present PTEs, not just for the removal of empty page
tables.

Cc: stable@kernel.org
Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Jann Horn <jannh@google.com>
---
v4: no changes

 mm/khugepaged.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c3d3ce596bff7..49eb4b4981d88 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1404,6 +1404,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 				  unsigned long addr, pmd_t *pmdp)
 {
 	pmd_t pmd;
+	struct mmu_notifier_range range;
 
 	mmap_assert_write_locked(mm);
 	if (vma->vm_file)
@@ -1415,8 +1416,12 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 	if (vma->anon_vma)
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, addr,
+				addr + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
 	tlb_remove_table_sync_one();
+	mmu_notifier_invalidate_range_end(&range);
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
-- 
2.38.1.584.g0f3c55d4c2-goog



* Re: [PATCH v4 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths
  2022-11-28 18:02 ` [PATCH v4 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths Jann Horn
@ 2022-11-28 18:08   ` David Hildenbrand
  2022-11-28 19:56   ` Yang Shi
  1 sibling, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2022-11-28 18:08 UTC (permalink / raw)
  To: Jann Horn, security, Andrew Morton
  Cc: Yang Shi, Peter Xu, John Hubbard, linux-kernel, linux-mm

On 28.11.22 19:02, Jann Horn wrote:
> Any codepath that zaps page table entries must invoke MMU notifiers to
> ensure that secondary MMUs (like KVM) don't keep accessing pages which
> aren't mapped anymore. Secondary MMUs don't hold their own references to
> pages that are mirrored over, so failing to notify them can lead to page
> use-after-free.
> 
> I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
> ("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
> the security impact of this only came in commit 27e1f8273113 ("khugepaged:
> enable collapse pmd for pte-mapped THP"), which actually omitted flushes
> for the removal of present PTEs, not just for the removal of empty page
> tables.
> 
> Cc: stable@kernel.org
> Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
> Signed-off-by: Jann Horn <jannh@google.com>

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction
  2022-11-28 18:02 [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction Jann Horn
  2022-11-28 18:02 ` [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI Jann Horn
  2022-11-28 18:02 ` [PATCH v4 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths Jann Horn
@ 2022-11-28 19:48 ` Yang Shi
  2 siblings, 0 replies; 13+ messages in thread
From: Yang Shi @ 2022-11-28 19:48 UTC (permalink / raw)
  To: Jann Horn
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
>
> Page table walks on address ranges mapped by VMAs can be done under the mmap
> lock, the lock of an anon_vma attached to the VMA, or the lock of the VMA's
> address_space. Only one of these needs to be held, and it does not need to
> be held in exclusive mode.
>
> Under those circumstances, the rules for concurrent access to page table
> entries are:
>
>  - Terminal page table entries (entries that don't point to another page
>    table) can be arbitrarily changed under the page table lock, with the
>    exception that they always need to be consistent for
>    hardware page table walks and lockless_pages_from_mm().
>    In particular, they can be changed into non-terminal entries.
>  - Non-terminal page table entries (which point to another page table)
>    cannot be modified; readers are allowed to READ_ONCE() an entry, verify
>    that it is non-terminal, and then assume that its value will stay as-is.
>
> Retracting a page table involves modifying a non-terminal entry, so
> page-table-level locks are insufficient to protect against concurrent
> page table traversal; it requires taking all the higher-level locks under
> which it is possible to start a page walk in the relevant range in
> exclusive mode.
>
> The collapse_huge_page() path for anonymous THP already follows this rule,
> but the shmem/file THP path was getting it wrong, making it possible for
> concurrent rmap-based operations to cause corruption.
>
> Cc: stable@kernel.org
> Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
> v4: added ack by David Hildenbrand

Reviewed-by: Yang Shi <shy828301@gmail.com>

>
>  mm/khugepaged.c | 55 +++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 51 insertions(+), 4 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 4734315f79407..674b111a24fa7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1384,16 +1384,37 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
>         return SCAN_SUCCEED;
>  }
>
> +/*
> + * A note about locking:
> + * Trying to take the page table spinlocks would be useless here because those
> + * are only used to synchronize:
> + *
> + *  - modifying terminal entries (ones that point to a data page, not to another
> + *    page table)
> + *  - installing *new* non-terminal entries
> + *
> + * Instead, we need roughly the same kind of protection as free_pgtables() or
> + * mm_take_all_locks() (but only for a single VMA):
> + * The mmap lock together with this VMA's rmap locks covers all paths towards
> + * the page table entries we're messing with here, except for hardware page
> + * table walks and lockless_pages_from_mm().
> + */
>  static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
>                                   unsigned long addr, pmd_t *pmdp)
>  {
> -       spinlock_t *ptl;
>         pmd_t pmd;
>
>         mmap_assert_write_locked(mm);
> -       ptl = pmd_lock(vma->vm_mm, pmdp);
> +       if (vma->vm_file)
> +               lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
> +       /*
> +        * All anon_vmas attached to the VMA have the same root and are
> +        * therefore locked by the same lock.
> +        */
> +       if (vma->anon_vma)
> +               lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
> +
>         pmd = pmdp_collapse_flush(vma, addr, pmdp);
> -       spin_unlock(ptl);
>         mm_dec_nr_ptes(mm);
>         page_table_check_pte_clear_range(mm, addr, pmd);
>         pte_free(mm, pmd_pgtable(pmd));
> @@ -1444,6 +1465,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>         if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
>                 return SCAN_VMA_CHECK;
>
> +       /*
> +        * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
> +        * that got written to. Without this, we'd have to also lock the
> +        * anon_vma if one exists.
> +        */
> +       if (vma->anon_vma)
> +               return SCAN_VMA_CHECK;
> +
>         /* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
>         if (userfaultfd_wp(vma))
>                 return SCAN_PTE_UFFD_WP;
> @@ -1477,6 +1506,20 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>                 goto drop_hpage;
>         }
>
> +       /*
> +        * We need to lock the mapping so that from here on, only GUP-fast and
> +        * hardware page walks can access the parts of the page tables that
> +        * we're operating on.
> +        * See collapse_and_free_pmd().
> +        */
> +       i_mmap_lock_write(vma->vm_file->f_mapping);
> +
> +       /*
> +        * This spinlock should be unnecessary: Nobody else should be accessing
> +        * the page tables under spinlock protection here, only
> +        * lockless_pages_from_mm() and the hardware page walker can access page
> +        * tables while all the high-level locks are held in write mode.
> +        */
>         start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
>         result = SCAN_FAIL;
>
> @@ -1531,6 +1574,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>         /* step 4: remove pte entries */
>         collapse_and_free_pmd(mm, vma, haddr, pmd);
>
> +       i_mmap_unlock_write(vma->vm_file->f_mapping);
> +
>  maybe_install_pmd:
>         /* step 5: install pmd entry */
>         result = install_pmd
> @@ -1544,6 +1589,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>
>  abort:
>         pte_unmap_unlock(start_pte, ptl);
> +       i_mmap_unlock_write(vma->vm_file->f_mapping);
>         goto drop_hpage;
>  }
>
> @@ -1600,7 +1646,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
>                  * An alternative would be drop the check, but check that page
>                  * table is clear before calling pmdp_collapse_flush() under
>                  * ptl. It has higher chance to recover THP for the VMA, but
> -                * has higher cost too.
> +                * has higher cost too. It would also probably require locking
> +                * the anon_vma.
>                  */
>                 if (vma->anon_vma) {
>                         result = SCAN_PAGE_ANON;
>
> base-commit: eb7081409f94a9a8608593d0fb63a1aa3d6f95d8
> --
> 2.38.1.584.g0f3c55d4c2-goog
>


* Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 18:02 ` [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI Jann Horn
@ 2022-11-28 19:54   ` Yang Shi
  2022-11-28 19:56     ` Jann Horn
  0 siblings, 1 reply; 13+ messages in thread
From: Yang Shi @ 2022-11-28 19:54 UTC (permalink / raw)
  To: Jann Horn
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
>
> Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> ensure that the page table was not removed by khugepaged in between.
>
> However, lockless_pages_from_mm() still requires that the page table is not
> concurrently freed or reused to store non-PTE data. Otherwise, problems
> can occur because:
>
>  - deposited page tables can be freed when a THP page somewhere in the
>    mm is removed
>  - some architectures store non-PTE information inside deposited page
>    tables (see radix__pgtable_trans_huge_deposit())
>
> Additionally, lockless_pages_from_mm() is also somewhat brittle with
> regards to page tables being repeatedly moved back and forth, but
> that shouldn't be an issue in practice.
>
> Fix it by sending IPIs (if the architecture uses
> semi-RCU-style page table freeing) before freeing/reusing page tables.
>
> As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> there are two possible cases:
>
>  1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
>     tlb_remove_table_sync_one() to send an IPI to synchronize with
>     lockless_pages_from_mm().
>  2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
>     TLB flushes are already guaranteed to send IPIs.
>     tlb_remove_table_sync_one() will do nothing, but we've already
>     run pmdp_collapse_flush(), which did a TLB flush, which must have
>     involved IPIs.

I'm trying to catch up with the discussion after the holiday break. I
understand you switched from always allocating a new page table page
(we decided before) to sending IPIs to serialize against fast-GUP,
this is fine to me.

So the code now looks like:
    pmdp_collapse_flush()
    sending IPI

But the missing part is how we reached "TLB flushes are already
guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
unset? ARM64 doesn't do it IIRC. Or did I miss something?

>
> Cc: stable@kernel.org
> Fixes: ba76149f47d8 ("thp: khugepaged")
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
> v4:
>  - added ack from David Hildenbrand
>  - made commit message more verbose
>
>  include/asm-generic/tlb.h | 4 ++++
>  mm/khugepaged.c           | 2 ++
>  mm/mmu_gather.c           | 4 +---
>  3 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 492dce43236ea..cab7cfebf40bd 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -222,12 +222,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
>  #define tlb_needs_table_invalidate() (true)
>  #endif
>
> +void tlb_remove_table_sync_one(void);
> +
>  #else
>
>  #ifdef tlb_needs_table_invalidate
>  #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
>  #endif
>
> +static inline void tlb_remove_table_sync_one(void) { }
> +
>  #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 674b111a24fa7..c3d3ce596bff7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1057,6 +1057,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>         _pmd = pmdp_collapse_flush(vma, address, pmd);
>         spin_unlock(pmd_ptl);
>         mmu_notifier_invalidate_range_end(&range);
> +       tlb_remove_table_sync_one();
>
>         spin_lock(pte_ptl);
>         result =  __collapse_huge_page_isolate(vma, address, pte, cc,
> @@ -1415,6 +1416,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
>                 lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
>
>         pmd = pmdp_collapse_flush(vma, addr, pmdp);
> +       tlb_remove_table_sync_one();
>         mm_dec_nr_ptes(mm);
>         page_table_check_pte_clear_range(mm, addr, pmd);
>         pte_free(mm, pmd_pgtable(pmd));
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index add4244e5790d..3a2c3f8cad2fe 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -153,7 +153,7 @@ static void tlb_remove_table_smp_sync(void *arg)
>         /* Simply deliver the interrupt */
>  }
>
> -static void tlb_remove_table_sync_one(void)
> +void tlb_remove_table_sync_one(void)
>  {
>         /*
>          * This isn't an RCU grace period and hence the page-tables cannot be
> @@ -177,8 +177,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
>
>  #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>
> -static void tlb_remove_table_sync_one(void) { }
> -
>  static void tlb_remove_table_free(struct mmu_table_batch *batch)
>  {
>         __tlb_remove_table_free(batch);
> --
> 2.38.1.584.g0f3c55d4c2-goog
>


* Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 19:54   ` Yang Shi
@ 2022-11-28 19:56     ` Jann Horn
  2022-11-28 20:10       ` Yang Shi
  2022-11-28 20:15       ` Peter Xu
  0 siblings, 2 replies; 13+ messages in thread
From: Jann Horn @ 2022-11-28 19:56 UTC (permalink / raw)
  To: Yang Shi
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 8:54 PM Yang Shi <shy828301@gmail.com> wrote:
>
> On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
> >
> > Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> > collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> > ensure that the page table was not removed by khugepaged in between.
> >
> > However, lockless_pages_from_mm() still requires that the page table is not
> > concurrently freed or reused to store non-PTE data. Otherwise, problems
> > can occur because:
> >
> >  - deposited page tables can be freed when a THP page somewhere in the
> >    mm is removed
> >  - some architectures store non-PTE information inside deposited page
> >    tables (see radix__pgtable_trans_huge_deposit())
> >
> > Additionally, lockless_pages_from_mm() is also somewhat brittle with
> > regards to page tables being repeatedly moved back and forth, but
> > that shouldn't be an issue in practice.
> >
> > Fix it by sending IPIs (if the architecture uses
> > semi-RCU-style page table freeing) before freeing/reusing page tables.
> >
> > As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> > there are two possible cases:
> >
> >  1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
> >     tlb_remove_table_sync_one() to send an IPI to synchronize with
> >     lockless_pages_from_mm().
> >  2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
> >     TLB flushes are already guaranteed to send IPIs.
> >     tlb_remove_table_sync_one() will do nothing, but we've already
> >     run pmdp_collapse_flush(), which did a TLB flush, which must have
> >     involved IPIs.
>
> I'm trying to catch up with the discussion after the holiday break. I
> understand you switched from always allocating a new page table page
> (we decided before) to sending IPIs to serialize against fast-GUP,
> this is fine to me.
>
> So the code now looks like:
>     pmdp_collapse_flush()
>     sending IPI
>
> But the missing part is how we reached "TLB flushes are already
> guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
> unset? ARM64 doesn't do it IIRC. Or did I miss something?

From arch/arm64/Kconfig:

select MMU_GATHER_RCU_TABLE_FREE

CONFIG_MMU_GATHER_RCU_TABLE_FREE is not a config option that the user
can freely toggle; it is an option selected by the architecture.
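
It is declared as a promptless bool, roughly like this in mm/Kconfig,
which is what makes it select-only:

    config MMU_GATHER_RCU_TABLE_FREE
            bool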


* Re: [PATCH v4 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths
  2022-11-28 18:02 ` [PATCH v4 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths Jann Horn
  2022-11-28 18:08   ` David Hildenbrand
@ 2022-11-28 19:56   ` Yang Shi
  1 sibling, 0 replies; 13+ messages in thread
From: Yang Shi @ 2022-11-28 19:56 UTC (permalink / raw)
  To: Jann Horn
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
>
> Any codepath that zaps page table entries must invoke MMU notifiers to
> ensure that secondary MMUs (like KVM) don't keep accessing pages which
> aren't mapped anymore. Secondary MMUs don't hold their own references to
> pages that are mirrored over, so failing to notify them can lead to page
> use-after-free.
>
> I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
> ("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
> the security impact of this only came in commit 27e1f8273113 ("khugepaged:
> enable collapse pmd for pte-mapped THP"), which actually omitted flushes
> for the removal of present PTEs, not just for the removal of empty page
> tables.
>
> Cc: stable@kernel.org
> Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
> Signed-off-by: Jann Horn <jannh@google.com>

Reviewed-by: Yang Shi <shy828301@gmail.com>

> ---
> v4: no changes
>
>  mm/khugepaged.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index c3d3ce596bff7..49eb4b4981d88 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1404,6 +1404,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
>                                   unsigned long addr, pmd_t *pmdp)
>  {
>         pmd_t pmd;
> +       struct mmu_notifier_range range;
>
>         mmap_assert_write_locked(mm);
>         if (vma->vm_file)
> @@ -1415,8 +1416,12 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
>         if (vma->anon_vma)
>                 lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
>
> +       mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, addr,
> +                               addr + HPAGE_PMD_SIZE);
> +       mmu_notifier_invalidate_range_start(&range);
>         pmd = pmdp_collapse_flush(vma, addr, pmdp);
>         tlb_remove_table_sync_one();
> +       mmu_notifier_invalidate_range_end(&range);
>         mm_dec_nr_ptes(mm);
>         page_table_check_pte_clear_range(mm, addr, pmd);
>         pte_free(mm, pmd_pgtable(pmd));
> --
> 2.38.1.584.g0f3c55d4c2-goog
>


* Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 19:56     ` Jann Horn
@ 2022-11-28 20:10       ` Yang Shi
  2022-11-28 20:11         ` Jann Horn
  2022-11-28 20:15       ` Peter Xu
  1 sibling, 1 reply; 13+ messages in thread
From: Yang Shi @ 2022-11-28 20:10 UTC (permalink / raw)
  To: Jann Horn
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 11:57 AM Jann Horn <jannh@google.com> wrote:
>
> On Mon, Nov 28, 2022 at 8:54 PM Yang Shi <shy828301@gmail.com> wrote:
> >
> > On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
> > >
> > > Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> > > collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> > > ensure that the page table was not removed by khugepaged in between.
> > >
> > > However, lockless_pages_from_mm() still requires that the page table is not
> > > concurrently freed or reused to store non-PTE data. Otherwise, problems
> > > can occur because:
> > >
> > >  - deposited page tables can be freed when a THP page somewhere in the
> > >    mm is removed
> > >  - some architectures store non-PTE information inside deposited page
> > >    tables (see radix__pgtable_trans_huge_deposit())
> > >
> > > Additionally, lockless_pages_from_mm() is also somewhat brittle with
> > > regards to page tables being repeatedly moved back and forth, but
> > > that shouldn't be an issue in practice.
> > >
> > > Fix it by sending IPIs (if the architecture uses
> > > semi-RCU-style page table freeing) before freeing/reusing page tables.
> > >
> > > As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> > > there are two possible cases:
> > >
> > >  1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
> > >     tlb_remove_table_sync_one() to send an IPI to synchronize with
> > >     lockless_pages_from_mm().
> > >  2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
> > >     TLB flushes are already guaranteed to send IPIs.
> > >     tlb_remove_table_sync_one() will do nothing, but we've already
> > >     run pmdp_collapse_flush(), which did a TLB flush, which must have
> > >     involved IPIs.
> >
> > I'm trying to catch up with the discussion after the holiday break. I
> > understand you switched from always allocating a new page table page
> > (we decided before) to sending IPIs to serialize against fast-GUP,
> > this is fine to me.
> >
> > So the code now looks like:
> >     pmdp_collapse_flush()
> >     sending IPI
> >
> > But the missing part is how we reached "TLB flushes are already
> > guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
> > unset? ARM64 doesn't do it IIRC. Or did I miss something?
>
> From arch/arm64/Kconfig:
>
> select MMU_GATHER_RCU_TABLE_FREE
>
> CONFIG_MMU_GATHER_RCU_TABLE_FREE is not a config option that the user
> can freely toggle; it is an option selected by the architecture.

Aha, I see :-) BTW, shall we revert "mm: gup: fix the fast GUP race
against THP collapse"? It seems not necessary anymore if this approach
is used IIUC.

Reviewed-by: Yang Shi <shy828301@gmail.com>


* Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 20:10       ` Yang Shi
@ 2022-11-28 20:11         ` Jann Horn
  2022-11-28 22:10           ` Yang Shi
  0 siblings, 1 reply; 13+ messages in thread
From: Jann Horn @ 2022-11-28 20:11 UTC (permalink / raw)
  To: Yang Shi
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 9:10 PM Yang Shi <shy828301@gmail.com> wrote:
> On Mon, Nov 28, 2022 at 11:57 AM Jann Horn <jannh@google.com> wrote:
> >
> > On Mon, Nov 28, 2022 at 8:54 PM Yang Shi <shy828301@gmail.com> wrote:
> > >
> > > On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
> > > >
> > > > Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> > > > collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> > > > ensure that the page table was not removed by khugepaged in between.
> > > >
> > > > However, lockless_pages_from_mm() still requires that the page table is not
> > > > concurrently freed or reused to store non-PTE data. Otherwise, problems
> > > > can occur because:
> > > >
> > > >  - deposited page tables can be freed when a THP page somewhere in the
> > > >    mm is removed
> > > >  - some architectures store non-PTE information inside deposited page
> > > >    tables (see radix__pgtable_trans_huge_deposit())
> > > >
> > > > Additionally, lockless_pages_from_mm() is also somewhat brittle with
> > > > regards to page tables being repeatedly moved back and forth, but
> > > > that shouldn't be an issue in practice.
> > > >
> > > > Fix it by sending IPIs (if the architecture uses
> > > > semi-RCU-style page table freeing) before freeing/reusing page tables.
> > > >
> > > > As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> > > > there are two possible cases:
> > > >
> > > >  1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
> > > >     tlb_remove_table_sync_one() to send an IPI to synchronize with
> > > >     lockless_pages_from_mm().
> > > >  2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
> > > >     TLB flushes are already guaranteed to send IPIs.
> > > >     tlb_remove_table_sync_one() will do nothing, but we've already
> > > >     run pmdp_collapse_flush(), which did a TLB flush, which must have
> > > >     involved IPIs.
> > >
> > > I'm trying to catch up with the discussion after the holiday break. I
> > > understand you switched from always allocating a new page table page
> > > (we decided before) to sending IPIs to serialize against fast-GUP,
> > > this is fine to me.
> > >
> > > So the code now looks like:
> > >     pmdp_collapse_flush()
> > >     sending IPI
> > >
> > > But the missing part is how we reached "TLB flushes are already
> > > guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
> > > unset? ARM64 doesn't do it IIRC. Or did I miss something?
> >
> > From arch/arm64/Kconfig:
> >
> > select MMU_GATHER_RCU_TABLE_FREE
> >
> > CONFIG_MMU_GATHER_RCU_TABLE_FREE is not a config option that the user
> > can freely toggle; it is an option selected by the architecture.
>
> Aha, I see :-) BTW, shall we revert "mm: gup: fix the fast GUP race
> against THP collapse"? It seems not necessary anymore if this approach
> is used IIUC.

Yeah, I agree.

> Reviewed-by: Yang Shi <shy828301@gmail.com>

Thanks!


* Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 19:56     ` Jann Horn
  2022-11-28 20:10       ` Yang Shi
@ 2022-11-28 20:15       ` Peter Xu
  1 sibling, 0 replies; 13+ messages in thread
From: Peter Xu @ 2022-11-28 20:15 UTC (permalink / raw)
  To: Jann Horn
  Cc: Yang Shi, security, Andrew Morton, David Hildenbrand,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 08:56:54PM +0100, Jann Horn wrote:
> On Mon, Nov 28, 2022 at 8:54 PM Yang Shi <shy828301@gmail.com> wrote:
> >
> > On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
> > >
> > > Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> > > collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> > > ensure that the page table was not removed by khugepaged in between.
> > >
> > > However, lockless_pages_from_mm() still requires that the page table is not
> > > concurrently freed or reused to store non-PTE data. Otherwise, problems
> > > can occur because:
> > >
> > >  - deposited page tables can be freed when a THP page somewhere in the
> > >    mm is removed
> > >  - some architectures store non-PTE information inside deposited page
> > >    tables (see radix__pgtable_trans_huge_deposit())
> > >
> > > Additionally, lockless_pages_from_mm() is also somewhat brittle with
> > > regards to page tables being repeatedly moved back and forth, but
> > > that shouldn't be an issue in practice.
> > >
> > > Fix it by sending IPIs (if the architecture uses
> > > semi-RCU-style page table freeing) before freeing/reusing page tables.
> > >
> > > As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> > > there are two possible cases:
> > >
> > >  1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
> > >     tlb_remove_table_sync_one() to send an IPI to synchronize with
> > >     lockless_pages_from_mm().
> > >  2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
> > >     TLB flushes are already guaranteed to send IPIs.
> > >     tlb_remove_table_sync_one() will do nothing, but we've already
> > >     run pmdp_collapse_flush(), which did a TLB flush, which must have
> > >     involved IPIs.
> >
> > I'm trying to catch up with the discussion after the holiday break. I
> > understand you switched from always allocating a new page table page
> > (we decided before) to sending IPIs to serialize against fast-GUP,
> > this is fine to me.
> >
> > So the code now looks like:
> >     pmdp_collapse_flush()
> >     sending IPI
> >
> > But the missing part is how we reached "TLB flushes are already
> > guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
> > unset? ARM64 doesn't do it IIRC. Or did I miss something?
> 
> From arch/arm64/Kconfig:
> 
> select MMU_GATHER_RCU_TABLE_FREE
> 
> CONFIG_MMU_GATHER_RCU_TABLE_FREE is not a config option that the user
> can freely toggle; it is an option selected by the architecture.

True.

I think I understand what Yang is confused about, and I had the same
question (asked in the old threads but didn't yet get a confirmation):
arm64 also doesn't use IPIs for its TLB flushes (according to the arm64
version of __flush_tlb_range), so PPC doesn't seem to be the only one.

I mentioned PPC only because I saw the comment in mmu_gather.c:

 * Architectures that do not have this (PPC) need to delay the freeing by some
 * other means, this is that means.

So I think it's obsolete.

In short, IIUC there's just an implicit dependency that any
!MMU_GATHER_RCU_TABLE_FREE arch must use IPIs for its TLB flushes (not
vice versa, hence arm64 can have RCU_TABLE_FREE), or something could be
broken.
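
In other words, the invariant could be summarized as a comment sketch
(my paraphrase, not a literal quote from the tree):

    /*
     * Invariant relied on by lockless GUP:
     *
     * - CONFIG_MMU_GATHER_RCU_TABLE_FREE set: page table freeing is
     *   deferred, and tlb_remove_table_sync_one() sends an explicit
     *   IPI to synchronize with the IRQs-off fastpath.
     * - CONFIG_MMU_GATHER_RCU_TABLE_FREE unset: every TLB flush (e.g.
     *   pmdp_collapse_flush()) must itself broadcast IPIs, so the
     *   flush doubles as the synchronization point.
     */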

-- 
Peter Xu



* Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 20:11         ` Jann Horn
@ 2022-11-28 22:10           ` Yang Shi
  2022-11-29 15:30             ` Jann Horn
  0 siblings, 1 reply; 13+ messages in thread
From: Yang Shi @ 2022-11-28 22:10 UTC (permalink / raw)
  To: Jann Horn
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 12:12 PM Jann Horn <jannh@google.com> wrote:
>
> On Mon, Nov 28, 2022 at 9:10 PM Yang Shi <shy828301@gmail.com> wrote:
> > On Mon, Nov 28, 2022 at 11:57 AM Jann Horn <jannh@google.com> wrote:
> > >
> > > On Mon, Nov 28, 2022 at 8:54 PM Yang Shi <shy828301@gmail.com> wrote:
> > > >
> > > > On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
> > > > >
> > > > > Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> > > > > collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> > > > > ensure that the page table was not removed by khugepaged in between.
> > > > >
> > > > > However, lockless_pages_from_mm() still requires that the page table is not
> > > > > concurrently freed or reused to store non-PTE data. Otherwise, problems
> > > > > can occur because:
> > > > >
> > > > >  - deposited page tables can be freed when a THP page somewhere in the
> > > > >    mm is removed
> > > > >  - some architectures store non-PTE information inside deposited page
> > > > >    tables (see radix__pgtable_trans_huge_deposit())
> > > > >
> > > > > Additionally, lockless_pages_from_mm() is also somewhat brittle with
> > > > > regards to page tables being repeatedly moved back and forth, but
> > > > > that shouldn't be an issue in practice.
> > > > >
> > > > > Fix it by sending IPIs (if the architecture uses
> > > > > semi-RCU-style page table freeing) before freeing/reusing page tables.
> > > > >
> > > > > As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> > > > > there are two possible cases:
> > > > >
> > > > >  1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
> > > > >     tlb_remove_table_sync_one() to send an IPI to synchronize with
> > > > >     lockless_pages_from_mm().
> > > > >  2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
> > > > >     TLB flushes are already guaranteed to send IPIs.
> > > > >     tlb_remove_table_sync_one() will do nothing, but we've already
> > > > >     run pmdp_collapse_flush(), which did a TLB flush, which must have
> > > > >     involved IPIs.
> > > >
> > > > I'm trying to catch up with the discussion after the holiday break. I
> > > > understand you switched from always allocating a new page table page
> > > > (we decided before) to sending IPIs to serialize against fast-GUP,
> > > > this is fine to me.
> > > >
> > > > So the code now looks like:
> > > >     pmdp_collapse_flush()
> > > >     sending IPI
> > > >
> > > > But the missing part is how we reached "TLB flushes are already
> > > > guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
> > > > unset? ARM64 doesn't do it IIRC. Or did I miss something?
> > >
> > > From arch/arm64/Kconfig:
> > >
> > > select MMU_GATHER_RCU_TABLE_FREE
> > >
> > > CONFIG_MMU_GATHER_RCU_TABLE_FREE is not a config option that the user
> > > can freely toggle; it is an option selected by the architecture.
> >
> > Aha, I see :-) BTW, shall we revert "mm: gup: fix the fast GUP race
> > against THP collapse"? It seems not necessary anymore if this approach
> > is used IIUC.
>
> Yeah, I agree.

Since this patch solves two problems, the use-after-free of the data
page (pinned by fast-GUP) and of the page table page, and my patch
will be reverted, could you please cover both issues in this patch's
commit log? I'd like to preserve the description of the issue fixed
by my patch. I think it is helpful to see all the fixed problems
described in one commit instead of having to dig into another,
reverted commit.

>
> > Reviewed-by: Yang Shi <shy828301@gmail.com>
>
> Thanks!


* Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
  2022-11-28 22:10           ` Yang Shi
@ 2022-11-29 15:30             ` Jann Horn
  0 siblings, 0 replies; 13+ messages in thread
From: Jann Horn @ 2022-11-29 15:30 UTC (permalink / raw)
  To: Yang Shi
  Cc: security, Andrew Morton, David Hildenbrand, Peter Xu,
	John Hubbard, linux-kernel, linux-mm

On Mon, Nov 28, 2022 at 11:10 PM Yang Shi <shy828301@gmail.com> wrote:
>
> On Mon, Nov 28, 2022 at 12:12 PM Jann Horn <jannh@google.com> wrote:
> >
> > On Mon, Nov 28, 2022 at 9:10 PM Yang Shi <shy828301@gmail.com> wrote:
> > > On Mon, Nov 28, 2022 at 11:57 AM Jann Horn <jannh@google.com> wrote:
> > > >
> > > > On Mon, Nov 28, 2022 at 8:54 PM Yang Shi <shy828301@gmail.com> wrote:
> > > > >
> > > > > On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@google.com> wrote:
> > > > > >
> > > > > > Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> > > > > > collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> > > > > > ensure that the page table was not removed by khugepaged in between.
> > > > > >
> > > > > > However, lockless_pages_from_mm() still requires that the page table is not
> > > > > > concurrently freed or reused to store non-PTE data. Otherwise, problems
> > > > > > can occur because:
> > > > > >
> > > > > >  - deposited page tables can be freed when a THP page somewhere in the
> > > > > >    mm is removed
> > > > > >  - some architectures store non-PTE information inside deposited page
> > > > > >    tables (see radix__pgtable_trans_huge_deposit())
> > > > > >
> > > > > > Additionally, lockless_pages_from_mm() is also somewhat brittle with
> > > > > > regards to page tables being repeatedly moved back and forth, but
> > > > > > that shouldn't be an issue in practice.
> > > > > >
> > > > > > Fix it by sending IPIs (if the architecture uses
> > > > > > semi-RCU-style page table freeing) before freeing/reusing page tables.
> > > > > >
> > > > > > As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> > > > > > there are two possible cases:
> > > > > >
> > > > > >  1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
> > > > > >     tlb_remove_table_sync_one() to send an IPI to synchronize with
> > > > > >     lockless_pages_from_mm().
> > > > > >  2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
> > > > > >     TLB flushes are already guaranteed to send IPIs.
> > > > > >     tlb_remove_table_sync_one() will do nothing, but we've already
> > > > > >     run pmdp_collapse_flush(), which did a TLB flush, which must have
> > > > > >     involved IPIs.
> > > > >
> > > > > I'm trying to catch up with the discussion after the holiday break. I
> > > > > understand you switched from always allocating a new page table page
> > > > > (we decided before) to sending IPIs to serialize against fast-GUP,
> > > > > this is fine to me.
> > > > >
> > > > > So the code now looks like:
> > > > >     pmdp_collapse_flush()
> > > > >     sending IPI
> > > > >
> > > > > But the missing part is how we reached "TLB flushes are already
> > > > > guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
> > > > > unset? ARM64 doesn't do it IIRC. Or did I miss something?
> > > >
> > > > From arch/arm64/Kconfig:
> > > >
> > > > select MMU_GATHER_RCU_TABLE_FREE
> > > >
> > > > CONFIG_MMU_GATHER_RCU_TABLE_FREE is not a config option that the user
> > > > can freely toggle; it is an option selected by the architecture.
> > >
> > > Aha, I see :-) BTW, shall we revert "mm: gup: fix the fast GUP race
> > > against THP collapse"? It seems not necessary anymore if this approach
> > > is used IIUC.
> >
> > Yeah, I agree.
>
> Since this patch solves two problems, the use-after-free of the data
> page (pinned by fast-GUP) and of the page table page, and my patch
> will be reverted, could you please cover both issues in this patch's
> commit log? I'd like to preserve the description of the issue fixed
> by my patch. I think it is helpful to see all the fixed problems
> described in one commit instead of having to dig into another,
> reverted commit.

OK, I will rewrite the commit message to describe the overall problem,
including the part addressed by your patch.
