* [v2 PATCH 1/2] mm: gup: fix the fast GUP race against THP collapse
From: Yang Shi @ 2022-09-07 18:01 UTC
  To: david, peterx, kirill.shutemov, jhubbard, jgg, hughd, akpm, aneesh.kumar
  Cc: shy828301, linux-mm, linuxppc-dev, linux-kernel

Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
sufficient to handle concurrent GUP-fast in all cases; it only handles
traditional IPI-based GUP-fast correctly.  On architectures that send an
IPI broadcast on TLB flush, it works as expected.  But on architectures
that do not use an IPI to broadcast the TLB flush, the following race is
possible:

   CPU A                                          CPU B
THP collapse                                     fast GUP
                                              gup_pmd_range() <-- see valid pmd
                                                  gup_pte_range() <-- work on pte
pmdp_collapse_flush() <-- clear pmd and flush
__collapse_huge_page_isolate()
    check page pinned <-- before GUP bump refcount
                                                      pin the page
                                                      check PTE <-- no change
__collapse_huge_page_copy()
    copy data to huge page
    ptep_clear()
install huge pmd for the huge page
                                                      return the stale page
discard the stale page

The race can be fixed by checking whether the PMD has changed after
taking the page pin in fast GUP, just as is already done for the PTE.  If
the PMD has changed, a parallel THP collapse may be in progress, so GUP
should back off.
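
In code terms, the resulting fast-GUP sequence is roughly the following
(an illustrative sketch only; the authoritative change is the mm/gup.c
diff further below):

	pte = ptep_get_lockless(ptep);			/* snapshot the pte */
	folio = try_grab_folio(page, 1, flags);		/* pin the page */
	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||	/* pmd changed? */
	    unlikely(pte_val(pte) != pte_val(*ptep))) {	/* pte changed? */
		gup_put_folio(folio, 1, flags);		/* back off */
		goto pte_unmap;
	}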

Also update the stale comment about serializing against fast GUP in
khugepaged.

Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
v2: * Incorporated Peter's feedback on the comment wording.
    * Moved the comment right before gup_pte_range() instead of in the
      body of the function, per John.
    * Added patch 2/2 per Aneesh.

 mm/gup.c        | 34 ++++++++++++++++++++++++++++------
 mm/khugepaged.c | 10 ++++++----
 2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f3fc1f08d90c..40aa1c937212 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2380,8 +2380,28 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
 }
 
 #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
-			 unsigned int flags, struct page **pages, int *nr)
+/*
+ * Fast-gup relies on pte change detection to avoid concurrent pgtable
+ * operations.
+ *
+ * To pin the page, fast-gup needs to do below in order:
+ * (1) pin the page (by prefetching pte), then (2) check pte not changed.
+ *
+ * For the rest of pgtable operations where pgtable updates can be racy
+ * with fast-gup, we need to do (1) clear pte, then (2) check whether page
+ * is pinned.
+ *
+ * Above will work for all pte-level operations, including THP split.
+ *
+ * For THP collapse, it's a bit more complicated because fast-gup may be
+ * walking a pgtable page that is being freed (pte is still valid but pmd
+ * can be cleared already).  To avoid race in such condition, we need to
+ * also check pmd here to make sure pmd doesn't change (corresponds to
+ * pmdp_collapse_flush() in the THP collapse code path).
+ */
+static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
+			 unsigned long end, unsigned int flags,
+			 struct page **pages, int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
 	int nr_start = *nr, ret = 0;
@@ -2423,7 +2443,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 			goto pte_unmap;
 		}
 
-		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
+		    unlikely(pte_val(pte) != pte_val(*ptep))) {
 			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
@@ -2470,8 +2491,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
  * useful to have gup_huge_pmd even if we can't operate on ptes.
  */
-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
-			 unsigned int flags, struct page **pages, int *nr)
+static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
+			 unsigned long end, unsigned int flags,
+			 struct page **pages, int *nr)
 {
 	return 0;
 }
@@ -2791,7 +2813,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 			if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
 					 PMD_SHIFT, next, flags, pages, nr))
 				return 0;
-		} else if (!gup_pte_range(pmd, addr, next, flags, pages, nr))
+		} else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr))
 			return 0;
 	} while (pmdp++, addr = next, addr != end);
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2d74cf01f694..518b49095db3 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1049,10 +1049,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 
 	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
 	/*
-	 * After this gup_fast can't run anymore. This also removes
-	 * any huge TLB entry from the CPU so we won't allow
-	 * huge and small TLB entries for the same virtual address
-	 * to avoid the risk of CPU bugs in that area.
+	 * This removes any huge TLB entry from the CPU so we won't allow
+	 * huge and small TLB entries for the same virtual address to
+	 * avoid the risk of CPU bugs in that area.
+	 *
+	 * Parallel fast GUP is fine since fast GUP will back off when
+	 * it detects PMD is changed.
 	 */
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
-- 
2.26.3



* [v2 PATCH 2/2] powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush
From: Yang Shi @ 2022-09-07 18:01 UTC
  To: david, peterx, kirill.shutemov, jhubbard, jgg, hughd, akpm, aneesh.kumar
  Cc: shy828301, linux-mm, linuxppc-dev, linux-kernel

The IPI broadcast is used to serialize against fast-GUP, but fast-GUP
will move to using RCU instead of disabling local interrupts.  Sending
an IPI is the old-style way of serializing against fast-GUP, although
it still works as expected now.

Fast-GUP now fixes the potential race with THP collapse by checking
whether the PMD has changed, so the IPI broadcast in the radix pmd
collapse flush is no longer necessary.  It is still needed for the
hash TLB, though.
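
For context, the serialization being removed relies on a synchronous
no-op IPI: the call cannot return until every target CPU re-enables
interrupts, so any interrupt-disabled fast-GUP walk must have finished
by then.  Roughly (a simplified sketch of the powerpc helper, not the
verbatim source):

	static void do_nothing(void *unused)
	{
	}

	void serialize_against_pte_lookup(struct mm_struct *mm)
	{
		smp_mb();
		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
	}

Once fast-GUP walks page tables under RCU rather than with local
interrupts disabled, the IPI will no longer provide that guarantee; the
PMD re-check added in patch 1/2 is what actually closes the race.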

Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 698274109c91..e712f80fe189 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -937,15 +937,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
 	pmd = *pmdp;
 	pmd_clear(pmdp);
 
-	/*
-	 * pmdp collapse_flush need to ensure that there are no parallel gup
-	 * walk after this call. This is needed so that we can have stable
-	 * page ref count when collapsing a page. We don't allow a collapse page
-	 * if we have gup taken on the page. We can ensure that by sending IPI
-	 * because gup walk happens with IRQ disabled.
-	 */
-	serialize_against_pte_lookup(vma->vm_mm);
-
 	radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
 
 	return pmd;
-- 
2.26.3



* Re: [v2 PATCH 2/2] powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush
From: David Hildenbrand @ 2022-09-07 19:34 UTC
  To: Yang Shi, peterx, kirill.shutemov, jhubbard, jgg, hughd, akpm,
	aneesh.kumar
  Cc: linux-mm, linuxppc-dev, linux-kernel

On 07.09.22 20:01, Yang Shi wrote:
> The IPI broadcast is used to serialize against fast-GUP, but fast-GUP
> will move to using RCU instead of disabling local interrupts.  Sending
> an IPI is the old-style way of serializing against fast-GUP, although
> it still works as expected now.
> 
> Fast-GUP now fixes the potential race with THP collapse by checking
> whether the PMD has changed, so the IPI broadcast in the radix pmd
> collapse flush is no longer necessary.  It is still needed for the
> hash TLB, though.
> 
> Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
>   arch/powerpc/mm/book3s64/radix_pgtable.c | 9 ---------
>   1 file changed, 9 deletions(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 698274109c91..e712f80fe189 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -937,15 +937,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
>   	pmd = *pmdp;
>   	pmd_clear(pmdp);
>   
> -	/*
> -	 * pmdp collapse_flush need to ensure that there are no parallel gup
> -	 * walk after this call. This is needed so that we can have stable
> -	 * page ref count when collapsing a page. We don't allow a collapse page
> -	 * if we have gup taken on the page. We can ensure that by sending IPI
> -	 * because gup walk happens with IRQ disabled.
> -	 */
> -	serialize_against_pte_lookup(vma->vm_mm);
> -
>   	radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
>   
>   	return pmd;

Makes sense to me

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [v2 PATCH 2/2] powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush
From: Peter Xu @ 2022-09-07 20:12 UTC
  To: Yang Shi
  Cc: david, kirill.shutemov, jhubbard, jgg, hughd, akpm, aneesh.kumar,
	linux-mm, linuxppc-dev, linux-kernel

On Wed, Sep 07, 2022 at 11:01:44AM -0700, Yang Shi wrote:
> The IPI broadcast is used to serialize against fast-GUP, but fast-GUP
> will move to using RCU instead of disabling local interrupts.  Sending
> an IPI is the old-style way of serializing against fast-GUP, although
> it still works as expected now.
> 
> Fast-GUP now fixes the potential race with THP collapse by checking
> whether the PMD has changed, so the IPI broadcast in the radix pmd
> collapse flush is no longer necessary.  It is still needed for the
> hash TLB, though.
> 
> Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Signed-off-by: Yang Shi <shy828301@gmail.com>

Acked-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



* Re: [v2 PATCH 1/2] mm: gup: fix the fast GUP race against THP collapse
From: Andrew Morton @ 2022-09-07 21:22 UTC
  To: Yang Shi
  Cc: david, peterx, kirill.shutemov, jhubbard, jgg, hughd,
	aneesh.kumar, linux-mm, linuxppc-dev, linux-kernel

On Wed,  7 Sep 2022 11:01:43 -0700 Yang Shi <shy828301@gmail.com> wrote:

> Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
> introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
> sufficient to handle concurrent GUP-fast in all cases; it only handles
> traditional IPI-based GUP-fast correctly.  On architectures that send an
> IPI broadcast on TLB flush, it works as expected.  But on architectures
> that do not use an IPI to broadcast the TLB flush, the following race is
> possible:
> 
>    CPU A                                          CPU B
> THP collapse                                     fast GUP
>                                               gup_pmd_range() <-- see valid pmd
>                                                   gup_pte_range() <-- work on pte
> pmdp_collapse_flush() <-- clear pmd and flush
> __collapse_huge_page_isolate()
>     check page pinned <-- before GUP bump refcount
>                                                       pin the page
>                                                       check PTE <-- no change
> __collapse_huge_page_copy()
>     copy data to huge page
>     ptep_clear()
> install huge pmd for the huge page
>                                                       return the stale page
> discard the stale page
> 
> The race can be fixed by checking whether the PMD has changed after
> taking the page pin in fast GUP, just as is already done for the PTE.  If
> the PMD has changed, a parallel THP collapse may be in progress, so GUP
> should back off.
> 
> Also update the stale comment about serializing against fast GUP in
> khugepaged.
> 
> Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")

Is this not worth a -stable backport?


* Re: [v2 PATCH 1/2] mm: gup: fix the fast GUP race against THP collapse
From: Yang Shi @ 2022-09-07 21:23 UTC
  To: Andrew Morton
  Cc: david, peterx, kirill.shutemov, jhubbard, jgg, hughd,
	aneesh.kumar, linux-mm, linuxppc-dev, linux-kernel

On Wed, Sep 7, 2022 at 2:22 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed,  7 Sep 2022 11:01:43 -0700 Yang Shi <shy828301@gmail.com> wrote:
>
> > Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
> > introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
> > sufficient to handle concurrent GUP-fast in all cases; it only handles
> > traditional IPI-based GUP-fast correctly.  On architectures that send an
> > IPI broadcast on TLB flush, it works as expected.  But on architectures
> > that do not use an IPI to broadcast the TLB flush, the following race is
> > possible:
> >
> >    CPU A                                          CPU B
> > THP collapse                                     fast GUP
> >                                               gup_pmd_range() <-- see valid pmd
> >                                                   gup_pte_range() <-- work on pte
> > pmdp_collapse_flush() <-- clear pmd and flush
> > __collapse_huge_page_isolate()
> >     check page pinned <-- before GUP bump refcount
> >                                                       pin the page
> >                                                       check PTE <-- no change
> > __collapse_huge_page_copy()
> >     copy data to huge page
> >     ptep_clear()
> > install huge pmd for the huge page
> >                                                       return the stale page
> > discard the stale page
> >
> > The race can be fixed by checking whether the PMD has changed after
> > taking the page pin in fast GUP, just as is already done for the PTE.  If
> > the PMD has changed, a parallel THP collapse may be in progress, so GUP
> > should back off.
> >
> > Also update the stale comment about serializing against fast GUP in
> > khugepaged.
> >
> > Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
>
> Is this not worth a -stable backport?

Yes, I think it is.
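
For reference, such a backport is typically requested by adding a tag of
the following form next to the Fixes: line, which lets the stable
maintainers pick the patch up once it lands in mainline:

	Cc: <stable@vger.kernel.org>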


* Re: [v2 PATCH 1/2] mm: gup: fix the fast GUP race against THP collapse
From: John Hubbard @ 2022-09-08  0:06 UTC
  To: Yang Shi, david, peterx, kirill.shutemov, jgg, hughd, akpm, aneesh.kumar
  Cc: linux-mm, linuxppc-dev, linux-kernel

On 9/7/22 11:01, Yang Shi wrote:
> Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
> introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
> sufficient to handle concurrent GUP-fast in all cases; it only handles
> traditional IPI-based GUP-fast correctly.  On architectures that send an
> IPI broadcast on TLB flush, it works as expected.  But on architectures
> that do not use an IPI to broadcast the TLB flush, the following race is
> possible:
> 
>     CPU A                                          CPU B
> THP collapse                                     fast GUP
>                                                gup_pmd_range() <-- see valid pmd
>                                                    gup_pte_range() <-- work on pte
> pmdp_collapse_flush() <-- clear pmd and flush
> __collapse_huge_page_isolate()
>      check page pinned <-- before GUP bump refcount
>                                                        pin the page
>                                                        check PTE <-- no change
> __collapse_huge_page_copy()
>      copy data to huge page
>      ptep_clear()
> install huge pmd for the huge page
>                                                        return the stale page
> discard the stale page
> 
> The race can be fixed by checking whether the PMD has changed after
> taking the page pin in fast GUP, just as is already done for the PTE.  If
> the PMD has changed, a parallel THP collapse may be in progress, so GUP
> should back off.
> 
> Also update the stale comment about serializing against fast GUP in
> khugepaged.
> 
> Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
> Acked-by: David Hildenbrand <david@redhat.com>
> Acked-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
> v2: * Incorporated Peter's feedback on the comment wording.
>      * Moved the comment right before gup_pte_range() instead of in the
>        body of the function, per John.
>      * Added patch 2/2 per Aneesh.
> 
>   mm/gup.c        | 34 ++++++++++++++++++++++++++++------
>   mm/khugepaged.c | 10 ++++++----
>   2 files changed, 34 insertions(+), 10 deletions(-)
> 

Looks good.

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
-- 
John Hubbard
NVIDIA

> diff --git a/mm/gup.c b/mm/gup.c
> index f3fc1f08d90c..40aa1c937212 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2380,8 +2380,28 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
>   }
>   
>   #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
> -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> -			 unsigned int flags, struct page **pages, int *nr)
> +/*
> + * Fast-gup relies on pte change detection to avoid concurrent pgtable
> + * operations.
> + *
> + * To pin the page, fast-gup needs to do below in order:
> + * (1) pin the page (by prefetching pte), then (2) check pte not changed.
> + *
> + * For the rest of pgtable operations where pgtable updates can be racy
> + * with fast-gup, we need to do (1) clear pte, then (2) check whether page
> + * is pinned.
> + *
> + * Above will work for all pte-level operations, including THP split.
> + *
> + * For THP collapse, it's a bit more complicated because fast-gup may be
> + * walking a pgtable page that is being freed (pte is still valid but pmd
> + * can be cleared already).  To avoid race in such condition, we need to
> + * also check pmd here to make sure pmd doesn't change (corresponds to
> + * pmdp_collapse_flush() in the THP collapse code path).
> + */
> +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> +			 unsigned long end, unsigned int flags,
> +			 struct page **pages, int *nr)
>   {
>   	struct dev_pagemap *pgmap = NULL;
>   	int nr_start = *nr, ret = 0;
> @@ -2423,7 +2443,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>   			goto pte_unmap;
>   		}
>   
> -		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> +		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
> +		    unlikely(pte_val(pte) != pte_val(*ptep))) {
>   			gup_put_folio(folio, 1, flags);
>   			goto pte_unmap;
>   		}
> @@ -2470,8 +2491,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>    * get_user_pages_fast_only implementation that can pin pages. Thus it's still
>    * useful to have gup_huge_pmd even if we can't operate on ptes.
>    */
> -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> -			 unsigned int flags, struct page **pages, int *nr)
> +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> +			 unsigned long end, unsigned int flags,
> +			 struct page **pages, int *nr)
>   {
>   	return 0;
>   }
> @@ -2791,7 +2813,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
>   			if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
>   					 PMD_SHIFT, next, flags, pages, nr))
>   				return 0;
> -		} else if (!gup_pte_range(pmd, addr, next, flags, pages, nr))
> +		} else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr))
>   			return 0;
>   	} while (pmdp++, addr = next, addr != end);
>   
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 2d74cf01f694..518b49095db3 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1049,10 +1049,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>   
>   	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
>   	/*
> -	 * After this gup_fast can't run anymore. This also removes
> -	 * any huge TLB entry from the CPU so we won't allow
> -	 * huge and small TLB entries for the same virtual address
> -	 * to avoid the risk of CPU bugs in that area.
> +	 * This removes any huge TLB entry from the CPU so we won't allow
> +	 * huge and small TLB entries for the same virtual address to
> +	 * avoid the risk of CPU bugs in that area.
> +	 *
> +	 * Parallel fast GUP is fine since fast GUP will back off when
> +	 * it detects PMD is changed.
>   	 */
>   	_pmd = pmdp_collapse_flush(vma, address, pmd);
>   	spin_unlock(pmd_ptl);



