From: John Hubbard <jhubbard@nvidia.com>
To: Yang Shi <shy828301@gmail.com>,
	david@redhat.com, peterx@redhat.com,
	kirill.shutemov@linux.intel.com, jgg@nvidia.com,
	hughd@google.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse
Date: Sun, 4 Sep 2022 15:29:04 -0700
Message-ID: <e6ad1084-c301-9f11-1fa7-7614bf859aaf@nvidia.com>
In-Reply-To: <20220901222707.477402-1-shy828301@gmail.com>

On 9/1/22 15:27, Yang Shi wrote:
> Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
> introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
> sufficient to handle concurrent GUP-fast in all cases; it only handles
> traditional IPI-based GUP-fast correctly.  On architectures that send
> an IPI broadcast on TLB flush, it works as expected.  But on
> architectures that do not use an IPI to broadcast the TLB flush, the
> following race is possible:
> 
>    CPU A                                          CPU B
> THP collapse                                     fast GUP
>                                               gup_pmd_range() <-- see valid pmd
>                                                   gup_pte_range() <-- work on pte
> pmdp_collapse_flush() <-- clear pmd and flush
> __collapse_huge_page_isolate()
>     check page pinned <-- before GUP bump refcount
>                                                       pin the page
>                                                       check PTE <-- no change
> __collapse_huge_page_copy()
>     copy data to huge page
>     ptep_clear()
> install huge pmd for the huge page
>                                                       return the stale page
> discard the stale page

Hi Yang,

Thanks for taking the trouble to write down these notes. I always
forget which race we are dealing with, and this is a great help. :)

More...

> 
> The race can be fixed by checking whether the PMD has changed after
> taking the page pin in fast GUP, just as is already done for the PTE.
> If the PMD has changed, a parallel THP collapse may be underway, so
> GUP should back off.
> 
> Also update the stale comment about serializing against fast GUP in
> khugepaged.
> 
> Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
>  mm/gup.c        | 30 ++++++++++++++++++++++++------
>  mm/khugepaged.c | 10 ++++++----
>  2 files changed, 30 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index f3fc1f08d90c..4365b2811269 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2380,8 +2380,9 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
>  }
>  
>  #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
> -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> -			 unsigned int flags, struct page **pages, int *nr)
> +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> +			 unsigned long end, unsigned int flags,
> +			 struct page **pages, int *nr)
>  {
>  	struct dev_pagemap *pgmap = NULL;
>  	int nr_start = *nr, ret = 0;
> @@ -2423,7 +2424,23 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  			goto pte_unmap;
>  		}
>  
> -		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> +		/*
> +		 * THP collapse conceptually does:
> +		 *   1. Clear and flush PMD
> +		 *   2. Check the base page refcount
> +		 *   3. Copy data to huge page
> +		 *   4. Clear PTE
> +		 *   5. Discard the base page
> +		 *
> +		 * So fast GUP may race with THP collapse then pin and
> +		 * return an old page since TLB flush is no longer sufficient
> +		 * to serialize against fast GUP.
> +		 *
> +		 * Check PMD, if it is changed just back off since it
> +		 * means there may be parallel THP collapse.
> +		 */

As I mentioned in the other thread, it would be a nice touch to move
such discussion into the comment header.
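
Something like the following, perhaps. Wording is entirely up to you, of
course; this is just a sketch of what I mean by a comment header:

/*
 * THP collapse conceptually does:
 *   1. Clear and flush the PMD
 *   2. Check the refcount of each base page
 *   3. Copy data to the huge page
 *   4. Clear the PTEs
 *   5. Discard the base pages
 *
 * On architectures that do not broadcast the TLB flush via IPI, the
 * flush in step 1 does not serialize against fast GUP. So after
 * pinning a page, re-check that the PMD (as well as the PTE) is
 * unchanged, and back off if it is not.
 */
static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
			 unsigned long end, unsigned int flags,
			 struct page **pages, int *nr)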

> +		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
> +		    unlikely(pte_val(pte) != pte_val(*ptep))) {


Those *pmdp and *ptep reads should use READ_ONCE(), because this whole
lockless house of cards may fall apart if we try reading the page table
values without it.

That's a rather vague statement, admittedly. To make the claim more
precise: each READ_ONCE() should be paired with the corresponding page
table write elsewhere, so the ordering being relied upon is visible in
the code.
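
For example, something along these lines (untested, just to illustrate
the idea):

	if (unlikely(pmd_val(pmd) != pmd_val(READ_ONCE(*pmdp))) ||
	    unlikely(pte_val(pte) != pte_val(READ_ONCE(*ptep)))) {
		gup_put_folio(folio, 1, flags);
		goto pte_unmap;
	}

...with the paired page table writes being, for example, the
pmdp_collapse_flush() and ptep_clear() calls on the khugepaged side.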


>  			gup_put_folio(folio, 1, flags);
>  			goto pte_unmap;
>  		}
> @@ -2470,8 +2487,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>   * get_user_pages_fast_only implementation that can pin pages. Thus it's still
>   * useful to have gup_huge_pmd even if we can't operate on ptes.
>   */
> -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> -			 unsigned int flags, struct page **pages, int *nr)
> +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> +			 unsigned long end, unsigned int flags,
> +			 struct page **pages, int *nr)
>  {
>  	return 0;
>  }
> @@ -2791,7 +2809,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
>  			if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
>  					 PMD_SHIFT, next, flags, pages, nr))
>  				return 0;
> -		} else if (!gup_pte_range(pmd, addr, next, flags, pages, nr))
> +		} else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr))
>  			return 0;
>  	} while (pmdp++, addr = next, addr != end);
>  
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 2d74cf01f694..518b49095db3 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1049,10 +1049,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  
>  	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
>  	/*
> -	 * After this gup_fast can't run anymore. This also removes
> -	 * any huge TLB entry from the CPU so we won't allow
> -	 * huge and small TLB entries for the same virtual address
> -	 * to avoid the risk of CPU bugs in that area.
> +	 * This removes any huge TLB entry from the CPU so we won't allow
> +	 * huge and small TLB entries for the same virtual address to
> +	 * avoid the risk of CPU bugs in that area.
> +	 *
> +	 * Parallel fast GUP is fine since fast GUP will back off when
> +	 * it detects PMD is changed.
>  	 */
>  	_pmd = pmdp_collapse_flush(vma, address, pmd);

To follow up on David Hildenbrand's note about this in the nearby thread:
I'm also not sure whether pmdp_collapse_flush() implies a memory barrier
on all arches. It definitely does an atomic op with a return value on
x86, which implies a full barrier there, but that's just one arch.
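
For reference, the generic fallback in mm/pgtable-generic.c is roughly
the following (quoting from memory, so please double-check):

pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
			  pmd_t *pmdp)
{
	pmd_t pmd;

	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
	/* The generic pmdp_huge_get_and_clear() is a plain read-then-clear */
	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
	/* Collapse entails shooting down PTEs, not the PMD */
	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return pmd;
}

...and nothing in there obviously implies a full barrier, unless the
arch overrides one of those helpers.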


thanks,

-- 
John Hubbard
NVIDIA

>  	spin_unlock(pmd_ptl);


