From: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>
To: Yang Shi <shy828301@gmail.com>
Cc: Peter Xu <peterx@redhat.com>,
	david@redhat.com, kirill.shutemov@linux.intel.com,
	jhubbard@nvidia.com, jgg@nvidia.com, hughd@google.com,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Subject: Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse
Date: Wed, 7 Sep 2022 10:20:52 +0530	[thread overview]
Message-ID: <92fe7b10-cbcf-6fdb-af23-4cb2f314e612@linux.ibm.com> (raw)
In-Reply-To: <CAHbLzkohKvOFyfsVr=ry8Goi6kgxh9ig84FX0+pY4qzL4i0xWg@mail.gmail.com>

On 9/7/22 12:37 AM, Yang Shi wrote:
> On Mon, Sep 5, 2022 at 1:56 AM Aneesh Kumar K.V
> <aneesh.kumar@linux.ibm.com> wrote:
>>
>> Yang Shi <shy828301@gmail.com> writes:
>>
>>>
>>> On Fri, Sep 2, 2022 at 9:00 AM Peter Xu <peterx@redhat.com> wrote:
>>>>
>>>> On Thu, Sep 01, 2022 at 04:50:45PM -0700, Yang Shi wrote:
>>>>> On Thu, Sep 1, 2022 at 4:26 PM Peter Xu <peterx@redhat.com> wrote:
>>>>>>
>>>>>> Hi, Yang,
>>>>>>
>>>>>> On Thu, Sep 01, 2022 at 03:27:07PM -0700, Yang Shi wrote:
>>>>>>> Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
>>>>>>> introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
>>>>>>> sufficient to handle concurrent GUP-fast in all cases; it only handles
>>>>>>> traditional IPI-based GUP-fast correctly.
>>>>>>
>>>>>> If the TLB flush (or, IPI broadcast) used to work to protect against gup-fast,
>>>>>> I'm kind of confused why it's not sufficient even with RCU gup?  Won't
>>>>>> that keep working as long as interrupts are disabled (which the current
>>>>>> fast-gup will still do)?
>>>>>
>>>>> Actually the wording was copied from David's commit log for his
>>>>> PageAnonExclusive fix. My understanding is that the IPI broadcast still
>>>>> works, but it may not be supported by all architectures and is not
>>>>> preferred anymore. So we should avoid depending on the IPI broadcast IIUC.
>>>>>
>>>>>>
>>>>>> IIUC the issue is you suspect not all archs correctly implemented
>>>>>> pmdp_collapse_flush(), or am I wrong?
>>>>>
>>>>> This is a possible fix, please see below for details.
>>>>>
>>>>>>
>>>>>>> On architectures that send
>>>>>>> an IPI broadcast on TLB flush, it works as expected.  But on the
>>>>>>> architectures that do not use IPI to broadcast TLB flush, it may have
>>>>>>> the below race:
>>>>>>>
>>>>>>>    CPU A                                          CPU B
>>>>>>> THP collapse                                     fast GUP
>>>>>>>                                               gup_pmd_range() <-- see valid pmd
>>>>>>>                                                   gup_pte_range() <-- work on pte
>>>>>>> pmdp_collapse_flush() <-- clear pmd and flush
>>>>>>> __collapse_huge_page_isolate()
>>>>>>>     check page pinned <-- before GUP bump refcount
>>>>>>>                                                       pin the page
>>>>>>>                                                       check PTE <-- no change
>>>>>>> __collapse_huge_page_copy()
>>>>>>>     copy data to huge page
>>>>>>>     ptep_clear()
>>>>>>> install huge pmd for the huge page
>>>>>>>                                                       return the stale page
>>>>>>> discard the stale page
>>>>>>>
>>>>>>> The race could be fixed by checking whether PMD is changed or not after
>>>>>>> taking the page pin in fast GUP, just like what it does for PTE.  If the
>>>>>>> PMD is changed it means there may be parallel THP collapse, so GUP
>>>>>>> should back off.
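
(A rough sketch of the recheck being described, assuming the helper names
currently used in mm/gup.c (try_grab_folio()/gup_put_folio()) and the local
variables of gup_pte_range(); illustrative only, not the exact upstream diff.
gup_pte_range() would receive the pmd value that gup_pmd_range() read and
re-read *pmdp once the page is pinned:)

	folio = try_grab_folio(page, 1, flags);
	if (!folio)
		goto pte_unmap;

	/*
	 * A parallel THP collapse may have cleared (and refilled) the pmd
	 * since gup_pmd_range() read it, in which case the pte we pinned
	 * through belongs to a page table that is going away.  Drop the
	 * pin and fall back to the slow path.
	 */
	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
	    unlikely(pte_val(pte) != pte_val(*ptep))) {
		gup_put_folio(folio, 1, flags);
		goto pte_unmap;
	}
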
>>>>>>
>>>>>> Could the race also be fixed by implementing pmdp_collapse_flush() correctly
>>>>>> for the archs that are missing it? Do you know which arch(s) are broken
>>>>>> with it?
>>>>>
>>>>> Yes, and this was suggested by me in the first place, but per the
>>>>> suggestion from John and David, this is not the preferred way. I think
>>>>> it is because:
>>>>>
>>>>> Firstly, using an IPI to serialize against fast GUP is not recommended
>>>>> anymore, since fast GUP already checks the PTE and backs off, so we
>>>>> should avoid it.
>>>>> Secondly, if checking the PMD and backing off solves the problem, why
>>>>> would we still need the broadcast IPI? It doesn't sound performant.
>>>>>
>>>>>>
>>>>>> It's just not clear to me whether this patch is an optimization or a fix,
>>>>>> and if it's a fix, whether the IPI broadcast in ppc pmdp_collapse_flush()
>>>>>> would still be needed.
>>>>>
>>>>> It is a fix, and the fix will make the IPI broadcast unnecessary.
>>>>
>>>> How about another patch to remove the ppc impl too?  Then it can be a
>>>> two-patch series.
>>>
>>> BTW, I don't think we could remove the ppc implementation since it is
>>> different from the generic pmdp_collapse_flush(), particularly for the
>>> hash part IIUC.
>>>
>>> The generic version calls flush_tlb_range() -> hash__flush_tlb_range()
>>> for hash, but that hash call is actually a no-op. The ppc version calls
>>> hash__pmdp_collapse_flush() -> flush_tlb_pmd_range(), which does
>>> something useful.
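
(For reference, the generic pmdp_collapse_flush() in mm/pgtable-generic.c is
roughly the sketch below; on hash, the flush_tlb_range() call ends up in
hash__flush_tlb_range(), which is a no-op, hence the powerpc override.)

pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
			  pmd_t *pmdp)
{
	pmd_t pmd;

	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
	/* Clear the pmd, then shoot down the ptes it used to map. */
	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return pmd;
}
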
>>>
>>
>> We should actually rename flush_tlb_pmd_range(). It actually flushes the
>> hash page table entries.
>>
>> I will do the below patch for ppc64 to clarify this better:
> 
> Thanks, Aneesh. It looks more readable. A follow-up question: I think
> we could remove serialize_against_pte_lookup(), which just issues an IPI
> broadcast to run a dummy function. This IPI should not be needed
> anymore with my patch. Of course, we need to keep the memory barrier.
> 


For radix translation, yes. For hash we still need that. W.r.t. the memory
barrier, radix does use radix__flush_tlb_collapsed_pmd(), which does a TLB
invalidate. IIUC that will enforce the required memory barrier there. So you
should be able to just remove

modified   arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -937,15 +937,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
 	pmd = *pmdp;
 	pmd_clear(pmdp);
 
-	/*
-	 * pmdp collapse_flush need to ensure that there are no parallel gup
-	 * walk after this call. This is needed so that we can have stable
-	 * page ref count when collapsing a page. We don't allow a collapse page
-	 * if we have gup taken on the page. We can ensure that by sending IPI
-	 * because gup walk happens with IRQ disabled.
-	 */
-	serialize_against_pte_lookup(vma->vm_mm);
-
 	radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
 
 	return pmd;

in your patch. This will also consolidate all the related changes together.

>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
>> index 8b762f282190..fd30fa20c392 100644
>> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
>> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
>> @@ -112,13 +112,12 @@ static inline void hash__flush_tlb_kernel_range(unsigned long start,
>>
>>  struct mmu_gather;
>>  extern void hash__tlb_flush(struct mmu_gather *tlb);
>> -void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
>>
>>  #ifdef CONFIG_PPC_64S_HASH_MMU
>>  /* Private function for use by PCI IO mapping code */
>>  extern void __flush_hash_table_range(unsigned long start, unsigned long end);
>> -extern void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd,
>> -                               unsigned long addr);
>> +extern void flush_hash_table_pmd_range(struct mm_struct *mm, pmd_t *pmd,
>> +                                      unsigned long addr);
>>  #else
>>  static inline void __flush_hash_table_range(unsigned long start, unsigned long end) { }
>>  #endif
>> diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
>> index ae008b9df0e6..f30131933a01 100644
>> --- a/arch/powerpc/mm/book3s64/hash_pgtable.c
>> +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
>> @@ -256,7 +256,7 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
>>          * the __collapse_huge_page_copy can result in copying
>>          * the old content.
>>          */
>> -       flush_tlb_pmd_range(vma->vm_mm, &pmd, address);
>> +       flush_hash_table_pmd_range(vma->vm_mm, &pmd, address);
>>         return pmd;
>>  }
>>
>> diff --git a/arch/powerpc/mm/book3s64/hash_tlb.c b/arch/powerpc/mm/book3s64/hash_tlb.c
>> index eb0bccaf221e..a64ea0a7ef96 100644
>> --- a/arch/powerpc/mm/book3s64/hash_tlb.c
>> +++ b/arch/powerpc/mm/book3s64/hash_tlb.c
>> @@ -221,7 +221,7 @@ void __flush_hash_table_range(unsigned long start, unsigned long end)
>>         local_irq_restore(flags);
>>  }
>>
>> -void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
>> +void flush_hash_table_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
>>  {
>>         pte_t *pte;
>>         pte_t *start_pte;
>>



Thread overview: 38+ messages
2022-09-01 22:27 [PATCH] mm: gup: fix the fast GUP race against THP collapse Yang Shi
2022-09-01 23:26 ` Peter Xu
2022-09-01 23:50   ` Yang Shi
2022-09-02  6:39     ` David Hildenbrand
2022-09-02 15:23       ` Yang Shi
2022-09-02 15:59     ` Peter Xu
2022-09-02 16:04       ` Peter Xu
2022-09-02 17:30       ` Yang Shi
2022-09-02 17:45       ` Yang Shi
2022-09-02 20:33         ` Peter Xu
2022-09-05  8:56           ` Aneesh Kumar K.V
2022-09-05  8:54         ` Aneesh Kumar K.V
2022-09-06 19:07           ` Yang Shi
2022-09-07  4:50             ` Aneesh Kumar K V [this message]
2022-09-07 17:08               ` Yang Shi
2022-09-04 22:21       ` John Hubbard
2022-09-02  6:42 ` David Hildenbrand
2022-09-04 22:29 ` John Hubbard
2022-09-05  7:59   ` David Hildenbrand
2022-09-05 10:16     ` Baolin Wang
2022-09-05 10:24       ` David Hildenbrand
2022-09-05 11:11         ` David Hildenbrand
2022-09-05 14:35           ` Baolin Wang
2022-09-05 14:40             ` David Hildenbrand
2022-09-06  5:53               ` Baolin Wang
2022-09-06  2:12     ` John Hubbard
2022-09-06 12:50       ` David Hildenbrand
2022-09-06 13:47     ` Jason Gunthorpe
2022-09-06 13:57       ` David Hildenbrand
2022-09-06 14:30         ` Jason Gunthorpe
2022-09-06 14:44           ` David Hildenbrand
2022-09-06 15:33             ` Jason Gunthorpe
2022-09-06 19:11             ` Yang Shi
2022-09-06 23:16             ` John Hubbard
2022-09-06 19:01     ` Yang Shi
2022-09-05  9:03   ` Baolin Wang
2022-09-06 18:50   ` Yang Shi
2022-09-06 21:27     ` John Hubbard
