From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Hugh Dickins <hughd@google.com>,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
	kaleshsingh@google.com, npiggin@gmail.com,
	joel@joelfernandes.org,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [PATCH v7 01/11] mm/mremap: Fix race between MOVE_PMD mremap and pageout
Date: Tue, 8 Jun 2021 15:05:47 +0300	[thread overview]
Message-ID: <20210608120547.krz7ymie3qq2sd2r@box.shutemov.name> (raw)
In-Reply-To: <e7de1397-e982-9236-1545-9beb4233926f@linux.ibm.com>

On Tue, Jun 08, 2021 at 04:47:19PM +0530, Aneesh Kumar K.V wrote:
> On 6/8/21 3:12 PM, Kirill A. Shutemov wrote:
> > On Tue, Jun 08, 2021 at 01:22:23PM +0530, Aneesh Kumar K.V wrote:
> > > 
> > > Hi Hugh,
> > > 
> > > Hugh Dickins <hughd@google.com> writes:
> > > 
> > > > On Mon, 7 Jun 2021, Aneesh Kumar K.V wrote:
> > > > 
> > > > > CPU 1				CPU 2					CPU 3
> > > > > 
> > > > > mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one
> > > > > 
> > > > > mmap_write_lock_killable()
> > > > > 
> > > > > 				addr = old_addr
> > > > > 				lock(pte_ptl)
> > > > > lock(pmd_ptl)
> > > > > pmd = *old_pmd
> > > > > pmd_clear(old_pmd)
> > > > > flush_tlb_range(old_addr)
> > > > > 
> > > > > *new_pmd = pmd
> > > > > 									*new_addr = 10; and fills
> > > > > 									TLB with new addr
> > > > > 									and old pfn
> > > > > 
> > > > > unlock(pmd_ptl)
> > > > > 				ptep_clear_flush()
> > > > > 				old pfn is free.
> > > > > 									Stale TLB entry
> > > > > 
> > > > > Fix this race by holding the pmd lock during pageout. This still doesn't
> > > > > handle the race between MOVE_PUD and pageout.
> > > > > 
> > > > > Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
> > > > > Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com
> > > > > Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> > > > 
> > > > This seems very wrong to me, to require another level of locking in the
> > > > rmap lookup, just to fix some new pagetable games in mremap.
> > > > 
> > > > But Linus asked "Am I missing something?": neither of you have mentioned
> > > > mremap's take_rmap_locks(), so I hope that already meets your need.  And
> > > > if it needs to be called more often than before (see "need_rmap_locks"),
> > > > that's probably okay.
> > > > 
> > > > Hugh
> > > > 
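For context, the take_rmap_locks()/drop_rmap_locks() helpers Hugh refers to
live in mm/mremap.c. A rough sketch of their shape at the time (both lock in
write mode; file-backed vmas use the mapping's i_mmap_rwsem, anonymous vmas
use the anon_vma lock):

    static void take_rmap_locks(struct vm_area_struct *vma)
    {
            if (vma->vm_file)
                    i_mmap_lock_write(vma->vm_file->f_mapping);
            if (vma->anon_vma)
                    anon_vma_lock_write(vma->anon_vma);
    }

    static void drop_rmap_locks(struct vm_area_struct *vma)
    {
            if (vma->anon_vma)
                    anon_vma_unlock_write(vma->anon_vma);
            if (vma->vm_file)
                    i_mmap_unlock_write(vma->vm_file->f_mapping);
    }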
> > > 
> > > Thanks for reviewing the change. I missed the rmap lock in the code
> > > path. How about the below change?
> > > 
> > >      mm/mremap: hold the rmap lock in write mode when moving page table entries.
> > >
> > >      To avoid a race between the rmap walk and mremap, mremap does take_rmap_locks().
> > >      The lock is taken to ensure that the rmap walk doesn't miss a page table entry
> > >      due to PTE moves via move_page_tables(). The kernel further optimizes this
> > >      locking: if the rmap walk will find the newly added vma after the old vma, the
> > >      rmap lock is not taken. This is because the rmap walk visits the vmas in the
> > >      same order, so a page table entry not found attached to the older vma will be
> > >      found attached to the new vma, which is iterated later. The actual lifetime of
> > >      the page is still controlled by the PTE lock.
> > >
> > >      This patch updates the locking requirement to handle another race condition,
> > >      explained below, with optimized mremap::
> > >
> > >      Optimized PMD move:
> > >          CPU 1                           CPU 2                                   CPU 3
> > >          mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one
> > >          mmap_write_lock_killable()
> > >                                          addr = old_addr
> > >                                          lock(pte_ptl)
> > >          lock(pmd_ptl)
> > >          pmd = *old_pmd
> > >          pmd_clear(old_pmd)
> > >          flush_tlb_range(old_addr)
> > >          *new_pmd = pmd
> > >                                                                                  *new_addr = 10; and fills
> > >                                                                                  TLB with new addr
> > >                                                                                  and old pfn
> > >          unlock(pmd_ptl)
> > >                                          ptep_clear_flush()
> > >                                          old pfn is free.
> > >                                                                                  Stale TLB entry
> > >
> > >      Optimized PUD move:
> > >          CPU 1                           CPU 2                                   CPU 3
> > >          mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one
> > >          mmap_write_lock_killable()
> > >                                          addr = old_addr
> > >                                          lock(pte_ptl)
> > >          lock(pud_ptl)
> > >          pud = *old_pud
> > >          pud_clear(old_pud)
> > >          flush_tlb_range(old_addr)
> > >          *new_pud = pud
> > >                                                                                  *new_addr = 10; and fills
> > >                                                                                  TLB with new addr
> > >                                                                                  and old pfn
> > >          unlock(pud_ptl)
> > >                                          ptep_clear_flush()
> > >                                          old pfn is free.
> > >                                                                                  Stale TLB entry
> > >
> > >      Both of the above race conditions can be fixed if we force the mremap path to
> > >      take the rmap locks.
> > >      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
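A minimal sketch of what forcing the rmap locks looks like, assuming the
change lands in move_pgt_entry() and reuses the take_rmap_locks()/
drop_rmap_locks() helpers shown earlier (the huge-page cases are omitted and
the merged patch may differ in detail):

    static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
                               unsigned long old_addr, unsigned long new_addr,
                               void *old_entry, void *new_entry,
                               bool need_rmap_locks)
    {
            bool moved = false;

            /* Callers now pass need_rmap_locks == true unconditionally. */
            if (need_rmap_locks)
                    take_rmap_locks(vma);

            switch (entry) {
            case NORMAL_PMD:
                    moved = move_normal_pmd(vma, old_addr, new_addr,
                                            old_entry, new_entry);
                    break;
            case NORMAL_PUD:
                    moved = move_normal_pud(vma, old_addr, new_addr,
                                            old_entry, new_entry);
                    break;
            default:
                    WARN_ON_ONCE(1);
                    break;
            }

            if (need_rmap_locks)
                    drop_rmap_locks(vma);

            return moved;
    }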
> > 
> > Looks like it should be enough to address the race.
> > 
> > It would be nice to understand what the performance overhead of the
> > additional locking is. Is it still faster to move a single PMD page table
> > under these locks compared to moving PTE page table entries without the locks?
> > 
> 
> The improvements provided by optimized mremap, as captured in patch 11, are
> large.
> 
> mremap HAVE_MOVE_PMD/PUD optimization time comparison for 1GB region:
> 1GB mremap - Source PTE-aligned, Destination PTE-aligned
>   mremap time:      2292772ns
> 1GB mremap - Source PMD-aligned, Destination PMD-aligned
>   mremap time:      1158928ns
> 1GB mremap - Source PUD-aligned, Destination PUD-aligned
>   mremap time:        63886ns
> 
> With the additional locking, I haven't observed much change in those numbers.
> But that could also be because there is no contention on these locks when
> this test is run?
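For reference, the numbers above come from the mremap selftest, which times a
single mremap() call in roughly this fashion (a sketch; src_addr,
dest_preferred, and region_size are illustrative names assumed to be mapped
and touched by the caller, not the selftest's exact identifiers):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <time.h>

    /* Return the elapsed time of one mremap() move in nanoseconds,
     * or -1 on failure. */
    static long long time_mremap(void *src_addr, void *dest_preferred,
                                 size_t region_size)
    {
            struct timespec t_start = {0, 0}, t_end = {0, 0};
            void *dest;

            clock_gettime(CLOCK_MONOTONIC, &t_start);
            dest = mremap(src_addr, region_size, region_size,
                          MREMAP_MAYMOVE | MREMAP_FIXED, dest_preferred);
            clock_gettime(CLOCK_MONOTONIC, &t_end);

            if (dest == MAP_FAILED)
                    return -1;

            return (t_end.tv_sec - t_start.tv_sec) * 1000000000LL +
                   (t_end.tv_nsec - t_start.tv_nsec);
    }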

Okay, it's good enough: contention should not be common and it's okay to
pay a price for correctness.

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov

