From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> To: linux-mm@kvack.org, akpm@linux-foundation.org Cc: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, kaleshsingh@google.com, npiggin@gmail.com, joel@joelfernandes.org, Christophe Leroy <christophe.leroy@csgroup.eu>, Linus Torvalds <torvalds@linux-foundation.org>, "Kirill A . Shutemov" <kirill@shutemov.name>, "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>, stable@vger.kernel.org, Hugh Dickins <hughd@google.com>, "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com> Subject: [PATCH v2 6/6] mm/mremap: hold the rmap lock in write mode when moving page table entries. Date: Wed, 16 Jun 2021 10:22:39 +0530 [thread overview] Message-ID: <20210616045239.370802-7-aneesh.kumar@linux.ibm.com> (raw) In-Reply-To: <20210616045239.370802-1-aneesh.kumar@linux.ibm.com> To avoid a race between rmap walk and mremap, mremap does take_rmap_locks(). The lock was taken to ensure that rmap walk don't miss a page table entry due to PTE moves via move_pagetables(). The kernel does further optimization of this lock such that if we are going to find the newly added vma after the old vma, the rmap lock is not taken. This is because rmap walk would find the vmas in the same order and if we don't find the page table attached to older vma we would find it with the new vma which we would iterate later. As explained in commit eb66ae030829 ("mremap: properly flush TLB before releasing the page") mremap is special in that it doesn't take ownership of the page. The optimized version for PUD/PMD aligned mremap also doesn't hold the ptl lock. This can result in stale TLB entries as show below. This patch updates the rmap locking requirement in mremap to handle the race condition explained below with optimized mremap:: Optmized PMD move CPU 1 CPU 2 CPU 3 mremap(old_addr, new_addr) page_shrinker/try_to_unmap_one mmap_write_lock_killable() addr = old_addr lock(pte_ptl) lock(pmd_ptl) pmd = *old_pmd pmd_clear(old_pmd) flush_tlb_range(old_addr) *new_pmd = pmd *new_addr = 10; and fills TLB with new addr and old pfn unlock(pmd_ptl) ptep_clear_flush() old pfn is free. Stale TLB entry Optimized PUD move also suffers from a similar race. Both the above race condition can be fixed if we force mremap path to take rmap lock. Cc: stable@vger.kernel.org Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions") Fixes: c49dd3401802 ("mm: speedup mremap on 1GB or larger regions") Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com Acked-by: Hugh Dickins <hughd@google.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> --- mm/mremap.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/mremap.c b/mm/mremap.c index 72fa0491681e..c3cad539a7aa 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -503,7 +503,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, } else if (IS_ENABLED(CONFIG_HAVE_MOVE_PUD) && extent == PUD_SIZE) { if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr, - old_pud, new_pud, need_rmap_locks)) + old_pud, new_pud, true)) continue; } @@ -530,7 +530,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, * moving at the PMD level if possible. */ if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr, - old_pmd, new_pmd, need_rmap_locks)) + old_pmd, new_pmd, true)) continue; } -- 2.31.1
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> To: linux-mm@kvack.org, akpm@linux-foundation.org Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>, Hugh Dickins <hughd@google.com>, Linus Torvalds <torvalds@linux-foundation.org>, npiggin@gmail.com, kaleshsingh@google.com, joel@joelfernandes.org, "Kirill A . Shutemov" <kirill@shutemov.name>, stable@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com> Subject: [PATCH v2 6/6] mm/mremap: hold the rmap lock in write mode when moving page table entries. Date: Wed, 16 Jun 2021 10:22:39 +0530 [thread overview] Message-ID: <20210616045239.370802-7-aneesh.kumar@linux.ibm.com> (raw) In-Reply-To: <20210616045239.370802-1-aneesh.kumar@linux.ibm.com> To avoid a race between rmap walk and mremap, mremap does take_rmap_locks(). The lock was taken to ensure that rmap walk don't miss a page table entry due to PTE moves via move_pagetables(). The kernel does further optimization of this lock such that if we are going to find the newly added vma after the old vma, the rmap lock is not taken. This is because rmap walk would find the vmas in the same order and if we don't find the page table attached to older vma we would find it with the new vma which we would iterate later. As explained in commit eb66ae030829 ("mremap: properly flush TLB before releasing the page") mremap is special in that it doesn't take ownership of the page. The optimized version for PUD/PMD aligned mremap also doesn't hold the ptl lock. This can result in stale TLB entries as show below. This patch updates the rmap locking requirement in mremap to handle the race condition explained below with optimized mremap:: Optmized PMD move CPU 1 CPU 2 CPU 3 mremap(old_addr, new_addr) page_shrinker/try_to_unmap_one mmap_write_lock_killable() addr = old_addr lock(pte_ptl) lock(pmd_ptl) pmd = *old_pmd pmd_clear(old_pmd) flush_tlb_range(old_addr) *new_pmd = pmd *new_addr = 10; and fills TLB with new addr and old pfn unlock(pmd_ptl) ptep_clear_flush() old pfn is free. Stale TLB entry Optimized PUD move also suffers from a similar race. Both the above race condition can be fixed if we force mremap path to take rmap lock. Cc: stable@vger.kernel.org Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions") Fixes: c49dd3401802 ("mm: speedup mremap on 1GB or larger regions") Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com Acked-by: Hugh Dickins <hughd@google.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> --- mm/mremap.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/mremap.c b/mm/mremap.c index 72fa0491681e..c3cad539a7aa 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -503,7 +503,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, } else if (IS_ENABLED(CONFIG_HAVE_MOVE_PUD) && extent == PUD_SIZE) { if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr, - old_pud, new_pud, need_rmap_locks)) + old_pud, new_pud, true)) continue; } @@ -530,7 +530,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, * moving at the PMD level if possible. */ if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr, - old_pmd, new_pmd, need_rmap_locks)) + old_pmd, new_pmd, true)) continue; } -- 2.31.1