From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> To: linux-mm@kvack.org, akpm@linux-foundation.org Cc: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, kaleshsingh@google.com, npiggin@gmail.com, joel@joelfernandes.org, "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Subject: [PATCH v4 6/9] mm/mremap: Use range flush that does TLB and page walk cache flush Date: Wed, 14 Apr 2021 14:29:12 +0530 [thread overview] Message-ID: <20210414085915.301189-7-aneesh.kumar@linux.ibm.com> (raw) In-Reply-To: <20210414085915.301189-1-aneesh.kumar@linux.ibm.com> Some architectures do have the concept of page walk cache which need to be flush when updating higher levels of page tables. A fast mremap that involves moving page table pages instead of copying pte entries should flush page walk cache since the old translation cache is no more valid. Add new helper flush_pte_tlb_pwc_range() which invalidates both TLB and page walk cache where TLB entries are mapped with page size PAGE_SIZE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> --- arch/powerpc/include/asm/book3s/64/tlbflush.h | 11 +++++++++++ mm/mremap.c | 15 +++++++++++++-- 2 files changed, 24 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h index f9f8a3a264f7..c236b66f490b 100644 --- a/arch/powerpc/include/asm/book3s/64/tlbflush.h +++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h @@ -80,6 +80,17 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma, return flush_hugetlb_tlb_pwc_range(vma, start, end, false); } +#define flush_pte_tlb_pwc_range flush_tlb_pwc_range +static inline void flush_pte_tlb_pwc_range(struct vm_area_struct *vma, + unsigned long start, unsigned long end, + bool also_pwc) +{ + if (radix_enabled()) + return radix__flush_tlb_pwc_range_psize(vma->vm_mm, start, + end, mmu_virtual_psize, also_pwc); + return hash__flush_tlb_range(vma, start, end); +} + static inline void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end) { diff --git a/mm/mremap.c b/mm/mremap.c index 574287f9bb39..0e7b11daafee 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -210,6 +210,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, drop_rmap_locks(vma); } +#ifndef flush_pte_tlb_pwc_range +#define flush_pte_tlb_pwc_range flush_pte_tlb_pwc_range +static inline void flush_pte_tlb_pwc_range(struct vm_area_struct *vma, + unsigned long start, + unsigned long end, + bool also_pwc) +{ + return flush_tlb_range(vma, start, end); +} +#endif + #ifdef CONFIG_HAVE_MOVE_PMD static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd) @@ -260,7 +271,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, VM_BUG_ON(!pmd_none(*new_pmd)); pmd_populate(mm, new_pmd, (pgtable_t)pmd_page_vaddr(pmd)); - flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE); + flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PMD_SIZE, true); if (new_ptl != old_ptl) spin_unlock(new_ptl); spin_unlock(old_ptl); @@ -307,7 +318,7 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr, VM_BUG_ON(!pud_none(*new_pud)); pud_populate(mm, new_pud, (pmd_t *)pud_page_vaddr(pud)); - flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE); + flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PUD_SIZE, true); if (new_ptl != old_ptl) spin_unlock(new_ptl); spin_unlock(old_ptl); -- 2.30.2
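A note on exercising this path from userspace: the PMD-level fast path
touched above is only taken for suitably aligned moves. The standalone C
sketch below is not part of the patch; PMD_SPAN is a hypothetical
constant assuming a 2MB PMD size, and a kernel built with
CONFIG_HAVE_MOVE_PMD is assumed. It shows the kind of mremap() call that
can reach move_normal_pmd() and therefore the new flush helper:

/*
 * Hypothetical demonstration (not part of the patch): move a
 * PMD-aligned, PMD-sized anonymous mapping with mremap(). With
 * CONFIG_HAVE_MOVE_PMD, such a move may relocate the PMD page itself
 * instead of copying individual ptes, which is the path patched above.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PMD_SPAN (2UL << 20)	/* assumed PMD_SIZE; adjust per platform */

int main(void)
{
	/* Reserve enough space to carve out PMD-aligned src and dst. */
	char *base = mmap(NULL, 4 * PMD_SPAN, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return 1;

	char *src = (char *)(((uintptr_t)base + PMD_SPAN - 1) & ~(PMD_SPAN - 1));
	char *dst = src + 2 * PMD_SPAN;

	/* Make the source writable and populate its page tables. */
	if (mprotect(src, PMD_SPAN, PROT_READ | PROT_WRITE))
		return 1;
	memset(src, 0xaa, PMD_SPAN);

	/*
	 * MREMAP_FIXED forces the move to dst; both addresses and the
	 * size are PMD-aligned, so the kernel may move the page table
	 * page rather than copy pte entries.
	 */
	char *moved = mremap(src, PMD_SPAN, PMD_SPAN,
			     MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	if (moved == MAP_FAILED)
		return 1;

	printf("moved %p -> %p, first byte 0x%02x\n",
	       (void *)src, (void *)moved, (unsigned char)moved[0]);
	return 0;
}

Whether the kernel actually takes the page-table-move path depends on
alignment and configuration; the flush added by this patch only matters
when that path is taken, since only then does stale page walk cache
state for the old range survive the move.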
Thread overview: 28+ messages

2021-04-14  8:59 [PATCH v4 0/9] Speedup mremap on ppc64 Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 1/9] selftest/mremap_test: Update the test to handle pagesize other than 4K Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 2/9] selftest/mremap_test: Avoid crash with static build Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 3/9] mm/mremap: Use pmd/pud_poplulate to update page table entries Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 4/9] powerpc/mm/book3s64: Fix possible build error Aneesh Kumar K.V
2021-04-20  3:43   ` Michael Ellerman
2021-04-14  8:59 ` [PATCH v4 5/9] powerpc/mm/book3s64: Update tlb flush routines to take a page walk cache flush argument Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 6/9] mm/mremap: Use range flush that does TLB and page walk cache flush Aneesh Kumar K.V [this message]
2021-04-20  3:47   ` Michael Ellerman
2021-04-20  4:17     ` Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 7/9] mm/mremap: Move TLB flush outside page table lock Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 8/9] mm/mremap: Allow arch runtime override Aneesh Kumar K.V
2021-04-20  3:52   ` Michael Ellerman
2021-04-20  4:30     ` Aneesh Kumar K.V
2021-04-14  8:59 ` [PATCH v4 9/9] powerpc/mm: Enable move pmd/pud Aneesh Kumar K.V