linux-mm.kvack.org archive mirror
From: Peter Zijlstra <peterz@infradead.org>
To: Zhenyu Ye <yezhenyu2@huawei.com>
Cc: mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com,
	aneesh.kumar@linux.ibm.com, akpm@linux-foundation.org,
	npiggin@gmail.com, arnd@arndb.de, rostedt@goodmis.org,
	maz@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de,
	yuzhao@google.com, Dave.Martin@arm.com, steven.price@arm.com,
	broonie@kernel.org, guohanjun@huawei.com,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, arm@kernel.org, xiexiangyou@huawei.com,
	prime.zeng@hisilicon.com, zhangshaokun@hisilicon.com,
	kuhn.chenqun@huawei.com
Subject: Re: [PATCH v1 5/6] mm: tlb: Provide flush_*_tlb_range wrappers
Date: Mon, 20 Apr 2020 14:09:16 +0200	[thread overview]
Message-ID: <20200420120916.GE20696@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20200403090048.938-6-yezhenyu2@huawei.com>

On Fri, Apr 03, 2020 at 05:00:47PM +0800, Zhenyu Ye wrote:
> This patch provides flush_{pte|pmd|pud|p4d}_tlb_range() in generic
> code, which are expressed through the mmu_gather APIs.  These
> interfaces set tlb->cleared_* and finally call tlb_flush(), so we
> can do the TLB invalidation according to the information in
> struct mmu_gather.
> 
> Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
> ---
>  include/asm-generic/pgtable.h | 12 +++++++--
>  mm/pgtable-generic.c          | 50 +++++++++++++++++++++++++++++++++++
>  2 files changed, 60 insertions(+), 2 deletions(-)
> 
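
For context: the tlb_set_*_range() helpers used below come from patch
4/6.  A minimal sketch of what such a helper is assumed to do --
widen the gather range and record which level was cleared, following
the __tlb_adjust_range() pattern in include/asm-generic/tlb.h; the
authoritative version is in that patch:

static inline void tlb_set_pmd_range(struct mmu_gather *tlb,
				     unsigned long address,
				     unsigned long size)
{
	/* Widen the range this mmu_gather covers... */
	__tlb_adjust_range(tlb, address, size);
	/* ...and note that (only) PMD-level entries were cleared. */
	tlb->cleared_pmds = 1;
}
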
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index e2e2bef07dd2..2bedeee94131 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -1160,11 +1160,19 @@ static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
>   * invalidate the entire TLB which is not desirable.
>   * e.g. see arch/arc: flush_pmd_tlb_range
>   */
> -#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
> -#define flush_pud_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
> +extern void flush_pte_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
> +extern void flush_pmd_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
> +extern void flush_pud_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
> +extern void flush_p4d_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
>  #else
> +#define flush_pte_tlb_range(vma, addr, end)	BUILD_BUG()
>  #define flush_pmd_tlb_range(vma, addr, end)	BUILD_BUG()
>  #define flush_pud_tlb_range(vma, addr, end)	BUILD_BUG()
> +#define flush_p4d_tlb_range(vma, addr, end)	BUILD_BUG()
>  #endif
>  #endif

Ideally you'd make __HAVE_ARCH_FLUSH_PMD_TLB_RANGE go away. Power
certainly doesn't need it with the below.
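
For reference, the arch opt-out this would retire looks roughly like
the following in an arch header (a hypothetical sketch, not copied
from any real arch/ file):

#define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
extern void flush_pmd_tlb_range(struct vm_area_struct *vma,
				unsigned long start, unsigned long end);

Once the generic wrappers go through mmu_gather, the range and level
information ends up in the arch's own tlb_flush(), so a separate
override buys nothing.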

> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index 3d7c01e76efc..0f5414a4a2ec 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -101,6 +101,56 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  
> +#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> +void flush_pte_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_pte_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +
> +void flush_pmd_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_pmd_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +
> +void flush_pud_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_pud_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +
> +void flush_p4d_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_p4d_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +#endif /* __HAVE_ARCH_FLUSH_PMD_TLB_RANGE */

You're nowhere near lazy enough:

#define FLUSH_Pxx_TLB_RANGE(_pxx) \
void flush_##_pxx##_tlb_range(struct vm_area_struct *vma, \
			      unsigned long addr, unsigned long end) \
{ \
	struct mmu_gather tlb; \
	\
	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end); \
	tlb_start_vma(&tlb, vma); \
	tlb_flush_##_pxx##_range(&tlb, addr, end - addr); \
	tlb_end_vma(&tlb, vma); \
	tlb_finish_mmu(&tlb, addr, end); \
}

FLUSH_Pxx_TLB_RANGE(pte)
FLUSH_Pxx_TLB_RANGE(pmd)
FLUSH_Pxx_TLB_RANGE(pud)
FLUSH_Pxx_TLB_RANGE(p4d)
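
For instance, FLUSH_Pxx_TLB_RANGE(pmd) expands to:

void flush_pmd_tlb_range(struct vm_area_struct *vma,
			 unsigned long addr, unsigned long end)
{
	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
	tlb_start_vma(&tlb, vma);
	tlb_flush_pmd_range(&tlb, addr, end - addr);
	tlb_end_vma(&tlb, vma);
	tlb_finish_mmu(&tlb, addr, end);
}

which is the open-coded version above, except for the helper name
(tlb_flush_pmd_range() here rather than tlb_set_pmd_range(),
assuming the naming follows the comments against patch 4/6).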



Thread overview: 18+ messages
2020-04-03  9:00 [PATCH v1 0/6] arm64: tlb: add support for TTL feature Zhenyu Ye
2020-04-03  9:00 ` [PATCH v1 1/6] arm64: Detect the ARMv8.4 TTL feature Zhenyu Ye
2020-04-21 16:53   ` Christoph Hellwig
2020-04-21 17:13     ` Peter Zijlstra
2020-04-21 17:16       ` Christoph Hellwig
2020-04-22  2:13         ` Zhenyu Ye
2020-04-03  9:00 ` [PATCH v1 2/6] arm64: Add level-hinted TLB invalidation helper Zhenyu Ye
2020-04-03  9:00 ` [PATCH v1 3/6] arm64: Add tlbi_user_level TLB invalidation helper Zhenyu Ye
2020-04-03  9:00 ` [PATCH v1 4/6] tlb: mmu_gather: add tlb_set_*_range APIs Zhenyu Ye
2020-04-20 11:46   ` Peter Zijlstra
2020-04-03  9:00 ` [PATCH v1 5/6] mm: tlb: Provide flush_*_tlb_range wrappers Zhenyu Ye
2020-04-20 12:09   ` Peter Zijlstra [this message]
2020-04-21 14:18     ` Zhenyu Ye
2020-04-03  9:00 ` [PATCH v1 6/6] arm64: tlb: Set the TTL field in flush_tlb_range Zhenyu Ye
2020-04-20 12:10   ` Peter Zijlstra
2020-04-21  0:06     ` Steven Rostedt
2020-04-21  8:30       ` Peter Zijlstra
2020-04-21 12:22         ` Zhenyu Ye
