From: Will Deacon <will.deacon@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, benh@au1.ibm.com,
	torvalds@linux-foundation.org, npiggin@gmail.com,
	catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC PATCH 02/11] arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable()
Date: Tue, 28 Aug 2018 14:03:27 +0100	[thread overview]
Message-ID: <20180828130326.GB26727@arm.com> (raw)
In-Reply-To: <20180824175609.GR24124@hirez.programming.kicks-ass.net>

On Fri, Aug 24, 2018 at 07:56:09PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 24, 2018 at 04:52:37PM +0100, Will Deacon wrote:
> > __flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after
> > writing the new table entry and therefore avoid the barrier prior to the
> > TLBI instruction.
> > 
> > In preparation for delaying our walk-cache invalidation on the unmap()
> > path, move the DSB into the TLB invalidation routines.
> > 
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/include/asm/tlbflush.h | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> > index 7e2a35424ca4..e257f8655b84 100644
> > --- a/arch/arm64/include/asm/tlbflush.h
> > +++ b/arch/arm64/include/asm/tlbflush.h
> > @@ -213,6 +213,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
> >  {
> >  	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
> >  
> > +	dsb(ishst);
> >  	__tlbi(vae1is, addr);
> >  	__tlbi_user(vae1is, addr);
> >  	dsb(ish);
> > @@ -222,6 +223,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
> >  {
> >  	unsigned long addr = __TLBI_VADDR(kaddr, 0);
> >  
> > +	dsb(ishst);
> >  	__tlbi(vaae1is, addr);
> >  	dsb(ish);
> >  }
> 
> I would suggest these barriers -- like any other barriers -- carry a
> comment that explains the required ordering.
> 
> I think this here reads like:
> 
> 	STORE: unhook page
> 
> 	DSB-ishst: wait for all stores to complete
> 	TLBI: invalidate broadcast
> 	DSB-ish: wait for TLBI to complete
> 
> And the 'newly' placed DSB-ishst ensures the page is observed unlinked
> before we issue the invalidate.

Yeah, not a bad idea. We already have a big block comment in this file but
it's desperately out of date, so lemme rewrite that and justify the barriers
at the same time.

Will
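
[As a hedged illustration of the kind of comment Peter is asking for -- the wording here is assumed for illustration, not the text that was eventually merged -- the kernel-side hunk above might end up looking roughly like this:]

static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
{
	unsigned long addr = __TLBI_VADDR(kaddr, 0);

	/*
	 * set_pXd() no longer provides the barrier, so make sure the store
	 * that unhooked the page table is visible to the page-table walker
	 * before the walk-cache entries are invalidated.
	 */
	dsb(ishst);		/* STORE: wait for the unhooking store to complete */
	__tlbi(vaae1is, addr);	/* TLBI: broadcast the invalidation */
	dsb(ish);		/* wait for the TLBI to complete on all CPUs */
}

[The user-space variant, __flush_tlb_pgtable(), would carry the same pair of barriers around its vae1is/__tlbi_user invalidations.]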

Thread overview:
2018-08-24 15:52 [RFC PATCH 00/11] Avoid synchronous TLB invalidation for intermediate page-table entries on arm64 Will Deacon
2018-08-24 15:52 ` [RFC PATCH 01/11] arm64: tlb: Use last-level invalidation in flush_tlb_kernel_range() Will Deacon
2018-08-24 15:52 ` [RFC PATCH 02/11] arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable() Will Deacon
2018-08-24 17:56   ` Peter Zijlstra
2018-08-28 13:03     ` Will Deacon [this message]
2018-08-24 15:52 ` [RFC PATCH 03/11] arm64: pgtable: Implement p[mu]d_valid() and check in set_p[mu]d() Will Deacon
2018-08-24 16:15   ` Linus Torvalds
2018-08-28 12:49     ` Will Deacon
2018-08-24 15:52 ` [RFC PATCH 04/11] arm64: tlb: Justify non-leaf invalidation in flush_tlb_range() Will Deacon
2018-08-24 15:52 ` [RFC PATCH 05/11] arm64: tlbflush: Allow stride to be specified for __flush_tlb_range() Will Deacon
2018-08-24 15:52 ` [RFC PATCH 06/11] arm64: tlb: Remove redundant !CONFIG_HAVE_RCU_TABLE_FREE code Will Deacon
2018-08-24 15:52 ` [RFC PATCH 07/11] asm-generic/tlb: Guard with #ifdef CONFIG_MMU Will Deacon
2018-08-24 15:52 ` [RFC PATCH 08/11] asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather Will Deacon
2018-08-27  4:44   ` Nicholas Piggin
2018-08-28 13:46     ` Peter Zijlstra
2018-08-28 13:48       ` Peter Zijlstra
2018-08-28 14:12       ` Nicholas Piggin
2018-08-24 15:52 ` [RFC PATCH 09/11] asm-generic/tlb: Track which levels of the page tables have been cleared Will Deacon
2018-08-27  7:53   ` Peter Zijlstra
2018-08-28 13:12     ` Will Deacon
2018-08-24 15:52 ` [RFC PATCH 10/11] arm64: tlb: Adjust stride and type of TLBI according to mmu_gather Will Deacon
2018-08-24 15:52 ` [RFC PATCH 11/11] arm64: tlb: Avoid synchronous TLBIs when freeing page tables Will Deacon
2018-08-24 16:20 ` [RFC PATCH 00/11] Avoid synchronous TLB invalidation for intermediate page-table entries on arm64 Linus Torvalds
2018-08-26 10:56   ` Peter Zijlstra
2018-09-04 18:38 ` Jon Masters
2018-09-05 12:28   ` Will Deacon
2018-09-07  6:36     ` Jon Masters
2018-09-13 15:53       ` Will Deacon
2018-09-13 16:53         ` Jon Masters
