From: Peter Zijlstra <peterz@infradead.org>
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, x86@kernel.org, catalin.marinas@arm.com, will@kernel.org, linux-doc@vger.kernel.org, corbet@lwn.net, arnd@arndb.de, linux-kernel@vger.kernel.org, darren@os.amperecomputing.com, yangyicong@hisilicon.com, huzhanyuan@oppo.com, lipeifeng@oppo.com, zhangshiming@oppo.com, guojian@oppo.com, realmz6@gmail.com, Barry Song <v-songbaohua@oppo.com>, Nadav Amit <namit@vmware.com>, Mel Gorman <mgorman@suse.de>
Subject: Re: [PATCH 4/4] arm64: support batched/deferred tlb shootdown during page reclamation
Date: Thu, 7 Jul 2022 15:49:34 +0200
Message-ID: <Ysbkbt7cvUWSShtc@hirez.programming.kicks-ass.net>
In-Reply-To: <20220707125242.425242-5-21cnbao@gmail.com>

On Fri, Jul 08, 2022 at 12:52:42AM +1200, Barry Song wrote:

> diff --git a/arch/arm64/include/asm/tlbbatch.h b/arch/arm64/include/asm/tlbbatch.h
> new file mode 100644
> index 000000000000..fedb0b87b8db
> --- /dev/null
> +++ b/arch/arm64/include/asm/tlbbatch.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ARCH_ARM64_TLBBATCH_H
> +#define _ARCH_ARM64_TLBBATCH_H
> +
> +struct arch_tlbflush_unmap_batch {
> +	/*
> +	 * For arm64, HW can do tlb shootdown, so we don't
> +	 * need to record cpumask for sending IPI
> +	 */
> +};
> +
> +#endif /* _ARCH_ARM64_TLBBATCH_H */
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 412a3b9a3c25..b3ed163267ca 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -272,6 +272,19 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
> 	dsb(ish);
> }
> 
> +static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> +					struct mm_struct *mm,
> +					struct vm_area_struct *vma,
> +					unsigned long uaddr)
> +{
> +	flush_tlb_page_nosync(vma, uaddr);
> +}

You're passing that vma along just to get the mm; that's quite silly and trivially fixed.

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25..87505ecce1f0 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -254,17 +254,23 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	dsb(ish);
 }
 
-static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
-					 unsigned long uaddr)
+static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
+					   unsigned long uaddr)
 {
 	unsigned long addr;
 
 	dsb(ishst);
-	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+	addr = __TLBI_VADDR(uaddr, ASID(mm));
 	__tlbi(vale1is, addr);
 	__tlbi_user(vale1is, addr);
 }
 
+static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
+					 unsigned long uaddr)
+{
+	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
+}
+
 static inline void flush_tlb_page(struct vm_area_struct *vma,
 				  unsigned long uaddr)
 {
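The shape of the suggested fix is a common refactoring pattern: move the real work into a helper that takes only the data it needs (the mm), and keep the old vma-based signature as a thin wrapper for existing callers. A minimal user-space sketch of that pattern follows; the struct layouts, the `asid` field, and the address-packing helper are hypothetical stand-ins (no real TLBI instructions are issued here), only the function names mirror the patch:

```c
#include <stdint.h>

/* Hypothetical mock-ups of the kernel structures involved. */
struct mm_struct { unsigned int asid; };
struct vm_area_struct { struct mm_struct *vm_mm; };

/* Stand-in for __TLBI_VADDR(): pack the page address and ASID into the
 * single operand a TLBI instruction would take. */
static uint64_t tlbi_vaddr(uint64_t uaddr, unsigned int asid)
{
	return (uaddr >> 12) | ((uint64_t)asid << 48);
}

/* Core helper: takes only the mm it actually needs. Returns the operand
 * it would use, so the sketch is testable; the real function is void. */
static uint64_t __flush_tlb_page_nosync(struct mm_struct *mm, uint64_t uaddr)
{
	return tlbi_vaddr(uaddr, mm->asid);
}

/* Old entry point preserved as a thin wrapper, so callers that do have a
 * vma are untouched while new callers (like arch_tlbbatch_add_mm) can
 * pass the mm directly. */
static uint64_t flush_tlb_page_nosync(struct vm_area_struct *vma,
				      uint64_t uaddr)
{
	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
}
```

The design point is that the batched-unmap path already has the mm in hand, so routing it through a vma only to dereference `vma->vm_mm` again adds an argument and a pointer chase for nothing.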