[PATCH] implement flush_cache_vmap and flush_cache_vunmap for RISC-V
From: Jiuyang Liu @ 2021-03-29  1:55 UTC
To: Alex Ghiti
Cc: Jiuyang Liu, Andrew Waterman, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Andrew Morton, Geert Uytterhoeven, linux-riscv, linux-kernel

This patch implements flush_cache_vmap and flush_cache_vunmap for
RISC-V, since these functions might modify PTE. Without this patch,
SFENCE.VMA won't be added to related codes, which might introduce a bug
in some out-of-order micro-architecture implementations.

Signed-off-by: Jiuyang Liu <liu@jiuyang.me>
---
 arch/riscv/include/asm/cacheflush.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
index 23ff70350992..4adf25248c43 100644
--- a/arch/riscv/include/asm/cacheflush.h
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -8,6 +8,14 @@

 #include <linux/mm.h>

+/*
+ * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
+ * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
+ * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
+ */
+#define flush_cache_vmap(start, end) flush_tlb_all()
+#define flush_cache_vunmap(start, end) flush_tlb_all()
+
 static inline void local_flush_icache_all(void)
 {
 	asm volatile ("fence.i" ::: "memory");
--
2.31.1

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
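The ordering the patch comment describes can be sketched with stand-in stubs. The names below are hypothetical models of the generic vmalloc call path, not the real kernel implementations; they only record call order so the "fence after install, fence before delete" placement is visible.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins for the kernel functions named in the patch
 * comment; they only record call order, they are NOT the real code. */
static char order[128];

static void record(const char *step)
{
	strcat(order, step);
	strcat(order, ";");
}

/* install the page table entries */
static void map_kernel_range_noflush(void) { record("install-ptes"); }
/* the patch maps this to flush_tlb_all(), i.e. a global SFENCE.VMA */
static void flush_cache_vmap(void)         { record("sfence.vma"); }
/* delete the page table entries */
static void vunmap_page_range(void)        { record("delete-ptes"); }
static void flush_cache_vunmap(void)       { record("sfence.vma"); }

/* map path: the fence is issued after the PTEs are installed */
static const char *map_kernel_range(void)
{
	order[0] = '\0';
	map_kernel_range_noflush();
	flush_cache_vmap();
	return order;
}

/* unmap path: flush_cache_vunmap runs before the PTEs are deleted */
static const char *unmap_kernel_range(void)
{
	order[0] = '\0';
	flush_cache_vunmap();
	vunmap_page_range();
	return order;
}
```

This asymmetry (fence after install on map, fence before delete on unmap) is exactly what the replies below probe.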
Re: [PATCH] implement flush_cache_vmap and flush_cache_vunmap for RISC-V
From: Alex Ghiti @ 2021-03-30  7:02 UTC
To: Jiuyang Liu
Cc: Andrew Waterman, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Andrew Morton, Geert Uytterhoeven, linux-riscv, linux-kernel

Hi Jiuyang,

On 3/28/21 9:55 PM, Jiuyang Liu wrote:
> This patch implements flush_cache_vmap and flush_cache_vunmap for
> RISC-V, since these functions might modify PTE. Without this patch,
> SFENCE.VMA won't be added to related codes, which might introduce a bug
> in some out-of-order micro-architecture implementations.
>
> Signed-off-by: Jiuyang Liu <liu@jiuyang.me>
> ---
>  arch/riscv/include/asm/cacheflush.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 23ff70350992..4adf25248c43 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -8,6 +8,14 @@
>
>  #include <linux/mm.h>
>
> +/*
> + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.

"might modify PTE" is not entirely true, I think: it is what happens
before these functions are called that might modify the PTEs; these
functions only ensure those modifications are made visible.

> + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
> + */
> +#define flush_cache_vmap(start, end) flush_tlb_all()
> +#define flush_cache_vunmap(start, end) flush_tlb_all()
> +
>  static inline void local_flush_icache_all(void)
>  {
>  	asm volatile ("fence.i" ::: "memory");

FWIW, you can add:

Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>

Thanks,

Alex
Re: [PATCH] implement flush_cache_vmap and flush_cache_vunmap for RISC-V
From: Christoph Hellwig @ 2021-04-01  6:37 UTC
To: Jiuyang Liu
Cc: Alex Ghiti, Andrew Waterman, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Andrew Morton, Geert Uytterhoeven, linux-riscv, linux-kernel

On Mon, Mar 29, 2021 at 01:55:09AM +0000, Jiuyang Liu wrote:
> +/*
> + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
> + */

Please never ever write comments > 80 chars. And please read the
coding style document.
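One possible rewrap of the same comment within 80 columns, as the kernel coding style asks. This is a sketch only: the stub `flush_tlb_all()` below is a stand-in so the macros can be exercised outside the kernel, where the real function issues a global SFENCE.VMA.

```c
/* Stub so the macros below have something to expand to in this sketch;
 * in the kernel, flush_tlb_all() issues a global SFENCE.VMA. */
static int tlb_flushes;
static void flush_tlb_all(void) { tlb_flushes++; }

/*
 * flush_cache_vmap and flush_cache_vunmap follow PTE changes, so they
 * need an SFENCE.VMA:
 * - flush_cache_vmap is invoked after map_kernel_range() has installed
 *   the page table entries.
 * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes
 *   the page table entries.
 */
#define flush_cache_vmap(start, end)	flush_tlb_all()
#define flush_cache_vunmap(start, end)	flush_tlb_all()
```

Both macros ignore their range arguments and flush the whole TLB, which is what the posted patch does as well.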
Re: [PATCH] implement flush_cache_vmap and flush_cache_vunmap for RISC-V
From: Palmer Dabbelt @ 2021-04-11 21:41 UTC
To: liu
Cc: alex, liu, waterman, Paul Walmsley, aou, akpm, geert, linux-riscv,
    linux-kernel

On Sun, 28 Mar 2021 18:55:09 PDT (-0700), liu@jiuyang.me wrote:
> This patch implements flush_cache_vmap and flush_cache_vunmap for
> RISC-V, since these functions might modify PTE. Without this patch,
> SFENCE.VMA won't be added to related codes, which might introduce a bug
> in some out-of-order micro-architecture implementations.
>
> Signed-off-by: Jiuyang Liu <liu@jiuyang.me>
> ---
>  arch/riscv/include/asm/cacheflush.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 23ff70350992..4adf25248c43 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -8,6 +8,14 @@
>
>  #include <linux/mm.h>
>
> +/*
> + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries

These should have line breaks.

> + */
> +#define flush_cache_vmap(start, end) flush_tlb_all()

We shouldn't need cache flushes for permission upgrades: the ISA allows
the old mappings to be visible until a fence, but the theory is that
this window will be short for reasonable architectures, so the overhead
of flushing the entire TLB would overwhelm the cost of the extra
faults. There are a handful of places where we preemptively flush, but
those are generally because we can't handle the faults correctly.

If you have some benchmark that demonstrates a performance issue on real
hardware here then I'm happy to talk about this further, but this
assumption is all over arch/riscv so I'd prefer to keep things
consistent for now.

> +#define flush_cache_vunmap(start, end) flush_tlb_all()

This one does seem necessary.

> +
>  static inline void local_flush_icache_all(void)
>  {
>  	asm volatile ("fence.i" ::: "memory");
Re: [PATCH] implement flush_cache_vmap and flush_cache_vunmap for RISC-V
From: Jiuyang Liu @ 2021-04-12  0:13 UTC
To: Palmer Dabbelt
Cc: alex, waterman, Paul Walmsley, aou, akpm, geert, linux-riscv,
    linux-kernel

On Sunday, April 11, 2021 9:41:07 PM UTC you wrote:
> On Sun, 28 Mar 2021 18:55:09 PDT (-0700), liu@jiuyang.me wrote:
> > This patch implements flush_cache_vmap and flush_cache_vunmap for
> > RISC-V, since these functions might modify PTE. Without this patch,
> > SFENCE.VMA won't be added to related codes, which might introduce a bug
> > in some out-of-order micro-architecture implementations.
> >
> > Signed-off-by: Jiuyang Liu <liu@jiuyang.me>
> > ---
> >  arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > index 23ff70350992..4adf25248c43 100644
> > --- a/arch/riscv/include/asm/cacheflush.h
> > +++ b/arch/riscv/include/asm/cacheflush.h
> > @@ -8,6 +8,14 @@
> >
> >  #include <linux/mm.h>
> >
> > +/*
> > + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> > + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> > + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
>
> These should have line breaks.

Fixed in the newest patch, thanks for pointing it out.

> > + */
> > +#define flush_cache_vmap(start, end) flush_tlb_all()
>
> We shouldn't need cache flushes for permission upgrades: the ISA allows
> the old mappings to be visible until a fence, but the theory is that
> this window will be short for reasonable architectures, so the overhead
> of flushing the entire TLB would overwhelm the cost of the extra
> faults. There are a handful of places where we preemptively flush, but
> those are generally because we can't handle the faults correctly.

Got it, I removed this.

> If you have some benchmark that demonstrates a performance issue on real
> hardware here then I'm happy to talk about this further, but this
> assumption is all over arch/riscv so I'd prefer to keep things
> consistent for now.

We are using riscv-boom + FireSim to set up a benchmark environment; I
can try it after setting this up.

> > +#define flush_cache_vunmap(start, end) flush_tlb_all()
>
> This one does seem necessary.

> > +
> >  static inline void local_flush_icache_all(void)
> >  {
> >  	asm volatile ("fence.i" ::: "memory");
Re: [PATCH] implement flush_cache_vmap and flush_cache_vunmap for RISC-V
From: Jisheng Zhang @ 2021-04-12  6:22 UTC
To: Palmer Dabbelt
Cc: liu, alex, waterman, Paul Walmsley, aou, akpm, geert, linux-riscv,
    linux-kernel

On Sun, 11 Apr 2021 14:41:07 -0700 (PDT)
Palmer Dabbelt <palmer@dabbelt.com> wrote:

> On Sun, 28 Mar 2021 18:55:09 PDT (-0700), liu@jiuyang.me wrote:
> > This patch implements flush_cache_vmap and flush_cache_vunmap for
> > RISC-V, since these functions might modify PTE. Without this patch,
> > SFENCE.VMA won't be added to related codes, which might introduce a bug
> > in some out-of-order micro-architecture implementations.
> >
> > Signed-off-by: Jiuyang Liu <liu@jiuyang.me>
> > ---
> >  arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > index 23ff70350992..4adf25248c43 100644
> > --- a/arch/riscv/include/asm/cacheflush.h
> > +++ b/arch/riscv/include/asm/cacheflush.h
> > @@ -8,6 +8,14 @@
> >
> >  #include <linux/mm.h>
> >
> > +/*
> > + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> > + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> > + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
>
> These should have line breaks.
>
> > + */
> > +#define flush_cache_vmap(start, end) flush_tlb_all()
>
> We shouldn't need cache flushes for permission upgrades: the ISA allows
> the old mappings to be visible until a fence, but the theory is that
> this window will be short for reasonable architectures, so the overhead
> of flushing the entire TLB would overwhelm the cost of the extra
> faults. There are a handful of places where we preemptively flush, but
> those are generally because we can't handle the faults correctly.
>
> If you have some benchmark that demonstrates a performance issue on real
> hardware here then I'm happy to talk about this further, but this
> assumption is all over arch/riscv so I'd prefer to keep things
> consistent for now.

IMHO the flush_cache_vmap() isn't necessary. From the previous
discussion, it seems the reason to implement flush_cache_vmap() is that
we missed an sfence.vma in the vmalloc-related code path. But...

The riscv privileged spec says: "In particular, if a leaf PTE is
modified but a subsuming SFENCE.VMA is not executed, either the old
translation or the new translation will be used, but the choice is
unpredictable. The behavior is otherwise well-defined."

* If the old translation is used, we do take a page fault, but
  vmalloc_fault() will take care of it, and local_flush_tlb_page() will
  issue the sfence.vma properly.
* If the new translation is used, we don't need to do anything.

In both cases, we don't need to implement flush_cache_vmap().

From another side, even if we insert the sfence.vma in advance rather
than relying on vmalloc_fault(), we still can't ensure other harts use
the new translation. Take the small window below as an example:

cpu0                              cpu1
map_kernel_range()
  map_kernel_range_noflush()
                                  access the new vmalloced space
  flush_cache_vmap()

That is to say, we still rely on vmalloc_fault().

> > +#define flush_cache_vunmap(start, end) flush_tlb_all()
>
> This one does seem necessary.

In the flush_cache_vunmap() caller's code path, the translation is
modified *after* flush_cache_vunmap(), for example:

unmap_kernel_range()
  flush_cache_vunmap()
  vunmap_page_range()
  flush_tlb_kernel_range()

IOW, when we call flush_cache_vunmap(), the translation has not changed
yet. Instead, I believe it is flush_tlb_kernel_range() that flushes the
translations, after we change them in vunmap_page_range().

Regards