* [PATCH V5 0/3] riscv: Fixup asid_allocator remaining issues
@ 2021-05-30 16:49 guoren
  2021-05-30 16:49 ` [PATCH V5 1/3] riscv: Use global mappings for kernel pages guoren
  ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: guoren @ 2021-05-30 16:49 UTC (permalink / raw)
To: guoren, anup.patel, palmerdabbelt, arnd, hch
Cc: linux-riscv, linux-kernel, linux-arch, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

The patchset fixes the remaining problems of asid_allocator.
 - Fixup _PAGE_GLOBAL for kernel virtual address mapping
 - Optimize tlb_flush with asid & range

Changes since v4:
 - Fixup double PAGE_SIZE add in local_flush_tlb_range_asid
 - Add tlbflush: Optimize coding convention
 - Optimize comment

Changes since v3:
 - Optimize coding convention for "riscv: Use use_asid_allocator flush TLB"

Changes since v2:
 - Remove PAGE_UP/DOWN usage in tlbflush.h
 - Optimize variable name

Changes since v1:
 - Drop PAGE_UP wrong fixup
 - Rebase on clean linux-5.13-rc2
 - Add Reviewed-by

Guo Ren (3):
  riscv: Use global mappings for kernel pages
  riscv: Add ASID-based tlbflushing methods
  riscv: tlbflush: Optimize coding convention

 arch/riscv/include/asm/mmu_context.h |  2 ++
 arch/riscv/include/asm/pgtable.h     |  3 +-
 arch/riscv/include/asm/tlbflush.h    | 22 ++++++++++++++
 arch/riscv/mm/context.c              |  2 +-
 arch/riscv/mm/tlbflush.c             | 57 ++++++++++++++++++++++++++++--------
 5 files changed, 71 insertions(+), 15 deletions(-)

-- 
2.7.4

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH V5 1/3] riscv: Use global mappings for kernel pages
  2021-05-30 16:49 [PATCH V5 0/3] riscv: Fixup asid_allocator remaining issues guoren
@ 2021-05-30 16:49 ` guoren
  2021-05-30 16:49 ` [PATCH V5 2/3] riscv: Add ASID-based tlbflushing methods guoren
  2021-05-30 16:49 ` [PATCH V5 3/3] riscv: tlbflush: Optimize coding convention guoren
  2 siblings, 0 replies; 7+ messages in thread
From: guoren @ 2021-05-30 16:49 UTC (permalink / raw)
To: guoren, anup.patel, palmerdabbelt, arnd, hch
Cc: linux-riscv, linux-kernel, linux-arch, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

We map kernel pages into all address spaces, so they can be marked as
global. This allows hardware to avoid flushing the kernel mappings when
moving between address spaces.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
---
 arch/riscv/include/asm/pgtable.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 9469f46..346a3c6 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -134,7 +134,8 @@
				| _PAGE_WRITE \
				| _PAGE_PRESENT \
				| _PAGE_ACCESSED \
-				| _PAGE_DIRTY)
+				| _PAGE_DIRTY \
+				| _PAGE_GLOBAL)
 
 #define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
 #define PAGE_KERNEL_READ	__pgprot(_PAGE_KERNEL & ~_PAGE_WRITE)
-- 
2.7.4
* [PATCH V5 2/3] riscv: Add ASID-based tlbflushing methods
  2021-05-30 16:49 [PATCH V5 0/3] riscv: Fixup asid_allocator remaining issues guoren
  2021-05-30 16:49 ` [PATCH V5 1/3] riscv: Use global mappings for kernel pages guoren
@ 2021-05-30 16:49 ` guoren
  2021-05-31  6:17   ` Christoph Hellwig
  2021-05-30 16:49 ` [PATCH V5 3/3] riscv: tlbflush: Optimize coding convention guoren
  2 siblings, 1 reply; 7+ messages in thread
From: guoren @ 2021-05-30 16:49 UTC (permalink / raw)
To: guoren, anup.patel, palmerdabbelt, arnd, hch
Cc: linux-riscv, linux-kernel, linux-arch, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

Implement an optimized version of the tlb flushing routines for systems
using ASIDs. These are behind the use_asid_allocator static branch so
they do not affect existing systems that do not use ASIDs.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 arch/riscv/include/asm/mmu_context.h |  2 ++
 arch/riscv/include/asm/tlbflush.h    | 22 +++++++++++++++++
 arch/riscv/mm/context.c              |  2 +-
 arch/riscv/mm/tlbflush.c             | 46 +++++++++++++++++++++++++++++++++---
 4 files changed, 68 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
index b065941..7030837 100644
--- a/arch/riscv/include/asm/mmu_context.h
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -33,6 +33,8 @@ static inline int init_new_context(struct task_struct *tsk,
 	return 0;
 }
 
+DECLARE_STATIC_KEY_FALSE(use_asid_allocator);
+
 #include <asm-generic/mmu_context.h>
 
 #endif /* _ASM_RISCV_MMU_CONTEXT_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index c84218a..894cf75 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -22,9 +22,31 @@ static inline void local_flush_tlb_page(unsigned long addr)
 {
 	ALT_FLUSH_TLB_PAGE(__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory"));
 }
+
+static inline void local_flush_tlb_all_asid(unsigned long asid)
+{
+	__asm__ __volatile__ ("sfence.vma x0, %0"
+			:
+			: "r" (asid)
+			: "memory");
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long asid)
+{
+	unsigned long tmp, end = ALIGN(start + size, PAGE_SIZE);
+
+	for (tmp = start & PAGE_MASK; tmp < end; tmp += PAGE_SIZE) {
+		__asm__ __volatile__ ("sfence.vma %0, %1"
+				:
+				: "r" (tmp), "r" (asid)
+				: "memory");
+	}
+}
 #else /* CONFIG_MMU */
 #define local_flush_tlb_all()			do { } while (0)
 #define local_flush_tlb_page(addr)		do { } while (0)
 #define local_flush_tlb_range_asid(start, size, asid)	do { } while (0)
 #endif /* CONFIG_MMU */
 
 #if defined(CONFIG_SMP) && defined(CONFIG_MMU)
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 68aa312..45c1b04 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -18,7 +18,7 @@
 
 #ifdef CONFIG_MMU
 
-static DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
+DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
 
 static unsigned long asid_bits;
 static unsigned long num_asids;
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 720b443..87b4e52 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -4,6 +4,7 @@
 #include <linux/smp.h>
 #include <linux/sched.h>
 #include <asm/sbi.h>
+#include <asm/mmu_context.h>
 
 void flush_tlb_all(void)
 {
@@ -39,18 +40,57 @@ static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
 	put_cpu();
 }
 
+static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
+				       unsigned long start,
+				       unsigned long size,
+				       unsigned long asid)
+{
+	struct cpumask hmask;
+	unsigned int cpuid;
+
+	if (cpumask_empty(cmask))
+		return;
+
+	cpuid = get_cpu();
+
+	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
+		if (size == -1)
+			local_flush_tlb_all_asid(asid);
+		else
+			local_flush_tlb_range_asid(start, size, asid);
+	} else {
+		riscv_cpuid_to_hartid_mask(cmask, &hmask);
+		sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
+					   start, size, asid);
+	}
+
+	put_cpu();
+}
+
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
+	if (static_branch_unlikely(&use_asid_allocator))
+		__sbi_tlb_flush_range_asid(mm_cpumask(mm), 0, -1,
+					   atomic_long_read(&mm->context.id));
+	else
+		__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
+	if (static_branch_unlikely(&use_asid_allocator))
+		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE,
+					   atomic_long_read(&vma->vm_mm->context.id));
+	else
+		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
+	if (static_branch_unlikely(&use_asid_allocator))
+		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), start, end - start,
+					   atomic_long_read(&vma->vm_mm->context.id));
+	else
+		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
 }
-- 
2.7.4
* Re: [PATCH V5 2/3] riscv: Add ASID-based tlbflushing methods
  2021-05-30 16:49 ` [PATCH V5 2/3] riscv: Add ASID-based tlbflushing methods guoren
@ 2021-05-31  6:17 ` Christoph Hellwig
  2021-05-31 12:20   ` Guo Ren
  0 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2021-05-31  6:17 UTC (permalink / raw)
To: guoren
Cc: anup.patel, palmerdabbelt, arnd, hch, linux-riscv, linux-kernel,
	linux-arch, Guo Ren

On Sun, May 30, 2021 at 04:49:25PM +0000, guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> Implement optimized version of the tlb flushing routines for systems
> using ASIDs. These are behind the use_asid_allocator static branch to
> not affect existing systems not using ASIDs.

I still think the code duplication and exposing of new code in a global
header here is a bad idea and would suggest the version I sent instead.
* Re: [PATCH V5 2/3] riscv: Add ASID-based tlbflushing methods
  2021-05-31  6:17 ` Christoph Hellwig
@ 2021-05-31 12:20 ` Guo Ren
  0 siblings, 0 replies; 7+ messages in thread
From: Guo Ren @ 2021-05-31 12:20 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Anup Patel, Palmer Dabbelt, Arnd Bergmann, linux-riscv,
	Linux Kernel Mailing List, linux-arch, Guo Ren

On Mon, May 31, 2021 at 2:17 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Sun, May 30, 2021 at 04:49:25PM +0000, guoren@kernel.org wrote:
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > Implement optimized version of the tlb flushing routines for systems
> > using ASIDs. These are behind the use_asid_allocator static branch to
> > not affect existing systems not using ASIDs.
>
> I still think the code duplication and exposing of new code in a global
> header here is a bad idea and would suggest the version I sent instead.

Your idea is in the third patch, and I have also credited you with
Co-developed-by. Please have a look:

https://lore.kernel.org/linux-riscv/1622393366-46079-4-git-send-email-guoren@kernel.org/T/#u

[PATCH V5 3/3] riscv: tlbflush: Optimize coding convention

-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/
* [PATCH V5 3/3] riscv: tlbflush: Optimize coding convention
  2021-05-30 16:49 [PATCH V5 0/3] riscv: Fixup asid_allocator remaining issues guoren
  2021-05-30 16:49 ` [PATCH V5 1/3] riscv: Use global mappings for kernel pages guoren
  2021-05-30 16:49 ` [PATCH V5 2/3] riscv: Add ASID-based tlbflushing methods guoren
@ 2021-05-30 16:49 ` guoren
  2 siblings, 0 replies; 7+ messages in thread
From: guoren @ 2021-05-30 16:49 UTC (permalink / raw)
To: guoren, anup.patel, palmerdabbelt, arnd, hch
Cc: linux-riscv, linux-kernel, linux-arch, Guo Ren, Atish Patra

From: Guo Ren <guoren@linux.alibaba.com>

Pass the mm_struct as the first argument, as we can derive both the
cpumask and the asid from it instead of doing that in the callers. More
importantly, the static branch check can be moved deeper into the code
to avoid a lot of duplication.

Also add a FIXME comment noting that the non-ASID code switches to a
global flush once flushing more than a single page.

Link: https://lore.kernel.org/linux-riscv/CAJF2gTQpDYtEdw6ZrTVZUYqxGdhLPs25RjuUiQtz=xN2oKs2fw@mail.gmail.com/T/#m30f7e8d02361f21f709bc3357b9f6ead1d47ed43
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Co-developed-by: Christoph Hellwig <hch@lst.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Anup Patel <anup.patel@wdc.com>
Cc: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/mm/tlbflush.c | 91 ++++++++++++++++++++++--------------------------
 1 file changed, 41 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 87b4e52..facca6e 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -12,56 +12,59 @@ void flush_tlb_all(void)
 }
 
 /*
- * This function must not be called with cmask being null.
+ * This function must not be called with mm_cpumask(mm) being null.
  * Kernel may panic if cmask is NULL.
  */
-static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
+static void __sbi_tlb_flush_range(struct mm_struct *mm,
+				  unsigned long start,
 				  unsigned long size)
 {
+	struct cpumask *cmask = mm_cpumask(mm);
 	struct cpumask hmask;
 	unsigned int cpuid;
+	bool local;
 
 	if (cpumask_empty(cmask))
 		return;
 
 	cpuid = get_cpu();
 
-	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
-		/* local cpu is the only cpu present in cpumask */
-		if (size <= PAGE_SIZE)
-			local_flush_tlb_page(start);
-		else
-			local_flush_tlb_all();
-	} else {
-		riscv_cpuid_to_hartid_mask(cmask, &hmask);
-		sbi_remote_sfence_vma(cpumask_bits(&hmask), start, size);
-	}
+	/*
+	 * check if the tlbflush needs to be sent to other CPUs, local
+	 * cpu is the only cpu present in cpumask.
+	 */
+	local = !(cpumask_any_but(cmask, cpuid) < nr_cpu_ids);
 
-	put_cpu();
-}
-
-static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
-				       unsigned long start,
-				       unsigned long size,
-				       unsigned long asid)
-{
-	struct cpumask hmask;
-	unsigned int cpuid;
-
-	if (cpumask_empty(cmask))
-		return;
-
-	cpuid = get_cpu();
+	if (static_branch_likely(&use_asid_allocator)) {
+		unsigned long asid = atomic_long_read(&mm->context.id);
 
-	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
-		if (size == -1)
-			local_flush_tlb_all_asid(asid);
-		else
-			local_flush_tlb_range_asid(start, size, asid);
+		if (likely(local)) {
+			if (size == -1)
+				local_flush_tlb_all_asid(asid);
+			else
+				local_flush_tlb_range_asid(start, size, asid);
+		} else {
+			riscv_cpuid_to_hartid_mask(cmask, &hmask);
+			sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
+						   start, size, asid);
+		}
 	} else {
-		riscv_cpuid_to_hartid_mask(cmask, &hmask);
-		sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
-					   start, size, asid);
+		if (likely(local)) {
+			/*
+			 * FIXME: The non-ASID code switches to a global flush
+			 * once flushing more than a single page. It's made by
+			 * commit 6efb16b1d551 (RISC-V: Issue a tlb page flush
+			 * if possible).
+			 */
+			if (size <= PAGE_SIZE)
+				local_flush_tlb_page(start);
+			else
+				local_flush_tlb_all();
+		} else {
+			riscv_cpuid_to_hartid_mask(cmask, &hmask);
+			sbi_remote_sfence_vma(cpumask_bits(&hmask),
+					      start, size);
+		}
 	}
 
 	put_cpu();
@@ -69,28 +72,16 @@ static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(mm), 0, -1,
-					   atomic_long_read(&mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
+	__sbi_tlb_flush_range(mm, 0, -1);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE,
-					   atomic_long_read(&vma->vm_mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
+	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), start, end - start,
-					   atomic_long_read(&vma->vm_mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
+	__sbi_tlb_flush_range(vma->vm_mm, start, end - start);
 }
-- 
2.7.4
* [RFC PATCH v2 00/11] riscv: Add DMA_COHERENT support for Allwinner D1
@ 2021-06-06  9:03 guoren
  2021-06-06  9:04 ` [PATCH V5 3/3] riscv: tlbflush: Optimize coding convention guoren
  0 siblings, 1 reply; 7+ messages in thread
From: guoren @ 2021-06-06  9:03 UTC (permalink / raw)
To: guoren, anup.patel, palmerdabbelt, arnd, wens, maxime, drew, liush,
	lazyparser, wefu
Cc: linux-riscv, linux-kernel, linux-arch, linux-sunxi, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

The RISC-V ISA doesn't yet specify how to query or modify PMAs, so let
vendors define the custom properties of memory regions in the PTE. This
patchset helps SOC vendors support their own custom interconnect
coherency solutions with PTE attributes.

For example, Allwinner D1 [1] uses T-HEAD C906 as its main processor.
C906 has two modes in the MMU:
 - Compatible mode, the same as the definitions in the spec.
 - Enhanced mode, which adds custom DMA_COHERENT attribute bits in the
   PTE that are not mentioned in the spec.

Allwinner D1 needs the enhanced mode to support DMA-type devices with a
non-coherent interconnect in its SOC. C906 uses bits [63:59] as custom
attribute bits in the PTE.

The patchset contains 4 parts (asid, pgtable, cmo, soc) which have been
tested on D1:
 - asid: T-HEAD C906 of D1 contains full asid hw facilities which have
   no conflict with the RISC-V spec, and hopefully these patches can be
   approved soon.
 - pgtable: Use an image-hdr to pass vendor-specific information and set
   up custom PTE attributes in a global struct variable during the boot
   stage. It also needs a custom protection_map defined in linux/mm.
 - cmo: We need to deal with dma_sync & icache_sync & __vdso_icache_sync.
   In this patchset, I just show how T-HEAD C9xx works; it seems Atish
   is working on the DMA infrastructure, so please let me know the idea.
 - soc: Add the Allwinner gmac driver & dts & Kconfig for sunxi testing.

The patchset works with linux-5.13-rc4. Here are the steps for D1:
 - Download linux-5.13-rc4 and apply the patchset
 - make ARCH=riscv CROSS_COMPILE=riscv64-linux- defconfig
 - make ARCH=riscv CROSS_COMPILE=riscv64-linux- Image modules dtbs
 - mkimage -A riscv -O linux -T kernel -C none -a 0x00200000 -e 0x00200000 \
     -n Linux -d arch/riscv/boot/Image uImage
 - Download the newest opensbi [2], build it with [3], and get fw_dynamic.bin
 - Copy uImage, fw_dynamic.bin and allwinner-d1-nezha-kit.dtb into the
   boot partition of a TF card.
 - Plug in the TF card and power on D1.

Link: https://linux-sunxi.org/D1 [1]
Link: https://github.com/riscv/opensbi branch:master [2]
Link: https://github.com/riscv/opensbi/blob/master/docs/platform/thead-c9xx.md [3]

Changes since v1:
 - Rebase on linux-5.13-rc4
 - Support defconfig for different PTE attributes
 - Support C906 icache_sync
 - Add Allwinner D1 dts & Kconfig & gmac for testing
 - Add asid optimization for D1 usage

Guo Ren (10):
  riscv: asid: Use global mappings for kernel pages
  riscv: asid: Add ASID-based tlbflushing methods
  riscv: asid: Optimize tlbflush coding convention
  riscv: pgtable: Fixup _PAGE_CHG_MASK usage
  riscv: pgtable: Add custom protection_map init
  riscv: pgtable: Add DMA_COHERENT with custom PTE attributes
  riscv: cmo: Add dma-noncoherency support
  riscv: cmo: Add vendor custom icache sync
  riscv: soc: Initial DTS for Allwinner D1 NeZha board
  riscv: soc: Add Allwinner SoC kconfig option

liush (1):
  riscv: soc: Allwinner D1 GMAC driver only for temp use

 arch/riscv/Kconfig                                 |    9 +
 arch/riscv/Kconfig.socs                            |   12 +
 arch/riscv/boot/dts/Makefile                       |    1 +
 arch/riscv/boot/dts/allwinner/Makefile             |    2 +
 .../boot/dts/allwinner/allwinner-d1-nezha-kit.dts  |   29 +
 arch/riscv/boot/dts/allwinner/allwinner-d1.dtsi    |  100 +
 arch/riscv/configs/defconfig                       |    1 +
 arch/riscv/include/asm/cacheflush.h                |   48 +-
 arch/riscv/include/asm/mmu_context.h               |    2 +
 arch/riscv/include/asm/pgtable-64.h                |    8 +-
 arch/riscv/include/asm/pgtable-bits.h              |   20 +-
 arch/riscv/include/asm/pgtable.h                   |   44 +-
 arch/riscv/include/asm/sbi.h                       |   15 +
 arch/riscv/include/asm/soc.h                       |    1 +
 arch/riscv/include/asm/tlbflush.h                  |   22 +
 arch/riscv/include/asm/vendorid_list.h             |    1 +
 arch/riscv/kernel/sbi.c                            |   19 +
 arch/riscv/kernel/soc.c                            |   22 +
 arch/riscv/kernel/vdso/flush_icache.S              |   33 +-
 arch/riscv/mm/Makefile                             |    1 +
 arch/riscv/mm/cacheflush.c                         |    3 +-
 arch/riscv/mm/context.c                            |    2 +-
 arch/riscv/mm/dma-mapping.c                        |   53 +
 arch/riscv/mm/init.c                               |   26 +
 arch/riscv/mm/tlbflush.c                           |   57 +-
 drivers/net/ethernet/Kconfig                       |    1 +
 drivers/net/ethernet/Makefile                      |    1 +
 drivers/net/ethernet/allwinnertmp/Kconfig          |   17 +
 drivers/net/ethernet/allwinnertmp/Makefile         |    7 +
 drivers/net/ethernet/allwinnertmp/sunxi-gmac-ops.c |  690 ++++++
 drivers/net/ethernet/allwinnertmp/sunxi-gmac.c     | 2240 ++++++++++++++++++++
 drivers/net/ethernet/allwinnertmp/sunxi-gmac.h     |  258 +++
 drivers/net/phy/realtek.c                          |    2 +-
 mm/mmap.c                                          |    4 +
 34 files changed, 3714 insertions(+), 37 deletions(-)
 create mode 100644 arch/riscv/boot/dts/allwinner/Makefile
 create mode 100644 arch/riscv/boot/dts/allwinner/allwinner-d1-nezha-kit.dts
 create mode 100644 arch/riscv/boot/dts/allwinner/allwinner-d1.dtsi
 create mode 100644 arch/riscv/mm/dma-mapping.c
 create mode 100644 drivers/net/ethernet/allwinnertmp/Kconfig
 create mode 100644 drivers/net/ethernet/allwinnertmp/Makefile
 create mode 100644 drivers/net/ethernet/allwinnertmp/sunxi-gmac-ops.c
 create mode 100644 drivers/net/ethernet/allwinnertmp/sunxi-gmac.c
 create mode 100644 drivers/net/ethernet/allwinnertmp/sunxi-gmac.h
-- 
2.7.4
* [PATCH V5 3/3] riscv: tlbflush: Optimize coding convention
  2021-06-06  9:03 [RFC PATCH v2 00/11] riscv: Add DMA_COHERENT support for Allwinner D1 guoren
@ 2021-06-06  9:04 ` guoren
  0 siblings, 0 replies; 7+ messages in thread
From: guoren @ 2021-06-06  9:04 UTC (permalink / raw)
To: guoren, anup.patel, palmerdabbelt, arnd, wens, maxime, drew, liush,
	lazyparser, wefu
Cc: linux-riscv, linux-kernel, linux-arch, linux-sunxi, Guo Ren,
	Christoph Hellwig, Atish Patra

From: Guo Ren <guoren@linux.alibaba.com>

Pass the mm_struct as the first argument, as we can derive both the
cpumask and the asid from it instead of doing that in the callers. More
importantly, the static branch check can be moved deeper into the code
to avoid a lot of duplication.

Also add a FIXME comment noting that the non-ASID code switches to a
global flush once flushing more than a single page.

Link: https://lore.kernel.org/linux-riscv/CAJF2gTQpDYtEdw6ZrTVZUYqxGdhLPs25RjuUiQtz=xN2oKs2fw@mail.gmail.com/T/#m30f7e8d02361f21f709bc3357b9f6ead1d47ed43
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Co-developed-by: Christoph Hellwig <hch@lst.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Anup Patel <anup.patel@wdc.com>
Cc: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/mm/tlbflush.c | 91 ++++++++++++++++++++++--------------------------
 1 file changed, 41 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 87b4e52..facca6e 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -12,56 +12,59 @@ void flush_tlb_all(void)
 }
 
 /*
- * This function must not be called with cmask being null.
+ * This function must not be called with mm_cpumask(mm) being null.
  * Kernel may panic if cmask is NULL.
  */
-static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
+static void __sbi_tlb_flush_range(struct mm_struct *mm,
+				  unsigned long start,
 				  unsigned long size)
 {
+	struct cpumask *cmask = mm_cpumask(mm);
 	struct cpumask hmask;
 	unsigned int cpuid;
+	bool local;
 
 	if (cpumask_empty(cmask))
 		return;
 
 	cpuid = get_cpu();
 
-	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
-		/* local cpu is the only cpu present in cpumask */
-		if (size <= PAGE_SIZE)
-			local_flush_tlb_page(start);
-		else
-			local_flush_tlb_all();
-	} else {
-		riscv_cpuid_to_hartid_mask(cmask, &hmask);
-		sbi_remote_sfence_vma(cpumask_bits(&hmask), start, size);
-	}
+	/*
+	 * check if the tlbflush needs to be sent to other CPUs, local
+	 * cpu is the only cpu present in cpumask.
+	 */
+	local = !(cpumask_any_but(cmask, cpuid) < nr_cpu_ids);
 
-	put_cpu();
-}
-
-static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
-				       unsigned long start,
-				       unsigned long size,
-				       unsigned long asid)
-{
-	struct cpumask hmask;
-	unsigned int cpuid;
-
-	if (cpumask_empty(cmask))
-		return;
-
-	cpuid = get_cpu();
+	if (static_branch_likely(&use_asid_allocator)) {
+		unsigned long asid = atomic_long_read(&mm->context.id);
 
-	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
-		if (size == -1)
-			local_flush_tlb_all_asid(asid);
-		else
-			local_flush_tlb_range_asid(start, size, asid);
+		if (likely(local)) {
+			if (size == -1)
+				local_flush_tlb_all_asid(asid);
+			else
+				local_flush_tlb_range_asid(start, size, asid);
+		} else {
+			riscv_cpuid_to_hartid_mask(cmask, &hmask);
+			sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
+						   start, size, asid);
+		}
 	} else {
-		riscv_cpuid_to_hartid_mask(cmask, &hmask);
-		sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
-					   start, size, asid);
+		if (likely(local)) {
+			/*
+			 * FIXME: The non-ASID code switches to a global flush
+			 * once flushing more than a single page. It's made by
+			 * commit 6efb16b1d551 (RISC-V: Issue a tlb page flush
+			 * if possible).
+			 */
+			if (size <= PAGE_SIZE)
+				local_flush_tlb_page(start);
+			else
+				local_flush_tlb_all();
+		} else {
+			riscv_cpuid_to_hartid_mask(cmask, &hmask);
+			sbi_remote_sfence_vma(cpumask_bits(&hmask),
+					      start, size);
+		}
 	}
 
 	put_cpu();
@@ -69,28 +72,16 @@ static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(mm), 0, -1,
-					   atomic_long_read(&mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
+	__sbi_tlb_flush_range(mm, 0, -1);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE,
-					   atomic_long_read(&vma->vm_mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
+	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), start, end - start,
-					   atomic_long_read(&vma->vm_mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
+	__sbi_tlb_flush_range(vma->vm_mm, start, end - start);
 }
-- 
2.7.4