From: Mike Rapoport <rppt@kernel.org>
To: Vineet Gupta <vgupta@kernel.org>
Cc: linux-snps-arc@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Anshuman Khandual <anshuman.khandual@arm.com>
Subject: Re: [PATCH 15/18] ARC: mm: support 3 levels of page tables
Date: Wed, 11 Aug 2021 15:24:36 +0300
Message-ID: <YRPBhJyYM/L5XWb/@kernel.org>
In-Reply-To: <20210811004258.138075-16-vgupta@kernel.org>

On Tue, Aug 10, 2021 at 05:42:55PM -0700, Vineet Gupta wrote:
> ARCv2 MMU is software walked and Linux implements 2 levels of paging: pgd/pte.
> Forthcoming hw will have multiple levels, so this change preps mm code
> for same. It is also fun to try multi levels even on soft-walked code to
> ensure generic mm code is robust to handle.
> 
> overview
> ________
> 
> 2 levels {pgd, pte} : pmd is folded but pmd_* macros are valid and operate on pgd
> 3 levels {pgd, pmd, pte}:
>   - pud is folded and pud_* macros point to pgd
>   - pmd_* macros operate on actual pmd
> 
> code changes
> ____________
> 
> 1. #include <asm-generic/pgtable-nopud.h>
> 
> 2. Define CONFIG_PGTABLE_LEVELS 3
> 
> 3a. Define PMD_SHIFT, PMD_SIZE, PMD_MASK, pmd_t
> 3b. Define pmd_val() which actually deals with pmd
>     (pmd_offset(), pmd_index() are provided by generic code)
> 3c. Define pmd_alloc_one() and pmd_free() to allocate pmd
>     (pmd_populate/pmd_free already exist)
> 
> 4. Define pud_none(), pud_bad() macros based on generic pud_val() which
>    internally pertains to pgd now.
> 4b. define pud_populate() to just setup pgd
> 
> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
> ---
>  arch/arc/Kconfig                      |  4 ++
>  arch/arc/include/asm/page.h           | 11 +++++
>  arch/arc/include/asm/pgalloc.h        | 22 ++++++++++
>  arch/arc/include/asm/pgtable-levels.h | 63 ++++++++++++++++++++++++---
>  arch/arc/include/asm/processor.h      |  2 +-
>  arch/arc/mm/fault.c                   |  4 ++
>  arch/arc/mm/tlb.c                     |  4 +-
>  arch/arc/mm/tlbex.S                   |  9 ++++
>  8 files changed, 111 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
> index 59d5b2a179f6..43cb8aaf57a2 100644
> --- a/arch/arc/Kconfig
> +++ b/arch/arc/Kconfig
> @@ -314,6 +314,10 @@ config ARC_HUGEPAGE_16M
>  
>  endchoice
>  
> +config PGTABLE_LEVELS
> +	int "Number of Page table levels"
> +	default 2
> +
>  config ARC_COMPACT_IRQ_LEVELS
>  	depends on ISA_ARCOMPACT
>  	bool "Setup Timer IRQ as high Priority"
> diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
> index 313e6f543d2d..df3cc154ae4a 100644
> --- a/arch/arc/include/asm/page.h
> +++ b/arch/arc/include/asm/page.h
> @@ -41,6 +41,17 @@ typedef struct {
>  #define pgd_val(x)	((x).pgd)
>  #define __pgd(x)	((pgd_t) { (x) })
>  
> +#if CONFIG_PGTABLE_LEVELS > 2
> +
> +typedef struct {
> +	unsigned long pmd;
> +} pmd_t;
> +
> +#define pmd_val(x)	((x).pmd)
> +#define __pmd(x)	((pmd_t) { (x) })
> +
> +#endif
> +
>  typedef struct {
>  #ifdef CONFIG_ARC_HAS_PAE40
>  	unsigned long long pte;
> diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
> index 0cf73431eb89..01c2d84418ed 100644
> --- a/arch/arc/include/asm/pgalloc.h
> +++ b/arch/arc/include/asm/pgalloc.h
> @@ -86,6 +86,28 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
>  }
>  
>  
> +#if CONFIG_PGTABLE_LEVELS > 2
> +
> +static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
> +{
> +	set_pud(pudp, __pud((unsigned long)pmdp));
> +}
> +
> +static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
> +{
> +	return (pmd_t *)__get_free_page(
> +		GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO);
> +}
> +
> +static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
> +{
> +	free_page((unsigned long)pmd);
> +}
> +
> +#define __pmd_free_tlb(tlb, pmd, addr)	pmd_free((tlb)->mm, pmd)
> +
> +#endif
> +
>  /*
>   * With software-only page-tables, addr-split for traversal is tweakable and
>   * that directly governs how big tables would be at each level.
> diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h
> index 8ece75335bb5..1c2f022d4ad0 100644
> --- a/arch/arc/include/asm/pgtable-levels.h
> +++ b/arch/arc/include/asm/pgtable-levels.h
> @@ -10,6 +10,8 @@
>  #ifndef _ASM_ARC_PGTABLE_LEVELS_H
>  #define _ASM_ARC_PGTABLE_LEVELS_H
>  
> +#if CONFIG_PGTABLE_LEVELS == 2
> +
>  /*
>   * 2 level paging setup for software walked MMUv3 (ARC700) and MMUv4 (HS)
>   *
> @@ -37,16 +39,38 @@
>  #define PGDIR_SHIFT		21
>  #endif
>  
> -#define PGDIR_SIZE		BIT(PGDIR_SHIFT)	/* vaddr span, not PDG sz */
> -#define PGDIR_MASK		(~(PGDIR_SIZE - 1))
> +#else
> +
> +/*
> + * A default 3 level paging testing setup in software walked MMU
> + *   MMUv4 (8K page): <4> : <7> : <8> : <13>
> + */
> +#define PGDIR_SHIFT		28
> +#if CONFIG_PGTABLE_LEVELS > 2
> +#define PMD_SHIFT		21
> +#endif
> +
> +#endif
>  
> +#define PGDIR_SIZE		BIT(PGDIR_SHIFT)
> +#define PGDIR_MASK		(~(PGDIR_SIZE - 1))
>  #define PTRS_PER_PGD		BIT(32 - PGDIR_SHIFT)
>  
> -#define PTRS_PER_PTE		BIT(PGDIR_SHIFT - PAGE_SHIFT)
> +#if CONFIG_PGTABLE_LEVELS > 2
> +#define PMD_SIZE		BIT(PMD_SHIFT)
> +#define PMD_MASK		(~(PMD_SIZE - 1))
> +#define PTRS_PER_PMD		BIT(PGDIR_SHIFT - PMD_SHIFT)

Maybe move these into the previous #if CONFIG_PGTABLE_LEVELS > 2?
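To make the shift arithmetic concrete: the <4> : <7> : <8> : <13> split quoted above carves a 32-bit virtual address into pgd/pmd/pte indexes plus a page offset. A small userspace sketch (plain C, not kernel code; the helper names only loosely mirror the generic pgtable macros) checks the table sizes these defines produce:

```c
#include <assert.h>

/* Shift values from the 3-level, 8K-page setup in pgtable-levels.h */
#define PAGE_SHIFT	13	/* 8K page */
#define PMD_SHIFT	21
#define PGDIR_SHIFT	28
#define BIT(n)		(1UL << (n))

#define PTRS_PER_PGD	BIT(32 - PGDIR_SHIFT)		/* 2^4 entries */
#define PTRS_PER_PMD	BIT(PGDIR_SHIFT - PMD_SHIFT)	/* 2^7 entries */
#define PTRS_PER_PTE	BIT(PMD_SHIFT - PAGE_SHIFT)	/* 2^8 entries */

/* index extraction, as the generic pgd_index()/pmd_index()/pte_index() do */
unsigned long pgd_index(unsigned long addr)
{
	return addr >> PGDIR_SHIFT;
}

unsigned long pmd_index(unsigned long addr)
{
	return (addr >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}

unsigned long pte_index(unsigned long addr)
{
	return (addr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
}
```

Note how the four field widths (4 + 7 + 8 + 13) necessarily sum to 32, so changing one shift resizes the adjacent tables.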
> +#endif
> +
> +#define PTRS_PER_PTE		BIT(PMD_SHIFT - PAGE_SHIFT)
>  
>  #ifndef __ASSEMBLY__
>  
> +#if CONFIG_PGTABLE_LEVELS > 2
> +#include <asm-generic/pgtable-nopud.h>
> +#else
>  #include <asm-generic/pgtable-nopmd.h>
> +#endif
>  
>  /*
>   * 1st level paging: pgd
> @@ -57,9 +81,35 @@
>  #define pgd_ERROR(e) \
>  	pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
>  
> +#if CONFIG_PGTABLE_LEVELS > 2
> +
> +/* In 3 level paging, pud_* macros work on pgd */
> +#define pud_none(x)		(!pud_val(x))
> +#define pud_bad(x)		((pud_val(x) & ~PAGE_MASK))
> +#define pud_present(x)		(pud_val(x))
> +#define pud_clear(xp)		do { pud_val(*(xp)) = 0; } while (0)
> +#define pud_pgtable(pud)	((pmd_t *)(pud_val(pud) & PAGE_MASK))
> +#define pud_page(pud)		virt_to_page(pud_pgtable(pud))
> +#define set_pud(pudp, pud)	(*(pudp) = pud)
> +
> +/*
> + * 2nd level paging: pmd
> + */
> +#define pmd_ERROR(e) \
> +	pr_crit("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
> +
> +#define pmd_pfn(pmd)		((pmd_val(pmd) & PMD_MASK) >> PAGE_SHIFT)
> +#define pfn_pmd(pfn,prot)	__pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
> +#define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
> +
> +#endif
> +
>  /*
> - * Due to the strange way generic pgtable level folding works, in a 2 level
> - * setup, pmd_val() returns pgd, so these pmd_* macros actually work on pgd
> + * Due to the strange way generic pgtable level folding works, the pmd_* macros
> + *  - are valid even for 2 levels (which supposedly only has pgd - pte)
> + *  - behave differently for 2 vs. 3
> + *    In 2 level paging (pgd -> pte), pmd_* macros work on pgd
> + *    In 3+ level paging (pgd -> pmd -> pte), pmd_* macros work on pmd
>   */
>  #define pmd_none(x)		(!pmd_val(x))
>  #define pmd_bad(x)		((pmd_val(x) & ~PAGE_MASK))
> @@ -70,6 +120,9 @@
>  #define set_pmd(pmdp, pmd)	(*(pmdp) = pmd)
>  #define pmd_pgtable(pmd)	((pgtable_t) pmd_page_vaddr(pmd))
>  
> +/*
> + * 3rd level paging: pte
> + */
>  #define pte_ERROR(e) \
>  	pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
>  
> diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h
> index e4031ecd3c8c..f28afcf5c6d1 100644
> --- a/arch/arc/include/asm/processor.h
> +++ b/arch/arc/include/asm/processor.h
> @@ -93,7 +93,7 @@ extern unsigned int get_wchan(struct task_struct *p);
>  #define VMALLOC_START	(PAGE_OFFSET - (CONFIG_ARC_KVADDR_SIZE << 20))
>  
>  /* 1 PGDIR_SIZE each for fixmap/pkmap, 2 PGDIR_SIZE gutter (see asm/highmem.h) */
> -#define VMALLOC_SIZE	((CONFIG_ARC_KVADDR_SIZE << 20) - PGDIR_SIZE * 4)
> +#define VMALLOC_SIZE	((CONFIG_ARC_KVADDR_SIZE << 20) - PMD_SIZE * 4)
>  
>  #define VMALLOC_END	(VMALLOC_START + VMALLOC_SIZE)
>  
> diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
> index 41f154320964..8da2f0ad8c69 100644
> --- a/arch/arc/mm/fault.c
> +++ b/arch/arc/mm/fault.c
> @@ -39,6 +39,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
>  	if (!pgd_present(*pgd_k))
>  		goto bad_area;
>  
> +	set_pgd(pgd, *pgd_k);
> +
>  	p4d = p4d_offset(pgd, address);
>  	p4d_k = p4d_offset(pgd_k, address);
>  	if (!p4d_present(*p4d_k))
> @@ -49,6 +51,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
>  	if (!pud_present(*pud_k))
>  		goto bad_area;
>  
> +	set_pud(pud, *pud_k);
> +
>  	pmd = pmd_offset(pud, address);
>  	pmd_k = pmd_offset(pud_k, address);
>  	if (!pmd_present(*pmd_k))
> diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
> index 34f16e0b41e6..77da83569b36 100644
> --- a/arch/arc/mm/tlb.c
> +++ b/arch/arc/mm/tlb.c
> @@ -658,8 +658,8 @@ char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len)
>  		      IS_USED_CFG(CONFIG_TRANSPARENT_HUGEPAGE));
>  
>  	n += scnprintf(buf + n, len - n,
> -		      "MMU [v%x]\t: %dk PAGE, %sJTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n",
> -		       p_mmu->ver, p_mmu->pg_sz_k, super_pg,
> +		      "MMU [v%x]\t: %dk PAGE, %s, swalk %d lvl, JTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n",
> +		       p_mmu->ver, p_mmu->pg_sz_k, super_pg, CONFIG_PGTABLE_LEVELS,
>  		       p_mmu->sets * p_mmu->ways, p_mmu->sets, p_mmu->ways,
>  		       p_mmu->u_dtlb, p_mmu->u_itlb,
>  		       IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40));
> diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
> index d08bd09a0afc..5f6bfdfda1be 100644
> --- a/arch/arc/mm/tlbex.S
> +++ b/arch/arc/mm/tlbex.S
> @@ -173,6 +173,15 @@ ex_saved_reg1:
>  	tst	r3, r3
>  	bz	do_slow_path_pf		; if no Page Table, do page fault
>  
> +#if CONFIG_PGTABLE_LEVELS > 2
> +	lsr	r0, r2, PMD_SHIFT	; Bits for indexing into PMD
> +	and	r0, r0, (PTRS_PER_PMD - 1)
> +	ld.as	r1, [r3, r0]		; PMD entry
> +	tst	r1, r1
> +	bz	do_slow_path_pf
> +	mov	r3, r1
> +#endif
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	and.f	0, r3, _PAGE_HW_SZ	; Is this Huge PMD (thp)
>  	add2.nz	r1, r1, r0
> -- 
> 2.25.1

-- 
Sincerely yours,
Mike.
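The tlbex.S hunk above inserts exactly one extra lookup into the software TLB-refill walk. A userspace sketch of the resulting 3-level descent (toy flat tables and illustrative names, not the kernel implementation) shows where the new PMD step slots in between the existing pgd and pte lookups:

```c
#include <assert.h>

#define PAGE_SHIFT	13
#define PMD_SHIFT	21
#define PGDIR_SHIFT	28
#define PTRS_PER_PGD	(1UL << (32 - PGDIR_SHIFT))
#define PTRS_PER_PMD	(1UL << (PGDIR_SHIFT - PMD_SHIFT))
#define PTRS_PER_PTE	(1UL << (PMD_SHIFT - PAGE_SHIFT))

/* toy tables: each entry holds a pointer to the next level, or 0 if empty */
typedef unsigned long entry_t;

/* walk pgd -> pmd -> pte; return 0 ("take the slow path") on any empty entry */
entry_t walk(entry_t *pgd, unsigned long vaddr)
{
	entry_t *pmd, *pte;

	entry_t e = pgd[(vaddr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)];
	if (!e)
		return 0;		/* like "bz do_slow_path_pf" */
	pmd = (entry_t *)e;

	/* the step added by this patch */
	e = pmd[(vaddr >> PMD_SHIFT) & (PTRS_PER_PMD - 1)];
	if (!e)
		return 0;
	pte = (entry_t *)e;		/* like "mov r3, r1" */

	return pte[(vaddr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)];
}
```

In the 2-level build this middle lookup simply compiles out, which is why the same fast path serves both configurations.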