From: Heiko Stuebner <heiko@sntech.de>
To: palmer@dabbelt.com, paul.walmsley@sifive.com, aou@eecs.berkeley.edu
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, wefu@redhat.com, liush@allwinnertech.com, guoren@kernel.org, atishp@atishpatra.org, anup@brainfault.org, drew@beagleboard.org, hch@lst.de, arnd@arndb.de, wens@csie.org, maxime@cerno.tech, gfavor@ventanamicro.com, andrea.mondelli@huawei.com, behrensj@mit.edu, xinhaoqu@huawei.com, mick@ics.forth.gr, allen.baum@esperantotech.com, jscheid@ventanamicro.com, rtrauben@gmail.com, samuel@sholland.org, cmuellner@linux.com, philipp.tomsich@vrull.eu, Heiko Stuebner <heiko@sntech.de>
Subject: [PATCH 08/12] riscv: Fix accessing pfn bits in PTEs for non-32bit variants
Date: Wed, 11 May 2022 21:29:17 +0200
Message-Id: <20220511192921.2223629-9-heiko@sntech.de>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220511192921.2223629-1-heiko@sntech.de>
References: <20220511192921.2223629-1-heiko@sntech.de>

On rv32 the PFN part of a PTE is defined to use bits [xlen-1:10], while on rv64 it is defined to use bits [53:10], leaving bits [63:54] reserved. With upcoming optional extensions like svpbmt, these previously reserved bits will get used, so simply right-shifting the PTE to get the PFN is no longer enough.

So introduce a _PAGE_PFN_MASK constant to mask the correct bits on both rv32 and rv64 before shifting.
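As a quick illustration of why masking matters once the previously reserved bits get used (not part of the patch): a minimal userspace sketch assuming the rv64 layout described above, where GENMASK64 and the example PTE value are made-up stand-ins for the kernel macros.

    /*
     * Illustrative only: shows how a high attribute bit (here bit 61,
     * as an svpbmt-style memory-type bit) corrupts a shift-only PFN
     * conversion, while mask-then-shift recovers the real PFN.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define GENMASK64(h, l)  (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

    #define _PAGE_PFN_SHIFT  10
    #define _PAGE_PFN_MASK   GENMASK64(53, 10)

    int main(void)
    {
            /* made-up PTE: PFN 0x80000, one bit set in [62:61], valid bit */
            uint64_t pte = (0x80000ULL << _PAGE_PFN_SHIFT) | (1ULL << 61) | 1ULL;

            /* old conversion: the attribute bit leaks into the "PFN" */
            printf("shift only:     0x%llx\n",
                   (unsigned long long)(pte >> _PAGE_PFN_SHIFT));

            /* new conversion: mask first, then shift */
            printf("mask and shift: 0x%llx\n",
                   (unsigned long long)((pte & _PAGE_PFN_MASK) >> _PAGE_PFN_SHIFT));

            return 0;
    }

With shift-only conversion the bit at position 61 shows up as bit 51 of the reported PFN; masking with _PAGE_PFN_MASK first drops it, which is exactly what __page_val_to_pfn() below does.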
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
---
 arch/riscv/include/asm/pgtable-32.h   |  8 ++++++++
 arch/riscv/include/asm/pgtable-64.h   | 14 +++++++++++---
 arch/riscv/include/asm/pgtable-bits.h |  6 ------
 arch/riscv/include/asm/pgtable.h      |  8 +++++---
 4 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
index 5b2e79e5bfa5..e266a4fe7f43 100644
--- a/arch/riscv/include/asm/pgtable-32.h
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -7,6 +7,7 @@
 #define _ASM_RISCV_PGTABLE_32_H
 
 #include <asm-generic/pgtable-nopmd.h>
+#include <linux/bits.h>
 #include <linux/const.h>
 
 /* Size of region mapped by a page global directory */
@@ -16,4 +17,11 @@
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 34
 
+/*
+ * rv32 PTE format:
+ * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *       PFN      reserved for SW   D   A   G   U   X   W   R   V
+ */
+#define _PAGE_PFN_MASK  GENMASK(31, 10)
+
 #endif /* _ASM_RISCV_PGTABLE_32_H */
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 7e246e9f8d70..15f3ad5aee4f 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -6,6 +6,7 @@
 #ifndef _ASM_RISCV_PGTABLE_64_H
 #define _ASM_RISCV_PGTABLE_64_H
 
+#include <linux/bits.h>
 #include <linux/const.h>
 
 extern bool pgtable_l4_enabled;
@@ -65,6 +66,13 @@ typedef struct {
 
 #define PTRS_PER_PMD	(PAGE_SIZE / sizeof(pmd_t))
 
+/*
+ * rv64 PTE format:
+ * | 63 | 62 61 | 60 54 | 53  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *   N     MT     RSV      PFN     reserved for SW   D   A   G   U   X   W   R   V
+ */
+#define _PAGE_PFN_MASK  GENMASK(53, 10)
+
 static inline int pud_present(pud_t pud)
 {
 	return (pud_val(pud) & _PAGE_PRESENT);
@@ -108,12 +116,12 @@ static inline unsigned long _pud_pfn(pud_t pud)
 
 static inline pmd_t *pud_pgtable(pud_t pud)
 {
-	return (pmd_t *)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+	return (pmd_t *)pfn_to_virt(__page_val_to_pfn(pud_val(pud)));
 }
 
 static inline struct page *pud_page(pud_t pud)
 {
-	return pfn_to_page(pud_val(pud) >> _PAGE_PFN_SHIFT);
+	return pfn_to_page(__page_val_to_pfn(pud_val(pud)));
 }
 
 #define mm_p4d_folded mm_p4d_folded
@@ -143,7 +151,7 @@ static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
 
 static inline unsigned long _pmd_pfn(pmd_t pmd)
 {
-	return pmd_val(pmd) >> _PAGE_PFN_SHIFT;
+	return __page_val_to_pfn(pmd_val(pmd));
 }
 
 #define mk_pmd(page, prot)    pfn_pmd(page_to_pfn(page), prot)
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
index a6b0c89824c2..e571fa954afc 100644
--- a/arch/riscv/include/asm/pgtable-bits.h
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -6,12 +6,6 @@
 #ifndef _ASM_RISCV_PGTABLE_BITS_H
 #define _ASM_RISCV_PGTABLE_BITS_H
 
-/*
- * PTE format:
- * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
- *       PFN      reserved for SW   D   A   G   U   X   W   R   V
- */
-
 #define _PAGE_ACCESSED_OFFSET 6
 
 #define _PAGE_PRESENT   (1 << 0)
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 046b44225623..faba543e2b08 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -108,6 +108,8 @@
 #include <asm/tlbflush.h>
 #include <linux/mm_types.h>
 
+#define __page_val_to_pfn(_val)	(((_val) & _PAGE_PFN_MASK) >> _PAGE_PFN_SHIFT)
+
 #ifdef CONFIG_64BIT
 #include <asm/pgtable-64.h>
 #else
@@ -261,12 +263,12 @@ static inline unsigned long _pgd_pfn(pgd_t pgd)
 
 static inline struct page *pmd_page(pmd_t pmd)
 {
-	return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+	return pfn_to_page(__page_val_to_pfn(pmd_val(pmd)));
 }
 
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
-	return (unsigned long)pfn_to_virt(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+	return (unsigned long)pfn_to_virt(__page_val_to_pfn(pmd_val(pmd)));
 }
 
 static inline pte_t pmd_pte(pmd_t pmd)
@@ -282,7 +284,7 @@ static inline pte_t pud_pte(pud_t pud)
 /* Yields the page frame number (PFN) of a page table entry */
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	return (pte_val(pte) >> _PAGE_PFN_SHIFT);
+	return __page_val_to_pfn(pte_val(pte));
 }
 
 #define pte_page(x)	pfn_to_page(pte_pfn(x))
-- 
2.35.1