From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anup Patel
Date: Thu, 3 Jun 2021 17:15:57 +0530
Subject: Re: [PATCH v3 3/3] riscv: Map the kernel with correct permissions the first time
To: Alexandre Ghiti
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Jisheng Zhang, Christoph Hellwig,
 Zong Li, linux-riscv, "linux-kernel@vger.kernel.org List"
References: <20210603082749.1256129-1-alex@ghiti.fr> <20210603082749.1256129-4-alex@ghiti.fr>
In-Reply-To: <20210603082749.1256129-4-alex@ghiti.fr>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jun 3, 2021 at 2:01 PM Alexandre Ghiti wrote:
>
> For 64b kernels, we map all the kernel with write and execute permissions
> and afterwards remove writability from text and executability from data.
>
> For 32b kernels, the kernel mapping resides in the linear mapping, so we
> map all the linear mapping as writable and executable and afterwards we
> remove those properties for unused memory and kernel mapping as
> described above.
>
> Change this behavior to directly map the kernel with correct permissions
> and avoid going through the whole mapping to fix the permissions.
>
> At the same time, this fixes an issue introduced by commit 2bfc6cd81bd1
> ("riscv: Move kernel mapping outside of linear mapping") as reported
> here https://github.com/starfive-tech/linux/issues/17.
>
> Signed-off-by: Alexandre Ghiti

Looks good to me.

Reviewed-by: Anup Patel

Regards,
Anup

> ---
>  arch/riscv/include/asm/page.h       |  13 +++-
>  arch/riscv/include/asm/sections.h   |  17 +++++
>  arch/riscv/include/asm/set_memory.h |   8 ---
>  arch/riscv/kernel/setup.c           |  11 +--
>  arch/riscv/mm/init.c                | 102 ++++++++++++----------------
>  5 files changed, 75 insertions(+), 76 deletions(-)
>
> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
> index 6e004d8fda4d..349e4f9874cc 100644
> --- a/arch/riscv/include/asm/page.h
> +++ b/arch/riscv/include/asm/page.h
> @@ -95,6 +95,7 @@ extern unsigned long va_kernel_pa_offset;
>  #endif
>  extern unsigned long va_kernel_xip_pa_offset;
>  extern unsigned long pfn_base;
> +extern uintptr_t load_sz;
>  #define ARCH_PFN_OFFSET		(pfn_base)
>  #else
>  #define va_pa_offset		0
> @@ -108,6 +109,11 @@ extern unsigned long pfn_base;
>  extern unsigned long kernel_virt_addr;
>
>  #ifdef CONFIG_64BIT
> +#define is_kernel_mapping(x) \
> +	((x) >= kernel_virt_addr && (x) < (kernel_virt_addr + load_sz))
> +#define is_linear_mapping(x) \
> +	((x) >= PAGE_OFFSET && (x) < kernel_virt_addr)
> +
>  #define linear_mapping_pa_to_va(x)	((void *)((unsigned long)(x) + va_pa_offset))
>  #define kernel_mapping_pa_to_va(y)	({ \
>  	unsigned long _y = y; \
> @@ -127,10 +133,15 @@ extern unsigned long kernel_virt_addr;
>
>  #define __va_to_pa_nodebug(x)	({ \
>  	unsigned long _x = x; \
> -	(_x < kernel_virt_addr) ? \
> +	is_linear_mapping(_x) ? \
>  		linear_mapping_va_to_pa(_x) : kernel_mapping_va_to_pa(_x); \
>  	})
>  #else
> +#define is_kernel_mapping(x) \
> +	((x) >= kernel_virt_addr && (x) < (kernel_virt_addr + load_sz))
> +#define is_linear_mapping(x) \
> +	((x) >= PAGE_OFFSET)
> +
>  #define __pa_to_va_nodebug(x)  ((void *)((unsigned long) (x) + va_pa_offset))
>  #define __va_to_pa_nodebug(x)  ((unsigned long)(x) - va_pa_offset)
>  #endif /* CONFIG_64BIT */
> diff --git a/arch/riscv/include/asm/sections.h b/arch/riscv/include/asm/sections.h
> index 8a303fb1ee3b..32336e8a17cb 100644
> --- a/arch/riscv/include/asm/sections.h
> +++ b/arch/riscv/include/asm/sections.h
> @@ -6,6 +6,7 @@
>  #define __ASM_SECTIONS_H
>
>  #include
> +#include
>
>  extern char _start[];
>  extern char _start_kernel[];
> @@ -13,4 +14,20 @@ extern char __init_data_begin[], __init_data_end[];
>  extern char __init_text_begin[], __init_text_end[];
>  extern char __alt_start[], __alt_end[];
>
> +static inline bool is_va_kernel_text(uintptr_t va)
> +{
> +	uintptr_t start = (uintptr_t)_start;
> +	uintptr_t end = (uintptr_t)__init_data_begin;
> +
> +	return va >= start && va < end;
> +}
> +
> +static inline bool is_va_kernel_lm_alias_text(uintptr_t va)
> +{
> +	uintptr_t start = (uintptr_t)lm_alias(_start);
> +	uintptr_t end = (uintptr_t)lm_alias(__init_data_begin);
> +
> +	return va >= start && va < end;
> +}
> +
>  #endif /* __ASM_SECTIONS_H */
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index 7a411fed9e0e..c0b41ed218e1 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -17,13 +17,11 @@ int set_memory_x(unsigned long addr, int numpages);
>  int set_memory_nx(unsigned long addr, int numpages);
>  int set_memory_rw_nx(unsigned long addr, int numpages);
>  int set_kernel_memory(char *start, char *end, int (*set_memory)(unsigned long, int));
> -void protect_kernel_text_data(void);
>  #else
>  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> -static inline void protect_kernel_text_data(void) {}
>  static inline int set_memory_rw_nx(unsigned long addr, int numpages) { return 0; }
>  static inline int set_kernel_memory(char *start, char *end, int (*set_memory)(unsigned long, int))
>  {
> @@ -31,12 +29,6 @@ static inline int set_kernel_memory(char *start, char *end, int (*set_memory)(un
>  }
>  #endif
>
> -#if defined(CONFIG_64BIT) && defined(CONFIG_STRICT_KERNEL_RWX)
> -void protect_kernel_linear_mapping_text_rodata(void);
> -#else
> -static inline void protect_kernel_linear_mapping_text_rodata(void) {}
> -#endif
> -
>  int set_direct_map_invalid_noflush(struct page *page);
>  int set_direct_map_default_noflush(struct page *page);
>  bool kernel_page_present(struct page *page);
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index 4db4d0b5911f..b3d0895ce5f7 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -290,11 +290,6 @@ void __init setup_arch(char **cmdline_p)
>  	init_resources();
>  	sbi_init();
>
> -	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) {
> -		protect_kernel_text_data();
> -		protect_kernel_linear_mapping_text_rodata();
> -	}
> -
>  #ifdef CONFIG_SWIOTLB
>  	swiotlb_init(1);
>  #endif
> @@ -333,11 +328,9 @@ subsys_initcall(topology_init);
>
>  void free_initmem(void)
>  {
> -	unsigned long init_begin = (unsigned long)__init_begin;
> -	unsigned long init_end = (unsigned long)__init_end;
> -
>  	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
> -		set_memory_rw_nx(init_begin, (init_end - init_begin) >> PAGE_SHIFT);
> +		set_kernel_memory(lm_alias(__init_begin), lm_alias(__init_end),
> +				  IS_ENABLED(CONFIG_64BIT) ? set_memory_rw : set_memory_rw_nx);
>
>  	free_initmem_default(POISON_FREE_INITMEM);
>  }
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index 2d80088f33d5..6b70c345cfc4 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -425,6 +425,42 @@ asmlinkage void __init __copy_data(void)
>  }
>  #endif
>
> +#ifdef CONFIG_STRICT_KERNEL_RWX
> +static __init pgprot_t pgprot_from_va(uintptr_t va)
> +{
> +	if (is_va_kernel_text(va))
> +		return PAGE_KERNEL_READ_EXEC;
> +
> +	/*
> +	 * In 64b kernel, the kernel mapping is outside the linear mapping so we
> +	 * must protect its linear mapping alias from being executed and written.
> +	 * And rodata section is marked readonly in mark_rodata_ro.
> +	 */
> +	if (IS_ENABLED(CONFIG_64BIT) && is_va_kernel_lm_alias_text(va))
> +		return PAGE_KERNEL_READ;
> +
> +	return PAGE_KERNEL;
> +}
> +
> +void mark_rodata_ro(void)
> +{
> +	set_kernel_memory(__start_rodata, _data, set_memory_ro);
> +	if (IS_ENABLED(CONFIG_64BIT))
> +		set_kernel_memory(lm_alias(__start_rodata), lm_alias(_data),
> +				  set_memory_ro);
> +
> +	debug_checkwx();
> +}
> +#else
> +static __init pgprot_t pgprot_from_va(uintptr_t va)
> +{
> +	if (IS_ENABLED(CONFIG_64BIT) && !is_kernel_mapping(va))
> +		return PAGE_KERNEL;
> +
> +	return PAGE_KERNEL_EXEC;
> +}
> +#endif /* CONFIG_STRICT_KERNEL_RWX */
> +
>  /*
>   * setup_vm() is called from head.S with MMU-off.
>   *
> @@ -454,7 +490,8 @@ uintptr_t xiprom, xiprom_sz;
>  #define xiprom_sz      (*((uintptr_t *)XIP_FIXUP(&xiprom_sz)))
>  #define xiprom         (*((uintptr_t *)XIP_FIXUP(&xiprom)))
>
> -static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size)
> +static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size,
> +					    __always_unused bool early)
>  {
>  	uintptr_t va, end_va;
>
> @@ -473,7 +510,7 @@ static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size)
>  			   map_size, PAGE_KERNEL);
>  }
>  #else
> -static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size)
> +static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size, bool early)
>  {
>  	uintptr_t va, end_va;
>
> @@ -481,7 +518,7 @@ static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size)
>  	for (va = kernel_virt_addr; va < end_va; va += map_size)
>  		create_pgd_mapping(pgdir, va,
>  				   load_pa + (va - kernel_virt_addr),
> -				   map_size, PAGE_KERNEL_EXEC);
> +				   map_size, early ? PAGE_KERNEL_EXEC : pgprot_from_va(va));
>  }
>  #endif
>
> @@ -558,7 +595,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>  	 * us to reach paging_init(). We map all memory banks later
>  	 * in setup_vm_final() below.
>  	 */
> -	create_kernel_page_table(early_pg_dir, map_size);
> +	create_kernel_page_table(early_pg_dir, map_size, true);
>
>  #ifndef __PAGETABLE_PMD_FOLDED
>  	/* Setup early PMD for DTB */
> @@ -634,22 +671,6 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>  #endif
>  }
>
> -#if defined(CONFIG_64BIT) && defined(CONFIG_STRICT_KERNEL_RWX)
> -void protect_kernel_linear_mapping_text_rodata(void)
> -{
> -	unsigned long text_start = (unsigned long)lm_alias(_start);
> -	unsigned long init_text_start = (unsigned long)lm_alias(__init_text_begin);
> -	unsigned long rodata_start = (unsigned long)lm_alias(__start_rodata);
> -	unsigned long data_start = (unsigned long)lm_alias(_data);
> -
> -	set_memory_ro(text_start, (init_text_start - text_start) >> PAGE_SHIFT);
> -	set_memory_nx(text_start, (init_text_start - text_start) >> PAGE_SHIFT);
> -
> -	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
> -	set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
> -}
> -#endif
> -
>  static void __init setup_vm_final(void)
>  {
>  	uintptr_t va, map_size;
> @@ -682,21 +703,15 @@ static void __init setup_vm_final(void)
>  		map_size = best_map_size(start, end - start);
>  		for (pa = start; pa < end; pa += map_size) {
>  			va = (uintptr_t)__va(pa);
> -			create_pgd_mapping(swapper_pg_dir, va, pa,
> -					   map_size,
> -#ifdef CONFIG_64BIT
> -					   PAGE_KERNEL
> -#else
> -					   PAGE_KERNEL_EXEC
> -#endif
> -					   );
>
> +			create_pgd_mapping(swapper_pg_dir, va, pa, map_size,
> +					   pgprot_from_va(va));
>  		}
>  	}
>
>  #ifdef CONFIG_64BIT
>  	/* Map the kernel */
> -	create_kernel_page_table(swapper_pg_dir, PMD_SIZE);
> +	create_kernel_page_table(swapper_pg_dir, PMD_SIZE, false);
>  #endif
>
>  	/* Clear fixmap PTE and PMD mappings */
> @@ -727,35 +742,6 @@ static inline void setup_vm_final(void)
>  }
>  #endif /* CONFIG_MMU */
>
> -#ifdef CONFIG_STRICT_KERNEL_RWX
> -void __init protect_kernel_text_data(void)
> -{
> -	unsigned long text_start = (unsigned long)_start;
> -	unsigned long init_text_start = (unsigned long)__init_text_begin;
> -	unsigned long init_data_start = (unsigned long)__init_data_begin;
> -	unsigned long rodata_start = (unsigned long)__start_rodata;
> -	unsigned long data_start = (unsigned long)_data;
> -	unsigned long max_low = (unsigned long)(__va(PFN_PHYS(max_low_pfn)));
> -
> -	set_memory_ro(text_start, (init_text_start - text_start) >> PAGE_SHIFT);
> -	set_memory_ro(init_text_start, (init_data_start - init_text_start) >> PAGE_SHIFT);
> -	set_memory_nx(init_data_start, (rodata_start - init_data_start) >> PAGE_SHIFT);
> -	/* rodata section is marked readonly in mark_rodata_ro */
> -	set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
> -	set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT);
> -}
> -
> -void mark_rodata_ro(void)
> -{
> -	unsigned long rodata_start = (unsigned long)__start_rodata;
> -	unsigned long data_start = (unsigned long)_data;
> -
> -	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
> -
> -	debug_checkwx();
> -}
> -#endif
> -
>  #ifdef CONFIG_KEXEC_CORE
>  /*
>   * reserve_crashkernel() - reserves memory for crash kernel
> --
> 2.30.2
>