From: Anup Patel
Date: Tue, 20 Apr 2021 09:48:26 +0530
Subject: Re: [PATCH] riscv: Fix 32b kernel caused by 64b kernel mapping moving
 outside linear mapping
To: Alexandre Ghiti <alex@ghiti.fr>
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Arnd Bergmann, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
 linux-doc@vger.kernel.org, linux-riscv, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-arch, Linux Memory Management List
References: <20210417172159.32085-1-alex@ghiti.fr>
In-Reply-To: <20210417172159.32085-1-alex@ghiti.fr>

On Sat, Apr 17, 2021 at 10:52 PM Alexandre Ghiti <alex@ghiti.fr> wrote:
>
> Fix multiple leftovers when moving the kernel mapping outside the linear
> mapping for 64b kernel that left the 32b kernel unusable.
>
> Fixes: 4b67f48da707 ("riscv: Move kernel mapping outside of linear mapping")
> Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>

Quite a few #ifdef, but I don't see any better way at the moment. Maybe
we can clean this up later.

Otherwise looks good to me.

Reviewed-by: Anup Patel

Regards,
Anup

> ---
>  arch/riscv/include/asm/page.h    |  9 +++++++++
>  arch/riscv/include/asm/pgtable.h | 16 ++++++++++++----
>  arch/riscv/mm/init.c             | 25 ++++++++++++++++++++++++-
>  3 files changed, 45 insertions(+), 5 deletions(-)
>
> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
> index 22cfb2be60dc..f64b61296c0c 100644
> --- a/arch/riscv/include/asm/page.h
> +++ b/arch/riscv/include/asm/page.h
> @@ -90,15 +90,20 @@ typedef struct page *pgtable_t;
>
>  #ifdef CONFIG_MMU
>  extern unsigned long va_pa_offset;
> +#ifdef CONFIG_64BIT
>  extern unsigned long va_kernel_pa_offset;
> +#endif
>  extern unsigned long pfn_base;
>  #define ARCH_PFN_OFFSET		(pfn_base)
>  #else
>  #define va_pa_offset		0
> +#ifdef CONFIG_64BIT
>  #define va_kernel_pa_offset	0
> +#endif
>  #define ARCH_PFN_OFFSET	(PAGE_OFFSET >> PAGE_SHIFT)
>  #endif /* CONFIG_MMU */
>
> +#ifdef CONFIG_64BIT
>  extern unsigned long kernel_virt_addr;
>
>  #define linear_mapping_pa_to_va(x)	((void *)((unsigned long)(x) + va_pa_offset))
> @@ -112,6 +117,10 @@ extern unsigned long kernel_virt_addr;
>  	(_x < kernel_virt_addr) ?					\
>  		linear_mapping_va_to_pa(_x) : kernel_mapping_va_to_pa(_x);	\
>  })
> +#else
> +#define __pa_to_va_nodebug(x)	((void *)((unsigned long) (x) + va_pa_offset))
> +#define __va_to_pa_nodebug(x)	((unsigned long)(x) - va_pa_offset)
> +#endif
>
>  #ifdef CONFIG_DEBUG_VIRTUAL
>  extern phys_addr_t __virt_to_phys(unsigned long x);
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 80e63a93e903..5afda75cc2c3 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -16,19 +16,27 @@
>  #else
>
>  #define ADDRESS_SPACE_END	(UL(-1))
> -/*
> - * Leave 2GB for kernel and BPF at the end of the address space
> - */
> +
> +#ifdef CONFIG_64BIT
> +/* Leave 2GB for kernel and BPF at the end of the address space */
>  #define KERNEL_LINK_ADDR	(ADDRESS_SPACE_END - SZ_2G + 1)
> +#else
> +#define KERNEL_LINK_ADDR	PAGE_OFFSET
> +#endif
>
>  #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
>  #define VMALLOC_END      (PAGE_OFFSET - 1)
>  #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
>
> -/* KASLR should leave at least 128MB for BPF after the kernel */
>  #define BPF_JIT_REGION_SIZE	(SZ_128M)
> +#ifdef CONFIG_64BIT
> +/* KASLR should leave at least 128MB for BPF after the kernel */
>  #define BPF_JIT_REGION_START	PFN_ALIGN((unsigned long)&_end)
>  #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
> +#else
> +#define BPF_JIT_REGION_START	(PAGE_OFFSET - BPF_JIT_REGION_SIZE)
> +#define BPF_JIT_REGION_END	(VMALLOC_END)
> +#endif
>
>  /* Modules always live before the kernel */
>  #ifdef CONFIG_64BIT
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index 093f3a96ecfc..dc9b988e0778 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -91,8 +91,10 @@ static void print_vm_layout(void)
>  		  (unsigned long)VMALLOC_END);
>  	print_mlm("lowmem", (unsigned long)PAGE_OFFSET,
>  		  (unsigned long)high_memory);
> +#ifdef CONFIG_64BIT
>  	print_mlm("kernel", (unsigned long)KERNEL_LINK_ADDR,
>  		  (unsigned long)ADDRESS_SPACE_END);
> +#endif
>  }
>  #else
>  static void print_vm_layout(void) { }
> @@ -165,9 +167,11 @@ static struct pt_alloc_ops pt_ops;
>  /* Offset between linear mapping virtual address and kernel load address */
>  unsigned long va_pa_offset;
>  EXPORT_SYMBOL(va_pa_offset);
> +#ifdef CONFIG_64BIT
>  /* Offset between kernel mapping virtual address and kernel load address */
>  unsigned long va_kernel_pa_offset;
>  EXPORT_SYMBOL(va_kernel_pa_offset);
> +#endif
>  unsigned long pfn_base;
>  EXPORT_SYMBOL(pfn_base);
>
> @@ -410,7 +414,9 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>  	load_sz = (uintptr_t)(&_end) - load_pa;
>
>  	va_pa_offset = PAGE_OFFSET - load_pa;
> +#ifdef CONFIG_64BIT
>  	va_kernel_pa_offset = kernel_virt_addr - load_pa;
> +#endif
>
>  	pfn_base = PFN_DOWN(load_pa);
>
> @@ -469,12 +475,16 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>  			   pa + PMD_SIZE, PMD_SIZE, PAGE_KERNEL);
>  	dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PMD_SIZE - 1));
>  #else /* CONFIG_BUILTIN_DTB */
> +#ifdef CONFIG_64BIT
>  	/*
>  	 * __va can't be used since it would return a linear mapping address
>  	 * whereas dtb_early_va will be used before setup_vm_final installs
>  	 * the linear mapping.
>  	 */
>  	dtb_early_va = kernel_mapping_pa_to_va(dtb_pa);
> +#else
> +	dtb_early_va = __va(dtb_pa);
> +#endif /* CONFIG_64BIT */
>  #endif /* CONFIG_BUILTIN_DTB */
>  #else
>  #ifndef CONFIG_BUILTIN_DTB
> @@ -486,7 +496,11 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>  			   pa + PGDIR_SIZE, PGDIR_SIZE, PAGE_KERNEL);
>  	dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PGDIR_SIZE - 1));
>  #else /* CONFIG_BUILTIN_DTB */
> +#ifdef CONFIG_64BIT
>  	dtb_early_va = kernel_mapping_pa_to_va(dtb_pa);
> +#else
> +	dtb_early_va = __va(dtb_pa);
> +#endif /* CONFIG_64BIT */
>  #endif /* CONFIG_BUILTIN_DTB */
>  #endif
>  	dtb_early_pa = dtb_pa;
> @@ -571,12 +585,21 @@ static void __init setup_vm_final(void)
>  		for (pa = start; pa < end; pa += map_size) {
>  			va = (uintptr_t)__va(pa);
>  			create_pgd_mapping(swapper_pg_dir, va, pa,
> -					   map_size, PAGE_KERNEL);
> +					   map_size,
> +#ifdef CONFIG_64BIT
> +					   PAGE_KERNEL
> +#else
> +					   PAGE_KERNEL_EXEC
> +#endif
> +					   );
> +
>  		}
>  	}
>
> +#ifdef CONFIG_64BIT
>  	/* Map the kernel */
>  	create_kernel_page_table(swapper_pg_dir, PMD_SIZE);
> +#endif
>
>  	/* Clear fixmap PTE and PMD mappings */
>  	clear_fixmap(FIX_PTE);
> --
> 2.20.1
>