Date: Tue, 21 Jul 2020 12:05:40 -0700 (PDT)
Subject: Re: [PATCH v5 1/4] riscv: Move kernel mapping to vmalloc zone
In-Reply-To: <7cb2285e-68ba-6827-5e61-e33a4b65ac03@ghiti.fr>
From: Palmer Dabbelt
To: alex@ghiti.fr
Cc: mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
    Paul Walmsley, aou@eecs.berkeley.edu, Anup Patel, Atish Patra,
    zong.li@sifive.com, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-mm@kvack.org

On Tue, 21 Jul 2020 11:36:10 PDT (-0700), alex@ghiti.fr wrote:
> Let's try to make progress here: I add linux-mm in CC to get feedback on
> this patch as it blocks sv48 support too.

Sorry for being slow here.  I haven't replied because I hadn't really
fleshed out the design yet, but just so everyone's on the same page, my
problems with this are:

* We waste vmalloc space on 32-bit systems, where there isn't a lot of it.
* On 64-bit systems the VA space around the kernel is precious because it's
  the only place we can place text (modules, BPF, whatever).  If we start
  putting the kernel in the vmalloc space then we either have to
  pre-allocate a bunch of space around it (essentially making it a fixed
  mapping anyway) or it becomes likely that we won't be able to find space
  for modules as they're loaded into running systems.
* Relying on a relocatable kernel for sv48 support introduces a fairly
  large performance hit.

Roughly, my proposal would be to:

* Leave the 32-bit memory map alone.  On 32-bit systems we can load modules
  anywhere and we only have one VA width, so we're not really solving any
  problems with these changes.
* Statically allocate a 2GiB portion of the VA space for all our text, as
  its own region.  We'd link/relocate the kernel there instead of around
  PAGE_OFFSET, which would decouple the kernel from the physical memory
  layout of the system.  This would have the side effect of sorting out a
  bunch of bootloader headaches that we currently have.
* Sort out how to maintain a linear map as the canonical hole moves around
  between the VA widths without adding a bunch of overhead to virt2phys and
  friends.  This is probably going to be the trickiest part, but I think if
  we just change the page table code to essentially lie about VAs when an
  sv39 system runs an sv48+sv39 kernel we could make it work -- there'd be
  some logical complexity involved, but it would remain fast.

This doesn't solve the problem of virtually relocatable kernels, but it
does let us decouple that from the sv48 stuff.  It also lets us stop
relying on a fixed physical address the kernel is loaded at, which is
another thing I don't like.

I know this may be a more complicated approach, but there aren't any sv48
systems around right now, so I just don't see the rush to support them,
particularly when there's a cost to what already exists (for those who
haven't been watching: so far, all the sv48 patch sets have imposed a
significant performance penalty on all systems).

>
> Alex
>
> On 7/9/20 7:11 AM, Alex Ghiti wrote:
>> Hi Palmer,
>>
>> On 7/9/20 1:05 AM, Palmer Dabbelt wrote:
>>> On Sun, 07 Jun 2020 00:59:46 PDT (-0700), alex@ghiti.fr wrote:
>>>> This is a preparatory patch for the relocatable kernel.
>>>>
>>>> The kernel used to be linked at the PAGE_OFFSET address and loaded
>>>> physically at the beginning of the main memory.  Therefore, we could
>>>> use the linear mapping for the kernel mapping.
>>>>
>>>> But the relocated kernel base address will be different from
>>>> PAGE_OFFSET and, since in the linear mapping two different virtual
>>>> addresses cannot point to the same physical address, the kernel
>>>> mapping needs to lie outside the linear mapping.
>>>
>>> I know it's been a while, but I keep opening this up to review it and
>>> just can't get over how ugly it is to put the kernel's linear map in
>>> the vmalloc region.
>>>
>>> I guess I don't understand why this is necessary at all.
>>> Specifically: why can't we just relocate the kernel within the linear
>>> map?  That would let the bootloader put the kernel wherever it wants,
>>> modulo the physical memory size we support.  We'd need to handle the
>>> regions that are coupled to the kernel's execution address, but we
>>> could just put them in an explicit memory region, which is what we
>>> should probably be doing anyway.
>>
>> Virtual relocation in the linear mapping requires moving the kernel
>> physically too.  Zong implemented this physical move in his KASLR RFC
>> patchset, which is cumbersome since finding an available physical spot
>> is harder than just selecting a virtual range in the vmalloc range.
>>
>> In addition, having the kernel mapping in the linear mapping prevents
>> the use of hugepages for the linear mapping, resulting in performance
>> loss (at least for the GB that encompasses the kernel).
>>
>> Why do you find this "ugly"?  The vmalloc region is just a bunch of
>> available virtual addresses for whatever purpose we want, and as noted
>> by Zong, arm64 uses the same scheme.
>>
>>>
>>>> In addition, because modules and BPF must be close to the kernel
>>>> (inside a +-2GB window), the kernel is placed at the end of the
>>>> vmalloc zone minus 2GB, which leaves room for modules and BPF.  The
>>>> kernel could not be placed at the beginning of the vmalloc zone since
>>>> other vmalloc allocations from the kernel could get all of the +-2GB
>>>> window around the kernel, which would prevent new modules and BPF
>>>> programs from being loaded.
>>>
>>> Well, that's not enough to make sure this doesn't happen -- it's just
>>> enough to make sure it doesn't happen very quickly.  That's the same
>>> boat we're already in, though, so it's not like it's worse.
>>
>> Indeed, that's not worse; I haven't found a way to reserve vmalloc area
>> without actually allocating it.
>>
>>>
>>>> Signed-off-by: Alexandre Ghiti
>>>> Reviewed-by: Zong Li
>>>> ---
>>>>  arch/riscv/boot/loader.lds.S     |  3 +-
>>>>  arch/riscv/include/asm/page.h    | 10 +++++-
>>>>  arch/riscv/include/asm/pgtable.h | 38 ++++++++++++++-------
>>>>  arch/riscv/kernel/head.S         |  3 +-
>>>>  arch/riscv/kernel/module.c       |  4 +--
>>>>  arch/riscv/kernel/vmlinux.lds.S  |  3 +-
>>>>  arch/riscv/mm/init.c             | 58 +++++++++++++++++++++++++-------
>>>>  arch/riscv/mm/physaddr.c         |  2 +-
>>>>  8 files changed, 88 insertions(+), 33 deletions(-)
>>>>
>>>> diff --git a/arch/riscv/boot/loader.lds.S b/arch/riscv/boot/loader.lds.S
>>>> index 47a5003c2e28..62d94696a19c 100644
>>>> --- a/arch/riscv/boot/loader.lds.S
>>>> +++ b/arch/riscv/boot/loader.lds.S
>>>> @@ -1,13 +1,14 @@
>>>>  /* SPDX-License-Identifier: GPL-2.0 */
>>>>
>>>>  #include
>>>> +#include
>>>>
>>>>  OUTPUT_ARCH(riscv)
>>>>  ENTRY(_start)
>>>>
>>>>  SECTIONS
>>>>  {
>>>> -    . = PAGE_OFFSET;
>>>> +    . = KERNEL_LINK_ADDR;
>>>>
>>>>      .payload : {
>>>>          *(.payload)
>>>> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
>>>> index 2d50f76efe48..48bb09b6a9b7 100644
>>>> --- a/arch/riscv/include/asm/page.h
>>>> +++ b/arch/riscv/include/asm/page.h
>>>> @@ -90,18 +90,26 @@ typedef struct page *pgtable_t;
>>>>
>>>>  #ifdef CONFIG_MMU
>>>>  extern unsigned long va_pa_offset;
>>>> +extern unsigned long va_kernel_pa_offset;
>>>>  extern unsigned long pfn_base;
>>>>  #define ARCH_PFN_OFFSET        (pfn_base)
>>>>  #else
>>>>  #define va_pa_offset        0
>>>> +#define va_kernel_pa_offset    0
>>>>  #define ARCH_PFN_OFFSET        (PAGE_OFFSET >> PAGE_SHIFT)
>>>>  #endif /* CONFIG_MMU */
>>>>
>>>>  extern unsigned long max_low_pfn;
>>>>  extern unsigned long min_low_pfn;
>>>> +extern unsigned long kernel_virt_addr;
>>>>
>>>>  #define __pa_to_va_nodebug(x)    ((void *)((unsigned long) (x) + va_pa_offset))
>>>> -#define __va_to_pa_nodebug(x)    ((unsigned long)(x) - va_pa_offset)
>>>> +#define linear_mapping_va_to_pa(x)    ((unsigned long)(x) - va_pa_offset)
>>>> +#define kernel_mapping_va_to_pa(x)    \
>>>> +    ((unsigned long)(x) - va_kernel_pa_offset)
>>>> +#define __va_to_pa_nodebug(x)        \
>>>> +    (((x) >= PAGE_OFFSET) ?        \
>>>> +        linear_mapping_va_to_pa(x) : kernel_mapping_va_to_pa(x))
>>>>
>>>>  #ifdef CONFIG_DEBUG_VIRTUAL
>>>>  extern phys_addr_t __virt_to_phys(unsigned long x);
>>>> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>>>> index 35b60035b6b0..94ef3b49dfb6 100644
>>>> --- a/arch/riscv/include/asm/pgtable.h
>>>> +++ b/arch/riscv/include/asm/pgtable.h
>>>> @@ -11,23 +11,29 @@
>>>>
>>>>  #include
>>>>
>>>> -#ifndef __ASSEMBLY__
>>>> -
>>>> -/* Page Upper Directory not used in RISC-V */
>>>> -#include
>>>> -#include
>>>> -#include
>>>> -#include
>>>> -
>>>> -#ifdef CONFIG_MMU
>>>> +#ifndef CONFIG_MMU
>>>> +#define KERNEL_VIRT_ADDR    PAGE_OFFSET
>>>> +#define KERNEL_LINK_ADDR    PAGE_OFFSET
>>>> +#else
>>>> +/*
>>>> + * Leave 2GB for modules and BPF that must lie within a 2GB range
>>>> + * around the kernel.
>>>> + */
>>>> +#define KERNEL_VIRT_ADDR    (VMALLOC_END - SZ_2G + 1)
>>>> +#define KERNEL_LINK_ADDR    KERNEL_VIRT_ADDR
>>>
>>> At a bare minimum this is going to make a mess of the 32-bit port, as
>>> non-relocatable kernels are now going to get linked at 1GiB, which is
>>> where user code is supposed to live.  That's an easy fix, though, as
>>> the 32-bit stuff doesn't need any module address restrictions.
>>
>> Indeed, I will take a look at that.
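
[Editor's note: the placement arithmetic quoted above -- the kernel linked at
`VMALLOC_END - SZ_2G + 1` so modules and BPF stay within the ±2GiB range of an
`auipc`+`jalr` pair -- can be sanity-checked outside the kernel. The sketch
below is user-space C, not kernel code; the sv39-style constants are
illustrative assumptions mirroring the quoted macros, and `reachable()` is a
hypothetical helper modelling the ±2GiB reach, not a kernel function.]

```c
#include <assert.h>
#include <stdint.h>

/* Assumed sv39-style layout constants (illustrative, not the kernel's). */
#define PAGE_OFFSET    0xffffffe000000000UL
#define KERN_VIRT_SIZE 0x0000002000000000UL
#define VMALLOC_SIZE   (KERN_VIRT_SIZE >> 1)
#define VMALLOC_END    (PAGE_OFFSET - 1)
#define VMALLOC_START  (PAGE_OFFSET - VMALLOC_SIZE)
#define SZ_2G          0x80000000UL

/* Mirrors the quoted macro: link the kernel 2GiB below the end of vmalloc,
 * so the last byte of the 2GiB text window is VMALLOC_END itself. */
#define KERNEL_VIRT_ADDR (VMALLOC_END - SZ_2G + 1)

/* A module is reachable from kernel text via auipc+jalr only if it sits
 * within a signed 32-bit (±2GiB) offset of the call site. */
static int reachable(uint64_t text, uint64_t mod)
{
    int64_t d = (int64_t)(mod - text);
    return d >= -(int64_t)SZ_2G && d < (int64_t)SZ_2G;
}
```

Under these assumed constants, everything from `KERNEL_VIRT_ADDR` up to
`VMALLOC_END` is reachable, while the start of the vmalloc zone is not --
which is exactly why the patch cannot place the kernel at `VMALLOC_START`.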
>>
>>>
>>>>  #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
>>>>  #define VMALLOC_END      (PAGE_OFFSET - 1)
>>>>  #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
>>>>
>>>>  #define BPF_JIT_REGION_SIZE    (SZ_128M)
>>>> -#define BPF_JIT_REGION_START    (PAGE_OFFSET - BPF_JIT_REGION_SIZE)
>>>> -#define BPF_JIT_REGION_END    (VMALLOC_END)
>>>> +#define BPF_JIT_REGION_START    PFN_ALIGN((unsigned long)&_end)
>>>> +#define BPF_JIT_REGION_END    (BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
>>>> +
>>>> +#ifdef CONFIG_64BIT
>>>> +#define VMALLOC_MODULE_START    BPF_JIT_REGION_END
>>>> +#define VMALLOC_MODULE_END    (((unsigned long)&_start & PAGE_MASK) + SZ_2G)
>>>> +#endif
>>>>
>>>>  /*
>>>>   * Roughly size the vmemmap space to be large enough to fit enough
>>>> @@ -57,9 +63,16 @@
>>>>  #define FIXADDR_SIZE     PGDIR_SIZE
>>>>  #endif
>>>>  #define FIXADDR_START    (FIXADDR_TOP - FIXADDR_SIZE)
>>>> -
>>>>  #endif
>>>>
>>>> +#ifndef __ASSEMBLY__
>>>> +
>>>> +/* Page Upper Directory not used in RISC-V */
>>>> +#include
>>>> +#include
>>>> +#include
>>>> +#include
>>>> +
>>>>  #ifdef CONFIG_64BIT
>>>>  #include
>>>>  #else
>>>> @@ -483,6 +496,7 @@ static inline void __kernel_map_pages(struct page *page, int numpages, int enabl
>>>>
>>>>  #define kern_addr_valid(addr)   (1) /* FIXME */
>>>>
>>>> +extern char _start[];
>>>>  extern void *dtb_early_va;
>>>>  void setup_bootmem(void);
>>>>  void paging_init(void);
>>>> diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
>>>> index 98a406474e7d..8f5bb7731327 100644
>>>> --- a/arch/riscv/kernel/head.S
>>>> +++ b/arch/riscv/kernel/head.S
>>>> @@ -49,7 +49,8 @@ ENTRY(_start)
>>>>  #ifdef CONFIG_MMU
>>>>  relocate:
>>>>      /* Relocate return address */
>>>> -    li a1, PAGE_OFFSET
>>>> +    la a1, kernel_virt_addr
>>>> +    REG_L a1, 0(a1)
>>>>      la a2, _start
>>>>      sub a1, a1, a2
>>>>      add ra, ra, a1
>>>> diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
>>>> index 8bbe5dbe1341..1a8fbe05accf 100644
>>>> --- a/arch/riscv/kernel/module.c
>>>> +++ b/arch/riscv/kernel/module.c
>>>> @@ -392,12 +392,10 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
>>>>  }
>>>>
>>>>  #if defined(CONFIG_MMU) && defined(CONFIG_64BIT)
>>>> -#define VMALLOC_MODULE_START \
>>>> -     max(PFN_ALIGN((unsigned long)&_end - SZ_2G), VMALLOC_START)
>>>>  void *module_alloc(unsigned long size)
>>>>  {
>>>>      return __vmalloc_node_range(size, 1, VMALLOC_MODULE_START,
>>>> -                    VMALLOC_END, GFP_KERNEL,
>>>> +                    VMALLOC_MODULE_END, GFP_KERNEL,
>>>>                      PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
>>>>                      __builtin_return_address(0));
>>>>  }
>>>> diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
>>>> index 0339b6bbe11a..a9abde62909f 100644
>>>> --- a/arch/riscv/kernel/vmlinux.lds.S
>>>> +++ b/arch/riscv/kernel/vmlinux.lds.S
>>>> @@ -4,7 +4,8 @@
>>>>   * Copyright (C) 2017 SiFive
>>>>   */
>>>>
>>>> -#define LOAD_OFFSET PAGE_OFFSET
>>>> +#include
>>>> +#define LOAD_OFFSET KERNEL_LINK_ADDR
>>>>  #include
>>>>  #include
>>>>  #include
>>>> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
>>>> index 736de6c8739f..71da78914645 100644
>>>> --- a/arch/riscv/mm/init.c
>>>> +++ b/arch/riscv/mm/init.c
>>>> @@ -22,6 +22,9 @@
>>>>
>>>>  #include "../kernel/head.h"
>>>>
>>>> +unsigned long kernel_virt_addr = KERNEL_VIRT_ADDR;
>>>> +EXPORT_SYMBOL(kernel_virt_addr);
>>>> +
>>>>  unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
>>>>                              __page_aligned_bss;
>>>>  EXPORT_SYMBOL(empty_zero_page);
>>>> @@ -178,8 +181,12 @@ void __init setup_bootmem(void)
>>>>  }
>>>>
>>>>  #ifdef CONFIG_MMU
>>>> +/* Offset between linear mapping virtual address and kernel load address */
>>>>  unsigned long va_pa_offset;
>>>>  EXPORT_SYMBOL(va_pa_offset);
>>>> +/* Offset between kernel mapping virtual address and kernel load address */
>>>> +unsigned long va_kernel_pa_offset;
>>>> +EXPORT_SYMBOL(va_kernel_pa_offset);
>>>>  unsigned long pfn_base;
>>>>  EXPORT_SYMBOL(pfn_base);
>>>>
>>>> @@ -271,7 +278,7 @@ static phys_addr_t __init alloc_pmd(uintptr_t va)
>>>>      if (mmu_enabled)
>>>>          return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
>>>>
>>>> -    pmd_num = (va - PAGE_OFFSET) >> PGDIR_SHIFT;
>>>> +    pmd_num = (va - kernel_virt_addr) >> PGDIR_SHIFT;
>>>>      BUG_ON(pmd_num >= NUM_EARLY_PMDS);
>>>>      return (uintptr_t)&early_pmd[pmd_num * PTRS_PER_PMD];
>>>>  }
>>>> @@ -372,14 +379,30 @@ static uintptr_t __init best_map_size(phys_addr_t base, phys_addr_t size)
>>>>  #error "setup_vm() is called from head.S before relocate so it should not use absolute addressing."
>>>>  #endif
>>>>
>>>> +static uintptr_t load_pa, load_sz;
>>>> +
>>>> +static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size)
>>>> +{
>>>> +    uintptr_t va, end_va;
>>>> +
>>>> +    end_va = kernel_virt_addr + load_sz;
>>>> +    for (va = kernel_virt_addr; va < end_va; va += map_size)
>>>> +        create_pgd_mapping(pgdir, va,
>>>> +                   load_pa + (va - kernel_virt_addr),
>>>> +                   map_size, PAGE_KERNEL_EXEC);
>>>> +}
>>>> +
>>>>  asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>>>>  {
>>>>      uintptr_t va, end_va;
>>>> -    uintptr_t load_pa = (uintptr_t)(&_start);
>>>> -    uintptr_t load_sz = (uintptr_t)(&_end) - load_pa;
>>>>      uintptr_t map_size = best_map_size(load_pa, MAX_EARLY_MAPPING_SIZE);
>>>>
>>>> +    load_pa = (uintptr_t)(&_start);
>>>> +    load_sz = (uintptr_t)(&_end) - load_pa;
>>>> +
>>>>      va_pa_offset = PAGE_OFFSET - load_pa;
>>>> +    va_kernel_pa_offset = kernel_virt_addr - load_pa;
>>>> +
>>>>      pfn_base = PFN_DOWN(load_pa);
>>>>
>>>>      /*
>>>> @@ -402,26 +425,22 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>>>>      create_pmd_mapping(fixmap_pmd, FIXADDR_START,
>>>>                 (uintptr_t)fixmap_pte, PMD_SIZE, PAGE_TABLE);
>>>>      /* Setup trampoline PGD and PMD */
>>>> -    create_pgd_mapping(trampoline_pg_dir, PAGE_OFFSET,
>>>> +    create_pgd_mapping(trampoline_pg_dir, kernel_virt_addr,
>>>>                 (uintptr_t)trampoline_pmd, PGDIR_SIZE, PAGE_TABLE);
>>>> -    create_pmd_mapping(trampoline_pmd, PAGE_OFFSET,
>>>> +    create_pmd_mapping(trampoline_pmd, kernel_virt_addr,
>>>>                 load_pa, PMD_SIZE, PAGE_KERNEL_EXEC);
>>>>  #else
>>>>      /* Setup trampoline PGD */
>>>> -    create_pgd_mapping(trampoline_pg_dir, PAGE_OFFSET,
>>>> +    create_pgd_mapping(trampoline_pg_dir, kernel_virt_addr,
>>>>                 load_pa, PGDIR_SIZE, PAGE_KERNEL_EXEC);
>>>>  #endif
>>>>
>>>>      /*
>>>> -     * Setup early PGD covering entire kernel which will allows
>>>> +     * Setup early PGD covering entire kernel which will allow
>>>>       * us to reach paging_init(). We map all memory banks later
>>>>       * in setup_vm_final() below.
>>>>       */
>>>> -    end_va = PAGE_OFFSET + load_sz;
>>>> -    for (va = PAGE_OFFSET; va < end_va; va += map_size)
>>>> -        create_pgd_mapping(early_pg_dir, va,
>>>> -                   load_pa + (va - PAGE_OFFSET),
>>>> -                   map_size, PAGE_KERNEL_EXEC);
>>>> +    create_kernel_page_table(early_pg_dir, map_size);
>>>>
>>>>      /* Create fixed mapping for early FDT parsing */
>>>>      end_va = __fix_to_virt(FIX_FDT) + FIX_FDT_SIZE;
>>>> @@ -441,6 +460,7 @@ static void __init setup_vm_final(void)
>>>>      uintptr_t va, map_size;
>>>>      phys_addr_t pa, start, end;
>>>>      struct memblock_region *reg;
>>>> +    static struct vm_struct vm_kernel = { 0 };
>>>>
>>>>      /* Set mmu_enabled flag */
>>>>      mmu_enabled = true;
>>>> @@ -467,10 +487,22 @@ static void __init setup_vm_final(void)
>>>>          for (pa = start; pa < end; pa += map_size) {
>>>>              va = (uintptr_t)__va(pa);
>>>>              create_pgd_mapping(swapper_pg_dir, va, pa,
>>>> -                       map_size, PAGE_KERNEL_EXEC);
>>>> +                       map_size, PAGE_KERNEL);
>>>>          }
>>>>      }
>>>>
>>>> +    /* Map the kernel */
>>>> +    create_kernel_page_table(swapper_pg_dir, PMD_SIZE);
>>>> +
>>>> +    /* Reserve the vmalloc area occupied by the kernel */
>>>> +    vm_kernel.addr = (void *)kernel_virt_addr;
>>>> +    vm_kernel.phys_addr = load_pa;
>>>> +    vm_kernel.size = (load_sz + PMD_SIZE - 1) & ~(PMD_SIZE - 1);
>>>> +    vm_kernel.flags = VM_MAP | VM_NO_GUARD;
>>>> +    vm_kernel.caller = __builtin_return_address(0);
>>>> +
>>>> +    vm_area_add_early(&vm_kernel);
>>>> +
>>>>      /* Clear fixmap PTE and PMD mappings */
>>>>      clear_fixmap(FIX_PTE);
>>>>      clear_fixmap(FIX_PMD);
>>>> diff --git a/arch/riscv/mm/physaddr.c b/arch/riscv/mm/physaddr.c
>>>> index e8e4dcd39fed..35703d5ef5fd 100644
>>>> --- a/arch/riscv/mm/physaddr.c
>>>> +++ b/arch/riscv/mm/physaddr.c
>>>> @@ -23,7 +23,7 @@ EXPORT_SYMBOL(__virt_to_phys);
>>>>
>>>>  phys_addr_t __phys_addr_symbol(unsigned long x)
>>>>  {
>>>> -    unsigned long kernel_start = (unsigned long)PAGE_OFFSET;
>>>> +    unsigned long kernel_start = (unsigned long)kernel_virt_addr;
>>>>      unsigned long kernel_end = (unsigned long)_end;
>>>>
>>>>      /*
>>
>> Alex
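
[Editor's note: the split va-to-pa translation the patch introduces in the
page.h hunk -- a linear-map offset for addresses at or above PAGE_OFFSET and a
separate kernel-mapping offset below it -- can be modelled in a few lines of
user-space C. This is a hedged sketch, not the kernel macro itself: the
PAGE_OFFSET, KERNEL_VIRT_ADDR, and LOAD_PA values are illustrative
assumptions, and `va_to_pa()` is a stand-in for `__va_to_pa_nodebug()`.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values; in the patch the offsets are computed at boot in
 * setup_vm() from the kernel's actual load address. */
#define PAGE_OFFSET      0xffffffe000000000UL /* assumed start of linear map */
#define KERNEL_VIRT_ADDR 0xffffffdf80000000UL /* assumed kernel link address */
#define LOAD_PA          0x80200000UL         /* assumed physical load address */

static const uint64_t va_pa_offset        = PAGE_OFFSET - LOAD_PA;
static const uint64_t va_kernel_pa_offset = KERNEL_VIRT_ADDR - LOAD_PA;

/* Models __va_to_pa_nodebug(): linear-map VAs (>= PAGE_OFFSET) use the
 * linear offset; anything below falls back to the kernel-mapping offset. */
static uint64_t va_to_pa(uint64_t va)
{
    return va >= PAGE_OFFSET ? va - va_pa_offset
                             : va - va_kernel_pa_offset;
}
```

With these assumed values, both `PAGE_OFFSET` (via the linear map) and
`KERNEL_VIRT_ADDR` (via the kernel mapping) translate to the same physical
load address, which is precisely the aliasing that forces the kernel mapping
out of the linear map.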
YWRlci5sZHMuUwo+Pj4+ICsrKyBiL2FyY2gvcmlzY3YvYm9vdC9sb2FkZXIubGRzLlMKPj4+PiBA QCAtMSwxMyArMSwxNCBAQAo+Pj4+IMKgLyogU1BEWC1MaWNlbnNlLUlkZW50aWZpZXI6IEdQTC0y LjAgKi8KPj4+Pgo+Pj4+IMKgI2luY2x1ZGUgPGFzbS9wYWdlLmg+Cj4+Pj4gKyNpbmNsdWRlIDxh c20vcGd0YWJsZS5oPgo+Pj4+Cj4+Pj4gwqBPVVRQVVRfQVJDSChyaXNjdikKPj4+PiDCoEVOVFJZ KF9zdGFydCkKPj4+Pgo+Pj4+IMKgU0VDVElPTlMKPj4+PiDCoHsKPj4+PiAtwqDCoMKgIC4gPSBQ QUdFX09GRlNFVDsKPj4+PiArwqDCoMKgIC4gPSBLRVJORUxfTElOS19BRERSOwo+Pj4+Cj4+Pj4g wqDCoMKgwqAgLnBheWxvYWQgOiB7Cj4+Pj4gwqDCoMKgwqDCoMKgwqDCoCAqKC5wYXlsb2FkKQo+ Pj4+IGRpZmYgLS1naXQgYS9hcmNoL3Jpc2N2L2luY2x1ZGUvYXNtL3BhZ2UuaAo+Pj4+IGIvYXJj aC9yaXNjdi9pbmNsdWRlL2FzbS9wYWdlLmgKPj4+PiBpbmRleCAyZDUwZjc2ZWZlNDguLjQ4YmIw OWI2YTliNyAxMDA2NDQKPj4+PiAtLS0gYS9hcmNoL3Jpc2N2L2luY2x1ZGUvYXNtL3BhZ2UuaAo+ Pj4+ICsrKyBiL2FyY2gvcmlzY3YvaW5jbHVkZS9hc20vcGFnZS5oCj4+Pj4gQEAgLTkwLDE4ICs5 MCwyNiBAQCB0eXBlZGVmIHN0cnVjdCBwYWdlICpwZ3RhYmxlX3Q7Cj4+Pj4KPj4+PiDCoCNpZmRl ZiBDT05GSUdfTU1VCj4+Pj4gwqBleHRlcm4gdW5zaWduZWQgbG9uZyB2YV9wYV9vZmZzZXQ7Cj4+ Pj4gK2V4dGVybiB1bnNpZ25lZCBsb25nIHZhX2tlcm5lbF9wYV9vZmZzZXQ7Cj4+Pj4gwqBleHRl cm4gdW5zaWduZWQgbG9uZyBwZm5fYmFzZTsKPj4+PiDCoCNkZWZpbmUgQVJDSF9QRk5fT0ZGU0VU wqDCoMKgwqDCoMKgwqAgKHBmbl9iYXNlKQo+Pj4+IMKgI2Vsc2UKPj4+PiDCoCNkZWZpbmUgdmFf cGFfb2Zmc2V0wqDCoMKgwqDCoMKgwqAgMAo+Pj4+ICsjZGVmaW5lIHZhX2tlcm5lbF9wYV9vZmZz ZXTCoMKgwqAgMAo+Pj4+IMKgI2RlZmluZSBBUkNIX1BGTl9PRkZTRVTCoMKgwqDCoMKgwqDCoCAo UEFHRV9PRkZTRVQgPj4gUEFHRV9TSElGVCkKPj4+PiDCoCNlbmRpZiAvKiBDT05GSUdfTU1VICov Cj4+Pj4KPj4+PiDCoGV4dGVybiB1bnNpZ25lZCBsb25nIG1heF9sb3dfcGZuOwo+Pj4+IMKgZXh0 ZXJuIHVuc2lnbmVkIGxvbmcgbWluX2xvd19wZm47Cj4+Pj4gK2V4dGVybiB1bnNpZ25lZCBsb25n IGtlcm5lbF92aXJ0X2FkZHI7Cj4+Pj4KPj4+PiDCoCNkZWZpbmUgX19wYV90b192YV9ub2RlYnVn KHgpwqDCoMKgICgodm9pZCAqKSgodW5zaWduZWQgbG9uZykgKHgpICsKPj4+PiB2YV9wYV9vZmZz ZXQpKQo+Pj4+IC0jZGVmaW5lIF9fdmFfdG9fcGFfbm9kZWJ1Zyh4KcKgwqDCoCAoKHVuc2lnbmVk IGxvbmcpKHgpIC0gdmFfcGFfb2Zmc2V0KQo+Pj4+ICsjZGVmaW5lIGxpbmVhcl9tYXBwaW5nX3Zh 
X3RvX3BhKHgpwqDCoMKgICgodW5zaWduZWQgbG9uZykoeCkgLQo+Pj4+IHZhX3BhX29mZnNldCkK Pj4+PiArI2RlZmluZSBrZXJuZWxfbWFwcGluZ192YV90b19wYSh4KcKgwqDCoCBcCj4+Pj4gK8Kg wqDCoCAoKHVuc2lnbmVkIGxvbmcpKHgpIC0gdmFfa2VybmVsX3BhX29mZnNldCkKPj4+PiArI2Rl ZmluZSBfX3ZhX3RvX3BhX25vZGVidWcoeCnCoMKgwqDCoMKgwqDCoCBcCj4+Pj4gK8KgwqDCoCAo KCh4KSA+PSBQQUdFX09GRlNFVCkgP8KgwqDCoMKgwqDCoMKgIFwKPj4+PiArwqDCoMKgwqDCoMKg wqAgbGluZWFyX21hcHBpbmdfdmFfdG9fcGEoeCkgOiBrZXJuZWxfbWFwcGluZ192YV90b19wYSh4 KSkKPj4+Pgo+Pj4+IMKgI2lmZGVmIENPTkZJR19ERUJVR19WSVJUVUFMCj4+Pj4gwqBleHRlcm4g cGh5c19hZGRyX3QgX192aXJ0X3RvX3BoeXModW5zaWduZWQgbG9uZyB4KTsKPj4+PiBkaWZmIC0t Z2l0IGEvYXJjaC9yaXNjdi9pbmNsdWRlL2FzbS9wZ3RhYmxlLmgKPj4+PiBiL2FyY2gvcmlzY3Yv aW5jbHVkZS9hc20vcGd0YWJsZS5oCj4+Pj4gaW5kZXggMzViNjAwMzViNmIwLi45NGVmM2I0OWRm YjYgMTAwNjQ0Cj4+Pj4gLS0tIGEvYXJjaC9yaXNjdi9pbmNsdWRlL2FzbS9wZ3RhYmxlLmgKPj4+ PiArKysgYi9hcmNoL3Jpc2N2L2luY2x1ZGUvYXNtL3BndGFibGUuaAo+Pj4+IEBAIC0xMSwyMyAr MTEsMjkgQEAKPj4+Pgo+Pj4+IMKgI2luY2x1ZGUgPGFzbS9wZ3RhYmxlLWJpdHMuaD4KPj4+Pgo+ Pj4+IC0jaWZuZGVmIF9fQVNTRU1CTFlfXwo+Pj4+IC0KPj4+PiAtLyogUGFnZSBVcHBlciBEaXJl Y3Rvcnkgbm90IHVzZWQgaW4gUklTQy1WICovCj4+Pj4gLSNpbmNsdWRlIDxhc20tZ2VuZXJpYy9w Z3RhYmxlLW5vcHVkLmg+Cj4+Pj4gLSNpbmNsdWRlIDxhc20vcGFnZS5oPgo+Pj4+IC0jaW5jbHVk ZSA8YXNtL3RsYmZsdXNoLmg+Cj4+Pj4gLSNpbmNsdWRlIDxsaW51eC9tbV90eXBlcy5oPgo+Pj4+ IC0KPj4+PiAtI2lmZGVmIENPTkZJR19NTVUKPj4+PiArI2lmbmRlZiBDT05GSUdfTU1VCj4+Pj4g KyNkZWZpbmUgS0VSTkVMX1ZJUlRfQUREUsKgwqDCoCBQQUdFX09GRlNFVAo+Pj4+ICsjZGVmaW5l IEtFUk5FTF9MSU5LX0FERFLCoMKgwqAgUEFHRV9PRkZTRVQKPj4+PiArI2Vsc2UKPj4+PiArLyoK Pj4+PiArICogTGVhdmUgMkdCIGZvciBtb2R1bGVzIGFuZCBCUEYgdGhhdCBtdXN0IGxpZSB3aXRo aW4gYSAyR0IgcmFuZ2UKPj4+PiBhcm91bmQKPj4+PiArICogdGhlIGtlcm5lbC4KPj4+PiArICov Cj4+Pj4gKyNkZWZpbmUgS0VSTkVMX1ZJUlRfQUREUsKgwqDCoCAoVk1BTExPQ19FTkQgLSBTWl8y RyArIDEpCj4+Pj4gKyNkZWZpbmUgS0VSTkVMX0xJTktfQUREUsKgwqDCoCBLRVJORUxfVklSVF9B RERSCj4+Pgo+Pj4gQXQgYSBiYXJlIG1pbmltdW0gdGhpcyBpcyBnb2luZyB0byBtYWtlIGEgbWVz 
cyBvZiB0aGUgMzItYml0IHBvcnQsIGFzCj4+PiBub24tcmVsb2NhdGFibGUga2VybmVscyBhcmUg bm93IGdvaW5nIHRvIGdldCBsaW5rZWQgYXQgMUdpQiB3aGljaCBpcwo+Pj4gd2hlcmUgdXNlcgo+ Pj4gY29kZSBpcyBzdXBwb3NlZCB0byBsaXZlLsKgIFRoYXQncyBhbiBlYXN5IGZpeCwgdGhvdWdo LCBhcyB0aGUgMzItYml0Cj4+PiBzdHVmZgo+Pj4gZG9lc24ndCBuZWVkIGFueSBtb2R1bGUgYWRk cmVzcyByZXN0cmljdGlvbnMuCj4+Cj4+IEluZGVlZCwgSSB3aWxsIHRha2UgYSBsb29rIGF0IHRo YXQuCj4+Cj4+Pgo+Pj4+IMKgI2RlZmluZSBWTUFMTE9DX1NJWkXCoMKgwqDCoCAoS0VSTl9WSVJU X1NJWkUgPj4gMSkKPj4+PiDCoCNkZWZpbmUgVk1BTExPQ19FTkTCoMKgwqDCoMKgIChQQUdFX09G RlNFVCAtIDEpCj4+Pj4gwqAjZGVmaW5lIFZNQUxMT0NfU1RBUlTCoMKgwqAgKFBBR0VfT0ZGU0VU IC0gVk1BTExPQ19TSVpFKQo+Pj4+Cj4+Pj4gwqAjZGVmaW5lIEJQRl9KSVRfUkVHSU9OX1NJWkXC oMKgwqAgKFNaXzEyOE0pCj4+Pj4gLSNkZWZpbmUgQlBGX0pJVF9SRUdJT05fU1RBUlTCoMKgwqAg KFBBR0VfT0ZGU0VUIC0gQlBGX0pJVF9SRUdJT05fU0laRSkKPj4+PiAtI2RlZmluZSBCUEZfSklU X1JFR0lPTl9FTkTCoMKgwqAgKFZNQUxMT0NfRU5EKQo+Pj4+ICsjZGVmaW5lIEJQRl9KSVRfUkVH SU9OX1NUQVJUwqDCoMKgIFBGTl9BTElHTigodW5zaWduZWQgbG9uZykmX2VuZCkKPj4+PiArI2Rl ZmluZSBCUEZfSklUX1JFR0lPTl9FTkTCoMKgwqAgKEJQRl9KSVRfUkVHSU9OX1NUQVJUICsKPj4+ PiBCUEZfSklUX1JFR0lPTl9TSVpFKQo+Pj4+ICsKPj4+PiArI2lmZGVmIENPTkZJR182NEJJVAo+ Pj4+ICsjZGVmaW5lIFZNQUxMT0NfTU9EVUxFX1NUQVJUwqDCoMKgIEJQRl9KSVRfUkVHSU9OX0VO RAo+Pj4+ICsjZGVmaW5lIFZNQUxMT0NfTU9EVUxFX0VORMKgwqDCoCAoKCh1bnNpZ25lZCBsb25n KSZfc3RhcnQgJiBQQUdFX01BU0spCj4+Pj4gKyBTWl8yRykKPj4+PiArI2VuZGlmCj4+Pj4KPj4+ PiDCoC8qCj4+Pj4gwqAgKiBSb3VnaGx5IHNpemUgdGhlIHZtZW1tYXAgc3BhY2UgdG8gYmUgbGFy Z2UgZW5vdWdoIHRvIGZpdCBlbm91Z2gKPj4+PiBAQCAtNTcsOSArNjMsMTYgQEAKPj4+PiDCoCNk ZWZpbmUgRklYQUREUl9TSVpFwqDCoMKgwqAgUEdESVJfU0laRQo+Pj4+IMKgI2VuZGlmCj4+Pj4g wqAjZGVmaW5lIEZJWEFERFJfU1RBUlTCoMKgwqAgKEZJWEFERFJfVE9QIC0gRklYQUREUl9TSVpF KQo+Pj4+IC0KPj4+PiDCoCNlbmRpZgo+Pj4+Cj4+Pj4gKyNpZm5kZWYgX19BU1NFTUJMWV9fCj4+ Pj4gKwo+Pj4+ICsvKiBQYWdlIFVwcGVyIERpcmVjdG9yeSBub3QgdXNlZCBpbiBSSVNDLVYgKi8K Pj4+PiArI2luY2x1ZGUgPGFzbS1nZW5lcmljL3BndGFibGUtbm9wdWQuaD4KPj4+PiArI2luY2x1 
ZGUgPGFzbS9wYWdlLmg+Cj4+Pj4gKyNpbmNsdWRlIDxhc20vdGxiZmx1c2guaD4KPj4+PiArI2lu Y2x1ZGUgPGxpbnV4L21tX3R5cGVzLmg+Cj4+Pj4gKwo+Pj4+IMKgI2lmZGVmIENPTkZJR182NEJJ VAo+Pj4+IMKgI2luY2x1ZGUgPGFzbS9wZ3RhYmxlLTY0Lmg+Cj4+Pj4gwqAjZWxzZQo+Pj4+IEBA IC00ODMsNiArNDk2LDcgQEAgc3RhdGljIGlubGluZSB2b2lkIF9fa2VybmVsX21hcF9wYWdlcyhz dHJ1Y3QgcGFnZQo+Pj4+ICpwYWdlLCBpbnQgbnVtcGFnZXMsIGludCBlbmFibAo+Pj4+Cj4+Pj4g wqAjZGVmaW5lIGtlcm5fYWRkcl92YWxpZChhZGRyKcKgwqAgKDEpIC8qIEZJWE1FICovCj4+Pj4K Pj4+PiArZXh0ZXJuIGNoYXIgX3N0YXJ0W107Cj4+Pj4gwqBleHRlcm4gdm9pZCAqZHRiX2Vhcmx5 X3ZhOwo+Pj4+IMKgdm9pZCBzZXR1cF9ib290bWVtKHZvaWQpOwo+Pj4+IMKgdm9pZCBwYWdpbmdf aW5pdCh2b2lkKTsKPj4+PiBkaWZmIC0tZ2l0IGEvYXJjaC9yaXNjdi9rZXJuZWwvaGVhZC5TIGIv YXJjaC9yaXNjdi9rZXJuZWwvaGVhZC5TCj4+Pj4gaW5kZXggOThhNDA2NDc0ZTdkLi44ZjViYjc3 MzEzMjcgMTAwNjQ0Cj4+Pj4gLS0tIGEvYXJjaC9yaXNjdi9rZXJuZWwvaGVhZC5TCj4+Pj4gKysr IGIvYXJjaC9yaXNjdi9rZXJuZWwvaGVhZC5TCj4+Pj4gQEAgLTQ5LDcgKzQ5LDggQEAgRU5UUlko X3N0YXJ0KQo+Pj4+IMKgI2lmZGVmIENPTkZJR19NTVUKPj4+PiDCoHJlbG9jYXRlOgo+Pj4+IMKg wqDCoMKgIC8qIFJlbG9jYXRlIHJldHVybiBhZGRyZXNzICovCj4+Pj4gLcKgwqDCoCBsaSBhMSwg UEFHRV9PRkZTRVQKPj4+PiArwqDCoMKgIGxhIGExLCBrZXJuZWxfdmlydF9hZGRyCj4+Pj4gK8Kg wqDCoCBSRUdfTCBhMSwgMChhMSkKPj4+PiDCoMKgwqDCoCBsYSBhMiwgX3N0YXJ0Cj4+Pj4gwqDC oMKgwqAgc3ViIGExLCBhMSwgYTIKPj4+PiDCoMKgwqDCoCBhZGQgcmEsIHJhLCBhMQo+Pj4+IGRp ZmYgLS1naXQgYS9hcmNoL3Jpc2N2L2tlcm5lbC9tb2R1bGUuYyBiL2FyY2gvcmlzY3Yva2VybmVs L21vZHVsZS5jCj4+Pj4gaW5kZXggOGJiZTVkYmUxMzQxLi4xYThmYmUwNWFjY2YgMTAwNjQ0Cj4+ Pj4gLS0tIGEvYXJjaC9yaXNjdi9rZXJuZWwvbW9kdWxlLmMKPj4+PiArKysgYi9hcmNoL3Jpc2N2 L2tlcm5lbC9tb2R1bGUuYwo+Pj4+IEBAIC0zOTIsMTIgKzM5MiwxMCBAQCBpbnQgYXBwbHlfcmVs b2NhdGVfYWRkKEVsZl9TaGRyICpzZWNoZHJzLCBjb25zdAo+Pj4+IGNoYXIgKnN0cnRhYiwKPj4+ PiDCoH0KPj4+Pgo+Pj4+IMKgI2lmIGRlZmluZWQoQ09ORklHX01NVSkgJiYgZGVmaW5lZChDT05G SUdfNjRCSVQpCj4+Pj4gLSNkZWZpbmUgVk1BTExPQ19NT0RVTEVfU1RBUlQgXAo+Pj4+IC3CoMKg wqDCoCBtYXgoUEZOX0FMSUdOKCh1bnNpZ25lZCBsb25nKSZfZW5kIC0gU1pfMkcpLCBWTUFMTE9D 
X1NUQVJUKQo+Pj4+IMKgdm9pZCAqbW9kdWxlX2FsbG9jKHVuc2lnbmVkIGxvbmcgc2l6ZSkKPj4+ PiDCoHsKPj4+PiDCoMKgwqDCoCByZXR1cm4gX192bWFsbG9jX25vZGVfcmFuZ2Uoc2l6ZSwgMSwg Vk1BTExPQ19NT0RVTEVfU1RBUlQsCj4+Pj4gLcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC oMKgwqDCoMKgIFZNQUxMT0NfRU5ELCBHRlBfS0VSTkVMLAo+Pj4+ICvCoMKgwqDCoMKgwqDCoMKg wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBWTUFMTE9DX01PRFVMRV9FTkQsIEdGUF9LRVJORUwsCj4+ Pj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBQQUdFX0tFUk5FTF9F WEVDLCAwLCBOVU1BX05PX05PREUsCj4+Pj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg wqDCoMKgwqDCoCBfX2J1aWx0aW5fcmV0dXJuX2FkZHJlc3MoMCkpOwo+Pj4+IMKgfQo+Pj4+IGRp ZmYgLS1naXQgYS9hcmNoL3Jpc2N2L2tlcm5lbC92bWxpbnV4Lmxkcy5TCj4+Pj4gYi9hcmNoL3Jp c2N2L2tlcm5lbC92bWxpbnV4Lmxkcy5TCj4+Pj4gaW5kZXggMDMzOWI2YmJlMTFhLi5hOWFiZGU2 MjkwOWYgMTAwNjQ0Cj4+Pj4gLS0tIGEvYXJjaC9yaXNjdi9rZXJuZWwvdm1saW51eC5sZHMuUwo+ Pj4+ICsrKyBiL2FyY2gvcmlzY3Yva2VybmVsL3ZtbGludXgubGRzLlMKPj4+PiBAQCAtNCw3ICs0 LDggQEAKPj4+PiDCoCAqIENvcHlyaWdodCAoQykgMjAxNyBTaUZpdmUKPj4+PiDCoCAqLwo+Pj4+ Cj4+Pj4gLSNkZWZpbmUgTE9BRF9PRkZTRVQgUEFHRV9PRkZTRVQKPj4+PiArI2luY2x1ZGUgPGFz bS9wZ3RhYmxlLmg+Cj4+Pj4gKyNkZWZpbmUgTE9BRF9PRkZTRVQgS0VSTkVMX0xJTktfQUREUgo+ Pj4+IMKgI2luY2x1ZGUgPGFzbS92bWxpbnV4Lmxkcy5oPgo+Pj4+IMKgI2luY2x1ZGUgPGFzbS9w YWdlLmg+Cj4+Pj4gwqAjaW5jbHVkZSA8YXNtL2NhY2hlLmg+Cj4+Pj4gZGlmZiAtLWdpdCBhL2Fy Y2gvcmlzY3YvbW0vaW5pdC5jIGIvYXJjaC9yaXNjdi9tbS9pbml0LmMKPj4+PiBpbmRleCA3MzZk ZTZjODczOWYuLjcxZGE3ODkxNDY0NSAxMDA2NDQKPj4+PiAtLS0gYS9hcmNoL3Jpc2N2L21tL2lu aXQuYwo+Pj4+ICsrKyBiL2FyY2gvcmlzY3YvbW0vaW5pdC5jCj4+Pj4gQEAgLTIyLDYgKzIyLDkg QEAKPj4+Pgo+Pj4+IMKgI2luY2x1ZGUgIi4uL2tlcm5lbC9oZWFkLmgiCj4+Pj4KPj4+PiArdW5z aWduZWQgbG9uZyBrZXJuZWxfdmlydF9hZGRyID0gS0VSTkVMX1ZJUlRfQUREUjsKPj4+PiArRVhQ T1JUX1NZTUJPTChrZXJuZWxfdmlydF9hZGRyKTsKPj4+PiArCj4+Pj4gwqB1bnNpZ25lZCBsb25n IGVtcHR5X3plcm9fcGFnZVtQQUdFX1NJWkUgLyBzaXplb2YodW5zaWduZWQgbG9uZyldCj4+Pj4g wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg 
X19wYWdlX2FsaWduZWRfYnNzOwo+Pj4+IMKgRVhQT1JUX1NZTUJPTChlbXB0eV96ZXJvX3BhZ2Up Owo+Pj4+IEBAIC0xNzgsOCArMTgxLDEyIEBAIHZvaWQgX19pbml0IHNldHVwX2Jvb3RtZW0odm9p ZCkKPj4+PiDCoH0KPj4+Pgo+Pj4+IMKgI2lmZGVmIENPTkZJR19NTVUKPj4+PiArLyogT2Zmc2V0 IGJldHdlZW4gbGluZWFyIG1hcHBpbmcgdmlydHVhbCBhZGRyZXNzIGFuZCBrZXJuZWwgbG9hZAo+ Pj4+IGFkZHJlc3MgKi8KPj4+PiDCoHVuc2lnbmVkIGxvbmcgdmFfcGFfb2Zmc2V0Owo+Pj4+IMKg RVhQT1JUX1NZTUJPTCh2YV9wYV9vZmZzZXQpOwo+Pj4+ICsvKiBPZmZzZXQgYmV0d2VlbiBrZXJu ZWwgbWFwcGluZyB2aXJ0dWFsIGFkZHJlc3MgYW5kIGtlcm5lbCBsb2FkCj4+Pj4gYWRkcmVzcyAq Lwo+Pj4+ICt1bnNpZ25lZCBsb25nIHZhX2tlcm5lbF9wYV9vZmZzZXQ7Cj4+Pj4gK0VYUE9SVF9T WU1CT0wodmFfa2VybmVsX3BhX29mZnNldCk7Cj4+Pj4gwqB1bnNpZ25lZCBsb25nIHBmbl9iYXNl Owo+Pj4+IMKgRVhQT1JUX1NZTUJPTChwZm5fYmFzZSk7Cj4+Pj4KPj4+PiBAQCAtMjcxLDcgKzI3 OCw3IEBAIHN0YXRpYyBwaHlzX2FkZHJfdCBfX2luaXQgYWxsb2NfcG1kKHVpbnRwdHJfdCB2YSkK Pj4+PiDCoMKgwqDCoCBpZiAobW11X2VuYWJsZWQpCj4+Pj4gwqDCoMKgwqDCoMKgwqDCoCByZXR1 cm4gbWVtYmxvY2tfcGh5c19hbGxvYyhQQUdFX1NJWkUsIFBBR0VfU0laRSk7Cj4+Pj4KPj4+PiAt wqDCoMKgIHBtZF9udW0gPSAodmEgLSBQQUdFX09GRlNFVCkgPj4gUEdESVJfU0hJRlQ7Cj4+Pj4g K8KgwqDCoCBwbWRfbnVtID0gKHZhIC0ga2VybmVsX3ZpcnRfYWRkcikgPj4gUEdESVJfU0hJRlQ7 Cj4+Pj4gwqDCoMKgwqAgQlVHX09OKHBtZF9udW0gPj0gTlVNX0VBUkxZX1BNRFMpOwo+Pj4+IMKg wqDCoMKgIHJldHVybiAodWludHB0cl90KSZlYXJseV9wbWRbcG1kX251bSAqIFBUUlNfUEVSX1BN RF07Cj4+Pj4gwqB9Cj4+Pj4gQEAgLTM3MiwxNCArMzc5LDMwIEBAIHN0YXRpYyB1aW50cHRyX3Qg X19pbml0Cj4+Pj4gYmVzdF9tYXBfc2l6ZShwaHlzX2FkZHJfdCBiYXNlLCBwaHlzX2FkZHJfdCBz aXplKQo+Pj4+IMKgI2Vycm9yICJzZXR1cF92bSgpIGlzIGNhbGxlZCBmcm9tIGhlYWQuUyBiZWZv cmUgcmVsb2NhdGUgc28gaXQKPj4+PiBzaG91bGQgbm90IHVzZSBhYnNvbHV0ZSBhZGRyZXNzaW5n LiIKPj4+PiDCoCNlbmRpZgo+Pj4+Cj4+Pj4gK3N0YXRpYyB1aW50cHRyX3QgbG9hZF9wYSwgbG9h ZF9zejsKPj4+PiArCj4+Pj4gK3N0YXRpYyB2b2lkIF9faW5pdCBjcmVhdGVfa2VybmVsX3BhZ2Vf dGFibGUocGdkX3QgKnBnZGlyLCB1aW50cHRyX3QKPj4+PiBtYXBfc2l6ZSkKPj4+PiArewo+Pj4+ ICvCoMKgwqAgdWludHB0cl90IHZhLCBlbmRfdmE7Cj4+Pj4gKwo+Pj4+ICvCoMKgwqAgZW5kX3Zh 
ID0ga2VybmVsX3ZpcnRfYWRkciArIGxvYWRfc3o7Cj4+Pj4gK8KgwqDCoCBmb3IgKHZhID0ga2Vy bmVsX3ZpcnRfYWRkcjsgdmEgPCBlbmRfdmE7IHZhICs9IG1hcF9zaXplKQo+Pj4+ICvCoMKgwqDC oMKgwqDCoCBjcmVhdGVfcGdkX21hcHBpbmcocGdkaXIsIHZhLAo+Pj4+ICvCoMKgwqDCoMKgwqDC oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgbG9hZF9wYSArICh2YSAtIGtlcm5lbF92aXJ0X2FkZHIp LAo+Pj4+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgbWFwX3NpemUsIFBB R0VfS0VSTkVMX0VYRUMpOwo+Pj4+ICt9Cj4+Pj4gKwo+Pj4+IMKgYXNtbGlua2FnZSB2b2lkIF9f aW5pdCBzZXR1cF92bSh1aW50cHRyX3QgZHRiX3BhKQo+Pj4+IMKgewo+Pj4+IMKgwqDCoMKgIHVp bnRwdHJfdCB2YSwgZW5kX3ZhOwo+Pj4+IC3CoMKgwqAgdWludHB0cl90IGxvYWRfcGEgPSAodWlu dHB0cl90KSgmX3N0YXJ0KTsKPj4+PiAtwqDCoMKgIHVpbnRwdHJfdCBsb2FkX3N6ID0gKHVpbnRw dHJfdCkoJl9lbmQpIC0gbG9hZF9wYTsKPj4+PiDCoMKgwqDCoCB1aW50cHRyX3QgbWFwX3NpemUg PSBiZXN0X21hcF9zaXplKGxvYWRfcGEsCj4+Pj4gTUFYX0VBUkxZX01BUFBJTkdfU0laRSk7Cj4+ Pj4KPj4+PiArwqDCoMKgIGxvYWRfcGEgPSAodWludHB0cl90KSgmX3N0YXJ0KTsKPj4+PiArwqDC oMKgIGxvYWRfc3ogPSAodWludHB0cl90KSgmX2VuZCkgLSBsb2FkX3BhOwo+Pj4+ICsKPj4+PiDC oMKgwqDCoCB2YV9wYV9vZmZzZXQgPSBQQUdFX09GRlNFVCAtIGxvYWRfcGE7Cj4+Pj4gK8KgwqDC oCB2YV9rZXJuZWxfcGFfb2Zmc2V0ID0ga2VybmVsX3ZpcnRfYWRkciAtIGxvYWRfcGE7Cj4+Pj4g Kwo+Pj4+IMKgwqDCoMKgIHBmbl9iYXNlID0gUEZOX0RPV04obG9hZF9wYSk7Cj4+Pj4KPj4+PiDC oMKgwqDCoCAvKgo+Pj4+IEBAIC00MDIsMjYgKzQyNSwyMiBAQCBhc21saW5rYWdlIHZvaWQgX19p bml0IHNldHVwX3ZtKHVpbnRwdHJfdCBkdGJfcGEpCj4+Pj4gwqDCoMKgwqAgY3JlYXRlX3BtZF9t YXBwaW5nKGZpeG1hcF9wbWQsIEZJWEFERFJfU1RBUlQsCj4+Pj4gwqDCoMKgwqDCoMKgwqDCoMKg wqDCoMKgwqDCoMKgICh1aW50cHRyX3QpZml4bWFwX3B0ZSwgUE1EX1NJWkUsIFBBR0VfVEFCTEUp Owo+Pj4+IMKgwqDCoMKgIC8qIFNldHVwIHRyYW1wb2xpbmUgUEdEIGFuZCBQTUQgKi8KPj4+PiAt wqDCoMKgIGNyZWF0ZV9wZ2RfbWFwcGluZyh0cmFtcG9saW5lX3BnX2RpciwgUEFHRV9PRkZTRVQs Cj4+Pj4gK8KgwqDCoCBjcmVhdGVfcGdkX21hcHBpbmcodHJhbXBvbGluZV9wZ19kaXIsIGtlcm5l bF92aXJ0X2FkZHIsCj4+Pj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgICh1aW50cHRy X3QpdHJhbXBvbGluZV9wbWQsIFBHRElSX1NJWkUsIFBBR0VfVEFCTEUpOwo+Pj4+IC3CoMKgwqAg 
Y3JlYXRlX3BtZF9tYXBwaW5nKHRyYW1wb2xpbmVfcG1kLCBQQUdFX09GRlNFVCwKPj4+PiArwqDC oMKgIGNyZWF0ZV9wbWRfbWFwcGluZyh0cmFtcG9saW5lX3BtZCwga2VybmVsX3ZpcnRfYWRkciwK Pj4+PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgbG9hZF9wYSwgUE1EX1NJWkUsIFBB R0VfS0VSTkVMX0VYRUMpOwo+Pj4+IMKgI2Vsc2UKPj4+PiDCoMKgwqDCoCAvKiBTZXR1cCB0cmFt cG9saW5lIFBHRCAqLwo+Pj4+IC3CoMKgwqAgY3JlYXRlX3BnZF9tYXBwaW5nKHRyYW1wb2xpbmVf cGdfZGlyLCBQQUdFX09GRlNFVCwKPj4+PiArwqDCoMKgIGNyZWF0ZV9wZ2RfbWFwcGluZyh0cmFt cG9saW5lX3BnX2Rpciwga2VybmVsX3ZpcnRfYWRkciwKPj4+PiDCoMKgwqDCoMKgwqDCoMKgwqDC oMKgwqDCoMKgwqAgbG9hZF9wYSwgUEdESVJfU0laRSwgUEFHRV9LRVJORUxfRVhFQyk7Cj4+Pj4g wqAjZW5kaWYKPj4+Pgo+Pj4+IMKgwqDCoMKgIC8qCj4+Pj4gLcKgwqDCoMKgICogU2V0dXAgZWFy bHkgUEdEIGNvdmVyaW5nIGVudGlyZSBrZXJuZWwgd2hpY2ggd2lsbCBhbGxvd3MKPj4+PiArwqDC oMKgwqAgKiBTZXR1cCBlYXJseSBQR0QgY292ZXJpbmcgZW50aXJlIGtlcm5lbCB3aGljaCB3aWxs IGFsbG93Cj4+Pj4gwqDCoMKgwqDCoCAqIHVzIHRvIHJlYWNoIHBhZ2luZ19pbml0KCkuIFdlIG1h cCBhbGwgbWVtb3J5IGJhbmtzIGxhdGVyCj4+Pj4gwqDCoMKgwqDCoCAqIGluIHNldHVwX3ZtX2Zp bmFsKCkgYmVsb3cuCj4+Pj4gwqDCoMKgwqDCoCAqLwo+Pj4+IC3CoMKgwqAgZW5kX3ZhID0gUEFH RV9PRkZTRVQgKyBsb2FkX3N6Owo+Pj4+IC3CoMKgwqAgZm9yICh2YSA9IFBBR0VfT0ZGU0VUOyB2 YSA8IGVuZF92YTsgdmEgKz0gbWFwX3NpemUpCj4+Pj4gLcKgwqDCoMKgwqDCoMKgIGNyZWF0ZV9w Z2RfbWFwcGluZyhlYXJseV9wZ19kaXIsIHZhLAo+Pj4+IC3CoMKgwqDCoMKgwqDCoMKgwqDCoMKg wqDCoMKgwqDCoMKgwqAgbG9hZF9wYSArICh2YSAtIFBBR0VfT0ZGU0VUKSwKPj4+PiAtwqDCoMKg wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIG1hcF9zaXplLCBQQUdFX0tFUk5FTF9FWEVD KTsKPj4+PiArwqDCoMKgIGNyZWF0ZV9rZXJuZWxfcGFnZV90YWJsZShlYXJseV9wZ19kaXIsIG1h cF9zaXplKTsKPj4+Pgo+Pj4+IMKgwqDCoMKgIC8qIENyZWF0ZSBmaXhlZCBtYXBwaW5nIGZvciBl YXJseSBGRFQgcGFyc2luZyAqLwo+Pj4+IMKgwqDCoMKgIGVuZF92YSA9IF9fZml4X3RvX3ZpcnQo RklYX0ZEVCkgKyBGSVhfRkRUX1NJWkU7Cj4+Pj4gQEAgLTQ0MSw2ICs0NjAsNyBAQCBzdGF0aWMg dm9pZCBfX2luaXQgc2V0dXBfdm1fZmluYWwodm9pZCkKPj4+PiDCoMKgwqDCoCB1aW50cHRyX3Qg dmEsIG1hcF9zaXplOwo+Pj4+IMKgwqDCoMKgIHBoeXNfYWRkcl90IHBhLCBzdGFydCwgZW5kOwo+ 
Pj4+IMKgwqDCoMKgIHN0cnVjdCBtZW1ibG9ja19yZWdpb24gKnJlZzsKPj4+PiArwqDCoMKgIHN0 YXRpYyBzdHJ1Y3Qgdm1fc3RydWN0IHZtX2tlcm5lbCA9IHsgMCB9Owo+Pj4+Cj4+Pj4gwqDCoMKg wqAgLyogU2V0IG1tdV9lbmFibGVkIGZsYWcgKi8KPj4+PiDCoMKgwqDCoCBtbXVfZW5hYmxlZCA9 IHRydWU7Cj4+Pj4gQEAgLTQ2NywxMCArNDg3LDIyIEBAIHN0YXRpYyB2b2lkIF9faW5pdCBzZXR1 cF92bV9maW5hbCh2b2lkKQo+Pj4+IMKgwqDCoMKgwqDCoMKgwqAgZm9yIChwYSA9IHN0YXJ0OyBw YSA8IGVuZDsgcGEgKz0gbWFwX3NpemUpIHsKPj4+PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg dmEgPSAodWludHB0cl90KV9fdmEocGEpOwo+Pj4+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBj cmVhdGVfcGdkX21hcHBpbmcoc3dhcHBlcl9wZ19kaXIsIHZhLCBwYSwKPj4+PiAtwqDCoMKgwqDC oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgbWFwX3NpemUsIFBBR0VfS0VSTkVM X0VYRUMpOwo+Pj4+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC oCBtYXBfc2l6ZSwgUEFHRV9LRVJORUwpOwo+Pj4+IMKgwqDCoMKgwqDCoMKgwqAgfQo+Pj4+IMKg wqDCoMKgIH0KPj4+Pgo+Pj4+ICvCoMKgwqAgLyogTWFwIHRoZSBrZXJuZWwgKi8KPj4+PiArwqDC oMKgIGNyZWF0ZV9rZXJuZWxfcGFnZV90YWJsZShzd2FwcGVyX3BnX2RpciwgUE1EX1NJWkUpOwo+ Pj4+ICsKPj4+PiArwqDCoMKgIC8qIFJlc2VydmUgdGhlIHZtYWxsb2MgYXJlYSBvY2N1cGllZCBi eSB0aGUga2VybmVsICovCj4+Pj4gK8KgwqDCoCB2bV9rZXJuZWwuYWRkciA9ICh2b2lkICopa2Vy bmVsX3ZpcnRfYWRkcjsKPj4+PiArwqDCoMKgIHZtX2tlcm5lbC5waHlzX2FkZHIgPSBsb2FkX3Bh Owo+Pj4+ICvCoMKgwqAgdm1fa2VybmVsLnNpemUgPSAobG9hZF9zeiArIFBNRF9TSVpFIC0gMSkg JiB+KFBNRF9TSVpFIC0gMSk7Cj4+Pj4gK8KgwqDCoCB2bV9rZXJuZWwuZmxhZ3MgPSBWTV9NQVAg fCBWTV9OT19HVUFSRDsKPj4+PiArwqDCoMKgIHZtX2tlcm5lbC5jYWxsZXIgPSBfX2J1aWx0aW5f cmV0dXJuX2FkZHJlc3MoMCk7Cj4+Pj4gKwo+Pj4+ICvCoMKgwqAgdm1fYXJlYV9hZGRfZWFybHko JnZtX2tlcm5lbCk7Cj4+Pj4gKwo+Pj4+IMKgwqDCoMKgIC8qIENsZWFyIGZpeG1hcCBQVEUgYW5k IFBNRCBtYXBwaW5ncyAqLwo+Pj4+IMKgwqDCoMKgIGNsZWFyX2ZpeG1hcChGSVhfUFRFKTsKPj4+ PiDCoMKgwqDCoCBjbGVhcl9maXhtYXAoRklYX1BNRCk7Cj4+Pj4gZGlmZiAtLWdpdCBhL2FyY2gv cmlzY3YvbW0vcGh5c2FkZHIuYyBiL2FyY2gvcmlzY3YvbW0vcGh5c2FkZHIuYwo+Pj4+IGluZGV4 IGU4ZTRkY2QzOWZlZC4uMzU3MDNkNWVmNWZkIDEwMDY0NAo+Pj4+IC0tLSBhL2FyY2gvcmlzY3Yv 
Date: Tue, 21 Jul 2020 12:05:40 -0700 (PDT)
Subject: Re: [PATCH v5 1/4] riscv: Move kernel mapping to vmalloc zone
From: Palmer Dabbelt <palmer@dabbelt.com>
To: alex@ghiti.fr
On Tue, 21 Jul 2020 11:36:10 PDT (-0700), alex@ghiti.fr wrote:
> Let's try to make progress here: I'm adding linux-mm in CC to get feedback
> on this patch, as it blocks sv48 support too.

Sorry for being slow here.  I haven't replied because I hadn't really fleshed
out the design yet, but just so everyone's on the same page, my problems with
this are:

* We waste vmalloc space on 32-bit systems, where there isn't a lot of it.
* On 64-bit systems the VA space around the kernel is precious because it's
  the only place we can place text (modules, BPF, whatever).  If we start
  putting the kernel in the vmalloc space then we either have to pre-allocate
  a bunch of space around it (essentially making it a fixed mapping anyway)
  or it becomes likely that we won't be able to find space for modules as
  they're loaded into running systems.
* Relying on a relocatable kernel for sv48 support introduces a fairly large
  performance hit.

Roughly, my proposal would be to:

* Leave the 32-bit memory map alone.  On 32-bit systems we can load modules
  anywhere and we only have one VA width, so we're not really solving any
  problems with these changes.
* Statically allocate a 2GiB portion of the VA space for all our text, as
  its own region.  We'd link/relocate the kernel here instead of around
  PAGE_OFFSET, which would decouple the kernel from the physical memory
  layout of the system.  This would have the side effect of sorting out a
  bunch of bootloader headaches that we currently have.
* Sort out how to maintain a linear map as the canonical hole moves around
  between the VA widths without adding a bunch of overhead to virt2phys and
  friends.  This is probably going to be the trickiest part, but I think if
  we just change the page table code to essentially lie about VAs when an
  sv39 system runs an sv48+sv39 kernel we could make it work -- there'd be
  some logical complexity involved, but it would remain fast.

This doesn't solve the problem of virtually relocatable kernels, but it does
let us decouple that from the sv48 stuff.  It also lets us stop relying on a
fixed physical address the kernel is loaded at, which is another thing I
don't like.

I know this may be a more complicated approach, but there aren't any sv48
systems around right now, so I just don't see the rush to support them,
particularly when there's a cost to what already exists (for those who
haven't been watching: so far, all the sv48 patch sets have imposed a
significant performance penalty on all systems).

>
> Alex
>
> On 7/9/20 at 7:11 AM, Alex Ghiti wrote:
>> Hi Palmer,
>>
>> On 7/9/20 at 1:05 AM, Palmer Dabbelt wrote:
>>> On Sun, 07 Jun 2020 00:59:46 PDT (-0700), alex@ghiti.fr wrote:
>>>> This is a preparatory patch for a relocatable kernel.
>>>>
>>>> The kernel used to be linked at the PAGE_OFFSET address and loaded
>>>> physically at the beginning of the main memory.  Therefore, we could
>>>> use the linear mapping for the kernel mapping.
>>>>
>>>> But the relocated kernel base address will be different from
>>>> PAGE_OFFSET, and since in the linear mapping two different virtual
>>>> addresses cannot point to the same physical address, the kernel
>>>> mapping needs to lie outside the linear mapping.
>>>
>>> I know it's been a while, but I keep opening this up to review it and
>>> just can't get over how ugly it is to put the kernel's linear map in
>>> the vmalloc region.
>>>
>>> I guess I don't understand why this is necessary at all.
>>> Specifically: why can't we just relocate the kernel within the linear
>>> map?  That would let the bootloader put the kernel wherever it wants,
>>> modulo the physical memory size we support.  We'd need to handle the
>>> regions that are coupled to the kernel's execution address, but we
>>> could just put them in an explicit memory region, which is what we
>>> should probably be doing anyway.
>>
>> Virtual relocation in the linear mapping requires moving the kernel
>> physically too.  Zong implemented this physical move in his KASLR RFC
>> patchset, which is cumbersome since finding an available physical spot
>> is harder than just selecting a virtual range in the vmalloc range.
>>
>> In addition, having the kernel mapping in the linear mapping prevents
>> the use of hugepages for the linear mapping, resulting in performance
>> loss (at least for the GB that encompasses the kernel).
>>
>> Why do you find this "ugly"?  The vmalloc region is just a bunch of
>> available virtual addresses for whatever purpose we want, and as noted
>> by Zong, arm64 uses the same scheme.
>>
>>>
>>>> In addition, because modules and BPF must be close to the kernel
>>>> (inside a +-2GB window), the kernel is placed at the end of the
>>>> vmalloc zone minus 2GB, which leaves room for modules and BPF.  The
>>>> kernel could not be placed at the beginning of the vmalloc zone since
>>>> other vmalloc allocations from the kernel could get all of the +-2GB
>>>> window around the kernel, which would prevent new modules and BPF
>>>> programs from being loaded.
>>>
>>> Well, that's not enough to make sure this doesn't happen -- it's just
>>> enough to make sure it doesn't happen very quickly.  That's the same
>>> boat we're already in, though, so it's not like it's worse.
>>
>> Indeed, that's not worse.  I haven't found a way to reserve vmalloc
>> area without actually allocating it.
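The +-2GB placement being debated above is plain address arithmetic and can be
sanity-checked outside the kernel.  The sketch below uses the sv39 constants
implied by the patch (PAGE_OFFSET, VMALLOC_END, SZ_2G) plus a made-up 32 MiB
image size and module/BPF layout; these values are illustrative assumptions,
not authoritative kernel numbers:

```python
# Sketch of the sv39 layout under discussion.  All constants are
# illustrative (derived from the patch's macros), not authoritative.

PAGE_OFFSET = 0xffffffe000000000   # sv39 PAGE_OFFSET (assumed)
SZ_2G       = 0x80000000
SZ_128M     = 0x08000000
PAGE_SIZE   = 0x1000

VMALLOC_END      = PAGE_OFFSET - 1
KERNEL_VIRT_ADDR = VMALLOC_END - SZ_2G + 1   # end of vmalloc zone minus 2GiB

def page_align(x):
    """Round x up to the next page boundary."""
    return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)

# Pretend the kernel image is 32 MiB (hypothetical size).
_start = KERNEL_VIRT_ADDR
_end   = _start + 32 * 1024 * 1024

# Per the patch: BPF JIT region just after the image, modules up to
# _start + 2GiB.
bpf_start    = page_align(_end)
bpf_end      = bpf_start + SZ_128M
module_start = bpf_end
module_end   = (_start & ~(PAGE_SIZE - 1)) + SZ_2G

# Every module/BPF address stays within the +-2GiB window around the
# kernel text, so auipc/jalr pairs can reach it.
for addr in (bpf_start, bpf_end, module_start, module_end):
    assert abs(addr - _start) <= SZ_2G
    assert abs(addr - _end) <= SZ_2G

print(hex(KERNEL_VIRT_ADDR), hex(module_end - module_start))
```

Note that, as Palmer points out, nothing here *reserves* the window: it only
shows the static layout leaves room for modules and BPF at boot.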
>>
>>
>>>
>>>> Signed-off-by: Alexandre Ghiti
>>>> Reviewed-by: Zong Li
>>>> ---
>>>>  arch/riscv/boot/loader.lds.S     |  3 +-
>>>>  arch/riscv/include/asm/page.h    | 10 +++++-
>>>>  arch/riscv/include/asm/pgtable.h | 38 ++++++++++++++-------
>>>>  arch/riscv/kernel/head.S         |  3 +-
>>>>  arch/riscv/kernel/module.c       |  4 +--
>>>>  arch/riscv/kernel/vmlinux.lds.S  |  3 +-
>>>>  arch/riscv/mm/init.c             | 58 +++++++++++++++++++++++++-------
>>>>  arch/riscv/mm/physaddr.c         |  2 +-
>>>>  8 files changed, 88 insertions(+), 33 deletions(-)
>>>>
>>>> diff --git a/arch/riscv/boot/loader.lds.S b/arch/riscv/boot/loader.lds.S
>>>> index 47a5003c2e28..62d94696a19c 100644
>>>> --- a/arch/riscv/boot/loader.lds.S
>>>> +++ b/arch/riscv/boot/loader.lds.S
>>>> @@ -1,13 +1,14 @@
>>>>  /* SPDX-License-Identifier: GPL-2.0 */
>>>>
>>>>  #include
>>>> +#include
>>>>
>>>>  OUTPUT_ARCH(riscv)
>>>>  ENTRY(_start)
>>>>
>>>>  SECTIONS
>>>>  {
>>>> -    . = PAGE_OFFSET;
>>>> +    . = KERNEL_LINK_ADDR;
>>>>
>>>>      .payload : {
>>>>          *(.payload)
>>>> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
>>>> index 2d50f76efe48..48bb09b6a9b7 100644
>>>> --- a/arch/riscv/include/asm/page.h
>>>> +++ b/arch/riscv/include/asm/page.h
>>>> @@ -90,18 +90,26 @@ typedef struct page *pgtable_t;
>>>>
>>>>  #ifdef CONFIG_MMU
>>>>  extern unsigned long va_pa_offset;
>>>> +extern unsigned long va_kernel_pa_offset;
>>>>  extern unsigned long pfn_base;
>>>>  #define ARCH_PFN_OFFSET        (pfn_base)
>>>>  #else
>>>>  #define va_pa_offset        0
>>>> +#define va_kernel_pa_offset    0
>>>>  #define ARCH_PFN_OFFSET        (PAGE_OFFSET >> PAGE_SHIFT)
>>>>  #endif /* CONFIG_MMU */
>>>>
>>>>  extern unsigned long max_low_pfn;
>>>>  extern unsigned long min_low_pfn;
>>>> +extern unsigned long kernel_virt_addr;
>>>>
>>>>  #define __pa_to_va_nodebug(x)    ((void *)((unsigned long) (x) + va_pa_offset))
>>>> -#define __va_to_pa_nodebug(x)    ((unsigned long)(x) - va_pa_offset)
>>>> +#define linear_mapping_va_to_pa(x)    ((unsigned long)(x) - va_pa_offset)
>>>> +#define kernel_mapping_va_to_pa(x)    \
>>>> +    ((unsigned long)(x) - va_kernel_pa_offset)
>>>> +#define __va_to_pa_nodebug(x)        \
>>>> +    (((x) >= PAGE_OFFSET) ?        \
>>>> +        linear_mapping_va_to_pa(x) : kernel_mapping_va_to_pa(x))
>>>>
>>>>  #ifdef CONFIG_DEBUG_VIRTUAL
>>>>  extern phys_addr_t __virt_to_phys(unsigned long x);
>>>> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>>>> index 35b60035b6b0..94ef3b49dfb6 100644
>>>> --- a/arch/riscv/include/asm/pgtable.h
>>>> +++ b/arch/riscv/include/asm/pgtable.h
>>>> @@ -11,23 +11,29 @@
>>>>
>>>>  #include
>>>>
>>>> -#ifndef __ASSEMBLY__
>>>> -
>>>> -/* Page Upper Directory not used in RISC-V */
>>>> -#include
>>>> -#include
>>>> -#include
>>>> -#include
>>>> -
>>>> -#ifdef CONFIG_MMU
>>>> +#ifndef CONFIG_MMU
>>>> +#define KERNEL_VIRT_ADDR    PAGE_OFFSET
>>>> +#define KERNEL_LINK_ADDR    PAGE_OFFSET
>>>> +#else
>>>> +/*
>>>> + * Leave 2GB for modules and BPF that must lie within a 2GB range around
>>>> + * the kernel.
>>>> + */
>>>> +#define KERNEL_VIRT_ADDR    (VMALLOC_END - SZ_2G + 1)
>>>> +#define KERNEL_LINK_ADDR    KERNEL_VIRT_ADDR
>>>
>>> At a bare minimum this is going to make a mess of the 32-bit port, as
>>> non-relocatable kernels are now going to get linked at 1GiB which is
>>> where user code is supposed to live.  That's an easy fix, though, as
>>> the 32-bit stuff doesn't need any module address restrictions.
>>
>> Indeed, I will take a look at that.
>>
>>
>>>
>>>>  #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
>>>>  #define VMALLOC_END      (PAGE_OFFSET - 1)
>>>>  #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
>>>>
>>>>  #define BPF_JIT_REGION_SIZE    (SZ_128M)
>>>> -#define BPF_JIT_REGION_START    (PAGE_OFFSET - BPF_JIT_REGION_SIZE)
>>>> -#define BPF_JIT_REGION_END    (VMALLOC_END)
>>>> +#define BPF_JIT_REGION_START    PFN_ALIGN((unsigned long)&_end)
>>>> +#define BPF_JIT_REGION_END    (BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
>>>> +
>>>> +#ifdef CONFIG_64BIT
>>>> +#define VMALLOC_MODULE_START    BPF_JIT_REGION_END
>>>> +#define VMALLOC_MODULE_END    (((unsigned long)&_start & PAGE_MASK) + SZ_2G)
>>>> +#endif
>>>>
>>>>  /*
>>>>   * Roughly size the vmemmap space to be large enough to fit enough
>>>> @@ -57,9 +63,16 @@
>>>>  #define FIXADDR_SIZE     PGDIR_SIZE
>>>>  #endif
>>>>  #define FIXADDR_START    (FIXADDR_TOP - FIXADDR_SIZE)
>>>> -
>>>>  #endif
>>>>
>>>> +#ifndef __ASSEMBLY__
>>>> +
>>>> +/* Page Upper Directory not used in RISC-V */
>>>> +#include
>>>> +#include
>>>> +#include
>>>> +#include
>>>> +
>>>>  #ifdef CONFIG_64BIT
>>>>  #include
>>>>  #else
>>>> @@ -483,6 +496,7 @@ static inline void __kernel_map_pages(struct page *page, int numpages, int enabl
>>>>
>>>>  #define kern_addr_valid(addr)   (1) /* FIXME */
>>>>
>>>> +extern char _start[];
>>>>  extern void *dtb_early_va;
>>>>  void setup_bootmem(void);
>>>>  void paging_init(void);
>>>> diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
>>>> index 98a406474e7d..8f5bb7731327 100644
>>>> --- a/arch/riscv/kernel/head.S
>>>> +++ b/arch/riscv/kernel/head.S
>>>> @@ -49,7 +49,8 @@ ENTRY(_start)
>>>>  #ifdef CONFIG_MMU
>>>>  relocate:
>>>>      /* Relocate return address */
>>>> -    li a1, PAGE_OFFSET
>>>> +    la a1, kernel_virt_addr
>>>> +    REG_L a1, 0(a1)
>>>>      la a2, _start
>>>>      sub a1, a1, a2
>>>>      add ra, ra, a1
>>>> diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
>>>> index 8bbe5dbe1341..1a8fbe05accf 100644
>>>> --- a/arch/riscv/kernel/module.c
>>>> +++ b/arch/riscv/kernel/module.c
>>>> @@ -392,12 +392,10 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
>>>>  }
>>>>
>>>>  #if defined(CONFIG_MMU) && defined(CONFIG_64BIT)
>>>> -#define VMALLOC_MODULE_START \
>>>> -     max(PFN_ALIGN((unsigned long)&_end - SZ_2G), VMALLOC_START)
>>>>  void *module_alloc(unsigned long size)
>>>>  {
>>>>      return __vmalloc_node_range(size, 1, VMALLOC_MODULE_START,
>>>> -                    VMALLOC_END, GFP_KERNEL,
>>>> +                    VMALLOC_MODULE_END, GFP_KERNEL,
>>>>                      PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
>>>>                      __builtin_return_address(0));
>>>>  }
>>>> diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
>>>> index 0339b6bbe11a..a9abde62909f 100644
>>>> --- a/arch/riscv/kernel/vmlinux.lds.S
>>>> +++ b/arch/riscv/kernel/vmlinux.lds.S
>>>> @@ -4,7 +4,8 @@
>>>>   * Copyright (C) 2017 SiFive
>>>>   */
>>>>
>>>> -#define LOAD_OFFSET PAGE_OFFSET
>>>> +#include
>>>> +#define LOAD_OFFSET KERNEL_LINK_ADDR
>>>>  #include
>>>>  #include
>>>>  #include
>>>> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
>>>> index 736de6c8739f..71da78914645 100644
>>>> --- a/arch/riscv/mm/init.c
>>>> +++ b/arch/riscv/mm/init.c
>>>> @@ -22,6 +22,9 @@
>>>>
>>>>  #include "../kernel/head.h"
>>>>
>>>> +unsigned long kernel_virt_addr = KERNEL_VIRT_ADDR;
>>>> +EXPORT_SYMBOL(kernel_virt_addr);
>>>> +
>>>>  unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
>>>>                              __page_aligned_bss;
>>>>  EXPORT_SYMBOL(empty_zero_page);
>>>> @@ -178,8 +181,12 @@ void __init setup_bootmem(void)
>>>>  }
>>>>
>>>>  #ifdef CONFIG_MMU
>>>> +/* Offset between linear mapping virtual address and kernel load address */
>>>>  unsigned long va_pa_offset;
>>>>  EXPORT_SYMBOL(va_pa_offset);
>>>> +/* Offset between kernel mapping virtual address and kernel load address */
>>>> +unsigned long va_kernel_pa_offset;
>>>> +EXPORT_SYMBOL(va_kernel_pa_offset);
>>>>  unsigned long pfn_base;
>>>>  EXPORT_SYMBOL(pfn_base);
>>>>
>>>> @@ -271,7 +278,7 @@ static phys_addr_t __init alloc_pmd(uintptr_t va)
>>>>      if (mmu_enabled)
>>>>          return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
>>>>
>>>> -    pmd_num = (va - PAGE_OFFSET) >> PGDIR_SHIFT;
>>>> +    pmd_num = (va - kernel_virt_addr) >> PGDIR_SHIFT;
>>>>      BUG_ON(pmd_num >= NUM_EARLY_PMDS);
>>>>      return (uintptr_t)&early_pmd[pmd_num * PTRS_PER_PMD];
>>>>  }
>>>> @@ -372,14 +379,30 @@ static uintptr_t __init best_map_size(phys_addr_t base, phys_addr_t size)
>>>>  #error "setup_vm() is called from head.S before relocate so it should not use absolute addressing."
>>>>  #endif
>>>>
>>>> +static uintptr_t load_pa, load_sz;
>>>> +
>>>> +static void __init create_kernel_page_table(pgd_t *pgdir, uintptr_t map_size)
>>>> +{
>>>> +    uintptr_t va, end_va;
>>>> +
>>>> +    end_va = kernel_virt_addr + load_sz;
>>>> +    for (va = kernel_virt_addr; va < end_va; va += map_size)
>>>> +        create_pgd_mapping(pgdir, va,
>>>> +                   load_pa + (va - kernel_virt_addr),
>>>> +                   map_size, PAGE_KERNEL_EXEC);
>>>> +}
>>>> +
>>>>  asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>>>>  {
>>>>      uintptr_t va, end_va;
>>>> -    uintptr_t load_pa = (uintptr_t)(&_start);
>>>> -    uintptr_t load_sz = (uintptr_t)(&_end) - load_pa;
>>>>      uintptr_t map_size = best_map_size(load_pa, MAX_EARLY_MAPPING_SIZE);
>>>>
>>>> +    load_pa = (uintptr_t)(&_start);
>>>> +    load_sz = (uintptr_t)(&_end) - load_pa;
>>>> +
>>>>      va_pa_offset = PAGE_OFFSET - load_pa;
>>>> +    va_kernel_pa_offset = kernel_virt_addr - load_pa;
>>>> +
>>>>      pfn_base = PFN_DOWN(load_pa);
>>>>
>>>>      /*
>>>> @@ -402,26 +425,22 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>>>>      create_pmd_mapping(fixmap_pmd, FIXADDR_START,
>>>>                 (uintptr_t)fixmap_pte, PMD_SIZE, PAGE_TABLE);
>>>>      /* Setup trampoline PGD and PMD */
>>>> -    create_pgd_mapping(trampoline_pg_dir, PAGE_OFFSET,
>>>> +    create_pgd_mapping(trampoline_pg_dir, kernel_virt_addr,
>>>>                 (uintptr_t)trampoline_pmd, PGDIR_SIZE, PAGE_TABLE);
>>>> -    create_pmd_mapping(trampoline_pmd, PAGE_OFFSET,
>>>> +    create_pmd_mapping(trampoline_pmd, kernel_virt_addr,
>>>>                 load_pa, PMD_SIZE, PAGE_KERNEL_EXEC);
>>>>  #else
>>>>      /* Setup trampoline PGD */
>>>> -    create_pgd_mapping(trampoline_pg_dir, PAGE_OFFSET,
>>>> +    create_pgd_mapping(trampoline_pg_dir, kernel_virt_addr,
>>>>                 load_pa, PGDIR_SIZE, PAGE_KERNEL_EXEC);
>>>>  #endif
>>>>
>>>>      /*
>>>> -     * Setup early PGD covering entire kernel which will allows
>>>> +     * Setup early PGD covering entire kernel which will allow
>>>>       * us to reach paging_init(). We map all memory banks later
>>>>       * in setup_vm_final() below.
>>>>       */
>>>> -    end_va = PAGE_OFFSET + load_sz;
>>>> -    for (va = PAGE_OFFSET; va < end_va; va += map_size)
>>>> -        create_pgd_mapping(early_pg_dir, va,
>>>> -                   load_pa + (va - PAGE_OFFSET),
>>>> -                   map_size, PAGE_KERNEL_EXEC);
>>>> +    create_kernel_page_table(early_pg_dir, map_size);
>>>>
>>>>      /* Create fixed mapping for early FDT parsing */
>>>>      end_va = __fix_to_virt(FIX_FDT) + FIX_FDT_SIZE;
>>>> @@ -441,6 +460,7 @@ static void __init setup_vm_final(void)
>>>>      uintptr_t va, map_size;
>>>>      phys_addr_t pa, start, end;
>>>>      struct memblock_region *reg;
>>>> +    static struct vm_struct vm_kernel = { 0 };
>>>>
>>>>      /* Set mmu_enabled flag */
>>>>      mmu_enabled = true;
>>>> @@ -467,10 +487,22 @@ static void __init setup_vm_final(void)
>>>>          for (pa = start; pa < end; pa += map_size) {
>>>>              va = (uintptr_t)__va(pa);
>>>>              create_pgd_mapping(swapper_pg_dir, va, pa,
>>>> -                       map_size, PAGE_KERNEL_EXEC);
>>>> +                       map_size, PAGE_KERNEL);
>>>>          }
>>>>      }
>>>>
>>>> +    /* Map the kernel */
>>>> +    create_kernel_page_table(swapper_pg_dir, PMD_SIZE);
>>>> +
>>>> +    /* Reserve the vmalloc area occupied by the kernel */
>>>> +    vm_kernel.addr = (void *)kernel_virt_addr;
>>>> +    vm_kernel.phys_addr = load_pa;
>>>> +    vm_kernel.size = (load_sz + PMD_SIZE - 1) & ~(PMD_SIZE - 1);
>>>> +    vm_kernel.flags = VM_MAP | VM_NO_GUARD;
>>>> +    vm_kernel.caller = __builtin_return_address(0);
>>>> +
>>>> +    vm_area_add_early(&vm_kernel);
>>>> +
>>>>      /* Clear fixmap PTE and PMD mappings */
>>>>      clear_fixmap(FIX_PTE);
>>>>      clear_fixmap(FIX_PMD);
>>>> diff --git a/arch/riscv/mm/physaddr.c b/arch/riscv/mm/physaddr.c
>>>> index e8e4dcd39fed..35703d5ef5fd 100644
>>>> --- a/arch/riscv/mm/physaddr.c
>>>> +++ b/arch/riscv/mm/physaddr.c
>>>> @@ -23,7 +23,7 @@ EXPORT_SYMBOL(__virt_to_phys);
>>>>
>>>>  phys_addr_t __phys_addr_symbol(unsigned long x)
>>>>  {
>>>> -    unsigned long kernel_start = (unsigned long)PAGE_OFFSET;
>>>> +    unsigned long kernel_start = (unsigned long)kernel_virt_addr;
>>>>      unsigned long kernel_end = (unsigned long)_end;
>>>>
>>>>      /*
>>
>> Alex