Subject: Re: [PATCH v2 1/1] riscv/kasan: add KASAN_VMALLOC support
From: Alex Ghiti <alex@ghiti.fr>
To: Palmer Dabbelt, nylon7@andestech.com
Cc: aou@eecs.berkeley.edu, nickhu@andestech.com, alankao@andestech.com,
 linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
 nylon7717@gmail.com, glider@google.com, Paul Walmsley,
 aryabinin@virtuozzo.com, linux-riscv@lists.infradead.org, dvyukov@google.com
Date: Sun, 21 Feb 2021 08:38:04 -0500
References: <2b2f3038-3e27-8763-cf78-3fbbfd2100a0@ghiti.fr>
 <4fa97788-157c-4059-ae3f-28ab074c5836@ghiti.fr>
In-Reply-To: <4fa97788-157c-4059-ae3f-28ab074c5836@ghiti.fr>

On 2/13/21 at 5:52 AM, Alex Ghiti wrote:
> Hi Nylon, Palmer,
>
> On 2/8/21 at 1:28 AM, Alex Ghiti wrote:
>> Hi Nylon,
>>
>> On 1/22/21 at 10:56 PM, Palmer Dabbelt wrote:
>>> On Fri, 15 Jan 2021 21:58:35 PST (-0800), nylon7@andestech.com wrote:
>>>> It references to x86/s390 architecture.
>>>>
>>>> So, it doesn't map the early shadow page to cover VMALLOC space.
>>>>
>>>> Prepopulate top level page table for the range that would otherwise be
>>>> empty.
>>>>
>>>> lower levels are filled dynamically upon memory allocation while
>>>> booting.
>>
>> I think we can improve the changelog a bit here with something like this:
>>
>> "KASAN vmalloc space used to be mapped using the kasan early shadow page.
>> KASAN_VMALLOC requires the top level of the kernel page table to be
>> properly populated, the lower levels being filled dynamically upon
>> memory allocation at runtime."
>>
>>>>
>>>> Signed-off-by: Nylon Chen
>>>> Signed-off-by: Nick Hu
>>>> ---
>>>>  arch/riscv/Kconfig         |  1 +
>>>>  arch/riscv/mm/kasan_init.c | 57 +++++++++++++++++++++++++++++++++++++-
>>>>  2 files changed, 57 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>>>> index 81b76d44725d..15a2c8088bbe 100644
>>>> --- a/arch/riscv/Kconfig
>>>> +++ b/arch/riscv/Kconfig
>>>> @@ -57,6 +57,7 @@ config RISCV
>>>>      select HAVE_ARCH_JUMP_LABEL
>>>>      select HAVE_ARCH_JUMP_LABEL_RELATIVE
>>>>      select HAVE_ARCH_KASAN if MMU && 64BIT
>>>> +    select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
>>>>      select HAVE_ARCH_KGDB
>>>>      select HAVE_ARCH_KGDB_QXFER_PKT
>>>>      select HAVE_ARCH_MMAP_RND_BITS if MMU
>>>> diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
>>>> index 12ddd1f6bf70..4b9149f963d3 100644
>>>> --- a/arch/riscv/mm/kasan_init.c
>>>> +++ b/arch/riscv/mm/kasan_init.c
>>>> @@ -9,6 +9,19 @@
>>>>  #include
>>>>  #include
>>>>  #include
>>>> +#include
>>>> +
>>>> +static __init void *early_alloc(size_t size, int node)
>>>> +{
>>>> +    void *ptr = memblock_alloc_try_nid(size, size,
>>>> +        __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, node);
>>>> +
>>>> +    if (!ptr)
>>>> +        panic("%pS: Failed to allocate %zu bytes align=%zx nid=%d from=%llx\n",
>>>> +            __func__, size, size, node, (u64)__pa(MAX_DMA_ADDRESS));
>>>> +
>>>> +    return ptr;
>>>> +}
>>>>
>>>>  extern pgd_t early_pg_dir[PTRS_PER_PGD];
>>>>  asmlinkage void __init kasan_early_init(void)
>>>> @@ -83,6 +96,40 @@ static void __init populate(void *start, void *end)
>>>>      memset(start, 0, end - start);
>>>>  }
>>>>
>>>> +void __init kasan_shallow_populate(void *start, void *end)
>>>> +{
>>>> +    unsigned long vaddr = (unsigned long)start & PAGE_MASK;
>>>> +    unsigned long vend = PAGE_ALIGN((unsigned long)end);
>>>> +    unsigned long pfn;
>>>> +    int index;
>>>> +    void *p;
>>>> +    pud_t *pud_dir, *pud_k;
>>>> +    pgd_t *pgd_dir, *pgd_k;
>>>> +    p4d_t *p4d_dir, *p4d_k;
>>>> +
>>>> +    while (vaddr < vend) {
>>>> +        index = pgd_index(vaddr);
>>>> +        pfn = csr_read(CSR_SATP) & SATP_PPN;
>>
>> At this point in the boot process, we know that we use swapper_pg_dir,
>> so there is no need to read SATP.
>>
>>>> +        pgd_dir = (pgd_t *)pfn_to_virt(pfn) + index;
>>
>> Here, this pgd_dir assignment is overwritten two lines below, so there
>> is no need for it.
>>
>>>> +        pgd_k = init_mm.pgd + index;
>>>> +        pgd_dir = pgd_offset_k(vaddr);
>>
>> pgd_offset_k(vaddr) = init_mm.pgd + pgd_index(vaddr), so pgd_k == pgd_dir.
>>
>>>> +        set_pgd(pgd_dir, *pgd_k);
>>>> +
>>>> +        p4d_dir = p4d_offset(pgd_dir, vaddr);
>>>> +        p4d_k = p4d_offset(pgd_k, vaddr);
>>>> +
>>>> +        vaddr = (vaddr + PUD_SIZE) & PUD_MASK;
>>
>> Why do you increase vaddr *before* populating the first one? And
>> pud_addr_end does this properly: it returns the next pud address if it
>> does not go beyond the end address to map.
>>
>>>> +        pud_dir = pud_offset(p4d_dir, vaddr);
>>>> +        pud_k = pud_offset(p4d_k, vaddr);
>>>> +
>>>> +        if (pud_present(*pud_dir)) {
>>>> +            p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
>>>> +            pud_populate(&init_mm, pud_dir, p);
>>
>> init_mm is not needed here.
>>
>>>> +        }
>>>> +        vaddr += PAGE_SIZE;
>>
>> Why do you need to add PAGE_SIZE? vaddr already points to the next pud.
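>>
>> For reference, the generic pud_addr_end() amounts to the sketch below
>> (a paraphrase of the asm-generic helper, not code from this patch): it
>> steps to the next PUD boundary but clamps the result to the end of the
>> range, so the loop never walks past the region it was asked to map.
>>
>>   #define pud_addr_end(addr, end)                                     \
>>   ({                                                                  \
>>       /* next PUD boundary above addr ... */                          \
>>       unsigned long __boundary = ((addr) + PUD_SIZE) & PUD_MASK;      \
>>       /* ... unless that overshoots end (compared with -1 so that an  \
>>        * end of 0, i.e. the top of the address space, still works) */ \
>>       (__boundary - 1 < (end) - 1) ? __boundary : (end);              \
>>   })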
>>
>> It seems like this patch tries to populate the userspace page table,
>> whereas at this point in the boot process, only swapper_pg_dir is used,
>> or am I missing something?
>>
>> Thanks,
>>
>> Alex
>
> This morning I implemented a version that fixes all the comments I made
> earlier. I was able to insert the test_kasan module on both sv39 and
> sv48 without any modification: set_pgd "goes through" all the unused
> page table levels, whereas the p*d_populate functions are noops for the
> unused levels.
>
> If you have any comment, do not hesitate.
>
> diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
> index adbf94b7e68a..d643b222167c 100644
> --- a/arch/riscv/mm/kasan_init.c
> +++ b/arch/riscv/mm/kasan_init.c
> @@ -195,6 +195,31 @@ static void __init kasan_populate(void *start, void *end)
>         memset(start, KASAN_SHADOW_INIT, end - start);
>  }
>
> +void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long end)
> +{
> +       unsigned long next;
> +       void *p;
> +       pgd_t *pgd_k = pgd_offset_k(vaddr);
> +
> +       do {
> +               next = pgd_addr_end(vaddr, end);
> +               if (pgd_page_vaddr(*pgd_k) == (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
> +                       p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +                       set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
> +               }
> +       } while (pgd_k++, vaddr = next, vaddr != end);
> +}
> +

This way of walking the page table seems to be widely used across the
kernel (cf. the KASAN population functions of arm64/x86), so I do think
it brings value to Nylon and Nick's patch.

I can propose a real patch if you agree, and I'll add a Co-developed-by
tag for Nylon/Nick since this only 'improves' theirs.

Thanks,

Alex

> +void __init kasan_shallow_populate(void *start, void *end)
> +{
> +       unsigned long vaddr = (unsigned long)start & PAGE_MASK;
> +       unsigned long vend = PAGE_ALIGN((unsigned long)end);
> +
> +       kasan_shallow_populate_pgd(vaddr, vend);
> +
> +       local_flush_tlb_all();
> +}
> +
>  void __init kasan_init(void)
>  {
>         phys_addr_t _start, _end;
> @@ -206,7 +231,15 @@ void __init kasan_init(void)
>          */
>         kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
>                                     (void *)kasan_mem_to_shadow((void *)
> -                                                               VMALLOC_END));
> +                                                               VMEMMAP_END));
> +       if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
> +               kasan_shallow_populate(
> +                       (void *)kasan_mem_to_shadow((void *)VMALLOC_START),
> +                       (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
> +       else
> +               kasan_populate_early_shadow(
> +                       (void *)kasan_mem_to_shadow((void *)VMALLOC_START),
> +                       (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
>
>         /* Populate the linear mapping */
>         for_each_mem_range(i, &_start, &_end) {
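For context, the vmalloc case that the test_kasan module exercises boils
down to an access like the sketch below (the function name is mine and
purely illustrative, not code from this patch or from the module):

	#include <linux/vmalloc.h>

	void vmalloc_oob_demo(void)
	{
		char *area = vmalloc(3000);

		if (!area)
			return;

		/* Past the requested 3000 bytes but still within the last
		 * mapped page: with KASAN_VMALLOC the shadow for this tail
		 * is poisoned, so this read produces a KASAN report. */
		((volatile char *)area)[3100];

		vfree(area);
	}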