* [PATCH 0/1] kasan: support backing vmalloc space for riscv
@ 2021-01-13 2:28 ` Nylon Chen
0 siblings, 0 replies; 6+ messages in thread
From: Nylon Chen @ 2021-01-13 2:28 UTC (permalink / raw)
To: linux-riscv, linux-kernel
Cc: kasan-dev, aou, palmer, paul.walmsley, dvyukov, glider,
aryabinin, alankao, nickhu, nylon7, nylon7717
This patchset adds support for KASAN_VMALLOC on riscv.
We referenced the x86/s390 mailing list discussion for our implementation:
https://lwn.net/Articles/797950/
It also passes the `vmalloc-out-of-bounds` test of test_kasan.ko.
log:
[ 235.834318] # Subtest: kasan
[ 235.835190] 1..37
[ 235.845238]
==================================================================
[ 235.847818] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0xe2/0x192 [test_kasan]
[ 235.850688] Write of size 1 at addr ffffffe0075d5a7b by task kunit_try_catch/125
[ 235.852630]
[ 235.853212] CPU: 0 PID: 125 Comm: kunit_try_catch Tainted: G B 5.11.0-rc3-13940-gb0bb4cd86282-dirty #1
...
[ 241.835850]
==================================================================
[ 241.840884] ok 36 - kmalloc_double_kzfree
[ 241.852642]
==================================================================
[ 241.857261] BUG: KASAN: vmalloc-out-of-bounds in vmalloc_oob+0xcc/0x17c [test_kasan]
[ 241.861327] Read of size 1 at addr ffffffd00407ec1c by task kunit_try_catch/161
[ 241.864525]
[ 241.865200] CPU: 0 PID: 161 Comm: kunit_try_catch Tainted: G B 5.11.0-rc3-13940-gb0bb4cd86282-dirty #1
[ 241.869887] Call Trace:
[ 241.870972] [<ffffffe0000052d2>] walk_stackframe+0x0/0x128
[ 241.873353] [<ffffffe000abcff0>] show_stack+0x32/0x3e
[ 241.875457] [<ffffffe000ac0d46>] dump_stack+0x84/0xa0
[ 241.877806] [<ffffffe000188926>] print_address_description.constprop.0+0x88/0x362
[ 241.881150] [<ffffffe000188e4a>] kasan_report+0x176/0x194
[ 241.883604] [<ffffffe000189390>] __asan_load1+0x42/0x4a
[ 241.885897] [<ffffffdf81f9f2f4>] vmalloc_oob+0xcc/0x17c [test_kasan]
[ 241.889458] [<ffffffdf81f91e8e>] kunit_try_run_case+0x80/0x11a [kunit]
[ 241.892665] [<ffffffdf81f92e16>] kunit_generic_run_threadfn_adapter+0x2c/0x4e [kunit]
[ 241.896568] [<ffffffe000034ac4>] kthread+0x206/0x222
[ 241.899219] [<ffffffe00000361a>] ret_from_exception+0x0/0xc
[ 241.901700]
[ 241.902497]
[ 241.903257] Memory state around the buggy address:
[ 241.905430] ffffffd00407eb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 241.908661] ffffffd00407eb80: 00 00 00 00 00 00 00 f8 f8 f8 f8 f8 f8 f8 f8 f8
[ 241.911841] >ffffffd00407ec00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
[ 241.915037]                              ^
[ 241.916053] ffffffd00407ec80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
[ 241.919272] ffffffd00407ed00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
[ 241.922417]
==================================================================
[ 242.073698] ok 37 - vmalloc_oob
Nylon Chen (1):
riscv/kasan: add KASAN_VMALLOC support
arch/riscv/Kconfig | 1 +
arch/riscv/mm/kasan_init.c | 66 +++++++++++++++++++++++++++++++++++++-
2 files changed, 66 insertions(+), 1 deletion(-)
--
2.17.1
* [PATCH 1/1] riscv/kasan: add KASAN_VMALLOC support
2021-01-13 2:28 ` Nylon Chen
@ 2021-01-13 2:28 ` Nylon Chen
-1 siblings, 0 replies; 6+ messages in thread
From: Nylon Chen @ 2021-01-13 2:28 UTC (permalink / raw)
To: linux-riscv, linux-kernel
Cc: kasan-dev, aou, palmer, paul.walmsley, dvyukov, glider,
aryabinin, alankao, nickhu, nylon7, nylon7717
This references the x86/s390 implementation, so the early shadow page is
not mapped to cover the VMALLOC space.
Prepopulate the top-level page table for the range that would otherwise
be empty; the lower levels are filled dynamically upon memory allocation
while booting.
Signed-off-by: Nylon Chen <nylon7@andestech.com>
Signed-off-by: Nick Hu <nickhu@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/mm/kasan_init.c | 66 +++++++++++++++++++++++++++++++++++++-
2 files changed, 66 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 81b76d44725d..15a2c8088bbe 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -57,6 +57,7 @@ config RISCV
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if MMU && 64BIT
+ select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
select HAVE_ARCH_KGDB
select HAVE_ARCH_KGDB_QXFER_PKT
select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 12ddd1f6bf70..ee332513d728 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -9,6 +9,19 @@
#include <linux/pgtable.h>
#include <asm/tlbflush.h>
#include <asm/fixmap.h>
+#include <asm/pgalloc.h>
+
+static __init void *early_alloc(size_t size, int node)
+{
+ void *ptr = memblock_alloc_try_nid(size, size,
+ __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, node);
+
+ if (!ptr)
+ panic("%pS: Failed to allocate %zu bytes align=%zx nid=%d from=%llx\n",
+ __func__, size, size, node, (u64)__pa(MAX_DMA_ADDRESS));
+
+ return ptr;
+}
extern pgd_t early_pg_dir[PTRS_PER_PGD];
asmlinkage void __init kasan_early_init(void)
@@ -83,6 +96,49 @@ static void __init populate(void *start, void *end)
memset(start, 0, end - start);
}
+void __init kasan_shallow_populate(void *start, void *end)
+{
+ unsigned long vaddr = (unsigned long)start & PAGE_MASK;
+ unsigned long vend = PAGE_ALIGN((unsigned long)end);
+ unsigned long pfn;
+ int index;
+ void *p;
+ pud_t *pud_dir, *pud_k;
+ pmd_t *pmd_dir, *pmd_k;
+ pgd_t *pgd_dir, *pgd_k;
+ p4d_t *p4d_dir, *p4d_k;
+
+ while (vaddr < vend) {
+ index = pgd_index(vaddr);
+ pfn = csr_read(CSR_SATP) & SATP_PPN;
+ pgd_dir = (pgd_t *)pfn_to_virt(pfn) + index;
+ pgd_k = init_mm.pgd + index;
+ pgd_dir = pgd_offset_k(vaddr);
+ set_pgd(pgd_dir, *pgd_k);
+
+ p4d_dir = p4d_offset(pgd_dir, vaddr);
+ p4d_k = p4d_offset(pgd_k,vaddr);
+
+ vaddr = (vaddr + PUD_SIZE) & PUD_MASK;
+ pud_dir = pud_offset(p4d_dir, vaddr);
+ pud_k = pud_offset(p4d_k,vaddr);
+
+ if (pud_present(*pud_dir)) {
+ p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
+ pud_populate(&init_mm, pud_dir, p);
+ }
+
+ pmd_dir = pmd_offset(pud_dir, vaddr);
+ pmd_k = pmd_offset(pud_k,vaddr);
+ set_pmd(pmd_dir, *pmd_k);
+ if (pmd_present(*pmd_dir)) {
+ p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
+ pmd_populate(&init_mm, pmd_dir, p);
+ }
+ vaddr += PAGE_SIZE;
+ }
+}
+
void __init kasan_init(void)
{
phys_addr_t _start, _end;
@@ -90,7 +146,15 @@ void __init kasan_init(void)
kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
(void *)kasan_mem_to_shadow((void *)
- VMALLOC_END));
+ VMEMMAP_END));
+ if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+ kasan_shallow_populate(
+ (void *)kasan_mem_to_shadow((void *)VMALLOC_START),
+ (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
+ else
+ kasan_populate_early_shadow(
+ (void *)kasan_mem_to_shadow((void *)VMALLOC_START),
+ (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
for_each_mem_range(i, &_start, &_end) {
void *start = (void *)_start;
--
2.17.1
* Re: [PATCH 1/1] riscv/kasan: add KASAN_VMALLOC support
2021-01-13 2:28 ` Nylon Chen
@ 2021-01-15 2:24 ` Palmer Dabbelt
-1 siblings, 0 replies; 6+ messages in thread
From: Palmer Dabbelt @ 2021-01-15 2:24 UTC (permalink / raw)
To: nylon7
Cc: linux-riscv, linux-kernel, kasan-dev, aou, Paul Walmsley,
dvyukov, glider, aryabinin, alankao, nickhu, nylon7, nylon7717
On Tue, 12 Jan 2021 18:28:22 PST (-0800), nylon7@andestech.com wrote:
> This references the x86/s390 implementation, so the early shadow page
> is not mapped to cover the VMALLOC space.
>
> Prepopulate the top-level page table for the range that would otherwise
> be empty; the lower levels are filled dynamically upon memory
> allocation while booting.
>
>
> Signed-off-by: Nylon Chen <nylon7@andestech.com>
> Signed-off-by: Nick Hu <nickhu@andestech.com>
> ---
> arch/riscv/Kconfig | 1 +
> arch/riscv/mm/kasan_init.c | 66 +++++++++++++++++++++++++++++++++++++-
> 2 files changed, 66 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 81b76d44725d..15a2c8088bbe 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -57,6 +57,7 @@ config RISCV
> select HAVE_ARCH_JUMP_LABEL
> select HAVE_ARCH_JUMP_LABEL_RELATIVE
> select HAVE_ARCH_KASAN if MMU && 64BIT
> + select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
> select HAVE_ARCH_KGDB
> select HAVE_ARCH_KGDB_QXFER_PKT
> select HAVE_ARCH_MMAP_RND_BITS if MMU
> diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
> index 12ddd1f6bf70..ee332513d728 100644
> --- a/arch/riscv/mm/kasan_init.c
> +++ b/arch/riscv/mm/kasan_init.c
> @@ -9,6 +9,19 @@
> #include <linux/pgtable.h>
> #include <asm/tlbflush.h>
> #include <asm/fixmap.h>
> +#include <asm/pgalloc.h>
> +
> +static __init void *early_alloc(size_t size, int node)
> +{
> + void *ptr = memblock_alloc_try_nid(size, size,
> + __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, node);
> +
> + if (!ptr)
> + panic("%pS: Failed to allocate %zu bytes align=%zx nid=%d from=%llx\n",
> + __func__, size, size, node, (u64)__pa(MAX_DMA_ADDRESS));
> +
> + return ptr;
> +}
>
> extern pgd_t early_pg_dir[PTRS_PER_PGD];
> asmlinkage void __init kasan_early_init(void)
> @@ -83,6 +96,49 @@ static void __init populate(void *start, void *end)
> memset(start, 0, end - start);
> }
>
> +void __init kasan_shallow_populate(void *start, void *end)
> +{
> + unsigned long vaddr = (unsigned long)start & PAGE_MASK;
> + unsigned long vend = PAGE_ALIGN((unsigned long)end);
> + unsigned long pfn;
> + int index;
> + void *p;
> + pud_t *pud_dir, *pud_k;
> + pmd_t *pmd_dir, *pmd_k;
> + pgd_t *pgd_dir, *pgd_k;
> + p4d_t *p4d_dir, *p4d_k;
> +
> + while (vaddr < vend) {
> + index = pgd_index(vaddr);
> + pfn = csr_read(CSR_SATP) & SATP_PPN;
> + pgd_dir = (pgd_t *)pfn_to_virt(pfn) + index;
> + pgd_k = init_mm.pgd + index;
> + pgd_dir = pgd_offset_k(vaddr);
> + set_pgd(pgd_dir, *pgd_k);
> +
> + p4d_dir = p4d_offset(pgd_dir, vaddr);
> + p4d_k = p4d_offset(pgd_k,vaddr);
> +
> + vaddr = (vaddr + PUD_SIZE) & PUD_MASK;
> + pud_dir = pud_offset(p4d_dir, vaddr);
> + pud_k = pud_offset(p4d_k,vaddr);
> +
> + if (pud_present(*pud_dir)) {
> + p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
> + pud_populate(&init_mm, pud_dir, p);
> + }
> +
> + pmd_dir = pmd_offset(pud_dir, vaddr);
> + pmd_k = pmd_offset(pud_k,vaddr);
> + set_pmd(pmd_dir, *pmd_k);
> + if (pmd_present(*pmd_dir)) {
> + p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
> + pmd_populate(&init_mm, pmd_dir, p);
> + }
> + vaddr += PAGE_SIZE;
> + }
> +}
> +
> void __init kasan_init(void)
> {
> phys_addr_t _start, _end;
> @@ -90,7 +146,15 @@ void __init kasan_init(void)
>
> kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
> (void *)kasan_mem_to_shadow((void *)
> - VMALLOC_END));
> + VMEMMAP_END));
> + if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
> + kasan_shallow_populate(
> + (void *)kasan_mem_to_shadow((void *)VMALLOC_START),
> + (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
> + else
> + kasan_populate_early_shadow(
> + (void *)kasan_mem_to_shadow((void *)VMALLOC_START),
> + (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
>
> for_each_mem_range(i, &_start, &_end) {
> void *start = (void *)_start;
There are a bunch of checkpatch issues here.