* [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC
@ 2021-02-06  8:35 Lecopzer Chen
  2021-02-06  8:35 ` [PATCH v3 1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Lecopzer Chen @ 2021-02-06  8:35 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	lecopzer, Lecopzer Chen


Linux has supported KASAN for vmalloc space since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").

According to how x86 ported it [1], they allocated the p4d and pgd entries
early, but on arm64 I just follow how KASAN already handles MODULES_VADDR:
do not populate the vmalloc shadow early, except for the kernel image (kimg) range.

  -----------  vmalloc_shadow_start
 |           |
 |           | 
 |           | <= non-mapping
 |           |
 |           |
 |-----------|
 |///////////|<- kimage shadow with page table mapping.
 |-----------|
 |           |
 |           | <= non-mapping
 |           |
 ------------- vmalloc_shadow_end
 |00000000000|
 |00000000000| <= Zero shadow
 |00000000000|
 ------------- KASAN_SHADOW_END
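
For reference, the shadow boundaries in the picture follow the usual generic
KASAN address translation; a minimal sketch (not part of this series),
assuming the include/linux/kasan.h helper is unchanged:

/* Generic KASAN: one shadow byte covers 8 bytes of memory. */
static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}

/* e.g. vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END); */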


Test environment:
    4G and 8G QEMU virt machines,
    39-bit VA + 4K PAGE_SIZE with 3-level page table,
    tested with lib/test_kasan.ko and lib/test_kasan_module.ko

It works with KASLR (CONFIG_RANDOMIZE_MODULE_REGION_FULL)
and randomizes the module region inside the vmalloc area.

It also works with VMAP_STACK; thanks to Ard for testing it.


[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")


Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>

---
Thanks to Will Deacon, Ard Biesheuvel and Andrey Konovalov
for the reviews and suggestions.

v3 -> v2
rebase on 5.11-rc6
	1. remove the always-true condition in kasan_init() and remove the
	   unused vmalloc_shadow_start.
	2. select KASAN_VMALLOC if KASAN_GENERIC is enabled
	   for VMAP_STACK.
	3. tweak commit message

v2 -> v1
	1. kasan_init.c tweak indent
	2. change Kconfig depends only on HAVE_ARCH_KASAN
	3. support randomized module region.


v2:
https://lkml.org/lkml/2021/1/9/49
v1:
https://lore.kernel.org/lkml/20210103171137.153834-1-lecopzer@gmail.com/
---
Lecopzer Chen (5):
  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  arm64: kasan: abstract _text and _end to KERNEL_START/END
  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  arm64: kaslr: support randomized module area with KASAN_VMALLOC
  arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled

 arch/arm64/Kconfig         |  2 ++
 arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
 arch/arm64/kernel/module.c | 16 +++++++++-------
 arch/arm64/mm/kasan_init.c | 24 ++++++++++++++++--------
 4 files changed, 37 insertions(+), 23 deletions(-)

-- 
2.25.1



* [PATCH v3 1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-06  8:35 [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-02-06  8:35 ` Lecopzer Chen
  2021-03-19 17:37   ` Catalin Marinas
  2021-02-06  8:35 ` [PATCH v3 2/5] arm64: kasan: abstract _text and _end to KERNEL_START/END Lecopzer Chen
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Lecopzer Chen @ 2021-02-06  8:35 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	lecopzer, Lecopzer Chen

Linux has supported KASAN for vmalloc space since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").

Like how MODULES_VADDR is handled now, just do not populate the shadow
for the area between VMALLOC_START and VMALLOC_END early.

Before:

MODULE_VADDR: no mapping, no zoreo shadow at init
VMALLOC_VADDR: backed with zero shadow at init

After:

MODULE_VADDR: no mapping, no zoreo shadow at init
VMALLOC_VADDR: no mapping, no zoreo shadow at init

Thus the mapping will get allocated on demand by the core function
of KASAN_VMALLOC.

  -----------  vmalloc_shadow_start
 |           |
 |           |
 |           | <= non-mapping
 |           |
 |           |
 |-----------|
 |///////////|<- kimage shadow with page table mapping.
 |-----------|
 |           |
 |           | <= non-mapping
 |           |
 ------------- vmalloc_shadow_end
 |00000000000|
 |00000000000| <= Zero shadow
 |00000000000|
 ------------- KASAN_SHADOW_END
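
For reference, the "core function" mentioned above is the generic
KASAN_VMALLOC helper below; the snippet is a rough sketch of how the vmalloc
core invokes it for a newly allocated area (the exact call site varies across
kernel versions, so treat it as illustrative only):

/* include/linux/kasan.h, with CONFIG_KASAN_VMALLOC */
int kasan_populate_vmalloc(unsigned long addr, unsigned long size);

/* vmalloc core, heavily simplified: back the new range with real
 * shadow pages instead of the zero shadow.
 */
ret = kasan_populate_vmalloc(addr, size);
if (ret)
	return ERR_PTR(ret);	/* shadow allocation failed */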

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/mm/kasan_init.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..20d06008785f 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
 {
 	u64 kimg_shadow_start, kimg_shadow_end;
 	u64 mod_shadow_start, mod_shadow_end;
+	u64 vmalloc_shadow_end;
 	phys_addr_t pa_start, pa_end;
 	u64 i;
 
@@ -223,6 +224,8 @@ static void __init kasan_init_shadow(void)
 	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
 	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
 
+	vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
+
 	/*
 	 * We are going to perform proper setup of shadow memory.
 	 * At first we should unmap early shadow (clear_pgds() call below).
@@ -241,12 +244,17 @@ static void __init kasan_init_shadow(void)
 
 	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
 				   (void *)mod_shadow_start);
-	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				   (void *)KASAN_SHADOW_END);
 
-	if (kimg_shadow_start > mod_shadow_end)
-		kasan_populate_early_shadow((void *)mod_shadow_end,
-					    (void *)kimg_shadow_start);
+	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
+					    (void *)KASAN_SHADOW_END);
+	else {
+		kasan_populate_early_shadow((void *)kimg_shadow_end,
+					    (void *)KASAN_SHADOW_END);
+		if (kimg_shadow_start > mod_shadow_end)
+			kasan_populate_early_shadow((void *)mod_shadow_end,
+						    (void *)kimg_shadow_start);
+	}
 
 	for_each_mem_range(i, &pa_start, &pa_end) {
 		void *start = (void *)__phys_to_virt(pa_start);
-- 
2.25.1



* [PATCH v3 2/5] arm64: kasan: abstract _text and _end to KERNEL_START/END
  2021-02-06  8:35 [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
  2021-02-06  8:35 ` [PATCH v3 1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-02-06  8:35 ` Lecopzer Chen
  2021-02-06  8:35 ` [PATCH v3 3/5] arm64: Kconfig: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Lecopzer Chen @ 2021-02-06  8:35 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	lecopzer, Lecopzer Chen

arm64 provides the KERNEL_START and KERNEL_END macros, so use that
abstraction instead of the raw _text and _end symbols.
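
For reference (not part of this patch), the aliases in arm64's asm/memory.h
look roughly like this:

/* arch/arm64/include/asm/memory.h (paraphrased) */
#define KERNEL_START	_text
#define KERNEL_END	_end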

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/mm/kasan_init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 20d06008785f..cd2653b7b174 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
 	phys_addr_t pa_start, pa_end;
 	u64 i;
 
-	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
-	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
+	kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
+	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));
 
 	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
 	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
@@ -240,7 +240,7 @@ static void __init kasan_init_shadow(void)
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
-			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+			   early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));
 
 	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
 				   (void *)mod_shadow_start);
-- 
2.25.1



* [PATCH v3 3/5] arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  2021-02-06  8:35 [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
  2021-02-06  8:35 ` [PATCH v3 1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
  2021-02-06  8:35 ` [PATCH v3 2/5] arm64: kasan: abstract _text and _end to KERNEL_START/END Lecopzer Chen
@ 2021-02-06  8:35 ` Lecopzer Chen
  2021-02-06  8:35 ` [PATCH v3 4/5] arm64: kaslr: support randomized module area with KASAN_VMALLOC Lecopzer Chen
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Lecopzer Chen @ 2021-02-06  8:35 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	lecopzer, Lecopzer Chen

Now that shadow memory in the vmalloc area can be backed on demand,
make KASAN_VMALLOC selectable.

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f39568b28ec1..a8f5a9171a85 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -136,6 +136,7 @@ config ARM64
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+	select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
 	select HAVE_ARCH_KGDB
-- 
2.25.1



* [PATCH v3 4/5] arm64: kaslr: support randomized module area with KASAN_VMALLOC
  2021-02-06  8:35 [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (2 preceding siblings ...)
  2021-02-06  8:35 ` [PATCH v3 3/5] arm64: Kconfig: support CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-02-06  8:35 ` Lecopzer Chen
  2021-02-06  8:35 ` [PATCH v3 5/5] arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled Lecopzer Chen
  2021-03-19 17:41 ` [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Catalin Marinas
  5 siblings, 0 replies; 10+ messages in thread
From: Lecopzer Chen @ 2021-02-06  8:35 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	lecopzer, Lecopzer Chen

Now that KASAN_VMALLOC works on arm64, the module region can be
randomized into the whole vmalloc area.

Test:
	VMALLOC area ffffffc010000000 fffffffdf0000000

	before the patch:
		module_alloc_base/end ffffffc008b80000 ffffffc010000000
	after the patch:
		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

	And loading (insmod) some modules works fine.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
 arch/arm64/kernel/module.c | 16 +++++++++-------
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1c74c45b9494..a2858058e724 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
 
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
-	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+	     IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
 		/*
-		 * KASAN does not expect the module region to intersect the
-		 * vmalloc region, since shadow memory is allocated for each
-		 * module at load time, whereas the vmalloc region is shadowed
-		 * by KASAN zero pages. So keep modules out of the vmalloc
-		 * region if KASAN is enabled, and put the kernel well within
-		 * 4 GB of the module region.
+		 * KASAN without KASAN_VMALLOC does not expect the module region
+		 * to intersect the vmalloc region, since shadow memory is
+		 * allocated for each module at load time, whereas the vmalloc
+		 * region is shadowed by KASAN zero pages. So keep modules
+		 * out of the vmalloc region if KASAN is enabled without
+		 * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+		 * module region.
 		 */
 		return offset % SZ_2G;
 
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index fe21e0f06492..b5ec010c481f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
 				NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
-	    !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	    (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+	     (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	      !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
 		/*
-		 * KASAN can only deal with module allocations being served
-		 * from the reserved module region, since the remainder of
-		 * the vmalloc region is already backed by zero shadow pages,
-		 * and punching holes into it is non-trivial. Since the module
-		 * region is not randomized when KASAN is enabled, it is even
+		 * KASAN without KASAN_VMALLOC can only deal with module
+		 * allocations being served from the reserved module region,
+		 * since the remainder of the vmalloc region is already
+		 * backed by zero shadow pages, and punching holes into it
+		 * is non-trivial. Since the module region is not randomized
+		 * when KASAN is enabled without KASAN_VMALLOC, it is even
 		 * less likely that the module region gets exhausted, so we
 		 * can simply omit this fallback in that case.
 		 */
-- 
2.25.1



* [PATCH v3 5/5] arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled
  2021-02-06  8:35 [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (3 preceding siblings ...)
  2021-02-06  8:35 ` [PATCH v3 4/5] arm64: kaslr: support randomized module area with KASAN_VMALLOC Lecopzer Chen
@ 2021-02-06  8:35 ` Lecopzer Chen
  2021-03-19 17:41 ` [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Catalin Marinas
  5 siblings, 0 replies; 10+ messages in thread
From: Lecopzer Chen @ 2021-02-06  8:35 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	lecopzer, Lecopzer Chen

Before this patch, anyone who wants to use VMAP_STACK with KASAN_GENERIC
enabled must explicitly select KASAN_VMALLOC.

From Will's suggestion [1]:
  > I would _really_ like to move to VMAP stack unconditionally, and
  > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.

Because VMAP_STACK now depends on either HW_TAGS or KASAN_VMALLOC when
KASAN is enabled, bind KASAN_GENERIC and KASAN_VMALLOC together so that
VMAP_STACK can be selected unconditionally.

Note that SW_TAGS supports neither VMAP_STACK nor KASAN_VMALLOC yet,
so this is the first step toward making VMAP_STACK selected unconditionally.

Binding KASAN_GENERIC and KASAN_VMALLOC together is expected to cost more
memory at runtime, so the alternative is to use SW_TAGS KASAN instead.

[1]: https://lore.kernel.org/lkml/20210204150100.GE20815@willie-the-truck/
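
For context, the VMAP_STACK dependency mentioned above looks roughly like the
following in arch/Kconfig around this kernel version (quoted from memory, so
treat it as illustrative):

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

So selecting KASAN_VMALLOC whenever KASAN_GENERIC is enabled keeps VMAP_STACK
available with generic KASAN.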

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a8f5a9171a85..9be6a57f6447 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -190,6 +190,7 @@ config ARM64
 	select IOMMU_DMA if IOMMU_SUPPORT
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
+	select KASAN_VMALLOC if KASAN_GENERIC
 	select MODULES_USE_ELF_RELA
 	select NEED_DMA_MAP_STATE
 	select NEED_SG_DMA_LENGTH
-- 
2.25.1



* Re: [PATCH v3 1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-06  8:35 ` [PATCH v3 1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-03-19 17:37   ` Catalin Marinas
  2021-03-20 13:01     ` Lecopzer Chen
  0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2021-03-19 17:37 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will,
	dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, ardb, andreyknvl, broonie, linux, rppt, tyhicks,
	robin.murphy, vincenzo.frascino, gustavoars, lecopzer

On Sat, Feb 06, 2021 at 04:35:48PM +0800, Lecopzer Chen wrote:
> Linux support KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
> 
> Like how the MODULES_VADDR does now, just not to early populate
> the VMALLOC_START between VMALLOC_END.
> 
> Before:
> 
> MODULE_VADDR: no mapping, no zoreo shadow at init
> VMALLOC_VADDR: backed with zero shadow at init
> 
> After:
> 
> MODULE_VADDR: no mapping, no zoreo shadow at init
> VMALLOC_VADDR: no mapping, no zoreo shadow at init

s/zoreo/zero/

> Thus the mapping will get allocated on demand by the core function
> of KASAN_VMALLOC.
> 
>   -----------  vmalloc_shadow_start
>  |           |
>  |           |
>  |           | <= non-mapping
>  |           |
>  |           |
>  |-----------|
>  |///////////|<- kimage shadow with page table mapping.
>  |-----------|
>  |           |
>  |           | <= non-mapping
>  |           |
>  ------------- vmalloc_shadow_end
>  |00000000000|
>  |00000000000| <= Zero shadow
>  |00000000000|
>  ------------- KASAN_SHADOW_END
> 
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> ---
>  arch/arm64/mm/kasan_init.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index d8e66c78440e..20d06008785f 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
>  {
>  	u64 kimg_shadow_start, kimg_shadow_end;
>  	u64 mod_shadow_start, mod_shadow_end;
> +	u64 vmalloc_shadow_end;
>  	phys_addr_t pa_start, pa_end;
>  	u64 i;
>  
> @@ -223,6 +224,8 @@ static void __init kasan_init_shadow(void)
>  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
>  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
>  
> +	vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> +
>  	/*
>  	 * We are going to perform proper setup of shadow memory.
>  	 * At first we should unmap early shadow (clear_pgds() call below).
> @@ -241,12 +244,17 @@ static void __init kasan_init_shadow(void)
>  
>  	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
>  				   (void *)mod_shadow_start);
> -	kasan_populate_early_shadow((void *)kimg_shadow_end,
> -				   (void *)KASAN_SHADOW_END);
>  
> -	if (kimg_shadow_start > mod_shadow_end)
> -		kasan_populate_early_shadow((void *)mod_shadow_end,
> -					    (void *)kimg_shadow_start);

Not something introduced by this patch but what happens if this
condition is false? It means that kimg_shadow_end < mod_shadow_start and
the above kasan_populate_early_shadow(PAGE_END, mod_shadow_start)
overlaps with the earlier kasan_map_populate(kimg_shadow_start,
kimg_shadow_end).

> +	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
> +		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> +					    (void *)KASAN_SHADOW_END);
> +	else {
> +		kasan_populate_early_shadow((void *)kimg_shadow_end,
> +					    (void *)KASAN_SHADOW_END);
> +		if (kimg_shadow_start > mod_shadow_end)
> +			kasan_populate_early_shadow((void *)mod_shadow_end,
> +						    (void *)kimg_shadow_start);
> +	}
>  
>  	for_each_mem_range(i, &pa_start, &pa_end) {
>  		void *start = (void *)__phys_to_virt(pa_start);
> -- 
> 2.25.1
> 

-- 
Catalin


* Re: [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-02-06  8:35 [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (4 preceding siblings ...)
  2021-02-06  8:35 ` [PATCH v3 5/5] arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled Lecopzer Chen
@ 2021-03-19 17:41 ` Catalin Marinas
  2021-03-20 10:58   ` Lecopzer Chen
  5 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2021-03-19 17:41 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel, will,
	dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, ardb, andreyknvl, broonie, linux, rppt, tyhicks,
	robin.murphy, vincenzo.frascino, gustavoars, lecopzer

Hi Lecopzer,

On Sat, Feb 06, 2021 at 04:35:47PM +0800, Lecopzer Chen wrote:
> Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
> 
> Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> by not to populate the vmalloc area except for kimg address.

Do you plan an update to a newer kernel like 5.12-rc3?

> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> Acked-by: Andrey Konovalov <andreyknvl@google.com>
> Tested-by: Andrey Konovalov <andreyknvl@google.com>
> Tested-by: Ard Biesheuvel <ardb@kernel.org>

You could move these to individual patches rather than the cover letter,
assuming that they still stand after the changes you've made. Also note
that Andrey K no longer has the @google.com email address if you cc him
on future patches (replace it with @gmail.com).

Thanks.

-- 
Catalin


* Re: [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-03-19 17:41 ` [PATCH v3 0/5] arm64: kasan: support CONFIG_KASAN_VMALLOC Catalin Marinas
@ 2021-03-20 10:58   ` Lecopzer Chen
  0 siblings, 0 replies; 10+ messages in thread
From: Lecopzer Chen @ 2021-03-20 10:58 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Lecopzer Chen, Linux Kernel Mailing List, linux-mm, kasan-dev,
	linux-arm-kernel, Will Deacon, dan.j.williams, aryabinin,
	Alexander Potapenko, Dmitry Vyukov, Andrew Morton,
	linux-mediatek, yj.chiang, ardb, Andrey Konovalov, broonie,
	linux, rppt, tyhicks, robin.murphy, vincenzo.frascino,
	gustavoars

On Sat, Mar 20, 2021 at 1:41 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> Hi Lecopzer,
>
> On Sat, Feb 06, 2021 at 04:35:47PM +0800, Lecopzer Chen wrote:
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > Acroding to how x86 ported it [1], they early allocated p4d and pgd,
> > but in arm64 I just simulate how KAsan supports MODULES_VADDR in arm64
> > by not to populate the vmalloc area except for kimg address.
>
> Do you plan an update to a newer kernel like 5.12-rc3?
>

Yes, of course. I was dealing with some personal matters, so I didn't update
this series last month.

> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > Acked-by: Andrey Konovalov <andreyknvl@google.com>
> > Tested-by: Andrey Konovalov <andreyknvl@google.com>
> > Tested-by: Ard Biesheuvel <ardb@kernel.org>
>
> You could move these to individual patches rather than the cover letter,
> assuming that they still stand after the changes you've made. Also note
> that Andrey K no longer has the @google.com email address if you cc him
> on future patches (replace it with @gmail.com).
>

Ok thanks for the suggestion.
I will move them to each patch and correct the email address.


Thanks,
Lecopzer


* Re: [PATCH v3 1/5] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-03-19 17:37   ` Catalin Marinas
@ 2021-03-20 13:01     ` Lecopzer Chen
  0 siblings, 0 replies; 10+ messages in thread
From: Lecopzer Chen @ 2021-03-20 13:01 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Lecopzer Chen, Linux Kernel Mailing List, linux-mm, kasan-dev,
	linux-arm-kernel, Will Deacon, dan.j.williams, aryabinin,
	Alexander Potapenko, Dmitry Vyukov, Andrew Morton,
	linux-mediatek, yj.chiang, ardb, Andrey Konovalov, broonie,
	linux, rppt, tyhicks, robin.murphy, vincenzo.frascino,
	gustavoars

On Sat, Mar 20, 2021 at 1:38 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Sat, Feb 06, 2021 at 04:35:48PM +0800, Lecopzer Chen wrote:
> > Linux support KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > Like how the MODULES_VADDR does now, just not to early populate
> > the VMALLOC_START between VMALLOC_END.
> >
> > Before:
> >
> > MODULE_VADDR: no mapping, no zoreo shadow at init
> > VMALLOC_VADDR: backed with zero shadow at init
> >
> > After:
> >
> > MODULE_VADDR: no mapping, no zoreo shadow at init
> > VMALLOC_VADDR: no mapping, no zoreo shadow at init
>
> s/zoreo/zero/
>

thanks!

> > Thus the mapping will get allocated on demand by the core function
> > of KASAN_VMALLOC.
> >
> >   -----------  vmalloc_shadow_start
> >  |           |
> >  |           |
> >  |           | <= non-mapping
> >  |           |
> >  |           |
> >  |-----------|
> >  |///////////|<- kimage shadow with page table mapping.
> >  |-----------|
> >  |           |
> >  |           | <= non-mapping
> >  |           |
> >  ------------- vmalloc_shadow_end
> >  |00000000000|
> >  |00000000000| <= Zero shadow
> >  |00000000000|
> >  ------------- KASAN_SHADOW_END
> >
> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > ---
> >  arch/arm64/mm/kasan_init.c | 18 +++++++++++++-----
> >  1 file changed, 13 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index d8e66c78440e..20d06008785f 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> >  {
> >       u64 kimg_shadow_start, kimg_shadow_end;
> >       u64 mod_shadow_start, mod_shadow_end;
> > +     u64 vmalloc_shadow_end;
> >       phys_addr_t pa_start, pa_end;
> >       u64 i;
> >
> > @@ -223,6 +224,8 @@ static void __init kasan_init_shadow(void)
> >       mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> >       mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> >
> > +     vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > +
> >       /*
> >        * We are going to perform proper setup of shadow memory.
> >        * At first we should unmap early shadow (clear_pgds() call below).
> > @@ -241,12 +244,17 @@ static void __init kasan_init_shadow(void)
> >
> >       kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> >                                  (void *)mod_shadow_start);
> > -     kasan_populate_early_shadow((void *)kimg_shadow_end,
> > -                                (void *)KASAN_SHADOW_END);
> >
> > -     if (kimg_shadow_start > mod_shadow_end)
> > -             kasan_populate_early_shadow((void *)mod_shadow_end,
> > -                                         (void *)kimg_shadow_start);
>
> Not something introduced by this patch but what happens if this
> condition is false? It means that kimg_shadow_end < mod_shadow_start and
> the above kasan_populate_early_shadow(PAGE_END, mod_shadow_start)
> overlaps with the earlier kasan_map_populate(kimg_shadow_start,
> kimg_shadow_end).

In this case, the area between mod_shadow_start and kimg_shadow_end
was already mapped at kasan init time.

Thus the corner case is that module_alloc() allocates that range
(the area between mod_shadow_start and kimg_shadow_end) again.


With VMALLOC_KASAN,
module_alloc() ->
    ... ->
        kasan_populate_vmalloc ->
            apply_to_page_range()
will check whether the mapping already exists and skip allocating a new
mapping if it does.
So the second allocation should be fine.
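
For reference, a simplified sketch of the check described above, based on my
reading of the mm/kasan vmalloc-shadow code around this kernel version (names
and details are from memory, so treat it as illustrative only):

/* Called for each shadow PTE via apply_to_page_range(); sketch only. */
static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
				      void *unused)
{
	unsigned long page;
	pte_t pte;

	/* Shadow already mapped (e.g. by kasan_map_populate() at init):
	 * nothing to do, so "allocating" the same range twice is harmless.
	 */
	if (likely(!pte_none(*ptep)))
		return 0;

	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep))) {
		set_pte_at(&init_mm, addr, ptep, pte);
		page = 0;
	}
	spin_unlock(&init_mm.page_table_lock);
	if (page)
		free_page(page);

	return 0;
}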

Without VMALLOC_KASAN,
module_alloc() ->
    kasan_module_alloc()
will allocate the range twice: the first time via kasan_map_populate() and
the second time via vmalloc(),
and this could be a problem(?).

Now the only way the module area can overlap with the kernel image should
be with KASLR on.
I'm not sure whether this case really happens with KASLR; it depends on
how __relocate_kernel() places the kernel image and how kaslr_early_init()
decides module_alloc_base.


> > +     if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
> > +             kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> > +                                         (void *)KASAN_SHADOW_END);
> > +     else {
> > +             kasan_populate_early_shadow((void *)kimg_shadow_end,
> > +                                         (void *)KASAN_SHADOW_END);
> > +             if (kimg_shadow_start > mod_shadow_end)
> > +                     kasan_populate_early_shadow((void *)mod_shadow_end,
> > +                                                 (void *)kimg_shadow_start);
> > +     }
> >
> >       for_each_mem_range(i, &pa_start, &pa_end) {
> >               void *start = (void *)__phys_to_virt(pa_start);
> > --
> > 2.25.1
> >

