* [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
@ 2021-01-09 10:32 Lecopzer Chen
  2021-01-09 10:32 ` [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (7 more replies)
  0 siblings, 8 replies; 32+ messages in thread
From: Lecopzer Chen @ 2021-01-09 10:32 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, will, catalin.marinas, ardb, andreyknvl, broonie,
	linux, rppt, tyhicks, robin.murphy, vincenzo.frascino,
	gustavoars, Lecopzer Chen, Lecopzer Chen

Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory")

According to how x86 ported it [1], they allocate the p4d and pgd entries
early, but on arm64 I simply follow how KASAN already handles MODULES_VADDR,
by not populating the vmalloc area early except for the kernel image addresses.
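
For readers unfamiliar with generic KASAN's shadow layout, the address
translation everything here relies on is kasan_mem_to_shadow(). A minimal
userspace sketch of the arithmetic (the offset below is an illustrative
assumption, not the real arm64 build-time constant; the vmalloc range is the
one quoted in patch 4's test output):

#include <stdio.h>
#include <stdint.h>

/* Generic KASAN: one shadow byte tracks 8 bytes of address space. */
#define KASAN_SHADOW_SCALE_SHIFT 3
/* Assumed example offset; the real one is computed at configuration time. */
#define KASAN_SHADOW_OFFSET 0xdfffa00000000000ULL

static uint64_t mem_to_shadow(uint64_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	uint64_t vmalloc_start = 0xffffffc010000000ULL;
	uint64_t vmalloc_end   = 0xfffffffdf0000000ULL;

	printf("vmalloc shadow: %#llx - %#llx\n",
	       (unsigned long long)mem_to_shadow(vmalloc_start),
	       (unsigned long long)mem_to_shadow(vmalloc_end));
	return 0;
}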

Test environment:
    4G and 8G QEMU virt machines,
    39-bit VA + 4k PAGE_SIZE with 3-level page table,
    tested with lib/test_kasan.ko and lib/test_kasan_module.ko

It also works with KASLR when CONFIG_RANDOMIZE_MODULE_REGION_FULL is
enabled, randomizing the module region inside the vmalloc area.


[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>


v1 -> v2
	1. tweak indentation in kasan_init.c
	2. make the Kconfig entry depend only on HAVE_ARCH_KASAN
	3. support randomized module region.

v1:
https://lore.kernel.org/lkml/20210103171137.153834-1-lecopzer@gmail.com/

Lecopzer Chen (4):
  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  arm64: kasan: abstract _text and _end to KERNEL_START/END
  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  arm64: kaslr: support randomized module area with KASAN_VMALLOC

 arch/arm64/Kconfig         |  1 +
 arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
 arch/arm64/kernel/module.c | 16 +++++++++-------
 arch/arm64/mm/kasan_init.c | 29 +++++++++++++++++++++--------
 4 files changed, 41 insertions(+), 23 deletions(-)

-- 
2.25.1



* [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-01-09 10:32 ` Lecopzer Chen
  2021-02-03 18:37   ` Ard Biesheuvel
  2021-02-04 12:45   ` Will Deacon
  2021-01-09 10:32 ` [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END Lecopzer Chen
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 32+ messages in thread
From: Lecopzer Chen @ 2021-01-09 10:32 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, will, catalin.marinas, ardb, andreyknvl, broonie,
	linux, rppt, tyhicks, robin.murphy, vincenzo.frascino,
	gustavoars, Lecopzer Chen, Lecopzer Chen

Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory")

As is already done for MODULES_VADDR, simply do not populate the shadow
for the region between VMALLOC_START and VMALLOC_END early. Similarly,
the kernel image mapping now lives in the vmalloc area, so that region
must be kept populated.
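
To make the effect concrete, here is a compilable sketch of which shadow
ranges receive early zero shadow in each configuration (the addresses are
made-up placeholders ordered like the arm64 layout, and populate_zero_shadow()
merely stands in for kasan_populate_early_shadow()):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MOD_SHADOW_END       0x100ULL
#define VMALLOC_SHADOW_START 0x100ULL	/* VMALLOC_START == MODULES_END */
#define KIMG_SHADOW_START    0x140ULL
#define KIMG_SHADOW_END      0x180ULL
#define VMALLOC_SHADOW_END   0x800ULL
#define SHADOW_END           0x1000ULL

static void populate_zero_shadow(uint64_t s, uint64_t e)
{
	printf("zero shadow for [%#llx, %#llx)\n",
	       (unsigned long long)s, (unsigned long long)e);
}

int main(void)
{
	bool kasan_vmalloc = true;	/* CONFIG_KASAN_VMALLOC */

	if (kasan_vmalloc) {
		/* The vmalloc shadow itself is left unmapped on purpose. */
		populate_zero_shadow(VMALLOC_SHADOW_END, SHADOW_END);
		if (VMALLOC_SHADOW_START > MOD_SHADOW_END)	/* dead when VMALLOC_START == MODULES_END */
			populate_zero_shadow(MOD_SHADOW_END, VMALLOC_SHADOW_START);
	} else {
		populate_zero_shadow(KIMG_SHADOW_END, SHADOW_END);
		if (KIMG_SHADOW_START > MOD_SHADOW_END)
			populate_zero_shadow(MOD_SHADOW_END, KIMG_SHADOW_START);
	}
	return 0;
}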

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..39b218a64279 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
 {
 	u64 kimg_shadow_start, kimg_shadow_end;
 	u64 mod_shadow_start, mod_shadow_end;
+	u64 vmalloc_shadow_start, vmalloc_shadow_end;
 	phys_addr_t pa_start, pa_end;
 	u64 i;
 
@@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
 	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
 	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
 
+	vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
+	vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
+
 	/*
 	 * We are going to perform proper setup of shadow memory.
 	 * At first we should unmap early shadow (clear_pgds() call below).
@@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
 
 	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
 				   (void *)mod_shadow_start);
-	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				   (void *)KASAN_SHADOW_END);
+	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
+					    (void *)KASAN_SHADOW_END);
+		if (vmalloc_shadow_start > mod_shadow_end)
+			kasan_populate_early_shadow((void *)mod_shadow_end,
+						    (void *)vmalloc_shadow_start);
+
+	} else {
+		kasan_populate_early_shadow((void *)kimg_shadow_end,
+					    (void *)KASAN_SHADOW_END);
+		if (kimg_shadow_start > mod_shadow_end)
+			kasan_populate_early_shadow((void *)mod_shadow_end,
+						    (void *)kimg_shadow_start);
+	}
 
-	if (kimg_shadow_start > mod_shadow_end)
-		kasan_populate_early_shadow((void *)mod_shadow_end,
-					    (void *)kimg_shadow_start);
 
 	for_each_mem_range(i, &pa_start, &pa_end) {
 		void *start = (void *)__phys_to_virt(pa_start);
-- 
2.25.1



* [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
  2021-01-09 10:32 ` [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-01-09 10:32 ` Lecopzer Chen
  2021-02-04 12:46   ` Will Deacon
  2021-01-09 10:32 ` [PATCH v2 3/4] arm64: Kconfig: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-01-09 10:32 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, will, catalin.marinas, ardb, andreyknvl, broonie,
	linux, rppt, tyhicks, robin.murphy, vincenzo.frascino,
	gustavoars, Lecopzer Chen, Lecopzer Chen

arm64 provides the KERNEL_START and KERNEL_END macros, so use these
abstractions instead of _text and _end.
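
For reference, in my reading of arch/arm64/include/asm/memory.h these are
plain aliases (worth double-checking in the target tree), so this is purely
a readability change:

/* arch/arm64/include/asm/memory.h (paraphrased, verify locally) */
#define KERNEL_START	_text
#define KERNEL_END	_end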

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/mm/kasan_init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 39b218a64279..fa8d7ece895d 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
 	phys_addr_t pa_start, pa_end;
 	u64 i;
 
-	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
-	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
+	kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
+	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));
 
 	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
 	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
@@ -241,7 +241,7 @@ static void __init kasan_init_shadow(void)
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
-			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+			   early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));
 
 	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
 				   (void *)mod_shadow_start);
-- 
2.25.1



* [PATCH v2 3/4] arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
  2021-01-09 10:32 ` [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
  2021-01-09 10:32 ` [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END Lecopzer Chen
@ 2021-01-09 10:32 ` Lecopzer Chen
  2021-01-09 10:32 ` [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC Lecopzer Chen
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 32+ messages in thread
From: Lecopzer Chen @ 2021-01-09 10:32 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, will, catalin.marinas, ardb, andreyknvl, broonie,
	linux, rppt, tyhicks, robin.murphy, vincenzo.frascino,
	gustavoars, Lecopzer Chen, Lecopzer Chen

Now that we can back shadow memory in the vmalloc area, support
KASAN_VMALLOC in KASAN_GENERIC mode.

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e17351e4f3..ba03820402ee 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -136,6 +136,7 @@ config ARM64
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+	select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
 	select HAVE_ARCH_KGDB
-- 
2.25.1



* [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (2 preceding siblings ...)
  2021-01-09 10:32 ` [PATCH v2 3/4] arm64: Kconfig: support CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-01-09 10:32 ` Lecopzer Chen
  2021-01-27 23:04   ` Will Deacon
  2021-01-21 10:19 ` [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-01-09 10:32 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel
  Cc: dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, will, catalin.marinas, ardb, andreyknvl, broonie,
	linux, rppt, tyhicks, robin.murphy, vincenzo.frascino,
	gustavoars, Lecopzer Chen, Lecopzer Chen

Now that KASAN_VMALLOC works on arm64, we can randomize the module
region into the vmalloc area.

Test:
	VMALLOC area ffffffc010000000 fffffffdf0000000

	before the patch:
		module_alloc_base/end ffffffc008b80000 ffffffc010000000
	after the patch:
		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

	And loading modules with insmod works fine.
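
	A quick runnable check of those numbers (addresses copied from the
	output above; it prints 0 for before and 1 for after):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t vmalloc_start = 0xffffffc010000000ULL;
	uint64_t vmalloc_end   = 0xfffffffdf0000000ULL;
	uint64_t base_before   = 0xffffffc008b80000ULL;
	uint64_t base_after    = 0xffffffdcf4bed000ULL;

	printf("before inside vmalloc: %d\n",
	       base_before >= vmalloc_start && base_before < vmalloc_end);
	printf("after  inside vmalloc: %d\n",
	       base_after >= vmalloc_start && base_after < vmalloc_end);
	return 0;
}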

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
 arch/arm64/kernel/module.c | 16 +++++++++-------
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1c74c45b9494..a2858058e724 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
 
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
-	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+	     IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
 		/*
-		 * KASAN does not expect the module region to intersect the
-		 * vmalloc region, since shadow memory is allocated for each
-		 * module at load time, whereas the vmalloc region is shadowed
-		 * by KASAN zero pages. So keep modules out of the vmalloc
-		 * region if KASAN is enabled, and put the kernel well within
-		 * 4 GB of the module region.
+		 * KASAN without KASAN_VMALLOC does not expect the module region
+		 * to intersect the vmalloc region, since shadow memory is
+		 * allocated for each module at load time, whereas the vmalloc
+		 * region is shadowed by KASAN zero pages. So keep modules
+		 * out of the vmalloc region if KASAN is enabled without
+		 * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+		 * module region.
 		 */
 		return offset % SZ_2G;
 
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index fe21e0f06492..b5ec010c481f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
 				NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
-	    !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	    (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+	     (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	      !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
 		/*
-		 * KASAN can only deal with module allocations being served
-		 * from the reserved module region, since the remainder of
-		 * the vmalloc region is already backed by zero shadow pages,
-		 * and punching holes into it is non-trivial. Since the module
-		 * region is not randomized when KASAN is enabled, it is even
+		 * KASAN without KASAN_VMALLOC can only deal with module
+		 * allocations being served from the reserved module region,
+		 * since the remainder of the vmalloc region is already
+		 * backed by zero shadow pages, and punching holes into it
+		 * is non-trivial. Since the module region is not randomized
+		 * when KASAN is enabled without KASAN_VMALLOC, it is even
 		 * less likely that the module region gets exhausted, so we
 		 * can simply omit this fallback in that case.
 		 */
-- 
2.25.1



* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (3 preceding siblings ...)
  2021-01-09 10:32 ` [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC Lecopzer Chen
@ 2021-01-21 10:19 ` Lecopzer Chen
  2021-01-21 17:44 ` Andrey Konovalov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 32+ messages in thread
From: Lecopzer Chen @ 2021-01-21 10:19 UTC (permalink / raw)
  To: lecopzer
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, linux-arm-kernel, linux-kernel, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	will, yj.chiang

Dear reviewers and maintainers,


Could we have a chance to get this upstream in the 5.12 cycle?

That way, if these patches have any problems, I can fix them as soon as
possible before the next -rc comes out.


thanks!

BRs,
Lecopzer


* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (4 preceding siblings ...)
  2021-01-21 10:19 ` [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-01-21 17:44 ` Andrey Konovalov
  2021-01-22 19:05   ` Will Deacon
  2021-02-03 18:31 ` Ard Biesheuvel
  2021-02-04 12:49 ` Will Deacon
  7 siblings, 1 reply; 32+ messages in thread
From: Andrey Konovalov @ 2021-01-21 17:44 UTC (permalink / raw)
  To: Will Deacon
  Cc: LKML, Linux Memory Management List, kasan-dev, Linux ARM,
	Dan Williams, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Andrew Morton, moderated list:ARM/Mediatek SoC...,
	yj.chiang, Catalin Marinas, Ard Biesheuvel, Mark Brown,
	Guenter Roeck, rppt, tyhicks, Robin Murphy, Vincenzo Frascino,
	gustavoars, Lecopzer Chen, Lecopzer Chen

On Sat, Jan 9, 2021 at 11:33 AM Lecopzer Chen <lecopzer@gmail.com> wrote:
>
> Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> According to how x86 ported it [1], they allocate the p4d and pgd entries
> early, but on arm64 I simply follow how KASAN already handles MODULES_VADDR,
> by not populating the vmalloc area early except for the kernel image addresses.
>
> Test environment:
>     4G and 8G QEMU virt machines,
>     39-bit VA + 4k PAGE_SIZE with 3-level page table,
>     tested with lib/test_kasan.ko and lib/test_kasan_module.ko
>
> It also works with KASLR when CONFIG_RANDOMIZE_MODULE_REGION_FULL is
> enabled, randomizing the module region inside the vmalloc area.
>
>
> [1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")
>
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> Acked-by: Andrey Konovalov <andreyknvl@google.com>
> Tested-by: Andrey Konovalov <andreyknvl@google.com>
>
>
> v1 -> v2
>         1. tweak indentation in kasan_init.c
>         2. make the Kconfig entry depend only on HAVE_ARCH_KASAN
>         3. support randomized module region.
>
> v1:
> https://lore.kernel.org/lkml/20210103171137.153834-1-lecopzer@gmail.com/
>
> Lecopzer Chen (4):
>   arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
>   arm64: kasan: abstract _text and _end to KERNEL_START/END
>   arm64: Kconfig: support CONFIG_KASAN_VMALLOC
>   arm64: kaslr: support randomized module area with KASAN_VMALLOC
>
>  arch/arm64/Kconfig         |  1 +
>  arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
>  arch/arm64/kernel/module.c | 16 +++++++++-------
>  arch/arm64/mm/kasan_init.c | 29 +++++++++++++++++++++--------
>  4 files changed, 41 insertions(+), 23 deletions(-)
>
> --
> 2.25.1
>

Hi Will,

Could you PTAL at the arm64 changes?

Thanks!


* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-01-21 17:44 ` Andrey Konovalov
@ 2021-01-22 19:05   ` Will Deacon
  0 siblings, 0 replies; 32+ messages in thread
From: Will Deacon @ 2021-01-22 19:05 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: LKML, Linux Memory Management List, kasan-dev, Linux ARM,
	Dan Williams, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Andrew Morton, moderated list:ARM/Mediatek SoC...,
	yj.chiang, Catalin Marinas, Ard Biesheuvel, Mark Brown,
	Guenter Roeck, rppt, tyhicks, Robin Murphy, Vincenzo Frascino,
	gustavoars, Lecopzer Chen, Lecopzer Chen

On Thu, Jan 21, 2021 at 06:44:14PM +0100, Andrey Konovalov wrote:
> On Sat, Jan 9, 2021 at 11:33 AM Lecopzer Chen <lecopzer@gmail.com> wrote:
> >
> > Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > According to how x86 ported it [1], they allocate the p4d and pgd entries
> > early, but on arm64 I simply follow how KASAN already handles MODULES_VADDR,
> > by not populating the vmalloc area early except for the kernel image addresses.
> >
> > Test environment:
> >     4G and 8G QEMU virt machines,
> >     39-bit VA + 4k PAGE_SIZE with 3-level page table,
> >     tested with lib/test_kasan.ko and lib/test_kasan_module.ko
> >
> > It also works with KASLR when CONFIG_RANDOMIZE_MODULE_REGION_FULL is
> > enabled, randomizing the module region inside the vmalloc area.
> >
> >
> > [1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")
> >
> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > Acked-by: Andrey Konovalov <andreyknvl@google.com>
> > Tested-by: Andrey Konovalov <andreyknvl@google.com>
> >
> >
> > v1 -> v2
> >         1. tweak indentation in kasan_init.c
> >         2. make the Kconfig entry depend only on HAVE_ARCH_KASAN
> >         3. support randomized module region.
> >
> > v1:
> > https://lore.kernel.org/lkml/20210103171137.153834-1-lecopzer@gmail.com/
> >
> > Lecopzer Chen (4):
> >   arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
> >   arm64: kasan: abstract _text and _end to KERNEL_START/END
> >   arm64: Kconfig: support CONFIG_KASAN_VMALLOC
> >   arm64: kaslr: support randomized module area with KASAN_VMALLOC
> >
> >  arch/arm64/Kconfig         |  1 +
> >  arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
> >  arch/arm64/kernel/module.c | 16 +++++++++-------
> >  arch/arm64/mm/kasan_init.c | 29 +++++++++++++++++++++--------
> >  4 files changed, 41 insertions(+), 23 deletions(-)
> >
> > --
> > 2.25.1
> >
> 
> Hi Will,
> 
> Could you PTAL at the arm64 changes?

Sorry, wanted to get to this today but I ran out of time in the end. On the
list for next week!

Will


* Re: [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC
  2021-01-09 10:32 ` [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC Lecopzer Chen
@ 2021-01-27 23:04   ` Will Deacon
  2021-01-28  8:53     ` Lecopzer Chen
  0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-01-27 23:04 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel,
	dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	Lecopzer Chen

On Sat, Jan 09, 2021 at 06:32:52PM +0800, Lecopzer Chen wrote:
> Now that KASAN_VMALLOC works on arm64, we can randomize the module
> region into the vmalloc area.
> 
> Test:
> 	VMALLOC area ffffffc010000000 fffffffdf0000000
> 
> 	before the patch:
> 		module_alloc_base/end ffffffc008b80000 ffffffc010000000
> 	after the patch:
> 		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000
> 
> 	And loading modules with insmod works fine.
> 
> Suggested-by: Ard Biesheuvel <ardb@kernel.org>
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> ---
>  arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
>  arch/arm64/kernel/module.c | 16 +++++++++-------
>  2 files changed, 19 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 1c74c45b9494..a2858058e724 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
>  	/* use the top 16 bits to randomize the linear region */
>  	memstart_offset_seed = seed >> 48;
>  
> -	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> -	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> +	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
> +	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||

CONFIG_KASAN_VMALLOC depends on CONFIG_KASAN_GENERIC so why is this
necessary?

Will


* Re: [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC
  2021-01-27 23:04   ` Will Deacon
@ 2021-01-28  8:53     ` Lecopzer Chen
  2021-01-28 20:26       ` Will Deacon
  0 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-01-28  8:53 UTC (permalink / raw)
  To: will
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, lecopzer, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-mm, linux, robin.murphy, rppt, tyhicks,
	vincenzo.frascino, yj.chiang

 
> On Sat, Jan 09, 2021 at 06:32:52PM +0800, Lecopzer Chen wrote:
> > Now that KASAN_VMALLOC works on arm64, we can randomize the module
> > region into the vmalloc area.
> > 
> > Test:
> > 	VMALLOC area ffffffc010000000 fffffffdf0000000
> > 
> > 	before the patch:
> > 		module_alloc_base/end ffffffc008b80000 ffffffc010000000
> > 	after the patch:
> > 		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000
> > 
> > 	And loading modules with insmod works fine.
> > 
> > Suggested-by: Ard Biesheuvel <ardb@kernel.org>
> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > ---
> >  arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
> >  arch/arm64/kernel/module.c | 16 +++++++++-------
> >  2 files changed, 19 insertions(+), 15 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > index 1c74c45b9494..a2858058e724 100644
> > --- a/arch/arm64/kernel/kaslr.c
> > +++ b/arch/arm64/kernel/kaslr.c
> > @@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
> >  	/* use the top 16 bits to randomize the linear region */
> >  	memstart_offset_seed = seed >> 48;
> >  
> > -	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > -	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> > +	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
> > +	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> 
> CONFIG_KASAN_VMALLOC depends on CONFIG_KASAN_GENERIC so why is this
> necessary?
> 
> Will

CONFIG_KASAN_VMALLOC=y implies CONFIG_KASAN_GENERIC=y,
but CONFIG_KASAN_GENERIC=y doesn't imply CONFIG_KASAN_VMALLOC=y.

So this if-condition matches only the plain-KASAN case, not the case
where both KASAN and KASAN_VMALLOC are enabled.

Please correct me if I'm wrong.
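
Restated as a standalone predicate (my paraphrase of the condition in
kaslr_early_init(), not kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Clamp the module region near the kernel only when KASAN is enabled
 * without KASAN_VMALLOC; with KASAN_VMALLOC we may randomize fully. */
static bool clamp_modules_near_kernel(bool generic, bool sw_tags, bool vmalloc)
{
	return !vmalloc && (generic || sw_tags);
}

int main(void)
{
	printf("KASAN_GENERIC only:            %d\n",
	       clamp_modules_near_kernel(true, false, false));	/* 1 */
	printf("KASAN_GENERIC + KASAN_VMALLOC: %d\n",
	       clamp_modules_near_kernel(true, false, true));	/* 0 */
	return 0;
}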

thanks,
Lecopzer


 




* Re: [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC
  2021-01-28  8:53     ` Lecopzer Chen
@ 2021-01-28 20:26       ` Will Deacon
  0 siblings, 0 replies; 32+ messages in thread
From: Will Deacon @ 2021-01-28 20:26 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev, lecopzer,
	linux-arm-kernel, linux-kernel, linux-mediatek, linux-mm, linux,
	robin.murphy, rppt, tyhicks, vincenzo.frascino, yj.chiang

On Thu, Jan 28, 2021 at 04:53:26PM +0800, Lecopzer Chen wrote:
>  
> > On Sat, Jan 09, 2021 at 06:32:52PM +0800, Lecopzer Chen wrote:
> > > Now that KASAN_VMALLOC works on arm64, we can randomize the module
> > > region into the vmalloc area.
> > > 
> > > Test:
> > > 	VMALLOC area ffffffc010000000 fffffffdf0000000
> > > 
> > > 	before the patch:
> > > 		module_alloc_base/end ffffffc008b80000 ffffffc010000000
> > > 	after the patch:
> > > 		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000
> > > 
> > > 	And loading modules with insmod works fine.
> > > 
> > > Suggested-by: Ard Biesheuvel <ardb@kernel.org>
> > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > ---
> > >  arch/arm64/kernel/kaslr.c  | 18 ++++++++++--------
> > >  arch/arm64/kernel/module.c | 16 +++++++++-------
> > >  2 files changed, 19 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > > index 1c74c45b9494..a2858058e724 100644
> > > --- a/arch/arm64/kernel/kaslr.c
> > > +++ b/arch/arm64/kernel/kaslr.c
> > > @@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
> > >  	/* use the top 16 bits to randomize the linear region */
> > >  	memstart_offset_seed = seed >> 48;
> > >  
> > > -	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > > -	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> > > +	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
> > > +	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
> > 
> > CONFIG_KASAN_VMALLOC depends on CONFIG_KASAN_GENERIC so why is this
> > necessary?
> > 
> > Will
> 
> CONFIG_KASAN_VMALLOC=y means CONFIG_KASAN_GENERIC=y
> but CONFIG_KASAN_GENERIC=y doesn't means CONFIG_KASAN_VMALLOC=y
> 
> So this if-condition allows only KASAN rather than
> KASAN + KASAN_VMALLOC enabled.
> 
> Please correct me if I'm wrong.

Sorry, you're completely right -- I missed the '!' when I read this
initially.

Will


* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (5 preceding siblings ...)
  2021-01-21 17:44 ` Andrey Konovalov
@ 2021-02-03 18:31 ` Ard Biesheuvel
  2021-02-04 12:49 ` Will Deacon
  7 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2021-02-03 18:31 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: Linux Kernel Mailing List, Linux Memory Management List,
	kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Andrew Morton,
	linux-mediatek, yj.chiang, Will Deacon, Catalin Marinas,
	Andrey Konovalov, Mark Brown, Guenter Roeck, Mike Rapoport,
	Tyler Hicks, Robin Murphy, Vincenzo Frascino,
	Gustavo A. R. Silva, Lecopzer Chen

On Sat, 9 Jan 2021 at 11:33, Lecopzer Chen <lecopzer@gmail.com> wrote:
>
> Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> According to how x86 ported it [1], they allocate the p4d and pgd entries
> early, but on arm64 I simply follow how KASAN already handles MODULES_VADDR,
> by not populating the vmalloc area early except for the kernel image addresses.
>
> Test environment:
>     4G and 8G QEMU virt machines,
>     39-bit VA + 4k PAGE_SIZE with 3-level page table,
>     tested with lib/test_kasan.ko and lib/test_kasan_module.ko
>
> It also works with KASLR when CONFIG_RANDOMIZE_MODULE_REGION_FULL is
> enabled, randomizing the module region inside the vmalloc area.
>
>
> [1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")
>
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> Acked-by: Andrey Konovalov <andreyknvl@google.com>
> Tested-by: Andrey Konovalov <andreyknvl@google.com>
>
>
> v1 -> v2
>         1. tweak indentation in kasan_init.c
>         2. make the Kconfig entry depend only on HAVE_ARCH_KASAN
>         3. support randomized module region.
>
> v1:
> https://lore.kernel.org/lkml/20210103171137.153834-1-lecopzer@gmail.com/
>
> Lecopzer Chen (4):
>   arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
>   arm64: kasan: abstract _text and _end to KERNEL_START/END
>   arm64: Kconfig: support CONFIG_KASAN_VMALLOC
>   arm64: kaslr: support randomized module area with KASAN_VMALLOC
>

I failed to realize that VMAP_STACK and KASAN are currently mutually
exclusive on arm64, and that this series actually fixes that, which is
a big improvement, so it would make sense to call that out.

This builds and runs fine for me on a VM running under KVM.

Tested-by: Ard Biesheuvel <ardb@kernel.org>


* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 ` [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
@ 2021-02-03 18:37   ` Ard Biesheuvel
  2021-02-04  6:21     ` Lecopzer Chen
  2021-02-04 12:45   ` Will Deacon
  1 sibling, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2021-02-03 18:37 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: Linux Kernel Mailing List, Linux Memory Management List,
	kasan-dev, Linux ARM, Dan Williams, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Andrew Morton,
	linux-mediatek, yj.chiang, Will Deacon, Catalin Marinas,
	Andrey Konovalov, Mark Brown, Guenter Roeck, Mike Rapoport,
	Tyler Hicks, Robin Murphy, Vincenzo Frascino,
	Gustavo A. R. Silva, Lecopzer Chen

On Sat, 9 Jan 2021 at 11:33, Lecopzer Chen <lecopzer@gmail.com> wrote:
>
> Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
>
> As is already done for MODULES_VADDR, simply do not populate the shadow
> for the region between VMALLOC_START and VMALLOC_END early. Similarly,
> the kernel image mapping now lives in the vmalloc area, so that region
> must be kept populated.
>
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>


This commit log text is a bit hard to follow. You are saying that the
vmalloc region is *not* backed with zero shadow or any default mapping
at all, right, and everything gets allocated on demand, just like is
the case for modules?

> ---
>  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index d8e66c78440e..39b218a64279 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
>  {
>         u64 kimg_shadow_start, kimg_shadow_end;
>         u64 mod_shadow_start, mod_shadow_end;
> +       u64 vmalloc_shadow_start, vmalloc_shadow_end;
>         phys_addr_t pa_start, pa_end;
>         u64 i;
>
> @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
>         mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
>         mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
>
> +       vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> +       vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> +


This and the below seem overly complicated, given that VMALLOC_START
== MODULES_END. Can we simplify this?

>         /*
>          * We are going to perform proper setup of shadow memory.
>          * At first we should unmap early shadow (clear_pgds() call below).
> @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
>
>         kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
>                                    (void *)mod_shadow_start);
> -       kasan_populate_early_shadow((void *)kimg_shadow_end,
> -                                  (void *)KASAN_SHADOW_END);
> +       if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> +               kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> +                                           (void *)KASAN_SHADOW_END);
> +               if (vmalloc_shadow_start > mod_shadow_end)
> +                       kasan_populate_early_shadow((void *)mod_shadow_end,
> +                                                   (void *)vmalloc_shadow_start);
> +
> +       } else {
> +               kasan_populate_early_shadow((void *)kimg_shadow_end,
> +                                           (void *)KASAN_SHADOW_END);
> +               if (kimg_shadow_start > mod_shadow_end)
> +                       kasan_populate_early_shadow((void *)mod_shadow_end,
> +                                                   (void *)kimg_shadow_start);
> +       }
>
> -       if (kimg_shadow_start > mod_shadow_end)
> -               kasan_populate_early_shadow((void *)mod_shadow_end,
> -                                           (void *)kimg_shadow_start);
>
>         for_each_mem_range(i, &pa_start, &pa_end) {
>                 void *start = (void *)__phys_to_virt(pa_start);
> --
> 2.25.1
>


* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-03 18:37   ` Ard Biesheuvel
@ 2021-02-04  6:21     ` Lecopzer Chen
  0 siblings, 0 replies; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-04  6:21 UTC (permalink / raw)
  To: ardb
  Cc: akpm, andreyknvl, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, lecopzer, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-mm, linux, robin.murphy, rppt, tyhicks,
	vincenzo.frascino, will, yj.chiang

> On Sat, 9 Jan 2021 at 11:33, Lecopzer Chen <lecopzer@gmail.com> wrote:
> >
> > Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > As is already done for MODULES_VADDR, simply do not populate the shadow
> > for the region between VMALLOC_START and VMALLOC_END early. Similarly,
> > the kernel image mapping now lives in the vmalloc area, so that region
> > must be kept populated.
> >
> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> 
> 
> This commit log text is a bit hard to follow. You are saying that the
> vmalloc region is *not* backed with zero shadow or any default mapping
> at all, right, and everything gets allocated on demand, just like is
> the case for modules?

It's much more like:

before:

MODULES_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init

after:

MODULES_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: no mapping, no zero shadow at init

So it should be both "not backed with zero shadow" and
"no mapping at all, with everything allocated on demand".

And "not backed with zero shadow" is effectively a subset of "no mapping at all".


Would it be clearer if the commit message were revised to:

----------------------
As is already done for MODULES_VADDR, simply do not populate the shadow
for the region between VMALLOC_START and VMALLOC_END early.

Before:

MODULES_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init

After:

VMALLOC_VADDR: no mapping, no zero shadow at init

Thus the mapping will get allocated on demand by the KASAN vmalloc core
functions.

Similarly, the kernel image mapping is now in the vmalloc area and that
region must be kept populated.
--------------------

Or do you have another suggestion?


Thanks a lot for your review!

BRs,
Lecopzer



* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 ` [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
  2021-02-03 18:37   ` Ard Biesheuvel
@ 2021-02-04 12:45   ` Will Deacon
  2021-02-04 14:46     ` Lecopzer Chen
  1 sibling, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-02-04 12:45 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel,
	dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	Lecopzer Chen

On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
> 
> As is already done for MODULES_VADDR, simply do not populate the shadow
> for the region between VMALLOC_START and VMALLOC_END early. Similarly,
> the kernel image mapping now lives in the vmalloc area, so that region
> must be kept populated.
> 
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> ---
>  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index d8e66c78440e..39b218a64279 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
>  {
>  	u64 kimg_shadow_start, kimg_shadow_end;
>  	u64 mod_shadow_start, mod_shadow_end;
> +	u64 vmalloc_shadow_start, vmalloc_shadow_end;
>  	phys_addr_t pa_start, pa_end;
>  	u64 i;
>  
> @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
>  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
>  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
>  
> +	vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> +	vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> +
>  	/*
>  	 * We are going to perform proper setup of shadow memory.
>  	 * At first we should unmap early shadow (clear_pgds() call below).
> @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
>  
>  	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
>  				   (void *)mod_shadow_start);
> -	kasan_populate_early_shadow((void *)kimg_shadow_end,
> -				   (void *)KASAN_SHADOW_END);
> +	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {

Do we really need yet another CONFIG option for KASAN? What's the use-case
for *not* enabling this if you're already enabling one of the KASAN
backends?

> +		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> +					    (void *)KASAN_SHADOW_END);
> +		if (vmalloc_shadow_start > mod_shadow_end)

To echo Ard's concern: when is the above 'if' condition true?

Will


* Re: [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END
  2021-01-09 10:32 ` [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END Lecopzer Chen
@ 2021-02-04 12:46   ` Will Deacon
  2021-02-04 14:51     ` Lecopzer Chen
  0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-02-04 12:46 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel,
	dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	Lecopzer Chen

On Sat, Jan 09, 2021 at 06:32:50PM +0800, Lecopzer Chen wrote:
> arm64 provides the KERNEL_START and KERNEL_END macros, so use these
> abstractions instead of _text and _end.
> 
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> ---
>  arch/arm64/mm/kasan_init.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 39b218a64279..fa8d7ece895d 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
>  	phys_addr_t pa_start, pa_end;
>  	u64 i;
>  
> -	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
> -	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
> +	kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
> +	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));
>  
>  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
>  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> @@ -241,7 +241,7 @@ static void __init kasan_init_shadow(void)
>  	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
>  
>  	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
> -			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
> +			   early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));

To be honest, I think this whole line is pointless. We should be able to
pass NUMA_NO_NODE now that we're not abusing the vmemmap() allocator to
populate the shadow.

Will


* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
                   ` (6 preceding siblings ...)
  2021-02-03 18:31 ` Ard Biesheuvel
@ 2021-02-04 12:49 ` Will Deacon
  2021-02-04 15:53   ` Lecopzer Chen
  7 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-02-04 12:49 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, linux-mm, kasan-dev, linux-arm-kernel,
	dan.j.williams, aryabinin, glider, dvyukov, akpm, linux-mediatek,
	yj.chiang, catalin.marinas, ardb, andreyknvl, broonie, linux,
	rppt, tyhicks, robin.murphy, vincenzo.frascino, gustavoars,
	Lecopzer Chen

On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> ("kasan: support backing vmalloc space with real shadow memory")
> 
> According to how x86 ported it [1], they allocate the p4d and pgd entries
> early, but on arm64 I simply follow how KASAN already handles MODULES_VADDR,
> by not populating the vmalloc area early except for the kernel image addresses.

The one thing I've failed to grok from your series is how you deal with
vmalloc allocations where the shadow overlaps with the shadow which has
already been allocated for the kernel image. Please can you explain?

Thanks,

Will


* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-04 12:45   ` Will Deacon
@ 2021-02-04 14:46     ` Lecopzer Chen
  2021-02-04 15:01       ` Will Deacon
  0 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-04 14:46 UTC (permalink / raw)
  To: will
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, lecopzer, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-mm, linux, robin.murphy, rppt, tyhicks,
	vincenzo.frascino, yj.chiang

> On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> > 
> > As is already done for MODULES_VADDR, simply do not populate the shadow
> > for the region between VMALLOC_START and VMALLOC_END early. Similarly,
> > the kernel image mapping now lives in the vmalloc area, so that region
> > must be kept populated.
> > 
> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > ---
> >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> >  1 file changed, 18 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index d8e66c78440e..39b218a64279 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> >  {
> >  	u64 kimg_shadow_start, kimg_shadow_end;
> >  	u64 mod_shadow_start, mod_shadow_end;
> > +	u64 vmalloc_shadow_start, vmalloc_shadow_end;
> >  	phys_addr_t pa_start, pa_end;
> >  	u64 i;
> >  
> > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> >  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> >  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> >  
> > +	vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > +	vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > +
> >  	/*
> >  	 * We are going to perform proper setup of shadow memory.
> >  	 * At first we should unmap early shadow (clear_pgds() call below).
> > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> >  
> >  	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> >  				   (void *)mod_shadow_start);
> > -	kasan_populate_early_shadow((void *)kimg_shadow_end,
> > -				   (void *)KASAN_SHADOW_END);
> > +	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> 
> Do we really need yet another CONFIG option for KASAN? What's the use-case
> for *not* enabling this if you're already enabling one of the KASAN
> backends?

As far as I know, KASAN_VMALLOC currently only supports KASAN_GENERIC,
and KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the
vmalloc VA space).

There may be users who can enable KASAN_GENERIC but can't afford
KASAN_VMALLOC due to memory constraints.
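
As a back-of-envelope check of that 1/8 figure (worst case, as if the whole
range were shadowed at once; in practice the shadow is allocated on demand),
using the vmalloc range from the 39-bit VA test:

#include <stdio.h>

int main(void)
{
	unsigned long long va = 0xfffffffdf0000000ULL - 0xffffffc010000000ULL;

	printf("vmalloc VA:        %llu GiB\n", va >> 30);		/* ~247 */
	printf("worst-case shadow: %llu GiB\n", (va >> 3) >> 30);	/* ~30 */
	return 0;
}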
 
> > +		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> > +					    (void *)KASAN_SHADOW_END);
> > +		if (vmalloc_shadow_start > mod_shadow_end)
> 
> To echo Ard's concern: when is the above 'if' condition true?

After reviewing this code: since VMALLOC_START is defined as MODULES_END,
this if-condition will never be true.

I also tested with it removed and everything works fine.

I'll remove it in the next version of the patch;
thanks a lot for pointing this out.

BRs,
Lecopzer


* Re: [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END
  2021-02-04 12:46   ` Will Deacon
@ 2021-02-04 14:51     ` Lecopzer Chen
  2021-02-04 14:55       ` Will Deacon
  0 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-04 14:51 UTC (permalink / raw)
  To: will
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, lecopzer, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-mm, linux, robin.murphy, rppt, tyhicks,
	vincenzo.frascino, yj.chiang

> On Sat, Jan 09, 2021 at 06:32:50PM +0800, Lecopzer Chen wrote:
> > arm64 provides the KERNEL_START and KERNEL_END macros, so use these
> > abstractions instead of _text and _end.
> > 
> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > ---
> >  arch/arm64/mm/kasan_init.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index 39b218a64279..fa8d7ece895d 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
> >  	phys_addr_t pa_start, pa_end;
> >  	u64 i;
> >  
> > -	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
> > -	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
> > +	kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
> > +	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));
> >  
> >  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> >  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > @@ -241,7 +241,7 @@ static void __init kasan_init_shadow(void)
> >  	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> >  
> >  	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
> > -			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
> > +			   early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));
> 
> To be honest, I think this whole line is pointless. We should be able to
> pass NUMA_NO_NODE now that we're not abusing the vmemmap() allocator to
> populate the shadow.

Do we need to fix this in this series? It seems like a separate topic.
If not, should this patch be dropped from the series?

Thanks,
Lecopzer


* Re: [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END
  2021-02-04 14:51     ` Lecopzer Chen
@ 2021-02-04 14:55       ` Will Deacon
  2021-02-04 16:06         ` Lecopzer Chen
  0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-02-04 14:55 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, linux-arm-kernel, linux-kernel, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

On Thu, Feb 04, 2021 at 10:51:27PM +0800, Lecopzer Chen wrote:
> > On Sat, Jan 09, 2021 at 06:32:50PM +0800, Lecopzer Chen wrote:
> > > arm64 provides the KERNEL_START and KERNEL_END macros, so use these
> > > abstractions instead of _text and _end.
> > > 
> > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > ---
> > >  arch/arm64/mm/kasan_init.c | 6 +++---
> > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > index 39b218a64279..fa8d7ece895d 100644
> > > --- a/arch/arm64/mm/kasan_init.c
> > > +++ b/arch/arm64/mm/kasan_init.c
> > > @@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
> > >  	phys_addr_t pa_start, pa_end;
> > >  	u64 i;
> > >  
> > > -	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
> > > -	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
> > > +	kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
> > > +	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));
> > >  
> > >  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > >  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > @@ -241,7 +241,7 @@ static void __init kasan_init_shadow(void)
> > >  	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> > >  
> > >  	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
> > > -			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
> > > +			   early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));
> > 
> > To be honest, I think this whole line is pointless. We should be able to
> > pass NUMA_NO_NODE now that we're not abusing the vmemmap() allocator to
> > populate the shadow.
> 
> Do we need to fix this in this series? it seems another topic.
> If not, should this patch be removed in this series?

Since you're reposting anyway, you may as well include a patch doing that.
If you don't, then I will.

Will


* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-04 14:46     ` Lecopzer Chen
@ 2021-02-04 15:01       ` Will Deacon
  2021-02-04 16:37         ` Lecopzer Chen
  0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-02-04 15:01 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, linux-arm-kernel, linux-kernel, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> > > ("kasan: support backing vmalloc space with real shadow memory")
> > > 
> > > As is already done for MODULES_VADDR, simply do not populate the shadow
> > > for the region between VMALLOC_START and VMALLOC_END early. Similarly,
> > > the kernel image mapping now lives in the vmalloc area, so that region
> > > must be kept populated.
> > > 
> > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > ---
> > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > index d8e66c78440e..39b218a64279 100644
> > > --- a/arch/arm64/mm/kasan_init.c
> > > +++ b/arch/arm64/mm/kasan_init.c
> > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > >  {
> > >  	u64 kimg_shadow_start, kimg_shadow_end;
> > >  	u64 mod_shadow_start, mod_shadow_end;
> > > +	u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > >  	phys_addr_t pa_start, pa_end;
> > >  	u64 i;
> > >  
> > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > >  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > >  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > >  
> > > +	vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > +	vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > +
> > >  	/*
> > >  	 * We are going to perform proper setup of shadow memory.
> > >  	 * At first we should unmap early shadow (clear_pgds() call below).
> > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > >  
> > >  	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > >  				   (void *)mod_shadow_start);
> > > -	kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > -				   (void *)KASAN_SHADOW_END);
> > > +	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > 
> > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > for *not* enabling this if you're already enabling one of the KASAN
> > backends?
> 
> As I know, KASAN_VMALLOC now only supports KASAN_GENERIC and also
> KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of vmalloc va).

The shadow is allocated dynamically though, isn't it?

> There should be someone can enable KASAN_GENERIC but can't use VMALLOC
> due to memory issue.

That doesn't sound particularly realistic to me. The reason I'm pushing here
is because I would _really_ like to move to VMAP stack unconditionally, and
that would effectively force KASAN_VMALLOC to be set if KASAN is in use.

So unless there's a really good reason not to do that, please can we make
this unconditional for arm64? Pretty please?

Will


* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-02-04 12:49 ` Will Deacon
@ 2021-02-04 15:53   ` Lecopzer Chen
  2021-02-04 17:57     ` Will Deacon
  0 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-04 15:53 UTC (permalink / raw)
  To: will
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, lecopzer, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-mm, linux, robin.murphy, rppt, tyhicks,
	vincenzo.frascino, yj.chiang

> On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> > Linux has supported KASAN for the vmalloc area since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> > 
> > According to how x86 ported it [1], they allocate the p4d and pgd entries
> > early, but on arm64 I simply follow how KASAN already handles MODULES_VADDR,
> > by not populating the vmalloc area early except for the kernel image addresses.
> 
> The one thing I've failed to grok from your series is how you deal with
> vmalloc allocations where the shadow overlaps with the shadow which has
> already been allocated for the kernel image. Please can you explain?


The key point is that we don't map anything in the vmalloc shadow region,
so we don't care where the kernel image is located inside the vmalloc area.

  kasan_map_populate(kimg_shadow_start, kimg_shadow_end,...)

The kernel image is populated with a real mapping at its shadow address.
I `bypass' the whole shadow of the vmalloc area; the only place you will
find vmalloc_shadow used is
	kasan_populate_early_shadow((void *)vmalloc_shadow_end,
			(void *)KASAN_SHADOW_END);

	-----------  vmalloc_shadow_start
 |           |
 |           | 
 |           | <= non-mapping
 |           |
 |           |
 |-----------|
 |///////////|<- kimage shadow with page table mapping.
 |-----------|
 |           |
 |           | <= non-mapping
 |           |
 ------------- vmalloc_shadow_end
 |00000000000|
 |00000000000| <= Zero shadow
 |00000000000|
 ------------- KASAN_SHADOW_END

The vmalloc shadow will be mapped on demand; see kasan_populate_vmalloc(),
called from mm/vmalloc.c, for the details.
So the shadow for a vmalloc region is only allocated once someone actually uses its VA.
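To make that concrete, here is a rough sketch of the early-shadow setup
described above (the function names are the ones used in this series;
treat this as an illustration rather than the exact kasan_init.c code):

	/*
	 * Kernel-image shadow gets real pages; the shadow of
	 * [VMALLOC_START, VMALLOC_END) is deliberately left unmapped;
	 * everything above it is backed by the zero shadow page.
	 */
	kasan_map_populate(kimg_shadow_start, kimg_shadow_end, nid);
	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
					    (void *)KASAN_SHADOW_END);

The hole between vmalloc_shadow_start and vmalloc_shadow_end is then
filled page by page from the vmalloc allocation path.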


BRs,
Lecopzer



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END
  2021-02-04 14:55       ` Will Deacon
@ 2021-02-04 16:06         ` Lecopzer Chen
  2021-02-05 17:02           ` Will Deacon
  0 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-04 16:06 UTC (permalink / raw)
  To: Will Deacon
  Cc: Andrew Morton, Andrey Konovalov, ardb, aryabinin, broonie,
	Catalin Marinas, dan.j.williams, Dmitry Vyukov,
	Alexander Potapenko, gustavoars, kasan-dev, Jian-Lin Chen,
	linux-arm-kernel, Linux Kernel Mailing List, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

I think it would be better to leave this to you, since I'm not
familiar with the relationship between vmemmap() and NUMA_NO_NODE.

So I would just keep this patch in the next version; is that fine with you?


Thanks for your help:)

Lecopzer



Will Deacon <will@kernel.org> wrote on Thursday, Feb 4, 2021 at 10:55 PM:
>
> On Thu, Feb 04, 2021 at 10:51:27PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:50PM +0800, Lecopzer Chen wrote:
> > > > arm64 provides defined macros for KERNEL_START and KERNEL_END,
> > > > so replace _text and _end with these abstractions.
> > > >
> > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > > ---
> > > >  arch/arm64/mm/kasan_init.c | 6 +++---
> > > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > index 39b218a64279..fa8d7ece895d 100644
> > > > --- a/arch/arm64/mm/kasan_init.c
> > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > @@ -218,8 +218,8 @@ static void __init kasan_init_shadow(void)
> > > >   phys_addr_t pa_start, pa_end;
> > > >   u64 i;
> > > >
> > > > - kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
> > > > - kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
> > > > + kimg_shadow_start = (u64)kasan_mem_to_shadow(KERNEL_START) & PAGE_MASK;
> > > > + kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(KERNEL_END));
> > > >
> > > >   mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > >   mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > > @@ -241,7 +241,7 @@ static void __init kasan_init_shadow(void)
> > > >   clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> > > >
> > > >   kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
> > > > -                    early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
> > > > +                    early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));
> > >
> > > To be honest, I think this whole line is pointless. We should be able to
> > > pass NUMA_NO_NODE now that we're not abusing the vmemmap() allocator to
> > > populate the shadow.
> >
> > Do we need to fix this in this series? It seems like a separate topic.
> > If not, should this patch be removed in this series?
>
> Since you're reposting anyway, you may as well include a patch doing that.
> If you don't, then I will.
>
> Will

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-04 15:01       ` Will Deacon
@ 2021-02-04 16:37         ` Lecopzer Chen
  2021-02-05 17:18           ` Will Deacon
  0 siblings, 1 reply; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-04 16:37 UTC (permalink / raw)
  To: will
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, lecopzer, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-mm, linux, robin.murphy, rppt, tyhicks,
	vincenzo.frascino, yj.chiang


> On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > >
> > > > Like what we do for MODULES_VADDR now, just don't early populate
> > > > the shadow from VMALLOC_START to VMALLOC_END.
> > > > Similarly, the kernel code mapping is now inside the VMALLOC area
> > > > and its shadow should be kept populated.
> > > >
> > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > > ---
> > > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > index d8e66c78440e..39b218a64279 100644
> > > > --- a/arch/arm64/mm/kasan_init.c
> > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > >  {
> > > >   u64 kimg_shadow_start, kimg_shadow_end;
> > > >   u64 mod_shadow_start, mod_shadow_end;
> > > > + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > >   phys_addr_t pa_start, pa_end;
> > > >   u64 i;
> > > >
> > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > >   mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > >   mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > >
> > > > + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > +
> > > >   /*
> > > >    * We are going to perform proper setup of shadow memory.
> > > >    * At first we should unmap early shadow (clear_pgds() call below).
> > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > >
> > > >   kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > >                              (void *)mod_shadow_start);
> > > > - kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > -                            (void *)KASAN_SHADOW_END);
> > > > + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > >
> > > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > > for *not* enabling this if you're already enabling one of the KASAN
> > > backends?
> >
> > As far as I know, KASAN_VMALLOC currently only works with KASAN_GENERIC, and
> > KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).
>
> The shadow is allocated dynamically though, isn't it?

Yes, but it's still a cost.

> > There may be people who can enable KASAN_GENERIC but can't use KASAN_VMALLOC
> > due to memory constraints.
>
> That doesn't sound particularly realistic to me. The reason I'm pushing here
> is because I would _really_ like to move to VMAP stack unconditionally, and
> that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
>
> So unless there's a really good reason not to do that, please can we make
> this unconditional for arm64? Pretty please?

I think it's fine, since we have a good reason.
Also, if someone has memory issues with KASAN_VMALLOC,
they can use SW_TAGS, right?

However, SW_TAGS/HW_TAGS do not support VMALLOC yet.
So the code would look like:

	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
		/*
		 * Explain the relationship between KASAN_GENERIC
		 * and KASAN_VMALLOC in arm64 here.
		 * XXX: because we want VMAP stack....
		 */
		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
					    (void *)KASAN_SHADOW_END);
	} else {
		kasan_populate_early_shadow((void *)kimg_shadow_end,
					    (void *)KASAN_SHADOW_END);
		if (kimg_shadow_start > mod_shadow_end)
			kasan_populate_early_shadow((void *)mod_shadow_end,
						    (void *)kimg_shadow_start);
	}

and arch/arm64/Kconfig will add:
	select KASAN_VMALLOC if KASAN_GENERIC
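For reference, that select would end up in the ARM64 config entry,
roughly like this (the placement is my guess; the actual hunk is
whatever patch 3/4 of this series turns into):

	config ARM64
		...
		select KASAN_VMALLOC if KASAN_GENERIC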

Is this code the same as what you had in mind?

BRs,
Lecopzer


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-02-04 15:53   ` Lecopzer Chen
@ 2021-02-04 17:57     ` Will Deacon
  2021-02-04 18:41       ` Lecopzer Chen
  0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-02-04 17:57 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, linux-arm-kernel, linux-kernel, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

On Thu, Feb 04, 2021 at 11:53:46PM +0800, Lecopzer Chen wrote:
> > On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > ("kasan: support backing vmalloc space with real shadow memory")
> > > 
> > > According to how x86 ported it [1], they allocated the p4d and pgd
> > > early, but on arm64 I just follow how KAsan supports MODULES_VADDR,
> > > by not populating the vmalloc area except for the kimg address.
> > 
> > The one thing I've failed to grok from your series is how you deal with
> > vmalloc allocations where the shadow overlaps with the shadow which has
> > already been allocated for the kernel image. Please can you explain?
> 
> 
> The key point is that we don't map anything in the vmalloc shadow region,
> so we don't care where the kernel image is located inside the vmalloc area.
> 
>   kasan_map_populate(kimg_shadow_start, kimg_shadow_end,...)
> 
> The kernel image is populated with a real mapping at its shadow address.
> I `bypass' the whole shadow of the vmalloc area; the only place you will
> find vmalloc_shadow used is
> 	kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> 			(void *)KASAN_SHADOW_END);
> 
> 	-----------  vmalloc_shadow_start
>  |           |
>  |           | 
>  |           | <= non-mapping
>  |           |
>  |           |
>  |-----------|
>  |///////////|<- kimage shadow with page table mapping.
>  |-----------|
>  |           |
>  |           | <= non-mapping
>  |           |
>  ------------- vmalloc_shadow_end
>  |00000000000|
>  |00000000000| <= Zero shadow
>  |00000000000|
>  ------------- KASAN_SHADOW_END
> 
> The vmalloc shadow will be mapped on demand; see kasan_populate_vmalloc(),
> called from mm/vmalloc.c, for the details.
> So the shadow for a vmalloc region is only allocated once someone actually uses its VA.

Indeed, but the question I'm asking is what happens when an on-demand shadow
allocation from vmalloc overlaps with the shadow that we allocated early for
the kernel image?

Sounds like I have to go and read the code...

Will

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
  2021-02-04 17:57     ` Will Deacon
@ 2021-02-04 18:41       ` Lecopzer Chen
  0 siblings, 0 replies; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-04 18:41 UTC (permalink / raw)
  To: Will Deacon
  Cc: Andrew Morton, Andrey Konovalov, ardb, aryabinin, broonie,
	Catalin Marinas, dan.j.williams, Dmitry Vyukov,
	Alexander Potapenko, gustavoars, kasan-dev, Jian-Lin Chen,
	linux-arm-kernel, Linux Kernel Mailing List, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

> On Thu, Feb 04, 2021 at 11:53:46PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > >
> > > > According to how x86 ported it [1], they allocated the p4d and pgd
> > > > early, but on arm64 I just follow how KAsan supports MODULES_VADDR,
> > > > by not populating the vmalloc area except for the kimg address.
> > >
> > > The one thing I've failed to grok from your series is how you deal with
> > > vmalloc allocations where the shadow overlaps with the shadow which has
> > > already been allocated for the kernel image. Please can you explain?
> >
> >
> > The key point is that we don't map anything in the vmalloc shadow region,
> > so we don't care where the kernel image is located inside the vmalloc area.
> >
> >   kasan_map_populate(kimg_shadow_start, kimg_shadow_end,...)
> >
> > The kernel image is populated with a real mapping at its shadow address.
> > I `bypass' the whole shadow of the vmalloc area; the only place you will
> > find vmalloc_shadow used is
> >       kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> >                       (void *)KASAN_SHADOW_END);
> >
> >       -----------  vmalloc_shadow_start
> >  |           |
> >  |           |
> >  |           | <= non-mapping
> >  |           |
> >  |           |
> >  |-----------|
> >  |///////////|<- kimage shadow with page table mapping.
> >  |-----------|
> >  |           |
> >  |           | <= non-mapping
> >  |           |
> >  ------------- vmalloc_shadow_end
> >  |00000000000|
> >  |00000000000| <= Zero shadow
> >  |00000000000|
> >  ------------- KASAN_SHADOW_END
> >
> > The vmalloc shadow will be mapped on demand; see kasan_populate_vmalloc(),
> > called from mm/vmalloc.c, for the details.
> > So the shadow for a vmalloc region is only allocated once someone actually uses its VA.
>
> Indeed, but the question I'm asking is what happens when an on-demand shadow
> allocation from vmalloc overlaps with the shadow that we allocated early for
> the kernel image?
>
> Sounds like I have to go and read the code...
Oh, sorry, I misunderstood your question.

FWIW, I think this can't happen, because it would mean vmalloc()
handed out a VA range that is already allocated to the kernel image.
As far as I know, vmalloc_init() inserts the early-allocated vmas into
its vmalloc rb-tree, and those early-allocated vmas include the kernel
image.

After a quick review of the mm init code, this early vma allocation
happens in map_kernel() in arch/arm64/mm/mmu.c.
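As a rough sketch of what that early reservation looks like (modelled
on map_kernel_segment() from memory, so the helper name and field
details here are illustrative; check arch/arm64/mm/mmu.c for the real
thing):

	static struct vm_struct vmlinux_text;	/* static: used before the allocators are up */

	static void __init reserve_kernel_text_vma(void)
	{
		/* Describe the kernel text VA range as a vm_struct. */
		vmlinux_text.addr = (void *)_text;
		vmlinux_text.size = (unsigned long)_etext - (unsigned long)_text;
		vmlinux_text.flags = VM_MAP;

		/*
		 * Register it early so that vmalloc_init() later inserts
		 * it into the vmap rb-tree and never hands this range
		 * out to vmalloc() users again.
		 */
		vm_area_add_early(&vmlinux_text);
	}

So an on-demand vmalloc shadow allocation can't overlap the shadow that
was populated early for the kernel image, because vmalloc() never
returns an address inside the image.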



BRs
Lecopzer

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END
  2021-02-04 16:06         ` Lecopzer Chen
@ 2021-02-05 17:02           ` Will Deacon
  0 siblings, 0 replies; 32+ messages in thread
From: Will Deacon @ 2021-02-05 17:02 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: Andrew Morton, Andrey Konovalov, ardb, aryabinin, broonie,
	Catalin Marinas, dan.j.williams, Dmitry Vyukov,
	Alexander Potapenko, gustavoars, kasan-dev, Jian-Lin Chen,
	linux-arm-kernel, Linux Kernel Mailing List, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

On Fri, Feb 05, 2021 at 12:06:10AM +0800, Lecopzer Chen wrote:
> I think it would be better to leave this to you, since I'm not
> familiar with the relationship between vmemmap() and NUMA_NO_NODE.
> 
> So I would just keep this patch in the next version; is that fine with you?

Yes, ok.

Will

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-04 16:37         ` Lecopzer Chen
@ 2021-02-05 17:18           ` Will Deacon
  2021-02-05 17:30             ` Andrey Konovalov
  2021-02-05 18:10             ` Lecopzer Chen
  0 siblings, 2 replies; 32+ messages in thread
From: Will Deacon @ 2021-02-05 17:18 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: akpm, andreyknvl, ardb, aryabinin, broonie, catalin.marinas,
	dan.j.williams, dvyukov, glider, gustavoars, kasan-dev,
	lecopzer.chen, linux-arm-kernel, linux-kernel, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

On Fri, Feb 05, 2021 at 12:37:21AM +0800, Lecopzer Chen wrote:
> 
> > On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > > >
> > > > > Like what we do for MODULES_VADDR now, just don't early populate
> > > > > the shadow from VMALLOC_START to VMALLOC_END.
> > > > > Similarly, the kernel code mapping is now inside the VMALLOC area
> > > > > and its shadow should be kept populated.
> > > > >
> > > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > > > ---
> > > > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > > index d8e66c78440e..39b218a64279 100644
> > > > > --- a/arch/arm64/mm/kasan_init.c
> > > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > > >  {
> > > > >   u64 kimg_shadow_start, kimg_shadow_end;
> > > > >   u64 mod_shadow_start, mod_shadow_end;
> > > > > + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > > >   phys_addr_t pa_start, pa_end;
> > > > >   u64 i;
> > > > >
> > > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > > >   mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > > >   mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > > >
> > > > > + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > > + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > > +
> > > > >   /*
> > > > >    * We are going to perform proper setup of shadow memory.
> > > > >    * At first we should unmap early shadow (clear_pgds() call below).
> > > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > > >
> > > > >   kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > > >                              (void *)mod_shadow_start);
> > > > > - kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > > -                            (void *)KASAN_SHADOW_END);
> > > > > + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > > >
> > > > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > > > for *not* enabling this if you're already enabling one of the KASAN
> > > > backends?
> > >
> > > As far as I know, KASAN_VMALLOC currently only works with KASAN_GENERIC, and
> > > KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).
> >
> > The shadow is allocated dynamically though, isn't it?
> 
> Yes, but it's still a cost.
> 
> > > There may be people who can enable KASAN_GENERIC but can't use KASAN_VMALLOC
> > > due to memory constraints.
> >
> > That doesn't sound particularly realistic to me. The reason I'm pushing here
> > is because I would _really_ like to move to VMAP stack unconditionally, and
> > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
> >
> > So unless there's a really good reason not to do that, please can we make
> > this unconditional for arm64? Pretty please?
> 
> I think it's fine, since we have a good reason.
> Also, if someone has memory issues with KASAN_VMALLOC,
> they can use SW_TAGS, right?
>
> However, SW_TAGS/HW_TAGS do not support VMALLOC yet.
> So the code would look like:
> 
> 	if (IS_ENABLED(CONFIG_KASAN_GENERIC))

Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.

> 		/* explain the relationship between 
> 		 * KASAN_GENERIC and KASAN_VMALLOC in arm64
> 		 * XXX: because we want VMAP stack....
> 		 */

I don't understand the relation with SW_TAGS. The VMAP_STACK dependency is:

	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

which doesn't mention SW_TAGS at all. So that seems to imply that SW_TAGS
and VMAP_STACK are mutually exclusive :(

Will

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-05 17:18           ` Will Deacon
@ 2021-02-05 17:30             ` Andrey Konovalov
  2021-02-05 17:43               ` Will Deacon
  2021-02-05 18:10             ` Lecopzer Chen
  1 sibling, 1 reply; 32+ messages in thread
From: Andrey Konovalov @ 2021-02-05 17:30 UTC (permalink / raw)
  To: Will Deacon
  Cc: Lecopzer Chen, Andrew Morton, Ard Biesheuvel, Andrey Ryabinin,
	Mark Brown, Catalin Marinas, Dan Williams, Dmitry Vyukov,
	Alexander Potapenko, gustavoars, kasan-dev, Lecopzer Chen,
	Linux ARM, LKML, moderated list:ARM/Mediatek SoC...,
	Linux Memory Management List, Guenter Roeck, Robin Murphy, rppt,
	tyhicks, Vincenzo Frascino, yj.chiang

On Fri, Feb 5, 2021 at 6:19 PM Will Deacon <will@kernel.org> wrote:
>
> On Fri, Feb 05, 2021 at 12:37:21AM +0800, Lecopzer Chen wrote:
> >
> > > On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > > > >
> > > > > > Like what we do for MODULES_VADDR now, just don't early populate
> > > > > > the shadow from VMALLOC_START to VMALLOC_END.
> > > > > > Similarly, the kernel code mapping is now inside the VMALLOC area
> > > > > > and its shadow should be kept populated.
> > > > > >
> > > > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > > > > ---
> > > > > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > > > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > > > >
> > > > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > > > index d8e66c78440e..39b218a64279 100644
> > > > > > --- a/arch/arm64/mm/kasan_init.c
> > > > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > > > >  {
> > > > > >   u64 kimg_shadow_start, kimg_shadow_end;
> > > > > >   u64 mod_shadow_start, mod_shadow_end;
> > > > > > + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > > > >   phys_addr_t pa_start, pa_end;
> > > > > >   u64 i;
> > > > > >
> > > > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > > > >   mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > > > >   mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > > > >
> > > > > > + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > > > + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > > > +
> > > > > >   /*
> > > > > >    * We are going to perform proper setup of shadow memory.
> > > > > >    * At first we should unmap early shadow (clear_pgds() call below).
> > > > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > > > >
> > > > > >   kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > > > >                              (void *)mod_shadow_start);
> > > > > > - kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > > > -                            (void *)KASAN_SHADOW_END);
> > > > > > + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > > > >
> > > > > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > > > > for *not* enabling this if you're already enabling one of the KASAN
> > > > > backends?
> > > >
> > > > As far as I know, KASAN_VMALLOC currently only works with KASAN_GENERIC, and
> > > > KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).
> > >
> > > The shadow is allocated dynamically though, isn't it?
> >
> > Yes, but it's still a cost.
> >
> > > > There may be people who can enable KASAN_GENERIC but can't use KASAN_VMALLOC
> > > > due to memory constraints.
> > >
> > > That doesn't sound particularly realistic to me. The reason I'm pushing here
> > > is because I would _really_ like to move to VMAP stack unconditionally, and
> > > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
> > >
> > > So unless there's a really good reason not to do that, please can we make
> > > this unconditional for arm64? Pretty please?
> >
> > I think it's fine, since we have a good reason.
> > Also, if someone has memory issues with KASAN_VMALLOC,
> > they can use SW_TAGS, right?
> >
> > However, SW_TAGS/HW_TAGS do not support VMALLOC yet.
> > So the code would look like:
> >
> >       if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>
> Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.
>
> >               /* explain the relationship between
> >                * KASAN_GENERIC and KASAN_VMALLOC in arm64
> >                * XXX: because we want VMAP stack....
> >                */
>
> I don't understand the relation with SW_TAGS. The VMAP_STACK dependency is:
>
>         depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

This means that VMAP_STACK can only be enabled if KASAN_HW_TAGS=y, or
if KASAN_VMALLOC=y for the other modes.
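Spelled out per mode (my reading of that expression):

	!KASAN          -> VMAP_STACK selectable
	KASAN_HW_TAGS   -> VMAP_STACK selectable (no shadow to back)
	KASAN_GENERIC   -> VMAP_STACK selectable only with KASAN_VMALLOC=y
	KASAN_SW_TAGS   -> no vmalloc support yet, so no VMAP_STACK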

>
> which doesn't mention SW_TAGS at all. So that seems to imply that SW_TAGS
> and VMAP_STACK are mutually exclusive :(

SW_TAGS doesn't yet have vmalloc support, so it's not compatible with
VMAP_STACK. Once vmalloc support is added to SW_TAGS, KASAN_VMALLOC
should be allowed to be enabled with SW_TAGS. This series is a step
towards having that support, but doesn't implement it. That will be a
separate effort.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-05 17:30             ` Andrey Konovalov
@ 2021-02-05 17:43               ` Will Deacon
  2021-02-05 20:50                 ` Andrey Konovalov
  0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2021-02-05 17:43 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Lecopzer Chen, Andrew Morton, Ard Biesheuvel, Andrey Ryabinin,
	Mark Brown, Catalin Marinas, Dan Williams, Dmitry Vyukov,
	Alexander Potapenko, gustavoars, kasan-dev, Lecopzer Chen,
	Linux ARM, LKML, moderated list:ARM/Mediatek SoC...,
	Linux Memory Management List, Guenter Roeck, Robin Murphy, rppt,
	tyhicks, Vincenzo Frascino, yj.chiang

On Fri, Feb 05, 2021 at 06:30:44PM +0100, Andrey Konovalov wrote:
> On Fri, Feb 5, 2021 at 6:19 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Fri, Feb 05, 2021 at 12:37:21AM +0800, Lecopzer Chen wrote:
> > >
> > > > On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > > > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > > > > >
> > > > > > > Like what we do for MODULES_VADDR now, just don't early populate
> > > > > > > the shadow from VMALLOC_START to VMALLOC_END.
> > > > > > > Similarly, the kernel code mapping is now inside the VMALLOC area
> > > > > > > and its shadow should be kept populated.
> > > > > > >
> > > > > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > > > > > ---
> > > > > > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > > > > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > > > > >
> > > > > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > > > > index d8e66c78440e..39b218a64279 100644
> > > > > > > --- a/arch/arm64/mm/kasan_init.c
> > > > > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > > > > >  {
> > > > > > >   u64 kimg_shadow_start, kimg_shadow_end;
> > > > > > >   u64 mod_shadow_start, mod_shadow_end;
> > > > > > > + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > > > > >   phys_addr_t pa_start, pa_end;
> > > > > > >   u64 i;
> > > > > > >
> > > > > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > > > > >   mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > > > > >   mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > > > > >
> > > > > > > + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > > > > + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > > > > +
> > > > > > >   /*
> > > > > > >    * We are going to perform proper setup of shadow memory.
> > > > > > >    * At first we should unmap early shadow (clear_pgds() call below).
> > > > > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > > > > >
> > > > > > >   kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > > > > >                              (void *)mod_shadow_start);
> > > > > > > - kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > > > > -                            (void *)KASAN_SHADOW_END);
> > > > > > > + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > > > > >
> > > > > > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > > > > > for *not* enabling this if you're already enabling one of the KASAN
> > > > > > backends?
> > > > >
> > > > > As far as I know, KASAN_VMALLOC currently only works with KASAN_GENERIC, and
> > > > > KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).
> > > >
> > > > The shadow is allocated dynamically though, isn't it?
> > >
> > > Yes, but it's still a cost.
> > >
> > > > > There may be people who can enable KASAN_GENERIC but can't use KASAN_VMALLOC
> > > > > due to memory constraints.
> > > >
> > > > That doesn't sound particularly realistic to me. The reason I'm pushing here
> > > > is because I would _really_ like to move to VMAP stack unconditionally, and
> > > > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
> > > >
> > > > So unless there's a really good reason not to do that, please can we make
> > > > this unconditional for arm64? Pretty please?
> > >
> > > I think it's fine, since we have a good reason.
> > > Also, if someone has memory issues with KASAN_VMALLOC,
> > > they can use SW_TAGS, right?
> > >
> > > However, SW_TAGS/HW_TAGS do not support VMALLOC yet.
> > > So the code would look like:
> > >
> > >       if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> >
> > Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.
> >
> > >               /* explain the relationship between
> > >                * KASAN_GENERIC and KASAN_VMALLOC in arm64
> > >                * XXX: because we want VMAP stack....
> > >                */
> >
> > I don't understand the relation with SW_TAGS. The VMAP_STACK dependency is:
> >
> >         depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
> 
> This means that VMAP_STACK can only be enabled if KASAN_HW_TAGS=y, or
> if KASAN_VMALLOC=y for the other modes.
> 
> >
> > which doesn't mention SW_TAGS at all. So that seems to imply that SW_TAGS
> > and VMAP_STACK are mutually exclusive :(
> 
> SW_TAGS doesn't yet have vmalloc support, so it's not compatible with
> VMAP_STACK. Once vmalloc support is added to SW_TAGS, KASAN_VMALLOC
> should be allowed to be enabled with SW_TAGS. This series is a step
> towards having that support, but doesn't implement it. That will be a
> separate effort.

Ok, thanks. Then I think we should try to invert the dependency here, if
possible, so that the KASAN backends depend on !VMAP_STACK if they don't
support it, rather than silently disabling VMAP_STACK when they are
selected.
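One possible shape for that inversion, as a sketch only (the real
change would live in lib/Kconfig.kasan and would need the VMAP_STACK
side relaxed to avoid a circular dependency):

	config KASAN_SW_TAGS
		...
		# No vmalloc shadow support yet, so kernel stacks
		# cannot live in the vmalloc area.
		depends on !VMAP_STACK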

Will

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-05 17:18           ` Will Deacon
  2021-02-05 17:30             ` Andrey Konovalov
@ 2021-02-05 18:10             ` Lecopzer Chen
  1 sibling, 0 replies; 32+ messages in thread
From: Lecopzer Chen @ 2021-02-05 18:10 UTC (permalink / raw)
  To: Will Deacon
  Cc: Andrew Morton, Andrey Konovalov, ardb, aryabinin, broonie,
	Catalin Marinas, dan.j.williams, Dmitry Vyukov,
	Alexander Potapenko, gustavoars, kasan-dev, Jian-Lin Chen,
	linux-arm-kernel, Linux Kernel Mailing List, linux-mediatek,
	linux-mm, linux, robin.murphy, rppt, tyhicks, vincenzo.frascino,
	yj.chiang

Will Deacon <will@kernel.org> wrote on Saturday, Feb 6, 2021 at 1:19 AM:
>
> On Fri, Feb 05, 2021 at 12:37:21AM +0800, Lecopzer Chen wrote:
> >
> > > On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > > > >
> > > > > > Like what we do for MODULES_VADDR now, just don't early populate
> > > > > > the shadow from VMALLOC_START to VMALLOC_END.
> > > > > > Similarly, the kernel code mapping is now inside the VMALLOC area
> > > > > > and its shadow should be kept populated.
> > > > > >
> > > > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > > > > ---
> > > > > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > > > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > > > >
> > > > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > > > index d8e66c78440e..39b218a64279 100644
> > > > > > --- a/arch/arm64/mm/kasan_init.c
> > > > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > > > >  {
> > > > > >   u64 kimg_shadow_start, kimg_shadow_end;
> > > > > >   u64 mod_shadow_start, mod_shadow_end;
> > > > > > + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > > > >   phys_addr_t pa_start, pa_end;
> > > > > >   u64 i;
> > > > > >
> > > > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > > > >   mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > > > >   mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > > > >
> > > > > > + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > > > + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > > > +
> > > > > >   /*
> > > > > >    * We are going to perform proper setup of shadow memory.
> > > > > >    * At first we should unmap early shadow (clear_pgds() call below).
> > > > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > > > >
> > > > > >   kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > > > >                              (void *)mod_shadow_start);
> > > > > > - kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > > > -                            (void *)KASAN_SHADOW_END);
> > > > > > + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > > > >
> > > > > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > > > > for *not* enabling this if you're already enabling one of the KASAN
> > > > > backends?
> > > >
> > > > As far as I know, KASAN_VMALLOC currently only works with KASAN_GENERIC, and
> > > > KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).
> > >
> > > The shadow is allocated dynamically though, isn't it?
> >
> > Yes, but it's still a cost.
> >
> > > > There may be people who can enable KASAN_GENERIC but can't use KASAN_VMALLOC
> > > > due to memory constraints.
> > >
> > > That doesn't sound particularly realistic to me. The reason I'm pushing here
> > > is because I would _really_ like to move to VMAP stack unconditionally, and
> > > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
> > >
> > > So unless there's a really good reason not to do that, please can we make
> > > this unconditional for arm64? Pretty please?
> >
> > I think it's fine, since we have a good reason.
> > Also, if someone has memory issues with KASAN_VMALLOC,
> > they can use SW_TAGS, right?
> >
> > However, SW_TAGS/HW_TAGS do not support VMALLOC yet.
> > So the code would look like:
> >
> >       if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>
> Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.

OK, this also makes sense.
My first thought was that selecting KASAN_GENERIC implying VMALLOC on
arm64 is a special case, so it needs to be well documented.
I'll document this in the commit message of the Kconfig patch to avoid
messing up the code here.

I'm going to send the v3 patch; thanks again for your review.

BRs,
Lecopzer

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
  2021-02-05 17:43               ` Will Deacon
@ 2021-02-05 20:50                 ` Andrey Konovalov
  0 siblings, 0 replies; 32+ messages in thread
From: Andrey Konovalov @ 2021-02-05 20:50 UTC (permalink / raw)
  To: Will Deacon
  Cc: Lecopzer Chen, Andrew Morton, Ard Biesheuvel, Andrey Ryabinin,
	Mark Brown, Catalin Marinas, Dan Williams, Dmitry Vyukov,
	Alexander Potapenko, gustavoars, kasan-dev, Lecopzer Chen,
	Linux ARM, LKML, moderated list:ARM/Mediatek SoC...,
	Linux Memory Management List, Guenter Roeck, Robin Murphy, rppt,
	tyhicks, Vincenzo Frascino, yj.chiang

On Fri, Feb 5, 2021 at 6:43 PM Will Deacon <will@kernel.org> wrote:
>
> On Fri, Feb 05, 2021 at 06:30:44PM +0100, Andrey Konovalov wrote:
> > On Fri, Feb 5, 2021 at 6:19 PM Will Deacon <will@kernel.org> wrote:
> > >
> > > On Fri, Feb 05, 2021 at 12:37:21AM +0800, Lecopzer Chen wrote:
> > > >
> > > > > On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > > > > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > > > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > > > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > > > > > >
> > > > > > > > Like what we do for MODULES_VADDR now, just don't early populate
> > > > > > > > the shadow from VMALLOC_START to VMALLOC_END.
> > > > > > > > Similarly, the kernel code mapping is now inside the VMALLOC area
> > > > > > > > and its shadow should be kept populated.
> > > > > > > >
> > > > > > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > > > > > > > ---
> > > > > > > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > > > > > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > > > > > index d8e66c78440e..39b218a64279 100644
> > > > > > > > --- a/arch/arm64/mm/kasan_init.c
> > > > > > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > > > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > > > > > >  {
> > > > > > > >   u64 kimg_shadow_start, kimg_shadow_end;
> > > > > > > >   u64 mod_shadow_start, mod_shadow_end;
> > > > > > > > + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > > > > > >   phys_addr_t pa_start, pa_end;
> > > > > > > >   u64 i;
> > > > > > > >
> > > > > > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > > > > > >   mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > > > > > >   mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > > > > > >
> > > > > > > > + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > > > > > + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > > > > > +
> > > > > > > >   /*
> > > > > > > >    * We are going to perform proper setup of shadow memory.
> > > > > > > >    * At first we should unmap early shadow (clear_pgds() call below).
> > > > > > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > > > > > >
> > > > > > > >   kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > > > > > >                              (void *)mod_shadow_start);
> > > > > > > > - kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > > > > > -                            (void *)KASAN_SHADOW_END);
> > > > > > > > + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > > > > > >
> > > > > > > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > > > > > > for *not* enabling this if you're already enabling one of the KASAN
> > > > > > > backends?
> > > > > >
> > > > > > As far as I know, KASAN_VMALLOC currently only works with KASAN_GENERIC, and
> > > > > > KASAN_VMALLOC uses more memory to map real shadow memory (1/8 of the vmalloc VA).
> > > > >
> > > > > The shadow is allocated dynamically though, isn't it?
> > > >
> > > > Yes, but it's still a cost.
> > > >
> > > > > > There may be people who can enable KASAN_GENERIC but can't use KASAN_VMALLOC
> > > > > > due to memory constraints.
> > > > >
> > > > > That doesn't sound particularly realistic to me. The reason I'm pushing here
> > > > > is because I would _really_ like to move to VMAP stack unconditionally, and
> > > > > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
> > > > >
> > > > > So unless there's a really good reason not to do that, please can we make
> > > > > this unconditional for arm64? Pretty please?
> > > >
> > > > I think it's fine, since we have a good reason.
> > > > Also, if someone has memory issues with KASAN_VMALLOC,
> > > > they can use SW_TAGS, right?
> > > >
> > > > However, SW_TAGS/HW_TAGS do not support VMALLOC yet.
> > > > So the code would look like:
> > > >
> > > >       if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> > >
> > > Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.
> > >
> > > >               /* explain the relationship between
> > > >                * KASAN_GENERIC and KASAN_VMALLOC in arm64
> > > >                * XXX: because we want VMAP stack....
> > > >                */
> > >
> > > I don't understand the relation with SW_TAGS. The VMAP_STACK dependency is:
> > >
> > >         depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
> >
> > This means that VMAP_STACK can only be enabled if KASAN_HW_TAGS=y, or
> > if KASAN_VMALLOC=y for the other modes.
> >
> > >
> > > which doesn't mention SW_TAGS at all. So that seems to imply that SW_TAGS
> > > and VMAP_STACK are mutually exclusive :(
> >
> > SW_TAGS doesn't yet have vmalloc support, so it's not compatible with
> > VMAP_STACK. Once vmalloc support is added to SW_TAGS, KASAN_VMALLOC
> > should be allowed to be enabled with SW_TAGS. This series is a step
> > towards having that support, but doesn't implement it. That will be a
> > separate effort.
>
> Ok, thanks. Then I think we should try to invert the dependency here, if
> possible, so that the KASAN backends depend on !VMAP_STACK if they don't
> support it, rather than silently disabling VMAP_STACK when they are
> selected.

SGTM. Not sure if I will get to this in the near future, so I filed
a bug: https://bugzilla.kernel.org/show_bug.cgi?id=211581

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2021-02-05 20:54 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-09 10:32 [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
2021-01-09 10:32 ` [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC Lecopzer Chen
2021-02-03 18:37   ` Ard Biesheuvel
2021-02-04  6:21     ` Lecopzer Chen
2021-02-04 12:45   ` Will Deacon
2021-02-04 14:46     ` Lecopzer Chen
2021-02-04 15:01       ` Will Deacon
2021-02-04 16:37         ` Lecopzer Chen
2021-02-05 17:18           ` Will Deacon
2021-02-05 17:30             ` Andrey Konovalov
2021-02-05 17:43               ` Will Deacon
2021-02-05 20:50                 ` Andrey Konovalov
2021-02-05 18:10             ` Lecopzer Chen
2021-01-09 10:32 ` [PATCH v2 2/4] arm64: kasan: abstract _text and _end to KERNEL_START/END Lecopzer Chen
2021-02-04 12:46   ` Will Deacon
2021-02-04 14:51     ` Lecopzer Chen
2021-02-04 14:55       ` Will Deacon
2021-02-04 16:06         ` Lecopzer Chen
2021-02-05 17:02           ` Will Deacon
2021-01-09 10:32 ` [PATCH v2 3/4] arm64: Kconfig: support CONFIG_KASAN_VMALLOC Lecopzer Chen
2021-01-09 10:32 ` [PATCH v2 4/4] arm64: kaslr: support randomized module area with KASAN_VMALLOC Lecopzer Chen
2021-01-27 23:04   ` Will Deacon
2021-01-28  8:53     ` Lecopzer Chen
2021-01-28 20:26       ` Will Deacon
2021-01-21 10:19 ` [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC Lecopzer Chen
2021-01-21 17:44 ` Andrey Konovalov
2021-01-22 19:05   ` Will Deacon
2021-02-03 18:31 ` Ard Biesheuvel
2021-02-04 12:49 ` Will Deacon
2021-02-04 15:53   ` Lecopzer Chen
2021-02-04 17:57     ` Will Deacon
2021-02-04 18:41       ` Lecopzer Chen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).