* [PATCH 0/5 v15] KASan for Arm
@ 2020-10-12 21:56 Linus Walleij
  2020-10-12 21:56 ` [PATCH 1/5 v15] ARM: Disable KASan instrumentation for some code Linus Walleij
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Linus Walleij @ 2020-10-12 21:56 UTC (permalink / raw)
  To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin, Mike Rapoport
  Cc: Linus Walleij, Arnd Bergmann, linux-arm-kernel

This is the 15th iteration of KASan for ARM/AArch32.

I dropped my fix at the beginning of the series in
favor of Ard's more elaborate and thorough fix, which
moves the DTB out of the kernel linear mapped region
and into its own part of the memory.

This fixes my particular issue on the Qualcomm APQ8060
and I hope it may also solve Florian's issue and what
Ard has been seeing. KASan should be working with
pretty much everything you throw at it, unless you
do what I did and run it on a 64MB system, where
under some load it can run into the OOM killer for
obvious reasons.

You are encouraged to test this patch set to find out-of-bounds
memory bugs in ARM32 platforms and drivers.
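
If you want to give it a spin, a configuration fragment along these
lines should do (just a sketch; CONFIG_TEST_KASAN is the generic KASan
self-test module, not part of this series, and only if your tree
carries it):

  CONFIG_KASAN=y
  CONFIG_KASAN_OUTLINE=y    # or KASAN_INLINE: faster, much bigger image
  CONFIG_TEST_KASAN=m       # load the module to provoke example reports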

There is a git branch you can pull in:
https://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-integrator.git/log/?h=kasan

This branch includes Ard's two patches.

As Ard's patches are in Russell's patch tracker, I will
put these there as well if this version works for everyone.

Abbott Liu (1):
  ARM: Define the virtual space of KASan's shadow region

Andrey Ryabinin (3):
  ARM: Disable KASan instrumentation for some code
  ARM: Replace string mem* functions for KASan
  ARM: Enable KASan for ARM

Linus Walleij (1):
  ARM: Initialize the mapping of KASan shadow memory

 Documentation/arm/memory.rst                  |   5 +
 Documentation/dev-tools/kasan.rst             |   4 +-
 .../features/debug/KASAN/arch-support.txt     |   2 +-
 arch/arm/Kconfig                              |  10 +
 arch/arm/boot/compressed/Makefile             |   1 +
 arch/arm/boot/compressed/string.c             |  19 ++
 arch/arm/include/asm/kasan.h                  |  33 ++
 arch/arm/include/asm/kasan_def.h              |  81 +++++
 arch/arm/include/asm/memory.h                 |   5 +
 arch/arm/include/asm/pgalloc.h                |   8 +-
 arch/arm/include/asm/string.h                 |  21 ++
 arch/arm/include/asm/thread_info.h            |   8 +
 arch/arm/include/asm/uaccess-asm.h            |   2 +-
 arch/arm/kernel/entry-armv.S                  |   3 +-
 arch/arm/kernel/entry-common.S                |   9 +-
 arch/arm/kernel/head-common.S                 |   7 +-
 arch/arm/kernel/setup.c                       |   2 +
 arch/arm/kernel/unwind.c                      |   6 +-
 arch/arm/lib/memcpy.S                         |   3 +
 arch/arm/lib/memmove.S                        |   5 +-
 arch/arm/lib/memset.S                         |   3 +
 arch/arm/mm/Makefile                          |   5 +
 arch/arm/mm/kasan_init.c                      | 284 ++++++++++++++++++
 arch/arm/mm/mmu.c                             |  18 ++
 arch/arm/mm/pgd.c                             |  16 +-
 arch/arm/vdso/Makefile                        |   2 +
 26 files changed, 548 insertions(+), 14 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan.h
 create mode 100644 arch/arm/include/asm/kasan_def.h
 create mode 100644 arch/arm/mm/kasan_init.c

-- 
2.26.2



* [PATCH 1/5 v15] ARM: Disable KASan instrumentation for some code
  2020-10-12 21:56 [PATCH 0/5 v15] KASan for Arm Linus Walleij
@ 2020-10-12 21:56 ` Linus Walleij
  2020-10-12 21:56 ` [PATCH 2/5 v15] ARM: Replace string mem* functions for KASan Linus Walleij
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Linus Walleij @ 2020-10-12 21:56 UTC (permalink / raw)
  To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin, Mike Rapoport
  Cc: Arnd Bergmann, Marc Zyngier, Linus Walleij, kasan-dev,
	Alexander Potapenko, linux-arm-kernel, Dmitry Vyukov

From: Andrey Ryabinin <aryabinin@virtuozzo.com>

Disable instrumentation for arch/arm/boot/compressed/*
since that code is executed before the kernel has even
set up its mappings and is definitely out of scope for
KASan.

Disable instrumentation of arch/arm/vdso/* because that code
is not linked with the kernel image, so the KASan management
code would fail to link.

Disable instrumentation of arch/arm/mm/physaddr.c. See commit
ec6d06efb0ba ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")
for more details.

Disable the KASan check in the function unwind_pop_register() because
it does not matter if KASan checks fail when unwind_pop_register()
reads the stack memory of a task.

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v14->v15:
- Resend with the other patches
ChangeLog v13->v14:
- Resend with the other patches
ChangeLog v12->v13:
- Rebase on kernel v5.9-rc1
ChangeLog v11->v12:
- Resend with the other changes.
ChangeLog v10->v11:
- Resend with the other changes.
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Collect Ard's tags.
ChangeLog v7->v8:
- Do not sanitize arch/arm/mm/mmu.c.
  Apart from being intuitively correct, it turns out that KASan
  will insert a __asan_load4() into the set_pte_at() function
  in mmu.c, and set_pte_at() is something that KASan calls during
  early initialization to set up the shadow memory. Naturally,
  __asan_load4() cannot be called before the shadow memory is
  set up, so we need to exclude mmu.c from sanitization.
ChangeLog v6->v7:
- Removed the KVM instrumentaton disablement since KVM
  on ARM32 is gone.
---
 arch/arm/boot/compressed/Makefile | 1 +
 arch/arm/kernel/unwind.c          | 6 +++++-
 arch/arm/mm/Makefile              | 2 ++
 arch/arm/vdso/Makefile            | 2 ++
 4 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index b1147b7f2c8d..362e17e37398 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -24,6 +24,7 @@ OBJS		+= hyp-stub.o
 endif
 
 GCOV_PROFILE		:= n
+KASAN_SANITIZE		:= n
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
 KCOV_INSTRUMENT		:= n
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index d2bd0df2318d..f35eb584a18a 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -236,7 +236,11 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
 		if (*vsp >= (unsigned long *)ctrl->sp_high)
 			return -URC_FAILURE;
 
-	ctrl->vrs[reg] = *(*vsp)++;
+	/* Use READ_ONCE_NOCHECK here to avoid this memory access
+	 * from being tracked by KASAN.
+	 */
+	ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+	(*vsp)++;
 	return URC_OK;
 }
 
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7cb1699fbfc4..99699c32d8a5 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -7,6 +7,7 @@ obj-y				:= extable.o fault.o init.o iomap.o
 obj-y				+= dma-mapping$(MMUEXT).o
 obj-$(CONFIG_MMU)		+= fault-armv.o flush.o idmap.o ioremap.o \
 				   mmap.o pgd.o mmu.o pageattr.o
+KASAN_SANITIZE_mmu.o		:= n
 
 ifneq ($(CONFIG_MMU),y)
 obj-y				+= nommu.o
@@ -16,6 +17,7 @@ endif
 obj-$(CONFIG_ARM_PTDUMP_CORE)	+= dump.o
 obj-$(CONFIG_ARM_PTDUMP_DEBUGFS)	+= ptdump_debugfs.o
 obj-$(CONFIG_MODULES)		+= proc-syms.o
+KASAN_SANITIZE_physaddr.o	:= n
 obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 
 obj-$(CONFIG_ALIGNMENT_TRAP)	+= alignment.o
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index a54f70731d9f..171c3dcb5242 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -42,6 +42,8 @@ GCOV_PROFILE := n
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
 KCOV_INSTRUMENT := n
 
+KASAN_SANITIZE := n
+
 # Force dependency
 $(obj)/vdso.o : $(obj)/vdso.so
 
-- 
2.26.2



* [PATCH 2/5 v15] ARM: Replace string mem* functions for KASan
  2020-10-12 21:56 [PATCH 0/5 v15] KASan for Arm Linus Walleij
  2020-10-12 21:56 ` [PATCH 1/5 v15] ARM: Disable KASan instrumentation for some code Linus Walleij
@ 2020-10-12 21:56 ` Linus Walleij
  2020-10-14 10:59   ` [PATCH] fixup! " Ahmad Fatoum
  2020-10-12 21:56 ` [PATCH 3/5 v15] ARM: Define the virtual space of KASan's shadow region Linus Walleij
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Linus Walleij @ 2020-10-12 21:56 UTC (permalink / raw)
  To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin, Mike Rapoport
  Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
	linux-arm-kernel, Dmitry Vyukov

From: Andrey Ryabinin <aryabinin@virtuozzo.com>

Functions like memset()/memmove()/memcpy() do a lot of memory
accesses.

If a bad pointer is passed to one of these functions it is important
to catch this. Compiler instrumentation cannot do this since these
functions are written in assembly.

KASan replaces these memory functions with instrumented variants.

The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them.

The original functions have aliases with a '__' prefix in their
name, so we can call the non-instrumented variant if needed.

We must use __memcpy()/__memset() in place of memcpy()/memset()
when we copy .data to RAM and when we clear .bss, because
kasan_early_init cannot be called before the initialization of
.data and .bss.

For the kernel compression and EFI libstub's custom string
libraries we need a special quirk: even if these are built
without KASan enabled, they rely on the global headers for their
custom string libraries, which means that e.g. memcpy()
will be defined to __memcpy() and we get link failures.
Since these implementations are written in C rather than
assembly we use e.g. __alias(memcpy) to redirect any
users back to the local implementation.
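
For readers unfamiliar with the weak/alias pattern, here is a minimal
stand-alone C sketch of the same mechanism (hypothetical names, user
space, not the kernel code): the weak symbol can be overridden by a
strong definition at link time, while the "__" alias always resolves
to the local implementation.

  #include <stdio.h>
  #include <stddef.h>

  /* Weak, so a strong (e.g. instrumented) do_copy elsewhere wins. */
  __attribute__((weak)) void *do_copy(void *d, const void *s, size_t n)
  {
  	char *dp = d;
  	const char *sp = s;

  	while (n--)
  		*dp++ = *sp++;
  	return d;
  }

  /* The "__" name always resolves to this file's implementation. */
  void *__do_copy(void *d, const void *s, size_t n)
  	__attribute__((alias("do_copy")));

  int main(void)
  {
  	char buf[3] = { 0 };

  	do_copy(buf, "hi", 2);    /* replaceable at link time */
  	__do_copy(buf, "hi", 2);  /* never replaced */
  	puts(buf);
  	return 0;
  }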

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v14->v15:
- Resend with the other patches
ChangeLog v13->v14:
- Resend with the other patches
ChangeLog v12->v13:
- Rebase on kernel v5.9-rc1
ChangeLog v11->v12:
- Resend with the other changes.
ChangeLog v10->v11:
- Resend with the other changes.
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Collect Ard's tags.
ChangeLog v7->v8:
- Use the less invasive version of handling the global redefines
  of the string functions in the decompressor: __alias() the
  functions locally in the library.
- Put in some more comments so readers of the code knows what
  is going on.
ChangeLog v6->v7:
- Move the hacks around __SANITIZE_ADDRESS__ into this file
- Edit the commit message
- Rebase on the other v2 patches
---
 arch/arm/boot/compressed/string.c | 19 +++++++++++++++++++
 arch/arm/include/asm/string.h     | 21 +++++++++++++++++++++
 arch/arm/kernel/head-common.S     |  4 ++--
 arch/arm/lib/memcpy.S             |  3 +++
 arch/arm/lib/memmove.S            |  5 ++++-
 arch/arm/lib/memset.S             |  3 +++
 6 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/arch/arm/boot/compressed/string.c b/arch/arm/boot/compressed/string.c
index ade5079bebbf..8c0fa276d994 100644
--- a/arch/arm/boot/compressed/string.c
+++ b/arch/arm/boot/compressed/string.c
@@ -7,6 +7,25 @@
 
 #include <linux/string.h>
 
+/*
+ * The decompressor is built without KASan but uses the same redirects as the
+ * rest of the kernel when CONFIG_KASAN is enabled, defining e.g. memcpy()
+ * to __memcpy() but since we are not linking with the main kernel string
+ * library in the decompressor, that will lead to link failures.
+ *
+ * Undefine KASan's versions, define the wrapped functions and alias them to
+ * the right names so that when e.g. __memcpy() appear in the code, it will
+ * still be linked to this local version of memcpy().
+ */
+#ifdef CONFIG_KASAN
+#undef memcpy
+#undef memmove
+#undef memset
+void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
+void *__memmove(void *__dest, __const void *__src, size_t count) __alias(memmove);
+void *__memset(void *s, int c, size_t count) __alias(memset);
+#endif
+
 void *memcpy(void *__dest, __const void *__src, size_t __n)
 {
 	int i = 0;
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 111a1d8a41dd..947f93037d87 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -5,6 +5,9 @@
 /*
  * We don't do inline string functions, since the
  * optimised inline asm versions are not small.
+ *
+ * The __underscore versions of some functions are for KASan to be able
+ * to replace them with instrumented versions.
  */
 
 #define __HAVE_ARCH_STRRCHR
@@ -15,15 +18,18 @@ extern char * strchr(const char * s, int c);
 
 #define __HAVE_ARCH_MEMCPY
 extern void * memcpy(void *, const void *, __kernel_size_t);
+extern void *__memcpy(void *dest, const void *src, __kernel_size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 extern void * memmove(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *dest, const void *src, __kernel_size_t n);
 
 #define __HAVE_ARCH_MEMCHR
 extern void * memchr(const void *, int, __kernel_size_t);
 
 #define __HAVE_ARCH_MEMSET
 extern void * memset(void *, int, __kernel_size_t);
+extern void *__memset(void *s, int c, __kernel_size_t n);
 
 #define __HAVE_ARCH_MEMSET32
 extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
@@ -39,4 +45,19 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
 	return __memset64(p, v, n * 8, v >> 32);
 }
 
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * must use non-instrumented versions of the mem*
+ * functions named __memcpy() etc. All such kernel code has
+ * been tagged with KASAN_SANITIZE_file.o = n, which means
+ * that the address sanitization argument isn't passed to the
+ * compiler, and __SANITIZE_ADDRESS__ is not set. As a result
+ * these defines kick in.
+ */
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 4a3982812a40..6840c7c60a85 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -95,7 +95,7 @@ __mmap_switched:
  THUMB(	ldmia	r4!, {r0, r1, r2, r3} )
  THUMB(	mov	sp, r3 )
 	sub	r2, r2, r1
-	bl	memcpy				@ copy .data to RAM
+	bl	__memcpy			@ copy .data to RAM
 #endif
 
    ARM(	ldmia	r4!, {r0, r1, sp} )
@@ -103,7 +103,7 @@ __mmap_switched:
  THUMB(	mov	sp, r3 )
 	sub	r2, r1, r0
 	mov	r1, #0
-	bl	memset				@ clear .bss
+	bl	__memset			@ clear .bss
 
 	ldmia	r4, {r0, r1, r2, r3}
 	str	r9, [r0]			@ Save processor ID
diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S
index 09a333153dc6..ad4625d16e11 100644
--- a/arch/arm/lib/memcpy.S
+++ b/arch/arm/lib/memcpy.S
@@ -58,6 +58,8 @@
 
 /* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
 
+.weak memcpy
+ENTRY(__memcpy)
 ENTRY(mmiocpy)
 ENTRY(memcpy)
 
@@ -65,3 +67,4 @@ ENTRY(memcpy)
 
 ENDPROC(memcpy)
 ENDPROC(mmiocpy)
+ENDPROC(__memcpy)
diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index b50e5770fb44..fd123ea5a5a4 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -24,12 +24,14 @@
  * occurring in the opposite direction.
  */
 
+.weak memmove
+ENTRY(__memmove)
 ENTRY(memmove)
 	UNWIND(	.fnstart			)
 
 		subs	ip, r0, r1
 		cmphi	r2, ip
-		bls	memcpy
+		bls	__memcpy
 
 		stmfd	sp!, {r0, r4, lr}
 	UNWIND(	.fnend				)
@@ -222,3 +224,4 @@ ENTRY(memmove)
 18:		backward_copy_shift	push=24	pull=8
 
 ENDPROC(memmove)
+ENDPROC(__memmove)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index 6ca4535c47fb..0e7ff0423f50 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -13,6 +13,8 @@
 	.text
 	.align	5
 
+.weak memset
+ENTRY(__memset)
 ENTRY(mmioset)
 ENTRY(memset)
 UNWIND( .fnstart         )
@@ -132,6 +134,7 @@ UNWIND( .fnstart            )
 UNWIND( .fnend   )
 ENDPROC(memset)
 ENDPROC(mmioset)
+ENDPROC(__memset)
 
 ENTRY(__memset32)
 UNWIND( .fnstart         )
-- 
2.26.2



* [PATCH 3/5 v15] ARM: Define the virtual space of KASan's shadow region
  2020-10-12 21:56 [PATCH 0/5 v15] KASan for Arm Linus Walleij
  2020-10-12 21:56 ` [PATCH 1/5 v15] ARM: Disable KASan instrumentation for some code Linus Walleij
  2020-10-12 21:56 ` [PATCH 2/5 v15] ARM: Replace string mem* functions for KASan Linus Walleij
@ 2020-10-12 21:56 ` Linus Walleij
  2020-10-12 21:57 ` [PATCH 4/5 v15] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Linus Walleij @ 2020-10-12 21:56 UTC (permalink / raw)
  To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin, Mike Rapoport
  Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
	linux-arm-kernel, Dmitry Vyukov

From: Abbott Liu <liuwenliang@huawei.com>

Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32bit architecture) out of the virtual address
space to use as shadow memory for KASan as follows:

 +----+ 0xffffffff
 |    |\
 |    | |-> Static kernel image (vmlinux) BSS and page table
 |    |/
 +----+ PAGE_OFFSET
 |    |\
 |    | |->  Loadable kernel modules virtual address space area
 |    |/
 +----+ MODULES_VADDR = KASAN_SHADOW_END
 |    |\
 |    | |-> The shadow area of kernel virtual address.
 |    |/
 +----+->  TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
 |    |\   shadow address of MODULES_VADDR
 |    | |
 |    | |
 |    | |-> The user space area in lowmem. The kernel address
 |    | |   sanitizer does not use this space, nor does it map it.
 |    | |
 |    | |
 |    | |
 |    | |
 |    |/
 ------ 0

0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It is used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.

KASAN_SHADOW_START:
 This value is the shadow address of MODULES_VADDR. It is the
 start of kernel virtual space. Since we have modules to load, we need
 to also cover that area with shadow memory so we can find memory
 bugs in modules.

KASAN_SHADOW_END:
 This value is the shadow address of 0x100000000: the mapping that would
 be after the end of the kernel memory at 0xffffffff. It is the end of
 the kernel address sanitizer's shadow area. It is also the start of the
 module area.

KASAN_SHADOW_OFFSET:
 This value is used to map an address to the corresponding shadow
 address by the following formula:

   shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;

 As you would expect, >> 3 is equal to dividing by 8, meaning each
 byte in the shadow memory covers 8 bytes of kernel memory, so one
 bit of shadow memory per byte of kernel memory is used.

 The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
 on the VMSPLIT layout of the system: the kernel and userspace can
 split up lowmem in different ways according to needs, so we calculate
 the shadow offset depending on this.

When KASan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in the
*.S files.

The kernel and modules may use different amounts of memory,
according to the VMSPLIT configuration, which in turn
determines the PAGE_OFFSET.

We use the following KASAN_SHADOW_OFFSETs depending on how the
virtual memory is split up:

- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
  - The kernel address space is 3G (0xc0000000)
  - PAGE_OFFSET is then set to 0x40000000 so the kernel static
    image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
    so the modules use addresses 0x3f000000 .. 0x3fffffff
  - So the addresses 0x3f000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0xc1000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x18200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x26e00000, to
    KASAN_SHADOW_END at 0x3effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0x3f000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
    KASAN_SHADOW_OFFSET = 0x1f000000

- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
  - The kernel space is 2G (0x80000000)
  - PAGE_OFFSET is set to 0x80000000 so the kernel static
    image uses 0x80000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
    so the modules use addresses 0x7f000000 .. 0x7fffffff
  - So the addresses 0x7f000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x81000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x10200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x6ee00000, to
    KASAN_SHADOW_END at 0x7effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0x7f000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
    KASAN_SHADOW_OFFSET = 0x5f000000

- 0x9f000000 if we have 3G userspace / 1G kernelspace split,
  and this is the default split for ARM:
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xc0000000 so the kernel static
    image uses 0xc0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
    so the modules use addresses 0xbf000000 .. 0xbfffffff
  - So the addresses 0xbf000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x41000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x08200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xb6e00000, to
    KASAN_SHADOW_END at 0xbeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0xbf000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
    KASAN_SHADOW_OFFSET = 0x9f000000

- 0x8f000000 if we have 3G userspace / 1G kernelspace with
  full 1 GB low memory (VMSPLIT_3G_OPT):
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xb0000000 so the kernel static
    image uses 0xb0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
    so the modules use addresses 0xaf000000 .. 0xafffffff
  - So the addresses 0xaf000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x51000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x0a200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xa4e00000, to
    KASAN_SHADOW_END at 0xaeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0xaf000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
    KASAN_SHADOW_OFFSET = 0x8f000000

- The default value of 0xffffffff for KASAN_SHADOW_OFFSET
  is an error value. We should always match one of the
  above shadow offsets.
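
To make the arithmetic easy to check, here is a small stand-alone
user space sketch (not kernel code) of the mapping for the default
3G/1G split:

  #include <stdio.h>
  #include <stdint.h>

  #define KASAN_SHADOW_SCALE_SHIFT 3
  #define KASAN_SHADOW_OFFSET      0x9f000000UL /* default VMSPLIT_3G value */

  static uint32_t mem_to_shadow(uint32_t addr)
  {
  	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
  }

  int main(void)
  {
  	/* MODULES_VADDR maps to the first shadow byte, KASAN_SHADOW_START */
  	printf("0x%08x -> 0x%08x\n", 0xbf000000u, mem_to_shadow(0xbf000000u));
  	/* 0xffffffff maps to the last shadow byte, KASAN_SHADOW_END - 1 */
  	printf("0x%08x -> 0x%08x\n", 0xffffffffu, mem_to_shadow(0xffffffffu));
  	return 0;
  }

This prints 0xb6e00000 and 0xbeffffff, matching the KASAN_SHADOW_START
and last shadow byte calculated above.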

When we do this, TASK_SIZE will sometimes get somewhat odd values
that will not fit into an immediate mov assembly instruction.
To account for this, we need to rewrite some assembly using
TASK_SIZE like this:

-       mov     r1, #TASK_SIZE
+       ldr     r1, =TASK_SIZE

or

-       cmp     r4, #TASK_SIZE
+       ldr     r0, =TASK_SIZE
+       cmp     r4, r0

This is done to avoid the immediate #TASK_SIZE, which needs to
fit into a limited number of bits.

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v14->v15:
- Resend with the other patches
ChangeLog v13->v14:
- Resend with the other patches
ChangeLog v12->v13:
- Rebase on kernel v5.9-rc1
ChangeLog v11->v12:
- Resend with the other changes.
ChangeLog v10->v11:
- Resend with the other changes.
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Collect Ard's tags.
ChangeLog v7->v8:
- Rewrote the PMD clearing code to take into account that
  KASan may not always be adjacent to MODULES_VADDR: if we
  compile for thumb, then there will be an 8 MB hole between
  the shadow memory and MODULES_VADDR. Make this explicit and
  use the KASAN defines with an explicit ifdef so it is clear
  what is going on in the prepare_page_table().
- Patch memory.rst to reflect the location of KASan shadow
  memory.
ChangeLog v6->v7:
- Use the SPDX license identifier.
- Rewrote the commit message and updates the illustration.
- Move KASAN_OFFSET Kconfig set-up into this patch and put it
  right after PAGE_OFFSET so it is clear how this works, and
  we have all defines in one patch.
- Added KASAN_SHADOW_OFFSET of 0x8f000000 for 3G_OPT.
  See the calculation in the commit message.
- Updated the commit message with detailed information on
  how KASAN_SHADOW_OFFSET is obtained for the different
  VMSPLIT/PAGE_OFFSET options.
---
 Documentation/arm/memory.rst       |  5 ++
 arch/arm/Kconfig                   |  9 ++++
 arch/arm/include/asm/kasan_def.h   | 81 ++++++++++++++++++++++++++++++
 arch/arm/include/asm/memory.h      |  5 ++
 arch/arm/include/asm/uaccess-asm.h |  2 +-
 arch/arm/kernel/entry-armv.S       |  3 +-
 arch/arm/kernel/entry-common.S     |  9 ++--
 arch/arm/mm/mmu.c                  | 18 +++++++
 8 files changed, 127 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan_def.h

diff --git a/Documentation/arm/memory.rst b/Documentation/arm/memory.rst
index 34bb23c44a71..0cb1e2938823 100644
--- a/Documentation/arm/memory.rst
+++ b/Documentation/arm/memory.rst
@@ -77,6 +77,11 @@ MODULES_VADDR	MODULES_END-1	Kernel module space
 				Kernel modules inserted via insmod are
 				placed here using dynamic mappings.
 
+TASK_SIZE	MODULES_VADDR-1	KASan shadow memory when KASan is in use.
+				The range from MODULES_VADDR to the top
+				of the memory is shadowed here with 1 bit
+				per byte of memory.
+
 00001000	TASK_SIZE-1	User space mappings
 				Per-thread mappings are placed here via
 				the mmap() system call.
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e00d94b16658..0489b8d07172 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1324,6 +1324,15 @@ config PAGE_OFFSET
 	default 0xB0000000 if VMSPLIT_3G_OPT
 	default 0xC0000000
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0x1f000000 if PAGE_OFFSET=0x40000000
+	default 0x5f000000 if PAGE_OFFSET=0x80000000
+	default 0x9f000000 if PAGE_OFFSET=0xC0000000
+	default 0x8f000000 if PAGE_OFFSET=0xB0000000
+	default 0xffffffff
+
 config NR_CPUS
 	int "Maximum number of CPUs (2-32)"
 	range 2 32
diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 000000000000..5739605aa7cf
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *  arch/arm/include/asm/kasan_def.h
+ *
+ *  Copyright (c) 2018 Huawei Technologies Co., Ltd.
+ *
+ *  Author: Abbott Liu <liuwenliang@huawei.com>
+ */
+
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ * Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for
+ * the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
+ * addressable by a 32bit architecture) out of the virtual address
+ * space to use as shadow memory for KASan as follows:
+ *
+ * +----+ 0xffffffff
+ * |    |\
+ * |    | |-> Static kernel image (vmlinux) BSS and page table
+ * |    |/
+ * +----+ PAGE_OFFSET
+ * |    |\
+ * |    | |->  Loadable kernel modules virtual address space area
+ * |    |/
+ * +----+ MODULES_VADDR = KASAN_SHADOW_END
+ * |    |\
+ * |    | |-> The shadow area of kernel virtual address.
+ * |    |/
+ * +----+->  TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
+ * |    |\   shadow address of MODULES_VADDR
+ * |    | |
+ * |    | |
+ * |    | |-> The user space area in lowmem. The kernel address
+ * |    | |   sanitizer does not use this space, nor does it map it.
+ * |    | |
+ * |    | |
+ * |    | |
+ * |    | |
+ * |    |/
+ * ------ 0
+ *
+ * 1) KASAN_SHADOW_START
+ *   This value is the shadow address of MODULES_VADDR. It is the
+ *   start of kernel virtual space. Since we have modules to load, we need
+ *   to also cover that area with shadow memory so we can find memory
+ *   bugs in modules.
+ *
+ * 2) KASAN_SHADOW_END
+ *   This value is the shadow address of 0x100000000: the mapping that would
+ *   be after the end of the kernel memory at 0xffffffff. It is the end of
+ *   the kernel address sanitizer's shadow area. It is also the start of the
+ *   module area.
+ *
+ * 3) KASAN_SHADOW_OFFSET:
+ *   This value is used to map an address to the corresponding shadow
+ *   address by the following formula:
+ *
+ *	shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
+ *
+ *  As you would expect, >> 3 is equal to dividing by 8, meaning each
+ *  byte in the shadow memory covers 8 bytes of kernel memory, so one
+ *  bit of shadow memory per byte of kernel memory is used.
+ *
+ *  The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
+ *  on the VMSPLIT layout of the system: the kernel and userspace can
+ *  split up lowmem in different ways according to needs, so we calculate
+ *  the shadow offset depending on this.
+ */
+
+#define KASAN_SHADOW_SCALE_SHIFT	3
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (32 - KASAN_SHADOW_SCALE_SHIFT)) \
+				 + KASAN_SHADOW_OFFSET)
+#define KASAN_SHADOW_START      ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index bb79e52aeb90..598dbdca2017 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -18,6 +18,7 @@
 #ifdef CONFIG_NEED_MACH_MEMORY_H
 #include <mach/memory.h>
 #endif
+#include <asm/kasan_def.h>
 
 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
@@ -28,7 +29,11 @@
  * TASK_SIZE - the maximum size of a user space task.
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
  */
+#ifndef CONFIG_KASAN
 #define TASK_SIZE		(UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE		(KASAN_SHADOW_START)
+#endif
 #define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)
 
 /*
diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h
index 907571fd05c6..e6eb7a2aaf1e 100644
--- a/arch/arm/include/asm/uaccess-asm.h
+++ b/arch/arm/include/asm/uaccess-asm.h
@@ -85,7 +85,7 @@
 	 */
 	.macro	uaccess_entry, tsk, tmp0, tmp1, tmp2, disable
 	ldr	\tmp1, [\tsk, #TI_ADDR_LIMIT]
-	mov	\tmp2, #TASK_SIZE
+	ldr	\tmp2, =TASK_SIZE
 	str	\tmp2, [\tsk, #TI_ADDR_LIMIT]
  DACR(	mrc	p15, 0, \tmp0, c3, c0, 0)
  DACR(	str	\tmp0, [sp, #SVC_DACR])
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 55a47df04773..c4220f51fcf3 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -427,7 +427,8 @@ ENDPROC(__fiq_abt)
 	@ if it was interrupted in a critical region.  Here we
 	@ perform a quick test inline since it should be false
 	@ 99.9999% of the time.  The rest is done out of line.
-	cmp	r4, #TASK_SIZE
+	ldr	r0, =TASK_SIZE
+	cmp	r4, r0
 	blhs	kuser_cmpxchg64_fixup
 #endif
 #endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 271cb8a1eba1..fee279e28a72 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,8 @@ __ret_fast_syscall:
  UNWIND(.cantunwind	)
 	disable_irq_notrace			@ disable interrupts
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr	r1, =TASK_SIZE
+	cmp	r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
 	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -87,7 +88,8 @@ __ret_fast_syscall:
 #endif
 	disable_irq_notrace			@ disable interrupts
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr     r1, =TASK_SIZE
+	cmp     r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
 	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -128,7 +130,8 @@ ret_slow_syscall:
 	disable_irq_notrace			@ disable interrupts
 ENTRY(ret_to_user_from_irq)
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr     r1, =TASK_SIZE
+	cmp	r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]
 	tst	r1, #_TIF_WORK_MASK
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index a7231d151c63..50ae506a39e1 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -30,6 +30,7 @@
 #include <asm/procinfo.h>
 #include <asm/memory.h>
 #include <asm/pgalloc.h>
+#include <asm/kasan_def.h>
 
 #include <asm/mach/arch.h>
 #include <asm/mach/map.h>
@@ -1265,8 +1266,25 @@ static inline void prepare_page_table(void)
 	/*
 	 * Clear out all the mappings below the kernel image.
 	 */
+#ifdef CONFIG_KASAN
+	/*
+	 * KASan's shadow memory inserts itself between the TASK_SIZE
+	 * and MODULES_VADDR. Do not clear the KASan shadow memory mappings.
+	 */
+	for (addr = 0; addr < KASAN_SHADOW_START; addr += PMD_SIZE)
+		pmd_clear(pmd_off_k(addr));
+	/*
+	 * Skip over the KASan shadow area. KASAN_SHADOW_END is sometimes
+	 * equal to MODULES_VADDR and then we exit the pmd clearing. If we
+	 * are using a thumb-compiled kernel, there will be 8MB more
+	 * to clear as KASan is always offset 16 MB below MODULES_VADDR.
+	 */
+	for (addr = KASAN_SHADOW_END; addr < MODULES_VADDR; addr += PMD_SIZE)
+		pmd_clear(pmd_off_k(addr));
+#else
 	for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
 		pmd_clear(pmd_off_k(addr));
+#endif
 
 #ifdef CONFIG_XIP_KERNEL
 	/* The XIP kernel is mapped in the module area -- skip over it */
-- 
2.26.2



* [PATCH 4/5 v15] ARM: Initialize the mapping of KASan shadow memory
  2020-10-12 21:56 [PATCH 0/5 v15] KASan for Arm Linus Walleij
                   ` (2 preceding siblings ...)
  2020-10-12 21:56 ` [PATCH 3/5 v15] ARM: Define the virtual space of KASan's shadow region Linus Walleij
@ 2020-10-12 21:57 ` Linus Walleij
  2020-10-13  6:58   ` Ard Biesheuvel
  2020-10-12 21:57 ` [PATCH 5/5 v15] ARM: Enable KASan for ARM Linus Walleij
  2020-10-13  3:22 ` [PATCH 0/5 v15] KASan for Arm Florian Fainelli
  5 siblings, 1 reply; 15+ messages in thread
From: Linus Walleij @ 2020-10-12 21:57 UTC (permalink / raw)
  To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin, Mike Rapoport
  Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
	linux-arm-kernel, Dmitry Vyukov

This patch initializes the KASan shadow region's page table and memory.
There are two stages of KASan initialization:

1. At the early boot stage the whole shadow region is mapped to just
   one physical page (kasan_zero_page). This is done by the function
   kasan_early_init() which is called by __mmap_switched (arch/arm/kernel/
   head-common.S).

2. After the call to paging_init(), we use kasan_zero_page as the zero
   shadow for some memory that KASan does not need to track, and we
   allocate new shadow space for the other memory that KASan needs to
   track. This is done by the function kasan_init() which is called
   by setup_arch().
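
Condensed to its essentials, the two stages look like this (a sketch
distilled from the code added below, not the complete functions):

  /* Stage 1: called from __mmap_switched (head-common.S) */
  void __init kasan_early_init(void)
  {
  	/* Map the entire shadow region to the single scratch page */
  	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, true);
  }

  /* Stage 2: called from setup_arch() right after paging_init() */
  void __init kasan_init(void)
  {
  	/* Drop the early mappings (a temporary pgd keeps us alive) */
  	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
  	/* Zero shadow for memory KASan does not need to track... */
  	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
  				    kasan_mem_to_shadow((void *)-1UL) + 1);
  	/* ...and freshly allocated shadow for each lowmem block */
  	for_each_memblock(memory, reg)
  		create_mapping(__va(reg->base), __va(reg->base + reg->size));
  }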

When using KASan we also need to increase the THREAD_SIZE_ORDER
from 1 to 2 as the extra calls for shadow memory use quite a bit
of stack.

As we need to make a temporary copy of the PGD when setting up
shadow memory we create a helpful PGD_SIZE definition for both
LPAE and non-LPAE setups.

The KASan core code unconditionally calls pud_populate() so this
needs to be changed from BUG() to do {} while (0) when building
with KASan enabled.

After the initial development by Andrey Ryabinin several modifications
have been made to this code:

Abbott Liu <liuwenliang@huawei.com>
- Add support for ARM LPAE: if LPAE is enabled, the KASan shadow
  region's mapping table needs to be copied in the pgd_alloc() function.
- Change kasan_pte_populate,kasan_pmd_populate,kasan_pud_populate,
  kasan_pgd_populate from .meminit.text section to .init.text section.
  Reported by Florian Fainelli <f.fainelli@gmail.com>

Linus Walleij <linus.walleij@linaro.org>:
- Drop the custom manipulation of TTBR0 and just use
  cpu_switch_mm() to switch the pgd table.
- Adapt to handle 4th-level page table folding.
- Rewrite the entire page directory and page entry initialization
  sequence to be recursive, based on ARM64's kasan_init.c.

Ard Biesheuvel <ardb@kernel.org>:
- Necessary underlying fixes.
- Crucial bug fixes to the memory set-up code.

Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Co-developed-by: Abbott Liu <liuwenliang@huawei.com>
Co-developed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v14->v15:
- Avoid reallocating KASAN blocks when a range gets
  mapped twice - this occurs when mapping the DTB space explicitly.
- Insert a missing TLB flush.
- Move the cache flush after switching the MM (which makes logical
  sense).
- All these fixes were discovered by Ard Biesheuvel.
- Dropped the special mapping around the DTB after using Ard's
  patches for remapping the DTB in a special memory area.
- Add asmlinkage prototype for kasan_early_init() to get
  rid of some compilation warnings.
ChangeLog v13->v14:
- Provide more elaborate prints of how virtual kernel memory
  is mapped to the allocated lowmem pages.
- Make sure to also map the memory around the __atags_pointer:
  this memory is used for the device tree blob (DTB) and will be
  accessed by the device tree parser. We were just lucky that
  this was mostly in some acceptable memory location until now.
ChangeLog v12->v13:
- Rebase on kernel v5.9-rc1
ChangeLog v11->v12:
- Do not try to shadow highmem memory blocks. (Ard)
- Provoke a build bug if the entire shadow memory doesn't fit
  inside a single pgd_index() (Ard)
- Move the pointer to (unsigned long) casts into the create_mapping()
  function. (Ard)
- After setting up the shadow memory make sure to issue
  local_flush_tlb_all() so that we refresh all the global mappings. (Ard)
- Simplify pte_populate() (Ard)
- Skip over pud population as well as p4d. (Ard)
- Drop the stop condition pmd_none(*pmdp) in the pmd population
  loop. (Ard)
- Stop passing around the node (NUMA) parameter in the init code,
  we are not expecting any NUMA architectures to be introduced into
  ARM32 so just hardcode NUMA_NO_NODE when calling
  memblock_alloc_try_nid().
ChangeLog v10->v11:
- Fix compilation on LPAE systems.
- Move the check for valid pgdp, pudp and pmdp into the loop for
  each level moving over the directory pointers: we were just lucky
  that we just needed one directory for each level so this fixes
  the pmdp issue with LPAE and KASan now works like a charm on
  LPAE as well.
- Fold fourth level page directory (p4d) into the global page directory
  pgd and just skip into the page upper directory (pud) directly. We
  do not anticipate that ARM32 will ever use 5-level page tables.
- Simplify the ifdeffery around the temporary pgd.
- Insert a comment about pud_populate() that is unconditionally called
  by the KASan core code.
ChangeLog v9->v10:
- Rebase onto v5.8-rc1
- add support for folded p4d page tables, use the primitives necessary
  for the 4th level folding, add (empty) walks of p4d level.
- Use the <linux/pgtable.h> header file that has now appeared as part
  of the VM consolidation series.
- Use a recursive method to walk pgd/p4d/pud/pmd/pte instead of the
  separate early/main calls and the flat call structure used in the
  old code. This was inspired by the ARM64 KASan init code.
- Assume authorship of this code; I have now written the majority of
  it so the blame is on me and no one else.
ChangeLog v8->v9:
- Drop the custom CP15 manipulation and cache flushing for swapping
  TTBR0 and instead just use cpu_switch_mm().
- Collect Ard's tags.
ChangeLog v7->v8:
- Rebased.
ChangeLog v6->v7:
- Use SPDX identifer for the license.
- Move the TTBR0 accessor calls into this patch.
---
 arch/arm/include/asm/kasan.h       |  33 ++++
 arch/arm/include/asm/pgalloc.h     |   8 +-
 arch/arm/include/asm/thread_info.h |   8 +
 arch/arm/kernel/head-common.S      |   3 +
 arch/arm/kernel/setup.c            |   2 +
 arch/arm/mm/Makefile               |   3 +
 arch/arm/mm/kasan_init.c           | 284 +++++++++++++++++++++++++++++
 arch/arm/mm/pgd.c                  |  16 +-
 8 files changed, 355 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan.h
 create mode 100644 arch/arm/mm/kasan_init.c

diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
new file mode 100644
index 000000000000..303c35df3135
--- /dev/null
+++ b/arch/arm/include/asm/kasan.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * arch/arm/include/asm/kasan.h
+ *
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ *
+ */
+
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifdef CONFIG_KASAN
+
+#include <asm/kasan_def.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+/*
+ * The compiler uses a shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from 'compiler's shadow offset' +
+ * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
+ */
+
+asmlinkage void kasan_early_init(void);
+extern void kasan_init(void);
+
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 15f4674715f8..fdee1f04f4f3 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -21,6 +21,7 @@
 #define _PAGE_KERNEL_TABLE	(PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL))
 
 #ifdef CONFIG_ARM_LPAE
+#define PGD_SIZE		(PTRS_PER_PGD * sizeof(pgd_t))
 
 static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 {
@@ -28,14 +29,19 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 }
 
 #else	/* !CONFIG_ARM_LPAE */
+#define PGD_SIZE		(PAGE_SIZE << 2)
 
 /*
  * Since we have only two-level page tables, these are trivial
  */
 #define pmd_alloc_one(mm,addr)		({ BUG(); ((pmd_t *)2); })
 #define pmd_free(mm, pmd)		do { } while (0)
+#ifdef CONFIG_KASAN
+/* The KASan core unconditionally calls pud_populate() on all architectures */
+#define pud_populate(mm,pmd,pte)	do { } while (0)
+#else
 #define pud_populate(mm,pmd,pte)	BUG()
-
+#endif
 #endif	/* CONFIG_ARM_LPAE */
 
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 536b6b979f63..56fae7861fd3 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -13,7 +13,15 @@
 #include <asm/fpstate.h>
 #include <asm/page.h>
 
+#ifdef CONFIG_KASAN
+/*
+ * KASan uses a lot of extra stack space so the thread size order needs to
+ * be increased.
+ */
+#define THREAD_SIZE_ORDER	2
+#else
 #define THREAD_SIZE_ORDER	1
+#endif
 #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
 #define THREAD_START_SP		(THREAD_SIZE - 8)
 
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 6840c7c60a85..89c80154b9ef 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -111,6 +111,9 @@ __mmap_switched:
 	str	r8, [r2]			@ Save atags pointer
 	cmp	r3, #0
 	strne	r10, [r3]			@ Save control register values
+#ifdef CONFIG_KASAN
+	bl	kasan_early_init
+#endif
 	mov	lr, #0
 	b	start_kernel
 ENDPROC(__mmap_switched)
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 2a70e4958c14..43d033696e33 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -59,6 +59,7 @@
 #include <asm/unwind.h>
 #include <asm/memblock.h>
 #include <asm/virt.h>
+#include <asm/kasan.h>
 
 #include "atags.h"
 
@@ -1139,6 +1140,7 @@ void __init setup_arch(char **cmdline_p)
 	early_ioremap_reset();
 
 	paging_init(mdesc);
+	kasan_init();
 	request_standard_resources(mdesc);
 
 	if (mdesc->restart)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 99699c32d8a5..4536159bc8fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -113,3 +113,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU)	+= cache-l2x0-pmu.o
 obj-$(CONFIG_CACHE_XSC3L2)	+= cache-xsc3l2.o
 obj-$(CONFIG_CACHE_TAUROS2)	+= cache-tauros2.o
 obj-$(CONFIG_CACHE_UNIPHIER)	+= cache-uniphier.o
+
+KASAN_SANITIZE_kasan_init.o	:= n
+obj-$(CONFIG_KASAN)		+= kasan_init.o
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
new file mode 100644
index 000000000000..22ac84defa5d
--- /dev/null
+++ b/arch/arm/mm/kasan_init.c
@@ -0,0 +1,284 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This file contains kasan initialization code for ARM.
+ *
+ * Copyright (c) 2018 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ * Author: Linus Walleij <linus.walleij@linaro.org>
+ */
+
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/sched/task.h>
+#include <linux/start_kernel.h>
+#include <linux/pgtable.h>
+#include <asm/cputype.h>
+#include <asm/highmem.h>
+#include <asm/mach/map.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/procinfo.h>
+#include <asm/proc-fns.h>
+
+#include "mm.h"
+
+static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size)
+{
+	return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+				      MEMBLOCK_ALLOC_KASAN, NUMA_NO_NODE);
+}
+
+static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
+				      unsigned long end, bool early)
+{
+	unsigned long next;
+	pte_t *ptep = pte_offset_kernel(pmdp, addr);
+
+	do {
+		pte_t entry;
+		void *p;
+
+		next = addr + PAGE_SIZE;
+
+		if (!early) {
+			if (!pte_none(READ_ONCE(*ptep)))
+				continue;
+
+			p = kasan_alloc_block(PAGE_SIZE);
+			if (!p) {
+				panic("%s failed to alloc pte for address 0x%lx\n",
+				      __func__, addr);
+				return;
+			}
+			memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
+			entry = pfn_pte(virt_to_pfn(p),
+					__pgprot(pgprot_val(PAGE_KERNEL)));
+		} else if (pte_none(READ_ONCE(*ptep))) {
+			/*
+			 * The early shadow memory is mapping all KASan
+			 * operations to one and the same page in memory,
+			 * "kasan_early_shadow_page" so that the instrumentation
+			 * will work on a scratch area until we can set up the
+			 * proper KASan shadow memory.
+			 */
+			entry = pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+					__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
+		} else {
+			/*
+			 * Early shadow mappings are PMD_SIZE aligned, so if the
+			 * first entry is already set, they must all be set.
+			 */
+			return;
+		}
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	} while (ptep++, addr = next, addr != end);
+}
+
+/*
+ * The pmd (page middle directory) is only used on LPAE
+ */
+static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
+				      unsigned long end, bool early)
+{
+	unsigned long next;
+	pmd_t *pmdp = pmd_offset(pudp, addr);
+
+	do {
+		if (pmd_none(*pmdp)) {
+			void *p = early ? kasan_early_shadow_pte :
+				kasan_alloc_block(PAGE_SIZE);
+
+			if (!p) {
+				panic("%s failed to allocate pmd for address 0x%lx\n",
+				      __func__, addr);
+				return;
+			}
+			pmd_populate_kernel(&init_mm, pmdp, p);
+			flush_pmd_entry(pmdp);
+		}
+
+		next = pmd_addr_end(addr, end);
+		kasan_pte_populate(pmdp, addr, next, early);
+	} while (pmdp++, addr = next, addr != end);
+}
+
+static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
+				      bool early)
+{
+	unsigned long next;
+	pgd_t *pgdp;
+	p4d_t *p4dp;
+	pud_t *pudp;
+
+	pgdp = pgd_offset_k(addr);
+
+	do {
+		/* Allocate and populate the PGD if it doesn't already exist */
+		if (!early && pgd_none(*pgdp)) {
+			void *p = kasan_alloc_block(PAGE_SIZE);
+
+			if (!p) {
+				panic("%s failed to allocate pgd for address 0x%lx\n",
+				      __func__, addr);
+				return;
+			}
+			pgd_populate(&init_mm, pgdp, p);
+		}
+
+		next = pgd_addr_end(addr, end);
+		/*
+		 * We just immediately jump over the p4d and pud page
+		 * directories since we believe ARM32 will never gain
+		 * four- or five-level page tables.
+		 */
+		p4dp = p4d_offset(pgdp, addr);
+		pudp = pud_offset(p4dp, addr);
+
+		kasan_pmd_populate(pudp, addr, next, early);
+	} while (pgdp++, addr = next, addr != end);
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+	struct proc_info_list *list;
+
+	/*
+	 * locate processor in the list of supported processor
+	 * types.  The linker builds this table for us from the
+	 * entries in arch/arm/mm/proc-*.S
+	 */
+	list = lookup_processor_type(read_cpuid_id());
+	if (list) {
+#ifdef MULTI_CPU
+		processor = *list->proc;
+#endif
+	}
+
+	BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
+	/*
+	 * We walk the page table and set all of the shadow memory to point
+	 * to the scratch page.
+	 */
+	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start && start < end; start += PMD_SIZE)
+		pmd_clear(pmd_off_k(start));
+}
+
+static int __init create_mapping(void *start, void *end)
+{
+	void *shadow_start, *shadow_end;
+
+	shadow_start = kasan_mem_to_shadow(start);
+	shadow_end = kasan_mem_to_shadow(end);
+
+	pr_info("Mapping kernel virtual memory block: %px-%px at shadow: %px-%px\n",
+		start, end, shadow_start, shadow_end);
+
+	kasan_pgd_populate((unsigned long)shadow_start & PAGE_MASK,
+			   (unsigned long)shadow_end, false);
+	return 0;
+}
+
+void __init kasan_init(void)
+{
+	struct memblock_region *reg;
+	int i;
+
+	/*
+	 * We are going to perform proper setup of shadow memory.
+	 *
+	 * At first we should unmap early shadow (clear_pgds() call below).
+	 * However, instrumented code can't execute without shadow memory.
+	 *
+	 * To keep the early shadow memory MMU tables around while setting up
+	 * the proper shadow memory, we copy swapper_pg_dir (the initial page
+	 * table) to tmp_pgd_table and use that to keep the early shadow memory
+	 * mapped until the full shadow setup is finished. Then we swap back
+	 * to the proper swapper_pg_dir.
+	 */
+
+	memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+#ifdef CONFIG_ARM_LPAE
+	/* We need to be in the same PGD or this won't work */
+	BUILD_BUG_ON(pgd_index(KASAN_SHADOW_START) !=
+		     pgd_index(KASAN_SHADOW_END));
+	memcpy(tmp_pmd_table,
+	       pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+	       sizeof(tmp_pmd_table));
+	set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+		__pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+#endif
+	cpu_switch_mm(tmp_pgd_table, &init_mm);
+	local_flush_tlb_all();
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+				    kasan_mem_to_shadow((void *)-1UL) + 1);
+
+	for_each_memblock(memory, reg) {
+		void *start = __va(reg->base);
+		void *end = __va(reg->base + reg->size);
+
+		/* Do not attempt to shadow highmem */
+		if (reg->base >= arm_lowmem_limit) {
+			pr_info("Skip highmem block %px-%px\n",
+				start, end);
+			continue;
+		}
+		if (reg->base + reg->size > arm_lowmem_limit) {
+			pr_info("Truncate memory block %px-%px to %px-%px\n",
+				start, end, start, __va(arm_lowmem_limit));
+			end = __va(arm_lowmem_limit);
+		}
+		if (start >= end) {
+			pr_info("Skipping invalid memory block %px-%px\n",
+				start, end);
+			continue;
+		}
+
+		create_mapping(start, end);
+	}
+
+	/*
+	 * 1. The module global variables are in MODULES_VADDR ~ MODULES_END,
+	 *    so we need to map this area.
+	 * 2. PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE's shadow and MODULES_VADDR
+	 *    ~ MODULES_END's shadow is in the same PMD_SIZE, so we can't
+	 *    use kasan_populate_zero_shadow.
+	 */
+	create_mapping((void *)MODULES_VADDR, (void *)(PKMAP_BASE + PMD_SIZE));
+
+	/*
+	 * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
+	 * we should make sure that it maps the zero page read-only.
+	 */
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+			   &kasan_early_shadow_pte[i],
+			   pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+				__pgprot(pgprot_val(PAGE_KERNEL)
+					 | L_PTE_RDONLY)));
+
+	cpu_switch_mm(swapper_pg_dir, &init_mm);
+	local_flush_tlb_all();
+
+	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+	pr_info("Kernel address sanitizer initialized\n");
+	init_task.kasan_depth = 0;
+}
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index c5e1b27046a8..f8e9bc58a84f 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -66,7 +66,21 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 	new_pmd = pmd_alloc(mm, new_pud, 0);
 	if (!new_pmd)
 		goto no_pmd;
-#endif
+#ifdef CONFIG_KASAN
+	/*
+	 * Copy PMD table for KASAN shadow mappings.
+	 */
+	init_pgd = pgd_offset_k(TASK_SIZE);
+	init_p4d = p4d_offset(init_pgd, TASK_SIZE);
+	init_pud = pud_offset(init_p4d, TASK_SIZE);
+	init_pmd = pmd_offset(init_pud, TASK_SIZE);
+	new_pmd = pmd_offset(new_pud, TASK_SIZE);
+	memcpy(new_pmd, init_pmd,
+	       (pmd_index(MODULES_VADDR) - pmd_index(TASK_SIZE))
+	       * sizeof(pmd_t));
+	clean_dcache_area(new_pmd, PTRS_PER_PMD * sizeof(pmd_t));
+#endif /* CONFIG_KASAN */
+#endif /* CONFIG_LPAE */
 
 	if (!vectors_high()) {
 		/*
-- 
2.26.2



* [PATCH 5/5 v15] ARM: Enable KASan for ARM
  2020-10-12 21:56 [PATCH 0/5 v15] KASan for Arm Linus Walleij
                   ` (3 preceding siblings ...)
  2020-10-12 21:57 ` [PATCH 4/5 v15] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
@ 2020-10-12 21:57 ` Linus Walleij
  2020-10-13  3:22 ` [PATCH 0/5 v15] KASan for Arm Florian Fainelli
  5 siblings, 0 replies; 15+ messages in thread
From: Linus Walleij @ 2020-10-12 21:57 UTC (permalink / raw)
  To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin, Mike Rapoport
  Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
	linux-arm-kernel, Andrey Ryabinin, Dmitry Vyukov

From: Andrey Ryabinin <ryabinin@virtuozzo.com>

This patch enables the kernel address sanitizer for ARM. XIP_KERNEL
has not been tested and is therefore not allowed for now.
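
For anyone who wants to try it: a minimal config fragment like the
following should be enough to exercise generic KASan once the series is
applied (a sketch; CONFIG_TEST_KASAN is optional and only builds the
self-test module):

  CONFIG_KASAN=y
  CONFIG_KASAN_GENERIC=y
  CONFIG_KASAN_OUTLINE=y
  CONFIG_TEST_KASAN=m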

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v14->v15:
- Resend with the other patches
ChangeLog v13->v14:
- Resend with the other patches.
ChangeLog v12->v13:
- Rebase on kernel v5.9-rc1
ChangeLog v11->v12:
- Resend with the other changes.
ChangeLog v10->v11:
- Resend with the other changes.
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Fix the arch feature matrix for Arm to include KASan.
- Collect Ard's tags.
ChangeLog v7->v8:
- Moved the hacks to __ADDRESS_SANITIZE__ to the patch
  replacing the memory access functions.
- Moved the definition of KASAN_OFFSET out of this patch
  and to the patch that defines the virtual memory used by
  KASan.
---
 Documentation/dev-tools/kasan.rst                   | 4 ++--
 Documentation/features/debug/KASAN/arch-support.txt | 2 +-
 arch/arm/Kconfig                                    | 1 +
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 38fd5681fade..050dcd346144 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -18,8 +18,8 @@ out-of-bounds accesses for global variables is only supported since Clang 11.
 
 Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
 
-Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and
-riscv architectures, and tag-based KASAN is supported only for arm64.
+Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa, s390
+and riscv architectures, and tag-based KASAN is supported only for arm64.
 
 Usage
 -----
diff --git a/Documentation/features/debug/KASAN/arch-support.txt b/Documentation/features/debug/KASAN/arch-support.txt
index c3fe9b266e7b..b2288dc14b72 100644
--- a/Documentation/features/debug/KASAN/arch-support.txt
+++ b/Documentation/features/debug/KASAN/arch-support.txt
@@ -8,7 +8,7 @@
     -----------------------
     |       alpha: | TODO |
     |         arc: | TODO |
-    |         arm: | TODO |
+    |         arm: |  ok  |
     |       arm64: |  ok  |
     |         c6x: | TODO |
     |        csky: | TODO |
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 0489b8d07172..873bd26f5d43 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -66,6 +66,7 @@ config ARM
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
+	select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
 	select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
 	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
-- 
2.26.2



* Re: [PATCH 0/5 v15] KASan for Arm
  2020-10-12 21:56 [PATCH 0/5 v15] KASan for Arm Linus Walleij
                   ` (4 preceding siblings ...)
  2020-10-12 21:57 ` [PATCH 5/5 v15] ARM: Enable KASan for ARM Linus Walleij
@ 2020-10-13  3:22 ` Florian Fainelli
  2020-10-13  6:34   ` Ard Biesheuvel
  5 siblings, 1 reply; 15+ messages in thread
From: Florian Fainelli @ 2020-10-13  3:22 UTC (permalink / raw)
  To: Linus Walleij, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin, Mike Rapoport
  Cc: Arnd Bergmann, linux-arm-kernel



On 10/12/2020 2:56 PM, Linus Walleij wrote:
> This is the 15th iteration of KASan for ARM/Aarch32.
> [...]

Tested-by: Florian Fainelli <f.fainelli@gmail.com>

On Brahma-B15 (ARMv7 LPAE) and Brahma-B53 (ARMv8 in AArch32, also with 
LPAE). The 3 Cortex-A72 devices that I have access to all fail with the 
following (not related to the CPU type, more to the memory map) which I 
am hoping to track down later this week, I would not consider those 
failures to be a blocker at this point.

Thanks a lot for your persistence working on this Linus, and Ard!


[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 5.9.0-gdf4dd84a3f7d (fainelli@fainelli-desktop) (arm-linux-gcc (GCC) 8.3.0, GNU ld (GNU Binutils) 2.32) #16 SMP Mon Oct 12 20:01:43 PDT 2020
[    0.000000] CPU: ARMv7 Processor [410fd083] revision 3 (ARMv7), cr=30c5383d
[    0.000000] CPU: div instructions available: patching division code
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
[    0.000000] OF: fdt: Machine model: BCM972112SV
[    0.000000] earlycon: pl11 at MMIO 0x000000047e201000 (options '115200')
[    0.000000] printk: bootconsole [pl11] enabled
[    0.000000] Memory policy: Data cache writealloc
[    0.000000] cma: Reserved 16 MiB at 0x000000007f000000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000000000000-0x000000002fffffff]
[    0.000000]   Normal   empty
[    0.000000]   HighMem  [mem 0x0000000030000000-0x000000007fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000000000000-0x00000000063fdfff]
[    0.000000]   node   0: [mem 0x0000000006400000-0x000000000fffffff]
[    0.000000]   node   0: [mem 0x0000000010400000-0x000000007fffffff]
[    0.000000] Zeroed struct page in unavailable ranges: 2 pages
[    0.000000] Initmem setup node 0 [mem 0x0000000000000000-0x000000007fffffff]
[    0.000000] kasan: Mapping kernel virtual memory block: c0000000-c63fe000 at shadow: b7000000-b7c7fc00
[    0.000000] Kernel panic - not syncing: kasan_pte_populate failed to alloc pte for address 0xe2806000
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.9.0-gdf4dd84a3f7d #16
[    0.000000] Hardware name: Broadcom STB (Flattened Device Tree)
[    0.000000] Backtrace:
[    0.000000] [<c02120b4>] (dump_backtrace) from [<c02123d8>] (show_stack+0x20/0x24)
[    0.000000]  r9:ffffffff r8:00000080 r7:c298e3c0 r6:600000d3 r5:00000000 r4:c298e3c0
[    0.000000] [<c02123b8>] (show_stack) from [<c08852a0>] (dump_stack+0xbc/0xe0)
[    0.000000] [<c08851e4>] (dump_stack) from [<c022fbec>] (panic+0x19c/0x3e4)
[    0.000000]  r10:e2806000 r9:c2b790e0 r8:c166b410 r7:c2803d80 r6:00000000 r5:c2b7de80
[    0.000000]  r4:c2b78e20 r3:00000001
[    0.000000] [<c022fa50>] (panic) from [<c180b960>] (kasan_pgd_populate+0x1ac/0x26c)
[    0.000000]  r3:00000000 r2:e2806000 r1:c12126d4 r0:c166b410
[    0.000000]  r7:b7c7fc00
[    0.000000] [<c180b7b4>] (kasan_pgd_populate) from [<c180ba78>] (create_mapping+0x58/0x64)
[    0.000000]  r10:c166b4e4 r9:00000000 r8:063fe000 r7:c2ba0a40 r6:c28a24e0 r5:b7000000
[    0.000000]  r4:b7c7fc00
[    0.000000] [<c180ba20>] (create_mapping) from [<c180bd58>] (kasan_init+0x26c/0x390)
[    0.000000]  r5:00000000 r4:c0000000
[    0.000000] [<c180baec>] (kasan_init) from [<c1805728>] (setup_arch+0x288/0xa28)
[    0.000000]  r10:c1861238 r9:410fd083 r8:c0008000 r7:c1873a40 r6:c2803fbc r5:c2fdcf60
[    0.000000]  r4:c28a2280
[    0.000000] [<c18054a0>] (setup_arch) from [<c1801010>] (start_kernel+0x88/0x3e4)
[    0.000000]  r10:c2806d40 r9:410fd083 r8:0e415000 r7:ffffffff r6:30c0387d r5:c2806d48
[    0.000000]  r4:00007000
[    0.000000] [<c1800f88>] (start_kernel) from [<00000000>] (0x0)
[    0.000000]  r10:30c5387d r9:410fd083 r8:0e415000 r7:ffffffff r6:30c0387d r5:00000000
[    0.000000]  r4:c1800334
[    0.000000] ---[ end Kernel panic - not syncing: kasan_pte_populate failed to alloc pte for address 0xe2806000 ]---


-- 
Florian


* Re: [PATCH 0/5 v15] KASan for Arm
  2020-10-13  3:22 ` [PATCH 0/5 v15] KASan for Arm Florian Fainelli
@ 2020-10-13  6:34   ` Ard Biesheuvel
  2020-10-13 17:57     ` Florian Fainelli
  0 siblings, 1 reply; 15+ messages in thread
From: Ard Biesheuvel @ 2020-10-13  6:34 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: Arnd Bergmann, Abbott Liu, Linus Walleij, Russell King,
	Mike Rapoport, Andrey Ryabinin, Linux ARM

On Tue, 13 Oct 2020 at 05:22, Florian Fainelli <f.fainelli@gmail.com> wrote:
>
> On 10/12/2020 2:56 PM, Linus Walleij wrote:
> > [...]
>
> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>
> On Brahma-B15 (ARMv7 LPAE) and Brahma-B53 (ARMv8 in AArch32, also with
> LPAE). The 3 Cortex-A72 devices that I have access to all fail with the
> following (not related to the CPU type, more to the memory map) which I
> am hoping to track down later this week, I would not consider those
> failures to be a blocker at this point.
>
> Thanks a lot for your persistence working on this Linus, and Ard!
>

Hi Florian,

> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x0000000000000000-0x00000000063fdfff]
> [    0.000000]   node   0: [mem 0x0000000006400000-0x000000000fffffff]
> [    0.000000]   node   0: [mem 0x0000000010400000-0x000000007fffffff]
> [    0.000000] kasan: Mapping kernel virtual memory block: c0000000-c63fe000 at shadow: b7000000-b7c7fc00
> [    0.000000] Kernel panic - not syncing: kasan_pte_populate failed to alloc pte for address 0xe2806000

The issue here is that the end of the shadow region being populated is
not aligned to the page size, and so we never meet the stop condition
in kasan_pgd_populate(), and instead, we keep iterating until we run
out of memory.

Does this help?

--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -190,7 +190,7 @@ static int __init create_mapping(void *start, void *end)
                start, end, shadow_start, shadow_end);

        kasan_pgd_populate((unsigned long)shadow_start & PAGE_MASK,
-                          (unsigned long)shadow_end, false);
+                          PAGE_ALIGN((unsigned long)shadow_end), false);
        return 0;
 }
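
To illustrate why: the populate loops step through the shadow region in
fixed PAGE_SIZE increments and stop only on exact equality, so an end
address that is not page aligned is simply stepped over. A simplified
sketch of the loop in kasan_pte_populate(), with the addresses from the
log above plugged in:

	unsigned long addr = 0xb7000000;	/* shadow start, page aligned */
	unsigned long end  = 0xb7c7fc00;	/* shadow end, NOT page aligned */

	do {
		unsigned long next = addr + PAGE_SIZE;
		/* ... map one shadow page at addr ... */
		addr = next;
	} while (addr != end);	/* ...0xb7c7f000, 0xb7c80000: never equal to end */

With the end PAGE_ALIGN()ed, the loop terminates at 0xb7c80000 as
intended.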


* Re: [PATCH 4/5 v15] ARM: Initialize the mapping of KASan shadow memory
  2020-10-12 21:57 ` [PATCH 4/5 v15] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
@ 2020-10-13  6:58   ` Ard Biesheuvel
  0 siblings, 0 replies; 15+ messages in thread
From: Ard Biesheuvel @ 2020-10-13  6:58 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Florian Fainelli, Arnd Bergmann, Abbott Liu, Russell King,
	kasan-dev, Mike Rapoport, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, Linux ARM

Hi Linus,

Just a couple of cosmetic tweaks below - no need to resend for this.

On Mon, 12 Oct 2020 at 23:59, Linus Walleij <linus.walleij@linaro.org> wrote:
>
> This patch initializes the KASan shadow region's page table and memory.
> There are two stages to KASan initialization:
>
> 1. At the early boot stage the whole shadow region is mapped to just
>    one physical page (kasan_zero_page). This is done by the function
>    kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/
>    head-common.S)
>
> 2. After paging_init has been called, we use kasan_zero_page as the zero
>    shadow for some memory that KASan does not need to track, and we
>    allocate a new shadow space for the other memory that KASan needs to
>    track. This stage is finished by the function kasan_init, which is
>    called by setup_arch.
>
> When using KASan we also need to increase the THREAD_SIZE_ORDER
> from 1 to 2 as the extra calls for shadow memory use quite a bit
> of stack.
>
> As we need to make a temporary copy of the PGD when setting up
> shadow memory, we create a helper PGD_SIZE definition for both
> LPAE and non-LPAE setups.
>
> The KASan core code unconditionally calls pud_populate() so this
> needs to be changed from BUG() to do {} while (0) when building
> with KASan enabled.
>
> After the initial development by Andrey Ryabinin, several modifications
> have been made to this code:
>
> Abbott Liu <liuwenliang@huawei.com>
> - Add support for ARM LPAE: if LPAE is enabled, the KASan shadow region's
>   mapping table needs to be copied in the pgd_alloc() function.
> - Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate,
>   kasan_pgd_populate from .meminit.text section to .init.text section.
>   Reported by Florian Fainelli <f.fainelli@gmail.com>
>
> Linus Walleij <linus.walleij@linaro.org>:
> - Drop the custom manipulation of TTBR0 and just use
>   cpu_switch_mm() to switch the pgd table.
> - Adapt to handle 4th level page table folding.
> - Rewrite the entire page directory and page entry initialization
>   sequence to be recursive based on ARM64:s kasan_init.c.
>
> Ard Biesheuvel <ardb@kernel.org>:
> - Necessary underlying fixes.
> - Crucial bug fixes to the memory set-up code.
>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Co-developed-by: Abbott Liu <liuwenliang@huawei.com>
> Co-developed-by: Ard Biesheuvel <ardb@kernel.org>
> Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
> Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
> Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
> Reported-by: Florian Fainelli <f.fainelli@gmail.com>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
> ---
> ChangeLog v14->v15:
> - Avoids reallocating KASAN blocks when a range gets
>   mapped twice - this occurs when mapping the DTB space explicitly.
> - Insert a missing TLB flush.
> - Move the cache flush after switching the MM (which makes logical
>   sense).
> - All these fixes were discovered by Ard Biesheuvel.
> - Dropped the special mapping around the DTB after using Ard's
>   patches for remapping the DTB in a special memory area.
> - Add asmlinkage prototype for kasan_early_init() to get
>   rid of some compilation warnings.
> ChangeLog v13->v14:
> - Provide more elaborate prints of how virtual kernel memory
>   is mapped to the allocated lowmem pages.
> - Make sure to also map the memory around the __atags_pointer:
>   this memory is used for the device tree blob (DTB) and will be
>   accessed by the device tree parser. We were just lucky that
>   this was mostly in some acceptable memory location until now.
> ChangeLog v12->v13:
> - Rebase on kernel v5.9-rc1
> ChangeLog v11->v12:
> - Do not try to shadow highmem memory blocks. (Ard)
> - Provoke a build bug if the entire shadow memory doesn't fit
>   inside a single pgd_index() (Ard)
> - Move the pointer to (unsigned long) casts into the create_mapping()
>   function. (Ard)
> - After setting up the shadow memory make sure to issue
>   local_flush_tlb_all() so that we refresh all the global mappings. (Ard)
> - Simplify pte_populate() (Ard)
> - Skip over pud population as well as p4d. (Ard)
> - Drop the stop condition pmd_none(*pmdp) in the pmd population
>   loop. (Ard)
> - Stop passing around the node (NUMA) parameter in the init code,
>   we are not expecting any NUMA architectures to be introduced into
>   ARM32 so just hardcode NUMA_NO_NODE when calling
>   memblock_alloc_try_nid().
> ChangeLog v10->v11:
> - Fix compilation on LPAE systems.
> - Move the check for valid pgdp, pudp and pmdp into the loop for
>   each level moving over the directory pointers: we were just lucky
>   that we just needed one directory for each level so this fixes
>   the pmdp issue with LPAE and KASan now works like a charm on
>   LPAE as well.
> - Fold fourth level page directory (p4d) into the global page directory
>   pgd and just skip into the page upper directory (pud) directly. We
>   do not anticipate that ARM32 will ever use 5-level page tables.
> - Simplify the ifdeffery around the temporary pgd.
> - Insert a comment about pud_populate() that is unconditionally called
>   by the KASan core code.
> ChangeLog v9->v10:
> - Rebase onto v5.8-rc1
> - add support for folded p4d page tables, use the primitives necessary
>   for the 4th level folding, add (empty) walks of p4d level.
> - Use the <linux/pgtable.h> header file that has now appeared as part
>   of the VM consolidation series.
> - Use a recursive method to walk pgd/p4d/pud/pmd/pte instead of the
>   separate early/main calls and the flat call structure used in the
>   old code. This was inspired by the ARM64 KASan init code.
> - Assume authorship of this code, I have now written the majority of
>   it, so the blame is on me and no one else.
> ChangeLog v8->v9:
> - Drop the custom CP15 manipulation and cache flushing for swapping
>   TTBR0 and instead just use cpu_switch_mm().
> - Collect Ard's tags.
> ChangeLog v7->v8:
> - Rebased.
> ChangeLog v6->v7:
> - Use SPDX identifer for the license.
> - Move the TTBR0 accessor calls into this patch.
> ---
>  arch/arm/include/asm/kasan.h       |  33 ++++
>  arch/arm/include/asm/pgalloc.h     |   8 +-
>  arch/arm/include/asm/thread_info.h |   8 +
>  arch/arm/kernel/head-common.S      |   3 +
>  arch/arm/kernel/setup.c            |   2 +
>  arch/arm/mm/Makefile               |   3 +
>  arch/arm/mm/kasan_init.c           | 284 +++++++++++++++++++++++++++++
>  arch/arm/mm/pgd.c                  |  16 +-
>  8 files changed, 355 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm/include/asm/kasan.h
>  create mode 100644 arch/arm/mm/kasan_init.c
>
> diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
> new file mode 100644
> index 000000000000..303c35df3135
> --- /dev/null
> +++ b/arch/arm/include/asm/kasan.h
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * arch/arm/include/asm/kasan.h
> + *
> + * Copyright (c) 2015 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> + *
> + */
> +
> +#ifndef __ASM_KASAN_H
> +#define __ASM_KASAN_H
> +
> +#ifdef CONFIG_KASAN
> +
> +#include <asm/kasan_def.h>
> +
> +#define KASAN_SHADOW_SCALE_SHIFT 3
> +
> +/*
> + * The compiler uses a shadow offset assuming that addresses start
> + * from 0. Kernel addresses don't start from 0, so shadow
> + * for kernel really starts from 'compiler's shadow offset' +
> + * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
> + */
> +
> +asmlinkage void kasan_early_init(void);
> +extern void kasan_init(void);
> +
> +#else
> +static inline void kasan_init(void) { }
> +#endif
> +
> +#endif
> diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
> index 15f4674715f8..fdee1f04f4f3 100644
> --- a/arch/arm/include/asm/pgalloc.h
> +++ b/arch/arm/include/asm/pgalloc.h
> @@ -21,6 +21,7 @@
>  #define _PAGE_KERNEL_TABLE     (PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL))
>
>  #ifdef CONFIG_ARM_LPAE
> +#define PGD_SIZE               (PTRS_PER_PGD * sizeof(pgd_t))
>
>  static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
>  {
> @@ -28,14 +29,19 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
>  }
>
>  #else  /* !CONFIG_ARM_LPAE */
> +#define PGD_SIZE               (PAGE_SIZE << 2)
>
>  /*
>   * Since we have only two-level page tables, these are trivial
>   */
>  #define pmd_alloc_one(mm,addr)         ({ BUG(); ((pmd_t *)2); })
>  #define pmd_free(mm, pmd)              do { } while (0)
> +#ifdef CONFIG_KASAN
> +/* The KASan core unconditionally calls pud_populate() on all architectures */
> +#define pud_populate(mm,pmd,pte)       do { } while (0)
> +#else
>  #define pud_populate(mm,pmd,pte)       BUG()
> -
> +#endif
>  #endif /* CONFIG_ARM_LPAE */
>
>  extern pgd_t *pgd_alloc(struct mm_struct *mm);
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 536b6b979f63..56fae7861fd3 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -13,7 +13,15 @@
>  #include <asm/fpstate.h>
>  #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +/*
> + * KASan uses a lot of extra stack space so the thread size order needs to
> + * be increased.
> + */
> +#define THREAD_SIZE_ORDER      2
> +#else
>  #define THREAD_SIZE_ORDER      1
> +#endif
>  #define THREAD_SIZE            (PAGE_SIZE << THREAD_SIZE_ORDER)
>  #define THREAD_START_SP                (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 6840c7c60a85..89c80154b9ef 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -111,6 +111,9 @@ __mmap_switched:
>         str     r8, [r2]                        @ Save atags pointer
>         cmp     r3, #0
>         strne   r10, [r3]                       @ Save control register values
> +#ifdef CONFIG_KASAN
> +       bl      kasan_early_init
> +#endif
>         mov     lr, #0
>         b       start_kernel
>  ENDPROC(__mmap_switched)
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index 2a70e4958c14..43d033696e33 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -59,6 +59,7 @@
>  #include <asm/unwind.h>
>  #include <asm/memblock.h>
>  #include <asm/virt.h>
> +#include <asm/kasan.h>
>
>  #include "atags.h"
>
> @@ -1139,6 +1140,7 @@ void __init setup_arch(char **cmdline_p)
>         early_ioremap_reset();
>
>         paging_init(mdesc);
> +       kasan_init();
>         request_standard_resources(mdesc);
>
>         if (mdesc->restart)
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> index 99699c32d8a5..4536159bc8fa 100644
> --- a/arch/arm/mm/Makefile
> +++ b/arch/arm/mm/Makefile
> @@ -113,3 +113,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU)        += cache-l2x0-pmu.o
>  obj-$(CONFIG_CACHE_XSC3L2)     += cache-xsc3l2.o
>  obj-$(CONFIG_CACHE_TAUROS2)    += cache-tauros2.o
>  obj-$(CONFIG_CACHE_UNIPHIER)   += cache-uniphier.o
> +
> +KASAN_SANITIZE_kasan_init.o    := n
> +obj-$(CONFIG_KASAN)            += kasan_init.o
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> new file mode 100644
> index 000000000000..22ac84defa5d
> --- /dev/null
> +++ b/arch/arm/mm/kasan_init.c
> @@ -0,0 +1,284 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * This file contains kasan initialization code for ARM.
> + *
> + * Copyright (c) 2018 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> + * Author: Linus Walleij <linus.walleij@linaro.org>
> + */
> +
> +#define pr_fmt(fmt) "kasan: " fmt
> +#include <linux/kasan.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/sched/task.h>
> +#include <linux/start_kernel.h>
> +#include <linux/pgtable.h>
> +#include <asm/cputype.h>
> +#include <asm/highmem.h>
> +#include <asm/mach/map.h>
> +#include <asm/memory.h>
> +#include <asm/page.h>
> +#include <asm/pgalloc.h>
> +#include <asm/procinfo.h>
> +#include <asm/proc-fns.h>
> +
> +#include "mm.h"
> +
> +static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
> +
> +pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
> +
> +static __init void *kasan_alloc_block(size_t size)
> +{
> +       return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> +                                     MEMBLOCK_ALLOC_KASAN, NUMA_NO_NODE);
> +}
> +
> +static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> +                                     unsigned long end, bool early)
> +{
> +       unsigned long next;
> +       pte_t *ptep = pte_offset_kernel(pmdp, addr);
> +
> +       do {
> +               pte_t entry;
> +               void *p;
> +
> +               next = addr + PAGE_SIZE;
> +
> +               if (!early) {
> +                       if (!pte_none(READ_ONCE(*ptep)))
> +                               continue;
> +
> +                       p = kasan_alloc_block(PAGE_SIZE);
> +                       if (!p) {
> +                               panic("%s failed to alloc pte for address 0x%lx\n",

This does not allocate a page table but a shadow page.

> +                                     __func__, addr);
> +                               return;
> +                       }
> +                       memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
> +                       entry = pfn_pte(virt_to_pfn(p),
> +                                       __pgprot(pgprot_val(PAGE_KERNEL)));
> +               } else if (pte_none(READ_ONCE(*ptep))) {
> +                       /*
> +                        * The early shadow memory is mapping all KASan
> +                        * operations to one and the same page in memory,
> +                        * "kasan_early_shadow_page" so that the instrumentation
> +                        * will work on a scratch area until we can set up the
> +                        * proper KASan shadow memory.
> +                        */
> +                       entry = pfn_pte(virt_to_pfn(kasan_early_shadow_page),
> +                                       __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
> +               } else {
> +                       /*
> +                        * Early shadow mappings are PMD_SIZE aligned, so if the
> +                        * first entry is already set, they must all be set.
> +                        */
> +                       return;
> +               }
> +
> +               set_pte_at(&init_mm, addr, ptep, entry);
> +       } while (ptep++, addr = next, addr != end);
> +}
> +
> +/*
> + * The pmd (page middle directory) is only used on LPAE
> + */
> +static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
> +                                     unsigned long end, bool early)
> +{
> +       unsigned long next;
> +       pmd_t *pmdp = pmd_offset(pudp, addr);
> +
> +       do {
> +               if (pmd_none(*pmdp)) {
> +                       void *p = early ? kasan_early_shadow_pte :
> +                               kasan_alloc_block(PAGE_SIZE);
> +
> +                       if (!p) {
> +                               panic("%s failed to allocate pmd for address 0x%lx\n",

This allocates a block of PTEs

> +                                     __func__, addr);
> +                               return;
> +                       }
> +                       pmd_populate_kernel(&init_mm, pmdp, p);
> +                       flush_pmd_entry(pmdp);
> +               }
> +
> +               next = pmd_addr_end(addr, end);
> +               kasan_pte_populate(pmdp, addr, next, early);
> +       } while (pmdp++, addr = next, addr != end);
> +}
> +
> +static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
> +                                     bool early)
> +{
> +       unsigned long next;
> +       pgd_t *pgdp;
> +       p4d_t *p4dp;
> +       pud_t *pudp;
> +
> +       pgdp = pgd_offset_k(addr);
> +
> +       do {
> +               /* Allocate and populate the PGD if it doesn't already exist */
> +               if (!early && pgd_none(*pgdp)) {
> +                       void *p = kasan_alloc_block(PAGE_SIZE);
> +
> +                       if (!p) {
> +                               panic("%s failed to allocate pgd for address 0x%lx\n",

This allocates a block of P4D folded into PUD folded into PMD.

In summary, since the __func__ gives us the location of the error,
perhaps just drop the pgd here (and pmd above?)
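
i.e. something like this (untested):

	panic("%s failed to allocate memory for address 0x%lx\n",
	      __func__, addr);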

> +                                     __func__, addr);
> +                               return;
> +                       }
> +                       pgd_populate(&init_mm, pgdp, p);
> +               }
> +
> +               next = pgd_addr_end(addr, end);
> +               /*
> +                * We just immediately jump over the p4d and pud page
> +                * directories since we believe ARM32 will never gain four
> +                * nor five level page tables.
> +                */
> +               p4dp = p4d_offset(pgdp, addr);
> +               pudp = pud_offset(p4dp, addr);
> +
> +               kasan_pmd_populate(pudp, addr, next, early);
> +       } while (pgdp++, addr = next, addr != end);
> +}
> +
> +extern struct proc_info_list *lookup_processor_type(unsigned int);
> +
> +void __init kasan_early_init(void)
> +{
> +       struct proc_info_list *list;
> +
> +       /*
> +        * locate processor in the list of supported processor
> +        * types.  The linker builds this table for us from the
> +        * entries in arch/arm/mm/proc-*.S
> +        */
> +       list = lookup_processor_type(read_cpuid_id());
> +       if (list) {
> +#ifdef MULTI_CPU
> +               processor = *list->proc;
> +#endif
> +       }
> +
> +       BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
> +       /*
> +        * We walk the page table and set all of the shadow memory to point
> +        * to the scratch page.
> +        */
> +       kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, true);
> +}
> +
> +static void __init clear_pgds(unsigned long start,
> +                       unsigned long end)
> +{
> +       for (; start && start < end; start += PMD_SIZE)
> +               pmd_clear(pmd_off_k(start));
> +}
> +
> +static int __init create_mapping(void *start, void *end)
> +{
> +       void *shadow_start, *shadow_end;
> +
> +       shadow_start = kasan_mem_to_shadow(start);
> +       shadow_end = kasan_mem_to_shadow(end);
> +
> +       pr_info("Mapping kernel virtual memory block: %px-%px at shadow: %px-%px\n",
> +               start, end, shadow_start, shadow_end);
> +
> +       kasan_pgd_populate((unsigned long)shadow_start & PAGE_MASK,
> +                          (unsigned long)shadow_end, false);

As I mentioned in my reply to Florian, we should PAGE_ALIGN()
shadow_end here to ensure that we can meet the stop condition in
kasan_pgd_populate()

> +       return 0;
> +}
> +
> +void __init kasan_init(void)
> +{
> +       struct memblock_region *reg;
> +       int i;
> +
> +       /*
> +        * We are going to perform proper setup of shadow memory.
> +        *
> +        * At first we should unmap the early shadow (clear_pgds() call below).
> +        * However, instrumented code can't execute without shadow memory.
> +        *
> +        * To keep the early shadow memory MMU tables around while setting up
> +        * the proper shadow memory, we copy swapper_pg_dir (the initial page
> +        * table) to tmp_pgd_table and use that to keep the early shadow memory
> +        * mapped until the full shadow setup is finished. Then we swap back
> +        * to the proper swapper_pg_dir.
> +        */
> +
> +       memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
> +#ifdef CONFIG_ARM_LPAE
> +       /* We need to be in the same PGD or this won't work */
> +       BUILD_BUG_ON(pgd_index(KASAN_SHADOW_START) !=
> +                    pgd_index(KASAN_SHADOW_END));
> +       memcpy(tmp_pmd_table,
> +              pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
> +              sizeof(tmp_pmd_table));
> +       set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
> +               __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
> +#endif
> +       cpu_switch_mm(tmp_pgd_table, &init_mm);
> +       local_flush_tlb_all();
> +
> +       clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> +
> +       kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
> +                                   kasan_mem_to_shadow((void *)-1UL) + 1);
> +
> +       for_each_memblock(memory, reg) {
> +               void *start = __va(reg->base);
> +               void *end = __va(reg->base + reg->size);
> +
> +               /* Do not attempt to shadow highmem */
> +               if (reg->base >= arm_lowmem_limit) {
> +                       pr_info("Skip highmem block %px-%px\n",
> +                               start, end);

This gives me

[    0.000000] kasan: Skip highmem block 7f9db000-7fb9d000
[    0.000000] kasan: Skip highmem block 7fb9d000-c0000000

for

[    0.000000]   node   0: [mem 0x00000000ff9db000-0x00000000ffb9cfff]
[    0.000000]   node   0: [mem 0x00000000ffb9d000-0x000000023fffffff]

which is highly confusing - highmem does not have a VA in the first
place, so reporting it here makes no sense. Better use %llx here and
print reg->base/size directly.
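
E.g. something along these lines (an untested sketch, printing the
memblock region's physical base and size directly):

	pr_info("Skipping shadow for highmem block 0x%llx-0x%llx\n",
		(u64)reg->base, (u64)(reg->base + reg->size));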

> +                       continue;
> +               }
> +               if (reg->base + reg->size > arm_lowmem_limit) {
> +                       pr_info("Truncate memory block %px-%px\n to %px-%px\n",
> +                               start, end, start, __va(arm_lowmem_limit));

This gives me

[    0.000000] kasan: Truncate memory block c0000000-7f9db000
                to c0000000-f0000000
for

[    0.000000]   node   0: [mem 0x0000000040000000-0x00000000ff9dafff]

which is equally confusing. I think we should also use reg->base/size
here, and omit the start and __va(arm_lowmem_limit) entirely, and just
print something like

kasan: Truncating shadow for 0x0040000000-0x00ff9dafff to lowmem region

(note that 0x%10llx should be sufficient as LPAE addresses have at most 40 bits)
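
i.e. roughly (untested):

	pr_info("Truncating shadow for 0x%10llx-0x%10llx to lowmem region\n",
		(u64)reg->base, (u64)(reg->base + reg->size - 1));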



> +                       end = __va(arm_lowmem_limit);
> +               }
> +               if (start >= end) {
> +                       pr_info("Skipping invalid memory block %px-%px\n",
> +                               start, end);
> +                       continue;
> +               }
> +
> +               create_mapping(start, end);
> +       }
> +
> +       /*
> +        * 1. The module global variables are in MODULES_VADDR ~ MODULES_END,
> +        *    so we need to map this area.
> +        * 2. PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE's shadow and MODULES_VADDR
> +        *    ~ MODULES_END's shadow is in the same PMD_SIZE, so we can't
> +        *    use kasan_populate_zero_shadow.
> +        */
> +       create_mapping((void *)MODULES_VADDR, (void *)(PKMAP_BASE + PMD_SIZE));
> +
> +       /*
> +        * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
> +        * we should make sure that it maps the zero page read-only.
> +        */
> +       for (i = 0; i < PTRS_PER_PTE; i++)
> +               set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> +                          &kasan_early_shadow_pte[i],
> +                          pfn_pte(virt_to_pfn(kasan_early_shadow_page),
> +                               __pgprot(pgprot_val(PAGE_KERNEL)
> +                                        | L_PTE_RDONLY)));
> +
> +       cpu_switch_mm(swapper_pg_dir, &init_mm);
> +       local_flush_tlb_all();
> +
> +       memset(kasan_early_shadow_page, 0, PAGE_SIZE);
> +       pr_info("Kernel address sanitizer initialized\n");
> +       init_task.kasan_depth = 0;
> +}
> diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
> index c5e1b27046a8..f8e9bc58a84f 100644
> --- a/arch/arm/mm/pgd.c
> +++ b/arch/arm/mm/pgd.c
> @@ -66,7 +66,21 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>         new_pmd = pmd_alloc(mm, new_pud, 0);
>         if (!new_pmd)
>                 goto no_pmd;
> -#endif
> +#ifdef CONFIG_KASAN
> +       /*
> +        * Copy PMD table for KASAN shadow mappings.
> +        */
> +       init_pgd = pgd_offset_k(TASK_SIZE);
> +       init_p4d = p4d_offset(init_pgd, TASK_SIZE);
> +       init_pud = pud_offset(init_p4d, TASK_SIZE);
> +       init_pmd = pmd_offset(init_pud, TASK_SIZE);
> +       new_pmd = pmd_offset(new_pud, TASK_SIZE);
> +       memcpy(new_pmd, init_pmd,
> +              (pmd_index(MODULES_VADDR) - pmd_index(TASK_SIZE))
> +              * sizeof(pmd_t));
> +       clean_dcache_area(new_pmd, PTRS_PER_PMD * sizeof(pmd_t));
> +#endif /* CONFIG_KASAN */
> +#endif /* CONFIG_LPAE */
>
>         if (!vectors_high()) {
>                 /*
> --
> 2.26.2
>


* Re: [PATCH 0/5 v15] KASan for Arm
  2020-10-13  6:34   ` Ard Biesheuvel
@ 2020-10-13 17:57     ` Florian Fainelli
  2020-10-13 18:00       ` Ard Biesheuvel
  0 siblings, 1 reply; 15+ messages in thread
From: Florian Fainelli @ 2020-10-13 17:57 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Arnd Bergmann, Abbott Liu, Linus Walleij, Russell King,
	Mike Rapoport, Andrey Ryabinin, Linux ARM

On 10/12/20 11:34 PM, Ard Biesheuvel wrote:
> On Tue, 13 Oct 2020 at 05:22, Florian Fainelli <f.fainelli@gmail.com> wrote:
>> On 10/12/2020 2:56 PM, Linus Walleij wrote:
>>> [...]
>>
>> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>>
>> On Brahma-B15 (ARMv7 LPAE) and Brahma-B53 (ARMv8 in AArch32, also with
>> LPAE). The 3 Cortex-A72 devices that I have access to all fail with the
>> following (not related to the CPU type, more to the memory map) which I
>> am hoping to track down later this week, I would not consider those
>> failures to be a blocker at this point.
>>
>> Thanks a lot for your persistence working on this Linus, and Ard!
>>
> 
> Hi Florian,
> 
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x0000000000000000-0x00000000063fdfff]
>> [    0.000000]   node   0: [mem 0x0000000006400000-0x000000000fffffff]
>> [    0.000000]   node   0: [mem 0x0000000010400000-0x000000007fffffff]
>> [    0.000000] kasan: Mapping kernel virtual memory block:
>> c0000000-c63fe000 at shadow: b7000000-b7c7fc00
>> [    0.000000] Kernel panic - not syncing: kasan_pte_populate failed to
>> alloc pte for address 0xe2806000
> 
> The issue here is that the end of the shadow region being populated is
> not aligned to the page size, and so we never meet the stop condition
> in kasan_pgd_populate(), and instead, we keep iterating until we run
> out of memory.
> 
> Does this help?

Not really, the same kasan_pte_populate() failure happens for the same
address(es).

Adding memblock=debug does not allow me to boot to the point where kasan
shadow memory gets initialized, again, not a blocker, but this sounds
like something that may have to be looked at.
--
Florian


* Re: [PATCH 0/5 v15] KASan for Arm
  2020-10-13 17:57     ` Florian Fainelli
@ 2020-10-13 18:00       ` Ard Biesheuvel
  2020-10-13 23:57         ` Florian Fainelli
  0 siblings, 1 reply; 15+ messages in thread
From: Ard Biesheuvel @ 2020-10-13 18:00 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: Arnd Bergmann, Abbott Liu, Linus Walleij, Russell King,
	Mike Rapoport, Andrey Ryabinin, Linux ARM

On Tue, 13 Oct 2020 at 19:57, Florian Fainelli <f.fainelli@gmail.com> wrote:
>
> On 10/12/20 11:34 PM, Ard Biesheuvel wrote:
> > On Tue, 13 Oct 2020 at 05:22, Florian Fainelli <f.fainelli@gmail.com> wrote:
> >> On 10/12/2020 2:56 PM, Linus Walleij wrote:
> >>> [...]
> >>
> >> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
> >>
> >> On Brahma-B15 (ARMv7 LPAE) and Brahma-B53 (ARMv8 in AArch32, also with
> >> LPAE). The 3 Cortex-A72 devices that I have access to all fail with the
> >> following (not related to the CPU type, more to the memory map) which I
> >> am hoping to track down later this week, I would not consider those
> >> failures to be a blocker at this point.
> >>
> >> Thanks a lot for your persistence working on this Linus, and Ard!
> >>
> >
> > Hi Florian,
> >
> >> [    0.000000] Early memory node ranges
> >> [    0.000000]   node   0: [mem 0x0000000000000000-0x00000000063fdfff]
> >> [    0.000000]   node   0: [mem 0x0000000006400000-0x000000000fffffff]
> >> [    0.000000]   node   0: [mem 0x0000000010400000-0x000000007fffffff]
> >> [    0.000000] kasan: Mapping kernel virtual memory block:
> >> c0000000-c63fe000 at shadow: b7000000-b7c7fc00
> >> [    0.000000] Kernel panic - not syncing: kasan_pte_populate failed to
> >> alloc pte for address 0xe2806000
> >
> > The issue here is that the end of the shadow region being populated is
> > not aligned to the page size, and so we never meet the stop condition
> > in kasan_pgd_populate(), and instead, we keep iterating until we run
> > out of memory.
> >
> > Does this help?
>
> Not really, the same kasan_pte_populate() failure happens for the same
> address(es).
>
> Adding memblock=debug does not allow me to boot to the point where kasan
> shadow memory gets initialized, again, not a blocker, but this sounds
> like something that may have to be looked at.

That address is not part of the shadow range, so it must be something
with the stop condition in kasan_pgd_populate(). If you have time,
could you add some printk()s in there to see what is going on?
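
For instance (just a debugging sketch), something like

	pr_info("%s: addr = 0x%08lx, next = 0x%08lx, end = 0x%08lx\n",
		__func__, addr, next, end);

at the end of the do/while loop should show where the walk runs away.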


* Re: [PATCH 0/5 v15] KASan for Arm
  2020-10-13 18:00       ` Ard Biesheuvel
@ 2020-10-13 23:57         ` Florian Fainelli
  2020-10-14  7:18           ` Ard Biesheuvel
  0 siblings, 1 reply; 15+ messages in thread
From: Florian Fainelli @ 2020-10-13 23:57 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Arnd Bergmann, Abbott Liu, Linus Walleij, Russell King,
	Mike Rapoport, Andrey Ryabinin, Linux ARM

On 10/13/20 11:00 AM, Ard Biesheuvel wrote:
> On Tue, 13 Oct 2020 at 19:57, Florian Fainelli <f.fainelli@gmail.com> wrote:
>>
>> On 10/12/20 11:34 PM, Ard Biesheuvel wrote:
>>> On Tue, 13 Oct 2020 at 05:22, Florian Fainelli <f.fainelli@gmail.com> wrote:
>>>> On 10/12/2020 2:56 PM, Linus Walleij wrote:
>>>>> [...]
>>>>
>>>> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>>>>
>>>> On Brahma-B15 (ARMv7 LPAE) and Brahma-B53 (ARMv8 in AArch32, also with
>>>> LPAE). The 3 Cortex-A72 devices that I have access to all fail with the
>>>> following (not related to the CPU type, more to the memory map) which I
>>>> am hoping to track down later this week, I would not consider those
>>>> failures to be a blocker at this point.
>>>>
>>>> Thanks a lot for your persistence working on this Linus, and Ard!
>>>>
>>>
>>> Hi Florian,
>>>
>>>> [    0.000000] Early memory node ranges
>>>> [    0.000000]   node   0: [mem 0x0000000000000000-0x00000000063fdfff]
>>>> [    0.000000]   node   0: [mem 0x0000000006400000-0x000000000fffffff]
>>>> [    0.000000]   node   0: [mem 0x0000000010400000-0x000000007fffffff]
>>>> [    0.000000] kasan: Mapping kernel virtual memory block:
>>>> c0000000-c63fe000 at shadow: b7000000-b7c7fc00
>>>> [    0.000000] Kernel panic - not syncing: kasan_pte_populate failed to
>>>> alloc pte for address 0xe2806000
>>>
>>> The issue here is that the end of the shadow region being populated is
>>> not aligned to the page size, and so we never meet the stop condition
>>> in kasan_pgd_populate(), and instead, we keep iterating until we run
>>> out of memory.
>>>
>>> Does this help?
>>
>> Not really, the same kasan_pte_populate() failure happens for the same
>> address(es).
>>
>> Adding memblock=debug does not allow me to boot to the point where kasan
>> shadow memory gets initialized, again, not a blocker, but this sounds
>> like something that may have to be looked at.
> 
> That address is not part of the shadow range, so it must be something
> with the stop condition in kasan_pgd_populate(). If you have time,
> could you add some printk()s in there to see what is going on?

Yes, I have just incorrectly applied the patch, not sure how... it does
work correctly now on all of my systems, thanks a lot!
-- 
Florian


* Re: [PATCH 0/5 v15] KASan for Arm
  2020-10-13 23:57         ` Florian Fainelli
@ 2020-10-14  7:18           ` Ard Biesheuvel
  0 siblings, 0 replies; 15+ messages in thread
From: Ard Biesheuvel @ 2020-10-14  7:18 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: Arnd Bergmann, Abbott Liu, Linus Walleij, Russell King,
	Mike Rapoport, Andrey Ryabinin, Linux ARM

On Wed, 14 Oct 2020 at 01:57, Florian Fainelli <f.fainelli@gmail.com> wrote:
>
> On 10/13/20 11:00 AM, Ard Biesheuvel wrote:
> > On Tue, 13 Oct 2020 at 19:57, Florian Fainelli <f.fainelli@gmail.com> wrote:
> >>
> >> On 10/12/20 11:34 PM, Ard Biesheuvel wrote:
> >>> On Tue, 13 Oct 2020 at 05:22, Florian Fainelli <f.fainelli@gmail.com> wrote:
> >>>> On 10/12/2020 2:56 PM, Linus Walleij wrote:
> >>>>> [...]
> >>>>
> >>>> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
> >>>>
> >>>> On Brahma-B15 (ARMv7 LPAE) and Brahma-B53 (ARMv8 in AArch32, also with
> >>>> LPAE). The 3 Cortex-A72 devices that I have access to all fail with the
> >>>> following (not related to the CPU type, more to the memory map), which I
> >>>> am hoping to track down later this week; I would not consider those
> >>>> failures to be a blocker at this point.
> >>>>
> >>>> Thanks a lot for your persistence working on this Linus, and Ard!
> >>>>
> >>>
> >>> Hi Florian,
> >>>
> >>>> [    0.000000] Early memory node ranges
> >>>> [    0.000000]   node   0: [mem 0x0000000000000000-0x00000000063fdfff]
> >>>> [    0.000000]   node   0: [mem 0x0000000006400000-0x000000000fffffff]
> >>>> [    0.000000]   node   0: [mem 0x0000000010400000-0x000000007fffffff]
> >>>> [    0.000000] kasan: Mapping kernel virtual memory block:
> >>>> c0000000-c63fe000 at shadow: b7000000-b7c7fc00
> >>>> [    0.000000] Kernel panic - not syncing: kasan_pte_populate failed to
> >>>> alloc pte for address 0xe2806000
> >>>
> >>> The issue here is that the end of the shadow region being populated is
> >>> not aligned to the page size, and so we never meet the stop condition
> >>> in kasan_pgd_populate(), and instead, we keep iterating until we run
> >>> out of memory.
> >>>
> >>> Does this help?
> >>
> >> Not really, the same kasan_pte_populate() failure happens for the same
> >> address(es).
> >>
> >> Adding memblock=debug does not allow me to boot to the point where the
> >> kasan shadow memory gets initialized; again, not a blocker, but this
> >> sounds like something that may have to be looked at.
> >
> > That address is not part of the shadow range, so it must be something
> > with the stop condition in kasan_pgd_populate(). If you have time,
> > could you add some printk()s in there to see what is going on?
>
> Yes, I had just applied the patch incorrectly, not sure how... it does
> work correctly now on all of my systems, thanks a lot!

Excellent! Thanks for double checking.
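
To make the failure mode concrete, here is a minimal sketch of the walk
Ard is describing. It is deliberately simplified and uses hypothetical
names (the real arch/arm/mm/kasan_init.c recurses through the pgd/pmd
levels, and PAGE_SIZE/PAGE_MASK/panic() come from the usual kernel
headers), but it shows why an unaligned end address turns the loop into
a runaway allocation that eventually trips the panic quoted above:

/*
 * Checking the arithmetic against Florian's log: the shadow of
 * c0000000-c63fe000 spans (0xc63fe000 - 0xc0000000) >> 3 = 0xc7fc00
 * bytes, i.e. b7000000-b7c7fc00, and 0xb7c7fc00 is not a multiple of
 * PAGE_SIZE (0x1000).  A page-aligned iterator therefore steps past
 * the end address without ever comparing equal to it, and keeps
 * populating PTEs far beyond the shadow region (0xe2806000 in the
 * panic) until memblock runs out of memory.
 */
static void kasan_populate_range(unsigned long start, unsigned long end)
{
        unsigned long addr = start & PAGE_MASK;

        while (addr != end) {                   /* never true if end is unaligned */
                if (!kasan_pte_populate(addr))  /* hypothetical helper */
                        panic("kasan_pte_populate failed to alloc pte for address 0x%lx\n",
                              addr);
                addr += PAGE_SIZE;
        }
}

One fix along these lines is to round the region end up to a page
boundary before walking it, e.g. "end = PAGE_ALIGN(end); while (addr <
end) ...", so the stop condition is met no matter how the shadow of an
odd-sized memblock range happens to be aligned.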

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH] fixup! ARM: Replace string mem* functions for KASan
  2020-10-12 21:56 ` [PATCH 2/5 v15] ARM: Replace string mem* functions for KASan Linus Walleij
@ 2020-10-14 10:59   ` Ahmad Fatoum
  2020-10-19  8:36     ` Linus Walleij
  0 siblings, 1 reply; 15+ messages in thread
From: Ahmad Fatoum @ 2020-10-14 10:59 UTC (permalink / raw)
  To: linus.walleij
  Cc: f.fainelli, Ahmad Fatoum, arnd, liuwenliang, linux, kasan-dev,
	rppt, glider, linux-arm-kernel, kernel, aryabinin, ardb, dvyukov

CONFIG_FORTIFY_SOURCE doesn't play nicely with files that are compiled
with CONFIG_KASAN=y, but have sanitization disabled.

This happens despite 47227d27e2fc ("string.h: fix incompatibility between
FORTIFY_SOURCE and KASAN"). For now, do what ARM64 is already doing and
disable FORTIFY_SOURCE for such files.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
Without this patch, a CONFIG_FORTIFY_SOURCE=y kernel on i.MX6Q hangs
indefinitely in a memcpy inside the very first printk.

With this patch squashed:
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 arch/arm/include/asm/string.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 947f93037d87..6c607c68f3ad 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -58,6 +58,11 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
+
+#ifndef __NO_FORTIFY
+#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
+#endif
+
 #endif
 
 #endif
-- 
2.28.0
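
As a hedged reconstruction (simplified, not the verbatim kernel
headers), the interaction this fixup disables looks roughly like this:

/*
 * With CONFIG_KASAN=y, files built without instrumentation are
 * redirected to the uninstrumented double-underscore variants:
 */
#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
#define memcpy(dst, src, len) __memcpy(dst, src, len)
#endif

/*
 * FORTIFY_SOURCE, meanwhile, wraps memcpy() in an inline
 * bounds-checking helper that forwards to __builtin_memcpy().  The
 * compiler is free to lower that builtin back into a call to the real
 * memcpy() symbol -- the KASan-instrumented one -- which is precisely
 * what these files must not call.  Defining __NO_FORTIFY here makes
 * linux/string.h skip its fortified wrappers for this translation
 * unit:
 */
#ifndef __NO_FORTIFY
#define __NO_FORTIFY
#endif

This mirrors what arm64's asm/string.h already does for the same
KASAN-plus-FORTIFY combination, which is the precedent the commit
message refers to.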


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH] fixup! ARM: Replace string mem* functions for KASan
  2020-10-14 10:59   ` [PATCH] fixup! " Ahmad Fatoum
@ 2020-10-19  8:36     ` Linus Walleij
  0 siblings, 0 replies; 15+ messages in thread
From: Linus Walleij @ 2020-10-19  8:36 UTC (permalink / raw)
  To: Ahmad Fatoum
  Cc: Florian Fainelli, Arnd Bergmann, Abbott Liu, Russell King,
	kasan-dev, Mike Rapoport, Alexander Potapenko, Linux ARM,
	Sascha Hauer, Andrey Ryabinin, Ard Biesheuvel, Dmitry Vyukov

On Wed, Oct 14, 2020 at 1:00 PM Ahmad Fatoum <a.fatoum@pengutronix.de> wrote:

> CONFIG_FORTIFY_SOURCE doesn't play nicely with files that are compiled
> with CONFIG_KASAN=y, but have sanitization disabled.
>
> This happens despite 47227d27e2fc ("string.h: fix incompatibility between
> FORTIFY_SOURCE and KASAN"). For now, do what ARM64 is already doing and
> disable FORTIFY_SOURCE for such files.
>
> Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> Without this patch, a CONFIG_FORTIFY_SOURCE=y kernel on i.MX6Q hangs
> indefinitely in a memcpy inside the very first printk.
>
> With this patch squashed:
> Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de>

Thanks so much Ahmad! I folded your fix into this patch and
added your Signed-off-by, then added your Tested-by on all
patches, and will resend as v16 before putting this into Russell's
patch tracker.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2020-10-19  8:38 UTC | newest]

Thread overview: 15+ messages
2020-10-12 21:56 [PATCH 0/5 v15] KASan for Arm Linus Walleij
2020-10-12 21:56 ` [PATCH 1/5 v15] ARM: Disable KASan instrumentation for some code Linus Walleij
2020-10-12 21:56 ` [PATCH 2/5 v15] ARM: Replace string mem* functions for KASan Linus Walleij
2020-10-14 10:59   ` [PATCH] fixup! " Ahmad Fatoum
2020-10-19  8:36     ` Linus Walleij
2020-10-12 21:56 ` [PATCH 3/5 v15] ARM: Define the virtual space of KASan's shadow region Linus Walleij
2020-10-12 21:57 ` [PATCH 4/5 v15] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
2020-10-13  6:58   ` Ard Biesheuvel
2020-10-12 21:57 ` [PATCH 5/5 v15] ARM: Enable KASan for ARM Linus Walleij
2020-10-13  3:22 ` [PATCH 0/5 v15] KASan for Arm Florian Fainelli
2020-10-13  6:34   ` Ard Biesheuvel
2020-10-13 17:57     ` Florian Fainelli
2020-10-13 18:00       ` Ard Biesheuvel
2020-10-13 23:57         ` Florian Fainelli
2020-10-14  7:18           ` Ard Biesheuvel
