* [PATCH 1/5 v10] ARM: Disable KASan instrumentation for some code
2020-06-15 9:02 [PATCH 0/5 v10] KASan for Arm Linus Walleij
@ 2020-06-15 9:02 ` Linus Walleij
2020-06-15 9:02 ` [PATCH 2/5 v10] ARM: Replace string mem* functions for KASan Linus Walleij
` (3 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Linus Walleij @ 2020-06-15 9:02 UTC (permalink / raw)
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
Andrey Ryabinin, Mike Rapoport
Cc: Arnd Bergmann, Marc Zyngier, Linus Walleij, kasan-dev,
Alexander Potapenko, linux-arm-kernel, Dmitry Vyukov
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Disable instrumentation for arch/arm/boot/compressed/*
since that code is executed before the kernel has even
set up its mappings and definately out of scope for
KASan.
Disable instrumentation of arch/arm/vdso/* because that code
is not linked with the kernel image, so the KASan management
code would fail to link.
Disable instrumentation of arch/arm/mm/physaddr.c. See commit
ec6d06efb0ba ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")
for more details.
Disable kasan check in the function unwind_pop_register because
it does not matter that kasan checks failed when unwind_pop_register()
reads the stack memory of a task.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Collect Ard's tags.
ChangeLog v7->v8:
- Do not sanitize arch/arm/mm/mmu.c.
Apart from being intuitively correct, it turns out that KASan
will insert a __asan_load4() into the set_pte_at() function
in mmu.c and this is something that KASan calls in the early
initialization, to set up the shadow memory. Naturally,
__asan_load4() cannot be called before the shadow memory is
set up so we need to exclude mmu.c from sanitization.
ChangeLog v6->v7:
- Removed the KVM instrumentaton disablement since KVM
on ARM32 is gone.
---
arch/arm/boot/compressed/Makefile | 1 +
arch/arm/kernel/unwind.c | 6 +++++-
arch/arm/mm/Makefile | 2 ++
arch/arm/vdso/Makefile | 2 ++
4 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 00602a6fba04..bb8d193d13de 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -24,6 +24,7 @@ OBJS += hyp-stub.o
endif
GCOV_PROFILE := n
+KASAN_SANITIZE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index d2bd0df2318d..f35eb584a18a 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -236,7 +236,11 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
if (*vsp >= (unsigned long *)ctrl->sp_high)
return -URC_FAILURE;
- ctrl->vrs[reg] = *(*vsp)++;
+ /* Use READ_ONCE_NOCHECK here to avoid this memory access
+ * from being tracked by KASAN.
+ */
+ ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+ (*vsp)++;
return URC_OK;
}
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7cb1699fbfc4..99699c32d8a5 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -7,6 +7,7 @@ obj-y := extable.o fault.o init.o iomap.o
obj-y += dma-mapping$(MMUEXT).o
obj-$(CONFIG_MMU) += fault-armv.o flush.o idmap.o ioremap.o \
mmap.o pgd.o mmu.o pageattr.o
+KASAN_SANITIZE_mmu.o := n
ifneq ($(CONFIG_MMU),y)
obj-y += nommu.o
@@ -16,6 +17,7 @@ endif
obj-$(CONFIG_ARM_PTDUMP_CORE) += dump.o
obj-$(CONFIG_ARM_PTDUMP_DEBUGFS) += ptdump_debugfs.o
obj-$(CONFIG_MODULES) += proc-syms.o
+KASAN_SANITIZE_physaddr.o := n
obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index d3c9f03e7e79..71d18d59bd35 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -42,6 +42,8 @@ GCOV_PROFILE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
+KASAN_SANITIZE := n
+
# Force dependency
$(obj)/vdso.o : $(obj)/vdso.so
--
2.25.4
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH 2/5 v10] ARM: Replace string mem* functions for KASan
2020-06-15 9:02 [PATCH 0/5 v10] KASan for Arm Linus Walleij
2020-06-15 9:02 ` [PATCH 1/5 v10] ARM: Disable KASan instrumentation for some code Linus Walleij
@ 2020-06-15 9:02 ` Linus Walleij
2020-06-15 9:02 ` [PATCH 3/5 v10] ARM: Define the virtual space of KASan's shadow region Linus Walleij
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Linus Walleij @ 2020-06-15 9:02 UTC (permalink / raw)
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
Andrey Ryabinin, Mike Rapoport
Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
linux-arm-kernel, Dmitry Vyukov
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Functions like memset()/memmove()/memcpy() do a lot of memory
accesses.
If a bad pointer is passed to one of these functions it is important
to catch this. Compiler instrumentation cannot do this since these
functions are written in assembly.
KASan replaces these memory functions with instrumented variants.
The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them.
The original functions have aliases with a '__' prefix in their
name, so we can call the non-instrumented variant if needed.
We must use __memcpy()/__memset() in place of memcpy()/memset()
when we copy .data to RAM and when we clear .bss, because
kasan_early_init cannot be called before the initialization of
.data and .bss.
For the kernel compression and EFI libstub's custom string
libraries we need a special quirk: even if these are built
without KASan enabled, they rely on the global headers for their
custom string libraries, which means that e.g. memcpy()
will be defined to __memcpy() and we get link failures.
Since these implementations are written i C rather than
assembly we use e.g. __alias(memcpy) to redirected any
users back to the local implementation.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Collect Ard's tags.
ChangeLog v7->v8:
- Use the less invasive version of handling the global redefines
of the string functions in the decompressor: __alias() the
functions locally in the library.
- Put in some more comments so readers of the code knows what
is going on.
ChangeLog v6->v7:
- Move the hacks around __SANITIZE_ADDRESS__ into this file
- Edit the commit message
- Rebase on the other v2 patches
---
arch/arm/boot/compressed/string.c | 19 +++++++++++++++++++
arch/arm/include/asm/string.h | 21 +++++++++++++++++++++
arch/arm/kernel/head-common.S | 4 ++--
arch/arm/lib/memcpy.S | 3 +++
arch/arm/lib/memmove.S | 5 ++++-
arch/arm/lib/memset.S | 3 +++
6 files changed, 52 insertions(+), 3 deletions(-)
diff --git a/arch/arm/boot/compressed/string.c b/arch/arm/boot/compressed/string.c
index ade5079bebbf..8c0fa276d994 100644
--- a/arch/arm/boot/compressed/string.c
+++ b/arch/arm/boot/compressed/string.c
@@ -7,6 +7,25 @@
#include <linux/string.h>
+/*
+ * The decompressor is built without KASan but uses the same redirects as the
+ * rest of the kernel when CONFIG_KASAN is enabled, defining e.g. memcpy()
+ * to __memcpy() but since we are not linking with the main kernel string
+ * library in the decompressor, that will lead to link failures.
+ *
+ * Undefine KASan's versions, define the wrapped functions and alias them to
+ * the right names so that when e.g. __memcpy() appear in the code, it will
+ * still be linked to this local version of memcpy().
+ */
+#ifdef CONFIG_KASAN
+#undef memcpy
+#undef memmove
+#undef memset
+void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
+void *__memmove(void *__dest, __const void *__src, size_t count) __alias(memmove);
+void *__memset(void *s, int c, size_t count) __alias(memset);
+#endif
+
void *memcpy(void *__dest, __const void *__src, size_t __n)
{
int i = 0;
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 111a1d8a41dd..947f93037d87 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -5,6 +5,9 @@
/*
* We don't do inline string functions, since the
* optimised inline asm versions are not small.
+ *
+ * The __underscore versions of some functions are for KASan to be able
+ * to replace them with instrumented versions.
*/
#define __HAVE_ARCH_STRRCHR
@@ -15,15 +18,18 @@ extern char * strchr(const char * s, int c);
#define __HAVE_ARCH_MEMCPY
extern void * memcpy(void *, const void *, __kernel_size_t);
+extern void *__memcpy(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMMOVE
extern void * memmove(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMCHR
extern void * memchr(const void *, int, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void * memset(void *, int, __kernel_size_t);
+extern void *__memset(void *s, int c, __kernel_size_t n);
#define __HAVE_ARCH_MEMSET32
extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
@@ -39,4 +45,19 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
return __memset64(p, v, n * 8, v >> 32);
}
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * must use non-instrumented versions of the mem*
+ * functions named __memcpy() etc. All such kernel code has
+ * been tagged with KASAN_SANITIZE_file.o = n, which means
+ * that the address sanitization argument isn't passed to the
+ * compiler, and __SANITIZE_ADDRESS__ is not set. As a result
+ * these defines kick in.
+ */
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
#endif
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 4a3982812a40..6840c7c60a85 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -95,7 +95,7 @@ __mmap_switched:
THUMB( ldmia r4!, {r0, r1, r2, r3} )
THUMB( mov sp, r3 )
sub r2, r2, r1
- bl memcpy @ copy .data to RAM
+ bl __memcpy @ copy .data to RAM
#endif
ARM( ldmia r4!, {r0, r1, sp} )
@@ -103,7 +103,7 @@ __mmap_switched:
THUMB( mov sp, r3 )
sub r2, r1, r0
mov r1, #0
- bl memset @ clear .bss
+ bl __memset @ clear .bss
ldmia r4, {r0, r1, r2, r3}
str r9, [r0] @ Save processor ID
diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S
index 09a333153dc6..ad4625d16e11 100644
--- a/arch/arm/lib/memcpy.S
+++ b/arch/arm/lib/memcpy.S
@@ -58,6 +58,8 @@
/* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
+.weak memcpy
+ENTRY(__memcpy)
ENTRY(mmiocpy)
ENTRY(memcpy)
@@ -65,3 +67,4 @@ ENTRY(memcpy)
ENDPROC(memcpy)
ENDPROC(mmiocpy)
+ENDPROC(__memcpy)
diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index b50e5770fb44..fd123ea5a5a4 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -24,12 +24,14 @@
* occurring in the opposite direction.
*/
+.weak memmove
+ENTRY(__memmove)
ENTRY(memmove)
UNWIND( .fnstart )
subs ip, r0, r1
cmphi r2, ip
- bls memcpy
+ bls __memcpy
stmfd sp!, {r0, r4, lr}
UNWIND( .fnend )
@@ -222,3 +224,4 @@ ENTRY(memmove)
18: backward_copy_shift push=24 pull=8
ENDPROC(memmove)
+ENDPROC(__memmove)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index 6ca4535c47fb..0e7ff0423f50 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -13,6 +13,8 @@
.text
.align 5
+.weak memset
+ENTRY(__memset)
ENTRY(mmioset)
ENTRY(memset)
UNWIND( .fnstart )
@@ -132,6 +134,7 @@ UNWIND( .fnstart )
UNWIND( .fnend )
ENDPROC(memset)
ENDPROC(mmioset)
+ENDPROC(__memset)
ENTRY(__memset32)
UNWIND( .fnstart )
--
2.25.4
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH 3/5 v10] ARM: Define the virtual space of KASan's shadow region
2020-06-15 9:02 [PATCH 0/5 v10] KASan for Arm Linus Walleij
2020-06-15 9:02 ` [PATCH 1/5 v10] ARM: Disable KASan instrumentation for some code Linus Walleij
2020-06-15 9:02 ` [PATCH 2/5 v10] ARM: Replace string mem* functions for KASan Linus Walleij
@ 2020-06-15 9:02 ` Linus Walleij
2020-06-15 9:02 ` [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
2020-06-15 9:02 ` [PATCH 5/5 v10] ARM: Enable KASan for ARM Linus Walleij
4 siblings, 0 replies; 12+ messages in thread
From: Linus Walleij @ 2020-06-15 9:02 UTC (permalink / raw)
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
Andrey Ryabinin, Mike Rapoport
Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
linux-arm-kernel, Dmitry Vyukov
From: Abbott Liu <liuwenliang@huawei.com>
Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32bit architecture) out of the virtual address
space to use as shadow memory for KASan as follows:
+----+ 0xffffffff
| |\
| | |-> Static kernel image (vmlinux) BSS and page table
| |/
+----+ PAGE_OFFSET
| |\
| | |-> Loadable kernel modules virtual address space area
| |/
+----+ MODULES_VADDR = KASAN_SHADOW_END
| |\
| | |-> The shadow area of kernel virtual address.
| |/
+----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
| |\ shadow address of MODULES_VADDR
| | |
| | |
| | |-> The user space area in lowmem. The kernel address
| | | sanitizer do not use this space, nor does it map it.
| | |
| | |
| | |
| | |
| |/
------ 0
0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It us used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.
KASAN_SHADOW_START:
This value begins with the MODULE_VADDR's shadow address. It is the
start of kernel virtual space. Since we have modules to load, we need
to cover also that area with shadow memory so we can find memory
bugs in modules.
KASAN_SHADOW_END
This value is the 0x100000000's shadow address: the mapping that would
be after the end of the kernel memory at 0xffffffff. It is the end of
kernel address sanitizer shadow area. It is also the start of the
module area.
KASAN_SHADOW_OFFSET:
This value is used to map an address to the corresponding shadow
address by the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
As you would expect, >> 3 is equal to dividing by 8, meaning each
byte in the shadow memory covers 8 bytes of kernel memory, so one
bit shadow memory per byte of kernel memory is used.
The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
on the VMSPLIT layout of the system: the kernel and userspace can
split up lowmem in different ways according to needs, so we calculate
the shadow offset depending on this.
When kasan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in the
*.s file.
The kernel and modules may use different amounts of memory,
according to the VMSPLIT configuration, which in turn
determines the PAGE_OFFSET.
We use the following KASAN_SHADOW_OFFSETs depending on how the
virtual memory is split up:
- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
- The kernel address space is 3G (0xc0000000)
- PAGE_OFFSET is then set to 0x40000000 so the kernel static
image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
so the modules use addresses 0x3f000000 .. 0x3fffffff
- So the addresses 0x3f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0xc1000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x18200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x26e00000, to
KASAN_SHADOW_END at 0x3effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x3f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
KASAN_SHADOW_OFFSET = 0x1f000000
- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
- The kernel space is 2G (0x80000000)
- PAGE_OFFSET is set to 0x80000000 so the kernel static
image uses 0x80000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
so the modules use addresses 0x7f000000 .. 0x7fffffff
- So the addresses 0x7f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x81000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x10200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x6ee00000, to
KASAN_SHADOW_END at 0x7effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x7f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
KASAN_SHADOW_OFFSET = 0x5f000000
- 0x9f000000 if we have 3G userspace / 1G kernelspace split,
and this is the default split for ARM:
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xc0000000 so the kernel static
image uses 0xc0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
so the modules use addresses 0xbf000000 .. 0xbfffffff
- So the addresses 0xbf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x41000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x08200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xb6e00000, to
KASAN_SHADOW_END at 0xbfffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xbf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
KASAN_SHADOW_OFFSET = 0x9f000000
- 0x8f000000 if we have 3G userspace / 1G kernelspace with
full 1 GB low memory (VMSPLIT_3G_OPT):
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xb0000000 so the kernel static
image uses 0xb0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
so the modules use addresses 0xaf000000 .. 0xaffffff
- So the addresses 0xaf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x51000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x0a200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xa4e00000, to
KASAN_SHADOW_END at 0xaeffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xaf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
KASAN_SHADOW_OFFSET = 0x8f000000
- The default value of 0xffffffff for KASAN_SHADOW_OFFSET
is an error value. We should always match one of the
above shadow offsets.
When we do this, TASK_SIZE will sometimes get a bit odd values
that will not fit into immediate mov assembly instructions.
To account for this, we need to rewrite some assembly using
TASK_SIZE like this:
- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE
or
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
this is done to avoid the immediate #TASK_SIZE that need to
fit into a limited number of bits.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Collect Ard's tags.
ChangeLog v7->v8:
- Rewrote the PMD clearing code to take into account that
KASan may not always be adjacent to MODULES_VADDR: if we
compile for thumb, then there will be an 8 MB hole between
the shadow memory and MODULES_VADDR. Make this explicit and
use the KASAN defines with an explicit ifdef so it is clear
what is going on in the prepare_page_table().
- Patch memory.rst to reflect the location of KASan shadow
memory.
ChangeLog v6->v7:
- Use the SPDX license identifier.
- Rewrote the commit message and updates the illustration.
- Move KASAN_OFFSET Kconfig set-up into this patch and put it
right after PAGE_OFFSET so it is clear how this works, and
we have all defines in one patch.
- Added KASAN_SHADOW_OFFSET of 0x8f000000 for 3G_OPT.
See the calculation in the commit message.
- Updated the commit message with detailed information on
how KASAN_SHADOW_OFFSET is obtained for the different
VMSPLIT/PAGE_OFFSET options.
---
Documentation/arm/memory.rst | 5 ++
arch/arm/Kconfig | 9 ++++
arch/arm/include/asm/kasan_def.h | 81 ++++++++++++++++++++++++++++++
arch/arm/include/asm/memory.h | 5 ++
arch/arm/include/asm/uaccess-asm.h | 2 +-
arch/arm/kernel/entry-armv.S | 3 +-
arch/arm/kernel/entry-common.S | 9 ++--
arch/arm/mm/mmu.c | 18 +++++++
8 files changed, 127 insertions(+), 5 deletions(-)
create mode 100644 arch/arm/include/asm/kasan_def.h
diff --git a/Documentation/arm/memory.rst b/Documentation/arm/memory.rst
index 0521b4ce5c96..36bae90cfb1e 100644
--- a/Documentation/arm/memory.rst
+++ b/Documentation/arm/memory.rst
@@ -72,6 +72,11 @@ MODULES_VADDR MODULES_END-1 Kernel module space
Kernel modules inserted via insmod are
placed here using dynamic mappings.
+TASK_SIZE MODULES_VADDR-1 KASAn shadow memory when KASan is in use.
+ The range from MODULES_VADDR to the top
+ of the memory is shadowed here with 1 bit
+ per byte of memory.
+
00001000 TASK_SIZE-1 User space mappings
Per-thread mappings are placed here via
the mmap() system call.
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 2ac74904a3ce..d291cdb84c9d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1328,6 +1328,15 @@ config PAGE_OFFSET
default 0xB0000000 if VMSPLIT_3G_OPT
default 0xC0000000
+config KASAN_SHADOW_OFFSET
+ hex
+ depends on KASAN
+ default 0x1f000000 if PAGE_OFFSET=0x40000000
+ default 0x5f000000 if PAGE_OFFSET=0x80000000
+ default 0x9f000000 if PAGE_OFFSET=0xC0000000
+ default 0x8f000000 if PAGE_OFFSET=0xB0000000
+ default 0xffffffff
+
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 000000000000..5739605aa7cf
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * arch/arm/include/asm/kasan_def.h
+ *
+ * Copyright (c) 2018 Huawei Technologies Co., Ltd.
+ *
+ * Author: Abbott Liu <liuwenliang@huawei.com>
+ */
+
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ * Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for
+ * the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
+ * addressable by a 32bit architecture) out of the virtual address
+ * space to use as shadow memory for KASan as follows:
+ *
+ * +----+ 0xffffffff
+ * | | \
+ * | | |-> Static kernel image (vmlinux) BSS and page table
+ * | |/
+ * +----+ PAGE_OFFSET
+ * | | \
+ * | | |-> Loadable kernel modules virtual address space area
+ * | |/
+ * +----+ MODULES_VADDR = KASAN_SHADOW_END
+ * | | \
+ * | | |-> The shadow area of kernel virtual address.
+ * | |/
+ * +----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
+ * | |\ shadow address of MODULES_VADDR
+ * | | |
+ * | | |
+ * | | |-> The user space area in lowmem. The kernel address
+ * | | | sanitizer do not use this space, nor does it map it.
+ * | | |
+ * | | |
+ * | | |
+ * | | |
+ * | |/
+ * ------ 0
+ *
+ * 1) KASAN_SHADOW_START
+ * This value begins with the MODULE_VADDR's shadow address. It is the
+ * start of kernel virtual space. Since we have modules to load, we need
+ * to cover also that area with shadow memory so we can find memory
+ * bugs in modules.
+ *
+ * 2) KASAN_SHADOW_END
+ * This value is the 0x100000000's shadow address: the mapping that would
+ * be after the end of the kernel memory at 0xffffffff. It is the end of
+ * kernel address sanitizer shadow area. It is also the start of the
+ * module area.
+ *
+ * 3) KASAN_SHADOW_OFFSET:
+ * This value is used to map an address to the corresponding shadow
+ * address by the following formula:
+ *
+ * shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
+ *
+ * As you would expect, >> 3 is equal to dividing by 8, meaning each
+ * byte in the shadow memory covers 8 bytes of kernel memory, so one
+ * bit shadow memory per byte of kernel memory is used.
+ *
+ * The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
+ * on the VMSPLIT layout of the system: the kernel and userspace can
+ * split up lowmem in different ways according to needs, so we calculate
+ * the shadow offset depending on this.
+ */
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END ((UL(1) << (32 - KASAN_SHADOW_SCALE_SHIFT)) \
+ + KASAN_SHADOW_OFFSET)
+#define KASAN_SHADOW_START ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 99035b5891ef..5cfa9e5dc733 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -18,6 +18,7 @@
#ifdef CONFIG_NEED_MACH_MEMORY_H
#include <mach/memory.h>
#endif
+#include <asm/kasan_def.h>
/* PAGE_OFFSET - the virtual address of the start of the kernel image */
#define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET)
@@ -28,7 +29,11 @@
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
*/
+#ifndef CONFIG_KASAN
#define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE (KASAN_SHADOW_START)
+#endif
#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M)
/*
diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h
index 907571fd05c6..e6eb7a2aaf1e 100644
--- a/arch/arm/include/asm/uaccess-asm.h
+++ b/arch/arm/include/asm/uaccess-asm.h
@@ -85,7 +85,7 @@
*/
.macro uaccess_entry, tsk, tmp0, tmp1, tmp2, disable
ldr \tmp1, [\tsk, #TI_ADDR_LIMIT]
- mov \tmp2, #TASK_SIZE
+ ldr \tmp2, =TASK_SIZE
str \tmp2, [\tsk, #TI_ADDR_LIMIT]
DACR( mrc p15, 0, \tmp0, c3, c0, 0)
DACR( str \tmp0, [sp, #SVC_DACR])
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 55a47df04773..c4220f51fcf3 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -427,7 +427,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
blhs kuser_cmpxchg64_fixup
#endif
#endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 271cb8a1eba1..fee279e28a72 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,8 @@ __ret_fast_syscall:
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -87,7 +88,8 @@ __ret_fast_syscall:
#endif
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -128,7 +130,8 @@ ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 628028bfbb92..46ee62d39f04 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -29,6 +29,7 @@
#include <asm/traps.h>
#include <asm/procinfo.h>
#include <asm/memory.h>
+#include <asm/kasan_def.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
@@ -1264,8 +1265,25 @@ static inline void prepare_page_table(void)
/*
* Clear out all the mappings below the kernel image.
*/
+#ifdef CONFIG_KASAN
+ /*
+ * KASan's shadow memory inserts itself between the TASK_SIZE
+ * and MODULES_VADDR. Do not clear the KASan shadow memory mappings.
+ */
+ for (addr = 0; addr < KASAN_SHADOW_START; addr += PMD_SIZE)
+ pmd_clear(pmd_off_k(addr));
+ /*
+ * Skip over the KASan shadow area. KASAN_SHADOW_END is sometimes
+ * equal to MODULES_VADDR and then we exit the pmd clearing. If we
+ * are using a thumb-compiled kernel, there there will be 8MB more
+ * to clear as KASan always offset to 16 MB below MODULES_VADDR.
+ */
+ for (addr = KASAN_SHADOW_END; addr < MODULES_VADDR; addr += PMD_SIZE)
+ pmd_clear(pmd_off_k(addr));
+#else
for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+#endif
#ifdef CONFIG_XIP_KERNEL
/* The XIP kernel is mapped in the module area -- skip over it */
--
2.25.4
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory
2020-06-15 9:02 [PATCH 0/5 v10] KASan for Arm Linus Walleij
` (2 preceding siblings ...)
2020-06-15 9:02 ` [PATCH 3/5 v10] ARM: Define the virtual space of KASan's shadow region Linus Walleij
@ 2020-06-15 9:02 ` Linus Walleij
2020-06-15 14:33 ` Mike Rapoport
2020-06-29 14:07 ` Linus Walleij
2020-06-15 9:02 ` [PATCH 5/5 v10] ARM: Enable KASan for ARM Linus Walleij
4 siblings, 2 replies; 12+ messages in thread
From: Linus Walleij @ 2020-06-15 9:02 UTC (permalink / raw)
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
Andrey Ryabinin, Mike Rapoport
Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
linux-arm-kernel, Dmitry Vyukov
This patch initializes KASan shadow region's page table and memory.
There are two stage for KASan initializing:
1. At early boot stage the whole shadow region is mapped to just
one physical page (kasan_zero_page). It is finished by the function
kasan_early_init which is called by __mmap_switched(arch/arm/kernel/
head-common.S)
2. After the calling of paging_init, we use kasan_zero_page as zero
shadow for some memory that KASan does not need to track, and we
allocate a new shadow space for the other memory that KASan need to
track. These issues are finished by the function kasan_init which is
call by setup_arch.
After the initial development by Andre Ryabinin several modifications
have been made to this code:
Abbott Liu <liuwenliang@huawei.com>
- Add support ARM LPAE: If LPAE is enabled, KASan shadow region's
mapping table need be copied in the pgd_alloc() function.
- Change kasan_pte_populate,kasan_pmd_populate,kasan_pud_populate,
kasan_pgd_populate from .meminit.text section to .init.text section.
Reported by Florian Fainelli <f.fainelli@gmail.com>
Linus Walleij <linus.walleij@linaro.org>:
- Drop the custom mainpulation of TTBR0 and just use
cpu_switch_mm() to switch the pgd table.
- Adopt to handle 4th level page tabel folding.
- Rewrite the entire page directory and page entry initialization
sequence to be recursive based on ARM64:s kasan_init.c
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Co-Developed-by: Abbott Liu <liuwenliang@huawei.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v9->v10:
- Rebase onto v5.8-rc1
- add support for folded p4d page tables, use the primitives necessary
for the 4th level folding, add (empty) walks of p4d level.
- Use the <linux/pgtable.h> header file that has now appeared as part
of the VM consolidation series.
- Use a recursive method to walk pgd/p4d/pud/pmd/pte instead of the
separate early/main calls and the flat call structure used in the
old code. This was inspired by the ARM64 KASan init code.
- Assume authorship of this code, I have now written the majority of
it so the blame is on me and noone else.
ChangeLog v8->v9:
- Drop the custom CP15 manipulation and cache flushing for swapping
TTBR0 and instead just use cpu_switch_mm().
- Collect Ard's tags.
ChangeLog v7->v8:
- Rebased.
ChangeLog v6->v7:
- Use SPDX identifer for the license.
- Move the TTBR0 accessor calls into this patch.
---
arch/arm/include/asm/kasan.h | 32 +++
arch/arm/include/asm/pgalloc.h | 9 +-
arch/arm/include/asm/thread_info.h | 4 +
arch/arm/kernel/head-common.S | 3 +
arch/arm/kernel/setup.c | 2 +
arch/arm/mm/Makefile | 3 +
arch/arm/mm/kasan_init.c | 304 +++++++++++++++++++++++++++++
arch/arm/mm/pgd.c | 15 +-
8 files changed, 369 insertions(+), 3 deletions(-)
create mode 100644 arch/arm/include/asm/kasan.h
create mode 100644 arch/arm/mm/kasan_init.c
diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
new file mode 100644
index 000000000000..56b954db160e
--- /dev/null
+++ b/arch/arm/include/asm/kasan.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * arch/arm/include/asm/kasan.h
+ *
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ *
+ */
+
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifdef CONFIG_KASAN
+
+#include <asm/kasan_def.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+/*
+ * The compiler uses a shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from 'compiler's shadow offset' +
+ * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
+ */
+
+extern void kasan_init(void);
+
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 069da393110c..d969f8058b26 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -21,6 +21,7 @@
#define _PAGE_KERNEL_TABLE (PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL))
#ifdef CONFIG_ARM_LPAE
+#define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
@@ -39,14 +40,18 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
}
#else /* !CONFIG_ARM_LPAE */
+#define PGD_SIZE (PAGE_SIZE << 2)
/*
* Since we have only two-level page tables, these are trivial
*/
#define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
#define pmd_free(mm, pmd) do { } while (0)
-#define pud_populate(mm,pmd,pte) BUG()
-
+#ifndef CONFIG_KASAN
+#define pud_populate(mm, pmd, pte) BUG()
+#else
+#define pud_populate(mm, pmd, pte) do { } while (0)
+#endif
#endif /* CONFIG_ARM_LPAE */
extern pgd_t *pgd_alloc(struct mm_struct *mm);
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 3609a6980c34..cf47cf9c4742 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -13,7 +13,11 @@
#include <asm/fpstate.h>
#include <asm/page.h>
+#ifdef CONFIG_KASAN
+#define THREAD_SIZE_ORDER 2
+#else
#define THREAD_SIZE_ORDER 1
+#endif
#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
#define THREAD_START_SP (THREAD_SIZE - 8)
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 6840c7c60a85..89c80154b9ef 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -111,6 +111,9 @@ __mmap_switched:
str r8, [r2] @ Save atags pointer
cmp r3, #0
strne r10, [r3] @ Save control register values
+#ifdef CONFIG_KASAN
+ bl kasan_early_init
+#endif
mov lr, #0
b start_kernel
ENDPROC(__mmap_switched)
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index d8e18cdd96d3..b0820847bb92 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -58,6 +58,7 @@
#include <asm/unwind.h>
#include <asm/memblock.h>
#include <asm/virt.h>
+#include <asm/kasan.h>
#include "atags.h"
@@ -1130,6 +1131,7 @@ void __init setup_arch(char **cmdline_p)
early_ioremap_reset();
paging_init(mdesc);
+ kasan_init();
request_standard_resources(mdesc);
if (mdesc->restart)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 99699c32d8a5..4536159bc8fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -113,3 +113,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
+
+KASAN_SANITIZE_kasan_init.o := n
+obj-$(CONFIG_KASAN) += kasan_init.o
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
new file mode 100644
index 000000000000..6438a13f8368
--- /dev/null
+++ b/arch/arm/mm/kasan_init.c
@@ -0,0 +1,304 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This file contains kasan initialization code for ARM.
+ *
+ * Copyright (c) 2018 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ */
+
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/sched/task.h>
+#include <linux/start_kernel.h>
+#include <linux/pgtable.h>
+#include <asm/cputype.h>
+#include <asm/highmem.h>
+#include <asm/mach/map.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/procinfo.h>
+#include <asm/proc-fns.h>
+
+#include "mm.h"
+
+static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size, int node)
+{
+ return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+ MEMBLOCK_ALLOC_KASAN, node);
+}
+
+static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
+ unsigned long end, int node, bool early)
+{
+ unsigned long next;
+ pte_t *ptep = pte_offset_kernel(pmdp, addr);
+
+ do {
+ next = addr + PAGE_SIZE;
+
+ if (pte_none(*ptep)) {
+ pte_t entry;
+ void *p;
+
+ /*
+ * The early shadow memory is mapping all KASan operations to one and the same page
+ * in memory, "kasan_early_shadow_page" so that the instrumentation will work on
+ * a scratch area until we can set up the proper KASan shadow memory.
+ */
+ if (early) {
+ p = kasan_early_shadow_page;
+ entry = pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY
+ | L_PTE_XN));
+ } else {
+ p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p) {
+ panic("%s failed to alloc pte for address 0x%lx\n",
+ __func__, addr);
+ return;
+ }
+ memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
+ entry = pfn_pte(virt_to_pfn(p),
+ __pgprot(pgprot_val(PAGE_KERNEL)));
+ }
+
+ set_pte_at(&init_mm, addr, ptep, entry);
+ }
+ } while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
+}
+
+/*
+ * The pmd (page middle directory) used on LPAE?
+ */
+static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
+ unsigned long end, int node, bool early)
+{
+ unsigned long next;
+ pmd_t *pmdp = pmd_offset(pudp, addr);
+
+ if (pmd_none(*pmdp)) {
+ void *p = early ? kasan_early_shadow_pte : kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p) {
+ panic("%s failed to allocate pmd for address 0x%lx\n",
+ __func__, addr);
+ return;
+ }
+ pmd_populate_kernel(&init_mm, pmdp, p);
+ flush_pmd_entry(pmdp);
+ }
+
+ do {
+ next = pmd_addr_end(addr, end);
+ kasan_pte_populate(pmdp, addr, next, node, early);
+ } while (pmdp++, addr = next, addr != end && pmd_none(READ_ONCE(*pmdp)));
+}
+
+/*
+ * The pud (page upper directory) is only used on LPAE systems.
+ */
+static void __init kasan_pud_populate(p4d_t *p4dp, unsigned long addr,
+ unsigned long end, int node, bool early)
+{
+ unsigned long next;
+ pud_t *pudp = pud_offset(p4dp, addr);
+
+ /*
+ * FIXME: necessary?
+ * Allocate and populate the PUD if it doesn't already exist
+ * On non-LPAE systems using just 2-level page tables pud_none()
+ * will always be zero and this will be skipped.
+ */
+ if (!early && pud_none(*pudp)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p) {
+ panic("%s failed to allocate pud for address 0x%lx\n",
+ __func__, addr);
+ return;
+ }
+ pr_info("populating pud addr %lx\n", addr);
+ pud_populate(&init_mm, pudp, p);
+ }
+
+ do {
+ next = pud_addr_end(addr, end);
+ kasan_pmd_populate(pudp, addr, next, node, early);
+ } while (pudp++, addr = next, addr != end && pud_none(READ_ONCE(*pudp)));
+}
+
+/*
+ * The p4d (fourth level translation table) is unused on ARM32 but we iterate over it to
+ * please the Linux VMM.
+ */
+static void __init kasan_p4d_populate(pgd_t *pgdp, unsigned long addr,
+ unsigned long end, int node, bool early)
+{
+ unsigned long next;
+ p4d_t *p4dp = p4d_offset(pgdp, addr);
+
+ /* We do not check for p4d_none() as it is unused for sure */
+ if (p4d_none_or_clear_bad(p4dp)) {
+ panic("%s failed to populate p4d for address 0x%lx\n",
+ __func__, addr);
+ return;
+ }
+
+ do {
+ next = p4d_addr_end(addr, end);
+ kasan_pud_populate(p4dp, addr, next, node, early);
+ } while (p4dp++, addr = next, addr != end);
+}
+
+
+static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
+ int node, bool early)
+{
+ unsigned long next;
+ pgd_t *pgdp;
+
+ pgdp = pgd_offset_k(addr);
+
+ /* Allocate and populate the PGD if it doesn't already exist */
+ if (!early && pgd_none(*pgdp)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p) {
+ panic("%s failed to allocate pgd for address 0x%lx\n",
+ __func__, addr);
+ return;
+ }
+ pgd_populate(&init_mm, pgdp, p);
+ }
+
+ do {
+ next = pgd_addr_end(addr, end);
+ kasan_p4d_populate(pgdp, addr, next, node, early);
+ } while (pgdp++, addr = next, addr != end);
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+ struct proc_info_list *list;
+
+ /*
+ * locate processor in the list of supported processor
+ * types. The linker builds this table for us from the
+ * entries in arch/arm/mm/proc-*.S
+ */
+ list = lookup_processor_type(read_cpuid_id());
+ if (list) {
+#ifdef MULTI_CPU
+ processor = *list->proc;
+#endif
+ }
+
+ BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
+ /*
+ * We walk the page table and set all of the shadow memory to point
+ * to the scratch page.
+ */
+ kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
+ true);
+}
+
+static void __init clear_pgds(unsigned long start,
+ unsigned long end)
+{
+ for (; start && start < end; start += PMD_SIZE)
+ pmd_clear(pmd_off_k(start));
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end,
+ int node)
+{
+ pr_info("populating shadow for %lx, %lx\n", start, end);
+ kasan_pgd_populate(start, end, NUMA_NO_NODE, false);
+ return 0;
+}
+
+void __init kasan_init(void)
+{
+ struct memblock_region *reg;
+ int i;
+
+ /*
+ * We are going to perform proper setup of shadow memory.
+ *
+ * At first we should unmap early shadow (clear_pgds() call bellow).
+ * However, instrumented code couldn't execute without shadow memory.
+ *
+ * To keep the early shadow memory MMU tables around while setting up
+ * the proper shadow memory, we copy swapper_pg_dir (the initial page
+ * table) to tmp_pgd_table and use that to keep the early shadow memory
+ * mapped until the full shadow setup is finished. Then we swap back
+ * to the proper swapper_pg_dir.
+ */
+#ifdef CONFIG_ARM_LPAE
+ memcpy(tmp_pmd_table,
+ pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+ sizeof(tmp_pmd_table));
+ memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+ set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+ __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+ cpu_switch_mm(tmp_pgd_table, &init_mm);
+#else
+ memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+ cpu_switch_mm(tmp_pgd_table, &init_mm);
+#endif
+ clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+ kasan_mem_to_shadow((void *)-1UL) + 1);
+
+ for_each_memblock(memory, reg) {
+ void *start = __va(reg->base);
+ void *end = __va(reg->base + reg->size);
+
+ if (reg->base + reg->size > arm_lowmem_limit)
+ end = __va(arm_lowmem_limit);
+ if (start >= end)
+ break;
+
+ create_mapping((unsigned long)kasan_mem_to_shadow(start),
+ (unsigned long)kasan_mem_to_shadow(end),
+ NUMA_NO_NODE);
+ }
+
+ /*
+ * 1. The module global variables are in MODULES_VADDR ~ MODULES_END,
+ * so we need to map this area.
+ * 2. PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE's shadow and MODULES_VADDR
+ * ~ MODULES_END's shadow is in the same PMD_SIZE, so we can't
+ * use kasan_populate_zero_shadow.
+ */
+ create_mapping(
+ (unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+ (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE +
+ PMD_SIZE)),
+ NUMA_NO_NODE);
+
+ /*
+ * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
+ * we should make sure that it maps the zero page read-only.
+ */
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_early_shadow_pte[i],
+ pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+ __pgprot(pgprot_val(PAGE_KERNEL)
+ | L_PTE_RDONLY)));
+ memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+ cpu_switch_mm(swapper_pg_dir, &init_mm);
+ pr_info("Kernel address sanitizer initialized\n");
+ init_task.kasan_depth = 0;
+}
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index c5e1b27046a8..db5ef068e523 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -66,7 +66,20 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
new_pmd = pmd_alloc(mm, new_pud, 0);
if (!new_pmd)
goto no_pmd;
-#endif
+#ifdef CONFIG_KASAN
+ /*
+ * Copy PMD table for KASAN shadow mappings.
+ */
+ init_pgd = pgd_offset_k(TASK_SIZE);
+ init_pud = pud_offset(init_pgd, TASK_SIZE);
+ init_pmd = pmd_offset(init_pud, TASK_SIZE);
+ new_pmd = pmd_offset(new_pud, TASK_SIZE);
+ memcpy(new_pmd, init_pmd,
+ (pmd_index(MODULES_VADDR) - pmd_index(TASK_SIZE))
+ * sizeof(pmd_t));
+ clean_dcache_area(new_pmd, PTRS_PER_PMD * sizeof(pmd_t));
+#endif /* CONFIG_KASAN */
+#endif /* CONFIG_LPAE */
if (!vectors_high()) {
/*
--
2.25.4
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory
2020-06-15 9:02 ` [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
@ 2020-06-15 14:33 ` Mike Rapoport
2020-06-30 13:22 ` Linus Walleij
2020-06-29 14:07 ` Linus Walleij
1 sibling, 1 reply; 12+ messages in thread
From: Mike Rapoport @ 2020-06-15 14:33 UTC (permalink / raw)
To: Linus Walleij
Cc: Florian Fainelli, Arnd Bergmann, Abbott Liu, Russell King,
kasan-dev, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
Ard Biesheuvel, linux-arm-kernel
Hi,
On Mon, Jun 15, 2020 at 11:02:46AM +0200, Linus Walleij wrote:
> This patch initializes KASan shadow region's page table and memory.
> There are two stage for KASan initializing:
>
> 1. At early boot stage the whole shadow region is mapped to just
> one physical page (kasan_zero_page). It is finished by the function
> kasan_early_init which is called by __mmap_switched(arch/arm/kernel/
> head-common.S)
>
> 2. After the calling of paging_init, we use kasan_zero_page as zero
> shadow for some memory that KASan does not need to track, and we
> allocate a new shadow space for the other memory that KASan need to
> track. These issues are finished by the function kasan_init which is
> call by setup_arch.
>
> After the initial development by Andre Ryabinin several modifications
> have been made to this code:
>
> Abbott Liu <liuwenliang@huawei.com>
> - Add support ARM LPAE: If LPAE is enabled, KASan shadow region's
> mapping table need be copied in the pgd_alloc() function.
> - Change kasan_pte_populate,kasan_pmd_populate,kasan_pud_populate,
> kasan_pgd_populate from .meminit.text section to .init.text section.
> Reported by Florian Fainelli <f.fainelli@gmail.com>
>
> Linus Walleij <linus.walleij@linaro.org>:
> - Drop the custom mainpulation of TTBR0 and just use
> cpu_switch_mm() to switch the pgd table.
> - Adopt to handle 4th level page tabel folding.
> - Rewrite the entire page directory and page entry initialization
> sequence to be recursive based on ARM64:s kasan_init.c
>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: kasan-dev@googlegroups.com
> Co-Developed-by: Abbott Liu <liuwenliang@huawei.com>
> Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
> Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
> Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
> Reported-by: Florian Fainelli <f.fainelli@gmail.com>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
> ---
> ChangeLog v9->v10:
> - Rebase onto v5.8-rc1
> - add support for folded p4d page tables, use the primitives necessary
> for the 4th level folding, add (empty) walks of p4d level.
> - Use the <linux/pgtable.h> header file that has now appeared as part
> of the VM consolidation series.
> - Use a recursive method to walk pgd/p4d/pud/pmd/pte instead of the
> separate early/main calls and the flat call structure used in the
> old code. This was inspired by the ARM64 KASan init code.
> - Assume authorship of this code, I have now written the majority of
> it so the blame is on me and noone else.
> ChangeLog v8->v9:
> - Drop the custom CP15 manipulation and cache flushing for swapping
> TTBR0 and instead just use cpu_switch_mm().
> - Collect Ard's tags.
> ChangeLog v7->v8:
> - Rebased.
> ChangeLog v6->v7:
> - Use SPDX identifer for the license.
> - Move the TTBR0 accessor calls into this patch.
> ---
> arch/arm/include/asm/kasan.h | 32 +++
> arch/arm/include/asm/pgalloc.h | 9 +-
> arch/arm/include/asm/thread_info.h | 4 +
> arch/arm/kernel/head-common.S | 3 +
> arch/arm/kernel/setup.c | 2 +
> arch/arm/mm/Makefile | 3 +
> arch/arm/mm/kasan_init.c | 304 +++++++++++++++++++++++++++++
> arch/arm/mm/pgd.c | 15 +-
> 8 files changed, 369 insertions(+), 3 deletions(-)
> create mode 100644 arch/arm/include/asm/kasan.h
> create mode 100644 arch/arm/mm/kasan_init.c
>
> diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
> new file mode 100644
> index 000000000000..56b954db160e
> --- /dev/null
> +++ b/arch/arm/include/asm/kasan.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * arch/arm/include/asm/kasan.h
> + *
> + * Copyright (c) 2015 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> + *
> + */
> +
> +#ifndef __ASM_KASAN_H
> +#define __ASM_KASAN_H
> +
> +#ifdef CONFIG_KASAN
> +
> +#include <asm/kasan_def.h>
> +
> +#define KASAN_SHADOW_SCALE_SHIFT 3
> +
> +/*
> + * The compiler uses a shadow offset assuming that addresses start
> + * from 0. Kernel addresses don't start from 0, so shadow
> + * for kernel really starts from 'compiler's shadow offset' +
> + * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
> + */
> +
> +extern void kasan_init(void);
> +
> +#else
> +static inline void kasan_init(void) { }
> +#endif
> +
> +#endif
> diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
> index 069da393110c..d969f8058b26 100644
> --- a/arch/arm/include/asm/pgalloc.h
> +++ b/arch/arm/include/asm/pgalloc.h
> @@ -21,6 +21,7 @@
> #define _PAGE_KERNEL_TABLE (PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL))
>
> #ifdef CONFIG_ARM_LPAE
> +#define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))
>
> static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
> {
> @@ -39,14 +40,18 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
> }
>
> #else /* !CONFIG_ARM_LPAE */
> +#define PGD_SIZE (PAGE_SIZE << 2)
>
> /*
> * Since we have only two-level page tables, these are trivial
> */
> #define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
> #define pmd_free(mm, pmd) do { } while (0)
> -#define pud_populate(mm,pmd,pte) BUG()
> -
> +#ifndef CONFIG_KASAN
> +#define pud_populate(mm, pmd, pte) BUG()
> +#else
> +#define pud_populate(mm, pmd, pte) do { } while (0)
Hmm, is this really necessary? Regardless of CONFIG_KASAN pud_populate()
should never be called for non-LPAE case...
> +#endif
> #endif /* CONFIG_ARM_LPAE */
>
> extern pgd_t *pgd_alloc(struct mm_struct *mm);
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 3609a6980c34..cf47cf9c4742 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -13,7 +13,11 @@
> #include <asm/fpstate.h>
> #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +#define THREAD_SIZE_ORDER 2
> +#else
> #define THREAD_SIZE_ORDER 1
> +#endif
> #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
> #define THREAD_START_SP (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 6840c7c60a85..89c80154b9ef 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -111,6 +111,9 @@ __mmap_switched:
> str r8, [r2] @ Save atags pointer
> cmp r3, #0
> strne r10, [r3] @ Save control register values
> +#ifdef CONFIG_KASAN
> + bl kasan_early_init
> +#endif
> mov lr, #0
> b start_kernel
> ENDPROC(__mmap_switched)
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index d8e18cdd96d3..b0820847bb92 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -58,6 +58,7 @@
> #include <asm/unwind.h>
> #include <asm/memblock.h>
> #include <asm/virt.h>
> +#include <asm/kasan.h>
>
> #include "atags.h"
>
> @@ -1130,6 +1131,7 @@ void __init setup_arch(char **cmdline_p)
> early_ioremap_reset();
>
> paging_init(mdesc);
> + kasan_init();
> request_standard_resources(mdesc);
>
> if (mdesc->restart)
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> index 99699c32d8a5..4536159bc8fa 100644
> --- a/arch/arm/mm/Makefile
> +++ b/arch/arm/mm/Makefile
> @@ -113,3 +113,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
> obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
> obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
> obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
> +
> +KASAN_SANITIZE_kasan_init.o := n
> +obj-$(CONFIG_KASAN) += kasan_init.o
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> new file mode 100644
> index 000000000000..6438a13f8368
> --- /dev/null
> +++ b/arch/arm/mm/kasan_init.c
> @@ -0,0 +1,304 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * This file contains kasan initialization code for ARM.
> + *
> + * Copyright (c) 2018 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> + */
> +
> +#define pr_fmt(fmt) "kasan: " fmt
> +#include <linux/kasan.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/sched/task.h>
> +#include <linux/start_kernel.h>
> +#include <linux/pgtable.h>
> +#include <asm/cputype.h>
> +#include <asm/highmem.h>
> +#include <asm/mach/map.h>
> +#include <asm/memory.h>
> +#include <asm/page.h>
> +#include <asm/pgalloc.h>
> +#include <asm/procinfo.h>
> +#include <asm/proc-fns.h>
> +
> +#include "mm.h"
> +
> +static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
> +
> +pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
> +
> +static __init void *kasan_alloc_block(size_t size, int node)
> +{
> + return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> + MEMBLOCK_ALLOC_KASAN, node);
> +}
> +
> +static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> + unsigned long end, int node, bool early)
> +{
> + unsigned long next;
> + pte_t *ptep = pte_offset_kernel(pmdp, addr);
> +
> + do {
> + next = addr + PAGE_SIZE;
> +
> + if (pte_none(*ptep)) {
> + pte_t entry;
> + void *p;
> +
> + /*
> + * The early shadow memory is mapping all KASan operations to one and the same page
> + * in memory, "kasan_early_shadow_page" so that the instrumentation will work on
> + * a scratch area until we can set up the proper KASan shadow memory.
> + */
> + if (early) {
> + p = kasan_early_shadow_page;
> + entry = pfn_pte(virt_to_pfn(kasan_early_shadow_page),
> + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY
> + | L_PTE_XN));
> + } else {
> + p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p) {
> + panic("%s failed to alloc pte for address 0x%lx\n",
> + __func__, addr);
> + return;
> + }
> + memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
> + entry = pfn_pte(virt_to_pfn(p),
> + __pgprot(pgprot_val(PAGE_KERNEL)));
> + }
> +
> + set_pte_at(&init_mm, addr, ptep, entry);
> + }
> + } while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
> +}
> +
> +/*
> + * The pmd (page middle directory) used on LPAE?
> + */
> +static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
> + unsigned long end, int node, bool early)
> +{
> + unsigned long next;
> + pmd_t *pmdp = pmd_offset(pudp, addr);
> +
> + if (pmd_none(*pmdp)) {
> + void *p = early ? kasan_early_shadow_pte : kasan_alloc_block(PAGE_SIZE, node);
> +
> + if (!p) {
> + panic("%s failed to allocate pmd for address 0x%lx\n",
> + __func__, addr);
> + return;
> + }
> + pmd_populate_kernel(&init_mm, pmdp, p);
> + flush_pmd_entry(pmdp);
> + }
> +
> + do {
> + next = pmd_addr_end(addr, end);
> + kasan_pte_populate(pmdp, addr, next, node, early);
> + } while (pmdp++, addr = next, addr != end && pmd_none(READ_ONCE(*pmdp)));
> +}
> +
> +/*
> + * The pud (page upper directory) is only used on LPAE systems.
> + */
> +static void __init kasan_pud_populate(p4d_t *p4dp, unsigned long addr,
> + unsigned long end, int node, bool early)
> +{
> + unsigned long next;
> + pud_t *pudp = pud_offset(p4dp, addr);
> +
> + /*
> + * FIXME: necessary?
> + * Allocate and populate the PUD if it doesn't already exist
> + * On non-LPAE systems using just 2-level page tables pud_none()
> + * will always be zero and this will be skipped.
> + */
> + if (!early && pud_none(*pudp)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
And how early pud-level mappings whould be established in LPAE case?
Am I missing something here?
> +
> + if (!p) {
> + panic("%s failed to allocate pud for address 0x%lx\n",
> + __func__, addr);
> + return;
> + }
> + pr_info("populating pud addr %lx\n", addr);
> + pud_populate(&init_mm, pudp, p);
> + }
> +
> + do {
> + next = pud_addr_end(addr, end);
> + kasan_pmd_populate(pudp, addr, next, node, early);
> + } while (pudp++, addr = next, addr != end && pud_none(READ_ONCE(*pudp)));
> +}
> +
> +/*
> + * The p4d (fourth level translation table) is unused on ARM32 but we iterate over it to
> + * please the Linux VMM.
> + */
That's really nice of you :)
But presuming that arm32 will never get 5-level of page tables, I think
this function can be removed and replaced with p4d access in
kasan_pgd_populate()
> +static void __init kasan_p4d_populate(pgd_t *pgdp, unsigned long addr,
> + unsigned long end, int node, bool early)
> +{
> + unsigned long next;
> + p4d_t *p4dp = p4d_offset(pgdp, addr);
> +
> + /* We do not check for p4d_none() as it is unused for sure */
> + if (p4d_none_or_clear_bad(p4dp)) {
> + panic("%s failed to populate p4d for address 0x%lx\n",
> + __func__, addr);
> + return;
> + }
> +
> + do {
> + next = p4d_addr_end(addr, end);
> + kasan_pud_populate(p4dp, addr, next, node, early);
> + } while (p4dp++, addr = next, addr != end);
> +}
> +
> +
> +static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
> + int node, bool early)
> +{
> + unsigned long next;
> + pgd_t *pgdp;
> +
> + pgdp = pgd_offset_k(addr);
> +
> + /* Allocate and populate the PGD if it doesn't already exist */
> + if (!early && pgd_none(*pgdp)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> +
> + if (!p) {
> + panic("%s failed to allocate pgd for address 0x%lx\n",
> + __func__, addr);
> + return;
> + }
> + pgd_populate(&init_mm, pgdp, p);
> + }
> +
> + do {
> + next = pgd_addr_end(addr, end);
> + kasan_p4d_populate(pgdp, addr, next, node, early);
Here we can simply
p4dp = p4d_offset(pgdp, addr)
kasan_pud_populate(p4dp, addr, next, node, early);
> + } while (pgdp++, addr = next, addr != end);
> +}
> +
> +extern struct proc_info_list *lookup_processor_type(unsigned int);
> +
> +void __init kasan_early_init(void)
> +{
> + struct proc_info_list *list;
> +
> + /*
> + * locate processor in the list of supported processor
> + * types. The linker builds this table for us from the
> + * entries in arch/arm/mm/proc-*.S
> + */
> + list = lookup_processor_type(read_cpuid_id());
> + if (list) {
> +#ifdef MULTI_CPU
> + processor = *list->proc;
> +#endif
> + }
> +
> + BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
> + /*
> + * We walk the page table and set all of the shadow memory to point
> + * to the scratch page.
> + */
> + kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
> + true);
> +}
> +
> +static void __init clear_pgds(unsigned long start,
> + unsigned long end)
> +{
> + for (; start && start < end; start += PMD_SIZE)
> + pmd_clear(pmd_off_k(start));
> +}
> +
> +static int __init create_mapping(unsigned long start, unsigned long end,
> + int node)
> +{
> + pr_info("populating shadow for %lx, %lx\n", start, end);
> + kasan_pgd_populate(start, end, NUMA_NO_NODE, false);
> + return 0;
> +}
> +
> +void __init kasan_init(void)
> +{
> + struct memblock_region *reg;
> + int i;
> +
> + /*
> + * We are going to perform proper setup of shadow memory.
> + *
> + * At first we should unmap early shadow (clear_pgds() call bellow).
> + * However, instrumented code couldn't execute without shadow memory.
> + *
> + * To keep the early shadow memory MMU tables around while setting up
> + * the proper shadow memory, we copy swapper_pg_dir (the initial page
> + * table) to tmp_pgd_table and use that to keep the early shadow memory
> + * mapped until the full shadow setup is finished. Then we swap back
> + * to the proper swapper_pg_dir.
> + */
> +#ifdef CONFIG_ARM_LPAE
> + memcpy(tmp_pmd_table,
> + pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
> + sizeof(tmp_pmd_table));
> + memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
> + set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
> + __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
> + cpu_switch_mm(tmp_pgd_table, &init_mm);
> +#else
> + memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
> + cpu_switch_mm(tmp_pgd_table, &init_mm);
> +#endif
I think the #ifdefery can be slightly simplified:
memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table,
pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
sizeof(tmp_pmd_table));
set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
__pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
#endif
cpu_switch_mm(tmp_pgd_table, &init_mm);
And, why do we need a context switch here at all?
> + clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> +
> + kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
> + kasan_mem_to_shadow((void *)-1UL) + 1);
> +
> + for_each_memblock(memory, reg) {
> + void *start = __va(reg->base);
> + void *end = __va(reg->base + reg->size);
> +
> + if (reg->base + reg->size > arm_lowmem_limit)
> + end = __va(arm_lowmem_limit);
> + if (start >= end)
> + break;
> +
> + create_mapping((unsigned long)kasan_mem_to_shadow(start),
> + (unsigned long)kasan_mem_to_shadow(end),
> + NUMA_NO_NODE);
> + }
> +
> + /*
> + * 1. The module global variables are in MODULES_VADDR ~ MODULES_END,
> + * so we need to map this area.
> + * 2. PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE's shadow and MODULES_VADDR
> + * ~ MODULES_END's shadow is in the same PMD_SIZE, so we can't
> + * use kasan_populate_zero_shadow.
> + */
> + create_mapping(
> + (unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
> + (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE +
> + PMD_SIZE)),
> + NUMA_NO_NODE);
> +
> + /*
> + * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
> + * we should make sure that it maps the zero page read-only.
> + */
> + for (i = 0; i < PTRS_PER_PTE; i++)
> + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> + &kasan_early_shadow_pte[i],
> + pfn_pte(virt_to_pfn(kasan_early_shadow_page),
> + __pgprot(pgprot_val(PAGE_KERNEL)
> + | L_PTE_RDONLY)));
> + memset(kasan_early_shadow_page, 0, PAGE_SIZE);
> + cpu_switch_mm(swapper_pg_dir, &init_mm);
> + pr_info("Kernel address sanitizer initialized\n");
> + init_task.kasan_depth = 0;
> +}
> diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
> index c5e1b27046a8..db5ef068e523 100644
> --- a/arch/arm/mm/pgd.c
> +++ b/arch/arm/mm/pgd.c
> @@ -66,7 +66,20 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
> new_pmd = pmd_alloc(mm, new_pud, 0);
> if (!new_pmd)
> goto no_pmd;
> -#endif
> +#ifdef CONFIG_KASAN
> + /*
> + * Copy PMD table for KASAN shadow mappings.
> + */
> + init_pgd = pgd_offset_k(TASK_SIZE);
> + init_pud = pud_offset(init_pgd, TASK_SIZE);
> + init_pmd = pmd_offset(init_pud, TASK_SIZE);
> + new_pmd = pmd_offset(new_pud, TASK_SIZE);
> + memcpy(new_pmd, init_pmd,
> + (pmd_index(MODULES_VADDR) - pmd_index(TASK_SIZE))
> + * sizeof(pmd_t));
> + clean_dcache_area(new_pmd, PTRS_PER_PMD * sizeof(pmd_t));
> +#endif /* CONFIG_KASAN */
> +#endif /* CONFIG_LPAE */
>
> if (!vectors_high()) {
> /*
> --
> 2.25.4
>
--
Sincerely yours,
Mike.
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory
2020-06-15 14:33 ` Mike Rapoport
@ 2020-06-30 13:22 ` Linus Walleij
2020-06-30 14:45 ` Mike Rapoport
0 siblings, 1 reply; 12+ messages in thread
From: Linus Walleij @ 2020-06-30 13:22 UTC (permalink / raw)
To: Mike Rapoport
Cc: Florian Fainelli, Arnd Bergmann, Abbott Liu, Russell King,
kasan-dev, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
Ard Biesheuvel, Linux ARM
Hi Mike!
First a BIG THANKS for your help! With the aid of your review comments
and the further comments from Russell I have really progressed with this
patch set the last few days.
On Mon, Jun 15, 2020 at 4:33 PM Mike Rapoport <rppt@linux.ibm.com> wrote:
> > -#define pud_populate(mm,pmd,pte) BUG()
> > -
> > +#ifndef CONFIG_KASAN
> > +#define pud_populate(mm, pmd, pte) BUG()
> > +#else
> > +#define pud_populate(mm, pmd, pte) do { } while (0)
>
> Hmm, is this really necessary? Regardless of CONFIG_KASAN pud_populate()
> should never be called for non-LPAE case...
It is necessary because the generic KASan code in
mm/kasan/init.c unconditionally calls pud_populate() and act as
if pud's always exist and need to be populated.
Possibly this means that pud_populate() should just be turned
into do { } while (0) as well (like other functions called unconditionally
from the VMM) but I'll leave this in for now.
> cpu_switch_mm(tmp_pgd_table, &init_mm);
>
> And, why do we need a context switch here at all?
This is really just a way of reusing that function call to replace
the master page table pointer TTBR0 (Translation Table Base Register)
while setting up the shadow memory.
Yours,
Linus Walleij
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory
2020-06-30 13:22 ` Linus Walleij
@ 2020-06-30 14:45 ` Mike Rapoport
0 siblings, 0 replies; 12+ messages in thread
From: Mike Rapoport @ 2020-06-30 14:45 UTC (permalink / raw)
To: Linus Walleij
Cc: Florian Fainelli, Arnd Bergmann, Abbott Liu, Russell King,
kasan-dev, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
Ard Biesheuvel, Linux ARM
On Tue, Jun 30, 2020 at 03:22:19PM +0200, Linus Walleij wrote:
> Hi Mike!
>
> First a BIG THANKS for your help! With the aid of your review comments
> and the further comments from Russell I have really progressed with this
> patch set the last few days.
>
> On Mon, Jun 15, 2020 at 4:33 PM Mike Rapoport <rppt@linux.ibm.com> wrote:
>
> > > -#define pud_populate(mm,pmd,pte) BUG()
> > > -
> > > +#ifndef CONFIG_KASAN
> > > +#define pud_populate(mm, pmd, pte) BUG()
> > > +#else
> > > +#define pud_populate(mm, pmd, pte) do { } while (0)
> >
> > Hmm, is this really necessary? Regardless of CONFIG_KASAN pud_populate()
> > should never be called for non-LPAE case...
>
> It is necessary because the generic KASan code in
> mm/kasan/init.c unconditionally calls pud_populate() and act as
> if pud's always exist and need to be populated.
>
> Possibly this means that pud_populate() should just be turned
> into do { } while (0) as well (like other functions called unconditionally
> from the VMM) but I'll leave this in for now.
Yes, making pud_populate() a NOP will match the "generic" implementation
in asm-generic/pgtable-nopmd.h.
If this patchset will get to v12, maybe it would be worth doing that :)
> > cpu_switch_mm(tmp_pgd_table, &init_mm);
> >
> > And, why do we need a context switch here at all?
>
> This is really just a way of reusing that function call to replace
> the master page table pointer TTBR0 (Translation Table Base Register)
> while setting up the shadow memory.
Right, but is this really necessary to create the shadow page table?
If I remember correctly, the mm parameter is anyway not used by ARM page
table manpulators and pgd_offset_k() can be replaced by
pgd_offset_pgd(tmp_pgd_table, ...).
> Yours,
> Linus Walleij
--
Sincerely yours,
Mike.
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory
2020-06-15 9:02 ` [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
2020-06-15 14:33 ` Mike Rapoport
@ 2020-06-29 14:07 ` Linus Walleij
2020-06-29 14:37 ` Russell King - ARM Linux admin
1 sibling, 1 reply; 12+ messages in thread
From: Linus Walleij @ 2020-06-29 14:07 UTC (permalink / raw)
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
Andrey Ryabinin, Mike Rapoport, Will Deacon
Cc: Dmitry Vyukov, kasan-dev, Alexander Potapenko, Arnd Bergmann, Linux ARM
Asking for help here!
I have a problem with populating PTEs for the LPAE usecase using
Versatile Express Cortex A15 (TC1) in QEMU.
In this loop of the patch:
On Mon, Jun 15, 2020 at 11:05 AM Linus Walleij <linus.walleij@linaro.org> wrote:
> +static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> + unsigned long end, int node, bool early)
> +{
> + unsigned long next;
> + pte_t *ptep = pte_offset_kernel(pmdp, addr);
(...)
> + do {
> + next = pmd_addr_end(addr, end);
> + kasan_pte_populate(pmdp, addr, next, node, early);
> + } while (pmdp++, addr = next, addr != end && pmd_none(READ_ONCE(*pmdp)));
I first populate the PMD for 0x6ee00000 .. 0x6f000000
and this works fine, and the PTEs are all initialized.
pte_offset_kernel() returns something reasonable.
(0x815F5000).
Next the kernel processes the PMD for
0x6f000000 .. 0x6f200000 and now I run into trouble,
because pte_offset_kernel() suddenly returns a NULL
pointer 0x00000000.
Naturally dereferencing the pointer when checking
if (pte_none(*ptep)) hangs the machine since this
is in early init.
Does anyone have hints on why this happens, and why it
only happens on LPAE? non-LPAE on the Versatile Express
QEMU A15 works fine.
I'm debugging, but any hints are very welcome.
Yours,
Linus Walleij
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory
2020-06-29 14:07 ` Linus Walleij
@ 2020-06-29 14:37 ` Russell King - ARM Linux admin
2020-06-30 9:38 ` Linus Walleij
0 siblings, 1 reply; 12+ messages in thread
From: Russell King - ARM Linux admin @ 2020-06-29 14:37 UTC (permalink / raw)
To: Linus Walleij
Cc: Florian Fainelli, Arnd Bergmann, Abbott Liu, kasan-dev,
Mike Rapoport, Alexander Potapenko, Dmitry Vyukov,
Andrey Ryabinin, Will Deacon, Ard Biesheuvel, Linux ARM
On Mon, Jun 29, 2020 at 04:07:06PM +0200, Linus Walleij wrote:
> Asking for help here!
>
> I have a problem with populating PTEs for the LPAE usecase using
> Versatile Express Cortex A15 (TC1) in QEMU.
>
> In this loop of the patch:
>
> On Mon, Jun 15, 2020 at 11:05 AM Linus Walleij <linus.walleij@linaro.org> wrote:
>
> > +static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> > + unsigned long end, int node, bool early)
> > +{
> > + unsigned long next;
> > + pte_t *ptep = pte_offset_kernel(pmdp, addr);
>
> (...)
>
> > + do {
> > + next = pmd_addr_end(addr, end);
> > + kasan_pte_populate(pmdp, addr, next, node, early);
> > + } while (pmdp++, addr = next, addr != end && pmd_none(READ_ONCE(*pmdp)));
>
> I first populate the PMD for 0x6ee00000 .. 0x6f000000
> and this works fine, and the PTEs are all initialized.
> pte_offset_kernel() returns something reasonable.
> (0x815F5000).
>
> Next the kernel processes the PMD for
> 0x6f000000 .. 0x6f200000 and now I run into trouble,
> because pte_offset_kernel() suddenly returns a NULL
> pointer 0x00000000.
That means there is no PTE table allocated which covers 0x6f000000.
"pmdp" points at the previous level's table entry that points at the
pte, and all pte_offset*() does is load that entry, convert it to a
pte_t pointer type, and point it to the appropriate entry for the
address. So, pte_offset*() is an accessor that takes a pointer to
the preceding level's entry for "addr", and returns a pointer to
the pte_t entry in the last level of page table for "addr".
It is the responsibility of the caller to pte_offset*() to ensure
either by explicit tests, or prior knowledge, that pmd_val(*pmdp)
is a valid PTE table entry.
Since generic kernel code can't use "prior knowledge", it has to do
the full checks (see, mm/vmalloc.c vunmap_pte_range() and higher
levels etc using pmd_none_or_clear_bad() for example - whether you
can use _clear_bad() depends whether you intend to clear "bad" entries.
Beware that the 1MB sections on non-LPAE will appear as "bad" entries
since we can't "walk" them to PTE level, and they're certainly not
"none" entries.)
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory
2020-06-29 14:37 ` Russell King - ARM Linux admin
@ 2020-06-30 9:38 ` Linus Walleij
0 siblings, 0 replies; 12+ messages in thread
From: Linus Walleij @ 2020-06-30 9:38 UTC (permalink / raw)
To: Russell King - ARM Linux admin
Cc: Florian Fainelli, Arnd Bergmann, Abbott Liu, kasan-dev,
Mike Rapoport, Alexander Potapenko, Dmitry Vyukov,
Andrey Ryabinin, Will Deacon, Ard Biesheuvel, Linux ARM
On Mon, Jun 29, 2020 at 4:37 PM Russell King - ARM Linux admin
<linux@armlinux.org.uk> wrote:
> On Mon, Jun 29, 2020 at 04:07:06PM +0200, Linus Walleij wrote:
> > Asking for help here!
> >
> > I have a problem with populating PTEs for the LPAE usecase using
> > Versatile Express Cortex A15 (TC1) in QEMU.
> >
> > In this loop of the patch:
> >
> > On Mon, Jun 15, 2020 at 11:05 AM Linus Walleij <linus.walleij@linaro.org> wrote:
> >
> > > +static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> > > + unsigned long end, int node, bool early)
> > > +{
> > > + unsigned long next;
> > > + pte_t *ptep = pte_offset_kernel(pmdp, addr);
> >
> > (...)
> >
> > > + do {
> > > + next = pmd_addr_end(addr, end);
> > > + kasan_pte_populate(pmdp, addr, next, node, early);
> > > + } while (pmdp++, addr = next, addr != end && pmd_none(READ_ONCE(*pmdp)));
> >
> > I first populate the PMD for 0x6ee00000 .. 0x6f000000
> > and this works fine, and the PTEs are all initialized.
> > pte_offset_kernel() returns something reasonable.
> > (0x815F5000).
> >
> > Next the kernel processes the PMD for
> > 0x6f000000 .. 0x6f200000 and now I run into trouble,
> > because pte_offset_kernel() suddenly returns a NULL
> > pointer 0x00000000.
>
> That means there is no PTE table allocated which covers 0x6f000000.
>
> "pmdp" points at the previous level's table entry that points at the
> pte, and all pte_offset*() does is load that entry, convert it to a
> pte_t pointer type, and point it to the appropriate entry for the
> address. So, pte_offset*() is an accessor that takes a pointer to
> the preceding level's entry for "addr", and returns a pointer to
> the pte_t entry in the last level of page table for "addr".
>
> It is the responsibility of the caller to pte_offset*() to ensure
> either by explicit tests, or prior knowledge, that pmd_val(*pmdp)
> is a valid PTE table entry.
>
> Since generic kernel code can't use "prior knowledge", it has to do
> the full checks (see, mm/vmalloc.c vunmap_pte_range() and higher
> levels etc using pmd_none_or_clear_bad() for example - whether you
> can use _clear_bad() depends whether you intend to clear "bad" entries.
> Beware that the 1MB sections on non-LPAE will appear as "bad" entries
> since we can't "walk" them to PTE level, and they're certainly not
> "none" entries.)
Spot on! I figured it out quickly with this hint.
Essentially I have some loops like this:
pmd_t *pmdp = pmd_offset(pudp, addr);
if (pmd_none(*pmdp)) {
void *p = early ? kasan_early_shadow_pte :
kasan_alloc_block(PAGE_SIZE, node);
....
}
do {
pmd_populate_kernel(&init_mm, pmdp, p);
flush_pmd_entry(pmdp);
next = pmd_addr_end(addr, end);
kasan_pte_populate(pmdp, addr, next, node, early);
} while (pmdp++, addr = next, addr != end && pmd_none(READ_ONCE(*pmdp)));
I just had to move the i (pmd_node(*pmdp)) inside the loop and it all
starts working
fine.
What confuses me is that arm64 does it this way (checking pmdp outside the loop)
for all levels of the cache and it works (I suppose?) for them, but I
suspect it is
formally wrong.
I'll rewrite with the check inside the loop at all levels and retest
and resend, then
I hope this starts to work and look reasonable, finally.
Yours,
Linus Walleij
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 5/5 v10] ARM: Enable KASan for ARM
2020-06-15 9:02 [PATCH 0/5 v10] KASan for Arm Linus Walleij
` (3 preceding siblings ...)
2020-06-15 9:02 ` [PATCH 4/5 v10] ARM: Initialize the mapping of KASan shadow memory Linus Walleij
@ 2020-06-15 9:02 ` Linus Walleij
4 siblings, 0 replies; 12+ messages in thread
From: Linus Walleij @ 2020-06-15 9:02 UTC (permalink / raw)
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
Andrey Ryabinin, Mike Rapoport
Cc: Arnd Bergmann, Linus Walleij, kasan-dev, Alexander Potapenko,
linux-arm-kernel, Andrey Ryabinin, Dmitry Vyukov
From: Andrey Ryabinin <ryabinin@virtuozzo.com>
This patch enables the kernel address sanitizer for ARM. XIP_KERNEL
has not been tested and is therefore not allowed for now.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v9->v10:
- Rebase on v5.8-rc1
ChangeLog v8->v9:
- Fix the arch feature matrix for Arm to include KASan.
- Collect Ard's tags.
ChangeLog v7->v8:
- Moved the hacks to __ADDRESS_SANITIZE__ to the patch
replacing the memory access functions.
- Moved the definition of KASAN_OFFSET out of this patch
and to the patch that defines the virtual memory used by
KASan.
---
Documentation/dev-tools/kasan.rst | 4 ++--
Documentation/features/debug/KASAN/arch-support.txt | 2 +-
arch/arm/Kconfig | 1 +
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index c652d740735d..0962365e1405 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -21,8 +21,8 @@ global variables yet.
Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
-Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and
-riscv architectures, and tag-based KASAN is supported only for arm64.
+Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa, s390
+and riscv architectures, and tag-based KASAN is supported only for arm64.
Usage
-----
diff --git a/Documentation/features/debug/KASAN/arch-support.txt b/Documentation/features/debug/KASAN/arch-support.txt
index 6ff38548923e..a73c55fb76e6 100644
--- a/Documentation/features/debug/KASAN/arch-support.txt
+++ b/Documentation/features/debug/KASAN/arch-support.txt
@@ -8,7 +8,7 @@
-----------------------
| alpha: | TODO |
| arc: | TODO |
- | arm: | TODO |
+ | arm: | ok |
| arm64: | ok |
| c6x: | TODO |
| csky: | TODO |
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index d291cdb84c9d..6a6059f8bab9 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -65,6 +65,7 @@ config ARM
select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
+ select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
select HAVE_ARCH_THREAD_STRUCT_WHITELIST
--
2.25.4
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 12+ messages in thread