linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v4 0/3] KASAN for powerpc/32
@ 2019-01-22 14:28 Christophe Leroy
  2019-01-22 14:28 ` [PATCH v4 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Christophe Leroy @ 2019-01-22 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

This series adds KASAN support to powerpc/32.

Tested on nohash/32 (8xx) and book3s/32 (mpc832x, i.e. 603).

Changes in v4:
- Comments from Andrey (DISABLE_BRANCH_PROFILING, Activation of reports)
- Proper initialisation of shadow area in kasan_init()
- Panic in case Hash table is required.
- Added comments in patch one to explain why *t = *s becomes memcpy(t, s, ...)
- Call of kasan_init_tags()

Changes in v3:
- Removed the printk() in kasan_early_init() to avoid build failure (see https://github.com/linuxppc/issues/issues/218)
- Added necessary changes in asm/book3s/32/pgtable.h to get it work on powerpc 603 family
- Added a few KASAN_SANITIZE_xxx.o := n to successfully boot on powerpc 603 family

Changes in v2:
- Rebased.
- Using __set_pte_at() to build the early table.
- Worked around and got rid of the patch adding asm/page.h in asm/pgtable-types.h
    ==> might be fixed independently but not needed for this series.

For book3s/32 (except the 603), it cannot work as is: due to the HASHPTE flag,
we can't use the same page table for several PGD entries, and Hash table
management is not active early enough for the time being.

Christophe Leroy (3):
  powerpc/mm: prepare kernel for KAsan on PPC32
  powerpc/32: Move early_init() in a separate file
  powerpc/32: Add KASAN support

 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/include/asm/book3s/32/pgtable.h |  2 +
 arch/powerpc/include/asm/kasan.h             | 24 ++++++++
 arch/powerpc/include/asm/nohash/32/pgtable.h |  2 +
 arch/powerpc/include/asm/ppc_asm.h           |  5 ++
 arch/powerpc/include/asm/setup.h             |  5 ++
 arch/powerpc/include/asm/string.h            | 14 +++++
 arch/powerpc/kernel/Makefile                 | 11 +++-
 arch/powerpc/kernel/cputable.c               | 13 ++++-
 arch/powerpc/kernel/early_32.c               | 36 ++++++++++++
 arch/powerpc/kernel/prom_init_check.sh       | 10 +++-
 arch/powerpc/kernel/setup-common.c           |  2 +
 arch/powerpc/kernel/setup_32.c               | 31 +---------
 arch/powerpc/lib/Makefile                    |  8 +++
 arch/powerpc/lib/copy_32.S                   |  9 ++-
 arch/powerpc/mm/Makefile                     |  3 +
 arch/powerpc/mm/dump_linuxpagetables.c       |  8 +++
 arch/powerpc/mm/kasan_init.c                 | 86 ++++++++++++++++++++++++++++
 arch/powerpc/mm/mem.c                        |  4 ++
 19 files changed, 236 insertions(+), 38 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kasan.h
 create mode 100644 arch/powerpc/kernel/early_32.c
 create mode 100644 arch/powerpc/mm/kasan_init.c

-- 
2.13.3


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v4 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-22 14:28 [PATCH v4 0/3] KASAN for powerpc/32 Christophe Leroy
@ 2019-01-22 14:28 ` Christophe Leroy
  2019-01-22 14:28 ` [PATCH v4 2/3] powerpc/32: Move early_init() in a separate file Christophe Leroy
  2019-01-22 14:28 ` [PATCH v4 3/3] powerpc/32: Add KASAN support Christophe Leroy
  2 siblings, 0 replies; 14+ messages in thread
From: Christophe Leroy @ 2019-01-22 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

In kernel/cputable.c, explicitly use memcpy() in order
to allow GCC to replace it with __memcpy() when KASAN is
selected.
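
To illustrate (a sketch for this archive, not part of the diff below): a
struct assignment may be lowered by GCC to an implicit memcpy() call that
the preprocessor never sees, whereas an explicit call is rewritten by the
#define added to asm/string.h later in this series:

	*t = *s;                  /* may become an implicit call to the
	                             instrumented memcpy() */
	memcpy(t, s, sizeof(*t)); /* explicit: redirected to __memcpy() by
	                             the #define when KASAN is on and the
	                             file is built uninstrumented */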

Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
enabled"), memset() can be used before activation of the cache,
so no need to use memset_io() for zeroing the BSS.

Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/cputable.c | 13 ++++++++++---
 arch/powerpc/kernel/setup_32.c |  6 ++----
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index 1eab54bc6ee9..cd12f362b61f 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -2147,7 +2147,11 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
 	struct cpu_spec *t = &the_cpu_spec;
 
 	t = PTRRELOC(t);
-	*t = *s;
+	/*
+	 * use memcpy() instead of *t = *s so that GCC replaces it
+	 * by __memcpy() when KASAN is active
+	 */
+	memcpy(t, s, sizeof(*t));
 
 	*PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
 }
@@ -2161,8 +2165,11 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
 	t = PTRRELOC(t);
 	old = *t;
 
-	/* Copy everything, then do fixups */
-	*t = *s;
+	/*
+	 * Copy everything, then do fixups. Use memcpy() instead of *t = *s
+	 * so that GCC replaces it by __memcpy() when KASAN is active
+	 */
+	memcpy(t, s, sizeof(*t));
 
 	/*
 	 * If we are overriding a previous value derived from the real
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 947f904688b0..5e761eb16a6d 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -73,10 +73,8 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
 {
 	unsigned long offset = reloc_offset();
 
-	/* First zero the BSS -- use memset_io, some platforms don't have
-	 * caches on yet */
-	memset_io((void __iomem *)PTRRELOC(&__bss_start), 0,
-			__bss_stop - __bss_start);
+	/* First zero the BSS */
+	memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
 
 	/*
 	 * Identify the CPU type and fix up code sections
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v4 2/3] powerpc/32: Move early_init() in a separate file
  2019-01-22 14:28 [PATCH v4 0/3] KASAN for powerpc/32 Christophe Leroy
  2019-01-22 14:28 ` [PATCH v4 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
@ 2019-01-22 14:28 ` Christophe Leroy
  2019-01-22 14:28 ` [PATCH v4 3/3] powerpc/32: Add KASAN support Christophe Leroy
  2 siblings, 0 replies; 14+ messages in thread
From: Christophe Leroy @ 2019-01-22 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

In preparation for KASAN, move early_init() into a separate
file in order to allow deactivation of KASAN for that function.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/Makefile   |  2 +-
 arch/powerpc/kernel/early_32.c | 35 +++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/setup_32.c | 26 --------------------------
 3 files changed, 36 insertions(+), 27 deletions(-)
 create mode 100644 arch/powerpc/kernel/early_32.c

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index cb7f0bb9ee71..879b36602748 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -93,7 +93,7 @@ extra-y				+= vmlinux.lds
 
 obj-$(CONFIG_RELOCATABLE)	+= reloc_$(BITS).o
 
-obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o
+obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o early_32.o
 obj-$(CONFIG_PPC64)		+= dma-iommu.o iommu.o
 obj-$(CONFIG_KGDB)		+= kgdb.o
 obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
new file mode 100644
index 000000000000..b3e40d6d651c
--- /dev/null
+++ b/arch/powerpc/kernel/early_32.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Early init before relocation
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <asm/setup.h>
+#include <asm/sections.h>
+
+/*
+ * We're called here very early in the boot.
+ *
+ * Note that the kernel may be running at an address which is different
+ * from the address that it was linked at, so we must use RELOC/PTRRELOC
+ * to access static data (including strings).  -- paulus
+ */
+notrace unsigned long __init early_init(unsigned long dt_ptr)
+{
+	unsigned long offset = reloc_offset();
+
+	/* First zero the BSS */
+	memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
+
+	/*
+	 * Identify the CPU type and fix up code sections
+	 * that depend on which cpu we have.
+	 */
+	identify_cpu(offset, mfspr(SPRN_PVR));
+
+	apply_feature_fixups();
+
+	return KERNELBASE + offset;
+}
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 5e761eb16a6d..b46a9a33225b 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -63,32 +63,6 @@ EXPORT_SYMBOL(DMA_MODE_READ);
 EXPORT_SYMBOL(DMA_MODE_WRITE);
 
 /*
- * We're called here very early in the boot.
- *
- * Note that the kernel may be running at an address which is different
- * from the address that it was linked at, so we must use RELOC/PTRRELOC
- * to access static data (including strings).  -- paulus
- */
-notrace unsigned long __init early_init(unsigned long dt_ptr)
-{
-	unsigned long offset = reloc_offset();
-
-	/* First zero the BSS */
-	memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
-
-	/*
-	 * Identify the CPU type and fix up code sections
-	 * that depend on which cpu we have.
-	 */
-	identify_cpu(offset, mfspr(SPRN_PVR));
-
-	apply_feature_fixups();
-
-	return KERNELBASE + offset;
-}
-
-
-/*
  * This is run before start_kernel(), the kernel has been relocated
  * and we are running with enough of the MMU enabled to have our
  * proper kernel virtual addresses
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-01-22 14:28 [PATCH v4 0/3] KASAN for powerpc/32 Christophe Leroy
  2019-01-22 14:28 ` [PATCH v4 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
  2019-01-22 14:28 ` [PATCH v4 2/3] powerpc/32: Move early_init() in a separate file Christophe Leroy
@ 2019-01-22 14:28 ` Christophe Leroy
  2019-02-08 16:18   ` Daniel Axtens
  2 siblings, 1 reply; 14+ messages in thread
From: Christophe Leroy @ 2019-01-22 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

This patch adds KASAN support for PPC32.

Note that on book3s it will only work on the 603, because the others use
a hash table and therefore cannot share a single PTE table covering the
entire early KASAN shadow area.
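
For scale, a rough computation (assuming the common PPC32 PAGE_OFFSET of
0xc0000000; other configurations change the numbers), using the
definitions added in asm/kasan.h below:

	KASAN_SHADOW_SIZE = (~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT
	                  = 0x40000000 >> 3
	                  = 0x08000000		/* 128 MB of shadow */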

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/include/asm/book3s/32/pgtable.h |  2 +
 arch/powerpc/include/asm/kasan.h             | 24 ++++++++
 arch/powerpc/include/asm/nohash/32/pgtable.h |  2 +
 arch/powerpc/include/asm/ppc_asm.h           |  5 ++
 arch/powerpc/include/asm/setup.h             |  5 ++
 arch/powerpc/include/asm/string.h            | 14 +++++
 arch/powerpc/kernel/Makefile                 |  9 ++-
 arch/powerpc/kernel/early_32.c               |  1 +
 arch/powerpc/kernel/prom_init_check.sh       | 10 +++-
 arch/powerpc/kernel/setup-common.c           |  2 +
 arch/powerpc/kernel/setup_32.c               |  3 +
 arch/powerpc/lib/Makefile                    |  8 +++
 arch/powerpc/lib/copy_32.S                   |  9 ++-
 arch/powerpc/mm/Makefile                     |  3 +
 arch/powerpc/mm/dump_linuxpagetables.c       |  8 +++
 arch/powerpc/mm/kasan_init.c                 | 86 ++++++++++++++++++++++++++++
 arch/powerpc/mm/mem.c                        |  4 ++
 18 files changed, 190 insertions(+), 6 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kasan.h
 create mode 100644 arch/powerpc/mm/kasan_init.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2890d36eb531..11dcaa80d3ff 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -175,6 +175,7 @@ config PPC
 	select GENERIC_TIME_VSYSCALL
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_JUMP_LABEL
+	select HAVE_ARCH_KASAN			if PPC32
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 49d76adb9bc5..4543016f80ca 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -141,6 +141,8 @@ static inline bool pte_user(pte_t pte)
  */
 #ifdef CONFIG_HIGHMEM
 #define KVIRT_TOP	PKMAP_BASE
+#elif defined(CONFIG_KASAN)
+#define KVIRT_TOP	KASAN_SHADOW_START
 #else
 #define KVIRT_TOP	(0xfe000000UL)	/* for now, could be FIXMAP_BASE ? */
 #endif
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
new file mode 100644
index 000000000000..5d0088429b62
--- /dev/null
+++ b/arch/powerpc/include/asm/kasan.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/page.h>
+#include <asm/pgtable-types.h>
+#include <asm/fixmap.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT	3
+#define KASAN_SHADOW_SIZE	((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
+
+#define KASAN_SHADOW_START	(ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
+					    PGDIR_SIZE))
+#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
+#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_START - \
+				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
+
+void kasan_early_init(void);
+void kasan_init(void);
+
+#endif
+#endif
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index bed433358260..b3b52f02be1a 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -71,6 +71,8 @@ extern int icache_44x_need_flush;
  */
 #ifdef CONFIG_HIGHMEM
 #define KVIRT_TOP	PKMAP_BASE
+#elif defined(CONFIG_KASAN)
+#define KVIRT_TOP	KASAN_SHADOW_START
 #else
 #define KVIRT_TOP	(0xfe000000UL)	/* for now, could be FIXMAP_BASE ? */
 #endif
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index e0637730a8e7..8d5291c721fa 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -251,6 +251,11 @@ GLUE(.,name):
 
 #define _GLOBAL_TOC(name) _GLOBAL(name)
 
+#define KASAN_OVERRIDE(x, y) \
+	.weak x;	     \
+	.set x, y
+
+
 #endif
 
 /*
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 65676e2325b8..da7768aa996a 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -74,6 +74,11 @@ static inline void setup_spectre_v2(void) {};
 #endif
 void do_btb_flush_fixups(void);
 
+#ifndef CONFIG_KASAN
+static inline void kasan_early_init(void) { }
+static inline void kasan_init(void) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_SETUP_H */
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index 1647de15a31e..64d44d4836b4 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -27,6 +27,20 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
 extern void * memchr(const void *,int,__kernel_size_t);
 extern void * memcpy_flushcache(void *,const void *,__kernel_size_t);
 
+void *__memset(void *s, int c, __kernel_size_t count);
+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we should
+ * use the non-instrumented versions of the mem* functions.
+ */
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #ifdef CONFIG_PPC64
 #define __HAVE_ARCH_MEMSET32
 #define __HAVE_ARCH_MEMSET64
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 879b36602748..fc4c42262694 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -16,8 +16,9 @@ CFLAGS_prom_init.o      += -fPIC
 CFLAGS_btext.o		+= -fPIC
 endif
 
-CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
-CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING
+CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) -DDISABLE_BRANCH_PROFILING
+CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) -DDISABLE_BRANCH_PROFILING
 CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 
@@ -31,6 +32,10 @@ CFLAGS_REMOVE_btext.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_prom.o = $(CC_FLAGS_FTRACE)
 endif
 
+KASAN_SANITIZE_early_32.o := n
+KASAN_SANITIZE_cputable.o := n
+KASAN_SANITIZE_prom_init.o := n
+
 obj-y				:= cputable.o ptrace.o syscalls.o \
 				   irq.o align.o signal_32.o pmc.o vdso.o \
 				   process.o systbl.o idle.o \
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
index b3e40d6d651c..3482118ffe76 100644
--- a/arch/powerpc/kernel/early_32.c
+++ b/arch/powerpc/kernel/early_32.c
@@ -8,6 +8,7 @@
 #include <linux/kernel.h>
 #include <asm/setup.h>
 #include <asm/sections.h>
+#include <asm/asm-prototypes.h>
 
 /*
  * We're called here very early in the boot.
diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
index 667df97d2595..da6bb16e0876 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -16,8 +16,16 @@
 # If you really need to reference something from prom_init.o add
 # it to the list below:
 
+grep CONFIG_KASAN=y .config >/dev/null
+if [ $? -eq 0 ]
+then
+	MEMFCT="__memcpy __memset"
+else
+	MEMFCT="memcpy memset"
+fi
+
 WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
-_end enter_prom memcpy memset reloc_offset __secondary_hold
+_end enter_prom $MEMFCT reloc_offset __secondary_hold
 __secondary_hold_acknowledge __secondary_hold_spinloop __start
 strcmp strcpy strlcpy strlen strncmp strstr kstrtobool logo_linux_clut224
 reloc_got2 kernstart_addr memstart_addr linux_banner _stext
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index ca00fbb97cf8..16ff1ea66805 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -978,6 +978,8 @@ void __init setup_arch(char **cmdline_p)
 
 	paging_init();
 
+	kasan_init();
+
 	/* Initialize the MMU context management stuff. */
 	mmu_context_init();
 
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index b46a9a33225b..fe6990dec6fc 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -17,6 +17,7 @@
 #include <linux/console.h>
 #include <linux/memblock.h>
 #include <linux/export.h>
+#include <linux/kasan.h>
 
 #include <asm/io.h>
 #include <asm/prom.h>
@@ -75,6 +76,8 @@ notrace void __init machine_init(u64 dt_ptr)
 	unsigned int *addr = (unsigned int *)patch_site_addr(&patch__memset_nocache);
 	unsigned long insn;
 
+	kasan_early_init();
+
 	/* Configure static keys first, now that we're relocated. */
 	setup_feature_keys();
 
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 3bf9fc6fd36c..ce8d4a9f810a 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -8,6 +8,14 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 CFLAGS_REMOVE_code-patching.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_feature-fixups.o = $(CC_FLAGS_FTRACE)
 
+KASAN_SANITIZE_code-patching.o := n
+KASAN_SANITIZE_feature-fixups.o := n
+
+ifdef CONFIG_KASAN
+CFLAGS_code-patching.o += -DDISABLE_BRANCH_PROFILING
+CFLAGS_feature-fixups.o += -DDISABLE_BRANCH_PROFILING
+endif
+
 obj-y += string.o alloc.o code-patching.o feature-fixups.o
 
 obj-$(CONFIG_PPC32)	+= div64.o copy_32.o crtsavres.o strlen_32.o
diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index ba66846fe973..4d8a1c73b4cf 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -91,7 +91,8 @@ EXPORT_SYMBOL(memset16)
  * We therefore skip the optimised bloc that uses dcbz. This jump is
  * replaced by a nop once cache is active. This is done in machine_init()
  */
-_GLOBAL(memset)
+_GLOBAL(__memset)
+KASAN_OVERRIDE(memset, __memset)
 	cmplwi	0,r5,4
 	blt	7f
 
@@ -163,12 +164,14 @@ EXPORT_SYMBOL(memset)
  * We therefore jump to generic_memcpy which doesn't use dcbz. This jump is
  * replaced by a nop once cache is active. This is done in machine_init()
  */
-_GLOBAL(memmove)
+_GLOBAL(__memmove)
+KASAN_OVERRIDE(memmove, __memmove)
 	cmplw	0,r3,r4
 	bgt	backwards_memcpy
 	/* fall through */
 
-_GLOBAL(memcpy)
+_GLOBAL(__memcpy)
+KASAN_OVERRIDE(memcpy, __memcpy)
 1:	b	generic_memcpy
 	patch_site	1b, patch__memcpy_nocache
 
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index f965fc33a8b7..d6b76f25f6de 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -7,6 +7,8 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
 
+KASAN_SANITIZE_kasan_init.o := n
+
 obj-y				:= fault.o mem.o pgtable.o mmap.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
 				   init-common.o mmu_context.o drmem.o
@@ -55,3 +57,4 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= dump_linuxpagetables-book3s64.o
 endif
 obj-$(CONFIG_PPC_HTDUMP)	+= dump_hashpagetable.o
 obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
+obj-$(CONFIG_KASAN)		+= kasan_init.o
diff --git a/arch/powerpc/mm/dump_linuxpagetables.c b/arch/powerpc/mm/dump_linuxpagetables.c
index 6aa41669ac1a..c862b48118f1 100644
--- a/arch/powerpc/mm/dump_linuxpagetables.c
+++ b/arch/powerpc/mm/dump_linuxpagetables.c
@@ -94,6 +94,10 @@ static struct addr_marker address_markers[] = {
 	{ 0,	"Consistent mem start" },
 	{ 0,	"Consistent mem end" },
 #endif
+#ifdef CONFIG_KASAN
+	{ 0,	"kasan shadow mem start" },
+	{ 0,	"kasan shadow mem end" },
+#endif
 #ifdef CONFIG_HIGHMEM
 	{ 0,	"Highmem PTEs start" },
 	{ 0,	"Highmem PTEs end" },
@@ -310,6 +314,10 @@ static void populate_markers(void)
 	address_markers[i++].start_address = IOREMAP_TOP +
 					     CONFIG_CONSISTENT_SIZE;
 #endif
+#ifdef CONFIG_KASAN
+	address_markers[i++].start_address = KASAN_SHADOW_START;
+	address_markers[i++].start_address = KASAN_SHADOW_END;
+#endif
 #ifdef CONFIG_HIGHMEM
 	address_markers[i++].start_address = PKMAP_BASE;
 	address_markers[i++].start_address = PKMAP_ADDR(LAST_PKMAP);
diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan_init.c
new file mode 100644
index 000000000000..69e17428c2e9
--- /dev/null
+++ b/arch/powerpc/mm/kasan_init.c
@@ -0,0 +1,86 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/kasan.h>
+#include <linux/printk.h>
+#include <linux/memblock.h>
+#include <linux/sched/task.h>
+#include <asm/pgalloc.h>
+
+void __init kasan_early_init(void)
+{
+	unsigned long addr = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+	unsigned long next;
+	pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
+	int i;
+	phys_addr_t pa = __pa(kasan_early_shadow_page);
+
+	BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
+
+	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
+		panic("KASAN not supported with Hash MMU\n");
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
+			     kasan_early_shadow_pte + i,
+			     pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
+
+	do {
+		next = pgd_addr_end(addr, end);
+		pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void __init kasan_init_region(struct memblock_region *reg)
+{
+	void *start = __va(reg->base);
+	void *end = __va(reg->base + reg->size);
+	unsigned long k_start, k_end, k_cur, k_next;
+	pmd_t *pmd;
+
+	if (start >= end)
+		return;
+
+	k_start = (unsigned long)kasan_mem_to_shadow(start);
+	k_end = (unsigned long)kasan_mem_to_shadow(end);
+	pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
+
+	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
+		k_next = pgd_addr_end(k_cur, k_end);
+		if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
+			pte_t *new = pte_alloc_one_kernel(&init_mm);
+
+			if (!new)
+				panic("kasan: pte_alloc_one_kernel() failed");
+			memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
+			pmd_populate_kernel(&init_mm, pmd, new);
+		}
+	};
+
+	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
+		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+		pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
+
+		if (!va)
+			panic("kasan: memblock_alloc() failed");
+		pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
+		pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
+	}
+	flush_tlb_kernel_range(k_start, k_end);
+}
+
+void __init kasan_init(void)
+{
+	struct memblock_region *reg;
+
+	for_each_memblock(memory, reg)
+		kasan_init_region(reg);
+
+	kasan_init_tags();
+
+	/* At this point kasan is fully initialized. Enable error messages */
+	init_task.kasan_depth = 0;
+	pr_info("KASAN init done\n");
+}
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 33cc6f676fa6..ae7db88b72d6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -369,6 +369,10 @@ void __init mem_init(void)
 	pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
 		PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
 #endif /* CONFIG_HIGHMEM */
+#ifdef CONFIG_KASAN
+	pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
+		KASAN_SHADOW_START, KASAN_SHADOW_END);
+#endif
 #ifdef CONFIG_NOT_COHERENT_CACHE
 	pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
 		IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-01-22 14:28 ` [PATCH v4 3/3] powerpc/32: Add KASAN support Christophe Leroy
@ 2019-02-08 16:18   ` Daniel Axtens
  2019-02-08 17:17     ` Christophe Leroy
  0 siblings, 1 reply; 14+ messages in thread
From: Daniel Axtens @ 2019-02-08 16:18 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Nicholas Piggin, Aneesh Kumar K.V,
	Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

Hi Christophe,

I've been attempting to port this to 64-bit Book3e nohash (e6500),
although I think I've ended up with an approach more similar to Aneesh's
much earlier (2015) series for book3s.

Part of this is just due to the changes between 32 and 64 bits - we need
to hack around the discontiguous mappings - but one thing that I'm
particularly puzzled by is what the kasan_early_init is supposed to do.

> +void __init kasan_early_init(void)
> +{
> +	unsigned long addr = KASAN_SHADOW_START;
> +	unsigned long end = KASAN_SHADOW_END;
> +	unsigned long next;
> +	pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> +	int i;
> +	phys_addr_t pa = __pa(kasan_early_shadow_page);
> +
> +	BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
> +
> +	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
> +		panic("KASAN not supported with Hash MMU\n");
> +
> +	for (i = 0; i < PTRS_PER_PTE; i++)
> +		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
> +			     kasan_early_shadow_pte + i,
> +			     pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
> +
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
> +	} while (pmd++, addr = next, addr != end);
> +}

As far as I can tell it's mapping the early shadow page, read-only, over
the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
shadow PTE array from the generic code.

I haven't been able to find an answer in the docs to why this is done, so I
was wondering if you or anyone else could explain the early part of
kasan init a bit better.

At the moment, I don't do any early init, and like Aneesh's series for
book3s, I end up needing a special flag to disable kasan until after
kasan_init. Also, as with Balbir's series for Radix, some tests didn't
fire, although my missing tests are a superset of his. I suspect the
early init has something to do with these...?

(I'm happy to collate answers into a patch to the docs, btw!)

In the long term I hope to revive Aneesh's and Balbir's series for hash
and radix as well.

Regards,
Daniel

> +
> +static void __init kasan_init_region(struct memblock_region *reg)
> +{
> +	void *start = __va(reg->base);
> +	void *end = __va(reg->base + reg->size);
> +	unsigned long k_start, k_end, k_cur, k_next;
> +	pmd_t *pmd;
> +
> +	if (start >= end)
> +		return;
> +
> +	k_start = (unsigned long)kasan_mem_to_shadow(start);
> +	k_end = (unsigned long)kasan_mem_to_shadow(end);
> +	pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
> +
> +	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
> +		k_next = pgd_addr_end(k_cur, k_end);
> +		if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
> +			pte_t *new = pte_alloc_one_kernel(&init_mm);
> +
> +			if (!new)
> +				panic("kasan: pte_alloc_one_kernel() failed");
> +			memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
> +			pmd_populate_kernel(&init_mm, pmd, new);
> +		}
> +	};
> +
> +	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> +		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +		pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
> +
> +		if (!va)
> +			panic("kasan: memblock_alloc() failed");
> +		pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
> +		pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
> +	}
> +	flush_tlb_kernel_range(k_start, k_end);
> +}
> +
> +void __init kasan_init(void)
> +{
> +	struct memblock_region *reg;
> +
> +	for_each_memblock(memory, reg)
> +		kasan_init_region(reg);
> +
> +	kasan_init_tags();
> +
> +	/* At this point kasan is fully initialized. Enable error messages */
> +	init_task.kasan_depth = 0;
> +	pr_info("KASAN init done\n");
> +}
> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> index 33cc6f676fa6..ae7db88b72d6 100644
> --- a/arch/powerpc/mm/mem.c
> +++ b/arch/powerpc/mm/mem.c
> @@ -369,6 +369,10 @@ void __init mem_init(void)
>  	pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
>  		PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>  #endif /* CONFIG_HIGHMEM */
> +#ifdef CONFIG_KASAN
> +	pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
> +		KASAN_SHADOW_START, KASAN_SHADOW_END);
> +#endif
>  #ifdef CONFIG_NOT_COHERENT_CACHE
>  	pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
>  		IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
> -- 
> 2.13.3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-08 16:18   ` Daniel Axtens
@ 2019-02-08 17:17     ` Christophe Leroy
  2019-02-08 17:40       ` Andrey Konovalov
  0 siblings, 1 reply; 14+ messages in thread
From: Christophe Leroy @ 2019-02-08 17:17 UTC (permalink / raw)
  To: Daniel Axtens, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Nicholas Piggin, Aneesh Kumar K.V,
	Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

Hi Daniel,

On 08/02/2019 at 17:18, Daniel Axtens wrote:
> Hi Christophe,
> 
> I've been attempting to port this to 64-bit Book3e nohash (e6500),
> although I think I've ended up with an approach more similar to Aneesh's
> much earlier (2015) series for book3s.
> 
> Part of this is just due to the changes between 32 and 64 bits - we need
> to hack around the discontiguous mappings - but one thing that I'm
> particularly puzzled by is what the kasan_early_init is supposed to do.

It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
reg)' loop.

> 
>> +void __init kasan_early_init(void)
>> +{
>> +	unsigned long addr = KASAN_SHADOW_START;
>> +	unsigned long end = KASAN_SHADOW_END;
>> +	unsigned long next;
>> +	pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>> +	int i;
>> +	phys_addr_t pa = __pa(kasan_early_shadow_page);
>> +
>> +	BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>> +
>> +	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>> +		panic("KASAN not supported with Hash MMU\n");
>> +
>> +	for (i = 0; i < PTRS_PER_PTE; i++)
>> +		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>> +			     kasan_early_shadow_pte + i,
>> +			     pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>> +
>> +	do {
>> +		next = pgd_addr_end(addr, end);
>> +		pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>> +	} while (pmd++, addr = next, addr != end);
>> +}
> 
> As far as I can tell it's mapping the early shadow page, read-only, over
> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
> shadow PTE array from the generic code.
> 
> I haven't been able to find an answer in the docs to why this is done, so I
> was wondering if you or anyone else could explain the early part of
> kasan init a bit better.

See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an 
explanation of the shadow.

When shadow is 0, it means the memory area is entirely accessible.

It is necessary to set up a shadow area as soon as possible because all
data accesses check the shadow area, from the beginning (except for a few
files where sanitizing has been disabled in Makefiles).

Until the real shadow area is set, all accesses are granted thanks to the
zero shadow area being full of zeros.
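
For reference, every instrumented access goes through the generic
mem-to-shadow translation; with KASAN_SHADOW_SCALE_SHIFT = 3 one shadow
byte covers 8 bytes of memory (a close paraphrase of the helper in
include/linux/kasan.h):

	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;	/* shadow byte 0: all 8 bytes OK */
	}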

I mainly used the ARM arch as an example when I implemented KASAN for ppc32.

> 
> At the moment, I don't do any early init, and like Aneesh's series for
> book3s, I end up needing a special flag to disable kasan until after
> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
> fire, although my missing tests are a superset of his. I suspect the
> early init has something to do with these...?

I think you should really focus on establishing a zero shadow area as 
early as possible instead of trying to hack the core parts of KASAN.

> 
> (I'm happy to collate answers into a patch to the docs, btw!)

We can also have the discussion going via 
https://github.com/linuxppc/issues/issues/106

> 
> In the long term I hope to revive Aneesh's and Balbir's series for hash
> and radix as well.

Great.

Christophe

> 
> Regards,
> Daniel
> 
>> +
>> +static void __init kasan_init_region(struct memblock_region *reg)
>> +{
>> +	void *start = __va(reg->base);
>> +	void *end = __va(reg->base + reg->size);
>> +	unsigned long k_start, k_end, k_cur, k_next;
>> +	pmd_t *pmd;
>> +
>> +	if (start >= end)
>> +		return;
>> +
>> +	k_start = (unsigned long)kasan_mem_to_shadow(start);
>> +	k_end = (unsigned long)kasan_mem_to_shadow(end);
>> +	pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
>> +
>> +	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
>> +		k_next = pgd_addr_end(k_cur, k_end);
>> +		if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
>> +			pte_t *new = pte_alloc_one_kernel(&init_mm);
>> +
>> +			if (!new)
>> +				panic("kasan: pte_alloc_one_kernel() failed");
>> +			memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>> +			pmd_populate_kernel(&init_mm, pmd, new);
>> +		}
>> +	};
>> +
>> +	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>> +		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>> +		pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>> +
>> +		if (!va)
>> +			panic("kasan: memblock_alloc() failed");
>> +		pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
>> +		pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
>> +	}
>> +	flush_tlb_kernel_range(k_start, k_end);
>> +}
>> +
>> +void __init kasan_init(void)
>> +{
>> +	struct memblock_region *reg;
>> +
>> +	for_each_memblock(memory, reg)
>> +		kasan_init_region(reg);
>> +
>> +	kasan_init_tags();
>> +
>> +	/* At this point kasan is fully initialized. Enable error messages */
>> +	init_task.kasan_depth = 0;
>> +	pr_info("KASAN init done\n");
>> +}
>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>> index 33cc6f676fa6..ae7db88b72d6 100644
>> --- a/arch/powerpc/mm/mem.c
>> +++ b/arch/powerpc/mm/mem.c
>> @@ -369,6 +369,10 @@ void __init mem_init(void)
>>   	pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
>>   		PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>>   #endif /* CONFIG_HIGHMEM */
>> +#ifdef CONFIG_KASAN
>> +	pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
>> +		KASAN_SHADOW_START, KASAN_SHADOW_END);
>> +#endif
>>   #ifdef CONFIG_NOT_COHERENT_CACHE
>>   	pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
>>   		IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
>> -- 
>> 2.13.3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-08 17:17     ` Christophe Leroy
@ 2019-02-08 17:40       ` Andrey Konovalov
  2019-02-09 11:55         ` christophe leroy
  0 siblings, 1 reply; 14+ messages in thread
From: Andrey Konovalov @ 2019-02-08 17:40 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Paul Mackerras, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, kasan-dev, PowerPC, Dmitry Vyukov,
	Daniel Axtens

On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>
> Hi Daniel,
>
> On 08/02/2019 at 17:18, Daniel Axtens wrote:
> > Hi Christophe,
> >
> > I've been attempting to port this to 64-bit Book3e nohash (e6500),
> > although I think I've ended up with an approach more similar to Aneesh's
> > much earlier (2015) series for book3s.
> >
> > Part of this is just due to the changes between 32 and 64 bits - we need
> > to hack around the discontiguous mappings - but one thing that I'm
> > particularly puzzled by is what the kasan_early_init is supposed to do.
>
> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
> reg)' loop.
>
> >
> >> +void __init kasan_early_init(void)
> >> +{
> >> +    unsigned long addr = KASAN_SHADOW_START;
> >> +    unsigned long end = KASAN_SHADOW_END;
> >> +    unsigned long next;
> >> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> >> +    int i;
> >> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
> >> +
> >> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
> >> +
> >> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
> >> +            panic("KASAN not supported with Hash MMU\n");
> >> +
> >> +    for (i = 0; i < PTRS_PER_PTE; i++)
> >> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
> >> +                         kasan_early_shadow_pte + i,
> >> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
> >> +
> >> +    do {
> >> +            next = pgd_addr_end(addr, end);
> >> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
> >> +    } while (pmd++, addr = next, addr != end);
> >> +}
> >
> > As far as I can tell it's mapping the early shadow page, read-only, over
> > the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
> > shadow PTE array from the generic code.
> >
> > I haven't been able to find an answer in the docs to why this is done, so I
> > was wondering if you or anyone else could explain the early part of
> > kasan init a bit better.
>
> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
> explanation of the shadow.
>
> When shadow is 0, it means the memory area is entirely accessible.
>
> It is necessary to set up a shadow area as soon as possible because all
> data accesses check the shadow area, from the beginning (except for a few
> files where sanitizing has been disabled in Makefiles).
>
> Until the real shadow area is set, all accesses are granted thanks to the
> zero shadow area being full of zeros.

Not entirely correct. kasan_early_init() indeed maps the whole shadow
memory range to the same kasan_early_shadow_page. However, as the kernel
loads and memory gets allocated, this shadow page gets rewritten with
non-zero values by different KASAN allocator hooks. Since these values
come from completely different parts of the kernel, but all land on
the same page, kasan_early_shadow_page's content can be considered
garbage. When KASAN checks memory accesses for validity it detects
these garbage shadow values, but doesn't print any reports, as the
reporting routine bails out on the current->kasan_depth check (which
has the value of 1 initially). Only after kasan_init() completes, when
the proper shadow memory is mapped, current->kasan_depth gets set to 0
and we start reporting bad accesses.
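
A minimal sketch of that gate as the generic report path implements it
(simplified; init_task starts with .kasan_depth = 1, which is what the
init_task.kasan_depth = 0 line in kasan_init() above clears):

	/* simplified from mm/kasan/report.c */
	static bool report_enabled(void)
	{
		return !current->kasan_depth;	/* non-zero until kasan_init() */
	}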

>
> I mainly used the ARM arch as an example when I implemented KASAN for ppc32.
>
> >
> > At the moment, I don't do any early init, and like Aneesh's series for
> > book3s, I end up needing a special flag to disable kasan until after
> > kasan_init. Also, as with Balbir's series for Radix, some tests didn't
> > fire, although my missing tests are a superset of his. I suspect the
> > early init has something to do with these...?
>
> I think you should really focus on establishing a zero shadow area as
> early as possible instead of trying to hack the core parts of KASAN.
>
> >
> > (I'm happy to collate answers into a patch to the docs, btw!)
>
> We can also have the discussion going via
> https://github.com/linuxppc/issues/issues/106
>
> >
> > In the long term I hope to revive Aneesh's and Balbir's series for hash
> > and radix as well.
>
> Great.
>
> Christophe
>
> >
> > Regards,
> > Daniel
> >
> >> +
> >> +static void __init kasan_init_region(struct memblock_region *reg)
> >> +{
> >> +    void *start = __va(reg->base);
> >> +    void *end = __va(reg->base + reg->size);
> >> +    unsigned long k_start, k_end, k_cur, k_next;
> >> +    pmd_t *pmd;
> >> +
> >> +    if (start >= end)
> >> +            return;
> >> +
> >> +    k_start = (unsigned long)kasan_mem_to_shadow(start);
> >> +    k_end = (unsigned long)kasan_mem_to_shadow(end);
> >> +    pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
> >> +
> >> +    for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
> >> +            k_next = pgd_addr_end(k_cur, k_end);
> >> +            if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
> >> +                    pte_t *new = pte_alloc_one_kernel(&init_mm);
> >> +
> >> +                    if (!new)
> >> +                            panic("kasan: pte_alloc_one_kernel() failed");
> >> +                    memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
> >> +                    pmd_populate_kernel(&init_mm, pmd, new);
> >> +            }
> >> +    };
> >> +
> >> +    for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> >> +            void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> >> +            pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
> >> +
> >> +            if (!va)
> >> +                    panic("kasan: memblock_alloc() failed");
> >> +            pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
> >> +            pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
> >> +    }
> >> +    flush_tlb_kernel_range(k_start, k_end);
> >> +}
> >> +
> >> +void __init kasan_init(void)
> >> +{
> >> +    struct memblock_region *reg;
> >> +
> >> +    for_each_memblock(memory, reg)
> >> +            kasan_init_region(reg);
> >> +
> >> +    kasan_init_tags();
> >> +
> >> +    /* At this point kasan is fully initialized. Enable error messages */
> >> +    init_task.kasan_depth = 0;
> >> +    pr_info("KASAN init done\n");
> >> +}
> >> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> >> index 33cc6f676fa6..ae7db88b72d6 100644
> >> --- a/arch/powerpc/mm/mem.c
> >> +++ b/arch/powerpc/mm/mem.c
> >> @@ -369,6 +369,10 @@ void __init mem_init(void)
> >>      pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
> >>              PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
> >>   #endif /* CONFIG_HIGHMEM */
> >> +#ifdef CONFIG_KASAN
> >> +    pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
> >> +            KASAN_SHADOW_START, KASAN_SHADOW_END);
> >> +#endif
> >>   #ifdef CONFIG_NOT_COHERENT_CACHE
> >>      pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
> >>              IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
> >> --
> >> 2.13.3
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-08 17:40       ` Andrey Konovalov
@ 2019-02-09 11:55         ` christophe leroy
  2019-02-11 12:25           ` Andrey Konovalov
  0 siblings, 1 reply; 14+ messages in thread
From: christophe leroy @ 2019-02-09 11:55 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Paul Mackerras, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, kasan-dev, PowerPC, Dmitry Vyukov,
	Daniel Axtens

Hi Andrey,

On 08/02/2019 at 18:40, Andrey Konovalov wrote:
> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>>
>> Hi Daniel,
>>
>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>> Hi Christophe,
>>>
>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>> although I think I've ended up with an approach more similar to Aneesh's
>>> much earlier (2015) series for book3s.
>>>
>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>> to hack around the discontiguous mappings - but one thing that I'm
>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>
>> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
>> reg)' loop.
>>
>>>
>>>> +void __init kasan_early_init(void)
>>>> +{
>>>> +    unsigned long addr = KASAN_SHADOW_START;
>>>> +    unsigned long end = KASAN_SHADOW_END;
>>>> +    unsigned long next;
>>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>> +    int i;
>>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>> +
>>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>> +
>>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>> +            panic("KASAN not supported with Hash MMU\n");
>>>> +
>>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
>>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>> +                         kasan_early_shadow_pte + i,
>>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>> +
>>>> +    do {
>>>> +            next = pgd_addr_end(addr, end);
>>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>> +    } while (pmd++, addr = next, addr != end);
>>>> +}
>>>
>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>> shadow PTE array from the generic code.
>>>
>>> I haven't been able to find an answer in the docs to why this is done, so I
>>> was wondering if you or anyone else could explain the early part of
>>> kasan init a bit better.
>>
>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>> explanation of the shadow.
>>
>> When shadow is 0, it means the memory area is entirely accessible.
>>
>> It is necessary to set up a shadow area as soon as possible because all
>> data accesses check the shadow area, from the beginning (except for a few
>> files where sanitizing has been disabled in Makefiles).
>>
>> Until the real shadow area is set, all accesses are granted thanks to the
>> zero shadow area being full of zeros.
> 
> Not entirely correct. kasan_early_init() indeed maps the whole shadow
> memory range to the same kasan_early_shadow_page. However, as the kernel
> loads and memory gets allocated, this shadow page gets rewritten with
> non-zero values by different KASAN allocator hooks. Since these values
> come from completely different parts of the kernel, but all land on
> the same page, kasan_early_shadow_page's content can be considered
> garbage. When KASAN checks memory accesses for validity it detects
> these garbage shadow values, but doesn't print any reports, as the
> reporting routine bails out on the current->kasan_depth check (which
> has the value of 1 initially). Only after kasan_init() completes, when
> the proper shadow memory is mapped, current->kasan_depth gets set to 0
> and we start reporting bad accesses.

That's surprising, because in the early phase I map the shadow area 
read-only, so I do not expect it to get modified unless RO protection is 
failing for some reason.

Next week I'll add a test in early_init() to check the content of the 
early shadow area.

Christophe

> 
>>
>> I mainly used the ARM arch as an example when I implemented KASAN for ppc32.
>>
>>>
>>> At the moment, I don't do any early init, and like Aneesh's series for
>>> book3s, I end up needing a special flag to disable kasan until after
>>> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
>>> fire, although my missing tests are a superset of his. I suspect the
>>> early init has something to do with these...?
>>
>> I think you should really focus on establishing a zero shadow area as
>> early as possible instead of trying to hack the core parts of KASAN.
>>
>>>
>>> (I'm happy to collate answers into a patch to the docs, btw!)
>>
>> We can also have the discussion going via
>> https://github.com/linuxppc/issues/issues/106
>>
>>>
>>> In the long term I hope to revive Aneesh's and Balbir's series for hash
>>> and radix as well.
>>
>> Great.
>>
>> Christophe
>>
>>>
>>> Regards,
>>> Daniel
>>>
>>>> +
>>>> +static void __init kasan_init_region(struct memblock_region *reg)
>>>> +{
>>>> +    void *start = __va(reg->base);
>>>> +    void *end = __va(reg->base + reg->size);
>>>> +    unsigned long k_start, k_end, k_cur, k_next;
>>>> +    pmd_t *pmd;
>>>> +
>>>> +    if (start >= end)
>>>> +            return;
>>>> +
>>>> +    k_start = (unsigned long)kasan_mem_to_shadow(start);
>>>> +    k_end = (unsigned long)kasan_mem_to_shadow(end);
>>>> +    pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
>>>> +
>>>> +    for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
>>>> +            k_next = pgd_addr_end(k_cur, k_end);
>>>> +            if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
>>>> +                    pte_t *new = pte_alloc_one_kernel(&init_mm);
>>>> +
>>>> +                    if (!new)
>>>> +                            panic("kasan: pte_alloc_one_kernel() failed");
>>>> +                    memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>>>> +                    pmd_populate_kernel(&init_mm, pmd, new);
>>>> +            }
>>>> +    };
>>>> +
>>>> +    for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>>>> +            void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>>>> +            pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>>>> +
>>>> +            if (!va)
>>>> +                    panic("kasan: memblock_alloc() failed");
>>>> +            pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
>>>> +            pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
>>>> +    }
>>>> +    flush_tlb_kernel_range(k_start, k_end);
>>>> +}
>>>> +
>>>> +void __init kasan_init(void)
>>>> +{
>>>> +    struct memblock_region *reg;
>>>> +
>>>> +    for_each_memblock(memory, reg)
>>>> +            kasan_init_region(reg);
>>>> +
>>>> +    kasan_init_tags();
>>>> +
>>>> +    /* At this point kasan is fully initialized. Enable error messages */
>>>> +    init_task.kasan_depth = 0;
>>>> +    pr_info("KASAN init done\n");
>>>> +}
>>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>>> index 33cc6f676fa6..ae7db88b72d6 100644
>>>> --- a/arch/powerpc/mm/mem.c
>>>> +++ b/arch/powerpc/mm/mem.c
>>>> @@ -369,6 +369,10 @@ void __init mem_init(void)
>>>>       pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
>>>>               PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>>>>    #endif /* CONFIG_HIGHMEM */
>>>> +#ifdef CONFIG_KASAN
>>>> +    pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
>>>> +            KASAN_SHADOW_START, KASAN_SHADOW_END);
>>>> +#endif
>>>>    #ifdef CONFIG_NOT_COHERENT_CACHE
>>>>       pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
>>>>               IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
>>>> --
>>>> 2.13.3
>>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-09 11:55         ` christophe leroy
@ 2019-02-11 12:25           ` Andrey Konovalov
  2019-02-11 16:28             ` Andrey Ryabinin
  0 siblings, 1 reply; 14+ messages in thread
From: Andrey Konovalov @ 2019-02-11 12:25 UTC (permalink / raw)
  To: christophe leroy
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Paul Mackerras, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, kasan-dev, PowerPC, Dmitry Vyukov,
	Daniel Axtens

On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
<christophe.leroy@c-s.fr> wrote:
>
> Hi Andrey,
>
> On 08/02/2019 at 18:40, Andrey Konovalov wrote:
> > On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
> >>
> >> Hi Daniel,
> >>
> >> On 08/02/2019 at 17:18, Daniel Axtens wrote:
> >>> Hi Christophe,
> >>>
> >>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
> >>> although I think I've ended up with an approach more similar to Aneesh's
> >>> much earlier (2015) series for book3s.
> >>>
> >>> Part of this is just due to the changes between 32 and 64 bits - we need
> >>> to hack around the discontiguous mappings - but one thing that I'm
> >>> particularly puzzled by is what the kasan_early_init is supposed to do.
> >>
> >> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
> >> reg)' loop.
> >>
> >>>
> >>>> +void __init kasan_early_init(void)
> >>>> +{
> >>>> +    unsigned long addr = KASAN_SHADOW_START;
> >>>> +    unsigned long end = KASAN_SHADOW_END;
> >>>> +    unsigned long next;
> >>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> >>>> +    int i;
> >>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
> >>>> +
> >>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
> >>>> +
> >>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
> >>>> +            panic("KASAN not supported with Hash MMU\n");
> >>>> +
> >>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
> >>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
> >>>> +                         kasan_early_shadow_pte + i,
> >>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
> >>>> +
> >>>> +    do {
> >>>> +            next = pgd_addr_end(addr, end);
> >>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
> >>>> +    } while (pmd++, addr = next, addr != end);
> >>>> +}
> >>>
> >>> As far as I can tell it's mapping the early shadow page, read-only, over
> >>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
> >>> shadow PTE array from the generic code.
> >>>
> >>> I haven't been able to find an answer in the docs to why this is done, so I
> >>> was wondering if you or anyone else could explain the early part of
> >>> kasan init a bit better.
> >>
> >> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
> >> explanation of the shadow.
> >>
> >> When shadow is 0, it means the memory area is entirely accessible.
> >>
> >> It is necessary to set up a shadow area as soon as possible because all
> >> data accesses check the shadow area, from the beginning (except for a few
> >> files where sanitizing has been disabled in Makefiles).
> >>
> >> Until the real shadow area is set, all accesses are granted thanks to the
> >> zero shadow area being full of zeros.
> >
> > Not entirely correct. kasan_early_init() indeed maps the whole shadow
> > memory range to the same kasan_early_shadow_page. However as kernel
> > loads and memory gets allocated this shadow page gets rewritten with
> > non-zero values by different KASAN allocator hooks. Since these values
> > come from completely different parts of the kernel, but all land on
> > the same page, kasan_early_shadow_page's content can be considered
> > garbage. When KASAN checks memory accesses for validity it detects
> > these garbage shadow values, but doesn't print any reports, as the
> > reporting routine bails out on the current->kasan_depth check (which
> > has the value of 1 initially). Only after kasan_init() completes, when
> > the proper shadow memory is mapped, current->kasan_depth gets set to 0
> > and we start reporting bad accesses.
>
> That's surprising, because in the early phase I map the shadow area
> read-only, so I do not expect it to get modified unless RO protection is
> failing for some reason.

Actually it might be that the allocator hooks don't modify shadow at
this point, as the allocator is not yet initialized. However stack
should be getting poisoned and unpoisoned from the very start. But the
generic statement that early shadow gets dirtied should be correct.
Might it be that you don't use stack instrumentation?
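
To illustrate the kasan_depth gating described above, here is a
simplified sketch of how the generic report path bails out during early
boot (loosely based on the mm/kasan/report.c of this era, not the
verbatim source):

        /* init/init_task.c starts with .kasan_depth = 1, so this
         * returns false until kasan_init() resets it to 0 */
        static bool report_enabled(void)
        {
                return !current->kasan_depth;
        }

        void kasan_report(unsigned long addr, size_t size,
                          bool is_write, unsigned long ip)
        {
                if (!report_enabled())
                        return; /* early boot: shadow is garbage, stay silent */
                /* ... print the actual report ... */
        }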

>
> Next week I'll add a test in early_init() to check the content of the
> early shadow area.
>
> Christophe
>
> >
> >>
> >> I mainly used ARM arch as an example when I implemented KASAN for ppc32.
> >>
> >>>
> >>> At the moment, I don't do any early init, and like Aneesh's series for
> >>> book3s, I end up needing a special flag to disable kasan until after
> >>> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
> >>> fire, although my missing tests are a superset of his. I suspect the
> >>> early init has something to do with these...?
> >>
> >> I think you should really focus on establishing a zero shadow area as
> >> early as possible instead of trying to hack the core parts of KASAN.
> >>
> >>>
> >>> (I'm happy to collate answers into a patch to the docs, btw!)
> >>
> >> We can also have the discussion going via
> >> https://github.com/linuxppc/issues/issues/106
> >>
> >>>
> >>> In the long term I hope to revive Aneesh's and Balbir's series for hash
> >>> and radix as well.
> >>
> >> Great.
> >>
> >> Christophe
> >>
> >>>
> >>> Regards,
> >>> Daniel
> >>>
> >>>> +
> >>>> +static void __init kasan_init_region(struct memblock_region *reg)
> >>>> +{
> >>>> +    void *start = __va(reg->base);
> >>>> +    void *end = __va(reg->base + reg->size);
> >>>> +    unsigned long k_start, k_end, k_cur, k_next;
> >>>> +    pmd_t *pmd;
> >>>> +
> >>>> +    if (start >= end)
> >>>> +            return;
> >>>> +
> >>>> +    k_start = (unsigned long)kasan_mem_to_shadow(start);
> >>>> +    k_end = (unsigned long)kasan_mem_to_shadow(end);
> >>>> +    pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
> >>>> +
> >>>> +    for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
> >>>> +            k_next = pgd_addr_end(k_cur, k_end);
> >>>> +            if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
> >>>> +                    pte_t *new = pte_alloc_one_kernel(&init_mm);
> >>>> +
> >>>> +                    if (!new)
> >>>> +                            panic("kasan: pte_alloc_one_kernel() failed");
> >>>> +                    memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
> >>>> +                    pmd_populate_kernel(&init_mm, pmd, new);
> >>>> +            }
> >>>> +    };
> >>>> +
> >>>> +    for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> >>>> +            void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> >>>> +            pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
> >>>> +
> >>>> +            if (!va)
> >>>> +                    panic("kasan: memblock_alloc() failed");
> >>>> +            pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
> >>>> +            pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
> >>>> +    }
> >>>> +    flush_tlb_kernel_range(k_start, k_end);
> >>>> +}
> >>>> +
> >>>> +void __init kasan_init(void)
> >>>> +{
> >>>> +    struct memblock_region *reg;
> >>>> +
> >>>> +    for_each_memblock(memory, reg)
> >>>> +            kasan_init_region(reg);
> >>>> +
> >>>> +    kasan_init_tags();
> >>>> +
> >>>> +    /* At this point kasan is fully initialized. Enable error messages */
> >>>> +    init_task.kasan_depth = 0;
> >>>> +    pr_info("KASAN init done\n");
> >>>> +}
> >>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> >>>> index 33cc6f676fa6..ae7db88b72d6 100644
> >>>> --- a/arch/powerpc/mm/mem.c
> >>>> +++ b/arch/powerpc/mm/mem.c
> >>>> @@ -369,6 +369,10 @@ void __init mem_init(void)
> >>>>       pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
> >>>>               PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
> >>>>    #endif /* CONFIG_HIGHMEM */
> >>>> +#ifdef CONFIG_KASAN
> >>>> +    pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
> >>>> +            KASAN_SHADOW_START, KASAN_SHADOW_END);
> >>>> +#endif
> >>>>    #ifdef CONFIG_NOT_COHERENT_CACHE
> >>>>       pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
> >>>>               IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
> >>>> --
> >>>> 2.13.3
> >>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-11 12:25           ` Andrey Konovalov
@ 2019-02-11 16:28             ` Andrey Ryabinin
  2019-02-12  1:08               ` Daniel Axtens
  0 siblings, 1 reply; 14+ messages in thread
From: Andrey Ryabinin @ 2019-02-11 16:28 UTC (permalink / raw)
  To: Andrey Konovalov, christophe leroy
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Paul Mackerras, Aneesh Kumar K.V, Alexander Potapenko, kasan-dev,
	PowerPC, Dmitry Vyukov, Daniel Axtens



On 2/11/19 3:25 PM, Andrey Konovalov wrote:
> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
> <christophe.leroy@c-s.fr> wrote:
>>
>> Hi Andrey,
>>
>> Le 08/02/2019 à 18:40, Andrey Konovalov a écrit :
>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>>>>
>>>> Hi Daniel,
>>>>
>>>> Le 08/02/2019 à 17:18, Daniel Axtens a écrit :
>>>>> Hi Christophe,
>>>>>
>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>> much earlier (2015) series for book3s.
>>>>>
>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>
>>>> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
>>>> reg)' loop.
>>>>
>>>>>
>>>>>> +void __init kasan_early_init(void)
>>>>>> +{
>>>>>> +    unsigned long addr = KASAN_SHADOW_START;
>>>>>> +    unsigned long end = KASAN_SHADOW_END;
>>>>>> +    unsigned long next;
>>>>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>> +    int i;
>>>>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>> +
>>>>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>> +
>>>>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>> +            panic("KASAN not supported with Hash MMU\n");
>>>>>> +
>>>>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>> +                         kasan_early_shadow_pte + i,
>>>>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>> +
>>>>>> +    do {
>>>>>> +            next = pgd_addr_end(addr, end);
>>>>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>> +    } while (pmd++, addr = next, addr != end);
>>>>>> +}
>>>>>
>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>> shadow PTE array from the generic code.
>>>>>
>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>> was wondering if you or anyone else could explain the early part of
>>>>> kasan init a bit better.
>>>>
>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>> explanation of the shadow.
>>>>
>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>
>>>> It is necessary to set up a shadow area as soon as possible because all
>>>> data accesses check the shadow area, from the beginning (except for a few
>>>> files where sanitizing has been disabled in Makefiles).
>>>>
>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>> zero shadow area being full of zeros.
>>>
>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>> memory range to the same kasan_early_shadow_page. However as kernel
>>> loads and memory gets allocated this shadow page gets rewritten with
>>> non-zero values by different KASAN allocator hooks. Since these values
>>> come from completely different parts of the kernel, but all land on
>>> the same page, kasan_early_shadow_page's content can be considered
>>> garbage. When KASAN checks memory accesses for validity it detects
>>> these garbage shadow values, but doesn't print any reports, as the
>>> reporting routine bails out on the current->kasan_depth check (which
>>> has the value of 1 initially). Only after kasan_init() completes, when
>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>> and we start reporting bad accesses.
>>
>> That's surprising, because in the early phase I map the shadow area
>> read-only, so I do not expect it to get modified unless RO protection is
>> failing for some reason.
> 
> Actually it might be that the allocator hooks don't modify shadow at
> this point, as the allocator is not yet initialized. However stack
> should be getting poisoned and unpoisoned from the very start. But the
> generic statement that early shadow gets dirtied should be correct.
> Might it be that you don't use stack instrumentation?
> 

Yes, stack instrumentation is not used here, because the shadow offset which we pass to
the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
just falls back to CFLAGS_KASAN_MINIMAL, which means outline instrumentation without stack instrumentation.

Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
to shadow memory in function prologue/epilogue.
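
For reference, the two existing patterns look roughly like this (the
Kconfig value shown is the x86_64 one and the Makefile value is elided;
ppc32 of course needs its own offset matching asm/kasan.h):

        # Kconfig route (arch/x86/Kconfig):
        #       config KASAN_SHADOW_OFFSET
        #               hex
        #               depends on KASAN
        #               default 0xdffffc0000000000
        #
        # Makefile route (arch/arm64/Makefile, which computes the value
        # from the configured VA size):
        #       KASAN_SHADOW_OFFSET := <offset>
        #
        # Either way, scripts/Makefile.kasan then sees a non-empty
        # KASAN_SHADOW_OFFSET, passes it via -fasan-shadow-offset= and
        # selects the full stack-instrumenting cflags instead of
        # CFLAGS_KASAN_MINIMAL.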

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-11 16:28             ` Andrey Ryabinin
@ 2019-02-12  1:08               ` Daniel Axtens
  2019-02-12 11:38                 ` Christophe Leroy
  2019-02-12 12:02                 ` Andrey Ryabinin
  0 siblings, 2 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-02-12  1:08 UTC (permalink / raw)
  To: Andrey Ryabinin, Andrey Konovalov, christophe leroy
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Alexander Potapenko, Aneesh Kumar K.V, Paul Mackerras, kasan-dev,
	PowerPC, Dmitry Vyukov

Andrey Ryabinin <aryabinin@virtuozzo.com> writes:

> On 2/11/19 3:25 PM, Andrey Konovalov wrote:
>> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
>> <christophe.leroy@c-s.fr> wrote:
>>>
>>> Hi Andrey,
>>>
>>> Le 08/02/2019 à 18:40, Andrey Konovalov a écrit :
>>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>>>>>
>>>>> Hi Daniel,
>>>>>
>>>>> Le 08/02/2019 à 17:18, Daniel Axtens a écrit :
>>>>>> Hi Christophe,
>>>>>>
>>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>>> much earlier (2015) series for book3s.
>>>>>>
>>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>>
>>>>> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
>>>>> reg)' loop.
>>>>>
>>>>>>
>>>>>>> +void __init kasan_early_init(void)
>>>>>>> +{
>>>>>>> +    unsigned long addr = KASAN_SHADOW_START;
>>>>>>> +    unsigned long end = KASAN_SHADOW_END;
>>>>>>> +    unsigned long next;
>>>>>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>>> +    int i;
>>>>>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>>> +
>>>>>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>>> +
>>>>>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>>> +            panic("KASAN not supported with Hash MMU\n");
>>>>>>> +
>>>>>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>>> +                         kasan_early_shadow_pte + i,
>>>>>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>>> +
>>>>>>> +    do {
>>>>>>> +            next = pgd_addr_end(addr, end);
>>>>>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>>> +    } while (pmd++, addr = next, addr != end);
>>>>>>> +}
>>>>>>
>>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>>> shadow PTE array from the generic code.
>>>>>>
>>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>>> was wondering if you or anyone else could explain the early part of
>>>>>> kasan init a bit better.
>>>>>
>>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>>> explanation of the shadow.
>>>>>
>>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>>
>>>>> It is necessary to set up a shadow area as soon as possible because all
>>>>> data accesses check the shadow area, from the beginning (except for a few
>>>>> files where sanitizing has been disabled in Makefiles).
>>>>>
>>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>>> zero shadow area being full of zeros.
>>>>
>>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>>> memory range to the same kasan_early_shadow_page. However as kernel
>>>> loads and memory gets allocated this shadow page gets rewritten with
>>>> non-zero values by different KASAN allocator hooks. Since these values
>>>> come from completely different parts of the kernel, but all land on
>>>> the same page, kasan_early_shadow_page's content can be considered
>>>> garbage. When KASAN checks memory accesses for validity it detects
>>>> these garbage shadow values, but doesn't print any reports, as the
>>>> reporting routine bails out on the current->kasan_depth check (which
>>>> has the value of 1 initially). Only after kasan_init() completes, when
>>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>>> and we start reporting bad accesses.
>>>
>>> That's surprising, because in the early phase I map the shadow area
>>> read-only, so I do not expect it to get modified unless RO protection is
>>> failing for some reason.
>> 
>> Actually it might be that the allocator hooks don't modify shadow at
>> this point, as the allocator is not yet initialized. However stack
>> should be getting poisoned and unpoisoned from the very start. But the
>> generic statement that early shadow gets dirtied should be correct.
>> Might it be that you don't use stack instrumentation?
>> 
>
> Yes, stack instrumentation is not used here, because the shadow offset which we pass to
> the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
> just falls back to CFLAGS_KASAN_MINIMAL, which means outline instrumentation without stack instrumentation.
>
> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
> to shadow memory in function prologue/epilogue.

Hmm. Is this limitation just that compilers have not implemented
out-of-line support for stack instrumentation, or is there a deeper
reason that stack/global instrumentation relies upon inline
instrumentation?

I ask because it's very common on ppc64 to have the virtual address
space split up into discontiguous blocks. I know this means we lose
inline instrumentation, but I didn't realise we'd also lose stack and
global instrumentation...

I wonder if it would be worth, in the distant future, trying to
implement a smarter scheme in compilers where we could insert more
complex inline mapping schemes.

Regards,
Daniel

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-12  1:08               ` Daniel Axtens
@ 2019-02-12 11:38                 ` Christophe Leroy
  2019-02-12 12:14                   ` Andrey Ryabinin
  2019-02-12 12:02                 ` Andrey Ryabinin
  1 sibling, 1 reply; 14+ messages in thread
From: Christophe Leroy @ 2019-02-12 11:38 UTC (permalink / raw)
  To: Daniel Axtens, Andrey Ryabinin, Andrey Konovalov
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Alexander Potapenko, Aneesh Kumar K.V, Paul Mackerras, kasan-dev,
	PowerPC, Dmitry Vyukov



Le 12/02/2019 à 02:08, Daniel Axtens a écrit :
> Andrey Ryabinin <aryabinin@virtuozzo.com> writes:
> 
>> On 2/11/19 3:25 PM, Andrey Konovalov wrote:
>>> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
>>> <christophe.leroy@c-s.fr> wrote:
>>>>
>>>> Hi Andrey,
>>>>
>>>> Le 08/02/2019 à 18:40, Andrey Konovalov a écrit :
>>>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>>>>>>
>>>>>> Hi Daniel,
>>>>>>
>>>>>> Le 08/02/2019 à 17:18, Daniel Axtens a écrit :
>>>>>>> Hi Christophe,
>>>>>>>
>>>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>>>> much earlier (2015) series for book3s.
>>>>>>>
>>>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>>>
>>>>>> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
>>>>>> reg)' loop.
>>>>>>
>>>>>>>
>>>>>>>> +void __init kasan_early_init(void)
>>>>>>>> +{
>>>>>>>> +    unsigned long addr = KASAN_SHADOW_START;
>>>>>>>> +    unsigned long end = KASAN_SHADOW_END;
>>>>>>>> +    unsigned long next;
>>>>>>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>>>> +    int i;
>>>>>>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>>>> +
>>>>>>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>>>> +
>>>>>>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>>>> +            panic("KASAN not supported with Hash MMU\n");
>>>>>>>> +
>>>>>>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>>>> +                         kasan_early_shadow_pte + i,
>>>>>>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>>>> +
>>>>>>>> +    do {
>>>>>>>> +            next = pgd_addr_end(addr, end);
>>>>>>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>>>> +    } while (pmd++, addr = next, addr != end);
>>>>>>>> +}
>>>>>>>
>>>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>>>> shadow PTE array from the generic code.
>>>>>>>
>>>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>>>> was wondering if you or anyone else could explain the early part of
>>>>>>> kasan init a bit better.
>>>>>>
>>>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>>>> explanation of the shadow.
>>>>>>
>>>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>>>
>>>>>> It is necessary to set up a shadow area as soon as possible because all
>>>>>> data accesses check the shadow area, from the beginning (except for a few
>>>>>> files where sanitizing has been disabled in Makefiles).
>>>>>>
>>>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>>>> zero shadow area being full of zeros.
>>>>>
>>>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>>>> memory range to the same kasan_early_shadow_page. However as kernel
>>>>> loads and memory gets allocated this shadow page gets rewritten with
>>>>> non-zero values by different KASAN allocator hooks. Since these values
>>>>> come from completely different parts of the kernel, but all land on
>>>>> the same page, kasan_early_shadow_page's content can be considered
>>>>> garbage. When KASAN checks memory accesses for validity it detects
>>>>> these garbage shadow values, but doesn't print any reports, as the
>>>>> reporting routine bails out on the current->kasan_depth check (which
>>>>> has the value of 1 initially). Only after kasan_init() completes, when
>>>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>>>> and we start reporting bad accesses.
>>>>
>>>> That's surprising, because in the early phase I map the shadow area
>>>> read-only, so I do not expect it to get modified unless RO protection is
>>>> failing for some reason.
>>>
>>> Actually it might be that the allocator hooks don't modify shadow at
>>> this point, as the allocator is not yet initialized. However stack
>>> should be getting poisoned and unpoisoned from the very start. But the
>>> generic statement that early shadow gets dirtied should be correct.
>>> Might it be that you don't use stack instrumentation?
>>>
>>
>> Yes, stack instrumentation is not used here, because the shadow offset which we pass to
>> the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
>> just falls back to CFLAGS_KASAN_MINIMAL, which means outline instrumentation without stack instrumentation.
>>
>> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
>> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
>> to shadow memory in function prologue/epilogue.
> 
> Hmm. Is this limitation just that compilers have not implemented
> out-of-line support for stack instrumentation, or is there a deeper
> reason that stack/global instrumentation relies upon inline
> instrumentation?

No, it looks like as soon as we define KASAN_SHADOW_OFFSET in the Makefile
in addition to asm/kasan.h, stack instrumentation works with out-of-line checks.
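
Concretely, that means carrying the same value in two places, something
like this (0xe0000000 is only a placeholder here, not necessarily what
v5 will use):

        # arch/powerpc/Makefile
        KASAN_SHADOW_OFFSET = 0xe0000000

        /* arch/powerpc/include/asm/kasan.h */
        #define KASAN_SHADOW_OFFSET     0xe0000000UL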

I'll send series v5 soon.

Christophe

> 
> I ask because it's very common on ppc64 to have the virtual address
> space split up into discontiguous blocks. I know this means we lose
> inline instrumentation, but I didn't realise we'd also lose stack and
> global instrumentation...
> 
> I wonder if it would be worth, in the distant future, trying to
> implement a smarter scheme in compilers where we could insert more
> complex inline mapping schemes.
> 
> Regards,
> Daniel
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-12  1:08               ` Daniel Axtens
  2019-02-12 11:38                 ` Christophe Leroy
@ 2019-02-12 12:02                 ` Andrey Ryabinin
  1 sibling, 0 replies; 14+ messages in thread
From: Andrey Ryabinin @ 2019-02-12 12:02 UTC (permalink / raw)
  To: Daniel Axtens, Andrey Konovalov, christophe leroy
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Alexander Potapenko, Aneesh Kumar K.V, Paul Mackerras, kasan-dev,
	PowerPC, Dmitry Vyukov



On 2/12/19 4:08 AM, Daniel Axtens wrote:
> Andrey Ryabinin <aryabinin@virtuozzo.com> writes:
> 
>> On 2/11/19 3:25 PM, Andrey Konovalov wrote:
>>> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
>>> <christophe.leroy@c-s.fr> wrote:
>>>>
>>>> Hi Andrey,
>>>>
>>>> Le 08/02/2019 à 18:40, Andrey Konovalov a écrit :
>>>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>>>>>>
>>>>>> Hi Daniel,
>>>>>>
>>>>>> Le 08/02/2019 à 17:18, Daniel Axtens a écrit :
>>>>>>> Hi Christophe,
>>>>>>>
>>>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>>>> much earlier (2015) series for book3s.
>>>>>>>
>>>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>>>
>>>>>> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
>>>>>> reg)' loop.
>>>>>>
>>>>>>>
>>>>>>>> +void __init kasan_early_init(void)
>>>>>>>> +{
>>>>>>>> +    unsigned long addr = KASAN_SHADOW_START;
>>>>>>>> +    unsigned long end = KASAN_SHADOW_END;
>>>>>>>> +    unsigned long next;
>>>>>>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>>>> +    int i;
>>>>>>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>>>> +
>>>>>>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>>>> +
>>>>>>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>>>> +            panic("KASAN not supported with Hash MMU\n");
>>>>>>>> +
>>>>>>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>>>> +                         kasan_early_shadow_pte + i,
>>>>>>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>>>> +
>>>>>>>> +    do {
>>>>>>>> +            next = pgd_addr_end(addr, end);
>>>>>>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>>>> +    } while (pmd++, addr = next, addr != end);
>>>>>>>> +}
>>>>>>>
>>>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>>>> shadow PTE array from the generic code.
>>>>>>>
>>>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>>>> was wondering if you or anyone else could explain the early part of
>>>>>>> kasan init a bit better.
>>>>>>
>>>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>>>> explanation of the shadow.
>>>>>>
>>>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>>>
>>>>>> It is necessary to set up a shadow area as soon as possible because all
>>>>>> data accesses check the shadow area, from the beginning (except for a few
>>>>>> files where sanitizing has been disabled in Makefiles).
>>>>>>
>>>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>>>> zero shadow area being full of zeros.
>>>>>
>>>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>>>> memory range to the same kasan_early_shadow_page. However as kernel
>>>>> loads and memory gets allocated this shadow page gets rewritten with
>>>>> non-zero values by different KASAN allocator hooks. Since these values
>>>>> come from completely different parts of the kernel, but all land on
>>>>> the same page, kasan_early_shadow_page's content can be considered
>>>>> garbage. When KASAN checks memory accesses for validity it detects
>>>>> these garbage shadow values, but doesn't print any reports, as the
>>>>> reporting routine bails out on the current->kasan_depth check (which
>>>>> has the value of 1 initially). Only after kasan_init() completes, when
>>>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>>>> and we start reporting bad accesses.
>>>>
>>>> That's surprising, because in the early phase I map the shadow area
>>>> read-only, so I do not expect it to get modified unless RO protection is
>>>> failing for some reason.
>>>
>>> Actually it might be that the allocator hooks don't modify shadow at
>>> this point, as the allocator is not yet initialized. However stack
>>> should be getting poisoned and unpoisoned from the very start. But the
>>> generic statement that early shadow gets dirtied should be correct.
>>> Might it be that you don't use stack instrumentation?
>>>
>>
>> Yes, stack instrumentation is not used here, because the shadow offset which we pass to
>> the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
>> just falls back to CFLAGS_KASAN_MINIMAL, which means outline instrumentation without stack instrumentation.
>>
>> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
>> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
>> to shadow memory in function prologue/epilogue.
> 
> Hmm. Is this limitation just that compilers have not implemented
> out-of-line support for stack instrumentation, or is there a deeper
> reason that stack/global instrumentation relies upon inline
> instrumentation?
> 

Yes, it simply wasn't implemented in compilers. Stack [un]poisoning code is always inlined.

But globals are the opposite of that: they are all poisoned out-of-line via the __asan_register_globals() call.
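
In pseudo-C, for a function with a local array the compiler-generated
stack instrumentation is roughly this (simplified; real redzone sizes,
shadow offsets and layout differ):

        void foo(void)
        {
                char buf[8];
                u8 *shadow = (u8 *)kasan_mem_to_shadow(buf);

                /* inlined prologue: poison the redzones around buf by
                 * writing magic bytes straight into shadow memory,
                 * which is why the early shadow must be writable */
                shadow[-1] = 0xf1;      /* KASAN_STACK_LEFT */
                shadow[1] = 0xf3;       /* KASAN_STACK_RIGHT */

                /* ... function body, accesses to buf are checked ... */

                /* inlined epilogue: unpoison on return */
                shadow[-1] = 0;
                shadow[1] = 0;
        }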

> I ask because it's very common on ppc64 to have the virtual address
> space split up into discontiguous blocks. I know this means we lose
> inline instrumentation, but I didn't realise we'd also lose stack and
> global instrumentation...
> 
> I wonder if it would be worth, in the distant future, trying to
> implement a smarter scheme in compilers where we could insert more
> complex inline mapping schemes.
> 

I'd say it depends on the performance boost that inline instrumentation might give for those complex inline schemes.
The whole inline instrumentation thing exists only because it gives better performance.


> Regards,
> Daniel
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
  2019-02-12 11:38                 ` Christophe Leroy
@ 2019-02-12 12:14                   ` Andrey Ryabinin
  0 siblings, 0 replies; 14+ messages in thread
From: Andrey Ryabinin @ 2019-02-12 12:14 UTC (permalink / raw)
  To: Christophe Leroy, Daniel Axtens, Andrey Konovalov
  Cc: LKML, Nicholas Piggin, Linux Memory Management List,
	Alexander Potapenko, Aneesh Kumar K.V, Paul Mackerras, kasan-dev,
	PowerPC, Dmitry Vyukov

On 2/12/19 2:38 PM, Christophe Leroy wrote:
> 
> 
> Le 12/02/2019 à 02:08, Daniel Axtens a écrit :
>> Andrey Ryabinin <aryabinin@virtuozzo.com> writes:
>>
>>>
>>> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
>>> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
>>> to shadow memory in function prologue/epilogue.
>>
>> Hmm. Is this limitation just that compilers have not implemented
>> out-of-line support for stack instrumentation, or is there a deeper
>> reason that stack/global instrumentation relies upon inline
>> instrumentation?
> 
> No, it looks like as soon as we define KASAN_SHADOW_OFFSET in Makefile in addition to asm/kasan.h, stack instrumentation works with out-of-line.
> 


I think you're confusing two different things.
CONFIG_KASAN_INLINE/CONFIG_KASAN_OUTLINE affects only the generation of the code that checks memory accesses,
i.e. whether we call __asan_loadx()/__asan_storex() or directly insert code checking shadow memory.

But with stack instrumentation we also need to poison redzones around stack variables and unpoison them
when we leave the function. That poisoning/unpoisoning thing is always inlined. Currently there is no option to make it out-of-line.

So you can have stack instrumentation with KASAN_OUTLINE, but it just means that checking shadow memory on memory access will be outlined,
while poisoning/unpoisoning stack redzones will remain inlined.
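
To make the difference concrete, for a 4-byte load at address a the two
flavours of the generated check look roughly like this (simplified; the
real inline sequence also handles accesses that end mid shadow byte):

        /* CONFIG_KASAN_OUTLINE: just a call into mm/kasan */
        __asan_load4(a);
        val = *(u32 *)a;

        /* CONFIG_KASAN_INLINE: the shadow check itself is emitted */
        s8 shadow = *(s8 *)((a >> KASAN_SHADOW_SCALE_SHIFT)
                            + KASAN_SHADOW_OFFSET);
        if (unlikely(shadow))
                __asan_report_load4_noabort(a);
        val = *(u32 *)a;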


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2019-02-12 12:16 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-22 14:28 [PATCH v4 0/3] KASAN for powerpc/32 Christophe Leroy
2019-01-22 14:28 ` [PATCH v4 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
2019-01-22 14:28 ` [PATCH v4 2/3] powerpc/32: Move early_init() in a separate file Christophe Leroy
2019-01-22 14:28 ` [PATCH v4 3/3] powerpc/32: Add KASAN support Christophe Leroy
2019-02-08 16:18   ` Daniel Axtens
2019-02-08 17:17     ` Christophe Leroy
2019-02-08 17:40       ` Andrey Konovalov
2019-02-09 11:55         ` christophe leroy
2019-02-11 12:25           ` Andrey Konovalov
2019-02-11 16:28             ` Andrey Ryabinin
2019-02-12  1:08               ` Daniel Axtens
2019-02-12 11:38                 ` Christophe Leroy
2019-02-12 12:14                   ` Andrey Ryabinin
2019-02-12 12:02                 ` Andrey Ryabinin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).