LinuxPPC-Dev Archive on lore.kernel.org
* [PATCH v3 0/3] KASAN for powerpc/32
@ 2019-01-12 11:16 Christophe Leroy
  2019-01-12 11:16 ` [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Christophe Leroy @ 2019-01-12 11:16 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

This series adds KASAN support to powerpc/32.

Tested on nohash/32 (8xx) and book3s/32 (mpc832x, i.e. 603)

Changes in v3:
- Removed the printk() in kasan_early_init() to avoid build failure (see https://github.com/linuxppc/issues/issues/218)
- Added necessary changes in asm/book3s/32/pgtable.h to get it work on powerpc 603 family
- Added a few KASAN_SANITIZE_xxx.o := n to successfully boot on powerpc 603 family

Changes in v2:
- Rebased.
- Using __set_pte_at() to build the early table.
- Worked around the issue and got rid of the patch adding asm/page.h in asm/pgtable-types.h
    ==> might be fixed independently, but not needed for this series.

For book3s/32 (except the 603), it cannot work as is because, due to the
HASHPTE flag, we can't use the same page table for several PGD entries.
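
For reference, the shadow constants added in patch 3 feed the generic
mapping from a kernel address to its shadow byte. A sketch of that
mapping, following the generic kasan_mem_to_shadow() from
include/linux/kasan.h:

/* each shadow byte covers 8 bytes of memory (1 << KASAN_SHADOW_SCALE_SHIFT) */
static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}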

Christophe Leroy (3):
  powerpc/mm: prepare kernel for KAsan on PPC32
  powerpc/32: Move early_init() in a separate file
  powerpc/32: Add KASAN support

 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/include/asm/book3s/32/pgtable.h |  2 +
 arch/powerpc/include/asm/kasan.h             | 24 ++++++++++
 arch/powerpc/include/asm/nohash/32/pgtable.h |  2 +
 arch/powerpc/include/asm/ppc_asm.h           |  5 ++
 arch/powerpc/include/asm/setup.h             |  5 ++
 arch/powerpc/include/asm/string.h            | 14 ++++++
 arch/powerpc/kernel/Makefile                 |  6 ++-
 arch/powerpc/kernel/cputable.c               |  4 +-
 arch/powerpc/kernel/early_32.c               | 36 ++++++++++++++
 arch/powerpc/kernel/prom_init_check.sh       |  1 +
 arch/powerpc/kernel/setup-common.c           |  2 +
 arch/powerpc/kernel/setup_32.c               | 31 ++----------
 arch/powerpc/lib/Makefile                    |  3 ++
 arch/powerpc/lib/copy_32.S                   |  9 ++--
 arch/powerpc/mm/Makefile                     |  3 ++
 arch/powerpc/mm/dump_linuxpagetables.c       |  8 ++++
 arch/powerpc/mm/kasan_init.c                 | 72 ++++++++++++++++++++++++++++
 arch/powerpc/mm/mem.c                        |  4 ++
 19 files changed, 198 insertions(+), 34 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kasan.h
 create mode 100644 arch/powerpc/kernel/early_32.c
 create mode 100644 arch/powerpc/mm/kasan_init.c

-- 
2.13.3


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-12 11:16 [PATCH v3 0/3] KASAN for powerpc/32 Christophe Leroy
@ 2019-01-12 11:16 ` Christophe Leroy
  2019-01-14  9:34   ` Dmitry Vyukov
  2019-01-12 11:16 ` [PATCH v3 2/3] powerpc/32: Move early_init() in a separate file Christophe Leroy
  2019-01-12 11:16 ` [PATCH v3 3/3] powerpc/32: Add KASAN support Christophe Leroy
  2 siblings, 1 reply; 12+ messages in thread
From: Christophe Leroy @ 2019-01-12 11:16 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

In kernel/cputable.c, explicitly use memcpy() in order
to allow GCC to replace it with __memcpy() when KASAN is
selected.

Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
enabled"), memset() can be used before activation of the cache,
so no need to use memset_io() for zeroing the BSS.
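
To illustrate (a minimal sketch, not part of the patch): for a structure
as large as struct cpu_spec, GCC lowers a plain assignment to a call to
memcpy(), which the preprocessor never sees, whereas an explicit call is
caught by the #define in asm/string.h:

*t = *s;			/* GCC may emit a call to memcpy() */
memcpy(t, s, sizeof(*t));	/* rewritten to __memcpy() when KASAN is on */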

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/cputable.c | 4 ++--
 arch/powerpc/kernel/setup_32.c | 6 ++----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index 1eab54bc6ee9..84814c8d1bcb 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
 	struct cpu_spec *t = &the_cpu_spec;
 
 	t = PTRRELOC(t);
-	*t = *s;
+	memcpy(t, s, sizeof(*t));
 
 	*PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
 }
@@ -2162,7 +2162,7 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
 	old = *t;
 
 	/* Copy everything, then do fixups */
-	*t = *s;
+	memcpy(t, s, sizeof(*t));
 
 	/*
 	 * If we are overriding a previous value derived from the real
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 947f904688b0..5e761eb16a6d 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -73,10 +73,8 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
 {
 	unsigned long offset = reloc_offset();
 
-	/* First zero the BSS -- use memset_io, some platforms don't have
-	 * caches on yet */
-	memset_io((void __iomem *)PTRRELOC(&__bss_start), 0,
-			__bss_stop - __bss_start);
+	/* First zero the BSS */
+	memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
 
 	/*
 	 * Identify the CPU type and fix up code sections
-- 
2.13.3


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3 2/3] powerpc/32: Move early_init() in a separate file
  2019-01-12 11:16 [PATCH v3 0/3] KASAN for powerpc/32 Christophe Leroy
  2019-01-12 11:16 ` [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
@ 2019-01-12 11:16 ` Christophe Leroy
  2019-01-12 11:16 ` [PATCH v3 3/3] powerpc/32: Add KASAN support Christophe Leroy
  2 siblings, 0 replies; 12+ messages in thread
From: Christophe Leroy @ 2019-01-12 11:16 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

In preparation for KASAN, move early_init() into a separate
file so that KASAN can be deactivated for that function.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/Makefile   |  2 +-
 arch/powerpc/kernel/early_32.c | 35 +++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/setup_32.c | 26 --------------------------
 3 files changed, 36 insertions(+), 27 deletions(-)
 create mode 100644 arch/powerpc/kernel/early_32.c

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index cb7f0bb9ee71..879b36602748 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -93,7 +93,7 @@ extra-y				+= vmlinux.lds
 
 obj-$(CONFIG_RELOCATABLE)	+= reloc_$(BITS).o
 
-obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o
+obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o early_32.o
 obj-$(CONFIG_PPC64)		+= dma-iommu.o iommu.o
 obj-$(CONFIG_KGDB)		+= kgdb.o
 obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
new file mode 100644
index 000000000000..b3e40d6d651c
--- /dev/null
+++ b/arch/powerpc/kernel/early_32.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Early init before relocation
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <asm/setup.h>
+#include <asm/sections.h>
+
+/*
+ * We're called here very early in the boot.
+ *
+ * Note that the kernel may be running at an address which is different
+ * from the address that it was linked at, so we must use RELOC/PTRRELOC
+ * to access static data (including strings).  -- paulus
+ */
+notrace unsigned long __init early_init(unsigned long dt_ptr)
+{
+	unsigned long offset = reloc_offset();
+
+	/* First zero the BSS */
+	memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
+
+	/*
+	 * Identify the CPU type and fix up code sections
+	 * that depend on which cpu we have.
+	 */
+	identify_cpu(offset, mfspr(SPRN_PVR));
+
+	apply_feature_fixups();
+
+	return KERNELBASE + offset;
+}
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 5e761eb16a6d..b46a9a33225b 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -63,32 +63,6 @@ EXPORT_SYMBOL(DMA_MODE_READ);
 EXPORT_SYMBOL(DMA_MODE_WRITE);
 
 /*
- * We're called here very early in the boot.
- *
- * Note that the kernel may be running at an address which is different
- * from the address that it was linked at, so we must use RELOC/PTRRELOC
- * to access static data (including strings).  -- paulus
- */
-notrace unsigned long __init early_init(unsigned long dt_ptr)
-{
-	unsigned long offset = reloc_offset();
-
-	/* First zero the BSS */
-	memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
-
-	/*
-	 * Identify the CPU type and fix up code sections
-	 * that depend on which cpu we have.
-	 */
-	identify_cpu(offset, mfspr(SPRN_PVR));
-
-	apply_feature_fixups();
-
-	return KERNELBASE + offset;
-}
-
-
-/*
  * This is run before start_kernel(), the kernel has been relocated
  * and we are running with enough of the MMU enabled to have our
  * proper kernel virtual addresses
-- 
2.13.3


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3 3/3] powerpc/32: Add KASAN support
  2019-01-12 11:16 [PATCH v3 0/3] KASAN for powerpc/32 Christophe Leroy
  2019-01-12 11:16 ` [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
  2019-01-12 11:16 ` [PATCH v3 2/3] powerpc/32: Move early_init() in a separate file Christophe Leroy
@ 2019-01-12 11:16 ` Christophe Leroy
  2019-01-15 17:23   ` Andrey Ryabinin
  2 siblings, 1 reply; 12+ messages in thread
From: Christophe Leroy @ 2019-01-12 11:16 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Nicholas Piggin, Aneesh Kumar K.V, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev

This patch adds KASAN support for PPC32.

Note that on book3s it will only work on the 603, because the other
ones use a hash table and therefore cannot share a single PTE table
covering the entire early KASAN shadow area.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/include/asm/book3s/32/pgtable.h |  2 +
 arch/powerpc/include/asm/kasan.h             | 24 ++++++++++
 arch/powerpc/include/asm/nohash/32/pgtable.h |  2 +
 arch/powerpc/include/asm/ppc_asm.h           |  5 ++
 arch/powerpc/include/asm/setup.h             |  5 ++
 arch/powerpc/include/asm/string.h            | 14 ++++++
 arch/powerpc/kernel/Makefile                 |  4 ++
 arch/powerpc/kernel/early_32.c               |  1 +
 arch/powerpc/kernel/prom_init_check.sh       |  1 +
 arch/powerpc/kernel/setup-common.c           |  2 +
 arch/powerpc/kernel/setup_32.c               |  3 ++
 arch/powerpc/lib/Makefile                    |  3 ++
 arch/powerpc/lib/copy_32.S                   |  9 ++--
 arch/powerpc/mm/Makefile                     |  3 ++
 arch/powerpc/mm/dump_linuxpagetables.c       |  8 ++++
 arch/powerpc/mm/kasan_init.c                 | 72 ++++++++++++++++++++++++++++
 arch/powerpc/mm/mem.c                        |  4 ++
 18 files changed, 160 insertions(+), 3 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kasan.h
 create mode 100644 arch/powerpc/mm/kasan_init.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2890d36eb531..11dcaa80d3ff 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -175,6 +175,7 @@ config PPC
 	select GENERIC_TIME_VSYSCALL
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_JUMP_LABEL
+	select HAVE_ARCH_KASAN			if PPC32
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 49d76adb9bc5..4543016f80ca 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -141,6 +141,8 @@ static inline bool pte_user(pte_t pte)
  */
 #ifdef CONFIG_HIGHMEM
 #define KVIRT_TOP	PKMAP_BASE
+#elif defined(CONFIG_KASAN)
+#define KVIRT_TOP	KASAN_SHADOW_START
 #else
 #define KVIRT_TOP	(0xfe000000UL)	/* for now, could be FIXMAP_BASE ? */
 #endif
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
new file mode 100644
index 000000000000..5d0088429b62
--- /dev/null
+++ b/arch/powerpc/include/asm/kasan.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/page.h>
+#include <asm/pgtable-types.h>
+#include <asm/fixmap.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT	3
+#define KASAN_SHADOW_SIZE	((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
+
+#define KASAN_SHADOW_START	(ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
+					    PGDIR_SIZE))
+#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
+#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_START - \
+				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
+
+void kasan_early_init(void);
+void kasan_init(void);
+
+#endif
+#endif
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index bed433358260..b3b52f02be1a 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -71,6 +71,8 @@ extern int icache_44x_need_flush;
  */
 #ifdef CONFIG_HIGHMEM
 #define KVIRT_TOP	PKMAP_BASE
+#elif defined(CONFIG_KASAN)
+#define KVIRT_TOP	KASAN_SHADOW_START
 #else
 #define KVIRT_TOP	(0xfe000000UL)	/* for now, could be FIXMAP_BASE ? */
 #endif
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index e0637730a8e7..8d5291c721fa 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -251,6 +251,11 @@ GLUE(.,name):
 
 #define _GLOBAL_TOC(name) _GLOBAL(name)
 
+#define KASAN_OVERRIDE(x, y) \
+	.weak x;	     \
+	.set x, y
+
+
 #endif
 
 /*
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 65676e2325b8..da7768aa996a 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -74,6 +74,11 @@ static inline void setup_spectre_v2(void) {};
 #endif
 void do_btb_flush_fixups(void);
 
+#ifndef CONFIG_KASAN
+static inline void kasan_early_init(void) { }
+static inline void kasan_init(void) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_SETUP_H */
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index 1647de15a31e..64d44d4836b4 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -27,6 +27,20 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
 extern void * memchr(const void *,int,__kernel_size_t);
 extern void * memcpy_flushcache(void *,const void *,__kernel_size_t);
 
+void *__memset(void *s, int c, __kernel_size_t count);
+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #ifdef CONFIG_PPC64
 #define __HAVE_ARCH_MEMSET32
 #define __HAVE_ARCH_MEMSET64
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 879b36602748..7556000e1d0f 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -31,6 +31,10 @@ CFLAGS_REMOVE_btext.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_prom.o = $(CC_FLAGS_FTRACE)
 endif
 
+KASAN_SANITIZE_early_32.o := n
+KASAN_SANITIZE_cputable.o := n
+KASAN_SANITIZE_prom_init.o := n
+
 obj-y				:= cputable.o ptrace.o syscalls.o \
 				   irq.o align.o signal_32.o pmc.o vdso.o \
 				   process.o systbl.o idle.o \
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
index b3e40d6d651c..3482118ffe76 100644
--- a/arch/powerpc/kernel/early_32.c
+++ b/arch/powerpc/kernel/early_32.c
@@ -8,6 +8,7 @@
 #include <linux/kernel.h>
 #include <asm/setup.h>
 #include <asm/sections.h>
+#include <asm/asm-prototypes.h>
 
 /*
  * We're called here very early in the boot.
diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
index 667df97d2595..9282730661ed 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -18,6 +18,7 @@
 
 WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
 _end enter_prom memcpy memset reloc_offset __secondary_hold
+__memcpy __memset
 __secondary_hold_acknowledge __secondary_hold_spinloop __start
 strcmp strcpy strlcpy strlen strncmp strstr kstrtobool logo_linux_clut224
 reloc_got2 kernstart_addr memstart_addr linux_banner _stext
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index ca00fbb97cf8..16ff1ea66805 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -978,6 +978,8 @@ void __init setup_arch(char **cmdline_p)
 
 	paging_init();
 
+	kasan_init();
+
 	/* Initialize the MMU context management stuff. */
 	mmu_context_init();
 
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index b46a9a33225b..fe6990dec6fc 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -17,6 +17,7 @@
 #include <linux/console.h>
 #include <linux/memblock.h>
 #include <linux/export.h>
+#include <linux/kasan.h>
 
 #include <asm/io.h>
 #include <asm/prom.h>
@@ -75,6 +76,8 @@ notrace void __init machine_init(u64 dt_ptr)
 	unsigned int *addr = (unsigned int *)patch_site_addr(&patch__memset_nocache);
 	unsigned long insn;
 
+	kasan_early_init();
+
 	/* Configure static keys first, now that we're relocated. */
 	setup_feature_keys();
 
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 3bf9fc6fd36c..31ca9d4ac92e 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -8,6 +8,9 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 CFLAGS_REMOVE_code-patching.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_feature-fixups.o = $(CC_FLAGS_FTRACE)
 
+KASAN_SANITIZE_code-patching.o := n
+KASAN_SANITIZE_feature-fixups.o := n
+
 obj-y += string.o alloc.o code-patching.o feature-fixups.o
 
 obj-$(CONFIG_PPC32)	+= div64.o copy_32.o crtsavres.o strlen_32.o
diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index ba66846fe973..4d8a1c73b4cf 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -91,7 +91,8 @@ EXPORT_SYMBOL(memset16)
  * We therefore skip the optimised bloc that uses dcbz. This jump is
  * replaced by a nop once cache is active. This is done in machine_init()
  */
-_GLOBAL(memset)
+_GLOBAL(__memset)
+KASAN_OVERRIDE(memset, __memset)
 	cmplwi	0,r5,4
 	blt	7f
 
@@ -163,12 +164,14 @@ EXPORT_SYMBOL(memset)
  * We therefore jump to generic_memcpy which doesn't use dcbz. This jump is
  * replaced by a nop once cache is active. This is done in machine_init()
  */
-_GLOBAL(memmove)
+_GLOBAL(__memmove)
+KASAN_OVERRIDE(memmove, __memmove)
 	cmplw	0,r3,r4
 	bgt	backwards_memcpy
 	/* fall through */
 
-_GLOBAL(memcpy)
+_GLOBAL(__memcpy)
+KASAN_OVERRIDE(memcpy, __memcpy)
 1:	b	generic_memcpy
 	patch_site	1b, patch__memcpy_nocache
 
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index f965fc33a8b7..d6b76f25f6de 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -7,6 +7,8 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
 
+KASAN_SANITIZE_kasan_init.o := n
+
 obj-y				:= fault.o mem.o pgtable.o mmap.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
 				   init-common.o mmu_context.o drmem.o
@@ -55,3 +57,4 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= dump_linuxpagetables-book3s64.o
 endif
 obj-$(CONFIG_PPC_HTDUMP)	+= dump_hashpagetable.o
 obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
+obj-$(CONFIG_KASAN)		+= kasan_init.o
diff --git a/arch/powerpc/mm/dump_linuxpagetables.c b/arch/powerpc/mm/dump_linuxpagetables.c
index 6aa41669ac1a..c862b48118f1 100644
--- a/arch/powerpc/mm/dump_linuxpagetables.c
+++ b/arch/powerpc/mm/dump_linuxpagetables.c
@@ -94,6 +94,10 @@ static struct addr_marker address_markers[] = {
 	{ 0,	"Consistent mem start" },
 	{ 0,	"Consistent mem end" },
 #endif
+#ifdef CONFIG_KASAN
+	{ 0,	"kasan shadow mem start" },
+	{ 0,	"kasan shadow mem end" },
+#endif
 #ifdef CONFIG_HIGHMEM
 	{ 0,	"Highmem PTEs start" },
 	{ 0,	"Highmem PTEs end" },
@@ -310,6 +314,10 @@ static void populate_markers(void)
 	address_markers[i++].start_address = IOREMAP_TOP +
 					     CONFIG_CONSISTENT_SIZE;
 #endif
+#ifdef CONFIG_KASAN
+	address_markers[i++].start_address = KASAN_SHADOW_START;
+	address_markers[i++].start_address = KASAN_SHADOW_END;
+#endif
 #ifdef CONFIG_HIGHMEM
 	address_markers[i++].start_address = PKMAP_BASE;
 	address_markers[i++].start_address = PKMAP_ADDR(LAST_PKMAP);
diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan_init.c
new file mode 100644
index 000000000000..3edc9c2d2f3e
--- /dev/null
+++ b/arch/powerpc/mm/kasan_init.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/kasan.h>
+#include <linux/printk.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+
+void __init kasan_early_init(void)
+{
+	unsigned long addr = KASAN_SHADOW_START & PGDIR_MASK;
+	unsigned long end = KASAN_SHADOW_END;
+	unsigned long next;
+	pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
+	int i;
+	phys_addr_t pa = __pa(kasan_early_shadow_page);
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
+			     kasan_early_shadow_pte + i,
+			     pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
+
+	do {
+		next = pgd_addr_end(addr, end);
+		pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void __init kasan_init_region(struct memblock_region *reg)
+{
+	void *start = __va(reg->base);
+	void *end = __va(reg->base + reg->size);
+	unsigned long k_start, k_end, k_cur, k_next;
+	pmd_t *pmd;
+
+	if (start >= end)
+		return;
+
+	k_start = (unsigned long)kasan_mem_to_shadow(start);
+	k_end = (unsigned long)kasan_mem_to_shadow(end);
+	pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
+
+	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
+		k_next = pgd_addr_end(k_cur, k_end);
+		if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
+			pte_t *new = pte_alloc_one_kernel(&init_mm);
+
+			if (!new)
+				panic("kasan: pte_alloc_one_kernel() failed");
+			memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
+			pmd_populate_kernel(&init_mm, pmd, new);
+		}
+	};
+
+	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
+		phys_addr_t pa = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
+		pte_t pte = pfn_pte(PHYS_PFN(pa), PAGE_KERNEL);
+
+		pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
+		pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
+	}
+	flush_tlb_kernel_range(k_start, k_end);
+}
+
+void __init kasan_init(void)
+{
+	struct memblock_region *reg;
+
+	for_each_memblock(memory, reg)
+		kasan_init_region(reg);
+
+	pr_info("KASAN init done\n");
+}
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 33cc6f676fa6..ae7db88b72d6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -369,6 +369,10 @@ void __init mem_init(void)
 	pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
 		PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
 #endif /* CONFIG_HIGHMEM */
+#ifdef CONFIG_KASAN
+	pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
+		KASAN_SHADOW_START, KASAN_SHADOW_END);
+#endif
 #ifdef CONFIG_NOT_COHERENT_CACHE
 	pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
 		IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
-- 
2.13.3


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-12 11:16 ` [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
@ 2019-01-14  9:34   ` Dmitry Vyukov
  2019-01-15  7:27     ` Christophe Leroy
  0 siblings, 1 reply; 12+ messages in thread
From: Dmitry Vyukov @ 2019-01-14  9:34 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: LKML, Nicholas Piggin, Linux-MM, Paul Mackerras,
	Aneesh Kumar K.V, Andrey Ryabinin, Alexander Potapenko,
	kasan-dev, linuxppc-dev

On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
<christophe.leroy@c-s.fr> wrote:
>
> In kernel/cputable.c, explicitly use memcpy() in order
> to allow GCC to replace it with __memcpy() when KASAN is
> selected.
>
> Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
> enabled"), memset() can be used before activation of the cache,
> so no need to use memset_io() for zeroing the BSS.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/kernel/cputable.c | 4 ++--
>  arch/powerpc/kernel/setup_32.c | 6 ++----
>  2 files changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/kernel/cputable.c
b/arch/powerpc/kernel/cputable.c
> index 1eab54bc6ee9..84814c8d1bcb 100644
> --- a/arch/powerpc/kernel/cputable.c
> +++ b/arch/powerpc/kernel/cputable.c
> @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
>         struct cpu_spec *t = &the_cpu_spec;
>
>         t = PTRRELOC(t);
> -       *t = *s;
> +       memcpy(t, s, sizeof(*t));

Hi Christophe,

I understand why you are doing this, but this looks a bit fragile and
non-scalable. This may not work with the next version of the compiler,
a compiler version different from yours, clang, etc.

Does using -ffreestanding and/or -fno-builtin-memcpy (-memset) help?
If it helps, perhaps it makes sense to add these flags to
KASAN_SANITIZE := n files.


>         *PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
>  }
> @@ -2162,7 +2162,7 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
>         old = *t;
>
>         /* Copy everything, then do fixups */
> -       *t = *s;
> +       memcpy(t, s, sizeof(*t));
>
>         /*
>          * If we are overriding a previous value derived from the real
> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
> index 947f904688b0..5e761eb16a6d 100644
> --- a/arch/powerpc/kernel/setup_32.c
> +++ b/arch/powerpc/kernel/setup_32.c
> @@ -73,10 +73,8 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
>  {
>         unsigned long offset = reloc_offset();
>
> -       /* First zero the BSS -- use memset_io, some platforms don't have
> -        * caches on yet */
> -       memset_io((void __iomem *)PTRRELOC(&__bss_start), 0,
> -                       __bss_stop - __bss_start);
> +       /* First zero the BSS */
> +       memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
>
>         /*
>          * Identify the CPU type and fix up code sections
> --
> 2.13.3
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-14  9:34   ` Dmitry Vyukov
@ 2019-01-15  7:27     ` Christophe Leroy
  2019-01-15 11:14       ` Dmitry Vyukov
  0 siblings, 1 reply; 12+ messages in thread
From: Christophe Leroy @ 2019-01-15  7:27 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: LKML, Nicholas Piggin, Linux-MM, Paul Mackerras,
	Aneesh Kumar K.V, Andrey Ryabinin, Alexander Potapenko,
	kasan-dev, linuxppc-dev



On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
> <christophe.leroy@c-s.fr> wrote:
> >
> > In kernel/cputable.c, explicitly use memcpy() in order
> > to allow GCC to replace it with __memcpy() when KASAN is
> > selected.
> >
> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
> > enabled"), memset() can be used before activation of the cache,
> > so no need to use memset_io() for zeroing the BSS.
> >
> > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> > ---
> >  arch/powerpc/kernel/cputable.c | 4 ++--
> >  arch/powerpc/kernel/setup_32.c | 6 ++----
> >  2 files changed, 4 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/powerpc/kernel/cputable.c
> b/arch/powerpc/kernel/cputable.c
> > index 1eab54bc6ee9..84814c8d1bcb 100644
> > --- a/arch/powerpc/kernel/cputable.c
> > +++ b/arch/powerpc/kernel/cputable.c
> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
> >         struct cpu_spec *t = &the_cpu_spec;
> >
> >         t = PTRRELOC(t);
> > -       *t = *s;
> > +       memcpy(t, s, sizeof(*t));
> 
> Hi Christophe,
> 
> I understand why you are doing this, but this looks a bit fragile and
> non-scalable. This may not work with the next version of the compiler,
> a compiler version different from yours, clang, etc.

My feeling is that this change makes it more solid.

My understanding is that when you do *t = *s, the compiler can use
whatever way it wants to do the copy.
When you do memcpy(), you ensure it will do it that way and not another
way, don't you?

My problem is that when using *t = *s, the function set_cur_cpu_spec() 
always calls memcpy(), not taking into account the following define 
which is in arch/powerpc/include/asm/string.h (other arches do the same):

#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
/*
  * For files that are not instrumented (e.g. mm/slub.c) we
  * should use not instrumented version of mem* functions.
  */
#define memcpy(dst, src, len) __memcpy(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)
#endif

void __init set_cur_cpu_spec(struct cpu_spec *s)
{
	struct cpu_spec *t = &the_cpu_spec;

	t = PTRRELOC(t);
	*t = *s;

	*PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
}

00000000 <set_cur_cpu_spec>:
    0:   94 21 ff f0     stwu    r1,-16(r1)
    4:   7c 08 02 a6     mflr    r0
    8:   bf c1 00 08     stmw    r30,8(r1)
    c:   3f e0 00 00     lis     r31,0
                         e: R_PPC_ADDR16_HA      .data..read_mostly
   10:   3b ff 00 00     addi    r31,r31,0
                         12: R_PPC_ADDR16_LO     .data..read_mostly
   14:   7c 7e 1b 78     mr      r30,r3
   18:   7f e3 fb 78     mr      r3,r31
   1c:   90 01 00 14     stw     r0,20(r1)
   20:   48 00 00 01     bl      20 <set_cur_cpu_spec+0x20>
                         20: R_PPC_REL24 add_reloc_offset
   24:   7f c4 f3 78     mr      r4,r30
   28:   38 a0 00 58     li      r5,88
   2c:   48 00 00 01     bl      2c <set_cur_cpu_spec+0x2c>
                         2c: R_PPC_REL24 memcpy
   30:   38 7f 00 58     addi    r3,r31,88
   34:   48 00 00 01     bl      34 <set_cur_cpu_spec+0x34>
                         34: R_PPC_REL24 add_reloc_offset
   38:   93 e3 00 00     stw     r31,0(r3)
   3c:   80 01 00 14     lwz     r0,20(r1)
   40:   bb c1 00 08     lmw     r30,8(r1)
   44:   7c 08 03 a6     mtlr    r0
   48:   38 21 00 10     addi    r1,r1,16
   4c:   4e 80 00 20     blr


When replacing *t = *s with memcpy(t, s, sizeof(*t)), GCC replaces it with
__memcpy() as expected.

> 
> Does using -ffreestanding and/or -fno-builtin-memcpy (-memset) help?

No it doesn't and to be honest I can't see how it would. My 
understanding is that it could be even worse because it would mean 
adding calls to memcpy() also in all trivial places where GCC does the 
copy itself by default.

Do you see any alternative?

Christophe

> If it helps, perhaps it makes sense to add these flags to
> KASAN_SANITIZE := n files.
> 
> 
>>          *PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
>>   }
>> @@ -2162,7 +2162,7 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
>>          old = *t;
>>
>>          /* Copy everything, then do fixups */
>> -       *t = *s;
>> +       memcpy(t, s, sizeof(*t));
>>
>>          /*
>>           * If we are overriding a previous value derived from the real
>> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
>> index 947f904688b0..5e761eb16a6d 100644
>> --- a/arch/powerpc/kernel/setup_32.c
>> +++ b/arch/powerpc/kernel/setup_32.c
>> @@ -73,10 +73,8 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
>>   {
>>          unsigned long offset = reloc_offset();
>>
>> -       /* First zero the BSS -- use memset_io, some platforms don't have
>> -        * caches on yet */
>> -       memset_io((void __iomem *)PTRRELOC(&__bss_start), 0,
>> -                       __bss_stop - __bss_start);
>> +       /* First zero the BSS */
>> +       memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
>>
>>          /*
>>           * Identify the CPU type and fix up code sections
>> --
>> 2.13.3
>>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-15  7:27     ` Christophe Leroy
@ 2019-01-15 11:14       ` Dmitry Vyukov
  2019-01-15 17:07         ` Andrey Ryabinin
  0 siblings, 1 reply; 12+ messages in thread
From: Dmitry Vyukov @ 2019-01-15 11:14 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: LKML, Nicholas Piggin, Linux-MM, Paul Mackerras,
	Aneesh Kumar K.V, Andrey Ryabinin, Alexander Potapenko,
	kasan-dev, linuxppc-dev

On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
<christophe.leroy@c-s.fr> wrote:
>
>
>
> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
> > On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
> > <christophe.leroy@c-s.fr> wrote:
> > >
> > > In kernel/cputable.c, explicitly use memcpy() in order
> > > to allow GCC to replace it with __memcpy() when KASAN is
> > > selected.
> > >
> > > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
> > > enabled"), memset() can be used before activation of the cache,
> > > so no need to use memset_io() for zeroing the BSS.
> > >
> > > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> > > ---
> > >  arch/powerpc/kernel/cputable.c | 4 ++--
> > >  arch/powerpc/kernel/setup_32.c | 6 ++----
> > >  2 files changed, 4 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/arch/powerpc/kernel/cputable.c
> > b/arch/powerpc/kernel/cputable.c
> > > index 1eab54bc6ee9..84814c8d1bcb 100644
> > > --- a/arch/powerpc/kernel/cputable.c
> > > +++ b/arch/powerpc/kernel/cputable.c
> > > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
> > >         struct cpu_spec *t = &the_cpu_spec;
> > >
> > >         t = PTRRELOC(t);
> > > -       *t = *s;
> > > +       memcpy(t, s, sizeof(*t));
> >
> > Hi Christophe,
> >
> > I understand why you are doing this, but this looks a bit fragile and
> > non-scalable. This may not work with the next version of the compiler,
> > a compiler version different from yours, clang, etc.
>
> My feeling is that this change makes it more solid.
>
> My understanding is that when you do *t = *s, the compiler can use
> whatever way it wants to do the copy.
> When you do memcpy(), you ensure it will do it that way and not another
> way, don't you?

It makes this single line more deterministic wrt code-gen (though,
strictly speaking, the compiler can turn memcpy back into inline
instructions, since it knows memcpy's semantics anyway).
But the problem I meant is that the set of places that are subject to
this problem is not deterministic. So if we go with this solution,
after this change it's in the status "works on your machine", and we
either need to commit to not using struct copies and zeroing
throughout kernel code, or potentially have a long tail of other
similar cases; and since they can be triggered by another compiler
version, we may need to backport these changes to previous releases
too. Whereas if we went with compiler flags, it would prevent the
problem in all current and future places and with other past/future
versions of compilers.


> My problem is that when using *t = *s, the function set_cur_cpu_spec()
> always calls memcpy(), not taking into account the following define
> which is in arch/powerpc/include/asm/string.h (other arches do the same):
>
> #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
> /*
>   * For files that are not instrumented (e.g. mm/slub.c) we
>   * should use not instrumented version of mem* functions.
>   */
> #define memcpy(dst, src, len) __memcpy(dst, src, len)
> #define memmove(dst, src, len) __memmove(dst, src, len)
> #define memset(s, c, n) __memset(s, c, n)
> #endif
>
> void __init set_cur_cpu_spec(struct cpu_spec *s)
> {
>         struct cpu_spec *t = &the_cpu_spec;
>
>         t = PTRRELOC(t);
>         *t = *s;
>
>         *PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
> }
>
> 00000000 <set_cur_cpu_spec>:
>     0:   94 21 ff f0     stwu    r1,-16(r1)
>     4:   7c 08 02 a6     mflr    r0
>     8:   bf c1 00 08     stmw    r30,8(r1)
>     c:   3f e0 00 00     lis     r31,0
>                          e: R_PPC_ADDR16_HA      .data..read_mostly
>    10:   3b ff 00 00     addi    r31,r31,0
>                          12: R_PPC_ADDR16_LO     .data..read_mostly
>    14:   7c 7e 1b 78     mr      r30,r3
>    18:   7f e3 fb 78     mr      r3,r31
>    1c:   90 01 00 14     stw     r0,20(r1)
>    20:   48 00 00 01     bl      20 <set_cur_cpu_spec+0x20>
>                          20: R_PPC_REL24 add_reloc_offset
>    24:   7f c4 f3 78     mr      r4,r30
>    28:   38 a0 00 58     li      r5,88
>    2c:   48 00 00 01     bl      2c <set_cur_cpu_spec+0x2c>
>                          2c: R_PPC_REL24 memcpy
>    30:   38 7f 00 58     addi    r3,r31,88
>    34:   48 00 00 01     bl      34 <set_cur_cpu_spec+0x34>
>                          34: R_PPC_REL24 add_reloc_offset
>    38:   93 e3 00 00     stw     r31,0(r3)
>    3c:   80 01 00 14     lwz     r0,20(r1)
>    40:   bb c1 00 08     lmw     r30,8(r1)
>    44:   7c 08 03 a6     mtlr    r0
>    48:   38 21 00 10     addi    r1,r1,16
>    4c:   4e 80 00 20     blr
>
>
> When replacing *t = *s with memcpy(t, s, sizeof(*t)), GCC replaces it with
> __memcpy() as expected.
>
> >
> > Does using -ffreestanding and/or -fno-builtin-memcpy (-memset) help?
>
> No it doesn't and to be honest I can't see how it would. My
> understanding is that it could be even worse because it would mean
> adding calls to memcpy() also in all trivial places where GCC does the
> copy itself by default.

The idea was that with -ffreestanding the compiler must not assume the
presence of any runtime support library, so it must not emit any calls
that are not explicitly present in the source code. However, after
reading more docs, it seems that even with -ffreestanding gcc and
clang still assume the presence of a runtime library that provides at
least memcpy, memmove, memset and memcmp. There does not seem to be a
way to prevent clang and gcc from doing it. So I guess this approach
is our only option:

Acked-by: Dmitry Vyukov <dvyukov@google.com>

Though, a comment may be useful so that the next person does not try to
revert it.
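
Something like this, perhaps (just a sketch of such a comment, exact
wording up to you):

t = PTRRELOC(t);
/*
 * Use memcpy() rather than a struct assignment: the assignment makes
 * GCC emit a call to memcpy() that bypasses the #define in
 * asm/string.h redirecting memcpy() to __memcpy() in KASAN builds.
 */
memcpy(t, s, sizeof(*t));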


> Do you see any alternative?
>
> Christophe
>
> > If it helps, perhaps it makes sense to add these flags to
> > KASAN_SANITIZE := n files.
> >
> >
> >>          *PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
> >>   }
> >> @@ -2162,7 +2162,7 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
> >>          old = *t;
> >>
> >>          /* Copy everything, then do fixups */
> >> -       *t = *s;
> >> +       memcpy(t, s, sizeof(*t));
> >>
> >>          /*
> >>           * If we are overriding a previous value derived from the real
> >> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
> >> index 947f904688b0..5e761eb16a6d 100644
> >> --- a/arch/powerpc/kernel/setup_32.c
> >> +++ b/arch/powerpc/kernel/setup_32.c
> >> @@ -73,10 +73,8 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
> >>   {
> >>          unsigned long offset = reloc_offset();
> >>
> >> -       /* First zero the BSS -- use memset_io, some platforms don't have
> >> -        * caches on yet */
> >> -       memset_io((void __iomem *)PTRRELOC(&__bss_start), 0,
> >> -                       __bss_stop - __bss_start);
> >> +       /* First zero the BSS */
> >> +       memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
> >>
> >>          /*
> >>           * Identify the CPU type and fix up code sections
> >> --
> >> 2.13.3
> >>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-15 11:14       ` Dmitry Vyukov
@ 2019-01-15 17:07         ` Andrey Ryabinin
  2019-01-15 17:10           ` Dmitry Vyukov
  0 siblings, 1 reply; 12+ messages in thread
From: Andrey Ryabinin @ 2019-01-15 17:07 UTC (permalink / raw)
  To: Dmitry Vyukov, Christophe Leroy
  Cc: LKML, Nicholas Piggin, Linux-MM, Alexander Potapenko,
	Aneesh Kumar K.V, Paul Mackerras, kasan-dev, linuxppc-dev



On 1/15/19 2:14 PM, Dmitry Vyukov wrote:
> On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
> <christophe.leroy@c-s.fr> wrote:
>> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
>>> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
>>> <christophe.leroy@c-s.fr> wrote:
>>> >
>>> > In kernel/cputable.c, explicitly use memcpy() in order
>>> > to allow GCC to replace it with __memcpy() when KASAN is
>>> > selected.
>>> >
>>> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
>>> > enabled"), memset() can be used before activation of the cache,
>>> > so no need to use memset_io() for zeroing the BSS.
>>> >
>>> > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
>>> > ---
>>> >  arch/powerpc/kernel/cputable.c | 4 ++--
>>> >  arch/powerpc/kernel/setup_32.c | 6 ++----
>>> >  2 files changed, 4 insertions(+), 6 deletions(-)
>>> >
>>> > diff --git a/arch/powerpc/kernel/cputable.c
>>> b/arch/powerpc/kernel/cputable.c
>>> > index 1eab54bc6ee9..84814c8d1bcb 100644
>>> > --- a/arch/powerpc/kernel/cputable.c
>>> > +++ b/arch/powerpc/kernel/cputable.c
>>> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
>>> >         struct cpu_spec *t = &the_cpu_spec;
>>> >
>>> >         t = PTRRELOC(t);
>>> > -       *t = *s;
>>> > +       memcpy(t, s, sizeof(*t));
>>>
>>> Hi Christophe,
>>>
>>> I understand why you are doing this, but this looks a bit fragile and
>>> non-scalable. This may not work with the next version of the compiler,
>>> a compiler version different from yours, clang, etc.
>>
>> My feeling is that this change makes it more solid.
>>
>> My understanding is that when you do *t = *s, the compiler can use
>> whatever way it wants to do the copy.
>> When you do memcpy(), you ensure it will do it that way and not another
>> way, don't you?
> 
> It makes this single line more deterministic wrt code-gen (though,
> strictly speaking, the compiler can turn memcpy back into inline
> instructions, since it knows memcpy's semantics anyway).
> But the problem I meant is that the set of places that are subject to
> this problem is not deterministic. So if we go with this solution,
> after this change it's in the status "works on your machine", and we
> either need to commit to not using struct copies and zeroing
> throughout kernel code, or potentially have a long tail of other
> similar cases; and since they can be triggered by another compiler
> version, we may need to backport these changes to previous releases
> too. Whereas if we went with compiler flags, it would prevent the
> problem in all current and future places and with other past/future
> versions of compilers.
> 

The patch will work for any compiler. The point of this patch is to make
memcpy() visible to the preprocessor which will replace it with __memcpy().

After the preprocessor's work, the compiler will see just a __memcpy() call here.
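
That is, as a sketch of the preprocessed view:

/* source, with CONFIG_KASAN && !__SANITIZE_ADDRESS__: */
memcpy(t, s, sizeof(*t));
/* what the compiler actually sees after preprocessing: */
__memcpy(t, s, sizeof(*t));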


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-15 17:07         ` Andrey Ryabinin
@ 2019-01-15 17:10           ` Dmitry Vyukov
  2019-01-15 17:25             ` Christophe Leroy
  0 siblings, 1 reply; 12+ messages in thread
From: Dmitry Vyukov @ 2019-01-15 17:10 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Nicholas Piggin, Linux-MM, Paul Mackerras,
	Aneesh Kumar K.V, Alexander Potapenko, kasan-dev, linuxppc-dev

On Tue, Jan 15, 2019 at 6:06 PM Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>
>
>
> On 1/15/19 2:14 PM, Dmitry Vyukov wrote:
> > On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
> > <christophe.leroy@c-s.fr> wrote:
> >> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
> >>> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
> >>> <christophe.leroy@c-s.fr> wrote:
> >>> >
> >>> > In kernel/cputable.c, explicitly use memcpy() in order
> >>> > to allow GCC to replace it with __memcpy() when KASAN is
> >>> > selected.
> >>> >
> >>> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
> >>> > enabled"), memset() can be used before activation of the cache,
> >>> > so no need to use memset_io() for zeroing the BSS.
> >>> >
> >>> > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> >>> > ---
> >>> >  arch/powerpc/kernel/cputable.c | 4 ++--
> >>> >  arch/powerpc/kernel/setup_32.c | 6 ++----
> >>> >  2 files changed, 4 insertions(+), 6 deletions(-)
> >>> >
> >>> > diff --git a/arch/powerpc/kernel/cputable.c
> >>> b/arch/powerpc/kernel/cputable.c
> >>> > index 1eab54bc6ee9..84814c8d1bcb 100644
> >>> > --- a/arch/powerpc/kernel/cputable.c
> >>> > +++ b/arch/powerpc/kernel/cputable.c
> >>> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
> >>> >         struct cpu_spec *t = &the_cpu_spec;
> >>> >
> >>> >         t = PTRRELOC(t);
> >>> > -       *t = *s;
> >>> > +       memcpy(t, s, sizeof(*t));
> >>>
> >>> Hi Christophe,
> >>>
> >>> I understand why you are doing this, but this looks a bit fragile and
> >>> non-scalable. This may not work with the next version of the compiler,
> >>> a compiler version different from yours, clang, etc.
> >>
> >> My feeling is that this change makes it more solid.
> >>
> >> My understanding is that when you do *t = *s, the compiler can use
> >> whatever way it wants to do the copy.
> >> When you do memcpy(), you ensure it will do it that way and not another
> >> way, don't you?
> >
> > It makes this single line more deterministic wrt code-gen (though,
> > strictly speaking, the compiler can turn memcpy back into inline
> > instructions, since it knows memcpy's semantics anyway).
> > But the problem I meant is that the set of places that are subject to
> > this problem is not deterministic. So if we go with this solution,
> > after this change it's in the status "works on your machine", and we
> > either need to commit to not using struct copies and zeroing
> > throughout kernel code, or potentially have a long tail of other
> > similar cases; and since they can be triggered by another compiler
> > version, we may need to backport these changes to previous releases
> > too. Whereas if we went with compiler flags, it would prevent the
> > problem in all current and future places and with other past/future
> > versions of compilers.
> >
>
> The patch will work for any compiler. The point of this patch is to make
> memcpy() visible to the preprocessor which will replace it with __memcpy().

For this single line, yes. But it does not mean that KASAN will work.

> After the preprocessor's work, the compiler will see just a __memcpy() call here.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 3/3] powerpc/32: Add KASAN support
  2019-01-12 11:16 ` [PATCH v3 3/3] powerpc/32: Add KASAN support Christophe Leroy
@ 2019-01-15 17:23   ` Andrey Ryabinin
  0 siblings, 0 replies; 12+ messages in thread
From: Andrey Ryabinin @ 2019-01-15 17:23 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Nicholas Piggin, Aneesh Kumar K.V,
	Alexander Potapenko, Dmitry Vyukov
  Cc: linux-mm, linuxppc-dev, linux-kernel, kasan-dev



On 1/12/19 2:16 PM, Christophe Leroy wrote:

> +KASAN_SANITIZE_early_32.o := n
> +KASAN_SANITIZE_cputable.o := n
> +KASAN_SANITIZE_prom_init.o := n
> +

Usually it's also a good idea to disable branch profiling - define DISABLE_BRANCH_PROFILING
either at the top of these files or via the Makefile. Branch profiling redefines the if()
statement and calls the instrumented ftrace_likely_update() in every if().
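
For instance (a sketch; the define just has to come before any include
that pulls in the branch profiling macros, or -DDISABLE_BRANCH_PROFILING
can be added to the file's cflags in the Makefile):

/* at the top of e.g. kasan_init.c, before any #include */
#define DISABLE_BRANCH_PROFILING

#include <linux/kasan.h>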



> diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan_init.c
> new file mode 100644
> index 000000000000..3edc9c2d2f3e

> +void __init kasan_init(void)
> +{
> +	struct memblock_region *reg;
> +
> +	for_each_memblock(memory, reg)
> +		kasan_init_region(reg);
> +
> +	pr_info("KASAN init done\n");

Without "init_task.kasan_depth = 0;" kasan will not repot bugs.

There is test_kasan module. Make sure that it produce reports.
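
I.e. something like this at the end of kasan_init() (a sketch, mirroring
what other arches do):

void __init kasan_init(void)
{
	struct memblock_region *reg;

	for_each_memblock(memory, reg)
		kasan_init_region(reg);

	/* At this point kasan is fully initialized. Enable error messages */
	init_task.kasan_depth = 0;

	pr_info("KASAN init done\n");
}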

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-15 17:10           ` Dmitry Vyukov
@ 2019-01-15 17:25             ` Christophe Leroy
  2019-01-16 10:03               ` Dmitry Vyukov
  0 siblings, 1 reply; 12+ messages in thread
From: Christophe Leroy @ 2019-01-15 17:25 UTC (permalink / raw)
  To: Dmitry Vyukov, Andrey Ryabinin
  Cc: LKML, Nicholas Piggin, Linux-MM, Alexander Potapenko,
	Aneesh Kumar K.V, Paul Mackerras, kasan-dev, linuxppc-dev



On 15/01/2019 at 18:10, Dmitry Vyukov wrote:
> On Tue, Jan 15, 2019 at 6:06 PM Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>
>>
>>
>> On 1/15/19 2:14 PM, Dmitry Vyukov wrote:
>>> On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
>>> <christophe.leroy@c-s.fr> wrote:
>>>> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
>>>>> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
>>>>> <christophe.leroy@c-s.fr> wrote:
>>>>> >
>>>>> > In kernel/cputable.c, explicitly use memcpy() in order
>>>>> > to allow GCC to replace it with __memcpy() when KASAN is
>>>>> > selected.
>>>>> >
>>>>> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
>>>>> > enabled"), memset() can be used before activation of the cache,
>>>>> > so no need to use memset_io() for zeroing the BSS.
>>>>> >
>>>>> > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
>>>>> > ---
>>>>> >  arch/powerpc/kernel/cputable.c | 4 ++--
>>>>> >  arch/powerpc/kernel/setup_32.c | 6 ++----
>>>>> >  2 files changed, 4 insertions(+), 6 deletions(-)
>>>>> >
>>>>> > diff --git a/arch/powerpc/kernel/cputable.c
>>>>> b/arch/powerpc/kernel/cputable.c
>>>>> > index 1eab54bc6ee9..84814c8d1bcb 100644
>>>>> > --- a/arch/powerpc/kernel/cputable.c
>>>>> > +++ b/arch/powerpc/kernel/cputable.c
>>>>> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
>>>>> >         struct cpu_spec *t = &the_cpu_spec;
>>>>> >
>>>>> >         t = PTRRELOC(t);
>>>>> > -       *t = *s;
>>>>> > +       memcpy(t, s, sizeof(*t));
>>>>>
>>>>> Hi Christophe,
>>>>>
>>>>> I understand why you are doing this, but this looks a bit fragile and
>>>>> non-scalable. This may not work with the next version of the compiler,
>>>>> a compiler version different from yours, clang, etc.
>>>>
>>>> My feeling is that this change makes it more solid.
>>>>
>>>> My understanding is that when you do *t = *s, the compiler can use
>>>> whatever way it wants to do the copy.
>>>> When you do memcpy(), you ensure it will do it that way and not another
>>>> way, don't you?
>>>
>>> It makes this single line more deterministic wrt code-gen (though,
>>> strictly speaking, the compiler can turn memcpy back into inline
>>> instructions, since it knows memcpy's semantics anyway).
>>> But the problem I meant is that the set of places that are subject to
>>> this problem is not deterministic. So if we go with this solution,
>>> after this change it's in the status "works on your machine", and we
>>> either need to commit to not using struct copies and zeroing
>>> throughout kernel code, or potentially have a long tail of other
>>> similar cases; and since they can be triggered by another compiler
>>> version, we may need to backport these changes to previous releases
>>> too. Whereas if we went with compiler flags, it would prevent the
>>> problem in all current and future places and with other past/future
>>> versions of compilers.
>>>
>>
>> The patch will work for any compiler. The point of this patch is to make
>> memcpy() visible to the preprocessor which will replace it with __memcpy().
> 
> For this single line, yes. But it does not mean that KASAN will work.
> 
>> After the preprocessor's work, the compiler will see just a __memcpy() call here.

This problem can affect any arch I believe. Maybe the 'solution' would
be to run a generic script, similar to
arch/powerpc/kernel/prom_init_check.sh, to check that objects compiled
with KASAN_SANITIZE_object.o := n don't include any reference to
memcpy(), memset() or memmove()?

Christophe

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
  2019-01-15 17:25             ` Christophe Leroy
@ 2019-01-16 10:03               ` Dmitry Vyukov
  0 siblings, 0 replies; 12+ messages in thread
From: Dmitry Vyukov @ 2019-01-16 10:03 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Aneesh Kumar K.V, LKML, Nicholas Piggin, Linux-MM,
	Paul Mackerras, kasan-dev, Andrey Ryabinin, Alexander Potapenko,
	linuxppc-dev

On Tue, Jan 15, 2019 at 6:25 PM Christophe Leroy
<christophe.leroy@c-s.fr> wrote:
>
> On 15/01/2019 at 18:10, Dmitry Vyukov wrote:
> > On Tue, Jan 15, 2019 at 6:06 PM Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
> >>
> >> On 1/15/19 2:14 PM, Dmitry Vyukov wrote:
> >>> On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
> >>> <christophe.leroy@c-s.fr> wrote:
> >>>> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
> >>>>> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
> >>>>> <christophe.leroy@c-s.fr> wrote:
> >>>>> >
> >>>>> > In kernel/cputable.c, explicitly use memcpy() in order
> >>>>> > to allow GCC to replace it with __memcpy() when KASAN is
> >>>>> > selected.
> >>>>> >
> >>>>> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
> >>>>> > enabled"), memset() can be used before activation of the cache,
> >>>>> > so no need to use memset_io() for zeroing the BSS.
> >>>>> >
> >>>>> > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> >>>>> > ---
> >>>>> >  arch/powerpc/kernel/cputable.c | 4 ++--
> >>>>> >  arch/powerpc/kernel/setup_32.c | 6 ++----
> >>>>> >  2 files changed, 4 insertions(+), 6 deletions(-)
> >>>>> >
> >>>>> > diff --git a/arch/powerpc/kernel/cputable.c
> >>>>> b/arch/powerpc/kernel/cputable.c
> >>>>> > index 1eab54bc6ee9..84814c8d1bcb 100644
> >>>>> > --- a/arch/powerpc/kernel/cputable.c
> >>>>> > +++ b/arch/powerpc/kernel/cputable.c
> >>>>> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
> >>>>> >         struct cpu_spec *t = &the_cpu_spec;
> >>>>> >
> >>>>> >         t = PTRRELOC(t);
> >>>>> > -       *t = *s;
> >>>>> > +       memcpy(t, s, sizeof(*t));
> >>>>>
> >>>>> Hi Christophe,
> >>>>>
> >>>>> I understand why you are doing this, but this looks a bit fragile and
> >>>>> non-scalable. This may not work with the next version of the compiler,
> >>>>> a compiler version different from yours, clang, etc.
> >>>>
> >>>> My feeling is that this change makes it more solid.
> >>>>
> >>>> My understanding is that when you do *t = *s, the compiler can use
> >>>> whatever way it wants to do the copy.
> >>>> When you do memcpy(), you ensure it will do it that way and not another
> >>>> way, don't you?
> >>>
> >>> It makes this single line more deterministic wrt code-gen (though,
> >>> strictly speaking, the compiler can turn memcpy back into inline
> >>> instructions, since it knows memcpy's semantics anyway).
> >>> But the problem I meant is that the set of places that are subject to
> >>> this problem is not deterministic. So if we go with this solution,
> >>> after this change it's in the status "works on your machine", and we
> >>> either need to commit to not using struct copies and zeroing
> >>> throughout kernel code, or potentially have a long tail of other
> >>> similar cases; and since they can be triggered by another compiler
> >>> version, we may need to backport these changes to previous releases
> >>> too. Whereas if we went with compiler flags, it would prevent the
> >>> problem in all current and future places and with other past/future
> >>> versions of compilers.
> >>>
> >>
> >> The patch will work for any compiler. The point of this patch is to make
> >> memcpy() visible to the preprocessor which will replace it with __memcpy().
> >
> > For this single line, yes. But it does not mean that KASAN will work.
> >
> >> After the preprocessor's work, the compiler will see just a __memcpy() call here.
>
> This problem can affect any arch I believe. Maybe the 'solution' would
> be to run a generic script, similar to
> arch/powerpc/kernel/prom_init_check.sh, to check that objects compiled
> with KASAN_SANITIZE_object.o := n don't include any reference to
> memcpy(), memset() or memmove()?


We do this when building the user-space sanitizers runtime. There, all
code always runs with the sanitizer enabled, but at the same time must
not be instrumented. So we committed to changing all possible
memcpy/memset injection points and have a script that checks that we
indeed have no such calls on any path. The problem there is a bit
simpler, as we don't have a gazillion combinations of configs and the
runtime is usually self-hosted (as it is bundled with the compiler), so
we know what compiler is used to build it. And all of that is checked
on CI. I don't know how much work it is to do the same for the kernel,
though. Adding -ffreestanding, if it worked, looked like a cheap option
to achieve the same.

Another option is to insert checks into KASAN's memcpy/memset that at
least some early init has completed. If early init hasn't finished
yet, then they could skip all additional work besides just doing
memcpy/memset. We can't afford this for memory access instrumentation
for performance reasons, but it should be bearable for memcpy/memset.
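
A sketch of that second option, based on KASAN's memcpy() wrapper in
mm/kasan (kasan_arch_is_ready() is hypothetical here, e.g. a flag the
arch would set once the shadow is usable):

void *memcpy(void *dest, const void *src, size_t len)
{
	if (kasan_arch_is_ready()) {
		check_memory_region((unsigned long)src, len, false, _RET_IP_);
		check_memory_region((unsigned long)dest, len, true, _RET_IP_);
	}
	return __memcpy(dest, src, len);
}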

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, back to index

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-12 11:16 [PATCH v3 0/3] KASAN for powerpc/32 Christophe Leroy
2019-01-12 11:16 ` [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32 Christophe Leroy
2019-01-14  9:34   ` Dmitry Vyukov
2019-01-15  7:27     ` Christophe Leroy
2019-01-15 11:14       ` Dmitry Vyukov
2019-01-15 17:07         ` Andrey Ryabinin
2019-01-15 17:10           ` Dmitry Vyukov
2019-01-15 17:25             ` Christophe Leroy
2019-01-16 10:03               ` Dmitry Vyukov
2019-01-12 11:16 ` [PATCH v3 2/3] powerpc/32: Move early_init() in a separate file Christophe Leroy
2019-01-12 11:16 ` [PATCH v3 3/3] powerpc/32: Add KASAN support Christophe Leroy
2019-01-15 17:23   ` Andrey Ryabinin

LinuxPPC-Dev Archive on lore.kernel.org

Archives are clonable: git clone --mirror https://lore.kernel.org/linuxppc-dev/0 linuxppc-dev/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 linuxppc-dev linuxppc-dev/ https://lore.kernel.org/linuxppc-dev \
		linuxppc-dev@lists.ozlabs.org linuxppc-dev@ozlabs.org linuxppc-dev@archiver.kernel.org
	public-inbox-index linuxppc-dev


Newsgroup available over NNTP:
	nntp://nntp.lore.kernel.org/org.ozlabs.lists.linuxppc-dev


AGPL code for this site: git clone https://public-inbox.org/ public-inbox