* [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix
@ 2019-05-23  5:21 Daniel Axtens
  2019-05-23  5:21 ` [RFC PATCH 1/7] kasan: do not open-code addr_has_shadow Daniel Axtens
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

Building on the work of Christophe, Aneesh and Balbir, I've ported
KASAN to Book3S radix.

It builds on top of Christophe's work on 32-bit, and includes my work for
64-bit Book3E (3S doesn't really depend on 3E, but it was handy to
have around when developing and debugging).

This provides full inline instrumentation on radix, but does require
that you be able to specify the amount of memory on the system at
compile time. More details in patch 7.

Regards,
Daniel

Daniel Axtens (7):
  kasan: do not open-code addr_has_shadow
  kasan: allow architectures to manage the memory-to-shadow mapping
  kasan: allow architectures to provide an outline readiness check
  powerpc: KASAN for 64bit Book3E
  kasan: allow arches to provide their own early shadow setup
  kasan: allow arches to hook into global registration
  powerpc: Book3S 64-bit "heavyweight" KASAN support

 arch/powerpc/Kconfig                         |   2 +
 arch/powerpc/Kconfig.debug                   |  17 ++-
 arch/powerpc/Makefile                        |   7 ++
 arch/powerpc/include/asm/kasan.h             | 116 +++++++++++++++++++
 arch/powerpc/kernel/prom.c                   |  40 +++++++
 arch/powerpc/mm/kasan/Makefile               |   2 +
 arch/powerpc/mm/kasan/kasan_init_book3e_64.c |  50 ++++++++
 arch/powerpc/mm/kasan/kasan_init_book3s_64.c |  67 +++++++++++
 arch/powerpc/mm/nohash/Makefile              |   5 +
 include/linux/kasan.h                        |  13 +++
 mm/kasan/generic.c                           |   9 +-
 mm/kasan/generic_report.c                    |   2 +-
 mm/kasan/init.c                              |  10 ++
 mm/kasan/kasan.h                             |   6 +-
 mm/kasan/report.c                            |   6 +-
 mm/kasan/tags.c                              |   3 +-
 16 files changed, 345 insertions(+), 10 deletions(-)
 create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
 create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3s_64.c

-- 
2.19.1



* [RFC PATCH 1/7] kasan: do not open-code addr_has_shadow
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
@ 2019-05-23  5:21 ` Daniel Axtens
  2019-05-23  5:21 ` [RFC PATCH 2/7] kasan: allow architectures to manage the memory-to-shadow mapping Daniel Axtens
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

We have a couple of places checking for the existence of a shadow
mapping for an address by open-coding the inverse of the check in
addr_has_shadow.

Replace the open-coded versions with the helper. This will be
needed in future to allow architectures to override the layout
of the shadow mapping.

Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 mm/kasan/generic.c | 3 +--
 mm/kasan/tags.c    | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 504c79363a34..9e5c989dab8c 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -173,8 +173,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	if (unlikely(size == 0))
 		return;
 
-	if (unlikely((void *)addr <
-		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+	if (unlikely(!addr_has_shadow((void *)addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 63fca3172659..87ebee0a6aea 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -109,8 +109,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
 		return;
 
 	untagged_addr = reset_tag((const void *)addr);
-	if (unlikely(untagged_addr <
-			kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+	if (unlikely(!addr_has_shadow(untagged_addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
-- 
2.19.1



* [RFC PATCH 2/7] kasan: allow architectures to manage the memory-to-shadow mapping
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
  2019-05-23  5:21 ` [RFC PATCH 1/7] kasan: do not open-code addr_has_shadow Daniel Axtens
@ 2019-05-23  5:21 ` Daniel Axtens
  2019-05-23  5:21 ` [RFC PATCH 3/7] kasan: allow architectures to provide an outline readiness check Daniel Axtens
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

Currently, shadow addresses are always (addr >> shift) + offset.
However, for powerpc, the virtual address space is fragmented in
ways that make this simple scheme impractical.

Allow architectures to override:
 - kasan_shadow_to_mem
 - kasan_mem_to_shadow
 - addr_has_shadow

Rename addr_has_shadow to kasan_addr_has_shadow: if it is
overridden, the name becomes visible in more places, increasing
the risk of collisions.

If architectures do not #define their own versions, the generic
code will continue to run as usual.
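
To make the default concrete (this is just the existing generic code,
with the usual scale shift of 3, so one shadow byte covers eight bytes
of memory):

	shadow = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;
	mem    = (void *)(((unsigned long)shadow - KASAN_SHADOW_OFFSET)
			<< KASAN_SHADOW_SCALE_SHIFT);

An arch that #defines kasan_mem_to_shadow/kasan_shadow_to_mem replaces
exactly this arithmetic; the Book3E patch later in the series is an
example.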

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 include/linux/kasan.h     | 2 ++
 mm/kasan/generic.c        | 2 +-
 mm/kasan/generic_report.c | 2 +-
 mm/kasan/kasan.h          | 6 +++++-
 mm/kasan/report.c         | 6 +++---
 mm/kasan/tags.c           | 2 +-
 6 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b40ea104dd36..f6261840f94c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -23,11 +23,13 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 int kasan_populate_early_shadow(const void *shadow_start,
 				const void *shadow_end);
 
+#ifndef kasan_mem_to_shadow
 static inline void *kasan_mem_to_shadow(const void *addr)
 {
 	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
 		+ KASAN_SHADOW_OFFSET;
 }
+#endif
 
 /* Enable reporting bugs after kasan_disable_current() */
 extern void kasan_enable_current(void);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 9e5c989dab8c..a5b28e3ceacb 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -173,7 +173,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	if (unlikely(size == 0))
 		return;
 
-	if (unlikely(!addr_has_shadow((void *)addr))) {
+	if (unlikely(!kasan_addr_has_shadow((void *)addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c
index 36c645939bc9..6caafd61fc3a 100644
--- a/mm/kasan/generic_report.c
+++ b/mm/kasan/generic_report.c
@@ -107,7 +107,7 @@ static const char *get_wild_bug_type(struct kasan_access_info *info)
 
 const char *get_bug_type(struct kasan_access_info *info)
 {
-	if (addr_has_shadow(info->access_addr))
+	if (kasan_addr_has_shadow(info->access_addr))
 		return get_shadow_bug_type(info);
 	return get_wild_bug_type(info);
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 3ce956efa0cb..8fcbe4027929 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -110,16 +110,20 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
 struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 					const void *object);
 
+#ifndef kasan_shadow_to_mem
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
 	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
 		<< KASAN_SHADOW_SCALE_SHIFT);
 }
+#endif
 
-static inline bool addr_has_shadow(const void *addr)
+#ifndef kasan_addr_has_shadow
+static inline bool kasan_addr_has_shadow(const void *addr)
 {
 	return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
 }
+#endif
 
 void kasan_poison_shadow(const void *address, size_t size, u8 value);
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 03a443579386..a713b64c232b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -298,7 +298,7 @@ void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned lon
 	untagged_addr = reset_tag(tagged_addr);
 
 	info.access_addr = tagged_addr;
-	if (addr_has_shadow(untagged_addr))
+	if (kasan_addr_has_shadow(untagged_addr))
 		info.first_bad_addr = find_first_bad_addr(tagged_addr, size);
 	else
 		info.first_bad_addr = untagged_addr;
@@ -309,11 +309,11 @@ void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned lon
 	start_report(&flags);
 
 	print_error_description(&info);
-	if (addr_has_shadow(untagged_addr))
+	if (kasan_addr_has_shadow(untagged_addr))
 		print_tags(get_tag(tagged_addr), info.first_bad_addr);
 	pr_err("\n");
 
-	if (addr_has_shadow(untagged_addr)) {
+	if (kasan_addr_has_shadow(untagged_addr)) {
 		print_address_description(untagged_addr);
 		pr_err("\n");
 		print_shadow_for_address(info.first_bad_addr);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 87ebee0a6aea..661c23dd5340 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -109,7 +109,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
 		return;
 
 	untagged_addr = reset_tag((const void *)addr);
-	if (unlikely(!addr_has_shadow(untagged_addr))) {
+	if (unlikely(!kasan_addr_has_shadow(untagged_addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
-- 
2.19.1



* [RFC PATCH 3/7] kasan: allow architectures to provide an outline readiness check
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
  2019-05-23  5:21 ` [RFC PATCH 1/7] kasan: do not open-code addr_has_shadow Daniel Axtens
  2019-05-23  5:21 ` [RFC PATCH 2/7] kasan: allow architectures to manage the memory-to-shadow mapping Daniel Axtens
@ 2019-05-23  5:21 ` Daniel Axtens
  2019-05-23  6:14   ` Christophe Leroy
  2019-05-23  5:21 ` [RFC PATCH 4/7] powerpc: KASAN for 64bit Book3E Daniel Axtens
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev, Daniel Axtens

In powerpc (as I understand it), we spend a lot of time in boot
running in real mode before MMU paging is initialised. During
this time we call a lot of generic code, including printk(). If
we try to access the shadow region during this time, things fail.

My attempts to move early init before the first printk have not
been successful. (Both previous RFCs for ppc64 - by 2 different
people - have needed this trick too!)

So, allow architectures to define a kasan_arch_is_ready()
hook that bails out of check_memory_region_inline() unless the
arch has done all of the init.
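
As a sketch of what such a hook can look like (this mirrors what the
Book3E patch later in this series does), an arch would provide
something like:

	static inline bool kasan_arch_is_ready_book3e(void)
	{
		return static_branch_likely(&powerpc_kasan_enabled_key);
	}
	#define kasan_arch_is_ready kasan_arch_is_ready_book3e

with the static key flipped at the end of the arch's kasan_init().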

Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
Originally-by: Balbir Singh <bsingharora@gmail.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
[check_return_arch_not_ready() ==> static inline kasan_arch_is_ready()]
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 include/linux/kasan.h | 4 ++++
 mm/kasan/generic.c    | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f6261840f94c..a630d53f1a36 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -14,6 +14,10 @@ struct task_struct;
 #include <asm/kasan.h>
 #include <asm/pgtable.h>
 
+#ifndef kasan_arch_is_ready
+static inline bool kasan_arch_is_ready(void)	{ return true; }
+#endif
+
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
 extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index a5b28e3ceacb..0336f31bbae3 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -170,6 +170,9 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 						size_t size, bool write,
 						unsigned long ret_ip)
 {
+	if (!kasan_arch_is_ready())
+		return;
+
 	if (unlikely(size == 0))
 		return;
 
-- 
2.19.1



* [RFC PATCH 4/7] powerpc: KASAN for 64bit Book3E
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
                   ` (2 preceding siblings ...)
  2019-05-23  5:21 ` [RFC PATCH 3/7] kasan: allow architectures to provide an outline readiness check Daniel Axtens
@ 2019-05-23  5:21 ` Daniel Axtens
  2019-05-23  6:15   ` Christophe Leroy
  2019-05-23  5:21 ` [RFC PATCH 5/7] kasan: allow arches to provide their own early shadow setup Daniel Axtens
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev, Daniel Axtens

Wire up KASAN. Only outline instrumentation is supported.

The KASAN shadow area is mapped into vmemmap space:
0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
To do this we require that CONFIG_SPARSEMEM_VMEMMAP be disabled. (This is the default
in the kernel config that QorIQ provides for the machine in their
SDK anyway - they use flat memory.)

Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
ioremap areas (also in 0x800...) are all mapped to the zero page. As
with the Book3S hash series, this requires overriding the memory <->
shadow mapping.
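
(As a worked example of the layout: with a scale shift of 3 and
KASAN_SHADOW_OFFSET = 0x6800040000000000, the start of the linear map
shadows to

	(0xc000000000000000 >> 3) + 0x6800040000000000
		= 0x8000040000000000

which is the start of the shadow area quoted above.)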

Also, as with both previous 64-bit series, early instrumentation is not
supported. Supporting it would allow us to drop the kasan_arch_is_ready()
hook in the KASAN core, but it's tricky to get the shadow set up early
enough: we need it before the first call to instrumented code like
printk(). Perhaps in the future.

Only KASAN_MINIMAL works.

Tested on e6500. KVM, kexec and xmon have not been tested.

The test_kasan module fires warnings as expected, except for the
following tests:

 - Expected/by design:
kasan test: memcg_accounted_kmem_cache allocate memcg accounted object

 - Due to only supporting KASAN_MINIMAL:
kasan test: kasan_stack_oob out-of-bounds on stack
kasan test: kasan_global_oob out-of-bounds global variable
kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
kasan test: use_after_scope_test use-after-scope on int
kasan test: use_after_scope_test use-after-scope on array

Thanks to those who have done the heavy lifting over the past several
years:
 - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
 - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
 - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/

Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
[- Removed EXPORT_SYMBOL of the static key
 - Fixed most checkpatch problems
 - Replaced kasan_zero_page[] by kasan_early_shadow_page[]
 - Reduced casting mess by using intermediate locals
 - Fixed build failure on pmac32_defconfig]
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/Kconfig.debug                   |  2 +-
 arch/powerpc/include/asm/kasan.h             | 71 ++++++++++++++++++++
 arch/powerpc/mm/kasan/Makefile               |  1 +
 arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 50 ++++++++++++++
 arch/powerpc/mm/nohash/Makefile              |  5 ++
 6 files changed, 129 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6a66a2da5b1a..4e266b019dd7 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -170,6 +170,7 @@ config PPC
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32
+	select HAVE_ARCH_KASAN			if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index c59920920ddc..23a37facc854 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -396,5 +396,5 @@ config PPC_FAST_ENDIAN_SWITCH
 
 config KASAN_SHADOW_OFFSET
 	hex
-	depends on KASAN
+	depends on KASAN && PPC32
 	default 0xe0000000
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index 296e51c2f066..ae410f0e060d 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -21,12 +21,15 @@
 #define KASAN_SHADOW_START	(KASAN_SHADOW_OFFSET + \
 				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
 
+#ifdef CONFIG_PPC32
 #define KASAN_SHADOW_OFFSET	ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
 
 #define KASAN_SHADOW_END	0UL
 
 #define KASAN_SHADOW_SIZE	(KASAN_SHADOW_END - KASAN_SHADOW_START)
 
+#endif /* CONFIG_PPC32 */
+
 #ifdef CONFIG_KASAN
 void kasan_early_init(void);
 void kasan_mmu_init(void);
@@ -36,5 +39,73 @@ static inline void kasan_init(void) { }
 static inline void kasan_mmu_init(void) { }
 #endif
 
+#ifdef CONFIG_PPC_BOOK3E_64
+#include <asm/pgtable.h>
+#include <linux/jump_label.h>
+
+/*
+ * We don't put this in Kconfig as we only support KASAN_MINIMAL, and
+ * that will be disabled if the symbol is available in Kconfig
+ */
+#define KASAN_SHADOW_OFFSET	ASM_CONST(0x6800040000000000)
+
+#define KASAN_SHADOW_SIZE	(KERN_VIRT_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+
+extern struct static_key_false powerpc_kasan_enabled_key;
+extern unsigned char kasan_early_shadow_page[];
+
+static inline bool kasan_arch_is_ready_book3e(void)
+{
+	if (static_branch_likely(&powerpc_kasan_enabled_key))
+		return true;
+	return false;
+}
+#define kasan_arch_is_ready kasan_arch_is_ready_book3e
+
+static inline void *kasan_mem_to_shadow_book3e(const void *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
+
+	if (addr >= KERN_VIRT_START && addr < KERN_VIRT_START + KERN_VIRT_SIZE)
+		return kasan_early_shadow_page;
+
+	return (void *)(addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
+}
+#define kasan_mem_to_shadow kasan_mem_to_shadow_book3e
+
+static inline void *kasan_shadow_to_mem_book3e(const void *shadow_addr)
+{
+	/*
+	 * We map the entire non-linear virtual mapping onto the zero page so if
+	 * we are asked to map the zero page back just pick the beginning of that
+	 * area.
+	 */
+	if (shadow_addr >= (void *)kasan_early_shadow_page &&
+	    shadow_addr < (void *)(kasan_early_shadow_page + PAGE_SIZE))
+		return (void *)KERN_VIRT_START;
+
+	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET) <<
+			KASAN_SHADOW_SCALE_SHIFT);
+}
+#define kasan_shadow_to_mem kasan_shadow_to_mem_book3e
+
+static inline bool kasan_addr_has_shadow_book3e(const void *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
+
+	/*
+	 * We want to specifically assert that the addresses in the 0x8000...
+	 * region have a shadow, otherwise they are considered by the kasan
+	 * core to be wild pointers
+	 */
+	if (addr >= KERN_VIRT_START && addr < (KERN_VIRT_START + KERN_VIRT_SIZE))
+		return true;
+
+	return (ptr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
+}
+#define kasan_addr_has_shadow kasan_addr_has_shadow_book3e
+
+#endif /* CONFIG_PPC_BOOK3E_64 */
+
 #endif /* __ASSEMBLY */
 #endif
diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
index 6577897673dd..f8f164ad8ade 100644
--- a/arch/powerpc/mm/kasan/Makefile
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -3,3 +3,4 @@
 KASAN_SANITIZE := n
 
 obj-$(CONFIG_PPC32)           += kasan_init_32.o
+obj-$(CONFIG_PPC_BOOK3E_64)   += kasan_init_book3e_64.o
diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
new file mode 100644
index 000000000000..f116c211d83c
--- /dev/null
+++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/kasan.h>
+#include <linux/printk.h>
+#include <linux/memblock.h>
+#include <linux/sched/task.h>
+#include <asm/pgalloc.h>
+
+DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
+
+static void __init kasan_init_region(struct memblock_region *reg)
+{
+	void *start = __va(reg->base);
+	void *end = __va(reg->base + reg->size);
+	unsigned long k_start, k_end, k_cur;
+
+	if (start >= end)
+		return;
+
+	k_start = (unsigned long)kasan_mem_to_shadow(start);
+	k_end = (unsigned long)kasan_mem_to_shadow(end);
+
+	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
+		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+
+		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
+	}
+	flush_tlb_kernel_range(k_start, k_end);
+}
+
+void __init kasan_init(void)
+{
+	struct memblock_region *reg;
+
+	for_each_memblock(memory, reg)
+		kasan_init_region(reg);
+
+	/* map the zero page RO */
+	map_kernel_page((unsigned long)kasan_early_shadow_page,
+			__pa(kasan_early_shadow_page), PAGE_KERNEL_RO);
+
+	/* Turn on checking */
+	static_branch_inc(&powerpc_kasan_enabled_key);
+
+	/* Enable error messages */
+	init_task.kasan_depth = 0;
+	pr_info("KASAN init done (64-bit Book3E)\n");
+}
diff --git a/arch/powerpc/mm/nohash/Makefile b/arch/powerpc/mm/nohash/Makefile
index 33b6f6f29d3f..310149f217d7 100644
--- a/arch/powerpc/mm/nohash/Makefile
+++ b/arch/powerpc/mm/nohash/Makefile
@@ -16,3 +16,8 @@ endif
 # This is necessary for booting with kcov enabled on book3e machines
 KCOV_INSTRUMENT_tlb.o := n
 KCOV_INSTRUMENT_fsl_booke.o := n
+
+ifdef CONFIG_KASAN
+CFLAGS_fsl_booke_mmu.o		+= -DDISABLE_BRANCH_PROFILING
+CFLAGS_tlb.o			+= -DDISABLE_BRANCH_PROFILING
+endif
-- 
2.19.1



* [RFC PATCH 5/7] kasan: allow arches to provide their own early shadow setup
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
                   ` (3 preceding siblings ...)
  2019-05-23  5:21 ` [RFC PATCH 4/7] powerpc: KASAN for 64bit Book3E Daniel Axtens
@ 2019-05-23  5:21 ` Daniel Axtens
  2019-05-23  5:21 ` [RFC PATCH 6/7] kasan: allow arches to hook into global registration Daniel Axtens
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

powerpc supports several different MMUs. In particular, book3s
machines support both a hash-table based MMU and a radix MMU.
These MMUs support different numbers of entries per directory
level: the PTRS_PER_* macros reference variables set at boot. This
leads to compiler errors, as global arrays must have constant sizes.

Allow architectures to manage their own early shadow variables
so we can work around this on powerpc.
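
To sketch the failure mode with the array the generic code actually
declares: on book3s, PTRS_PER_PTE ultimately expands to a variable
chosen at boot (hash vs radix), so

	pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;

does not compile - the array bound is not a constant expression. The
Book3S patch later in the series instead sizes these arrays with the
fixed radix R_PTRS_PER_* constants.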

Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 include/linux/kasan.h |  2 ++
 mm/kasan/init.c       | 10 ++++++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index a630d53f1a36..dfee2b42d799 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -18,11 +18,13 @@ struct task_struct;
 static inline bool kasan_arch_is_ready(void)	{ return true; }
 #endif
 
+#ifndef ARCH_HAS_KASAN_EARLY_SHADOW
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
 extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
 extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
+#endif
 
 int kasan_populate_early_shadow(const void *shadow_start,
 				const void *shadow_end);
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ce45c491ebcd..2522382bf374 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -31,10 +31,14 @@
  *   - Latter it reused it as zero shadow to cover large ranges of memory
  *     that allowed to access, but not handled by kasan (vmalloc/vmemmap ...).
  */
+#ifndef ARCH_HAS_KASAN_EARLY_SHADOW
 unsigned char kasan_early_shadow_page[PAGE_SIZE] __page_aligned_bss;
+#endif
 
 #if CONFIG_PGTABLE_LEVELS > 4
+#ifndef ARCH_HAS_KASAN_EARLY_SHADOW
 p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D] __page_aligned_bss;
+#endif
 static inline bool kasan_p4d_table(pgd_t pgd)
 {
 	return pgd_page(pgd) == virt_to_page(lm_alias(kasan_early_shadow_p4d));
@@ -46,7 +50,9 @@ static inline bool kasan_p4d_table(pgd_t pgd)
 }
 #endif
 #if CONFIG_PGTABLE_LEVELS > 3
+#ifndef ARCH_HAS_KASAN_EARLY_SHADOW
 pud_t kasan_early_shadow_pud[PTRS_PER_PUD] __page_aligned_bss;
+#endif
 static inline bool kasan_pud_table(p4d_t p4d)
 {
 	return p4d_page(p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud));
@@ -58,7 +64,9 @@ static inline bool kasan_pud_table(p4d_t p4d)
 }
 #endif
 #if CONFIG_PGTABLE_LEVELS > 2
+#ifndef ARCH_HAS_KASAN_EARLY_SHADOW
 pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD] __page_aligned_bss;
+#endif
 static inline bool kasan_pmd_table(pud_t pud)
 {
 	return pud_page(pud) == virt_to_page(lm_alias(kasan_early_shadow_pmd));
@@ -69,7 +77,9 @@ static inline bool kasan_pmd_table(pud_t pud)
 	return false;
 }
 #endif
+#ifndef ARCH_HAS_KASAN_EARLY_SHADOW
 pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;
+#endif
 
 static inline bool kasan_pte_table(pmd_t pmd)
 {
-- 
2.19.1



* [RFC PATCH 6/7] kasan: allow arches to hook into global registration
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
                   ` (4 preceding siblings ...)
  2019-05-23  5:21 ` [RFC PATCH 5/7] kasan: allow arches to provide their own early shadow setup Daniel Axtens
@ 2019-05-23  5:21 ` Daniel Axtens
  2019-05-23  6:31   ` Christophe Leroy
  2019-05-23  5:21 ` [RFC PATCH 7/7] powerpc: Book3S 64-bit "heavyweight" KASAN support Daniel Axtens
  2019-05-23  6:10 ` [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Christophe Leroy
  7 siblings, 1 reply; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

Not all arches have a specific space carved out for modules -
some, such as powerpc, just use regular vmalloc space. Therefore,
globals in these modules cannot be backed by real shadow memory.

In order to allow arches to refuse registration of such globals, add a hook.
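
As a concrete (if arch-specific) example, the Book3S 64-bit patch
later in this series implements the hook as roughly:

	static inline bool
	kasan_arch_can_register_global_book3s(const void *addr)
	{
		return (unsigned long)addr < VMALLOC_START;
	}
	#define kasan_arch_can_register_global \
		kasan_arch_can_register_global_book3s

i.e. it refuses to register globals that live in vmalloc space, where
modules are loaded.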

Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 include/linux/kasan.h | 5 +++++
 mm/kasan/generic.c    | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index dfee2b42d799..4752749e4797 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -18,6 +18,11 @@ struct task_struct;
 static inline bool kasan_arch_is_ready(void)	{ return true; }
 #endif
 
+#ifndef kasan_arch_can_register_global
+static inline bool kasan_arch_can_register_global(const void *addr)	{ return true; }
+#endif
+
+
 #ifndef ARCH_HAS_KASAN_EARLY_SHADOW
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
 extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 0336f31bbae3..935b06f659a0 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -208,6 +208,9 @@ static void register_global(struct kasan_global *global)
 {
 	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
 
+	if (!kasan_arch_can_register_global(global->beg))
+		return;
+
 	kasan_unpoison_shadow(global->beg, global->size);
 
 	kasan_poison_shadow(global->beg + aligned_size,
-- 
2.19.1



* [RFC PATCH 7/7] powerpc: Book3S 64-bit "heavyweight" KASAN support
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
                   ` (5 preceding siblings ...)
  2019-05-23  5:21 ` [RFC PATCH 6/7] kasan: allow arches to hook into global registration Daniel Axtens
@ 2019-05-23  5:21 ` Daniel Axtens
  2019-05-23  6:10 ` [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Christophe Leroy
  7 siblings, 0 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  5:21 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

KASAN support on powerpc64 is interesting:

 - We want to be able to support inline instrumentation so as to be
   able to catch global and stack issues.

 - We run a lot of code at boot in real mode. This includes stuff like
   printk(), so it's not feasible to just disable instrumentation
   around it.

   [For those not immersed in ppc64, in real mode, the top nibble or
   byte (depending on radix/hash mmu) of the address is ignored. To
   make things work, we put the linear mapping at
   0xc000000000000000. This means that a pointer to part of the linear
   mapping will work both in real mode, where it will be interpreted
   as a physical address of the form 0x000..., and out of real mode,
   where it will go via the linear mapping.]

 - Inline instrumentation requires a fixed offset.

 - Because of our running things in real mode, the offset has to
   point to valid memory both in and out of real mode.

This makes finding somewhere to put the KASAN shadow region a bit fun.

One approach is just to give up on inline instrumentation; this is
what the 64-bit book3e code does. This way we can delay all checks
until after we get everything set up to our satisfaction. However,
we'd really like to do better.

What we can do - if we know _at compile time_ how much physical memory
we have - is to set aside the top 1/8th of the memory and use that.
This is a big hammer (hence the "heavyweight" name) and comes with 2
big consequences:

 - kernels will simply fail to boot on machines with less memory than
   specified when compiling.

 - kernels running on machines with more memory than specified when
   compiling will simply ignore the extra memory.

If you can bear this consequence, you get pretty full support for
KASAN.
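
As a worked example, assume a hypothetical build with
CONFIG_PHYS_MEM_SIZE_FOR_KASAN=2048 (2048 MB, an illustrative figure):

	shadow offset = 0xa800000000000000 + (7/8) * 2048 MB
	              = 0xa800000000000000 + 0x70000000
	              = 0xa800000070000000

	shadow(0xc000000000000000)
	              = (0xc000000000000000 >> 3) + 0xa800000070000000
	              = 0xc000000070000000

The shadow of the start of the linear map therefore lands at the
0xc000... address corresponding to physical 0x70000000 (1792 MB, 7/8
of the way through memory), and the 256 MB of shadow exactly fills the
reserved top 1/8th. Because the result is itself a linear-map address,
it resolves correctly both in and out of real mode.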

This is still pretty WIP but I wanted to get it out there sooner
rather than later. Ongoing work:

 - Currently incompatible with KUAP (top priority to fix)

 - Currently incompatible with ftrace (no idea why yet)

 - Only supports radix at the moment

 - Very minimal testing (boots a Ubuntu VM, test_kasan runs)

 - Extend 'lightweight' outline support from book3e that will work
   without requiring memory to be known at compile time.

 - It assumes physical memory is contiguous. I don't really think
   we can get around this, so we should try to ensure it.

Despite the limitations, it can still find bugs,
e.g. http://patchwork.ozlabs.org/patch/1103775/

Massive thanks to mpe, who had the idea for the initial design.

Signed-off-by: Daniel Axtens <dja@axtens.net>

---

Tested on qemu-pseries and qemu-powernv, seems to work on both
of those. Does not work on the talos that I tested on, no idea
why yet.

---
 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/Kconfig.debug                   | 15 +++++
 arch/powerpc/Makefile                        |  7 ++
 arch/powerpc/include/asm/kasan.h             | 45 +++++++++++++
 arch/powerpc/kernel/prom.c                   | 40 ++++++++++++
 arch/powerpc/mm/kasan/Makefile               |  1 +
 arch/powerpc/mm/kasan/kasan_init_book3s_64.c | 67 ++++++++++++++++++++
 7 files changed, 176 insertions(+)
 create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3s_64.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 4e266b019dd7..203cd07cf6e0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -171,6 +171,7 @@ config PPC
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32
 	select HAVE_ARCH_KASAN			if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
+	select HAVE_ARCH_KASAN			if PPC_BOOK3S_64 && !FTRACE && !PPC_KUAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 23a37facc854..c0916408668c 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -394,6 +394,21 @@ config PPC_FAST_ENDIAN_SWITCH
         help
 	  If you're unsure what this is, say N.
 
+config PHYS_MEM_SIZE_FOR_KASAN
+	int "Physical memory size for KASAN (MB)"
+	depends on KASAN && PPC_BOOK3S_64
+	help
+	  To get inline instrumentation support for KASAN on 64-bit Book3S
+	  machines, you need to specify how much physical memory your system
+	  has. A shadow offset will be calculated based on this figure, which
+	  will be compiled in to the kernel. KASAN will use this offset to
+	  access its shadow region, which is used to verify memory accesses.
+
+	  If you attempt to boot on a system with less memory than you specify
+	  here, your system will fail to boot very early in the process. If you
+	  boot on a system with more memory than you specify, the extra memory
+	  will be wasted - it will be reserved and not used.
+
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN && PPC32
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index c345b79414a9..33e7bba4c8db 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -229,6 +229,13 @@ ifdef CONFIG_476FPE_ERR46
 		-T $(srctree)/arch/powerpc/platforms/44x/ppc476_modules.lds
 endif
 
+ifdef CONFIG_KASAN
+ifdef CONFIG_PPC_BOOK3S_64
+# 0xa800000000000000 = 12105675798371893248
+KASAN_SHADOW_OFFSET = $(shell echo 7 \* 1024 \* 1024 \* $(CONFIG_PHYS_MEM_SIZE_FOR_KASAN) / 8 + 12105675798371893248 | bc)
+endif
+endif
+
 # No AltiVec or VSX instructions when building kernel
 KBUILD_CFLAGS += $(call cc-option,-mno-altivec)
 KBUILD_CFLAGS += $(call cc-option,-mno-vsx)
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index ae410f0e060d..7f75f904998b 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -107,5 +107,50 @@ static inline bool kasan_addr_has_shadow_book3e(const void *ptr)
 
 #endif /* CONFIG_PPC_BOOK3E_64 */
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/pgtable.h>
+#include <linux/jump_label.h>
+
+/*
+ * The KASAN shadow offset is such that the linear map (0xc000...) is
+ * shadowed by the last eighth of physical memory. This way, if the code
+ * uses 0xc addresses throughout, accesses work both in real mode
+ * (where the top nibble is ignored) and outside of real mode.
+ */
+#define KASAN_SHADOW_OFFSET ((u64)CONFIG_PHYS_MEM_SIZE_FOR_KASAN * \
+				1024 * 1024 * 7 / 8 + 0xa800000000000000UL)
+
+#define KASAN_SHADOW_SIZE ((u64)CONFIG_PHYS_MEM_SIZE_FOR_KASAN * \
+				1024 * 1024 * 1 / 8)
+
+static inline bool kasan_arch_can_register_global_book3s(const void *addr)
+{
+	/*
+	 * We don't define a particular area for modules, we just put them in
+	 * vmalloc space. This means that they live in an area backed entirely
+	 * by our read-only zero page. The global registration system is not
+	 * smart enough to deal with this and attempts to poison it, which
+	 * blows up. Unless we want to split out an area of vmalloc space for
+	 * modules and back it with real shadow memory, just refuse to register
+	 * globals in vmalloc space.
+	 */
+
+	return ((unsigned long)addr < VMALLOC_START);
+}
+#define kasan_arch_can_register_global kasan_arch_can_register_global_book3s
+
+#define ARCH_HAS_KASAN_EARLY_SHADOW
+extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
+
+#define R_PTRS_PER_PTE	(1 << RADIX_PTE_INDEX_SIZE)
+#define R_PTRS_PER_PMD	(1 << RADIX_PMD_INDEX_SIZE)
+#define R_PTRS_PER_PUD	(1 << RADIX_PUD_INDEX_SIZE)
+extern pte_t kasan_early_shadow_pte[R_PTRS_PER_PTE];
+extern pmd_t kasan_early_shadow_pmd[R_PTRS_PER_PMD];
+extern pud_t kasan_early_shadow_pud[R_PTRS_PER_PUD];
+extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
+
+#endif
+
 #endif /* __ASSEMBLY */
 #endif
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 4221527b082f..7ae90942d52f 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -75,6 +75,7 @@ unsigned long tce_alloc_start, tce_alloc_end;
 u64 ppc64_rma_size;
 #endif
 static phys_addr_t first_memblock_size;
+static phys_addr_t top_phys_addr;
 static int __initdata boot_cpu_count;
 
 static int __init early_parse_mem(char *p)
@@ -573,6 +574,9 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 		first_memblock_size = size;
 	}
 
+	if (base + size > top_phys_addr)
+		top_phys_addr = base + size;
+
 	/* Add the chunk to the MEMBLOCK list */
 	if (add_mem_to_memblock) {
 		if (validate_mem_limit(base, &size))
@@ -616,6 +620,8 @@ static void __init early_reserve_mem_dt(void)
 static void __init early_reserve_mem(void)
 {
 	__be64 *reserve_map;
+	phys_addr_t kasan_shadow_start __maybe_unused;
+	phys_addr_t kasan_memory_size __maybe_unused;
 
 	reserve_map = (__be64 *)(((unsigned long)initial_boot_params) +
 			fdt_off_mem_rsvmap(initial_boot_params));
@@ -654,6 +660,40 @@ static void __init early_reserve_mem(void)
 		return;
 	}
 #endif
+
+#if defined(CONFIG_KASAN) && defined(CONFIG_PPC_BOOK3S_64)
+	kasan_memory_size = (unsigned long long)CONFIG_PHYS_MEM_SIZE_FOR_KASAN
+				 * 1024 * 1024;
+	if (top_phys_addr < kasan_memory_size) {
+		/*
+		 * We are doomed. Attempts to call e.g. panic() are likely to
+		 * fail because they call out into instrumented code, which
+		 * will almost certainly access memory beyond the end of
+		 * physical memory. Hang here so that at least the NIP points
+		 * somewhere that will help you debug it if you look at it in
+		 * qemu.
+		 */
+		while (true) ;
+	} else if (top_phys_addr > kasan_memory_size) {
+		/* print a biiiig warning in hopes people notice */
+		pr_err("==================================================\n"
+		       "Physical memory exceeds compiled-in maximum!\n"
+		       "This kernel was compiled for KASAN with %u MB physical"
+		       "memory\n"
+		       "The actual physical memory detected is %llu MB\n"
+		       "Memory above the compiled limit will be ignored!\n"
+		       "==================================================\n",
+		       CONFIG_PHYS_MEM_SIZE_FOR_KASAN,
+		       top_phys_addr / (1024 * 1024));
+	}
+
+	kasan_shadow_start = _ALIGN_DOWN(kasan_memory_size * 7 / 8, PAGE_SIZE);
+	DBG("reserving %llx -> %llx for KASAN",
+	    kasan_shadow_start, top_phys_addr);
+	memblock_reserve(kasan_shadow_start,
+			 top_phys_addr - kasan_shadow_start);
+#endif
+
 }
 
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
index f8f164ad8ade..1f52f688751d 100644
--- a/arch/powerpc/mm/kasan/Makefile
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -4,3 +4,4 @@ KASAN_SANITIZE := n
 
 obj-$(CONFIG_PPC32)           += kasan_init_32.o
 obj-$(CONFIG_PPC_BOOK3E_64)   += kasan_init_book3e_64.o
+obj-$(CONFIG_PPC_BOOK3S_64)   += kasan_init_book3s_64.o
diff --git a/arch/powerpc/mm/kasan/kasan_init_book3s_64.c b/arch/powerpc/mm/kasan/kasan_init_book3s_64.c
new file mode 100644
index 000000000000..dce34120959b
--- /dev/null
+++ b/arch/powerpc/mm/kasan/kasan_init_book3s_64.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KASAN for 64-bit Book3S powerpc
+ *
+ * Copyright (C) 2019 IBM Corporation
+ * Author: Daniel Axtens <dja@axtens.net>
+ */
+
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/kasan.h>
+#include <linux/printk.h>
+#include <linux/sched/task.h>
+#include <asm/pgalloc.h>
+
+unsigned char kasan_early_shadow_page[PAGE_SIZE] __page_aligned_bss;
+
+pte_t kasan_early_shadow_pte[R_PTRS_PER_PTE] __page_aligned_bss;
+pmd_t kasan_early_shadow_pmd[R_PTRS_PER_PMD] __page_aligned_bss;
+pud_t kasan_early_shadow_pud[R_PTRS_PER_PUD] __page_aligned_bss;
+p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D] __page_aligned_bss;
+
+void __init kasan_init(void)
+{
+	int i;
+	void *k_start = kasan_mem_to_shadow((void *)RADIX_KERN_VIRT_START);
+	void *k_end = kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END);
+
+	unsigned long pte_val = __pa(kasan_early_shadow_page)
+					| pgprot_val(PAGE_KERNEL) | _PAGE_PTE;
+
+	if (!early_radix_enabled())
+		panic("KASAN requires radix!");
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		kasan_early_shadow_pte[i] = __pte(pte_val);
+
+	for (i = 0; i < PTRS_PER_PMD; i++)
+		pmd_populate_kernel(&init_mm, &kasan_early_shadow_pmd[i],
+				    kasan_early_shadow_pte);
+
+	for (i = 0; i < PTRS_PER_PUD; i++)
+		pud_populate(&init_mm, &kasan_early_shadow_pud[i],
+			     kasan_early_shadow_pmd);
+
+
+	memset(kasan_mem_to_shadow((void *)PAGE_OFFSET), KASAN_SHADOW_INIT,
+		KASAN_SHADOW_SIZE);
+
+	kasan_populate_early_shadow(k_start, k_end);
+	flush_tlb_kernel_range((unsigned long)k_start, (unsigned long)k_end);
+
+	/* mark early shadow region as RO and wipe */
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
+			&kasan_early_shadow_pte[i],
+			pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+			__pgprot(_PAGE_PTE | _PAGE_KERNEL_RO | _PAGE_BASE)),
+			0);
+	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+
+	kasan_init_tags();
+
+	/* Enable error messages */
+	init_task.kasan_depth = 0;
+	pr_info("KASAN init done (64-bit Book3S heavyweight mode)\n");
+}
-- 
2.19.1



* Re: [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix
  2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
                   ` (6 preceding siblings ...)
  2019-05-23  5:21 ` [RFC PATCH 7/7] powerpc: Book3S 64-bit "heavyweight" KASAN support Daniel Axtens
@ 2019-05-23  6:10 ` Christophe Leroy
  2019-05-23  6:18   ` Daniel Axtens
  7 siblings, 1 reply; 14+ messages in thread
From: Christophe Leroy @ 2019-05-23  6:10 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora; +Cc: linuxppc-dev, kasan-dev

Hi Daniel,

On 23/05/2019 at 07:21, Daniel Axtens wrote:
> Building on the work of Christophe, Aneesh and Balbir, I've ported
> KASAN to Book3S radix.
> 
> It builds on top of Christophe's work on 32-bit, and includes my work for
> 64-bit Book3E (3S doesn't really depend on 3E, but it was handy to
> have around when developing and debugging).
> 
> This provides full inline instrumentation on radix, but does require
> that you be able to specify the amount of memory on the system at
> compile time. More details in patch 7.
> 
> Regards,
> Daniel
> 
> Daniel Axtens (7):
>    kasan: do not open-code addr_has_shadow
>    kasan: allow architectures to manage the memory-to-shadow mapping
>    kasan: allow architectures to provide an outline readiness check
>    powerpc: KASAN for 64bit Book3E

I see you are still hacking the core part of KASAN.

Did you have a look at my RFC patch 
(https://patchwork.ozlabs.org/patch/1068260/) which demonstrate that 
full KASAN can be implemented on book3E/64 without those hacks ?

Christophe

>    kasan: allow arches to provide their own early shadow setup
>    kasan: allow arches to hook into global registration
>    powerpc: Book3S 64-bit "heavyweight" KASAN support
> 
>   arch/powerpc/Kconfig                         |   2 +
>   arch/powerpc/Kconfig.debug                   |  17 ++-
>   arch/powerpc/Makefile                        |   7 ++
>   arch/powerpc/include/asm/kasan.h             | 116 +++++++++++++++++++
>   arch/powerpc/kernel/prom.c                   |  40 +++++++
>   arch/powerpc/mm/kasan/Makefile               |   2 +
>   arch/powerpc/mm/kasan/kasan_init_book3e_64.c |  50 ++++++++
>   arch/powerpc/mm/kasan/kasan_init_book3s_64.c |  67 +++++++++++
>   arch/powerpc/mm/nohash/Makefile              |   5 +
>   include/linux/kasan.h                        |  13 +++
>   mm/kasan/generic.c                           |   9 +-
>   mm/kasan/generic_report.c                    |   2 +-
>   mm/kasan/init.c                              |  10 ++
>   mm/kasan/kasan.h                             |   6 +-
>   mm/kasan/report.c                            |   6 +-
>   mm/kasan/tags.c                              |   3 +-
>   16 files changed, 345 insertions(+), 10 deletions(-)
>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3s_64.c
> 


* Re: [RFC PATCH 3/7] kasan: allow architectures to provide an outline readiness check
  2019-05-23  5:21 ` [RFC PATCH 3/7] kasan: allow architectures to provide an outline readiness check Daniel Axtens
@ 2019-05-23  6:14   ` Christophe Leroy
  0 siblings, 0 replies; 14+ messages in thread
From: Christophe Leroy @ 2019-05-23  6:14 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev



On 23/05/2019 at 07:21, Daniel Axtens wrote:
> In powerpc (as I understand it), we spend a lot of time in boot
> running in real mode before MMU paging is initialised. During
> this time we call a lot of generic code, including printk(). If
> we try to access the shadow region during this time, things fail.
> 
> My attempts to move early init before the first printk have not
> been successful. (Both previous RFCs for ppc64 - by 2 different
> people - have needed this trick too!)

I have been able to do it successfully for BOOK3E/64, see 
https://patchwork.ozlabs.org/patch/1068260/ for the details.

Christophe

> 
> So, allow architectures to define a kasan_arch_is_ready()
> hook that bails out of check_memory_region_inline() unless the
> arch has done all of the init.
> 
> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
> Originally-by: Balbir Singh <bsingharora@gmail.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> [check_return_arch_not_ready() ==> static inline kasan_arch_is_ready()]
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>   include/linux/kasan.h | 4 ++++
>   mm/kasan/generic.c    | 3 +++
>   2 files changed, 7 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index f6261840f94c..a630d53f1a36 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -14,6 +14,10 @@ struct task_struct;
>   #include <asm/kasan.h>
>   #include <asm/pgtable.h>
>   
> +#ifndef kasan_arch_is_ready
> +static inline bool kasan_arch_is_ready(void)	{ return true; }
> +#endif
> +
>   extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>   extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>   extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index a5b28e3ceacb..0336f31bbae3 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -170,6 +170,9 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>   						size_t size, bool write,
>   						unsigned long ret_ip)
>   {
> +	if (!kasan_arch_is_ready())
> +		return;
> +
>   	if (unlikely(size == 0))
>   		return;
>   
> 


* Re: [RFC PATCH 4/7] powerpc: KASAN for 64bit Book3E
  2019-05-23  5:21 ` [RFC PATCH 4/7] powerpc: KASAN for 64bit Book3E Daniel Axtens
@ 2019-05-23  6:15   ` Christophe Leroy
  0 siblings, 0 replies; 14+ messages in thread
From: Christophe Leroy @ 2019-05-23  6:15 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev



On 23/05/2019 at 07:21, Daniel Axtens wrote:
> Wire up KASAN. Only outline instrumentation is supported.
> 
> The KASAN shadow area is mapped into vmemmap space:
> 0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
> To do this we require that CONFIG_SPARSEMEM_VMEMMAP be disabled. (This is the default
> in the kernel config that QorIQ provides for the machine in their
> SDK anyway - they use flat memory.)
> 
> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
> ioremap areas (also in 0x800...) are all mapped to the zero page. As
> with the Book3S hash series, this requires overriding the memory <->
> shadow mapping.
> 
> Also, as with both previous 64-bit series, early instrumentation is not
> supported. Supporting it would allow us to drop the kasan_arch_is_ready()
> hook in the KASAN core, but it's tricky to get the shadow set up early
> enough: we need it before the first call to instrumented code like
> printk(). Perhaps in the future.
> 
> Only KASAN_MINIMAL works.

See https://patchwork.ozlabs.org/patch/1068260/ for a full implementation

Christophe

> 
> Tested on e6500. KVM, kexec and xmon have not been tested.
> 
> The test_kasan module fires warnings as expected, except for the
> following tests:
> 
>   - Expected/by design:
> kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
> 
>   - Due to only supporting KASAN_MINIMAL:
> kasan test: kasan_stack_oob out-of-bounds on stack
> kasan test: kasan_global_oob out-of-bounds global variable
> kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
> kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
> kasan test: use_after_scope_test use-after-scope on int
> kasan test: use_after_scope_test use-after-scope on array
> 
> Thanks to those who have done the heavy lifting over the past several
> years:
>   - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
>   - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>   - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
> 
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Balbir Singh <bsingharora@gmail.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> [- Removed EXPORT_SYMBOL of the static key
>   - Fixed most checkpatch problems
>   - Replaced kasan_zero_page[] by kasan_early_shadow_page[]
>   - Reduced casting mess by using intermediate locals
>   - Fixed build failure on pmac32_defconfig]
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>   arch/powerpc/Kconfig                         |  1 +
>   arch/powerpc/Kconfig.debug                   |  2 +-
>   arch/powerpc/include/asm/kasan.h             | 71 ++++++++++++++++++++
>   arch/powerpc/mm/kasan/Makefile               |  1 +
>   arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 50 ++++++++++++++
>   arch/powerpc/mm/nohash/Makefile              |  5 ++
>   6 files changed, 129 insertions(+), 1 deletion(-)
>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 6a66a2da5b1a..4e266b019dd7 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -170,6 +170,7 @@ config PPC
>   	select HAVE_ARCH_AUDITSYSCALL
>   	select HAVE_ARCH_JUMP_LABEL
>   	select HAVE_ARCH_KASAN			if PPC32
> +	select HAVE_ARCH_KASAN			if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
>   	select HAVE_ARCH_KGDB
>   	select HAVE_ARCH_MMAP_RND_BITS
>   	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
> diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
> index c59920920ddc..23a37facc854 100644
> --- a/arch/powerpc/Kconfig.debug
> +++ b/arch/powerpc/Kconfig.debug
> @@ -396,5 +396,5 @@ config PPC_FAST_ENDIAN_SWITCH
>   
>   config KASAN_SHADOW_OFFSET
>   	hex
> -	depends on KASAN
> +	depends on KASAN && PPC32
>   	default 0xe0000000
> diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
> index 296e51c2f066..ae410f0e060d 100644
> --- a/arch/powerpc/include/asm/kasan.h
> +++ b/arch/powerpc/include/asm/kasan.h
> @@ -21,12 +21,15 @@
>   #define KASAN_SHADOW_START	(KASAN_SHADOW_OFFSET + \
>   				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
>   
> +#ifdef CONFIG_PPC32
>   #define KASAN_SHADOW_OFFSET	ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
>   
>   #define KASAN_SHADOW_END	0UL
>   
>   #define KASAN_SHADOW_SIZE	(KASAN_SHADOW_END - KASAN_SHADOW_START)
>   
> +#endif /* CONFIG_PPC32 */
> +
>   #ifdef CONFIG_KASAN
>   void kasan_early_init(void);
>   void kasan_mmu_init(void);
> @@ -36,5 +39,73 @@ static inline void kasan_init(void) { }
>   static inline void kasan_mmu_init(void) { }
>   #endif
>   
> +#ifdef CONFIG_PPC_BOOK3E_64
> +#include <asm/pgtable.h>
> +#include <linux/jump_label.h>
> +
> +/*
> + * We don't put this in Kconfig as we only support KASAN_MINIMAL, and
> + * that will be disabled if the symbol is available in Kconfig
> + */
> +#define KASAN_SHADOW_OFFSET	ASM_CONST(0x6800040000000000)
> +
> +#define KASAN_SHADOW_SIZE	(KERN_VIRT_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
> +
> +extern struct static_key_false powerpc_kasan_enabled_key;
> +extern unsigned char kasan_early_shadow_page[];
> +
> +static inline bool kasan_arch_is_ready_book3e(void)
> +{
> +	if (static_branch_likely(&powerpc_kasan_enabled_key))
> +		return true;
> +	return false;
> +}
> +#define kasan_arch_is_ready kasan_arch_is_ready_book3e
> +
> +static inline void *kasan_mem_to_shadow_book3e(const void *ptr)
> +{
> +	unsigned long addr = (unsigned long)ptr;
> +
> +	if (addr >= KERN_VIRT_START && addr < KERN_VIRT_START + KERN_VIRT_SIZE)
> +		return kasan_early_shadow_page;
> +
> +	return (void *)(addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
> +}
> +#define kasan_mem_to_shadow kasan_mem_to_shadow_book3e
> +
> +static inline void *kasan_shadow_to_mem_book3e(const void *shadow_addr)
> +{
> +	/*
> +	 * We map the entire non-linear virtual mapping onto the zero page so if
> +	 * we are asked to map the zero page back just pick the beginning of that
> +	 * area.
> +	 */
> +	if (shadow_addr >= (void *)kasan_early_shadow_page &&
> +	    shadow_addr < (void *)(kasan_early_shadow_page + PAGE_SIZE))
> +		return (void *)KERN_VIRT_START;
> +
> +	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET) <<
> +			KASAN_SHADOW_SCALE_SHIFT);
> +}
> +#define kasan_shadow_to_mem kasan_shadow_to_mem_book3e
> +
> +static inline bool kasan_addr_has_shadow_book3e(const void *ptr)
> +{
> +	unsigned long addr = (unsigned long)ptr;
> +
> +	/*
> +	 * We want to specifically assert that the addresses in the 0x8000...
> +	 * region have a shadow, otherwise they are considered by the kasan
> +	 * core to be wild pointers
> +	 */
> +	if (addr >= KERN_VIRT_START && addr < (KERN_VIRT_START + KERN_VIRT_SIZE))
> +		return true;
> +
> +	return (ptr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
> +}
> +#define kasan_addr_has_shadow kasan_addr_has_shadow_book3e
> +
> +#endif /* CONFIG_PPC_BOOK3E_64 */
> +
>   #endif /* __ASSEMBLY */
>   #endif
> diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
> index 6577897673dd..f8f164ad8ade 100644
> --- a/arch/powerpc/mm/kasan/Makefile
> +++ b/arch/powerpc/mm/kasan/Makefile
> @@ -3,3 +3,4 @@
>   KASAN_SANITIZE := n
>   
>   obj-$(CONFIG_PPC32)           += kasan_init_32.o
> +obj-$(CONFIG_PPC_BOOK3E_64)   += kasan_init_book3e_64.o
> diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> new file mode 100644
> index 000000000000..f116c211d83c
> --- /dev/null
> +++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> @@ -0,0 +1,50 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#define DISABLE_BRANCH_PROFILING
> +
> +#include <linux/kasan.h>
> +#include <linux/printk.h>
> +#include <linux/memblock.h>
> +#include <linux/sched/task.h>
> +#include <asm/pgalloc.h>
> +
> +DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
> +
> +static void __init kasan_init_region(struct memblock_region *reg)
> +{
> +	void *start = __va(reg->base);
> +	void *end = __va(reg->base + reg->size);
> +	unsigned long k_start, k_end, k_cur;
> +
> +	if (start >= end)
> +		return;
> +
> +	k_start = (unsigned long)kasan_mem_to_shadow(start);
> +	k_end = (unsigned long)kasan_mem_to_shadow(end);
> +
> +	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> +		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +
> +		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
> +	}
> +	flush_tlb_kernel_range(k_start, k_end);
> +}
> +
> +void __init kasan_init(void)
> +{
> +	struct memblock_region *reg;
> +
> +	for_each_memblock(memory, reg)
> +		kasan_init_region(reg);
> +
> +	/* map the zero page RO */
> +	map_kernel_page((unsigned long)kasan_early_shadow_page,
> +			__pa(kasan_early_shadow_page), PAGE_KERNEL_RO);
> +
> +	/* Turn on checking */
> +	static_branch_inc(&powerpc_kasan_enabled_key);
> +
> +	/* Enable error messages */
> +	init_task.kasan_depth = 0;
> +	pr_info("KASAN init done (64-bit Book3E)\n");
> +}
> diff --git a/arch/powerpc/mm/nohash/Makefile b/arch/powerpc/mm/nohash/Makefile
> index 33b6f6f29d3f..310149f217d7 100644
> --- a/arch/powerpc/mm/nohash/Makefile
> +++ b/arch/powerpc/mm/nohash/Makefile
> @@ -16,3 +16,8 @@ endif
>   # This is necessary for booting with kcov enabled on book3e machines
>   KCOV_INSTRUMENT_tlb.o := n
>   KCOV_INSTRUMENT_fsl_booke.o := n
> +
> +ifdef CONFIG_KASAN
> +CFLAGS_fsl_booke_mmu.o		+= -DDISABLE_BRANCH_PROFILING
> +CFLAGS_tlb.o			+= -DDISABLE_BRANCH_PROFILING
> +endif
> 

* Re: [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix
  2019-05-23  6:10 ` [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Christophe Leroy
@ 2019-05-23  6:18   ` Daniel Axtens
  0 siblings, 0 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  6:18 UTC (permalink / raw)
  To: Christophe Leroy, aneesh.kumar, bsingharora; +Cc: linuxppc-dev, kasan-dev

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> Hi Daniel,
>
> Le 23/05/2019 à 07:21, Daniel Axtens a écrit :
>> Building on the work of Christophe, Aneesh and Balbir, I've ported
>> KASAN to Book3S radix.
>> 
>> It builds on top Christophe's work on 32bit, and includes my work for
>> 64-bit Book3E (3S doesn't really depend on 3E, but it was handy to
>> have around when developing and debugging).
>> 
>> This provides full inline instrumentation on radix, but does require
>> that you be able to specify the amount of memory on the system at
>> compile time. More details in patch 7.
>> 
>> Regards,
>> Daniel
>> 
>> Daniel Axtens (7):
>>    kasan: do not open-code addr_has_shadow
>>    kasan: allow architectures to manage the memory-to-shadow mapping
>>    kasan: allow architectures to provide an outline readiness check
>>    powerpc: KASAN for 64bit Book3E
>
> I see you are still hacking the core part of KASAN.
>
> Did you have a look at my RFC patch
> (https://patchwork.ozlabs.org/patch/1068260/), which demonstrates that
> full KASAN can be implemented on book3E/64 without those hacks?

I haven't gone back and looked at the book3e patches as I've just been
working on the 3s stuff. I will have a look at that for the next version
for sure. I just wanted to get the 3s stuff out into the world sooner
rather than later! I don't think 3s uses those hacks so we can probably
drop them entirely.

Regards,
Daniel

>
> Christophe
>
>>    kasan: allow arches to provide their own early shadow setup
>>    kasan: allow arches to hook into global registration
>>    powerpc: Book3S 64-bit "heavyweight" KASAN support
>> 
>>   arch/powerpc/Kconfig                         |   2 +
>>   arch/powerpc/Kconfig.debug                   |  17 ++-
>>   arch/powerpc/Makefile                        |   7 ++
>>   arch/powerpc/include/asm/kasan.h             | 116 +++++++++++++++++++
>>   arch/powerpc/kernel/prom.c                   |  40 +++++++
>>   arch/powerpc/mm/kasan/Makefile               |   2 +
>>   arch/powerpc/mm/kasan/kasan_init_book3e_64.c |  50 ++++++++
>>   arch/powerpc/mm/kasan/kasan_init_book3s_64.c |  67 +++++++++++
>>   arch/powerpc/mm/nohash/Makefile              |   5 +
>>   include/linux/kasan.h                        |  13 +++
>>   mm/kasan/generic.c                           |   9 +-
>>   mm/kasan/generic_report.c                    |   2 +-
>>   mm/kasan/init.c                              |  10 ++
>>   mm/kasan/kasan.h                             |   6 +-
>>   mm/kasan/report.c                            |   6 +-
>>   mm/kasan/tags.c                              |   3 +-
>>   16 files changed, 345 insertions(+), 10 deletions(-)
>>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3s_64.c
>> 

* Re: [RFC PATCH 6/7] kasan: allow arches to hook into global registration
  2019-05-23  5:21 ` [RFC PATCH 6/7] kasan: allow arches to hook into global registration Daniel Axtens
@ 2019-05-23  6:31   ` Christophe Leroy
  2019-05-23  6:59     ` Daniel Axtens
  0 siblings, 1 reply; 14+ messages in thread
From: Christophe Leroy @ 2019-05-23  6:31 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora; +Cc: linuxppc-dev, kasan-dev



Le 23/05/2019 à 07:21, Daniel Axtens a écrit :
> Not all arches have a specific space carved out for modules -
> some, such as powerpc, just use regular vmalloc space. Therefore,
> globals in these modules cannot be backed by real shadow memory.

Can you explain in more detail the reason why?

PPC32 also uses regular vmalloc space, and it has been possible to
manage globals on it simply by implementing a module_alloc() function.

See 
https://elixir.bootlin.com/linux/v5.2-rc1/source/arch/powerpc/mm/kasan/kasan_init_32.c#L135

It is also possible to define a separate area for modules: replace the
call to vmalloc_exec() with a direct call to __vmalloc_node_range() (as
vmalloc_exec() itself does), but with bounds other than
VMALLOC_START/VMALLOC_END

See https://elixir.bootlin.com/linux/v5.2-rc1/source/mm/vmalloc.c#L2633
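
A minimal sketch of that approach, assuming hypothetical
MODULES_VADDR/MODULES_END bounds (not existing ppc64 symbols today):

	void *module_alloc(unsigned long size)
	{
		/* same as vmalloc_exec(), but confined to the module area */
		return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
					    GFP_KERNEL, PAGE_KERNEL_EXEC,
					    VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
					    __builtin_return_address(0));
	}

Shadow for that fixed range could then be populated up front, so module
globals would get real backing.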

Today in PPC64 (unlike PPC32), there is already a split between VMALLOC 
space and IOREMAP space. I'm sure it would be easy to split it once more 
for modules.

Christophe

> 
> In order to allow arches to perform this check, add a hook.
> 
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> ---
>   include/linux/kasan.h | 5 +++++
>   mm/kasan/generic.c    | 3 +++
>   2 files changed, 8 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index dfee2b42d799..4752749e4797 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -18,6 +18,11 @@ struct task_struct;
>   static inline bool kasan_arch_is_ready(void)	{ return true; }
>   #endif
>   
> +#ifndef kasan_arch_can_register_global
> +static inline bool kasan_arch_can_register_global(const void *addr)	{ return true; }
> +#endif
> +
> +
>   #ifndef ARCH_HAS_KASAN_EARLY_SHADOW
>   extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>   extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 0336f31bbae3..935b06f659a0 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -208,6 +208,9 @@ static void register_global(struct kasan_global *global)
>   {
>   	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
>   
> +	if (!kasan_arch_can_register_global(global->beg))
> +		return;
> +
>   	kasan_unpoison_shadow(global->beg, global->size);
>   
>   	kasan_poison_shadow(global->beg + aligned_size,
> 

* Re: [RFC PATCH 6/7] kasan: allow arches to hook into global registration
  2019-05-23  6:31   ` Christophe Leroy
@ 2019-05-23  6:59     ` Daniel Axtens
  0 siblings, 0 replies; 14+ messages in thread
From: Daniel Axtens @ 2019-05-23  6:59 UTC (permalink / raw)
  To: Christophe Leroy, aneesh.kumar, bsingharora; +Cc: linuxppc-dev, kasan-dev

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> Le 23/05/2019 à 07:21, Daniel Axtens a écrit :
>> Not all arches have a specific space carved out for modules -
>> some, such as powerpc, just use regular vmalloc space. Therefore,
>> globals in these modules cannot be backed by real shadow memory.
>
> Can you explain in more detail the reason why?

At this point, purely simplicity. As you discuss below, it's possible to
do better.
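
For example, the simplest hook on powerpc would just refuse anything
living in vmalloc space (a hypothetical sketch, not necessarily what
the series will end up doing):

	/* hypothetical: module globals sit in vmalloc space on ppc64,
	 * where no real shadow is mapped, so skip registering them
	 */
	static inline bool kasan_arch_can_register_global(const void *addr)
	{
		return !is_vmalloc_addr(addr);
	}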

>
> PPC32 also uses regular vmalloc space, and it has been possible to
> manage globals on it simply by implementing a module_alloc() function.
>
> See 
> https://elixir.bootlin.com/linux/v5.2-rc1/source/arch/powerpc/mm/kasan/kasan_init_32.c#L135
>
> It is also possible to define a separate area for modules: replace the
> call to vmalloc_exec() with a direct call to __vmalloc_node_range() (as
> vmalloc_exec() itself does), but with bounds other than
> VMALLOC_START/VMALLOC_END
>
> See https://elixir.bootlin.com/linux/v5.2-rc1/source/mm/vmalloc.c#L2633
>
> Today in PPC64 (unlike PPC32), there is already a split between VMALLOC 
> space and IOREMAP space. I'm sure it would be easy to split it once more 
> for modules.
>

OK, good to know, I'll look into one of those approaches for the next
spin!

Regards,
Daniel


> Christophe
>
>> 
>> In order to allow arches to perform this check, add a hook.
>> 
>> Signed-off-by: Daniel Axtens <dja@axtens.net>
>> ---
>>   include/linux/kasan.h | 5 +++++
>>   mm/kasan/generic.c    | 3 +++
>>   2 files changed, 8 insertions(+)
>> 
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index dfee2b42d799..4752749e4797 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -18,6 +18,11 @@ struct task_struct;
>>   static inline bool kasan_arch_is_ready(void)	{ return true; }
>>   #endif
>>   
>> +#ifndef kasan_arch_can_register_global
>> +static inline bool kasan_arch_can_register_global(const void *addr)	{ return true; }
>> +#endif
>> +
>> +
>>   #ifndef ARCH_HAS_KASAN_EARLY_SHADOW
>>   extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>>   extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
>> index 0336f31bbae3..935b06f659a0 100644
>> --- a/mm/kasan/generic.c
>> +++ b/mm/kasan/generic.c
>> @@ -208,6 +208,9 @@ static void register_global(struct kasan_global *global)
>>   {
>>   	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
>>   
>> +	if (!kasan_arch_can_register_global(global->beg))
>> +		return;
>> +
>>   	kasan_unpoison_shadow(global->beg, global->size);
>>   
>>   	kasan_poison_shadow(global->beg + aligned_size,
>> 

Thread overview: 14+ messages
2019-05-23  5:21 [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Daniel Axtens
2019-05-23  5:21 ` [RFC PATCH 1/7] kasan: do not open-code addr_has_shadow Daniel Axtens
2019-05-23  5:21 ` [RFC PATCH 2/7] kasan: allow architectures to manage the memory-to-shadow mapping Daniel Axtens
2019-05-23  5:21 ` [RFC PATCH 3/7] kasan: allow architectures to provide an outline readiness check Daniel Axtens
2019-05-23  6:14   ` Christophe Leroy
2019-05-23  5:21 ` [RFC PATCH 4/7] powerpc: KASAN for 64bit Book3E Daniel Axtens
2019-05-23  6:15   ` Christophe Leroy
2019-05-23  5:21 ` [RFC PATCH 5/7] kasan: allow arches to provide their own early shadow setup Daniel Axtens
2019-05-23  5:21 ` [RFC PATCH 6/7] kasan: allow arches to hook into global registration Daniel Axtens
2019-05-23  6:31   ` Christophe Leroy
2019-05-23  6:59     ` Daniel Axtens
2019-05-23  5:21 ` [RFC PATCH 7/7] powerpc: Book3S 64-bit "heavyweight" KASAN support Daniel Axtens
2019-05-23  6:10 ` [RFC PATCH 0/7] powerpc: KASAN for 64-bit 3s radix Christophe Leroy
2019-05-23  6:18   ` Daniel Axtens
