linuxppc-dev.lists.ozlabs.org archive mirror
* [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E
@ 2019-02-15  0:04 Daniel Axtens
  2019-02-15  0:04 ` [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow Daniel Axtens
                   ` (6 more replies)
  0 siblings, 7 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-15  0:04 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

Building on the work of Christophe, Aneesh and Balbir, I've ported
KASAN to the e6500, a 64-bit Book3E processor which doesn't have a
hashed page table. It applies on top of Christophe's series, v5.

It requires some changes to the KASAN core - please let me know if
these are problematic, and we can see whether an alternative approach
is possible.

The KASAN shadow area is mapped into vmemmap space:
0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
To do this we require that vmemmap be disabled. (This is the default
in the kernel config that QorIQ provides for the machine in their
SDK anyway - they use flat memory.)

Only outline instrumentation is supported and only KASAN_MINIMAL works.
Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
ioremap areas (also in 0x800...) are all mapped to a zero page. As
with the Book3S hash series, this requires overriding the memory <->
shadow mapping.

Also, as with both previous 64-bit series, early instrumentation is not
supported.

KVM, kexec and xmon have not been tested.

Thanks to those who have done the heavy lifting over the past several years:
 - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
 - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
 - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/

While useful if you have a Book3E device, this is mostly intended
as a warm-up exercise for reviving Aneesh's series for book3s hash.
In particular, changes to the kasan core are going to be required
for hash and radix as well.

Regards,
Daniel

Daniel Axtens (5):
  kasan: do not open-code addr_has_shadow
  kasan: allow architectures to manage the memory-to-shadow mapping
  kasan: allow architectures to provide an outline readiness check
  powerpc: move KASAN into its own subdirectory
  powerpc: KASAN for 64bit Book3E

 arch/powerpc/Kconfig                          |  1 +
 arch/powerpc/Makefile                         |  2 +
 arch/powerpc/include/asm/kasan.h              | 77 +++++++++++++++++--
 arch/powerpc/include/asm/ppc_asm.h            |  7 ++
 arch/powerpc/include/asm/string.h             |  7 +-
 arch/powerpc/lib/mem_64.S                     |  6 +-
 arch/powerpc/lib/memcmp_64.S                  |  5 +-
 arch/powerpc/lib/memcpy_64.S                  |  3 +-
 arch/powerpc/lib/string.S                     | 15 ++--
 arch/powerpc/mm/Makefile                      |  4 +-
 arch/powerpc/mm/kasan/Makefile                |  6 ++
 .../{kasan_init.c => kasan/kasan_init_32.c}   |  0
 arch/powerpc/mm/kasan/kasan_init_book3e_64.c  | 53 +++++++++++++
 arch/powerpc/purgatory/Makefile               |  3 +
 arch/powerpc/xmon/Makefile                    |  1 +
 include/linux/kasan.h                         |  6 ++
 mm/kasan/generic.c                            |  5 +-
 mm/kasan/generic_report.c                     |  2 +-
 mm/kasan/kasan.h                              |  6 +-
 mm/kasan/report.c                             |  6 +-
 mm/kasan/tags.c                               |  3 +-
 21 files changed, 188 insertions(+), 30 deletions(-)
 create mode 100644 arch/powerpc/mm/kasan/Makefile
 rename arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} (100%)
 create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c

-- 
2.19.1


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow
  2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
@ 2019-02-15  0:04 ` Daniel Axtens
  2019-02-15  0:12   ` Andrew Donnellan
  2019-02-15  0:04 ` [RFC PATCH 2/5] kasan: allow architectures to manage the memory-to-shadow mapping Daniel Axtens
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 27+ messages in thread
From: Daniel Axtens @ 2019-02-15  0:04 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

We have a couple of places checking for the existence of a shadow
mapping for an address by open-coding the inverse of the check in
addr_has_shadow.

Replace the open-coded versions with the helper. This will be
needed in future to allow architectures to override the layout
of the shadow mapping.

Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 mm/kasan/generic.c | 3 +--
 mm/kasan/tags.c    | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index ccb6207276e3..ffc64a9a97a5 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -173,8 +173,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	if (unlikely(size == 0))
 		return;
 
-	if (unlikely((void *)addr <
-		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+	if (unlikely(!addr_has_shadow((void *)addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 0777649e07c4..bc759f8f1c67 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -109,8 +109,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
 		return;
 
 	untagged_addr = reset_tag((const void *)addr);
-	if (unlikely(untagged_addr <
-			kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+	if (unlikely(!addr_has_shadow(untagged_addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
-- 
2.19.1



* [RFC PATCH 2/5] kasan: allow architectures to manage the memory-to-shadow mapping
  2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
  2019-02-15  0:04 ` [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow Daniel Axtens
@ 2019-02-15  0:04 ` Daniel Axtens
  2019-02-15  6:35   ` Dmitry Vyukov
  2019-02-15  0:04 ` [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check Daniel Axtens
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 27+ messages in thread
From: Daniel Axtens @ 2019-02-15  0:04 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

Currently, shadow addresses are always computed as (addr >> shift) + offset.
However, for powerpc, the virtual address space is fragmented in
ways that make this simple scheme impractical.

Allow architectures to override:
 - kasan_shadow_to_mem
 - kasan_mem_to_shadow
 - addr_has_shadow

Rename addr_has_shadow to kasan_addr_has_shadow: if it can be
overridden, it will be visible in more places, increasing the
risk of name collisions.

If architectures do not #define their own versions, the generic
code will continue to run as usual.

Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 include/linux/kasan.h     | 2 ++
 mm/kasan/generic.c        | 2 +-
 mm/kasan/generic_report.c | 2 +-
 mm/kasan/kasan.h          | 6 +++++-
 mm/kasan/report.c         | 6 +++---
 mm/kasan/tags.c           | 2 +-
 6 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b40ea104dd36..f6261840f94c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -23,11 +23,13 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 int kasan_populate_early_shadow(const void *shadow_start,
 				const void *shadow_end);
 
+#ifndef kasan_mem_to_shadow
 static inline void *kasan_mem_to_shadow(const void *addr)
 {
 	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
 		+ KASAN_SHADOW_OFFSET;
 }
+#endif
 
 /* Enable reporting bugs after kasan_disable_current() */
 extern void kasan_enable_current(void);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index ffc64a9a97a5..bafa2f986660 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -173,7 +173,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	if (unlikely(size == 0))
 		return;
 
-	if (unlikely(!addr_has_shadow((void *)addr))) {
+	if (unlikely(!kasan_addr_has_shadow((void *)addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c
index 5e12035888f2..854f4de1fe10 100644
--- a/mm/kasan/generic_report.c
+++ b/mm/kasan/generic_report.c
@@ -110,7 +110,7 @@ static const char *get_wild_bug_type(struct kasan_access_info *info)
 
 const char *get_bug_type(struct kasan_access_info *info)
 {
-	if (addr_has_shadow(info->access_addr))
+	if (kasan_addr_has_shadow(info->access_addr))
 		return get_shadow_bug_type(info);
 	return get_wild_bug_type(info);
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ea51b2d898ec..57ec24cf7bd1 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -111,16 +111,20 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
 struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 					const void *object);
 
+#ifndef kasan_shadow_to_mem
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
 	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
 		<< KASAN_SHADOW_SCALE_SHIFT);
 }
+#endif
 
-static inline bool addr_has_shadow(const void *addr)
+#ifndef kasan_addr_has_shadow
+static inline bool kasan_addr_has_shadow(const void *addr)
 {
 	return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
 }
+#endif
 
 void kasan_poison_shadow(const void *address, size_t size, u8 value);
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index ca9418fe9232..bc3355ee2dd0 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -298,7 +298,7 @@ void kasan_report(unsigned long addr, size_t size,
 	untagged_addr = reset_tag(tagged_addr);
 
 	info.access_addr = tagged_addr;
-	if (addr_has_shadow(untagged_addr))
+	if (kasan_addr_has_shadow(untagged_addr))
 		info.first_bad_addr = find_first_bad_addr(tagged_addr, size);
 	else
 		info.first_bad_addr = untagged_addr;
@@ -309,11 +309,11 @@ void kasan_report(unsigned long addr, size_t size,
 	start_report(&flags);
 
 	print_error_description(&info);
-	if (addr_has_shadow(untagged_addr))
+	if (kasan_addr_has_shadow(untagged_addr))
 		print_tags(get_tag(tagged_addr), info.first_bad_addr);
 	pr_err("\n");
 
-	if (addr_has_shadow(untagged_addr)) {
+	if (kasan_addr_has_shadow(untagged_addr)) {
 		print_address_description(untagged_addr);
 		pr_err("\n");
 		print_shadow_for_address(info.first_bad_addr);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index bc759f8f1c67..cdefd0fe1f5d 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -109,7 +109,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
 		return;
 
 	untagged_addr = reset_tag((const void *)addr);
-	if (unlikely(!addr_has_shadow(untagged_addr))) {
+	if (unlikely(!kasan_addr_has_shadow(untagged_addr))) {
 		kasan_report(addr, size, write, ret_ip);
 		return;
 	}
-- 
2.19.1



* [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check
  2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
  2019-02-15  0:04 ` [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow Daniel Axtens
  2019-02-15  0:04 ` [RFC PATCH 2/5] kasan: allow architectures to manage the memory-to-shadow mapping Daniel Axtens
@ 2019-02-15  0:04 ` Daniel Axtens
  2019-02-15  8:25   ` Dmitry Vyukov
  2019-02-17 12:05   ` christophe leroy
  2019-02-15  0:04 ` [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory Daniel Axtens
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-15  0:04 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev, Daniel Axtens

On powerpc (as I understand it), we spend a lot of time in boot
running in real mode, before MMU paging is initialised. During
this time we call a lot of generic code, including printk(). If
we try to access the shadow region during this time, things fail.

My attempts to move early init before the first printk have not
been successful. (Both previous RFCs for ppc64 - by two different
people - have needed this trick too!)

So, allow architectures to define a check_return_arch_not_ready()
hook that bails out of check_memory_region_inline() unless the
arch has done all of the init.

Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
Originally-by: Balbir Singh <bsingharora@gmail.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 include/linux/kasan.h | 4 ++++
 mm/kasan/generic.c    | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f6261840f94c..83edc5e2b6a0 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -14,6 +14,10 @@ struct task_struct;
 #include <asm/kasan.h>
 #include <asm/pgtable.h>
 
+#ifndef check_return_arch_not_ready
+#define check_return_arch_not_ready()	do { } while (0)
+#endif
+
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
 extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index bafa2f986660..4c18bbd09a20 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 						size_t size, bool write,
 						unsigned long ret_ip)
 {
+	check_return_arch_not_ready();
+
 	if (unlikely(size == 0))
 		return;
 
-- 
2.19.1



* [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory
  2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
                   ` (2 preceding siblings ...)
  2019-02-15  0:04 ` [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check Daniel Axtens
@ 2019-02-15  0:04 ` Daniel Axtens
  2019-02-15  0:24   ` Andrew Donnellan
  2019-02-17 16:29   ` christophe leroy
  2019-02-15  0:04 ` [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E Daniel Axtens
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-15  0:04 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev, Daniel Axtens

In preparation for adding ppc64 implementations, break out the
code into its own subdirectory.

Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 arch/powerpc/mm/Makefile                                | 4 +---
 arch/powerpc/mm/kasan/Makefile                          | 5 +++++
 arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} | 0
 3 files changed, 6 insertions(+), 3 deletions(-)
 create mode 100644 arch/powerpc/mm/kasan/Makefile
 rename arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} (100%)

diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index d6b76f25f6de..457c0ea2b5e7 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -7,8 +7,6 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
 
-KASAN_SANITIZE_kasan_init.o := n
-
 obj-y				:= fault.o mem.o pgtable.o mmap.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
 				   init-common.o mmu_context.o drmem.o
@@ -57,4 +55,4 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= dump_linuxpagetables-book3s64.o
 endif
 obj-$(CONFIG_PPC_HTDUMP)	+= dump_hashpagetable.o
 obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
-obj-$(CONFIG_KASAN)		+= kasan_init.o
+obj-$(CONFIG_KASAN)		+= kasan/
diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
new file mode 100644
index 000000000000..6577897673dd
--- /dev/null
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+KASAN_SANITIZE := n
+
+obj-$(CONFIG_PPC32)           += kasan_init_32.o
diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan/kasan_init_32.c
similarity index 100%
rename from arch/powerpc/mm/kasan_init.c
rename to arch/powerpc/mm/kasan/kasan_init_32.c
-- 
2.19.1



* [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E
  2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
                   ` (3 preceding siblings ...)
  2019-02-15  0:04 ` [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory Daniel Axtens
@ 2019-02-15  0:04 ` Daniel Axtens
  2019-02-15  8:28   ` Dmitry Vyukov
                     ` (2 more replies)
  2019-02-15 16:39 ` [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Christophe Leroy
  2019-02-17  6:34 ` Balbir Singh
  6 siblings, 3 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-15  0:04 UTC (permalink / raw)
  To: aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev, Daniel Axtens

Wire up KASAN. Only outline instrumentation is supported.

The KASAN shadow area is mapped into vmemmap space:
0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
To do this we require that vmemmap be disabled. (This is the default
in the kernel config that QorIQ provides for the machine in their
SDK anyway - they use flat memory.)

Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
ioremap areas (also in 0x800...) are all mapped to a zero page. As
with the Book3S hash series, this requires overriding the memory <->
shadow mapping.

Also, as with both previous 64-bit series, early instrumentation is not
supported. Supporting it would allow us to drop the
check_return_arch_not_ready() hook in the KASAN core, but it's tricky
to get it set up early enough: we need it in place before the first
call to instrumented code like printk(). Perhaps in the future.

Only KASAN_MINIMAL works.

Lightly tested on e6500. KVM, kexec and xmon have not been tested.

The test_kasan module fires warnings as expected, except for the
following tests:

 - Expected/by design:
kasan test: memcg_accounted_kmem_cache allocate memcg accounted object

 - Due to only supporting KASAN_MINIMAL:
kasan test: kasan_stack_oob out-of-bounds on stack
kasan test: kasan_global_oob out-of-bounds global variable
kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
kasan test: use_after_scope_test use-after-scope on int
kasan test: use_after_scope_test use-after-scope on array

Thanks to those who have done the heavy lifting over the past several years:
 - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
 - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
 - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/

Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>

---

While useful if you have a book3e device, this is mostly intended
as a warm-up exercise for reviving Aneesh's series for book3s hash.
In particular, changes to the kasan core are going to be required
for hash and radix as well.
---
 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/Makefile                        |  2 +
 arch/powerpc/include/asm/kasan.h             | 77 ++++++++++++++++++--
 arch/powerpc/include/asm/ppc_asm.h           |  7 ++
 arch/powerpc/include/asm/string.h            |  7 +-
 arch/powerpc/lib/mem_64.S                    |  6 +-
 arch/powerpc/lib/memcmp_64.S                 |  5 +-
 arch/powerpc/lib/memcpy_64.S                 |  3 +-
 arch/powerpc/lib/string.S                    | 15 ++--
 arch/powerpc/mm/Makefile                     |  2 +
 arch/powerpc/mm/kasan/Makefile               |  1 +
 arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 53 ++++++++++++++
 arch/powerpc/purgatory/Makefile              |  3 +
 arch/powerpc/xmon/Makefile                   |  1 +
 14 files changed, 164 insertions(+), 19 deletions(-)
 create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 850b06def84f..2c7c20d52778 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -176,6 +176,7 @@ config PPC
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32
+	select HAVE_ARCH_KASAN			if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index f0738099e31e..21c2dadf0315 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -428,11 +428,13 @@ endif
 endif
 
 ifdef CONFIG_KASAN
+ifdef CONFIG_PPC32
 prepare: kasan_prepare
 
 kasan_prepare: prepare0
        $(eval KASAN_SHADOW_OFFSET = $(shell awk '{if ($$2 == "KASAN_SHADOW_OFFSET") print $$3;}' include/generated/asm-offsets.h))
 endif
+endif
 
 # Check toolchain versions:
 # - gcc-4.6 is the minimum kernel-wide version so nothing required.
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index 5d0088429b62..c2f6f05dfaa3 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -5,20 +5,85 @@
 #ifndef __ASSEMBLY__
 
 #include <asm/page.h>
+#include <asm/pgtable.h>
 #include <asm/pgtable-types.h>
-#include <asm/fixmap.h>
 
 #define KASAN_SHADOW_SCALE_SHIFT	3
-#define KASAN_SHADOW_SIZE	((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
 
-#define KASAN_SHADOW_START	(ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
-					    PGDIR_SIZE))
-#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
 #define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_START - \
 				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
+#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
+
+
+#ifdef CONFIG_PPC32
+#include <asm/fixmap.h>
+#define KASAN_SHADOW_START	(ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
+					    PGDIR_SIZE))
+#define KASAN_SHADOW_SIZE	((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
 
 void kasan_early_init(void);
+
+#endif /* CONFIG_PPC32 */
+
+#ifdef CONFIG_PPC_BOOK3E_64
+#define KASAN_SHADOW_START VMEMMAP_BASE
+#define KASAN_SHADOW_SIZE	(KERN_VIRT_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+
+extern struct static_key_false powerpc_kasan_enabled_key;
+#define check_return_arch_not_ready() \
+	do {								\
+		if (!static_branch_likely(&powerpc_kasan_enabled_key))	\
+			return;						\
+	} while (0)
+
+extern unsigned char kasan_zero_page[PAGE_SIZE];
+static inline void *kasan_mem_to_shadow_book3e(const void *addr)
+{
+	if ((unsigned long)addr >= KERN_VIRT_START &&
+		(unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
+		return (void *)kasan_zero_page;
+	}
+
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+#define kasan_mem_to_shadow kasan_mem_to_shadow_book3e
+
+static inline void *kasan_shadow_to_mem_book3e(const void *shadow_addr)
+{
+	/*
+	 * We map the entire non-linear virtual mapping onto the zero page so if
+	 * we are asked to map the zero page back just pick the beginning of that
+	 * area.
+	 */
+	if (shadow_addr >= (void *)kasan_zero_page &&
+		shadow_addr < (void *)(kasan_zero_page + PAGE_SIZE)) {
+		return (void *)KERN_VIRT_START;
+	}
+
+	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
+		<< KASAN_SHADOW_SCALE_SHIFT);
+}
+#define kasan_shadow_to_mem kasan_shadow_to_mem_book3e
+
+static inline bool kasan_addr_has_shadow_book3e(const void *addr)
+{
+	/*
+	 * We want to specifically assert that the addresses in the 0x8000...
+	 * region have a shadow, otherwise they are considered by the kasan
+	 * core to be wild pointers
+	 */
+	if ((unsigned long)addr >= KERN_VIRT_START &&
+		(unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
+		return true;
+	}
+	return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
+}
+#define kasan_addr_has_shadow kasan_addr_has_shadow_book3e
+
+#endif /* CONFIG_PPC_BOOK3E_64 */
+
 void kasan_init(void);
 
-#endif
+#endif /* CONFIG_KASAN */
 #endif
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index dba2c1038363..fd7c9fa9d307 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -251,10 +251,17 @@ GLUE(.,name):
 
 #define _GLOBAL_TOC(name) _GLOBAL(name)
 
+#endif /* 32-bit */
+
+/* KASAN helpers */
 #define KASAN_OVERRIDE(x, y) \
 	.weak x;	     \
 	.set x, y
 
+#ifdef CONFIG_KASAN
+#define EXPORT_SYMBOL_NOKASAN(x)
+#else
+#define EXPORT_SYMBOL_NOKASAN(x) EXPORT_SYMBOL(x)
 #endif
 
 /*
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index 64d44d4836b4..e2801d517d57 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -4,13 +4,16 @@
 
 #ifdef __KERNEL__
 
+#ifndef CONFIG_KASAN
 #define __HAVE_ARCH_STRNCPY
 #define __HAVE_ARCH_STRNCMP
+#define __HAVE_ARCH_MEMCHR
+#define __HAVE_ARCH_MEMCMP
+#endif
+
 #define __HAVE_ARCH_MEMSET
 #define __HAVE_ARCH_MEMCPY
 #define __HAVE_ARCH_MEMMOVE
-#define __HAVE_ARCH_MEMCMP
-#define __HAVE_ARCH_MEMCHR
 #define __HAVE_ARCH_MEMSET16
 #define __HAVE_ARCH_MEMCPY_FLUSHCACHE
 
diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
index 3c3be02f33b7..3ff4c6b45505 100644
--- a/arch/powerpc/lib/mem_64.S
+++ b/arch/powerpc/lib/mem_64.S
@@ -30,7 +30,8 @@ EXPORT_SYMBOL(__memset16)
 EXPORT_SYMBOL(__memset32)
 EXPORT_SYMBOL(__memset64)
 
-_GLOBAL(memset)
+_GLOBAL(__memset)
+KASAN_OVERRIDE(memset, __memset)
 	neg	r0,r3
 	rlwimi	r4,r4,8,16,23
 	andi.	r0,r0,7			/* # bytes to be 8-byte aligned */
@@ -97,7 +98,8 @@ _GLOBAL(memset)
 	blr
 EXPORT_SYMBOL(memset)
 
-_GLOBAL_TOC(memmove)
+_GLOBAL_TOC(__memmove)
+KASAN_OVERRIDE(memmove, __memmove)
 	cmplw	0,r3,r4
 	bgt	backwards_memcpy
 	b	memcpy
diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
index 844d8e774492..21aee60de2cd 100644
--- a/arch/powerpc/lib/memcmp_64.S
+++ b/arch/powerpc/lib/memcmp_64.S
@@ -102,7 +102,8 @@
  * 2) src/dst has different offset to the 8 bytes boundary. The handlers
  * are named like .Ldiffoffset_xxxx
  */
-_GLOBAL_TOC(memcmp)
+_GLOBAL_TOC(__memcmp)
+KASAN_OVERRIDE(memcmp, __memcmp)
 	cmpdi	cr1,r5,0
 
 	/* Use the short loop if the src/dst addresses are not
@@ -630,4 +631,4 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 	b	.Lcmp_lt32bytes
 
 #endif
-EXPORT_SYMBOL(memcmp)
+EXPORT_SYMBOL_NOKASAN(memcmp)
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index 273ea67e60a1..e9092a0e531a 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -18,7 +18,8 @@
 #endif
 
 	.align	7
-_GLOBAL_TOC(memcpy)
+_GLOBAL_TOC(__memcpy)
+KASAN_OVERRIDE(memcpy, __memcpy)
 BEGIN_FTR_SECTION
 #ifdef __LITTLE_ENDIAN__
 	cmpdi	cr7,r5,0
diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S
index 4b41970e9ed8..09deaac6e5f1 100644
--- a/arch/powerpc/lib/string.S
+++ b/arch/powerpc/lib/string.S
@@ -16,7 +16,8 @@
 	
 /* This clears out any unused part of the destination buffer,
    just as the libc version does.  -- paulus */
-_GLOBAL(strncpy)
+_GLOBAL(__strncpy)
+KASAN_OVERRIDE(strncpy, __strncpy)
 	PPC_LCMPI 0,r5,0
 	beqlr
 	mtctr	r5
@@ -34,9 +35,10 @@ _GLOBAL(strncpy)
 2:	stbu	r0,1(r6)	/* clear it out if so */
 	bdnz	2b
 	blr
-EXPORT_SYMBOL(strncpy)
+EXPORT_SYMBOL_NOKASAN(strncpy)
 
-_GLOBAL(strncmp)
+_GLOBAL(__strncmp)
+KASAN_OVERRIDE(strncmp, __strncmp)
 	PPC_LCMPI 0,r5,0
 	beq-	2f
 	mtctr	r5
@@ -52,9 +54,10 @@ _GLOBAL(strncmp)
 	blr
 2:	li	r3,0
 	blr
-EXPORT_SYMBOL(strncmp)
+EXPORT_SYMBOL_NOKASAN(strncmp)
 
-_GLOBAL(memchr)
+_GLOBAL(__memchr)
+KASAN_OVERRIDE(memchr, __memchr)
 	PPC_LCMPI 0,r5,0
 	beq-	2f
 	mtctr	r5
@@ -66,4 +69,4 @@ _GLOBAL(memchr)
 	beqlr
 2:	li	r3,0
 	blr
-EXPORT_SYMBOL(memchr)
+EXPORT_SYMBOL_NOKASAN(memchr)
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 457c0ea2b5e7..d974f7bcb177 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -7,6 +7,8 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
 
+KASAN_SANITIZE_fsl_booke_mmu.o := n
+
 obj-y				:= fault.o mem.o pgtable.o mmap.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
 				   init-common.o mmu_context.o drmem.o
diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
index 6577897673dd..f8f164ad8ade 100644
--- a/arch/powerpc/mm/kasan/Makefile
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -3,3 +3,4 @@
 KASAN_SANITIZE := n
 
 obj-$(CONFIG_PPC32)           += kasan_init_32.o
+obj-$(CONFIG_PPC_BOOK3E_64)   += kasan_init_book3e_64.o
diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
new file mode 100644
index 000000000000..93b9afcf1020
--- /dev/null
+++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/kasan.h>
+#include <linux/printk.h>
+#include <linux/memblock.h>
+#include <linux/sched/task.h>
+#include <asm/pgalloc.h>
+
+DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
+EXPORT_SYMBOL(powerpc_kasan_enabled_key);
+unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
+
+static void __init kasan_init_region(struct memblock_region *reg)
+{
+	void *start = __va(reg->base);
+	void *end = __va(reg->base + reg->size);
+	unsigned long k_start, k_end, k_cur;
+
+	if (start >= end)
+		return;
+
+	k_start = (unsigned long)kasan_mem_to_shadow(start);
+	k_end = (unsigned long)kasan_mem_to_shadow(end);
+
+	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
+		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
+	}
+	flush_tlb_kernel_range(k_start, k_end);
+}
+
+void __init kasan_init(void)
+{
+	struct memblock_region *reg;
+
+	for_each_memblock(memory, reg)
+		kasan_init_region(reg);
+
+	/* map the zero page RO */
+	map_kernel_page((unsigned long)kasan_zero_page,
+					__pa(kasan_zero_page), PAGE_KERNEL_RO);
+
+	kasan_init_tags();
+
+	/* Turn on checking */
+	static_branch_inc(&powerpc_kasan_enabled_key);
+
+	/* Enable error messages */
+	init_task.kasan_depth = 0;
+	pr_info("KASAN init done (64-bit Book3E)\n");
+}
diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
index 4314ba5baf43..7c6d8b14f440 100644
--- a/arch/powerpc/purgatory/Makefile
+++ b/arch/powerpc/purgatory/Makefile
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
+
+KASAN_SANITIZE := n
+
 targets += trampoline.o purgatory.ro kexec-purgatory.c
 
 LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined
diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
index 878f9c1d3615..064f7062c0a3 100644
--- a/arch/powerpc/xmon/Makefile
+++ b/arch/powerpc/xmon/Makefile
@@ -6,6 +6,7 @@ subdir-ccflags-y := $(call cc-disable-warning, builtin-requires-header)
 
 GCOV_PROFILE := n
 UBSAN_SANITIZE := n
+KASAN_SANITIZE := n
 
 # Disable ftrace for the entire directory
 ORIG_CFLAGS := $(KBUILD_CFLAGS)
-- 
2.19.1



* Re: [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow
  2019-02-15  0:04 ` [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow Daniel Axtens
@ 2019-02-15  0:12   ` Andrew Donnellan
  2019-02-15  8:21     ` Dmitry Vyukov
  0 siblings, 1 reply; 27+ messages in thread
From: Andrew Donnellan @ 2019-02-15  0:12 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev

On 15/2/19 11:04 am, Daniel Axtens wrote:
> We have a couple of places checking for the existence of a shadow
> mapping for an address by open-coding the inverse of the check in
> addr_has_shadow.
> 
> Replace the open-coded versions with the helper. This will be
> needed in future to allow architectures to override the layout
> of the shadow mapping.
> 
> Signed-off-by: Daniel Axtens <dja@axtens.net>

Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>

> ---
>   mm/kasan/generic.c | 3 +--
>   mm/kasan/tags.c    | 3 +--
>   2 files changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index ccb6207276e3..ffc64a9a97a5 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -173,8 +173,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>   	if (unlikely(size == 0))
>   		return;
>   
> -	if (unlikely((void *)addr <
> -		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
> +	if (unlikely(!addr_has_shadow((void *)addr))) {
>   		kasan_report(addr, size, write, ret_ip);
>   		return;
>   	}
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 0777649e07c4..bc759f8f1c67 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -109,8 +109,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
>   		return;
>   
>   	untagged_addr = reset_tag((const void *)addr);
> -	if (unlikely(untagged_addr <
> -			kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
> +	if (unlikely(!addr_has_shadow(untagged_addr))) {
>   		kasan_report(addr, size, write, ret_ip);
>   		return;
>   	}
> 

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory
  2019-02-15  0:04 ` [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory Daniel Axtens
@ 2019-02-15  0:24   ` Andrew Donnellan
  2019-02-17 16:29   ` christophe leroy
  1 sibling, 0 replies; 27+ messages in thread
From: Andrew Donnellan @ 2019-02-15  0:24 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, christophe.leroy, bsingharora
  Cc: linuxppc-dev, kasan-dev

On 15/2/19 11:04 am, Daniel Axtens wrote:
> In preparation for adding ppc64 implementations, break out the
> code into its own subdirectory.
> 
> Signed-off-by: Daniel Axtens <dja@axtens.net>

If Christophe respins his series, can we squash this in there?

> ---
>   arch/powerpc/mm/Makefile                                | 4 +---
>   arch/powerpc/mm/kasan/Makefile                          | 5 +++++
>   arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} | 0
>   3 files changed, 6 insertions(+), 3 deletions(-)
>   create mode 100644 arch/powerpc/mm/kasan/Makefile
>   rename arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} (100%)
> 
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index d6b76f25f6de..457c0ea2b5e7 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -7,8 +7,6 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
>   
>   CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
>   
> -KASAN_SANITIZE_kasan_init.o := n
> -
>   obj-y				:= fault.o mem.o pgtable.o mmap.o \
>   				   init_$(BITS).o pgtable_$(BITS).o \
>   				   init-common.o mmu_context.o drmem.o
> @@ -57,4 +55,4 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= dump_linuxpagetables-book3s64.o
>   endif
>   obj-$(CONFIG_PPC_HTDUMP)	+= dump_hashpagetable.o
>   obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
> -obj-$(CONFIG_KASAN)		+= kasan_init.o
> +obj-$(CONFIG_KASAN)		+= kasan/
> diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
> new file mode 100644
> index 000000000000..6577897673dd
> --- /dev/null
> +++ b/arch/powerpc/mm/kasan/Makefile
> @@ -0,0 +1,5 @@
> +# SPDX-License-Identifier: GPL-2.0
> +
> +KASAN_SANITIZE := n
> +
> +obj-$(CONFIG_PPC32)           += kasan_init_32.o
> diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan/kasan_init_32.c
> similarity index 100%
> rename from arch/powerpc/mm/kasan_init.c
> rename to arch/powerpc/mm/kasan/kasan_init_32.c
> 

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 2/5] kasan: allow architectures to manage the memory-to-shadow mapping
  2019-02-15  0:04 ` [RFC PATCH 2/5] kasan: allow architectures to manage the memory-to-shadow mapping Daniel Axtens
@ 2019-02-15  6:35   ` Dmitry Vyukov
  0 siblings, 0 replies; 27+ messages in thread
From: Dmitry Vyukov @ 2019-02-15  6:35 UTC (permalink / raw)
  To: Daniel Axtens; +Cc: Aneesh Kumar K.V, linuxppc-dev, kasan-dev

On Fri, Feb 15, 2019 at 1:05 AM Daniel Axtens <dja@axtens.net> wrote:
>
> Currently, shadow addresses are always addr >> shift + offset.
> However, for powerpc, the virtual address space is fragmented in
> ways that make this simple scheme impractical.
>
> Allow architectures to override:
>  - kasan_shadow_to_mem
>  - kasan_mem_to_shadow
>  - addr_has_shadow
>
> Rename addr_has_shadow to kasan_addr_has_shadow: if it is
> overridden, it will be visible in more places, increasing the
> risk of name collisions.
>
> If architectures do not #define their own versions, the generic
> code will continue to run as usual.
>
> Signed-off-by: Daniel Axtens <dja@axtens.net>

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>

> ---
>  include/linux/kasan.h     | 2 ++
>  mm/kasan/generic.c        | 2 +-
>  mm/kasan/generic_report.c | 2 +-
>  mm/kasan/kasan.h          | 6 +++++-
>  mm/kasan/report.c         | 6 +++---
>  mm/kasan/tags.c           | 2 +-
>  6 files changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index b40ea104dd36..f6261840f94c 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -23,11 +23,13 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>  int kasan_populate_early_shadow(const void *shadow_start,
>                                 const void *shadow_end);
>
> +#ifndef kasan_mem_to_shadow
>  static inline void *kasan_mem_to_shadow(const void *addr)
>  {
>         return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>                 + KASAN_SHADOW_OFFSET;
>  }
> +#endif
>
>  /* Enable reporting bugs after kasan_disable_current() */
>  extern void kasan_enable_current(void);
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index ffc64a9a97a5..bafa2f986660 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -173,7 +173,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>         if (unlikely(size == 0))
>                 return;
>
> -       if (unlikely(!addr_has_shadow((void *)addr))) {
> +       if (unlikely(!kasan_addr_has_shadow((void *)addr))) {
>                 kasan_report(addr, size, write, ret_ip);
>                 return;
>         }
> diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c
> index 5e12035888f2..854f4de1fe10 100644
> --- a/mm/kasan/generic_report.c
> +++ b/mm/kasan/generic_report.c
> @@ -110,7 +110,7 @@ static const char *get_wild_bug_type(struct kasan_access_info *info)
>
>  const char *get_bug_type(struct kasan_access_info *info)
>  {
> -       if (addr_has_shadow(info->access_addr))
> +       if (kasan_addr_has_shadow(info->access_addr))
>                 return get_shadow_bug_type(info);
>         return get_wild_bug_type(info);
>  }
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index ea51b2d898ec..57ec24cf7bd1 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -111,16 +111,20 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
>  struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
>                                         const void *object);
>
> +#ifndef kasan_shadow_to_mem
>  static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
>  {
>         return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
>                 << KASAN_SHADOW_SCALE_SHIFT);
>  }
> +#endif
>
> -static inline bool addr_has_shadow(const void *addr)
> +#ifndef kasan_addr_has_shadow
> +static inline bool kasan_addr_has_shadow(const void *addr)
>  {
>         return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
>  }
> +#endif
>
>  void kasan_poison_shadow(const void *address, size_t size, u8 value);
>
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index ca9418fe9232..bc3355ee2dd0 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -298,7 +298,7 @@ void kasan_report(unsigned long addr, size_t size,
>         untagged_addr = reset_tag(tagged_addr);
>
>         info.access_addr = tagged_addr;
> -       if (addr_has_shadow(untagged_addr))
> +       if (kasan_addr_has_shadow(untagged_addr))
>                 info.first_bad_addr = find_first_bad_addr(tagged_addr, size);
>         else
>                 info.first_bad_addr = untagged_addr;
> @@ -309,11 +309,11 @@ void kasan_report(unsigned long addr, size_t size,
>         start_report(&flags);
>
>         print_error_description(&info);
> -       if (addr_has_shadow(untagged_addr))
> +       if (kasan_addr_has_shadow(untagged_addr))
>                 print_tags(get_tag(tagged_addr), info.first_bad_addr);
>         pr_err("\n");
>
> -       if (addr_has_shadow(untagged_addr)) {
> +       if (kasan_addr_has_shadow(untagged_addr)) {
>                 print_address_description(untagged_addr);
>                 pr_err("\n");
>                 print_shadow_for_address(info.first_bad_addr);
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index bc759f8f1c67..cdefd0fe1f5d 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -109,7 +109,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
>                 return;
>
>         untagged_addr = reset_tag((const void *)addr);
> -       if (unlikely(!addr_has_shadow(untagged_addr))) {
> +       if (unlikely(!kasan_addr_has_shadow(untagged_addr))) {
>                 kasan_report(addr, size, write, ret_ip);
>                 return;
>         }
> --
> 2.19.1
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@googlegroups.com.
> To post to this group, send email to kasan-dev@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/kasan-dev/20190215000441.14323-3-dja%40axtens.net.
> For more options, visit https://groups.google.com/d/optout.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow
  2019-02-15  0:12   ` Andrew Donnellan
@ 2019-02-15  8:21     ` Dmitry Vyukov
  0 siblings, 0 replies; 27+ messages in thread
From: Dmitry Vyukov @ 2019-02-15  8:21 UTC (permalink / raw)
  To: Andrew Donnellan; +Cc: Aneesh Kumar K.V, kasan-dev, linuxppc-dev, Daniel Axtens

On Fri, Feb 15, 2019 at 1:12 AM Andrew Donnellan
<andrew.donnellan@au1.ibm.com> wrote:
>
> On 15/2/19 11:04 am, Daniel Axtens wrote:
> > We have a couple of places checking for the existence of a shadow
> > mapping for an address by open-coding the inverse of the check in
> > addr_has_shadow.
> >
> > Replace the open-coded versions with the helper. This will be
> > needed in future to allow architectures to override the layout
> > of the shadow mapping.
> >
> > Signed-off-by: Daniel Axtens <dja@axtens.net>
>
> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>

> > ---
> >   mm/kasan/generic.c | 3 +--
> >   mm/kasan/tags.c    | 3 +--
> >   2 files changed, 2 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> > index ccb6207276e3..ffc64a9a97a5 100644
> > --- a/mm/kasan/generic.c
> > +++ b/mm/kasan/generic.c
> > @@ -173,8 +173,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
> >       if (unlikely(size == 0))
> >               return;
> >
> > -     if (unlikely((void *)addr <
> > -             kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
> > +     if (unlikely(!addr_has_shadow((void *)addr))) {
> >               kasan_report(addr, size, write, ret_ip);
> >               return;
> >       }
> > diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> > index 0777649e07c4..bc759f8f1c67 100644
> > --- a/mm/kasan/tags.c
> > +++ b/mm/kasan/tags.c
> > @@ -109,8 +109,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
> >               return;
> >
> >       untagged_addr = reset_tag((const void *)addr);
> > -     if (unlikely(untagged_addr <
> > -                     kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
> > +     if (unlikely(!addr_has_shadow(untagged_addr))) {
> >               kasan_report(addr, size, write, ret_ip);
> >               return;
> >       }
> >
>
> --
> Andrew Donnellan              OzLabs, ADL Canberra
> andrew.donnellan@au1.ibm.com  IBM Australia Limited
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check
  2019-02-15  0:04 ` [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check Daniel Axtens
@ 2019-02-15  8:25   ` Dmitry Vyukov
  2019-02-17 12:05   ` christophe leroy
  1 sibling, 0 replies; 27+ messages in thread
From: Dmitry Vyukov @ 2019-02-15  8:25 UTC (permalink / raw)
  To: Daniel Axtens
  Cc: Aneesh Kumar K.V, kasan-dev, Aneesh Kumar K . V, linuxppc-dev

On Fri, Feb 15, 2019 at 1:05 AM Daniel Axtens <dja@axtens.net> wrote:
>
> In powerpc (as I understand it), we spend a lot of time in boot
> running in real mode before MMU paging is initialised. During
> this time we call a lot of generic code, including printk(). If
> we try to access the shadow region during this time, things fail.
>
> My attempts to move early init before the first printk have not
> been successful. (Both previous RFCs for ppc64 - by 2 different
> people - have needed this trick too!)
>
> So, allow architectures to define a check_return_arch_not_ready()
> hook that bails out of check_memory_region_inline() unless the
> arch has done all of the init.
>
> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
> Originally-by: Balbir Singh <bsingharora@gmail.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> ---
>  include/linux/kasan.h | 4 ++++
>  mm/kasan/generic.c    | 2 ++
>  2 files changed, 6 insertions(+)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index f6261840f94c..83edc5e2b6a0 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -14,6 +14,10 @@ struct task_struct;
>  #include <asm/kasan.h>
>  #include <asm/pgtable.h>
>
> +#ifndef check_return_arch_not_ready
> +#define check_return_arch_not_ready()  do { } while (0)
> +#endif

Please do a bool-returning function. There is no need for
macro-super-powers here and normal C should be the default choice in
such cases.
It will be inlined and an empty impl will dissolve just as the macro.

>  extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>  extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>  extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index bafa2f986660..4c18bbd09a20 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>                                                 size_t size, bool write,
>                                                 unsigned long ret_ip)
>  {
> +       check_return_arch_not_ready();
> +
>         if (unlikely(size == 0))
>                 return;
>
> --
> 2.19.1
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E
  2019-02-15  0:04 ` [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E Daniel Axtens
@ 2019-02-15  8:28   ` Dmitry Vyukov
  2019-02-19  6:37     ` Daniel Axtens
  2019-02-17 14:06   ` christophe leroy
  2019-02-18 19:26   ` Christophe Leroy
  2 siblings, 1 reply; 27+ messages in thread
From: Dmitry Vyukov @ 2019-02-15  8:28 UTC (permalink / raw)
  To: Daniel Axtens
  Cc: Aneesh Kumar K.V, kasan-dev, Aneesh Kumar K . V, linuxppc-dev

On Fri, Feb 15, 2019 at 1:05 AM Daniel Axtens <dja@axtens.net> wrote:
>
> Wire up KASAN. Only outline instrumentation is supported.
>
> The KASAN shadow area is mapped into vmemmap space:
> 0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
> To do this we require that vmemmap be disabled. (This is the default
> in the kernel config that QorIQ provides for the machine in their
> SDK anyway - they use flat memory.)
>
> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
> ioremap areas (also in 0x800...) are all mapped to a zero page. As
> with the Book3S hash series, this requires overriding the memory <->
> shadow mapping.
>
> Also, as with both previous 64-bit series, early instrumentation is not
> supported.  It would allow us to drop the check_return_arch_not_ready()
> hook in the KASAN core, but it's tricky to get it set up early enough:
> we need it setup before the first call to instrumented code like printk().
> Perhaps in the future.
>
> Only KASAN_MINIMAL works.
>
> Lightly tested on e6500. KVM, kexec and xmon have not been tested.

Hi Daniel,

This is great!

Not related to the patch, but if you booted a real device and used it
to some degree, I wonder if you hit any KASAN reports?

Thanks

> The test_kasan module fires warnings as expected, except for the
> following tests:
>
>  - Expected/by design:
> kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
>
>  - Due to only supporting KASAN_MINIMAL:
> kasan test: kasan_stack_oob out-of-bounds on stack
> kasan test: kasan_global_oob out-of-bounds global variable
> kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
> kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
> kasan test: use_after_scope_test use-after-scope on int
> kasan test: use_after_scope_test use-after-scope on array
>
> Thanks to those who have done the heavy lifting over the past several years:
>  - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
>  - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>  - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Balbir Singh <bsingharora@gmail.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
>
> ---
>
> While useful if you have a book3e device, this is mostly intended
> as a warm-up exercise for reviving Aneesh's series for book3s hash.
> In particular, changes to the kasan core are going to be required
> for hash and radix as well.
> ---
>  arch/powerpc/Kconfig                         |  1 +
>  arch/powerpc/Makefile                        |  2 +
>  arch/powerpc/include/asm/kasan.h             | 77 ++++++++++++++++++--
>  arch/powerpc/include/asm/ppc_asm.h           |  7 ++
>  arch/powerpc/include/asm/string.h            |  7 +-
>  arch/powerpc/lib/mem_64.S                    |  6 +-
>  arch/powerpc/lib/memcmp_64.S                 |  5 +-
>  arch/powerpc/lib/memcpy_64.S                 |  3 +-
>  arch/powerpc/lib/string.S                    | 15 ++--
>  arch/powerpc/mm/Makefile                     |  2 +
>  arch/powerpc/mm/kasan/Makefile               |  1 +
>  arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 53 ++++++++++++++
>  arch/powerpc/purgatory/Makefile              |  3 +
>  arch/powerpc/xmon/Makefile                   |  1 +
>  14 files changed, 164 insertions(+), 19 deletions(-)
>  create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 850b06def84f..2c7c20d52778 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -176,6 +176,7 @@ config PPC
>         select HAVE_ARCH_AUDITSYSCALL
>         select HAVE_ARCH_JUMP_LABEL
>         select HAVE_ARCH_KASAN                  if PPC32
> +       select HAVE_ARCH_KASAN                  if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
>         select HAVE_ARCH_KGDB
>         select HAVE_ARCH_MMAP_RND_BITS
>         select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
> diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
> index f0738099e31e..21c2dadf0315 100644
> --- a/arch/powerpc/Makefile
> +++ b/arch/powerpc/Makefile
> @@ -428,11 +428,13 @@ endif
>  endif
>
>  ifdef CONFIG_KASAN
> +ifdef CONFIG_PPC32
>  prepare: kasan_prepare
>
>  kasan_prepare: prepare0
>         $(eval KASAN_SHADOW_OFFSET = $(shell awk '{if ($$2 == "KASAN_SHADOW_OFFSET") print $$3;}' include/generated/asm-offsets.h))
>  endif
> +endif
>
>  # Check toolchain versions:
>  # - gcc-4.6 is the minimum kernel-wide version so nothing required.
> diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
> index 5d0088429b62..c2f6f05dfaa3 100644
> --- a/arch/powerpc/include/asm/kasan.h
> +++ b/arch/powerpc/include/asm/kasan.h
> @@ -5,20 +5,85 @@
>  #ifndef __ASSEMBLY__
>
>  #include <asm/page.h>
> +#include <asm/pgtable.h>
>  #include <asm/pgtable-types.h>
> -#include <asm/fixmap.h>
>
>  #define KASAN_SHADOW_SCALE_SHIFT       3
> -#define KASAN_SHADOW_SIZE      ((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
>
> -#define KASAN_SHADOW_START     (ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
> -                                           PGDIR_SIZE))
> -#define KASAN_SHADOW_END       (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
>  #define KASAN_SHADOW_OFFSET    (KASAN_SHADOW_START - \
>                                  (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
> +#define KASAN_SHADOW_END       (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
> +
> +
> +#ifdef CONFIG_PPC32
> +#include <asm/fixmap.h>
> +#define KASAN_SHADOW_START     (ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
> +                                           PGDIR_SIZE))
> +#define KASAN_SHADOW_SIZE      ((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
>
>  void kasan_early_init(void);
> +
> +#endif /* CONFIG_PPC32 */
> +
> +#ifdef CONFIG_PPC_BOOK3E_64
> +#define KASAN_SHADOW_START VMEMMAP_BASE
> +#define KASAN_SHADOW_SIZE      (KERN_VIRT_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
> +
> +extern struct static_key_false powerpc_kasan_enabled_key;
> +#define check_return_arch_not_ready() \
> +       do {                                                            \
> +               if (!static_branch_likely(&powerpc_kasan_enabled_key))  \
> +                       return;                                         \
> +       } while (0)
> +
> +extern unsigned char kasan_zero_page[PAGE_SIZE];
> +static inline void *kasan_mem_to_shadow_book3e(const void *addr)
> +{
> +       if ((unsigned long)addr >= KERN_VIRT_START &&
> +               (unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
> +               return (void *)kasan_zero_page;
> +       }
> +
> +       return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> +               + KASAN_SHADOW_OFFSET;
> +}
> +#define kasan_mem_to_shadow kasan_mem_to_shadow_book3e
> +
> +static inline void *kasan_shadow_to_mem_book3e(const void *shadow_addr)
> +{
> +       /*
> +        * We map the entire non-linear virtual mapping onto the zero page so if
> +        * we are asked to map the zero page back just pick the beginning of that
> +        * area.
> +        */
> +       if (shadow_addr >= (void *)kasan_zero_page &&
> +               shadow_addr < (void *)(kasan_zero_page + PAGE_SIZE)) {
> +               return (void *)KERN_VIRT_START;
> +       }
> +
> +       return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
> +               << KASAN_SHADOW_SCALE_SHIFT);
> +}
> +#define kasan_shadow_to_mem kasan_shadow_to_mem_book3e
> +
> +static inline bool kasan_addr_has_shadow_book3e(const void *addr)
> +{
> +       /*
> +        * We want to specifically assert that the addresses in the 0x8000...
> +        * region have a shadow, otherwise they are considered by the kasan
> +        * core to be wild pointers
> +        */
> +       if ((unsigned long)addr >= KERN_VIRT_START &&
> +               (unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
> +               return true;
> +       }
> +       return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
> +}
> +#define kasan_addr_has_shadow kasan_addr_has_shadow_book3e
> +
> +#endif /* CONFIG_PPC_BOOK3E_64 */
> +
>  void kasan_init(void);
>
> -#endif
> +#endif /* CONFIG_KASAN */
>  #endif
> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
> index dba2c1038363..fd7c9fa9d307 100644
> --- a/arch/powerpc/include/asm/ppc_asm.h
> +++ b/arch/powerpc/include/asm/ppc_asm.h
> @@ -251,10 +251,17 @@ GLUE(.,name):
>
>  #define _GLOBAL_TOC(name) _GLOBAL(name)
>
> +#endif /* 32-bit */
> +
> +/* KASAN helpers */
>  #define KASAN_OVERRIDE(x, y) \
>         .weak x;             \
>         .set x, y
>
> +#ifdef CONFIG_KASAN
> +#define EXPORT_SYMBOL_NOKASAN(x)
> +#else
> +#define EXPORT_SYMBOL_NOKASAN(x) EXPORT_SYMBOL(x)
>  #endif
>
>  /*
> diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
> index 64d44d4836b4..e2801d517d57 100644
> --- a/arch/powerpc/include/asm/string.h
> +++ b/arch/powerpc/include/asm/string.h
> @@ -4,13 +4,16 @@
>
>  #ifdef __KERNEL__
>
> +#ifndef CONFIG_KASAN
>  #define __HAVE_ARCH_STRNCPY
>  #define __HAVE_ARCH_STRNCMP
> +#define __HAVE_ARCH_MEMCHR
> +#define __HAVE_ARCH_MEMCMP
> +#endif
> +
>  #define __HAVE_ARCH_MEMSET
>  #define __HAVE_ARCH_MEMCPY
>  #define __HAVE_ARCH_MEMMOVE
> -#define __HAVE_ARCH_MEMCMP
> -#define __HAVE_ARCH_MEMCHR
>  #define __HAVE_ARCH_MEMSET16
>  #define __HAVE_ARCH_MEMCPY_FLUSHCACHE
>
> diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
> index 3c3be02f33b7..3ff4c6b45505 100644
> --- a/arch/powerpc/lib/mem_64.S
> +++ b/arch/powerpc/lib/mem_64.S
> @@ -30,7 +30,8 @@ EXPORT_SYMBOL(__memset16)
>  EXPORT_SYMBOL(__memset32)
>  EXPORT_SYMBOL(__memset64)
>
> -_GLOBAL(memset)
> +_GLOBAL(__memset)
> +KASAN_OVERRIDE(memset, __memset)
>         neg     r0,r3
>         rlwimi  r4,r4,8,16,23
>         andi.   r0,r0,7                 /* # bytes to be 8-byte aligned */
> @@ -97,7 +98,8 @@ _GLOBAL(memset)
>         blr
>  EXPORT_SYMBOL(memset)
>
> -_GLOBAL_TOC(memmove)
> +_GLOBAL_TOC(__memmove)
> +KASAN_OVERRIDE(memmove, __memmove)
>         cmplw   0,r3,r4
>         bgt     backwards_memcpy
>         b       memcpy
> diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
> index 844d8e774492..21aee60de2cd 100644
> --- a/arch/powerpc/lib/memcmp_64.S
> +++ b/arch/powerpc/lib/memcmp_64.S
> @@ -102,7 +102,8 @@
>   * 2) src/dst has different offset to the 8 bytes boundary. The handlers
>   * are named like .Ldiffoffset_xxxx
>   */
> -_GLOBAL_TOC(memcmp)
> +_GLOBAL_TOC(__memcmp)
> +KASAN_OVERRIDE(memcmp, __memcmp)
>         cmpdi   cr1,r5,0
>
>         /* Use the short loop if the src/dst addresses are not
> @@ -630,4 +631,4 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>         b       .Lcmp_lt32bytes
>
>  #endif
> -EXPORT_SYMBOL(memcmp)
> +EXPORT_SYMBOL_NOKASAN(memcmp)
> diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
> index 273ea67e60a1..e9092a0e531a 100644
> --- a/arch/powerpc/lib/memcpy_64.S
> +++ b/arch/powerpc/lib/memcpy_64.S
> @@ -18,7 +18,8 @@
>  #endif
>
>         .align  7
> -_GLOBAL_TOC(memcpy)
> +_GLOBAL_TOC(__memcpy)
> +KASAN_OVERRIDE(memcpy, __memcpy)
>  BEGIN_FTR_SECTION
>  #ifdef __LITTLE_ENDIAN__
>         cmpdi   cr7,r5,0
> diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S
> index 4b41970e9ed8..09deaac6e5f1 100644
> --- a/arch/powerpc/lib/string.S
> +++ b/arch/powerpc/lib/string.S
> @@ -16,7 +16,8 @@
>
>  /* This clears out any unused part of the destination buffer,
>     just as the libc version does.  -- paulus */
> -_GLOBAL(strncpy)
> +_GLOBAL(__strncpy)
> +KASAN_OVERRIDE(strncpy, __strncpy)
>         PPC_LCMPI 0,r5,0
>         beqlr
>         mtctr   r5
> @@ -34,9 +35,10 @@ _GLOBAL(strncpy)
>  2:     stbu    r0,1(r6)        /* clear it out if so */
>         bdnz    2b
>         blr
> -EXPORT_SYMBOL(strncpy)
> +EXPORT_SYMBOL_NOKASAN(strncpy)
>
> -_GLOBAL(strncmp)
> +_GLOBAL(__strncmp)
> +KASAN_OVERRIDE(strncmp, __strncmp)
>         PPC_LCMPI 0,r5,0
>         beq-    2f
>         mtctr   r5
> @@ -52,9 +54,10 @@ _GLOBAL(strncmp)
>         blr
>  2:     li      r3,0
>         blr
> -EXPORT_SYMBOL(strncmp)
> +EXPORT_SYMBOL_NOKASAN(strncmp)
>
> -_GLOBAL(memchr)
> +_GLOBAL(__memchr)
> +KASAN_OVERRIDE(memchr, __memchr)
>         PPC_LCMPI 0,r5,0
>         beq-    2f
>         mtctr   r5
> @@ -66,4 +69,4 @@ _GLOBAL(memchr)
>         beqlr
>  2:     li      r3,0
>         blr
> -EXPORT_SYMBOL(memchr)
> +EXPORT_SYMBOL_NOKASAN(memchr)
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 457c0ea2b5e7..d974f7bcb177 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -7,6 +7,8 @@ ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
>
>  CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
>
> +KASAN_SANITIZE_fsl_booke_mmu.o := n
> +
>  obj-y                          := fault.o mem.o pgtable.o mmap.o \
>                                    init_$(BITS).o pgtable_$(BITS).o \
>                                    init-common.o mmu_context.o drmem.o
> diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
> index 6577897673dd..f8f164ad8ade 100644
> --- a/arch/powerpc/mm/kasan/Makefile
> +++ b/arch/powerpc/mm/kasan/Makefile
> @@ -3,3 +3,4 @@
>  KASAN_SANITIZE := n
>
>  obj-$(CONFIG_PPC32)           += kasan_init_32.o
> +obj-$(CONFIG_PPC_BOOK3E_64)   += kasan_init_book3e_64.o
> diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> new file mode 100644
> index 000000000000..93b9afcf1020
> --- /dev/null
> +++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> @@ -0,0 +1,53 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#define DISABLE_BRANCH_PROFILING
> +
> +#include <linux/kasan.h>
> +#include <linux/printk.h>
> +#include <linux/memblock.h>
> +#include <linux/sched/task.h>
> +#include <asm/pgalloc.h>
> +
> +DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
> +EXPORT_SYMBOL(powerpc_kasan_enabled_key);
> +unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
> +
> +static void __init kasan_init_region(struct memblock_region *reg)
> +{
> +       void *start = __va(reg->base);
> +       void *end = __va(reg->base + reg->size);
> +       unsigned long k_start, k_end, k_cur;
> +
> +       if (start >= end)
> +               return;
> +
> +       k_start = (unsigned long)kasan_mem_to_shadow(start);
> +       k_end = (unsigned long)kasan_mem_to_shadow(end);
> +
> +       for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> +               void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +               map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
> +       }
> +       flush_tlb_kernel_range(k_start, k_end);
> +}
> +
> +void __init kasan_init(void)
> +{
> +       struct memblock_region *reg;
> +
> +       for_each_memblock(memory, reg)
> +               kasan_init_region(reg);
> +
> +       /* map the zero page RO */
> +       map_kernel_page((unsigned long)kasan_zero_page,
> +                                       __pa(kasan_zero_page), PAGE_KERNEL_RO);
> +
> +       kasan_init_tags();
> +
> +       /* Turn on checking */
> +       static_branch_inc(&powerpc_kasan_enabled_key);
> +
> +       /* Enable error messages */
> +       init_task.kasan_depth = 0;
> +       pr_info("KASAN init done (64-bit Book3E)\n");
> +}
> diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
> index 4314ba5baf43..7c6d8b14f440 100644
> --- a/arch/powerpc/purgatory/Makefile
> +++ b/arch/powerpc/purgatory/Makefile
> @@ -1,4 +1,7 @@
>  # SPDX-License-Identifier: GPL-2.0
> +
> +KASAN_SANITIZE := n
> +
>  targets += trampoline.o purgatory.ro kexec-purgatory.c
>
>  LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined
> diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
> index 878f9c1d3615..064f7062c0a3 100644
> --- a/arch/powerpc/xmon/Makefile
> +++ b/arch/powerpc/xmon/Makefile
> @@ -6,6 +6,7 @@ subdir-ccflags-y := $(call cc-disable-warning, builtin-requires-header)
>
>  GCOV_PROFILE := n
>  UBSAN_SANITIZE := n
> +KASAN_SANITIZE := n
>
>  # Disable ftrace for the entire directory
>  ORIG_CFLAGS := $(KBUILD_CFLAGS)
> --
> 2.19.1
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E
  2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
                   ` (4 preceding siblings ...)
  2019-02-15  0:04 ` [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E Daniel Axtens
@ 2019-02-15 16:39 ` Christophe Leroy
  2019-02-17  6:34 ` Balbir Singh
  6 siblings, 0 replies; 27+ messages in thread
From: Christophe Leroy @ 2019-02-15 16:39 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora; +Cc: linuxppc-dev, kasan-dev



On 02/15/2019 12:04 AM, Daniel Axtens wrote:
> Building on the work of Christophe, Aneesh and Balbir, I've ported
> KASAN to the e6500, a 64-bit Book3E processor which doesn't have a
> hashed page table. It applies on top of Christophe's series, v5.
> 
> It requires some changes to the KASAN core - please let me know if
> these are problematic and we see if an alternative approach is
> possible.
> 
> The KASAN shadow area is mapped into vmemmap space:
> 0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
> To do this we require that vmemmap be disabled. (This is the default
> in the kernel config that QorIQ provides for the machine in their
> SDK anyway - they use flat memory.)
> 
> Only outline instrumentation is supported and only KASAN_MINIMAL works.
> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
> ioremap areas (also in 0x800...) are all mapped to a zero page. As
> with the Book3S hash series, this requires overriding the memory <->
> shadow mapping.
> 
> Also, as with both previous 64-bit series, early instrumentation is not
> supported.
> 
> KVM, kexec and xmon have not been tested.
> 
> Thanks to those who have done the heavy lifting over the past several years:
>   - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
>   - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>   - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
> 
> While useful if you have a Book3E device, this is mostly intended
> as a warm-up exercise for reviving Aneesh's series for book3s hash.
> In particular, changes to the kasan core are going to be required
> for hash and radix as well.
> 
> Regards,
> Daniel

Hi Daniel,

I'll look into your series in more detail later; for now I just want to 
let you know that I get a build failure:

   LD      vmlinux.o
lib/string.o: In function `memcmp':
/root/linux-powerpc/lib/string.c:857: multiple definition of `memcmp'
arch/powerpc/lib/memcmp_32.o:/root/linux-powerpc/arch/powerpc/lib/memcmp_32.S:16: 
first defined here


Christophe

> 
> Daniel Axtens (5):
>    kasan: do not open-code addr_has_shadow
>    kasan: allow architectures to manage the memory-to-shadow mapping
>    kasan: allow architectures to provide an outline readiness check
>    powerpc: move KASAN into its own subdirectory
>    powerpc: KASAN for 64bit Book3E
> 
>   arch/powerpc/Kconfig                          |  1 +
>   arch/powerpc/Makefile                         |  2 +
>   arch/powerpc/include/asm/kasan.h              | 77 +++++++++++++++++--
>   arch/powerpc/include/asm/ppc_asm.h            |  7 ++
>   arch/powerpc/include/asm/string.h             |  7 +-
>   arch/powerpc/lib/mem_64.S                     |  6 +-
>   arch/powerpc/lib/memcmp_64.S                  |  5 +-
>   arch/powerpc/lib/memcpy_64.S                  |  3 +-
>   arch/powerpc/lib/string.S                     | 15 ++--
>   arch/powerpc/mm/Makefile                      |  4 +-
>   arch/powerpc/mm/kasan/Makefile                |  6 ++
>   .../{kasan_init.c => kasan/kasan_init_32.c}   |  0
>   arch/powerpc/mm/kasan/kasan_init_book3e_64.c  | 53 +++++++++++++
>   arch/powerpc/purgatory/Makefile               |  3 +
>   arch/powerpc/xmon/Makefile                    |  1 +
>   include/linux/kasan.h                         |  6 ++
>   mm/kasan/generic.c                            |  5 +-
>   mm/kasan/generic_report.c                     |  2 +-
>   mm/kasan/kasan.h                              |  6 +-
>   mm/kasan/report.c                             |  6 +-
>   mm/kasan/tags.c                               |  3 +-
>   21 files changed, 188 insertions(+), 30 deletions(-)
>   create mode 100644 arch/powerpc/mm/kasan/Makefile
>   rename arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} (100%)
>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E
  2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
                   ` (5 preceding siblings ...)
  2019-02-15 16:39 ` [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Christophe Leroy
@ 2019-02-17  6:34 ` Balbir Singh
  2019-02-19  6:35   ` Daniel Axtens
  6 siblings, 1 reply; 27+ messages in thread
From: Balbir Singh @ 2019-02-17  6:34 UTC (permalink / raw)
  To: Daniel Axtens; +Cc: aneesh.kumar, linuxppc-dev, kasan-dev

On Fri, Feb 15, 2019 at 11:04:36AM +1100, Daniel Axtens wrote:
> Building on the work of Christophe, Aneesh and Balbir, I've ported
> KASAN to the e6500, a 64-bit Book3E processor which doesn't have a
> hashed page table. It applies on top of Christophe's series, v5.
> 
> It requires some changes to the KASAN core - please let me know if
> these are problematic and we see if an alternative approach is
> possible.
> 
> The KASAN shadow area is mapped into vmemmap space:
> 0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
> To do this we require that vmemmap be disabled. (This is the default
> in the kernel config that QorIQ provides for the machine in their
> SDK anyway - they use flat memory.)
> 
> Only outline instrumentation is supported and only KASAN_MINIMAL works.
> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
> ioremap areas (also in 0x800...) are all mapped to a zero page. As
> with the Book3S hash series, this requires overriding the memory <->
> shadow mapping.
> 
> Also, as with both previous 64-bit series, early instrumentation is not
> supported.
> 
> KVM, kexec and xmon have not been tested.
> 
> Thanks to those who have done the heavy lifting over the past several years:
>  - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
>  - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>  - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
> 
> While useful if you have a Book3E device, this is mostly intended
> as a warm-up exercise for reviving Aneesh's series for book3s hash.
> In particular, changes to the kasan core are going to be required
> for hash and radix as well.
>

Thanks for following through with this, could you please share details on
how you've been testing this?

I know QEMU supports -cpu e6500, but beyond that, what does the machine
look like?

Balbir Singh. 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check
  2019-02-15  0:04 ` [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check Daniel Axtens
  2019-02-15  8:25   ` Dmitry Vyukov
@ 2019-02-17 12:05   ` christophe leroy
  2019-02-18  6:13     ` Daniel Axtens
  1 sibling, 1 reply; 27+ messages in thread
From: christophe leroy @ 2019-02-17 12:05 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev



On 15/02/2019 at 01:04, Daniel Axtens wrote:
> In powerpc (as I understand it), we spend a lot of time in boot
> running in real mode before MMU paging is initialised. During
> this time we call a lot of generic code, including printk(). If
> we try to access the shadow region during this time, things fail.
> 
> My attempts to move early init before the first printk have not
> been successful. (Both previous RFCs for ppc64 - by 2 different
> people - have needed this trick too!)
> 
> So, allow architectures to define a check_return_arch_not_ready()
> hook that bails out of check_memory_region_inline() unless the
> arch has done all of the init.
> 
> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
> Originally-by: Balbir Singh <bsingharora@gmail.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> ---
>   include/linux/kasan.h | 4 ++++
>   mm/kasan/generic.c    | 2 ++
>   2 files changed, 6 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index f6261840f94c..83edc5e2b6a0 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -14,6 +14,10 @@ struct task_struct;
>   #include <asm/kasan.h>
>   #include <asm/pgtable.h>
>   
> +#ifndef check_return_arch_not_ready
> +#define check_return_arch_not_ready()	do { } while (0)
> +#endif

A static inline would be better, I believe.

Something like

#ifndef kasan_arch_is_ready
static inline bool kasan_arch_is_ready(void) { return true; }
#endif

> +
>   extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>   extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>   extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index bafa2f986660..4c18bbd09a20 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>   						size_t size, bool write,
>   						unsigned long ret_ip)
>   {
> +	check_return_arch_not_ready();
> +

It's not good for readability that the above macro embeds a return; something 
like the below would be better, I think:

	if (!kasan_arch_is_ready())
		return;

Unless somebody minds, I'll do the change and take this patch in my 
series in order to handle the case of book3s/32 hash.

Christophe

>   	if (unlikely(size == 0))
>   		return;
>   
> 

---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E
  2019-02-15  0:04 ` [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E Daniel Axtens
  2019-02-15  8:28   ` Dmitry Vyukov
@ 2019-02-17 14:06   ` christophe leroy
  2019-02-18 19:26   ` Christophe Leroy
  2 siblings, 0 replies; 27+ messages in thread
From: christophe leroy @ 2019-02-17 14:06 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev



On 15/02/2019 at 01:04, Daniel Axtens wrote:
> Wire up KASAN. Only outline instrumentation is supported.
> 
> The KASAN shadow area is mapped into vmemmap space:
> 0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
> To do this we require that vmemmap be disabled. (This is the default
> in the kernel config that QorIQ provides for the machine in their
> SDK anyway - they use flat memory.)
> 
> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
> ioremap areas (also in 0x800...) are all mapped to a zero page. As
> with the Book3S hash series, this requires overriding the memory <->
> shadow mapping.
> 
> Also, as with both previous 64-bit series, early instrumentation is not
> supported.  It would allow us to drop the check_return_arch_not_ready()
> hook in the KASAN core, but it's tricky to get it set up early enough:
> we need it setup before the first call to instrumented code like printk().
> Perhaps in the future.
> 
> Only KASAN_MINIMAL works.
> 
> Lightly tested on e6500. KVM, kexec and xmon have not been tested.
> 
> The test_kasan module fires warnings as expected, except for the
> following tests:
> 
>   - Expected/by design:
> kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
> 
>   - Due to only supporting KASAN_MINIMAL:
> kasan test: kasan_stack_oob out-of-bounds on stack
> kasan test: kasan_global_oob out-of-bounds global variable
> kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
> kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
> kasan test: use_after_scope_test use-after-scope on int
> kasan test: use_after_scope_test use-after-scope on array
> 
> Thanks to those who have done the heavy lifting over the past several years:
>   - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html

You're welcome.

>   - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>   - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
> 
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Balbir Singh <bsingharora@gmail.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> 
> ---
> 
> While useful if you have a book3e device, this is mostly intended
> as a warm-up exercise for reviving Aneesh's series for book3s hash.
> In particular, changes to the kasan core are going to be required
> for hash and radix as well.

And part of it will be needed for hash32 as well, until we implement an 
early static hash table.

> ---
>   arch/powerpc/Kconfig                         |  1 +
>   arch/powerpc/Makefile                        |  2 +
>   arch/powerpc/include/asm/kasan.h             | 77 ++++++++++++++++++--
>   arch/powerpc/include/asm/ppc_asm.h           |  7 ++
>   arch/powerpc/include/asm/string.h            |  7 +-
>   arch/powerpc/lib/mem_64.S                    |  6 +-
>   arch/powerpc/lib/memcmp_64.S                 |  5 +-
>   arch/powerpc/lib/memcpy_64.S                 |  3 +-
>   arch/powerpc/lib/string.S                    | 15 ++--
>   arch/powerpc/mm/Makefile                     |  2 +
>   arch/powerpc/mm/kasan/Makefile               |  1 +
>   arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 53 ++++++++++++++
>   arch/powerpc/purgatory/Makefile              |  3 +
>   arch/powerpc/xmon/Makefile                   |  1 +
>   14 files changed, 164 insertions(+), 19 deletions(-)
>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 850b06def84f..2c7c20d52778 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -176,6 +176,7 @@ config PPC
>   	select HAVE_ARCH_AUDITSYSCALL
>   	select HAVE_ARCH_JUMP_LABEL
>   	select HAVE_ARCH_KASAN			if PPC32
> +	select HAVE_ARCH_KASAN			if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
>   	select HAVE_ARCH_KGDB
>   	select HAVE_ARCH_MMAP_RND_BITS
>   	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
> diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
> index f0738099e31e..21c2dadf0315 100644
> --- a/arch/powerpc/Makefile
> +++ b/arch/powerpc/Makefile
> @@ -428,11 +428,13 @@ endif
>   endif
>   
>   ifdef CONFIG_KASAN
> +ifdef CONFIG_PPC32
>   prepare: kasan_prepare
>   
>   kasan_prepare: prepare0
>          $(eval KASAN_SHADOW_OFFSET = $(shell awk '{if ($$2 == "KASAN_SHADOW_OFFSET") print $$3;}' include/generated/asm-offsets.h))
>   endif
> +endif
>   
>   # Check toolchain versions:
>   # - gcc-4.6 is the minimum kernel-wide version so nothing required.
> diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
> index 5d0088429b62..c2f6f05dfaa3 100644
> --- a/arch/powerpc/include/asm/kasan.h
> +++ b/arch/powerpc/include/asm/kasan.h
> @@ -5,20 +5,85 @@
>   #ifndef __ASSEMBLY__
>   
>   #include <asm/page.h>
> +#include <asm/pgtable.h>
>   #include <asm/pgtable-types.h>
> -#include <asm/fixmap.h>
>   
>   #define KASAN_SHADOW_SCALE_SHIFT	3
> -#define KASAN_SHADOW_SIZE	((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
>   
> -#define KASAN_SHADOW_START	(ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
> -					    PGDIR_SIZE))
> -#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
>   #define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_START - \
>   				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
> +#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
> +
> +
> +#ifdef CONFIG_PPC32
> +#include <asm/fixmap.h>
> +#define KASAN_SHADOW_START	(ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
> +					    PGDIR_SIZE))
> +#define KASAN_SHADOW_SIZE	((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
>   
>   void kasan_early_init(void);
> +
> +#endif /* CONFIG_PPC32 */

All the above is a bit messy. I'll reorder this file in my series so 
that when your patch comes in it doesn't reshuffle existing lines.

> +
> +#ifdef CONFIG_PPC_BOOK3E_64
> +#define KASAN_SHADOW_START VMEMMAP_BASE
> +#define KASAN_SHADOW_SIZE	(KERN_VIRT_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
> +
> +extern struct static_key_false powerpc_kasan_enabled_key;
> +#define check_return_arch_not_ready() \
> +	do {								\
> +		if (!static_branch_likely(&powerpc_kasan_enabled_key))	\
> +			return;						\
> +	} while (0)

It would look better like this:

static inline bool kasan_arch_is_ready(void)
{
	if (static_branch_likely(&powerpc_kasan_enabled_key))
		return true;
	return false;
}

> +
> +extern unsigned char kasan_zero_page[PAGE_SIZE];
> +static inline void *kasan_mem_to_shadow_book3e(const void *addr)
> +{
> +	if ((unsigned long)addr >= KERN_VIRT_START &&
> +		(unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
> +		return (void *)kasan_zero_page;
> +	}
> +
> +	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> +		+ KASAN_SHADOW_OFFSET;
> +}
> +#define kasan_mem_to_shadow kasan_mem_to_shadow_book3e
> +
> +static inline void *kasan_shadow_to_mem_book3e(const void *shadow_addr)
> +{
> +	/*
> +	 * We map the entire non-linear virtual mapping onto the zero page so if
> +	 * we are asked to map the zero page back just pick the beginning of that
> +	 * area.
> +	 */
> +	if (shadow_addr >= (void *)kasan_zero_page &&
> +		shadow_addr < (void *)(kasan_zero_page + PAGE_SIZE)) {
> +		return (void *)KERN_VIRT_START;
> +	}
> +
> +	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
> +		<< KASAN_SHADOW_SCALE_SHIFT);
> +}
> +#define kasan_shadow_to_mem kasan_shadow_to_mem_book3e
> +
> +static inline bool kasan_addr_has_shadow_book3e(const void *addr)
> +{
> +	/*
> +	 * We want to specifically assert that the addresses in the 0x8000...
> +	 * region have a shadow, otherwise they are considered by the kasan
> +	 * core to be wild pointers
> +	 */
> +	if ((unsigned long)addr >= KERN_VIRT_START &&
> +		(unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
> +		return true;
> +	}
> +	return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
> +}
> +#define kasan_addr_has_shadow kasan_addr_has_shadow_book3e
> +
> +#endif /* CONFIG_PPC_BOOK3E_64 */
> +
>   void kasan_init(void);
>   
> -#endif
> +#endif /* CONFIG_KASAN */

The above endif is for __ASSEMBLY__

>   #endif
> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
> index dba2c1038363..fd7c9fa9d307 100644
> --- a/arch/powerpc/include/asm/ppc_asm.h
> +++ b/arch/powerpc/include/asm/ppc_asm.h
> @@ -251,10 +251,17 @@ GLUE(.,name):
>   
>   #define _GLOBAL_TOC(name) _GLOBAL(name)
>   
> +#endif /* 32-bit */
> +
> +/* KASAN helpers */
>   #define KASAN_OVERRIDE(x, y) \
>   	.weak x;	     \
>   	.set x, y
>   

I'll leave it out of the PPC32 section in my series; it's harmless.

> +#ifdef CONFIG_KASAN
> +#define EXPORT_SYMBOL_NOKASAN(x)
> +#else
> +#define EXPORT_SYMBOL_NOKASAN(x) EXPORT_SYMBOL(x)
>   #endif

I can't see the point of the above. Is it worth still having the 
functions when nobody is going to use them?

>   
>   /*
> diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
> index 64d44d4836b4..e2801d517d57 100644
> --- a/arch/powerpc/include/asm/string.h
> +++ b/arch/powerpc/include/asm/string.h
> @@ -4,13 +4,16 @@
>   
>   #ifdef __KERNEL__
>   
> +#ifndef CONFIG_KASAN
>   #define __HAVE_ARCH_STRNCPY
>   #define __HAVE_ARCH_STRNCMP
> +#define __HAVE_ARCH_MEMCHR
> +#define __HAVE_ARCH_MEMCMP
> +#endif
> +

Good catch: we can't use the optimised version when CONFIG_KASAN is set 
until kasan implements verifications with check_memory_region() as it 
does for memmove(), memcpy() and memset().

I'll take that in my series.

>   #define __HAVE_ARCH_MEMSET
>   #define __HAVE_ARCH_MEMCPY
>   #define __HAVE_ARCH_MEMMOVE
> -#define __HAVE_ARCH_MEMCMP
> -#define __HAVE_ARCH_MEMCHR
>   #define __HAVE_ARCH_MEMSET16
>   #define __HAVE_ARCH_MEMCPY_FLUSHCACHE
>   
> diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
> index 3c3be02f33b7..3ff4c6b45505 100644
> --- a/arch/powerpc/lib/mem_64.S
> +++ b/arch/powerpc/lib/mem_64.S
> @@ -30,7 +30,8 @@ EXPORT_SYMBOL(__memset16)
>   EXPORT_SYMBOL(__memset32)
>   EXPORT_SYMBOL(__memset64)
>   
> -_GLOBAL(memset)
> +_GLOBAL(__memset)
> +KASAN_OVERRIDE(memset, __memset)
>   	neg	r0,r3
>   	rlwimi	r4,r4,8,16,23
>   	andi.	r0,r0,7			/* # bytes to be 8-byte aligned */
> @@ -97,7 +98,8 @@ _GLOBAL(memset)
>   	blr
>   EXPORT_SYMBOL(memset)
>   
> -_GLOBAL_TOC(memmove)
> +_GLOBAL_TOC(__memmove)
> +KASAN_OVERRIDE(memmove, __memmove)
>   	cmplw	0,r3,r4
>   	bgt	backwards_memcpy
>   	b	memcpy
> diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
> index 844d8e774492..21aee60de2cd 100644
> --- a/arch/powerpc/lib/memcmp_64.S
> +++ b/arch/powerpc/lib/memcmp_64.S
> @@ -102,7 +102,8 @@
>    * 2) src/dst has different offset to the 8 bytes boundary. The handlers
>    * are named like .Ldiffoffset_xxxx
>    */
> -_GLOBAL_TOC(memcmp)
> +_GLOBAL_TOC(__memcmp)
> +KASAN_OVERRIDE(memcmp, __memcmp)
>   	cmpdi	cr1,r5,0
>   
>   	/* Use the short loop if the src/dst addresses are not
> @@ -630,4 +631,4 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>   	b	.Lcmp_lt32bytes
>   
>   #endif
> -EXPORT_SYMBOL(memcmp)
> +EXPORT_SYMBOL_NOKASAN(memcmp)

That's pointless. Nobody is going to call __memcmp(), so we should just 
not compile it in when CONFIG_KASAN is defined. Same for memchr(), 
strncpy() and strncmp().

I'll do it in my series.

> diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
> index 273ea67e60a1..e9092a0e531a 100644
> --- a/arch/powerpc/lib/memcpy_64.S
> +++ b/arch/powerpc/lib/memcpy_64.S
> @@ -18,7 +18,8 @@
>   #endif
>   
>   	.align	7
> -_GLOBAL_TOC(memcpy)
> +_GLOBAL_TOC(__memcpy)
> +KASAN_OVERRIDE(memcpy, __memcpy)
>   BEGIN_FTR_SECTION
>   #ifdef __LITTLE_ENDIAN__
>   	cmpdi	cr7,r5,0
> diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S
> index 4b41970e9ed8..09deaac6e5f1 100644
> --- a/arch/powerpc/lib/string.S
> +++ b/arch/powerpc/lib/string.S
> @@ -16,7 +16,8 @@
>   	
>   /* This clears out any unused part of the destination buffer,
>      just as the libc version does.  -- paulus */
> -_GLOBAL(strncpy)
> +_GLOBAL(__strncpy)
> +KASAN_OVERRIDE(strncpy, __strncpy)
>   	PPC_LCMPI 0,r5,0
>   	beqlr
>   	mtctr	r5
> @@ -34,9 +35,10 @@ _GLOBAL(strncpy)
>   2:	stbu	r0,1(r6)	/* clear it out if so */
>   	bdnz	2b
>   	blr
> -EXPORT_SYMBOL(strncpy)
> +EXPORT_SYMBOL_NOKASAN(strncpy)
>   
> -_GLOBAL(strncmp)
> +_GLOBAL(__strncmp)
> +KASAN_OVERRIDE(strncmp, __strncmp)
>   	PPC_LCMPI 0,r5,0
>   	beq-	2f
>   	mtctr	r5
> @@ -52,9 +54,10 @@ _GLOBAL(strncmp)
>   	blr
>   2:	li	r3,0
>   	blr
> -EXPORT_SYMBOL(strncmp)
> +EXPORT_SYMBOL_NOKASAN(strncmp)
>   
> -_GLOBAL(memchr)
> +_GLOBAL(__memchr)
> +KASAN_OVERRIDE(memchr, __memchr)
>   	PPC_LCMPI 0,r5,0
>   	beq-	2f
>   	mtctr	r5
> @@ -66,4 +69,4 @@ _GLOBAL(memchr)
>   	beqlr
>   2:	li	r3,0
>   	blr
> -EXPORT_SYMBOL(memchr)
> +EXPORT_SYMBOL_NOKASAN(memchr)
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 457c0ea2b5e7..d974f7bcb177 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -7,6 +7,8 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
>   
>   CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
>   
> +KASAN_SANITIZE_fsl_booke_mmu.o := n
> +
>   obj-y				:= fault.o mem.o pgtable.o mmap.o \
>   				   init_$(BITS).o pgtable_$(BITS).o \
>   				   init-common.o mmu_context.o drmem.o
> diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
> index 6577897673dd..f8f164ad8ade 100644
> --- a/arch/powerpc/mm/kasan/Makefile
> +++ b/arch/powerpc/mm/kasan/Makefile
> @@ -3,3 +3,4 @@
>   KASAN_SANITIZE := n
>   
>   obj-$(CONFIG_PPC32)           += kasan_init_32.o
> +obj-$(CONFIG_PPC_BOOK3E_64)   += kasan_init_book3e_64.o
> diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> new file mode 100644
> index 000000000000..93b9afcf1020
> --- /dev/null
> +++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> @@ -0,0 +1,53 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#define DISABLE_BRANCH_PROFILING
> +
> +#include <linux/kasan.h>
> +#include <linux/printk.h>
> +#include <linux/memblock.h>
> +#include <linux/sched/task.h>
> +#include <asm/pgalloc.h>
> +
> +DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
> +EXPORT_SYMBOL(powerpc_kasan_enabled_key);
> +unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;

Why not use the existing kasan_early_shadow_page[] defined in 
mm/kasan/init.c? (It was called kasan_zero_page before.)

> +
> +static void __init kasan_init_region(struct memblock_region *reg)
> +{
> +	void *start = __va(reg->base);
> +	void *end = __va(reg->base + reg->size);
> +	unsigned long k_start, k_end, k_cur;
> +
> +	if (start >= end)
> +		return;
> +
> +	k_start = (unsigned long)kasan_mem_to_shadow(start);
> +	k_end = (unsigned long)kasan_mem_to_shadow(end);
> +
> +	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> +		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

What if memblock_alloc() fails and returns NULL?

> +		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
> +	}
> +	flush_tlb_kernel_range(k_start, k_end);
> +}
> +
> +void __init kasan_init(void)
> +{
> +	struct memblock_region *reg;
> +
> +	for_each_memblock(memory, reg)
> +		kasan_init_region(reg);
> +
> +	/* map the zero page RO */
> +	map_kernel_page((unsigned long)kasan_zero_page,
> +					__pa(kasan_zero_page), PAGE_KERNEL_RO);

This page is already mapped. Shouldn't the change be done with a kind of 
page-rights-updating function?

> +
> +	kasan_init_tags();

This is unneeded; it is specific to arm64.

> +
> +	/* Turn on checking */
> +	static_branch_inc(&powerpc_kasan_enabled_key);
> +
> +	/* Enable error messages */
> +	init_task.kasan_depth = 0;
> +	pr_info("KASAN init done (64-bit Book3E)\n");
> +}
> diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
> index 4314ba5baf43..7c6d8b14f440 100644
> --- a/arch/powerpc/purgatory/Makefile
> +++ b/arch/powerpc/purgatory/Makefile
> @@ -1,4 +1,7 @@
>   # SPDX-License-Identifier: GPL-2.0
> +
> +KASAN_SANITIZE := n
> +

I'll take it in my series.

>   targets += trampoline.o purgatory.ro kexec-purgatory.c
>   
>   LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined
> diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
> index 878f9c1d3615..064f7062c0a3 100644
> --- a/arch/powerpc/xmon/Makefile
> +++ b/arch/powerpc/xmon/Makefile
> @@ -6,6 +6,7 @@ subdir-ccflags-y := $(call cc-disable-warning, builtin-requires-header)
>   
>   GCOV_PROFILE := n
>   UBSAN_SANITIZE := n
> +KASAN_SANITIZE := n
>   

I'll take it in my series.

>   # Disable ftrace for the entire directory
>   ORIG_CFLAGS := $(KBUILD_CFLAGS)
> 

Christophe



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory
  2019-02-15  0:04 ` [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory Daniel Axtens
  2019-02-15  0:24   ` Andrew Donnellan
@ 2019-02-17 16:29   ` christophe leroy
  2019-02-18  9:14     ` Michael Ellerman
  1 sibling, 1 reply; 27+ messages in thread
From: christophe leroy @ 2019-02-17 16:29 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora; +Cc: linuxppc-dev, kasan-dev



On 15/02/2019 at 01:04, Daniel Axtens wrote:
> In preparation for adding ppc64 implementations, break out the
> code into its own subdirectory.

That's not a bad idea; arch/powerpc/mm is rather messy with lots of
subarch stuff.

I'll take it in my series.

Christophe

> 
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> ---
>   arch/powerpc/mm/Makefile                                | 4 +---
>   arch/powerpc/mm/kasan/Makefile                          | 5 +++++
>   arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} | 0
>   3 files changed, 6 insertions(+), 3 deletions(-)
>   create mode 100644 arch/powerpc/mm/kasan/Makefile
>   rename arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} (100%)
> 
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index d6b76f25f6de..457c0ea2b5e7 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -7,8 +7,6 @@ ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
>   
>   CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
>   
> -KASAN_SANITIZE_kasan_init.o := n
> -
>   obj-y				:= fault.o mem.o pgtable.o mmap.o \
>   				   init_$(BITS).o pgtable_$(BITS).o \
>   				   init-common.o mmu_context.o drmem.o
> @@ -57,4 +55,4 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= dump_linuxpagetables-book3s64.o
>   endif
>   obj-$(CONFIG_PPC_HTDUMP)	+= dump_hashpagetable.o
>   obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
> -obj-$(CONFIG_KASAN)		+= kasan_init.o
> +obj-$(CONFIG_KASAN)		+= kasan/
> diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
> new file mode 100644
> index 000000000000..6577897673dd
> --- /dev/null
> +++ b/arch/powerpc/mm/kasan/Makefile
> @@ -0,0 +1,5 @@
> +# SPDX-License-Identifier: GPL-2.0
> +
> +KASAN_SANITIZE := n
> +
> +obj-$(CONFIG_PPC32)           += kasan_init_32.o
> diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan/kasan_init_32.c
> similarity index 100%
> rename from arch/powerpc/mm/kasan_init.c
> rename to arch/powerpc/mm/kasan/kasan_init_32.c
> 



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check
  2019-02-17 12:05   ` christophe leroy
@ 2019-02-18  6:13     ` Daniel Axtens
  2019-02-25 14:01       ` Christophe Leroy
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Axtens @ 2019-02-18  6:13 UTC (permalink / raw)
  To: christophe leroy, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev

christophe leroy <christophe.leroy@c-s.fr> writes:

> On 15/02/2019 at 01:04, Daniel Axtens wrote:
>> In powerpc (as I understand it), we spend a lot of time in boot
>> running in real mode before MMU paging is initialised. During
>> this time we call a lot of generic code, including printk(). If
>> we try to access the shadow region during this time, things fail.
>> 
>> My attempts to move early init before the first printk have not
>> been successful. (Both previous RFCs for ppc64 - by 2 different
>> people - have needed this trick too!)
>> 
>> So, allow architectures to define a check_return_arch_not_ready()
>> hook that bails out of check_memory_region_inline() unless the
>> arch has done all of the init.
>> 
>> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
>> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
>> Originally-by: Balbir Singh <bsingharora@gmail.com>
>> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> Signed-off-by: Daniel Axtens <dja@axtens.net>
>> ---
>>   include/linux/kasan.h | 4 ++++
>>   mm/kasan/generic.c    | 2 ++
>>   2 files changed, 6 insertions(+)
>> 
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index f6261840f94c..83edc5e2b6a0 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -14,6 +14,10 @@ struct task_struct;
>>   #include <asm/kasan.h>
>>   #include <asm/pgtable.h>
>>   
>> +#ifndef check_return_arch_not_ready
>> +#define check_return_arch_not_ready()	do { } while (0)
>> +#endif
>
> A static inline would be better I believe.
>
> Something like
>
> #ifndef kasan_arch_is_ready
> static inline bool kasan_arch_is_ready(void) { return true; }
> #endif
>
>> +
>>   extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>>   extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>>   extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
>> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
>> index bafa2f986660..4c18bbd09a20 100644
>> --- a/mm/kasan/generic.c
>> +++ b/mm/kasan/generic.c
>> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>>   						size_t size, bool write,
>>   						unsigned long ret_ip)
>>   {
>> +	check_return_arch_not_ready();
>> +
>
> It's not good for readability that the above macro embeds a return;
> something like below would be better I think:
>
> 	if (!kasan_arch_is_ready())
> 		return;
>
> Unless somebody minds, I'll do the change and take this patch in my 
> series in order to handle the case of book3s/32 hash.

Please do; feel free to take as many of the patches as you would like
and I'll rebase whatever is left on the next version of your series.

The idea with the macro magic was to take advantage of the speed of
static keys (I think, I borrowed it from Balbir's patch). Perhaps an
inline function will achieve this anyway, but given that KASAN with
outline instrumentation is inevitably slow, I guess it doesn't matter
much either way.

Regards,
Daniel
>
> Christophe
>
>>   	if (unlikely(size == 0))
>>   		return;
>>   
>> 
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory
  2019-02-17 16:29   ` christophe leroy
@ 2019-02-18  9:14     ` Michael Ellerman
  2019-02-18 12:27       ` Christophe Leroy
  0 siblings, 1 reply; 27+ messages in thread
From: Michael Ellerman @ 2019-02-18  9:14 UTC (permalink / raw)
  To: christophe leroy, Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, kasan-dev

christophe leroy <christophe.leroy@c-s.fr> writes:

> On 15/02/2019 at 01:04, Daniel Axtens wrote:
>> In preparation for adding ppc64 implementations, break out the
>> code into its own subdirectory.
>
> That's not a bad idea, arch/powerpc/mm is rather messy with a lot of
> subarch stuff.

I'm always happy to have more directories with more focused content.

cheers

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory
  2019-02-18  9:14     ` Michael Ellerman
@ 2019-02-18 12:27       ` Christophe Leroy
  2019-02-19  0:44         ` Michael Ellerman
  0 siblings, 1 reply; 27+ messages in thread
From: Christophe Leroy @ 2019-02-18 12:27 UTC (permalink / raw)
  To: Michael Ellerman, Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, kasan-dev



On 18/02/2019 at 10:14, Michael Ellerman wrote:
> christophe leroy <christophe.leroy@c-s.fr> writes:
> 
>> On 15/02/2019 at 01:04, Daniel Axtens wrote:
>>> In preparation for adding ppc64 implementations, break out the
>>> code into its own subdirectory.
>>
>> That's not a bad idea, arch/powerpc/mm is rather messy with a lot of
>> subarch stuff.
> 
> I'm always happy to have more directories with more focused content.
> 

Nice to know how to make you happy :)

I'll send you a patch for moving all page table dumping stuff into a 
subdirectory.

Christophe

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E
  2019-02-15  0:04 ` [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E Daniel Axtens
  2019-02-15  8:28   ` Dmitry Vyukov
  2019-02-17 14:06   ` christophe leroy
@ 2019-02-18 19:26   ` Christophe Leroy
  2019-02-19  0:14     ` Daniel Axtens
  2 siblings, 1 reply; 27+ messages in thread
From: Christophe Leroy @ 2019-02-18 19:26 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev



On 15/02/2019 at 01:04, Daniel Axtens wrote:
> Wire up KASAN. Only outline instrumentation is supported.
> 
> The KASAN shadow area is mapped into vmemmap space:
> 0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
> To do this we require that vmemmap be disabled. (This is the default
> in the kernel config that QorIQ provides for the machine in their
> SDK anyway - they use flat memory.)
> 
> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
> ioremap areas (also in 0x800...) are all mapped to a zero page. As
> with the Book3S hash series, this requires overriding the memory <->
> shadow mapping.
> 
> Also, as with both previous 64-bit series, early instrumentation is not
> supported.  It would allow us to drop the check_return_arch_not_ready()
> hook in the KASAN core, but it's tricky to get it set up early enough:
> we need it setup before the first call to instrumented code like printk().
> Perhaps in the future.
> 
> Only KASAN_MINIMAL works.
> 
> Lightly tested on e6500. KVM, kexec and xmon have not been tested.
> 
> The test_kasan module fires warnings as expected, except for the
> following tests:
> 
>   - Expected/by design:
> kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
> 
>   - Due to only supporting KASAN_MINIMAL:
> kasan test: kasan_stack_oob out-of-bounds on stack
> kasan test: kasan_global_oob out-of-bounds global variable
> kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
> kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
> kasan test: use_after_scope_test use-after-scope on int
> kasan test: use_after_scope_test use-after-scope on array
> 
> Thanks to those who have done the heavy lifting over the past several years:
>   - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
>   - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>   - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
> 
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Balbir Singh <bsingharora@gmail.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> 
> ---
> 
> While useful if you have a book3e device, this is mostly intended
> as a warm-up exercise for reviving Aneesh's series for book3s hash.
> In particular, changes to the kasan core are going to be required
> for hash and radix as well.
> ---
>   arch/powerpc/Kconfig                         |  1 +
>   arch/powerpc/Makefile                        |  2 +
>   arch/powerpc/include/asm/kasan.h             | 77 ++++++++++++++++++--
>   arch/powerpc/include/asm/ppc_asm.h           |  7 ++
>   arch/powerpc/include/asm/string.h            |  7 +-
>   arch/powerpc/lib/mem_64.S                    |  6 +-
>   arch/powerpc/lib/memcmp_64.S                 |  5 +-
>   arch/powerpc/lib/memcpy_64.S                 |  3 +-
>   arch/powerpc/lib/string.S                    | 15 ++--
>   arch/powerpc/mm/Makefile                     |  2 +
>   arch/powerpc/mm/kasan/Makefile               |  1 +
>   arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 53 ++++++++++++++
>   arch/powerpc/purgatory/Makefile              |  3 +
>   arch/powerpc/xmon/Makefile                   |  1 +
>   14 files changed, 164 insertions(+), 19 deletions(-)
>   create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c

[snip]

> diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> new file mode 100644
> index 000000000000..93b9afcf1020
> --- /dev/null
> +++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
> @@ -0,0 +1,53 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#define DISABLE_BRANCH_PROFILING
> +
> +#include <linux/kasan.h>
> +#include <linux/printk.h>
> +#include <linux/memblock.h>
> +#include <linux/sched/task.h>
> +#include <asm/pgalloc.h>
> +
> +DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
> +EXPORT_SYMBOL(powerpc_kasan_enabled_key);

Why does this symbol need to be exported ?

Christophe


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E
  2019-02-18 19:26   ` Christophe Leroy
@ 2019-02-19  0:14     ` Daniel Axtens
  0 siblings, 0 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-19  0:14 UTC (permalink / raw)
  To: Christophe Leroy, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev

>> diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>> new file mode 100644
>> index 000000000000..93b9afcf1020
>> --- /dev/null
>> +++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>> @@ -0,0 +1,53 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#define DISABLE_BRANCH_PROFILING
>> +
>> +#include <linux/kasan.h>
>> +#include <linux/printk.h>
>> +#include <linux/memblock.h>
>> +#include <linux/sched/task.h>
>> +#include <asm/pgalloc.h>
>> +
>> +DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
>> +EXPORT_SYMBOL(powerpc_kasan_enabled_key);
>
> Why does this symbol need to be exported ?

I suppose it probably doesn't! I copied Balbir's code without much
thought as it seemed a lot smarter than my random global variable code.

Regards,
Daniel

>
> Christophe

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory
  2019-02-18 12:27       ` Christophe Leroy
@ 2019-02-19  0:44         ` Michael Ellerman
  0 siblings, 0 replies; 27+ messages in thread
From: Michael Ellerman @ 2019-02-19  0:44 UTC (permalink / raw)
  To: Christophe Leroy, Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, kasan-dev

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> On 18/02/2019 at 10:14, Michael Ellerman wrote:
>> christophe leroy <christophe.leroy@c-s.fr> writes:
>> 
>>> On 15/02/2019 at 01:04, Daniel Axtens wrote:
>>>> In preparation for adding ppc64 implementations, break out the
>>>> code into its own subdirectory.
>>>
>>> That's not a bad idea, arch/powerpc/mm is rather messy with a lot of
>>> subarch stuff.
>> 
>> I'm always happy to have more directories with more focused content.
>> 
>
> Nice to know how to make you happy :)

Haha, well there is also beer :)

> I'll send you a patch for moving all page table dumping stuff into a 
> subdirectory.

Thanks.

cheers

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E
  2019-02-17  6:34 ` Balbir Singh
@ 2019-02-19  6:35   ` Daniel Axtens
  0 siblings, 0 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-19  6:35 UTC (permalink / raw)
  To: Balbir Singh; +Cc: aneesh.kumar, linuxppc-dev, kasan-dev

Hi Balbir,


> Thanks for following through with this, could you please share details on
> how you've been testing this?
>
> I know qemu supports qemu -cpu e6500, but beyond that what does the machine
> look like?

I've been using a T4240RDB, so real hardware. It boots both the QorIQ
Yocto-based distro and Debian ppc64. I have run parts of kselftest and
am currently running LTP - so far no errors have triggered.

Regards,
Daniel

>
> Balbir Singh. 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E
  2019-02-15  8:28   ` Dmitry Vyukov
@ 2019-02-19  6:37     ` Daniel Axtens
  0 siblings, 0 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-19  6:37 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Aneesh Kumar K.V, kasan-dev, Aneesh Kumar K . V, linuxppc-dev

Dmitry Vyukov <dvyukov@google.com> writes:

> On Fri, Feb 15, 2019 at 1:05 AM Daniel Axtens <dja@axtens.net> wrote:
>>
>> Wire up KASAN. Only outline instrumentation is supported.
>>
>> The KASAN shadow area is mapped into vmemmap space:
>> 0x8000 0400 0000 0000 to 0x8000 0600 0000 0000.
>> To do this we require that vmemmap be disabled. (This is the default
>> in the kernel config that QorIQ provides for the machine in their
>> SDK anyway - they use flat memory.)
>>
>> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
>> ioremap areas (also in 0x800...) are all mapped to a zero page. As
>> with the Book3S hash series, this requires overriding the memory <->
>> shadow mapping.
>>
>> Also, as with both previous 64-bit series, early instrumentation is not
>> supported.  It would allow us to drop the check_return_arch_not_ready()
>> hook in the KASAN core, but it's tricky to get it set up early enough:
>> we need it setup before the first call to instrumented code like printk().
>> Perhaps in the future.
>>
>> Only KASAN_MINIMAL works.
>>
>> Lightly tested on e6500. KVM, kexec and xmon have not been tested.
>
> Hi Daniel,
>
> This is great!
>
> Not related to the patch, but if you booted a real devices and used it
> to some degree, I wonder if you hit any KASAN reports?

Not yet, but the hope is that I will be able to extend this to book3s
and then it will be more useful in combination with syzkaller.

Regards,
Daniel

>
> Thanks
>
>> The test_kasan module fires warnings as expected, except for the
>> following tests:
>>
>>  - Expected/by design:
>> kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
>>
>>  - Due to only supporting KASAN_MINIMAL:
>> kasan test: kasan_stack_oob out-of-bounds on stack
>> kasan test: kasan_global_oob out-of-bounds global variable
>> kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
>> kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
>> kasan test: use_after_scope_test use-after-scope on int
>> kasan test: use_after_scope_test use-after-scope on array
>>
>> Thanks to those who have done the heavy lifting over the past several years:
>>  - Christophe's 32 bit series: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
>>  - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>>  - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
>>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> Cc: Balbir Singh <bsingharora@gmail.com>
>> Signed-off-by: Daniel Axtens <dja@axtens.net>
>>
>> ---
>>
>> While useful if you have a book3e device, this is mostly intended
>> as a warm-up exercise for reviving Aneesh's series for book3s hash.
>> In particular, changes to the kasan core are going to be required
>> for hash and radix as well.
>> ---
>>  arch/powerpc/Kconfig                         |  1 +
>>  arch/powerpc/Makefile                        |  2 +
>>  arch/powerpc/include/asm/kasan.h             | 77 ++++++++++++++++++--
>>  arch/powerpc/include/asm/ppc_asm.h           |  7 ++
>>  arch/powerpc/include/asm/string.h            |  7 +-
>>  arch/powerpc/lib/mem_64.S                    |  6 +-
>>  arch/powerpc/lib/memcmp_64.S                 |  5 +-
>>  arch/powerpc/lib/memcpy_64.S                 |  3 +-
>>  arch/powerpc/lib/string.S                    | 15 ++--
>>  arch/powerpc/mm/Makefile                     |  2 +
>>  arch/powerpc/mm/kasan/Makefile               |  1 +
>>  arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 53 ++++++++++++++
>>  arch/powerpc/purgatory/Makefile              |  3 +
>>  arch/powerpc/xmon/Makefile                   |  1 +
>>  14 files changed, 164 insertions(+), 19 deletions(-)
>>  create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index 850b06def84f..2c7c20d52778 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -176,6 +176,7 @@ config PPC
>>         select HAVE_ARCH_AUDITSYSCALL
>>         select HAVE_ARCH_JUMP_LABEL
>>         select HAVE_ARCH_KASAN                  if PPC32
>> +       select HAVE_ARCH_KASAN                  if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
>>         select HAVE_ARCH_KGDB
>>         select HAVE_ARCH_MMAP_RND_BITS
>>         select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
>> diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
>> index f0738099e31e..21c2dadf0315 100644
>> --- a/arch/powerpc/Makefile
>> +++ b/arch/powerpc/Makefile
>> @@ -428,11 +428,13 @@ endif
>>  endif
>>
>>  ifdef CONFIG_KASAN
>> +ifdef CONFIG_PPC32
>>  prepare: kasan_prepare
>>
>>  kasan_prepare: prepare0
>>         $(eval KASAN_SHADOW_OFFSET = $(shell awk '{if ($$2 == "KASAN_SHADOW_OFFSET") print $$3;}' include/generated/asm-offsets.h))
>>  endif
>> +endif
>>
>>  # Check toolchain versions:
>>  # - gcc-4.6 is the minimum kernel-wide version so nothing required.
>> diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
>> index 5d0088429b62..c2f6f05dfaa3 100644
>> --- a/arch/powerpc/include/asm/kasan.h
>> +++ b/arch/powerpc/include/asm/kasan.h
>> @@ -5,20 +5,85 @@
>>  #ifndef __ASSEMBLY__
>>
>>  #include <asm/page.h>
>> +#include <asm/pgtable.h>
>>  #include <asm/pgtable-types.h>
>> -#include <asm/fixmap.h>
>>
>>  #define KASAN_SHADOW_SCALE_SHIFT       3
>> -#define KASAN_SHADOW_SIZE      ((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
>>
>> -#define KASAN_SHADOW_START     (ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
>> -                                           PGDIR_SIZE))
>> -#define KASAN_SHADOW_END       (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
>>  #define KASAN_SHADOW_OFFSET    (KASAN_SHADOW_START - \
>>                                  (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
>> +#define KASAN_SHADOW_END       (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
>> +
>> +
>> +#ifdef CONFIG_PPC32
>> +#include <asm/fixmap.h>
>> +#define KASAN_SHADOW_START     (ALIGN_DOWN(FIXADDR_START - KASAN_SHADOW_SIZE, \
>> +                                           PGDIR_SIZE))
>> +#define KASAN_SHADOW_SIZE      ((~0UL - PAGE_OFFSET + 1) >> KASAN_SHADOW_SCALE_SHIFT)
>>
>>  void kasan_early_init(void);
>> +
>> +#endif /* CONFIG_PPC32 */
>> +
>> +#ifdef CONFIG_PPC_BOOK3E_64
>> +#define KASAN_SHADOW_START VMEMMAP_BASE
>> +#define KASAN_SHADOW_SIZE      (KERN_VIRT_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
>> +
>> +extern struct static_key_false powerpc_kasan_enabled_key;
>> +#define check_return_arch_not_ready() \
>> +       do {                                                            \
>> +               if (!static_branch_likely(&powerpc_kasan_enabled_key))  \
>> +                       return;                                         \
>> +       } while (0)
>> +
>> +extern unsigned char kasan_zero_page[PAGE_SIZE];
>> +static inline void *kasan_mem_to_shadow_book3e(const void *addr)
>> +{
>> +       if ((unsigned long)addr >= KERN_VIRT_START &&
>> +               (unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
>> +               return (void *)kasan_zero_page;
>> +       }
>> +
>> +       return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>> +               + KASAN_SHADOW_OFFSET;
>> +}
>> +#define kasan_mem_to_shadow kasan_mem_to_shadow_book3e
>> +
>> +static inline void *kasan_shadow_to_mem_book3e(const void *shadow_addr)
>> +{
>> +       /*
>> +        * We map the entire non-linear virtual mapping onto the zero page so if
>> +        * we are asked to map the zero page back just pick the beginning of that
>> +        * area.
>> +        */
>> +       if (shadow_addr >= (void *)kasan_zero_page &&
>> +               shadow_addr < (void *)(kasan_zero_page + PAGE_SIZE)) {
>> +               return (void *)KERN_VIRT_START;
>> +       }
>> +
>> +       return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
>> +               << KASAN_SHADOW_SCALE_SHIFT);
>> +}
>> +#define kasan_shadow_to_mem kasan_shadow_to_mem_book3e
>> +
>> +static inline bool kasan_addr_has_shadow_book3e(const void *addr)
>> +{
>> +       /*
>> +        * We want to specifically assert that the addresses in the 0x8000...
>> +        * region have a shadow, otherwise they are considered by the kasan
>> +        * core to be wild pointers
>> +        */
>> +       if ((unsigned long)addr >= KERN_VIRT_START &&
>> +               (unsigned long)addr < (KERN_VIRT_START + KERN_VIRT_SIZE)) {
>> +               return true;
>> +       }
>> +       return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
>> +}
>> +#define kasan_addr_has_shadow kasan_addr_has_shadow_book3e
>> +
>> +#endif /* CONFIG_PPC_BOOK3E_64 */
>> +
>>  void kasan_init(void);
>>
>> -#endif
>> +#endif /* CONFIG_KASAN */
>>  #endif
>> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
>> index dba2c1038363..fd7c9fa9d307 100644
>> --- a/arch/powerpc/include/asm/ppc_asm.h
>> +++ b/arch/powerpc/include/asm/ppc_asm.h
>> @@ -251,10 +251,17 @@ GLUE(.,name):
>>
>>  #define _GLOBAL_TOC(name) _GLOBAL(name)
>>
>> +#endif /* 32-bit */
>> +
>> +/* KASAN helpers */
>>  #define KASAN_OVERRIDE(x, y) \
>>         .weak x;             \
>>         .set x, y
>>
>> +#ifdef CONFIG_KASAN
>> +#define EXPORT_SYMBOL_NOKASAN(x)
>> +#else
>> +#define EXPORT_SYMBOL_NOKASAN(x) EXPORT_SYMBOL(x)
>>  #endif
>>
>>  /*
>> diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
>> index 64d44d4836b4..e2801d517d57 100644
>> --- a/arch/powerpc/include/asm/string.h
>> +++ b/arch/powerpc/include/asm/string.h
>> @@ -4,13 +4,16 @@
>>
>>  #ifdef __KERNEL__
>>
>> +#ifndef CONFIG_KASAN
>>  #define __HAVE_ARCH_STRNCPY
>>  #define __HAVE_ARCH_STRNCMP
>> +#define __HAVE_ARCH_MEMCHR
>> +#define __HAVE_ARCH_MEMCMP
>> +#endif
>> +
>>  #define __HAVE_ARCH_MEMSET
>>  #define __HAVE_ARCH_MEMCPY
>>  #define __HAVE_ARCH_MEMMOVE
>> -#define __HAVE_ARCH_MEMCMP
>> -#define __HAVE_ARCH_MEMCHR
>>  #define __HAVE_ARCH_MEMSET16
>>  #define __HAVE_ARCH_MEMCPY_FLUSHCACHE
>>
>> diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
>> index 3c3be02f33b7..3ff4c6b45505 100644
>> --- a/arch/powerpc/lib/mem_64.S
>> +++ b/arch/powerpc/lib/mem_64.S
>> @@ -30,7 +30,8 @@ EXPORT_SYMBOL(__memset16)
>>  EXPORT_SYMBOL(__memset32)
>>  EXPORT_SYMBOL(__memset64)
>>
>> -_GLOBAL(memset)
>> +_GLOBAL(__memset)
>> +KASAN_OVERRIDE(memset, __memset)
>>         neg     r0,r3
>>         rlwimi  r4,r4,8,16,23
>>         andi.   r0,r0,7                 /* # bytes to be 8-byte aligned */
>> @@ -97,7 +98,8 @@ _GLOBAL(memset)
>>         blr
>>  EXPORT_SYMBOL(memset)
>>
>> -_GLOBAL_TOC(memmove)
>> +_GLOBAL_TOC(__memmove)
>> +KASAN_OVERRIDE(memmove, __memmove)
>>         cmplw   0,r3,r4
>>         bgt     backwards_memcpy
>>         b       memcpy
>> diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
>> index 844d8e774492..21aee60de2cd 100644
>> --- a/arch/powerpc/lib/memcmp_64.S
>> +++ b/arch/powerpc/lib/memcmp_64.S
>> @@ -102,7 +102,8 @@
>>   * 2) src/dst has different offset to the 8 bytes boundary. The handlers
>>   * are named like .Ldiffoffset_xxxx
>>   */
>> -_GLOBAL_TOC(memcmp)
>> +_GLOBAL_TOC(__memcmp)
>> +KASAN_OVERRIDE(memcmp, __memcmp)
>>         cmpdi   cr1,r5,0
>>
>>         /* Use the short loop if the src/dst addresses are not
>> @@ -630,4 +631,4 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>>         b       .Lcmp_lt32bytes
>>
>>  #endif
>> -EXPORT_SYMBOL(memcmp)
>> +EXPORT_SYMBOL_NOKASAN(memcmp)
>> diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
>> index 273ea67e60a1..e9092a0e531a 100644
>> --- a/arch/powerpc/lib/memcpy_64.S
>> +++ b/arch/powerpc/lib/memcpy_64.S
>> @@ -18,7 +18,8 @@
>>  #endif
>>
>>         .align  7
>> -_GLOBAL_TOC(memcpy)
>> +_GLOBAL_TOC(__memcpy)
>> +KASAN_OVERRIDE(memcpy, __memcpy)
>>  BEGIN_FTR_SECTION
>>  #ifdef __LITTLE_ENDIAN__
>>         cmpdi   cr7,r5,0
>> diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S
>> index 4b41970e9ed8..09deaac6e5f1 100644
>> --- a/arch/powerpc/lib/string.S
>> +++ b/arch/powerpc/lib/string.S
>> @@ -16,7 +16,8 @@
>>
>>  /* This clears out any unused part of the destination buffer,
>>     just as the libc version does.  -- paulus */
>> -_GLOBAL(strncpy)
>> +_GLOBAL(__strncpy)
>> +KASAN_OVERRIDE(strncpy, __strncpy)
>>         PPC_LCMPI 0,r5,0
>>         beqlr
>>         mtctr   r5
>> @@ -34,9 +35,10 @@ _GLOBAL(strncpy)
>>  2:     stbu    r0,1(r6)        /* clear it out if so */
>>         bdnz    2b
>>         blr
>> -EXPORT_SYMBOL(strncpy)
>> +EXPORT_SYMBOL_NOKASAN(strncpy)
>>
>> -_GLOBAL(strncmp)
>> +_GLOBAL(__strncmp)
>> +KASAN_OVERRIDE(strncmp, __strncmp)
>>         PPC_LCMPI 0,r5,0
>>         beq-    2f
>>         mtctr   r5
>> @@ -52,9 +54,10 @@ _GLOBAL(strncmp)
>>         blr
>>  2:     li      r3,0
>>         blr
>> -EXPORT_SYMBOL(strncmp)
>> +EXPORT_SYMBOL_NOKASAN(strncmp)
>>
>> -_GLOBAL(memchr)
>> +_GLOBAL(__memchr)
>> +KASAN_OVERRIDE(memchr, __memchr)
>>         PPC_LCMPI 0,r5,0
>>         beq-    2f
>>         mtctr   r5
>> @@ -66,4 +69,4 @@ _GLOBAL(memchr)
>>         beqlr
>>  2:     li      r3,0
>>         blr
>> -EXPORT_SYMBOL(memchr)
>> +EXPORT_SYMBOL_NOKASAN(memchr)
>> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
>> index 457c0ea2b5e7..d974f7bcb177 100644
>> --- a/arch/powerpc/mm/Makefile
>> +++ b/arch/powerpc/mm/Makefile
>> @@ -7,6 +7,8 @@ ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
>>
>>  CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
>>
>> +KASAN_SANITIZE_fsl_booke_mmu.o := n
>> +
>>  obj-y                          := fault.o mem.o pgtable.o mmap.o \
>>                                    init_$(BITS).o pgtable_$(BITS).o \
>>                                    init-common.o mmu_context.o drmem.o
>> diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
>> index 6577897673dd..f8f164ad8ade 100644
>> --- a/arch/powerpc/mm/kasan/Makefile
>> +++ b/arch/powerpc/mm/kasan/Makefile
>> @@ -3,3 +3,4 @@
>>  KASAN_SANITIZE := n
>>
>>  obj-$(CONFIG_PPC32)           += kasan_init_32.o
>> +obj-$(CONFIG_PPC_BOOK3E_64)   += kasan_init_book3e_64.o
>> diff --git a/arch/powerpc/mm/kasan/kasan_init_book3e_64.c b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>> new file mode 100644
>> index 000000000000..93b9afcf1020
>> --- /dev/null
>> +++ b/arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>> @@ -0,0 +1,53 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#define DISABLE_BRANCH_PROFILING
>> +
>> +#include <linux/kasan.h>
>> +#include <linux/printk.h>
>> +#include <linux/memblock.h>
>> +#include <linux/sched/task.h>
>> +#include <asm/pgalloc.h>
>> +
>> +DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
>> +EXPORT_SYMBOL(powerpc_kasan_enabled_key);
>> +unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
>> +
>> +static void __init kasan_init_region(struct memblock_region *reg)
>> +{
>> +       void *start = __va(reg->base);
>> +       void *end = __va(reg->base + reg->size);
>> +       unsigned long k_start, k_end, k_cur;
>> +
>> +       if (start >= end)
>> +               return;
>> +
>> +       k_start = (unsigned long)kasan_mem_to_shadow(start);
>> +       k_end = (unsigned long)kasan_mem_to_shadow(end);
>> +
>> +       for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>> +               void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>> +               map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
>> +       }
>> +       flush_tlb_kernel_range(k_start, k_end);
>> +}
>> +
>> +void __init kasan_init(void)
>> +{
>> +       struct memblock_region *reg;
>> +
>> +       for_each_memblock(memory, reg)
>> +               kasan_init_region(reg);
>> +
>> +       /* map the zero page RO */
>> +       map_kernel_page((unsigned long)kasan_zero_page,
>> +                                       __pa(kasan_zero_page), PAGE_KERNEL_RO);
>> +
>> +       kasan_init_tags();
>> +
>> +       /* Turn on checking */
>> +       static_branch_inc(&powerpc_kasan_enabled_key);
>> +
>> +       /* Enable error messages */
>> +       init_task.kasan_depth = 0;
>> +       pr_info("KASAN init done (64-bit Book3E)\n");
>> +}
>> diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
>> index 4314ba5baf43..7c6d8b14f440 100644
>> --- a/arch/powerpc/purgatory/Makefile
>> +++ b/arch/powerpc/purgatory/Makefile
>> @@ -1,4 +1,7 @@
>>  # SPDX-License-Identifier: GPL-2.0
>> +
>> +KASAN_SANITIZE := n
>> +
>>  targets += trampoline.o purgatory.ro kexec-purgatory.c
>>
>>  LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined
>> diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
>> index 878f9c1d3615..064f7062c0a3 100644
>> --- a/arch/powerpc/xmon/Makefile
>> +++ b/arch/powerpc/xmon/Makefile
>> @@ -6,6 +6,7 @@ subdir-ccflags-y := $(call cc-disable-warning, builtin-requires-header)
>>
>>  GCOV_PROFILE := n
>>  UBSAN_SANITIZE := n
>> +KASAN_SANITIZE := n
>>
>>  # Disable ftrace for the entire directory
>>  ORIG_CFLAGS := $(KBUILD_CFLAGS)
>> --
>> 2.19.1
>>
>> --
>> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
>> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@googlegroups.com.
>> To post to this group, send email to kasan-dev@googlegroups.com.
>> To view this discussion on the web visit https://groups.google.com/d/msgid/kasan-dev/20190215000441.14323-6-dja%40axtens.net.
>> For more options, visit https://groups.google.com/d/optout.


* Re: [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check
  2019-02-18  6:13     ` Daniel Axtens
@ 2019-02-25 14:01       ` Christophe Leroy
  2019-02-26  0:14         ` Daniel Axtens
  0 siblings, 1 reply; 27+ messages in thread
From: Christophe Leroy @ 2019-02-25 14:01 UTC (permalink / raw)
  To: Daniel Axtens, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev

Hi Daniel,

On 18/02/2019 at 07:13, Daniel Axtens wrote:
> christophe leroy <christophe.leroy@c-s.fr> writes:
> 
>> On 15/02/2019 at 01:04, Daniel Axtens wrote:
>>> In powerpc (as I understand it), we spend a lot of time in boot
>>> running in real mode before MMU paging is initialised. During
>>> this time we call a lot of generic code, including printk(). If
>>> we try to access the shadow region during this time, things fail.
>>>
>>> My attempts to move early init before the first printk have not
>>> been successful. (Both previous RFCs for ppc64 - by 2 different
>>> people - have needed this trick too!)
>>>
>>> So, allow architectures to define a check_return_arch_not_ready()
>>> hook that bails out of check_memory_region_inline() unless the
>>> arch has done all of the init.
>>>
>>> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
>>> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
>>> Originally-by: Balbir Singh <bsingharora@gmail.com>
>>> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>> Signed-off-by: Daniel Axtens <dja@axtens.net>
>>> ---
>>>    include/linux/kasan.h | 4 ++++
>>>    mm/kasan/generic.c    | 2 ++
>>>    2 files changed, 6 insertions(+)
>>>
>>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>>> index f6261840f94c..83edc5e2b6a0 100644
>>> --- a/include/linux/kasan.h
>>> +++ b/include/linux/kasan.h
>>> @@ -14,6 +14,10 @@ struct task_struct;
>>>    #include <asm/kasan.h>
>>>    #include <asm/pgtable.h>
>>>    
>>> +#ifndef check_return_arch_not_ready
>>> +#define check_return_arch_not_ready()	do { } while (0)
>>> +#endif
>>
>> A static inline would be better I believe.
>>
>> Something like
>>
>> #ifndef kasan_arch_is_ready
>> static inline bool kasan_arch_is_ready(void) { return true; }
>> #endif
>>
>>> +
>>>    extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>>>    extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>>>    extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
>>> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
>>> index bafa2f986660..4c18bbd09a20 100644
>>> --- a/mm/kasan/generic.c
>>> +++ b/mm/kasan/generic.c
>>> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>>>    						size_t size, bool write,
>>>    						unsigned long ret_ip)
>>>    {
>>> +	check_return_arch_not_ready();
>>> +
>>
>> Not good for readability that the above macro embeds a return; something
>> like below would be better I think:
>>
>> 	if (!kasan_arch_is_ready())
>> 		return;
>>
>> Unless somebody minds, I'll do the change and take this patch in my
>> series in order to handle the case of book3s/32 hash.
> 
> Please do; feel free to take as many of the patches as you would like
> and I'll rebase whatever is left on the next version of your series.

I have now made a big step with v7: it works on both nohash and hash ppc32 
without any special feature in the KASAN core. I still have to do more 
tests on the hash version, but it looks promising.

I have kept your patches in sync on top of it (although totally 
untested); you can find them at 
https://github.com/chleroy/linux/commits/kasan

> 
> The idea with the macro magic was to take advantage of the speed of
> static keys (I think, I borrowed it from Balbir's patch). Perhaps an
> inline function will achieve this anyway, but given that KASAN with
> outline instrumentation is inevitably slow, I guess it doesn't matter
> much either way.

You'll see in the modifications I've done to your patches, we can still 
use static keys while using static inline functions.

Christophe


* Re: [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check
  2019-02-25 14:01       ` Christophe Leroy
@ 2019-02-26  0:14         ` Daniel Axtens
  0 siblings, 0 replies; 27+ messages in thread
From: Daniel Axtens @ 2019-02-26  0:14 UTC (permalink / raw)
  To: Christophe Leroy, aneesh.kumar, bsingharora
  Cc: linuxppc-dev, Aneesh Kumar K . V, kasan-dev

>>> Unless somebody minds, I'll do the change and take this patch in my
>>> series in order to handle the case of book3s/32 hash.
>> 
>> Please do; feel free to take as many of the patches as you would like
>> and I'll rebase whatever is left on the next version of your series.
>
> I have now made a big step with v7: it works on both nohash and hash ppc32 
> without any special feature in the KASAN core. I still have to do more 
> tests on the hash version, but it looks promising.
>
> I have kept your patches in sync on top of it (although totally 
> untested); you can find them at 
> https://github.com/chleroy/linux/commits/kasan

Thanks - I've got sidetracked with other internal stuff but I hope to
get back to this later in the week.

Regards,
Daniel
>
>> 
>> The idea with the macro magic was to take advantage of the speed of
>> static keys (I think, I borrowed it from Balbir's patch). Perhaps an
>> inline function will achieve this anyway, but given that KASAN with
>> outline instrumentation is inevitably slow, I guess it doesn't matter
>> much either way.
>
> You'll see in the modifications I've done to your patches, we can still 
> use static keys while using static inline functions.
>
> Christophe



Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-15  0:04 [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Daniel Axtens
2019-02-15  0:04 ` [RFC PATCH 1/5] kasan: do not open-code addr_has_shadow Daniel Axtens
2019-02-15  0:12   ` Andrew Donnellan
2019-02-15  8:21     ` Dmitry Vyukov
2019-02-15  0:04 ` [RFC PATCH 2/5] kasan: allow architectures to manage the memory-to-shadow mapping Daniel Axtens
2019-02-15  6:35   ` Dmitry Vyukov
2019-02-15  0:04 ` [RFC PATCH 3/5] kasan: allow architectures to provide an outline readiness check Daniel Axtens
2019-02-15  8:25   ` Dmitry Vyukov
2019-02-17 12:05   ` christophe leroy
2019-02-18  6:13     ` Daniel Axtens
2019-02-25 14:01       ` Christophe Leroy
2019-02-26  0:14         ` Daniel Axtens
2019-02-15  0:04 ` [RFC PATCH 4/5] powerpc: move KASAN into its own subdirectory Daniel Axtens
2019-02-15  0:24   ` Andrew Donnellan
2019-02-17 16:29   ` christophe leroy
2019-02-18  9:14     ` Michael Ellerman
2019-02-18 12:27       ` Christophe Leroy
2019-02-19  0:44         ` Michael Ellerman
2019-02-15  0:04 ` [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E Daniel Axtens
2019-02-15  8:28   ` Dmitry Vyukov
2019-02-19  6:37     ` Daniel Axtens
2019-02-17 14:06   ` christophe leroy
2019-02-18 19:26   ` Christophe Leroy
2019-02-19  0:14     ` Daniel Axtens
2019-02-15 16:39 ` [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E Christophe Leroy
2019-02-17  6:34 ` Balbir Singh
2019-02-19  6:35   ` Daniel Axtens
