* [PATCH] MIPS64: add KASAN support
@ 2019-08-27  8:28 Tommy Jin
  2019-08-29 11:33 ` Thomas Bogendoerfer
  0 siblings, 1 reply; 3+ messages in thread
From: Tommy Jin @ 2019-08-27  8:28 UTC (permalink / raw)
  To: linux-mips; +Cc: Zhongwu Zhu, Tommy Jin

From: tjin <tjin@wavecomp.com>

This patch adds the arch-specific code for the Kernel Address Sanitizer (KASAN).

1/8 of the kernel address space is reserved for shadow memory. But for
MIPS64 there are a lot of holes between the different segments, and the
valid address space (256T available) is insufficient to map all of these
segments (scattered across an 8192P space) to KASAN shadow memory with the
common formula provided by the KASAN core, namely:
(addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
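
As an illustration (the arithmetic here is only for explanation): XKPHYS
CACHE begins at 0xa800000000000000, so the common formula would place its
shadow at

	(0xa800000000000000 >> 3) + KASAN_SHADOW_OFFSET = 0x1500000000000000 + KASAN_SHADOW_OFFSET

while CKSEG0 at 0xffffffff80000000 would land at
0x1ffffffff0000000 + KASAN_SHADOW_OFFSET, roughly 792P away. No single
KASAN_SHADOW_OFFSET can pull both results into one 256T region.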

So MIPS64 uses an arch-specific mapping formula: different segments are
mapped individually, and only a limited length of each segment is mapped
to shadow. For example, XKPHYS CACHE starts at 0xa800000000000000, and
around 1T of this segment is mapped to shadow when 40-bit addressing is in
use. XKPHYS UNCACHE starts at 0x9000000000000000; the gap between
0xa800000000000000 + 1T and 0x9000000000000000 is not mapped to shadow.
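
A simplified sketch of the per-segment translation described above (the
helper name and the 40-bit mask are illustrative; the real definitions are
in arch/mips/include/asm/kasan.h added by this patch):

	/* Illustrative only: shadow lookup for one XKPHYS cached address. */
	static inline void *xkphys_cached_shadow(unsigned long addr,
						 unsigned long seg_shadow_base)
	{
		addr &= (1UL << 40) - 1;	/* keep only this segment's valid bits */
		return (void *)(seg_shadow_base + (addr >> 3));	/* 1 shadow byte per 8 bytes */
	}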

At the early boot stage the whole shadow region is populated with just one
physical page (kasan_early_shadow_page). Later, this page is reused as the
read-only zero shadow for memory that KASAN currently does not track.
After the physical memory has been mapped, real pages for the shadow
memory are allocated and mapped.
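
Conceptually, during this early stage every shadow lookup resolves to that
single zero-filled page, and a shadow byte of 0 means "all 8 bytes
addressable", so every check passes until the real shadow is installed.
A rough model only (the function name is illustrative, not the actual code
path):

	/* Rough model: how a check behaves before kasan_init() runs. */
	static void check_byte(const void *ptr)
	{
		s8 shadow = *(s8 *)kasan_mem_to_shadow(ptr);	/* always reads 0 early on */

		if (shadow)	/* never true while the zero page backs all shadow */
			kasan_report((unsigned long)ptr, 1, false, _RET_IP_);
	}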

Functions like memset()/memmove()/memcpy() do a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important to
catch this. The compiler's instrumentation cannot do so, since these
functions are written in assembly. KASAN therefore replaces the memory
functions with manually instrumented variants. The original functions are
declared as weak symbols so that the strong, instrumented definitions in
mm/kasan/ can replace them, and they keep aliases with a '__' prefix so
that the non-instrumented variants can still be called when needed.
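
A simplified C sketch of what an instrumented strong definition looks like
(modelled on the generic KASAN code; the exact details may differ from the
tree this patch targets):

	void *memcpy(void *dest, const void *src, size_t len)
	{
		check_memory_region((unsigned long)src, len, false, _RET_IP_);
		check_memory_region((unsigned long)dest, len, true, _RET_IP_);
		return __memcpy(dest, src, len);	/* uninstrumented copy */
	}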

Some files are built without KASAN instrumentation (e.g. mm/slub.c). For
such files the original mem* functions are replaced (via #define) with the
'__'-prefixed variants to disable the memory access checks.

Basic support for the MIPS address sanitizer requires GCC 9.0 or later.

Signed-off-by: tjin <tjin@wavecomp.com>
---
 arch/mips/Kconfig                  |   1 +
 arch/mips/include/asm/kasan.h      | 201 ++++++++++++++++++++++++++++++++++
 arch/mips/include/asm/pgtable-64.h |  12 ++-
 arch/mips/include/asm/string.h     |  20 ++++
 arch/mips/kernel/Makefile          |   8 ++
 arch/mips/kernel/head.S            |   3 +
 arch/mips/kernel/setup.c           |   3 +
 arch/mips/kernel/traps.c           |  11 ++
 arch/mips/lib/memcpy.S             |  20 ++--
 arch/mips/lib/memset.S             |  18 ++--
 arch/mips/mm/Makefile              |   5 +
 arch/mips/mm/kasan_init.c          | 216 +++++++++++++++++++++++++++++++++++++
 arch/mips/vdso/Makefile            |   5 +
 include/linux/kasan.h              |   2 +
 mm/kasan/generic.c                 |  17 ++-
 mm/kasan/kasan.h                   |   2 +
 16 files changed, 522 insertions(+), 22 deletions(-)
 create mode 100644 arch/mips/include/asm/kasan.h
 create mode 100644 arch/mips/mm/kasan_init.c

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 70d3200..0ed8eb0 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -80,6 +80,7 @@ config MIPS
 	select RTC_LIB
 	select SYSCTL_EXCEPTION_TRACE
 	select VIRT_TO_BUS
+	select HAVE_ARCH_KASAN if 64BIT
 
 menu "Machine selection"
 
diff --git a/arch/mips/include/asm/kasan.h b/arch/mips/include/asm/kasan.h
new file mode 100644
index 0000000..62e75d1
--- /dev/null
+++ b/arch/mips/include/asm/kasan.h
@@ -0,0 +1,201 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/linkage.h>
+#include <asm/addrspace.h>
+#include <asm/pgtable-64.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET 0
+
+#define XSEG_SHIFT (56)
+/* 32-bit compatibility address length. */
+#define CSEG_SHIFT (28)
+
+/* Valid address length. */
+#define XXSEG_SHADOW_SHIFT (PGDIR_SHIFT + PGD_ORDER + PAGE_SHIFT - 3)
+/* Used for taking out the valid address. */
+#define XXSEG_SHADOW_MASK  GENMASK_ULL(XXSEG_SHADOW_SHIFT - 1, 0)
+/* One segment whole address space size. */
+#define	XXSEG_SIZE (XXSEG_SHADOW_MASK + 1)
+
+#define CKSEG_SHADOW_MASK  GENMASK_ULL(CSEG_SHIFT - 1, 0)
+/* One segment whole address space size. */
+#define	CKSEG_SIZE (CKSEG_SHADOW_MASK + 1)
+
+/* Mask used to extract the CSEG segment value, e.g. CKSEGx_SEG.
+ * Take one more bit to also cover segments starting from:
+ * - 0xFFFF FFFF 9000 0000
+ * - 0xFFFF FFFF B000 0000
+ * - 0xFFFF FFFF F000 0000
+ */
+#define CSEG_SHIFT_1BM (CSEG_SHIFT + 1)
+#define CSSEG_SHADOW_MASK_1BM GENMASK_ULL(CSEG_SHIFT_1BM - 1, 0)
+
+/* 64-bit segment value. */
+#define XKPHYS_CACHE_SEG	(0xa8)
+#define XKPHYS_UNCACHE_SEG	(0x90)
+#define XKSEG_SEG	(0xc0)
+
+/* 32-bit compatibility segment value.
+ * Shift the address right by CSEG_SHIFT bits, then & 0x0F yields this value.
+ */
+#define CKSEGX_SEG	(0xff)
+#define CKSEG0_SEG	(0x08)
+#define CKSEG1_SEG	(0x0a)
+#define CSSEG_SEG	(0x0c)
+#define CKSEG3_SEG	(0x0e)
+/* COH_SHAREABLE */
+#define XKPHYS_CACHE_START	(0xa800000000000000)
+#define XKPHYS_CACHE_SIZE	XXSEG_SIZE
+#define XKPHYS_CACHE_KASAN_OFFSET	(0)
+#define XKPHYS_CACHE_SHADOW_SIZE	(XKPHYS_CACHE_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define XKPHYS_CACHE_SHADOW_END	(XKPHYS_CACHE_KASAN_OFFSET + XKPHYS_CACHE_SHADOW_SIZE)
+/* IO/UNCACHED */
+#define XKPHYS_UNCACHE_START		(0x9000000000000000)
+#define XKPHYS_UNCACHE_SIZE			XXSEG_SIZE
+#define XKPHYS_UNCACHE_KASAN_OFFSET	XKPHYS_CACHE_SHADOW_END
+#define XKPHYS_UNCACHE_SHADOW_SIZE	(XKPHYS_UNCACHE_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define XKPHYS_UNCACHE_SHADOW_END	(XKPHYS_UNCACHE_KASAN_OFFSET + XKPHYS_UNCACHE_SHADOW_SIZE)
+/* VMALLOC  */
+#define XKSEG_VMALLOC_START VMALLOC_START
+/* 1MB alignment. */
+#define XKSEG_VMALLOC_SIZE			round_up(VMALLOC_END - VMALLOC_START + 1, 0x10000)
+#define XKSEG_VMALLOC_KASAN_OFFSET	XKPHYS_UNCACHE_SHADOW_END
+#define XKPHYS_VMALLOC_SHADOW_SIZE	(XKSEG_VMALLOC_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define XKPHYS_VMALLOC_SHADOW_END	(XKSEG_VMALLOC_KASAN_OFFSET + XKPHYS_VMALLOC_SHADOW_SIZE)
+
+/* 32-bit compatibility address space. */
+#define CKSEG0_START	(0xffffffff80000000)
+#define CKSEG0_SIZE		CKSEG_SIZE
+#define CKSEG0_KASAN_OFFSET	XKPHYS_VMALLOC_SHADOW_END
+#define CKSEG0_SHADOW_SIZE	(CKSEG0_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define CKSEG0_SHADOW_END	(CKSEG0_KASAN_OFFSET + CKSEG0_SHADOW_SIZE)
+
+#define CKSEG1_START (0xffffffffa0000000)
+#define CKSEG1_SIZE  CKSEG_SIZE
+#define CKSEG1_KASAN_OFFSET CKSEG0_SHADOW_END
+#define CKSEG1_SHADOW_SIZE	(CKSEG1_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define CKSEG1_SHADOW_END	(CKSEG1_KASAN_OFFSET + CKSEG1_SHADOW_SIZE)
+
+#define CSSEG_START  (0xffffffffc0000000)
+#define CSSEG_SIZE  CKSEG_SIZE
+#define CSSEG_KASAN_OFFSET CKSEG1_SHADOW_END
+#define CSSEG_SHADOW_SIZE	(CSSEG_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define CSSEG_SHADOW_END	(CSSEG_KASAN_OFFSET + CSSEG_SHADOW_SIZE)
+
+#define CKSEG3_START (0xffffffffe0000000)
+#define CKSEG3_SIZE  CKSEG_SIZE
+#define CKSEG3_KASAN_OFFSET (CSSEG_KASAN_OFFSET + (CSSEG_SIZE >> KASAN_SHADOW_SCALE_SHIFT))
+#define CKSEG3_SHADOW_SIZE	(CKSEG3_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define CKSEG3_SHADOW_END	(CKSEG3_KASAN_OFFSET + CKSEG3_SHADOW_SIZE)
+
+/* KASAN shadow memory starts right after vmalloc. */
+#define KASAN_SHADOW_START	round_up(VMALLOC_END, PGDIR_SIZE)
+#define KASAN_SHADOW_SIZE	(CKSEG3_SHADOW_END - XKPHYS_CACHE_KASAN_OFFSET)
+#define KASAN_SHADOW_END	round_up(KASAN_SHADOW_START + KASAN_SHADOW_SIZE, PGDIR_SIZE)
+
+#define XKPHYS_CACHE_SHADOW_OFFSET	(KASAN_SHADOW_START + XKPHYS_CACHE_KASAN_OFFSET)
+#define XKPHYS_UNCACHE_SHADOW_OFFSET	(KASAN_SHADOW_START + XKPHYS_UNCACHE_KASAN_OFFSET)
+#define XKSEG_SHADOW_OFFSET	(KASAN_SHADOW_START + XKSEG_VMALLOC_KASAN_OFFSET)
+#define CKSEG0_SHADOW_OFFSET	(KASAN_SHADOW_START + CKSEG0_KASAN_OFFSET)
+#define CKSEG1_SHADOW_OFFSET	(KASAN_SHADOW_START + CKSEG1_KASAN_OFFSET)
+#define CSSEG_SHADOW_OFFSET	(KASAN_SHADOW_START + CSSEG_KASAN_OFFSET)
+#define CKSEG3_SHADOW_OFFSET	(KASAN_SHADOW_START + CKSEG3_KASAN_OFFSET)
+
+extern bool kasan_early_stage;
+extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	if (kasan_early_stage) {
+		return (void *)(kasan_early_shadow_page);
+	} else {
+		unsigned long maddr = (unsigned long)addr;
+		unsigned char xreg = (maddr >> XSEG_SHIFT) & 0xff;
+		unsigned char creg = (maddr >> CSEG_SHIFT) & 0x0f;
+		unsigned long offset = 0;
+
+		maddr &= XXSEG_SHADOW_MASK;
+		switch (xreg) {
+		case XKPHYS_CACHE_SEG:		/* xkphys, cached */
+			offset = XKPHYS_CACHE_SHADOW_OFFSET;
+			break;
+		case XKPHYS_UNCACHE_SEG:	/* xkphys, uncached */
+			offset = XKPHYS_UNCACHE_SHADOW_OFFSET;
+			break;
+		case XKSEG_SEG:			/* xkseg */
+			offset = XKSEG_SHADOW_OFFSET;
+			break;
+		case CKSEGX_SEG:		/* cksegx */
+			maddr &= CSSEG_SHADOW_MASK_1BM;
+			switch (creg) {
+			case CKSEG0_SEG:
+			case (CKSEG0_SEG + 1):
+				offset = CKSEG0_SHADOW_OFFSET;
+				break;
+			case CKSEG1_SEG:
+			case (CKSEG1_SEG + 1):
+				offset = CKSEG1_SHADOW_OFFSET;
+				break;
+			case CSSEG_SEG:
+			case (CSSEG_SEG + 1):
+				offset = CSSEG_SHADOW_OFFSET;
+				break;
+			case CKSEG3_SEG:
+			case (CKSEG3_SEG + 1):
+				offset = CKSEG3_SHADOW_OFFSET;
+				break;
+			default:
+				WARN_ON(1);
+				return NULL;
+			}
+			break;
+		default:	/* unlikely */
+			WARN_ON(1);
+			return NULL;
+		}
+
+		return (void *)((maddr >> KASAN_SHADOW_SCALE_SHIFT) + offset);
+	}
+}
+
+static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+	unsigned long addr = (unsigned long)shadow_addr;
+
+	if (unlikely(addr > KASAN_SHADOW_END) ||
+		unlikely(addr < KASAN_SHADOW_START)) {
+		WARN_ON(1);
+		return NULL;
+	}
+
+	if (addr >= CKSEG3_SHADOW_OFFSET)
+		return (void *)(((addr - CKSEG3_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + CKSEG3_START);
+	else if (addr >= CSSEG_SHADOW_OFFSET)
+		return (void *)(((addr - CSSEG_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + CSSEG_START);
+	else if (addr >= CKSEG1_SHADOW_OFFSET)
+		return (void *)(((addr - CKSEG1_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + CKSEG1_START);
+	else if (addr >= CKSEG0_SHADOW_OFFSET)
+		return (void *)(((addr - CKSEG0_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + CKSEG0_START);
+	else if (addr >= XKSEG_SHADOW_OFFSET)
+		return (void *)(((addr - XKSEG_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKSEG_VMALLOC_START);
+	else if (addr >= XKPHYS_UNCACHE_SHADOW_OFFSET)
+		return (void *)(((addr - XKPHYS_UNCACHE_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKPHYS_UNCACHE_START);
+	else if (addr >= XKPHYS_CACHE_SHADOW_OFFSET)
+		return (void *)(((addr - XKPHYS_CACHE_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKPHYS_CACHE_START);
+	else
+		WARN_ON(1);
+
+	return NULL;
+}
+
+#define __HAVE_ARCH_SHADOW_MAP
+
+void kasan_init(void);
+asmlinkage void kasan_early_init(void);
+
+#endif
+#endif
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 93a9dce..e3f5c4e 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -144,10 +144,17 @@
  * reliably trap.
  */
 #define VMALLOC_START		(MAP_BASE + (2 * PAGE_SIZE))
+#ifdef CONFIG_KASAN
+#define VMALLOC_END	\
+	(MAP_BASE + \
+	 min(PTRS_PER_PGD * PTRS_PER_PUD * PTRS_PER_PMD * PTRS_PER_PTE * PAGE_SIZE / 2, \
+	     (1UL << cpu_vmbits)) - (1UL << 32))
+#else
 #define VMALLOC_END	\
 	(MAP_BASE + \
 	 min(PTRS_PER_PGD * PTRS_PER_PUD * PTRS_PER_PMD * PTRS_PER_PTE * PAGE_SIZE, \
 	     (1UL << cpu_vmbits)) - (1UL << 32))
+#endif
 
 #if defined(CONFIG_MODULES) && defined(KBUILD_64BIT_SYM32) && \
 	VMALLOC_START != CKSSEG
@@ -352,7 +359,8 @@ static inline pmd_t *pmd_offset(pud_t * pud, unsigned long address)
 #define pte_offset_map(dir, address)					\
 	((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset(address))
 #define pte_unmap(pte) ((void)(pte))
-
+#define pte_index(addr)		(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+#define pte_none(pte)		(!(pte_val(pte) & ~_PAGE_GLOBAL))
 /*
  * Initialize a new pgd / pmd table with invalid pointers.
  */
@@ -372,5 +380,5 @@ static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
 #define __swp_entry(type, offset) ((swp_entry_t) { pte_val(mk_swap_pte((type), (offset))) })
 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
 #define __swp_entry_to_pte(x)	((pte_t) { (x).val })
-
+#define sym_to_pfn(x)	    __phys_to_pfn(__pa_symbol(x))
 #endif /* _ASM_PGTABLE_64_H */
diff --git a/arch/mips/include/asm/string.h b/arch/mips/include/asm/string.h
index 29030cb..19d3740 100644
--- a/arch/mips/include/asm/string.h
+++ b/arch/mips/include/asm/string.h
@@ -133,11 +133,31 @@ strncmp(__const__ char *__cs, __const__ char *__ct, size_t __count)
 
 #define __HAVE_ARCH_MEMSET
 extern void *memset(void *__s, int __c, size_t __count);
+extern void *__memset(void *__s, int __c, size_t __count);
 
 #define __HAVE_ARCH_MEMCPY
 extern void *memcpy(void *__to, __const__ void *__from, size_t __n);
+extern void *__memcpy(void *__to, __const__ void *__from, size_t __n);
 
 #define __HAVE_ARCH_MEMMOVE
 extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
+extern void *__memmove(void *__dest, __const__ void *__src, size_t __n);
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use the non-instrumented versions of the mem* functions.
+ */
+
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+
+#ifndef __NO_FORTIFY
+#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
+#endif
+
+#endif
 
 #endif /* _ASM_STRING_H */
diff --git a/arch/mips/kernel/Makefile b/arch/mips/kernel/Makefile
index 89b07ea..196564b 100644
--- a/arch/mips/kernel/Makefile
+++ b/arch/mips/kernel/Makefile
@@ -17,6 +17,14 @@ CFLAGS_REMOVE_perf_event.o = -pg
 CFLAGS_REMOVE_perf_event_mipsxx.o = -pg
 endif
 
+KASAN_SANITIZE_head.o := n
+KASAN_SANITIZE_spram.o := n
+KASAN_SANITIZE_traps.o := n
+KASAN_SANITIZE_vdso.o := n
+KASAN_SANITIZE_watch.o := n
+KASAN_SANITIZE_stacktrace.o := n
+KASAN_SANITIZE_cpu-probe.o := n
+
 obj-$(CONFIG_CEVT_BCM1480)	+= cevt-bcm1480.o
 obj-$(CONFIG_CEVT_R4K)		+= cevt-r4k.o
 obj-$(CONFIG_CEVT_DS1287)	+= cevt-ds1287.o
diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
index 351d40f..f2e4910 100644
--- a/arch/mips/kernel/head.S
+++ b/arch/mips/kernel/head.S
@@ -159,6 +159,9 @@ dtb_found:
 	 */
 	jr.hb		v0
 #else  /* !CONFIG_RELOCATABLE */
+#ifdef CONFIG_KASAN
+	jal kasan_early_init
+#endif /* CONFIG_KASAN */
 	j		start_kernel
 #endif /* !CONFIG_RELOCATABLE */
 	END(kernel_entry)
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index ab349d2..7c28fe4 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -942,6 +942,9 @@ void __init setup_arch(char **cmdline_p)
 
 	cpu_cache_init();
 	paging_init();
+#if defined(CONFIG_KASAN)
+	kasan_init();
+#endif
 }
 
 unsigned long kernelsp[NR_CPUS];
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index c52766a..a63ff86 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -2273,6 +2273,17 @@ void __init trap_init(void)
 	unsigned long i, vec_size;
 	phys_addr_t ebase_pa;
 
+	/*
+	 * If KASAN is enabled, instrumented code may cause a TLB exception.
+	 * trap_init() is called from kasan_init(); once traps have been
+	 * initialized, ebase is non-zero, so use it as a flag to avoid
+	 * initializing the traps more than once.
+	 */
+#if defined(CONFIG_KASAN)
+	if (ebase)
+		return;
+#endif
+
 	check_wait();
 
 	if (!cpu_has_mips_r2_r6) {
diff --git a/arch/mips/lib/memcpy.S b/arch/mips/lib/memcpy.S
index cdd19d85..a150f0b 100644
--- a/arch/mips/lib/memcpy.S
+++ b/arch/mips/lib/memcpy.S
@@ -271,10 +271,10 @@
 	 */
 	.macro __BUILD_COPY_USER mode, from, to
 
-	/* initialize __memcpy if this the first time we execute this macro */
-	.ifnotdef __memcpy
-	.set __memcpy, 1
-	.hidden __memcpy /* make sure it does not leak */
+	/* initialize _memcpy if this is the first time we execute this macro */
+	.ifnotdef _memcpy
+	.set _memcpy, 1
+	.hidden _memcpy /* make sure it does not leak */
 	.endif
 
 	/*
@@ -535,10 +535,10 @@
 	b	1b
 	 ADD	dst, dst, 8
 #endif /* !CONFIG_CPU_HAS_LOAD_STORE_LR */
-	.if __memcpy == 1
+	.if _memcpy == 1
 	END(memcpy)
-	.set __memcpy, 0
-	.hidden __memcpy
+	.set _memcpy, 0
+	.hidden _memcpy
 	.endif
 
 .Ll_exc_copy\@:
@@ -599,6 +599,9 @@ SEXC(1)
 	.endm
 
 	.align	5
+	.weak memmove
+FEXPORT(__memmove)
+EXPORT_SYMBOL(__memmove)
 LEAF(memmove)
 EXPORT_SYMBOL(memmove)
 	ADD	t0, a0, a2
@@ -656,6 +659,9 @@ LEAF(__rmemcpy)					/* a0=dst a1=src a2=len */
  * memcpy sets v0 to dst.
  */
 	.align	5
+	.weak memcpy
+FEXPORT(__memcpy)
+EXPORT_SYMBOL(__memcpy)
 LEAF(memcpy)					/* a0=dst a1=src a2=len */
 EXPORT_SYMBOL(memcpy)
 	move	v0, dst				/* return value */
diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
index 418611e..0234328 100644
--- a/arch/mips/lib/memset.S
+++ b/arch/mips/lib/memset.S
@@ -86,10 +86,10 @@
 	 * mode: LEGACY_MODE or EVA_MODE
 	 */
 	.macro __BUILD_BZERO mode
-	/* Initialize __memset if this is the first time we call this macro */
-	.ifnotdef __memset
-	.set __memset, 1
-	.hidden __memset /* Make sure it does not leak */
+	/* Initialize _memset if this is the first time we call this macro */
+	.ifnotdef _memset
+	.set _memset, 1
+	.hidden _memset /* Make sure it does not leak */
 	.endif
 
 	sltiu		t0, a2, STORSIZE	/* very small region? */
@@ -228,10 +228,10 @@
 
 2:	move		a2, zero
 	jr		ra			/* done */
-	.if __memset == 1
+	.if _memset == 1
 	END(memset)
-	.set __memset, 0
-	.hidden __memset
+	.set _memset, 0
+	.hidden _memset
 	.endif
 
 #ifndef CONFIG_CPU_HAS_LOAD_STORE_LR
@@ -295,7 +295,9 @@
  * a1: char to fill with
  * a2: size of area to clear
  */
-
+	.weak memset
+FEXPORT(__memset)
+EXPORT_SYMBOL(__memset)
 LEAF(memset)
 EXPORT_SYMBOL(memset)
 	move		v0, a0			/* result */
diff --git a/arch/mips/mm/Makefile b/arch/mips/mm/Makefile
index f34d7ff..cf2a4a9 100644
--- a/arch/mips/mm/Makefile
+++ b/arch/mips/mm/Makefile
@@ -41,3 +41,8 @@ obj-$(CONFIG_R5000_CPU_SCACHE)	+= sc-r5k.o
 obj-$(CONFIG_RM7000_CPU_SCACHE) += sc-rm7k.o
 obj-$(CONFIG_MIPS_CPU_SCACHE)	+= sc-mips.o
 obj-$(CONFIG_SCACHE_DEBUGFS)	+= sc-debugfs.o
+obj-$(CONFIG_KASAN)     += kasan_init.o
+KASAN_SANITIZE_kasan_init.o     := n
+KASAN_SANITIZE_pgtable-64.o     := n
+KASAN_SANITIZE_tlb-r4k.o        := n
+KASAN_SANITIZE_tlbex.o          := n
diff --git a/arch/mips/mm/kasan_init.c b/arch/mips/mm/kasan_init.c
new file mode 100644
index 0000000..8ef1275
--- /dev/null
+++ b/arch/mips/mm/kasan_init.c
@@ -0,0 +1,216 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This file contains kasan initialization code for MIPS64.
+ *
+ * Authors: Tommy Jin <tjin@wavecomp.com>, Zhongwu Zhu <zzhu@wavecomp.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/sched/task.h>
+#include <linux/memblock.h>
+#include <linux/start_kernel.h>
+#include <linux/mm.h>
+#include <linux/cpu.h>
+
+#include <asm/mmu_context.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/sections.h>
+#include <asm/tlbflush.h>
+
+#define __pgd_none(early, pgd) (early ? (pgd_val(pgd) == 0) : \
+(__pa(pgd_val(pgd)) == (unsigned long)__pa(kasan_early_shadow_pmd)))
+
+#define __pmd_none(early, pmd) (early ? (pmd_val(pmd) == 0) : \
+(__pa(pmd_val(pmd)) == (unsigned long)__pa(kasan_early_shadow_pte)))
+
+#define __pte_none(early, pte) (early ? pte_none(pte) : \
+((pte_val(pte) & _PFN_MASK) == (unsigned long)__pa(kasan_early_shadow_page)))
+
+bool kasan_early_stage = true;
+
+/*
+ * Alloc memory for shadow memory page table.
+ */
+static phys_addr_t __init kasan_alloc_zeroed_page(int node)
+{
+	void *p = memblock_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
+					__pa(MAX_DMA_ADDRESS),
+						MEMBLOCK_ALLOC_ACCESSIBLE, node);
+	return __pa(p);
+}
+
+static pte_t *kasan_pte_offset(pmd_t *pmdp, unsigned long addr, int node,
+				      bool early)
+{
+	if (__pmd_none(early, READ_ONCE(*pmdp))) {
+		phys_addr_t pte_phys = early ?
+				__pa_symbol(kasan_early_shadow_pte)
+					: kasan_alloc_zeroed_page(node);
+		if (!early)
+			memcpy(__va(pte_phys), kasan_early_shadow_pte,
+				sizeof(kasan_early_shadow_pte));
+
+		pmd_populate_kernel(NULL, pmdp, (pte_t *)__va(pte_phys));
+	}
+
+	return pte_offset_kernel(pmdp, addr);
+}
+
+static inline void kasan_set_pgd(pgd_t *pgdp, pgd_t pgdval)
+{
+	WRITE_ONCE(*pgdp, pgdval);
+}
+
+static pmd_t *kasan_pmd_offset(pgd_t *pgdp, unsigned long addr, int node,
+				      bool early)
+{
+	if (__pgd_none(early, READ_ONCE(*pgdp))) {
+		phys_addr_t pmd_phys = early ?
+				__pa_symbol(kasan_early_shadow_pmd)
+					: kasan_alloc_zeroed_page(node);
+		if (!early)
+			memcpy(__va(pmd_phys), kasan_early_shadow_pmd,
+				sizeof(kasan_early_shadow_pmd));
+		kasan_set_pgd(pgdp, __pgd((unsigned long)__va(pmd_phys)));
+	}
+
+	return (pmd_t *)((pmd_t *)pgd_val(*pgdp) + pmd_index(addr));
+}
+
+static void  kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
+				      unsigned long end, int node, bool early)
+{
+	unsigned long next;
+	pte_t *ptep = kasan_pte_offset(pmdp, addr, node, early);
+
+	do {
+		phys_addr_t page_phys = early ?
+					__pa_symbol(kasan_early_shadow_page)
+					      : kasan_alloc_zeroed_page(node);
+		next = addr + PAGE_SIZE;
+		set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
+	} while (ptep++, addr = next, addr != end && __pte_none(early, READ_ONCE(*ptep)));
+}
+
+static void kasan_pmd_populate(pgd_t *pgdp, unsigned long addr,
+				      unsigned long end, int node, bool early)
+{
+	unsigned long next;
+	pmd_t *pmdp = kasan_pmd_offset(pgdp, addr, node, early);
+
+	do {
+		next = pmd_addr_end(addr, end);
+		kasan_pte_populate(pmdp, addr, next, node, early);
+	} while (pmdp++, addr = next, addr != end && __pmd_none(early, READ_ONCE(*pmdp)));
+}
+
+static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
+				      int node, bool early)
+{
+	unsigned long next;
+	pgd_t *pgdp;
+
+	pgdp = pgd_offset_k(addr);
+
+	do {
+		next = pgd_addr_end(addr, end);
+		kasan_pmd_populate(pgdp, addr, next, node, early);
+	} while (pgdp++, addr = next, addr != end);
+}
+
+/* The early shadow maps everything to a single page of zeroes */
+asmlinkage void __init kasan_early_init(void)
+{
+	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
+}
+
+/* Set up full kasan mappings, ensuring that the mapped pages are zeroed */
+static void __init kasan_map_populate(unsigned long start, unsigned long end,
+				      int node)
+{
+	kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start < end; start += PGDIR_SIZE)
+		kasan_set_pgd((pgd_t *)pgd_offset_k(start), __pgd(0));
+}
+
+void __init kasan_init(void)
+{
+	u64 kimg_shadow_start, kimg_shadow_end;
+	struct memblock_region *reg;
+	int i;
+
+	/*
+	 * Instrumented code may cause TLB exceptions, so if KASAN is
+	 * enabled we need to initialize the traps first.
+	 */
+	trap_init();
+
+	/*
+	 * The pgd entries were populated with invalid_pmd_table or
+	 * invalid_pud_table in pagetable_init(), depending on how many
+	 * page table levels are in use. We have to clear the pgd entries
+	 * of the KASAN shadow region because those values are non-zero:
+	 * pgd_none() would otherwise be false and the populate step below
+	 * would not create any new pgd entries at all.
+	 */
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	/* Maps everything to a single page of zeroes */
+	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
+			true);
+
+	kasan_early_stage = false;
+	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
+	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
+
+	/*
+	 * Instrumented code cannot run without shadow memory, so map
+	 * real shadow for the kernel image first; the early zero shadow
+	 * keeps covering everything else until it is replaced below.
+	 */
+	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
+			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+
+	for_each_memblock(memory, reg) {
+		void *start = (void *)phys_to_virt(reg->base);
+		void *end = (void *)phys_to_virt(reg->base + reg->size);
+
+		if (start >= end)
+			break;
+
+		kasan_map_populate((unsigned long)kasan_mem_to_shadow(start),
+				   (unsigned long)kasan_mem_to_shadow(end),
+				   early_pfn_to_nid(virt_to_pfn(start)));
+	}
+
+	/*
+	 * KASAN may reuse the contents of kasan_early_shadow_pte directly,
+	 * so we should make sure that it maps the zero page read-only.
+	 */
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte(&kasan_early_shadow_pte[i],
+			pfn_pte(sym_to_pfn(kasan_early_shadow_page),
+				PAGE_KERNEL_RO));
+
+	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+
+	/* At this point kasan is fully initialized. Enable error messages */
+	init_task.kasan_depth = 0;
+	pr_info("KernelAddressSanitizer initialized.\n");
+}
+
diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
index 7221df2..b84ceb4 100644
--- a/arch/mips/vdso/Makefile
+++ b/arch/mips/vdso/Makefile
@@ -1,5 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
 # Objects to go into the VDSO.
+
+ifdef CONFIG_KASAN
+KASAN_SANITIZE	:= n
+endif
+
 obj-vdso-y := elf.o gettimeofday.o sigreturn.o
 
 # Common compiler flags between ABIs.
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b40ea10..055eb85 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -23,11 +23,13 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 int kasan_populate_early_shadow(const void *shadow_start,
 				const void *shadow_end);
 
+#ifndef __HAVE_ARCH_SHADOW_MAP
 static inline void *kasan_mem_to_shadow(const void *addr)
 {
 	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
 		+ KASAN_SHADOW_OFFSET;
 }
+#endif /* __HAVE_ARCH_SHADOW_MAP */
 
 /* Enable reporting bugs after kasan_disable_current() */
 extern void kasan_enable_current(void);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 504c7936..ce070fb 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -173,11 +173,18 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	if (unlikely(size == 0))
 		return;
 
-	if (unlikely((void *)addr <
-		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
-		kasan_report(addr, size, write, ret_ip);
-		return;
-	}
+#ifndef __HAVE_ARCH_SHADOW_MAP
+	if (unlikely((void *)addr <
+		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+		kasan_report(addr, size, write, ret_ip);
+		return;
+	}
+#else
+	if (unlikely(kasan_mem_to_shadow((void *)addr) == NULL)) {
+		kasan_report(addr, size, write, ret_ip);
+		return;
+	}
+#endif
 
 	if (likely(!memory_is_poisoned(addr, size)))
 		return;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 3ce956e..5f86724 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -110,11 +110,13 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
 struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 					const void *object);
 
+#ifndef __HAVE_ARCH_SHADOW_MAP
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
 	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
 		<< KASAN_SHADOW_SCALE_SHIFT);
 }
+#endif
 
 static inline bool addr_has_shadow(const void *addr)
 {
-- 
2.7.4



* Re: [PATCH] MIPS64: add KASAN support
  2019-08-27  8:28 [PATCH] MIPS64: add KASAN support Tommy Jin
@ 2019-08-29 11:33 ` Thomas Bogendoerfer
  2019-09-05  3:17   ` Reply: [EXTERNAL]Re: " Zhongwu Zhu
  0 siblings, 1 reply; 3+ messages in thread
From: Thomas Bogendoerfer @ 2019-08-29 11:33 UTC (permalink / raw)
  To: Tommy Jin; +Cc: linux-mips, Zhongwu Zhu

On Tue, Aug 27, 2019 at 08:28:38AM +0000, Tommy Jin wrote:
> +/* 64-bit segment value. */
> +#define XKPHYS_CACHE_SEG	(0xa8)

that's just cachable coherent exclusive on write, what about
cachable non coherent (0x98) and cachable exclusive (0xa0) ?

Thomas.

-- 
Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
good idea.                                                [ RFC1925, 2.3 ]


* Reply: [EXTERNAL]Re: [PATCH] MIPS64: add KASAN support
  2019-08-29 11:33 ` Thomas Bogendoerfer
@ 2019-09-05  3:17   ` Zhongwu Zhu
  0 siblings, 0 replies; 3+ messages in thread
From: Zhongwu Zhu @ 2019-09-05  3:17 UTC (permalink / raw)
  To: Thomas Bogendoerfer, Tommy Jin; +Cc: linux-mips

Hi Thomas,
	Tommy and I are responsible for the MIPS KASAN work. Thank you for your comments.
	We didn't consider cachable non coherent (0x98) and cachable exclusive (0xa0) mainly because we had not seen them used. Do you know in which cases these two segment addresses would be accessed?
	But if these two segments are really needed, we can add them to our code. Correspondingly, the KASAN address space would need to be extended (from 512G to 1024G, i.e. changing PGD_ORDER from 1 to 2).
BR
Zhongwu

