* [PATCH v2 0/7] KASan for arm
@ 2018-03-18 12:53 Abbott Liu
  2018-03-18 12:53 ` [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16 Abbott Liu
                   ` (9 more replies)
  0 siblings, 10 replies; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

Changelog:
v2 - v1
- Fixed some compile errors that occur when changing the kernel
  compression mode to lzma/xz/lzo/lz4.
  ---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
	     Russell King - ARM Linux <linux@armlinux.org.uk>
- Fixed a compile error, reported by kbuild, caused by some older ARM
  instruction sets (e.g. armv4t) that don't support movw/movt.
- Changed the pte flag from _L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN to
  pgprot_val(PAGE_KERNEL).
  ---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
- Moved the 'Enable KASan' patch to the end of the series.
  ---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
     Russell King - ARM Linux <linux@armlinux.org.uk>
- Moved the definitions of the cp15 registers from
  arch/arm/include/asm/kvm_hyp.h to arch/arm/include/asm/cp15.h.
  ---Asked by: Mark Rutland <mark.rutland@arm.com>
- Merged the following commits into the commit
  'Define the virtual space of KASan's shadow region':
  1) Define the virtual space of KASan's shadow region;
  2) Avoid cleaning the KASan shadow area's mapping table;
  3) Add KASan layout;
- Merged the following commits into the commit
  'Initialize the mapping of KASan shadow memory':
  1) Initialize the mapping of KASan shadow memory;
  2) Add support for ARM LPAE;
  3) No need to map the shadow of KASan's shadow memory;
     ---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
  4) Change the mapping of kasan_zero_page to read-only.

Hi, all:
   These patches add the arch-specific code for the kernel address
sanitizer (see Documentation/kasan.txt).

   1/8 of the kernel address space is reserved for shadow memory. There
was no hole big enough for this, so the virtual addresses for the shadow
were stolen from user space.

   At the early boot stage the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused as a
read-only zero shadow for memory that KASan currently doesn't track
(vmalloc).

  After mapping the physical memory, pages for shadow memory are
allocated and mapped.
  
  KASan's stack instrumentation significantly increases stack
consumption, so CONFIG_KASAN doubles THREAD_SIZE (THREAD_SIZE_ORDER
goes from 1 to 2, i.e. from 8 KB to 16 KB with 4 KB pages).

  Functions like memset/memmove/memcpy do a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch it. Compiler instrumentation cannot do this since these
functions are written in assembly.

  KASan replaces the memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them. The
original functions also have aliases with a '__' prefix, so the
non-instrumented variants can be called when needed.
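
  A rough sketch of what the instrumented variants in mm/kasan/kasan.c
look like (check_memory_region() is the generic KASan range check):

	#undef memcpy
	void *memcpy(void *dest, const void *src, size_t len)
	{
		/* Validate both ranges against the shadow first... */
		check_memory_region((unsigned long)src, len, false, _RET_IP_);
		check_memory_region((unsigned long)dest, len, true, _RET_IP_);

		/* ...then defer to the uninstrumented assembly version. */
		return __memcpy(dest, src, len);
	}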

  Some files are built without KASan instrumentation (e.g. mm/slub.c).
For such files the original mem* functions are replaced (via #define)
with the prefixed variants to disable memory access checks.

  On the ARM LPAE architecture, the mapping table of the KASan shadow
memory (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual
space is 0xb6e00000~0xbf000000) can't be filled in lazily by the
do_translation_fault function, because KASan instrumentation may itself
cause do_translation_fault to access the KASan shadow memory. Such an
access would trigger another translation fault and the kernel would
recurse forever. So the mapping table of the KASan shadow memory has to
be copied in the pgd_alloc function instead.
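
  To make the failure mode concrete, the chain would look roughly like
this if the shadow were left to be faulted in lazily:

	do_translation_fault()          <- faults on an unmapped shadow address
	  __asan_load4()                <- check inserted by the compiler
	    load of a shadow byte       <- shadow mapping still missing
	      translation fault
	        do_translation_fault()  <- recursion never terminates

Copying the shadow entries in pgd_alloc means a translation fault is
never taken on the shadow region in the first place.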


Most of the code comes from:
https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe

These patches were tested on vexpress-ca15 and vexpress-ca9.



Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Tested-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>

Abbott Liu (3):
  2 1-byte checks more safer for memory_is_poisoned_16
  Add TTBR operator for kasan_init
  Define the virtual space of KASan's shadow region

Andrey Ryabinin (4):
  Disable instrumentation for some code
  Replace memory function for kasan
  Initialize the mapping of KASan shadow memory
  Enable KASan for arm

 arch/arm/Kconfig                      |   1 +
 arch/arm/boot/compressed/Makefile     |   1 +
 arch/arm/boot/compressed/decompress.c |   2 +
 arch/arm/boot/compressed/libfdt_env.h |   2 +
 arch/arm/include/asm/cp15.h           | 104 ++++++++++++
 arch/arm/include/asm/kasan.h          |  23 +++
 arch/arm/include/asm/kasan_def.h      |  52 ++++++
 arch/arm/include/asm/kvm_hyp.h        |  52 ------
 arch/arm/include/asm/memory.h         |   5 +
 arch/arm/include/asm/pgalloc.h        |   7 +-
 arch/arm/include/asm/string.h         |  17 ++
 arch/arm/include/asm/thread_info.h    |   4 +
 arch/arm/kernel/entry-armv.S          |   5 +-
 arch/arm/kernel/entry-common.S        |   6 +-
 arch/arm/kernel/head-common.S         |   7 +-
 arch/arm/kernel/setup.c               |   2 +
 arch/arm/kernel/unwind.c              |   3 +-
 arch/arm/kvm/hyp/cp15-sr.c            |  12 +-
 arch/arm/kvm/hyp/switch.c             |   6 +-
 arch/arm/lib/memcpy.S                 |   3 +
 arch/arm/lib/memmove.S                |   5 +-
 arch/arm/lib/memset.S                 |   3 +
 arch/arm/mm/Makefile                  |   3 +
 arch/arm/mm/init.c                    |   6 +
 arch/arm/mm/kasan_init.c              | 290 ++++++++++++++++++++++++++++++++++
 arch/arm/mm/mmu.c                     |   7 +-
 arch/arm/mm/pgd.c                     |  14 ++
 arch/arm/vdso/Makefile                |   2 +
 mm/kasan/kasan.c                      |  24 ++-
 29 files changed, 588 insertions(+), 80 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan.h
 create mode 100644 arch/arm/include/asm/kasan_def.h
 create mode 100644 arch/arm/mm/kasan_init.c

-- 
2.9.0

* [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
  2018-03-18 13:21   ` Russell King - ARM Linux
  2018-03-18 12:53 ` [PATCH 2/7] Add TTBR operator for kasan_init Abbott Liu
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

Because the instruction sets of some architectures (e.g. ARM) do not
support unaligned accesses well, two 1-byte checks are safer than one
2-byte check. The impact on performance is small because 16-byte
accesses are not too common.
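
An illustrative case: for addr = 0xc0000008 the access is 8-byte
aligned, so the old code performed the single u16 shadow load; but
addr >> 3 = 0x18000001 is odd (and KASAN_SHADOW_OFFSET is even on
typical configurations), so that u16 load of the shadow is itself
unaligned. Reading shadow_addr[0] and shadow_addr[1] as two u8 loads
is aligned for any value of addr.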

Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 mm/kasan/kasan.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e13d911..104839a 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -151,13 +151,20 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
 
 static __always_inline bool memory_is_poisoned_16(unsigned long addr)
 {
-	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
-
-	/* Unaligned 16-bytes access maps into 3 shadow bytes. */
-	if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
-		return *shadow_addr || memory_is_poisoned_1(addr + 15);
+	u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
 
-	return *shadow_addr;
+	if (unlikely(shadow_addr[0] || shadow_addr[1])) {
+		return true;
+	} else if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE))) {
+		/*
+		 * If two shadow bytes cover the 16-byte access, we don't
+		 * need to do anything more. Otherwise, test the last
+		 * shadow byte.
+		 */
+		return false;
+	} else {
+		return memory_is_poisoned_1(addr + 15);
+	}
 }
 
 static __always_inline unsigned long bytes_is_nonzero(const u8 *start,
-- 
2.9.0

* [PATCH 2/7] Add TTBR operator for kasan_init
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
  2018-03-18 12:53 ` [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16 Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
  2018-03-18 12:53 ` [PATCH 3/7] Disable instrumentation for some code Abbott Liu
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

The purpose of this patch is to provide set_ttbr0/get_ttbr0 to the
kasan_init function. The definitions of the cp15 registers belong in
arch/arm/include/asm/cp15.h rather than
arch/arm/include/asm/kvm_hyp.h, so move them there.
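
As an example of the intended use, kasan_init() in patch 6 switches to
a temporary page table around the shadow rebuild (abridged from that
patch):

	u64 orig_ttbr0 = get_ttbr0();

	set_ttbr0((u64)__pa(tmp_pgd_table));	/* run on the temporary pgd */
	/* ...clear and repopulate the shadow mapping... */
	set_ttbr0(orig_ttbr0);			/* back to the original tables */

Since IS_ENABLED(CONFIG_ARM_LPAE) is a compile-time constant, each
helper should compile down to a single 32-bit or 64-bit cp15 access on
any given configuration.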

Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 arch/arm/include/asm/cp15.h    | 104 +++++++++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/kvm_hyp.h |  52 ---------------------
 arch/arm/kvm/hyp/cp15-sr.c     |  12 ++---
 arch/arm/kvm/hyp/switch.c      |   6 +--
 4 files changed, 113 insertions(+), 61 deletions(-)

diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index 4c9fa72..99ebb31 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -3,6 +3,7 @@
 #define __ASM_ARM_CP15_H
 
 #include <asm/barrier.h>
+#include <linux/stringify.h>
 
 /*
  * CR1 bits (CP#15 CR1)
@@ -65,8 +66,111 @@
 #define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
 #define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
 
+#define TTBR0_32	__ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32	__ACCESS_CP15(c2, 0, c0, 1)
+#define PAR_32		__ACCESS_CP15(c7, 0, c4, 0)
+#define TTBR0_64	__ACCESS_CP15_64(0, c2)
+#define TTBR1_64	__ACCESS_CP15_64(1, c2)
+#define PAR_64		__ACCESS_CP15_64(0, c7)
+#define VTTBR		__ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL	__ACCESS_CP15_64(3, c14)
+#define CNTVOFF		__ACCESS_CP15_64(4, c14)
+
+#define MIDR		__ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR		__ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR		__ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR		__ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR		__ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR		__ACCESS_CP15(c1, 0, c0, 2)
+#define HCR		__ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR		__ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR		__ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR		__ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR		__ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR		__ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR		__ACCESS_CP15(c2, 4, c1, 2)
+#define DACR		__ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR		__ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR		__ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR		__ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR		__ACCESS_CP15(c5, 0, c1, 1)
+#define HSR		__ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR		__ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR		__ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR		__ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR		__ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR		__ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS	__ACCESS_CP15(c7, 0, c1, 0)
+#define BPIALLIS	__ACCESS_CP15(c7, 0, c1, 6)
+#define ICIMVAU		__ACCESS_CP15(c7, 0, c5, 1)
+#define ATS1CPR		__ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS	__ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL		__ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS	__ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR		__ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR		__ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0		__ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1		__ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR		__ACCESS_CP15(c12, 0, c0, 0)
+#define CID		__ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW		__ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO		__ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV	__ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR		__ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL		__ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL	__ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL		__ACCESS_CP15(c14, 4, c1, 0)
+
 extern unsigned long cr_alignment;	/* defined in entry-armv.S */
 
+static inline void set_par(u64 val)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		write_sysreg(val, PAR_64);
+	else
+		write_sysreg(val, PAR_32);
+}
+
+static inline u64 get_par(void)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		return read_sysreg(PAR_64);
+	else
+		return read_sysreg(PAR_32);
+}
+
+static inline void set_ttbr0(u64 val)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		write_sysreg(val, TTBR0_64);
+	else
+		write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		return read_sysreg(TTBR0_64);
+	else
+		return read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		write_sysreg(val, TTBR1_64);
+	else
+		write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		return read_sysreg(TTBR1_64);
+	else
+		return read_sysreg(TTBR1_32);
+}
+
 static inline unsigned long get_cr(void)
 {
 	unsigned long val;
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 1ab8329..8e8592e 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -36,58 +36,6 @@
 	__val;							\
 })
 
-#define TTBR0		__ACCESS_CP15_64(0, c2)
-#define TTBR1		__ACCESS_CP15_64(1, c2)
-#define VTTBR		__ACCESS_CP15_64(6, c2)
-#define PAR		__ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL	__ACCESS_CP15_64(3, c14)
-#define CNTVOFF		__ACCESS_CP15_64(4, c14)
-
-#define MIDR		__ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR		__ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR		__ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR		__ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR		__ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR		__ACCESS_CP15(c1, 0, c0, 2)
-#define HCR		__ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR		__ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR		__ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR		__ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR		__ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR		__ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR		__ACCESS_CP15(c2, 4, c1, 2)
-#define DACR		__ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR		__ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR		__ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR		__ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR		__ACCESS_CP15(c5, 0, c1, 1)
-#define HSR		__ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR		__ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR		__ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR		__ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR		__ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR		__ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS	__ACCESS_CP15(c7, 0, c1, 0)
-#define BPIALLIS	__ACCESS_CP15(c7, 0, c1, 6)
-#define ICIMVAU		__ACCESS_CP15(c7, 0, c5, 1)
-#define ATS1CPR		__ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS	__ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL		__ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS	__ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR		__ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR		__ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0		__ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1		__ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR		__ACCESS_CP15(c12, 0, c0, 0)
-#define CID		__ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW		__ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO		__ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV	__ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR		__ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL		__ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL	__ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL		__ACCESS_CP15(c14, 4, c1, 0)
-
 #define VFP_FPEXC	__ACCESS_VFP(FPEXC)
 
 /* AArch64 compatibility macros, only for the timer so far */
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index c478281..d365e3c 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -31,8 +31,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 	ctxt->cp15[c0_CSSELR]		= read_sysreg(CSSELR);
 	ctxt->cp15[c1_SCTLR]		= read_sysreg(SCTLR);
 	ctxt->cp15[c1_CPACR]		= read_sysreg(CPACR);
-	*cp15_64(ctxt, c2_TTBR0)	= read_sysreg(TTBR0);
-	*cp15_64(ctxt, c2_TTBR1)	= read_sysreg(TTBR1);
+	*cp15_64(ctxt, c2_TTBR0)	= read_sysreg(TTBR0_64);
+	*cp15_64(ctxt, c2_TTBR1)	= read_sysreg(TTBR1_64);
 	ctxt->cp15[c2_TTBCR]		= read_sysreg(TTBCR);
 	ctxt->cp15[c3_DACR]		= read_sysreg(DACR);
 	ctxt->cp15[c5_DFSR]		= read_sysreg(DFSR);
@@ -41,7 +41,7 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 	ctxt->cp15[c5_AIFSR]		= read_sysreg(AIFSR);
 	ctxt->cp15[c6_DFAR]		= read_sysreg(DFAR);
 	ctxt->cp15[c6_IFAR]		= read_sysreg(IFAR);
-	*cp15_64(ctxt, c7_PAR)		= read_sysreg(PAR);
+	*cp15_64(ctxt, c7_PAR)		= read_sysreg(PAR_64);
 	ctxt->cp15[c10_PRRR]		= read_sysreg(PRRR);
 	ctxt->cp15[c10_NMRR]		= read_sysreg(NMRR);
 	ctxt->cp15[c10_AMAIR0]		= read_sysreg(AMAIR0);
@@ -60,8 +60,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->cp15[c0_CSSELR],	CSSELR);
 	write_sysreg(ctxt->cp15[c1_SCTLR],	SCTLR);
 	write_sysreg(ctxt->cp15[c1_CPACR],	CPACR);
-	write_sysreg(*cp15_64(ctxt, c2_TTBR0),	TTBR0);
-	write_sysreg(*cp15_64(ctxt, c2_TTBR1),	TTBR1);
+	write_sysreg(*cp15_64(ctxt, c2_TTBR0),	TTBR0_64);
+	write_sysreg(*cp15_64(ctxt, c2_TTBR1),	TTBR1_64);
 	write_sysreg(ctxt->cp15[c2_TTBCR],	TTBCR);
 	write_sysreg(ctxt->cp15[c3_DACR],	DACR);
 	write_sysreg(ctxt->cp15[c5_DFSR],	DFSR);
@@ -70,7 +70,7 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->cp15[c5_AIFSR],	AIFSR);
 	write_sysreg(ctxt->cp15[c6_DFAR],	DFAR);
 	write_sysreg(ctxt->cp15[c6_IFAR],	IFAR);
-	write_sysreg(*cp15_64(ctxt, c7_PAR),	PAR);
+	write_sysreg(*cp15_64(ctxt, c7_PAR),	PAR_64);
 	write_sysreg(ctxt->cp15[c10_PRRR],	PRRR);
 	write_sysreg(ctxt->cp15[c10_NMRR],	NMRR);
 	write_sysreg(ctxt->cp15[c10_AMAIR0],	AMAIR0);
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index ae45ae9..94d5bb9 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -134,12 +134,12 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 	if (!(hsr & HSR_DABT_S1PTW) && (hsr & HSR_FSC_TYPE) == FSC_PERM) {
 		u64 par, tmp;
 
-		par = read_sysreg(PAR);
+		par = read_sysreg(PAR_64);
 		write_sysreg(far, ATS1CPR);
 		isb();
 
-		tmp = read_sysreg(PAR);
-		write_sysreg(par, PAR);
+		tmp = read_sysreg(PAR_64);
+		write_sysreg(par, PAR_64);
 
 		if (unlikely(tmp & 1))
 			return false; /* Translation failed, back to guest */
-- 
2.9.0

* [PATCH 3/7] Disable instrumentation for some code
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
  2018-03-18 12:53 ` [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16 Abbott Liu
  2018-03-18 12:53 ` [PATCH 2/7] Add TTBR operator for kasan_init Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
  2018-03-19  8:38   ` Marc Zyngier
  2018-03-18 12:53 ` [PATCH 4/7] Replace memory function for kasan Abbott Liu
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

From: Andrey Ryabinin <a.ryabinin@samsung.com>

Disable instrumentation for arch/arm/boot/compressed/*
and arch/arm/vdso/* because that code is not linked with
the kernel image.

Disable the KASan check in the function unwind_pop_register
because it does not matter if a KASan check fails when
unwind_pop_register reads the stack memory of a task.
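
For reference, KASAN_SANITIZE is kbuild's standard per-Makefile knob;
a per-object form also exists (patch 6 uses it for kasan_init.o; foo.o
below is just a placeholder):

	KASAN_SANITIZE := n		# every object in this Makefile
	KASAN_SANITIZE_foo.o := n	# a single object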

Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 arch/arm/boot/compressed/Makefile | 1 +
 arch/arm/kernel/unwind.c          | 3 ++-
 arch/arm/vdso/Makefile            | 2 ++
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 45a6b9b..966103e 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -24,6 +24,7 @@ OBJS		+= hyp-stub.o
 endif
 
 GCOV_PROFILE		:= n
+KASAN_SANITIZE		:= n
 
 #
 # Architecture dependencies
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 0bee233..2e55c7d 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -249,7 +249,8 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
 		if (*vsp >= (unsigned long *)ctrl->sp_high)
 			return -URC_FAILURE;
 
-	ctrl->vrs[reg] = *(*vsp)++;
+	ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+	(*vsp)++;
 	return URC_OK;
 }
 
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index bb411821..87abbb7 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -30,6 +30,8 @@ CFLAGS_vgettimeofday.o = -O2
 # Disable gcov profiling for VDSO code
 GCOV_PROFILE := n
 
+KASAN_SANITIZE := n
+
 # Force dependency
 $(obj)/vdso.o : $(obj)/vdso.so
 
-- 
2.9.0

* [PATCH 4/7] Replace memory function for kasan
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
                   ` (2 preceding siblings ...)
  2018-03-18 12:53 ` [PATCH 3/7] Disable instrumentation for some code Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
  2018-03-18 12:53 ` [PATCH 5/7] Define the virtual space of KASan's shadow region Abbott Liu
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

From: Andrey Ryabinin <a.ryabinin@samsung.com>

Functions like memset/memmove/memcpy do a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch it. Compiler instrumentation cannot do this since these
functions are written in assembly.

KASan replaces the memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them. The
original functions also have aliases with a '__' prefix, so the
non-instrumented variants can be called when needed.

We must use __memcpy/__memset to replace memcpy/memset when we copy
.data to RAM and when we clear .bss, because kasan_early_init cannot
be called before .data and .bss have been initialized.

Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 arch/arm/boot/compressed/decompress.c |  2 ++
 arch/arm/boot/compressed/libfdt_env.h |  2 ++
 arch/arm/include/asm/string.h         | 17 +++++++++++++++++
 arch/arm/kernel/head-common.S         |  4 ++--
 arch/arm/lib/memcpy.S                 |  3 +++
 arch/arm/lib/memmove.S                |  5 ++++-
 arch/arm/lib/memset.S                 |  3 +++
 7 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/arch/arm/boot/compressed/decompress.c b/arch/arm/boot/compressed/decompress.c
index a2ac3fe..0596077 100644
--- a/arch/arm/boot/compressed/decompress.c
+++ b/arch/arm/boot/compressed/decompress.c
@@ -49,8 +49,10 @@ extern int memcmp(const void *cs, const void *ct, size_t count);
 #endif
 
 #ifdef CONFIG_KERNEL_XZ
+#ifndef CONFIG_KASAN
 #define memmove memmove
 #define memcpy memcpy
+#endif
 #include "../../../../lib/decompress_unxz.c"
 #endif
 
diff --git a/arch/arm/boot/compressed/libfdt_env.h b/arch/arm/boot/compressed/libfdt_env.h
index 0743781..736ed36 100644
--- a/arch/arm/boot/compressed/libfdt_env.h
+++ b/arch/arm/boot/compressed/libfdt_env.h
@@ -17,4 +17,6 @@ typedef __be64 fdt64_t;
 #define fdt64_to_cpu(x)		be64_to_cpu(x)
 #define cpu_to_fdt64(x)		cpu_to_be64(x)
 
+#undef memset
+
 #endif
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 111a1d8..1f9016b 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -15,15 +15,18 @@ extern char * strchr(const char * s, int c);
 
 #define __HAVE_ARCH_MEMCPY
 extern void * memcpy(void *, const void *, __kernel_size_t);
+extern void *__memcpy(void *dest, const void *src, __kernel_size_t n);
 
 #define __HAVE_ARCH_MEMMOVE
 extern void * memmove(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *dest, const void *src, __kernel_size_t n);
 
 #define __HAVE_ARCH_MEMCHR
 extern void * memchr(const void *, int, __kernel_size_t);
 
 #define __HAVE_ARCH_MEMSET
 extern void * memset(void *, int, __kernel_size_t);
+extern void *__memset(void *s, int c, __kernel_size_t n);
 
 #define __HAVE_ARCH_MEMSET32
 extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
@@ -39,4 +42,18 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
 	return __memset64(p, v, n * 8, v >> 32);
 }
 
+
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use the non-instrumented versions of the mem* functions.
+ */
+
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
 #endif
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 6e0375e..c79b829 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -99,7 +99,7 @@ __mmap_switched:
  THUMB(	ldmia	r4!, {r0, r1, r2, r3} )
  THUMB(	mov	sp, r3 )
 	sub	r2, r2, r1
-	bl	memcpy				@ copy .data to RAM
+	bl	__memcpy			@ copy .data to RAM
 #endif
 
    ARM(	ldmia	r4!, {r0, r1, sp} )
@@ -107,7 +107,7 @@ __mmap_switched:
  THUMB(	mov	sp, r3 )
 	sub	r2, r1, r0
 	mov	r1, #0
-	bl	memset				@ clear .bss
+	bl	__memset			@ clear .bss
 
 	ldmia	r4, {r0, r1, r2, r3}
 	str	r9, [r0]			@ Save processor ID
diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S
index 64111bd..79a83f8 100644
--- a/arch/arm/lib/memcpy.S
+++ b/arch/arm/lib/memcpy.S
@@ -61,6 +61,8 @@
 
 /* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
 
+.weak memcpy
+ENTRY(__memcpy)
 ENTRY(mmiocpy)
 ENTRY(memcpy)
 
@@ -68,3 +70,4 @@ ENTRY(memcpy)
 
 ENDPROC(memcpy)
 ENDPROC(mmiocpy)
+ENDPROC(__memcpy)
diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index 69a9d47..313db6c 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -27,12 +27,14 @@
  * occurring in the opposite direction.
  */
 
+.weak memmove
+ENTRY(__memmove)
 ENTRY(memmove)
 	UNWIND(	.fnstart			)
 
 		subs	ip, r0, r1
 		cmphi	r2, ip
-		bls	memcpy
+		bls	__memcpy
 
 		stmfd	sp!, {r0, r4, lr}
 	UNWIND(	.fnend				)
@@ -225,3 +227,4 @@ ENTRY(memmove)
 18:		backward_copy_shift	push=24	pull=8
 
 ENDPROC(memmove)
+ENDPROC(__memmove)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index ed6d35d..64aa06a 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -16,6 +16,8 @@
 	.text
 	.align	5
 
+.weak memset
+ENTRY(__memset)
 ENTRY(mmioset)
 ENTRY(memset)
 UNWIND( .fnstart         )
@@ -135,6 +137,7 @@ UNWIND( .fnstart            )
 UNWIND( .fnend   )
 ENDPROC(memset)
 ENDPROC(mmioset)
+ENDPROC(__memset)
 
 ENTRY(__memset32)
 UNWIND( .fnstart         )
-- 
2.9.0

* [PATCH 5/7] Define the virtual space of KASan's shadow region
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
                   ` (3 preceding siblings ...)
  2018-03-18 12:53 ` [PATCH 4/7] Replace memory function for kasan Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
  2018-03-18 12:53 ` [PATCH 6/7] Initialize the mapping of KASan shadow memory Abbott Liu
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the arm kernel address sanitizer.

     +----+ 0xffffffff
     |    |
     |    |
     |    |
     +----+ CONFIG_PAGE_OFFSET
     |    |\
     |    | |->  module virtual address space area.
     |    |/
     +----+ MODULE_VADDR = KASAN_SHADOW_END
     |    |\
     |    | |-> the shadow area of kernel virtual address.
     |    |/
     +----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START  the
     |    |\  shadow address of MODULE_VADDR
     |    | ---------------------+
     |    |                      |
     +    + KASAN_SHADOW_OFFSET  |-> the user space area. Kernel address
     |    |                      |    sanitizer do not use this space.
     |    | ---------------------+
     |    |/
     ------ 0

1) KASAN_SHADOW_OFFSET:
  This value is used to map an address to the corresponding shadow
address by the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;

2) KASAN_SHADOW_START
  This value is the shadow address of MODULE_VADDR. It is also the
start of the kernel virtual space (TASK_SIZE).

3) KASAN_SHADOW_END
  This value is the shadow address of 0x100000000. It is the end of the
kernel address sanitizer's shadow area. It is also the start of the
module area.
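
Plugging in the default CONFIG_PAGE_OFFSET of 0xc0000000 gives the
concrete values (a sanity check of the formulas above):

	KASAN_SHADOW_END    = 0xc0000000 - SZ_16M            = 0xbf000000
	KASAN_SHADOW_OFFSET = 0xbf000000 - (1 << 29)         = 0x9f000000
	KASAN_SHADOW_START  = (0xbf000000 >> 3) + 0x9f000000 = 0xb6e00000

and the shadow of the top of the address space lands exactly on the end
of the shadow region: (0x100000000 >> 3) + 0x9f000000 = 0xbf000000.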

When KASan is enabled, the definition of TASK_SIZE is no longer an
8-bit rotated constant, so the TASK_SIZE access code in the *.S files
must be modified: mov with an immediate can only encode 8-bit rotated
constants, so those sites now use ldr rX, =TASK_SIZE (a literal pool
load) instead.

Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 arch/arm/include/asm/kasan_def.h | 52 ++++++++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/memory.h    |  5 ++++
 arch/arm/kernel/entry-armv.S     |  5 ++--
 arch/arm/kernel/entry-common.S   |  6 +++--
 arch/arm/mm/init.c               |  6 +++++
 arch/arm/mm/mmu.c                |  7 +++++-
 6 files changed, 76 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan_def.h

diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 0000000..3a5cdc9
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,52 @@
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ *    +----+ 0xffffffff
+ *    |    |
+ *    |    |
+ *    |    |
+ *    +----+ CONFIG_PAGE_OFFSET
+ *    |    |\
+ *    |    | |->  module virtual address space area.
+ *    |    |/
+ *    +----+ MODULE_VADDR = KASAN_SHADOW_END
+ *    |    |\
+ *    |    | |-> the shadow area of kernel virtual address.
+ *    |    |/
+ *    +----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START  the
+ *    |    |\  shadow address of MODULE_VADDR
+ *    |    | ---------------------+
+ *    |    |                      |
+ *    +    + KASAN_SHADOW_OFFSET  |-> the user space area. Kernel address
+ *    |    |                      |    sanitizer do not use this space.
+ *    |    | ---------------------+
+ *    |    |/
+ *    ------ 0
+ *
+ * 1) KASAN_SHADOW_OFFSET:
+ *    This value is used to map an address to the corresponding shadow
+ * address by the following formula:
+ *    shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
+ *
+ * 2) KASAN_SHADOW_START
+ *    This value is the shadow address of MODULE_VADDR. It is also the
+ * start of the kernel virtual space (TASK_SIZE).
+ *
+ * 3) KASAN_SHADOW_END
+ *    This value is the shadow address of 0x100000000. It is the end of
+ * the kernel address sanitizer's shadow area. It is also the start of
+ * the module area.
+ *
+ */
+
+#define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1<<29))
+
+#define KASAN_SHADOW_START      ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#define KASAN_SHADOW_END        (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 4966677..3ce1a9a 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -21,6 +21,7 @@
 #ifdef CONFIG_NEED_MACH_MEMORY_H
 #include <mach/memory.h>
 #endif
+#include <asm/kasan_def.h>
 
 /*
  * Allow for constants defined here to be used from assembly code
@@ -37,7 +38,11 @@
  * TASK_SIZE - the maximum size of a user space task.
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
  */
+#ifndef CONFIG_KASAN
 #define TASK_SIZE		(UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE		(KASAN_SHADOW_START)
+#endif
 #define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)
 
 /*
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 1752033..b4de9e4 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -183,7 +183,7 @@ ENDPROC(__und_invalid)
 
 	get_thread_info tsk
 	ldr	r0, [tsk, #TI_ADDR_LIMIT]
-	mov	r1, #TASK_SIZE
+	ldr	r1, =TASK_SIZE
 	str	r1, [tsk, #TI_ADDR_LIMIT]
 	str	r0, [sp, #SVC_ADDR_LIMIT]
 
@@ -437,7 +437,8 @@ ENDPROC(__fiq_abt)
 	@ if it was interrupted in a critical region.  Here we
 	@ perform a quick test inline since it should be false
 	@ 99.9999% of the time.  The rest is done out of line.
-	cmp	r4, #TASK_SIZE
+	ldr	r0, =TASK_SIZE
+	cmp	r4, r0
 	blhs	kuser_cmpxchg64_fixup
 #endif
 #endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 3c4f887..b7d0c6c 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -51,7 +51,8 @@ ret_fast_syscall:
  UNWIND(.cantunwind	)
 	disable_irq_notrace			@ disable interrupts
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr	r1, =TASK_SIZE
+	cmp	r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
 	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -116,7 +117,8 @@ ret_slow_syscall:
 	disable_irq_notrace			@ disable interrupts
 ENTRY(ret_to_user_from_irq)
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr     r1, =TASK_SIZE
+	cmp	r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]
 	tst	r1, #_TIF_WORK_MASK
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index bd6f451..da11f61 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -538,6 +538,9 @@ void __init mem_init(void)
 #ifdef CONFIG_MODULES
 			"    modules : 0x%08lx - 0x%08lx   (%4ld MB)\n"
 #endif
+#ifdef CONFIG_KASAN
+			"    kasan   : 0x%08lx - 0x%08lx   (%4ld MB)\n"
+#endif
 			"      .text : 0x%p" " - 0x%p" "   (%4td kB)\n"
 			"      .init : 0x%p" " - 0x%p" "   (%4td kB)\n"
 			"      .data : 0x%p" " - 0x%p" "   (%4td kB)\n"
@@ -558,6 +561,9 @@ void __init mem_init(void)
 #ifdef CONFIG_MODULES
 			MLM(MODULES_VADDR, MODULES_END),
 #endif
+#ifdef CONFIG_KASAN
+			MLM(KASAN_SHADOW_START, KASAN_SHADOW_END),
+#endif
 
 			MLK_ROUNDUP(_text, _etext),
 			MLK_ROUNDUP(__init_begin, __init_end),
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e46a6a4..f5aa1de 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1251,9 +1251,14 @@ static inline void prepare_page_table(void)
 	/*
 	 * Clear out all the mappings below the kernel image.
 	 */
-	for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+	for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
 		pmd_clear(pmd_off_k(addr));
 
+#ifdef CONFIG_KASAN
+	/* TASK_SIZE ~ MODULES_VADDR is KASan's shadow area -- skip over it */
+	addr = MODULES_VADDR;
+#endif
+
 #ifdef CONFIG_XIP_KERNEL
 	/* The XIP kernel is mapped in the module area -- skip over it */
 	addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
-- 
2.9.0

* [PATCH 6/7] Initialize the mapping of KASan shadow memory
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
                   ` (4 preceding siblings ...)
  2018-03-18 12:53 ` [PATCH 5/7] Define the virtual space of KASan's shadow region Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
  2018-03-18 12:53 ` [PATCH 7/7] Enable KASan for arm Abbott Liu
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

From: Andrey Ryabinin <a.ryabinin@samsung.com>

This patch initializes the KASan shadow region's page table and memory.
There are two stages to KASan initialization:
1. At the early boot stage the whole shadow region is mapped to just
   one physical page (kasan_zero_page). This is done by the function
   kasan_early_init, which is called by __mmap_switched
   (arch/arm/kernel/head-common.S).

2. After paging_init is called, we use kasan_zero_page as the zero
   shadow for memory that KASan doesn't need to track, and we allocate
   new shadow space for the memory that KASan does need to track. This
   is done by the function kasan_init, which is called by setup_arch.

3. Add support for ARM LPAE.   ---Abbott Liu <liuwenliang@huawei.com>
   If LPAE is enabled, the KASan shadow region's mapping table needs to
   be copied in the pgd_alloc function.

4. On 64-bit machines size_t is unsigned long, but on 32-bit machines
   it is unsigned int, so we need a type conversion in
   kasan_cache_create.
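
On point 4: the kernel's min() macro type-checks its arguments, so
comparing KMALLOC_MAX_SIZE (an unsigned long) with *size (size_t, i.e.
unsigned int on 32-bit ARM) triggers a "comparison of distinct pointer
types" warning; min_t forces a common type:

	*size = min(KMALLOC_MAX_SIZE, ...);                   /* warns on 32 bit */
	*size = min_t(unsigned long, KMALLOC_MAX_SIZE, ...);  /* builds everywhere */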

Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Co-Developed-by: Abbott Liu <liuwenliang@huawei.com>
Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 arch/arm/include/asm/kasan.h       |  23 +++
 arch/arm/include/asm/pgalloc.h     |   7 +-
 arch/arm/include/asm/thread_info.h |   4 +
 arch/arm/kernel/head-common.S      |   3 +
 arch/arm/kernel/setup.c            |   2 +
 arch/arm/mm/Makefile               |   3 +
 arch/arm/mm/kasan_init.c           | 290 +++++++++++++++++++++++++++++++++++++
 arch/arm/mm/pgd.c                  |  14 ++
 mm/kasan/kasan.c                   |   5 +-
 9 files changed, 347 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan.h
 create mode 100644 arch/arm/mm/kasan_init.c

diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
new file mode 100644
index 0000000..5561461
--- /dev/null
+++ b/arch/arm/include/asm/kasan.h
@@ -0,0 +1,23 @@
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifdef CONFIG_KASAN
+
+#include <asm/kasan_def.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from 'compiler's shadow offset' +
+ * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
+ */
+
+extern void kasan_init(void);
+
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 2d7344f..f170659 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
  */
 #define pmd_alloc_one(mm,addr)		({ BUG(); ((pmd_t *)2); })
 #define pmd_free(mm, pmd)		do { } while (0)
-#define pud_populate(mm,pmd,pte)	BUG()
-
+#ifndef CONFIG_KASAN
+#define pud_populate(mm, pmd, pte)	BUG()
+#else
+#define pud_populate(mm, pmd, pte)	do { } while (0)
+#endif
 #endif	/* CONFIG_ARM_LPAE */
 
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index e71cc35..bc681a0 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -16,7 +16,11 @@
 #include <asm/fpstate.h>
 #include <asm/page.h>
 
+#ifdef CONFIG_KASAN
+#define THREAD_SIZE_ORDER	2
+#else
 #define THREAD_SIZE_ORDER	1
+#endif
 #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
 #define THREAD_START_SP		(THREAD_SIZE - 8)
 
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index c79b829..20161e2 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -115,6 +115,9 @@ __mmap_switched:
 	str	r8, [r2]			@ Save atags pointer
 	cmp	r3, #0
 	strne	r10, [r3]			@ Save control register values
+#ifdef CONFIG_KASAN
+	bl	kasan_early_init
+#endif
 	mov	lr, #0
 	b	start_kernel
 ENDPROC(__mmap_switched)
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index fc40a2b..81c3e9df 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -62,6 +62,7 @@
 #include <asm/unwind.h>
 #include <asm/memblock.h>
 #include <asm/virt.h>
+#include <asm/kasan.h>
 
 #include "atags.h"
 
@@ -1118,6 +1119,7 @@ void __init setup_arch(char **cmdline_p)
 	early_ioremap_reset();
 
 	paging_init(mdesc);
+	kasan_init();
 	request_standard_resources(mdesc);
 
 	if (mdesc->restart)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 9dbb849..573203e 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -111,3 +111,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU)	+= cache-l2x0-pmu.o
 obj-$(CONFIG_CACHE_XSC3L2)	+= cache-xsc3l2.o
 obj-$(CONFIG_CACHE_TAUROS2)	+= cache-tauros2.o
 obj-$(CONFIG_CACHE_UNIPHIER)	+= cache-uniphier.o
+
+KASAN_SANITIZE_kasan_init.o    := n
+obj-$(CONFIG_KASAN)            += kasan_init.o
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
new file mode 100644
index 0000000..d316f37
--- /dev/null
+++ b/arch/arm/mm/kasan_init.c
@@ -0,0 +1,290 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/start_kernel.h>
+#include <asm/cputype.h>
+#include <asm/highmem.h>
+#include <asm/mach/map.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/procinfo.h>
+#include <asm/proc-fns.h>
+#include <asm/tlbflush.h>
+#include <asm/cp15.h>
+#include <linux/sched/task.h>
+
+#include "mm.h"
+
+static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(1ULL << 14);
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size, int node)
+{
+	return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+					BOOTMEM_ALLOC_ACCESSIBLE, node);
+}
+
+static void __init kasan_early_pmd_populate(unsigned long start,
+					unsigned long end, pud_t *pud)
+{
+	unsigned long addr;
+	unsigned long next;
+	pmd_t *pmd;
+
+	pmd = pmd_offset(pud, start);
+	for (addr = start; addr < end;) {
+		pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+		next = pmd_addr_end(addr, end);
+		addr = next;
+		flush_pmd_entry(pmd);
+		pmd++;
+	}
+}
+
+static void __init kasan_early_pud_populate(unsigned long start,
+				unsigned long end, pgd_t *pgd)
+{
+	unsigned long addr;
+	unsigned long next;
+	pud_t *pud;
+
+	pud = pud_offset(pgd, start);
+	for (addr = start; addr < end;) {
+		next = pud_addr_end(addr, end);
+		kasan_early_pmd_populate(addr, next, pud);
+		addr = next;
+		pud++;
+	}
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgdp)
+{
+	int i;
+	unsigned long start = KASAN_SHADOW_START;
+	unsigned long end = KASAN_SHADOW_END;
+	unsigned long addr;
+	unsigned long next;
+	pgd_t *pgd;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+			&kasan_zero_pte[i], pfn_pte(
+				virt_to_pfn(kasan_zero_page),
+				__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY
+					| L_PTE_XN)));
+
+	pgd = pgd_offset_k(start);
+	for (addr = start; addr < end;) {
+		next = pgd_addr_end(addr, end);
+		kasan_early_pud_populate(addr, next, pgd);
+		addr = next;
+		pgd++;
+	}
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+	struct proc_info_list *list;
+
+	/*
+	 * locate processor in the list of supported processor
+	 * types.  The linker builds this table for us from the
+	 * entries in arch/arm/mm/proc-*.S
+	 */
+	list = lookup_processor_type(read_cpuid_id());
+	if (list) {
+#ifdef MULTI_CPU
+		processor = *list->proc;
+#endif
+	}
+
+	BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
+	kasan_map_early_shadow(swapper_pg_dir);
+}
+
+static void __init clear_pgds(unsigned long start,
+			unsigned long end)
+{
+	for (; start && start < end; start += PMD_SIZE)
+		pmd_clear(pmd_off_k(start));
+}
+
+pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	if (pte_none(*pte)) {
+		pte_t entry;
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		entry = pfn_pte(virt_to_pfn(p),
+			__pgprot(pgprot_val(PAGE_KERNEL)));
+		set_pte_at(&init_mm, addr, pte, entry);
+	}
+	return pte;
+}
+
+pmd_t * __meminit kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
+{
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	if (pmd_none(*pmd)) {
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		pmd_populate_kernel(&init_mm, pmd, p);
+	}
+	return pmd;
+}
+
+pud_t * __meminit kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
+{
+	pud_t *pud = pud_offset(pgd, addr);
+
+	if (pud_none(*pud)) {
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		pr_err("populating pud addr %lx\n", addr);
+		pud_populate(&init_mm, pud, p);
+	}
+	return pud;
+}
+
+pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
+{
+	pgd_t *pgd = pgd_offset_k(addr);
+
+	if (pgd_none(*pgd)) {
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		pgd_populate(&init_mm, pgd, p);
+	}
+	return pgd;
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end,
+				int node)
+{
+	unsigned long addr = start;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pr_info("populating shadow for %lx, %lx\n", start, end);
+
+	for (; addr < end; addr += PAGE_SIZE) {
+		pgd = kasan_pgd_populate(addr, node);
+		if (!pgd)
+			return -ENOMEM;
+
+		pud = kasan_pud_populate(pgd, addr, node);
+		if (!pud)
+			return -ENOMEM;
+
+		pmd = kasan_pmd_populate(pud, addr, node);
+		if (!pmd)
+			return -ENOMEM;
+
+		pte = kasan_pte_populate(pmd, addr, node);
+		if (!pte)
+			return -ENOMEM;
+	}
+	return 0;
+}
+
+
+void __init kasan_init(void)
+{
+	struct memblock_region *reg;
+	u64 orig_ttbr0;
+	int i;
+
+	/*
+	 * We are going to perform proper setup of shadow memory.
+	 * First we should unmap the early shadow (the clear_pgds() call
+	 * below). However, instrumented code can't execute without shadow
+	 * memory, so tmp_pgd_table and tmp_pmd_table are used to keep the
+	 * early shadow mapped until the full shadow setup is finished.
+	 */
+	orig_ttbr0 = get_ttbr0();
+
+#ifdef CONFIG_ARM_LPAE
+	memcpy(tmp_pmd_table,
+		pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+		sizeof(tmp_pmd_table));
+	memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+	set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+		__pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+	set_ttbr0(__pa(tmp_pgd_table));
+#else
+	memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+	set_ttbr0((u64)__pa(tmp_pgd_table));
+#endif
+	flush_cache_all();
+	local_flush_bp_all();
+	local_flush_tlb_all();
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+				kasan_mem_to_shadow((void *)-1UL) + 1);
+
+	for_each_memblock(memory, reg) {
+		void *start = __va(reg->base);
+		void *end = __va(reg->base + reg->size);
+
+		if (reg->base + reg->size > arm_lowmem_limit)
+			end = __va(arm_lowmem_limit);
+		if (start >= end)
+			break;
+
+		create_mapping((unsigned long)kasan_mem_to_shadow(start),
+			(unsigned long)kasan_mem_to_shadow(end),
+			NUMA_NO_NODE);
+	}
+
+	/* 1. Modules' global variables live in MODULES_VADDR ~ MODULES_END,
+	 *    so we need a real shadow mapping for that range.
+	 * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE and the shadow
+	 *    of MODULES_VADDR ~ MODULES_END share a PMD_SIZE unit, so we
+	 *    can't use kasan_populate_zero_shadow here.
+	 */
+	create_mapping(
+		(unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+
+		(unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE +
+							PMD_SIZE)),
+		NUMA_NO_NODE);
+
+	/*
+	 * KAsan may reuse the contents of kasan_zero_pte directly, so we
+	 * should make sure that it maps the zero page read-only.
+	 */
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+			&kasan_zero_pte[i],
+			pfn_pte(virt_to_pfn(kasan_zero_page),
+				__pgprot(pgprot_val(PAGE_KERNEL)
+					| L_PTE_RDONLY)));
+	memset(kasan_zero_page, 0, PAGE_SIZE);
+	set_ttbr0(orig_ttbr0);
+	flush_cache_all();
+	local_flush_bp_all();
+	local_flush_tlb_all();
+	pr_info("Kernel address sanitizer initialized\n");
+	init_task.kasan_depth = 0;
+}
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index 61e281c..4644a21 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -64,6 +64,20 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 	new_pmd = pmd_alloc(mm, new_pud, 0);
 	if (!new_pmd)
 		goto no_pmd;
+#ifdef CONFIG_KASAN
+	/*
+	 * Copy the PMD table for the KASAN shadow mappings.
+	 */
+	init_pgd = pgd_offset_k(TASK_SIZE);
+	init_pud = pud_offset(init_pgd, TASK_SIZE);
+	init_pmd = pmd_offset(init_pud, TASK_SIZE);
+	new_pmd = pmd_offset(new_pud, TASK_SIZE);
+	memcpy(new_pmd, init_pmd,
+		(pmd_index(MODULES_VADDR)-pmd_index(TASK_SIZE))
+		* sizeof(pmd_t));
+	clean_dcache_area(new_pmd, PTRS_PER_PMD*sizeof(pmd_t));
+#endif
+
 #endif
 
 	if (!vectors_high()) {
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 104839a..af67b64 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -365,8 +365,9 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
-	*size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
-					optimal_redzone(cache->object_size)));
+	*size = min_t(unsigned long, KMALLOC_MAX_SIZE,
+			max(*size, cache->object_size +
+				optimal_redzone(cache->object_size)));
 
 	/*
 	 * If the metadata doesn't fit, don't enable KASAN at all.
-- 
2.9.0

* [PATCH 7/7] Enable KASan for arm
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
                   ` (5 preceding siblings ...)
  2018-03-18 12:53 ` [PATCH 6/7] Initialize the mapping of KASan shadow memory Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
  2018-03-19 20:43   ` kbuild test robot
  2018-03-18 19:13 ` [PATCH v2 0/7] " Florian Fainelli
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 15+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
  To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
	liuwenliang, akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

From: Andrey Ryabinin <a.ryabinin@samsung.com>

This patch enables the kernel address sanitizer for ARM.

Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 arch/arm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 7e3d535..ac2287b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -49,6 +49,7 @@ config ARM
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
+	select HAVE_ARCH_KASAN if MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
 	select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
-- 
2.9.0

* Re: [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16
  2018-03-18 12:53 ` [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16 Abbott Liu
@ 2018-03-18 13:21   ` Russell King - ARM Linux
  0 siblings, 0 replies; 15+ messages in thread
From: Russell King - ARM Linux @ 2018-03-18 13:21 UTC (permalink / raw)
  To: Abbott Liu
  Cc: aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli, akpm,
	afzal.mohd.ma, alexander.levin, glider, dvyukov,
	christoffer.dall, linux, mawilcox, pombredanne, ard.biesheuvel,
	vladimir.murzin, nicolas.pitre, tglx, thgarnie, dhowells,
	keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

On Sun, Mar 18, 2018 at 08:53:36PM +0800, Abbott Liu wrote:
> Because the instruction sets of some architectures (e.g. ARM) do not
> support unaligned accesses well, two 1-byte checks are safer than one
> 2-byte check. The impact on performance is small because 16-byte
> accesses are not too common.

This is unnecessary:

1. a load of a 16-bit quantity will work as desired on modern ARMs.
2. Networking already relies on unaligned loads to work as per x86
   (iow, an unaligned 32-bit load loads the 32-bits at the address
   even if it's not naturally aligned, and that also goes for 16-bit
   accesses.)

If these are rare (which you say above - "not too common") then it's
much better to leave the code as-is, because it will most likely be
faster on modern CPUs, and the impact for older generation CPUs is
likely to be low.
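
For reference, a simplified sketch of the two shapes under discussion
(names are mine, and the real helper in mm/kasan/kasan.c also checks a
third shadow byte when the 16-byte access is not 8-byte aligned):

        /* Current: one 16-bit shadow load, possibly unaligned. */
        static bool poisoned_16_u16(const u8 *shadow)
        {
                return *(const u16 *)shadow != 0;
        }

        /* Proposed: two 1-byte loads, always naturally aligned. */
        static bool poisoned_16_u8(const u8 *shadow)
        {
                return shadow[0] || shadow[1];
        }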

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/7] KASan for arm
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
                   ` (6 preceding siblings ...)
  2018-03-18 12:53 ` [PATCH 7/7] Enable KASan for arm Abbott Liu
@ 2018-03-18 19:13 ` Florian Fainelli
  2018-03-19 18:29 ` Florian Fainelli
  2018-03-25 23:58 ` Joel Stanley
  9 siblings, 0 replies; 15+ messages in thread
From: Florian Fainelli @ 2018-03-18 19:13 UTC (permalink / raw)
  To: Abbott Liu, linux, aryabinin, marc.zyngier, kstewart, gregkh,
	akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

Hi Abbott,

On 03/18/2018 05:53 AM, Abbott Liu wrote:
> Changelog:
> v2 - v1
> - Fixed some compile errors that occur when changing the kernel
>   compression mode to lzma/xz/lzo/lz4.
>   ---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
>        Russell King - ARM Linux <linux@armlinux.org.uk>
> - Fixed a compile error, reported by kbuild, caused by older arm
>   instruction sets (armv4t) not supporting movw/movt.
> - Changed the pte flag from _L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN to
>   pgprot_val(PAGE_KERNEL).
>   ---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
> - Moved the Enable KASan patch to the end of the series.
>   ---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
>      Russell King - ARM Linux <linux@armlinux.org.uk>
> - Moved the definitions of cp15 registers from 
>   arch/arm/include/asm/kvm_hyp.h to arch/arm/include/asm/cp15.h.
>   ---Asked by: Mark Rutland <mark.rutland@arm.com>
> - Merged the following commits into the commit
>   Define the virtual space of KASan's shadow region:
>   1) Define the virtual space of KASan's shadow region;
>   2) Avoid cleaning the KASan shadow area's mapping table;
>   3) Add KASan layout;
> - Merged the following commits into the commit
>   Initialize the mapping of KASan shadow memory:
>   1) Initialize the mapping of KASan shadow memory;
>   2) Add support for arm LPAE;
>   3) Don't need to map the shadow of KASan's shadow memory;
>      ---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
>   4) Change the mapping of kasan_zero_page into readonly.

Thanks for posting these patches! Just FWIW, you cannot quite add
someone's Tested-by for a patch series that was just resubmitted given
the differences with v1. I just gave it a spin on a Cortex-A5 (no LPAE)
and it looks like test_kasan.ko is passing, great job!
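
(For anyone wanting to reproduce such a run, a sketch of the relevant
config, which is an assumption on my part rather than Florian's exact
setup:

        CONFIG_KASAN=y
        CONFIG_TEST_KASAN=m

then load test_kasan.ko and check dmesg for the expected KASan reports.)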

> 
> Hi, all:
>    These patches add arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).
> 
>    1/8 of kernel addresses reserved for shadow memory. There was no
> big enough hole for this, so virtual addresses for shadow were
> stolen from user space.
> 
>    At early boot stage the whole shadow region populated with just
> one physical page (kasan_zero_page). Later, this page reused
> as readonly zero shadow for some memory that KASan currently
> don't track (vmalloc).
> 
>   After mapping the physical memory, pages for shadow memory are
> allocated and mapped.
>   
>   KASan's stack instrumentation significantly increases stack's
> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
> 
>   Functions like memset/memmove/memcpy do a lot of memory accesses.
> If bad pointer passed to one of these function it is important
> to catch this. Compiler's instrumentation cannot do this since
> these functions are written in assembly.
> 
>   KASan replaces memory functions with manually instrumented variants.
> Original functions declared as weak symbols so strong definitions
> in mm/kasan/kasan.c could replace them. Original functions have aliases
> with '__' prefix in name, so we could call non-instrumented variant
> if needed.
> 
>   Some files built without kasan instrumentation (e.g. mm/slub.c).
> Original mem* function replaced (via #define) with prefixed variants
> to disable memory access checks for such files.
> 
>   On the arm LPAE architecture, the mapping table of the KASan shadow
> memory (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual
> space is 0xb6e00000~0xbf000000) can't be filled in the
> do_translation_fault function, because KASan instrumentation may cause
> do_translation_fault to access the KASan shadow memory itself, and such
> an access from within do_translation_fault could recurse endlessly. So
> the mapping table of the KASan shadow memory needs to be copied in the
> pgd_alloc function.
> 
> 
> Most of the code comes from:
> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe
> 
> These patches were tested on vexpress-ca15 and vexpress-ca9.
> 
> 
> 
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Tested-by: Abbott Liu <liuwenliang@huawei.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> 
> Abbott Liu (3):
>   2 1-byte checks more safer for memory_is_poisoned_16
>   Add TTBR operator for kasan_init
>   Define the virtual space of KASan's shadow region
> 
> Andrey Ryabinin (4):
>   Disable instrumentation for some code
>   Replace memory function for kasan
>   Initialize the mapping of KASan shadow memory
>   Enable KASan for arm
> 
>  arch/arm/Kconfig                      |   1 +
>  arch/arm/boot/compressed/Makefile     |   1 +
>  arch/arm/boot/compressed/decompress.c |   2 +
>  arch/arm/boot/compressed/libfdt_env.h |   2 +
>  arch/arm/include/asm/cp15.h           | 104 ++++++++++++
>  arch/arm/include/asm/kasan.h          |  23 +++
>  arch/arm/include/asm/kasan_def.h      |  52 ++++++
>  arch/arm/include/asm/kvm_hyp.h        |  52 ------
>  arch/arm/include/asm/memory.h         |   5 +
>  arch/arm/include/asm/pgalloc.h        |   7 +-
>  arch/arm/include/asm/string.h         |  17 ++
>  arch/arm/include/asm/thread_info.h    |   4 +
>  arch/arm/kernel/entry-armv.S          |   5 +-
>  arch/arm/kernel/entry-common.S        |   6 +-
>  arch/arm/kernel/head-common.S         |   7 +-
>  arch/arm/kernel/setup.c               |   2 +
>  arch/arm/kernel/unwind.c              |   3 +-
>  arch/arm/kvm/hyp/cp15-sr.c            |  12 +-
>  arch/arm/kvm/hyp/switch.c             |   6 +-
>  arch/arm/lib/memcpy.S                 |   3 +
>  arch/arm/lib/memmove.S                |   5 +-
>  arch/arm/lib/memset.S                 |   3 +
>  arch/arm/mm/Makefile                  |   3 +
>  arch/arm/mm/init.c                    |   6 +
>  arch/arm/mm/kasan_init.c              | 290 ++++++++++++++++++++++++++++++++++
>  arch/arm/mm/mmu.c                     |   7 +-
>  arch/arm/mm/pgd.c                     |  14 ++
>  arch/arm/vdso/Makefile                |   2 +
>  mm/kasan/kasan.c                      |  24 ++-
>  29 files changed, 588 insertions(+), 80 deletions(-)
>  create mode 100644 arch/arm/include/asm/kasan.h
>  create mode 100644 arch/arm/include/asm/kasan_def.h
>  create mode 100644 arch/arm/mm/kasan_init.c
> 

-- 
Florian

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 3/7] Disable instrumentation for some code
  2018-03-18 12:53 ` [PATCH 3/7] Disable instrumentation for some code Abbott Liu
@ 2018-03-19  8:38   ` Marc Zyngier
  0 siblings, 0 replies; 15+ messages in thread
From: Marc Zyngier @ 2018-03-19  8:38 UTC (permalink / raw)
  To: Abbott Liu, linux, aryabinin, kstewart, gregkh, f.fainelli, akpm,
	afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

On 18/03/18 12:53, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
> 
> Disable instrumentation for arch/arm/boot/compressed/*
> and arch/arm/vdso/* because that code is not linked into
> the kernel image.
> 
> Disable the KASan check in the function unwind_pop_register
> because KASan failures do not matter when it reads the
> stack memory of a task.
> 
> Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> ---
>  arch/arm/boot/compressed/Makefile | 1 +
>  arch/arm/kernel/unwind.c          | 3 ++-
>  arch/arm/vdso/Makefile            | 2 ++
>  3 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
> index 45a6b9b..966103e 100644
> --- a/arch/arm/boot/compressed/Makefile
> +++ b/arch/arm/boot/compressed/Makefile
> @@ -24,6 +24,7 @@ OBJS		+= hyp-stub.o
>  endif
>  
>  GCOV_PROFILE		:= n
> +KASAN_SANITIZE		:= n
>  
>  #
>  # Architecture dependencies
> diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
> index 0bee233..2e55c7d 100644
> --- a/arch/arm/kernel/unwind.c
> +++ b/arch/arm/kernel/unwind.c
> @@ -249,7 +249,8 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
>  		if (*vsp >= (unsigned long *)ctrl->sp_high)
>  			return -URC_FAILURE;
>  
> -	ctrl->vrs[reg] = *(*vsp)++;
> +	ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
> +	(*vsp)++;
>  	return URC_OK;
>  }
>  
> diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
> index bb411821..87abbb7 100644
> --- a/arch/arm/vdso/Makefile
> +++ b/arch/arm/vdso/Makefile
> @@ -30,6 +30,8 @@ CFLAGS_vgettimeofday.o = -O2
>  # Disable gcov profiling for VDSO code
>  GCOV_PROFILE := n
>  
> +KASAN_SANITIZE := n
> +
>  # Force dependency
>  $(obj)/vdso.o : $(obj)/vdso.so
>  
> 

You need to extend this at least to arch/arm/kvm/hyp/Makefile, as the
KASAN shadow region won't be mapped in HYP. See commit a6cdf1c08cbfe for
more details (all the arm64 comments in this patch apply to 32bit as well).
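
A minimal sketch of the kind of change being asked for here (placement
in arch/arm/kvm/hyp/Makefile is inferred from the arm64 commit, not
taken from this series):

        # HYP code runs under its own page tables, where the KASAN
        # shadow region is not mapped, so it must not be instrumented.
        KASAN_SANITIZE := n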

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/7] KASan for arm
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
                   ` (7 preceding siblings ...)
  2018-03-18 19:13 ` [PATCH v2 0/7] " Florian Fainelli
@ 2018-03-19 18:29 ` Florian Fainelli
  2018-03-25 23:58 ` Joel Stanley
  9 siblings, 0 replies; 15+ messages in thread
From: Florian Fainelli @ 2018-03-19 18:29 UTC (permalink / raw)
  To: Abbott Liu, linux, aryabinin, marc.zyngier, kstewart, gregkh,
	akpm, afzal.mohd.ma, alexander.levin
  Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

On 03/18/2018 05:53 AM, Abbott Liu wrote:
> Changelog:
> v2 - v1
> - Fixed some compile errors that occur when changing the kernel
>   compression mode to lzma/xz/lzo/lz4.
>   ---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
>        Russell King - ARM Linux <linux@armlinux.org.uk>
> - Fixed a compile error, reported by kbuild, caused by older arm
>   instruction sets (armv4t) not supporting movw/movt.
> - Changed the pte flag from _L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN to
>   pgprot_val(PAGE_KERNEL).
>   ---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
> - Moved the Enable KASan patch to the end of the series.
>   ---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
>      Russell King - ARM Linux <linux@armlinux.org.uk>
> - Moved the definitions of cp15 registers from 
>   arch/arm/include/asm/kvm_hyp.h to arch/arm/include/asm/cp15.h.
>   ---Asked by: Mark Rutland <mark.rutland@arm.com>
> - Merged the following commits into the commit
>   Define the virtual space of KASan's shadow region:
>   1) Define the virtual space of KASan's shadow region;
>   2) Avoid cleaning the KASan shadow area's mapping table;
>   3) Add KASan layout;
> - Merged the following commits into the commit
>   Initialize the mapping of KASan shadow memory:
>   1) Initialize the mapping of KASan shadow memory;
>   2) Add support for arm LPAE;
>   3) Don't need to map the shadow of KASan's shadow memory;
>      ---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
>   4) Change the mapping of kasan_zero_page into readonly.
> 
> Hi, all:
>    These patches add arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).
> 
>    1/8 of kernel addresses reserved for shadow memory. There was no
> big enough hole for this, so virtual addresses for shadow were
> stolen from user space.
> 
>    At early boot stage the whole shadow region populated with just
> one physical page (kasan_zero_page). Later, this page reused
> as readonly zero shadow for some memory that KASan currently
> don't track (vmalloc).
> 
>   After mapping the physical memory, pages for shadow memory are
> allocated and mapped.
>   
>   KASan's stack instrumentation significantly increases stack's
> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
> 
>   Functions like memset/memmove/memcpy do a lot of memory accesses.
> If bad pointer passed to one of these function it is important
> to catch this. Compiler's instrumentation cannot do this since
> these functions are written in assembly.
> 
>   KASan replaces memory functions with manually instrumented variants.
> Original functions declared as weak symbols so strong definitions
> in mm/kasan/kasan.c could replace them. Original functions have aliases
> with '__' prefix in name, so we could call non-instrumented variant
> if needed.
> 
>   Some files built without kasan instrumentation (e.g. mm/slub.c).
> Original mem* function replaced (via #define) with prefixed variants
> to disable memory access checks for such files.
> 
>   On the arm LPAE architecture, the mapping table of the KASan shadow
> memory (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual
> space is 0xb6e00000~0xbf000000) can't be filled in the
> do_translation_fault function, because KASan instrumentation may cause
> do_translation_fault to access the KASan shadow memory itself, and such
> an access from within do_translation_fault could recurse endlessly. So
> the mapping table of the KASan shadow memory needs to be copied in the
> pgd_alloc function.
> 
> 
> Most of the code comes from:
> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe
> 
> These patches were tested on vexpress-ca15 and vexpress-ca9.

BTW, it looks like you have some section mismatches:

WARNING: vmlinux.o(.meminit.text+0x40): Section mismatch in reference
from the function kasan_pte_populate() to the function
.init.text:kasan_alloc_block.constprop.5()
The function __meminit kasan_pte_populate() references
a function __init kasan_alloc_block.constprop.5().
If kasan_alloc_block.constprop.5 is only used by kasan_pte_populate then
annotate kasan_alloc_block.constprop.5 with a matching annotation.

WARNING: vmlinux.o(.meminit.text+0x144): Section mismatch in reference
from the function kasan_pmd_populate() to the function
.init.text:kasan_alloc_block.constprop.5()
The function __meminit kasan_pmd_populate() references
a function __init kasan_alloc_block.constprop.5().
If kasan_alloc_block.constprop.5 is only used by kasan_pmd_populate then
annotate kasan_alloc_block.constprop.5 with a matching annotation.

WARNING: vmlinux.o(.meminit.text+0x1a4): Section mismatch in reference
from the function kasan_pud_populate() to the function
.init.text:kasan_alloc_block.constprop.5()
The function __meminit kasan_pud_populate() references
a function __init kasan_alloc_block.constprop.5().
If kasan_alloc_block.constprop.5 is only used by kasan_pud_populate then
annotate kasan_alloc_block.constprop.5 with a matching annotation.
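
A hedged sketch of the annotation fix modpost is suggesting (signature
taken from the warnings; whether to promote the allocator to __meminit
or instead demote its callers to __init is a judgment call for the
series):

        /* Give the allocator a section matching its __meminit callers. */
        static void * __meminit kasan_alloc_block(size_t size, int node);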


> 
> 
> 
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Tested-by: Abbott Liu <liuwenliang@huawei.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> 
> Abbott Liu (3):
>   2 1-byte checks more safer for memory_is_poisoned_16
>   Add TTBR operator for kasan_init
>   Define the virtual space of KASan's shadow region
> 
> Andrey Ryabinin (4):
>   Disable instrumentation for some code
>   Replace memory function for kasan
>   Initialize the mapping of KASan shadow memory
>   Enable KASan for arm
> 
>  arch/arm/Kconfig                      |   1 +
>  arch/arm/boot/compressed/Makefile     |   1 +
>  arch/arm/boot/compressed/decompress.c |   2 +
>  arch/arm/boot/compressed/libfdt_env.h |   2 +
>  arch/arm/include/asm/cp15.h           | 104 ++++++++++++
>  arch/arm/include/asm/kasan.h          |  23 +++
>  arch/arm/include/asm/kasan_def.h      |  52 ++++++
>  arch/arm/include/asm/kvm_hyp.h        |  52 ------
>  arch/arm/include/asm/memory.h         |   5 +
>  arch/arm/include/asm/pgalloc.h        |   7 +-
>  arch/arm/include/asm/string.h         |  17 ++
>  arch/arm/include/asm/thread_info.h    |   4 +
>  arch/arm/kernel/entry-armv.S          |   5 +-
>  arch/arm/kernel/entry-common.S        |   6 +-
>  arch/arm/kernel/head-common.S         |   7 +-
>  arch/arm/kernel/setup.c               |   2 +
>  arch/arm/kernel/unwind.c              |   3 +-
>  arch/arm/kvm/hyp/cp15-sr.c            |  12 +-
>  arch/arm/kvm/hyp/switch.c             |   6 +-
>  arch/arm/lib/memcpy.S                 |   3 +
>  arch/arm/lib/memmove.S                |   5 +-
>  arch/arm/lib/memset.S                 |   3 +
>  arch/arm/mm/Makefile                  |   3 +
>  arch/arm/mm/init.c                    |   6 +
>  arch/arm/mm/kasan_init.c              | 290 ++++++++++++++++++++++++++++++++++
>  arch/arm/mm/mmu.c                     |   7 +-
>  arch/arm/mm/pgd.c                     |  14 ++
>  arch/arm/vdso/Makefile                |   2 +
>  mm/kasan/kasan.c                      |  24 ++-
>  29 files changed, 588 insertions(+), 80 deletions(-)
>  create mode 100644 arch/arm/include/asm/kasan.h
>  create mode 100644 arch/arm/include/asm/kasan_def.h
>  create mode 100644 arch/arm/mm/kasan_init.c
> 


-- 
Florian

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 7/7] Enable KASan for arm
  2018-03-18 12:53 ` [PATCH 7/7] Enable KASan for arm Abbott Liu
@ 2018-03-19 20:43   ` kbuild test robot
  0 siblings, 0 replies; 15+ messages in thread
From: kbuild test robot @ 2018-03-19 20:43 UTC (permalink / raw)
  To: Abbott Liu
  Cc: kbuild-all, linux, aryabinin, marc.zyngier, kstewart, gregkh,
	f.fainelli, liuwenliang, akpm, afzal.mohd.ma, alexander.levin,
	glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

[-- Attachment #1: Type: text/plain, Size: 4310 bytes --]

Hi Andrey,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc6]
[cannot apply to next-20180319]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Abbott-Liu/KASan-for-arm/20180319-120138
config: arm-allmodconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm 

All errors (new ones prefixed by >>):

   arch/arm/kernel/entry-common.S: Assembler messages:
>> arch/arm/kernel/entry-common.S:85: Error: invalid constant (ffffffffb6e00000) after fixup

vim +85 arch/arm/kernel/entry-common.S

^1da177e4 Linus Torvalds  2005-04-16   68  
3302caddf Russell King    2015-08-20   69  	/* Ok, we need to do extra processing, enter the slow path. */
^1da177e4 Linus Torvalds  2005-04-16   70  fast_work_pending:
^1da177e4 Linus Torvalds  2005-04-16   71  	str	r0, [sp, #S_R0+S_OFF]!		@ returned r0
3302caddf Russell King    2015-08-20   72  	/* fall through to work_pending */
3302caddf Russell King    2015-08-20   73  #else
3302caddf Russell King    2015-08-20   74  /*
3302caddf Russell King    2015-08-20   75   * The "replacement" ret_fast_syscall for when tracing or context tracking
3302caddf Russell King    2015-08-20   76   * is enabled.  As we will need to call out to some C functions, we save
3302caddf Russell King    2015-08-20   77   * r0 first to avoid needing to save registers around each C function call.
3302caddf Russell King    2015-08-20   78   */
3302caddf Russell King    2015-08-20   79  ret_fast_syscall:
3302caddf Russell King    2015-08-20   80   UNWIND(.fnstart	)
3302caddf Russell King    2015-08-20   81   UNWIND(.cantunwind	)
3302caddf Russell King    2015-08-20   82  	str	r0, [sp, #S_R0 + S_OFF]!	@ save returned r0
3302caddf Russell King    2015-08-20   83  	disable_irq_notrace			@ disable interrupts
e33f8d326 Thomas Garnier  2017-09-07   84  	ldr	r2, [tsk, #TI_ADDR_LIMIT]
e33f8d326 Thomas Garnier  2017-09-07  @85  	cmp	r2, #TASK_SIZE
e33f8d326 Thomas Garnier  2017-09-07   86  	blne	addr_limit_check_failed
3302caddf Russell King    2015-08-20   87  	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
2404269bc Thomas Garnier  2017-09-07   88  	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
3302caddf Russell King    2015-08-20   89  	beq	no_work_pending
3302caddf Russell King    2015-08-20   90   UNWIND(.fnend		)
3302caddf Russell King    2015-08-20   91  ENDPROC(ret_fast_syscall)
3302caddf Russell King    2015-08-20   92  
3302caddf Russell King    2015-08-20   93  	/* Slower path - fall through to work_pending */
3302caddf Russell King    2015-08-20   94  #endif
3302caddf Russell King    2015-08-20   95  
3302caddf Russell King    2015-08-20   96  	tst	r1, #_TIF_SYSCALL_WORK
3302caddf Russell King    2015-08-20   97  	bne	__sys_trace_return_nosave
3302caddf Russell King    2015-08-20   98  slow_work_pending:
^1da177e4 Linus Torvalds  2005-04-16   99  	mov	r0, sp				@ 'regs'
^1da177e4 Linus Torvalds  2005-04-16  100  	mov	r2, why				@ 'syscall'
0a267fa6a Al Viro         2012-07-19  101  	bl	do_work_pending
662852178 Al Viro         2012-07-19  102  	cmp	r0, #0
81783786d Al Viro         2012-07-19  103  	beq	no_work_pending
662852178 Al Viro         2012-07-19  104  	movlt	scno, #(__NR_restart_syscall - __NR_SYSCALL_BASE)
81783786d Al Viro         2012-07-19  105  	ldmia	sp, {r0 - r6}			@ have to reload r0 - r6
81783786d Al Viro         2012-07-19  106  	b	local_restart			@ ... and off we go
e83dd3770 Drew Richardson 2015-08-06  107  ENDPROC(ret_fast_syscall)
81783786d Al Viro         2012-07-19  108  

:::::: The code at line 85 was first introduced by commit
:::::: e33f8d32677fa4f4f8996ef46748f86aac81ccff arm/syscalls: Optimize address limit check

:::::: TO: Thomas Garnier <thgarnie@google.com>
:::::: CC: Thomas Gleixner <tglx@linutronix.de>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 65135 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/7] KASan for arm
  2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
                   ` (8 preceding siblings ...)
  2018-03-19 18:29 ` Florian Fainelli
@ 2018-03-25 23:58 ` Joel Stanley
  9 siblings, 0 replies; 15+ messages in thread
From: Joel Stanley @ 2018-03-25 23:58 UTC (permalink / raw)
  To: Abbott Liu
  Cc: Russell King, aryabinin, Marc Zyngier, kstewart, Greg KH,
	Florian Fainelli, Andrew Morton, Afzal Mohammed, alexander.levin,
	glider, dvyukov, Christoffer Dall, linux, mawilcox,
	Philippe Ombredanne, ard.biesheuvel, vladimir.murzin,
	nicolas.pitre, Thomas Gleixner, thgarnie, dhowells, Kees Cook,
	Arnd Bergmann, Geert Uytterhoeven, Jon Medhurst (Tixy),
	Mark Rutland, james.morse, zhichao.huang, jinb.park7, labbott,
	philip, grygorii.strashko, catalin.marinas, opendmb,
	kirill.shutemov, Linux ARM, Linux Kernel Mailing List, kasan-dev,
	kvmarm, linux-mm

On 18 March 2018 at 23:23, Abbott Liu <liuwenliang@huawei.com> wrote:

>    These patches add arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).

Thanks for implementing this. I gave the series a spin on an ASPEED
ast2500 (ARMv5) system with aspeed_g5_defconfig.

It found a bug in the NCSI code (https://github.com/openbmc/linux/issues/146).

Tested-by: Joel Stanley <joel@jms.id.au>

Cheers,

Joel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 7/7] Enable KASan for arm
@ 2018-03-24 13:55 Liuwenliang (Abbott Liu)
  0 siblings, 0 replies; 15+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2018-03-24 13:55 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, linux, aryabinin, marc.zyngier, kstewart, gregkh,
	f.fainelli, akpm, afzal.mohd.ma, alexander.levin, glider,
	dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
	ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
	dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
	zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
	catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
	linux-kernel, kasan-dev, kvmarm, linux-mm

On 03/20/2018 2:30 AM, kbuild test robot wrote:
>All errors (new ones prefixed by >>):
>
>   arch/arm/kernel/entry-common.S: Assembler messages:
>>> arch/arm/kernel/entry-common.S:85: Error: invalid constant (ffffffffb6e00000) after fixup

Sorry about that!
We need the following change to fix the above error: with KASan enabled,
TASK_SIZE (0xb6e00000, per the error above) can no longer be encoded as
an ARM instruction immediate, so it must be loaded from memory instead:
> git diff
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index b7d0c6c..9b728c5 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -82,7 +82,8 @@ ret_fast_syscall:
        str     r0, [sp, #S_R0 + S_OFF]!        @ save returned r0
        disable_irq_notrace                     @ disable interrupts
        ldr     r2, [tsk, #TI_ADDR_LIMIT]
-	cmp     r2, #TASK_SIZE
+	ldr     r1, =TASK_SIZE
+	cmp     r2, r1
        blne    addr_limit_check_failed
        ldr     r1, [tsk, #TI_FLAGS]            @ re-check for syscall tracing
        tst     r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
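
For reference, a hedged illustration of why the original cmp fails
(example operands are mine, not from the thread):

        @ An ARM data-processing immediate is an 8-bit value rotated
        @ right by an even amount:
        cmp     r2, #0xff000000         @ encodable: 0xff rotated right by 8
        cmp     r2, #0xb6e00000         @ rejected: significant bits span
                                        @ more than 8 bit positions
        ldr     r1, =0xb6e00000         @ assembler emits a literal-pool load
        cmp     r2, r1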

^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2018-03-25 23:58 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
2018-03-18 12:53 ` [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16 Abbott Liu
2018-03-18 13:21   ` Russell King - ARM Linux
2018-03-18 12:53 ` [PATCH 2/7] Add TTBR operator for kasan_init Abbott Liu
2018-03-18 12:53 ` [PATCH 3/7] Disable instrumentation for some code Abbott Liu
2018-03-19  8:38   ` Marc Zyngier
2018-03-18 12:53 ` [PATCH 4/7] Replace memory function for kasan Abbott Liu
2018-03-18 12:53 ` [PATCH 5/7] Define the virtual space of KASan's shadow region Abbott Liu
2018-03-18 12:53 ` [PATCH 6/7] Initialize the mapping of KASan shadow memory Abbott Liu
2018-03-18 12:53 ` [PATCH 7/7] Enable KASan for arm Abbott Liu
2018-03-19 20:43   ` kbuild test robot
2018-03-18 19:13 ` [PATCH v2 0/7] " Florian Fainelli
2018-03-19 18:29 ` Florian Fainelli
2018-03-25 23:58 ` Joel Stanley
2018-03-24 13:55 [PATCH 7/7] Enable " Liuwenliang (Abbott Liu)
