* [PATCH 00/11] KASan for arm
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Hi all,
These patches add the arch-specific code for the kernel address sanitizer
(see Documentation/kasan.txt).
1/8 of the kernel address space is reserved for shadow memory. There was
no hole big enough for this, so the virtual addresses for the shadow were
stolen from user space.
At the early boot stage the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused as
read-only zero shadow for memory that KASan does not currently
track (vmalloc).
After the physical memory has been mapped, pages for the shadow memory
are allocated and mapped.
KASan's stack instrumentation significantly increases stack consumption,
so CONFIG_KASAN doubles THREAD_SIZE.
Functions like memset/memmove/memcpy perform a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch that. Compiler instrumentation cannot do so, because these
functions are written in assembly.
KASan therefore replaces the memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them. The
original functions also have aliases with a '__' prefix in the name, so
the non-instrumented variants can be called when needed.
Some files are built without KASan instrumentation (e.g. mm/slub.c).
For such files the original mem* functions are replaced (via #define)
with the prefixed variants to disable the memory access checks.
On arm with LPAE, the page tables for the KASan shadow memory (if
PAGE_OFFSET is 0xc0000000, the shadow memory occupies the virtual range
0xb6e00000~0xbf000000) cannot be filled in by do_translation_fault,
because the KASan instrumentation may itself cause do_translation_fault
to access the shadow memory, and that access can fault again, leading
to endless recursion. The shadow memory's page table entries therefore
need to be copied in pgd_alloc.
Most of the code comes from:
https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
These patches have been tested on vexpress-ca15 and vexpress-ca9.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Tested-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Abbott Liu (6):
Define the virtual space of KASan's shadow region
change memory_is_poisoned_16 for aligned error
Add support for arm LPAE
Don't need to map the shadow of KASan's shadow memory
Change mapping of kasan_zero_page into readonly
Add KASan layout
Andrey Ryabinin (5):
Initialize the mapping of KASan shadow memory
replace memory function
arm: Kconfig: enable KASan
Disable kasan's instrumentation
Avoid cleaning the KASan shadow area's mapping table
arch/arm/Kconfig | 1 +
arch/arm/boot/compressed/Makefile | 1 +
arch/arm/include/asm/kasan.h | 20 +++
arch/arm/include/asm/kasan_def.h | 51 +++++++
arch/arm/include/asm/memory.h | 5 +
arch/arm/include/asm/pgalloc.h | 5 +-
arch/arm/include/asm/pgtable.h | 1 +
arch/arm/include/asm/proc-fns.h | 33 +++++
arch/arm/include/asm/string.h | 18 ++-
arch/arm/include/asm/thread_info.h | 4 +
arch/arm/kernel/entry-armv.S | 7 +-
arch/arm/kernel/head-common.S | 4 +
arch/arm/kernel/setup.c | 2 +
arch/arm/kernel/unwind.c | 3 +-
arch/arm/lib/memcpy.S | 3 +
arch/arm/lib/memmove.S | 5 +-
arch/arm/lib/memset.S | 3 +
arch/arm/mm/Makefile | 5 +
arch/arm/mm/init.c | 6 +
arch/arm/mm/kasan_init.c | 265 +++++++++++++++++++++++++++++++++++++
arch/arm/mm/mmu.c | 7 +-
arch/arm/mm/pgd.c | 12 ++
arch/arm/vdso/Makefile | 2 +
mm/kasan/kasan.c | 22 ++-
24 files changed, 478 insertions(+), 7 deletions(-)
create mode 100644 arch/arm/include/asm/kasan.h
create mode 100644 arch/arm/include/asm/kasan_def.h
create mode 100644 arch/arm/mm/kasan_init.c
--
2.9.0
^ permalink raw reply [flat|nested] 253+ messages in thread
* [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 8:22 ` Abbott Liu
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
From: Andrey Ryabinin <a.ryabinin@samsung.com>
This patch initializes the KASan shadow region's page tables and memory.
KASan initialization happens in two stages:
1. At the early boot stage the whole shadow region is mapped to just
one physical page (kasan_zero_page). This is done by kasan_early_init,
which is called from __mmap_switched (arch/arm/kernel/head-common.S).
2. After paging_init has run, kasan_zero_page is used as the zero
shadow for memory that KASan does not need to track, and new shadow
space is allocated for the memory that KASan does track. This is done
by kasan_init, which is called from setup_arch.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
arch/arm/include/asm/kasan.h | 20 +++
arch/arm/include/asm/pgalloc.h | 5 +-
arch/arm/include/asm/pgtable.h | 1 +
arch/arm/include/asm/proc-fns.h | 33 +++++
arch/arm/include/asm/thread_info.h | 4 +
arch/arm/kernel/head-common.S | 4 +
arch/arm/kernel/setup.c | 2 +
arch/arm/mm/Makefile | 5 +
arch/arm/mm/kasan_init.c | 257 +++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.c | 2 +-
10 files changed, 331 insertions(+), 2 deletions(-)
create mode 100644 arch/arm/include/asm/kasan.h
create mode 100644 arch/arm/mm/kasan_init.c
diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
new file mode 100644
index 0000000..90ee60c
--- /dev/null
+++ b/arch/arm/include/asm/kasan.h
@@ -0,0 +1,20 @@
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifdef CONFIG_KASAN
+
+#include <asm/kasan_def.h>
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from 'compiler's shadow offset' +
+ * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
+ */
+
+extern void kasan_init(void);
+
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index b2902a5..10cee6a 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
*/
#define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
#define pmd_free(mm, pmd) do { } while (0)
+#ifndef CONFIG_KASAN
#define pud_populate(mm,pmd,pte) BUG()
-
+#else
+#define pud_populate(mm,pmd,pte) do { } while (0)
+#endif
#endif /* CONFIG_ARM_LPAE */
extern pgd_t *pgd_alloc(struct mm_struct *mm);
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 1c46238..fdf343f 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -97,6 +97,7 @@ extern pgprot_t pgprot_s2_device;
#define PAGE_READONLY _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY | L_PTE_XN)
#define PAGE_READONLY_EXEC _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY)
#define PAGE_KERNEL _MOD_PROT(pgprot_kernel, L_PTE_XN)
+#define PAGE_KERNEL_RO _MOD_PROT(pgprot_kernel, L_PTE_XN | L_PTE_RDONLY)
#define PAGE_KERNEL_EXEC pgprot_kernel
#define PAGE_HYP _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_XN)
#define PAGE_HYP_EXEC _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index f2e1af4..6e26714 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -131,6 +131,15 @@ extern void cpu_resume(void);
pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
(pgd_t *)phys_to_virt(pg); \
})
+
+#define cpu_set_ttbr0(val) \
+ do { \
+ u64 ttbr = val; \
+ __asm__("mcrr p15, 0, %Q0, %R0, c2" \
+ : : "r" (ttbr)); \
+ } while (0)
+
+
#else
#define cpu_get_pgd() \
({ \
@@ -140,6 +149,30 @@ extern void cpu_resume(void);
pg &= ~0x3fff; \
(pgd_t *)phys_to_virt(pg); \
})
+
+#define cpu_set_ttbr(nr, val) \
+ do { \
+ u64 ttbr = val; \
+ __asm__("mcr p15, 0, %0, c2, c0, 0" \
+ : : "r" (ttbr)); \
+ } while (0)
+
+#define cpu_get_ttbr(nr) \
+ ({ \
+ unsigned long ttbr; \
+ __asm__("mrc p15, 0, %0, c2, c0, 0" \
+ : "=r" (ttbr)); \
+ ttbr; \
+ })
+
+#define cpu_set_ttbr0(val) \
+ do { \
+ u64 ttbr = val; \
+ __asm__("mcr p15, 0, %0, c2, c0, 0" \
+ : : "r" (ttbr)); \
+ } while (0)
+
+
#endif
#else /*!CONFIG_MMU */
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 1d468b5..52c4858 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -16,7 +16,11 @@
#include <asm/fpstate.h>
#include <asm/page.h>
+#ifdef CONFIG_KASAN
+#define THREAD_SIZE_ORDER 2
+#else
#define THREAD_SIZE_ORDER 1
+#endif
#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
#define THREAD_START_SP (THREAD_SIZE - 8)
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 8733012..c17f4a2 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -101,7 +101,11 @@ __mmap_switched:
str r2, [r6] @ Save atags pointer
cmp r7, #0
strne r0, [r7] @ Save control register values
+#ifdef CONFIG_KASAN
+ b kasan_early_init
+#else
b start_kernel
+#endif
ENDPROC(__mmap_switched)
.align 2
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 8e9a3e4..985d9a3 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -62,6 +62,7 @@
#include <asm/unwind.h>
#include <asm/memblock.h>
#include <asm/virt.h>
+#include <asm/kasan.h>
#include "atags.h"
@@ -1108,6 +1109,7 @@ void __init setup_arch(char **cmdline_p)
early_ioremap_reset();
paging_init(mdesc);
+ kasan_init();
request_standard_resources(mdesc);
if (mdesc->restart)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 950d19b..498c316 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -106,4 +106,9 @@ obj-$(CONFIG_CACHE_L2X0) += cache-l2x0.o l2c-l2x0-resume.o
obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
+
+KASAN_SANITIZE_kasan_init.o := n
+obj-$(CONFIG_KASAN) += kasan_init.o
+
+
obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
new file mode 100644
index 0000000..2bf0782
--- /dev/null
+++ b/arch/arm/mm/kasan_init.c
@@ -0,0 +1,257 @@
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/start_kernel.h>
+
+#include <asm/cputype.h>
+#include <asm/highmem.h>
+#include <asm/mach/map.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/procinfo.h>
+#include <asm/proc-fns.h>
+#include <asm/tlbflush.h>
+#include <asm/cp15.h>
+#include <linux/sched/task.h>
+
+#include "mm.h"
+
+static pgd_t tmp_page_table[PTRS_PER_PGD] __initdata __aligned(1ULL << 14);
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size, int node)
+{
+ return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+ BOOTMEM_ALLOC_ACCESSIBLE, node);
+}
+
+static void __init kasan_early_pmd_populate(unsigned long start, unsigned long end, pud_t *pud)
+{
+ unsigned long addr;
+ unsigned long next;
+ pmd_t *pmd;
+
+ pmd = pmd_offset(pud, start);
+ for (addr = start; addr < end;) {
+ pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+ next = pmd_addr_end(addr, end);
+ addr = next;
+ flush_pmd_entry(pmd);
+ pmd++;
+ }
+}
+
+static void __init kasan_early_pud_populate(unsigned long start, unsigned long end, pgd_t *pgd)
+{
+ unsigned long addr;
+ unsigned long next;
+ pud_t *pud;
+
+ pud = pud_offset(pgd, start);
+ for (addr = start; addr < end;) {
+ next = pud_addr_end(addr, end);
+ kasan_early_pmd_populate(addr, next, pud);
+ addr = next;
+ pud++;
+ }
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgdp)
+{
+ int i;
+ unsigned long start = KASAN_SHADOW_START;
+ unsigned long end = KASAN_SHADOW_END;
+ unsigned long addr;
+ unsigned long next;
+ pgd_t *pgd;
+
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_zero_pte[i], pfn_pte(
+ virt_to_pfn(kasan_zero_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN)));
+
+ pgd = pgd_offset_k(start);
+ for (addr = start; addr < end;) {
+ next = pgd_addr_end(addr, end);
+ kasan_early_pud_populate(addr, next, pgd);
+ addr = next;
+ pgd++;
+ }
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+ struct proc_info_list *list;
+
+ /*
+ * locate processor in the list of supported processor
+ * types. The linker builds this table for us from the
+ * entries in arch/arm/mm/proc-*.S
+ */
+ list = lookup_processor_type(read_cpuid_id());
+ if (list) {
+#ifdef MULTI_CPU
+ processor = *list->proc;
+#endif
+ }
+
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END - (1UL << 29));
+
+
+ kasan_map_early_shadow(swapper_pg_dir);
+ start_kernel();
+}
+
+static void __init clear_pgds(unsigned long start,
+ unsigned long end)
+{
+ for (; start && start < end; start += PMD_SIZE)
+ pmd_clear(pmd_off_k(start));
+}
+
+pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+{
+ pte_t *pte = pte_offset_kernel(pmd, addr);
+ if (pte_none(*pte)) {
+ pte_t entry;
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ entry = pfn_pte(virt_to_pfn(p), __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
+ set_pte_at(&init_mm, addr, pte, entry);
+ }
+ return pte;
+}
+
+pmd_t * __meminit kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
+{
+ pmd_t *pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ pmd_populate_kernel(&init_mm, pmd, p);
+ }
+ return pmd;
+}
+
+pud_t * __meminit kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
+{
+ pud_t *pud = pud_offset(pgd, addr);
+ if (pud_none(*pud)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ pr_err("populating pud addr %lx\n", addr);
+ pud_populate(&init_mm, pud, p);
+ }
+ return pud;
+}
+
+pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
+{
+ pgd_t *pgd = pgd_offset_k(addr);
+ if (pgd_none(*pgd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ pgd_populate(&init_mm, pgd, p);
+ }
+ return pgd;
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end, int node)
+{
+ unsigned long addr = start;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+ pr_info("populating shadow for %lx, %lx\n", start, end);
+ for (; addr < end; addr += PAGE_SIZE) {
+ pgd = kasan_pgd_populate(addr, node);
+ if (!pgd)
+ return -ENOMEM;
+
+ pud = kasan_pud_populate(pgd, addr, node);
+ if (!pud)
+ return -ENOMEM;
+
+ pmd = kasan_pmd_populate(pud, addr, node);
+ if (!pmd)
+ return -ENOMEM;
+
+ pte = kasan_pte_populate(pmd, addr, node);
+ if (!pte)
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+
+void __init kasan_init(void)
+{
+ struct memblock_region *reg;
+ u64 orig_ttbr0;
+
+ orig_ttbr0 = cpu_get_ttbr(0);
+
+#ifdef CONFIG_ARM_LPAE
+ memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
+ memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
+ set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+ cpu_set_ttbr0(__pa(tmp_page_table));
+#else
+ memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
+ cpu_set_ttbr0(__pa(tmp_page_table));
+#endif
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+
+ clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ kasan_populate_zero_shadow(
+ kasan_mem_to_shadow((void *)KASAN_SHADOW_START),
+ kasan_mem_to_shadow((void *)KASAN_SHADOW_END));
+
+ kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+ kasan_mem_to_shadow((void *)-1UL) + 1);
+
+ for_each_memblock(memory, reg) {
+ void *start = __va(reg->base);
+ void *end = __va(reg->base + reg->size);
+
+ if (reg->base + reg->size > arm_lowmem_limit)
+ end = __va(arm_lowmem_limit);
+ if (start >= end)
+ break;
+
+ create_mapping((unsigned long)kasan_mem_to_shadow(start),
+ (unsigned long)kasan_mem_to_shadow(end),
+ NUMA_NO_NODE);
+ }
+
+	/* 1. Module global variables live in MODULES_VADDR ~ MODULES_END,
+	 *    so their shadow must be mapped.
+	 * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE and the shadow
+	 *    of MODULES_VADDR ~ MODULES_END fall in the same PMD_SIZE region,
+	 *    so kasan_populate_zero_shadow cannot be used here. */
+ create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+ (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
+ NUMA_NO_NODE);
+ cpu_set_ttbr0(orig_ttbr0);
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+ memset(kasan_zero_page, 0, PAGE_SIZE);
+ pr_info("Kernel address sanitizer initialized\n");
+ init_task.kasan_depth = 0;
+}
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 6f319fb..12749da 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -358,7 +358,7 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
if (redzone_adjust > 0)
*size += redzone_adjust;
- *size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
+ *size = min((size_t)KMALLOC_MAX_SIZE, max(*size, cache->object_size +
optimal_redzone(cache->object_size)));
/*
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size, int node)
+{
+ return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+ BOOTMEM_ALLOC_ACCESSIBLE, node);
+}
+
+static void __init kasan_early_pmd_populate(unsigned long start, unsigned long end, pud_t *pud)
+{
+ unsigned long addr;
+ unsigned long next;
+ pmd_t *pmd;
+
+ pmd = pmd_offset(pud, start);
+ for (addr = start; addr < end;) {
+ pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+ next = pmd_addr_end(addr, end);
+ addr = next;
+ flush_pmd_entry(pmd);
+ pmd++;
+ }
+}
+
+static void __init kasan_early_pud_populate(unsigned long start, unsigned long end, pgd_t *pgd)
+{
+ unsigned long addr;
+ unsigned long next;
+ pud_t *pud;
+
+ pud = pud_offset(pgd, start);
+ for (addr = start; addr < end;) {
+ next = pud_addr_end(addr, end);
+ kasan_early_pmd_populate(addr, next, pud);
+ addr = next;
+ pud++;
+ }
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgdp)
+{
+ int i;
+ unsigned long start = KASAN_SHADOW_START;
+ unsigned long end = KASAN_SHADOW_END;
+ unsigned long addr;
+ unsigned long next;
+ pgd_t *pgd;
+
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_zero_pte[i], pfn_pte(
+ virt_to_pfn(kasan_zero_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN)));
+
+ pgd = pgd_offset_k(start);
+ for (addr = start; addr < end;) {
+ next = pgd_addr_end(addr, end);
+ kasan_early_pud_populate(addr, next, pgd);
+ addr = next;
+ pgd++;
+ }
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+ struct proc_info_list *list;
+
+ /*
+ * locate processor in the list of supported processor
+ * types. The linker builds this table for us from the
+ * entries in arch/arm/mm/proc-*.S
+ */
+ list = lookup_processor_type(read_cpuid_id());
+ if (list) {
+#ifdef MULTI_CPU
+ processor = *list->proc;
+#endif
+ }
+
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END - (1UL << 29));
+
+
+ kasan_map_early_shadow(swapper_pg_dir);
+ start_kernel();
+}
+
+static void __init clear_pgds(unsigned long start,
+ unsigned long end)
+{
+ for (; start && start < end; start += PMD_SIZE)
+ pmd_clear(pmd_off_k(start));
+}
+
+pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+{
+ pte_t *pte = pte_offset_kernel(pmd, addr);
+ if (pte_none(*pte)) {
+ pte_t entry;
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ entry = pfn_pte(virt_to_pfn(p), __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
+ set_pte_at(&init_mm, addr, pte, entry);
+ }
+ return pte;
+}
+
+pmd_t * __meminit kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
+{
+ pmd_t *pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ pmd_populate_kernel(&init_mm, pmd, p);
+ }
+ return pmd;
+}
+
+pud_t * __meminit kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
+{
+ pud_t *pud = pud_offset(pgd, addr);
+ if (pud_none(*pud)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ pr_err("populating pud addr %lx\n", addr);
+ pud_populate(&init_mm, pud, p);
+ }
+ return pud;
+}
+
+pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
+{
+ pgd_t *pgd = pgd_offset_k(addr);
+ if (pgd_none(*pgd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+ if (!p)
+ return NULL;
+ pgd_populate(&init_mm, pgd, p);
+ }
+ return pgd;
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end, int node)
+{
+ unsigned long addr = start;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+ pr_info("populating shadow for %lx, %lx\n", start, end);
+ for (; addr < end; addr += PAGE_SIZE) {
+ pgd = kasan_pgd_populate(addr, node);
+ if (!pgd)
+ return -ENOMEM;
+
+ pud = kasan_pud_populate(pgd, addr, node);
+ if (!pud)
+ return -ENOMEM;
+
+ pmd = kasan_pmd_populate(pud, addr, node);
+ if (!pmd)
+ return -ENOMEM;
+
+ pte = kasan_pte_populate(pmd, addr, node);
+ if (!pte)
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+
+void __init kasan_init(void)
+{
+ struct memblock_region *reg;
+ u64 orig_ttbr0;
+
+ orig_ttbr0 = cpu_get_ttbr(0);
+
+#ifdef CONFIG_ARM_LPAE
+ memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
+ memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
+ set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+ cpu_set_ttbr0(__pa(tmp_page_table));
+#else
+ memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
+ cpu_set_ttbr0(__pa(tmp_page_table));
+#endif
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+
+ clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ kasan_populate_zero_shadow(
+ kasan_mem_to_shadow((void *)KASAN_SHADOW_START),
+ kasan_mem_to_shadow((void *)KASAN_SHADOW_END));
+
+ kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+ kasan_mem_to_shadow((void *)-1UL) + 1);
+
+ for_each_memblock(memory, reg) {
+ void *start = __va(reg->base);
+ void *end = __va(reg->base + reg->size);
+
+ if (reg->base + reg->size > arm_lowmem_limit)
+ end = __va(arm_lowmem_limit);
+ if (start >= end)
+ break;
+
+ create_mapping((unsigned long)kasan_mem_to_shadow(start),
+ (unsigned long)kasan_mem_to_shadow(end),
+ NUMA_NO_NODE);
+ }
+
+ /*
+ * 1. Module global variables live in MODULES_VADDR ~ MODULES_END, so
+ *    their shadow memory must be mapped.
+ * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE + PMD_SIZE and the shadow of
+ *    MODULES_VADDR ~ MODULES_END fall within the same PMD_SIZE granule,
+ *    so we cannot use kasan_populate_zero_shadow() here.
+ */
+ create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+ (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
+ NUMA_NO_NODE);
+ cpu_set_ttbr0(orig_ttbr0);
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+ memset(kasan_zero_page, 0, PAGE_SIZE);
+ pr_info("Kernel address sanitizer initialized\n");
+ init_task.kasan_depth = 0;
+}
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 6f319fb..12749da 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -358,7 +358,7 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
if (redzone_adjust > 0)
*size += redzone_adjust;
- *size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
+ *size = min((size_t)KMALLOC_MAX_SIZE, max(*size, cache->object_size +
optimal_redzone(cache->object_size)));
/*
--
2.9.0
--
* [PATCH 02/11] replace memory function
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Functions like memset/memmove/memcpy perform many memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch it. The compiler's instrumentation cannot do this because
these functions are written in assembly.
KASan therefore replaces the memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that the
strong definitions in mm/kasan/kasan.c can replace them. The originals
also have aliases with a '__' prefix in their names, so the
non-instrumented variants can still be called when needed.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
arch/arm/include/asm/string.h | 18 +++++++++++++++++-
arch/arm/lib/memcpy.S | 3 +++
arch/arm/lib/memmove.S | 5 ++++-
arch/arm/lib/memset.S | 3 +++
4 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index fe1c6af..43325f8 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -14,15 +14,18 @@ extern char * strchr(const char * s, int c);
#define __HAVE_ARCH_MEMCPY
extern void * memcpy(void *, const void *, __kernel_size_t);
+extern void * __memcpy(void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMMOVE
extern void * memmove(void *, const void *, __kernel_size_t);
+extern void * __memmove(void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMCHR
extern void * memchr(const void *, int, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void * memset(void *, int, __kernel_size_t);
+extern void * __memset(void *, int, __kernel_size_t);
#define __HAVE_ARCH_MEMSET32
extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
@@ -39,7 +42,7 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
}
extern void __memzero(void *ptr, __kernel_size_t n);
-
+#ifndef CONFIG_KASAN
#define memset(p,v,n) \
({ \
void *__p = (p); size_t __n = n; \
@@ -51,5 +54,18 @@ extern void __memzero(void *ptr, __kernel_size_t n);
} \
(__p); \
})
+#endif
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * must use the non-instrumented versions of the mem* functions.
+ */
+
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
#endif
diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S
index 64111bd..79a83f8 100644
--- a/arch/arm/lib/memcpy.S
+++ b/arch/arm/lib/memcpy.S
@@ -61,6 +61,8 @@
/* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
+.weak memcpy
+ENTRY(__memcpy)
ENTRY(mmiocpy)
ENTRY(memcpy)
@@ -68,3 +70,4 @@ ENTRY(memcpy)
ENDPROC(memcpy)
ENDPROC(mmiocpy)
+ENDPROC(__memcpy)
diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index 69a9d47..313db6c 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -27,12 +27,14 @@
* occurring in the opposite direction.
*/
+.weak memmove
+ENTRY(__memmove)
ENTRY(memmove)
UNWIND( .fnstart )
subs ip, r0, r1
cmphi r2, ip
- bls memcpy
+ bls __memcpy
stmfd sp!, {r0, r4, lr}
UNWIND( .fnend )
@@ -225,3 +227,4 @@ ENTRY(memmove)
18: backward_copy_shift push=24 pull=8
ENDPROC(memmove)
+ENDPROC(__memmove)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index ed6d35d..64aa06a 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -16,6 +16,8 @@
.text
.align 5
+.weak memset
+ENTRY(__memset)
ENTRY(mmioset)
ENTRY(memset)
UNWIND( .fnstart )
@@ -135,6 +137,7 @@ UNWIND( .fnstart )
UNWIND( .fnend )
ENDPROC(memset)
ENDPROC(mmioset)
+ENDPROC(__memset)
ENTRY(__memset32)
UNWIND( .fnstart )
--
2.9.0
* [PATCH 03/11] arm: Kconfig: enable KASan
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
From: Andrey Ryabinin <a.ryabinin@samsung.com>
This patch enables the kernel address sanitizer for arm.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
arch/arm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 7888c98..e9249fd 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -46,6 +46,7 @@ config ARM
select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
+ select HAVE_ARCH_KASAN if MMU
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_TRACEHOOK
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for the
arm kernel address sanitizer.
+----+ 0xffffffff
| |
| |
| |
+----+ CONFIG_PAGE_OFFSET
| |\
| | |-> module virtual address space area.
| |/
+----+ MODULE_VADDR = KASAN_SHADOW_END
| |\
| | |-> the shadow area of kernel virtual address.
| |/
+----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START the shadow address of MODULE_VADDR
| |\
| | ---------------------+
| | |
+ + KASAN_SHADOW_OFFSET |-> the user space area. The kernel address sanitizer does not use this space.
| | |
| | ---------------------+
| |/
------ 0
1) KASAN_SHADOW_OFFSET:
This value is used to map an address to its corresponding shadow address with the
following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
2) KASAN_SHADOW_START:
This value is the shadow address of MODULE_VADDR. It is also TASK_SIZE, the start
of the kernel virtual address space.
3) KASAN_SHADOW_END:
This value is the shadow address of 0x100000000. It is the end of the kernel address
sanitizer's shadow area and also the start of the module area.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/include/asm/kasan_def.h | 51 ++++++++++++++++++++++++++++++++++++++++
arch/arm/include/asm/memory.h | 5 ++++
arch/arm/kernel/entry-armv.S | 7 +++++-
3 files changed, 62 insertions(+), 1 deletion(-)
create mode 100644 arch/arm/include/asm/kasan_def.h
diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 0000000..7746908
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,51 @@
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ * +----+ 0xffffffff
+ * | |
+ * | |
+ * | |
+ * +----+ CONFIG_PAGE_OFFSET
+ * | |\
+ * | | |-> module virtual address space area.
+ * | |/
+ * +----+ MODULE_VADDR = KASAN_SHADOW_END
+ * | |\
+ * | | |-> the shadow area of kernel virtual address.
+ * | |/
+ * +----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START the shadow address of MODULE_VADDR
+ * | |\
+ * | | ---------------------+
+ * | | |
+ * + + KASAN_SHADOW_OFFSET |-> the user space area. Kernel address sanitizer do not use this space.
+ * | | |
+ * | | ---------------------+
+ * | |/
+ * ------ 0
+ *
+ *1)KASAN_SHADOW_OFFSET:
+ * This value is used to map an address to the corresponding shadow address by the
+ * following formula:
+ * shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
+ *
+ * 2)KASAN_SHADOW_START
+ * This value is the MODULE_VADDR's shadow address. It is the start of kernel virtual
+ * space.
+ *
+ * 3) KASAN_SHADOW_END
+ * This value is the 0x100000000's shadow address. It is the end of kernel address
+ * sanitizer's shadow area. It is also the start of the module area.
+ *
+ */
+
+#define KASAN_SHADOW_OFFSET (KASAN_SHADOW_END - (1<<29))
+
+#define KASAN_SHADOW_START ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#define KASAN_SHADOW_END (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 1f54e4e..069710d 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -21,6 +21,7 @@
#ifdef CONFIG_NEED_MACH_MEMORY_H
#include <mach/memory.h>
#endif
+#include <asm/kasan_def.h>
/*
* Allow for constants defined here to be used from assembly code
@@ -37,7 +38,11 @@
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
*/
+#ifndef CONFIG_KASAN
#define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE (KASAN_SHADOW_START)
+#endif
#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M)
/*
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index fbc7076..f9efea3 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -187,7 +187,12 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
- mov r1, #TASK_SIZE
+#ifdef CONFIG_KASAN
+ movw r1, #:lower16:TASK_SIZE
+ movt r1, #:upper16:TASK_SIZE
+#else
+ mov r1, #TASK_SIZE
+#endif
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 05/11] Disable kasan's instrumentation
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
From: Andrey Ryabinin <a.ryabinin@samsung.com>
To avoid some build and runtime errors, the compiler's instrumentation must
be disabled for code that is not linked into the kernel image.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
arch/arm/boot/compressed/Makefile | 1 +
arch/arm/kernel/unwind.c | 3 ++-
arch/arm/vdso/Makefile | 2 ++
3 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index d50430c..ab5693b 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -23,6 +23,7 @@ OBJS += hyp-stub.o
endif
GCOV_PROFILE := n
+KASAN_SANITIZE := n
#
# Architecture dependencies
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 0bee233..2e55c7d 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -249,7 +249,8 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
if (*vsp >= (unsigned long *)ctrl->sp_high)
return -URC_FAILURE;
- ctrl->vrs[reg] = *(*vsp)++;
+ ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+ (*vsp)++;
return URC_OK;
}
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index 59a8fa7..689dfec 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -29,6 +29,8 @@ CFLAGS_vgettimeofday.o = -O2
# Disable gcov profiling for VDSO code
GCOV_PROFILE := n
+KASAN_SANITIZE := n
+
# Force dependency
$(obj)/vdso.o : $(obj)/vdso.so
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Because the arm instruction set does not support unaligned memory accesses,
memory_is_poisoned_16 must be changed for arm.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
mm/kasan/kasan.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 12749da..e0e152b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -149,6 +149,25 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
return memory_is_poisoned_1(addr + size - 1);
}
+#ifdef CONFIG_ARM
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+ u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(shadow_addr[0] || shadow_addr[1])) return true;
+ else {
+ /*
+ * If two shadow bytes covers 16-byte access, we don't
+ * need to do anything more. Otherwise, test the last
+ * shadow byte.
+ */
+ if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
+ return false;
+ return memory_is_poisoned_1(addr + 15);
+ }
+}
+
+#else
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
@@ -159,6 +178,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
return *shadow_addr;
}
+#endif
static __always_inline unsigned long bytes_is_nonzero(const u8 *start,
size_t size)
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 06/11] change memory_is_poisoned_16 for aligned error
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Because arm instruction set don't support access the address which is
not aligned, so must change memory_is_poisoned_16 for arm.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
mm/kasan/kasan.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 12749da..e0e152b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -149,6 +149,25 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
return memory_is_poisoned_1(addr + size - 1);
}
+#ifdef CONFIG_ARM
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+ u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(shadow_addr[0] || shadow_addr[1])) return true;
+ else {
+ /*
+ * If two shadow bytes covers 16-byte access, we don't
+ * need to do anything more. Otherwise, test the last
+ * shadow byte.
+ */
+ if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
+ return false;
+ return memory_is_poisoned_1(addr + 15);
+ }
+}
+
+#else
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
@@ -159,6 +178,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
return *shadow_addr;
}
+#endif
static __always_inline unsigned long bytes_is_nonzero(const u8 *start,
size_t size)
--
2.9.0
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 06/11] change memory_is_poisoned_16 for aligned error
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux-arm-kernel
Because the ARM instruction set doesn't support accesses to unaligned
addresses, memory_is_poisoned_16 must be changed for ARM.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
mm/kasan/kasan.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 12749da..e0e152b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -149,6 +149,25 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
return memory_is_poisoned_1(addr + size - 1);
}
+#ifdef CONFIG_ARM
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+ u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(shadow_addr[0] || shadow_addr[1])) return true;
+ else {
+ /*
+ * If two shadow bytes covers 16-byte access, we don't
+ * need to do anything more. Otherwise, test the last
+ * shadow byte.
+ */
+ if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
+ return false;
+ return memory_is_poisoned_1(addr + 15);
+ }
+}
+
+#else
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
@@ -159,6 +178,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
return *shadow_addr;
}
+#endif
static __always_inline unsigned long bytes_is_nonzero(const u8 *start,
size_t size)
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 07/11] Avoid clearing the KASan shadow area's mapping table
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Avoid clearing the mapping table (page table) of the KASAN shadow area.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
arch/arm/mm/mmu.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e46a6a4..f5aa1de 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1251,9 +1251,14 @@ static inline void prepare_page_table(void)
/*
* Clear out all the mappings below the kernel image.
*/
- for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+ for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+#ifdef CONFIG_KASAN
+ /* TASK_SIZE ~ MODULES_VADDR is the KASAN shadow area -- skip over it */
+ addr = MODULES_VADDR;
+#endif
+
#ifdef CONFIG_XIP_KERNEL
/* The XIP kernel is mapped in the module area -- skip over it */
addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 07/11] Avoid clearing the KASan shadow area's mapping table
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Avoid clearing the mapping table (page table) of the KASAN shadow area.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
arch/arm/mm/mmu.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e46a6a4..f5aa1de 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1251,9 +1251,14 @@ static inline void prepare_page_table(void)
/*
* Clear out all the mappings below the kernel image.
*/
- for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+ for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+#ifdef CONFIG_KASAN
+ /* TASK_SIZE ~ MODULES_VADDR is the KASAN shadow area -- skip over it */
+ addr = MODULES_VADDR;
+#endif
+
#ifdef CONFIG_XIP_KERNEL
/* The XIP kernel is mapped in the module area -- skip over it */
addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 07/11] Avoid clearing the KASan shadow area's mapping table
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux-arm-kernel
From: Andrey Ryabinin <a.ryabinin@samsung.com>
Avoid clearing the mapping table (page table) of the KASAN shadow area.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
arch/arm/mm/mmu.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e46a6a4..f5aa1de 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1251,9 +1251,14 @@ static inline void prepare_page_table(void)
/*
* Clear out all the mappings below the kernel image.
*/
- for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+ for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+#ifdef CONFIG_KASAN
+ /* TASK_SIZE ~ MODULES_VADDR is the KASAN shadow area -- skip over it */
+ addr = MODULES_VADDR;
+#endif
+
#ifdef CONFIG_XIP_KERNEL
/* The XIP kernel is mapped in the module area -- skip over it */
addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 08/11] Add support for arm LPAE
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On the ARM LPAE architecture, the page table of the KASan shadow memory (if
PAGE_OFFSET is 0xc0000000, the KASan shadow memory occupies the virtual range
0xb6e00000~0xbf000000) can't be filled in the do_translation_fault function,
because KASan instrumentation may cause do_translation_fault itself to access
the KASan shadow memory. Such an access from within do_translation_fault could
recurse endlessly, so the page table entries of the KASan shadow memory need
to be copied in the pgd_alloc function.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/pgd.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index c1c1a5c..4f73978 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -64,6 +64,18 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
new_pmd = pmd_alloc(mm, new_pud, 0);
if (!new_pmd)
goto no_pmd;
+#ifdef CONFIG_KASAN
+ /*
+ * Copy PMD table for KASAN shadow mappings.
+ */
+ init_pgd = pgd_offset_k(TASK_SIZE);
+ init_pud = pud_offset(init_pgd, TASK_SIZE);
+ init_pmd = pmd_offset(init_pud, TASK_SIZE);
+ new_pmd = pmd_offset(new_pud,TASK_SIZE);
+ memcpy(new_pmd, init_pmd, (pmd_index(MODULES_VADDR)-pmd_index(TASK_SIZE)) * sizeof(pmd_t));
+ clean_dcache_area(new_pmd,PTRS_PER_PMD*sizeof(pmd_t));
+#endif
+
#endif
if (!vectors_high()) {
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 08/11] Add support for arm LPAE
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On the ARM LPAE architecture, the page table of the KASan shadow memory (if
PAGE_OFFSET is 0xc0000000, the KASan shadow memory occupies the virtual range
0xb6e00000~0xbf000000) can't be filled in the do_translation_fault function,
because KASan instrumentation may cause do_translation_fault itself to access
the KASan shadow memory. Such an access from within do_translation_fault could
recurse endlessly, so the page table entries of the KASan shadow memory need
to be copied in the pgd_alloc function.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/pgd.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index c1c1a5c..4f73978 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -64,6 +64,18 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
new_pmd = pmd_alloc(mm, new_pud, 0);
if (!new_pmd)
goto no_pmd;
+#ifdef CONFIG_KASAN
+ /*
+ * Copy PMD table for KASAN shadow mappings.
+ */
+ init_pgd = pgd_offset_k(TASK_SIZE);
+ init_pud = pud_offset(init_pgd, TASK_SIZE);
+ init_pmd = pmd_offset(init_pud, TASK_SIZE);
+ new_pmd = pmd_offset(new_pud,TASK_SIZE);
+ memcpy(new_pmd, init_pmd, (pmd_index(MODULES_VADDR)-pmd_index(TASK_SIZE)) * sizeof(pmd_t));
+ clean_dcache_area(new_pmd,PTRS_PER_PMD*sizeof(pmd_t));
+#endif
+
#endif
if (!vectors_high()) {
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 08/11] Add support for arm LPAE
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux-arm-kernel
On the ARM LPAE architecture, the page table of the KASan shadow memory (if
PAGE_OFFSET is 0xc0000000, the KASan shadow memory occupies the virtual range
0xb6e00000~0xbf000000) can't be filled in the do_translation_fault function,
because KASan instrumentation may cause do_translation_fault itself to access
the KASan shadow memory. Such an access from within do_translation_fault could
recurse endlessly, so the page table entries of the KASan shadow memory need
to be copied in the pgd_alloc function.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/pgd.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index c1c1a5c..4f73978 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -64,6 +64,18 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
new_pmd = pmd_alloc(mm, new_pud, 0);
if (!new_pmd)
goto no_pmd;
+#ifdef CONFIG_KASAN
+ /*
+ * Copy PMD table for KASAN shadow mappings.
+ */
+ init_pgd = pgd_offset_k(TASK_SIZE);
+ init_pud = pud_offset(init_pgd, TASK_SIZE);
+ init_pmd = pmd_offset(init_pud, TASK_SIZE);
+ new_pmd = pmd_offset(new_pud,TASK_SIZE);
+ memcpy(new_pmd, init_pmd, (pmd_index(MODULES_VADDR)-pmd_index(TASK_SIZE)) * sizeof(pmd_t));
+ clean_dcache_area(new_pmd,PTRS_PER_PMD*sizeof(pmd_t));
+#endif
+
#endif
if (!vectors_high()) {
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 09/11] Don't need to map the shadow of KASan's shadow memory
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Because KASan's shadow memory itself doesn't need to be tracked, remove the
code in kasan_init that maps a shadow for it.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/kasan_init.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 2bf0782..7cfdc39 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -218,10 +218,6 @@ void __init kasan_init(void)
clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
- kasan_populate_zero_shadow(
- kasan_mem_to_shadow((void *)KASAN_SHADOW_START),
- kasan_mem_to_shadow((void *)KASAN_SHADOW_END));
-
kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)-1UL) + 1);
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 09/11] Don't need to map the shadow of KASan's shadow memory
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Because KASan's shadow memory itself doesn't need to be tracked, remove the
code in kasan_init that maps a shadow for it.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/kasan_init.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 2bf0782..7cfdc39 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -218,10 +218,6 @@ void __init kasan_init(void)
clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
- kasan_populate_zero_shadow(
- kasan_mem_to_shadow((void *)KASAN_SHADOW_START),
- kasan_mem_to_shadow((void *)KASAN_SHADOW_END));
-
kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)-1UL) + 1);
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 09/11] Don't need to map the shadow of KASan's shadow memory
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux-arm-kernel
Because KASan's shadow memory itself doesn't need to be tracked, remove the
code in kasan_init that maps a shadow for it.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/kasan_init.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 2bf0782..7cfdc39 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -218,10 +218,6 @@ void __init kasan_init(void)
clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
- kasan_populate_zero_shadow(
- kasan_mem_to_shadow((void *)KASAN_SHADOW_START),
- kasan_mem_to_shadow((void *)KASAN_SHADOW_END));
-
kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)-1UL) + 1);
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 10/11] Change mapping of kasan_zero_page into readonly
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Because kasan_zero_page (which is used as the shadow region for memory that
KASan doesn't need to track) won't be written after kasan_init, change the
mapping of kasan_zero_page to read-only.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/kasan_init.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 7cfdc39..c11826a 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -200,6 +200,7 @@ void __init kasan_init(void)
{
struct memblock_region *reg;
u64 orig_ttbr0;
+ int i;
orig_ttbr0 = cpu_get_ttbr(0);
@@ -243,6 +244,17 @@ void __init kasan_init(void)
create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
(unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
NUMA_NO_NODE);
+
+ /*
+ * KAsan may reuse the contents of kasan_zero_pte directly, so we
+ * should make sure that it maps the zero page read-only.
+ */
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_zero_pte[i], pfn_pte(
+ virt_to_pfn(kasan_zero_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY)));
+ memset(kasan_zero_page, 0, PAGE_SIZE);
cpu_set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 10/11] Change mapping of kasan_zero_page into readonly
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Because kasan_zero_page (which is used as the shadow region for memory that
KASan doesn't need to track) won't be written after kasan_init, change the
mapping of kasan_zero_page to read-only.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/kasan_init.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 7cfdc39..c11826a 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -200,6 +200,7 @@ void __init kasan_init(void)
{
struct memblock_region *reg;
u64 orig_ttbr0;
+ int i;
orig_ttbr0 = cpu_get_ttbr(0);
@@ -243,6 +244,17 @@ void __init kasan_init(void)
create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
(unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
NUMA_NO_NODE);
+
+ /*
+ * KAsan may reuse the contents of kasan_zero_pte directly, so we
+ * should make sure that it maps the zero page read-only.
+ */
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_zero_pte[i], pfn_pte(
+ virt_to_pfn(kasan_zero_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY)));
+ memset(kasan_zero_page, 0, PAGE_SIZE);
cpu_set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 10/11] Change mapping of kasan_zero_page into readonly
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux-arm-kernel
Because kasan_zero_page (which is used as the shadow region for memory that
KASan doesn't need to track) won't be written after kasan_init, change the
mapping of kasan_zero_page to read-only.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/kasan_init.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 7cfdc39..c11826a 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -200,6 +200,7 @@ void __init kasan_init(void)
{
struct memblock_region *reg;
u64 orig_ttbr0;
+ int i;
orig_ttbr0 = cpu_get_ttbr(0);
@@ -243,6 +244,17 @@ void __init kasan_init(void)
create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
(unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
NUMA_NO_NODE);
+
+ /*
+ * KAsan may reuse the contents of kasan_zero_pte directly, so we
+ * should make sure that it maps the zero page read-only.
+ */
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_zero_pte[i], pfn_pte(
+ virt_to_pfn(kasan_zero_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY)));
+ memset(kasan_zero_page, 0, PAGE_SIZE);
cpu_set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 11/11] Add KASan layout
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 8:22 ` Abbott Liu
-1 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Add KASan layout into virtual kernel memory layout.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/init.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad80548..b490cf4 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -537,6 +537,9 @@ void __init mem_init(void)
#ifdef CONFIG_MODULES
" modules : 0x%08lx - 0x%08lx (%4ld MB)\n"
#endif
+#ifdef CONFIG_KASAN
+ " kasan : 0x%08lx - 0x%08lx (%4ld MB)\n"
+#endif
" .text : 0x%p" " - 0x%p" " (%4td kB)\n"
" .init : 0x%p" " - 0x%p" " (%4td kB)\n"
" .data : 0x%p" " - 0x%p" " (%4td kB)\n"
@@ -557,6 +560,9 @@ void __init mem_init(void)
#ifdef CONFIG_MODULES
MLM(MODULES_VADDR, MODULES_END),
#endif
+#ifdef CONFIG_KASAN
+ MLM(KASAN_SHADOW_START, KASAN_SHADOW_END),
+#endif
MLK_ROUNDUP(_text, _etext),
MLK_ROUNDUP(__init_begin, __init_end),
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 11/11] Add KASan layout
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux, aryabinin, liuwenliang, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Add KASan layout into virtual kernel memory layout.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/init.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad80548..b490cf4 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -537,6 +537,9 @@ void __init mem_init(void)
#ifdef CONFIG_MODULES
" modules : 0x%08lx - 0x%08lx (%4ld MB)\n"
#endif
+#ifdef CONFIG_KASAN
+ " kasan : 0x%08lx - 0x%08lx (%4ld MB)\n"
+#endif
" .text : 0x%p" " - 0x%p" " (%4td kB)\n"
" .init : 0x%p" " - 0x%p" " (%4td kB)\n"
" .data : 0x%p" " - 0x%p" " (%4td kB)\n"
@@ -557,6 +560,9 @@ void __init mem_init(void)
#ifdef CONFIG_MODULES
MLM(MODULES_VADDR, MODULES_END),
#endif
+#ifdef CONFIG_KASAN
+ MLM(KASAN_SHADOW_START, KASAN_SHADOW_END),
+#endif
MLK_ROUNDUP(_text, _etext),
MLK_ROUNDUP(__init_begin, __init_end),
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 11/11] Add KASan layout
@ 2017-10-11 8:22 ` Abbott Liu
0 siblings, 0 replies; 253+ messages in thread
From: Abbott Liu @ 2017-10-11 8:22 UTC (permalink / raw)
To: linux-arm-kernel
Add KASan layout into virtual kernel memory layout.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
---
arch/arm/mm/init.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad80548..b490cf4 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -537,6 +537,9 @@ void __init mem_init(void)
#ifdef CONFIG_MODULES
" modules : 0x%08lx - 0x%08lx (%4ld MB)\n"
#endif
+#ifdef CONFIG_KASAN
+ " kasan : 0x%08lx - 0x%08lx (%4ld MB)\n"
+#endif
" .text : 0x%p" " - 0x%p" " (%4td kB)\n"
" .init : 0x%p" " - 0x%p" " (%4td kB)\n"
" .data : 0x%p" " - 0x%p" " (%4td kB)\n"
@@ -557,6 +560,9 @@ void __init mem_init(void)
#ifdef CONFIG_MODULES
MLM(MODULES_VADDR, MODULES_END),
#endif
+#ifdef CONFIG_KASAN
+ MLM(KASAN_SHADOW_START, KASAN_SHADOW_END),
+#endif
MLK_ROUNDUP(_text, _etext),
MLK_ROUNDUP(__init_begin, __init_end),
--
2.9.0
^ permalink raw reply related [flat|nested] 253+ messages in thread
* Re: [PATCH 00/11] KASan for arm
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-11 19:13 ` Florian Fainelli
-1 siblings, 0 replies; 253+ messages in thread
From: Florian Fainelli @ 2017-10-11 19:13 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Hi Abbott,
On 10/11/2017 01:22 AM, Abbott Liu wrote:
> Hi,all:
> These patches add arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).
>
> 1/8 of kernel addresses reserved for shadow memory. There was no
> big enough hole for this, so virtual addresses for shadow were
> stolen from user space.
>
> At early boot stage the whole shadow region populated with just
> one physical page (kasan_zero_page). Later, this page reused
> as readonly zero shadow for some memory that KASan currently
> don't track (vmalloc).
>
> After mapping the physical memory, pages for shadow memory are
> allocated and mapped.
>
> KASan's stack instrumentation significantly increases stack's
> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>
> Functions like memset/memmove/memcpy do a lot of memory accesses.
> If bad pointer passed to one of these function it is important
> to catch this. Compiler's instrumentation cannot do this since
> these functions are written in assembly.
>
> KASan replaces memory functions with manually instrumented variants.
> Original functions declared as weak symbols so strong definitions
> in mm/kasan/kasan.c could replace them. Original functions have aliases
> with '__' prefix in name, so we could call non-instrumented variant
> if needed.
>
> Some files built without kasan instrumentation (e.g. mm/slub.c).
> Original mem* function replaced (via #define) with prefixed variants
> to disable memory access checks for such files.
>
> On arm LPAE architecture, the mapping table of KASan shadow memory(if
> PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual space is
> 0xb6e000000~0xbf000000) can't be filled in do_translation_fault function,
> because kasan instrumentation maybe cause do_translation_fault function
> accessing KASan shadow memory. The accessing of KASan shadow memory in
> do_translation_fault function maybe cause dead circle. So the mapping table
> of KASan shadow memory need be copyed in pgd_alloc function.
>
>
> Most of the code comes from:
> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
Thanks for putting these patches together. I can't get a kernel to build
with ARM_LPAE=y or ARM_LPAE=n that does not result in the following:
AS arch/arm/kernel/entry-common.o
arch/arm/kernel/entry-common.S: Assembler messages:
arch/arm/kernel/entry-common.S:53: Error: invalid constant
(ffffffffb6e00000) after fixup
arch/arm/kernel/entry-common.S:118: Error: invalid constant
(ffffffffb6e00000) after fixup
scripts/Makefile.build:412: recipe for target
'arch/arm/kernel/entry-common.o' failed
make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
Makefile:1019: recipe for target 'arch/arm/kernel' failed
make[2]: *** [arch/arm/kernel] Error 2
make[2]: *** Waiting for unfinished jobs....
This is coming from the increase in TASK_SIZE it seems.
This is on top of v4.14-rc4-84-gff5abbe799e2
Thank you
>
> These patches are tested on vexpress-ca15, vexpress-ca9
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Tested-by: Abbott Liu <liuwenliang@huawei.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
>
> Abbott Liu (6):
> Define the virtual space of KASan's shadow region
> change memory_is_poisoned_16 for aligned error
> Add support arm LPAE
> Don't need to map the shadow of KASan's shadow memory
> Change mapping of kasan_zero_page int readonly
> Add KASan layout
>
> Andrey Ryabinin (5):
> Initialize the mapping of KASan shadow memory
> replace memory function
> arm: Kconfig: enable KASan
> Disable kasan's instrumentation
> Avoid cleaning the KASan shadow area's mapping table
>
> arch/arm/Kconfig | 1 +
> arch/arm/boot/compressed/Makefile | 1 +
> arch/arm/include/asm/kasan.h | 20 +++
> arch/arm/include/asm/kasan_def.h | 51 +++++++
> arch/arm/include/asm/memory.h | 5 +
> arch/arm/include/asm/pgalloc.h | 5 +-
> arch/arm/include/asm/pgtable.h | 1 +
> arch/arm/include/asm/proc-fns.h | 33 +++++
> arch/arm/include/asm/string.h | 18 ++-
> arch/arm/include/asm/thread_info.h | 4 +
> arch/arm/kernel/entry-armv.S | 7 +-
> arch/arm/kernel/head-common.S | 4 +
> arch/arm/kernel/setup.c | 2 +
> arch/arm/kernel/unwind.c | 3 +-
> arch/arm/lib/memcpy.S | 3 +
> arch/arm/lib/memmove.S | 5 +-
> arch/arm/lib/memset.S | 3 +
> arch/arm/mm/Makefile | 5 +
> arch/arm/mm/init.c | 6 +
> arch/arm/mm/kasan_init.c | 265 +++++++++++++++++++++++++++++++++++++
> arch/arm/mm/mmu.c | 7 +-
> arch/arm/mm/pgd.c | 12 ++
> arch/arm/vdso/Makefile | 2 +
> mm/kasan/kasan.c | 22 ++-
> 24 files changed, 478 insertions(+), 7 deletions(-)
> create mode 100644 arch/arm/include/asm/kasan.h
> create mode 100644 arch/arm/include/asm/kasan_def.h
> create mode 100644 arch/arm/mm/kasan_init.c
>
--
Florian
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 03/11] arm: Kconfig: enable KASan
From: Florian Fainelli @ 2017-10-11 19:15 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On 10/11/2017 01:22 AM, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> This patch enables the kernel address sanitizer for arm.
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
This needs to be the last patch in the series; otherwise, anyone
building between patch 3 and patch 11 gets a varying experience with
this series depending on their system type (LPAE or not, etc.)
--
Florian
* Re: [PATCH 05/11] Disable kasan's instrumentation
From: Florian Fainelli @ 2017-10-11 19:16 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On 10/11/2017 01:22 AM, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> To avoid some build and runtime errors, the compiler's instrumentation
> must be disabled for code that is not linked into the kernel image.
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Same as patch 3, this needs to be moved before you allow KASan to be
enabled/selected. It has few, if any, dependencies on the other patches,
so it could be moved to the front as the first patch in the series.
Thanks!
--
Florian
* Re: [PATCH 10/11] Change mapping of kasan_zero_page int readonly
From: Florian Fainelli @ 2017-10-11 19:19 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On 10/11/2017 01:22 AM, Abbott Liu wrote:
> Because kasan_zero_page (which is used as the shadow region for some
> memory that KASan doesn't need to track) won't be written after
> kasan_init, change the mapping of kasan_zero_page to read-only.
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> ---
> arch/arm/mm/kasan_init.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> index 7cfdc39..c11826a 100644
> --- a/arch/arm/mm/kasan_init.c
> +++ b/arch/arm/mm/kasan_init.c
> @@ -200,6 +200,7 @@ void __init kasan_init(void)
> {
> struct memblock_region *reg;
> u64 orig_ttbr0;
> + int i;
Nit: unsigned int i.
>
> orig_ttbr0 = cpu_get_ttbr(0);
>
> @@ -243,6 +244,17 @@ void __init kasan_init(void)
> create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
> (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
> NUMA_NO_NODE);
> +
> + /*
> + * KAsan may reuse the contents of kasan_zero_pte directly, so we
> + * should make sure that it maps the zero page read-only.
> + */
> + for (i = 0; i < PTRS_PER_PTE; i++)
> + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> + &kasan_zero_pte[i], pfn_pte(
> + virt_to_pfn(kasan_zero_page),
> + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY)));
> + memset(kasan_zero_page, 0, PAGE_SIZE);
> cpu_set_ttbr0(orig_ttbr0);
> flush_cache_all();
> local_flush_bp_all();
>
--
Florian
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
From: Florian Fainelli @ 2017-10-11 19:39 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On 10/11/2017 01:22 AM, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> This patch initializes the KASan shadow region's page table and memory.
> KASan initialization happens in two stages:
> 1. At early boot, the whole shadow region is mapped to just
> one physical page (kasan_zero_page). This is done by the function
> kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/
> head-common.S).
>
> 2. After paging_init has run, we use kasan_zero_page as the zero
> shadow for memory that KASan doesn't need to track, and we allocate
> new shadow space for the memory that KASan does track. This is
> done by the function kasan_init, which is called by setup_arch.
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> ---
[snip]
\
> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
> pg &= ~0x3fff; \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr(nr, val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
nr seems to be unused here?
> +
> +#define cpu_get_ttbr(nr) \
> + ({ \
> + unsigned long ttbr; \
> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
> + : "=r" (ttbr)); \
> + ttbr; \
> + })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
Why is cpu_set_ttbr0() not using cpu_set_ttbr()?
> +
> #endif
>
> #else /*!CONFIG_MMU */
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 1d468b5..52c4858 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -16,7 +16,11 @@
> #include <asm/fpstate.h>
> #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +#define THREAD_SIZE_ORDER 2
> +#else
> #define THREAD_SIZE_ORDER 1
> +#endif
> #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
> #define THREAD_START_SP (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 8733012..c17f4a2 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -101,7 +101,11 @@ __mmap_switched:
> str r2, [r6] @ Save atags pointer
> cmp r7, #0
> strne r0, [r7] @ Save control register values
> +#ifdef CONFIG_KASAN
> + b kasan_early_init
> +#else
> b start_kernel
> +#endif
Please don't make this "exclusive": just conditionally call
kasan_early_init(), remove the call to start_kernel from
kasan_early_init, and keep the call to start_kernel here.
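That is, something along these lines (a sketch of the suggested shape, not the exact patch):

```
#ifdef CONFIG_KASAN
	bl	kasan_early_init
#endif
	b	start_kernel
```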
> ENDPROC(__mmap_switched)
>
> .align 2
[snip]
> +void __init kasan_early_init(void)
> +{
> + struct proc_info_list *list;
> +
> + /*
> + * locate processor in the list of supported processor
> + * types. The linker builds this table for us from the
> + * entries in arch/arm/mm/proc-*.S
> + */
> + list = lookup_processor_type(read_cpuid_id());
> + if (list) {
> +#ifdef MULTI_CPU
> + processor = *list->proc;
> +#endif
> + }
I could not quite spot where in your patch series this
information is actually needed?
--
Florian
* [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-10-11 19:39 ` Florian Fainelli
0 siblings, 0 replies; 253+ messages in thread
From: Florian Fainelli @ 2017-10-11 19:39 UTC (permalink / raw)
To: linux-arm-kernel
On 10/11/2017 01:22 AM, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> This patch initializes KASan shadow region's page table and memory.
> There are two stage for KASan initializing:
> 1. At early boot stage the whole shadow region is mapped to just
> one physical page (kasan_zero_page). It's finished by the function
> kasan_early_init which is called by __mmap_switched(arch/arm/kernel/
> head-common.S)
>
> 2. After the calling of paging_init, we use kasan_zero_page as zero
> shadow for some memory that KASan don't need to track, and we alloc
> new shadow space for the other memory that KASan need to track. These
> issues are finished by the function kasan_init which is call by setup_arch.
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> ---
[snip]
\
> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
> pg &= ~0x3fff; \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr(nr, val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
nr seems to be unused here?
> +
> +#define cpu_get_ttbr(nr) \
> + ({ \
> + unsigned long ttbr; \
> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
> + : "=r" (ttbr)); \
> + ttbr; \
> + })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
Why is cpu_set_ttbr0() not using cpu_set_ttbr()?
> +
> #endif
>
> #else /*!CONFIG_MMU */
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 1d468b5..52c4858 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -16,7 +16,11 @@
> #include <asm/fpstate.h>
> #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +#define THREAD_SIZE_ORDER 2
> +#else
> #define THREAD_SIZE_ORDER 1
> +#endif
> #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
> #define THREAD_START_SP (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 8733012..c17f4a2 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -101,7 +101,11 @@ __mmap_switched:
> str r2, [r6] @ Save atags pointer
> cmp r7, #0
> strne r0, [r7] @ Save control register values
> +#ifdef CONFIG_KASAN
> + b kasan_early_init
> +#else
> b start_kernel
> +#endif
Please don't make this "exclusive"; just conditionally call
kasan_early_init(), remove the call to start_kernel from
kasan_early_init(), and keep the call to start_kernel here.
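In other words, something along these lines (a sketch of the suggested flow, not an actual patch):

```asm
#ifdef CONFIG_KASAN
	bl	kasan_early_init	@ call and return here...
#endif
	b	start_kernel		@ ...so this branch stays unconditional
```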
> ENDPROC(__mmap_switched)
>
> .align 2
[snip]
> +void __init kasan_early_init(void)
> +{
> + struct proc_info_list *list;
> +
> + /*
> + * locate processor in the list of supported processor
> + * types. The linker builds this table for us from the
> + * entries in arch/arm/mm/proc-*.S
> + */
> + list = lookup_processor_type(read_cpuid_id());
> + if (list) {
> +#ifdef MULTI_CPU
> + processor = *list->proc;
> +#endif
> + }
I could not quite spot in your patch series where you need this
information.
--
Florian
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 00/11] KASan for arm
2017-10-11 19:13 ` Florian Fainelli
(?)
@ 2017-10-11 19:50 ` Florian Fainelli
-1 siblings, 0 replies; 253+ messages in thread
From: Florian Fainelli @ 2017-10-11 19:50 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On 10/11/2017 12:13 PM, Florian Fainelli wrote:
> Hi Abbott,
>
> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>> Hi,all:
>> These patches add arch specific code for kernel address sanitizer
>> (see Documentation/kasan.txt).
>>
>> 1/8 of the kernel address space is reserved for shadow memory. There
>> was no hole big enough for this, so virtual addresses for the shadow
>> were stolen from user space.
>>
>> At the early boot stage the whole shadow region is populated with
>> just one physical page (kasan_zero_page). Later, this page is reused
>> as a readonly zero shadow for some memory that KASan currently
>> doesn't track (vmalloc).
>>
>> After mapping the physical memory, pages for shadow memory are
>> allocated and mapped.
>>
>> KASan's stack instrumentation significantly increases stack
>> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>>
>> Functions like memset/memmove/memcpy do a lot of memory accesses.
>> If a bad pointer is passed to one of these functions, it is important
>> to catch that. The compiler's instrumentation cannot do it, since
>> these functions are written in assembly.
>>
>> KASan replaces these memory functions with manually instrumented
>> variants. The original functions are declared as weak symbols so that
>> the strong definitions in mm/kasan/kasan.c can replace them. The
>> original functions also have aliases with a '__' prefix, so the
>> non-instrumented variant can be called when needed.
>>
>> Some files are built without KASan instrumentation (e.g. mm/slub.c).
>> For such files, the original mem* functions are replaced (via #define)
>> with the prefixed variants to disable memory access checks.
>>
>> On the arm LPAE architecture, the mapping table of the KASan shadow
>> memory (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's
>> virtual space is 0xb6e00000~0xbf000000) can't be filled in the
>> do_translation_fault function, because KASan instrumentation may
>> cause do_translation_fault itself to access KASan shadow memory. Such
>> an access could lead to endless recursion, so the mapping table of
>> the KASan shadow memory needs to be copied in the pgd_alloc function.
>>
>>
>> Most of the code comes from:
>> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>
> Thanks for putting these patches together. I can't get a kernel to
> build with either ARM_LPAE=y or ARM_LPAE=n without running into the
> following:
>
> AS arch/arm/kernel/entry-common.o
> arch/arm/kernel/entry-common.S: Assembler messages:
> arch/arm/kernel/entry-common.S:53: Error: invalid constant
> (ffffffffb6e00000) after fixup
> arch/arm/kernel/entry-common.S:118: Error: invalid constant
> (ffffffffb6e00000) after fixup
> scripts/Makefile.build:412: recipe for target
> 'arch/arm/kernel/entry-common.o' failed
> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
> Makefile:1019: recipe for target 'arch/arm/kernel' failed
> make[2]: *** [arch/arm/kernel] Error 2
> make[2]: *** Waiting for unfinished jobs....
>
> This is coming from the increase in TASK_SIZE it seems.
>
> This is on top of v4.14-rc4-84-gff5abbe799e2
Seems like we can use the following to get through that build failure:
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 99c908226065..0de1160d136e 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,13 @@ ret_fast_syscall:
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
+#ifdef CONFIG_KASAN
+ movw r1, #:lower16:TASK_SIZE
+ movt r1, #:upper16:TASK_SIZE
+ cmp r2, r1
+#else
cmp r2, #TASK_SIZE
+#endif
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -115,7 +121,13 @@ ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
ldr r2, [tsk, #TI_ADDR_LIMIT]
+#ifdef CONFIG_KASAN
+ movw r1, #:lower16:TASK_SIZE
+ movt r1, #:upper16:TASK_SIZE
+ cmp r2, r1
+#else
cmp r2, #TASK_SIZE
+#endif
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
but then we will see another set of build failures with the decompressor
code:
WARNING: modpost: Found 2 section mismatch(es).
To see full details build your kernel with:
'make CONFIG_DEBUG_SECTION_MISMATCH=y'
KSYM .tmp_kallsyms1.o
KSYM .tmp_kallsyms2.o
LD vmlinux
SORTEX vmlinux
SYSMAP System.map
OBJCOPY arch/arm/boot/Image
Kernel: arch/arm/boot/Image is ready
LDS arch/arm/boot/compressed/vmlinux.lds
AS arch/arm/boot/compressed/head.o
XZKERN arch/arm/boot/compressed/piggy_data
CC arch/arm/boot/compressed/misc.o
CC arch/arm/boot/compressed/decompress.o
CC arch/arm/boot/compressed/string.o
arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
#define memmove memmove
In file included from arch/arm/boot/compressed/decompress.c:7:0:
./arch/arm/include/asm/string.h:67:0: note: this is the location of the
previous definition
#define memmove(dst, src, len) __memmove(dst, src, len)
arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
#define memcpy memcpy
In file included from arch/arm/boot/compressed/decompress.c:7:0:
./arch/arm/include/asm/string.h:66:0: note: this is the location of the
previous definition
#define memcpy(dst, src, len) __memcpy(dst, src, len)
SHIPPED arch/arm/boot/compressed/hyp-stub.S
SHIPPED arch/arm/boot/compressed/fdt_rw.c
SHIPPED arch/arm/boot/compressed/fdt.h
SHIPPED arch/arm/boot/compressed/libfdt.h
SHIPPED arch/arm/boot/compressed/libfdt_internal.h
SHIPPED arch/arm/boot/compressed/fdt_ro.c
SHIPPED arch/arm/boot/compressed/fdt_wip.c
SHIPPED arch/arm/boot/compressed/fdt.c
CC arch/arm/boot/compressed/atags_to_fdt.o
SHIPPED arch/arm/boot/compressed/lib1funcs.S
SHIPPED arch/arm/boot/compressed/ashldi3.S
SHIPPED arch/arm/boot/compressed/bswapsdi2.S
AS arch/arm/boot/compressed/hyp-stub.o
CC arch/arm/boot/compressed/fdt_rw.o
CC arch/arm/boot/compressed/fdt_ro.o
CC arch/arm/boot/compressed/fdt_wip.o
CC arch/arm/boot/compressed/fdt.o
AS arch/arm/boot/compressed/lib1funcs.o
AS arch/arm/boot/compressed/ashldi3.o
AS arch/arm/boot/compressed/bswapsdi2.o
AS arch/arm/boot/compressed/piggy.o
LD arch/arm/boot/compressed/vmlinux
arch/arm/boot/compressed/decompress.o: In function `fill_temp':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_stream.c:162:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `bcj_flush':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:404:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:409:
undefined reference to `memmove'
arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:919:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `dict_flush':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:424:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `dict_uncompressed':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:390:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:400:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:859:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:884:
undefined reference to `memmove'
arch/arm/boot/compressed/decompress.o: In function `xz_dec_bcj_run':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:451:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:471:
undefined reference to `memcpy'
arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
/home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
undefined reference to `__memset'
arch/arm/boot/compressed/Makefile:182: recipe for target
'arch/arm/boot/compressed/vmlinux' failed
make[4]: *** [arch/arm/boot/compressed/vmlinux] Error 1
arch/arm/boot/Makefile:53: recipe for target
'arch/arm/boot/compressed/vmlinux' failed
make[3]: *** [arch/arm/boot/compressed/vmlinux] Error 2
>
> Thank you
>
>>
>> These patches are tested on vexpress-ca15, vexpress-ca9
>>
>> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
>> Tested-by: Abbott Liu <liuwenliang@huawei.com>
>> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
>>
>> Abbott Liu (6):
>> Define the virtual space of KASan's shadow region
>> change memory_is_poisoned_16 for aligned error
>> Add support arm LPAE
>> Don't need to map the shadow of KASan's shadow memory
>> Change mapping of kasan_zero_page int readonly
>> Add KASan layout
>>
>> Andrey Ryabinin (5):
>> Initialize the mapping of KASan shadow memory
>> replace memory function
>> arm: Kconfig: enable KASan
>> Disable kasan's instrumentation
>> Avoid cleaning the KASan shadow area's mapping table
>>
>> arch/arm/Kconfig | 1 +
>> arch/arm/boot/compressed/Makefile | 1 +
>> arch/arm/include/asm/kasan.h | 20 +++
>> arch/arm/include/asm/kasan_def.h | 51 +++++++
>> arch/arm/include/asm/memory.h | 5 +
>> arch/arm/include/asm/pgalloc.h | 5 +-
>> arch/arm/include/asm/pgtable.h | 1 +
>> arch/arm/include/asm/proc-fns.h | 33 +++++
>> arch/arm/include/asm/string.h | 18 ++-
>> arch/arm/include/asm/thread_info.h | 4 +
>> arch/arm/kernel/entry-armv.S | 7 +-
>> arch/arm/kernel/head-common.S | 4 +
>> arch/arm/kernel/setup.c | 2 +
>> arch/arm/kernel/unwind.c | 3 +-
>> arch/arm/lib/memcpy.S | 3 +
>> arch/arm/lib/memmove.S | 5 +-
>> arch/arm/lib/memset.S | 3 +
>> arch/arm/mm/Makefile | 5 +
>> arch/arm/mm/init.c | 6 +
>> arch/arm/mm/kasan_init.c | 265 +++++++++++++++++++++++++++++++++++++
>> arch/arm/mm/mmu.c | 7 +-
>> arch/arm/mm/pgd.c | 12 ++
>> arch/arm/vdso/Makefile | 2 +
>> mm/kasan/kasan.c | 22 ++-
>> 24 files changed, 478 insertions(+), 7 deletions(-)
>> create mode 100644 arch/arm/include/asm/kasan.h
>> create mode 100644 arch/arm/include/asm/kasan_def.h
>> create mode 100644 arch/arm/mm/kasan_init.c
>>
>
>
--
Florian
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 00/11] KASan for arm
@ 2017-10-11 19:50 ` Florian Fainelli
0 siblings, 0 replies; 253+ messages in thread
From: Florian Fainelli @ 2017-10-11 19:50 UTC (permalink / raw)
To: linux-arm-kernel
On 10/11/2017 12:13 PM, Florian Fainelli wrote:
> Hi Abbott,
>
> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>> Hi,all:
>> These patches add arch specific code for kernel address sanitizer
>> (see Documentation/kasan.txt).
>>
>> 1/8 of kernel addresses reserved for shadow memory. There was no
>> big enough hole for this, so virtual addresses for shadow were
>> stolen from user space.
>>
>> At early boot stage the whole shadow region populated with just
>> one physical page (kasan_zero_page). Later, this page reused
>> as readonly zero shadow for some memory that KASan currently
>> don't track (vmalloc).
>>
>> After mapping the physical memory, pages for shadow memory are
>> allocated and mapped.
>>
>> KASan's stack instrumentation significantly increases stack's
>> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>>
>> Functions like memset/memmove/memcpy do a lot of memory accesses.
>> If bad pointer passed to one of these function it is important
>> to catch this. Compiler's instrumentation cannot do this since
>> these functions are written in assembly.
>>
>> KASan replaces memory functions with manually instrumented variants.
>> Original functions declared as weak symbols so strong definitions
>> in mm/kasan/kasan.c could replace them. Original functions have aliases
>> with '__' prefix in name, so we could call non-instrumented variant
>> if needed.
>>
>> Some files built without kasan instrumentation (e.g. mm/slub.c).
>> Original mem* function replaced (via #define) with prefixed variants
>> to disable memory access checks for such files.
>>
>> On the arm LPAE architecture, the mapping table of the KASan shadow
>> memory (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's
>> virtual space is 0xb6e00000~0xbf000000) can't be filled in the
>> do_translation_fault function, because KASan instrumentation may
>> cause do_translation_fault itself to access KASan shadow memory. Such
>> an access could lead to endless recursion, so the mapping table of
>> the KASan shadow memory needs to be copied in the pgd_alloc function.
>>
>>
>> Most of the code comes from:
>> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>
> Thanks for putting these patches together, I can't get a kernel to build
> with ARM_LPAE=y or ARM_LPAE=n that does not result in the following:
>
> AS arch/arm/kernel/entry-common.o
> arch/arm/kernel/entry-common.S: Assembler messages:
> arch/arm/kernel/entry-common.S:53: Error: invalid constant
> (ffffffffb6e00000) after fixup
> arch/arm/kernel/entry-common.S:118: Error: invalid constant
> (ffffffffb6e00000) after fixup
> scripts/Makefile.build:412: recipe for target
> 'arch/arm/kernel/entry-common.o' failed
> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
> Makefile:1019: recipe for target 'arch/arm/kernel' failed
> make[2]: *** [arch/arm/kernel] Error 2
> make[2]: *** Waiting for unfinished jobs....
>
> This is coming from the increase in TASK_SIZE it seems.
>
> This is on top of v4.14-rc4-84-gff5abbe799e2
Seems like we can use the following to get through that build failure:
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 99c908226065..0de1160d136e 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,13 @@ ret_fast_syscall:
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
+#ifdef CONFIG_KASAN
+ movw r1, #:lower16:TASK_SIZE
+ movt r1, #:upper16:TASK_SIZE
+ cmp r2, r1
+#else
cmp r2, #TASK_SIZE
+#endif
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -115,7 +121,13 @@ ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
ldr r2, [tsk, #TI_ADDR_LIMIT]
+#ifdef CONFIG_KASAN
+ movw r1, #:lower16:TASK_SIZE
+ movt r1, #:upper16:TASK_SIZE
+ cmp r2, r1
+#else
cmp r2, #TASK_SIZE
+#endif
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
but then we will see another set of build failures with the decompressor
code:
WARNING: modpost: Found 2 section mismatch(es).
To see full details build your kernel with:
'make CONFIG_DEBUG_SECTION_MISMATCH=y'
KSYM .tmp_kallsyms1.o
KSYM .tmp_kallsyms2.o
LD vmlinux
SORTEX vmlinux
SYSMAP System.map
OBJCOPY arch/arm/boot/Image
Kernel: arch/arm/boot/Image is ready
LDS arch/arm/boot/compressed/vmlinux.lds
AS arch/arm/boot/compressed/head.o
XZKERN arch/arm/boot/compressed/piggy_data
CC arch/arm/boot/compressed/misc.o
CC arch/arm/boot/compressed/decompress.o
CC arch/arm/boot/compressed/string.o
arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
#define memmove memmove
In file included from arch/arm/boot/compressed/decompress.c:7:0:
./arch/arm/include/asm/string.h:67:0: note: this is the location of the
previous definition
#define memmove(dst, src, len) __memmove(dst, src, len)
arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
#define memcpy memcpy
In file included from arch/arm/boot/compressed/decompress.c:7:0:
./arch/arm/include/asm/string.h:66:0: note: this is the location of the
previous definition
#define memcpy(dst, src, len) __memcpy(dst, src, len)
SHIPPED arch/arm/boot/compressed/hyp-stub.S
SHIPPED arch/arm/boot/compressed/fdt_rw.c
SHIPPED arch/arm/boot/compressed/fdt.h
SHIPPED arch/arm/boot/compressed/libfdt.h
SHIPPED arch/arm/boot/compressed/libfdt_internal.h
SHIPPED arch/arm/boot/compressed/fdt_ro.c
SHIPPED arch/arm/boot/compressed/fdt_wip.c
SHIPPED arch/arm/boot/compressed/fdt.c
CC arch/arm/boot/compressed/atags_to_fdt.o
SHIPPED arch/arm/boot/compressed/lib1funcs.S
SHIPPED arch/arm/boot/compressed/ashldi3.S
SHIPPED arch/arm/boot/compressed/bswapsdi2.S
AS arch/arm/boot/compressed/hyp-stub.o
CC arch/arm/boot/compressed/fdt_rw.o
CC arch/arm/boot/compressed/fdt_ro.o
CC arch/arm/boot/compressed/fdt_wip.o
CC arch/arm/boot/compressed/fdt.o
AS arch/arm/boot/compressed/lib1funcs.o
AS arch/arm/boot/compressed/ashldi3.o
AS arch/arm/boot/compressed/bswapsdi2.o
AS arch/arm/boot/compressed/piggy.o
LD arch/arm/boot/compressed/vmlinux
arch/arm/boot/compressed/decompress.o: In function `fill_temp':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_stream.c:162:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `bcj_flush':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:404:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:409:
undefined reference to `memmove'
arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:919:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `dict_flush':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:424:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `dict_uncompressed':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:390:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:400:
undefined reference to `memcpy'
arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:859:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:884:
undefined reference to `memmove'
arch/arm/boot/compressed/decompress.o: In function `xz_dec_bcj_run':
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:451:
undefined reference to `memcpy'
/home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:471:
undefined reference to `memcpy'
arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
/home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
undefined reference to `__memset'
arch/arm/boot/compressed/Makefile:182: recipe for target
'arch/arm/boot/compressed/vmlinux' failed
make[4]: *** [arch/arm/boot/compressed/vmlinux] Error 1
arch/arm/boot/Makefile:53: recipe for target
'arch/arm/boot/compressed/vmlinux' failed
make[3]: *** [arch/arm/boot/compressed/vmlinux] Error 2
>
> Thank you
>
>>
>> These patches are tested on vexpress-ca15, vexpress-ca9
>>
>> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
>> Tested-by: Abbott Liu <liuwenliang@huawei.com>
>> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
>>
>> Abbott Liu (6):
>> Define the virtual space of KASan's shadow region
>> change memory_is_poisoned_16 for aligned error
>> Add support arm LPAE
>> Don't need to map the shadow of KASan's shadow memory
>> Change mapping of kasan_zero_page into readonly
>> Add KASan layout
>>
>> Andrey Ryabinin (5):
>> Initialize the mapping of KASan shadow memory
>> replace memory function
>> arm: Kconfig: enable KASan
>> Disable kasan's instrumentation
>> Avoid cleaning the KASan shadow area's mapping table
>>
>> arch/arm/Kconfig | 1 +
>> arch/arm/boot/compressed/Makefile | 1 +
>> arch/arm/include/asm/kasan.h | 20 +++
>> arch/arm/include/asm/kasan_def.h | 51 +++++++
>> arch/arm/include/asm/memory.h | 5 +
>> arch/arm/include/asm/pgalloc.h | 5 +-
>> arch/arm/include/asm/pgtable.h | 1 +
>> arch/arm/include/asm/proc-fns.h | 33 +++++
>> arch/arm/include/asm/string.h | 18 ++-
>> arch/arm/include/asm/thread_info.h | 4 +
>> arch/arm/kernel/entry-armv.S | 7 +-
>> arch/arm/kernel/head-common.S | 4 +
>> arch/arm/kernel/setup.c | 2 +
>> arch/arm/kernel/unwind.c | 3 +-
>> arch/arm/lib/memcpy.S | 3 +
>> arch/arm/lib/memmove.S | 5 +-
>> arch/arm/lib/memset.S | 3 +
>> arch/arm/mm/Makefile | 5 +
>> arch/arm/mm/init.c | 6 +
>> arch/arm/mm/kasan_init.c | 265 +++++++++++++++++++++++++++++++++++++
>> arch/arm/mm/mmu.c | 7 +-
>> arch/arm/mm/pgd.c | 12 ++
>> arch/arm/vdso/Makefile | 2 +
>> mm/kasan/kasan.c | 22 ++-
>> 24 files changed, 478 insertions(+), 7 deletions(-)
>> create mode 100644 arch/arm/include/asm/kasan.h
>> create mode 100644 arch/arm/include/asm/kasan_def.h
>> create mode 100644 arch/arm/mm/kasan_init.c
>>
>
>
--
Florian
* Re: [PATCH 00/11] KASan for arm
2017-10-11 19:50 ` Florian Fainelli
@ 2017-10-11 21:36 ` Florian Fainelli
2017-10-11 22:10 ` Laura Abbott
2017-10-12 4:55 ` Liuwenliang (Lamb)
From: Florian Fainelli @ 2017-10-11 21:36 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
[-- Attachment #1: Type: text/plain, Size: 10400 bytes --]
On 10/11/2017 12:50 PM, Florian Fainelli wrote:
> On 10/11/2017 12:13 PM, Florian Fainelli wrote:
>> Hi Abbott,
>>
>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>>> Hi,all:
>>> These patches add arch specific code for kernel address sanitizer
>>> (see Documentation/kasan.txt).
>>>
>>> 1/8 of the kernel address space is reserved for shadow memory. There
>>> was no hole big enough for this, so the virtual addresses for the
>>> shadow were stolen from user space.
>>>
>>> At the early boot stage the whole shadow region is populated with just
>>> one physical page (kasan_zero_page). Later, this page is reused as a
>>> read-only zero shadow for some memory that KASan currently doesn't
>>> track (vmalloc).
>>>
>>> After mapping the physical memory, pages for shadow memory are
>>> allocated and mapped.
>>>
>>> KASan's stack instrumentation significantly increases stack
>>> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>>>
>>> Functions like memset/memmove/memcpy perform a lot of memory accesses.
>>> If a bad pointer is passed to one of these functions, it is important
>>> to catch it. The compiler's instrumentation cannot do this, since
>>> these functions are written in assembly.
>>>
>>> KASan replaces these memory functions with manually instrumented
>>> variants. The original functions are declared as weak symbols so that
>>> the strong definitions in mm/kasan/kasan.c can replace them. The
>>> original functions also have aliases with a '__' prefix, so the
>>> non-instrumented variants can still be called when needed.
>>>
>>> Some files are built without KASan instrumentation (e.g. mm/slub.c).
>>> For such files, the original mem* functions are replaced (via #define)
>>> with the prefixed variants to disable memory access checks.
>>>
>>> On the arm LPAE architecture, the mapping table of the KASan shadow
>>> memory (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's
>>> virtual space is 0xb6e00000~0xbf000000) can't be filled in the
>>> do_translation_fault function, because KASan instrumentation may cause
>>> do_translation_fault itself to access KASan shadow memory. Such an
>>> access from within do_translation_fault could fault again and recurse
>>> endlessly, so the mapping table of the KASan shadow memory has to be
>>> copied in the pgd_alloc function instead.
>>>
>>>
>>> Most of the code comes from:
>>> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>>
>> Thanks for putting these patches together, I can't get a kernel to build
>> with ARM_LPAE=y or ARM_LPAE=n that does not result in the following:
>>
>> AS arch/arm/kernel/entry-common.o
>> arch/arm/kernel/entry-common.S: Assembler messages:
>> arch/arm/kernel/entry-common.S:53: Error: invalid constant
>> (ffffffffb6e00000) after fixup
>> arch/arm/kernel/entry-common.S:118: Error: invalid constant
>> (ffffffffb6e00000) after fixup
>> scripts/Makefile.build:412: recipe for target
>> 'arch/arm/kernel/entry-common.o' failed
>> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
>> Makefile:1019: recipe for target 'arch/arm/kernel' failed
>> make[2]: *** [arch/arm/kernel] Error 2
>> make[2]: *** Waiting for unfinished jobs....
>>
>> This is coming from the increase in TASK_SIZE it seems.
>>
>> This is on top of v4.14-rc4-84-gff5abbe799e2
>
> Seems like we can use the following to get through that build failure:
>
> diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
> index 99c908226065..0de1160d136e 100644
> --- a/arch/arm/kernel/entry-common.S
> +++ b/arch/arm/kernel/entry-common.S
> @@ -50,7 +50,13 @@ ret_fast_syscall:
> UNWIND(.cantunwind )
> disable_irq_notrace @ disable interrupts
> ldr r2, [tsk, #TI_ADDR_LIMIT]
> +#ifdef CONFIG_KASAN
> + movw r1, #:lower16:TASK_SIZE
> + movt r1, #:upper16:TASK_SIZE
> + cmp r2, r1
> +#else
> cmp r2, #TASK_SIZE
> +#endif
> blne addr_limit_check_failed
> ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall
> tracing
> tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
> @@ -115,7 +121,13 @@ ret_slow_syscall:
> disable_irq_notrace @ disable interrupts
> ENTRY(ret_to_user_from_irq)
> ldr r2, [tsk, #TI_ADDR_LIMIT]
> +#ifdef CONFIG_KASAN
> + movw r1, #:lower16:TASK_SIZE
> + movt r1, #:upper16:TASK_SIZE
> + cmp r2, r1
> +#else
> cmp r2, #TASK_SIZE
> +#endif
> blne addr_limit_check_failed
> ldr r1, [tsk, #TI_FLAGS]
> tst r1, #_TIF_WORK_MASK
>
>
>
> but then we will see another set of build failures with the decompressor
> code:
>
> WARNING: modpost: Found 2 section mismatch(es).
> To see full details build your kernel with:
> 'make CONFIG_DEBUG_SECTION_MISMATCH=y'
> KSYM .tmp_kallsyms1.o
> KSYM .tmp_kallsyms2.o
> LD vmlinux
> SORTEX vmlinux
> SYSMAP System.map
> OBJCOPY arch/arm/boot/Image
> Kernel: arch/arm/boot/Image is ready
> LDS arch/arm/boot/compressed/vmlinux.lds
> AS arch/arm/boot/compressed/head.o
> XZKERN arch/arm/boot/compressed/piggy_data
> CC arch/arm/boot/compressed/misc.o
> CC arch/arm/boot/compressed/decompress.o
> CC arch/arm/boot/compressed/string.o
> arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
> #define memmove memmove
>
> In file included from arch/arm/boot/compressed/decompress.c:7:0:
> ./arch/arm/include/asm/string.h:67:0: note: this is the location of the
> previous definition
> #define memmove(dst, src, len) __memmove(dst, src, len)
>
> arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
> #define memcpy memcpy
>
> In file included from arch/arm/boot/compressed/decompress.c:7:0:
> ./arch/arm/include/asm/string.h:66:0: note: this is the location of the
> previous definition
> #define memcpy(dst, src, len) __memcpy(dst, src, len)
>
> SHIPPED arch/arm/boot/compressed/hyp-stub.S
> SHIPPED arch/arm/boot/compressed/fdt_rw.c
> SHIPPED arch/arm/boot/compressed/fdt.h
> SHIPPED arch/arm/boot/compressed/libfdt.h
> SHIPPED arch/arm/boot/compressed/libfdt_internal.h
> SHIPPED arch/arm/boot/compressed/fdt_ro.c
> SHIPPED arch/arm/boot/compressed/fdt_wip.c
> SHIPPED arch/arm/boot/compressed/fdt.c
> CC arch/arm/boot/compressed/atags_to_fdt.o
> SHIPPED arch/arm/boot/compressed/lib1funcs.S
> SHIPPED arch/arm/boot/compressed/ashldi3.S
> SHIPPED arch/arm/boot/compressed/bswapsdi2.S
> AS arch/arm/boot/compressed/hyp-stub.o
> CC arch/arm/boot/compressed/fdt_rw.o
> CC arch/arm/boot/compressed/fdt_ro.o
> CC arch/arm/boot/compressed/fdt_wip.o
> CC arch/arm/boot/compressed/fdt.o
> AS arch/arm/boot/compressed/lib1funcs.o
> AS arch/arm/boot/compressed/ashldi3.o
> AS arch/arm/boot/compressed/bswapsdi2.o
> AS arch/arm/boot/compressed/piggy.o
> LD arch/arm/boot/compressed/vmlinux
> arch/arm/boot/compressed/decompress.o: In function `fill_temp':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_stream.c:162:
> undefined reference to `memcpy'
> arch/arm/boot/compressed/decompress.o: In function `bcj_flush':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:404:
> undefined reference to `memcpy'
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:409:
> undefined reference to `memmove'
> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:919:
> undefined reference to `memcpy'
> arch/arm/boot/compressed/decompress.o: In function `dict_flush':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:424:
> undefined reference to `memcpy'
> arch/arm/boot/compressed/decompress.o: In function `dict_uncompressed':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:390:
> undefined reference to `memcpy'
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:400:
> undefined reference to `memcpy'
> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:859:
> undefined reference to `memcpy'
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:884:
> undefined reference to `memmove'
> arch/arm/boot/compressed/decompress.o: In function `xz_dec_bcj_run':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:451:
> undefined reference to `memcpy'
> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:471:
> undefined reference to `memcpy'
> arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
> /home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
> undefined reference to `__memset'
> arch/arm/boot/compressed/Makefile:182: recipe for target
> 'arch/arm/boot/compressed/vmlinux' failed
> make[4]: *** [arch/arm/boot/compressed/vmlinux] Error 1
> arch/arm/boot/Makefile:53: recipe for target
> 'arch/arm/boot/compressed/vmlinux' failed
> make[3]: *** [arch/arm/boot/compressed/vmlinux] Error 2
I ended up fixing the redefinition warnings/build failures this way, but
I am not 100% confident this is the right fix:
diff --git a/arch/arm/boot/compressed/decompress.c
b/arch/arm/boot/compressed/decompress.c
index f3a4bedd1afc..7d4a47752760 100644
--- a/arch/arm/boot/compressed/decompress.c
+++ b/arch/arm/boot/compressed/decompress.c
@@ -48,8 +48,10 @@ extern int memcmp(const void *cs, const void *ct,
size_t count);
#endif
#ifdef CONFIG_KERNEL_XZ
+#ifndef CONFIG_KASAN
#define memmove memmove
#define memcpy memcpy
+#endif
#include "../../../../lib/decompress_unxz.c"
#endif
I was not yet able to track down why __memset is not being resolved,
but since I don't need them, I disabled CONFIG_ATAGS and
CONFIG_ARM_ATAG_DTB_COMPAT, which allowed me to get a working build.
This brought me all the way to a prompt; please find attached the
results of insmod test_kasan.ko for CONFIG_ARM_LPAE=y and
CONFIG_ARM_LPAE=n. Your patches actually spotted a genuine
use-after-free in one of our drivers (spi-bcm-qspi), so with this:
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Great job, thanks!
--
Florian
[-- Attachment #2: no-lpae.log --]
[-- Type: text/x-log; name="no-lpae.log", Size: 80280 bytes --]
# insmod test_kasan.ko
[ 90.732418] kasan test: kmalloc_oob_right out-of-bounds to right
[ 90.739598] ==================================================================
[ 90.747735] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0x54/0x6c [test_kasan]
[ 90.756194] Write of size 1 at addr cb32df7b by task insmod/1456
[ 90.762532]
[ 90.764350] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 90.774742] Hardware name: Broadcom STB (Flattened Device Tree)
[ 90.781235] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 90.789608] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 90.797493] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 90.806809] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 90.816763] [<c02a7ab8>] (kasan_report) from [<bf0041bc>] (kmalloc_oob_right+0x54/0x6c [test_kasan])
[ 90.827327] [<bf0041bc>] (kmalloc_oob_right [test_kasan]) from [<bf004da0>] (kmalloc_tests_init+0x10/0x270 [test_kasan])
[ 90.839327] [<bf004da0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 90.849645] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 90.858458] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 90.867177] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 90.875827] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 90.884407]
[ 90.886124] Allocated by task 1456:
[ 90.890022] kmem_cache_alloc_trace+0xb4/0x170
[ 90.895194] kmalloc_oob_right+0x30/0x6c [test_kasan]
[ 90.901002] kmalloc_tests_init+0x10/0x270 [test_kasan]
[ 90.906625] do_one_initcall+0x60/0x1b0
[ 90.910831] do_init_module+0xd4/0x2cc
[ 90.914949] load_module+0x3110/0x3af0
[ 90.919071] SyS_init_module+0x19c/0x1d4
[ 90.923385] ret_fast_syscall+0x0/0x50
[ 90.927396]
[ 90.929103] Freed by task 0:
[ 90.932240] (stack is not available)
[ 90.936080]
[ 90.937846] The buggy address belongs to the object at cb32df00
[ 90.937846] which belongs to the cache kmalloc-128 of size 128
[ 90.950387] The buggy address is located 123 bytes inside of
[ 90.950387] 128-byte region [cb32df00, cb32df80)
[ 90.961330] The buggy address belongs to the page:
[ 90.966480] page:ee95e5a0 count:1 mapcount:0 mapping:cb32d000 index:0x0
[ 90.973499] flags: 0x100(slab)
[ 90.977019] raw: 00000100 cb32d000 00000000 00000015 00000001 ee837f34 ee965014 d00000c0
[ 90.985610] page dumped because: kasan: bad access detected
[ 90.991497]
[ 90.993201] Memory state around the buggy address:
[ 90.998387] cb32de00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.005363] cb32de80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.012342] >cb32df00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03
[ 91.019248] ^
[ 91.026142] cb32df80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.033126] cb32e000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 91.040032] ==================================================================
[ 91.048462] kasan test: kmalloc_oob_left out-of-bounds to left
[ 91.055542] ==================================================================
[ 91.063691] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_left+0x54/0x74 [test_kasan]
[ 91.072056] Read of size 1 at addr cb32c3ff by task insmod/1456
[ 91.078302]
[ 91.080116] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 91.090505] Hardware name: Broadcom STB (Flattened Device Tree)
[ 91.097004] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 91.105390] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 91.113278] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 91.122595] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 91.132521] [<c02a7ab8>] (kasan_report) from [<bf004228>] (kmalloc_oob_left+0x54/0x74 [test_kasan])
[ 91.143025] [<bf004228>] (kmalloc_oob_left [test_kasan]) from [<bf004da4>] (kmalloc_tests_init+0x14/0x270 [test_kasan])
[ 91.154958] [<bf004da4>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 91.165284] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 91.174106] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 91.182824] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 91.191495] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 91.200072]
[ 91.201782] Allocated by task 0:
[ 91.205273] (stack is not available)
[ 91.209111]
[ 91.210818] Freed by task 0:
[ 91.213965] (stack is not available)
[ 91.217804]
[ 91.219577] The buggy address belongs to the object at cb32c380
[ 91.219577] which belongs to the cache kmalloc-64 of size 64
[ 91.231940] The buggy address is located 63 bytes to the right of
[ 91.231940] 64-byte region [cb32c380, cb32c3c0)
[ 91.243258] The buggy address belongs to the page:
[ 91.248411] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 91.255439] flags: 0x100(slab)
[ 91.258968] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 91.267561] page dumped because: kasan: bad access detected
[ 91.273450]
[ 91.275152] Memory state around the buggy address:
[ 91.280338] cb32c280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.287320] cb32c300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.294302] >cb32c380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.301207] ^
[ 91.308101] cb32c400: 00 07 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.315083] cb32c480: 00 04 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.321995] ==================================================================
[ 91.330451] kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
[ 91.339664] ==================================================================
[ 91.347813] BUG: KASAN: slab-out-of-bounds in kmalloc_node_oob_right+0x58/0x70 [test_kasan]
[ 91.356716] Write of size 1 at addr cb38d200 by task insmod/1456
[ 91.363060]
[ 91.364877] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 91.375280] Hardware name: Broadcom STB (Flattened Device Tree)
[ 91.381764] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 91.390148] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 91.398040] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 91.407367] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 91.417314] [<c02a7ab8>] (kasan_report) from [<bf0042a0>] (kmalloc_node_oob_right+0x58/0x70 [test_kasan])
[ 91.428358] [<bf0042a0>] (kmalloc_node_oob_right [test_kasan]) from [<bf004da8>] (kmalloc_tests_init+0x18/0x270 [test_kasan])
[ 91.440820] [<bf004da8>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 91.451152] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 91.459969] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 91.468684] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 91.477343] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 91.485918]
[ 91.487638] Allocated by task 1456:
[ 91.491537] kmem_cache_alloc_trace+0xb4/0x170
[ 91.496720] kmalloc_node_oob_right+0x30/0x70 [test_kasan]
[ 91.502987] kmalloc_tests_init+0x18/0x270 [test_kasan]
[ 91.508614] do_one_initcall+0x60/0x1b0
[ 91.512828] do_init_module+0xd4/0x2cc
[ 91.516964] load_module+0x3110/0x3af0
[ 91.521097] SyS_init_module+0x19c/0x1d4
[ 91.525425] ret_fast_syscall+0x0/0x50
[ 91.529435]
[ 91.531141] Freed by task 0:
[ 91.534268] (stack is not available)
[ 91.538103]
[ 91.539868] The buggy address belongs to the object at cb38c200
[ 91.539868] which belongs to the cache kmalloc-4096 of size 4096
[ 91.552587] The buggy address is located 0 bytes to the right of
[ 91.552587] 4096-byte region [cb38c200, cb38d200)
[ 91.563981] The buggy address belongs to the page:
[ 91.569141] page:ee95f180 count:1 mapcount:0 mapping:cb38c200 index:0x0 compound_mapcount: 0
[ 91.578155] flags: 0x8100(slab|head)
[ 91.582207] raw: 00008100 cb38c200 00000000 00000001 00000001 ee95f094 d000140c d0000540
[ 91.590792] page dumped because: kasan: bad access detected
[ 91.596678]
[ 91.598373] Memory state around the buggy address:
[ 91.603551] cb38d100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 91.610518] cb38d180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 91.617485] >cb38d200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.624360] ^
[ 91.627217] cb38d280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.634196] cb38d300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.641103] ==================================================================
[ 91.649357] kasan test: kmalloc_large_oob_right kmalloc large allocation: out-of-bounds to right
[ 91.686569] ==================================================================
[ 91.694713] BUG: KASAN: slab-out-of-bounds in kmalloc_large_oob_right+0x60/0x78 [test_kasan]
[ 91.703685] Write of size 1 at addr cabfff00 by task insmod/1456
[ 91.710024]
[ 91.711823] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 91.722227] Hardware name: Broadcom STB (Flattened Device Tree)
[ 91.728695] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 91.737073] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 91.744957] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 91.754277] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 91.764205] [<c02a7ab8>] (kasan_report) from [<bf004318>] (kmalloc_large_oob_right+0x60/0x78 [test_kasan])
[ 91.775315] [<bf004318>] (kmalloc_large_oob_right [test_kasan]) from [<bf004dac>] (kmalloc_tests_init+0x1c/0x270 [test_kasan])
[ 91.787851] [<bf004dac>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 91.798174] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 91.806980] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 91.815681] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 91.824328] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 91.832894]
[ 91.834662] The buggy address belongs to the object at ca800000
[ 91.834662] which belongs to the cache kmalloc-4194304 of size 4194304
[ 91.847908] The buggy address is located 4194048 bytes inside of
[ 91.847908] 4194304-byte region [ca800000, cac00000)
[ 91.859557] The buggy address belongs to the page:
[ 91.864697] page:ee948000 count:1 mapcount:0 mapping:ca800000 index:0x0 compound_mapcount: 0
[ 91.873697] flags: 0x8100(slab|head)
[ 91.877735] raw: 00008100 ca800000 00000000 00000001 00000001 d000190c d000190c d0000cc0
[ 91.886325] page dumped because: kasan: bad access detected
[ 91.892207]
[ 91.893912] Memory state around the buggy address:
[ 91.899108] cabffe00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 91.906084] cabffe80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 91.913063] >cabfff00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.919949] ^
[ 91.922804] cabfff80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.929778] cac00000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 91.936676] ==================================================================
[ 91.950255] kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
[ 91.959414] ==================================================================
[ 91.967560] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_krealloc_more+0x78/0x90 [test_kasan]
[ 91.976714] Write of size 1 at addr cb32c393 by task insmod/1456
[ 91.983052]
[ 91.984852] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 91.995253] Hardware name: Broadcom STB (Flattened Device Tree)
[ 92.001723] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 92.010095] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 92.017977] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 92.027295] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 92.037226] [<c02a7ab8>] (kasan_report) from [<bf004558>] (kmalloc_oob_krealloc_more+0x78/0x90 [test_kasan])
[ 92.048509] [<bf004558>] (kmalloc_oob_krealloc_more [test_kasan]) from [<bf004db0>] (kmalloc_tests_init+0x20/0x270 [test_kasan])
[ 92.061216] [<bf004db0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 92.071531] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 92.080337] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 92.089050] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 92.097685] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 92.106254]
[ 92.107973] Allocated by task 1456:
[ 92.111809] krealloc+0x44/0xc8
[ 92.115649] kmalloc_oob_krealloc_more+0x44/0x90 [test_kasan]
[ 92.122170] kmalloc_tests_init+0x20/0x270 [test_kasan]
[ 92.127788] do_one_initcall+0x60/0x1b0
[ 92.132007] do_init_module+0xd4/0x2cc
[ 92.136129] load_module+0x3110/0x3af0
[ 92.140246] SyS_init_module+0x19c/0x1d4
[ 92.144551] ret_fast_syscall+0x0/0x50
[ 92.148554]
[ 92.150253] Freed by task 0:
[ 92.153373] (stack is not available)
[ 92.157198]
[ 92.158965] The buggy address belongs to the object at cb32c380
[ 92.158965] which belongs to the cache kmalloc-64 of size 64
[ 92.171311] The buggy address is located 19 bytes inside of
[ 92.171311] 64-byte region [cb32c380, cb32c3c0)
[ 92.182073] The buggy address belongs to the page:
[ 92.187218] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 92.194233] flags: 0x100(slab)
[ 92.197736] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 92.206328] page dumped because: kasan: bad access detected
[ 92.212210]
[ 92.213917] Memory state around the buggy address:
[ 92.219113] cb32c280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.226092] cb32c300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.233071] >cb32c380: 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.239961] ^
[ 92.243351] cb32c400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 92.250319] cb32c480: 00 04 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.257218] ==================================================================
[ 92.265303] kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
[ 92.274463] ==================================================================
[ 92.282607] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_krealloc_less+0x78/0x90 [test_kasan]
[ 92.291759] Write of size 1 at addr cb32c30f by task insmod/1456
[ 92.298099]
[ 92.299905] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 92.310306] Hardware name: Broadcom STB (Flattened Device Tree)
[ 92.316774] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 92.325148] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 92.333030] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 92.342351] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 92.352280] [<c02a7ab8>] (kasan_report) from [<bf0045e8>] (kmalloc_oob_krealloc_less+0x78/0x90 [test_kasan])
[ 92.363564] [<bf0045e8>] (kmalloc_oob_krealloc_less [test_kasan]) from [<bf004db4>] (kmalloc_tests_init+0x24/0x270 [test_kasan])
[ 92.376275] [<bf004db4>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 92.386583] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 92.395387] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 92.404104] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 92.412742] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 92.421308]
[ 92.423024] Allocated by task 1456:
[ 92.426863] krealloc+0x44/0xc8
[ 92.430706] kmalloc_oob_krealloc_less+0x44/0x90 [test_kasan]
[ 92.437229] kmalloc_tests_init+0x24/0x270 [test_kasan]
[ 92.442848] do_one_initcall+0x60/0x1b0
[ 92.447072] do_init_module+0xd4/0x2cc
[ 92.451189] load_module+0x3110/0x3af0
[ 92.455303] SyS_init_module+0x19c/0x1d4
[ 92.459609] ret_fast_syscall+0x0/0x50
[ 92.463612]
[ 92.465311] Freed by task 0:
[ 92.468431] (stack is not available)
[ 92.472256]
[ 92.474025] The buggy address belongs to the object at cb32c300
[ 92.474025] which belongs to the cache kmalloc-64 of size 64
[ 92.486371] The buggy address is located 15 bytes inside of
[ 92.486371] 64-byte region [cb32c300, cb32c340)
[ 92.497131] The buggy address belongs to the page:
[ 92.502272] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 92.509280] flags: 0x100(slab)
[ 92.512782] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 92.521376] page dumped because: kasan: bad access detected
[ 92.527257]
[ 92.528968] Memory state around the buggy address:
[ 92.534159] cb32c200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.541139] cb32c280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.548118] >cb32c300: 00 07 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.555005] ^
[ 92.558136] cb32c380: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 92.565114] cb32c400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 92.572017] ==================================================================
[ 92.580279] kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
[ 92.589445] ==================================================================
[ 92.597580] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_16+0x78/0xa4 [test_kasan]
[ 92.605751] Write of size 16 at addr cb32c280 by task insmod/1456
[ 92.612175]
[ 92.613992] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 92.624380] Hardware name: Broadcom STB (Flattened Device Tree)
[ 92.630852] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 92.639233] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 92.647117] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 92.656435] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 92.666355] [<c02a7ab8>] (kasan_report) from [<bf0043a8>] (kmalloc_oob_16+0x78/0xa4 [test_kasan])
[ 92.676644] [<bf0043a8>] (kmalloc_oob_16 [test_kasan]) from [<bf004db8>] (kmalloc_tests_init+0x28/0x270 [test_kasan])
[ 92.688369] [<bf004db8>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 92.698671] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 92.707478] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 92.716194] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 92.724832] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 92.733398]
[ 92.735106] Allocated by task 1456:
[ 92.739006] kmem_cache_alloc_trace+0xb4/0x170
[ 92.744178] kmalloc_oob_16+0x30/0xa4 [test_kasan]
[ 92.749706] kmalloc_tests_init+0x28/0x270 [test_kasan]
[ 92.755323] do_one_initcall+0x60/0x1b0
[ 92.759523] do_init_module+0xd4/0x2cc
[ 92.763632] load_module+0x3110/0x3af0
[ 92.767746] SyS_init_module+0x19c/0x1d4
[ 92.772066] ret_fast_syscall+0x0/0x50
[ 92.776078]
[ 92.777778] Freed by task 0:
[ 92.780912] (stack is not available)
[ 92.784744]
[ 92.786496] The buggy address belongs to the object at cb32c280
[ 92.786496] which belongs to the cache kmalloc-64 of size 64
[ 92.798829] The buggy address is located 0 bytes inside of
[ 92.798829] 64-byte region [cb32c280, cb32c2c0)
[ 92.809505] The buggy address belongs to the page:
[ 92.814646] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 92.821657] flags: 0x100(slab)
[ 92.825173] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 92.833758] page dumped because: kasan: bad access detected
[ 92.839637]
[ 92.841334] Memory state around the buggy address:
[ 92.846511] cb32c180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.853479] cb32c200: 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.860447] >cb32c280: 00 05 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 92.867322] ^
[ 92.870447] cb32c300: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 92.877413] cb32c380: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 92.884307] ==================================================================
[ 92.892598] kasan test: kmalloc_oob_in_memset out-of-bounds in memset
[ 92.900248] ==================================================================
[ 92.908420] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_in_memset+0x58/0x68 [test_kasan]
[ 92.917228] Write of size 671 at addr cad89b40 by task insmod/1456
[ 92.923733]
[ 92.925532] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 92.935922] Hardware name: Broadcom STB (Flattened Device Tree)
[ 92.942404] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 92.950765] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 92.958639] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 92.967958] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 92.977571] [<c02a7ab8>] (kasan_report) from [<c02a6b5c>] (memset+0x20/0x34)
[ 92.985592] [<c02a6b5c>] (memset) from [<bf004658>] (kmalloc_oob_in_memset+0x58/0x68 [test_kasan])
[ 92.995990] [<bf004658>] (kmalloc_oob_in_memset [test_kasan]) from [<bf004dbc>] (kmalloc_tests_init+0x2c/0x270 [test_kasan])
[ 93.008345] [<bf004dbc>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 93.018648] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 93.027455] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 93.036169] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 93.044805] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 93.053371]
[ 93.055081] Allocated by task 1456:
[ 93.058980] kmem_cache_alloc_trace+0xb4/0x170
[ 93.064158] kmalloc_oob_in_memset+0x30/0x68 [test_kasan]
[ 93.070325] kmalloc_tests_init+0x2c/0x270 [test_kasan]
[ 93.075957] do_one_initcall+0x60/0x1b0
[ 93.080169] do_init_module+0xd4/0x2cc
[ 93.084277] load_module+0x3110/0x3af0
[ 93.088391] SyS_init_module+0x19c/0x1d4
[ 93.092697] ret_fast_syscall+0x0/0x50
[ 93.096701]
[ 93.098398] Freed by task 0:
[ 93.101517] (stack is not available)
[ 93.105339]
[ 93.107104] The buggy address belongs to the object at cad89b40
[ 93.107104] which belongs to the cache kmalloc-1024 of size 1024
[ 93.119796] The buggy address is located 0 bytes inside of
[ 93.119796] 1024-byte region [cad89b40, cad89f40)
[ 93.130644] The buggy address belongs to the page:
[ 93.135786] page:ee953100 count:1 mapcount:0 mapping:cad88040 index:0x0 compound_mapcount: 0
[ 93.144802] flags: 0x8100(slab|head)
[ 93.148850] raw: 00008100 cad88040 00000000 00000007 00000001 ee9596d4 d000130c d00003c0
[ 93.157444] page dumped because: kasan: bad access detected
[ 93.163324]
[ 93.165029] Memory state around the buggy address:
[ 93.170218] cad89c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 93.177197] cad89d00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 93.184180] >cad89d80: 00 00 00 00 00 00 00 00 00 00 00 02 fc fc fc fc
[ 93.191080] ^
[ 93.196890] cad89e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.203868] cad89e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.210773] ==================================================================
[ 93.218837] kasan test: kmalloc_oob_memset_2 out-of-bounds in memset2
[ 93.226573] ==================================================================
[ 93.234711] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_2+0x5c/0x6c [test_kasan]
[ 93.243416] Write of size 2 at addr cb32c187 by task insmod/1456
[ 93.249743]
[ 93.251541] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 93.261933] Hardware name: Broadcom STB (Flattened Device Tree)
[ 93.268413] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 93.276773] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 93.284645] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 93.293964] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 93.303573] [<c02a7ab8>] (kasan_report) from [<c02a6b5c>] (memset+0x20/0x34)
[ 93.311591] [<c02a6b5c>] (memset) from [<bf0046c4>] (kmalloc_oob_memset_2+0x5c/0x6c [test_kasan])
[ 93.321894] [<bf0046c4>] (kmalloc_oob_memset_2 [test_kasan]) from [<bf004dc0>] (kmalloc_tests_init+0x30/0x270 [test_kasan])
[ 93.334164] [<bf004dc0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 93.344478] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 93.353283] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 93.361998] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 93.370635] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 93.379203]
[ 93.380918] Allocated by task 1456:
[ 93.384808] kmem_cache_alloc_trace+0xb4/0x170
[ 93.389993] kmalloc_oob_memset_2+0x30/0x6c [test_kasan]
[ 93.396068] kmalloc_tests_init+0x30/0x270 [test_kasan]
[ 93.401684] do_one_initcall+0x60/0x1b0
[ 93.405891] do_init_module+0xd4/0x2cc
[ 93.410019] load_module+0x3110/0x3af0
[ 93.414145] SyS_init_module+0x19c/0x1d4
[ 93.418452] ret_fast_syscall+0x0/0x50
[ 93.422456]
[ 93.424153] Freed by task 0:
[ 93.427271] (stack is not available)
[ 93.431102]
[ 93.432855] The buggy address belongs to the object at cb32c180
[ 93.432855] which belongs to the cache kmalloc-64 of size 64
[ 93.445210] The buggy address is located 7 bytes inside of
[ 93.445210] 64-byte region [cb32c180, cb32c1c0)
[ 93.455875] The buggy address belongs to the page:
[ 93.461038] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 93.468058] flags: 0x100(slab)
[ 93.471561] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 93.480154] page dumped because: kasan: bad access detected
[ 93.486049]
[ 93.487745] Memory state around the buggy address:
[ 93.492938] cb32c080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.499919] cb32c100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.506902] >cb32c180: 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.513786] ^
[ 93.516926] cb32c200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 93.523907] cb32c280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 93.530807] ==================================================================
[ 93.539046] kasan test: kmalloc_oob_memset_4 out-of-bounds in memset4
[ 93.546514] ==================================================================
[ 93.554656] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_4+0x5c/0x6c [test_kasan]
[ 93.563367] Write of size 4 at addr cb32c105 by task insmod/1456
[ 93.569692]
[ 93.571492] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 93.581880] Hardware name: Broadcom STB (Flattened Device Tree)
[ 93.588371] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 93.596730] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 93.604601] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 93.613918] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 93.623533] [<c02a7ab8>] (kasan_report) from [<c02a6b5c>] (memset+0x20/0x34)
[ 93.631557] [<c02a6b5c>] (memset) from [<bf004730>] (kmalloc_oob_memset_4+0x5c/0x6c [test_kasan])
[ 93.641857] [<bf004730>] (kmalloc_oob_memset_4 [test_kasan]) from [<bf004dc4>] (kmalloc_tests_init+0x34/0x270 [test_kasan])
[ 93.654131] [<bf004dc4>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 93.664446] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 93.673247] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 93.681962] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 93.690601] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 93.699172]
[ 93.700887] Allocated by task 1456:
[ 93.704782] kmem_cache_alloc_trace+0xb4/0x170
[ 93.709967] kmalloc_oob_memset_4+0x30/0x6c [test_kasan]
[ 93.716042] kmalloc_tests_init+0x34/0x270 [test_kasan]
[ 93.721657] do_one_initcall+0x60/0x1b0
[ 93.725862] do_init_module+0xd4/0x2cc
[ 93.729995] load_module+0x3110/0x3af0
[ 93.734121] SyS_init_module+0x19c/0x1d4
[ 93.738427] ret_fast_syscall+0x0/0x50
[ 93.742431]
[ 93.744130] Freed by task 0:
[ 93.747249] (stack is not available)
[ 93.751084]
[ 93.752837] The buggy address belongs to the object at cb32c100
[ 93.752837] which belongs to the cache kmalloc-64 of size 64
[ 93.765193] The buggy address is located 5 bytes inside of
[ 93.765193] 64-byte region [cb32c100, cb32c140)
[ 93.775856] The buggy address belongs to the page:
[ 93.781022] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 93.788043] flags: 0x100(slab)
[ 93.791546] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 93.800140] page dumped because: kasan: bad access detected
[ 93.806031]
[ 93.807727] Memory state around the buggy address:
[ 93.812915] cb32c000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.819896] cb32c080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.826880] >cb32c100: 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 93.833768] ^
[ 93.836900] cb32c180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 93.843883] cb32c200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 93.850787] ==================================================================
[ 93.858849] kasan test: kmalloc_oob_memset_8 out-of-bounds in memset8
[ 93.866585] ==================================================================
[ 93.874723] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_8+0x5c/0x6c [test_kasan]
[ 93.883428] Write of size 8 at addr cb32c081 by task insmod/1456
[ 93.889754]
[ 93.891554] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 93.901950] Hardware name: Broadcom STB (Flattened Device Tree)
[ 93.908424] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 93.916784] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 93.924657] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 93.933976] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 93.943582] [<c02a7ab8>] (kasan_report) from [<c02a6b5c>] (memset+0x20/0x34)
[ 93.951602] [<c02a6b5c>] (memset) from [<bf00479c>] (kmalloc_oob_memset_8+0x5c/0x6c [test_kasan])
[ 93.961907] [<bf00479c>] (kmalloc_oob_memset_8 [test_kasan]) from [<bf004dc8>] (kmalloc_tests_init+0x38/0x270 [test_kasan])
[ 93.974177] [<bf004dc8>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 93.984490] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 93.993293] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 94.002010] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 94.010643] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 94.019213]
[ 94.020928] Allocated by task 1456:
[ 94.024816] kmem_cache_alloc_trace+0xb4/0x170
[ 94.030005] kmalloc_oob_memset_8+0x30/0x6c [test_kasan]
[ 94.036080] kmalloc_tests_init+0x38/0x270 [test_kasan]
[ 94.041696] do_one_initcall+0x60/0x1b0
[ 94.045906] do_init_module+0xd4/0x2cc
[ 94.050036] load_module+0x3110/0x3af0
[ 94.054161] SyS_init_module+0x19c/0x1d4
[ 94.058467] ret_fast_syscall+0x0/0x50
[ 94.062470]
[ 94.064166] Freed by task 0:
[ 94.067285] (stack is not available)
[ 94.071114]
[ 94.072869] The buggy address belongs to the object at cb32c080
[ 94.072869] which belongs to the cache kmalloc-64 of size 64
[ 94.085222] The buggy address is located 1 bytes inside of
[ 94.085222] 64-byte region [cb32c080, cb32c0c0)
[ 94.095889] The buggy address belongs to the page:
[ 94.101050] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 94.108074] flags: 0x100(slab)
[ 94.111577] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 94.120172] page dumped because: kasan: bad access detected
[ 94.126067]
[ 94.127761] Memory state around the buggy address:
[ 94.132954] cb32bf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 94.139935] cb32c000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 94.146916] >cb32c080: 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 94.153798] ^
[ 94.156938] cb32c100: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 94.163918] cb32c180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 94.170817] ==================================================================
[ 94.179061] kasan test: kmalloc_oob_memset_16 out-of-bounds in memset16
[ 94.186673] ==================================================================
[ 94.194807] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_16+0x5c/0x6c [test_kasan]
[ 94.203608] Write of size 16 at addr cb32c001 by task insmod/1456
[ 94.210036]
[ 94.211836] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 94.222240] Hardware name: Broadcom STB (Flattened Device Tree)
[ 94.228707] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 94.237084] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 94.244968] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 94.254286] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 94.263895] [<c02a7ab8>] (kasan_report) from [<c02a6b5c>] (memset+0x20/0x34)
[ 94.271928] [<c02a6b5c>] (memset) from [<bf004808>] (kmalloc_oob_memset_16+0x5c/0x6c [test_kasan])
[ 94.282322] [<bf004808>] (kmalloc_oob_memset_16 [test_kasan]) from [<bf004dcc>] (kmalloc_tests_init+0x3c/0x270 [test_kasan])
[ 94.294672] [<bf004dcc>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 94.304988] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 94.313780] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 94.322498] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 94.331148] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 94.339705]
[ 94.341409] Allocated by task 1456:
[ 94.345293] kmem_cache_alloc_trace+0xb4/0x170
[ 94.350477] kmalloc_oob_memset_16+0x30/0x6c [test_kasan]
[ 94.356633] kmalloc_tests_init+0x3c/0x270 [test_kasan]
[ 94.362255] do_one_initcall+0x60/0x1b0
[ 94.366456] do_init_module+0xd4/0x2cc
[ 94.370563] load_module+0x3110/0x3af0
[ 94.374679] SyS_init_module+0x19c/0x1d4
[ 94.379000] ret_fast_syscall+0x0/0x50
[ 94.383015]
[ 94.384715] Freed by task 0:
[ 94.387837] (stack is not available)
[ 94.391668]
[ 94.393418] The buggy address belongs to the object at cb32c000
[ 94.393418] which belongs to the cache kmalloc-64 of size 64
[ 94.405751] The buggy address is located 1 bytes inside of
[ 94.405751] 64-byte region [cb32c000, cb32c040)
[ 94.416414] The buggy address belongs to the page:
[ 94.421557] page:ee95e580 count:1 mapcount:0 mapping:cb32c000 index:0x0
[ 94.428567] flags: 0x100(slab)
[ 94.432083] raw: 00000100 cb32c000 00000000 00000020 00000001 ee81ea94 ee962934 d0000000
[ 94.440668] page dumped because: kasan: bad access detected
[ 94.446547]
[ 94.448242] Memory state around the buggy address:
[ 94.453420] cb32bf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 94.460386] cb32bf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 94.467353] >cb32c000: 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 94.474234] ^
[ 94.477624] cb32c080: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 94.484590] cb32c100: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 94.491485] ==================================================================
[ 94.499541] kasan test: kmalloc_uaf use-after-free
[ 94.505668] ==================================================================
[ 94.513786] BUG: KASAN: use-after-free in kmalloc_uaf+0x58/0x68 [test_kasan]
[ 94.521264] Write of size 1 at addr cb681f88 by task insmod/1456
[ 94.527589]
[ 94.529387] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 94.539768] Hardware name: Broadcom STB (Flattened Device Tree)
[ 94.546253] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 94.554614] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 94.562491] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 94.571796] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 94.581720] [<c02a7ab8>] (kasan_report) from [<bf00442c>] (kmalloc_uaf+0x58/0x68 [test_kasan])
[ 94.591738] [<bf00442c>] (kmalloc_uaf [test_kasan]) from [<bf004dd0>] (kmalloc_tests_init+0x40/0x270 [test_kasan])
[ 94.603200] [<bf004dd0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 94.613514] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 94.622318] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 94.631031] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 94.639669] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 94.648238]
[ 94.649957] Allocated by task 1456:
[ 94.653847] kmem_cache_alloc_trace+0xb4/0x170
[ 94.659028] kmalloc_uaf+0x30/0x68 [test_kasan]
[ 94.664303] kmalloc_tests_init+0x40/0x270 [test_kasan]
[ 94.669928] do_one_initcall+0x60/0x1b0
[ 94.674144] do_init_module+0xd4/0x2cc
[ 94.678255] load_module+0x3110/0x3af0
[ 94.682370] SyS_init_module+0x19c/0x1d4
[ 94.686677] ret_fast_syscall+0x0/0x50
[ 94.690679]
[ 94.692383] Freed by task 1456:
[ 94.695888] kfree+0x64/0x100
[ 94.699541] kmalloc_uaf+0x50/0x68 [test_kasan]
[ 94.704802] kmalloc_tests_init+0x40/0x270 [test_kasan]
[ 94.710425] do_one_initcall+0x60/0x1b0
[ 94.714626] do_init_module+0xd4/0x2cc
[ 94.718734] load_module+0x3110/0x3af0
[ 94.722850] SyS_init_module+0x19c/0x1d4
[ 94.727177] ret_fast_syscall+0x0/0x50
[ 94.731181]
[ 94.732949] The buggy address belongs to the object at cb681f80
[ 94.732949] which belongs to the cache kmalloc-64 of size 64
[ 94.745294] The buggy address is located 8 bytes inside of
[ 94.745294] 64-byte region [cb681f80, cb681fc0)
[ 94.755966] The buggy address belongs to the page:
[ 94.761122] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 94.768145] flags: 0x100(slab)
[ 94.771647] raw: 00000100 cb681000 00000000 00000020 00000001 ee962934 d000108c d0000000
[ 94.780245] page dumped because: kasan: bad access detected
[ 94.786135]
[ 94.787832] Memory state around the buggy address:
[ 94.793035] cb681e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 94.800014] cb681f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 94.806997] >cb681f80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 94.813881] ^
[ 94.817028] cb682000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 94.824009] cb682080: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[ 94.830913] ==================================================================
[ 94.838770] kasan test: kmalloc_uaf_memset use-after-free in memset
[ 94.846416] ==================================================================
[ 94.854558] BUG: KASAN: use-after-free in kmalloc_tests_init+0x44/0x270 [test_kasan]
[ 94.862819] Write of size 33 at addr cb681f00 by task insmod/1456
[ 94.869245]
[ 94.871058] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 94.881438] Hardware name: Broadcom STB (Flattened Device Tree)
[ 94.887914] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 94.896292] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 94.904173] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 94.913492] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 94.923111] [<c02a7ab8>] (kasan_report) from [<c02a6b5c>] (memset+0x20/0x34)
[ 94.931134] [<c02a6b5c>] (memset) from [<bf004dd4>] (kmalloc_tests_init+0x44/0x270 [test_kasan])
[ 94.940986] [<bf004dd4>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 94.951300] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 94.960109] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 94.968810] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 94.977464] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 94.986029]
[ 94.987733] Allocated by task 1456:
[ 94.991619] kmem_cache_alloc_trace+0xb4/0x170
[ 94.996786] kmalloc_uaf_memset+0x30/0x68 [test_kasan]
[ 95.002677] kmalloc_tests_init+0x44/0x270 [test_kasan]
[ 95.008292] do_one_initcall+0x60/0x1b0
[ 95.012491] do_init_module+0xd4/0x2cc
[ 95.016599] load_module+0x3110/0x3af0
[ 95.020712] SyS_init_module+0x19c/0x1d4
[ 95.025029] ret_fast_syscall+0x0/0x50
[ 95.029043]
[ 95.030746] Freed by task 1456:
[ 95.034246] kfree+0x64/0x100
[ 95.037900] kmalloc_uaf_memset+0x50/0x68 [test_kasan]
[ 95.043794] kmalloc_tests_init+0x44/0x270 [test_kasan]
[ 95.049416] do_one_initcall+0x60/0x1b0
[ 95.053614] do_init_module+0xd4/0x2cc
[ 95.057722] load_module+0x3110/0x3af0
[ 95.061837] SyS_init_module+0x19c/0x1d4
[ 95.066168] ret_fast_syscall+0x0/0x50
[ 95.070172]
[ 95.071940] The buggy address belongs to the object at cb681f00
[ 95.071940] which belongs to the cache kmalloc-64 of size 64
[ 95.084288] The buggy address is located 0 bytes inside of
[ 95.084288] 64-byte region [cb681f00, cb681f40)
[ 95.094960] The buggy address belongs to the page:
[ 95.100113] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 95.107135] flags: 0x100(slab)
[ 95.110640] raw: 00000100 cb681000 00000000 00000020 00000001 ee962934 d000108c d0000000
[ 95.119236] page dumped because: kasan: bad access detected
[ 95.125126]
[ 95.126823] Memory state around the buggy address:
[ 95.132028] cb681e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 95.139010] cb681e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 95.145990] >cb681f00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 95.152873] ^
[ 95.155737] cb681f80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 95.162704] cb682000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 95.169596] ==================================================================
[ 95.177458] kasan test: kmalloc_uaf2 use-after-free after another kmalloc
[ 95.186287] ==================================================================
[ 95.194418] BUG: KASAN: use-after-free in kmalloc_uaf2+0x74/0xa4 [test_kasan]
[ 95.201989] Write of size 1 at addr cb681ea8 by task insmod/1456
[ 95.208316]
[ 95.210127] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 95.220509] Hardware name: Broadcom STB (Flattened Device Tree)
[ 95.226993] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 95.235366] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 95.243249] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 95.252562] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 95.262483] [<c02a7ab8>] (kasan_report) from [<bf0044b0>] (kmalloc_uaf2+0x74/0xa4 [test_kasan])
[ 95.272593] [<bf0044b0>] (kmalloc_uaf2 [test_kasan]) from [<bf004dd8>] (kmalloc_tests_init+0x48/0x270 [test_kasan])
[ 95.284141] [<bf004dd8>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 95.294459] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 95.303262] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 95.311979] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 95.320616] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 95.329186]
[ 95.330902] Allocated by task 1456:
[ 95.334796] kmem_cache_alloc_trace+0xb4/0x170
[ 95.339974] kmalloc_uaf2+0x30/0xa4 [test_kasan]
[ 95.345338] kmalloc_tests_init+0x48/0x270 [test_kasan]
[ 95.350971] do_one_initcall+0x60/0x1b0
[ 95.355182] do_init_module+0xd4/0x2cc
[ 95.359292] load_module+0x3110/0x3af0
[ 95.363406] SyS_init_module+0x19c/0x1d4
[ 95.367714] ret_fast_syscall+0x0/0x50
[ 95.371717]
[ 95.373420] Freed by task 1456:
[ 95.376926] kfree+0x64/0x100
[ 95.380571] kmalloc_uaf2+0x50/0xa4 [test_kasan]
[ 95.385929] kmalloc_tests_init+0x48/0x270 [test_kasan]
[ 95.391551] do_one_initcall+0x60/0x1b0
[ 95.395751] do_init_module+0xd4/0x2cc
[ 95.399864] load_module+0x3110/0x3af0
[ 95.404003] SyS_init_module+0x19c/0x1d4
[ 95.408310] ret_fast_syscall+0x0/0x50
[ 95.412312]
[ 95.414073] The buggy address belongs to the object at cb681e80
[ 95.414073] which belongs to the cache kmalloc-64 of size 64
[ 95.426418] The buggy address is located 40 bytes inside of
[ 95.426418] 64-byte region [cb681e80, cb681ec0)
[ 95.437177] The buggy address belongs to the page:
[ 95.442318] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 95.449329] flags: 0x100(slab)
[ 95.452831] raw: 00000100 cb681000 00000000 00000020 00000001 ee95e594 d000108c d0000000
[ 95.461426] page dumped because: kasan: bad access detected
[ 95.467307]
[ 95.469012] Memory state around the buggy address:
[ 95.474200] cb681d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 95.481179] cb681e00: 00 00 00 00 00 03 fc fc fc fc fc fc fc fc fc fc
[ 95.488158] >cb681e80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 95.495050] ^
[ 95.499247] cb681f00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 95.506227] cb681f80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 95.513133] ==================================================================
[ 95.524422] kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc
[ 95.532322] ==================================================================
[ 95.540461] BUG: KASAN: slab-out-of-bounds in kmem_cache_oob+0x88/0xb8 [test_kasan]
[ 95.548629] Read of size 1 at addr cb32ef78 by task insmod/1456
[ 95.554877]
[ 95.556684] CPU: 0 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 95.567074] Hardware name: Broadcom STB (Flattened Device Tree)
[ 95.573541] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 95.581912] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 95.589790] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 95.599117] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 95.609041] [<c02a7ab8>] (kasan_report) from [<bf004908>] (kmem_cache_oob+0x88/0xb8 [test_kasan])
[ 95.619340] [<bf004908>] (kmem_cache_oob [test_kasan]) from [<bf004ddc>] (kmalloc_tests_init+0x4c/0x270 [test_kasan])
[ 95.631070] [<bf004ddc>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 95.641383] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 95.650190] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 95.658902] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 95.667555] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 95.676124]
[ 95.677831] Allocated by task 1456:
[ 95.681712] kmem_cache_alloc+0xac/0x16c
[ 95.686353] kmem_cache_oob+0x64/0xb8 [test_kasan]
[ 95.691887] kmalloc_tests_init+0x4c/0x270 [test_kasan]
[ 95.697515] do_one_initcall+0x60/0x1b0
[ 95.701717] do_init_module+0xd4/0x2cc
[ 95.705827] load_module+0x3110/0x3af0
[ 95.709965] SyS_init_module+0x19c/0x1d4
[ 95.714269] ret_fast_syscall+0x0/0x50
[ 95.718272]
[ 95.719984] Freed by task 0:
[ 95.723111] (stack is not available)
[ 95.726950]
[ 95.728706] The buggy address belongs to the object at cb32eeb0
[ 95.728706] which belongs to the cache test_cache of size 200
[ 95.741146] The buggy address is located 0 bytes to the right of
[ 95.741146] 200-byte region [cb32eeb0, cb32ef78)
[ 95.752433] The buggy address belongs to the page:
[ 95.757575] page:ee95e5c0 count:1 mapcount:0 mapping:cb32e040 index:0x0
[ 95.764583] flags: 0x100(slab)
[ 95.768100] raw: 00000100 cb32e040 00000000 0000000f 00000001 cb681d0c cb681d0c cdc6b000
[ 95.776685] page dumped because: kasan: bad access detected
[ 95.782566]
[ 95.784261] Memory state around the buggy address:
[ 95.789440] cb32ee00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 95.796408] cb32ee80: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00
[ 95.803376] >cb32ef00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[ 95.810268] ^
[ 95.817156] cb32ef80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 95.824135] cb32f000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 95.831043] ==================================================================
[ 95.859462] kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
[ 96.407433] kasan test: kasan_stack_oob out-of-bounds on stack
[ 96.413815] kasan test: kasan_global_oob out-of-bounds global variable
[ 96.421066] kasan test: ksize_unpoisons_memory ksize() unpoisons the whole allocated chunk
[ 96.430550] ==================================================================
[ 96.438688] BUG: KASAN: slab-out-of-bounds in ksize_unpoisons_memory+0x6c/0x84 [test_kasan]
[ 96.447573] Write of size 1 at addr cac5ab00 by task insmod/1456
[ 96.453899]
[ 96.455700] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 96.466080] Hardware name: Broadcom STB (Flattened Device Tree)
[ 96.472554] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 96.480918] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 96.488792] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 96.498098] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 96.508019] [<c02a7ab8>] (kasan_report) from [<bf004a58>] (ksize_unpoisons_memory+0x6c/0x84 [test_kasan])
[ 96.519026] [<bf004a58>] (ksize_unpoisons_memory [test_kasan]) from [<bf004dec>] (kmalloc_tests_init+0x5c/0x270 [test_kasan])
[ 96.531455] [<bf004dec>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 96.541758] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 96.550550] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 96.559254] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 96.567891] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 96.576451]
[ 96.578156] Allocated by task 1456:
[ 96.582043] kmem_cache_alloc_trace+0xb4/0x170
[ 96.587213] ksize_unpoisons_memory+0x30/0x84 [test_kasan]
[ 96.593457] kmalloc_tests_init+0x5c/0x270 [test_kasan]
[ 96.599075] do_one_initcall+0x60/0x1b0
[ 96.603274] do_init_module+0xd4/0x2cc
[ 96.607382] load_module+0x3110/0x3af0
[ 96.611495] SyS_init_module+0x19c/0x1d4
[ 96.615803] ret_fast_syscall+0x0/0x50
[ 96.619805]
[ 96.621504] Freed by task 0:
[ 96.624623] (stack is not available)
[ 96.628446]
[ 96.630201] The buggy address belongs to the object at cac5aa80
[ 96.630201] which belongs to the cache kmalloc-128 of size 128
[ 96.642718] The buggy address is located 0 bytes to the right of
[ 96.642718] 128-byte region [cac5aa80, cac5ab00)
[ 96.654003] The buggy address belongs to the page:
[ 96.659154] page:ee950b40 count:1 mapcount:0 mapping:cac5a000 index:0xcac5af00
[ 96.666869] flags: 0x100(slab)
[ 96.670382] raw: 00000100 cac5a000 cac5af00 00000008 00000001 ee965014 d0001104 d00000c0
[ 96.678964] page dumped because: kasan: bad access detected
[ 96.684846]
[ 96.686541] Memory state around the buggy address:
[ 96.691721] cac5aa00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 96.698687] cac5aa80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 96.705653] >cac5ab00: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
[ 96.712528] ^
[ 96.715382] cac5ab80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 96.722349] cac5ac00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 96.729242] ==================================================================
[ 96.738725] kasan test: copy_user_test out-of-bounds in copy_from_user()
[ 96.746098] ==================================================================
[ 96.754226] BUG: KASAN: slab-out-of-bounds in copy_user_test+0xb8/0x320 [test_kasan]
[ 96.762485] Write of size 11 at addr cb681400 by task insmod/1456
[ 96.768900]
[ 96.770701] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 96.781081] Hardware name: Broadcom STB (Flattened Device Tree)
[ 96.787548] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 96.795911] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 96.803782] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 96.813088] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 96.823003] [<c02a7ab8>] (kasan_report) from [<bf004b28>] (copy_user_test+0xb8/0x320 [test_kasan])
[ 96.833378] [<bf004b28>] (copy_user_test [test_kasan]) from [<bf004df0>] (kmalloc_tests_init+0x60/0x270 [test_kasan])
[ 96.845096] [<bf004df0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 96.855397] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 96.864191] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 96.872895] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 96.881531] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 96.890088]
[ 96.891791] Allocated by task 1456:
[ 96.895675] kmem_cache_alloc_trace+0xb4/0x170
[ 96.900843] copy_user_test+0x24/0x320 [test_kasan]
[ 96.906460] kmalloc_tests_init+0x60/0x270 [test_kasan]
[ 96.912077] do_one_initcall+0x60/0x1b0
[ 96.916276] do_init_module+0xd4/0x2cc
[ 96.920383] load_module+0x3110/0x3af0
[ 96.924497] SyS_init_module+0x19c/0x1d4
[ 96.928806] ret_fast_syscall+0x0/0x50
[ 96.932807]
[ 96.934506] Freed by task 0:
[ 96.937628] (stack is not available)
[ 96.941451]
[ 96.943204] The buggy address belongs to the object at cb681400
[ 96.943204] which belongs to the cache kmalloc-64 of size 64
[ 96.955538] The buggy address is located 0 bytes inside of
[ 96.955538] 64-byte region [cb681400, cb681440)
[ 96.966198] The buggy address belongs to the page:
[ 96.971339] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 96.978349] flags: 0x100(slab)
[ 96.981854] raw: 00000100 cb681000 00000000 00000020 00000001 ee962934 d000108c d0000000
[ 96.990439] page dumped because: kasan: bad access detected
[ 96.996321]
[ 96.998019] Memory state around the buggy address:
[ 97.003198] cb681300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.010164] cb681380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.017130] >cb681400: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.024006] ^
[ 97.027127] cb681480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.034095] cb681500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.040989] ==================================================================
[ 97.049167] kasan test: copy_user_test out-of-bounds in copy_to_user()
[ 97.056238] ==================================================================
[ 97.064369] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x15c/0x320 [test_kasan]
[ 97.072716] Read of size 11 at addr cb681400 by task insmod/1456
[ 97.079043]
[ 97.080842] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 97.091223] Hardware name: Broadcom STB (Flattened Device Tree)
[ 97.097690] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 97.106050] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 97.113921] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 97.123228] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 97.133145] [<c02a7ab8>] (kasan_report) from [<bf004bcc>] (copy_user_test+0x15c/0x320 [test_kasan])
[ 97.143608] [<bf004bcc>] (copy_user_test [test_kasan]) from [<bf004df0>] (kmalloc_tests_init+0x60/0x270 [test_kasan])
[ 97.155326] [<bf004df0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 97.165628] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 97.174421] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 97.183124] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 97.191761] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 97.200319]
[ 97.202023] Allocated by task 1456:
[ 97.205910] kmem_cache_alloc_trace+0xb4/0x170
[ 97.211078] copy_user_test+0x24/0x320 [test_kasan]
[ 97.216695] kmalloc_tests_init+0x60/0x270 [test_kasan]
[ 97.222312] do_one_initcall+0x60/0x1b0
[ 97.226512] do_init_module+0xd4/0x2cc
[ 97.230619] load_module+0x3110/0x3af0
[ 97.234735] SyS_init_module+0x19c/0x1d4
[ 97.239041] ret_fast_syscall+0x0/0x50
[ 97.243046]
[ 97.244744] Freed by task 0:
[ 97.247862] (stack is not available)
[ 97.251685]
[ 97.253435] The buggy address belongs to the object at cb681400
[ 97.253435] which belongs to the cache kmalloc-64 of size 64
[ 97.265770] The buggy address is located 0 bytes inside of
[ 97.265770] 64-byte region [cb681400, cb681440)
[ 97.276428] The buggy address belongs to the page:
[ 97.281570] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 97.288581] flags: 0x100(slab)
[ 97.292085] raw: 00000100 cb681000 00000000 00000020 00000001 ee95e594 d000108c d0000000
[ 97.300671] page dumped because: kasan: bad access detected
[ 97.306552]
[ 97.308249] Memory state around the buggy address:
[ 97.313427] cb681300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.320393] cb681380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.327360] >cb681400: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.334235] ^
[ 97.337360] cb681480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.344326] cb681500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.351218] ==================================================================
[ 97.360461] kasan test: copy_user_test out-of-bounds in __copy_from_user()
[ 97.368031] ==================================================================
[ 97.376165] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x1b4/0x320 [test_kasan]
[ 97.384514] Write of size 11 at addr cb681400 by task insmod/1456
[ 97.390930]
[ 97.392727] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 97.403106] Hardware name: Broadcom STB (Flattened Device Tree)
[ 97.409574] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 97.417935] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 97.425805] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 97.435112] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 97.445028] [<c02a7ab8>] (kasan_report) from [<bf004c24>] (copy_user_test+0x1b4/0x320 [test_kasan])
[ 97.455492] [<bf004c24>] (copy_user_test [test_kasan]) from [<bf004df0>] (kmalloc_tests_init+0x60/0x270 [test_kasan])
[ 97.467205] [<bf004df0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 97.477507] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 97.486302] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 97.495006] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 97.503641] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 97.512198]
[ 97.513901] Allocated by task 1456:
[ 97.517786] kmem_cache_alloc_trace+0xb4/0x170
[ 97.522950] copy_user_test+0x24/0x320 [test_kasan]
[ 97.528567] kmalloc_tests_init+0x60/0x270 [test_kasan]
[ 97.534184] do_one_initcall+0x60/0x1b0
[ 97.538383] do_init_module+0xd4/0x2cc
[ 97.542493] load_module+0x3110/0x3af0
[ 97.546606] SyS_init_module+0x19c/0x1d4
[ 97.550913] ret_fast_syscall+0x0/0x50
[ 97.554918]
[ 97.556619] Freed by task 0:
[ 97.559738] (stack is not available)
[ 97.563563]
[ 97.565314] The buggy address belongs to the object at cb681400
[ 97.565314] which belongs to the cache kmalloc-64 of size 64
[ 97.577659] The buggy address is located 0 bytes inside of
[ 97.577659] 64-byte region [cb681400, cb681440)
[ 97.588325] The buggy address belongs to the page:
[ 97.593471] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 97.600481] flags: 0x100(slab)
[ 97.603986] raw: 00000100 cb681000 00000000 00000020 00000001 ee95e594 d000108c d0000000
[ 97.612570] page dumped because: kasan: bad access detected
[ 97.618453]
[ 97.620148] Memory state around the buggy address:
[ 97.625327] cb681300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.632297] cb681380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.639263] >cb681400: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.646138] ^
[ 97.649262] cb681480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.656228] cb681500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.663121] ==================================================================
[ 97.671127] kasan test: copy_user_test out-of-bounds in __copy_to_user()
[ 97.678390] ==================================================================
[ 97.686523] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x204/0x320 [test_kasan]
[ 97.694873] Read of size 11 at addr cb681400 by task insmod/1456
[ 97.701201]
[ 97.703001] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 97.713382] Hardware name: Broadcom STB (Flattened Device Tree)
[ 97.719851] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 97.728211] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 97.736081] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 97.745390] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 97.755306] [<c02a7ab8>] (kasan_report) from [<bf004c74>] (copy_user_test+0x204/0x320 [test_kasan])
[ 97.765770] [<bf004c74>] (copy_user_test [test_kasan]) from [<bf004df0>] (kmalloc_tests_init+0x60/0x270 [test_kasan])
[ 97.777486] [<bf004df0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 97.787789] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 97.796584] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 97.805287] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 97.813924] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 97.822480]
[ 97.824187] Allocated by task 1456:
[ 97.828073] kmem_cache_alloc_trace+0xb4/0x170
[ 97.833239] copy_user_test+0x24/0x320 [test_kasan]
[ 97.838857] kmalloc_tests_init+0x60/0x270 [test_kasan]
[ 97.844473] do_one_initcall+0x60/0x1b0
[ 97.848673] do_init_module+0xd4/0x2cc
[ 97.852783] load_module+0x3110/0x3af0
[ 97.856898] SyS_init_module+0x19c/0x1d4
[ 97.861205] ret_fast_syscall+0x0/0x50
[ 97.865208]
[ 97.866905] Freed by task 0:
[ 97.870024] (stack is not available)
[ 97.873846]
[ 97.875597] The buggy address belongs to the object at cb681400
[ 97.875597] which belongs to the cache kmalloc-64 of size 64
[ 97.887930] The buggy address is located 0 bytes inside of
[ 97.887930] 64-byte region [cb681400, cb681440)
[ 97.898589] The buggy address belongs to the page:
[ 97.903730] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 97.910741] flags: 0x100(slab)
[ 97.914246] raw: 00000100 cb681000 00000000 00000020 00000001 ee95e594 d000108c d0000000
[ 97.922832] page dumped because: kasan: bad access detected
[ 97.928713]
[ 97.930407] Memory state around the buggy address:
[ 97.935586] cb681300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.942551] cb681380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.949520] >cb681400: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.956395] ^
[ 97.959520] cb681480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.966486] cb681500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 97.973379] ==================================================================
[ 97.981357] kasan test: copy_user_test out-of-bounds in __copy_from_user_inatomic()
[ 97.989682] ==================================================================
[ 97.997814] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x254/0x320 [test_kasan]
[ 98.006164] Write of size 11 at addr cb681400 by task insmod/1456
[ 98.012579]
[ 98.014377] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 98.024756] Hardware name: Broadcom STB (Flattened Device Tree)
[ 98.031223] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 98.039584] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 98.047456] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 98.056762] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 98.066678] [<c02a7ab8>] (kasan_report) from [<bf004cc4>] (copy_user_test+0x254/0x320 [test_kasan])
[ 98.077142] [<bf004cc4>] (copy_user_test [test_kasan]) from [<bf004df0>] (kmalloc_tests_init+0x60/0x270 [test_kasan])
[ 98.088855] [<bf004df0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 98.099157] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 98.107950] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 98.116652] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 98.125287] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 98.133847]
[ 98.135550] Allocated by task 1456:
[ 98.139436] kmem_cache_alloc_trace+0xb4/0x170
[ 98.144603] copy_user_test+0x24/0x320 [test_kasan]
[ 98.150222] kmalloc_tests_init+0x60/0x270 [test_kasan]
[ 98.155839] do_one_initcall+0x60/0x1b0
[ 98.160039] do_init_module+0xd4/0x2cc
[ 98.164148] load_module+0x3110/0x3af0
[ 98.168263] SyS_init_module+0x19c/0x1d4
[ 98.172571] ret_fast_syscall+0x0/0x50
[ 98.176573]
[ 98.178272] Freed by task 0:
[ 98.181392] (stack is not available)
[ 98.185216]
[ 98.186968] The buggy address belongs to the object at cb681400
[ 98.186968] which belongs to the cache kmalloc-64 of size 64
[ 98.199302] The buggy address is located 0 bytes inside of
[ 98.199302] 64-byte region [cb681400, cb681440)
[ 98.209962] The buggy address belongs to the page:
[ 98.215104] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 98.222112] flags: 0x100(slab)
[ 98.225617] raw: 00000100 cb681000 00000000 00000020 00000001 ee95e594 d000108c d0000000
[ 98.234202] page dumped because: kasan: bad access detected
[ 98.240083]
[ 98.241781] Memory state around the buggy address:
[ 98.246961] cb681300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.253927] cb681380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.260893] >cb681400: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.267771] ^
[ 98.270894] cb681480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.277861] cb681500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.284757] ==================================================================
[ 98.292719] kasan test: copy_user_test out-of-bounds in __copy_to_user_inatomic()
[ 98.301045] ==================================================================
[ 98.309179] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x2a4/0x320 [test_kasan]
[ 98.317528] Read of size 11 at addr cb681400 by task insmod/1456
[ 98.323855]
[ 98.325656] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 98.336036] Hardware name: Broadcom STB (Flattened Device Tree)
[ 98.342505] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 98.350868] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 98.358741] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 98.368048] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 98.377965] [<c02a7ab8>] (kasan_report) from [<bf004d14>] (copy_user_test+0x2a4/0x320 [test_kasan])
[ 98.388429] [<bf004d14>] (copy_user_test [test_kasan]) from [<bf004df0>] (kmalloc_tests_init+0x60/0x270 [test_kasan])
[ 98.400144] [<bf004df0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 98.410445] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 98.419240] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 98.427942] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 98.436578] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 98.445137]
[ 98.446840] Allocated by task 1456:
[ 98.450726] kmem_cache_alloc_trace+0xb4/0x170
[ 98.455893] copy_user_test+0x24/0x320 [test_kasan]
[ 98.461510] kmalloc_tests_init+0x60/0x270 [test_kasan]
[ 98.467126] do_one_initcall+0x60/0x1b0
[ 98.471326] do_init_module+0xd4/0x2cc
[ 98.475437] load_module+0x3110/0x3af0
[ 98.479551] SyS_init_module+0x19c/0x1d4
[ 98.483860] ret_fast_syscall+0x0/0x50
[ 98.487864]
[ 98.489563] Freed by task 0:
[ 98.492683] (stack is not available)
[ 98.496507]
[ 98.498258] The buggy address belongs to the object at cb681400
[ 98.498258] which belongs to the cache kmalloc-64 of size 64
[ 98.510593] The buggy address is located 0 bytes inside of
[ 98.510593] 64-byte region [cb681400, cb681440)
[ 98.521253] The buggy address belongs to the page:
[ 98.526394] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 98.533404] flags: 0x100(slab)
[ 98.536906] raw: 00000100 cb681000 00000000 00000020 00000001 ee95e594 d000108c d0000000
[ 98.545491] page dumped because: kasan: bad access detected
[ 98.551370]
[ 98.553066] Memory state around the buggy address:
[ 98.558246] cb681300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.565213] cb681380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.572179] >cb681400: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.579054] ^
[ 98.582177] cb681480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.589144] cb681500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.596038] ==================================================================
[ 98.604200] kasan test: copy_user_test out-of-bounds in strncpy_from_user()
[ 98.611705] ==================================================================
[ 98.619495] BUG: KASAN: slab-out-of-bounds in strncpy_from_user+0x58/0x1e4
[ 98.626782] Write of size 11 at addr cb681400 by task insmod/1456
[ 98.633196]
[ 98.634993] CPU: 2 PID: 1456 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #27
[ 98.645374] Hardware name: Broadcom STB (Flattened Device Tree)
[ 98.651841] [<c01157c0>] (unwind_backtrace) from [<c010f118>] (show_stack+0x10/0x14)
[ 98.660204] [<c010f118>] (show_stack) from [<c0b85908>] (dump_stack+0x90/0xa4)
[ 98.668075] [<c0b85908>] (dump_stack) from [<c02a73b4>] (print_address_description+0x50/0x24c)
[ 98.677381] [<c02a73b4>] (print_address_description) from [<c02a7ab8>] (kasan_report+0x238/0x324)
[ 98.686951] [<c02a7ab8>] (kasan_report) from [<c05bbf68>] (strncpy_from_user+0x58/0x1e4)
[ 98.696085] [<c05bbf68>] (strncpy_from_user) from [<bf004d68>] (copy_user_test+0x2f8/0x320 [test_kasan])
[ 98.706998] [<bf004d68>] (copy_user_test [test_kasan]) from [<bf004df0>] (kmalloc_tests_init+0x60/0x270 [test_kasan])
[ 98.718716] [<bf004df0>] (kmalloc_tests_init [test_kasan]) from [<c0101f54>] (do_one_initcall+0x60/0x1b0)
[ 98.729018] [<c0101f54>] (do_one_initcall) from [<c01dcfc8>] (do_init_module+0xd4/0x2cc)
[ 98.737812] [<c01dcfc8>] (do_init_module) from [<c01dbad8>] (load_module+0x3110/0x3af0)
[ 98.746516] [<c01dbad8>] (load_module) from [<c01dc654>] (SyS_init_module+0x19c/0x1d4)
[ 98.755152] [<c01dc654>] (SyS_init_module) from [<c0109800>] (ret_fast_syscall+0x0/0x50)
[ 98.763710]
[ 98.765413] Allocated by task 1456:
[ 98.769299] kmem_cache_alloc_trace+0xb4/0x170
[ 98.774466] copy_user_test+0x24/0x320 [test_kasan]
[ 98.780083] kmalloc_tests_init+0x60/0x270 [test_kasan]
[ 98.785700] do_one_initcall+0x60/0x1b0
[ 98.789900] do_init_module+0xd4/0x2cc
[ 98.794010] load_module+0x3110/0x3af0
[ 98.798124] SyS_init_module+0x19c/0x1d4
[ 98.802433] ret_fast_syscall+0x0/0x50
[ 98.806436]
[ 98.808135] Freed by task 0:
[ 98.811258] (stack is not available)
[ 98.815081]
[ 98.816834] The buggy address belongs to the object at cb681400
[ 98.816834] which belongs to the cache kmalloc-64 of size 64
[ 98.829169] The buggy address is located 0 bytes inside of
[ 98.829169] 64-byte region [cb681400, cb681440)
[ 98.839829] The buggy address belongs to the page:
[ 98.844971] page:ee965020 count:1 mapcount:0 mapping:cb681000 index:0x0
[ 98.851979] flags: 0x100(slab)
[ 98.855484] raw: 00000100 cb681000 00000000 00000020 00000001 ee95e594 d000108c d0000000
[ 98.864067] page dumped because: kasan: bad access detected
[ 98.869950]
[ 98.871644] Memory state around the buggy address:
[ 98.876824] cb681300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.883790] cb681380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.890756] >cb681400: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.897632] ^
[ 98.900753] cb681480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.907720] cb681500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 98.914615] ==================================================================
[ 98.924518] kasan test: use_after_scope_test use-after-scope on int
[ 98.931329] kasan test: use_after_scope_test use-after-scope on array
insmod: can't insert 'test_kasan.ko': Resource temporarily unavailable
[-- Attachment #3: fix-build.patch --]
[-- Type: text/x-patch; name="fix-build.patch", Size: 1476 bytes --]
diff --git a/arch/arm/boot/compressed/decompress.c b/arch/arm/boot/compressed/decompress.c
index f3a4bedd1afc..7d4a47752760 100644
--- a/arch/arm/boot/compressed/decompress.c
+++ b/arch/arm/boot/compressed/decompress.c
@@ -48,8 +48,10 @@ extern int memcmp(const void *cs, const void *ct, size_t count);
#endif
#ifdef CONFIG_KERNEL_XZ
+#ifndef CONFIG_KASAN
#define memmove memmove
#define memcpy memcpy
+#endif
#include "../../../../lib/decompress_unxz.c"
#endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 99c908226065..0de1160d136e 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,13 @@ ret_fast_syscall:
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
+#ifdef CONFIG_KASAN
+ movw r1, #:lower16:TASK_SIZE
+ movt r1, #:upper16:TASK_SIZE
+ cmp r2, r1
+#else
cmp r2, #TASK_SIZE
+#endif
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -115,7 +121,13 @@ ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
ldr r2, [tsk, #TI_ADDR_LIMIT]
+#ifdef CONFIG_KASAN
+ movw r1, #:lower16:TASK_SIZE
+ movt r1, #:upper16:TASK_SIZE
+ cmp r2, r1
+#else
cmp r2, #TASK_SIZE
+#endif
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
[-- Attachment #4: lpae.log --]
[-- Type: text/x-log; name="lpae.log", Size: 80425 bytes --]
test_kasan.ko
# insmod test_kasan.ko
[ 101.420931] test_kasan: no symbol version for module_layout
[ 101.470457] kasan test: kmalloc_oob_right out-of-bounds to right
[ 101.477653] ==================================================================
[ 101.485794] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0x54/0x6c [test_kasan]
[ 101.494242] Write of size 1 at addr cb7dcdfb by task insmod/1453
[ 101.500584]
[ 101.502400] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 101.512802] Hardware name: Broadcom STB (Flattened Device Tree)
[ 101.519288] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 101.527663] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 101.535547] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 101.544868] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 101.554822] [<c03a838c>] (kasan_report) from [<bf0041bc>] (kmalloc_oob_right+0x54/0x6c [test_kasan])
[ 101.565384] [<bf0041bc>] (kmalloc_oob_right [test_kasan]) from [<bf004cb4>] (kmalloc_tests_init+0x10/0x35c [test_kasan])
[ 101.577390] [<bf004cb4>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 101.587716] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 101.596532] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 101.605249] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 101.613918] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 101.622490]
[ 101.624203] Allocated by task 1453:
[ 101.628107] kmem_cache_alloc_trace+0xb4/0x170
[ 101.633291] kmalloc_oob_right+0x30/0x6c [test_kasan]
[ 101.639099] kmalloc_tests_init+0x10/0x35c [test_kasan]
[ 101.644726] do_one_initcall+0x60/0x1b0
[ 101.648937] do_init_module+0xd4/0x2cc
[ 101.653057] load_module+0x3110/0x3af0
[ 101.657178] SyS_init_module+0x184/0x1bc
[ 101.661500] ret_fast_syscall+0x0/0x48
[ 101.665511]
[ 101.667219] Freed by task 0:
[ 101.670362] (stack is not available)
[ 101.674201]
[ 101.675972] The buggy address belongs to the object at cb7dcd80
[ 101.675972] which belongs to the cache kmalloc-128 of size 128
[ 101.688518] The buggy address is located 123 bytes inside of
[ 101.688518] 128-byte region [cb7dcd80, cb7dce00)
[ 101.699465] The buggy address belongs to the page:
[ 101.704622] page:ee967b80 count:1 mapcount:0 mapping:cb7dc000 index:0x0
[ 101.711646] flags: 0x100(slab)
[ 101.715164] raw: 00000100 cb7dc000 00000000 00000015 00000001 ee96b514 ee95e8f4 d00000c0
[ 101.723765] page dumped because: kasan: bad access detected
[ 101.729653]
[ 101.731366] Memory state around the buggy address:
[ 101.736565] cb7dcc80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 101.743559] cb7dcd00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 101.750547] >cb7dcd80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03
[ 101.757462] ^
[ 101.764367] cb7dce00: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
[ 101.771363] cb7dce80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 101.778274] ==================================================================
[ 101.786797] kasan test: kmalloc_oob_left out-of-bounds to left
[ 101.793807] ==================================================================
[ 101.801963] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_left+0x54/0x74 [test_kasan]
[ 101.810337] Read of size 1 at addr cb18227f by task insmod/1453
[ 101.816588]
[ 101.818405] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 101.828800] Hardware name: Broadcom STB (Flattened Device Tree)
[ 101.835292] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 101.843683] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 101.851578] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 101.860909] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 101.870850] [<c03a838c>] (kasan_report) from [<bf004228>] (kmalloc_oob_left+0x54/0x74 [test_kasan])
[ 101.881361] [<bf004228>] (kmalloc_oob_left [test_kasan]) from [<bf004cb8>] (kmalloc_tests_init+0x14/0x35c [test_kasan])
[ 101.893292] [<bf004cb8>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 101.903621] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 101.912438] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 101.921154] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 101.929822] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 101.938404]
[ 101.940113] Allocated by task 0:
[ 101.943601] (stack is not available)
[ 101.947442]
[ 101.949150] Freed by task 0:
[ 101.952288] (stack is not available)
[ 101.956127]
[ 101.957888] The buggy address belongs to the object at cb182200
[ 101.957888] which belongs to the cache kmalloc-64 of size 64
[ 101.970258] The buggy address is located 63 bytes to the right of
[ 101.970258] 64-byte region [cb182200, cb182240)
[ 101.981570] The buggy address belongs to the page:
[ 101.986721] page:ee95b040 count:1 mapcount:0 mapping:cb182000 index:0x0
[ 101.993742] flags: 0x100(slab)
[ 101.997267] raw: 00000100 cb182000 00000000 00000020 00000001 ee9616f4 ee95e894 d0000000
[ 102.005866] page dumped because: kasan: bad access detected
[ 102.011758]
[ 102.013467] Memory state around the buggy address:
[ 102.018660] cb182100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.025646] cb182180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.032634] >cb182200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.039547] ^
[ 102.046443] cb182280: 00 07 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.053430] cb182300: 00 04 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.060342] ==================================================================
[ 102.068609] kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
[ 102.077848] ==================================================================
[ 102.085999] BUG: KASAN: slab-out-of-bounds in kmalloc_node_oob_right+0x58/0x70 [test_kasan]
[ 102.094898] Write of size 1 at addr cac85900 by task insmod/1453
[ 102.101237]
[ 102.103055] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 102.113456] Hardware name: Broadcom STB (Flattened Device Tree)
[ 102.119943] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 102.128327] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 102.136222] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 102.145567] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 102.155516] [<c03a838c>] (kasan_report) from [<bf0042a0>] (kmalloc_node_oob_right+0x58/0x70 [test_kasan])
[ 102.166571] [<bf0042a0>] (kmalloc_node_oob_right [test_kasan]) from [<bf004cbc>] (kmalloc_tests_init+0x18/0x35c [test_kasan])
[ 102.179031] [<bf004cbc>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 102.189356] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 102.198161] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 102.206895] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 102.215558] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 102.224126]
[ 102.225841] Allocated by task 1453:
[ 102.229744] kmem_cache_alloc_trace+0xb4/0x170
[ 102.234940] kmalloc_node_oob_right+0x30/0x70 [test_kasan]
[ 102.241200] kmalloc_tests_init+0x18/0x35c [test_kasan]
[ 102.246837] do_one_initcall+0x60/0x1b0
[ 102.251047] do_init_module+0xd4/0x2cc
[ 102.255165] load_module+0x3110/0x3af0
[ 102.259299] SyS_init_module+0x184/0x1bc
[ 102.263637] ret_fast_syscall+0x0/0x48
[ 102.267651]
[ 102.269367] Freed by task 0:
[ 102.272498] (stack is not available)
[ 102.276338]
[ 102.278107] The buggy address belongs to the object at cac84900
[ 102.278107] which belongs to the cache kmalloc-4096 of size 4096
[ 102.290832] The buggy address is located 0 bytes to the right of
[ 102.290832] 4096-byte region [cac84900, cac85900)
[ 102.302216] The buggy address belongs to the page:
[ 102.307378] page:ee951080 count:1 mapcount:0 mapping:cac84900 index:0x0 compound_mapcount: 0
[ 102.316392] flags: 0x8100(slab|head)
[ 102.320445] raw: 00008100 cac84900 00000000 00000001 00000001 ee95e754 d000140c d0000540
[ 102.329029] page dumped because: kasan: bad access detected
[ 102.334909]
[ 102.336608] Memory state around the buggy address:
[ 102.341793] cac85800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 102.348763] cac85880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 102.355733] >cac85900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.362612] ^
[ 102.365479] cac85980: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.372454] cac85a00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.379362] ==================================================================
[ 102.387622] kasan test: kmalloc_large_oob_right kmalloc large allocation: out-of-bounds to right
[ 102.424790] ==================================================================
[ 102.432931] BUG: KASAN: slab-out-of-bounds in kmalloc_large_oob_right+0x60/0x78 [test_kasan]
[ 102.441905] Write of size 1 at addr cabfff00 by task insmod/1453
[ 102.448239]
[ 102.450050] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 102.460444] Hardware name: Broadcom STB (Flattened Device Tree)
[ 102.466913] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 102.475282] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 102.483161] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 102.492489] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 102.502413] [<c03a838c>] (kasan_report) from [<bf004318>] (kmalloc_large_oob_right+0x60/0x78 [test_kasan])
[ 102.513523] [<bf004318>] (kmalloc_large_oob_right [test_kasan]) from [<bf004cc0>] (kmalloc_tests_init+0x1c/0x35c [test_kasan])
[ 102.526051] [<bf004cc0>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 102.536368] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 102.545162] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 102.553890] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 102.562544] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 102.571104]
[ 102.572865] The buggy address belongs to the object at ca800000
[ 102.572865] which belongs to the cache kmalloc-4194304 of size 4194304
[ 102.586109] The buggy address is located 4194048 bytes inside of
[ 102.586109] 4194304-byte region [ca800000, cac00000)
[ 102.597768] The buggy address belongs to the page:
[ 102.602912] page:ee948000 count:1 mapcount:0 mapping:ca800000 index:0x0 compound_mapcount: 0
[ 102.611915] flags: 0x8100(slab|head)
[ 102.615955] raw: 00008100 ca800000 00000000 00000001 00000001 d000190c d000190c d0000cc0
[ 102.624552] page dumped because: kasan: bad access detected
[ 102.630442]
[ 102.632138] Memory state around the buggy address:
[ 102.637332] cabffe00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 102.644311] cabffe80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 102.651291] >cabfff00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.658173] ^
[ 102.661035] cabfff80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.668002] cac00000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.674899] ==================================================================
[ 102.688490] kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
[ 102.697666] ==================================================================
[ 102.705816] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_krealloc_more+0x78/0x90 [test_kasan]
[ 102.714971] Write of size 1 at addr cb182213 by task insmod/1453
[ 102.721310]
[ 102.723113] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 102.733503] Hardware name: Broadcom STB (Flattened Device Tree)
[ 102.739971] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 102.748348] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 102.756226] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 102.765561] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 102.775491] [<c03a838c>] (kasan_report) from [<bf004558>] (kmalloc_oob_krealloc_more+0x78/0x90 [test_kasan])
[ 102.786776] [<bf004558>] (kmalloc_oob_krealloc_more [test_kasan]) from [<bf004cc4>] (kmalloc_tests_init+0x20/0x35c [test_kasan])
[ 102.799486] [<bf004cc4>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 102.809801] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 102.818603] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 102.827313] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 102.835959] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 102.844530]
[ 102.846238] Allocated by task 1453:
[ 102.850081] krealloc+0x44/0xc8
[ 102.853917] kmalloc_oob_krealloc_more+0x44/0x90 [test_kasan]
[ 102.860440] kmalloc_tests_init+0x20/0x35c [test_kasan]
[ 102.866057] do_one_initcall+0x60/0x1b0
[ 102.870262] do_init_module+0xd4/0x2cc
[ 102.874395] load_module+0x3110/0x3af0
[ 102.878519] SyS_init_module+0x184/0x1bc
[ 102.882826] ret_fast_syscall+0x0/0x48
[ 102.886831]
[ 102.888530] Freed by task 0:
[ 102.891651] (stack is not available)
[ 102.895483]
[ 102.897239] The buggy address belongs to the object at cb182200
[ 102.897239] which belongs to the cache kmalloc-64 of size 64
[ 102.909599] The buggy address is located 19 bytes inside of
[ 102.909599] 64-byte region [cb182200, cb182240)
[ 102.920360] The buggy address belongs to the page:
[ 102.925516] page:ee95b040 count:1 mapcount:0 mapping:cb182000 index:0x0
[ 102.932541] flags: 0x100(slab)
[ 102.936045] raw: 00000100 cb182000 00000000 00000020 00000001 ee9616f4 ee95e894 d0000000
[ 102.944642] page dumped because: kasan: bad access detected
[ 102.950530]
[ 102.952228] Memory state around the buggy address:
[ 102.957429] cb182100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.964408] cb182180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.971391] >cb182200: 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.978279] ^
[ 102.981678] cb182280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 102.988653] cb182300: 00 04 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 102.995558] ==================================================================
[ 103.003661] kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
[ 103.012824] ==================================================================
[ 103.020973] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_krealloc_less+0x78/0x90 [test_kasan]
[ 103.030125] Write of size 1 at addr cb18218f by task insmod/1453
[ 103.036467]
[ 103.038272] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 103.048670] Hardware name: Broadcom STB (Flattened Device Tree)
[ 103.055136] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 103.063511] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 103.071394] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 103.080712] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 103.090645] [<c03a838c>] (kasan_report) from [<bf0045e8>] (kmalloc_oob_krealloc_less+0x78/0x90 [test_kasan])
[ 103.101928] [<bf0045e8>] (kmalloc_oob_krealloc_less [test_kasan]) from [<bf004cc8>] (kmalloc_tests_init+0x24/0x35c [test_kasan])
[ 103.114640] [<bf004cc8>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 103.124951] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 103.133754] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 103.142470] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 103.151105] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 103.159673]
[ 103.161390] Allocated by task 1453:
[ 103.165227] krealloc+0x44/0xc8
[ 103.169068] kmalloc_oob_krealloc_less+0x44/0x90 [test_kasan]
[ 103.175589] kmalloc_tests_init+0x24/0x35c [test_kasan]
[ 103.181207] do_one_initcall+0x60/0x1b0
[ 103.185433] do_init_module+0xd4/0x2cc
[ 103.189553] load_module+0x3110/0x3af0
[ 103.193669] SyS_init_module+0x184/0x1bc
[ 103.197976] ret_fast_syscall+0x0/0x48
[ 103.201980]
[ 103.203680] Freed by task 0:
[ 103.206803] (stack is not available)
[ 103.210628]
[ 103.212393] The buggy address belongs to the object at cb182180
[ 103.212393] which belongs to the cache kmalloc-64 of size 64
[ 103.224742] The buggy address is located 15 bytes inside of
[ 103.224742] 64-byte region [cb182180, cb1821c0)
[ 103.235500] The buggy address belongs to the page:
[ 103.240643] page:ee95b040 count:1 mapcount:0 mapping:cb182000 index:0x0
[ 103.247654] flags: 0x100(slab)
[ 103.251157] raw: 00000100 cb182000 00000000 00000020 00000001 ee9616f4 ee95e894 d0000000
[ 103.259751] page dumped because: kasan: bad access detected
[ 103.265634]
[ 103.267341] Memory state around the buggy address:
[ 103.272534] cb182080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.279513] cb182100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.286490] >cb182180: 00 07 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.293378] ^
[ 103.296513] cb182200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 103.303491] cb182280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 103.310398] ==================================================================
[ 103.318645] kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
[ 103.327807] ==================================================================
[ 103.335944] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_16+0x78/0xa4 [test_kasan]
[ 103.344114] Write of size 16 at addr cb182100 by task insmod/1453
[ 103.350539]
[ 103.352353] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 103.362746] Hardware name: Broadcom STB (Flattened Device Tree)
[ 103.369218] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 103.377603] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 103.385493] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 103.394819] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 103.404740] [<c03a838c>] (kasan_report) from [<bf0043a8>] (kmalloc_oob_16+0x78/0xa4 [test_kasan])
[ 103.415029] [<bf0043a8>] (kmalloc_oob_16 [test_kasan]) from [<bf004ccc>] (kmalloc_tests_init+0x28/0x35c [test_kasan])
[ 103.426756] [<bf004ccc>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 103.437058] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 103.445862] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 103.454577] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 103.463215] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 103.471786]
[ 103.473494] Allocated by task 1453:
[ 103.477395] kmem_cache_alloc_trace+0xb4/0x170
[ 103.482566] kmalloc_oob_16+0x30/0xa4 [test_kasan]
[ 103.488094] kmalloc_tests_init+0x28/0x35c [test_kasan]
[ 103.493713] do_one_initcall+0x60/0x1b0
[ 103.497913] do_init_module+0xd4/0x2cc
[ 103.502021] load_module+0x3110/0x3af0
[ 103.506136] SyS_init_module+0x184/0x1bc
[ 103.510456] ret_fast_syscall+0x0/0x48
[ 103.514471]
[ 103.516172] Freed by task 0:
[ 103.519309] (stack is not available)
[ 103.523140]
[ 103.524896] The buggy address belongs to the object at cb182100
[ 103.524896] which belongs to the cache kmalloc-64 of size 64
[ 103.537236] The buggy address is located 0 bytes inside of
[ 103.537236] 64-byte region [cb182100, cb182140)
[ 103.547910] The buggy address belongs to the page:
[ 103.553051] page:ee95b040 count:1 mapcount:0 mapping:cb182000 index:0x0
[ 103.560062] flags: 0x100(slab)
[ 103.563577] raw: 00000100 cb182000 00000000 00000020 00000001 ee9616f4 ee95e894 d0000000
[ 103.572163] page dumped because: kasan: bad access detected
[ 103.578051]
[ 103.579751] Memory state around the buggy address:
[ 103.584932] cb182000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.591900] cb182080: 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.598867] >cb182100: 00 05 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.605744] ^
[ 103.608868] cb182180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 103.615834] cb182200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 103.622729] ==================================================================
[ 103.631013] kasan test: kmalloc_oob_in_memset out-of-bounds in memset
[ 103.638659] ==================================================================
[ 103.646828] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_in_memset+0x58/0x68 [test_kasan]
[ 103.655638] Write of size 671 at addr cad5db40 by task insmod/1453
[ 103.662145]
[ 103.663946] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 103.674342] Hardware name: Broadcom STB (Flattened Device Tree)
[ 103.680815] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 103.689177] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 103.697056] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 103.706378] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 103.715985] [<c03a838c>] (kasan_report) from [<c03a7430>] (memset+0x20/0x34)
[ 103.724003] [<c03a7430>] (memset) from [<bf004658>] (kmalloc_oob_in_memset+0x58/0x68 [test_kasan])
[ 103.734395] [<bf004658>] (kmalloc_oob_in_memset [test_kasan]) from [<bf004cd0>] (kmalloc_tests_init+0x2c/0x35c [test_kasan])
[ 103.746745] [<bf004cd0>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 103.757048] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 103.765852] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 103.774567] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 103.783205] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 103.791774]
[ 103.793484] Allocated by task 1453:
[ 103.797385] kmem_cache_alloc_trace+0xb4/0x170
[ 103.802562] kmalloc_oob_in_memset+0x30/0x68 [test_kasan]
[ 103.808729] kmalloc_tests_init+0x2c/0x35c [test_kasan]
[ 103.814363] do_one_initcall+0x60/0x1b0
[ 103.818573] do_init_module+0xd4/0x2cc
[ 103.822681] load_module+0x3110/0x3af0
[ 103.826796] SyS_init_module+0x184/0x1bc
[ 103.831103] ret_fast_syscall+0x0/0x48
[ 103.835108]
[ 103.836808] Freed by task 0:
[ 103.839930] (stack is not available)
[ 103.843754]
[ 103.845519] The buggy address belongs to the object at cad5db40
[ 103.845519] which belongs to the cache kmalloc-1024 of size 1024
[ 103.858218] The buggy address is located 0 bytes inside of
[ 103.858218] 1024-byte region [cad5db40, cad5df40)
[ 103.869071] The buggy address belongs to the page:
[ 103.874215] page:ee952b80 count:1 mapcount:0 mapping:cad5c040 index:0x0 compound_mapcount: 0
[ 103.883237] flags: 0x8100(slab|head)
[ 103.887289] raw: 00008100 cad5c040 00000000 00000007 00000001 ee950f14 d000130c d00003c0
[ 103.895881] page dumped because: kasan: bad access detected
[ 103.901763]
[ 103.903466] Memory state around the buggy address:
[ 103.908650] cad5dc80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 103.915629] cad5dd00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 103.922609] >cad5dd80: 00 00 00 00 00 00 00 00 00 00 00 02 fc fc fc fc
[ 103.929513] ^
[ 103.935333] cad5de00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.942308] cad5de80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 103.949208] ==================================================================
[ 103.957453] kasan test: kmalloc_oob_memset_2 out-of-bounds in memset2
[ 103.964912] ==================================================================
[ 103.973051] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_2+0x5c/0x6c [test_kasan]
[ 103.981764] Write of size 2 at addr cb182007 by task insmod/1453
[ 103.988094]
[ 103.989893] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 104.000283] Hardware name: Broadcom STB (Flattened Device Tree)
[ 104.006766] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 104.015128] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 104.023002] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 104.032322] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 104.041940] [<c03a838c>] (kasan_report) from [<c03a7430>] (memset+0x20/0x34)
[ 104.049960] [<c03a7430>] (memset) from [<bf0046c4>] (kmalloc_oob_memset_2+0x5c/0x6c [test_kasan])
[ 104.060258] [<bf0046c4>] (kmalloc_oob_memset_2 [test_kasan]) from [<bf004cd4>] (kmalloc_tests_init+0x30/0x35c [test_kasan])
[ 104.072531] [<bf004cd4>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 104.082847] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 104.091650] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 104.100363] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 104.109000] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 104.117570]
[ 104.119284] Allocated by task 1453:
[ 104.123180] kmem_cache_alloc_trace+0xb4/0x170
[ 104.128367] kmalloc_oob_memset_2+0x30/0x6c [test_kasan]
[ 104.134442] kmalloc_tests_init+0x30/0x35c [test_kasan]
[ 104.140061] do_one_initcall+0x60/0x1b0
[ 104.144269] do_init_module+0xd4/0x2cc
[ 104.148402] load_module+0x3110/0x3af0
[ 104.152529] SyS_init_module+0x184/0x1bc
[ 104.156837] ret_fast_syscall+0x0/0x48
[ 104.160841]
[ 104.162543] Freed by task 0:
[ 104.165664] (stack is not available)
[ 104.169498]
[ 104.171259] The buggy address belongs to the object at cb182000
[ 104.171259] which belongs to the cache kmalloc-64 of size 64
[ 104.183618] The buggy address is located 7 bytes inside of
[ 104.183618] 64-byte region [cb182000, cb182040)
[ 104.194288] The buggy address belongs to the page:
[ 104.199448] page:ee95b040 count:1 mapcount:0 mapping:cb182000 index:0x0
[ 104.206472] flags: 0x100(slab)
[ 104.209977] raw: 00000100 cb182000 00000000 00000020 00000001 ee9616f4 ee95e894 d0000000
[ 104.218573] page dumped because: kasan: bad access detected
[ 104.224470]
[ 104.226169] Memory state around the buggy address:
[ 104.231367] cb181f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 104.238348] cb181f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 104.245324] >cb182000: 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 104.252205] ^
[ 104.255354] cb182080: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 104.262336] cb182100: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 104.269235] ==================================================================
[ 104.277474] kasan test: kmalloc_oob_memset_4 out-of-bounds in memset4
[ 104.284953] ==================================================================
[ 104.293092] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_4+0x5c/0x6c [test_kasan]
[ 104.301799] Write of size 4 at addr cb183f85 by task insmod/1453
[ 104.308129]
[ 104.309928] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 104.320321] Hardware name: Broadcom STB (Flattened Device Tree)
[ 104.326799] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 104.335164] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 104.343045] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 104.352366] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 104.361979] [<c03a838c>] (kasan_report) from [<c03a7430>] (memset+0x20/0x34)
[ 104.369999] [<c03a7430>] (memset) from [<bf004730>] (kmalloc_oob_memset_4+0x5c/0x6c [test_kasan])
[ 104.380298] [<bf004730>] (kmalloc_oob_memset_4 [test_kasan]) from [<bf004cd8>] (kmalloc_tests_init+0x34/0x35c [test_kasan])
[ 104.392567] [<bf004cd8>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 104.402884] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 104.411686] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 104.420399] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 104.429038] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 104.437608]
[ 104.439329] Allocated by task 1453:
[ 104.443220] kmem_cache_alloc_trace+0xb4/0x170
[ 104.448408] kmalloc_oob_memset_4+0x30/0x6c [test_kasan]
[ 104.454482] kmalloc_tests_init+0x34/0x35c [test_kasan]
[ 104.460099] do_one_initcall+0x60/0x1b0
[ 104.464310] do_init_module+0xd4/0x2cc
[ 104.468438] load_module+0x3110/0x3af0
[ 104.472562] SyS_init_module+0x184/0x1bc
[ 104.476870] ret_fast_syscall+0x0/0x48
[ 104.480875]
[ 104.482577] Freed by task 0:
[ 104.485698] (stack is not available)
[ 104.489525]
[ 104.491284] The buggy address belongs to the object at cb183f80
[ 104.491284] which belongs to the cache kmalloc-64 of size 64
[ 104.503637] The buggy address is located 5 bytes inside of
[ 104.503637] 64-byte region [cb183f80, cb183fc0)
[ 104.514309] The buggy address belongs to the page:
[ 104.519465] page:ee95b060 count:1 mapcount:0 mapping:cb183000 index:0x0
[ 104.526484] flags: 0x100(slab)
[ 104.529989] raw: 00000100 cb183000 00000000 00000020 00000001 ee95e894 d000108c d0000000
[ 104.538585] page dumped because: kasan: bad access detected
[ 104.544480]
[ 104.546178] Memory state around the buggy address:
[ 104.551378] cb183e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 104.558360] cb183f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 104.565341] >cb183f80: 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 104.572221] ^
[ 104.575366] cb184000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.582349] cb184080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.589249] ==================================================================
[ 104.597495] kasan test: kmalloc_oob_memset_8 out-of-bounds in memset8
[ 104.604928] ==================================================================
[ 104.613072] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_8+0x5c/0x6c [test_kasan]
[ 104.621782] Write of size 8 at addr cb183f01 by task insmod/1453
[ 104.628110]
[ 104.629909] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 104.640299] Hardware name: Broadcom STB (Flattened Device Tree)
[ 104.646779] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 104.655142] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 104.663017] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 104.672337] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 104.681949] [<c03a838c>] (kasan_report) from [<c03a7430>] (memset+0x20/0x34)
[ 104.689970] [<c03a7430>] (memset) from [<bf00479c>] (kmalloc_oob_memset_8+0x5c/0x6c [test_kasan])
[ 104.700272] [<bf00479c>] (kmalloc_oob_memset_8 [test_kasan]) from [<bf004cdc>] (kmalloc_tests_init+0x38/0x35c [test_kasan])
[ 104.712541] [<bf004cdc>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 104.722856] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 104.731661] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 104.740373] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 104.749010] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 104.757583]
[ 104.759299] Allocated by task 1453:
[ 104.763193] kmem_cache_alloc_trace+0xb4/0x170
[ 104.768378] kmalloc_oob_memset_8+0x30/0x6c [test_kasan]
[ 104.774453] kmalloc_tests_init+0x38/0x35c [test_kasan]
[ 104.780070] do_one_initcall+0x60/0x1b0
[ 104.784277] do_init_module+0xd4/0x2cc
[ 104.788403] load_module+0x3110/0x3af0
[ 104.792531] SyS_init_module+0x184/0x1bc
[ 104.796839] ret_fast_syscall+0x0/0x48
[ 104.800843]
[ 104.802544] Freed by task 0:
[ 104.805666] (stack is not available)
[ 104.809498]
[ 104.811258] The buggy address belongs to the object at cb183f00
[ 104.811258] which belongs to the cache kmalloc-64 of size 64
[ 104.823614] The buggy address is located 1 bytes inside of
[ 104.823614] 64-byte region [cb183f00, cb183f40)
[ 104.834286] The buggy address belongs to the page:
[ 104.839444] page:ee95b060 count:1 mapcount:0 mapping:cb183000 index:0x0
[ 104.846467] flags: 0x100(slab)
[ 104.849970] raw: 00000100 cb183000 00000000 00000020 00000001 ee95e894 d000108c d0000000
[ 104.858570] page dumped because: kasan: bad access detected
[ 104.864466]
[ 104.866165] Memory state around the buggy address:
[ 104.871364] cb183e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 104.878347] cb183e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 104.885326] >cb183f00: 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 104.892207] ^
[ 104.895356] cb183f80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 104.902337] cb184000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.909235] ==================================================================
[ 104.917473] kasan test: kmalloc_oob_memset_16 out-of-bounds in memset16
[ 104.925082] ==================================================================
[ 104.933214] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_16+0x5c/0x6c [test_kasan]
[ 104.942023] Write of size 16 at addr cb183e81 by task insmod/1453
[ 104.948453]
[ 104.950258] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 104.960667] Hardware name: Broadcom STB (Flattened Device Tree)
[ 104.967135] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 104.975510] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 104.983395] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 104.992717] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 105.002334] [<c03a838c>] (kasan_report) from [<c03a7430>] (memset+0x20/0x34)
[ 105.010356] [<c03a7430>] (memset) from [<bf004808>] (kmalloc_oob_memset_16+0x5c/0x6c [test_kasan])
[ 105.020741] [<bf004808>] (kmalloc_oob_memset_16 [test_kasan]) from [<bf004ce0>] (kmalloc_tests_init+0x3c/0x35c [test_kasan])
[ 105.033091] [<bf004ce0>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 105.043404] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 105.052196] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 105.060913] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 105.069564] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 105.078121]
[ 105.079825] Allocated by task 1453:
[ 105.083712] kmem_cache_alloc_trace+0xb4/0x170
[ 105.088892] kmalloc_oob_memset_16+0x30/0x6c [test_kasan]
[ 105.095046] kmalloc_tests_init+0x3c/0x35c [test_kasan]
[ 105.100664] do_one_initcall+0x60/0x1b0
[ 105.104865] do_init_module+0xd4/0x2cc
[ 105.108975] load_module+0x3110/0x3af0
[ 105.113088] SyS_init_module+0x184/0x1bc
[ 105.117409] ret_fast_syscall+0x0/0x48
[ 105.121428]
[ 105.123130] Freed by task 0:
[ 105.126260] (stack is not available)
[ 105.130099]
[ 105.131853] The buggy address belongs to the object at cb183e80
[ 105.131853] which belongs to the cache kmalloc-64 of size 64
[ 105.144192] The buggy address is located 1 bytes inside of
[ 105.144192] 64-byte region [cb183e80, cb183ec0)
[ 105.154867] The buggy address belongs to the page:
[ 105.160009] page:ee95b060 count:1 mapcount:0 mapping:cb183000 index:0x0
[ 105.167020] flags: 0x100(slab)
[ 105.170536] raw: 00000100 cb183000 00000000 00000020 00000001 ee95e894 d000108c d0000000
[ 105.179122] page dumped because: kasan: bad access detected
[ 105.185004]
[ 105.186701] Memory state around the buggy address:
[ 105.191884] cb183d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 105.198851] cb183e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 105.205820] >cb183e80: 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 105.212698] ^
[ 105.216091] cb183f00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.223059] cb183f80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.229953] ==================================================================
[ 105.238004] kasan test: kmalloc_uaf use-after-free
[ 105.244102] ==================================================================
[ 105.252221] BUG: KASAN: use-after-free in kmalloc_uaf+0x58/0x68 [test_kasan]
[ 105.259698] Write of size 1 at addr cb183e08 by task insmod/1453
[ 105.266027]
[ 105.267827] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 105.278209] Hardware name: Broadcom STB (Flattened Device Tree)
[ 105.284703] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 105.293065] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 105.300939] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 105.310252] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 105.320182] [<c03a838c>] (kasan_report) from [<bf00442c>] (kmalloc_uaf+0x58/0x68 [test_kasan])
[ 105.330209] [<bf00442c>] (kmalloc_uaf [test_kasan]) from [<bf004ce4>] (kmalloc_tests_init+0x40/0x35c [test_kasan])
[ 105.341674] [<bf004ce4>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 105.351982] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 105.360787] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 105.369505] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 105.378142] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 105.386710]
[ 105.388423] Allocated by task 1453:
[ 105.392317] kmem_cache_alloc_trace+0xb4/0x170
[ 105.397487] kmalloc_uaf+0x30/0x68 [test_kasan]
[ 105.402758] kmalloc_tests_init+0x40/0x35c [test_kasan]
[ 105.408389] do_one_initcall+0x60/0x1b0
[ 105.412597] do_init_module+0xd4/0x2cc
[ 105.416705] load_module+0x3110/0x3af0
[ 105.420819] SyS_init_module+0x184/0x1bc
[ 105.425126] ret_fast_syscall+0x0/0x48
[ 105.429130]
[ 105.430833] Freed by task 1453:
[ 105.434344] kfree+0x64/0x100
[ 105.437983] kmalloc_uaf+0x50/0x68 [test_kasan]
[ 105.443246] kmalloc_tests_init+0x40/0x35c [test_kasan]
[ 105.448877] do_one_initcall+0x60/0x1b0
[ 105.453079] do_init_module+0xd4/0x2cc
[ 105.457188] load_module+0x3110/0x3af0
[ 105.461319] SyS_init_module+0x184/0x1bc
[ 105.465634] ret_fast_syscall+0x0/0x48
[ 105.469638]
[ 105.471403] The buggy address belongs to the object at cb183e00
[ 105.471403] which belongs to the cache kmalloc-64 of size 64
[ 105.483749] The buggy address is located 8 bytes inside of
[ 105.483749] 64-byte region [cb183e00, cb183e40)
[ 105.494422] The buggy address belongs to the page:
[ 105.499573] page:ee95b060 count:1 mapcount:0 mapping:cb183000 index:0x0
[ 105.506589] flags: 0x100(slab)
[ 105.510094] raw: 00000100 cb183000 00000000 00000020 00000001 ee95e894 d000108c d0000000
[ 105.518688] page dumped because: kasan: bad access detected
[ 105.524572]
[ 105.526279] Memory state around the buggy address:
[ 105.531479] cb183d00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 105.538456] cb183d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 105.545437] >cb183e00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.552325] ^
[ 105.555460] cb183e80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.562442] cb183f00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.569352] ==================================================================
[ 105.577198] kasan test: kmalloc_uaf_memset use-after-free in memset
[ 105.585014] ==================================================================
[ 105.593150] BUG: KASAN: use-after-free in kmalloc_tests_init+0x44/0x35c [test_kasan]
[ 105.601420] Write of size 33 at addr cb183d80 by task insmod/1453
[ 105.607836]
[ 105.609637] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 105.620019] Hardware name: Broadcom STB (Flattened Device Tree)
[ 105.626501] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 105.634870] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 105.642758] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 105.652066] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 105.661682] [<c03a838c>] (kasan_report) from [<c03a7430>] (memset+0x20/0x34)
[ 105.669707] [<c03a7430>] (memset) from [<bf004ce8>] (kmalloc_tests_init+0x44/0x35c [test_kasan])
[ 105.679557] [<bf004ce8>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 105.689871] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 105.698676] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 105.707390] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 105.716025] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 105.724597]
[ 105.726311] Allocated by task 1453:
[ 105.730203] kmem_cache_alloc_trace+0xb4/0x170
[ 105.735391] kmalloc_uaf_memset+0x30/0x68 [test_kasan]
[ 105.741283] kmalloc_tests_init+0x44/0x35c [test_kasan]
[ 105.746909] do_one_initcall+0x60/0x1b0
[ 105.751109] do_init_module+0xd4/0x2cc
[ 105.755220] load_module+0x3110/0x3af0
[ 105.759361] SyS_init_module+0x184/0x1bc
[ 105.763668] ret_fast_syscall+0x0/0x48
[ 105.767672]
[ 105.769385] Freed by task 1453:
[ 105.772886] kfree+0x64/0x100
[ 105.776546] kmalloc_uaf_memset+0x50/0x68 [test_kasan]
[ 105.782446] kmalloc_tests_init+0x44/0x35c [test_kasan]
[ 105.788062] do_one_initcall+0x60/0x1b0
[ 105.792267] do_init_module+0xd4/0x2cc
[ 105.796396] load_module+0x3110/0x3af0
[ 105.800521] SyS_init_module+0x184/0x1bc
[ 105.804828] ret_fast_syscall+0x0/0x48
[ 105.808834]
[ 105.810588] The buggy address belongs to the object at cb183d80
[ 105.810588] which belongs to the cache kmalloc-64 of size 64
[ 105.822925] The buggy address is located 0 bytes inside of
[ 105.822925] 64-byte region [cb183d80, cb183dc0)
[ 105.833598] The buggy address belongs to the page:
[ 105.838741] page:ee95b060 count:1 mapcount:0 mapping:cb183000 index:0x0
[ 105.845752] flags: 0x100(slab)
[ 105.849263] raw: 00000100 cb183000 00000000 00000020 00000001 ee95e894 d000108c d0000000
[ 105.857858] page dumped because: kasan: bad access detected
[ 105.863739]
[ 105.865444] Memory state around the buggy address:
[ 105.870631] cb183c80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 105.877613] cb183d00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 105.884593] >cb183d80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.891483] ^
[ 105.894352] cb183e00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.901334] cb183e80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 105.908233] ==================================================================
[ 105.916094] kasan test: kmalloc_uaf2 use-after-free after another kmalloc
[ 105.924783] ==================================================================
[ 105.932911] BUG: KASAN: use-after-free in kmalloc_uaf2+0x74/0xa4 [test_kasan]
[ 105.940479] Write of size 1 at addr cb183d28 by task insmod/1453
[ 105.946808]
[ 105.948610] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 105.958991] Hardware name: Broadcom STB (Flattened Device Tree)
[ 105.965474] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 105.973845] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 105.981733] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 105.991041] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 106.000959] [<c03a838c>] (kasan_report) from [<bf0044b0>] (kmalloc_uaf2+0x74/0xa4 [test_kasan])
[ 106.011065] [<bf0044b0>] (kmalloc_uaf2 [test_kasan]) from [<bf004cec>] (kmalloc_tests_init+0x48/0x35c [test_kasan])
[ 106.022610] [<bf004cec>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 106.032925] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 106.041727] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 106.050441] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 106.059077] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 106.067646]
[ 106.069367] Allocated by task 1453:
[ 106.073259] kmem_cache_alloc_trace+0xb4/0x170
[ 106.078436] kmalloc_uaf2+0x30/0xa4 [test_kasan]
[ 106.083796] kmalloc_tests_init+0x48/0x35c [test_kasan]
[ 106.089428] do_one_initcall+0x60/0x1b0
[ 106.093631] do_init_module+0xd4/0x2cc
[ 106.097739] load_module+0x3110/0x3af0
[ 106.101852] SyS_init_module+0x184/0x1bc
[ 106.106158] ret_fast_syscall+0x0/0x48
[ 106.110170]
[ 106.111878] Freed by task 1453:
[ 106.115390] kfree+0x64/0x100
[ 106.119030] kmalloc_uaf2+0x50/0xa4 [test_kasan]
[ 106.124389] kmalloc_tests_init+0x48/0x35c [test_kasan]
[ 106.130007] do_one_initcall+0x60/0x1b0
[ 106.134208] do_init_module+0xd4/0x2cc
[ 106.138345] load_module+0x3110/0x3af0
[ 106.142467] SyS_init_module+0x184/0x1bc
[ 106.146775] ret_fast_syscall+0x0/0x48
[ 106.150781]
[ 106.152538] The buggy address belongs to the object at cb183d00
[ 106.152538] which belongs to the cache kmalloc-64 of size 64
[ 106.164882] The buggy address is located 40 bytes inside of
[ 106.164882] 64-byte region [cb183d00, cb183d40)
[ 106.175645] The buggy address belongs to the page:
[ 106.180788] page:ee95b060 count:1 mapcount:0 mapping:cb183000 index:0x0
[ 106.187798] flags: 0x100(slab)
[ 106.191312] raw: 00000100 cb183000 00000000 00000020 00000001 ee95e894 d000108c d0000000
[ 106.199900] page dumped because: kasan: bad access detected
[ 106.205782]
[ 106.207483] Memory state around the buggy address:
[ 106.212663] cb183c00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 106.219640] cb183c80: 00 00 00 00 00 03 fc fc fc fc fc fc fc fc fc fc
[ 106.226619] >cb183d00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 106.233515] ^
[ 106.237712] cb183d80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 106.244688] cb183e00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 106.251590] ==================================================================
[ 106.262793] kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc
[ 106.270686] ==================================================================
[ 106.278825] BUG: KASAN: slab-out-of-bounds in kmem_cache_oob+0x88/0xb8 [test_kasan]
[ 106.286996] Read of size 1 at addr cb184f78 by task insmod/1453
[ 106.293239]
[ 106.295051] CPU: 2 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 106.305445] Hardware name: Broadcom STB (Flattened Device Tree)
[ 106.311914] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 106.320283] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 106.328166] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 106.337495] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 106.347417] [<c03a838c>] (kasan_report) from [<bf004908>] (kmem_cache_oob+0x88/0xb8 [test_kasan])
[ 106.357708] [<bf004908>] (kmem_cache_oob [test_kasan]) from [<bf004cf0>] (kmalloc_tests_init+0x4c/0x35c [test_kasan])
[ 106.369435] [<bf004cf0>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 106.379750] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 106.388558] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 106.397267] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 106.405922] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 106.414491]
[ 106.416198] Allocated by task 1453:
[ 106.420081] kmem_cache_alloc+0xac/0x16c
[ 106.424720] kmem_cache_oob+0x64/0xb8 [test_kasan]
[ 106.430252] kmalloc_tests_init+0x4c/0x35c [test_kasan]
[ 106.435880] do_one_initcall+0x60/0x1b0
[ 106.440084] do_init_module+0xd4/0x2cc
[ 106.444191] load_module+0x3110/0x3af0
[ 106.448321] SyS_init_module+0x184/0x1bc
[ 106.452635] ret_fast_syscall+0x0/0x48
[ 106.456641]
[ 106.458353] Freed by task 0:
[ 106.461480] (stack is not available)
[ 106.465313]
[ 106.467071] The buggy address belongs to the object at cb184eb0
[ 106.467071] which belongs to the cache test_cache of size 200
[ 106.479514] The buggy address is located 0 bytes to the right of
[ 106.479514] 200-byte region [cb184eb0, cb184f78)
[ 106.490804] The buggy address belongs to the page:
[ 106.495945] page:ee95b080 count:1 mapcount:0 mapping:cb184040 index:0x0
[ 106.502959] flags: 0x100(slab)
[ 106.506476] raw: 00000100 cb184040 00000000 0000000f 00000001 cb183b8c cb183b8c cdc35780
[ 106.515063] page dumped because: kasan: bad access detected
[ 106.520946]
[ 106.522642] Memory state around the buggy address:
[ 106.527824] cb184e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 106.534793] cb184e80: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00
[ 106.541761] >cb184f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[ 106.548655] ^
[ 106.555546] cb184f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 106.562527] cb185000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 106.569433] ==================================================================
[ 106.598153] kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
[ 107.145531] kasan test: kasan_stack_oob out-of-bounds on stack
[ 107.151915] kasan test: kasan_global_oob out-of-bounds global variable
[ 107.159004] kasan test: ksize_unpoisons_memory ksize() unpoisons the whole allocated chunk
[ 107.168566] ==================================================================
[ 107.176705] BUG: KASAN: slab-out-of-bounds in ksize_unpoisons_memory+0x6c/0x84 [test_kasan]
[ 107.185593] Write of size 1 at addr cb347a40 by task insmod/1453
[ 107.191920]
[ 107.193723] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 107.204106] Hardware name: Broadcom STB (Flattened Device Tree)
[ 107.210581] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 107.218944] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 107.226817] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 107.236127] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 107.246046] [<c03a838c>] (kasan_report) from [<bf004a58>] (ksize_unpoisons_memory+0x6c/0x84 [test_kasan])
[ 107.257051] [<bf004a58>] (ksize_unpoisons_memory [test_kasan]) from [<bf004d00>] (kmalloc_tests_init+0x5c/0x35c [test_kasan])
[ 107.269479] [<bf004d00>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 107.279783] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 107.288579] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 107.297282] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 107.305919] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 107.314480]
[ 107.316187] Allocated by task 1453:
[ 107.320078] kmem_cache_alloc_trace+0xb4/0x170
[ 107.325251] ksize_unpoisons_memory+0x30/0x84 [test_kasan]
[ 107.331495] kmalloc_tests_init+0x5c/0x35c [test_kasan]
[ 107.337113] do_one_initcall+0x60/0x1b0
[ 107.341317] do_init_module+0xd4/0x2cc
[ 107.345424] load_module+0x3110/0x3af0
[ 107.349540] SyS_init_module+0x184/0x1bc
[ 107.353848] ret_fast_syscall+0x0/0x48
[ 107.357855]
[ 107.359554] Freed by task 0:
[ 107.362677] (stack is not available)
[ 107.366501]
[ 107.368256] The buggy address belongs to the object at cb3479c0
[ 107.368256] which belongs to the cache kmalloc-128 of size 128
[ 107.380776] The buggy address is located 0 bytes to the right of
[ 107.380776] 128-byte region [cb3479c0, cb347a40)
[ 107.392062] The buggy address belongs to the page:
[ 107.397206] page:ee95e8e0 count:1 mapcount:0 mapping:cb347000 index:0x0
[ 107.404219] flags: 0x100(slab)
[ 107.407727] raw: 00000100 cb347000 00000000 00000015 00000001 ee967b94 d000110c d00000c0
[ 107.416312] page dumped because: kasan: bad access detected
[ 107.422192]
[ 107.423888] Memory state around the buggy address:
[ 107.429068] cb347900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 107.436035] cb347980: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
[ 107.443004] >cb347a00: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
[ 107.449890] ^
[ 107.454892] cb347a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 107.461859] cb347b00: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
[ 107.468756] ==================================================================
[ 107.478535] kasan test: copy_user_test out-of-bounds in copy_from_user()
[ 107.485803] ==================================================================
[ 107.493934] BUG: KASAN: slab-out-of-bounds in copy_user_test+0xb4/0x234 [test_kasan]
[ 107.502195] Write of size 11 at addr cb344100 by task insmod/1453
[ 107.508613]
[ 107.510413] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 107.520797] Hardware name: Broadcom STB (Flattened Device Tree)
[ 107.527267] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 107.535629] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 107.543505] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 107.552815] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 107.562729] [<c03a838c>] (kasan_report) from [<bf004b24>] (copy_user_test+0xb4/0x234 [test_kasan])
[ 107.573101] [<bf004b24>] (copy_user_test [test_kasan]) from [<bf004d04>] (kmalloc_tests_init+0x60/0x35c [test_kasan])
[ 107.584818] [<bf004d04>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 107.595123] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 107.603918] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 107.612623] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 107.621261] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 107.629818]
[ 107.631524] Allocated by task 1453:
[ 107.635412] kmem_cache_alloc_trace+0xb4/0x170
[ 107.640577] copy_user_test+0x24/0x234 [test_kasan]
[ 107.646195] kmalloc_tests_init+0x60/0x35c [test_kasan]
[ 107.651813] do_one_initcall+0x60/0x1b0
[ 107.656014] do_init_module+0xd4/0x2cc
[ 107.660125] load_module+0x3110/0x3af0
[ 107.664241] SyS_init_module+0x184/0x1bc
[ 107.668549] ret_fast_syscall+0x0/0x48
[ 107.672553]
[ 107.674254] Freed by task 0:
[ 107.677374] (stack is not available)
[ 107.681198]
[ 107.682953] The buggy address belongs to the object at cb344100
[ 107.682953] which belongs to the cache kmalloc-64 of size 64
[ 107.695289] The buggy address is located 0 bytes inside of
[ 107.695289] 64-byte region [cb344100, cb344140)
[ 107.705951] The buggy address belongs to the page:
[ 107.711102] page:ee95e880 count:1 mapcount:0 mapping:cb344000 index:0xcb344800
[ 107.718822] flags: 0x100(slab)
[ 107.722333] raw: 00000100 cb344000 cb344800 0000001f 00000001 d0001084 ee963174 d0000000
[ 107.730918] page dumped because: kasan: bad access detected
[ 107.736798]
[ 107.738496] Memory state around the buggy address:
[ 107.743677] cb344000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 107.750644] cb344080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 107.757613] >cb344100: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 107.764491] ^
[ 107.767617] cb344180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 107.774585] cb344200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 107.781477] ==================================================================
[ 107.789655] kasan test: copy_user_test out-of-bounds in copy_to_user()
[ 107.796746] ==================================================================
[ 107.804879] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x12c/0x234 [test_kasan]
[ 107.813230] Read of size 11 at addr cb344100 by task insmod/1453
[ 107.819558]
[ 107.821357] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 107.831739] Hardware name: Broadcom STB (Flattened Device Tree)
[ 107.838207] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 107.846572] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 107.854448] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 107.863759] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 107.873676] [<c03a838c>] (kasan_report) from [<bf004b9c>] (copy_user_test+0x12c/0x234 [test_kasan])
[ 107.884138] [<bf004b9c>] (copy_user_test [test_kasan]) from [<bf004d04>] (kmalloc_tests_init+0x60/0x35c [test_kasan])
[ 107.895852] [<bf004d04>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 107.906156] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 107.914947] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 107.923650] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 107.932286] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 107.940847]
[ 107.942552] Allocated by task 1453:
[ 107.946439] kmem_cache_alloc_trace+0xb4/0x170
[ 107.951604] copy_user_test+0x24/0x234 [test_kasan]
[ 107.957221] kmalloc_tests_init+0x60/0x35c [test_kasan]
[ 107.962839] do_one_initcall+0x60/0x1b0
[ 107.967039] do_init_module+0xd4/0x2cc
[ 107.971151] load_module+0x3110/0x3af0
[ 107.975266] SyS_init_module+0x184/0x1bc
[ 107.979575] ret_fast_syscall+0x0/0x48
[ 107.983581]
[ 107.985281] Freed by task 0:
[ 107.988405] (stack is not available)
[ 107.992231]
[ 107.993985] The buggy address belongs to the object at cb344100
[ 107.993985] which belongs to the cache kmalloc-64 of size 64
[ 108.006323] The buggy address is located 0 bytes inside of
[ 108.006323] 64-byte region [cb344100, cb344140)
[ 108.016983] The buggy address belongs to the page:
[ 108.022132] page:ee95e880 count:1 mapcount:0 mapping:cb344000 index:0xcb344800
[ 108.029848] flags: 0x100(slab)
[ 108.033360] raw: 00000100 cb344000 cb344800 0000001f 00000001 d0001084 ee963174 d0000000
[ 108.041943] page dumped because: kasan: bad access detected
[ 108.047827]
[ 108.049523] Memory state around the buggy address:
[ 108.054704] cb344000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.061671] cb344080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.068641] >cb344100: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.075517] ^
[ 108.078643] cb344180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 108.085610] cb344200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 108.092507] ==================================================================
[ 108.101783] kasan test: copy_user_test out-of-bounds in __copy_from_user()
[ 108.109227] ==================================================================
[ 108.117361] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x158/0x234 [test_kasan]
[ 108.125709] Write of size 11 at addr cb344100 by task insmod/1453
[ 108.132128]
[ 108.133928] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 108.144311] Hardware name: Broadcom STB (Flattened Device Tree)
[ 108.150781] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 108.159144] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 108.167016] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 108.176328] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 108.186244] [<c03a838c>] (kasan_report) from [<bf004bc8>] (copy_user_test+0x158/0x234 [test_kasan])
[ 108.196705] [<bf004bc8>] (copy_user_test [test_kasan]) from [<bf004d04>] (kmalloc_tests_init+0x60/0x35c [test_kasan])
[ 108.208423] [<bf004d04>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 108.218726] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 108.227519] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 108.236221] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 108.244858] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 108.253418]
[ 108.255125] Allocated by task 1453:
[ 108.259014] kmem_cache_alloc_trace+0xb4/0x170
[ 108.264181] copy_user_test+0x24/0x234 [test_kasan]
[ 108.269799] kmalloc_tests_init+0x60/0x35c [test_kasan]
[ 108.275416] do_one_initcall+0x60/0x1b0
[ 108.279617] do_init_module+0xd4/0x2cc
[ 108.283727] load_module+0x3110/0x3af0
[ 108.287839] SyS_init_module+0x184/0x1bc
[ 108.292147] ret_fast_syscall+0x0/0x48
[ 108.296154]
[ 108.297852] Freed by task 0:
[ 108.300973] (stack is not available)
[ 108.304797]
[ 108.306555] The buggy address belongs to the object at cb344100
[ 108.306555] which belongs to the cache kmalloc-64 of size 64
[ 108.318895] The buggy address is located 0 bytes inside of
[ 108.318895] 64-byte region [cb344100, cb344140)
[ 108.329557] The buggy address belongs to the page:
[ 108.334708] page:ee95e880 count:1 mapcount:0 mapping:cb344000 index:0xcb344800
[ 108.342426] flags: 0x100(slab)
[ 108.345936] raw: 00000100 cb344000 cb344800 0000001f 00000001 d0001084 ee963174 d0000000
[ 108.354520] page dumped because: kasan: bad access detected
[ 108.360400]
[ 108.362099] Memory state around the buggy address:
[ 108.367278] cb344000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.374245] cb344080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.381212] >cb344100: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.388088] ^
[ 108.391212] cb344180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 108.398180] cb344200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 108.405076] ==================================================================
[ 108.413052] kasan test: copy_user_test out-of-bounds in __copy_to_user()
[ 108.420442] ==================================================================
[ 108.428575] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x184/0x234 [test_kasan]
[ 108.436926] Read of size 11 at addr cb344100 by task insmod/1453
[ 108.443256]
[ 108.445055] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 108.455438] Hardware name: Broadcom STB (Flattened Device Tree)
[ 108.461907] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 108.470272] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 108.478148] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 108.487457] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 108.497374] [<c03a838c>] (kasan_report) from [<bf004bf4>] (copy_user_test+0x184/0x234 [test_kasan])
[ 108.507838] [<bf004bf4>] (copy_user_test [test_kasan]) from [<bf004d04>] (kmalloc_tests_init+0x60/0x35c [test_kasan])
[ 108.519555] [<bf004d04>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 108.529858] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 108.538652] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 108.547355] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 108.555992] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 108.564551]
[ 108.566256] Allocated by task 1453:
[ 108.570143] kmem_cache_alloc_trace+0xb4/0x170
[ 108.575307] copy_user_test+0x24/0x234 [test_kasan]
[ 108.580926] kmalloc_tests_init+0x60/0x35c [test_kasan]
[ 108.586544] do_one_initcall+0x60/0x1b0
[ 108.590744] do_init_module+0xd4/0x2cc
[ 108.594852] load_module+0x3110/0x3af0
[ 108.598968] SyS_init_module+0x184/0x1bc
[ 108.603277] ret_fast_syscall+0x0/0x48
[ 108.607280]
[ 108.608980] Freed by task 0:
[ 108.612101] (stack is not available)
[ 108.615927]
[ 108.617680] The buggy address belongs to the object at cb344100
[ 108.617680] which belongs to the cache kmalloc-64 of size 64
[ 108.630019] The buggy address is located 0 bytes inside of
[ 108.630019] 64-byte region [cb344100, cb344140)
[ 108.640683] The buggy address belongs to the page:
[ 108.645833] page:ee95e880 count:1 mapcount:0 mapping:cb344000 index:0xcb344800
[ 108.653549] flags: 0x100(slab)
[ 108.657059] raw: 00000100 cb344000 cb344800 0000001f 00000001 d0001084 ee963174 d0000000
[ 108.665644] page dumped because: kasan: bad access detected
[ 108.671525]
[ 108.673222] Memory state around the buggy address:
[ 108.678403] cb344000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.685371] cb344080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.692338] >cb344100: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.699215] ^
[ 108.702340] cb344180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 108.709306] cb344200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 108.716201] ==================================================================
[ 108.724182] kasan test: copy_user_test out-of-bounds in __copy_from_user_inatomic()
[ 108.732511] ==================================================================
[ 108.740646] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x1b0/0x234 [test_kasan]
[ 108.748996] Write of size 11 at addr cb344100 by task insmod/1453
[ 108.755415]
[ 108.757209] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 108.767593] Hardware name: Broadcom STB (Flattened Device Tree)
[ 108.774063] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 108.782426] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 108.790300] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 108.799611] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 108.809526] [<c03a838c>] (kasan_report) from [<bf004c20>] (copy_user_test+0x1b0/0x234 [test_kasan])
[ 108.819989] [<bf004c20>] (copy_user_test [test_kasan]) from [<bf004d04>] (kmalloc_tests_init+0x60/0x35c [test_kasan])
[ 108.831703] [<bf004d04>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 108.842007] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 108.850803] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 108.859506] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 108.868144] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 108.876702]
[ 108.878410] Allocated by task 1453:
[ 108.882300] kmem_cache_alloc_trace+0xb4/0x170
[ 108.887470] copy_user_test+0x24/0x234 [test_kasan]
[ 108.893088] kmalloc_tests_init+0x60/0x35c [test_kasan]
[ 108.898705] do_one_initcall+0x60/0x1b0
[ 108.902906] do_init_module+0xd4/0x2cc
[ 108.907016] load_module+0x3110/0x3af0
[ 108.911130] SyS_init_module+0x184/0x1bc
[ 108.915437] ret_fast_syscall+0x0/0x48
[ 108.919441]
[ 108.921140] Freed by task 0:
[ 108.924260] (stack is not available)
[ 108.928084]
[ 108.929836] The buggy address belongs to the object at cb344100
[ 108.929836] which belongs to the cache kmalloc-64 of size 64
[ 108.942173] The buggy address is located 0 bytes inside of
[ 108.942173] 64-byte region [cb344100, cb344140)
[ 108.952835] The buggy address belongs to the page:
[ 108.957986] page:ee95e880 count:1 mapcount:0 mapping:cb344000 index:0xcb344800
[ 108.965702] flags: 0x100(slab)
[ 108.969213] raw: 00000100 cb344000 cb344800 0000001f 00000001 d0001084 ee963174 d0000000
[ 108.977800] page dumped because: kasan: bad access detected
[ 108.983683]
[ 108.985379] Memory state around the buggy address:
[ 108.990559] cb344000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 108.997526] cb344080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.004496] >cb344100: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.011374] ^
[ 109.014497] cb344180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 109.021465] cb344200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 109.028359] ==================================================================
[ 109.036546] kasan test: copy_user_test out-of-bounds in __copy_to_user_inatomic()
[ 109.044665] ==================================================================
[ 109.052799] BUG: KASAN: slab-out-of-bounds in copy_user_test+0x1dc/0x234 [test_kasan]
[ 109.061147] Read of size 11 at addr cb344100 by task insmod/1453
[ 109.067476]
[ 109.069276] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 109.079660] Hardware name: Broadcom STB (Flattened Device Tree)
[ 109.086129] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 109.094491] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 109.102366] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 109.111678] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 109.121592] [<c03a838c>] (kasan_report) from [<bf004c4c>] (copy_user_test+0x1dc/0x234 [test_kasan])
[ 109.132052] [<bf004c4c>] (copy_user_test [test_kasan]) from [<bf004d04>] (kmalloc_tests_init+0x60/0x35c [test_kasan])
[ 109.143765] [<bf004d04>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 109.154070] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 109.162863] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 109.171565] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 109.180203] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 109.188763]
[ 109.190472] Allocated by task 1453:
[ 109.194361] kmem_cache_alloc_trace+0xb4/0x170
[ 109.199529] copy_user_test+0x24/0x234 [test_kasan]
[ 109.205147] kmalloc_tests_init+0x60/0x35c [test_kasan]
[ 109.210765] do_one_initcall+0x60/0x1b0
[ 109.214965] do_init_module+0xd4/0x2cc
[ 109.219073] load_module+0x3110/0x3af0
[ 109.223188] SyS_init_module+0x184/0x1bc
[ 109.227497] ret_fast_syscall+0x0/0x48
[ 109.231503]
[ 109.233201] Freed by task 0:
[ 109.236322] (stack is not available)
[ 109.240146]
[ 109.241898] The buggy address belongs to the object at cb344100
[ 109.241898] which belongs to the cache kmalloc-64 of size 64
[ 109.254235] The buggy address is located 0 bytes inside of
[ 109.254235] 64-byte region [cb344100, cb344140)
[ 109.264898] The buggy address belongs to the page:
[ 109.270049] page:ee95e880 count:1 mapcount:0 mapping:cb344000 index:0xcb344800
[ 109.277765] flags: 0x100(slab)
[ 109.281277] raw: 00000100 cb344000 cb344800 0000001f 00000001 d0001084 ee963174 d0000000
[ 109.289861] page dumped because: kasan: bad access detected
[ 109.295742]
[ 109.297438] Memory state around the buggy address:
[ 109.302618] cb344000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.309585] cb344080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.316555] >cb344100: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.323431] ^
[ 109.326556] cb344180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 109.333526] cb344200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 109.340420] ==================================================================
[ 109.348407] kasan test: copy_user_test out-of-bounds in strncpy_from_user()
[ 109.355915] ==================================================================
[ 109.363705] BUG: KASAN: slab-out-of-bounds in strncpy_from_user+0x58/0x1f4
[ 109.370996] Write of size 11 at addr cb344100 by task insmod/1453
[ 109.377414]
[ 109.379217] CPU: 3 PID: 1453 Comm: insmod Tainted: G B 4.14.0-rc4-00095-gcd1a365fca2e-dirty #31
[ 109.389600] Hardware name: Broadcom STB (Flattened Device Tree)
[ 109.396070] [<c0214cb4>] (unwind_backtrace) from [<c020e664>] (show_stack+0x10/0x14)
[ 109.404433] [<c020e664>] (show_stack) from [<c0c7daa8>] (dump_stack+0x90/0xa4)
[ 109.412306] [<c0c7daa8>] (dump_stack) from [<c03a7c88>] (print_address_description+0x50/0x24c)
[ 109.421615] [<c03a7c88>] (print_address_description) from [<c03a838c>] (kasan_report+0x238/0x324)
[ 109.431187] [<c03a838c>] (kasan_report) from [<c06ba0e8>] (strncpy_from_user+0x58/0x1f4)
[ 109.440325] [<c06ba0e8>] (strncpy_from_user) from [<bf004c7c>] (copy_user_test+0x20c/0x234 [test_kasan])
[ 109.451233] [<bf004c7c>] (copy_user_test [test_kasan]) from [<bf004d04>] (kmalloc_tests_init+0x60/0x35c [test_kasan])
[ 109.462947] [<bf004d04>] (kmalloc_tests_init [test_kasan]) from [<c0201ef4>] (do_one_initcall+0x60/0x1b0)
[ 109.473251] [<c0201ef4>] (do_one_initcall) from [<c02db4bc>] (do_init_module+0xd4/0x2cc)
[ 109.482046] [<c02db4bc>] (do_init_module) from [<c02d9fe4>] (load_module+0x3110/0x3af0)
[ 109.490748] [<c02d9fe4>] (load_module) from [<c02dab48>] (SyS_init_module+0x184/0x1bc)
[ 109.499385] [<c02dab48>] (SyS_init_module) from [<c0209640>] (ret_fast_syscall+0x0/0x48)
[ 109.507946]
[ 109.509652] Allocated by task 1453:
[ 109.513540] kmem_cache_alloc_trace+0xb4/0x170
[ 109.518705] copy_user_test+0x24/0x234 [test_kasan]
[ 109.524323] kmalloc_tests_init+0x60/0x35c [test_kasan]
[ 109.529941] do_one_initcall+0x60/0x1b0
[ 109.534142] do_init_module+0xd4/0x2cc
[ 109.538252] load_module+0x3110/0x3af0
[ 109.542359] SyS_init_module+0x184/0x1bc
[ 109.546668] ret_fast_syscall+0x0/0x48
[ 109.550672]
[ 109.552370] Freed by task 0:
[ 109.555490] (stack is not available)
[ 109.559315]
[ 109.561069] The buggy address belongs to the object at cb344100
[ 109.561069] which belongs to the cache kmalloc-64 of size 64
[ 109.573405] The buggy address is located 0 bytes inside of
[ 109.573405] 64-byte region [cb344100, cb344140)
[ 109.584068] The buggy address belongs to the page:
[ 109.589219] page:ee95e880 count:1 mapcount:0 mapping:cb344000 index:0xcb344800
[ 109.596935] flags: 0x100(slab)
[ 109.600444] raw: 00000100 cb344000 cb344800 0000001f 00000001 ee963174 d0001084 d0000000
[ 109.609032] page dumped because: kasan: bad access detected
[ 109.614911]
[ 109.616608] Memory state around the buggy address:
[ 109.621788] cb344000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.628756] cb344080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.635723] >cb344100: 00 02 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 109.642600] ^
[ 109.645725] cb344180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 109.652693] cb344200: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[ 109.659589] ==================================================================
[ 109.668931] kasan test: use_after_scope_test use-after-scope on int
[ 109.675755] kasan test: use_after_scope_test use-after-scope on array
insmod: can't insert 'test_kasan.ko': Resource temporarily unavailable
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 19:39 ` Florian Fainelli
@ 2017-10-11 21:41 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-11 21:41 UTC (permalink / raw)
To: Florian Fainelli
Cc: Abbott Liu, aryabinin, afzal.mohd.ma, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, opendmb,
linux-kernel, kasan-dev, zengweilin, linux-mm, dylix.dailei,
glider, dvyukov, jiazhenghua, linux-arm-kernel, heshaoliang
On Wed, Oct 11, 2017 at 12:39:39PM -0700, Florian Fainelli wrote:
> On 10/11/2017 01:22 AM, Abbott Liu wrote:
> > diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> > index 8733012..c17f4a2 100644
> > --- a/arch/arm/kernel/head-common.S
> > +++ b/arch/arm/kernel/head-common.S
> > @@ -101,7 +101,11 @@ __mmap_switched:
> > str r2, [r6] @ Save atags pointer
> > cmp r7, #0
> > strne r0, [r7] @ Save control register values
> > +#ifdef CONFIG_KASAN
> > + b kasan_early_init
> > +#else
> > b start_kernel
> > +#endif
>
> Please don't make this "exclusive" just conditionally call
> kasan_early_init(), remove the call to start_kernel from
> kasan_early_init and keep the call to start_kernel here.
iow:
#ifdef CONFIG_KASAN
bl kasan_early_init
#endif
b start_kernel
This has the advantage that we don't leave any stack frame from
kasan_early_init() on the init task stack.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
* Re: [PATCH 00/11] KASan for arm
2017-10-11 21:36 ` Florian Fainelli
2017-10-11 22:10 ` Laura Abbott
@ 2017-10-11 22:10 ` Laura Abbott
1 sibling, 0 replies; 253+ messages in thread
From: Laura Abbott @ 2017-10-11 22:10 UTC (permalink / raw)
To: Florian Fainelli, Abbott Liu, linux, aryabinin, afzal.mohd.ma,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang, Nicolas Pitre
On 10/11/2017 02:36 PM, Florian Fainelli wrote:
> On 10/11/2017 12:50 PM, Florian Fainelli wrote:
>> On 10/11/2017 12:13 PM, Florian Fainelli wrote:
>>> Hi Abbott,
>>>
>>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>>>> Hi,all:
>>>> These patches add arch specific code for kernel address sanitizer
>>>> (see Documentation/kasan.txt).
>>>>
>>>> 1/8 of the kernel address space is reserved for shadow memory. There was
>>>> no hole big enough for this, so the virtual addresses for the shadow
>>>> were stolen from user space.
>>>>
>>>> At the early boot stage the whole shadow region is populated with just
>>>> one physical page (kasan_zero_page). Later, this page is reused
>>>> as a readonly zero shadow for some memory that KASan currently
>>>> doesn't track (vmalloc).
>>>>
>>>> After mapping the physical memory, pages for shadow memory are
>>>> allocated and mapped.
>>>>
>>>> KASan's stack instrumentation significantly increases stack
>>>> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>>>>
>>>> Functions like memset/memmove/memcpy do a lot of memory accesses.
>>>> If a bad pointer is passed to one of these functions, it is important
>>>> to catch this. Compiler instrumentation cannot do it, since
>>>> these functions are written in assembly.
>>>>
>>>> KASan replaces the memory functions with manually instrumented variants.
>>>> The original functions are declared as weak symbols so that the strong
>>>> definitions in mm/kasan/kasan.c can replace them. The original functions
>>>> also have aliases with a '__' prefix, so the non-instrumented variant
>>>> can be called when needed.
>>>>
>>>> Some files are built without KASan instrumentation (e.g. mm/slub.c).
>>>> For such files the original mem* functions are replaced (via #define)
>>>> with the prefixed variants to disable the memory access checks.
>>>>
>>>> On the arm LPAE architecture, the mapping table of KASan shadow memory
>>>> (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual space
>>>> is 0xb6e00000~0xbf000000) can't be filled in the do_translation_fault
>>>> function, because KASan instrumentation may cause do_translation_fault
>>>> itself to access KASan shadow memory, and such an access from within
>>>> do_translation_fault could recurse endlessly. So the mapping table of
>>>> KASan shadow memory needs to be copied in the pgd_alloc function.
>>>>
>>>>
>>>> Most of the code comes from:
>>>> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>>>
>>> Thanks for putting these patches together. With either ARM_LPAE=y or
>>> ARM_LPAE=n, I can't get a kernel to build without hitting the following:
>>>
>>> AS arch/arm/kernel/entry-common.o
>>> arch/arm/kernel/entry-common.S: Assembler messages:
>>> arch/arm/kernel/entry-common.S:53: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> arch/arm/kernel/entry-common.S:118: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> scripts/Makefile.build:412: recipe for target
>>> 'arch/arm/kernel/entry-common.o' failed
>>> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
>>> Makefile:1019: recipe for target 'arch/arm/kernel' failed
>>> make[2]: *** [arch/arm/kernel] Error 2
>>> make[2]: *** Waiting for unfinished jobs....
>>>
>>> This is coming from the increase in TASK_SIZE it seems.
>>>
>>> This is on top of v4.14-rc4-84-gff5abbe799e2
>>
>> Seems like we can use the following to get through that build failure:
>>
>> diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
>> index 99c908226065..0de1160d136e 100644
>> --- a/arch/arm/kernel/entry-common.S
>> +++ b/arch/arm/kernel/entry-common.S
>> @@ -50,7 +50,13 @@ ret_fast_syscall:
>> UNWIND(.cantunwind )
>> disable_irq_notrace @ disable interrupts
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall
>> tracing
>> tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
>> @@ -115,7 +121,13 @@ ret_slow_syscall:
>> disable_irq_notrace @ disable interrupts
>> ENTRY(ret_to_user_from_irq)
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS]
>> tst r1, #_TIF_WORK_MASK
>>
>>
>>
>> but then we will see another set of build failures with the decompressor
>> code:
>>
>> WARNING: modpost: Found 2 section mismatch(es).
>> To see full details build your kernel with:
>> 'make CONFIG_DEBUG_SECTION_MISMATCH=y'
>> KSYM .tmp_kallsyms1.o
>> KSYM .tmp_kallsyms2.o
>> LD vmlinux
>> SORTEX vmlinux
>> SYSMAP System.map
>> OBJCOPY arch/arm/boot/Image
>> Kernel: arch/arm/boot/Image is ready
>> LDS arch/arm/boot/compressed/vmlinux.lds
>> AS arch/arm/boot/compressed/head.o
>> XZKERN arch/arm/boot/compressed/piggy_data
>> CC arch/arm/boot/compressed/misc.o
>> CC arch/arm/boot/compressed/decompress.o
>> CC arch/arm/boot/compressed/string.o
>> arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
>> #define memmove memmove
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:67:0: note: this is the location of the
>> previous definition
>> #define memmove(dst, src, len) __memmove(dst, src, len)
>>
>> arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
>> #define memcpy memcpy
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:66:0: note: this is the location of the
>> previous definition
>> #define memcpy(dst, src, len) __memcpy(dst, src, len)
>>
>> SHIPPED arch/arm/boot/compressed/hyp-stub.S
>> SHIPPED arch/arm/boot/compressed/fdt_rw.c
>> SHIPPED arch/arm/boot/compressed/fdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt_internal.h
>> SHIPPED arch/arm/boot/compressed/fdt_ro.c
>> SHIPPED arch/arm/boot/compressed/fdt_wip.c
>> SHIPPED arch/arm/boot/compressed/fdt.c
>> CC arch/arm/boot/compressed/atags_to_fdt.o
>> SHIPPED arch/arm/boot/compressed/lib1funcs.S
>> SHIPPED arch/arm/boot/compressed/ashldi3.S
>> SHIPPED arch/arm/boot/compressed/bswapsdi2.S
>> AS arch/arm/boot/compressed/hyp-stub.o
>> CC arch/arm/boot/compressed/fdt_rw.o
>> CC arch/arm/boot/compressed/fdt_ro.o
>> CC arch/arm/boot/compressed/fdt_wip.o
>> CC arch/arm/boot/compressed/fdt.o
>> AS arch/arm/boot/compressed/lib1funcs.o
>> AS arch/arm/boot/compressed/ashldi3.o
>> AS arch/arm/boot/compressed/bswapsdi2.o
>> AS arch/arm/boot/compressed/piggy.o
>> LD arch/arm/boot/compressed/vmlinux
>> arch/arm/boot/compressed/decompress.o: In function `fill_temp':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_stream.c:162:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `bcj_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:404:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:409:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:919:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:424:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_uncompressed':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:390:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:400:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:859:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:884:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `xz_dec_bcj_run':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:451:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:471:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
>> undefined reference to `__memset'
>> arch/arm/boot/compressed/Makefile:182: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[4]: *** [arch/arm/boot/compressed/vmlinux] Error 1
>> arch/arm/boot/Makefile:53: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[3]: *** [arch/arm/boot/compressed/vmlinux] Error 2
>
> I ended up fixing the redefinition warnings/build failures this way, but
> I am not 100% confident this is the right fix:
>
> diff --git a/arch/arm/boot/compressed/decompress.c
> b/arch/arm/boot/compressed/decompress.c
> index f3a4bedd1afc..7d4a47752760 100644
> --- a/arch/arm/boot/compressed/decompress.c
> +++ b/arch/arm/boot/compressed/decompress.c
> @@ -48,8 +48,10 @@ extern int memcmp(const void *cs, const void *ct,
> size_t count);
> #endif
>
> #ifdef CONFIG_KERNEL_XZ
> +#ifndef CONFIG_KASAN
> #define memmove memmove
> #define memcpy memcpy
> +#endif
> #include "../../../../lib/decompress_unxz.c"
> #endif
>
> I was not yet able to track down why __memset is not being resolved, but
> since I don't need them, I disabled CONFIG_ATAGS and
> CONFIG_ARM_ATAG_DTB_COMPAT, which allowed me to get a working build.
>
> This brought me all the way to a prompt; please find attached the
> results of insmod test_kasan.ko for CONFIG_ARM_LPAE=y and
> CONFIG_ARM_LPAE=n. Your patches actually spotted a genuine use-after-free
> in one of our drivers (spi-bcm-qspi), so with this:
>
> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>
> Great job thanks!
>
The memset failure comes from the fact that the decompressor has
its own string functions and there is an '#undef memset' in there.
The git history doesn't make it clear where this comes from, but
if I remove it the kernel at least compiles for me with the
multi_v7_defconfig.
Thanks,
Laura
> size_t count);
> #endif
>
> #ifdef CONFIG_KERNEL_XZ
> +#ifndef CONFIG_KASAN
> #define memmove memmove
> #define memcpy memcpy
> +#endif
> #include "../../../../lib/decompress_unxz.c"
> #endif
>
> Was not able yet to track down why __memset is not being resolved, but
> since I don't need them, disabled CONFIG_ATAGS and
> CONFIG_ARM_ATAG_DTB_COMPAT and this allowed me to get a build working.
>
> This brought me all the way to a prompt and please find attached the
> results of insmod test_kasan.ko for CONFIG_ARM_LPAE=y and
> CONFIG_ARM_LPAE=n. Your patches actually spotted a genuine use after
> free in one of our drivers (spi-bcm-qspi) so with this:
>
> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>
> Great job thanks!
>
The memset failure comes from the fact that the decompressor has
its own string functions and there is an #undefine memset in there.
The git history doesn't make it clear where this comes from but
if I remove it the kernel at least compiles for me with the
multi_v7_defconfig.
Thanks,
Laura
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 253+ messages in thread
* [PATCH 00/11] KASan for arm
@ 2017-10-11 22:10 ` Laura Abbott
0 siblings, 0 replies; 253+ messages in thread
From: Laura Abbott @ 2017-10-11 22:10 UTC (permalink / raw)
To: linux-arm-kernel
On 10/11/2017 02:36 PM, Florian Fainelli wrote:
> On 10/11/2017 12:50 PM, Florian Fainelli wrote:
>> On 10/11/2017 12:13 PM, Florian Fainelli wrote:
>>> Hi Abbott,
>>>
>>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>>>> Hi, all:
>>>> These patches add the arch-specific code for the kernel address sanitizer
>>>> (see Documentation/kasan.txt).
>>>>
>>>> 1/8 of the kernel address space is reserved for shadow memory. There was
>>>> no hole big enough for this, so the virtual addresses for the shadow
>>>> were stolen from user space.
>>>>
>>>> At the early boot stage the whole shadow region is populated with just
>>>> one physical page (kasan_zero_page). Later, this page is reused
>>>> as read-only zero shadow for some memory that KASan currently
>>>> doesn't track (vmalloc).
>>>>
>>>> After mapping the physical memory, pages for shadow memory are
>>>> allocated and mapped.
>>>>
>>>> KASan's stack instrumentation significantly increases stack
>>>> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>>>>
>>>> Functions like memset/memmove/memcpy do a lot of memory accesses.
>>>> If a bad pointer is passed to one of these functions, it is important
>>>> to catch this. The compiler's instrumentation cannot do this since
>>>> these functions are written in assembly.
>>>>
>>>> KASan replaces the memory functions with manually instrumented variants.
>>>> The original functions are declared as weak symbols so that the strong
>>>> definitions in mm/kasan/kasan.c can replace them. The original functions
>>>> have aliases with a '__' prefix in the name, so we can call the
>>>> non-instrumented variant if needed.
>>>>
>>>> Some files are built without KASan instrumentation (e.g. mm/slub.c).
>>>> The original mem* functions are replaced (via #define) with the prefixed
>>>> variants to disable memory access checks in such files.
>>>>
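The weak-symbol/alias scheme described above can be sketched in plain C. Note that my_memcpy/__my_memcpy are hypothetical names standing in for the kernel's assembly routines, not actual kernel code:

```c
#include <stddef.h>

/* The '__'-prefixed name is the real (uninstrumented) implementation,
 * just as the assembly routines are in the kernel. */
void *__my_memcpy(void *dst, const void *src, size_t len)
{
	char *d = dst;
	const char *s = src;

	while (len--)
		*d++ = *s++;
	return dst;
}

/* The unprefixed name is only a weak alias, so a strong instrumented
 * definition (as mm/kasan/kasan.c provides for memcpy) overrides it at
 * link time, while __my_memcpy stays reachable for callers that must
 * skip the checks. */
void *my_memcpy(void *dst, const void *src, size_t len)
	__attribute__((weak, alias("__my_memcpy")));
```

With no strong definition linked in, calls to my_memcpy resolve to __my_memcpy.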
>>>> On the ARM LPAE architecture, the mapping table of the KASan shadow
>>>> memory (if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual
>>>> space is 0xb6e00000~0xbf000000) can't be filled in the
>>>> do_translation_fault function, because KASan instrumentation may cause
>>>> do_translation_fault itself to access the KASan shadow memory. Such an
>>>> access from within do_translation_fault may lead to infinite recursion.
>>>> So the mapping table of the KASan shadow memory needs to be copied in
>>>> the pgd_alloc function.
>>>>
>>>>
>>>> Most of the code comes from:
>>>> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>>>
>>> Thanks for putting these patches together, I can't get a kernel to build
>>> with ARM_LPAE=y or ARM_LPAE=n that does not result in the following:
>>>
>>> AS arch/arm/kernel/entry-common.o
>>> arch/arm/kernel/entry-common.S: Assembler messages:
>>> arch/arm/kernel/entry-common.S:53: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> arch/arm/kernel/entry-common.S:118: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> scripts/Makefile.build:412: recipe for target
>>> 'arch/arm/kernel/entry-common.o' failed
>>> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
>>> Makefile:1019: recipe for target 'arch/arm/kernel' failed
>>> make[2]: *** [arch/arm/kernel] Error 2
>>> make[2]: *** Waiting for unfinished jobs....
>>>
>>> This is coming from the increase in TASK_SIZE it seems.
>>>
>>> This is on top of v4.14-rc4-84-gff5abbe799e2
>>
>> Seems like we can use the following to get through that build failure:
>>
>> diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
>> index 99c908226065..0de1160d136e 100644
>> --- a/arch/arm/kernel/entry-common.S
>> +++ b/arch/arm/kernel/entry-common.S
>> @@ -50,7 +50,13 @@ ret_fast_syscall:
>> UNWIND(.cantunwind )
>> disable_irq_notrace @ disable interrupts
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall
>> tracing
>> tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
>> @@ -115,7 +121,13 @@ ret_slow_syscall:
>> disable_irq_notrace @ disable interrupts
>> ENTRY(ret_to_user_from_irq)
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS]
>> tst r1, #_TIF_WORK_MASK
>>
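The "invalid constant (ffffffffb6e00000) after fixup" errors stem from ARM's immediate encoding: a data-processing instruction such as cmp can only encode an 8-bit value rotated right by an even amount, which the KASan-shrunk TASK_SIZE no longer satisfies, hence the movw/movt pair (each taking a 16-bit immediate) in the fix above. The encodability rule can be checked with a small illustrative helper (not from the thread):

```c
#include <stdbool.h>
#include <stdint.h>

/* An ARM data-processing immediate is an 8-bit value rotated right by an
 * even amount. A 32-bit constant is encodable iff some even left-rotation
 * of it fits in 8 bits. */
static bool arm_valid_imm(uint32_t x)
{
	for (unsigned int rot = 0; rot < 32; rot += 2) {
		uint32_t rotated = (x << rot) | (rot ? x >> (32 - rot) : 0);

		if (rotated <= 0xff)
			return true;
	}
	return false;
}
```

For example, 0xc0000000 (the usual TASK_SIZE) is encodable, while 0xb6e00000 is not, which is exactly why the assembler rejects the fixup.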
>>
>>
>> but then we will see another set of build failures with the decompressor
>> code:
>>
>> WARNING: modpost: Found 2 section mismatch(es).
>> To see full details build your kernel with:
>> 'make CONFIG_DEBUG_SECTION_MISMATCH=y'
>> KSYM .tmp_kallsyms1.o
>> KSYM .tmp_kallsyms2.o
>> LD vmlinux
>> SORTEX vmlinux
>> SYSMAP System.map
>> OBJCOPY arch/arm/boot/Image
>> Kernel: arch/arm/boot/Image is ready
>> LDS arch/arm/boot/compressed/vmlinux.lds
>> AS arch/arm/boot/compressed/head.o
>> XZKERN arch/arm/boot/compressed/piggy_data
>> CC arch/arm/boot/compressed/misc.o
>> CC arch/arm/boot/compressed/decompress.o
>> CC arch/arm/boot/compressed/string.o
>> arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
>> #define memmove memmove
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:67:0: note: this is the location of the
>> previous definition
>> #define memmove(dst, src, len) __memmove(dst, src, len)
>>
>> arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
>> #define memcpy memcpy
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:66:0: note: this is the location of the
>> previous definition
>> #define memcpy(dst, src, len) __memcpy(dst, src, len)
>>
>> SHIPPED arch/arm/boot/compressed/hyp-stub.S
>> SHIPPED arch/arm/boot/compressed/fdt_rw.c
>> SHIPPED arch/arm/boot/compressed/fdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt_internal.h
>> SHIPPED arch/arm/boot/compressed/fdt_ro.c
>> SHIPPED arch/arm/boot/compressed/fdt_wip.c
>> SHIPPED arch/arm/boot/compressed/fdt.c
>> CC arch/arm/boot/compressed/atags_to_fdt.o
>> SHIPPED arch/arm/boot/compressed/lib1funcs.S
>> SHIPPED arch/arm/boot/compressed/ashldi3.S
>> SHIPPED arch/arm/boot/compressed/bswapsdi2.S
>> AS arch/arm/boot/compressed/hyp-stub.o
>> CC arch/arm/boot/compressed/fdt_rw.o
>> CC arch/arm/boot/compressed/fdt_ro.o
>> CC arch/arm/boot/compressed/fdt_wip.o
>> CC arch/arm/boot/compressed/fdt.o
>> AS arch/arm/boot/compressed/lib1funcs.o
>> AS arch/arm/boot/compressed/ashldi3.o
>> AS arch/arm/boot/compressed/bswapsdi2.o
>> AS arch/arm/boot/compressed/piggy.o
>> LD arch/arm/boot/compressed/vmlinux
>> arch/arm/boot/compressed/decompress.o: In function `fill_temp':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_stream.c:162:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `bcj_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:404:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:409:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:919:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:424:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_uncompressed':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:390:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:400:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:859:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:884:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `xz_dec_bcj_run':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:451:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:471:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
>> undefined reference to `__memset'
>> arch/arm/boot/compressed/Makefile:182: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[4]: *** [arch/arm/boot/compressed/vmlinux] Error 1
>> arch/arm/boot/Makefile:53: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[3]: *** [arch/arm/boot/compressed/vmlinux] Error 2
>
> I ended up fixing the redefinition warnings/build failures this way, but
> I am not 100% confident this is the right fix:
>
> diff --git a/arch/arm/boot/compressed/decompress.c
> b/arch/arm/boot/compressed/decompress.c
> index f3a4bedd1afc..7d4a47752760 100644
> --- a/arch/arm/boot/compressed/decompress.c
> +++ b/arch/arm/boot/compressed/decompress.c
> @@ -48,8 +48,10 @@ extern int memcmp(const void *cs, const void *ct,
> size_t count);
> #endif
>
> #ifdef CONFIG_KERNEL_XZ
> +#ifndef CONFIG_KASAN
> #define memmove memmove
> #define memcpy memcpy
> +#endif
> #include "../../../../lib/decompress_unxz.c"
> #endif
>
> I was not yet able to track down why __memset is not being resolved, but
> since I don't need them, I disabled CONFIG_ATAGS and
> CONFIG_ARM_ATAG_DTB_COMPAT, and this allowed me to get a build working.
>
> This brought me all the way to a prompt; please find attached the
> results of insmod test_kasan.ko for CONFIG_ARM_LPAE=y and
> CONFIG_ARM_LPAE=n. Your patches actually spotted a genuine
> use-after-free in one of our drivers (spi-bcm-qspi), so with this:
>
> Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>
> Great job thanks!
>
The memset failure comes from the fact that the decompressor has
its own string functions and there is an #undef memset in there.
The git history doesn't make it clear where this comes from but
if I remove it the kernel at least compiles for me with the
multi_v7_defconfig.
Thanks,
Laura
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 00/11] KASan for arm
2017-10-11 22:10 ` Laura Abbott
@ 2017-10-11 22:58 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-11 22:58 UTC (permalink / raw)
To: Laura Abbott
Cc: Florian Fainelli, Abbott Liu, aryabinin, afzal.mohd.ma,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko,
Nicolas Pitre, opendmb, linux-kernel, kasan-dev, zengweilin,
linux-mm, dylix.dailei, glider, dvyukov, jiazhenghua,
linux-arm-kernel, heshaoliang
On Wed, Oct 11, 2017 at 03:10:56PM -0700, Laura Abbott wrote:
> On 10/11/2017 02:36 PM, Florian Fainelli wrote:
> >> CC arch/arm/boot/compressed/string.o
> >> arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
> >> #define memmove memmove
> >>
> >> In file included from arch/arm/boot/compressed/decompress.c:7:0:
> >> ./arch/arm/include/asm/string.h:67:0: note: this is the location of the
> >> previous definition
> >> #define memmove(dst, src, len) __memmove(dst, src, len)
> >>
> >> arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
> >> #define memcpy memcpy
> >>
> >> In file included from arch/arm/boot/compressed/decompress.c:7:0:
> >> ./arch/arm/include/asm/string.h:66:0: note: this is the location of the
> >> previous definition
> >> #define memcpy(dst, src, len) __memcpy(dst, src, len)
> >>
> >
> > Was not able yet to track down why __memset is not being resolved, but
> > since I don't need them, disabled CONFIG_ATAGS and
> > CONFIG_ARM_ATAG_DTB_COMPAT and this allowed me to get a build working.
> >
> > This brought me all the way to a prompt and please find attached the
> > results of insmod test_kasan.ko for CONFIG_ARM_LPAE=y and
> > CONFIG_ARM_LPAE=n. Your patches actually spotted a genuine use after
> > free in one of our drivers (spi-bcm-qspi) so with this:
> >
> > Tested-by: Florian Fainelli <f.fainelli@gmail.com>
> >
> > Great job thanks!
> >
>
> The memset failure comes from the fact that the decompressor has
> its own string functions and there is an #undef memset in there.
> The git history doesn't make it clear where this comes from but
> if I remove it the kernel at least compiles for me with the
> multi_v7_defconfig.
The decompressor does not link with the standard C library, so it
needs to provide implementations of standard C library functionality
where required. That means, if we have any memset() users, we need
to provide the memset() function.
The undef is there to avoid the optimisation we have in asm/string.h
for __memzero, because we don't want to use __memzero in the
decompressor.
Whether memset() is required depends on which compression method is
being used - LZO and LZ4 appear to make direct references to it, but
the inflate (gzip) decompressor code does not.
What this means is that all supported kernel compression options need
to be tested.
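Concretely, for the freestanding decompressor that means providing a local definition along these lines (a minimal sketch only; the real arch/arm/boot/compressed/string.c implementation is optimized):

```c
#include <stddef.h>

/* The decompressor links against no libc, so a decompression backend that
 * emits calls to memset() needs a local definition such as this one.
 * The volatile qualifier keeps an optimizing compiler from recognizing
 * the loop and turning it back into a memset() call. */
void *memset(void *s, int c, size_t n)
{
	volatile unsigned char *p = s;

	while (n--)
		*p++ = (unsigned char)c;
	return s;
}
```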
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-10-11 8:22 ` Abbott Liu
@ 2017-10-11 23:23 ` Andrew Morton
-1 siblings, 0 replies; 253+ messages in thread
From: Andrew Morton @ 2017-10-11 23:23 UTC (permalink / raw)
To: Abbott Liu
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, jiazhenghua, dylix.dailei, zengweilin, heshaoliang
On Wed, 11 Oct 2017 16:22:22 +0800 Abbott Liu <liuwenliang@huawei.com> wrote:
> Because the ARM instruction set doesn't support accessing unaligned
> addresses, memory_is_poisoned_16 must be changed for ARM.
>
> ...
>
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -149,6 +149,25 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
> return memory_is_poisoned_1(addr + size - 1);
> }
>
> +#ifdef CONFIG_ARM
> +static __always_inline bool memory_is_poisoned_16(unsigned long addr)
> +{
> + u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
> +
> + if (unlikely(shadow_addr[0] || shadow_addr[1])) return true;
Coding-style is messed up. Please use scripts/checkpatch.pl.
> + else {
> + /*
> + * If two shadow bytes covers 16-byte access, we don't
> + * need to do anything more. Otherwise, test the last
> + * shadow byte.
> + */
> + if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
> + return false;
> + return memory_is_poisoned_1(addr + 15);
> + }
> +}
> +
> +#else
> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
> {
> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
> @@ -159,6 +178,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>
> return *shadow_addr;
> }
> +#endif
- I don't understand why this is necessary. memory_is_poisoned_16()
already handles unaligned addresses?
- If it's needed on ARM then presumably it will be needed on other
architectures, so CONFIG_ARM is insufficiently general.
- If the present memory_is_poisoned_16() indeed doesn't work on ARM,
it would be better to generalize/fix it in some fashion rather than
creating a new variant of the function.
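For reference, the byte-wise variant under discussion can be modeled in a self-contained way, with kasan_mem_to_shadow() replaced by a toy shadow array (illustrative only; the real code operates on the kernel's shadow mapping):

```c
#include <stdbool.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)

/* Toy shadow map covering a 512-byte "address space": one shadow byte
 * describes each 8-byte granule (0 = fully accessible). */
static uint8_t shadow[64];

static uint8_t *kasan_mem_to_shadow(uintptr_t addr)
{
	return &shadow[addr >> KASAN_SHADOW_SCALE_SHIFT];
}

static bool memory_is_poisoned_1(uintptr_t addr)
{
	int8_t v = (int8_t)*kasan_mem_to_shadow(addr);

	/* A positive shadow value N means only the first N bytes of the
	 * granule are accessible. */
	return v && (int8_t)(addr & (KASAN_SHADOW_SCALE_SIZE - 1)) >= v;
}

/* Byte-wise 16-byte check: two single-byte shadow loads instead of one
 * u16 load, so the shadow access itself is never unaligned. */
static bool memory_is_poisoned_16(uintptr_t addr)
{
	uint8_t *s = kasan_mem_to_shadow(addr);

	if (s[0] || s[1])
		return true;
	/* Two shadow bytes fully cover an aligned 16-byte access. */
	if ((addr & (KASAN_SHADOW_SCALE_SIZE - 1)) == 0)
		return false;
	return memory_is_poisoned_1(addr + 15);
}
```

Whether this byte-wise form is needed at all, or the generic u16 load suffices, is exactly the question raised in the review above.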
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 8:22 ` Abbott Liu
@ 2017-10-11 23:42 ` Dmitry Osipenko
-1 siblings, 0 replies; 253+ messages in thread
From: Dmitry Osipenko @ 2017-10-11 23:42 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On 11.10.2017 11:22, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> This patch initializes the KASan shadow region's page table and memory.
> There are two stages in KASan initialization:
> 1. At the early boot stage the whole shadow region is mapped to just
> one physical page (kasan_zero_page). This is done by the function
> kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/
> head-common.S).
>
> 2. After paging_init has been called, we use kasan_zero_page as the zero
> shadow for memory that KASan doesn't need to track, and we allocate
> new shadow space for the other memory that KASan does need to track. This
> is done by the function kasan_init, which is called by setup_arch.
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> ---
> arch/arm/include/asm/kasan.h | 20 +++
> arch/arm/include/asm/pgalloc.h | 5 +-
> arch/arm/include/asm/pgtable.h | 1 +
> arch/arm/include/asm/proc-fns.h | 33 +++++
> arch/arm/include/asm/thread_info.h | 4 +
> arch/arm/kernel/head-common.S | 4 +
> arch/arm/kernel/setup.c | 2 +
> arch/arm/mm/Makefile | 5 +
> arch/arm/mm/kasan_init.c | 257 +++++++++++++++++++++++++++++++++++++
> mm/kasan/kasan.c | 2 +-
> 10 files changed, 331 insertions(+), 2 deletions(-)
> create mode 100644 arch/arm/include/asm/kasan.h
> create mode 100644 arch/arm/mm/kasan_init.c
>
> diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
> new file mode 100644
> index 0000000..90ee60c
> --- /dev/null
> +++ b/arch/arm/include/asm/kasan.h
> @@ -0,0 +1,20 @@
> +#ifndef __ASM_KASAN_H
> +#define __ASM_KASAN_H
> +
> +#ifdef CONFIG_KASAN
> +
> +#include <asm/kasan_def.h>
> +/*
> + * Compiler uses shadow offset assuming that addresses start
> + * from 0. Kernel addresses don't start from 0, so shadow
> + * for kernel really starts from 'compiler's shadow offset' +
> + * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
> + */
> +
> +extern void kasan_init(void);
> +
> +#else
> +static inline void kasan_init(void) { }
> +#endif
> +
> +#endif
> diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
> index b2902a5..10cee6a 100644
> --- a/arch/arm/include/asm/pgalloc.h
> +++ b/arch/arm/include/asm/pgalloc.h
> @@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
> */
> #define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
> #define pmd_free(mm, pmd) do { } while (0)
> +#ifndef CONFIG_KASAN
> #define pud_populate(mm,pmd,pte) BUG()
> -
> +#else
> +#define pud_populate(mm,pmd,pte) do { } while (0)
> +#endif
> #endif /* CONFIG_ARM_LPAE */
>
> extern pgd_t *pgd_alloc(struct mm_struct *mm);
> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
> index 1c46238..fdf343f 100644
> --- a/arch/arm/include/asm/pgtable.h
> +++ b/arch/arm/include/asm/pgtable.h
> @@ -97,6 +97,7 @@ extern pgprot_t pgprot_s2_device;
> #define PAGE_READONLY _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY | L_PTE_XN)
> #define PAGE_READONLY_EXEC _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY)
> #define PAGE_KERNEL _MOD_PROT(pgprot_kernel, L_PTE_XN)
> +#define PAGE_KERNEL_RO _MOD_PROT(pgprot_kernel, L_PTE_XN | L_PTE_RDONLY)
> #define PAGE_KERNEL_EXEC pgprot_kernel
> #define PAGE_HYP _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_XN)
> #define PAGE_HYP_EXEC _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY)
> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> index f2e1af4..6e26714 100644
> --- a/arch/arm/include/asm/proc-fns.h
> +++ b/arch/arm/include/asm/proc-fns.h
> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #else
> #define cpu_get_pgd() \
> ({ \
> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
> pg &= ~0x3fff; \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr(nr, val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +#define cpu_get_ttbr(nr) \
> + ({ \
> + unsigned long ttbr; \
> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
> + : "=r" (ttbr)); \
> + ttbr; \
> + })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #endif
>
> #else /*!CONFIG_MMU */
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 1d468b5..52c4858 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -16,7 +16,11 @@
> #include <asm/fpstate.h>
> #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +#define THREAD_SIZE_ORDER 2
> +#else
> #define THREAD_SIZE_ORDER 1
> +#endif
> #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
> #define THREAD_START_SP (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 8733012..c17f4a2 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -101,7 +101,11 @@ __mmap_switched:
> str r2, [r6] @ Save atags pointer
> cmp r7, #0
> strne r0, [r7] @ Save control register values
> +#ifdef CONFIG_KASAN
> + b kasan_early_init
> +#else
> b start_kernel
> +#endif
> ENDPROC(__mmap_switched)
>
> .align 2
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index 8e9a3e4..985d9a3 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -62,6 +62,7 @@
> #include <asm/unwind.h>
> #include <asm/memblock.h>
> #include <asm/virt.h>
> +#include <asm/kasan.h>
>
> #include "atags.h"
>
> @@ -1108,6 +1109,7 @@ void __init setup_arch(char **cmdline_p)
> early_ioremap_reset();
>
> paging_init(mdesc);
> + kasan_init();
> request_standard_resources(mdesc);
>
> if (mdesc->restart)
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> index 950d19b..498c316 100644
> --- a/arch/arm/mm/Makefile
> +++ b/arch/arm/mm/Makefile
> @@ -106,4 +106,9 @@ obj-$(CONFIG_CACHE_L2X0) += cache-l2x0.o l2c-l2x0-resume.o
> obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
> obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
> obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
> +
> +KASAN_SANITIZE_kasan_init.o := n
> +obj-$(CONFIG_KASAN) += kasan_init.o
> +
> +
> obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> new file mode 100644
> index 0000000..2bf0782
> --- /dev/null
> +++ b/arch/arm/mm/kasan_init.c
> @@ -0,0 +1,257 @@
> +#include <linux/bootmem.h>
> +#include <linux/kasan.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/start_kernel.h>
> +
> +#include <asm/cputype.h>
> +#include <asm/highmem.h>
> +#include <asm/mach/map.h>
> +#include <asm/memory.h>
> +#include <asm/page.h>
> +#include <asm/pgalloc.h>
> +#include <asm/pgtable.h>
> +#include <asm/procinfo.h>
> +#include <asm/proc-fns.h>
> +#include <asm/tlbflush.h>
> +#include <asm/cp15.h>
> +#include <linux/sched/task.h>
> +
> +#include "mm.h"
> +
> +static pgd_t tmp_page_table[PTRS_PER_PGD] __initdata __aligned(1ULL << 14);
> +
> +pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
> +
> +static __init void *kasan_alloc_block(size_t size, int node)
> +{
> + return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> + BOOTMEM_ALLOC_ACCESSIBLE, node);
> +}
> +
> +static void __init kasan_early_pmd_populate(unsigned long start, unsigned long end, pud_t *pud)
> +{
> + unsigned long addr;
> + unsigned long next;
> + pmd_t *pmd;
> +
> + pmd = pmd_offset(pud, start);
> + for (addr = start; addr < end;) {
> + pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> + next = pmd_addr_end(addr, end);
> + addr = next;
> + flush_pmd_entry(pmd);
> + pmd++;
> + }
> +}
> +
> +static void __init kasan_early_pud_populate(unsigned long start, unsigned long end, pgd_t *pgd)
> +{
> + unsigned long addr;
> + unsigned long next;
> + pud_t *pud;
> +
> + pud = pud_offset(pgd, start);
> + for (addr = start; addr < end;) {
> + next = pud_addr_end(addr, end);
> + kasan_early_pmd_populate(addr, next, pud);
> + addr = next;
> + pud++;
> + }
> +}
> +
> +void __init kasan_map_early_shadow(pgd_t *pgdp)
> +{
> + int i;
> + unsigned long start = KASAN_SHADOW_START;
> + unsigned long end = KASAN_SHADOW_END;
> + unsigned long addr;
> + unsigned long next;
> + pgd_t *pgd;
> +
> + for (i = 0; i < PTRS_PER_PTE; i++)
> + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> + &kasan_zero_pte[i], pfn_pte(
> + virt_to_pfn(kasan_zero_page),
> + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN)));
Shouldn't all __pgprot's contain L_PTE_MT_WRITETHROUGH?
[...]
--
Dmitry
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 00/11] KASan for arm
2017-10-11 21:36 ` Florian Fainelli
2017-10-11 22:10 ` Laura Abbott
@ 2017-10-12 4:55 ` Liuwenliang (Lamb)
1 sibling, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-12 4:55 UTC (permalink / raw)
To: Florian Fainelli, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 10/12/2017 12:10 AM, Abbott Liu wrote:
>On 10/11/2017 12:50 PM, Florian Fainelli wrote:
>> On 10/11/2017 12:13 PM, Florian Fainelli wrote:
>>> Hi Abbott,
>>>
>>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>>>> Hi,all:
>>>> These patches add arch specific code for kernel address sanitizer
>>>> (see Documentation/kasan.txt).
>>>>
>>>> 1/8 of kernel addresses reserved for shadow memory. There was no
>>>> big enough hole for this, so virtual addresses for shadow were
>>>> stolen from user space.
>>>>
>>>> At early boot stage the whole shadow region populated with just
>>>> one physical page (kasan_zero_page). Later, this page reused
>>>> as readonly zero shadow for some memory that KASan currently
>>>> don't track (vmalloc).
>>>>
>>>> After mapping the physical memory, pages for shadow memory are
>>>> allocated and mapped.
>>>>
>>>> KASan's stack instrumentation significantly increases stack's
>>>> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>>>>
>>>> Functions like memset/memmove/memcpy do a lot of memory accesses.
>>>> If bad pointer passed to one of these function it is important
>>>> to catch this. Compiler's instrumentation cannot do this since
>>>> these functions are written in assembly.
>>>>
>>>> KASan replaces memory functions with manually instrumented variants.
>>>> Original functions declared as weak symbols so strong definitions
>>>> in mm/kasan/kasan.c could replace them. Original functions have aliases
>>>> with '__' prefix in name, so we could call non-instrumented variant
>>>> if needed.
>>>>
>>>> Some files built without kasan instrumentation (e.g. mm/slub.c).
>>>> Original mem* function replaced (via #define) with prefixed variants
>>>> to disable memory access checks for such files.
>>>>
>>>> On the ARM LPAE architecture, the mapping table of KASan shadow memory (if
>>>> PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual space is
>>>> 0xb6e00000~0xbf000000) can't be filled in the do_translation_fault function,
>>>> because KASan instrumentation may cause do_translation_fault itself to
>>>> access KASan shadow memory. Such an access from inside
>>>> do_translation_fault could then fault recursively, forming an endless loop.
>>>> So the mapping table of KASan shadow memory needs to be copied in the
>>>> pgd_alloc function.
>>>>
>>>>
>>>> Most of the code comes from:
>>>> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>>>
>>> Thanks for putting these patches together, I can't get a kernel to build
>>> with ARM_LPAE=y or ARM_LPAE=n that does not result in the following:
>>>
>>> AS arch/arm/kernel/entry-common.o
>>> arch/arm/kernel/entry-common.S: Assembler messages:
>>> arch/arm/kernel/entry-common.S:53: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> arch/arm/kernel/entry-common.S:118: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> scripts/Makefile.build:412: recipe for target
>>> 'arch/arm/kernel/entry-common.o' failed
>>> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
>>> Makefile:1019: recipe for target 'arch/arm/kernel' failed
>>> make[2]: *** [arch/arm/kernel] Error 2
>>> make[2]: *** Waiting for unfinished jobs....
>>>
>>> This is coming from the increase in TASK_SIZE it seems.
>>>
>>> This is on top of v4.14-rc4-84-gff5abbe799e2
>>
>> Seems like we can use the following to get through that build failure:
>>
>> diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
>> index 99c908226065..0de1160d136e 100644
>> --- a/arch/arm/kernel/entry-common.S
>> +++ b/arch/arm/kernel/entry-common.S
>> @@ -50,7 +50,13 @@ ret_fast_syscall:
>> UNWIND(.cantunwind )
>> disable_irq_notrace @ disable interrupts
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall
>> tracing
>> tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
>> @@ -115,7 +121,13 @@ ret_slow_syscall:
>> disable_irq_notrace @ disable interrupts
>> ENTRY(ret_to_user_from_irq)
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS]
>> tst r1, #_TIF_WORK_MASK
>>
>>
>>
>> but then we will see another set of build failures with the decompressor
>> code:
>>
>> WARNING: modpost: Found 2 section mismatch(es).
>> To see full details build your kernel with:
>> 'make CONFIG_DEBUG_SECTION_MISMATCH=y'
>> KSYM .tmp_kallsyms1.o
>> KSYM .tmp_kallsyms2.o
>> LD vmlinux
>> SORTEX vmlinux
>> SYSMAP System.map
>> OBJCOPY arch/arm/boot/Image
>> Kernel: arch/arm/boot/Image is ready
>> LDS arch/arm/boot/compressed/vmlinux.lds
>> AS arch/arm/boot/compressed/head.o
>> XZKERN arch/arm/boot/compressed/piggy_data
>> CC arch/arm/boot/compressed/misc.o
>> CC arch/arm/boot/compressed/decompress.o
>> CC arch/arm/boot/compressed/string.o
>> arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
>> #define memmove memmove
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:67:0: note: this is the location of the
>> previous definition
>> #define memmove(dst, src, len) __memmove(dst, src, len)
>>
>> arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
>> #define memcpy memcpy
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:66:0: note: this is the location of the
>> previous definition
>> #define memcpy(dst, src, len) __memcpy(dst, src, len)
>>
>> SHIPPED arch/arm/boot/compressed/hyp-stub.S
>> SHIPPED arch/arm/boot/compressed/fdt_rw.c
>> SHIPPED arch/arm/boot/compressed/fdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt_internal.h
>> SHIPPED arch/arm/boot/compressed/fdt_ro.c
>> SHIPPED arch/arm/boot/compressed/fdt_wip.c
>> SHIPPED arch/arm/boot/compressed/fdt.c
>> CC arch/arm/boot/compressed/atags_to_fdt.o
>> SHIPPED arch/arm/boot/compressed/lib1funcs.S
>> SHIPPED arch/arm/boot/compressed/ashldi3.S
>> SHIPPED arch/arm/boot/compressed/bswapsdi2.S
>> AS arch/arm/boot/compressed/hyp-stub.o
>> CC arch/arm/boot/compressed/fdt_rw.o
>> CC arch/arm/boot/compressed/fdt_ro.o
>> CC arch/arm/boot/compressed/fdt_wip.o
>> CC arch/arm/boot/compressed/fdt.o
>> AS arch/arm/boot/compressed/lib1funcs.o
>> AS arch/arm/boot/compressed/ashldi3.o
>> AS arch/arm/boot/compressed/bswapsdi2.o
>> AS arch/arm/boot/compressed/piggy.o
>> LD arch/arm/boot/compressed/vmlinux
>> arch/arm/boot/compressed/decompress.o: In function `fill_temp':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_stream.c:162:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `bcj_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:404:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:409:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:919:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:424:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_uncompressed':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:390:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:400:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:859:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:884:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `xz_dec_bcj_run':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:451:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:471:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
>> undefined reference to `__memset'
>> arch/arm/boot/compressed/Makefile:182: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[4]: *** [arch/arm/boot/compressed/vmlinux] Error 1
>> arch/arm/boot/Makefile:53: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[3]: *** [arch/arm/boot/compressed/vmlinux] Error 2
>I ended up fixing the redefinition warnings/build failures this way, but
>I am not 100% confident this is the right fix:
>diff --git a/arch/arm/boot/compressed/decompress.c
>b/arch/arm/boot/compressed/decompress.c
>index f3a4bedd1afc..7d4a47752760 100644
>--- a/arch/arm/boot/compressed/decompress.c
>+++ b/arch/arm/boot/compressed/decompress.c
>@@ -48,8 +48,10 @@ extern int memcmp(const void *cs, const void *ct,
>size_t count);
> #endif
>
> #ifdef CONFIG_KERNEL_XZ
>+#ifndef CONFIG_KASAN
> #define memmove memmove
> #define memcpy memcpy
>+#endif
> #include "../../../../lib/decompress_unxz.c"
> #endif
>
>Was not able yet to track down why __memset is not being resolved, but
>since I don't need them, disabled CONFIG_ATAGS and
>CONFIG_ARM_ATAG_DTB_COMPAT and this allowed me to get a build working.
>
>This brought me all the way to a prompt and please find attached the
>results of insmod test_kasan.ko for CONFIG_ARM_LPAE=y and
>CONFIG_ARM_LPAE=n. Your patches actually spotted a genuine use after
>free in one of our drivers (spi-bcm-qspi) so with this:
>
>Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>
>Great job thanks!
>--
>Florian
Thanks for your testing and for the solution. I'm sorry that I didn't test with CONFIG_ATAGS,
CONFIG_ARM_ATAG_DTB_COMPAT and CONFIG_KERNEL_XZ enabled.
The following error:
arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
/home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
undefined reference to `__memset'
can be resolved by Andrey Ryabinin's <a.ryabinin@samsung.com> code at
https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
Here is the patch:
--- a/arch/arm/boot/compressed/libfdt_env.h
+++ b/arch/arm/boot/compressed/libfdt_env.h
@@ -16,4 +16,6 @@ typedef __be64 fdt64_t;
#define fdt64_to_cpu(x) be64_to_cpu(x)
#define cpu_to_fdt64(x) cpu_to_be64(x)
+#undef memset
+
#endif
I dropped it because I didn't realize it was needed when CONFIG_ATAGS and
CONFIG_ARM_ATAG_DTB_COMPAT are enabled. I'm sorry for my mistake.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 00/11] KASan for arm
@ 2017-10-12 4:55 ` Liuwenliang (Lamb)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-12 4:55 UTC (permalink / raw)
To: Florian Fainelli, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 10/12/2017 12:10 AM, Abbott Liu wrote:
>On 10/11/2017 12:50 PM, Florian Fainelli wrote:
>> On 10/11/2017 12:13 PM, Florian Fainelli wrote:
>>> Hi Abbott,
>>>
>>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>>>> Hi,all:
>>>> These patches add arch specific code for kernel address sanitizer
>>>> (see Documentation/kasan.txt).
>>>>
>>>> 1/8 of kernel addresses reserved for shadow memory. There was no
>>>> big enough hole for this, so virtual addresses for shadow were
>>>> stolen from user space.
>>>>
>>>> At early boot stage the whole shadow region populated with just
>>>> one physical page (kasan_zero_page). Later, this page reused
>>>> as readonly zero shadow for some memory that KASan currently
>>>> don't track (vmalloc).
>>>>
>>>> After mapping the physical memory, pages for shadow memory are
>>>> allocated and mapped.
>>>>
>>>> KASan's stack instrumentation significantly increases stack's
>>>> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>>>>
>>>> Functions like memset/memmove/memcpy do a lot of memory accesses.
>>>> If bad pointer passed to one of these function it is important
>>>> to catch this. Compiler's instrumentation cannot do this since
>>>> these functions are written in assembly.
>>>>
>>>> KASan replaces memory functions with manually instrumented variants.
>>>> Original functions declared as weak symbols so strong definitions
>>>> in mm/kasan/kasan.c could replace them. Original functions have aliases
>>>> with '__' prefix in name, so we could call non-instrumented variant
>>>> if needed.
>>>>
>>>> Some files built without kasan instrumentation (e.g. mm/slub.c).
>>>> Original mem* function replaced (via #define) with prefixed variants
>>>> to disable memory access checks for such files.
>>>>
>>>> On the arm LPAE architecture, the mapping table of the KASan shadow memory (if
>>>> PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual address range is
>>>> 0xb6e00000~0xbf000000) can't be filled in the do_translation_fault function,
>>>> because KASan instrumentation may cause the do_translation_fault function
>>>> itself to access the KASan shadow memory. Such an access from within
>>>> do_translation_fault could recurse endlessly. So the mapping table of the
>>>> KASan shadow memory needs to be copied in the pgd_alloc function.
>>>>
>>>>
>>>> Most of the code comes from:
>>>> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>>>
>>> Thanks for putting these patches together, I can't get a kernel to build
>>> with ARM_LPAE=y or ARM_LPAE=n that does not result in the following:
>>>
>>> AS arch/arm/kernel/entry-common.o
>>> arch/arm/kernel/entry-common.S: Assembler messages:
>>> arch/arm/kernel/entry-common.S:53: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> arch/arm/kernel/entry-common.S:118: Error: invalid constant
>>> (ffffffffb6e00000) after fixup
>>> scripts/Makefile.build:412: recipe for target
>>> 'arch/arm/kernel/entry-common.o' failed
>>> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
>>> Makefile:1019: recipe for target 'arch/arm/kernel' failed
>>> make[2]: *** [arch/arm/kernel] Error 2
>>> make[2]: *** Waiting for unfinished jobs....
>>>
>>> This is coming from the increase in TASK_SIZE it seems.
>>>
>>> This is on top of v4.14-rc4-84-gff5abbe799e2
>>
>> Seems like we can use the following to get through that build failure:
>>
>> diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
>> index 99c908226065..0de1160d136e 100644
>> --- a/arch/arm/kernel/entry-common.S
>> +++ b/arch/arm/kernel/entry-common.S
>> @@ -50,7 +50,13 @@ ret_fast_syscall:
>> UNWIND(.cantunwind )
>> disable_irq_notrace @ disable interrupts
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall
>> tracing
>> tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
>> @@ -115,7 +121,13 @@ ret_slow_syscall:
>> disable_irq_notrace @ disable interrupts
>> ENTRY(ret_to_user_from_irq)
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> + movw r1, #:lower16:TASK_SIZE
>> + movt r1, #:upper16:TASK_SIZE
>> + cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blne addr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS]
>> tst r1, #_TIF_WORK_MASK
>>
>>
>>
>> but then we will see another set of build failures with the decompressor
>> code:
>>
>> WARNING: modpost: Found 2 section mismatch(es).
>> To see full details build your kernel with:
>> 'make CONFIG_DEBUG_SECTION_MISMATCH=y'
>> KSYM .tmp_kallsyms1.o
>> KSYM .tmp_kallsyms2.o
>> LD vmlinux
>> SORTEX vmlinux
>> SYSMAP System.map
>> OBJCOPY arch/arm/boot/Image
>> Kernel: arch/arm/boot/Image is ready
>> LDS arch/arm/boot/compressed/vmlinux.lds
>> AS arch/arm/boot/compressed/head.o
>> XZKERN arch/arm/boot/compressed/piggy_data
>> CC arch/arm/boot/compressed/misc.o
>> CC arch/arm/boot/compressed/decompress.o
>> CC arch/arm/boot/compressed/string.o
>> arch/arm/boot/compressed/decompress.c:51:0: warning: "memmove" redefined
>> #define memmove memmove
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:67:0: note: this is the location of the
>> previous definition
>> #define memmove(dst, src, len) __memmove(dst, src, len)
>>
>> arch/arm/boot/compressed/decompress.c:52:0: warning: "memcpy" redefined
>> #define memcpy memcpy
>>
>> In file included from arch/arm/boot/compressed/decompress.c:7:0:
>> ./arch/arm/include/asm/string.h:66:0: note: this is the location of the
>> previous definition
>> #define memcpy(dst, src, len) __memcpy(dst, src, len)
>>
>> SHIPPED arch/arm/boot/compressed/hyp-stub.S
>> SHIPPED arch/arm/boot/compressed/fdt_rw.c
>> SHIPPED arch/arm/boot/compressed/fdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt.h
>> SHIPPED arch/arm/boot/compressed/libfdt_internal.h
>> SHIPPED arch/arm/boot/compressed/fdt_ro.c
>> SHIPPED arch/arm/boot/compressed/fdt_wip.c
>> SHIPPED arch/arm/boot/compressed/fdt.c
>> CC arch/arm/boot/compressed/atags_to_fdt.o
>> SHIPPED arch/arm/boot/compressed/lib1funcs.S
>> SHIPPED arch/arm/boot/compressed/ashldi3.S
>> SHIPPED arch/arm/boot/compressed/bswapsdi2.S
>> AS arch/arm/boot/compressed/hyp-stub.o
>> CC arch/arm/boot/compressed/fdt_rw.o
>> CC arch/arm/boot/compressed/fdt_ro.o
>> CC arch/arm/boot/compressed/fdt_wip.o
>> CC arch/arm/boot/compressed/fdt.o
>> AS arch/arm/boot/compressed/lib1funcs.o
>> AS arch/arm/boot/compressed/ashldi3.o
>> AS arch/arm/boot/compressed/bswapsdi2.o
>> AS arch/arm/boot/compressed/piggy.o
>> LD arch/arm/boot/compressed/vmlinux
>> arch/arm/boot/compressed/decompress.o: In function `fill_temp':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_stream.c:162:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `bcj_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:404:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:409:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:919:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_flush':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:424:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `dict_uncompressed':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:390:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:400:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/decompress.o: In function `lzma2_lzma':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:859:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_lzma2.c:884:
>> undefined reference to `memmove'
>> arch/arm/boot/compressed/decompress.o: In function `xz_dec_bcj_run':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:451:
>> undefined reference to `memcpy'
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/../../../../lib/xz/xz_dec_bcj.c:471:
>> undefined reference to `memcpy'
>> arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
>> /home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
>> undefined reference to `__memset'
>> arch/arm/boot/compressed/Makefile:182: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[4]: *** [arch/arm/boot/compressed/vmlinux] Error 1
>> arch/arm/boot/Makefile:53: recipe for target
>> 'arch/arm/boot/compressed/vmlinux' failed
>> make[3]: *** [arch/arm/boot/compressed/vmlinux] Error 2
>I ended up fixing the redefinition warnings/build failures this way, but
>I am not 100% confident this is the right fix:
>diff --git a/arch/arm/boot/compressed/decompress.c
>b/arch/arm/boot/compressed/decompress.c
>index f3a4bedd1afc..7d4a47752760 100644
>--- a/arch/arm/boot/compressed/decompress.c
>+++ b/arch/arm/boot/compressed/decompress.c
>@@ -48,8 +48,10 @@ extern int memcmp(const void *cs, const void *ct,
>size_t count);
> #endif
>
> #ifdef CONFIG_KERNEL_XZ
>+#ifndef CONFIG_KASAN
> #define memmove memmove
> #define memcpy memcpy
>+#endif
> #include "../../../../lib/decompress_unxz.c"
> #endif
>
>Was not able yet to track down why __memset is not being resolved, but
>since I don't need them, disabled CONFIG_ATAGS and
>CONFIG_ARM_ATAG_DTB_COMPAT and this allowed me to get a build working.
>
>This brought me all the way to a prompt and please find attached the
>results of insmod test_kasan.ko for CONFIG_ARM_LPAE=y and
>CONFIG_ARM_LPAE=n. Your patches actually spotted a genuine use after
>free in one of our drivers (spi-bcm-qspi) so with this:
>
>Tested-by: Florian Fainelli <f.fainelli@gmail.com>
>
>Great job thanks!
>--
>Florian
Thanks for your testing and for the solution. I'm sorry that I didn't test with CONFIG_ATAGS,
CONFIG_ARM_ATAG_DTB_COMPAT and CONFIG_KERNEL_XZ enabled.
The following error:
arch/arm/boot/compressed/fdt_rw.o: In function `fdt_add_subnode_namelen':
/home/fainelli/dev/linux/arch/arm/boot/compressed/fdt_rw.c:366:
undefined reference to `__memset'
can be resolved by Andrey Ryabinin's <a.ryabinin@samsung.com> code at
https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
Here is the patch:
--- a/arch/arm/boot/compressed/libfdt_env.h
+++ b/arch/arm/boot/compressed/libfdt_env.h
@@ -16,4 +16,6 @@ typedef __be64 fdt64_t;
#define fdt64_to_cpu(x) be64_to_cpu(x)
#define cpu_to_fdt64(x) cpu_to_be64(x)
+#undef memset
+
#endif
I dropped it because I didn't realize it was needed when CONFIG_ATAGS and
CONFIG_ARM_ATAG_DTB_COMPAT are enabled. I'm sorry for my mistake.
* Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-10-11 23:23 ` Andrew Morton
@ 2017-10-12 7:16 ` Dmitry Vyukov
-1 siblings, 0 replies; 253+ messages in thread
From: Dmitry Vyukov @ 2017-10-12 7:16 UTC (permalink / raw)
To: Andrew Morton
Cc: Abbott Liu, Russell King - ARM Linux, Andrey Ryabinin,
afzal.mohd.ma, f.fainelli, Laura Abbott, Kirill A. Shutemov,
Michal Hocko, cdall, marc.zyngier, Catalin Marinas,
Matthew Wilcox, Thomas Gleixner, Thomas Garnier, Kees Cook,
Arnd Bergmann, Vladimir Murzin, tixy, Ard Biesheuvel,
robin.murphy, Ingo Molnar, grygorii.strashko,
Alexander Potapenko, opendmb, linux-arm-kernel, LKML, kasan-dev,
linux-mm, jiazhenghua, dylix.dailei, zengweilin, heshaoliang
On Thu, Oct 12, 2017 at 1:23 AM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Wed, 11 Oct 2017 16:22:22 +0800 Abbott Liu <liuwenliang@huawei.com> wrote:
>
>> Because the ARM instruction set doesn't support accesses to unaligned
>> addresses, memory_is_poisoned_16 must be changed for ARM.
>>
>> ...
>>
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -149,6 +149,25 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
>> return memory_is_poisoned_1(addr + size - 1);
>> }
>>
>> +#ifdef CONFIG_ARM
>> +static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>> +{
>> + u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>> +
>> + if (unlikely(shadow_addr[0] || shadow_addr[1])) return true;
>
> Coding-style is messed up. Please use scripts/checkpatch.pl.
>
>> + else {
>> + /*
>> + * If two shadow bytes covers 16-byte access, we don't
>> + * need to do anything more. Otherwise, test the last
>> + * shadow byte.
>> + */
>> + if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>> + return false;
>> + return memory_is_poisoned_1(addr + 15);
>> + }
>> +}
>> +
>> +#else
>> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>> {
>> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>> @@ -159,6 +178,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>
>> return *shadow_addr;
>> }
>> +#endif
>
> - I don't understand why this is necessary. memory_is_poisoned_16()
> already handles unaligned addresses?
>
> - If it's needed on ARM then presumably it will be needed on other
> architectures, so CONFIG_ARM is insufficiently general.
>
> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
> it would be better to generalize/fix it in some fashion rather than
> creating a new variant of the function.
Yes, I think it will be better to fix the current function rather than
have two slightly different copies guarded by ifdefs.
Will something along these lines work for arm? 16-byte accesses are
not too common, so this should not be a performance problem. And
modern compilers can probably turn two 1-byte checks into a single
2-byte check where that is safe (e.g. on x86).
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
if (shadow_addr[0] || shadow_addr[1])
return true;
/* Unaligned 16-bytes access maps into 3 shadow bytes. */
if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
return memory_is_poisoned_1(addr + 15);
return false;
}
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 00/11] KASan for arm
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-12 7:38 ` Arnd Bergmann
0 siblings, 0 replies; 253+ messages in thread
From: Arnd Bergmann @ 2017-10-12 7:38 UTC (permalink / raw)
To: Abbott Liu
Cc: Russell King - ARM Linux, Andrey Ryabinin, afzal.mohd.ma,
Florian Fainelli, Laura Abbott, Kirill A . Shutemov,
Michal Hocko, Christoffer Dall, Marc Zyngier, Catalin Marinas,
Andrew Morton, mawilcox, Thomas Gleixner, Thomas Garnier,
Kees Cook, Vladimir Murzin, tixy, Ard Biesheuvel, Robin Murphy,
Ingo Molnar, grygorii.strashko, Alexander Potapenko,
Dmitry Vyukov, Doug Berger, Linux ARM, Linux Kernel Mailing List,
kasan-dev, Linux-MM, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On Wed, Oct 11, 2017 at 10:22 AM, Abbott Liu <liuwenliang@huawei.com> wrote:
> Hi, all:
> These patches add arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).
Nice!
When I build-tested KASAN on x86 and arm64, I ran into a lot of build-time
regressions (mostly warnings but also some errors), so I'd like to give it
a spin in my randconfig tree before this gets merged. Can you point me
to a git URL that I can pull into my testing tree?
I could of course apply the patches from email, but I expect that there
will be updated versions of the series, so it's easier if I can just pull
the latest version.
Arnd
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-12 7:58 ` Marc Zyngier
0 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-10-12 7:58 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On 11/10/17 09:22, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> This patch initializes the KASan shadow region's page table and memory.
> KASan initialization happens in two stages:
> 1. At early boot, the whole shadow region is mapped to a single
> physical page (kasan_zero_page). This is done by kasan_early_init,
> which is called from __mmap_switched (arch/arm/kernel/
> head-common.S).
>
> 2. After paging_init has run, kasan_zero_page is used as the zero
> shadow for memory that KASan does not need to track, and new shadow
> space is allocated for the memory that KASan does track. This is
> done by kasan_init, which is called from setup_arch.
>
> Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> ---
> arch/arm/include/asm/kasan.h | 20 +++
> arch/arm/include/asm/pgalloc.h | 5 +-
> arch/arm/include/asm/pgtable.h | 1 +
> arch/arm/include/asm/proc-fns.h | 33 +++++
> arch/arm/include/asm/thread_info.h | 4 +
> arch/arm/kernel/head-common.S | 4 +
> arch/arm/kernel/setup.c | 2 +
> arch/arm/mm/Makefile | 5 +
> arch/arm/mm/kasan_init.c | 257 +++++++++++++++++++++++++++++++++++++
> mm/kasan/kasan.c | 2 +-
> 10 files changed, 331 insertions(+), 2 deletions(-)
> create mode 100644 arch/arm/include/asm/kasan.h
> create mode 100644 arch/arm/mm/kasan_init.c
>
> diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
> new file mode 100644
> index 0000000..90ee60c
> --- /dev/null
> +++ b/arch/arm/include/asm/kasan.h
> @@ -0,0 +1,20 @@
> +#ifndef __ASM_KASAN_H
> +#define __ASM_KASAN_H
> +
> +#ifdef CONFIG_KASAN
> +
> +#include <asm/kasan_def.h>
> +/*
> + * Compiler uses shadow offset assuming that addresses start
> + * from 0. Kernel addresses don't start from 0, so shadow
> + * for kernel really starts from 'compiler's shadow offset' +
> + * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
> + */
> +
> +extern void kasan_init(void);
> +
> +#else
> +static inline void kasan_init(void) { }
> +#endif
> +
> +#endif
> diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
> index b2902a5..10cee6a 100644
> --- a/arch/arm/include/asm/pgalloc.h
> +++ b/arch/arm/include/asm/pgalloc.h
> @@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
> */
> #define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
> #define pmd_free(mm, pmd) do { } while (0)
> +#ifndef CONFIG_KASAN
> #define pud_populate(mm,pmd,pte) BUG()
> -
> +#else
> +#define pud_populate(mm,pmd,pte) do { } while (0)
> +#endif
> #endif /* CONFIG_ARM_LPAE */
>
> extern pgd_t *pgd_alloc(struct mm_struct *mm);
> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
> index 1c46238..fdf343f 100644
> --- a/arch/arm/include/asm/pgtable.h
> +++ b/arch/arm/include/asm/pgtable.h
> @@ -97,6 +97,7 @@ extern pgprot_t pgprot_s2_device;
> #define PAGE_READONLY _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY | L_PTE_XN)
> #define PAGE_READONLY_EXEC _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY)
> #define PAGE_KERNEL _MOD_PROT(pgprot_kernel, L_PTE_XN)
> +#define PAGE_KERNEL_RO _MOD_PROT(pgprot_kernel, L_PTE_XN | L_PTE_RDONLY)
> #define PAGE_KERNEL_EXEC pgprot_kernel
> #define PAGE_HYP _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_XN)
> #define PAGE_HYP_EXEC _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY)
> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> index f2e1af4..6e26714 100644
> --- a/arch/arm/include/asm/proc-fns.h
> +++ b/arch/arm/include/asm/proc-fns.h
> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #else
> #define cpu_get_pgd() \
> ({ \
> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
> pg &= ~0x3fff; \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr(nr, val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +#define cpu_get_ttbr(nr) \
> + ({ \
> + unsigned long ttbr; \
> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
> + : "=r" (ttbr)); \
> + ttbr; \
> + })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
You could instead lift and extend the definitions provided in kvm_hyp.h,
and use the read_sysreg/write_sysreg helpers defined in cp15.h.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-10-12 7:16 ` Dmitry Vyukov
(?)
@ 2017-10-12 11:27 ` Liuwenliang (Lamb)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-12 11:27 UTC (permalink / raw)
To: Dmitry Vyukov, Andrew Morton
Cc: Russell King - ARM Linux, Andrey Ryabinin, afzal.mohd.ma,
f.fainelli, Laura Abbott, Kirill A. Shutemov, Michal Hocko,
cdall, marc.zyngier, Catalin Marinas, Matthew Wilcox,
Thomas Gleixner, Thomas Garnier, Kees Cook, Arnd Bergmann,
Vladimir Murzin, tixy, Ard Biesheuvel, robin.murphy, Ingo Molnar,
grygorii.strashko, Alexander Potapenko, opendmb,
linux-arm-kernel, LKML, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang, Liuwenliang (Lamb)
>> - I don't understand why this is necessary. memory_is_poisoned_16()
>> already handles unaligned addresses?
>>
>> - If it's needed on ARM then presumably it will be needed on other
>> architectures, so CONFIG_ARM is insufficiently general.
>>
>> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>> it would be better to generalize/fix it in some fashion rather than
>> creating a new variant of the function.
>Yes, I think it will be better to fix the current function rather then
>have 2 slightly different copies with ifdef's.
>Will something along these lines work for arm? 16-byte accesses are
>not too common, so it should not be a performance problem. And
>probably modern compilers can turn 2 1-byte checks into a 2-byte check
>where safe (x86).
>static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>{
> u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>
> if (shadow_addr[0] || shadow_addr[1])
> return true;
> /* Unaligned 16-bytes access maps into 3 shadow bytes. */
> if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
> return memory_is_poisoned_1(addr + 15);
> return false;
>}
Thanks to Andrew Morton and Dmitry Vyukov for the review.
Consider the parameter addr=0xc0000008 in the current function:
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
--- //shadow_addr = (u16 *)(KASAN_OFFSET+0x18000001(=0xc0000008>>3)) is not
--- // aligned to 2 bytes.
u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
/* Unaligned 16-bytes access maps into 3 shadow bytes. */
if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
return *shadow_addr || memory_is_poisoned_1(addr + 15);
---- //this access faults on arm, especially while the kernel is still booting,
---- //because the unaligned access raises a Data Abort exception whose handler
---- //is not yet set up at that early stage.
return *shadow_addr;
}
I also think it is better to fix this problem.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-14 11:41 ` kbuild test robot
-1 siblings, 0 replies; 253+ messages in thread
From: kbuild test robot @ 2017-10-14 11:41 UTC (permalink / raw)
To: Abbott Liu
Cc: kbuild-all, linux, aryabinin, liuwenliang, afzal.mohd.ma,
f.fainelli, labbott, kirill.shutemov, mhocko, cdall,
marc.zyngier, catalin.marinas, akpm, mawilcox, tglx, thgarnie,
keescook, arnd, vladimir.murzin, tixy, ard.biesheuvel,
robin.murphy, mingo, grygorii.strashko, glider, dvyukov, opendmb,
linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, jiazhenghua,
dylix.dailei, zengweilin, heshaoliang
[-- Attachment #1: Type: text/plain, Size: 8441 bytes --]
Hi Abbott,
[auto build test ERROR on linus/master]
[also build test ERROR on v4.14-rc4]
[cannot apply to next-20171013]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Abbott-Liu/KASan-for-arm/20171014-104108
config: arm-allmodconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=arm
All errors (new ones prefixed by >>):
arch/arm/kernel/entry-common.S: Assembler messages:
>> arch/arm/kernel/entry-common.S:83: Error: invalid constant (ffffffffb6e00000) after fixup
arch/arm/kernel/entry-common.S:118: Error: invalid constant (ffffffffb6e00000) after fixup
--
arch/arm/kernel/entry-armv.S: Assembler messages:
>> arch/arm/kernel/entry-armv.S:213: Error: selected processor does not support `movw r1,#:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>> arch/arm/kernel/entry-armv.S:213: Error: selected processor does not support `movt r1,#:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:223: Error: selected processor does not support `movw r1,#:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:223: Error: selected processor does not support `movt r1,#:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:270: Error: selected processor does not support `movw r1,#:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:270: Error: selected processor does not support `movt r1,#:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:311: Error: selected processor does not support `movw r1,#:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:311: Error: selected processor does not support `movt r1,#:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:320: Error: selected processor does not support `movw r1,#:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:320: Error: selected processor does not support `movt r1,#:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,#:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,#:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
vim +213 arch/arm/kernel/entry-armv.S
2dede2d8e Nicolas Pitre 2006-01-14 151
2190fed67 Russell King 2015-08-20 152 .macro svc_entry, stack_hole=0, trace=1, uaccess=1
c4c5716e1 Catalin Marinas 2009-02-16 153 UNWIND(.fnstart )
c4c5716e1 Catalin Marinas 2009-02-16 154 UNWIND(.save {r0 - pc} )
e6a9dc612 Russell King 2016-05-13 155 sub sp, sp, #(SVC_REGS_SIZE + \stack_hole - 4)
b86040a59 Catalin Marinas 2009-07-24 156 #ifdef CONFIG_THUMB2_KERNEL
b86040a59 Catalin Marinas 2009-07-24 157 SPFIX( str r0, [sp] ) @ temporarily saved
b86040a59 Catalin Marinas 2009-07-24 158 SPFIX( mov r0, sp )
b86040a59 Catalin Marinas 2009-07-24 159 SPFIX( tst r0, #4 ) @ test original stack alignment
b86040a59 Catalin Marinas 2009-07-24 160 SPFIX( ldr r0, [sp] ) @ restored
b86040a59 Catalin Marinas 2009-07-24 161 #else
2dede2d8e Nicolas Pitre 2006-01-14 162 SPFIX( tst sp, #4 )
b86040a59 Catalin Marinas 2009-07-24 163 #endif
b86040a59 Catalin Marinas 2009-07-24 164 SPFIX( subeq sp, sp, #4 )
b86040a59 Catalin Marinas 2009-07-24 165 stmia sp, {r1 - r12}
ccea7a19e Russell King 2005-05-31 166
b059bdc39 Russell King 2011-06-25 167 ldmia r0, {r3 - r5}
b059bdc39 Russell King 2011-06-25 168 add r7, sp, #S_SP - 4 @ here for interlock avoidance
b059bdc39 Russell King 2011-06-25 169 mov r6, #-1 @ "" "" "" ""
e6a9dc612 Russell King 2016-05-13 170 add r2, sp, #(SVC_REGS_SIZE + \stack_hole - 4)
b059bdc39 Russell King 2011-06-25 171 SPFIX( addeq r2, r2, #4 )
b059bdc39 Russell King 2011-06-25 172 str r3, [sp, #-4]! @ save the "real" r0 copied
ccea7a19e Russell King 2005-05-31 173 @ from the exception stack
ccea7a19e Russell King 2005-05-31 174
b059bdc39 Russell King 2011-06-25 175 mov r3, lr
^1da177e4 Linus Torvalds 2005-04-16 176
^1da177e4 Linus Torvalds 2005-04-16 177 @
^1da177e4 Linus Torvalds 2005-04-16 178 @ We are now ready to fill in the remaining blanks on the stack:
^1da177e4 Linus Torvalds 2005-04-16 179 @
b059bdc39 Russell King 2011-06-25 180 @ r2 - sp_svc
b059bdc39 Russell King 2011-06-25 181 @ r3 - lr_svc
b059bdc39 Russell King 2011-06-25 182 @ r4 - lr_<exception>, already fixed up for correct return/restart
b059bdc39 Russell King 2011-06-25 183 @ r5 - spsr_<exception>
b059bdc39 Russell King 2011-06-25 184 @ r6 - orig_r0 (see pt_regs definition in ptrace.h)
^1da177e4 Linus Torvalds 2005-04-16 185 @
b059bdc39 Russell King 2011-06-25 186 stmia r7, {r2 - r6}
^1da177e4 Linus Torvalds 2005-04-16 187
e6978e4bf Russell King 2016-05-13 188 get_thread_info tsk
e6978e4bf Russell King 2016-05-13 189 ldr r0, [tsk, #TI_ADDR_LIMIT]
74e552f98 Abbott Liu 2017-10-11 190 #ifdef CONFIG_KASAN
74e552f98 Abbott Liu 2017-10-11 191 movw r1, #:lower16:TASK_SIZE
74e552f98 Abbott Liu 2017-10-11 192 movt r1, #:upper16:TASK_SIZE
74e552f98 Abbott Liu 2017-10-11 193 #else
e6978e4bf Russell King 2016-05-13 194 mov r1, #TASK_SIZE
74e552f98 Abbott Liu 2017-10-11 195 #endif
e6978e4bf Russell King 2016-05-13 196 str r1, [tsk, #TI_ADDR_LIMIT]
e6978e4bf Russell King 2016-05-13 197 str r0, [sp, #SVC_ADDR_LIMIT]
e6978e4bf Russell King 2016-05-13 198
2190fed67 Russell King 2015-08-20 199 uaccess_save r0
2190fed67 Russell King 2015-08-20 200 .if \uaccess
2190fed67 Russell King 2015-08-20 201 uaccess_disable r0
2190fed67 Russell King 2015-08-20 202 .endif
2190fed67 Russell King 2015-08-20 203
c0e7f7ee7 Daniel Thompson 2014-09-17 204 .if \trace
02fe2845d Russell King 2011-06-25 205 #ifdef CONFIG_TRACE_IRQFLAGS
02fe2845d Russell King 2011-06-25 206 bl trace_hardirqs_off
02fe2845d Russell King 2011-06-25 207 #endif
c0e7f7ee7 Daniel Thompson 2014-09-17 208 .endif
f2741b78b Russell King 2011-06-25 209 .endm
^1da177e4 Linus Torvalds 2005-04-16 210
f2741b78b Russell King 2011-06-25 211 .align 5
f2741b78b Russell King 2011-06-25 212 __dabt_svc:
2190fed67 Russell King 2015-08-20 @213 svc_entry uaccess=0
^1da177e4 Linus Torvalds 2005-04-16 214 mov r2, sp
da7404725 Russell King 2011-06-26 215 dabt_helper
e16b31bf4 Marc Zyngier 2013-11-04 216 THUMB( ldr r5, [sp, #S_PSR] ) @ potentially updated CPSR
b059bdc39 Russell King 2011-06-25 217 svc_exit r5 @ return from exception
c4c5716e1 Catalin Marinas 2009-02-16 218 UNWIND(.fnend )
93ed39701 Catalin Marinas 2008-08-28 219 ENDPROC(__dabt_svc)
^1da177e4 Linus Torvalds 2005-04-16 220
^1da177e4 Linus Torvalds 2005-04-16 221 .align 5
^1da177e4 Linus Torvalds 2005-04-16 222 __irq_svc:
ccea7a19e Russell King 2005-05-31 223 svc_entry
187a51ad1 Russell King 2005-05-21 224 irq_handler
1613cc111 Russell King 2011-06-25 225
:::::: The code at line 213 was first introduced by commit
:::::: 2190fed67ba6f3e8129513929f2395843645e928 ARM: entry: provide uaccess assembly macro hooks
:::::: TO: Russell King <rmk+kernel@arm.linux.org.uk>
:::::: CC: Russell King <rmk+kernel@arm.linux.org.uk>
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 64028 bytes --]
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-14 11:41 ` kbuild test robot
(?)
@ 2017-10-16 11:42 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-16 11:42 UTC (permalink / raw)
To: kbuild test robot
Cc: kbuild-all, linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko,
glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 10/16/2017 07:03 PM, Abbott Liu wrote:
>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
#:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
#:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
Thanks for the build test. This error can be fixed with the following change:
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
#ifdef CONFIG_KASAN
- movw r1, #:lower16:TASK_SIZE
- movt r1, #:upper16:TASK_SIZE
+ ldr r1, =TASK_SIZE
#else
mov r1, #TASK_SIZE
#endif
@@ -446,7 +445,12 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
+#ifdef CONFIG_KASAN
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
+#else
cmp r4, #TASK_SIZE
+#endif
blhs kuser_cmpxchg64_fixup
#endif
#endif
movw/movt are only available from the ARMv6T2 and ARMv7 instruction sets, whereas ldr can be used on ARMv4*, ARMv5T*, ARMv6* and ARMv7.
The ldr literal-pool load may be slightly slower than movw/movt, but I think the performance impact is very limited.
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-16 11:42 ` Liuwenliang (Lamb)
(?)
@ 2017-10-16 12:14 ` Ard Biesheuvel
-1 siblings, 0 replies; 253+ messages in thread
From: Ard Biesheuvel @ 2017-10-16 12:14 UTC (permalink / raw)
To: Liuwenliang (Lamb)
Cc: kbuild test robot, kbuild-all, linux, aryabinin, afzal.mohd.ma,
f.fainelli, labbott, kirill.shutemov, mhocko, cdall,
marc.zyngier, catalin.marinas, akpm, mawilcox, tglx, thgarnie,
keescook, arnd, vladimir.murzin, tixy, robin.murphy, mingo,
grygorii.strashko, glider, dvyukov, opendmb, linux-arm-kernel,
linux-kernel, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On 16 October 2017 at 12:42, Liuwenliang (Lamb) <liuwenliang@huawei.com> wrote:
> On 10/16/2017 07:03 PM, Abbott Liu wrote:
>>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
> #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
> #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>
> Thanks for building test. This error can be solved by following code:
> --- a/arch/arm/kernel/entry-armv.S
> +++ b/arch/arm/kernel/entry-armv.S
> @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
> get_thread_info tsk
> ldr r0, [tsk, #TI_ADDR_LIMIT]
> #ifdef CONFIG_KASAN
> - movw r1, #:lower16:TASK_SIZE
> - movt r1, #:upper16:TASK_SIZE
> + ldr r1, =TASK_SIZE
> #else
> mov r1, #TASK_SIZE
> #endif
This is unnecessary:
ldr r1, =TASK_SIZE
will be converted to a mov instruction by the assembler if the value
of TASK_SIZE fits its 12-bit immediate field.
So please remove the whole #ifdef, and just use ldr r1, =xxx
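The "12-bit immediate field" mentioned above is the ARM data-processing "modified immediate" encoding: an 8-bit constant plus a 4-bit rotation count, i.e. any 8-bit value rotated right by an even amount. As a sketch (not part of the patch), the check the assembler performs when deciding whether `ldr rX, =const` can become a `mov` looks like this:

```python
def arm_imm_encoding(value):
    """Return (imm8, rotate) if value fits an ARM data-processing
    immediate (an 8-bit constant rotated right by an even amount),
    or None if it does not. The 12-bit field holds imm8 plus a
    4-bit rotation count (rotate / 2)."""
    value &= 0xFFFFFFFF
    for rotate in range(0, 32, 2):
        # Rotating left by `rotate` undoes a right rotation by `rotate`.
        imm8 = ((value << rotate) | (value >> (32 - rotate))) & 0xFFFFFFFF
        if imm8 < 0x100:
            return (imm8, rotate)
    return None

# Without KASan, TASK_SIZE (0xBF000000 for a 0xC0000000 PAGE_OFFSET) is
# 0xBF rotated right by 8, so `mov r1, #TASK_SIZE` assembles directly.
assert arm_imm_encoding(0xBF000000) == (0xBF, 8)
# The KASan-adjusted TASK_SIZE (0xB6E00000, seen later in this thread)
# spans 11 significant bits, so it cannot be encoded and the assembler
# must fall back to a literal-pool load.
assert arm_imm_encoding(0xB6E00000) is None
```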
> @@ -446,7 +445,12 @@ ENDPROC(__fiq_abt)
> @ if it was interrupted in a critical region. Here we
> @ perform a quick test inline since it should be false
> @ 99.9999% of the time. The rest is done out of line.
> +#if CONFIG_KASAN
> + ldr r0, =TASK_SIZE
> + cmp r4, r0
> +#else
> cmp r4, #TASK_SIZE
> +#endif
> blhs kuser_cmpxchg64_fixup
> #endif
> #endif
>
> movt,movw can only be used in ARMv6*, ARMv7 instruction set. But ldr can be used in ARMv4*, ARMv5T*, ARMv6*, ARMv7.
> Maybe the performance is going to fall down by using ldr, but I think the influence of performance is very limited.
>
* Re: [PATCH 00/11] KASan for arm
2017-10-12 7:38 ` Arnd Bergmann
(?)
@ 2017-10-17 1:04 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-17 1:04 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Russell King - ARM Linux, Andrey Ryabinin, afzal.mohd.ma,
Florian Fainelli, Laura Abbott, Kirill A . Shutemov,
Michal Hocko, Christoffer Dall, Marc Zyngier, Catalin Marinas,
Andrew Morton, mawilcox, Thomas Gleixner, Thomas Garnier,
Kees Cook, Vladimir Murzin, tixy, Ard Biesheuvel, Robin Murphy,
Ingo Molnar, grygorii.strashko, Alexander Potapenko,
Dmitry Vyukov, Doug Berger, Linux ARM, Linux Kernel Mailing List,
kasan-dev, Linux-MM, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 10/16/2017 07:57 PM, Abbott Liu wrote:
>Nice!
>
>When I build-tested KASAN on x86 and arm64, I ran into a lot of build-time
>regressions (mostly warnings but also some errors), so I'd like to give it
>a spin in my randconfig tree before this gets merged. Can you point me
>to a git URL that I can pull into my testing tree?
>
>I could of course apply the patches from email, but I expect that there
>will be updated versions of the series, so it's easier if I can just pull
>the latest version.
>
> Arnd
I'm sorry, I don't have a git server. These patches are based on:
1. git remote -v
origin git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git (fetch)
origin git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git (push)
2. the commit is:
commit 46c1e79fee417f151547aa46fae04ab06cb666f4
Merge: ec846ec b130a69
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: Wed Sep 13 12:24:20 2017 -0700
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fixes from Ingo Molnar:
"A handful of tooling fixes"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf stat: Wait for the correct child
perf tools: Support running perf binaries with a dash in their name
perf config: Check not only section->from_system_config but also item's
perf ui progress: Fix progress update
perf ui progress: Make sure we always define step value
perf tools: Open perf.data with O_CLOEXEC flag
tools lib api: Fix make DEBUG=1 build
perf tests: Fix compile when libunwind's unwind.h is available
tools include linux: Guard against redefinition of some macros
I'm sorry that I didn't base them on a stable version.
3. config: arch/arm/configs/vexpress_defconfig
4. gcc version: gcc version 6.1.0
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-16 12:14 ` Ard Biesheuvel
(?)
@ 2017-10-17 11:27 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-17 11:27 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: kbuild test robot, kbuild-all, linux, aryabinin, afzal.mohd.ma,
f.fainelli, labbott, kirill.shutemov, mhocko, cdall,
marc.zyngier, catalin.marinas, akpm, mawilcox, tglx, thgarnie,
keescook, arnd, vladimir.murzin, tixy, robin.murphy, mingo,
grygorii.strashko, glider, dvyukov, opendmb, linux-arm-kernel,
linux-kernel, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On 10/17/2017 12:40 AM, Abbott Liu wrote:
> Ard Biesheuvel [ard.biesheuvel@linaro.org] wrote
>This is unnecessary:
>
>ldr r1, =TASK_SIZE
>
>will be converted to a mov instruction by the assembler if the value of TASK_SIZE fits its 12-bit immediate field.
>
>So please remove the whole #ifdef, and just use ldr r1, =xxx
Thanks for your review.
The assembler on my computer doesn't convert ldr r1, =xxx into a mov instruction. Here is the objdump of vmlinux:
c0a3b100 <__irq_svc>:
c0a3b100: e24dd04c sub sp, sp, #76 ; 0x4c
c0a3b104: e31d0004 tst sp, #4
c0a3b108: 024dd004 subeq sp, sp, #4
c0a3b10c: e88d1ffe stm sp, {r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
c0a3b110: e8900038 ldm r0, {r3, r4, r5}
c0a3b114: e28d7030 add r7, sp, #48 ; 0x30
c0a3b118: e3e06000 mvn r6, #0
c0a3b11c: e28d204c add r2, sp, #76 ; 0x4c
c0a3b120: 02822004 addeq r2, r2, #4
c0a3b124: e52d3004 push {r3} ; (str r3, [sp, #-4]!)
c0a3b128: e1a0300e mov r3, lr
c0a3b12c: e887007c stm r7, {r2, r3, r4, r5, r6}
c0a3b130: e1a0972d lsr r9, sp, #14
c0a3b134: e1a09709 lsl r9, r9, #14
c0a3b138: e5990008 ldr r0, [r9, #8]
---c0a3b13c: e59f1054 ldr r1, [pc, #84] ; c0a3b198 <__irq_svc+0x98> //ldr r1, =TASK_SIZE
c0a3b140: e5891008 str r1, [r9, #8]
c0a3b144: e58d004c str r0, [sp, #76] ; 0x4c
c0a3b148: ee130f10 mrc 15, 0, r0, cr3, cr0, {0}
c0a3b14c: e58d0048 str r0, [sp, #72] ; 0x48
c0a3b150: e3a00051 mov r0, #81 ; 0x51
c0a3b154: ee030f10 mcr 15, 0, r0, cr3, cr0, {0}
---c0a3b158: e59f103c ldr r1, [pc, #60] ; c0a3b19c <__irq_svc+0x9c> //original __irq_svc also used the same instruction
c0a3b15c: e1a0000d mov r0, sp
c0a3b160: e28fe000 add lr, pc, #0
c0a3b164: e591f000 ldr pc, [r1]
c0a3b168: e5998004 ldr r8, [r9, #4]
c0a3b16c: e5990000 ldr r0, [r9]
c0a3b170: e3380000 teq r8, #0
c0a3b174: 13a00000 movne r0, #0
c0a3b178: e3100002 tst r0, #2
c0a3b17c: 1b000007 blne c0a3b1a0 <svc_preempt>
c0a3b180: e59d104c ldr r1, [sp, #76] ; 0x4c
c0a3b184: e59d0048 ldr r0, [sp, #72] ; 0x48
c0a3b188: ee030f10 mcr 15, 0, r0, cr3, cr0, {0}
c0a3b18c: e5891008 str r1, [r9, #8]
c0a3b190: e16ff005 msr SPSR_fsxc, r5
c0a3b194: e8ddffff ldm sp, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip, sp, lr, pc}^
---c0a3b198: b6e00000 .word 0xb6e00000 //TASK_SIZE:0xb6e00000
c0a3b19c: c0ccccf0 .word 0xc0ccccf0
Even if "ldr r1, =TASK_SIZE" is not converted to a mov instruction by some assemblers, I still think it is better
to remove the whole #ifdef, because the performance impact of ldr is very limited.
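The `ldr r1, [pc, #84]` in the dump above is the literal-pool load that `ldr r1, =TASK_SIZE` expands to: on ARM, reading pc yields the address of the current instruction plus 8, so the load fetches a word the assembler placed at the end of the function. A small sketch (addresses taken from the listing above) checks the arithmetic:

```python
def literal_address(insn_addr, offset):
    """Address read by an ARM `ldr rX, [pc, #offset]`: the PC value
    observed by the instruction is its own address + 8 (two words
    ahead), so the literal lives at insn_addr + 8 + offset."""
    return insn_addr + 8 + offset

# ldr r1, [pc, #84] at c0a3b13c reaches the TASK_SIZE literal at c0a3b198
assert literal_address(0xC0A3B13C, 84) == 0xC0A3B198
# ldr r1, [pc, #60] at c0a3b158 reaches the handler-pointer word at c0a3b19c
assert literal_address(0xC0A3B158, 60) == 0xC0A3B19C
```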
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-17 11:27 ` Liuwenliang (Lamb)
(?)
@ 2017-10-17 11:52 ` Ard Biesheuvel
-1 siblings, 0 replies; 253+ messages in thread
From: Ard Biesheuvel @ 2017-10-17 11:52 UTC (permalink / raw)
To: Liuwenliang (Lamb)
Cc: kbuild test robot, kbuild-all, linux, aryabinin, afzal.mohd.ma,
f.fainelli, labbott, kirill.shutemov, mhocko, cdall,
marc.zyngier, catalin.marinas, akpm, mawilcox, tglx, thgarnie,
keescook, arnd, vladimir.murzin, tixy, robin.murphy, mingo,
grygorii.strashko, glider, dvyukov, opendmb, linux-arm-kernel,
linux-kernel, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On 17 October 2017 at 12:27, Liuwenliang (Lamb) <liuwenliang@huawei.com> wrote:
> On 10/17/2017 12:40 AM, Abbott Liu wrote:
>> Ard Biesheuvel [ard.biesheuvel@linaro.org] wrote
>>This is unnecessary:
>>
>>ldr r1, =TASK_SIZE
>>
>>will be converted to a mov instruction by the assembler if the value of TASK_SIZE fits its 12-bit immediate field.
>>
>>So please remove the whole #ifdef, and just use ldr r1, =xxx
>
> Thanks for your review.
>
> The assembler on my computer don't convert ldr r1,=xxx into mov instruction.
What I said was
'if the value of TASK_SIZE fits its 12-bit immediate field'
and your value of TASK_SIZE is 0xb6e00000, which cannot be decomposed
in the right way.
If you build with KASAN disabled, it will generate a mov instruction instead.
> Here is the objdump for vmlinux:
>
> c0a3b100 <__irq_svc>:
> c0a3b100: e24dd04c sub sp, sp, #76 ; 0x4c
> c0a3b104: e31d0004 tst sp, #4
> c0a3b108: 024dd004 subeq sp, sp, #4
> c0a3b10c: e88d1ffe stm sp, {r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
> c0a3b110: e8900038 ldm r0, {r3, r4, r5}
> c0a3b114: e28d7030 add r7, sp, #48 ; 0x30
> c0a3b118: e3e06000 mvn r6, #0
> c0a3b11c: e28d204c add r2, sp, #76 ; 0x4c
> c0a3b120: 02822004 addeq r2, r2, #4
> c0a3b124: e52d3004 push {r3} ; (str r3, [sp, #-4]!)
> c0a3b128: e1a0300e mov r3, lr
> c0a3b12c: e887007c stm r7, {r2, r3, r4, r5, r6}
> c0a3b130: e1a0972d lsr r9, sp, #14
> c0a3b134: e1a09709 lsl r9, r9, #14
> c0a3b138: e5990008 ldr r0, [r9, #8]
> ---c0a3b13c: e59f1054 ldr r1, [pc, #84] ; c0a3b198 <__irq_svc+0x98> //ldr r1, =TASK_SIZE
> c0a3b140: e5891008 str r1, [r9, #8]
> c0a3b144: e58d004c str r0, [sp, #76] ; 0x4c
> c0a3b148: ee130f10 mrc 15, 0, r0, cr3, cr0, {0}
> c0a3b14c: e58d0048 str r0, [sp, #72] ; 0x48
> c0a3b150: e3a00051 mov r0, #81 ; 0x51
> c0a3b154: ee030f10 mcr 15, 0, r0, cr3, cr0, {0}
> ---c0a3b158: e59f103c ldr r1, [pc, #60] ; c0a3b19c <__irq_svc+0x9c> //orginal irq_svc also used same instruction
> c0a3b15c: e1a0000d mov r0, sp
> c0a3b160: e28fe000 add lr, pc, #0
> c0a3b164: e591f000 ldr pc, [r1]
> c0a3b168: e5998004 ldr r8, [r9, #4]
> c0a3b16c: e5990000 ldr r0, [r9]
> c0a3b170: e3380000 teq r8, #0
> c0a3b174: 13a00000 movne r0, #0
> c0a3b178: e3100002 tst r0, #2
> c0a3b17c: 1b000007 blne c0a3b1a0 <svc_preempt>
> c0a3b180: e59d104c ldr r1, [sp, #76] ; 0x4c
> c0a3b184: e59d0048 ldr r0, [sp, #72] ; 0x48
> c0a3b188: ee030f10 mcr 15, 0, r0, cr3, cr0, {0}
> c0a3b18c: e5891008 str r1, [r9, #8]
> c0a3b190: e16ff005 msr SPSR_fsxc, r5
> c0a3b194: e8ddffff ldm sp, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip, sp, lr, pc}^
> ---c0a3b198: b6e00000 .word 0xb6e00000 //TASK_SIZE:0xb6e00000
> c0a3b19c: c0ccccf0 .word 0xc0ccccf0
>
>
>
> Even "ldr r1, =TASK_SIZE" won't be converted to a mov instruction by some assembler, I also think it is better
> to remove the whole #ifdef because the influence of performance by ldr is very limited.
>
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
@ 2017-10-17 11:52 ` Ard Biesheuvel
0 siblings, 0 replies; 253+ messages in thread
From: Ard Biesheuvel @ 2017-10-17 11:52 UTC (permalink / raw)
To: Liuwenliang (Lamb)
Cc: kbuild test robot, kbuild-all, linux, aryabinin, afzal.mohd.ma,
f.fainelli, labbott, kirill.shutemov, mhocko, cdall,
marc.zyngier, catalin.marinas, akpm, mawilcox, tglx, thgarnie,
keescook, arnd, vladimir.murzin, tixy, robin.murphy, mingo,
grygorii.strashko, glider, dvyukov, opendmb, linux-arm-kernel,
linux-kernel, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On 17 October 2017 at 12:27, Liuwenliang (Lamb) <liuwenliang@huawei.com> wrote:
> On 10/17/2017 12:40 AM, Abbott Liu wrote:
>> Ard Biesheuvel [ard.biesheuvel@linaro.org] wrote
>>This is unnecessary:
>>
>>ldr r1, =TASK_SIZE
>>
>>will be converted to a mov instruction by the assembler if the value of TASK_SIZE fits its 12-bit immediate field.
>>
>>So please remove the whole #ifdef, and just use ldr r1, =xxx
>
> Thanks for your review.
>
> The assembler on my computer don't convert ldr r1,=xxx into mov instruction.
What I said was
'if the value of TASK_SIZE fits its 12-bit immediate field'
and your value of TASK_SIZE is 0xb6e00000, which cannot be decomposed
in the right way.
If you build with KASAN disabled, it will generate a mov instruction instead.
> Here is the objdump for vmlinux:
>
> c0a3b100 <__irq_svc>:
> c0a3b100: e24dd04c sub sp, sp, #76 ; 0x4c
> c0a3b104: e31d0004 tst sp, #4
> c0a3b108: 024dd004 subeq sp, sp, #4
> c0a3b10c: e88d1ffe stm sp, {r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
> c0a3b110: e8900038 ldm r0, {r3, r4, r5}
> c0a3b114: e28d7030 add r7, sp, #48 ; 0x30
> c0a3b118: e3e06000 mvn r6, #0
> c0a3b11c: e28d204c add r2, sp, #76 ; 0x4c
> c0a3b120: 02822004 addeq r2, r2, #4
> c0a3b124: e52d3004 push {r3} ; (str r3, [sp, #-4]!)
> c0a3b128: e1a0300e mov r3, lr
> c0a3b12c: e887007c stm r7, {r2, r3, r4, r5, r6}
> c0a3b130: e1a0972d lsr r9, sp, #14
> c0a3b134: e1a09709 lsl r9, r9, #14
> c0a3b138: e5990008 ldr r0, [r9, #8]
> ---c0a3b13c: e59f1054 ldr r1, [pc, #84] ; c0a3b198 <__irq_svc+0x98> //ldr r1, =TASK_SIZE
> c0a3b140: e5891008 str r1, [r9, #8]
> c0a3b144: e58d004c str r0, [sp, #76] ; 0x4c
> c0a3b148: ee130f10 mrc 15, 0, r0, cr3, cr0, {0}
> c0a3b14c: e58d0048 str r0, [sp, #72] ; 0x48
> c0a3b150: e3a00051 mov r0, #81 ; 0x51
> c0a3b154: ee030f10 mcr 15, 0, r0, cr3, cr0, {0}
> ---c0a3b158: e59f103c ldr r1, [pc, #60] ; c0a3b19c <__irq_svc+0x9c> //original irq_svc also used the same instruction
> c0a3b15c: e1a0000d mov r0, sp
> c0a3b160: e28fe000 add lr, pc, #0
> c0a3b164: e591f000 ldr pc, [r1]
> c0a3b168: e5998004 ldr r8, [r9, #4]
> c0a3b16c: e5990000 ldr r0, [r9]
> c0a3b170: e3380000 teq r8, #0
> c0a3b174: 13a00000 movne r0, #0
> c0a3b178: e3100002 tst r0, #2
> c0a3b17c: 1b000007 blne c0a3b1a0 <svc_preempt>
> c0a3b180: e59d104c ldr r1, [sp, #76] ; 0x4c
> c0a3b184: e59d0048 ldr r0, [sp, #72] ; 0x48
> c0a3b188: ee030f10 mcr 15, 0, r0, cr3, cr0, {0}
> c0a3b18c: e5891008 str r1, [r9, #8]
> c0a3b190: e16ff005 msr SPSR_fsxc, r5
> c0a3b194: e8ddffff ldm sp, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip, sp, lr, pc}^
> ---c0a3b198: b6e00000 .word 0xb6e00000 //TASK_SIZE:0xb6e00000
> c0a3b19c: c0ccccf0 .word 0xc0ccccf0
>
>
>
> Even if "ldr r1, =TASK_SIZE" isn't converted to a mov instruction by some assemblers, I still think it is better
> to remove the whole #ifdef, because the performance impact of ldr is very limited.
>
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 00/11] KASan for arm
2017-10-11 22:58 ` Russell King - ARM Linux
@ 2017-10-17 12:41 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-17 12:41 UTC (permalink / raw)
To: Russell King - ARM Linux, Laura Abbott
Cc: Florian Fainelli, aryabinin, afzal.mohd.ma, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko,
Nicolas Pitre, opendmb, linux-kernel, kasan-dev, Zengweilin,
linux-mm, Dailei, glider, dvyukov, Jiazhenghua, linux-arm-kernel,
Heshaoliang
On 10/17/2017 7:40 PM, Abbott Liu wrote:
>On Wed, Oct 11, 2017 at 03:10:56PM -0700, Laura Abbott wrote:
>The decompressor does not link with the standard C library, so it
>needs to provide implementations of standard C library functionality
>where required. That means, if we have any memset() users, we need
>to provide the memset() function.
>
>The undef is there to avoid the optimisation we have in asm/string.h
>for __memzero, because we don't want to use __memzero in the
>decompressor.
>
>Whether memset() is required depends on which compression method is
>being used - LZO and LZ4 appear to make direct references to it, but
>the inflate (gzip) decompressor code does not.
>
>What this means is that all supported kernel compression options need
>to be tested.
Thanks for your review. I am sorry that I am so late in replying to your email.
I will test all ARM kernel compression options.
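For reference, a freestanding memset is tiny. Below is a minimal sketch of what the decompressor has to provide when no C library is linked in; it is shown under the hypothetical name decomp_memset only so it does not clash with a host libc in a test build (the kernel's real implementation is optimized assembly):

```c
#include <stddef.h>

/* Byte-at-a-time memset, as a decompressor that does not link with
 * libc might provide it. In the decompressor this would simply be
 * named memset; decomp_memset is a stand-in name for testing. */
static void *decomp_memset(void *s, int c, size_t n)
{
	unsigned char *p = s;

	while (n--)
		*p++ = (unsigned char)c;
	return s;
}
```

As the review notes, LZO and LZ4 reference memset() directly, so some definition like this must be present for those compression options, while the gzip (inflate) path does not need it.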
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-17 11:52 ` Ard Biesheuvel
@ 2017-10-17 13:02 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-17 13:02 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: kbuild test robot, kbuild-all, linux, aryabinin, afzal.mohd.ma,
f.fainelli, labbott, kirill.shutemov, mhocko, cdall,
marc.zyngier, catalin.marinas, akpm, mawilcox, tglx, thgarnie,
keescook, arnd, vladimir.murzin, tixy, robin.murphy, mingo,
grygorii.strashko, glider, dvyukov, opendmb, linux-arm-kernel,
linux-kernel, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On 10/17/2017 8:45 PM, Abbott Liu wrote:
>What I said was
>
>'if the value of TASK_SIZE fits its 12-bit immediate field'
>
>and your value of TASK_SIZE is 0xb6e00000, which cannot be decomposed in the right way.
>
>If you build with KASAN disabled, it will generate a mov instruction instead.
Thanks for your explanation. I understand now. I have tested it, and the result confirms what
you said.
Here is the test log:
c010e9e0 <__irq_svc>:
c010e9e0: e24dd04c sub sp, sp, #76 ; 0x4c
c010e9e4: e31d0004 tst sp, #4
c010e9e8: 024dd004 subeq sp, sp, #4
c010e9ec: e88d1ffe stm sp, {r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
c010e9f0: e8900038 ldm r0, {r3, r4, r5}
c010e9f4: e28d7030 add r7, sp, #48 ; 0x30
c010e9f8: e3e06000 mvn r6, #0
c010e9fc: e28d204c add r2, sp, #76 ; 0x4c
c010ea00: 02822004 addeq r2, r2, #4
c010ea04: e52d3004 push {r3} ; (str r3, [sp, #-4]!)
c010ea08: e1a0300e mov r3, lr
c010ea0c: e887007c stm r7, {r2, r3, r4, r5, r6}
c010ea10: e1a0972d lsr r9, sp, #14
c010ea14: e1a09709 lsl r9, r9, #14
c010ea18: e5990008 ldr r0, [r9, #8]
c010ea1c: e3a014bf mov r1, #-1090519040 ; 0xbf000000 // ldr r1,=0xbf000000
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 21:41 ` Russell King - ARM Linux
@ 2017-10-17 13:28 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-17 13:28 UTC (permalink / raw)
To: Russell King - ARM Linux, Florian Fainelli
Cc: aryabinin, afzal.mohd.ma, labbott, kirill.shutemov, mhocko,
cdall, marc.zyngier, catalin.marinas, akpm, mawilcox, tglx,
thgarnie, keescook, arnd, vladimir.murzin, tixy, ard.biesheuvel,
robin.murphy, mingo, grygorii.strashko, opendmb, linux-kernel,
kasan-dev, Zengweilin, linux-mm, Dailei, glider, dvyukov,
Jiazhenghua, linux-arm-kernel, Heshaoliang
2017.10.12 05:42 AM Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>> Please don't make this "exclusive" just conditionally call
>> kasan_early_init(), remove the call to start_kernel from
>> kasan_early_init and keep the call to start_kernel here.
>iow:
>
>#ifdef CONFIG_KASAN
> bl kasan_early_init
>#endif
> b start_kernel
>
>This has the advantage that we don't leave any stack frame from
>kasan_early_init() on the init task stack.
Thanks for your review. I tested your suggestion and it works well.
I agree with you that it is better to use the following code:
#ifdef CONFIG_KASAN
bl kasan_early_init
#endif
b start_kernel
than:
#ifdef CONFIG_KASAN
bl kasan_early_init
#else
b start_kernel
#endif
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 23:42 ` Dmitry Osipenko
@ 2017-10-19 6:52 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-19 6:52 UTC (permalink / raw)
To: Dmitry Osipenko, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, marc.zyngier,
catalin.marinas, akpm, mawilcox, tglx, thgarnie, keescook, arnd,
vladimir.murzin, tixy, ard.biesheuvel, robin.murphy, mingo,
grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 2017.10.12 7:43AM Dmitry Osipenko [mailto:digetx@gmail.com] wrote:
>Shouldn't all __pgprot's contain L_PTE_MT_WRITETHROUGH ?
>
>[...]
>
>--
>Dmitry
Thanks for your review. I'm sorry that my reply is so late.
I don't think L_PTE_MT_WRITETHROUGH is needed for all ARM SoCs. So I think KASan's
mapping can use PAGE_KERNEL, which is initialized appropriately for each ARM SoC, or
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY) for the readonly zero shadow.
I don't think the mapping table flags in kasan_early_init need to be changed, for the following reasons:
1) PAGE_KERNEL can't be used in kasan_early_init because pgprot_kernel, which is used to define
PAGE_KERNEL, has not been initialized yet at that point.
2) All of the KASan shadow's mapping tables are created again in the kasan_init function.
In short: I think only the mapping table flags in the kasan_init function need to be changed to PAGE_KERNEL
or __pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY).
Here is the code, which I have already tested:
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -124,7 +124,7 @@ pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
void *p = kasan_alloc_block(PAGE_SIZE, node);
if (!p)
return NULL;
- entry = pfn_pte(virt_to_pfn(p), __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
+ entry = pfn_pte(virt_to_pfn(p), __pgprot(pgprot_val(PAGE_KERNEL)));
set_pte_at(&init_mm, addr, pte, entry);
}
return pte;
@@ -253,7 +254,7 @@ void __init kasan_init(void)
set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
&kasan_zero_pte[i], pfn_pte(
virt_to_pfn(kasan_zero_page),
- __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY)));
+ __pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
cpu_set_ttbr0(orig_ttbr0);
flush_cache_all();
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 8:22 ` Abbott Liu
@ 2017-10-19 11:09 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 11:09 UTC (permalink / raw)
To: Abbott Liu
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, jiazhenghua, dylix.dailei, zengweilin, heshaoliang
On Wed, Oct 11, 2017 at 04:22:17PM +0800, Abbott Liu wrote:
> diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
> index b2902a5..10cee6a 100644
> --- a/arch/arm/include/asm/pgalloc.h
> +++ b/arch/arm/include/asm/pgalloc.h
> @@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
> */
> #define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
> #define pmd_free(mm, pmd) do { } while (0)
> +#ifndef CONFIG_KASAN
> #define pud_populate(mm,pmd,pte) BUG()
> -
> +#else
> +#define pud_populate(mm,pmd,pte) do { } while (0)
> +#endif
Please explain this change - we don't have a "pud" as far as the rest of
the Linux MM layer is concerned, so why do we need it for kasan?
I suspect it comes from the way we wrap up the page tables - where ARM
does it one way (because it has to) vs the subsequently merged method,
which is completely upside down to what ARM is doing, and is therefore
totally incompatible and impossible to fit in with our way.
> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> index f2e1af4..6e26714 100644
> --- a/arch/arm/include/asm/proc-fns.h
> +++ b/arch/arm/include/asm/proc-fns.h
> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #else
> #define cpu_get_pgd() \
> ({ \
> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
> pg &= ~0x3fff; \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr(nr, val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +#define cpu_get_ttbr(nr) \
> + ({ \
> + unsigned long ttbr; \
> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
> + : "=r" (ttbr)); \
> + ttbr; \
> + })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #endif
>
> #else /*!CONFIG_MMU */
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 1d468b5..52c4858 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -16,7 +16,11 @@
> #include <asm/fpstate.h>
> #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +#define THREAD_SIZE_ORDER 2
> +#else
> #define THREAD_SIZE_ORDER 1
> +#endif
> #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
> #define THREAD_START_SP (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 8733012..c17f4a2 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -101,7 +101,11 @@ __mmap_switched:
> str r2, [r6] @ Save atags pointer
> cmp r7, #0
> strne r0, [r7] @ Save control register values
> +#ifdef CONFIG_KASAN
> + b kasan_early_init
> +#else
> b start_kernel
> +#endif
> ENDPROC(__mmap_switched)
>
> .align 2
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index 8e9a3e4..985d9a3 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -62,6 +62,7 @@
> #include <asm/unwind.h>
> #include <asm/memblock.h>
> #include <asm/virt.h>
> +#include <asm/kasan.h>
>
> #include "atags.h"
>
> @@ -1108,6 +1109,7 @@ void __init setup_arch(char **cmdline_p)
> early_ioremap_reset();
>
> paging_init(mdesc);
> + kasan_init();
> request_standard_resources(mdesc);
>
> if (mdesc->restart)
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> index 950d19b..498c316 100644
> --- a/arch/arm/mm/Makefile
> +++ b/arch/arm/mm/Makefile
> @@ -106,4 +106,9 @@ obj-$(CONFIG_CACHE_L2X0) += cache-l2x0.o l2c-l2x0-resume.o
> obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
> obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
> obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
> +
> +KASAN_SANITIZE_kasan_init.o := n
> +obj-$(CONFIG_KASAN) += kasan_init.o
Why is this placed in the middle of the cache object listing?
> +
> +
> obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> new file mode 100644
> index 0000000..2bf0782
> --- /dev/null
> +++ b/arch/arm/mm/kasan_init.c
> @@ -0,0 +1,257 @@
> +#include <linux/bootmem.h>
> +#include <linux/kasan.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/start_kernel.h>
> +
> +#include <asm/cputype.h>
> +#include <asm/highmem.h>
> +#include <asm/mach/map.h>
> +#include <asm/memory.h>
> +#include <asm/page.h>
> +#include <asm/pgalloc.h>
> +#include <asm/pgtable.h>
> +#include <asm/procinfo.h>
> +#include <asm/proc-fns.h>
> +#include <asm/tlbflush.h>
> +#include <asm/cp15.h>
> +#include <linux/sched/task.h>
> +
> +#include "mm.h"
> +
> +static pgd_t tmp_page_table[PTRS_PER_PGD] __initdata __aligned(1ULL << 14);
> +
> +pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
> +
> +static __init void *kasan_alloc_block(size_t size, int node)
> +{
> + return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> + BOOTMEM_ALLOC_ACCESSIBLE, node);
> +}
> +
> +static void __init kasan_early_pmd_populate(unsigned long start, unsigned long end, pud_t *pud)
> +{
> + unsigned long addr;
> + unsigned long next;
> + pmd_t *pmd;
> +
> + pmd = pmd_offset(pud, start);
> + for (addr = start; addr < end;) {
> + pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> + next = pmd_addr_end(addr, end);
> + addr = next;
> + flush_pmd_entry(pmd);
> + pmd++;
> + }
> +}
> +
> +static void __init kasan_early_pud_populate(unsigned long start, unsigned long end, pgd_t *pgd)
> +{
> + unsigned long addr;
> + unsigned long next;
> + pud_t *pud;
> +
> + pud = pud_offset(pgd, start);
> + for (addr = start; addr < end;) {
> + next = pud_addr_end(addr, end);
> + kasan_early_pmd_populate(addr, next, pud);
> + addr = next;
> + pud++;
> + }
> +}
> +
> +void __init kasan_map_early_shadow(pgd_t *pgdp)
> +{
> + int i;
> + unsigned long start = KASAN_SHADOW_START;
> + unsigned long end = KASAN_SHADOW_END;
> + unsigned long addr;
> + unsigned long next;
> + pgd_t *pgd;
> +
> + for (i = 0; i < PTRS_PER_PTE; i++)
> + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> + &kasan_zero_pte[i], pfn_pte(
> + virt_to_pfn(kasan_zero_page),
> + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN)));
> +
> + pgd = pgd_offset_k(start);
> + for (addr = start; addr < end;) {
> + next = pgd_addr_end(addr, end);
> + kasan_early_pud_populate(addr, next, pgd);
> + addr = next;
> + pgd++;
> + }
> +}
> +
> +extern struct proc_info_list *lookup_processor_type(unsigned int);
> +
> +void __init kasan_early_init(void)
> +{
> + struct proc_info_list *list;
> +
> + /*
> + * locate processor in the list of supported processor
> + * types. The linker builds this table for us from the
> + * entries in arch/arm/mm/proc-*.S
> + */
> + list = lookup_processor_type(read_cpuid_id());
> + if (list) {
> +#ifdef MULTI_CPU
> + processor = *list->proc;
> +#endif
> + }
> +
> + BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END - (1UL << 29));
> +
> +
> + kasan_map_early_shadow(swapper_pg_dir);
> + start_kernel();
> +}
> +
> +static void __init clear_pgds(unsigned long start,
> + unsigned long end)
> +{
> + for (; start && start < end; start += PMD_SIZE)
> + pmd_clear(pmd_off_k(start));
> +}
> +
> +pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
> +{
> + pte_t *pte = pte_offset_kernel(pmd, addr);
> + if (pte_none(*pte)) {
> + pte_t entry;
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + entry = pfn_pte(virt_to_pfn(p), __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
> + set_pte_at(&init_mm, addr, pte, entry);
> + }
> + return pte;
> +}
> +
> +pmd_t * __meminit kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
> +{
> + pmd_t *pmd = pmd_offset(pud, addr);
> + if (pmd_none(*pmd)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + pmd_populate_kernel(&init_mm, pmd, p);
> + }
> + return pmd;
> +}
> +
> +pud_t * __meminit kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
> +{
> + pud_t *pud = pud_offset(pgd, addr);
> + if (pud_none(*pud)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + pr_err("populating pud addr %lx\n", addr);
> + pud_populate(&init_mm, pud, p);
> + }
> + return pud;
> +}
> +
> +pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
> +{
> + pgd_t *pgd = pgd_offset_k(addr);
> + if (pgd_none(*pgd)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + pgd_populate(&init_mm, pgd, p);
> + }
> + return pgd;
> +}
This all looks wrong - you are aware that on non-LPAE platforms, there
is only a _two_ level page table - the top level page table is 16K in
size, and each _individual_ lower level page table is actually 1024
bytes, but we do some special handling in the kernel to combine two
together. It looks to me that you allocate memory for each Linux-
abstracted page table level whether the hardware needs it or not.
Is there any reason why the pre-existing "create_mapping()" function
can't be used, and you've had to rewrite that code here?
> +
> +static int __init create_mapping(unsigned long start, unsigned long end, int node)
> +{
> + unsigned long addr = start;
> + pgd_t *pgd;
> + pud_t *pud;
> + pmd_t *pmd;
> + pte_t *pte;
A blank line would help between the auto variables and the code of the
function.
> + pr_info("populating shadow for %lx, %lx\n", start, end);
Blank line here too please.
> + for (; addr < end; addr += PAGE_SIZE) {
> + pgd = kasan_pgd_populate(addr, node);
> + if (!pgd)
> + return -ENOMEM;
> +
> + pud = kasan_pud_populate(pgd, addr, node);
> + if (!pud)
> + return -ENOMEM;
> +
> + pmd = kasan_pmd_populate(pud, addr, node);
> + if (!pmd)
> + return -ENOMEM;
> +
> + pte = kasan_pte_populate(pmd, addr, node);
> + if (!pte)
> + return -ENOMEM;
> + }
> + return 0;
> +}
> +
> +
> +void __init kasan_init(void)
> +{
> + struct memblock_region *reg;
> + u64 orig_ttbr0;
> +
> + orig_ttbr0 = cpu_get_ttbr(0);
> +
> +#ifdef CONFIG_ARM_LPAE
> + memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> + set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
> + cpu_set_ttbr0(__pa(tmp_page_table));
> +#else
> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> + cpu_set_ttbr0(__pa(tmp_page_table));
> +#endif
> + flush_cache_all();
> + local_flush_bp_all();
> + local_flush_tlb_all();
What are you trying to achieve with all this complexity? Some comments
might be useful, especially for those of us who don't know the internals
of kasan.

> +
> + clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> +
> + kasan_populate_zero_shadow(
> + kasan_mem_to_shadow((void *)KASAN_SHADOW_START),
> + kasan_mem_to_shadow((void *)KASAN_SHADOW_END));
> +
> + kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
> + kasan_mem_to_shadow((void *)-1UL) + 1);
> +
> + for_each_memblock(memory, reg) {
> + void *start = __va(reg->base);
> + void *end = __va(reg->base + reg->size);
Isn't this going to complain if the translation macro debugging is enabled?
> +
> + if (reg->base + reg->size > arm_lowmem_limit)
> + end = __va(arm_lowmem_limit);
> + if (start >= end)
> + break;
> +
> + create_mapping((unsigned long)kasan_mem_to_shadow(start),
> + (unsigned long)kasan_mem_to_shadow(end),
> + NUMA_NO_NODE);
> + }
> +
> + /* 1. Module global variables live in MODULES_VADDR ~ MODULES_END, so
> + *    that region needs a real shadow mapping.
> + * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE and the shadow of
> + *    MODULES_VADDR ~ MODULES_END fall within the same PMD_SIZE, so we
> + *    can't use kasan_populate_zero_shadow here.
> + */
> + create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
> + (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
> + NUMA_NO_NODE);
> + cpu_set_ttbr0(orig_ttbr0);
> + flush_cache_all();
> + local_flush_bp_all();
> + local_flush_tlb_all();
> + memset(kasan_zero_page, 0, PAGE_SIZE);
> + pr_info("Kernel address sanitizer initialized\n");
> + init_task.kasan_depth = 0;
> +}
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 6f319fb..12749da 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -358,7 +358,7 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
> if (redzone_adjust > 0)
> *size += redzone_adjust;
>
> - *size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
> + *size = min((size_t)KMALLOC_MAX_SIZE, max(*size, cache->object_size +
> optimal_redzone(cache->object_size)));
>
> /*
> --
> 2.9.0
>
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-10-19 11:09 ` Russell King - ARM Linux
0 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 11:09 UTC (permalink / raw)
To: Abbott Liu
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, jiazhenghua, dylix.dailei, zengweilin, heshaoliang
On Wed, Oct 11, 2017 at 04:22:17PM +0800, Abbott Liu wrote:
> diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
> index b2902a5..10cee6a 100644
> --- a/arch/arm/include/asm/pgalloc.h
> +++ b/arch/arm/include/asm/pgalloc.h
> @@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
> */
> #define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
> #define pmd_free(mm, pmd) do { } while (0)
> +#ifndef CONFIG_KASAN
> #define pud_populate(mm,pmd,pte) BUG()
> -
> +#else
> +#define pud_populate(mm,pmd,pte) do { } while (0)
> +#endif
Please explain this change - we don't have a "pud" as far as the rest of
the Linux MM layer is concerned, so why do we need it for kasan?
I suspect it comes from the way we wrap up the page tables - where ARM
does it one way (because it has to) vs the subsequently merged method
which is completely upside down to what ARM's doing, and therefore is
totally incompatible and impossible to fit in with our way.
> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> index f2e1af4..6e26714 100644
> --- a/arch/arm/include/asm/proc-fns.h
> +++ b/arch/arm/include/asm/proc-fns.h
> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #else
> #define cpu_get_pgd() \
> ({ \
> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
> pg &= ~0x3fff; \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr(nr, val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +#define cpu_get_ttbr(nr) \
> + ({ \
> + unsigned long ttbr; \
> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
> + : "=r" (ttbr)); \
> + ttbr; \
> + })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #endif
>
> #else /*!CONFIG_MMU */
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 1d468b5..52c4858 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -16,7 +16,11 @@
> #include <asm/fpstate.h>
> #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +#define THREAD_SIZE_ORDER 2
> +#else
> #define THREAD_SIZE_ORDER 1
> +#endif
> #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
> #define THREAD_START_SP (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 8733012..c17f4a2 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -101,7 +101,11 @@ __mmap_switched:
> str r2, [r6] @ Save atags pointer
> cmp r7, #0
> strne r0, [r7] @ Save control register values
> +#ifdef CONFIG_KASAN
> + b kasan_early_init
> +#else
> b start_kernel
> +#endif
> ENDPROC(__mmap_switched)
>
> .align 2
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index 8e9a3e4..985d9a3 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -62,6 +62,7 @@
> #include <asm/unwind.h>
> #include <asm/memblock.h>
> #include <asm/virt.h>
> +#include <asm/kasan.h>
>
> #include "atags.h"
>
> @@ -1108,6 +1109,7 @@ void __init setup_arch(char **cmdline_p)
> early_ioremap_reset();
>
> paging_init(mdesc);
> + kasan_init();
> request_standard_resources(mdesc);
>
> if (mdesc->restart)
^ permalink raw reply [flat|nested] 253+ messages in thread
* [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-10-19 11:09 ` Russell King - ARM Linux
0 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 11:09 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Oct 11, 2017 at 04:22:17PM +0800, Abbott Liu wrote:
> diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
> index b2902a5..10cee6a 100644
> --- a/arch/arm/include/asm/pgalloc.h
> +++ b/arch/arm/include/asm/pgalloc.h
> @@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
> */
> #define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
> #define pmd_free(mm, pmd) do { } while (0)
> +#ifndef CONFIG_KASAN
> #define pud_populate(mm,pmd,pte) BUG()
> -
> +#else
> +#define pud_populate(mm,pmd,pte) do { } while (0)
> +#endif
Please explain this change - we don't have a "pud" as far as the rest of
the Linux MM layer is concerned, so why do we need it for kasan?
I suspect it comes from the way we wrap up the page tables - where ARM
does it one way (because it has to) vs the subsequently merged method
which is completely upside down to what ARMs doing, and therefore is
totally incompatible and impossible to fit in with our way.
> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> index f2e1af4..6e26714 100644
> --- a/arch/arm/include/asm/proc-fns.h
> +++ b/arch/arm/include/asm/proc-fns.h
> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #else
> #define cpu_get_pgd() \
> ({ \
> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
> pg &= ~0x3fff; \
> (pgd_t *)phys_to_virt(pg); \
> })
> +
> +#define cpu_set_ttbr(nr, val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +#define cpu_get_ttbr(nr) \
> + ({ \
> + unsigned long ttbr; \
> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
> + : "=r" (ttbr)); \
> + ttbr; \
> + })
> +
> +#define cpu_set_ttbr0(val) \
> + do { \
> + u64 ttbr = val; \
> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
> + : : "r" (ttbr)); \
> + } while (0)
> +
> +
> #endif
>
> #else /*!CONFIG_MMU */
> diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
> index 1d468b5..52c4858 100644
> --- a/arch/arm/include/asm/thread_info.h
> +++ b/arch/arm/include/asm/thread_info.h
> @@ -16,7 +16,11 @@
> #include <asm/fpstate.h>
> #include <asm/page.h>
>
> +#ifdef CONFIG_KASAN
> +#define THREAD_SIZE_ORDER 2
> +#else
> #define THREAD_SIZE_ORDER 1
> +#endif
> #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
> #define THREAD_START_SP (THREAD_SIZE - 8)
>
> diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
> index 8733012..c17f4a2 100644
> --- a/arch/arm/kernel/head-common.S
> +++ b/arch/arm/kernel/head-common.S
> @@ -101,7 +101,11 @@ __mmap_switched:
> str r2, [r6] @ Save atags pointer
> cmp r7, #0
> strne r0, [r7] @ Save control register values
> +#ifdef CONFIG_KASAN
> + b kasan_early_init
> +#else
> b start_kernel
> +#endif
> ENDPROC(__mmap_switched)
>
> .align 2
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index 8e9a3e4..985d9a3 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -62,6 +62,7 @@
> #include <asm/unwind.h>
> #include <asm/memblock.h>
> #include <asm/virt.h>
> +#include <asm/kasan.h>
>
> #include "atags.h"
>
> @@ -1108,6 +1109,7 @@ void __init setup_arch(char **cmdline_p)
> early_ioremap_reset();
>
> paging_init(mdesc);
> + kasan_init();
> request_standard_resources(mdesc);
>
> if (mdesc->restart)
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> index 950d19b..498c316 100644
> --- a/arch/arm/mm/Makefile
> +++ b/arch/arm/mm/Makefile
> @@ -106,4 +106,9 @@ obj-$(CONFIG_CACHE_L2X0) += cache-l2x0.o l2c-l2x0-resume.o
> obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
> obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
> obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
> +
> +KASAN_SANITIZE_kasan_init.o := n
> +obj-$(CONFIG_KASAN) += kasan_init.o
Why is this placed in the middle of the cache object listing?
> +
> +
> obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> new file mode 100644
> index 0000000..2bf0782
> --- /dev/null
> +++ b/arch/arm/mm/kasan_init.c
> @@ -0,0 +1,257 @@
> +#include <linux/bootmem.h>
> +#include <linux/kasan.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/start_kernel.h>
> +
> +#include <asm/cputype.h>
> +#include <asm/highmem.h>
> +#include <asm/mach/map.h>
> +#include <asm/memory.h>
> +#include <asm/page.h>
> +#include <asm/pgalloc.h>
> +#include <asm/pgtable.h>
> +#include <asm/procinfo.h>
> +#include <asm/proc-fns.h>
> +#include <asm/tlbflush.h>
> +#include <asm/cp15.h>
> +#include <linux/sched/task.h>
> +
> +#include "mm.h"
> +
> +static pgd_t tmp_page_table[PTRS_PER_PGD] __initdata __aligned(1ULL << 14);
> +
> +pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
> +
> +static __init void *kasan_alloc_block(size_t size, int node)
> +{
> + return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> + BOOTMEM_ALLOC_ACCESSIBLE, node);
> +}
> +
> +static void __init kasan_early_pmd_populate(unsigned long start, unsigned long end, pud_t *pud)
> +{
> + unsigned long addr;
> + unsigned long next;
> + pmd_t *pmd;
> +
> + pmd = pmd_offset(pud, start);
> + for (addr = start; addr < end;) {
> + pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> + next = pmd_addr_end(addr, end);
> + addr = next;
> + flush_pmd_entry(pmd);
> + pmd++;
> + }
> +}
> +
> +static void __init kasan_early_pud_populate(unsigned long start, unsigned long end, pgd_t *pgd)
> +{
> + unsigned long addr;
> + unsigned long next;
> + pud_t *pud;
> +
> + pud = pud_offset(pgd, start);
> + for (addr = start; addr < end;) {
> + next = pud_addr_end(addr, end);
> + kasan_early_pmd_populate(addr, next, pud);
> + addr = next;
> + pud++;
> + }
> +}
> +
> +void __init kasan_map_early_shadow(pgd_t *pgdp)
> +{
> + int i;
> + unsigned long start = KASAN_SHADOW_START;
> + unsigned long end = KASAN_SHADOW_END;
> + unsigned long addr;
> + unsigned long next;
> + pgd_t *pgd;
> +
> + for (i = 0; i < PTRS_PER_PTE; i++)
> + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> + &kasan_zero_pte[i], pfn_pte(
> + virt_to_pfn(kasan_zero_page),
> + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN)));
> +
> + pgd = pgd_offset_k(start);
> + for (addr = start; addr < end;) {
> + next = pgd_addr_end(addr, end);
> + kasan_early_pud_populate(addr, next, pgd);
> + addr = next;
> + pgd++;
> + }
> +}
> +
> +extern struct proc_info_list *lookup_processor_type(unsigned int);
> +
> +void __init kasan_early_init(void)
> +{
> + struct proc_info_list *list;
> +
> + /*
> + * locate processor in the list of supported processor
> + * types. The linker builds this table for us from the
> + * entries in arch/arm/mm/proc-*.S
> + */
> + list = lookup_processor_type(read_cpuid_id());
> + if (list) {
> +#ifdef MULTI_CPU
> + processor = *list->proc;
> +#endif
> + }
> +
> + BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END - (1UL << 29));
> +
> +
> + kasan_map_early_shadow(swapper_pg_dir);
> + start_kernel();
> +}
> +
> +static void __init clear_pgds(unsigned long start,
> + unsigned long end)
> +{
> + for (; start && start < end; start += PMD_SIZE)
> + pmd_clear(pmd_off_k(start));
> +}
> +
> +pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
> +{
> + pte_t *pte = pte_offset_kernel(pmd, addr);
> + if (pte_none(*pte)) {
> + pte_t entry;
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + entry = pfn_pte(virt_to_pfn(p), __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
> + set_pte_at(&init_mm, addr, pte, entry);
> + }
> + return pte;
> +}
> +
> +pmd_t * __meminit kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
> +{
> + pmd_t *pmd = pmd_offset(pud, addr);
> + if (pmd_none(*pmd)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + pmd_populate_kernel(&init_mm, pmd, p);
> + }
> + return pmd;
> +}
> +
> +pud_t * __meminit kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
> +{
> + pud_t *pud = pud_offset(pgd, addr);
> + if (pud_none(*pud)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + pr_err("populating pud addr %lx\n", addr);
> + pud_populate(&init_mm, pud, p);
> + }
> + return pud;
> +}
> +
> +pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
> +{
> + pgd_t *pgd = pgd_offset_k(addr);
> + if (pgd_none(*pgd)) {
> + void *p = kasan_alloc_block(PAGE_SIZE, node);
> + if (!p)
> + return NULL;
> + pgd_populate(&init_mm, pgd, p);
> + }
> + return pgd;
> +}
This all looks wrong - you are aware that on non-LPAE platforms, there
is only a _two_ level page table - the top level page table is 16K in
size, and each _individual_ lower level page table is actually 1024
bytes, but we do some special handling in the kernel to combine two
together. It looks to me that you allocate memory for each Linux-
abstracted page table level whether the hardware needs it or not.
Is there any reason why the pre-existing "create_mapping()" function
can't be used, and you've had to rewrite that code here?
> +
> +static int __init create_mapping(unsigned long start, unsigned long end, int node)
> +{
> + unsigned long addr = start;
> + pgd_t *pgd;
> + pud_t *pud;
> + pmd_t *pmd;
> + pte_t *pte;
A blank line would help between the auto variables and the code of the
function.
> + pr_info("populating shadow for %lx, %lx\n", start, end);
Blank line here too please.
> + for (; addr < end; addr += PAGE_SIZE) {
> + pgd = kasan_pgd_populate(addr, node);
> + if (!pgd)
> + return -ENOMEM;
> +
> + pud = kasan_pud_populate(pgd, addr, node);
> + if (!pud)
> + return -ENOMEM;
> +
> + pmd = kasan_pmd_populate(pud, addr, node);
> + if (!pmd)
> + return -ENOMEM;
> +
> + pte = kasan_pte_populate(pmd, addr, node);
> + if (!pte)
> + return -ENOMEM;
> + }
> + return 0;
> +}
> +
> +
> +void __init kasan_init(void)
> +{
> + struct memblock_region *reg;
> + u64 orig_ttbr0;
> +
> + orig_ttbr0 = cpu_get_ttbr(0);
> +
> +#ifdef CONFIG_ARM_LPAE
> + memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> + set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
> + cpu_set_ttbr0(__pa(tmp_page_table));
> +#else
> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> + cpu_set_ttbr0(__pa(tmp_page_table));
> +#endif
> + flush_cache_all();
> + local_flush_bp_all();
> + local_flush_tlb_all();
What are you trying to achieve with all this complexity? Some comments
might be useful, especially for those of us who don't know the internals
of kasan.
> +
> + clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> +
> + kasan_populate_zero_shadow(
> + kasan_mem_to_shadow((void *)KASAN_SHADOW_START),
> + kasan_mem_to_shadow((void *)KASAN_SHADOW_END));
> +
> + kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
> + kasan_mem_to_shadow((void *)-1UL) + 1);
> +
> + for_each_memblock(memory, reg) {
> + void *start = __va(reg->base);
> + void *end = __va(reg->base + reg->size);
Isn't this going to complain if the translation macro debugging is enabled?
> +
> + if (reg->base + reg->size > arm_lowmem_limit)
> + end = __va(arm_lowmem_limit);
> + if (start >= end)
> + break;
> +
> + create_mapping((unsigned long)kasan_mem_to_shadow(start),
> + (unsigned long)kasan_mem_to_shadow(end),
> + NUMA_NO_NODE);
> + }
> +
> + /* 1. Module global variables live in MODULES_VADDR ~ MODULES_END,
> + *    so their shadow needs a real mapping.
> + * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE and the shadow of
> + *    MODULES_VADDR ~ MODULES_END can share the same PMD_SIZE region,
> + *    so we can't use kasan_populate_zero_shadow here.
> + */
> + create_mapping((unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
> + (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE+PMD_SIZE)),
> + NUMA_NO_NODE);
> + cpu_set_ttbr0(orig_ttbr0);
> + flush_cache_all();
> + local_flush_bp_all();
> + local_flush_tlb_all();
> + memset(kasan_zero_page, 0, PAGE_SIZE);
> + pr_info("Kernel address sanitizer initialized\n");
> + init_task.kasan_depth = 0;
> +}
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 6f319fb..12749da 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -358,7 +358,7 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
> if (redzone_adjust > 0)
> *size += redzone_adjust;
>
> - *size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
> + *size = min((size_t)KMALLOC_MAX_SIZE, max(*size, cache->object_size +
> optimal_redzone(cache->object_size)));
>
> /*
> --
> 2.9.0
>
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-11 23:42 ` Dmitry Osipenko
(?)
@ 2017-10-19 12:01 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:01 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: Abbott Liu, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko,
glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
On Thu, Oct 12, 2017 at 02:42:49AM +0300, Dmitry Osipenko wrote:
> On 11.10.2017 11:22, Abbott Liu wrote:
> > +void __init kasan_map_early_shadow(pgd_t *pgdp)
> > +{
> > + int i;
> > + unsigned long start = KASAN_SHADOW_START;
> > + unsigned long end = KASAN_SHADOW_END;
> > + unsigned long addr;
> > + unsigned long next;
> > + pgd_t *pgd;
> > +
> > + for (i = 0; i < PTRS_PER_PTE; i++)
> > + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> > + &kasan_zero_pte[i], pfn_pte(
> > + virt_to_pfn(kasan_zero_page),
> > + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN)));
>
> Shouldn't all __pgprot's contain L_PTE_MT_WRITETHROUGH ?
One of the architecture restrictions is that the cache attributes of
all aliases should match (though there is a specific workaround that
permits dissimilar mappings, provided they aren't accessed without
certain intervening instructions.)
Why should it be L_PTE_MT_WRITETHROUGH, and not the same cache
attributes as the lowmem mapping?
* Re: [PATCH 02/11] replace memory function
2017-10-11 8:22 ` Abbott Liu
(?)
@ 2017-10-19 12:05 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:05 UTC (permalink / raw)
To: Abbott Liu
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, opendmb,
linux-kernel, kasan-dev, zengweilin, linux-mm, dylix.dailei,
glider, dvyukov, jiazhenghua, linux-arm-kernel, heshaoliang
On Wed, Oct 11, 2017 at 04:22:18PM +0800, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> Functions like memset/memmove/memcpy do a lot of memory accesses.
> If bad pointer passed to one of these function it is important
> to catch this. Compiler's instrumentation cannot do this since
> these functions are written in assembly.
>
> KASan replaces memory functions with manually instrumented variants.
> Original functions declared as weak symbols so strong definitions
> in mm/kasan/kasan.c could replace them. Original functions have aliases
> with '__' prefix in name, so we could call non-instrumented variant
> if needed.
KASAN in the decompressor makes no sense, so I think you need to
mark the decompressor compilation as such in this patch so it,
as a whole, sees no change.
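For illustration, the usual kbuild way to exclude a directory from instrumentation - presumably what would be wanted for the decompressor - is a one-line Makefile switch (shown against arch/arm/boot/compressed/Makefile as an assumption about where it would go):

```make
# arch/arm/boot/compressed/Makefile
# Keep KASAN out of the decompressor: no shadow memory exists this early.
KASAN_SANITIZE := n
```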
* Re: [PATCH 03/11] arm: Kconfig: enable KASan
2017-10-11 19:15 ` Florian Fainelli
(?)
@ 2017-10-19 12:34 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:34 UTC (permalink / raw)
To: Florian Fainelli
Cc: Abbott Liu, aryabinin, afzal.mohd.ma, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, opendmb,
linux-kernel, kasan-dev, zengweilin, linux-mm, dylix.dailei,
glider, dvyukov, jiazhenghua, linux-arm-kernel, heshaoliang
On Wed, Oct 11, 2017 at 12:15:44PM -0700, Florian Fainelli wrote:
> On 10/11/2017 01:22 AM, Abbott Liu wrote:
> > From: Andrey Ryabinin <a.ryabinin@samsung.com>
> >
> > This patch enable kernel address sanitizer for arm.
> >
> > Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
> > Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
>
> This needs to be the last patch in the series, otherwise you allow
> people between patch 3 and 11 to have varying degrees of experience with
> this patch series depending on their system type (LPAE or not, etc.)
As the series stands, if patches 1-3 are applied, and KASAN is enabled,
there are various constants that end up being undefined, and the kernel
build will fail. That is, of course, not acceptable.
KASAN must not be available until support for it is functionally
complete.
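A hedged sketch of the reordering being asked for: the architecture opt-in would appear only in the series' final patch, once everything it depends on is in place (the guard shown is illustrative, not the actual dependency list):

```kconfig
# arch/arm/Kconfig -- hunk belonging to the *last* patch of the series
config ARM
	...
	select HAVE_ARCH_KASAN if MMU   # hypothetical guard
```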
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-16 11:42 ` Liuwenliang (Lamb)
(?)
@ 2017-10-19 12:40 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:40 UTC (permalink / raw)
To: Liuwenliang (Lamb)
Cc: kbuild test robot, kbuild-all, aryabinin, afzal.mohd.ma,
f.fainelli, labbott, kirill.shutemov, mhocko, cdall,
marc.zyngier, catalin.marinas, akpm, mawilcox, tglx, thgarnie,
keescook, arnd, vladimir.murzin, tixy, ard.biesheuvel,
robin.murphy, mingo, grygorii.strashko, glider, dvyukov, opendmb,
linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Jiazhenghua,
Dailei, Zengweilin, Heshaoliang
On Mon, Oct 16, 2017 at 11:42:05AM +0000, Liuwenliang (Lamb) wrote:
> On 10/16/2017 07:03 PM, Abbott Liu wrote:
> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
> #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
> #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>
> Thanks for building test. This error can be solved by following code:
> --- a/arch/arm/kernel/entry-armv.S
> +++ b/arch/arm/kernel/entry-armv.S
> @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
> get_thread_info tsk
> ldr r0, [tsk, #TI_ADDR_LIMIT]
> #ifdef CONFIG_KASAN
> - movw r1, #:lower16:TASK_SIZE
> - movt r1, #:upper16:TASK_SIZE
> + ldr r1, =TASK_SIZE
> #else
> mov r1, #TASK_SIZE
> #endif
We can surely do better than this with macros and condition support -
we can build-time test in the assembler whether TASK_SIZE can fit in a
normal "mov", whether we can use the movw/movt instructions, or fall
back to ldr if necessary. I'd rather we avoided "ldr" here where
possible.
> @@ -446,7 +445,12 @@ ENDPROC(__fiq_abt)
> @ if it was interrupted in a critical region. Here we
> @ perform a quick test inline since it should be false
> @ 99.9999% of the time. The rest is done out of line.
> +#ifdef CONFIG_KASAN
> + ldr r0, =TASK_SIZE
> + cmp r4, r0
> +#else
> cmp r4, #TASK_SIZE
Same sort of thing goes for here - we can select the instruction at
build time using the assembler's macros and condition support.
We know that TASK_SIZE is going to be one of a limited set of values.
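One possible shape for that build-time selection (a sketch, not the kernel's actual macro; the even-rotation immediate test is simplified to two easy cases, and a real implementation would also need a Thumb-2 path):

```asm
/* Sketch: load an assemble-time constant into a register using the
 * cheapest available instruction, decided by the assembler. */
.macro mov_const rd, const
.if \const < 256			@ fits an unrotated 8-bit immediate
	mov	\rd, #\const
.elseif \const < 0x10000		@ movw loads any 16-bit value (ARMv7)
	movw	\rd, #\const
.else					@ fall back to a movw/movt pair
	movw	\rd, #:lower16:\const
	movt	\rd, #:upper16:\const
.endif
.endm
```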
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-16 12:14 ` Ard Biesheuvel
(?)
@ 2017-10-19 12:41 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:41 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Liuwenliang (Lamb),
tixy, mhocko, grygorii.strashko, catalin.marinas, linux-mm,
glider, afzal.mohd.ma, mingo, cdall, f.fainelli,
kbuild test robot, mawilcox, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd,
marc.zyngier, Zengweilin, opendmb, Heshaoliang, tglx, dvyukov,
linux-kernel, kbuild-all, Jiazhenghua, akpm, robin.murphy,
thgarnie, kirill.shutemov
On Mon, Oct 16, 2017 at 01:14:54PM +0100, Ard Biesheuvel wrote:
> On 16 October 2017 at 12:42, Liuwenliang (Lamb) <liuwenliang@huawei.com> wrote:
> > On 10/16/2017 07:03 PM, Abbott Liu wrote:
> >>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
> > #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
> >>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
> > #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
> >
> > Thanks for building test. This error can be solved by following code:
> > --- a/arch/arm/kernel/entry-armv.S
> > +++ b/arch/arm/kernel/entry-armv.S
> > @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
> > get_thread_info tsk
> > ldr r0, [tsk, #TI_ADDR_LIMIT]
> > #ifdef CONFIG_KASAN
> > - movw r1, #:lower16:TASK_SIZE
> > - movt r1, #:upper16:TASK_SIZE
> > + ldr r1, =TASK_SIZE
> > #else
> > mov r1, #TASK_SIZE
> > #endif
>
> This is unnecessary:
>
> ldr r1, =TASK_SIZE
>
> will be converted to a mov instruction by the assembler if the value
> of TASK_SIZE fits its 12-bit immediate field.
It's an 8-bit immediate field for ARM.
What it won't do is expand it to a pair of movw/movt instructions if it
doesn't fit.
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
@ 2017-10-19 12:41 ` Russell King - ARM Linux
0 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:41 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Liuwenliang (Lamb),
tixy, mhocko, grygorii.strashko, catalin.marinas, linux-mm,
glider, afzal.mohd.ma, mingo, cdall, f.fainelli,
kbuild test robot, mawilcox, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd,
marc.zyngier, Zengweilin, opendmb, Heshaoliang, tglx, dvyukov,
linux-kernel, kbuild-all, Jiazhenghua, akpm, robin.murphy,
thgarnie, kirill.shutemov
On Mon, Oct 16, 2017 at 01:14:54PM +0100, Ard Biesheuvel wrote:
> On 16 October 2017 at 12:42, Liuwenliang (Lamb) <liuwenliang@huawei.com> wrote:
> > On 10/16/2017 07:03 PM, Abbott Liu wrote:
> >>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
> > #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
> >>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
> > #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
> >
> > Thanks for building test. This error can be solved by following code:
> > --- a/arch/arm/kernel/entry-armv.S
> > +++ b/arch/arm/kernel/entry-armv.S
> > @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
> > get_thread_info tsk
> > ldr r0, [tsk, #TI_ADDR_LIMIT]
> > #ifdef CONFIG_KASAN
> > - movw r1, #:lower16:TASK_SIZE
> > - movt r1, #:upper16:TASK_SIZE
> > + ldr r1, =TASK_SIZE
> > #else
> > mov r1, #TASK_SIZE
> > #endif
>
> This is unnecessary:
>
> ldr r1, =TASK_SIZE
>
> will be converted to a mov instruction by the assembler if the value
> of TASK_SIZE fits its 12-bit immediate field.
It's an 8-bit immediate field for ARM.
What it won't do is expand it to a pair of movw/movt instructions if it
doesn't fit.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 253+ messages in thread
* [PATCH 04/11] Define the virtual space of KASan's shadow region
@ 2017-10-19 12:41 ` Russell King - ARM Linux
0 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:41 UTC (permalink / raw)
To: linux-arm-kernel
On Mon, Oct 16, 2017 at 01:14:54PM +0100, Ard Biesheuvel wrote:
> On 16 October 2017 at 12:42, Liuwenliang (Lamb) <liuwenliang@huawei.com> wrote:
> > On 10/16/2017 07:03 PM, Abbott Liu wrote:
> >>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
> > #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
> >>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
> > #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
> >
> > Thanks for building test. This error can be solved by following code:
> > --- a/arch/arm/kernel/entry-armv.S
> > +++ b/arch/arm/kernel/entry-armv.S
> > @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
> > get_thread_info tsk
> > ldr r0, [tsk, #TI_ADDR_LIMIT]
> > #ifdef CONFIG_KASAN
> > - movw r1, #:lower16:TASK_SIZE
> > - movt r1, #:upper16:TASK_SIZE
> > + ldr r1, =TASK_SIZE
> > #else
> > mov r1, #TASK_SIZE
> > #endif
>
> This is unnecessary:
>
> ldr r1, =TASK_SIZE
>
> will be converted to a mov instruction by the assembler if the value
> of TASK_SIZE fits its 12-bit immediate field.
It's an 8-bit immediate field for ARM.
What it won't do is expand it to a pair of movw/movt instructions if it
doesn't fit.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-17 11:27 ` Liuwenliang (Lamb)
@ 2017-10-19 12:43 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:43 UTC (permalink / raw)
To: Liuwenliang (Lamb)
Cc: Ard Biesheuvel, kbuild test robot, kbuild-all, aryabinin,
afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov, mhocko,
cdall, marc.zyngier, catalin.marinas, akpm, mawilcox, tglx,
thgarnie, keescook, arnd, vladimir.murzin, tixy, robin.murphy,
mingo, grygorii.strashko, glider, dvyukov, opendmb,
linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Jiazhenghua,
Dailei, Zengweilin, Heshaoliang
On Tue, Oct 17, 2017 at 11:27:19AM +0000, Liuwenliang (Lamb) wrote:
> ---c0a3b198: b6e00000 .word 0xb6e00000 //TASK_SIZE:0xb6e00000
It's probably going to be better all round to round TASK_SIZE down
to something that fits in an 8-bit rotated constant anyway (like
we already guarantee) which would mean this patch is not necessary.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
* Re: [PATCH 05/11] Disable kasan's instrumentation
2017-10-11 8:22 ` Abbott Liu
@ 2017-10-19 12:47 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:47 UTC (permalink / raw)
To: Abbott Liu
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, glider, dvyukov, opendmb,
linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, jiazhenghua,
dylix.dailei, zengweilin, heshaoliang
On Wed, Oct 11, 2017 at 04:22:21PM +0800, Abbott Liu wrote:
> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>
> To avoid some build and runtime errors, compiler's instrumentation must
> be disabled for code not linked with kernel image.
How does that explain the change to unwind.c ?
Does this also disable the string macro changes?
In any case, this should certainly precede patch 4, and very probably
patch 2.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
* Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-10-12 11:27 ` Liuwenliang (Lamb)
@ 2017-10-19 12:51 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:51 UTC (permalink / raw)
To: Liuwenliang (Lamb)
Cc: Dmitry Vyukov, Andrew Morton, Andrey Ryabinin, afzal.mohd.ma,
f.fainelli, Laura Abbott, Kirill A. Shutemov, Michal Hocko,
cdall, marc.zyngier, Catalin Marinas, Matthew Wilcox,
Thomas Gleixner, Thomas Garnier, Kees Cook, Arnd Bergmann,
Vladimir Murzin, tixy, Ard Biesheuvel, robin.murphy, Ingo Molnar,
grygorii.strashko, Alexander Potapenko, opendmb,
linux-arm-kernel, LKML, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On Thu, Oct 12, 2017 at 11:27:40AM +0000, Liuwenliang (Lamb) wrote:
> >> - I don't understand why this is necessary. memory_is_poisoned_16()
> >> already handles unaligned addresses?
> >>
> >> - If it's needed on ARM then presumably it will be needed on other
> >> architectures, so CONFIG_ARM is insufficiently general.
> >>
> >> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
> >> it would be better to generalize/fix it in some fashion rather than
> >> creating a new variant of the function.
>
>
> >Yes, I think it will be better to fix the current function rather than
> >have 2 slightly different copies with ifdef's.
> >Will something along these lines work for arm? 16-byte accesses are
> >not too common, so it should not be a performance problem. And
> >probably modern compilers can turn 2 1-byte checks into a 2-byte check
> >where safe (x86).
>
> >static __always_inline bool memory_is_poisoned_16(unsigned long addr)
> >{
> > u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
> >
> > if (shadow_addr[0] || shadow_addr[1])
> > return true;
> > /* Unaligned 16-bytes access maps into 3 shadow bytes. */
> > if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
> > return memory_is_poisoned_1(addr + 15);
> > return false;
> >}
>
> Thanks for Andrew Morton and Dmitry Vyukov's review.
> If the parameter addr=0xc0000008, now in function:
> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
> {
> --- //shadow_addr = (u16 *)(KASAN_OFFSET+0x18000001(=0xc0000008>>3)) is not
> --- // aligned to 2 bytes.
> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>
> /* Unaligned 16-bytes access maps into 3 shadow bytes. */
> if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
> return *shadow_addr || memory_is_poisoned_1(addr + 15);
> ---- //here an error occurs on arm, especially while the kernel is still booting,
> ---- //because the unaligned access raises a Data Abort exception whose handler
> ---- //has not yet been initialized at that early stage.
> return *shadow_addr;
> }
>
> I also think it is better to fix this problem.
What about using get_unaligned() ?
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
* Re: [PATCH 09/11] Don't need to map the shadow of KASan's shadow memory
2017-10-11 8:22 ` Abbott Liu
@ 2017-10-19 12:55 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-10-19 12:55 UTC (permalink / raw)
To: Abbott Liu
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, jiazhenghua, dylix.dailei, zengweilin, heshaoliang
On Wed, Oct 11, 2017 at 04:22:25PM +0800, Abbott Liu wrote:
> Because KASan's shadow memory doesn't need to be tracked, remove the
> mapping code from kasan_init.
Is there a reason why this isn't part of the earlier patch that
introduced the code below?
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
2017-10-19 12:43 ` Russell King - ARM Linux
@ 2017-10-22 12:12 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-22 12:12 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: Ard Biesheuvel, kbuild test robot, kbuild-all, aryabinin,
afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov, mhocko,
cdall, marc.zyngier, catalin.marinas, akpm, mawilcox, tglx,
thgarnie, keescook, arnd, vladimir.murzin, tixy, robin.murphy,
mingo, grygorii.strashko, glider, dvyukov, opendmb,
linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Jiazhenghua,
Dailei, Zengweilin, Heshaoliang
On Thu, Oct 19, 2017 at 12:41 PM +0000, Russell King - ARM Linux wrote:
>On Mon, Oct 16, 2017 at 11:42:05AM +0000, Liuwenliang (Lamb) wrote:
>> On 10/16/2017 07:03 PM, Abbott Liu wrote:
> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
>> #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
>> #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>>
>> Thanks for building test. This error can be solved by following code:
>> --- a/arch/arm/kernel/entry-armv.S
>> +++ b/arch/arm/kernel/entry-armv.S
>> @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
>> get_thread_info tsk
>> ldr r0, [tsk, #TI_ADDR_LIMIT]
>> #ifdef CONFIG_KASAN
>> - movw r1, #:lower16:TASK_SIZE
>> - movt r1, #:upper16:TASK_SIZE
>> + ldr r1, =TASK_SIZE
>> #else
>> mov r1, #TASK_SIZE
>> #endif
>
>We can surely do better than this with macros and condition support -
>we can build-time test in the assembler whether TASK_SIZE can fit in a
>normal "mov", whether we can use the movw/movt instructions, or fall
>back to ldr if necessary. I'd rather we avoided "ldr" here where
>possible.
Thanks for your review.
I don't see why we need to avoid "ldr" here. "ldr" may cost some performance,
but the impact is very limited, and a KASan kernel is already slower than a
normal one. We don't ship KASan kernels in products anyway; we only use them
in the laboratory (not in commercial products) to debug memory-corruption
problems, precisely because of that overhead.
So I think the performance impact of using "ldr" here is acceptable.
On Thu, Oct 19, 2017 at 12:43 PM +0000, Russell King - ARM Linux wrote:
>On Tue, Oct 17, 2017 at 11:27:19AM +0000, Liuwenliang (Lamb) wrote:
>> ---c0a3b198: b6e00000 .word 0xb6e00000 //TASK_SIZE:0xb6e00000
>
>It's probably going to be better all round to round TASK_SIZE down
>to something that fits in an 8-bit rotated constant anyway (like
>we already guarantee) which would mean this patch is not necessary.
Thanks for your review.
If we enable CONFIG_KASAN, we need to steal 130 MB (0xb6e00000 ~ 0xbf000000)
from user space. If we instead steal the 130 MB at (0xb6000000 ~ 0xbe200000),
14 MB of user space is wasted. I think it is better to use "ldr", whose impact
on the system is very limited, than to waste 14 MB of user space by changing
TASK_SIZE from 0xb6e00000 to 0xb6000000.
If TASK_SIZE is an 8-bit rotated constant, the assembler converts "ldr rx, =TASK_SIZE" into "mov rx, #TASK_SIZE";
if it is not, the assembler emits "ldr rx, [pc, #offset]" and places the value in a literal pool.
So we can use ldr to replace mov. Here is the code, which I have tested:
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index f9efea3..00a1833 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -187,12 +187,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
-#ifdef CONFIG_KASAN
- movw r1, #:lower16:TASK_SIZE
- movt r1, #:upper16:TASK_SIZE
-#else
- mov r1, #TASK_SIZE
-#endif
+ ldr r1, =TASK_SIZE
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]
@@ -446,7 +441,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
blhs kuser_cmpxchg64_fixup
#endif
#endif
* Re: [PATCH 04/11] Define the virtual space of KASan's shadow region
@ 2017-10-22 12:12 ` Liuwenliang (Lamb)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-22 12:12 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: Ard Biesheuvel, kbuild test robot, kbuild-all, aryabinin,
afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov, mhocko,
cdall, marc.zyngier, catalin.marinas, akpm, mawilcox, tglx,
thgarnie, keescook, arnd, vladimir.murzin, tixy, robin.murphy,
mingo, grygorii.strashko, glider, dvyukov, opendmb,
linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Jiazhenghua,
Dailei, Zengweilin, Heshaoliang
On Tue, Oct 19, 2017 at 20:41 17PM +0000, Russell King - ARM Linux:
>On Mon, Oct 16, 2017 at 11:42:05AM +0000, Liuwenliang (Lamb) wrote:
>> On 10/16/2017 07:03 PM, Abbott Liu wrote:
> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
>> #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
>> #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>>
>> Thanks for building test. This error can be solved by following code:
>> --- a/arch/arm/kernel/entry-armv.S
>> +++ b/arch/arm/kernel/entry-armv.S
>> @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
>> get_thread_info tsk
>> ldr r0, [tsk, #TI_ADDR_LIMIT]
>> #ifdef CONFIG_KASAN
>> - movw r1, #:lower16:TASK_SIZE
>> - movt r1, #:upper16:TASK_SIZE
>> + ldr r1, =TASK_SIZE
>> #else
>> mov r1, #TASK_SIZE
>> #endif
>
>We can surely do better than this with macros and condition support -
>we can build-time test in the assembler whether TASK_SIZE can fit in a
>normal "mov", whether we can use the movw/movt instructions, or fall
>back to ldr if necessary. I'd rather we avoided "ldr" here where
>possible.
Thanks for your review.
I don't know why we need to avoided "ldr". The "ldr" maybe cause the
performance fall down, but it will be very limited, and as we know the
performance of kasan version is lower than the normal version. And usually
we don't use kasan version in our product, we only use kasan version when
we want to debug some memory corruption problem in laboratory(not not in
commercial product) because the performance of kasan version is lower than
normal version.
So I think we can accept the influence of the performance by using "ldr" here.
On Tue, Oct 19, 2017 at 20:44 17PM +0000, Russell King - ARM Linux:
>On Tue, Oct 17, 2017 at 11:27:19AM +0000, Liuwenliang (Lamb) wrote:
>> ---c0a3b198: b6e00000 .word 0xb6e00000 //TASK_SIZE:0xb6e00000
>
>It's probably going to be better all round to round TASK_SIZE down
>to something that fits in an 8-bit rotated constant anyway (like
>we already guarantee) which would mean this patch is not necessary.
Thanks for you review.
If we enable CONFIG_KASAN, we need steal 130MByte(0xb6e00000 ~ 0xbf000000) from user space.
If we change to steal 130MByte(0xb6000000 ~ 0xbe200000) , the 14MB of user space is going to be
wasted. I think it is better to to use "ldr" whose influence to the system are very limited than to waste
14MB user space by chaned TASK_SIZE from 0xb6e00000 from 0xb6000000.
If TASK_SIZE is an 8-bit rotated constant, the compiler can convert "ldr rx,=TASK_SIZE" into mov rx, #TASK_SIZE
If TASK_SIZE is not an 8-bit rotated constant, the compiler can convert "ldr rx,=TASK_SIZE" into ldr rx, [pc,xxx],
So we can use ldr to replace mov. Here is the code which is tested by me:
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index f9efea3..00a1833 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -187,12 +187,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
-#ifdef CONFIG_KASAN
- movw r1, #:lower16:TASK_SIZE
- movt r1, #:upper16:TASK_SIZE
-#else
- mov r1, #TASK_SIZE
-#endif
+ ldr r1, =TASK_SIZE
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]
@@ -446,7 +441,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
blhs kuser_cmpxchg64_fixup
#endif
#endif
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 04/11] Define the virtual space of KASan's shadow region
@ 2017-10-22 12:12 ` Liuwenliang (Lamb)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-22 12:12 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Oct 19, 2017 at 20:41 17PM +0000, Russell King - ARM Linux:
>On Mon, Oct 16, 2017 at 11:42:05AM +0000, Liuwenliang (Lamb) wrote:
>> On 10/16/2017 07:03 PM, Abbott Liu wrote:
> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movw r1,
>> #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support `movt r1,
>> #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))' in ARM mode
>>
>> Thanks for building test. This error can be solved by following code:
>> --- a/arch/arm/kernel/entry-armv.S
>> +++ b/arch/arm/kernel/entry-armv.S
>> @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
>> get_thread_info tsk
>> ldr r0, [tsk, #TI_ADDR_LIMIT]
>> #ifdef CONFIG_KASAN
>> - movw r1, #:lower16:TASK_SIZE
>> - movt r1, #:upper16:TASK_SIZE
>> + ldr r1, =TASK_SIZE
>> #else
>> mov r1, #TASK_SIZE
>> #endif
>
>We can surely do better than this with macros and condition support -
>we can build-time test in the assembler whether TASK_SIZE can fit in a
>normal "mov", whether we can use the movw/movt instructions, or fall
>back to ldr if necessary. I'd rather we avoided "ldr" here where
>possible.
Thanks for your review.
I don't know why we need to avoided "ldr". The "ldr" maybe cause the
performance fall down, but it will be very limited, and as we know the
performance of kasan version is lower than the normal version. And usually
we don't use kasan version in our product, we only use kasan version when
we want to debug some memory corruption problem in laboratory(not not in
commercial product) because the performance of kasan version is lower than
normal version.
So I think we can accept the influence of the performance by using "ldr" here.
On Tue, Oct 19, 2017 at 20:44 17PM +0000, Russell King - ARM Linux:
>On Tue, Oct 17, 2017 at 11:27:19AM +0000, Liuwenliang (Lamb) wrote:
>> ---c0a3b198: b6e00000 .word 0xb6e00000 //TASK_SIZE:0xb6e00000
>
>It's probably going to be better all round to round TASK_SIZE down
>to something that fits in an 8-bit rotated constant anyway (like
>we already guarantee) which would mean this patch is not necessary.
Thanks for you review.
If we enable CONFIG_KASAN, we need steal 130MByte(0xb6e00000 ~ 0xbf000000) from user space.
If we change to steal 130MByte(0xb6000000 ~ 0xbe200000) , the 14MB of user space is going to be
wasted. I think it is better to to use "ldr" whose influence to the system are very limited than to waste
14MB user space by chaned TASK_SIZE from 0xb6e00000 from 0xb6000000.
If TASK_SIZE is an 8-bit rotated constant, the compiler can convert "ldr rx,=TASK_SIZE" into mov rx, #TASK_SIZE
If TASK_SIZE is not an 8-bit rotated constant, the compiler can convert "ldr rx,=TASK_SIZE" into ldr rx, [pc,xxx],
So we can use ldr to replace mov. Here is the code which is tested by me:
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index f9efea3..00a1833 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -187,12 +187,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
-#ifdef CONFIG_KASAN
- movw r1, #:lower16:TASK_SIZE
- movt r1, #:upper16:TASK_SIZE
-#else
- mov r1, #TASK_SIZE
-#endif
+ ldr r1, =TASK_SIZE
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]
@@ -446,7 +441,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
blhs kuser_cmpxchg64_fixup
#endif
#endif
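The encodability rule being discussed here can be sketched in plain C: an ARM data-processing immediate is an 8-bit value rotated right by an even amount, and this is exactly the test the assembler applies when deciding whether `ldr rx, =const` can be turned into a plain `mov`. This is an illustrative helper for this thread, not kernel code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Return true if v fits an ARM "8-bit rotated" immediate, i.e. an
 * 8-bit value rotated right by an even amount (0, 2, ..., 30).
 * The assembler applies the same test to "ldr rx, =const": if the
 * constant is encodable it emits "mov rx, #const", otherwise it
 * emits a PC-relative literal-pool load "ldr rx, [pc, #offset]". */
static bool arm_valid_rotated_imm(uint32_t v)
{
	int rot;

	for (rot = 0; rot < 32; rot += 2) {
		/* Undo a rotate-right-by-rot by rotating left by rot. */
		uint32_t undone = (v << rot) | (v >> ((32 - rot) & 31));

		if (undone <= 0xff)
			return true;
	}
	return false;
}
```

With this test, 0xbf000000 and 0xb6000000 are encodable (0xbf and 0xb6 rotated into place), while 0xb6e00000 is not because its significant bits span 11 positions, which is why a movw/movt pair or an ldr fallback is needed when TASK_SIZE is 0xb6e00000.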
^ permalink raw reply related [flat|nested] 253+ messages in thread
* Re: [PATCH 03/11] arm: Kconfig: enable KASan
2017-10-19 12:34 ` Russell King - ARM Linux
(?)
@ 2017-10-22 12:27 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-22 12:27 UTC (permalink / raw)
To: Russell King - ARM Linux, Florian Fainelli
Cc: aryabinin, afzal.mohd.ma, labbott, kirill.shutemov, mhocko,
cdall, marc.zyngier, catalin.marinas, akpm, mawilcox, tglx,
thgarnie, keescook, arnd, vladimir.murzin, tixy, ard.biesheuvel,
robin.murphy, mingo, grygorii.strashko, opendmb, linux-kernel,
kasan-dev, Zengweilin, linux-mm, Dailei, glider, dvyukov,
Jiazhenghua, linux-arm-kernel, Heshaoliang
On 10/22/2017 01:22 AM, Russell King - ARM Linux wrote:
>On Wed, Oct 11, 2017 at 12:15:44PM -0700, Florian Fainelli wrote:
>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>> > From: Andrey Ryabinin <a.ryabinin@samsung.com>
>> >
>> > This patch enable kernel address sanitizer for arm.
>> >
>> > Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
>> > Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
>>
>> This needs to be the last patch in the series, otherwise you allow
>> people between patch 3 and 11 to have varying degrees of experience with
>> this patch series depending on their system type (LPAE or not, etc.)
>
>As the series stands, if patches 1-3 are applied, and KASAN is enabled,
>there are various constants that end up being undefined, and the kernel
>build will fail. That is, of course, not acceptable.
>
>KASAN must not be available until support for it is functionally
>complete.
Thanks for Florian Fainelli and Russell King's review.
I'm going to change it in the new version.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 09/11] Don't need to map the shadow of KASan's shadow memory
2017-10-19 12:55 ` Russell King - ARM Linux
(?)
@ 2017-10-22 12:31 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-22 12:31 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 10/19/2017 20:56, Russell King - ARM Linux wrote:
>On Wed, Oct 11, 2017 at 04:22:25PM +0800, Abbott Liu wrote:
>> Because the KASan's shadow memory don't need to track,so remove the
>> mapping code in kasan_init.
>
>Is there a reason why this isn't part of the earlier patch that introduced the code below?
Thanks for your review.
I'm going to change this in the new version.
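The arithmetic behind this patch can be illustrated with KASan's shadow mapping. A minimal sketch, assuming the usual scale shift of 3 and a hypothetical KASAN_SHADOW_OFFSET of 0x9f000000 (chosen here only because it is consistent with the 0xb6e00000 ~ 0xbf000000 shadow region discussed earlier in this thread, not taken from the patches):

```c
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3       /* 1 shadow byte per 8 bytes */
#define KASAN_SHADOW_OFFSET 0x9f000000u  /* hypothetical, for illustration */

/* KASan maps every address to a shadow byte at (addr >> 3) + offset. */
static uint32_t kasan_mem_to_shadow(uint32_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}
```

The shadow region itself is only ever accessed directly by the KASan runtime, never through instrumented loads and stores, so the shadow addresses computed for the shadow region need no real backing pages — which is why the mapping can be dropped from kasan_init().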
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 02/11] replace memory function
2017-10-19 12:05 ` Russell King - ARM Linux
@ 2017-10-22 12:42 ` Liuwenliang (Lamb)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Lamb) @ 2017-10-22 12:42 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, opendmb,
linux-kernel, kasan-dev, Zengweilin, linux-mm, Dailei, glider,
dvyukov, Jiazhenghua, linux-arm-kernel, Heshaoliang
On 10/19/2017 20:06, Russell King - ARM Linux wrote:
>On Wed, Oct 11, 2017 at 04:22:18PM +0800, Abbott Liu wrote:
>> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>>
>> Functions like memset/memmove/memcpy do a lot of memory accesses.
>> If bad pointer passed to one of these function it is important
>> to catch this. Compiler's instrumentation cannot do this since
>> these functions are written in assembly.
>>
>> KASan replaces memory functions with manually instrumented variants.
>> Original functions declared as weak symbols so strong definitions
>> in mm/kasan/kasan.c could replace them. Original functions have aliases
>> with '__' prefix in name, so we could call non-instrumented variant
>> if needed.
>
>KASAN in the decompressor makes no sense, so I think you need to
>mark the decompressor compilation as such in this patch so it,
>as a whole, sees no change.
Thanks for your review. I am already aware of some errors in arm/boot/compressed/.
I'm going to fix this patch in the new version.
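The weak/strong linkage trick from the cover letter — the very mechanism that must not leak into the decompressor, where no shadow memory exists yet — can be sketched in plain C. Everything here (my_memcpy, __my_memcpy) is a hypothetical stand-in for the real memcpy/__memcpy pair, and the attributes are GCC-specific:

```c
#include <stddef.h>
#include <string.h>

/* Non-instrumented variant, always reachable under a '__' prefix. */
void *__my_memcpy(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);
}

/* Weak default that just forwards.  A strong definition of
 * my_memcpy() elsewhere (the KASan-instrumented one, which would
 * check shadow memory before copying) overrides this at link time,
 * while code that must stay uninstrumented calls __my_memcpy(). */
__attribute__((weak)) void *my_memcpy(void *dst, const void *src, size_t n)
{
	return __my_memcpy(dst, src, n);
}
```

Files built without instrumentation then get the prefixed variant substituted via #define, as the cover letter describes, so even their mem* calls bypass the checks.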
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-12 7:58 ` Marc Zyngier
(?)
@ 2017-11-09 7:46 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-09 7:46 UTC (permalink / raw)
To: Marc Zyngier, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 12/10/17 15:59, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
> On 11/10/17 09:22, Abbott Liu wrote:
>> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
>> index f2e1af4..6e26714 100644
>> --- a/arch/arm/include/asm/proc-fns.h
>> +++ b/arch/arm/include/asm/proc-fns.h
>> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
>> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
>> (pgd_t *)phys_to_virt(pg); \
>> })
>> +
>> +#define cpu_set_ttbr0(val) \
>> + do { \
>> + u64 ttbr = val; \
>> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
>> + : : "r" (ttbr)); \
>> + } while (0)
>> +
>> +
>> #else
>> #define cpu_get_pgd() \
>> ({ \
>> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
>> pg &= ~0x3fff; \
>> (pgd_t *)phys_to_virt(pg); \
>> })
>> +
>> +#define cpu_set_ttbr(nr, val) \
>> + do { \
>> + u64 ttbr = val; \
>> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
>> + : : "r" (ttbr)); \
>> + } while (0)
>> +
>> +#define cpu_get_ttbr(nr) \
>> + ({ \
>> + unsigned long ttbr; \
>> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
>> + : "=r" (ttbr)); \
>> + ttbr; \
>> + })
>> +
>> +#define cpu_set_ttbr0(val) \
>> + do { \
>> + u64 ttbr = val; \
>> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
>> + : : "r" (ttbr)); \
>> + } while (0)
>> +
>> +
>
>You could instead lift and extend the definitions provided in kvm_hyp.h,
>and use the read_sysreg/write_sysreg helpers defined in cp15.h.
Thanks for your review.
I extended the definitions of TTBR0/TTBR1/PAR in kvm_hyp.h for the case where
CONFIG_ARM_LPAE is not defined.
Because the Cortex-A9 doesn't support virtualization, I use CONFIG_ARM_LPAE to
exclude the functions and macros that are only used for virtualization.
Here is the code, which I tested on vexpress_a15 and vexpress_a9:
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..2592608 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -19,12 +19,14 @@
#define __ARM_KVM_HYP_H__
#include <linux/compiler.h>
-#include <linux/kvm_host.h>
#include <asm/cp15.h>
+
+#ifdef CONFIG_ARM_LPAE
+#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>
#include <asm/vfp.h>
-
#define __hyp_text __section(.hyp.text) notrace
+#endif
#define __ACCESS_VFP(CRn) \
"mrc", "mcr", __stringify(p10, 7, %0, CRn, cr0, 0), u32
@@ -37,12 +39,18 @@
__val; \
})
+#ifdef CONFIG_ARM_LPAE
#define TTBR0 __ACCESS_CP15_64(0, c2)
#define TTBR1 __ACCESS_CP15_64(1, c2)
#define VTTBR __ACCESS_CP15_64(6, c2)
#define PAR __ACCESS_CP15_64(0, c7)
#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
#define CNTVOFF __ACCESS_CP15_64(4, c14)
+#else
+#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR __ACCESS_CP15(c7, 0, c4, 0)
+#endif
#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
@@ -98,6 +106,7 @@
#define cntvoff_el2 CNTVOFF
#define cnthctl_el2 CNTHCTL
+#ifdef CONFIG_ARM_LPAE
void __timer_save_state(struct kvm_vcpu *vcpu);
void __timer_restore_state(struct kvm_vcpu *vcpu);
@@ -123,5 +132,6 @@ void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
asmlinkage int __guest_enter(struct kvm_vcpu *vcpu,
struct kvm_cpu_context *host);
asmlinkage int __hyp_do_panic(const char *, int, u32);
+#endif
#endif /* __ARM_KVM_HYP_H__ */
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 049ee0a..359a782 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -15,6 +15,7 @@
#include <asm/proc-fns.h>
#include <asm/tlbflush.h>
#include <asm/cp15.h>
+#include <asm/kvm_hyp.h>
#include <linux/sched/task.h>
#include "mm.h"
@@ -203,16 +204,16 @@ void __init kasan_init(void)
u64 orig_ttbr0;
int i;
- orig_ttbr0 = cpu_get_ttbr(0);
+ orig_ttbr0 = read_sysreg(TTBR0);
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ write_sysreg(__pa(tmp_page_table), TTBR0);
#else
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ write_sysreg(__pa(tmp_page_table), TTBR0);
#endif
flush_cache_all();
local_flush_bp_all();
@@ -257,7 +258,7 @@ void __init kasan_init(void)
/*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_set_ttbr0(orig_ttbr0);
+ write_sysreg(orig_ttbr0, TTBR0);
flush_cache_all();
local_flush_bp_all();
local_flush_tlb_all();
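The read_sysreg()/write_sysreg() approach suggested here rests on stringify plumbing: the coprocessor coordinates in a definition like `__ACCESS_CP15(c2, 0, c0, 0)` are pasted, at preprocessing time, into the text of an mrc/mcr instruction. A simplified, host-testable sketch of that mechanism (the real macros in arch/arm/include/asm/cp15.h also carry the mcr spelling for writes and the access width):

```c
#include <string.h>

#define __stringify_1(x...)	#x
#define __stringify(x...)	__stringify_1(x)

/* Build the operand text of a 32-bit cp15 read from (CRn, Op1, CRm, Op2).
 * The kernel's accessor helpers concatenate such strings into inline asm. */
#define ACCESS_CP15_READ(CRn, Op1, CRm, Op2) \
	"mrc p15, " __stringify(Op1) ", %0, " __stringify(CRn) ", " \
	__stringify(CRm) ", " __stringify(Op2)

/* The non-LPAE TTBR0 coordinates from the patch above: */
#define TTBR0_READ ACCESS_CP15_READ(c2, 0, c0, 0)
```

This yields exactly the `mrc p15, 0, %0, c2, c0, 0` text used by the cpu_get_ttbr() macro quoted earlier, which is why the dedicated ttbr accessors can be replaced by the generic helpers.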
^ permalink raw reply related [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-09 7:46 ` Liuwenliang (Abbott Liu)
(?)
@ 2017-11-09 10:10 ` Marc Zyngier
-1 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-09 10:10 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu),
linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
> On 12/10/17 15:59, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> On 11/10/17 09:22, Abbott Liu wrote:
>>> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
>>> index f2e1af4..6e26714 100644
>>> --- a/arch/arm/include/asm/proc-fns.h
>>> +++ b/arch/arm/include/asm/proc-fns.h
>>> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
>>> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
>>> (pgd_t *)phys_to_virt(pg); \
>>> })
>>> +
>>> +#define cpu_set_ttbr0(val) \
>>> + do { \
>>> + u64 ttbr = val; \
>>> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
>>> + : : "r" (ttbr)); \
>>> + } while (0)
>>> +
>>> +
>>> #else
>>> #define cpu_get_pgd() \
>>> ({ \
>>> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
>>> pg &= ~0x3fff; \
>>> (pgd_t *)phys_to_virt(pg); \
>>> })
>>> +
>>> +#define cpu_set_ttbr(nr, val) \
>>> + do { \
>>> + u64 ttbr = val; \
>>> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
>>> + : : "r" (ttbr)); \
>>> + } while (0)
>>> +
>>> +#define cpu_get_ttbr(nr) \
>>> + ({ \
>>> + unsigned long ttbr; \
>>> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
>>> + : "=r" (ttbr)); \
>>> + ttbr; \
>>> + })
>>> +
>>> +#define cpu_set_ttbr0(val) \
>>> + do { \
>>> + u64 ttbr = val; \
>>> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
>>> + : : "r" (ttbr)); \
>>> + } while (0)
>>> +
>>> +
>>
>> You could instead lift and extend the definitions provided in kvm_hyp.h,
>> and use the read_sysreg/write_sysreg helpers defined in cp15.h.
>
> Thanks for your review.
> I extend definitions of TTBR0/TTBR1/PAR in kvm_hyp.h when the CONFIG_ARM_LPAE is
> not defined.
> Because cortex A9 don't support virtualization, so use CONFIG_ARM_LPAE to exclude
> some functions and macros which are only used in virtualization.
>
> Here is the code which I tested on vexpress_a15 and vexpress_a9:
>
> diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
> index 14b5903..2592608 100644
> --- a/arch/arm/include/asm/kvm_hyp.h
> +++ b/arch/arm/include/asm/kvm_hyp.h
> @@ -19,12 +19,14 @@
> #define __ARM_KVM_HYP_H__
>
> #include <linux/compiler.h>
> -#include <linux/kvm_host.h>
> #include <asm/cp15.h>
> +
> +#ifdef CONFIG_ARM_LPAE
> +#include <linux/kvm_host.h>
> #include <asm/kvm_mmu.h>
> #include <asm/vfp.h>
> -
> #define __hyp_text __section(.hyp.text) notrace
> +#endif
>
> #define __ACCESS_VFP(CRn) \
> "mrc", "mcr", __stringify(p10, 7, %0, CRn, cr0, 0), u32
> @@ -37,12 +39,18 @@
> __val; \
> })
>
> +#ifdef CONFIG_ARM_LPAE
> #define TTBR0 __ACCESS_CP15_64(0, c2)
> #define TTBR1 __ACCESS_CP15_64(1, c2)
> #define VTTBR __ACCESS_CP15_64(6, c2)
> #define PAR __ACCESS_CP15_64(0, c7)
> #define CNTV_CVAL __ACCESS_CP15_64(3, c14)
> #define CNTVOFF __ACCESS_CP15_64(4, c14)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
There is no reason for this LPAE vs non LPAE dichotomy. Both registers
do exist if your system supports LPAE. So you can either suffix the
64bit version with an _64 (and change the KVM code), or suffix the 32bit
version with _32.
>
> #define MIDR __ACCESS_CP15(c0, 0, c0, 0)
> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
> @@ -98,6 +106,7 @@
> #define cntvoff_el2 CNTVOFF
> #define cnthctl_el2 CNTHCTL
>
> +#ifdef CONFIG_ARM_LPAE
> void __timer_save_state(struct kvm_vcpu *vcpu);
> void __timer_restore_state(struct kvm_vcpu *vcpu);
>
> @@ -123,5 +132,6 @@ void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
> asmlinkage int __guest_enter(struct kvm_vcpu *vcpu,
> struct kvm_cpu_context *host);
> asmlinkage int __hyp_do_panic(const char *, int, u32);
> +#endif
>
> #endif /* __ARM_KVM_HYP_H__ */
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> index 049ee0a..359a782 100644
> --- a/arch/arm/mm/kasan_init.c
> +++ b/arch/arm/mm/kasan_init.c
> @@ -15,6 +15,7 @@
> #include <asm/proc-fns.h>
> #include <asm/tlbflush.h>
> #include <asm/cp15.h>
> +#include <asm/kvm_hyp.h>
No, please don't do that. You shouldn't have to include KVM stuff in
unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
__ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
is where new definition should be added.
> #include <linux/sched/task.h>
>
> #include "mm.h"
> @@ -203,16 +204,16 @@ void __init kasan_init(void)
> u64 orig_ttbr0;
> int i;
>
> - orig_ttbr0 = cpu_get_ttbr(0);
> + orig_ttbr0 = read_sysreg(TTBR0);
>
> #ifdef CONFIG_ARM_LPAE
> memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
> memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
> - cpu_set_ttbr0(__pa(tmp_page_table));
> + write_sysreg(__pa(tmp_page_table), TTBR0);
> #else
> memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> - cpu_set_ttbr0(__pa(tmp_page_table));
> + write_sysreg(__pa(tmp_page_table),TTBR0);
> #endif
> flush_cache_all();
> local_flush_bp_all();
> @@ -257,7 +258,7 @@ void __init kasan_init(void)
> /*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
> __pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
> memset(kasan_zero_page, 0, PAGE_SIZE);
> - cpu_set_ttbr0(orig_ttbr0);
> + write_sysreg(orig_ttbr0 ,TTBR0);
> flush_cache_all();
> local_flush_bp_all();
> local_flush_tlb_all();
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-09 10:10 ` Marc Zyngier
0 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-09 10:10 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu),
linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
> On 12/10/17 15:59, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> On 11/10/17 09:22, Abbott Liu wrote:
>>> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
>>> index f2e1af4..6e26714 100644
>>> --- a/arch/arm/include/asm/proc-fns.h
>>> +++ b/arch/arm/include/asm/proc-fns.h
>>> @@ -131,6 +131,15 @@ extern void cpu_resume(void);
>>> pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
>>> (pgd_t *)phys_to_virt(pg); \
>>> })
>>> +
>>> +#define cpu_set_ttbr0(val) \
>>> + do { \
>>> + u64 ttbr = val; \
>>> + __asm__("mcrr p15, 0, %Q0, %R0, c2" \
>>> + : : "r" (ttbr)); \
>>> + } while (0)
>>> +
>>> +
>>> #else
>>> #define cpu_get_pgd() \
>>> ({ \
>>> @@ -140,6 +149,30 @@ extern void cpu_resume(void);
>>> pg &= ~0x3fff; \
>>> (pgd_t *)phys_to_virt(pg); \
>>> })
>>> +
>>> +#define cpu_set_ttbr(nr, val) \
>>> + do { \
>>> + u64 ttbr = val; \
>>> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
>>> + : : "r" (ttbr)); \
>>> + } while (0)
>>> +
>>> +#define cpu_get_ttbr(nr) \
>>> + ({ \
>>> + unsigned long ttbr; \
>>> + __asm__("mrc p15, 0, %0, c2, c0, 0" \
>>> + : "=r" (ttbr)); \
>>> + ttbr; \
>>> + })
>>> +
>>> +#define cpu_set_ttbr0(val) \
>>> + do { \
>>> + u64 ttbr = val; \
>>> + __asm__("mcr p15, 0, %0, c2, c0, 0" \
>>> + : : "r" (ttbr)); \
>>> + } while (0)
>>> +
>>> +
>>
>> You could instead lift and extend the definitions provided in kvm_hyp.h,
>> and use the read_sysreg/write_sysreg helpers defined in cp15.h.
>
> Thanks for your review.
> I extended the definitions of TTBR0/TTBR1/PAR in kvm_hyp.h for the case where
> CONFIG_ARM_LPAE is not defined.
> Because Cortex-A9 doesn't support virtualization, I use CONFIG_ARM_LPAE to
> exclude the functions and macros that are only used for virtualization.
>
> Here is the code which I tested on vexpress_a15 and vexpress_a9:
>
> diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
> index 14b5903..2592608 100644
> --- a/arch/arm/include/asm/kvm_hyp.h
> +++ b/arch/arm/include/asm/kvm_hyp.h
> @@ -19,12 +19,14 @@
> #define __ARM_KVM_HYP_H__
>
> #include <linux/compiler.h>
> -#include <linux/kvm_host.h>
> #include <asm/cp15.h>
> +
> +#ifdef CONFIG_ARM_LPAE
> +#include <linux/kvm_host.h>
> #include <asm/kvm_mmu.h>
> #include <asm/vfp.h>
> -
> #define __hyp_text __section(.hyp.text) notrace
> +#endif
>
> #define __ACCESS_VFP(CRn) \
> "mrc", "mcr", __stringify(p10, 7, %0, CRn, cr0, 0), u32
> @@ -37,12 +39,18 @@
> __val; \
> })
>
> +#ifdef CONFIG_ARM_LPAE
> #define TTBR0 __ACCESS_CP15_64(0, c2)
> #define TTBR1 __ACCESS_CP15_64(1, c2)
> #define VTTBR __ACCESS_CP15_64(6, c2)
> #define PAR __ACCESS_CP15_64(0, c7)
> #define CNTV_CVAL __ACCESS_CP15_64(3, c14)
> #define CNTVOFF __ACCESS_CP15_64(4, c14)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
There is no reason for this LPAE vs non-LPAE dichotomy. Both registers
do exist if your system supports LPAE. So you can either suffix the
64bit version with an _64 (and change the KVM code), or suffix the
32bit version with _32.
>
> #define MIDR __ACCESS_CP15(c0, 0, c0, 0)
> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
> @@ -98,6 +106,7 @@
> #define cntvoff_el2 CNTVOFF
> #define cnthctl_el2 CNTHCTL
>
> +#ifdef CONFIG_ARM_LPAE
> void __timer_save_state(struct kvm_vcpu *vcpu);
> void __timer_restore_state(struct kvm_vcpu *vcpu);
>
> @@ -123,5 +132,6 @@ void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
> asmlinkage int __guest_enter(struct kvm_vcpu *vcpu,
> struct kvm_cpu_context *host);
> asmlinkage int __hyp_do_panic(const char *, int, u32);
> +#endif
>
> #endif /* __ARM_KVM_HYP_H__ */
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> index 049ee0a..359a782 100644
> --- a/arch/arm/mm/kasan_init.c
> +++ b/arch/arm/mm/kasan_init.c
> @@ -15,6 +15,7 @@
> #include <asm/proc-fns.h>
> #include <asm/tlbflush.h>
> #include <asm/cp15.h>
> +#include <asm/kvm_hyp.h>
No, please don't do that. You shouldn't have to include KVM stuff in
unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
__ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
is where new definitions should be added.
> #include <linux/sched/task.h>
>
> #include "mm.h"
> @@ -203,16 +204,16 @@ void __init kasan_init(void)
> u64 orig_ttbr0;
> int i;
>
> - orig_ttbr0 = cpu_get_ttbr(0);
> + orig_ttbr0 = read_sysreg(TTBR0);
>
> #ifdef CONFIG_ARM_LPAE
> memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
> memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
> - cpu_set_ttbr0(__pa(tmp_page_table));
> + write_sysreg(__pa(tmp_page_table), TTBR0);
> #else
> memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
> - cpu_set_ttbr0(__pa(tmp_page_table));
> + write_sysreg(__pa(tmp_page_table), TTBR0);
> #endif
> flush_cache_all();
> local_flush_bp_all();
> @@ -257,7 +258,7 @@ void __init kasan_init(void)
> /*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
> __pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
> memset(kasan_zero_page, 0, PAGE_SIZE);
> - cpu_set_ttbr0(orig_ttbr0);
> + write_sysreg(orig_ttbr0, TTBR0);
> flush_cache_all();
> local_flush_bp_all();
> local_flush_tlb_all();
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 05/11] Disable kasan's instrumentation
2017-10-19 12:47 ` Russell King - ARM Linux
(?)
@ 2017-11-15 10:19 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-15 10:19 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, glider, dvyukov, opendmb,
linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Jiazhenghua,
Dailei, Zengweilin, Heshaoliang
On 19/10/17 20:47, Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>On Wed, Oct 11, 2017 at 04:22:21PM +0800, Abbott Liu wrote:
>> From: Andrey Ryabinin <a.ryabinin@samsung.com>
>>
>> To avoid some build and runtime errors, compiler's instrumentation must
>> be disabled for code not linked with kernel image.
>
>How does that explain the change to unwind.c ?
Thanks for your review.
Here is the patch code:
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -249,7 +249,8 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
 		if (*vsp >= (unsigned long *)ctrl->sp_high)
 			return -URC_FAILURE;
 
-	ctrl->vrs[reg] = *(*vsp)++;
+	ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+	(*vsp)++;
 	return URC_OK;
 }
I made this change because I don't think unwind_frame needs to be checked by KASan,
and without the change to unwind.c I have occasionally seen the following error.
Here is the error log:
==================================================================
BUG: KASAN: stack-out-of-bounds in unwind_frame+0x3e0/0x788
Read of size 4 at addr 868a3b20 by task swapper/0/1
CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc2+ #2
Hardware name: ARM-Versatile Express
[<8011479c>] (unwind_backtrace) from [<8010f558>] (show_stack+0x10/0x14)
[<8010f558>] (show_stack) from [<808fdca0>] (dump_stack+0x90/0xa4)
[<808fdca0>] (dump_stack) from [<802b3808>] (print_address_description+0x4c/0x270)
[<802b3808>] (print_address_description) from [<802b3ec4>] (kasan_report+0x218/0x300)
[<802b3ec4>] (kasan_report) from [<801143f4>] (unwind_frame+0x3e0/0x788)
[<801143f4>] (unwind_frame) from [<8010ebc4>] (walk_stackframe+0x2c/0x38)
[<8010ebc4>] (walk_stackframe) from [<8010ee70>] (__save_stack_trace+0x160/0x164)
[<8010ee70>] (__save_stack_trace) from [<802b342c>] (kasan_slab_free+0x84/0x158)
[<802b342c>] (kasan_slab_free) from [<802b05dc>] (kmem_cache_free+0x58/0x1d4)
[<802b05dc>] (kmem_cache_free) from [<801a6420>] (rcu_process_callbacks+0x600/0xe04)
[<801a6420>] (rcu_process_callbacks) from [<801018e8>] (__do_softirq+0x1a0/0x4e0)
[<801018e8>] (__do_softirq) from [<80131560>] (irq_exit+0xec/0x120)
[<80131560>] (irq_exit) from [<8018d2a0>] (__handle_domain_irq+0x78/0xdc)
[<8018d2a0>] (__handle_domain_irq) from [<80101700>] (gic_handle_irq+0x48/0x8c)
[<80101700>] (gic_handle_irq) from [<80110690>] (__irq_svc+0x70/0x94)
Exception stack(0x868a39f0 to 0x868a3a38)
39e0: 7fffffff 868a3b88 00000000 00000001
3a00: 868a3b84 7fffffff 868a3b88 6fd1474c 868a3ac0 868a0000 00000002 86898000
3a20: 00000001 868a3a40 8091b4d4 8091edb0 60000013 ffffffff
[<80110690>] (__irq_svc) from [<8091edb0>] (schedule_timeout+0x0/0x3c4)
[<8091edb0>] (schedule_timeout) from [<6fd14770>] (0x6fd14770)
The buggy address belongs to the page:
page:87fcc460 count:0 mapcount:0 mapping: (null) index:0x0
flags: 0x0()
raw: 00000000 00000000 00000000 ffffffff 00000000 87fcc474 87fcc474 00000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
868a3a00: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
868a3a80: 00 00 04 f4 f3 f3 f3 f3 00 00 00 00 00 00 00 00
>868a3b00: 00 00 00 00 f1 f1 f1 f1 04 f4 f4 f4 f2 f2 f2 f2
^
868a3b80: 00 00 00 00 00 04 f4 f4 f3 f3 f3 f3 00 00 00 00
868a3c00: 00 00 00 00 f1 f1 f1 f1 00 07 f4 f4 f3 f3 f3 f3
==================================================================
Disabling lock debugging due to kernel taint
/* Before popping a register, check whether it is feasible or not */
static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
			       unsigned long **vsp, unsigned int reg)
{
	if (unlikely(ctrl->check_each_pop))
		if (*vsp >= (unsigned long *)ctrl->sp_high)
			return -URC_FAILURE;

	// unwind_frame+0x3e0/0x788 is here
	ctrl->vrs[reg] = *(*vsp)++;
	return URC_OK;
}
>
>Does this also disable the string macro changes?
>
>In any case, this should certainly precede patch 4, and very probably
>patch 2.
You are right. I will change it in the next version.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-09 10:10 ` Marc Zyngier
(?)
@ 2017-11-15 10:20 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-15 10:20 UTC (permalink / raw)
To: Marc Zyngier, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 09/11/17 18:11, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
>> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
>> index 049ee0a..359a782 100644
>> --- a/arch/arm/mm/kasan_init.c
>> +++ b/arch/arm/mm/kasan_init.c
>> @@ -15,6 +15,7 @@
>> #include <asm/proc-fns.h>
>> #include <asm/tlbflush.h>
>> #include <asm/cp15.h>
>> +#include <asm/kvm_hyp.h>
>
>No, please don't do that. You shouldn't have to include KVM stuff in
>unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
>__ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
>is where new definition should be added.
Thanks for your review.
You are right. It is better to move __ACCESS_CP15* to cp15.h than to include
kvm_hyp.h. But I don't think it is a good idea to also move the register
definitions that are only used for virtualization into cp15.h, because there
is no virtualization stuff in cp15.h.
Here is the code which I tested on vexpress_a15 and vexpress_a9:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..6db1f51 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -64,6 +64,43 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#ifdef CONFIG_ARM_LPAE
+#define TTBR0 __ACCESS_CP15_64(0, c2)
+#define TTBR1 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#else
+#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR __ACCESS_CP15(c7, 0, c4, 0)
+#endif
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..db8d8db 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,55 +37,25 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
#define CNTVOFF __ACCESS_CP15_64(4, c14)
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
#define HCR __ACCESS_CP15(c1, 4, c1, 0)
#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
^ permalink raw reply related [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-15 10:20 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-15 10:20 UTC (permalink / raw)
To: Marc Zyngier, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, cdall, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, Jiazhenghua, Dailei, Zengweilin,
Heshaoliang
On 09/11/17 18:11, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
>> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
>> index 049ee0a..359a782 100644
>> --- a/arch/arm/mm/kasan_init.c
>> +++ b/arch/arm/mm/kasan_init.c
>> @@ -15,6 +15,7 @@
>> #include <asm/proc-fns.h>
>> #include <asm/tlbflush.h>
>> #include <asm/cp15.h>
>> +#include <asm/kvm_hyp.h>
>
>No, please don't do that. You shouldn't have to include KVM stuff in
>unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
>__ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
>is where new definition should be added.
Thanks for your review.
You are right: it is better to move __ACCESS_CP15* to cp15.h than to include
kvm_hyp.h. But I don't think it is a good idea to move the register definitions
used for virtualization into cp15.h, because there is no virtualization stuff in
cp15.h.
Here is the code which I tested on vexpress_a15 and vexpress_a9:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..6db1f51 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -64,6 +64,43 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#ifdef CONFIG_ARM_LPAE
+#define TTBR0 __ACCESS_CP15_64(0, c2)
+#define TTBR1 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#else
+#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR __ACCESS_CP15(c7, 0, c4, 0)
+#endif
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..db8d8db 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,55 +37,25 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
#define CNTVOFF __ACCESS_CP15_64(4, c14)
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
#define HCR __ACCESS_CP15(c1, 4, c1, 0)
#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-15 10:20 ` Liuwenliang (Abbott Liu)
@ 2017-11-15 10:35 ` Marc Zyngier
1 sibling, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-15 10:35 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
> On 09/11/17 18:11, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
>>> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
>>> index 049ee0a..359a782 100644
>>> --- a/arch/arm/mm/kasan_init.c
>>> +++ b/arch/arm/mm/kasan_init.c
>>> @@ -15,6 +15,7 @@
>>> #include <asm/proc-fns.h>
>>> #include <asm/tlbflush.h>
>>> #include <asm/cp15.h>
>>> +#include <asm/kvm_hyp.h>
>>
>>No, please don't do that. You shouldn't have to include KVM stuff in
>>unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
>>__ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
>>is where new definition should be added.
>
> Thanks for your review. You are right. It is better to move
> __ACCESS_CP15* to cp15.h than to include kvm_hyp.h. But I don't think
> it is a good idea to move registers definition which is used in
> virtualization to cp15.h, Because there is no virtualization stuff in
> cp15.h.
It is not about virtualization at all.
It is about what is a CP15 register and what is not. This file is called
"cp15.h", not "cp15-except-virtualization-and-maybe-some-others.h". But
at the end of the day, that's for Russell to decide.
>
> Here is the code which I tested on vexpress_a15 and vexpress_a9:
> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> index dbdbce1..6db1f51 100644
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -64,6 +64,43 @@
> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>
> +#ifdef CONFIG_ARM_LPAE
> +#define TTBR0 __ACCESS_CP15_64(0, c2)
> +#define TTBR1 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
Again: there is no point in not having these register encodings
cohabiting. They are both perfectly defined in the architecture. Just
suffix one (or even both) with their respective size, making it obvious
which one you're talking about.
Thanks,
M.
--
Jazz is not dead, it just smell funny.
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-15 10:35 ` Marc Zyngier
@ 2017-11-15 13:16 ` Liuwenliang (Abbott Liu)
1 sibling, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-15 13:16 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>> On 09/11/17 18:11, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
>>>> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
>>>> index 049ee0a..359a782 100644
>>>> --- a/arch/arm/mm/kasan_init.c
>>>> +++ b/arch/arm/mm/kasan_init.c
>>>> @@ -15,6 +15,7 @@
>>>> #include <asm/proc-fns.h>
>>>> #include <asm/tlbflush.h>
>>>> #include <asm/cp15.h>
>>>> +#include <asm/kvm_hyp.h>
>>>
>>>No, please don't do that. You shouldn't have to include KVM stuff in
>>>unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
>>>__ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
>>>is where new definition should be added.
>>
>> Thanks for your review. You are right. It is better to move
>> __ACCESS_CP15* to cp15.h than to include kvm_hyp.h. But I don't think
>> it is a good idea to move registers definition which is used in
>> virtualization to cp15.h, Because there is no virtualization stuff in
>> cp15.h.
>
>It is not about virtualization at all.
>
>It is about what is a CP15 register and what is not. This file is called
>"cp15.h", not "cp15-except-virtualization-and-maybe-some-others.h". But
>at the end of the day, that's for Russell to decide.
Thanks for your review.
You are right: all __ACCESS_CP15* are CP15 registers. I split the normal CP15
registers (such as TTBR0/TTBR1/SCTLR) from the virtualization CP15 registers
(such as VTTBR/CNTV_CVAL/HCR) because I didn't think the virtualization
registers would be needed on a non-virtualization system.
But now I think moving all __ACCESS_CP15* to cp15.h may be a better choice.
>>
>> Here is the code which I tested on vexpress_a15 and vexpress_a9:
>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>> index dbdbce1..6db1f51 100644
>> --- a/arch/arm/include/asm/cp15.h
>> +++ b/arch/arm/include/asm/cp15.h
>> @@ -64,6 +64,43 @@
>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>
>> +#ifdef CONFIG_ARM_LPAE
>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>> +#define PAR __ACCESS_CP15_64(0, c7)
>> +#else
>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>> +#endif
>
>Again: there is no point in not having these register encodings
>cohabiting. They are both perfectly defined in the architecture. Just
>suffix one (or even both) with their respective size, making it obvious
>which one you're talking about.
I am sorry that I didn't explain why TTBR0/TTBR1/PAR need to be defined
differently for CONFIG_ARM_LPAE and non-CONFIG_ARM_LPAE builds.
Here is the reason, quoting DDI0406C2c, the ARM Architecture Reference Manual:
B4.1.155 TTBR0, Translation Table Base Register 0, VMSA
...
The Multiprocessing Extensions change the TTBR0 32-bit register format.
The Large Physical Address Extension extends TTBR0 to a 64-bit register. In an
implementation that includes the Large Physical Address Extension, TTBCR.EAE
determines which TTBR0 format is used:
EAE==0 32-bit format is used. TTBR0[63:32] are ignored.
EAE==1 64-bit format is used.
B4.1.156 TTBR1, Translation Table Base Register 1, VMSA
...
The Multiprocessing Extensions change the TTBR1 32-bit register format.
The Large Physical Address Extension extends TTBR1 to a 64-bit register. In an
implementation that includes the Large Physical Address Extension, TTBCR.EAE
determines which TTBR1 format is used:
EAE==0 32-bit format is used. TTBR1[63:32] are ignored.
EAE==1 64-bit format is used.
B4.1.154 TTBCR, Translation Table Base Control Register, VMSA
...
EAE, bit[31], if implementation includes the Large Physical Address Extension
Extended Address Enable. The meanings of the possible values of this bit are:
0 Use the 32-bit translation system, with the Short-descriptor translation table format. In
this case, the format of the TTBCR is as described in this section.
1 Use the 40-bit translation system, with the Long-descriptor translation table format. In
this case, the format of the TTBCR is as described in TTBCR format when using the
Long-descriptor translation table format on page B4-1692.
B4.1.112 PAR, Physical Address Register, VMSA
If the implementation includes the Large Physical Address Extension, the PAR is extended
to be a 64-bit register and:
* The 64-bit PAR is used:
- when using the Long-descriptor translation table format
- in an implementation that includes the Virtualization Extensions, for the result
of an ATS1Cxx operation performed from Hyp mode.
* The 32-bit PAR is used when using the Short-descriptor translation table format. In
this case, PAR[63:32] is UNK/SBZP.
Otherwise, the PAR is a 32-bit register.
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-15 13:16 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-15 13:16 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>> On 09/11/17 18:11, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
>>>> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
>>>> index 049ee0a..359a782 100644
>>>> --- a/arch/arm/mm/kasan_init.c
>>>> +++ b/arch/arm/mm/kasan_init.c
>>>> @@ -15,6 +15,7 @@
>>>> #include <asm/proc-fns.h>
>>>> #include <asm/tlbflush.h>
>>>> #include <asm/cp15.h>
>>>> +#include <asm/kvm_hyp.h>
>>>
>>>No, please don't do that. You shouldn't have to include KVM stuff in
>>>unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
>>>__ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
>>>is where new definition should be added.
>>
>> Thanks for your review. You are right. It is better to move
>> __ACCESS_CP15* to cp15.h than to include kvm_hyp.h. But I don't think
>> it is a good idea to move registers definition which is used in
>> virtualization to cp15.h, Because there is no virtualization stuff in
>> cp15.h.
>
>It is not about virtualization at all.
>
>It is about what is a CP15 register and what is not. This file is called
>"cp15.h", not "cp15-except-virtualization-and-maybe-some-others.h". But
>at the end of the day, that's for Russell to decide.
Thanks for your review.
You are right, all __ACCESS_CP15* are cp15 registers. I splited normal cp15 register
(such as TTBR0/TTBR1/SCTLR and so on) and virtualizaton cp15 registers(such as VTTBR/
CNTV_CVAL/HCR) because I didn't think we need use those virtualization cp15 registers
in non virtualization system.
But now I think all __ACCESS_CP15* move to cp15.h may be a better choise.
>>
>> Here is the code which I tested on vexpress_a15 and vexpress_a9:
>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>> index dbdbce1..6db1f51 100644
>> --- a/arch/arm/include/asm/cp15.h
>> +++ b/arch/arm/include/asm/cp15.h
>> @@ -64,6 +64,43 @@
>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>
>> +#ifdef CONFIG_ARM_LPAE
>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>> +#define PAR __ACCESS_CP15_64(0, c7)
>> +#else
>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>> +#endif
>
>Again: there is no point in not having these register encodings
>cohabiting. They are both perfectly defined in the architecture. Just
>suffix one (or even both) with their respective size, making it obvious
>which one you're talking about.
I am sorry that I didn't point why I need to define TTBR0/ TTBR1/PAR in to different way
between CONFIG_ARM_LPAE and non CONFIG_ARM_LPAE.
The following description is the reason:
Here is the description come from DDI0406C2c_arm_architecture_reference_manual.pdf:
B4.1.155 TTBR0, Translation Table Base Register 0, VMSA
...
The Multiprocessing Extensions change the TTBR0 32-bit register format.
The Large Physical Address Extension extends TTBR0 to a 64-bit register. In an
implementation that includes the Large Physical Address Extension, TTBCR.EAE
determines which TTBR0 format is used:
EAE==0 32-bit format is used. TTBR0[63:32] are ignored.
EAE==1 64-bit format is used.
B4.1.156 TTBR1, Translation Table Base Register 1, VMSA
...
The Multiprocessing Extensions change the TTBR0 32-bit register format.
The Large Physical Address Extension extends TTBR1 to a 64-bit register. In an
implementation that includes the Large Physical Address Extension, TTBCR.EAE
determines which TTBR1 format is used:
EAE==0 32-bit format is used. TTBR1[63:32] are ignored.
EAE==1 64-bit format is used.
B4.1.154 TTBCR, Translation Table Base Control Register, VMSA
...
EAE, bit[31], if implementation includes the Large Physical Address Extension
Extended Address Enable. The meanings of the possible values of this bit are:
0 Use the 32-bit translation system, with the Short-descriptor translation table format. In
this case, the format of the TTBCR is as described in this section.
1 Use the 40-bit translation system, with the Long-descriptor translation table format. In
this case, the format of the TTBCR is as described in TTBCR format when using the
Long-descriptor translation table format on page B4-1692.
B4.1.112 PAR, Physical Address Register, VMSA
If the implementation includes the Large Physical Address Extension, the PAR is extended
to be a 64-bit register and:
* The 64-bit PAR is used:
- when using the Long-descriptor translation table format
- in an implementation that includes the Virtualization Extensions, for the result
of an ATS1Cxx operation performed from Hyp mode.
* The 32-bit PAR is used when using the Short-descriptor translation table format. In
this case, PAR[63:32] is UNK/SBZP.
Otherwise, the PAR is a 32-bit register.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-15 13:16 ` Liuwenliang (Abbott Liu)
@ 2017-11-15 13:54 ` Marc Zyngier
-1 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-15 13:54 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>> On 09/11/17 18:11, Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>> On 09/11/17 07:46, Liuwenliang (Abbott Liu) wrote:
>>>>> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
>>>>> index 049ee0a..359a782 100644
>>>>> --- a/arch/arm/mm/kasan_init.c
>>>>> +++ b/arch/arm/mm/kasan_init.c
>>>>> @@ -15,6 +15,7 @@
>>>>> #include <asm/proc-fns.h>
>>>>> #include <asm/tlbflush.h>
>>>>> #include <asm/cp15.h>
>>>>> +#include <asm/kvm_hyp.h>
>>>>
>>>> No, please don't do that. You shouldn't have to include KVM stuff in
>>>> unrelated code. Instead of adding stuff to kvm_hyp.h, move all the
>>>> __ACCESS_CP15* to cp15.h, and it will be obvious to everyone that this
>>>> is where new definition should be added.
>>>
>>> Thanks for your review. You are right. It is better to move
>>> __ACCESS_CP15* to cp15.h than to include kvm_hyp.h. But I don't think
>>> it is a good idea to move register definitions which are used in
>>> virtualization to cp15.h, because there is no virtualization stuff in
>>> cp15.h.
>>
>> It is not about virtualization at all.
>>
>> It is about what is a CP15 register and what is not. This file is called
>> "cp15.h", not "cp15-except-virtualization-and-maybe-some-others.h". But
>> at the end of the day, that's for Russell to decide.
>
> Thanks for your review.
> You are right, all __ACCESS_CP15* are cp15 registers. I split normal cp15 registers
> (such as TTBR0/TTBR1/SCTLR and so on) and virtualization cp15 registers (such as VTTBR/
> CNTV_CVAL/HCR) because I didn't think we would need those virtualization cp15 registers
> on a non-virtualization system.
>
> But now I think moving all the __ACCESS_CP15* definitions to cp15.h may be a better choice.
>
>>>
>>> Here is the code which I tested on vexpress_a15 and vexpress_a9:
>>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>>> index dbdbce1..6db1f51 100644
>>> --- a/arch/arm/include/asm/cp15.h
>>> +++ b/arch/arm/include/asm/cp15.h
>>> @@ -64,6 +64,43 @@
>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>
>>> +#ifdef CONFIG_ARM_LPAE
>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>> +#else
>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>> +#endif
>>
>> Again: there is no point in not having these register encodings
>> cohabiting. They are both perfectly defined in the architecture. Just
>> suffix one (or even both) with their respective size, making it obvious
>> which one you're talking about.
>
> I am sorry that I didn't explain why I need to define TTBR0/TTBR1/PAR in a different way
> between CONFIG_ARM_LPAE and non-CONFIG_ARM_LPAE builds.
> The following description is the reason:
> Here is the description come from DDI0406C2c_arm_architecture_reference_manual.pdf:
[...]
You're missing the point. TTBR0's existence as a 64-bit CP15 register has
nothing to do with the kernel being compiled with LPAE or not. It has
everything to do with the HW supporting LPAE, and it is the kernel's job
to use the right accessor depending on how it is compiled. On a CPU
supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
chooses to use one rather than the other.
Also, if I follow your reasoning, why are you bothering defining PAR in
the non-LPAE case? It is not used by anything, as far as I can see...
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-15 13:54 ` Marc Zyngier
@ 2017-11-16 3:07 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-16 3:07 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
>On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
>> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>>>> index dbdbce1..6db1f51 100644
>>>> --- a/arch/arm/include/asm/cp15.h
>>>> +++ b/arch/arm/include/asm/cp15.h
>>>> @@ -64,6 +64,43 @@
>>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
>>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>>
>>>> +#ifdef CONFIG_ARM_LPAE
>>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>>> +#else
>>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>>> +#endif
>>> Again: there is no point in not having these register encodings
>>> cohabiting. They are both perfectly defined in the architecture. Just
>>> suffix one (or even both) with their respective size, making it obvious
>>> which one you're talking about.
>>
>> I am sorry that I didn't explain why I need to define TTBR0/TTBR1/PAR in a different way
>> between CONFIG_ARM_LPAE and non-CONFIG_ARM_LPAE builds.
>> The following description is the reason:
>> Here is the description come from DDI0406C2c_arm_architecture_reference_manual.pdf:
>[...]
>
>You're missing the point. TTBR0's existence as a 64-bit CP15 register has
>nothing to do with the kernel being compiled with LPAE or not. It has
>everything to do with the HW supporting LPAE, and it is the kernel's job
>to use the right accessor depending on how it is compiled. On a CPU
>supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
>chooses to use one rather than the other.
Thanks for your review.
I don't think both TTBR0 accessors (the 64-bit accessor and the 32-bit accessor) are valid on a CPU supporting
LPAE when LPAE is enabled. Here is the description from DDI0406C2c_arm_architecture_reference_manual.pdf
(the ARM® Architecture Reference Manual, ARMv7-A and ARMv7-R edition), which you can find by searching for
"ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition".
64-bit TTBR0 and TTBR1 format
The bit assignments for the 64-bit implementations of TTBR0 and TTBR1 are identical, and are:
Bits[63:56] UNK/SBZP.
ASID, bits[55:48]:
An ASID for the translation table base address. The TTBCR.A1 field selects either TTBR0.ASID
or TTBR1.ASID.
Bits[47:40] UNK/SBZP.
BADDR, bits[39:x]:
Translation table base address, bits[39:x]. Defining the translation table base address width on
page B4-1698 describes how x is defined.
The value of x determines the required alignment of the translation table, which must be aligned to
2^x bytes.
Bits[x-1:0] UNK/SBZP.
...
To access a 64-bit TTBR0, software performs a 64-bit read or write of the CP15 registers with <CRm> set to c2 and
<opc1> set to 0. For example:
MRRC p15,0,<Rt>,<Rt2>, c2 ; Read 64-bit TTBR0 into Rt (low word) and Rt2 (high word)
MCRR p15,0,<Rt>,<Rt2>, c2 ; Write Rt (low word) and Rt2 (high word) to 64-bit TTBR0
So I think that if you access TTBR0/TTBR1 on a CPU supporting LPAE, you must use the "mcrr/mrrc" instructions
(__ACCESS_CP15_64). If you access TTBR0/TTBR1 on a CPU supporting LPAE with the "mcr/mrc" instructions,
which are the 32-bit version (__ACCESS_CP15), then even if the CPU doesn't report an error, you still lose the high
or low 32 bits of TTBR0/TTBR1.
>Also, if I follow your reasoning, why are you bothering defining PAR in
>the non-LPAE case? It is not used by anything, as far as I can see...
I don't use the PAR; I changed the PAR definition just because I think the 64-bit definition would be wrong on
a non-LPAE CPU.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-16 3:07 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-16 3:07 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
>On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
>> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>>>> index dbdbce1..6db1f51 100644
>>>> --- a/arch/arm/include/asm/cp15.h
>>>> +++ b/arch/arm/include/asm/cp15.h
>>>> @@ -64,6 +64,43 @@
>>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
>>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>>
>>>> +#ifdef CONFIG_ARM_LPAE
>>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>>> +#else
>>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>>> +#endif
>>> Again: there is no point in not having these register encodings
>>> cohabiting. They are both perfectly defined in the architecture. Just
>>> suffix one (or even both) with their respective size, making it obvious
>>> which one you're talking about.
>>
>> I am sorry that I didn't point why I need to define TTBR0/ TTBR1/PAR in to different way
>> between CONFIG_ARM_LPAE and non CONFIG_ARM_LPAE.
>> The following description is the reason:
>> Here is the description come from DDI0406C2c_arm_architecture_reference_manual.pdf:
>[...]
>
>You're missing the point. TTBR0 existence as a 64bit CP15 register has
>nothing to do the kernel being compiled with LPAE or not. It has
>everything to do with the HW supporting LPAE, and it is the kernel's job
>to use the right accessor depending on how it is compiled. On a CPU
>supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
>chooses to use one rather than the other.
Thanks for your review.
I don't think both TTBR0 accessors(64bit accessor and 32bit accessor) are valid on a CPU supporting
LPAE which the LPAE is enabled. Here is the description come form DDI0406C2c_arm_architecture_reference_manual.pdf
(=ARM® Architecture Reference Manual ARMv7-A and ARMv7-R edition) which you can get the document
by google "ARM® Architecture Reference Manual ARMv7-A and ARMv7-R edition".
64-bit TTBR0 and TTBR1 format
The bit assignments for the 64-bit implementations of TTBR0 and TTBR1 are identical, and are:
Bits[63:56] UNK/SBZP.
ASID, bits[55:48]:
An ASID for the translation table base address. The TTBCR.A1 field selects either TTBR0.ASID
or TTBR1.ASID.
Bits[47:40] UNK/SBZP.
BADDR, bits[39:x]:
Translation table base address, bits[39:x]. Defining the translation table base address width on
page B4-1698 describes how x is defined.
The value of x determines the required alignment of the translation table, which must be aligned to
2x bytes.
Bits[x-1:0] UNK/SBZP.
...
To access a 64-bit TTBR0, software performs a 64-bit read or write of the CP15 registers with <CRm> set to c2 and
<opc1> set to 0. For example:
MRRC p15,0,<Rt>,<Rt2>, c2 ; Read 64-bit TTBR0 into Rt (low word) and Rt2 (high word)
MCRR p15,0,<Rt>,<Rt2>, c2 ; Write Rt (low word) and Rt2 (high word) to 64-bit TTBR0
So, I think if you access TTBR0/TTBR1 on CPU supporting LPAE, you must use "mcrr/mrrc" instruction
(__ACCESS_CP15_64). If you access TTBR0/TTBR1 on CPU supporting LPAE by "mcr/mrc" instruction
which is 32bit version (__ACCESS_CP15), even if the CPU doesn't report error, you also lose the high
or low 32bit of the TTBR0/TTBR1.
>Also, if I follow your reasoning, why are you bothering defining PAR in
>the non-LPAE case? It is not used by anything, as far as I can see...
I don't use the PAR, I change the defining PAR just because I think it will be wrong in
a non LPAE CPU.
^ permalink raw reply [flat|nested] 253+ messages in thread
* [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-16 3:07 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-16 3:07 UTC (permalink / raw)
To: linux-arm-kernel
>On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
>> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier at arm.com] wrote:
>>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>>>> index dbdbce1..6db1f51 100644
>>>> --- a/arch/arm/include/asm/cp15.h
>>>> +++ b/arch/arm/include/asm/cp15.h
>>>> @@ -64,6 +64,43 @@
>>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
>>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>>
>>>> +#ifdef CONFIG_ARM_LPAE
>>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>>> +#else
>>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>>> +#endif
>>> Again: there is no point in not having these register encodings
>>> cohabiting. They are both perfectly defined in the architecture. Just
>>> suffix one (or even both) with their respective size, making it obvious
>>> which one you're talking about.
>>
>> I am sorry that I didn't point why I need to define TTBR0/ TTBR1/PAR in to different way
>> between CONFIG_ARM_LPAE and non CONFIG_ARM_LPAE.
>> The following description is the reason:
>> Here is the description come from DDI0406C2c_arm_architecture_reference_manual.pdf:
>[...]
>
>You're missing the point. TTBR0 existence as a 64bit CP15 register has
>nothing to do the kernel being compiled with LPAE or not. It has
>everything to do with the HW supporting LPAE, and it is the kernel's job
>to use the right accessor depending on how it is compiled. On a CPU
>supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
>chooses to use one rather than the other.
Thanks for your review.
I don't think both TTBR0 accessors(64bit accessor and 32bit accessor) are valid on a CPU supporting
LPAE which the LPAE is enabled. Here is the description come form DDI0406C2c_arm_architecture_reference_manual.pdf
(=ARM? Architecture Reference Manual ARMv7-A and ARMv7-R edition) which you can get the document
by google "ARM? Architecture Reference Manual ARMv7-A and ARMv7-R edition".
64-bit TTBR0 and TTBR1 format
The bit assignments for the 64-bit implementations of TTBR0 and TTBR1 are identical, and are:
Bits[63:56] UNK/SBZP.
ASID, bits[55:48]:
An ASID for the translation table base address. The TTBCR.A1 field selects either TTBR0.ASID
or TTBR1.ASID.
Bits[47:40] UNK/SBZP.
BADDR, bits[39:x]:
Translation table base address, bits[39:x]. Defining the translation table base address width on
page B4-1698 describes how x is defined.
The value of x determines the required alignment of the translation table, which must be aligned to
2^x bytes.
Bits[x-1:0] UNK/SBZP.
...
To access a 64-bit TTBR0, software performs a 64-bit read or write of the CP15 registers with <CRm> set to c2 and
<opc1> set to 0. For example:
MRRC p15,0,<Rt>,<Rt2>, c2 ; Read 64-bit TTBR0 into Rt (low word) and Rt2 (high word)
MCRR p15,0,<Rt>,<Rt2>, c2 ; Write Rt (low word) and Rt2 (high word) to 64-bit TTBR0
So, I think if you access TTBR0/TTBR1 on a CPU supporting LPAE, you must use the "mcrr/mrrc" instructions
(__ACCESS_CP15_64). If you access TTBR0/TTBR1 on a CPU supporting LPAE with the 32-bit "mcr/mrc"
instructions (__ACCESS_CP15), then even if the CPU doesn't report an error, you still lose the high
or low 32 bits of TTBR0/TTBR1.
>Also, if I follow your reasoning, why are you bothering defining PAR in
>the non-LPAE case? It is not used by anything, as far as I can see...
I don't use PAR; I changed the definition of PAR only because I think the 64-bit definition would be
wrong on a non-LPAE CPU.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-16 3:07 ` Liuwenliang (Abbott Liu)
@ 2017-11-16 9:54 ` Marc Zyngier
-1 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-16 9:54 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Thu, Nov 16 2017 at 3:07:54 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
>>> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)"
>>>> <liuwenliang@huawei.com> wrote:
>>>>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>>>>> index dbdbce1..6db1f51 100644
>>>>> --- a/arch/arm/include/asm/cp15.h
>>>>> +++ b/arch/arm/include/asm/cp15.h
>>>>> @@ -64,6 +64,43 @@
>>>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : :
>>>>> "r" ((t)(v)))
>>>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>>>
>>>>> +#ifdef CONFIG_ARM_LPAE
>>>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>>>> +#else
>>>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>>>> +#endif
>>>> Again: there is no point in not having these register encodings
>>>> cohabiting. They are both perfectly defined in the architecture. Just
>>>> suffix one (or even both) with their respective size, making it obvious
>>>> which one you're talking about.
>>>
>>> I am sorry that I didn't point why I need to define TTBR0/
>>> TTBR1/PAR in to different way
>>> between CONFIG_ARM_LPAE and non CONFIG_ARM_LPAE.
>>> The following description is the reason:
>>> Here is the description come from
>>> DDI0406C2c_arm_architecture_reference_manual.pdf:
>>[...]
>>
>>You're missing the point. TTBR0 existence as a 64bit CP15 register has
>>nothing to do the kernel being compiled with LPAE or not. It has
>>everything to do with the HW supporting LPAE, and it is the kernel's job
>>to use the right accessor depending on how it is compiled. On a CPU
>>supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
>>chooses to use one rather than the other.
>
> Thanks for your review. I don't think both TTBR0 accessors(64bit
> accessor and 32bit accessor) are valid on a CPU supporting LPAE which
> the LPAE is enabled. Here is the description come form
> DDI0406C2c_arm_architecture_reference_manual.pdf (=ARM® Architecture
> Reference Manual ARMv7-A and ARMv7-R edition) which you can get the
> document by google "ARM® Architecture Reference Manual ARMv7-A and
> ARMv7-R edition".
Trust me, from where I sit, I have a much better source than Google for
that document. Who would have thought?
Nothing in what you randomly quote invalidates what I've been saying. And
to show you what's wrong with your reasoning, let me describe a
scenario:
I have a non-LPAE kernel that runs on my system. It uses the 32bit
version of the TTBRs. It turns out that this kernel runs under a
hypervisor (KVM, Xen, or your toy of the day). The hypervisor
context-switches vcpus without even looking at the configuration
of that guest. It doesn't have to care. It just blindly uses the 64bit
version of the TTBRs.
The architecture *guarantees* that it works (it even works with a 32bit
guest under a 64bit hypervisor). In your world, this doesn't work. I
guess the architecture wins.
> So, I think if you access TTBR0/TTBR1 on CPU supporting LPAE, you must
> use "mcrr/mrrc" instruction (__ACCESS_CP15_64). If you access
> TTBR0/TTBR1 on CPU supporting LPAE by "mcr/mrc" instruction which is
> 32bit version (__ACCESS_CP15), even if the CPU doesn't report error,
> you also lose the high or low 32bit of the TTBR0/TTBR1.
It is not about "supporting LPAE". It is about using the accessor that
makes sense in a particular context. Yes, the architecture allows you to
do something stupid. Don't do it. It doesn't mean the accessors cannot
be used, and I hope that my example above demonstrates it.
Conclusion: I still stand by my request that both versions of TTBRs/PAR
are described without depending on the kernel configuration, because
this has nothing to do with the kernel configuration.
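For reference, the layout being requested could look like the following sketch (the `_32`/`_64` suffix names are illustrative, not taken from this thread):

```c
/* Both encodings defined unconditionally, suffixed by access size */
#define TTBR0_32	__ACCESS_CP15(c2, 0, c0, 0)
#define TTBR0_64	__ACCESS_CP15_64(0, c2)
#define TTBR1_32	__ACCESS_CP15(c2, 0, c0, 1)
#define TTBR1_64	__ACCESS_CP15_64(1, c2)
#define PAR_32		__ACCESS_CP15(c7, 0, c4, 0)
#define PAR_64		__ACCESS_CP15_64(0, c7)
```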
Thanks,
M.
--
Jazz is not dead, it just smells funny.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-16 9:54 ` Marc Zyngier
@ 2017-11-16 14:24 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-16 14:24 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 16/11/17 17:54 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>On Thu, Nov 16 2017 at 3:07:54 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
>>>> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)"
>>>>> <liuwenliang@huawei.com> wrote:
>>>>>> diff --git a/arch/arm/include/asm/cp15.h
>>>>>> b/arch/arm/include/asm/cp15.h index dbdbce1..6db1f51 100644
>>>>>> --- a/arch/arm/include/asm/cp15.h
>>>>>> +++ b/arch/arm/include/asm/cp15.h
>>>>>> @@ -64,6 +64,43 @@
>>>>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : :
>>>>>> "r" ((t)(v)))
>>>>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>>>>
>>>>>> +#ifdef CONFIG_ARM_LPAE
>>>>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>>>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>>>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>>>>> +#else
>>>>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>>>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>>>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>>>>> +#endif
>>>>> Again: there is no point in not having these register encodings
>>>>> cohabiting. They are both perfectly defined in the architecture.
>>>>> Just suffix one (or even both) with their respective size, making
>>>>> it obvious which one you're talking about.
>>>>
>>>> I am sorry that I didn't point why I need to define TTBR0/ TTBR1/PAR
>>>> in to different way between CONFIG_ARM_LPAE and non CONFIG_ARM_LPAE.
>>>> The following description is the reason:
>>>> Here is the description come from
>>>> DDI0406C2c_arm_architecture_reference_manual.pdf:
>>>[...]
>>>
>>>You're missing the point. TTBR0 existence as a 64bit CP15 register has
>>>nothing to do the kernel being compiled with LPAE or not. It has
>>>everything to do with the HW supporting LPAE, and it is the kernel's job
>>>to use the right accessor depending on how it is compiled. On a CPU
>>>supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
>>>chooses to use one rather than the other.
>>
>> Thanks for your review. I don't think both TTBR0 accessors(64bit
>> accessor and 32bit accessor) are valid on a CPU supporting LPAE which
>> the LPAE is enabled. Here is the description come form
>> DDI0406C2c_arm_architecture_reference_manual.pdf (=ARM® Architecture
>> Reference Manual ARMv7-A and ARMv7-R edition) which you can get the
>> document by google "ARM® Architecture Reference Manual ARMv7-A and
>> ARMv7-R edition".
>Trust me, from where I seat, I have a much better source than Google for
>that document. Who would have thought?
>Nothing in what you randomly quote invalids what I've been saying. And
>to show you what's wrong with your reasoning, let me describe a
>scenario,
>I have a non-LPAE kernel that runs on my system. It uses the 32bit
>version of the TTBRs. It turns out that this kernel runs under a
>hypervisor (KVM, Xen, or your toy of the day). The hypervisor
>context-switches vcpus without even looking at whether the configuration
>of that guest. It doesn't have to care. It just blindly uses the 64bit
>version of the TTBRs.
>The architecture *guarantees* that it works (it even works with a 32bit
>guest under a 64bit hypervisor). In your world, this doesn't work. I
>guess the architecture wins.
>> So, I think if you access TTBR0/TTBR1 on CPU supporting LPAE, you must
>> use "mcrr/mrrc" instruction (__ACCESS_CP15_64). If you access
>> TTBR0/TTBR1 on CPU supporting LPAE by "mcr/mrc" instruction which is
>> 32bit version (__ACCESS_CP15), even if the CPU doesn't report error,
>> you also lose the high or low 32bit of the TTBR0/TTBR1.
>It is not about "supporting LPAE". It is about using the accessor that
>makes sense in a particular context. Yes, the architecture allows you to
>do something stupid. Don't do it. It doesn't mean the accessors cannot
>be used, and I hope that my example above demonstrates it.
>Conclusion: I still stand by my request that both versions of TTBRs/PAR
>are described without depending on the kernel configuration, because
>this has nothing to do with the kernel configuration.
Thanks for your review.
Yes, you are right. I have tested that the "mcrr/mrrc" instructions (__ACCESS_CP15_64)
also work without LPAE, on vexpress_a9.
Here is the code I tested on vexpress_a9 and vexpress_a15:
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -64,6 +64,56 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0 __ACCESS_CP15_64(0, c2)
+#define TTBR1 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
^ permalink raw reply related [flat|nested] 253+ messages in thread
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-16 14:24 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-16 14:24 UTC (permalink / raw)
To: linux-arm-kernel
On 16/11/17 17:54 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>On Thu, Nov 16 2017 at 3:07:54 am GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
>>>> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)"
>>>>> <liuwenliang@huawei.com> wrote:
>>>>>> diff --git a/arch/arm/include/asm/cp15.h
>>>>>> b/arch/arm/include/asm/cp15.h index dbdbce1..6db1f51 100644
>>>>>> --- a/arch/arm/include/asm/cp15.h
>>>>>> +++ b/arch/arm/include/asm/cp15.h
>>>>>> @@ -64,6 +64,43 @@
>>>>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : :
>>>>>> "r" ((t)(v)))
>>>>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>>>>
>>>>>> +#ifdef CONFIG_ARM_LPAE
>>>>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>>>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>>>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>>>>> +#else
>>>>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>>>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>>>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>>>>> +#endif
>>>>> Again: there is no point in not having these register encodings
>>>>> cohabiting. They are both perfectly defined in the architecture.
>>>>> Just suffix one (or even both) with their respective size, making
>>>>> it obvious which one you're talking about.
>>>>
>>>> I am sorry that I didn't explain why I need to define TTBR0/TTBR1/PAR
>>>> in two different ways between CONFIG_ARM_LPAE and non-CONFIG_ARM_LPAE.
>>>> The following description is the reason:
>>>> Here is the description come from
>>>> DDI0406C2c_arm_architecture_reference_manual.pdf:
>>>[...]
>>>
>>>You're missing the point. TTBR0's existence as a 64bit CP15 register has
>>>nothing to do with the kernel being compiled with LPAE or not. It has
>>>everything to do with the HW supporting LPAE, and it is the kernel's job
>>>to use the right accessor depending on how it is compiled. On a CPU
>>>supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
>>>chooses to use one rather than the other.
>>
>> Thanks for your review. I don't think both TTBR0 accessors (the 64bit
>> accessor and the 32bit accessor) are valid on a CPU supporting LPAE on
>> which LPAE is enabled. Here is the description from
>> DDI0406C2c_arm_architecture_reference_manual.pdf (the ARM® Architecture
>> Reference Manual, ARMv7-A and ARMv7-R edition), which you can find by
>> searching for "ARM® Architecture Reference Manual ARMv7-A and
>> ARMv7-R edition".
>Trust me, from where I sit, I have a much better source than Google for
>that document. Who would have thought?
>Nothing in what you randomly quote invalidates what I've been saying. And
>to show you what's wrong with your reasoning, let me describe a
>scenario:
>I have a non-LPAE kernel that runs on my system. It uses the 32bit
>version of the TTBRs. It turns out that this kernel runs under a
>hypervisor (KVM, Xen, or your toy of the day). The hypervisor
>context-switches vcpus without even looking at the configuration
>of that guest. It doesn't have to care. It just blindly uses the 64bit
>version of the TTBRs.
>The architecture *guarantees* that it works (it even works with a 32bit
>guest under a 64bit hypervisor). In your world, this doesn't work. I
>guess the architecture wins.
>> So, I think if you access TTBR0/TTBR1 on CPU supporting LPAE, you must
>> use "mcrr/mrrc" instruction (__ACCESS_CP15_64). If you access
>> TTBR0/TTBR1 on CPU supporting LPAE by "mcr/mrc" instruction which is
>> 32bit version (__ACCESS_CP15), even if the CPU doesn't report error,
>> you also lose the high or low 32bit of the TTBR0/TTBR1.
>It is not about "supporting LPAE". It is about using the accessor that
>makes sense in a particular context. Yes, the architecture allows you to
>do something stupid. Don't do it. It doesn't mean the accessors cannot
>be used, and I hope that my example above demonstrates it.
>Conclusion: I still stand by my request that both versions of TTBRs/PAR
>are described without depending on the kernel configuration, because
>this has nothing to do with the kernel configuration.
Thanks for your reviews.
Yes, you are right. I have tested that the "mcrr/mrrc" instruction (__ACCESS_CP15_64)
works without LPAE on vexpress_a9.
Here is the code I tested on vexpress_a9 and vexpress_a15:
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -64,6 +64,56 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0 __ACCESS_CP15_64(0, c2)
+#define TTBR1 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
^ permalink raw reply related [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-16 14:24 ` Liuwenliang (Abbott Liu)
(?)
@ 2017-11-16 14:40 ` Marc Zyngier
-1 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-16 14:40 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Thu, Nov 16 2017 at 2:24:31 pm GMT, "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
> On 16/11/17 17:54 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>On Thu, Nov 16 2017 at 3:07:54 am GMT, "Liuwenliang (Abbott Liu)"
>> <liuwenliang@huawei.com> wrote:
>>>>On 15/11/17 13:16, Liuwenliang (Abbott Liu) wrote:
>>>>> On 09/11/17 18:36 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>>>> On Wed, Nov 15 2017 at 10:20:02 am GMT, "Liuwenliang (Abbott Liu)"
>>>>>> <liuwenliang@huawei.com> wrote:
>>>>>>> diff --git a/arch/arm/include/asm/cp15.h
>>>>>>> b/arch/arm/include/asm/cp15.h index dbdbce1..6db1f51 100644
>>>>>>> --- a/arch/arm/include/asm/cp15.h
>>>>>>> +++ b/arch/arm/include/asm/cp15.h
>>>>>>> @@ -64,6 +64,43 @@
>>>>>>> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : :
>>>>>>> "r" ((t)(v)))
>>>>>>> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>>>>>>>
>>>>>>> +#ifdef CONFIG_ARM_LPAE
>>>>>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>>>>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>>>>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>>>>>> +#else
>>>>>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>>>>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>>>>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>>>>>> +#endif
>>>>>> Again: there is no point in not having these register encodings
>>>>>> cohabiting. They are both perfectly defined in the architecture.
>>>>>> Just suffix one (or even both) with their respective size, making
>>>>>> it obvious which one you're talking about.
>>>>>
>>>>> I am sorry that I didn't explain why I need to define TTBR0/TTBR1/PAR
>>>>> in two different ways between CONFIG_ARM_LPAE and non-CONFIG_ARM_LPAE.
>>>>> The following description is the reason:
>>>>> Here is the description come from
>>>>> DDI0406C2c_arm_architecture_reference_manual.pdf:
>>>>[...]
>>>>
>>>>You're missing the point. TTBR0's existence as a 64bit CP15 register has
>>>>nothing to do with the kernel being compiled with LPAE or not. It has
>>>>everything to do with the HW supporting LPAE, and it is the kernel's job
>>>>to use the right accessor depending on how it is compiled. On a CPU
>>>>supporting LPAE, both TTBR0 accessors are valid. It is the kernel that
>>>>chooses to use one rather than the other.
>>>
>>> Thanks for your review. I don't think both TTBR0 accessors(64bit
>>> accessor and 32bit accessor) are valid on a CPU supporting LPAE which
>>> the LPAE is enabled. Here is the description come form
>>> DDI0406C2c_arm_architecture_reference_manual.pdf (=ARM® Architecture
>>> Reference Manual ARMv7-A and ARMv7-R edition) which you can get the
>>> document by google "ARM® Architecture Reference Manual ARMv7-A and
>>> ARMv7-R edition".
>
>>Trust me, from where I sit, I have a much better source than Google for
>>that document. Who would have thought?
>
>>Nothing in what you randomly quote invalidates what I've been saying. And
>>to show you what's wrong with your reasoning, let me describe a
>>scenario:
>
>>I have a non-LPAE kernel that runs on my system. It uses the 32bit
>>version of the TTBRs. It turns out that this kernel runs under a
>>hypervisor (KVM, Xen, or your toy of the day). The hypervisor
>>context-switches vcpus without even looking at the configuration
>>of that guest. It doesn't have to care. It just blindly uses the 64bit
>>version of the TTBRs.
>
>>The architecture *guarantees* that it works (it even works with a 32bit
>>guest under a 64bit hypervisor). In your world, this doesn't work. I
>>guess the architecture wins.
>
>>> So, I think if you access TTBR0/TTBR1 on CPU supporting LPAE, you must
>>> use "mcrr/mrrc" instruction (__ACCESS_CP15_64). If you access
>>> TTBR0/TTBR1 on CPU supporting LPAE by "mcr/mrc" instruction which is
>>> 32bit version (__ACCESS_CP15), even if the CPU doesn't report error,
>>> you also lose the high or low 32bit of the TTBR0/TTBR1.
>
>>It is not about "supporting LPAE". It is about using the accessor that
>>makes sense in a particular context. Yes, the architecture allows you to
>>do something stupid. Don't do it. It doesn't mean the accessors cannot
>>be used, and I hope that my example above demonstrates it.
>
>>Conclusion: I still stand by my request that both versions of TTBRs/PAR
>>are described without depending on the kernel configuration, because
>>this has nothing to do with the kernel configuration.
>
> Thanks for your reviews.
> Yes, you are right. I have tested that "mcrr/mrrc" instruction
> (__ACCESS_CP15_64) can work on non LPAE on vexpress_a9.
No, it doesn't. It cannot work, because Cortex-A9 predates the invention
of the 64bit accessor. I suspect that you are testing stuff in QEMU,
which is giving you a SW model that always supports LPAE. I suggest you
test this code on *real* HW, and not only on QEMU.
What I have said is:
- If the CPU supports LPAE, then both 32 and 64bit accessors work
- If the CPU doesn't support LPAE, then only the 32bit accessor works
- In both cases, that's a function of the CPU, and not of the kernel
configuration.
- Both accessors can be safely defined as long as we ensure that they
are used in the right context.
> Here is the code I tested on vexpress_a9 and vexpress_a15:
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -64,6 +64,56 @@
> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>
> +#define TTBR0 __ACCESS_CP15_64(0, c2)
> +#define TTBR1 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
You still need to add the 32bit accessors.
M.
--
Jazz is not dead, it just smell funny.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-16 14:40 ` Marc Zyngier
(?)
@ 2017-11-17 1:39 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-17 1:39 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 16/11/17 22:41 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>- If the CPU supports LPAE, then both 32 and 64bit accessors work
I don't see how the 32bit accessor can work on a CPU supporting LPAE; please show me your solution.
Thanks.
^ permalink raw reply [flat|nested] 253+ messages in thread
* 答复: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-17 1:39 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-17 1:39 UTC (permalink / raw)
To: linux-arm-kernel
On 16/11/17 22:41 Marc Zyngier [mailto:marc.zyngier at arm.com] wrote:
>- If the CPU supports LPAE, then both 32 and 64bit accessors work
I don't how 32bit accessor can work on CPU supporting LPAE, give me your solution.
Thanks.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-16 14:40 ` Marc Zyngier
(?)
@ 2017-11-17 7:18 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-17 7:18 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux, aryabinin, afzal.mohd.ma, f.fainelli, labbott,
kirill.shutemov, mhocko, cdall, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 16/11/17 22:41 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>No, it doesn't. It cannot work, because Cortex-A9 predates the invention
>of the 64bit accessor. I suspect that you are testing stuff in QEMU,
>which is giving you a SW model that always supports LPAE. I suggest you
>test this code on *real* HW, and not only on QEMU.
I am sorry; my test was at fault. I only defined TTBR0 as __ACCESS_CP15_64,
but I did not actually use that definition.
Now that I do use the __ACCESS_CP15_64 definition of TTBR0 on a CPU supporting
LPAE (vexpress_a9), I find it doesn't work and reports an undefined instruction
error when the "mrrc" instruction executes.
So, you are right that the 64-bit accessor of TTBR0 cannot work on LPAE.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-17 7:18 ` Liuwenliang (Abbott Liu)
(?)
@ 2017-11-17 7:35 ` Christoffer Dall
-1 siblings, 0 replies; 253+ messages in thread
From: Christoffer Dall @ 2017-11-17 7:35 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Marc Zyngier, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Fri, Nov 17, 2017 at 07:18:45AM +0000, Liuwenliang (Abbott Liu) wrote:
> On 16/11/17 22:41 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
> >No, it doesn't. It cannot work, because Cortex-A9 predates the invention
> >of the 64bit accessor. I suspect that you are testing stuff in QEMU,
> >which is giving you a SW model that always supports LPAE. I suggest you
> >test this code on *real* HW, and not only on QEMU.
>
> I am sorry; my test was at fault. I only defined TTBR0 as __ACCESS_CP15_64,
> but I did not actually use that definition.
>
> Now that I do use the __ACCESS_CP15_64 definition of TTBR0 on a CPU supporting
> LPAE (vexpress_a9)
What does a "CPU supporting LPAE (vexpress_a9)" mean? As Marc pointed
out, a Cortex-A9 doesn't support LPAE. If you configure your kernel
with LPAE, it's not going to work on a Cortex-A9.
> I find it doesn't work and reports an undefined instruction error
> when the "mrrc" instruction executes.
>
> So, you are right that the 64-bit accessor of TTBR0 cannot work on LPAE.
>
It's the other way around. It doesn't work WITHOUT LPAE, it only works
WITH LPAE.
The ARM ARM explains this quite clearly:
"Accessing TTBR0
To access TTBR0 in an implementation that does not include the Large
Physical Address Extension, or bits[31:0] of TTBR0 in an implementation
that includes the Large Physical Address Extension, software reads or
writes the CP15 registers with <opc1> set to 0, <CRn> set to c2, <CRm>
set to c0, and <opc2> set to 0. For example:
MRC p15, 0, <Rt>, c2, c0, 0
; Read 32-bit TTBR0 into Rt
MCR p15, 0, <Rt>, c2, c0, 0
; Write Rt to 32-bit TTBR0
In an implementation that includes the Large Physical Address Extension,
to access all 64 bits of TTBR0, software performs a 64-bit read or write
of the CP15 registers with <CRm> set to c2 and <opc1> set to 0. For
example:
MRRC p15, 0, <Rt>, <Rt2>, c2
; Read 64-bit TTBR0 into Rt (low word) and Rt2 (high word)
MCRR p15, 0, <Rt>, <Rt2>, c2
; Write Rt (low word) and Rt2 (high word) to 64-bit TTBR0
In these MRRC and MCRR instructions, Rt holds the least-significant word
of TTBR0, and Rt2 holds the most-significant word."
That is, if your processor (like the Cortex-A9) does NOT support LPAE,
all you have is the 32-bit accessors (MRC and MCR).
If your processor does support LPAE (like a Cortex-A15 for example),
then you have both the 32-bit accessors (MRC and MCR) and the 64-bit
accessors (MRRC, MCRR), and using the 32-bit accessor will simply access
the lower 32-bits of the 64-bit register.
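The two access widths Christoffer describes could be open-coded roughly as below. This is an illustrative sketch, not code from the thread: it only assembles with an ARM toolchain, and the 64-bit form is exactly the one that traps as UNDEFINED on a non-LPAE core such as the Cortex-A9 (`%Q0`/`%R0` are GCC's low-word/high-word operand modifiers for a 64-bit value):

```c
#include <stdint.h>

/* 32-bit accessor (MRC): available on every ARMv7 implementation.
 * On an LPAE core this reads the low 32 bits of the 64-bit TTBR0. */
static inline uint32_t read_ttbr0_32(void)
{
	uint32_t v;
	asm volatile("mrc p15, 0, %0, c2, c0, 0" : "=r" (v));
	return v;
}

/* 64-bit accessor (MRRC): LPAE implementations only; UNDEFINED elsewhere. */
static inline uint64_t read_ttbr0_64(void)
{
	uint64_t v;
	asm volatile("mrrc p15, 0, %Q0, %R0, c2" : "=r" (v));
	return v;
}
```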
Hope this helps,
-Christoffer
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-17 7:35 ` Christoffer Dall
(?)
@ 2017-11-18 10:40 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-18 10:40 UTC (permalink / raw)
To: Christoffer Dall
Cc: Marc Zyngier, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>If your processor does support LPAE (like a Cortex-A15 for example),
>then you have both the 32-bit accessors (MRC and MCR) and the 64-bit
>accessors (MRRC, MCRR), and using the 32-bit accessor will simply access
>the lower 32-bits of the 64-bit register.
>
>Hope this helps,
>-Christoffer
If you know that the upper 32 bits of a 64-bit CP15 register are not used by
your system, you can use the 32-bit accessor to read or write that register.
But if the upper 32 bits matter to your system, you cannot use the 32-bit
accessor.
The upper 32 bits of TTBR0/TTBR1/PAR are meaningful on a CPU supporting LPAE.
The following description from the ARM(r) Architecture Reference Manual,
ARMv7-A and ARMv7-R edition, explains why:
64-bit TTBR0 and TTBR1 format:
...
BADDR, bits[39:x]:
Translation table base address, bits[39:x]. Defining the translation table
base address width on page B4-1698 describes how x is defined.
The value of x determines the required alignment of the translation table,
which must be aligned to 2^x bytes.
Abbott Liu: Because BADDR on a CPU supporting LPAE may exceed the maximum
32-bit value, bits[39:32] may hold a valid value that the system needs.
64-bit PAR format
...
PA[39:12]
Physical Address. The physical address corresponding to the supplied virtual
address. This field returns address bits[39:12].
Abbott Liu: Because a physical address on a CPU supporting LPAE may exceed the
maximum 32-bit value, bits[39:32] may hold a valid value that the system needs.
Conclusion: don't use the 32-bit accessor to read or write TTBR0/TTBR1/PAR on
a CPU supporting LPAE; if you do, your system may misbehave.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-18 10:40 ` Liuwenliang (Abbott Liu)
(?)
@ 2017-11-18 13:48 ` Marc Zyngier
-1 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-18 13:48 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Christoffer Dall, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Sat, 18 Nov 2017 10:40:08 +0000
"Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
> >If your processor does support LPAE (like a Cortex-A15 for example),
> >then you have both the 32-bit accessors (MRC and MCR) and the 64-bit
> >accessors (MRRC, MCRR), and using the 32-bit accessor will simply access
> >the lower 32-bits of the 64-bit register.
> >
> >Hope this helps,
> >-Christoffer
>
> If you know that the upper 32 bits of a 64-bit CP15 register are not used by
> your system, you can use the 32-bit accessor to read or write that register.
> But if the upper 32 bits matter to your system, you cannot use the 32-bit
> accessor.
>
> The upper 32 bits of TTBR0/TTBR1/PAR are meaningful on a CPU supporting LPAE.
> The following description from the ARM(r) Architecture Reference Manual,
> ARMv7-A and ARMv7-R edition, explains why:
>
> 64-bit TTBR0 and TTBR1 format:
> ...
> BADDR, bits[39:x]:
> Translation table base address, bits[39:x]. Defining the translation table
> base address width on page B4-1698 describes how x is defined.
> The value of x determines the required alignment of the translation table,
> which must be aligned to 2^x bytes.
>
> Abbott Liu: Because BADDR on a CPU supporting LPAE may exceed the maximum
> 32-bit value, bits[39:32] may hold a valid value that the system needs.
>
> 64-bit PAR format
> ...
> PA[39:12]
> Physical Address. The physical address corresponding to the supplied virtual
> address. This field returns address bits[39:12].
>
> Abbott Liu: Because a physical address on a CPU supporting LPAE may exceed
> the maximum 32-bit value, bits[39:32] may hold a valid value that the system
> needs.
>
> Conclusion: don't use the 32-bit accessor to read or write TTBR0/TTBR1/PAR
> on a CPU supporting LPAE; if you do, your system may misbehave.
That's not really true. You can run a non-LPAE kernel that uses the
32bit accessors on a Cortex-A15 that supports LPAE. You're just limited
to 4GB of physical space. And you're pretty much guaranteed to have
some memory below 4GB (one way or another), or you'd have a slight
problem setting up your page tables.
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-18 13:48 ` Marc Zyngier
@ 2017-11-21 7:59 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-21 7:59 UTC (permalink / raw)
To: Marc Zyngier
Cc: Christoffer Dall, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>On Sat, 18 Nov 2017 10:40:08 +0000
>"Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>> >If your processor does support LPAE (like a Cortex-A15 for example),
>> >then you have both the 32-bit accessors (MRC and MCR) and the 64-bit
>> >accessors (MRRC, MCRR), and using the 32-bit accessor will simply access
>> >the lower 32-bits of the 64-bit register.
>> >
>> >Hope this helps,
>> >-Christoffer
>>
>> If you know the higher 32-bits of the 64-bits cp15's register is not useful for your system,
>> then you can use the 32-bit accessor to get or set the 64-bit cp15's register.
>> But if the higher 32-bits of the 64-bits cp15's register is useful for your system,
>> then you can't use the 32-bit accessor to get or set the 64-bit cp15's register.
>>
>> TTBR0/TTBR1/PAR's higher 32-bits is useful for CPU supporting LPAE.
>> The following description which comes from ARM(r) Architecture Reference
>> Manual ARMv7-A and ARMv7-R edition tell us the reason:
>>
>> 64-bit TTBR0 and TTBR1 format:
>> ...
>> BADDR, bits[39:x] :
>> Translation table base address, bits[39:x]. Defining the translation table base address width on
>> page B4-1698 describes how x is defined.
>> The value of x determines the required alignment of the translation table, which must be aligned to
>> 2^x bytes.
>>
>> Abbott Liu: Because BADDR on CPU supporting LPAE may be bigger than max value of 32-bit, so bits[39:32] may
>> be valid value which is useful for the system.
>>
>> 64-bit PAR format
>> ...
>> PA[39:12]
>> Physical Address. The physical address corresponding to the supplied virtual address. This field
>> returns address bits[39:12].
>>
>> Abbott Liu: Because Physical Address on CPU supporting LPAE may be bigger than max value of 32-bit,
>> so bits[39:32] may be valid value which is useful for the system.
>>
>> Conclusion: Don't use the 32-bit accessors to get or set TTBR0/TTBR1/PAR on a CPU supporting LPAE;
>> if you do, your system may malfunction.
>That's not really true. You can run a non-LPAE kernel that uses the
>32-bit accessors on a Cortex-A15 that supports LPAE. You're just limited
>to 4GB of physical space. And you're pretty much guaranteed to have
>some memory below 4GB (one way or another), or you'd have a slight
>problem setting up your page tables.
> M.
>--
>Without deviation from the norm, progress is not possible.
Thanks for your review.
Please don't ask people to limit themselves to 4GB of physical space on a
CPU supporting LPAE, and please don't ask them to guarantee that some
memory lies below 4GB.
Why do people select a CPU supporting LPAE (such as the Cortex-A15)?
Because they think 4GB of physical space is not enough for their system;
maybe they want to use 8GB or 16GB of DDR.
Then you would be telling them that they must guarantee some memory below 4GB,
only because you think code like this:
+#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR __ACCESS_CP15(c7, 0, c4, 0)
is better than code like this:
+#ifdef CONFIG_ARM_LPAE
+#define TTBR0 __ACCESS_CP15_64(0, c2)
+#define TTBR1 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#else
+#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR __ACCESS_CP15(c7, 0, c4, 0)
+#endif
So I think the #ifdef CONFIG_ARM_LPAE version is better, because it does not
require people to guarantee that there is memory below 4GB on a CPU
supporting LPAE.
If we instead require memory below 4GB on a CPU supporting LPAE, some other
code needs to be modified as well, and I think modifying other code for this
turns a simple problem into a more complex one.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: Reply: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-21 7:59 ` Liuwenliang (Abbott Liu)
@ 2017-11-21 9:40 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-11-21 9:40 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Marc Zyngier, tixy, mhocko, grygorii.strashko, catalin.marinas,
linux-mm, glider, afzal.mohd.ma, mingo, Christoffer Dall,
f.fainelli, mawilcox, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
opendmb, Heshaoliang, tglx, dvyukov, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
> >On Sat, 18 Nov 2017 10:40:08 +0000
> >"Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>
> >> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
> >> >If your processor does support LPAE (like a Cortex-A15 for example),
> >> >then you have both the 32-bit accessors (MRC and MCR) and the 64-bit
> >> >accessors (MRRC, MCRR), and using the 32-bit accessor will simply access
> >> >the lower 32-bits of the 64-bit register.
> >> >
> >> >Hope this helps,
> >> >-Christoffer
> >>
> >> If you know the higher 32-bits of the 64-bits cp15's register is not useful for your system,
> >> then you can use the 32-bit accessor to get or set the 64-bit cp15's register.
> >> But if the higher 32-bits of the 64-bits cp15's register is useful for your system,
> >> then you can't use the 32-bit accessor to get or set the 64-bit cp15's register.
> >>
> >> TTBR0/TTBR1/PAR's higher 32-bits is useful for CPU supporting LPAE.
> >> The following description which comes from ARM(r) Architecture Reference
> >> Manual ARMv7-A and ARMv7-R edition tell us the reason:
> >>
> >> 64-bit TTBR0 and TTBR1 format:
> >> ...
> >> BADDR, bits[39:x] :
> >> Translation table base address, bits[39:x]. Defining the translation table base address width on
> >> page B4-1698 describes how x is defined.
> >> The value of x determines the required alignment of the translation table, which must be aligned to
> >> 2^x bytes.
> >>
> >> Abbott Liu: Because BADDR on CPU supporting LPAE may be bigger than max value of 32-bit, so bits[39:32] may
> >> be valid value which is useful for the system.
> >>
> >> 64-bit PAR format
> >> ...
> >> PA[39:12]
> >> Physical Address. The physical address corresponding to the supplied virtual address. This field
> >> returns address bits[39:12].
> >>
> >> Abbott Liu: Because Physical Address on CPU supporting LPAE may be bigger than max value of 32-bit,
> >> so bits[39:32] may be valid value which is useful for the system.
> >>
> >> Conclusion: Don't use the 32-bit accessors to get or set TTBR0/TTBR1/PAR on a CPU supporting LPAE;
> >> if you do, your system may malfunction.
>
> >That's not really true. You can run a non-LPAE kernel that uses the
> >32-bit accessors on a Cortex-A15 that supports LPAE. You're just limited
> >to 4GB of physical space. And you're pretty much guaranteed to have
> >some memory below 4GB (one way or another), or you'd have a slight
> >problem setting up your page tables.
>
> > M.
> >--
> >Without deviation from the norm, progress is not possible.
>
> Thanks for your review.
> Please don't ask people to limit to 4GB of physical space on CPU
> supporting LPAE, please don't ask people to guaranteed to have some
> memory below 4GB on CPU supporting LPAE.
An LPAE-capable CPU must always have memory below 4GB physical, no ifs,
no buts.
If there's no memory below 4GB physical, then the CPU has no accessible
memory before the MMU is enabled. That means operating systems such as
Linux are completely unbootable.
There must _always_ be accessible memory below 4GB physical. This is
not negotiable, it's a fundamental requirement.
The location of physical memory has nothing to do with the accessors.
This point I'm making also has nothing to do with the accessors.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: Reply: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-21 7:59 ` Liuwenliang (Abbott Liu)
@ 2017-11-21 9:46 ` Marc Zyngier
-1 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-21 9:46 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Christoffer Dall, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 21/11/17 07:59, Liuwenliang (Abbott Liu) wrote:
> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> On Sat, 18 Nov 2017 10:40:08 +0000
>> "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>
>>> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>>>> If your processor does support LPAE (like a Cortex-A15 for example),
>>>> then you have both the 32-bit accessors (MRC and MCR) and the 64-bit
>>>> accessors (MRRC, MCRR), and using the 32-bit accessor will simply access
>>>> the lower 32-bits of the 64-bit register.
>>>>
>>>> Hope this helps,
>>>> -Christoffer
>>>
>>> If you know the higher 32-bits of the 64-bits cp15's register is not useful for your system,
>>> then you can use the 32-bit accessor to get or set the 64-bit cp15's register.
>>> But if the higher 32-bits of the 64-bits cp15's register is useful for your system,
>>> then you can't use the 32-bit accessor to get or set the 64-bit cp15's register.
>>>
>>> TTBR0/TTBR1/PAR's higher 32-bits is useful for CPU supporting LPAE.
>>> The following description which comes from ARM(r) Architecture Reference
>>> Manual ARMv7-A and ARMv7-R edition tell us the reason:
>>>
>>> 64-bit TTBR0 and TTBR1 format:
>>> ...
>>> BADDR, bits[39:x] :
>>> Translation table base address, bits[39:x]. Defining the translation table base address width on
>>> page B4-1698 describes how x is defined.
>>> The value of x determines the required alignment of the translation table, which must be aligned to
>>> 2^x bytes.
>>>
>>> Abbott Liu: Because BADDR on CPU supporting LPAE may be bigger than max value of 32-bit, so bits[39:32] may
>>> be valid value which is useful for the system.
>>>
>>> 64-bit PAR format
>>> ...
>>> PA[39:12]
>>> Physical Address. The physical address corresponding to the supplied virtual address. This field
>>> returns address bits[39:12].
>>>
>>> Abbott Liu: Because Physical Address on CPU supporting LPAE may be bigger than max value of 32-bit,
>>> so bits[39:32] may be valid value which is useful for the system.
>>>
>>> Conclusion: Don't use the 32-bit accessors to get or set TTBR0/TTBR1/PAR on a CPU supporting LPAE;
>>> if you do, your system may malfunction.
>
>> That's not really true. You can run a non-LPAE kernel that uses the
>> 32-bit accessors on a Cortex-A15 that supports LPAE. You're just limited
>> to 4GB of physical space. And you're pretty much guaranteed to have
>> some memory below 4GB (one way or another), or you'd have a slight
>> problem setting up your page tables.
>
>> M.
>> --
>> Without deviation from the norm, progress is not possible.
>
> Thanks for your review.
> Please don't ask people to limit to 4GB of physical space on CPU
> supporting LPAE, please don't ask people to guaranteed to have some
> memory below 4GB on CPU supporting LPAE.
Please tell me how you enable LPAE if you don't. I'm truly curious.
Because otherwise, you should really take a step back and seriously
reconsider what you're writing. Hint: where do you think the page tables
required to enable LPAE will be? How do you even *boot*?
> Why people select CPU supporting LPAE(just like cortex A15)?
> Because some of people think 4GB physical space is not enough for their
> system, maybe they want to use 8GB/16GB DDR space.
> Then you tell them that they must guaranteed to have some memory below 4GB,
> just only because you think the code as follow:
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>
> is better than the code like this:
>
> +#ifdef CONFIG_ARM_LPAE
> +#define TTBR0 __ACCESS_CP15_64(0, c2)
> +#define TTBR1 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
>
>
> So,I think the following code:
> +#ifdef CONFIG_ARM_LPAE
> +#define TTBR0 __ACCESS_CP15_64(0, c2)
> +#define TTBR1 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
>
> is better because it's not necessary to ask people to guaranteed to
> have some memory below 4GB on CPU supporting LPAE.
NAK.
> If we want to ask people to guaranteed to have some memory below 4GB
> on CPU supporting LPAE, there need to modify some other code.
> I think it makes the simple problem more complex to modify some other code for this.
At this stage, you've proven that you don't understand the problem at hand.
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: 答复: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-21 9:46 ` Marc Zyngier
0 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-21 9:46 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Christoffer Dall, linux, aryabinin, afzal.mohd.ma, f.fainelli,
labbott, kirill.shutemov, mhocko, catalin.marinas, akpm,
mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On 21/11/17 07:59, Liuwenliang (Abbott Liu) wrote:
> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> On Sat, 18 Nov 2017 10:40:08 +0000
>> "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>
>>> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>>>> If your processor does support LPAE (like a Cortex-A15 for example),
>>>> then you have both the 32-bit accessors (MRC and MCR) and the 64-bit
>>>> accessors (MRRC, MCRR), and using the 32-bit accessor will simply access
>>>> the lower 32-bits of the 64-bit register.
>>>>
>>>> Hope this helps,
>>>> -Christoffer
>>>
>>> If you know the higher 32-bits of the 64-bits cp15's register is not useful for your system,
>>> then you can use the 32-bit accessor to get or set the 64-bit cp15's register.
>>> But if the higher 32-bits of the 64-bits cp15's register is useful for your system,
>>> then you can't use the 32-bit accessor to get or set the 64-bit cp15's register.
>>>
>>> TTBR0/TTBR1/PAR's higher 32-bits is useful for CPU supporting LPAE.
>>> The following description which comes from ARM(r) Architecture Reference
>>> Manual ARMv7-A and ARMv7-R edition tell us the reason:
>>>
>>> 64-bit TTBR0 and TTBR1 format:
>>> ...
>>> BADDR, bits[39:x] :
>>> Translation table base address, bits[39:x]. Defining the translation table base address width on
>>> page B4-1698 describes how x is defined.
>>> The value of x determines the required alignment of the translation table, which must be aligned to
>>> 2x bytes.
>>>
>>> Abbott Liu: Because BADDR on CPU supporting LPAE may be bigger than max value of 32-bit, so bits[39:32] may
>>> be valid value which is useful for the system.
>>>
>>> 64-bit PAR format
>>> ...
>>> PA[39:12]
>>> Physical Address. The physical address corresponding to the supplied virtual address. This field
>>> returns address bits[39:12].
>>>
>>> Abbott Liu: Because Physical Address on CPU supporting LPAE may be bigger than max value of 32-bit,
>>> so bits[39:32] may be valid value which is useful for the system.
>>>
>>> Conclusion: Don't use 32-bit accessor to get or set TTBR0/TTBR1/PAR on CPU supporting LPAE,
>>> if you do that, your system may run error.
>
>> That's not really true. You can run an non-LPAE kernel that uses the
>> 32bit accessors an a Cortex-A15 that supports LPAE. You're just limited
>> to 4GB of physical space. And you're pretty much guaranteed to have
>> some memory below 4GB (one way or another), or you'd have a slight
>> problem setting up your page tables.
>
>> M.
>> --
>> Without deviation from the norm, progress is not possible.
>
> Thanks for your review.
> Please don't ask people to limit to 4GB of physical space on CPU
> supporting LPAE, please don't ask people to guaranteed to have some
> memory below 4GB on CPU supporting LPAE.
Please tell me how you enable LPAE if you don't. I've truly curious.
Because otherwise, you should really take a step back and seriously
reconsider what you're writing. Hint: where do you think the page tables
required to enable LPAE will be? How do you even *boot*?
> Why people select CPU supporting LPAE(just like cortex A15)?
> Because some of people think 4GB physical space is not enough for their
> system, maybe they want to use 8GB/16GB DDR space.
> Then you tell them that they must guaranteed to have some memory below 4GB,
> just only because you think the code as follow:
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>
> is better than the code like this:
>
> +#ifdef CONFIG_ARM_LPAE
> +#define TTBR0 __ACCESS_CP15_64(0, c2)
> +#define TTBR1 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
>
>
> So,I think the following code:
> +#ifdef CONFIG_ARM_LPAE
> +#define TTBR0 __ACCESS_CP15_64(0, c2)
> +#define TTBR1 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
>
> is better because it's not necessary to ask people to guaranteed to
> have some memory below 4GB on CPU supporting LPAE.
NAK.
> If we want to ask people to guaranteed to have some memory below 4GB
> on CPU supporting LPAE, there need to modify some other code.
> I think it makes the simple problem more complex to modify some other code for this.
At this stage, you've proven that you don't understand the problem at hand.
M.
--
Jazz is not dead. It just smells funny...
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: Reply: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-21 7:59 ` Liuwenliang (Abbott Liu)
(?)
@ 2017-11-21 12:29 ` Mark Rutland
-1 siblings, 0 replies; 253+ messages in thread
From: Mark Rutland @ 2017-11-21 12:29 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Marc Zyngier, tixy, mhocko, grygorii.strashko, catalin.marinas,
linux-mm, glider, afzal.mohd.ma, mingo, Christoffer Dall,
f.fainelli, mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
opendmb, Heshaoliang, tglx, dvyukov, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
> >On Sat, 18 Nov 2017 10:40:08 +0000
> >"Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
> >> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
> Please don't ask people to limit themselves to 4GB of physical space on
> a CPU supporting LPAE; please don't ask people to guarantee some memory
> below 4GB on a CPU supporting LPAE.
I don't think that Marc is suggesting that you'd always use the 32-bit
accessors on an LPAE system, just that all the definitions should exist
regardless of configuration.
So rather than this:
> +#ifdef CONFIG_ARM_LPAE
> +#define TTBR0 __ACCESS_CP15_64(0, c2)
> +#define TTBR1 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
> +#else
> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
> +#endif
... you'd have the following in cp15.h:
#define TTBR0_64 __ACCESS_CP15_64(0, c2)
#define TTBR1_64 __ACCESS_CP15_64(1, c2)
#define PAR_64 __ACCESS_CP15_64(0, c7)
#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
... and elsewhere, where it matters, we choose which to use depending on
the kernel configuration, e.g.
void set_ttbr0(u64 val)
{
if (IS_ENABLED(CONFIG_ARM_LPAE))
write_sysreg(val, TTBR0_64);
else
write_sysreg(val, TTBR0_32);
}
Thanks,
Mark.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-21 12:29 ` Mark Rutland
(?)
@ 2017-11-22 12:56 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-22 12:56 UTC (permalink / raw)
To: Mark Rutland
Cc: Marc Zyngier, tixy, mhocko, grygorii.strashko, catalin.marinas,
linux-mm, glider, afzal.mohd.ma, mingo, Christoffer Dall,
f.fainelli, mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
opendmb, Heshaoliang, tglx, dvyukov, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Nov 22, 2017 20:30 Mark Rutland [mailto:mark.rutland@arm.com] wrote:
>On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
>> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> >On Sat, 18 Nov 2017 10:40:08 +0000
>> >"Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>> >> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>> Please don't ask people to limit themselves to 4GB of physical space on
>> a CPU supporting LPAE; please don't ask people to guarantee some memory
>> below 4GB on a CPU supporting LPAE.
>I don't think that Marc is suggesting that you'd always use the 32-bit
>accessors on an LPAE system, just that all the definitions should exist
>regardless of configuration.
>So rather than this:
>> +#ifdef CONFIG_ARM_LPAE
>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>> +#define PAR __ACCESS_CP15_64(0, c7)
>> +#else
>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>> +#endif
>... you'd have the following in cp15.h:
>#define TTBR0_64 __ACCESS_CP15_64(0, c2)
>#define TTBR1_64 __ACCESS_CP15_64(1, c2)
>#define PAR_64 __ACCESS_CP15_64(0, c7)
>#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
>#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
>#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
>... and elsewhere, where it matters, we choose which to use depending on
>the kernel configuration, e.g.
>void set_ttbr0(u64 val)
>{
> if (IS_ENABLED(CONFIG_ARM_LPAE))
> write_sysreg(val, TTBR0_64);
> else
> write_sysreg(val, TTBR0_32);
>}
>Thanks,
>Mark.
Thanks for your solution.
I didn't know there was an IS_ENABLED macro I could use, so I couldn't write a function
like:
void set_ttbr0(u64 val)
{
if (IS_ENABLED(CONFIG_ARM_LPAE))
write_sysreg(val, TTBR0_64);
else
write_sysreg(val, TTBR0_32);
}
Here is the code I tested on vexpress_a9 and vexpress_a15:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..5eb0185 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -2,6 +2,7 @@
#define __ASM_ARM_CP15_H
#include <asm/barrier.h>
+#include <linux/stringify.h>
/*
* CR1 bits (CP#15 CR1)
@@ -64,8 +65,93 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
+
+static inline void set_ttbr0(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR0_64);
+ else
+ write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR0_64);
+ else
+ return (u64)read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR1_64);
+ else
+ write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR1_64);
+ else
+ return (u64)read_sysreg(TTBR1_32);
+}
+
static inline unsigned long get_cr(void)
{
unsigned long val;
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index c478281..d1302ae 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -31,8 +31,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
- *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
- *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0_64);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1_64);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
@@ -60,8 +60,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
- write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
- write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0_64);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1_64);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 049ee0a..87c86c7 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -203,16 +203,16 @@ void __init kasan_init(void)
u64 orig_ttbr0;
int i;
- orig_ttbr0 = cpu_get_ttbr(0);
+ orig_ttbr0 = get_ttbr0();
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0(__pa(tmp_page_table));
#else
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0((u64)__pa(tmp_page_table));
#endif
flush_cache_all();
local_flush_bp_all();
@@ -257,7 +257,7 @@ void __init kasan_init(void)
/*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_set_ttbr0(orig_ttbr0);
+ set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
local_flush_tlb_all();
^ permalink raw reply related [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-22 12:56 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-22 12:56 UTC (permalink / raw)
To: Mark Rutland
Cc: Marc Zyngier, tixy, mhocko, grygorii.strashko, catalin.marinas,
linux-mm, glider, afzal.mohd.ma, mingo, Christoffer Dall,
f.fainelli, mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
opendmb, Heshaoliang, tglx, dvyukov, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Nov 22, 2017 20:30 Mark Rutland [mailto:mark.rutland@arm.com] wrote:
>On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
>> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> >On Sat, 18 Nov 2017 10:40:08 +0000
>> >"Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>> >> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>> Please don't ask people to limit to 4GB of physical space on CPU
>> supporting LPAE, please don't ask people to guaranteed to have some
>> memory below 4GB on CPU supporting LPAE.
>I don't think that Marc is suggesting that you'd always use the 32-bit
>accessors on an LPAE system, just that all the definitions should exist
>regardless of configuration.
>So rather than this:
>> +#ifdef CONFIG_ARM_LPAE
>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>> +#define PAR __ACCESS_CP15_64(0, c7)
>> +#else
>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>> +#endif
>... you'd have the following in cp15.h:
>#define TTBR0_64 __ACCESS_CP15_64(0, c2)
>#define TTBR1_64 __ACCESS_CP15_64(1, c2)
>#define PAR_64 __ACCESS_CP15_64(0, c7)
>#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
>#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
>#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
>... and elsewhere, where it matters, we choose which to use depending on
>the kernel configuration, e.g.
>void set_ttbr0(u64 val)
>{
> if (IS_ENABLED(CONFIG_ARM_LPAE))
> write_sysreg(val, TTBR0_64);
> else
> write_sysreg(val, TTBR0_32);
>}
>Thanks,
>Mark.
Thanks for your solution.
I didn't know there was a IS_ENABLED macro that I can use, so I can't write a function
like:
void set_ttbr0(u64 val)
{
if (IS_ENABLED(CONFIG_ARM_LPAE))
write_sysreg(val, TTBR0_64);
else
write_sysreg(val, TTBR0_32);
}
Here is the code I tested on vexpress_a9 and vexpress_a15:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..5eb0185 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -2,6 +2,7 @@
#define __ASM_ARM_CP15_H
#include <asm/barrier.h>
+#include <linux/stringify.h>
/*
* CR1 bits (CP#15 CR1)
@@ -64,8 +65,93 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
+
+static inline void set_ttbr0(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR0_64);
+ else
+ write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR0_64);
+ else
+ return (u64)read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR1_64);
+ else
+ write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR1_64);
+ else
+ return (u64)read_sysreg(TTBR1_32);
+}
+
static inline unsigned long get_cr(void)
{
unsigned long val;
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index c478281..d1302ae 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -31,8 +31,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
- *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
- *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0_64);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1_64);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
@@ -60,8 +60,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
- write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
- write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0_64);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1_64);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 049ee0a..87c86c7 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -203,16 +203,16 @@ void __init kasan_init(void)
u64 orig_ttbr0;
int i;
- orig_ttbr0 = cpu_get_ttbr(0);
+ orig_ttbr0 = get_ttbr0();
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0(__pa(tmp_page_table));
#else
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0((u64)__pa(tmp_page_table));
#endif
flush_cache_all();
local_flush_bp_all();
@@ -257,7 +257,7 @@ void __init kasan_init(void)
/*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_set_ttbr0(orig_ttbr0);
+ set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
local_flush_tlb_all();
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
* [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-22 12:56 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-22 12:56 UTC (permalink / raw)
To: linux-arm-kernel
On Nov 22, 2017 20:30 Mark Rutland [mailto:mark.rutland@arm.com] wrote:
>On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
>> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>> >On Sat, 18 Nov 2017 10:40:08 +0000
>> >"Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>> >> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>> Please don't ask people to limit themselves to 4GB of physical space
>> on CPUs supporting LPAE, and please don't ask people to guarantee
>> some memory below 4GB on CPUs supporting LPAE.
>I don't think that Marc is suggesting that you'd always use the 32-bit
>accessors on an LPAE system, just that all the definitions should exist
>regardless of configuration.
>So rather than this:
>> +#ifdef CONFIG_ARM_LPAE
>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>> +#define PAR __ACCESS_CP15_64(0, c7)
>> +#else
>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>> +#endif
>... you'd have the following in cp15.h:
>#define TTBR0_64 __ACCESS_CP15_64(0, c2)
>#define TTBR1_64 __ACCESS_CP15_64(1, c2)
>#define PAR_64 __ACCESS_CP15_64(0, c7)
>#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
>#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
>#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
>... and elsewhere, where it matters, we choose which to use depending on
>the kernel configuration, e.g.
>void set_ttbr0(u64 val)
>{
> if (IS_ENABLED(CONFIG_ARM_LPAE))
> write_sysreg(val, TTBR0_64);
> else
> write_sysreg(val, TTBR0_32);
>}
>Thanks,
>Mark.
Thanks for your solution.
I didn't know there was an IS_ENABLED macro I could use, so I hadn't
written a function like:
void set_ttbr0(u64 val)
{
if (IS_ENABLED(CONFIG_ARM_LPAE))
write_sysreg(val, TTBR0_64);
else
write_sysreg(val, TTBR0_32);
}
Here is the code I tested on vexpress_a9 and vexpress_a15:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..5eb0185 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -2,6 +2,7 @@
#define __ASM_ARM_CP15_H
#include <asm/barrier.h>
+#include <linux/stringify.h>
/*
* CR1 bits (CP#15 CR1)
@@ -64,8 +65,93 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
+
+static inline void set_ttbr0(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR0_64);
+ else
+ write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR0_64);
+ else
+ return (u64)read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR1_64);
+ else
+ write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR1_64);
+ else
+ return (u64)read_sysreg(TTBR1_32);
+}
+
static inline unsigned long get_cr(void)
{
unsigned long val;
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index c478281..d1302ae 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -31,8 +31,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
- *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
- *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0_64);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1_64);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
@@ -60,8 +60,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
- write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
- write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0_64);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1_64);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 049ee0a..87c86c7 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -203,16 +203,16 @@ void __init kasan_init(void)
u64 orig_ttbr0;
int i;
- orig_ttbr0 = cpu_get_ttbr(0);
+ orig_ttbr0 = get_ttbr0();
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0(__pa(tmp_page_table));
#else
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0((u64)__pa(tmp_page_table));
#endif
flush_cache_all();
local_flush_bp_all();
@@ -257,7 +257,7 @@ void __init kasan_init(void)
/*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_set_ttbr0(orig_ttbr0);
+ set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
local_flush_tlb_all();
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-22 12:56 ` Liuwenliang (Abbott Liu)
@ 2017-11-22 13:06 ` Marc Zyngier
-1 siblings, 0 replies; 253+ messages in thread
From: Marc Zyngier @ 2017-11-22 13:06 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu), Mark Rutland
Cc: tixy, mhocko, grygorii.strashko, catalin.marinas, linux-mm,
glider, afzal.mohd.ma, mingo, Christoffer Dall, f.fainelli,
mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel, aryabinin,
labbott, vladimir.murzin, keescook, arnd, Zengweilin, opendmb,
Heshaoliang, tglx, dvyukov, ard.biesheuvel, linux-kernel,
Jiazhenghua, akpm, robin.murphy, thgarnie, kirill.shutemov
On 22/11/17 12:56, Liuwenliang (Abbott Liu) wrote:
> On Nov 22, 2017 20:30 Mark Rutland [mailto:mark.rutland@arm.com] wrote:
>> On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
>>> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>> On Sat, 18 Nov 2017 10:40:08 +0000
>>>> "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>>> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>
>>> Please don't ask people to limit themselves to 4GB of physical space
>>> on CPUs supporting LPAE, and please don't ask people to guarantee
>>> some memory below 4GB on CPUs supporting LPAE.
>
>> I don't think that Marc is suggesting that you'd always use the 32-bit
>> accessors on an LPAE system, just that all the definitions should exist
>> regardless of configuration.
>
>> So rather than this:
>
>>> +#ifdef CONFIG_ARM_LPAE
>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>> +#else
>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>> +#endif
>
>> ... you'd have the following in cp15.h:
>
>> #define TTBR0_64 __ACCESS_CP15_64(0, c2)
>> #define TTBR1_64 __ACCESS_CP15_64(1, c2)
>> #define PAR_64 __ACCESS_CP15_64(0, c7)
>
>> #define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
>> #define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
>> #define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
>
>> ... and elsewhere, where it matters, we choose which to use depending on
>> the kernel configuration, e.g.
>
>> void set_ttbr0(u64 val)
>> {
>> if (IS_ENABLED(CONFIG_ARM_LPAE))
>> write_sysreg(val, TTBR0_64);
>> else
>> write_sysreg(val, TTBR0_32);
>> }
>
>> Thanks,
>> Mark.
>
> Thanks for your solution.
> I didn't know there was a IS_ENABLED macro that I can use, so I can't write a function
> like:
> void set_ttbr0(u64 val)
> {
> if (IS_ENABLED(CONFIG_ARM_LPAE))
> write_sysreg(val, TTBR0_64);
> else
> write_sysreg(val, TTBR0_32);
> }
>
>
> Here is the code I tested on vexpress_a9 and vexpress_a15:
> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> index dbdbce1..5eb0185 100644
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -2,6 +2,7 @@
> #define __ASM_ARM_CP15_H
>
> #include <asm/barrier.h>
> +#include <linux/stringify.h>
>
> /*
> * CR1 bits (CP#15 CR1)
> @@ -64,8 +65,93 @@
> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>
> +#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
> +#define TTBR0_64 __ACCESS_CP15_64(0, c2)
> +#define TTBR1_64 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
Please define both PAR accessors. Yes, I know the 32bit version is not
used yet, but it doesn't hurt to make it visible.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-22 13:06 ` Marc Zyngier
@ 2017-11-23 1:54 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-23 1:54 UTC (permalink / raw)
To: Marc Zyngier, Mark Rutland
Cc: tixy, mhocko, grygorii.strashko, catalin.marinas, linux-mm,
glider, afzal.mohd.ma, mingo, Christoffer Dall, f.fainelli,
mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel, aryabinin,
labbott, vladimir.murzin, keescook, arnd, Zengweilin, opendmb,
Heshaoliang, tglx, dvyukov, ard.biesheuvel, linux-kernel,
Jiazhenghua, akpm, robin.murphy, thgarnie, kirill.shutemov
On Nov 22, 2017 13:06 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>Please define both PAR accessors. Yes, I know the 32bit version is not
>used yet, but it doesn't hurt to make it visible.
Thanks for your review.
I'm going to change it in the next version.
Here is the code I tested on vexpress_a9 and vexpress_a15:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..b8353b1 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -2,6 +2,7 @@
#define __ASM_ARM_CP15_H
#include <asm/barrier.h>
+#include <linux/stringify.h>
/*
* CR1 bits (CP#15 CR1)
@@ -64,8 +65,109 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR_64 __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
+static inline void set_par(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, PAR_64);
+ else
+ write_sysreg(val, PAR_32);
+}
+
+static inline u64 get_par(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(PAR_64);
+ else
+ return (u64)read_sysreg(PAR_32);
+}
+
+static inline void set_ttbr0(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR0_64);
+ else
+ write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR0_64);
+ else
+ return (u64)read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR1_64);
+ else
+ write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR1_64);
+ else
+ return (u64)read_sysreg(TTBR1_32);
+}
+
static inline unsigned long get_cr(void)
{
unsigned long val;
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index c478281..d365e3c 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -31,8 +31,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
- *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
- *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0_64);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1_64);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
@@ -41,7 +41,7 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c5_AIFSR] = read_sysreg(AIFSR);
ctxt->cp15[c6_DFAR] = read_sysreg(DFAR);
ctxt->cp15[c6_IFAR] = read_sysreg(IFAR);
- *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR);
+ *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR_64);
ctxt->cp15[c10_PRRR] = read_sysreg(PRRR);
ctxt->cp15[c10_NMRR] = read_sysreg(NMRR);
ctxt->cp15[c10_AMAIR0] = read_sysreg(AMAIR0);
@@ -60,8 +60,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
- write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
- write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0_64);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1_64);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
@@ -70,7 +70,7 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c5_AIFSR], AIFSR);
write_sysreg(ctxt->cp15[c6_DFAR], DFAR);
write_sysreg(ctxt->cp15[c6_IFAR], IFAR);
- write_sysreg(*cp15_64(ctxt, c7_PAR), PAR);
+ write_sysreg(*cp15_64(ctxt, c7_PAR), PAR_64);
write_sysreg(ctxt->cp15[c10_PRRR], PRRR);
write_sysreg(ctxt->cp15[c10_NMRR], NMRR);
write_sysreg(ctxt->cp15[c10_AMAIR0], AMAIR0);
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index ebd2dd4..4879588 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -133,12 +133,12 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
if (!(hsr & HSR_DABT_S1PTW) && (hsr & HSR_FSC_TYPE) == FSC_PERM) {
u64 par, tmp;
- par = read_sysreg(PAR);
+ par = read_sysreg(PAR_64);
write_sysreg(far, ATS1CPR);
isb();
- tmp = read_sysreg(PAR);
- write_sysreg(par, PAR);
+ tmp = read_sysreg(PAR_64);
+ write_sysreg(par, PAR_64);
if (unlikely(tmp & 1))
return false; /* Translation failed, back to guest */
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 049ee0a..87c86c7 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -203,16 +203,16 @@ void __init kasan_init(void)
u64 orig_ttbr0;
int i;
- orig_ttbr0 = cpu_get_ttbr(0);
+ orig_ttbr0 = get_ttbr0();
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0(__pa(tmp_page_table));
#else
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0((u64)__pa(tmp_page_table));
#endif
flush_cache_all();
local_flush_bp_all();
@@ -257,7 +257,7 @@ void __init kasan_init(void)
/*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_set_ttbr0(orig_ttbr0);
+ set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
local_flush_tlb_all();
-----Original Message-----
From: Marc Zyngier [mailto:marc.zyngier@arm.com]
Sent: 22 November 2017 21:06
To: Liuwenliang (Abbott Liu); Mark Rutland
Cc: tixy@linaro.org; mhocko@suse.com; grygorii.strashko@linaro.org; catalin.marinas@arm.com; linux-mm@kvack.org; glider@google.com; afzal.mohd.ma@gmail.com; mingo@kernel.org; Christoffer Dall; f.fainelli@gmail.com; mawilcox@microsoft.com; linux@armlinux.org.uk; kasan-dev@googlegroups.com; Dailei; linux-arm-kernel@lists.infradead.org; aryabinin@virtuozzo.com; labbott@redhat.com; vladimir.murzin@arm.com; keescook@chromium.org; arnd@arndb.de; Zengweilin; opendmb@gmail.com; Heshaoliang; tglx@linutronix.de; dvyukov@google.com; ard.biesheuvel@linaro.org; linux-kernel@vger.kernel.org; Jiazhenghua; akpm@linux-foundation.org; robin.murphy@arm.com; thgarnie@google.com; kirill.shutemov@linux.intel.com
Subject: Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
On 22/11/17 12:56, Liuwenliang (Abbott Liu) wrote:
> On Nov 22, 2017 20:30 Mark Rutland [mailto:mark.rutland@arm.com] wrote:
>> On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
>>> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>> On Sat, 18 Nov 2017 10:40:08 +0000
>>>> "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>>> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>
>>> Please don't ask people to limit themselves to 4GB of physical space
>>> on CPUs supporting LPAE, and please don't ask people to guarantee
>>> that some memory is present below 4GB on CPUs supporting LPAE.
>
>> I don't think that Marc is suggesting that you'd always use the 32-bit
>> accessors on an LPAE system, just that all the definitions should exist
>> regardless of configuration.
>
>> So rather than this:
>
>>> +#ifdef CONFIG_ARM_LPAE
>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>> +#else
>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>> +#endif
>
>> ... you'd have the following in cp15.h:
>
>> #define TTBR0_64 __ACCESS_CP15_64(0, c2)
>> #define TTBR1_64 __ACCESS_CP15_64(1, c2)
>> #define PAR_64 __ACCESS_CP15_64(0, c7)
>
>> #define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
>> #define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
>> #define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
>
>> ... and elsewhere, where it matters, we choose which to use depending on
>> the kernel configuration, e.g.
>
>> void set_ttbr0(u64 val)
>> {
>> if (IS_ENABLED(CONFIG_ARM_LPAE))
>> write_sysreg(val, TTBR0_64);
>> else
>> write_sysreg(val, TTBR0_32);
>> }
>
>> Thanks,
>> Mark.
>
> Thanks for your solution.
> I didn't know there was an IS_ENABLED macro I could use, so I couldn't write a function
> like:
> void set_ttbr0(u64 val)
> {
> if (IS_ENABLED(CONFIG_ARM_LPAE))
> write_sysreg(val, TTBR0_64);
> else
> write_sysreg(val, TTBR0_32);
> }
>
>
> Here is the code I tested on vexpress_a9 and vexpress_a15:
> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> index dbdbce1..5eb0185 100644
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -2,6 +2,7 @@
> #define __ASM_ARM_CP15_H
>
> #include <asm/barrier.h>
> +#include <linux/stringify.h>
>
> /*
> * CR1 bits (CP#15 CR1)
> @@ -64,8 +65,93 @@
> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>
> +#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
> +#define TTBR0_64 __ACCESS_CP15_64(0, c2)
> +#define TTBR1_64 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
Please define both PAR accessors. Yes, I know the 32bit version is not
used yet, but it doesn't hurt to make it visible.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply related [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-23 1:54 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-23 1:54 UTC (permalink / raw)
To: Marc Zyngier, Mark Rutland
Cc: tixy, mhocko, grygorii.strashko, catalin.marinas, linux-mm,
glider, afzal.mohd.ma, mingo, Christoffer Dall, f.fainelli,
mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel, aryabinin,
labbott, vladimir.murzin, keescook, arnd, Zengweilin, opendmb,
Heshaoliang, tglx, dvyukov, ard.biesheuvel, linux-kernel,
Jiazhenghua, akpm, robin.murphy, thgarnie, kirill.shutemov
On Nov 23, 2017 20:30 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>Please define both PAR accessors. Yes, I know the 32bit version is not
>used yet, but it doesn't hurt to make it visible.
Thanks for your review.
I will change it in the next version.
Here is the code I tested on vexpress_a9 and vexpress_a15:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..b8353b1 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -2,6 +2,7 @@
#define __ASM_ARM_CP15_H
#include <asm/barrier.h>
+#include <linux/stringify.h>
/*
* CR1 bits (CP#15 CR1)
@@ -64,8 +65,109 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR_64 __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
+static inline void set_par(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, PAR_64);
+ else
+ write_sysreg(val, PAR_32);
+}
+
+static inline u64 get_par(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(PAR_64);
+ else
+ return (u64)read_sysreg(PAR_32);
+}
+
+static inline void set_ttbr0(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR0_64);
+ else
+ write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR0_64);
+ else
+ return (u64)read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR1_64);
+ else
+ write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR1_64);
+ else
+ return (u64)read_sysreg(TTBR1_32);
+}
+
static inline unsigned long get_cr(void)
{
unsigned long val;
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index c478281..d365e3c 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -31,8 +31,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
- *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
- *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0_64);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1_64);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
@@ -41,7 +41,7 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c5_AIFSR] = read_sysreg(AIFSR);
ctxt->cp15[c6_DFAR] = read_sysreg(DFAR);
ctxt->cp15[c6_IFAR] = read_sysreg(IFAR);
- *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR);
+ *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR_64);
ctxt->cp15[c10_PRRR] = read_sysreg(PRRR);
ctxt->cp15[c10_NMRR] = read_sysreg(NMRR);
ctxt->cp15[c10_AMAIR0] = read_sysreg(AMAIR0);
@@ -60,8 +60,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
- write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
- write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0_64);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1_64);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
@@ -70,7 +70,7 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c5_AIFSR], AIFSR);
write_sysreg(ctxt->cp15[c6_DFAR], DFAR);
write_sysreg(ctxt->cp15[c6_IFAR], IFAR);
- write_sysreg(*cp15_64(ctxt, c7_PAR), PAR);
+ write_sysreg(*cp15_64(ctxt, c7_PAR), PAR_64);
write_sysreg(ctxt->cp15[c10_PRRR], PRRR);
write_sysreg(ctxt->cp15[c10_NMRR], NMRR);
write_sysreg(ctxt->cp15[c10_AMAIR0], AMAIR0);
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index ebd2dd4..4879588 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -133,12 +133,12 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
if (!(hsr & HSR_DABT_S1PTW) && (hsr & HSR_FSC_TYPE) == FSC_PERM) {
u64 par, tmp;
- par = read_sysreg(PAR);
+ par = read_sysreg(PAR_64);
write_sysreg(far, ATS1CPR);
isb();
- tmp = read_sysreg(PAR);
- write_sysreg(par, PAR);
+ tmp = read_sysreg(PAR_64);
+ write_sysreg(par, PAR_64);
if (unlikely(tmp & 1))
return false; /* Translation failed, back to guest */
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 049ee0a..87c86c7 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -203,16 +203,16 @@ void __init kasan_init(void)
u64 orig_ttbr0;
int i;
- orig_ttbr0 = cpu_get_ttbr(0);
+ orig_ttbr0 = get_ttbr0();
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0(__pa(tmp_page_table));
#else
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0((u64)__pa(tmp_page_table));
#endif
flush_cache_all();
local_flush_bp_all();
@@ -257,7 +257,7 @@ void __init kasan_init(void)
/*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_set_ttbr0(orig_ttbr0);
+ set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
local_flush_tlb_all();
-----Original Message-----
From: Marc Zyngier [mailto:marc.zyngier@arm.com]
Sent: 22 November 2017 21:06
To: Liuwenliang (Abbott Liu); Mark Rutland
Cc: tixy@linaro.org; mhocko@suse.com; grygorii.strashko@linaro.org; catalin.marinas@arm.com; linux-mm@kvack.org; glider@google.com; afzal.mohd.ma@gmail.com; mingo@kernel.org; Christoffer Dall; f.fainelli@gmail.com; mawilcox@microsoft.com; linux@armlinux.org.uk; kasan-dev@googlegroups.com; Dailei; linux-arm-kernel@lists.infradead.org; aryabinin@virtuozzo.com; labbott@redhat.com; vladimir.murzin@arm.com; keescook@chromium.org; arnd@arndb.de; Zengweilin; opendmb@gmail.com; Heshaoliang; tglx@linutronix.de; dvyukov@google.com; ard.biesheuvel@linaro.org; linux-kernel@vger.kernel.org; Jiazhenghua; akpm@linux-foundation.org; robin.murphy@arm.com; thgarnie@google.com; kirill.shutemov@linux.intel.com
Subject: Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
On 22/11/17 12:56, Liuwenliang (Abbott Liu) wrote:
> On Nov 22, 2017 20:30 Mark Rutland [mailto:mark.rutland@arm.com] wrote:
>> On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
>>> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
>>>> On Sat, 18 Nov 2017 10:40:08 +0000
>>>> "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>>> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall@linaro.org] wrote:
>
>>> Please don't ask people to limit themselves to 4GB of physical space
>>> on CPUs supporting LPAE, and please don't ask people to guarantee
>>> that some memory is present below 4GB on CPUs supporting LPAE.
>
>> I don't think that Marc is suggesting that you'd always use the 32-bit
>> accessors on an LPAE system, just that all the definitions should exist
>> regardless of configuration.
>
>> So rather than this:
>
>>> +#ifdef CONFIG_ARM_LPAE
>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>> +#else
>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>> +#endif
>
>> ... you'd have the following in cp15.h:
>
>> #define TTBR0_64 __ACCESS_CP15_64(0, c2)
>> #define TTBR1_64 __ACCESS_CP15_64(1, c2)
>> #define PAR_64 __ACCESS_CP15_64(0, c7)
>
>> #define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
>> #define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
>> #define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
>
>> ... and elsewhere, where it matters, we choose which to use depending on
>> the kernel configuration, e.g.
>
>> void set_ttbr0(u64 val)
>> {
>> if (IS_ENABLED(CONFIG_ARM_LPAE))
>> write_sysreg(val, TTBR0_64);
>> else
>> write_sysreg(val, TTBR0_32);
>> }
>
>> Thanks,
>> Mark.
>
> Thanks for your solution.
> I didn't know there was an IS_ENABLED macro I could use, so I couldn't write a function
> like:
> void set_ttbr0(u64 val)
> {
> if (IS_ENABLED(CONFIG_ARM_LPAE))
> write_sysreg(val, TTBR0_64);
> else
> write_sysreg(val, TTBR0_32);
> }
>
>
> Here is the code I tested on vexpress_a9 and vexpress_a15:
> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> index dbdbce1..5eb0185 100644
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -2,6 +2,7 @@
> #define __ASM_ARM_CP15_H
>
> #include <asm/barrier.h>
> +#include <linux/stringify.h>
>
> /*
> * CR1 bits (CP#15 CR1)
> @@ -64,8 +65,93 @@
> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>
> +#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
> +#define TTBR0_64 __ACCESS_CP15_64(0, c2)
> +#define TTBR1_64 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
Please define both PAR accessors. Yes, I know the 32bit version is not
used yet, but it doesn't hurt to make it visible.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply related [flat|nested] 253+ messages in thread
* [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-23 1:54 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-23 1:54 UTC (permalink / raw)
To: linux-arm-kernel
On Nov 23, 2017 20:30 Marc Zyngier [mailto:marc.zyngier at arm.com] wrote:
>Please define both PAR accessors. Yes, I know the 32bit version is not
>used yet, but it doesn't hurt to make it visible.
Thanks for your review.
I'm going to change it in the new version.
Here is the code I tested on vexpress_a9 and vexpress_a15:
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1..b8353b1 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -2,6 +2,7 @@
#define __ASM_ARM_CP15_H
#include <asm/barrier.h>
+#include <linux/stringify.h>
/*
* CR1 bits (CP#15 CR1)
@@ -64,8 +65,109 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR_64 __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
+static inline void set_par(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, PAR_64);
+ else
+ write_sysreg(val, PAR_32);
+}
+
+static inline u64 get_par(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(PAR_64);
+ else
+ return (u64)read_sysreg(PAR_32);
+}
+
+static inline void set_ttbr0(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR0_64);
+ else
+ write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR0_64);
+ else
+ return (u64)read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR1_64);
+ else
+ write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR1_64);
+ else
+ return (u64)read_sysreg(TTBR1_32);
+}
+
static inline unsigned long get_cr(void)
{
unsigned long val;
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..8db8a8c 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -37,56 +37,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index c478281..d365e3c 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -31,8 +31,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
- *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
- *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0_64);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1_64);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
@@ -41,7 +41,7 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c5_AIFSR] = read_sysreg(AIFSR);
ctxt->cp15[c6_DFAR] = read_sysreg(DFAR);
ctxt->cp15[c6_IFAR] = read_sysreg(IFAR);
- *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR);
+ *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR_64);
ctxt->cp15[c10_PRRR] = read_sysreg(PRRR);
ctxt->cp15[c10_NMRR] = read_sysreg(NMRR);
ctxt->cp15[c10_AMAIR0] = read_sysreg(AMAIR0);
@@ -60,8 +60,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
- write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
- write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0_64);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1_64);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
@@ -70,7 +70,7 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c5_AIFSR], AIFSR);
write_sysreg(ctxt->cp15[c6_DFAR], DFAR);
write_sysreg(ctxt->cp15[c6_IFAR], IFAR);
- write_sysreg(*cp15_64(ctxt, c7_PAR), PAR);
+ write_sysreg(*cp15_64(ctxt, c7_PAR), PAR_64);
write_sysreg(ctxt->cp15[c10_PRRR], PRRR);
write_sysreg(ctxt->cp15[c10_NMRR], NMRR);
write_sysreg(ctxt->cp15[c10_AMAIR0], AMAIR0);
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index ebd2dd4..4879588 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -133,12 +133,12 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
if (!(hsr & HSR_DABT_S1PTW) && (hsr & HSR_FSC_TYPE) == FSC_PERM) {
u64 par, tmp;
- par = read_sysreg(PAR);
+ par = read_sysreg(PAR_64);
write_sysreg(far, ATS1CPR);
isb();
- tmp = read_sysreg(PAR);
- write_sysreg(par, PAR);
+ tmp = read_sysreg(PAR_64);
+ write_sysreg(par, PAR_64);
if (unlikely(tmp & 1))
return false; /* Translation failed, back to guest */
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 049ee0a..87c86c7 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -203,16 +203,16 @@ void __init kasan_init(void)
u64 orig_ttbr0;
int i;
- orig_ttbr0 = cpu_get_ttbr(0);
+ orig_ttbr0 = get_ttbr0();
#ifdef CONFIG_ARM_LPAE
memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0(__pa(tmp_page_table));
#else
memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
- cpu_set_ttbr0(__pa(tmp_page_table));
+ set_ttbr0((u64)__pa(tmp_page_table));
#endif
flush_cache_all();
local_flush_bp_all();
@@ -257,7 +257,7 @@ void __init kasan_init(void)
/*__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY))*/
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_set_ttbr0(orig_ttbr0);
+ set_ttbr0(orig_ttbr0);
flush_cache_all();
local_flush_bp_all();
local_flush_tlb_all();
-----Original Message-----
From: Marc Zyngier [mailto:marc.zyngier at arm.com]
Sent: November 22, 2017 21:06
To: Liuwenliang (Abbott Liu); Mark Rutland
Cc: tixy at linaro.org; mhocko at suse.com; grygorii.strashko at linaro.org; catalin.marinas at arm.com; linux-mm at kvack.org; glider at google.com; afzal.mohd.ma at gmail.com; mingo at kernel.org; Christoffer Dall; f.fainelli at gmail.com; mawilcox at microsoft.com; linux at armlinux.org.uk; kasan-dev at googlegroups.com; Dailei; linux-arm-kernel at lists.infradead.org; aryabinin at virtuozzo.com; labbott at redhat.com; vladimir.murzin at arm.com; keescook at chromium.org; arnd at arndb.de; Zengweilin; opendmb at gmail.com; Heshaoliang; tglx at linutronix.de; dvyukov at google.com; ard.biesheuvel at linaro.org; linux-kernel at vger.kernel.org; Jiazhenghua; akpm at linux-foundation.org; robin.murphy at arm.com; thgarnie at google.com; kirill.shutemov at linux.intel.com
Subject: Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
On 22/11/17 12:56, Liuwenliang (Abbott Liu) wrote:
> On Nov 22, 2017 20:30 Mark Rutland [mailto:mark.rutland at arm.com] wrote:
>> On Tue, Nov 21, 2017 at 07:59:01AM +0000, Liuwenliang (Abbott Liu) wrote:
>>> On Nov 17, 2017 21:49 Marc Zyngier [mailto:marc.zyngier at arm.com] wrote:
>>>> On Sat, 18 Nov 2017 10:40:08 +0000
>>>> "Liuwenliang (Abbott Liu)" <liuwenliang@huawei.com> wrote:
>>>>> On Nov 17, 2017 15:36 Christoffer Dall [mailto:cdall at linaro.org] wrote:
>
>>> Please don't ask people to limit themselves to 4GB of physical space on CPUs
>>> supporting LPAE, and please don't ask people to guarantee to have some
>>> memory below 4GB on CPUs supporting LPAE.
>
>> I don't think that Marc is suggesting that you'd always use the 32-bit
>> accessors on an LPAE system, just that all the definitions should exist
>> regardless of configuration.
>
>> So rather than this:
>
>>> +#ifdef CONFIG_ARM_LPAE
>>> +#define TTBR0 __ACCESS_CP15_64(0, c2)
>>> +#define TTBR1 __ACCESS_CP15_64(1, c2)
>>> +#define PAR __ACCESS_CP15_64(0, c7)
>>> +#else
>>> +#define TTBR0 __ACCESS_CP15(c2, 0, c0, 0)
>>> +#define TTBR1 __ACCESS_CP15(c2, 0, c0, 1)
>>> +#define PAR __ACCESS_CP15(c7, 0, c4, 0)
>>> +#endif
>
>> ... you'd have the following in cp15.h:
>
>> #define TTBR0_64 __ACCESS_CP15_64(0, c2)
>> #define TTBR1_64 __ACCESS_CP15_64(1, c2)
>> #define PAR_64 __ACCESS_CP15_64(0, c7)
>
>> #define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
>> #define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
>> #define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
>
>> ... and elsewhere, where it matters, we choose which to use depending on
>> the kernel configuration, e.g.
>
>> void set_ttbr0(u64 val)
>> {
>> if (IS_ENABLED(CONFIG_ARM_LPAE))
>> write_sysreg(val, TTBR0_64);
>> else
>> write_sysreg(val, TTBR0_32);
>> }
>
>> Thanks,
>> Mark.
>
> Thanks for your solution.
> I didn't know there was an IS_ENABLED macro I could use, so I couldn't write a
> function like:
> void set_ttbr0(u64 val)
> {
> if (IS_ENABLED(CONFIG_ARM_LPAE))
> write_sysreg(val, TTBR0_64);
> else
> write_sysreg(val, TTBR0_32);
> }
>
>
> Here is the code I tested on vexpress_a9 and vexpress_a15:
> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> index dbdbce1..5eb0185 100644
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -2,6 +2,7 @@
> #define __ASM_ARM_CP15_H
>
> #include <asm/barrier.h>
> +#include <linux/stringify.h>
>
> /*
> * CR1 bits (CP#15 CR1)
> @@ -64,8 +65,93 @@
> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>
> +#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
> +#define TTBR0_64 __ACCESS_CP15_64(0, c2)
> +#define TTBR1_64 __ACCESS_CP15_64(1, c2)
> +#define PAR __ACCESS_CP15_64(0, c7)
Please define both PAR accessors. Yes, I know the 32bit version is not
used yet, but it doesn't hurt to make it visible.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply related [flat|nested] 253+ messages in thread
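Mark's suggestion above relies on IS_ENABLED() evaluating to a compile-time constant, so the compiler keeps both register encodings visible but eliminates the dead branch. A userspace model of that pattern — the plain 0/1 CONFIG macro, the stand-in register variables, and the simplified IS_ENABLED() here are illustrative assumptions (the kernel's real IS_ENABLED() is more elaborate and also handles undefined config symbols):

```c
#include <assert.h>
#include <stdint.h>

#define CONFIG_ARM_LPAE 1      /* assumed: 1 for LPAE builds, 0 otherwise */
#define IS_ENABLED(x) (x)      /* simplified model of the kernel macro */

/* Stand-ins for the 64-bit and 32-bit TTBR0 encodings. */
static uint64_t ttbr0_64;
static uint32_t ttbr0_32;

static void set_ttbr0(uint64_t val)
{
	if (IS_ENABLED(CONFIG_ARM_LPAE))
		ttbr0_64 = val;              /* write_sysreg(val, TTBR0_64) */
	else
		ttbr0_32 = (uint32_t)val;    /* write_sysreg(val, TTBR0_32) */
}
```

Because the condition is a constant expression, the untaken branch is discarded at compile time, yet both TTBR0 definitions must still exist for the code to parse — which is exactly why all the accessors are defined regardless of configuration.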
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-23 1:54 ` Liuwenliang (Abbott Liu)
(?)
@ 2017-11-23 15:22 ` Russell King - ARM Linux
-1 siblings, 0 replies; 253+ messages in thread
From: Russell King - ARM Linux @ 2017-11-23 15:22 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Marc Zyngier, Mark Rutland, tixy, mhocko, grygorii.strashko,
catalin.marinas, linux-mm, glider, afzal.mohd.ma, mingo,
Christoffer Dall, opendmb, mawilcox, kasan-dev, Dailei, dvyukov,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
f.fainelli, Heshaoliang, tglx, linux-arm-kernel, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Thu, Nov 23, 2017 at 01:54:59AM +0000, Liuwenliang (Abbott Liu) wrote:
> On Nov 23, 2017 20:30 Marc Zyngier [mailto:marc.zyngier@arm.com] wrote:
> >Please define both PAR accessors. Yes, I know the 32bit version is not
> >used yet, but it doesn't hurt to make it visible.
>
> Thanks for your review.
> I'm going to change it in the new version.
> Here is the code I tested on vexpress_a9 and vexpress_a15:
> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> index dbdbce1..b8353b1 100644
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -2,6 +2,7 @@
> #define __ASM_ARM_CP15_H
>
> #include <asm/barrier.h>
> +#include <linux/stringify.h>
>
> /*
> * CR1 bits (CP#15 CR1)
> @@ -64,8 +65,109 @@
> #define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> #define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>
> +#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
> +#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
> +#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
> +#define TTBR0_64 __ACCESS_CP15_64(0, c2)
> +#define TTBR1_64 __ACCESS_CP15_64(1, c2)
> +#define PAR_64 __ACCESS_CP15_64(0, c7)
> +#define VTTBR __ACCESS_CP15_64(6, c2)
> +#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
> +#define CNTVOFF __ACCESS_CP15_64(4, c14)
> +
> +#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
> +#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
> +#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
> +#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
> +#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
> +#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
> +#define HCR __ACCESS_CP15(c1, 4, c1, 0)
> +#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
> +#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
> +#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
> +#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
> +#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
> +#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
> +#define DACR __ACCESS_CP15(c3, 0, c0, 0)
> +#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
> +#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
> +#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
> +#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
> +#define HSR __ACCESS_CP15(c5, 4, c2, 0)
> +#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
> +#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
> +#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
> +#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
> +#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
> +#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
> +#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
> +#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
> +#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
> +#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
> +#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
> +#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
> +#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
> +#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
> +#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
> +#define CID __ACCESS_CP15(c13, 0, c0, 1)
> +#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
> +#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
> +#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
> +#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
> +#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
> +#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
> +#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
> +
> extern unsigned long cr_alignment; /* defined in entry-armv.S */
>
> +static inline void set_par(u64 val)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + write_sysreg(val, PAR_64);
> + else
> + write_sysreg(val, PAR_32);
> +}
> +
> +static inline u64 get_par(void)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + return read_sysreg(PAR_64);
> + else
> + return (u64)read_sysreg(PAR_32);
> +}
> +
> +static inline void set_ttbr0(u64 val)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + write_sysreg(val, TTBR0_64);
> + else
> + write_sysreg(val, TTBR0_32);
> +}
> +
> +static inline u64 get_ttbr0(void)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + return read_sysreg(TTBR0_64);
> + else
> + return (u64)read_sysreg(TTBR0_32);
> +}
> +
> +static inline void set_ttbr1(u64 val)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + write_sysreg(val, TTBR1_64);
> + else
> + write_sysreg(val, TTBR1_32);
> +}
> +
> +static inline u64 get_ttbr1(void)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + return read_sysreg(TTBR1_64);
> + else
> + return (u64)read_sysreg(TTBR1_32);
> +}
> +
Please pay attention to the project coding style whenever creating code
for a program. It doesn't matter what the project coding style is, as
long as you write your code to match the style that is already there.
For the kernel, that is: tabs not spaces for indentation of code.
You seem to be using a variable number of spaces for all the new code
above.
Some of it seems to be your email client thinking it knows better about
white space - and such behaviour basically makes patches unapplyable.
See Documentation/process/email-clients.rst for hints about email
clients.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-22 12:56 ` Liuwenliang (Abbott Liu)
(?)
@ 2017-11-23 15:31 ` Mark Rutland
-1 siblings, 0 replies; 253+ messages in thread
From: Mark Rutland @ 2017-11-23 15:31 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Marc Zyngier, tixy, mhocko, grygorii.strashko, catalin.marinas,
linux-mm, glider, afzal.mohd.ma, mingo, Christoffer Dall,
f.fainelli, mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
opendmb, Heshaoliang, tglx, dvyukov, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Wed, Nov 22, 2017 at 12:56:44PM +0000, Liuwenliang (Abbott Liu) wrote:
> +static inline u64 get_ttbr0(void)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + return read_sysreg(TTBR0_64);
> + else
> + return (u64)read_sysreg(TTBR0_32);
> +}
> +static inline u64 get_ttbr1(void)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + return read_sysreg(TTBR1_64);
> + else
> + return (u64)read_sysreg(TTBR1_32);
> +}
In addition to the whitespace damage that needs to be fixed, there's no
need for the u64 casts here. The compiler will implicitly cast to the
return type, and as u32 and u64 are both arithmetic types, we don't need
an explicit cast here.
Thanks,
Mark.
^ permalink raw reply [flat|nested] 253+ messages in thread
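Mark's point about the redundant casts comes from C's usual arithmetic conversions: a u32 return value is implicitly zero-extended when converted to the u64 return type. A minimal sketch (the read32() helper and its value are hypothetical, standing in for read_sysreg(TTBR0_32)):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for read_sysreg(TTBR0_32); the value is arbitrary. */
static uint32_t read32(void)
{
	return 0x80000000u;
}

/* No explicit (u64) cast needed: the u32 result is implicitly
 * zero-extended to the uint64_t return type. */
static uint64_t get_ttbr0_32bit(void)
{
	return read32();
}
```

The explicit cast is harmless but redundant, which is why the review asks for its removal rather than flagging a bug.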
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-23 15:22 ` Russell King - ARM Linux
(?)
@ 2017-11-27 1:23 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-27 1:23 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: Marc Zyngier, Mark Rutland, tixy, mhocko, grygorii.strashko,
catalin.marinas, linux-mm, glider, afzal.mohd.ma, mingo,
Christoffer Dall, opendmb, mawilcox, kasan-dev, Dailei, dvyukov,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
f.fainelli, Heshaoliang, tglx, linux-arm-kernel, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Nov 23, 2017 23:22 Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>Please pay attention to the project coding style whenever creating code
>for a program. It doesn't matter what the project coding style is, as
>long as you write your code to match the style that is already there.
>
>For the kernel, that is: tabs not spaces for indentation of code.
>You seem to be using a variable number of spaces for all the new code
>above.
>
>Some of it seems to be your email client thinking it knows better about
>white space - and such behaviours basically makes patches unapplyable.
>See Documentation/process/email-clients.rst for hints about email
>clients.
Thanks for your review.
I'm going to change it in the new version.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-11-23 15:31 ` Mark Rutland
@ 2017-11-27 1:26 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-27 1:26 UTC (permalink / raw)
To: Mark Rutland
Cc: Marc Zyngier, tixy, mhocko, grygorii.strashko, catalin.marinas,
linux-mm, glider, afzal.mohd.ma, mingo, Christoffer Dall,
f.fainelli, mawilcox, linux, kasan-dev, Dailei, linux-arm-kernel,
aryabinin, labbott, vladimir.murzin, keescook, arnd, Zengweilin,
opendmb, Heshaoliang, tglx, dvyukov, ard.biesheuvel,
linux-kernel, Jiazhenghua, akpm, robin.murphy, thgarnie,
kirill.shutemov
On Nov 23, 2017 23:32 Mark Rutland [mailto:mark.rutland@arm.com] wrote:
>On Wed, Nov 22, 2017 at 12:56:44PM +0000, Liuwenliang (Abbott Liu) wrote:
>> +static inline u64 get_ttbr0(void)
>> +{
>> + if (IS_ENABLED(CONFIG_ARM_LPAE))
>> + return read_sysreg(TTBR0_64);
>> + else
>> + return (u64)read_sysreg(TTBR0_32);
>> +}
>
>> +static inline u64 get_ttbr1(void)
>> +{
>> + if (IS_ENABLED(CONFIG_ARM_LPAE))
>> + return read_sysreg(TTBR1_64);
>> + else
>> + return (u64)read_sysreg(TTBR1_32);
>> +}
>
>In addition to the whitespace damage that needs to be fixed, there's no
>need for the u64 casts here. The compiler will implicitly cast to the
>return type, and as u32 and u64 are both arithmetic types, we don't need
>an explicit cast here.
Thanks for your review.
I'm going to change it in the new version.
-----Original Message-----
From: Mark Rutland [mailto:mark.rutland@arm.com]
Sent: November 23, 2017 23:32
To: Liuwenliang (Abbott Liu)
Cc: Marc Zyngier; tixy@linaro.org; mhocko@suse.com; grygorii.strashko@linaro.org; catalin.marinas@arm.com; linux-mm@kvack.org; glider@google.com; afzal.mohd.ma@gmail.com; mingo@kernel.org; Christoffer Dall; f.fainelli@gmail.com; mawilcox@microsoft.com; linux@armlinux.org.uk; kasan-dev@googlegroups.com; Dailei; linux-arm-kernel@lists.infradead.org; aryabinin@virtuozzo.com; labbott@redhat.com; vladimir.murzin@arm.com; keescook@chromium.org; arnd@arndb.de; Zengweilin; opendmb@gmail.com; Heshaoliang; tglx@linutronix.de; dvyukov@google.com; ard.biesheuvel@linaro.org; linux-kernel@vger.kernel.org; Jiazhenghua; akpm@linux-foundation.org; robin.murphy@arm.com; thgarnie@google.com; kirill.shutemov@linux.intel.com
Subject: Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
On Wed, Nov 22, 2017 at 12:56:44PM +0000, Liuwenliang (Abbott Liu) wrote:
> +static inline u64 get_ttbr0(void)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + return read_sysreg(TTBR0_64);
> + else
> + return (u64)read_sysreg(TTBR0_32);
> +}
> +static inline u64 get_ttbr1(void)
> +{
> + if (IS_ENABLED(CONFIG_ARM_LPAE))
> + return read_sysreg(TTBR1_64);
> + else
> + return (u64)read_sysreg(TTBR1_32);
> +}
In addition to the whitespace damage that needs to be fixed, there's no
need for the u64 casts here. The compiler will implicitly cast to the
return type, and as u32 and u64 are both arithmetic types, we don't need
an explicit cast here.
Thanks,
Mark.
^ permalink raw reply [flat|nested] 253+ messages in thread
* 答复: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2017-11-27 1:26 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-11-27 1:26 UTC (permalink / raw)
To: linux-arm-kernel
On Nov 23, 2017 23:32 Mark Rutland [mailto:mark.rutland at arm.com] wrote:
>On Wed, Nov 22, 2017 at 12:56:44PM +0000, Liuwenliang (Abbott Liu) wrote:
>> +static inline u64 get_ttbr0(void)
>> +{
>> + if (IS_ENABLED(CONFIG_ARM_LPAE))
>> + return read_sysreg(TTBR0_64);
>> + else
>> + return (u64)read_sysreg(TTBR0_32);
>> +}
>
>> +static inline u64 get_ttbr1(void)
>> +{
>> + if (IS_ENABLED(CONFIG_ARM_LPAE))
>> + return read_sysreg(TTBR1_64);
>> + else
>> + return (u64)read_sysreg(TTBR1_32);
>> +}
>
>In addition to the whitespace damage that needs to be fixed, there's no
>need for the u64 casts here. The compiler will implicitly cast to the
>return type, and as u32 and u64 are both arithmetic types, we don't need
>an explicit cast here.
Thanks for your review.
I'm going to change it in the new version.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-10-19 12:51 ` Russell King - ARM Linux
@ 2017-12-05 14:19 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2017-12-05 14:19 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: Dmitry Vyukov, Andrew Morton, Andrey Ryabinin, afzal.mohd.ma,
f.fainelli, Laura Abbott, Kirill A. Shutemov, Michal Hocko,
cdall, marc.zyngier, Catalin Marinas, Matthew Wilcox,
Thomas Gleixner, Thomas Garnier, Kees Cook, Arnd Bergmann,
Vladimir Murzin, tixy, Ard Biesheuvel, robin.murphy, Ingo Molnar,
grygorii.strashko, Alexander Potapenko, opendmb,
linux-arm-kernel, LKML, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On Nov 23, 2017 20:30 Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>On Thu, Oct 12, 2017 at 11:27:40AM +0000, Liuwenliang (Lamb) wrote:
>> >> - I don't understand why this is necessary. memory_is_poisoned_16()
>> >> already handles unaligned addresses?
>> >>
>> >> - If it's needed on ARM then presumably it will be needed on other
>> >> architectures, so CONFIG_ARM is insufficiently general.
>> >>
>> >> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>> >> it would be better to generalize/fix it in some fashion rather than
>> >> creating a new variant of the function.
>>
>>
>> >Yes, I think it will be better to fix the current function rather then
>> >have 2 slightly different copies with ifdef's.
>> >Will something along these lines work for arm? 16-byte accesses are
>> >not too common, so it should not be a performance problem. And
>> >probably modern compilers can turn 2 1-byte checks into a 2-byte check
>> >where safe (x86).
>>
>> >static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>> >{
>> > u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>> >
>> > if (shadow_addr[0] || shadow_addr[1])
>> > return true;
>> > /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>> > if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>> > return memory_is_poisoned_1(addr + 15);
>> > return false;
>> >}
>>
>> Thanks for Andrew Morton and Dmitry Vyukov's review.
>> If the parameter addr=0xc0000008, now in function:
>> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>> {
>> --- //shadow_addr = (u16 *)(KASAN_OFFSET+0x18000001(=0xc0000008>>3)) is not
>> --- // aligned to 2 bytes.
>> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>>
>> /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>> if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>> return *shadow_addr || memory_is_poisoned_1(addr + 15);
>> ---- //here an error occurs on arm, especially while the kernel is still booting,
>> ---- //because the unaligned access causes a Data Abort exception whose handler
>> ---- //is not yet initialized at that point in the boot.
>> return *shadow_addr;
>> }
>>
>> I also think it is better to fix this problem.
>What about using get_unaligned() ?
Thanks for your review.
I think it is a good idea to use get_unaligned(). But ARMv7 selects CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
(arch/arm/Kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU).
So on ARMv7, the code:
u16 *shadow_addr = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));
equals the code:
u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
On ARMv7, if SCTLR.A is 0, unaligned access is OK. Here is the description from the ARM(r) Architecture Reference
Manual, ARMv7-A and ARMv7-R edition:
A3.2.1 Unaligned data access
An ARMv7 implementation must support unaligned data accesses by some load and store instructions, as
Table A3-1 shows. Software can set the SCTLR.A bit to control whether a misaligned access by one of these
instructions causes an Alignment fault Data abort exception.
Table A3-1 Alignment requirements of load/store instructions

  Instructions                            Alignment check  SCTLR.A is 0      SCTLR.A is 1
  LDRB, LDREXB, LDRBT, LDRSB,             None             -                 -
  LDRSBT, STRB, STREXB, STRBT,
  SWPB, TBB
  LDRH, LDRHT, LDRSH, LDRSHT,             Halfword         Unaligned access  Alignment fault
  STRH, STRHT, TBH
  LDREXH, STREXH                          Halfword         Alignment fault   Alignment fault
  LDR, LDRT, STR, STRT;                   Word             Unaligned access  Alignment fault
  PUSH and POP, encodings T3 and A2 only
  LDREX, STREX                            Word             Alignment fault   Alignment fault
  LDREXD, STREXD                          Doubleword       Alignment fault   Alignment fault
  All forms of LDM and STM, LDRD,         Word             Alignment fault   Alignment fault
  RFE, SRS, STRD, SWP; PUSH and POP,
  except for encodings T3 and A2
  LDC, LDC2, STC, STC2                    Word             Alignment fault   Alignment fault
  VLDM, VLDR, VPOP, VPUSH,                Word             Alignment fault   Alignment fault
  VSTM, VSTR
  VLD1, VLD2, VLD3, VLD4, VST1,           Element size     Unaligned access  Alignment fault
  VST2, VST3, VST4, all with
  standard alignment (a)
  VLD1, VLD2, VLD3, VLD4, VST1,           As specified     Alignment fault   Alignment fault
  VST2, VST3, VST4, all with              by <align>
  <align> specified (a)
On ARMv7, the following code (from __enable_mmu in arch/arm/kernel/head.S) guarantees that SCTLR.A is 0:
__enable_mmu:
#if defined(CONFIG_ALIGNMENT_TRAP) && __LINUX_ARM_ARCH__ < 6
orr r0, r0, #CR_A
#else
bic r0, r0, #CR_A //clear CR_A
#endif
#ifdef CONFIG_CPU_DCACHE_DISABLE
bic r0, r0, #CR_C
#endif
#ifdef CONFIG_CPU_BPREDICT_DISABLE
bic r0, r0, #CR_Z
#endif
#ifdef CONFIG_CPU_ICACHE_DISABLE
bic r0, r0, #CR_I
#endif
#ifdef CONFIG_ARM_LPAE
mcrr p15, 0, r4, r5, c2 @ load TTBR0
#else
mov r5, #DACR_INIT
mcr p15, 0, r5, c3, c0, 0 @ load domain access register
mcr p15, 0, r4, c2, c0, 0 @ load page table pointer
#endif
b __turn_mmu_on
ENDPROC(__enable_mmu)
/*
* Enable the MMU. This completely changes the structure of the visible
* memory space. You will not be able to trace execution through this.
* If you have an enquiry about this, *please* check the linux-arm-kernel
* mailing list archives BEFORE sending another post to the list.
*
* r0 = cp#15 control register
* r1 = machine ID
* r2 = atags or dtb pointer
* r9 = processor ID
* r13 = *virtual* address to jump to upon completion
*
* other registers depend on the function called upon completion
*/
.align 5
.pushsection .idmap.text, "ax"
ENTRY(__turn_mmu_on)
mov r0, r0
instr_sync
mcr p15, 0, r0, c1, c0, 0 @ write control reg //here set SCTLR=r0
mrc p15, 0, r3, c0, c0, 0 @ read id reg
instr_sync
mov r3, r3
mov r3, r13
ret r3
__turn_mmu_on_end:
ENDPROC(__turn_mmu_on)
So the following fix is OK (note that get_unaligned() returns the loaded value, not a pointer):
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
-	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+	u16 shadow = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));

	/* Unaligned 16-bytes access maps into 3 shadow bytes. */
	if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
-		return *shadow_addr || memory_is_poisoned_1(addr + 15);
+		return shadow || memory_is_poisoned_1(addr + 15);
-	return *shadow_addr;
+	return shadow;
}
A very good suggestion, Thanks.
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-12-05 14:19 ` Liuwenliang (Abbott Liu)
@ 2017-12-05 17:08 ` Ard Biesheuvel
-1 siblings, 0 replies; 253+ messages in thread
From: Ard Biesheuvel @ 2017-12-05 17:08 UTC (permalink / raw)
To: Liuwenliang (Abbott Liu)
Cc: Russell King - ARM Linux, Dmitry Vyukov, Andrew Morton,
Andrey Ryabinin, afzal.mohd.ma, f.fainelli, Laura Abbott,
Kirill A. Shutemov, Michal Hocko, cdall, marc.zyngier,
Catalin Marinas, Matthew Wilcox, Thomas Gleixner, Thomas Garnier,
Kees Cook, Arnd Bergmann, Vladimir Murzin, tixy, robin.murphy,
Ingo Molnar, grygorii.strashko, Alexander Potapenko, opendmb,
linux-arm-kernel, LKML, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On 5 December 2017 at 14:19, Liuwenliang (Abbott Liu)
<liuwenliang@huawei.com> wrote:
> On Nov 23, 2017 20:30 Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>>On Thu, Oct 12, 2017 at 11:27:40AM +0000, Liuwenliang (Lamb) wrote:
>>> >> - I don't understand why this is necessary. memory_is_poisoned_16()
>>> >> already handles unaligned addresses?
>>> >>
>>> >> - If it's needed on ARM then presumably it will be needed on other
>>> >> architectures, so CONFIG_ARM is insufficiently general.
>>> >>
>>> >> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>>> >> it would be better to generalize/fix it in some fashion rather than
>>> >> creating a new variant of the function.
>>>
>>>
>>> >Yes, I think it will be better to fix the current function rather then
>>> >have 2 slightly different copies with ifdef's.
>>> >Will something along these lines work for arm? 16-byte accesses are
>>> >not too common, so it should not be a performance problem. And
>>> >probably modern compilers can turn 2 1-byte checks into a 2-byte check
>>> >where safe (x86).
>>>
>>> >static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>> >{
>>> > u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>>> >
>>> > if (shadow_addr[0] || shadow_addr[1])
>>> > return true;
>>> > /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>> > if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>> > return memory_is_poisoned_1(addr + 15);
>>> > return false;
>>> >}
>>>
>>> Thanks for Andrew Morton and Dmitry Vyukov's review.
>>> If the parameter addr=0xc0000008, now in function:
>>> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>> {
>>> --- //shadow_addr = (u16 *)(KASAN_OFFSET+0x18000001(=0xc0000008>>3)) is not
>>> --- // aligned to 2 bytes.
>>> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>>>
>>> /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>> if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>> return *shadow_addr || memory_is_poisoned_1(addr + 15);
>>> ---- //here an error occurs on arm, especially while the kernel is still booting,
>>> ---- //because the unaligned access causes a Data Abort exception whose handler
>>> ---- //is not yet initialized at that point in the boot.
>>> return *shadow_addr;
>>> }
>>>
>>> I also think it is better to fix this problem.
>
>>What about using get_unaligned() ?
>
> Thanks for your review.
>
> I think it is a good idea to use get_unaligned(). But ARMv7 selects CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> (arch/arm/Kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU).
> So on ARMv7, the code:
> u16 *shadow_addr = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));
> equals the code:
> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>
No it does not. The compiler may merge adjacent accesses into ldm or
ldrd instructions, which do not tolerate misalignment regardless of
the SCTLR.A bit.
This is actually something we may need to fix for ARM, i.e., drop
HAVE_EFFICIENT_UNALIGNED_ACCESS altogether, or carefully review the
way it is used currently.
> On ARMv7, if SCTLR.A is 0, unaligned access is OK. Here is the description from the ARM(r) Architecture Reference
> Manual, ARMv7-A and ARMv7-R edition:
>
<snip>
Could you *please* stop quoting the ARM ARM at us? People who are
seeking detailed information like that will know where to find it.
--
Ard.
^ permalink raw reply [flat|nested] 253+ messages in thread
* [PATCH 06/11] change memory_is_poisoned_16 for aligned error
@ 2017-12-05 17:08 ` Ard Biesheuvel
0 siblings, 0 replies; 253+ messages in thread
From: Ard Biesheuvel @ 2017-12-05 17:08 UTC (permalink / raw)
To: linux-arm-kernel
On 5 December 2017 at 14:19, Liuwenliang (Abbott Liu)
<liuwenliang@huawei.com> wrote:
> On Nov 23, 2017 20:30 Russell King - ARM Linux [mailto:linux at armlinux.org.uk] wrote:
>>On Thu, Oct 12, 2017 at 11:27:40AM +0000, Liuwenliang (Lamb) wrote:
>>> >> - I don't understand why this is necessary. memory_is_poisoned_16()
>>> >> already handles unaligned addresses?
>>> >>
>>> >> - If it's needed on ARM then presumably it will be needed on other
>>> >> architectures, so CONFIG_ARM is insufficiently general.
>>> >>
>>> >> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>>> >> it would be better to generalize/fix it in some fashion rather than
>>> >> creating a new variant of the function.
>>>
>>>
>>> >Yes, I think it will be better to fix the current function rather than
>>> >have 2 slightly different copies with ifdef's.
>>> >Will something along these lines work for arm? 16-byte accesses are
>>> >not too common, so it should not be a performance problem. And
>>> >probably modern compilers can turn 2 1-byte checks into a 2-byte check
>>> >where safe (x86).
>>>
>>> >static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>> >{
>>> > u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>>> >
>>> > if (shadow_addr[0] || shadow_addr[1])
>>> > return true;
>>> > /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>> > if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>> > return memory_is_poisoned_1(addr + 15);
>>> > return false;
>>> >}
>>>
>>> Thanks to Andrew Morton and Dmitry Vyukov for the review.
>>> If the parameter addr=0xc0000008, then in the function:
>>> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>> {
>>> --- // shadow_addr = (u16 *)(KASAN_OFFSET+0x18000001(=0xc0000008>>3)) is not
>>> --- // aligned to 2 bytes.
>>> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>>>
>>> /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>> if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>> return *shadow_addr || memory_is_poisoned_1(addr + 15);
>>> ---- // Here an error occurs on arm, especially while the kernel is still booting,
>>> ---- // because the unaligned access raises a Data Abort exception whose handler
>>> ---- // is not yet initialized at that stage.
>>> return *shadow_addr;
>>> }
>>>
>>> I also think it is better to fix this problem.
>
>>What about using get_unaligned() ?
>
> Thanks for your review.
>
>> I think it is a good idea to use get_unaligned(). But ARMv7 supports CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>> (arch/arm/Kconfig : select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU).
>> So on ARMv7, the code:
>> u16 *shadow_addr = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));
>> is equivalent to the code:
> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>
No it does not. The compiler may merge adjacent accesses into ldm or
ldrd instructions, which do not tolerate misalignment regardless of
the SCTLR.A bit.
This is actually something we may need to fix for ARM, i.e., drop
HAVE_EFFICIENT_UNALIGNED_ACCESS altogether, or carefully review the
way it is used currently.
> On ARMv7, if SCTLR.A is 0, unaligned access is OK. Here is the description from the ARM(r) Architecture Reference
> Manual, ARMv7-A and ARMv7-R edition:
>
<snip>
Could you *please* stop quoting the ARM ARM at us? People who are
seeking detailed information like that will know where to find it.
--
Ard.
* Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
2017-12-05 17:08 ` Ard Biesheuvel
@ 2018-01-16 8:39 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2018-01-16 8:39 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Russell King - ARM Linux, Dmitry Vyukov, Andrew Morton,
Andrey Ryabinin, afzal.mohd.ma, f.fainelli, Laura Abbott,
Kirill A. Shutemov, Michal Hocko, cdall, marc.zyngier,
Catalin Marinas, Matthew Wilcox, Thomas Gleixner, Thomas Garnier,
Kees Cook, Arnd Bergmann, Vladimir Murzin, tixy, robin.murphy,
Ingo Molnar, grygorii.strashko, Alexander Potapenko, opendmb,
linux-arm-kernel, LKML, kasan-dev, linux-mm, Jiazhenghua, Dailei,
Zengweilin, Heshaoliang
On 6 December 2017 at 1:09 Ard Biesheuvel [ard.biesheuvel@linaro.org] wrote:
>On 5 December 2017 at 14:19, Liuwenliang (Abbott Liu)
><liuwenliang@huawei.com> wrote:
>> On Nov 23, 2017 20:30 Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>>>On Thu, Oct 12, 2017 at 11:27:40AM +0000, Liuwenliang (Lamb) wrote:
>>>> >> - I don't understand why this is necessary. memory_is_poisoned_16()
>>>> >> already handles unaligned addresses?
>>>> >>
>>>> >> - If it's needed on ARM then presumably it will be needed on other
>>>> >> architectures, so CONFIG_ARM is insufficiently general.
>>>> >>
>>>> >> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>>>> >> it would be better to generalize/fix it in some fashion rather than
>>>> >> creating a new variant of the function.
>>>>
>>>>
>>>> >Yes, I think it will be better to fix the current function rather than
>>>> >have 2 slightly different copies with ifdef's.
>>>> >Will something along these lines work for arm? 16-byte accesses are
>>>> >not too common, so it should not be a performance problem. And
>>>> >probably modern compilers can turn 2 1-byte checks into a 2-byte check
>>>> >where safe (x86).
>>>>
>>>> >static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>>> >{
>>>> > u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>>>> >
>>>> > if (shadow_addr[0] || shadow_addr[1])
>>>> > return true;
>>>> > /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>>> > if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>>> > return memory_is_poisoned_1(addr + 15);
>>>> > return false;
>>>> >}
>>>>
>>>> Thanks to Andrew Morton and Dmitry Vyukov for the review.
>>>> If the parameter addr=0xc0000008, then in the function:
>>>> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>>> {
>>>> --- // shadow_addr = (u16 *)(KASAN_OFFSET+0x18000001(=0xc0000008>>3)) is not
>>>> --- // aligned to 2 bytes.
>>>> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>>>>
>>>> /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>>> if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>>> return *shadow_addr || memory_is_poisoned_1(addr + 15);
>>>> ---- // Here an error occurs on arm, especially while the kernel is still booting,
>>>> ---- // because the unaligned access raises a Data Abort exception whose handler
>>>> ---- // is not yet initialized at that stage.
>>>> return *shadow_addr;
>>>> }
>>>>
>>>> I also think it is better to fix this problem.
>>
>>>What about using get_unaligned() ?
>>
>> Thanks for your review.
>>
>> I think it is a good idea to use get_unaligned(). But ARMv7 supports CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>> (arch/arm/Kconfig : select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU).
>> So on ARMv7, the code:
>> u16 *shadow_addr = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));
>> is equivalent to the code:
>> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>>
>
>No it does not. The compiler may merge adjacent accesses into ldm or
>ldrd instructions, which do not tolerate misalignment regardless of
>the SCTLR.A bit.
>
>This is actually something we may need to fix for ARM, i.e., drop
>HAVE_EFFICIENT_UNALIGNED_ACCESS altogether, or carefully review the
>way it is used currently.
>
>> On ARMv7, if SCTLR.A is 0, unaligned access is OK. Here is the description from the ARM(r) Architecture Reference
>> Manual, ARMv7-A and ARMv7-R edition:
>>
><snip>
>
>Could you *please* stop quoting the ARM ARM at us? People who are
>seeking detailed information like that will know where to find it.
>
>--
>Ard.
Thanks to Ard Biesheuvel for the review.
Using get_unaligned() would not give us much benefit here, and it may have problems of its own.
So it is probably better not to use get_unaligned().
* Re: [PATCH 00/11] KASan for arm
2017-10-11 8:22 ` Abbott Liu
@ 2018-02-13 18:40 ` Florian Fainelli
-1 siblings, 0 replies; 253+ messages in thread
From: Florian Fainelli @ 2018-02-13 18:40 UTC (permalink / raw)
To: Abbott Liu, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm, jiazhenghua, dylix.dailei, zengweilin,
heshaoliang
Hi Abbott,
On 10/11/2017 01:22 AM, Abbott Liu wrote:
> Hi,all:
> These patches add arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).
>
> 1/8 of kernel addresses reserved for shadow memory. There was no
> big enough hole for this, so virtual addresses for shadow were
> stolen from user space.
>
> At early boot stage the whole shadow region populated with just
> one physical page (kasan_zero_page). Later, this page reused
> as readonly zero shadow for some memory that KASan currently
> don't track (vmalloc).
>
> After mapping the physical memory, pages for shadow memory are
> allocated and mapped.
>
> KASan's stack instrumentation significantly increases stack's
> consumption, so CONFIG_KASAN doubles THREAD_SIZE.
>
> Functions like memset/memmove/memcpy do a lot of memory accesses.
> If bad pointer passed to one of these function it is important
> to catch this. Compiler's instrumentation cannot do this since
> these functions are written in assembly.
>
> KASan replaces memory functions with manually instrumented variants.
> Original functions declared as weak symbols so strong definitions
> in mm/kasan/kasan.c could replace them. Original functions have aliases
> with '__' prefix in name, so we could call non-instrumented variant
> if needed.
>
> Some files built without kasan instrumentation (e.g. mm/slub.c).
> Original mem* function replaced (via #define) with prefixed variants
> to disable memory access checks for such files.
>
> On the arm LPAE architecture, the mapping table of the KASan shadow memory (if
> PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual space is
> 0xb6e00000~0xbf000000) can't be filled in the do_translation_fault function,
> because KASan instrumentation may cause do_translation_fault itself to access
> the KASan shadow memory. Such an access from do_translation_fault may lead to
> infinite recursion. So the mapping table of the KASan shadow memory needs to
> be copied in the pgd_alloc function.
>
>
> Most of the code comes from:
> https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
Are you planning on picking up these patches and sending a second
version? I would be more than happy to provide test results once you
have something, this is very useful, thank you!
--
Florian
* Re: [PATCH 00/11] KASan for arm
2018-02-13 18:40 ` Florian Fainelli
@ 2018-02-23 2:10 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2018-02-23 2:10 UTC (permalink / raw)
To: Florian Fainelli, linux, aryabinin, afzal.mohd.ma, labbott,
kirill.shutemov, mhocko, cdall, marc.zyngier, catalin.marinas,
akpm, mawilcox, tglx, thgarnie, keescook, arnd, vladimir.murzin,
tixy, ard.biesheuvel, robin.murphy, mingo, grygorii.strashko
Cc: glider, dvyukov, opendmb, linux-arm-kernel, linux-kernel,
kasan-dev, linux-mm
On 2018/2/14 2:41 AM, Florian Fainelli [f.fainelli@gmail.com] wrote:
>Hi Abbott,
>
>Are you planning on picking up these patches and sending a second
>version? I would be more than happy to provide test results once you
>have something, this is very useful, thank you!
>--
>Florian
I'm sorry for replying so late; I was on holiday for the last few days.
Yes, I will send the second version, probably within the next two weeks.
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-19 11:09 ` Russell King - ARM Linux
@ 2018-02-24 14:28 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2018-02-24 14:28 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm
On Oct 19, 2017 at 19:09, Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>On Wed, Oct 11, 2017 at 04:22:17PM +0800, Abbott Liu wrote:
>> +#else
>> +#define pud_populate(mm,pmd,pte) do { } while (0)
>> +#endif
>
>Please explain this change - we don't have a "pud" as far as the rest of
>the Linux MM layer is concerned, so why do we need it for kasan?
>
>I suspect it comes from the way we wrap up the page tables - where ARM
>does it one way (because it has to) vs the subsequently merged method
>which is completely upside down to what ARMs doing, and therefore is
>totally incompatible and impossible to fit in with our way.
We use pud_populate in the kasan_populate_zero_shadow function.
....
>> obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
>> +
>> +KASAN_SANITIZE_kasan_init.o := n
>> +obj-$(CONFIG_KASAN) += kasan_init.o
>
>Why is this placed in the middle of the cache object listing?
Sorry, I will place this at the end of the arch/arm/mm/Makefile.
>> +
>> +
>> obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
...
>> +pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
>> +{
>> + pgd_t *pgd = pgd_offset_k(addr);
>> + if (pgd_none(*pgd)) {
>> + void *p = kasan_alloc_block(PAGE_SIZE, node);
>> + if (!p)
>> + return NULL;
>> + pgd_populate(&init_mm, pgd, p);
>> + }
>> + return pgd;
>> +}
>This all looks wrong - you are aware that on non-LPAE platforms, there
>is only a _two_ level page table - the top level page table is 16K in
>size, and each _individual_ lower level page table is actually 1024
>bytes, but we do some special handling in the kernel to combine two
>together. It looks to me that you allocate memory for each Linux-
>abstracted page table level whether the hardware needs it or not.
You are right. On a non-LPAE platform, if the if (pgd_none(*pgd)) check were true,
the space allocated by void *p = kasan_alloc_block(PAGE_SIZE, node) would not be enough.
But the function kasan_pgd_populate is only used in:
kasan_init -> create_mapping -> kasan_pgd_populate, so on a non-LPAE platform
the if (pgd_none(*pgd)) check is always false.
Still, I think changing the code as follows is better:
if (IS_ENABLED(CONFIG_ARM_LPAE)) {
p = kasan_alloc_block(PAGE_SIZE, node);
} else {
/* non-LPAE needs 16K for the first-level page table */
p = kasan_alloc_block(PAGE_SIZE * 4, node);
}
>Is there any reason why the pre-existing "create_mapping()" function
>can't be used, and you've had to rewrite that code here?
Two reasons:
1) The create_mapping here can dynamically allocate physical memory for mapping
the virtual space from start to end, but the create_mapping in arch/arm/mm/mmu.c can't.
2) For LPAE, create_mapping needs to allocate a pgd, for which we need to use
virtual space below 0xc0000000; the create_mapping here can allocate that pgd,
but the create_mapping in arch/arm/mm/mmu.c can't.
>> +
>> +static int __init create_mapping(unsigned long start, unsigned long end, int node)
>> +{
>> + unsigned long addr = start;
>> + pgd_t *pgd;
>> + pud_t *pud;
>> + pmd_t *pmd;
>> + pte_t *pte;
>
>A blank line would help between the auto variables and the code of the
>function.
OK, I will add a blank line in the new version.
>> + pr_info("populating shadow for %lx, %lx\n", start, end);
>
>Blank line here too please.
OK, I will add a blank line in the new version.
>> + for (; addr < end; addr += PAGE_SIZE) {
>> + pgd = kasan_pgd_populate(addr, node);
>> + if (!pgd)
>> + return -ENOMEM;
...
>> +void __init kasan_init(void)
>> +{
>> + struct memblock_region *reg;
>> + u64 orig_ttbr0;
>> +
>> + orig_ttbr0 = cpu_get_ttbr(0);
>> +
>> +#ifdef CONFIG_ARM_LPAE
>> + memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
>> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
>> + set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
>> + cpu_set_ttbr0(__pa(tmp_page_table));
>> +#else
>> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
>> + cpu_set_ttbr0(__pa(tmp_page_table));
>> +#endif
>> + flush_cache_all();
>> + local_flush_bp_all();
>> + local_flush_tlb_all();
>What are you trying to achieve with all this complexity? Some comments
>might be useful, especially for those of us who don't know the internals
>of kasan.
OK, I will add some comments to the kasan_init function in the new version.
...
>> + for_each_memblock(memory, reg) {
>> + void *start = __va(reg->base);
>> + void *end = __va(reg->base + reg->size);
>
>Isn't this going to complain if the translation macro debugging is enabled?
Sorry, I don't know what the translation macro debugging is. Could you explain?
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
@ 2018-02-24 14:28 ` Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2018-02-24 14:28 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm
On Oct 19, 2017 at 19:09, Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>On Wed, Oct 11, 2017 at 04:22:17PM +0800, Abbott Liu wrote:
>> +#else
>> +#define pud_populate(mm,pmd,pte) do { } while (0)
>> +#endif
>
>Please explain this change - we don't have a "pud" as far as the rest of
>the Linux MM layer is concerned, so why do we need it for kasan?
>
>I suspect it comes from the way we wrap up the page tables - where ARM
>does it one way (because it has to) vs the subsequently merged method
>which is completely upside down to what ARMs doing, and therefore is
>totally incompatible and impossible to fit in with our way.
We will use pud_polulate in kasan_populate_zero_shadow function.
....
>> obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
>> +
>> +KASAN_SANITIZE_kasan_init.o := n
>> +obj-$(CONFIG_KASAN) += kasan_init.o
>
>Why is this placed in the middle of the cache object listing?
Sorry, I will place this at the end of the arch/arm/mm/Makefile.
>> +
>> +
>> obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
...
>> +pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
>> +{
>> + pgd_t *pgd = pgd_offset_k(addr);
>> + if (pgd_none(*pgd)) {
>> + void *p = kasan_alloc_block(PAGE_SIZE, node);
>> + if (!p)
>> + return NULL;
>> + pgd_populate(&init_mm, pgd, p);
>> + }
>> + return pgd;
>> +}
>This all looks wrong - you are aware that on non-LPAE platforms, there
>is only a _two_ level page table - the top level page table is 16K in
>size, and each _individual_ lower level page table is actually 1024
>bytes, but we do some special handling in the kernel to combine two
>together. It looks to me that you allocate memory for each Linux-
>abstracted page table level whether the hardware needs it or not.
You are right. If non-LPAE platform check if(pgd_none(*pgd)) true,
void *p = kasan_alloc_block(PAGE_SIZE, node) alloc space is not enough.
But the the function kasan_pgd_populate only used in :
Kasan_init-> create_mapping-> kasan_pgd_populate , so when non-LPAE platform
the if (pgd_none(*pgd)) always false.
But I also think change those code is much better :
if (IS_ENABLED(CONFIG_ARM_LPAE)) {
p = kasan_alloc_block(PAGE_SIZE, node);
} else {
/* non-LPAE need 16K for first level pagetabe*/
p = kasan_alloc_block(PAGE_SIZE*4, node);
}
>Is there any reason why the pre-existing "create_mapping()" function
>can't be used, and you've had to rewrite that code here?
Two reason:
1) Here create_mapping can dynamic alloc phys memory space for mapping to virtual space
Which from start to end, but the create_mapping in arch/arm/mm/mmu.c can't.
2) for LPAE, create_mapping need alloc pgd which we need use virtual space below 0xc0000000,
here create_mapping can alloc pgd, but create_mapping in arch/arm/mm/mmu.c can't.
>> +
>> +static int __init create_mapping(unsigned long start, unsigned long end, int node)
>> +{
>> + unsigned long addr = start;
>> + pgd_t *pgd;
>> + pud_t *pud;
>> + pmd_t *pmd;
>> + pte_t *pte;
>
>A blank line would help between the auto variables and the code of the
>function.
Ok, I will add blank line in new version.
>> + pr_info("populating shadow for %lx, %lx\n", start, end);
>
>Blank line here too please.
Ok, I will add blank line in new version.
>> + for (; addr < end; addr += PAGE_SIZE) {
>> + pgd = kasan_pgd_populate(addr, node);
>> + if (!pgd)
>> + return -ENOMEM;
...
>> +void __init kasan_init(void)
>> +{
>> + struct memblock_region *reg;
>> + u64 orig_ttbr0;
>> +
>> + orig_ttbr0 = cpu_get_ttbr(0);
>> +
>> +#ifdef CONFIG_ARM_LPAE
>> + memcpy(tmp_pmd_table, pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)), sizeof(tmp_pmd_table));
>> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
>> + set_pgd(&tmp_page_table[pgd_index(KASAN_SHADOW_START)], __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
>> + cpu_set_ttbr0(__pa(tmp_page_table));
>> +#else
>> + memcpy(tmp_page_table, swapper_pg_dir, sizeof(tmp_page_table));
>> + cpu_set_ttbr0(__pa(tmp_page_table));
>> +#endif
>> + flush_cache_all();
>> + local_flush_bp_all();
>> + local_flush_tlb_all();
>What are you trying to achieve with all this complexity? Some comments
>might be useful, especially for those of us who don't know the internals
>of kasan.
OK, I will add some comments to the kasan_init function in the new version.
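For what it's worth, the sequence quoted above seems to do the following (my reading of the code, not the author's own commentary; treat it as a sketch):

```c
/*
 * Apparent intent of the quoted kasan_init() prologue:
 *
 *   1. Save the current TTBR0 (orig_ttbr0) so it can be restored later.
 *   2. Copy swapper_pg_dir into tmp_page_table (and, on LPAE, copy the
 *      pmd level covering KASAN_SHADOW_START into tmp_pmd_table and hook
 *      it into tmp_page_table).
 *   3. Point TTBR0 at the temporary tables, then flush caches, branch
 *      predictor, and TLB.
 *
 * The CPU then translates through the temporary copy, so the real
 * swapper_pg_dir entries covering the shadow region can be torn down and
 * repopulated with freshly allocated shadow pages without pulling the
 * mapping out from under the running kernel.
 */
```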
...
>> + for_each_memblock(memory, reg) {
>> + void *start = __va(reg->base);
>> + void *end = __va(reg->base + reg->size);
>
>Isn't this going to complain if the translation macro debugging is enabled?
Sorry, I don't know what the translation macro is. Could you tell me?
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 253+ messages in thread
* Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory
2017-10-19 12:01 ` Russell King - ARM Linux
@ 2018-02-26 13:09 ` Liuwenliang (Abbott Liu)
-1 siblings, 0 replies; 253+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2018-02-26 13:09 UTC (permalink / raw)
To: Russell King - ARM Linux, Dmitry Osipenko
Cc: aryabinin, afzal.mohd.ma, f.fainelli, labbott, kirill.shutemov,
mhocko, cdall, marc.zyngier, catalin.marinas, akpm, mawilcox,
tglx, thgarnie, keescook, arnd, vladimir.murzin, tixy,
ard.biesheuvel, robin.murphy, mingo, grygorii.strashko, glider,
dvyukov, opendmb, linux-arm-kernel, linux-kernel, kasan-dev,
linux-mm, Jiazhenghua, Dailei, Zengweilin, Heshaoliang
On Oct 19, 2017 at 19:09, Russell King - ARM Linux [mailto:linux@armlinux.org.uk] wrote:
>On Thu, Oct 12, 2017 at 02:42:49AM +0300, Dmitry Osipenko wrote:
>> On 11.10.2017 11:22, Abbott Liu wrote:
>> > +void __init kasan_map_early_shadow(pgd_t *pgdp)
>> > +{
>> > + int i;
>> > + unsigned long start = KASAN_SHADOW_START;
>> > + unsigned long end = KASAN_SHADOW_END;
>> > + unsigned long addr;
>> > + unsigned long next;
>> > + pgd_t *pgd;
>> > +
>> > + for (i = 0; i < PTRS_PER_PTE; i++)
>> > + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
>> > + &kasan_zero_pte[i], pfn_pte(
>> > + virt_to_pfn(kasan_zero_page),
>> > + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN)));
>>
>> Shouldn't all __pgprot's contain L_PTE_MT_WRITETHROUGH ?
>
>One of the architecture restrictions is that the cache attributes of
>all aliases should match (but there is a specific workaround that
>permits this, provided that the dis-similar mappings aren't accessed
>without certain intervening instructions.)
>
>Why should it be L_PTE_MT_WRITETHROUGH, and not the same cache
>attributes as the lowmem mapping?
>
This maps the KASan shadow that is used at the early stage of kernel start (from the
beginning of start_kernel until paging_init). At this stage we only read the KASan shadow,
which is initialized to zero; we never write it.
We will map the KASan shadow again with PAGE_KERNEL flags:
pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);

	if (pte_none(*pte)) {
		pte_t entry;
		void *p = kasan_alloc_block(PAGE_SIZE, node);

		if (!p)
			return NULL;
		entry = pfn_pte(virt_to_pfn(p), __pgprot(pgprot_val(PAGE_KERNEL)));
		set_pte_at(&init_mm, addr, pte, entry);
	}
	return pte;
}
^ permalink raw reply [flat|nested] 253+ messages in thread
end of thread, other threads:[~2018-02-26 13:09 UTC | newest]
Thread overview: 253+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-11 8:22 [PATCH 00/11] KASan for arm Abbott Liu
2017-10-11 8:22 ` [PATCH 01/11] Initialize the mapping of KASan shadow memory Abbott Liu
2017-10-11 19:39 ` Florian Fainelli
2017-10-11 21:41 ` Russell King - ARM Linux
2017-10-17 13:28 ` Liuwenliang (Lamb)
2017-10-11 23:42 ` Dmitry Osipenko
2017-10-19 6:52 ` Liuwenliang (Lamb)
2017-10-19 12:01 ` Russell King - ARM Linux
2018-02-26 13:09 ` Re: " Liuwenliang (Abbott Liu)
2017-10-12 7:58 ` Marc Zyngier
2017-11-09 7:46 ` Liuwenliang (Abbott Liu)
2017-11-09 10:10 ` Marc Zyngier
2017-11-15 10:20 ` Liuwenliang (Abbott Liu)
2017-11-15 10:35 ` Marc Zyngier
2017-11-15 13:16 ` Liuwenliang (Abbott Liu)
2017-11-15 13:54 ` Marc Zyngier
2017-11-16 3:07 ` Liuwenliang (Abbott Liu)
2017-11-16 9:54 ` Marc Zyngier
2017-11-16 14:24 ` Liuwenliang (Abbott Liu)
2017-11-16 14:40 ` Marc Zyngier
2017-11-17 1:39 ` Re: " Liuwenliang (Abbott Liu)
2017-11-17 7:18 ` Liuwenliang (Abbott Liu)
2017-11-17 7:35 ` Christoffer Dall
2017-11-18 10:40 ` Liuwenliang (Abbott Liu)
2017-11-18 13:48 ` Marc Zyngier
2017-11-21 7:59 ` Re: " Liuwenliang (Abbott Liu)
2017-11-21 9:40 ` Russell King - ARM Linux
2017-11-21 9:46 ` Marc Zyngier
2017-11-21 12:29 ` Mark Rutland
2017-11-22 12:56 ` Liuwenliang (Abbott Liu)
2017-11-22 13:06 ` Marc Zyngier
2017-11-23 1:54 ` Liuwenliang (Abbott Liu)
2017-11-23 15:22 ` Russell King - ARM Linux
2017-11-27 1:23 ` Liuwenliang (Abbott Liu)
2017-11-23 15:31 ` Mark Rutland
2017-11-27 1:26 ` Re: " Liuwenliang (Abbott Liu)
2017-10-19 11:09 ` Russell King - ARM Linux
2018-02-24 14:28 ` Liuwenliang (Abbott Liu)
2017-10-11 8:22 ` [PATCH 02/11] replace memory function Abbott Liu
2017-10-19 12:05 ` Russell King - ARM Linux
2017-10-22 12:42 ` Re: " Liuwenliang (Lamb)
2017-10-11 8:22 ` [PATCH 03/11] arm: Kconfig: enable KASan Abbott Liu
2017-10-11 19:15 ` Florian Fainelli
2017-10-19 12:34 ` Russell King - ARM Linux
2017-10-22 12:27 ` Liuwenliang (Lamb)
2017-10-11 8:22 ` [PATCH 04/11] Define the virtual space of KASan's shadow region Abbott Liu
2017-10-14 11:41 ` kbuild test robot
2017-10-16 11:42 ` Liuwenliang (Lamb)
2017-10-16 12:14 ` Ard Biesheuvel
2017-10-17 11:27 ` Liuwenliang (Lamb)
2017-10-17 11:52 ` Ard Biesheuvel
2017-10-17 13:02 ` Liuwenliang (Lamb)
2017-10-19 12:43 ` Russell King - ARM Linux
2017-10-22 12:12 ` Liuwenliang (Lamb)
2017-10-19 12:41 ` Russell King - ARM Linux
2017-10-19 12:40 ` Russell King - ARM Linux
2017-10-11 8:22 ` [PATCH 05/11] Disable kasan's instrumentation Abbott Liu
2017-10-11 19:16 ` Florian Fainelli
2017-10-19 12:47 ` Russell King - ARM Linux
2017-11-15 10:19 ` Liuwenliang (Abbott Liu)
2017-10-11 8:22 ` [PATCH 06/11] change memory_is_poisoned_16 for aligned error Abbott Liu
2017-10-11 23:23 ` Andrew Morton
2017-10-12 7:16 ` Dmitry Vyukov
2017-10-12 11:27 ` Liuwenliang (Lamb)
2017-10-19 12:51 ` Russell King - ARM Linux
2017-12-05 14:19 ` Liuwenliang (Abbott Liu)
2017-12-05 17:08 ` Ard Biesheuvel
2018-01-16 8:39 ` Liuwenliang (Abbott Liu)
2017-10-11 8:22 ` [PATCH 07/11] Avoid cleaning the KASan shadow area's mapping table Abbott Liu
2017-10-11 8:22 ` [PATCH 08/11] Add support arm LPAE Abbott Liu
2017-10-11 8:22 ` [PATCH 09/11] Don't need to map the shadow of KASan's shadow memory Abbott Liu
2017-10-19 12:55 ` Russell King - ARM Linux
2017-10-22 12:31 ` Liuwenliang (Lamb)
2017-10-11 8:22 ` [PATCH 10/11] Change mapping of kasan_zero_page int readonly Abbott Liu
2017-10-11 19:19 ` Florian Fainelli
2017-10-11 8:22 ` [PATCH 11/11] Add KASan layout Abbott Liu
2017-10-11 19:13 ` [PATCH 00/11] KASan for arm Florian Fainelli
2017-10-11 19:50 ` Florian Fainelli
2017-10-11 21:36 ` Florian Fainelli
2017-10-11 22:10 ` Laura Abbott
2017-10-11 22:58 ` Russell King - ARM Linux
2017-10-17 12:41 ` Liuwenliang (Lamb)
2017-10-12 4:55 ` Liuwenliang (Lamb)
2017-10-12 7:38 ` Arnd Bergmann
2017-10-17 1:04 ` Re: " Liuwenliang (Lamb)
2018-02-13 18:40 ` Florian Fainelli
2018-02-23 2:10 ` Liuwenliang (Abbott Liu)