* [PATCH v2 0/3] ARM: Support KFENCE feature
@ 2021-11-03 13:38 ` Kefeng Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Kefeng Wang @ 2021-11-03 13:38 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel, linux-kernel
  Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Kefeng Wang

This patchset adds support for the KFENCE feature on ARM. The kfence_test
suite was run on ARM QEMU, both with and without ARM_LPAE, and all tests
passed.

V2:
- drop patch 4 of v1, which is superseded by a new way of skipping the
  kfence test, see commit c40c6e593bf9 ("kfence: test: fail fast if
  disabled at boot")
- fix some issues in the !MMU case
  - drop the useless set_memory_valid() when there is no MMU
  - fix "implicit declaration of function 'is_write_fault'" when there is
    no MMU
- make KFENCE depend on !XIP_KERNEL, not tested with XIP

v1:
https://lore.kernel.org/linux-arm-kernel/20210825092116.149975-1-wangkefeng.wang@huawei.com/
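
For reference, the QEMU runs mentioned above are assumed to have used a
configuration along these lines (illustrative only; the exact .config is
not part of this series):

  CONFIG_KUNIT=y
  CONFIG_KFENCE=y
  CONFIG_KFENCE_KUNIT_TEST=y
  # plus CONFIG_ARM_LPAE=y for the LPAE run, and =n for the non-LPAE run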

Kefeng Wang (3):
  ARM: mm: Provide set_memory_valid()
  ARM: mm: Provide is_write_fault()
  ARM: Support KFENCE for ARM

 arch/arm/Kconfig                  |  1 +
 arch/arm/include/asm/kfence.h     | 53 +++++++++++++++++++++++++++++++
 arch/arm/include/asm/set_memory.h |  1 +
 arch/arm/mm/fault.c               | 16 ++++++++--
 arch/arm/mm/pageattr.c            | 42 ++++++++++++++++++------
 5 files changed, 100 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm/include/asm/kfence.h

-- 
2.26.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v2 1/3] ARM: mm: Provide set_memory_valid()
  2021-11-03 13:38 ` Kefeng Wang
@ 2021-11-03 13:38   ` Kefeng Wang
  -1 siblings, 0 replies; 20+ messages in thread
From: Kefeng Wang @ 2021-11-03 13:38 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel, linux-kernel
  Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Kefeng Wang

This function makes the PTE entries of a kernel address range valid or
invalid; it will be used by a later patch in this series.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
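A minimal usage sketch (not part of this patch; the wrapper name below is
only illustrative) of how a caller such as KFENCE can use the new helper to
make a single page of a page-mapped kernel range fault on access and later
restore it:

  /* 'addr' is assumed to be page aligned and mapped at page granularity. */
  static bool toggle_guard_page(unsigned long addr, bool protect)
  {
          /* Clearing L_PTE_VALID makes any access to the page fault. */
          return set_memory_valid(addr, 1, !protect) == 0;
  }
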
 arch/arm/include/asm/set_memory.h |  1 +
 arch/arm/mm/pageattr.c            | 42 +++++++++++++++++++++++--------
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/set_memory.h b/arch/arm/include/asm/set_memory.h
index ec17fc0fda7a..0211b9c5b14d 100644
--- a/arch/arm/include/asm/set_memory.h
+++ b/arch/arm/include/asm/set_memory.h
@@ -11,6 +11,7 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
+int set_memory_valid(unsigned long addr, int numpages, int enable);
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c
index 9790ae3a8c68..c3c34fe714b0 100644
--- a/arch/arm/mm/pageattr.c
+++ b/arch/arm/mm/pageattr.c
@@ -32,14 +32,31 @@ static bool in_range(unsigned long start, unsigned long size,
 		size <= range_end - start;
 }
 
+/*
+ * This function assumes that the range is mapped with PAGE_SIZE pages.
+ */
+static int __change_memory_common(unsigned long start, unsigned long size,
+				pgprot_t set_mask, pgprot_t clear_mask)
+{
+	struct page_change_data data;
+	int ret;
+
+	data.set_mask = set_mask;
+	data.clear_mask = clear_mask;
+
+	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
+				  &data);
+
+	flush_tlb_kernel_range(start, start + size);
+	return ret;
+}
+
 static int change_memory_common(unsigned long addr, int numpages,
 				pgprot_t set_mask, pgprot_t clear_mask)
 {
 	unsigned long start = addr & PAGE_MASK;
 	unsigned long end = PAGE_ALIGN(addr) + numpages * PAGE_SIZE;
 	unsigned long size = end - start;
-	int ret;
-	struct page_change_data data;
 
 	WARN_ON_ONCE(start != addr);
 
@@ -50,14 +67,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	    !in_range(start, size, VMALLOC_START, VMALLOC_END))
 		return -EINVAL;
 
-	data.set_mask = set_mask;
-	data.clear_mask = clear_mask;
-
-	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
-					&data);
-
-	flush_tlb_kernel_range(start, end);
-	return ret;
+	return __change_memory_common(start, size, set_mask, clear_mask);
 }
 
 int set_memory_ro(unsigned long addr, int numpages)
@@ -87,3 +97,15 @@ int set_memory_x(unsigned long addr, int numpages)
 					__pgprot(0),
 					__pgprot(L_PTE_XN));
 }
+
+int set_memory_valid(unsigned long addr, int numpages, int enable)
+{
+	if (enable)
+		return __change_memory_common(addr, PAGE_SIZE * numpages,
+					      __pgprot(L_PTE_VALID),
+					      __pgprot(0));
+	else
+		return __change_memory_common(addr, PAGE_SIZE * numpages,
+					      __pgprot(0),
+					      __pgprot(L_PTE_VALID));
+}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v2 2/3] ARM: mm: Provide is_write_fault()
  2021-11-03 13:38 ` Kefeng Wang
@ 2021-11-03 13:38   ` Kefeng Wang
  -1 siblings, 0 replies; 20+ messages in thread
From: Kefeng Wang @ 2021-11-03 13:38 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel, linux-kernel
  Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Kefeng Wang

Add is_write_fault() to check whether a fault was caused by a write
access. It will also be used by a later patch in this series.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
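For reference, an annotated copy of the new helper; the comments are an
added interpretation of the FSR bits, not text from the patch itself:

  static inline bool is_write_fault(unsigned int fsr)
  {
          /*
           * FSR_WRITE: the faulting access was a write.
           * FSR_CM:    the "write" was really a cache maintenance
           *            operation, which must not be treated as a
           *            write fault.
           */
          return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
  }
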
 arch/arm/mm/fault.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index bc8779d54a64..f7ab6dabe89f 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -207,6 +207,11 @@ static inline bool is_permission_fault(unsigned int fsr)
 	return false;
 }
 
+static inline bool is_write_fault(unsigned int fsr)
+{
+	return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
+}
+
 static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int flags,
 		unsigned long vma_flags, struct pt_regs *regs)
@@ -261,7 +266,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
 
-	if ((fsr & FSR_WRITE) && !(fsr & FSR_CM)) {
+	if (is_write_fault(fsr)) {
 		flags |= FAULT_FLAG_WRITE;
 		vm_flags = VM_WRITE;
 	}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v2 3/3] ARM: Support KFENCE for ARM
  2021-11-03 13:38 ` Kefeng Wang
@ 2021-11-03 13:38   ` Kefeng Wang
  -1 siblings, 0 replies; 20+ messages in thread
From: Kefeng Wang @ 2021-11-03 13:38 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel, linux-kernel
  Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Kefeng Wang

Add architecture-specific implementation details for KFENCE and enable
KFENCE on ARM. In particular, this implements the required interface in
<asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can be
set individually. Therefore, force the kfence pool to be mapped at page
granularity.

This patch was tested using the testcases in kfence_test.c; all of them
passed both with and without ARM_LPAE.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
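For context, arch_kfence_init_pool() is called once by the KFENCE core when
the pool is set up, and kfence_protect_page() is called whenever the core
needs to change the protection of a page in the pool, roughly as in the
sketch below (paraphrased; the wrapper name is only illustrative):

  /* 'addr' may be any address within the pool page to protect/unprotect. */
  static bool example_toggle_pool_page(unsigned long addr, bool protect)
  {
          return kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), protect);
  }
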
 arch/arm/Kconfig              |  1 +
 arch/arm/include/asm/kfence.h | 53 +++++++++++++++++++++++++++++++++++
 arch/arm/mm/fault.c           | 19 ++++++++-----
 3 files changed, 66 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm/include/asm/kfence.h

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index b9f72337224c..6d1f6f48995c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -69,6 +69,7 @@ config ARM
 	select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
+	select HAVE_ARCH_KFENCE if MMU && !XIP_KERNEL
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/kfence.h b/arch/arm/include/asm/kfence.h
new file mode 100644
index 000000000000..7980d0f2271f
--- /dev/null
+++ b/arch/arm/include/asm/kfence.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_ARM_KFENCE_H
+#define __ASM_ARM_KFENCE_H
+
+#include <linux/kfence.h>
+
+#include <asm/pgalloc.h>
+#include <asm/set_memory.h>
+
+static inline int split_pmd_page(pmd_t *pmd, unsigned long addr)
+{
+	int i;
+	unsigned long pfn = PFN_DOWN(__pa(addr));
+	pte_t *pte = pte_alloc_one_kernel(&init_mm);
+
+	if (!pte)
+		return -ENOMEM;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte_ext(pte + i, pfn_pte(pfn + i, PAGE_KERNEL), 0);
+	pmd_populate_kernel(&init_mm, pmd, pte);
+
+	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+	return 0;
+}
+
+static inline bool arch_kfence_init_pool(void)
+{
+	unsigned long addr;
+	pmd_t *pmd;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		pmd = pmd_off_k(addr);
+
+		if (pmd_leaf(*pmd)) {
+			if (split_pmd_page(pmd, addr & PMD_MASK))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	set_memory_valid(addr, 1, !protect);
+
+	return true;
+}
+
+#endif /* __ASM_ARM_KFENCE_H */
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index f7ab6dabe89f..49148b675b43 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -17,6 +17,7 @@
 #include <linux/sched/debug.h>
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
+#include <linux/kfence.h>
 
 #include <asm/system_misc.h>
 #include <asm/system_info.h>
@@ -99,6 +100,11 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr)
 { }
 #endif					/* CONFIG_MMU */
 
+static inline bool is_write_fault(unsigned int fsr)
+{
+	return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
+}
+
 static void die_kernel_fault(const char *msg, struct mm_struct *mm,
 			     unsigned long addr, unsigned int fsr,
 			     struct pt_regs *regs)
@@ -131,10 +137,14 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 	/*
 	 * No handler, we'll have to terminate things with extreme prejudice.
 	 */
-	if (addr < PAGE_SIZE)
+	if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
-	else
+	} else {
+		if (kfence_handle_page_fault(addr, is_write_fault(fsr), regs))
+			return;
+
 		msg = "paging request";
+	}
 
 	die_kernel_fault(msg, mm, addr, fsr, regs);
 }
@@ -207,11 +217,6 @@ static inline bool is_permission_fault(unsigned int fsr)
 	return false;
 }
 
-static inline bool is_write_fault(unsigned int fsr)
-{
-	return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
-}
-
 static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int flags,
 		unsigned long vma_flags, struct pt_regs *regs)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 3/3] ARM: Support KFENCE for ARM
  2021-11-03 13:38   ` Kefeng Wang
@ 2021-11-03 16:22     ` Alexander Potapenko
  -1 siblings, 0 replies; 20+ messages in thread
From: Alexander Potapenko @ 2021-11-03 16:22 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Russell King, linux-arm-kernel, linux-kernel, Marco Elver, Dmitry Vyukov

On Wed, Nov 3, 2021 at 2:26 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
> Add architecture-specific implementation details for KFENCE and enable
> KFENCE on ARM. In particular, this implements the required interface in
> <asm/kfence.h>.
>
> KFENCE requires that attributes for pages from its memory pool can be
> set individually. Therefore, force the kfence pool to be mapped at page
> granularity.
>
> This patch was tested using the testcases in kfence_test.c; all of them
> passed both with and without ARM_LPAE.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  arch/arm/Kconfig              |  1 +
>  arch/arm/include/asm/kfence.h | 53 +++++++++++++++++++++++++++++++++++
>  arch/arm/mm/fault.c           | 19 ++++++++-----
>  3 files changed, 66 insertions(+), 7 deletions(-)
>  create mode 100644 arch/arm/include/asm/kfence.h
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index b9f72337224c..6d1f6f48995c 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -69,6 +69,7 @@ config ARM
>         select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
>         select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
>         select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
> +       select HAVE_ARCH_KFENCE if MMU && !XIP_KERNEL
>         select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
>         select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
>         select HAVE_ARCH_MMAP_RND_BITS if MMU
> diff --git a/arch/arm/include/asm/kfence.h b/arch/arm/include/asm/kfence.h
> new file mode 100644
> index 000000000000..7980d0f2271f
> --- /dev/null
> +++ b/arch/arm/include/asm/kfence.h
> @@ -0,0 +1,53 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef __ASM_ARM_KFENCE_H
> +#define __ASM_ARM_KFENCE_H
> +
> +#include <linux/kfence.h>
> +
> +#include <asm/pgalloc.h>
> +#include <asm/set_memory.h>
> +
> +static inline int split_pmd_page(pmd_t *pmd, unsigned long addr)
> +{
> +       int i;
> +       unsigned long pfn = PFN_DOWN(__pa(addr));
> +       pte_t *pte = pte_alloc_one_kernel(&init_mm);
> +
> +       if (!pte)
> +               return -ENOMEM;
> +
> +       for (i = 0; i < PTRS_PER_PTE; i++)
> +               set_pte_ext(pte + i, pfn_pte(pfn + i, PAGE_KERNEL), 0);
> +       pmd_populate_kernel(&init_mm, pmd, pte);
> +
> +       flush_tlb_kernel_range(addr, addr + PMD_SIZE);
> +       return 0;
> +}
> +
> +static inline bool arch_kfence_init_pool(void)
> +{
> +       unsigned long addr;
> +       pmd_t *pmd;
> +
> +       for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
> +            addr += PAGE_SIZE) {
> +               pmd = pmd_off_k(addr);
> +
> +               if (pmd_leaf(*pmd)) {
> +                       if (split_pmd_page(pmd, addr & PMD_MASK))
> +                               return false;
> +               }
> +       }
> +
> +       return true;
> +}
> +
> +static inline bool kfence_protect_page(unsigned long addr, bool protect)
> +{
> +       set_memory_valid(addr, 1, !protect);
> +
> +       return true;
> +}
> +
> +#endif /* __ASM_ARM_KFENCE_H */
> diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
> index f7ab6dabe89f..49148b675b43 100644
> --- a/arch/arm/mm/fault.c
> +++ b/arch/arm/mm/fault.c
> @@ -17,6 +17,7 @@
>  #include <linux/sched/debug.h>
>  #include <linux/highmem.h>
>  #include <linux/perf_event.h>
> +#include <linux/kfence.h>
>
>  #include <asm/system_misc.h>
>  #include <asm/system_info.h>
> @@ -99,6 +100,11 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr)
>  { }
>  #endif                                 /* CONFIG_MMU */
>
> +static inline bool is_write_fault(unsigned int fsr)
> +{
> +       return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
> +}

Please don't increase the diff by moving the code around. Consider
putting is_write_fault() in the right place in "ARM: mm: Provide
is_write_fault()" instead.
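
In other words (a sketch of the suggestion, not an actual follow-up patch),
patch 2 would introduce the helper directly at the place this patch needs
it, i.e. just after the #endif /* CONFIG_MMU */ that follows show_pte() and
before die_kernel_fault(), so that this patch only adds the
kfence_handle_page_fault() call and does not have to move code:

  /* Suggested final placement in arch/arm/mm/fault.c, from patch 2 on. */
  static inline bool is_write_fault(unsigned int fsr)
  {
          return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
  }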

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 3/3] ARM: Support KFENCE for ARM
  2021-11-03 16:22     ` Alexander Potapenko
@ 2021-11-04  1:14       ` Kefeng Wang
  -1 siblings, 0 replies; 20+ messages in thread
From: Kefeng Wang @ 2021-11-04  1:14 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Russell King, linux-arm-kernel, linux-kernel, Marco Elver, Dmitry Vyukov



On 2021/11/4 0:22, Alexander Potapenko wrote:
> On Wed, Nov 3, 2021 at 2:26 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>
>> Add architecture-specific implementation details for KFENCE and enable
>> KFENCE on ARM. In particular, this implements the required interface in
>> <asm/kfence.h>.
>>
>> KFENCE requires that attributes for pages from its memory pool can be
>> set individually. Therefore, force the kfence pool to be mapped at page
>> granularity.
>>
>> This patch was tested using the testcases in kfence_test.c; all of them
>> passed both with and without ARM_LPAE.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
...
>> diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
>> index f7ab6dabe89f..49148b675b43 100644
>> --- a/arch/arm/mm/fault.c
>> +++ b/arch/arm/mm/fault.c
>> @@ -17,6 +17,7 @@
>>   #include <linux/sched/debug.h>
>>   #include <linux/highmem.h>
>>   #include <linux/perf_event.h>
>> +#include <linux/kfence.h>
>>
>>   #include <asm/system_misc.h>
>>   #include <asm/system_info.h>
>> @@ -99,6 +100,11 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr)
>>   { }
>>   #endif                                 /* CONFIG_MMU */
>>
>> +static inline bool is_write_fault(unsigned int fsr)
>> +{
>> +       return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
>> +}
> 
> Please don't increase the diff by moving the code around. Consider
> putting is_write_fault() in the right place in "ARM: mm: Provide
> is_write_fault()" instead.
Sure. Let's wait a while to see whether there are any other comments,
and then I will resend. Thanks.
> .
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 0/3] ARM: Support KFENCE feature
  2021-11-03 13:38 ` Kefeng Wang
@ 2021-11-04  7:00   ` Marco Elver
  -1 siblings, 0 replies; 20+ messages in thread
From: Marco Elver @ 2021-11-04  7:00 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Russell King, linux-arm-kernel, linux-kernel,
	Alexander Potapenko, Dmitry Vyukov

On Wed, 3 Nov 2021 at 14:26, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
> This patchset adds support for the KFENCE feature on ARM. The kfence_test
> suite was run on ARM QEMU, both with and without ARM_LPAE, and all tests
> passed.
>
> V2:
> - drop patch 4 of v1, which is superseded by a new way of skipping the
>   kfence test, see commit c40c6e593bf9 ("kfence: test: fail fast if
>   disabled at boot")
> - fix some issues in the !MMU case
>   - drop the useless set_memory_valid() when there is no MMU
>   - fix "implicit declaration of function 'is_write_fault'" when there is
>     no MMU
> - make KFENCE depend on !XIP_KERNEL, not tested with XIP
>
> v1:
> https://lore.kernel.org/linux-arm-kernel/20210825092116.149975-1-wangkefeng.wang@huawei.com/
>
> Kefeng Wang (3):
>   ARM: mm: Provide set_memory_valid()
>   ARM: mm: Provide is_write_fault()
>   ARM: Support KFENCE for ARM

Looks good to me.

Acked-by: Marco Elver <elver@google.com>


>  arch/arm/Kconfig                  |  1 +
>  arch/arm/include/asm/kfence.h     | 53 +++++++++++++++++++++++++++++++
>  arch/arm/include/asm/set_memory.h |  1 +
>  arch/arm/mm/fault.c               | 16 ++++++++--
>  arch/arm/mm/pageattr.c            | 42 ++++++++++++++++++------
>  5 files changed, 100 insertions(+), 13 deletions(-)
>  create mode 100644 arch/arm/include/asm/kfence.h
>
> --
> 2.26.2
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 0/3] ARM: Support KFENCE feature
  2021-11-04  7:00   ` Marco Elver
@ 2021-11-04  7:17     ` Kefeng Wang
  -1 siblings, 0 replies; 20+ messages in thread
From: Kefeng Wang @ 2021-11-04  7:17 UTC (permalink / raw)
  To: Marco Elver
  Cc: Russell King, linux-arm-kernel, linux-kernel,
	Alexander Potapenko, Dmitry Vyukov



On 2021/11/4 15:00, Marco Elver wrote:
> On Wed, 3 Nov 2021 at 14:26, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>
>> This patchset adds support for the KFENCE feature on ARM. The kfence_test
>> suite was run on ARM QEMU, both with and without ARM_LPAE, and all tests
>> passed.
>>
>> V2:
>> - drop patch 4 of v1, which is superseded by a new way of skipping the
>>   kfence test, see commit c40c6e593bf9 ("kfence: test: fail fast if
>>   disabled at boot")
>> - fix some issues in the !MMU case
>>   - drop the useless set_memory_valid() when there is no MMU
>>   - fix "implicit declaration of function 'is_write_fault'" when there is
>>     no MMU
>> - make KFENCE depend on !XIP_KERNEL, not tested with XIP
>>
>> v1:
>> https://lore.kernel.org/linux-arm-kernel/20210825092116.149975-1-wangkefeng.wang@huawei.com/
>>
>> Kefeng Wang (3):
>>    ARM: mm: Provide set_memory_valid()
>>    ARM: mm: Provide is_write_fault()
>>    ARM: Support KFENCE for ARM
> 
> Looks good to me.
> 
> Acked-by: Marco Elver <elver@google.com>

Thanks Marco.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 0/3] ARM: Support KFENCE feature
  2021-11-03 13:38 ` Kefeng Wang
@ 2021-11-04 12:12   ` Russell King (Oracle)
  -1 siblings, 0 replies; 20+ messages in thread
From: Russell King (Oracle) @ 2021-11-04 12:12 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: linux-arm-kernel, linux-kernel, Alexander Potapenko, Marco Elver,
	Dmitry Vyukov

The ARM tree is closed; we're in the mainline merge window. Please
resend after -rc1 is released.

On Wed, Nov 03, 2021 at 09:38:42PM +0800, Kefeng Wang wrote:
> This patchset adds support for the KFENCE feature on ARM. The kfence_test
> suite was run on ARM QEMU, both with and without ARM_LPAE, and all tests
> passed.
>
> V2:
> - drop patch 4 of v1, which is superseded by a new way of skipping the
>   kfence test, see commit c40c6e593bf9 ("kfence: test: fail fast if
>   disabled at boot")
> - fix some issues in the !MMU case
>   - drop the useless set_memory_valid() when there is no MMU
>   - fix "implicit declaration of function 'is_write_fault'" when there is
>     no MMU
> - make KFENCE depend on !XIP_KERNEL, not tested with XIP
> 
> v1:
> https://lore.kernel.org/linux-arm-kernel/20210825092116.149975-1-wangkefeng.wang@huawei.com/
> 
> Kefeng Wang (3):
>   ARM: mm: Provide set_memory_valid()
>   ARM: mm: Provide is_write_fault()
>   ARM: Support KFENCE for ARM
> 
>  arch/arm/Kconfig                  |  1 +
>  arch/arm/include/asm/kfence.h     | 53 +++++++++++++++++++++++++++++++
>  arch/arm/include/asm/set_memory.h |  1 +
>  arch/arm/mm/fault.c               | 16 ++++++++--
>  arch/arm/mm/pageattr.c            | 42 ++++++++++++++++++------
>  5 files changed, 100 insertions(+), 13 deletions(-)
>  create mode 100644 arch/arm/include/asm/kfence.h
> 
> -- 
> 2.26.2
> 
> 

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 0/3] ARM: Support KFENCE feature
  2021-11-04 12:12   ` Russell King (Oracle)
@ 2021-11-04 12:38     ` Kefeng Wang
  -1 siblings, 0 replies; 20+ messages in thread
From: Kefeng Wang @ 2021-11-04 12:38 UTC (permalink / raw)
  To: Russell King (Oracle)
  Cc: linux-arm-kernel, linux-kernel, Alexander Potapenko, Marco Elver,
	Dmitry Vyukov



On 2021/11/4 20:12, Russell King (Oracle) wrote:
> The ARM tree is closed; we're in the mainline merge window. Please
> resend after -rc1 is released.

Got it, will do.

> 
> On Wed, Nov 03, 2021 at 09:38:42PM +0800, Kefeng Wang wrote:
>> This patchset adds support for the KFENCE feature on ARM. The kfence_test
>> suite was run on ARM QEMU, both with and without ARM_LPAE, and all tests
>> passed.
>>
>> V2:
>> - drop patch 4 of v1, which is superseded by a new way of skipping the
>>   kfence test, see commit c40c6e593bf9 ("kfence: test: fail fast if
>>   disabled at boot")
>> - fix some issues in the !MMU case
>>   - drop the useless set_memory_valid() when there is no MMU
>>   - fix "implicit declaration of function 'is_write_fault'" when there is
>>     no MMU
>> - make KFENCE depend on !XIP_KERNEL, not tested with XIP
>>
>> v1:
>> https://lore.kernel.org/linux-arm-kernel/20210825092116.149975-1-wangkefeng.wang@huawei.com/
>>
>> Kefeng Wang (3):
>>    ARM: mm: Provide set_memory_valid()
>>    ARM: mm: Provide is_write_fault()
>>    ARM: Support KFENCE for ARM
>>
>>   arch/arm/Kconfig                  |  1 +
>>   arch/arm/include/asm/kfence.h     | 53 +++++++++++++++++++++++++++++++
>>   arch/arm/include/asm/set_memory.h |  1 +
>>   arch/arm/mm/fault.c               | 16 ++++++++--
>>   arch/arm/mm/pageattr.c            | 42 ++++++++++++++++++------
>>   5 files changed, 100 insertions(+), 13 deletions(-)
>>   create mode 100644 arch/arm/include/asm/kfence.h
>>
>> -- 
>> 2.26.2
>>
>>
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2021-11-04 12:40 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-03 13:38 [PATCH v2 0/3] ARM: Support KFENCE feature Kefeng Wang
2021-11-03 13:38 ` Kefeng Wang
2021-11-03 13:38 ` [PATCH v2 1/3] ARM: mm: Provide set_memory_valid() Kefeng Wang
2021-11-03 13:38   ` Kefeng Wang
2021-11-03 13:38 ` [PATCH v2 2/3] ARM: mm: Provide is_write_fault() Kefeng Wang
2021-11-03 13:38   ` Kefeng Wang
2021-11-03 13:38 ` [PATCH v2 3/3] ARM: Support KFENCE for ARM Kefeng Wang
2021-11-03 13:38   ` Kefeng Wang
2021-11-03 16:22   ` Alexander Potapenko
2021-11-03 16:22     ` Alexander Potapenko
2021-11-04  1:14     ` Kefeng Wang
2021-11-04  1:14       ` Kefeng Wang
2021-11-04  7:00 ` [PATCH v2 0/3] ARM: Support KFENCE feature Marco Elver
2021-11-04  7:00   ` Marco Elver
2021-11-04  7:17   ` Kefeng Wang
2021-11-04  7:17     ` Kefeng Wang
2021-11-04 12:12 ` Russell King (Oracle)
2021-11-04 12:12   ` Russell King (Oracle)
2021-11-04 12:38   ` Kefeng Wang
2021-11-04 12:38     ` Kefeng Wang
