* [PATCH Resend v2 0/3] ARM: Support KFENCE feature
@ 2021-11-15 13:48 ` Kefeng Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Kefeng Wang @ 2021-11-15 13:48 UTC (permalink / raw)
  To: linux, linux-arm-kernel, linux-kernel; +Cc: glider, elver, dvyukov, Kefeng Wang

This patchset adds support for the KFENCE feature on ARM. The kfence_test
suite was run on ARM QEMU, with and without ARM_LPAE, and all tests passed.

V2 Resend:
- move is_write_fault() into patch2 instead of patch3, as suggested by Alexander
- add Acked-by from Marco
- rebase on v5.16-rc1

V2:
- drop patch4 from v1; a new way to skip the kfence test is used instead,
  see commit c40c6e593bf9 ("kfence: test: fail fast if disabled at boot")
- fix some issues with !MMU
  - drop the useless set_memory_valid() under !MMU
  - fix implicit declaration of function ‘is_write_fault’ under !MMU
- make KFENCE depend on !XIP_KERNEL (not tested with XIP)

v1:
https://lore.kernel.org/linux-arm-kernel/20210825092116.149975-1-wangkefeng.wang@huawei.com/

Kefeng Wang (3):
  ARM: mm: Provide set_memory_valid()
  ARM: mm: Provide is_write_fault()
  ARM: Support KFENCE for ARM

 arch/arm/Kconfig                  |  1 +
 arch/arm/include/asm/kfence.h     | 53 +++++++++++++++++++++++++++++++
 arch/arm/include/asm/set_memory.h |  1 +
 arch/arm/mm/fault.c               | 16 ++++++++--
 arch/arm/mm/pageattr.c            | 42 ++++++++++++++++++------
 5 files changed, 100 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm/include/asm/kfence.h

-- 
2.26.2


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH Resend v2 1/3] ARM: mm: Provide set_memory_valid()
  2021-11-15 13:48 ` Kefeng Wang
@ 2021-11-15 13:48   ` Kefeng Wang
  -1 siblings, 0 replies; 10+ messages in thread
From: Kefeng Wang @ 2021-11-15 13:48 UTC (permalink / raw)
  To: linux, linux-arm-kernel, linux-kernel; +Cc: glider, elver, dvyukov, Kefeng Wang

This function makes PTE entries valid or invalid; it will be used in a
later patch.

Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm/include/asm/set_memory.h |  1 +
 arch/arm/mm/pageattr.c            | 42 +++++++++++++++++++++++--------
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/set_memory.h b/arch/arm/include/asm/set_memory.h
index ec17fc0fda7a..0211b9c5b14d 100644
--- a/arch/arm/include/asm/set_memory.h
+++ b/arch/arm/include/asm/set_memory.h
@@ -11,6 +11,7 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
+int set_memory_valid(unsigned long addr, int numpages, int enable);
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c
index 9790ae3a8c68..c3c34fe714b0 100644
--- a/arch/arm/mm/pageattr.c
+++ b/arch/arm/mm/pageattr.c
@@ -32,14 +32,31 @@ static bool in_range(unsigned long start, unsigned long size,
 		size <= range_end - start;
 }
 
+/*
+ * This function assumes that the range is mapped with PAGE_SIZE pages.
+ */
+static int __change_memory_common(unsigned long start, unsigned long size,
+				pgprot_t set_mask, pgprot_t clear_mask)
+{
+	struct page_change_data data;
+	int ret;
+
+	data.set_mask = set_mask;
+	data.clear_mask = clear_mask;
+
+	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
+				  &data);
+
+	flush_tlb_kernel_range(start, start + size);
+	return ret;
+}
+
 static int change_memory_common(unsigned long addr, int numpages,
 				pgprot_t set_mask, pgprot_t clear_mask)
 {
 	unsigned long start = addr & PAGE_MASK;
 	unsigned long end = PAGE_ALIGN(addr) + numpages * PAGE_SIZE;
 	unsigned long size = end - start;
-	int ret;
-	struct page_change_data data;
 
 	WARN_ON_ONCE(start != addr);
 
@@ -50,14 +67,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	    !in_range(start, size, VMALLOC_START, VMALLOC_END))
 		return -EINVAL;
 
-	data.set_mask = set_mask;
-	data.clear_mask = clear_mask;
-
-	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
-					&data);
-
-	flush_tlb_kernel_range(start, end);
-	return ret;
+	return __change_memory_common(start, size, set_mask, clear_mask);
 }
 
 int set_memory_ro(unsigned long addr, int numpages)
@@ -87,3 +97,15 @@ int set_memory_x(unsigned long addr, int numpages)
 					__pgprot(0),
 					__pgprot(L_PTE_XN));
 }
+
+int set_memory_valid(unsigned long addr, int numpages, int enable)
+{
+	if (enable)
+		return __change_memory_common(addr, PAGE_SIZE * numpages,
+					      __pgprot(L_PTE_VALID),
+					      __pgprot(0));
+	else
+		return __change_memory_common(addr, PAGE_SIZE * numpages,
+					      __pgprot(0),
+					      __pgprot(L_PTE_VALID));
+}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH Resend v2 2/3] ARM: mm: Provide is_write_fault()
  2021-11-15 13:48 ` Kefeng Wang
@ 2021-11-15 13:48   ` Kefeng Wang
  -1 siblings, 0 replies; 10+ messages in thread
From: Kefeng Wang @ 2021-11-15 13:48 UTC (permalink / raw)
  To: linux, linux-arm-kernel, linux-kernel; +Cc: glider, elver, dvyukov, Kefeng Wang

This function checks whether a fault was caused by a write access. It will
also be called from die_kernel_fault() in the next patch, so place it
before die_kernel_fault().

Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm/mm/fault.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index bc8779d54a64..1207ed925039 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -99,6 +99,11 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr)
 { }
 #endif					/* CONFIG_MMU */
 
+static inline bool is_write_fault(unsigned int fsr)
+{
+	return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
+}
+
 static void die_kernel_fault(const char *msg, struct mm_struct *mm,
 			     unsigned long addr, unsigned int fsr,
 			     struct pt_regs *regs)
@@ -261,7 +266,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
 
-	if ((fsr & FSR_WRITE) && !(fsr & FSR_CM)) {
+	if (is_write_fault(fsr)) {
 		flags |= FAULT_FLAG_WRITE;
 		vm_flags = VM_WRITE;
 	}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH Resend v2 3/3] ARM: Support KFENCE for ARM
  2021-11-15 13:48 ` Kefeng Wang
@ 2021-11-15 13:48   ` Kefeng Wang
  -1 siblings, 0 replies; 10+ messages in thread
From: Kefeng Wang @ 2021-11-15 13:48 UTC (permalink / raw)
  To: linux, linux-arm-kernel, linux-kernel; +Cc: glider, elver, dvyukov, Kefeng Wang

Add the architecture-specific implementation details for KFENCE and enable
KFENCE on ARM. In particular, this implements the required interface in
<asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can
individually be set. Therefore, force the kfence pool to be mapped
at page granularity.

This patch was tested with the test cases in kfence_test.c; all of them
passed, with and without ARM_LPAE.

Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm/Kconfig              |  1 +
 arch/arm/include/asm/kfence.h | 53 +++++++++++++++++++++++++++++++++++
 arch/arm/mm/fault.c           |  9 ++++--
 3 files changed, 61 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm/include/asm/kfence.h

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index f0f9e8bec83a..321b0a1c2820 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -69,6 +69,7 @@ config ARM
 	select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
+	select HAVE_ARCH_KFENCE if MMU && !XIP_KERNEL
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/kfence.h b/arch/arm/include/asm/kfence.h
new file mode 100644
index 000000000000..7980d0f2271f
--- /dev/null
+++ b/arch/arm/include/asm/kfence.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_ARM_KFENCE_H
+#define __ASM_ARM_KFENCE_H
+
+#include <linux/kfence.h>
+
+#include <asm/pgalloc.h>
+#include <asm/set_memory.h>
+
+static inline int split_pmd_page(pmd_t *pmd, unsigned long addr)
+{
+	int i;
+	unsigned long pfn = PFN_DOWN(__pa(addr));
+	pte_t *pte = pte_alloc_one_kernel(&init_mm);
+
+	if (!pte)
+		return -ENOMEM;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte_ext(pte + i, pfn_pte(pfn + i, PAGE_KERNEL), 0);
+	pmd_populate_kernel(&init_mm, pmd, pte);
+
+	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+	return 0;
+}
+
+static inline bool arch_kfence_init_pool(void)
+{
+	unsigned long addr;
+	pmd_t *pmd;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		pmd = pmd_off_k(addr);
+
+		if (pmd_leaf(*pmd)) {
+			if (split_pmd_page(pmd, addr & PMD_MASK))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	set_memory_valid(addr, 1, !protect);
+
+	return true;
+}
+
+#endif /* __ASM_ARM_KFENCE_H */
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 1207ed925039..49148b675b43 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -17,6 +17,7 @@
 #include <linux/sched/debug.h>
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
+#include <linux/kfence.h>
 
 #include <asm/system_misc.h>
 #include <asm/system_info.h>
@@ -136,10 +137,14 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 	/*
 	 * No handler, we'll have to terminate things with extreme prejudice.
 	 */
-	if (addr < PAGE_SIZE)
+	if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
-	else
+	} else {
+		if (kfence_handle_page_fault(addr, is_write_fault(fsr), regs))
+			return;
+
 		msg = "paging request";
+	}
 
 	die_kernel_fault(msg, mm, addr, fsr, regs);
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH Resend v2 0/3] ARM: Support KFENCE feature
  2021-11-15 13:48 ` Kefeng Wang
@ 2021-11-29 14:55   ` Kefeng Wang
  -1 siblings, 0 replies; 10+ messages in thread
From: Kefeng Wang @ 2021-11-29 14:55 UTC (permalink / raw)
  To: linux, linux-arm-kernel, linux-kernel; +Cc: glider, elver, dvyukov

Hi Russell,

If there are no more comments, could I send this series to the ARM patch
system? What do you think? Thanks.

On 2021/11/15 21:48, Kefeng Wang wrote:
> This patchset supports Kfence feature, tested the kfence_test on ARM QEMU
> with or without ARM_LPAE and all passed.
>
> V2 Resend:
> - adjust is_write_fault() position in patch2 not patch3, sugguested Alexander
> - Add ACKed from Marco
> - rebased on v5.16-rc1
>
> V2:
> - drop patch4 in v1, which is used a new way to skip kfence test
>    see commit c40c6e593bf9 ("kfence: test: fail fast if disabled at boot")
> - fix some issue about NO MMU
>    - drop useless set_memory_valid() under no mmu
>    - fix implicit declaration of function ‘is_write_fault’ if no mmu
> - make KFENCE depends on !XIP_KERNEL, no tested with xip
>
> v1:
> https://lore.kernel.org/linux-arm-kernel/20210825092116.149975-1-wangkefeng.wang@huawei.com/
>
> Kefeng Wang (3):
>    ARM: mm: Provide set_memory_valid()
>    ARM: mm: Provide is_write_fault()
>    ARM: Support KFENCE for ARM
>
>   arch/arm/Kconfig                  |  1 +
>   arch/arm/include/asm/kfence.h     | 53 +++++++++++++++++++++++++++++++
>   arch/arm/include/asm/set_memory.h |  1 +
>   arch/arm/mm/fault.c               | 16 ++++++++--
>   arch/arm/mm/pageattr.c            | 42 ++++++++++++++++++------
>   5 files changed, 100 insertions(+), 13 deletions(-)
>   create mode 100644 arch/arm/include/asm/kfence.h
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2021-11-29 14:57 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-15 13:48 [PATCH Resend v2 0/3] ARM: Support KFENCE feature Kefeng Wang
2021-11-15 13:48 ` [PATCH Resend v2 1/3] ARM: mm: Provide set_memory_valid() Kefeng Wang
2021-11-15 13:48 ` [PATCH Resend v2 2/3] ARM: mm: Provide is_write_fault() Kefeng Wang
2021-11-15 13:48 ` [PATCH Resend v2 3/3] ARM: Support KFENCE for ARM Kefeng Wang
2021-11-29 14:55 ` [PATCH Resend v2 0/3] ARM: Support KFENCE feature Kefeng Wang
