linux-riscv.lists.infradead.org archive mirror
* [PATCH -next] riscv: Enable KFENCE for riscv64
@ 2021-05-29  8:03 Liu Shixin
  2021-06-09  6:17 ` Liu Shixin
  2021-06-11 11:37 ` Alexander Potapenko
  0 siblings, 2 replies; 4+ messages in thread
From: Liu Shixin @ 2021-05-29  8:03 UTC (permalink / raw)
  To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Potapenko,
	Marco Elver, Dmitry Vyukov
  Cc: linux-riscv, linux-kernel, kasan-dev, Liu Shixin

Add architecture specific implementation details for KFENCE and enable
KFENCE for the riscv64 architecture. In particular, this implements the
required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can
individually be set. Therefore, force the kfence pool to be mapped at
page granularity.

Tested this patch with the test cases in kfence_test.c; all passed.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
1. Add helper function split_pmd_page(), used to split a PMD mapping into PTE mappings.
2. Check the return value of pte_alloc_one_kernel().
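
For reference, a minimal configuration sketch for reproducing the test run
mentioned above. The option and parameter names come from the generic KFENCE
and KUnit code rather than from this patch, so treat them as assumptions:

  CONFIG_KUNIT=y
  CONFIG_KFENCE=y
  CONFIG_KFENCE_KUNIT_TEST=y
  # The sampling interval can be tuned on the kernel command line if needed,
  # e.g. kfence.sample_interval=100 (milliseconds).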

 arch/riscv/Kconfig              |  1 +
 arch/riscv/include/asm/kfence.h | 63 +++++++++++++++++++++++++++++++++
 arch/riscv/mm/fault.c           | 11 +++++-
 3 files changed, 74 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/kfence.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4982130064ef..2f4903a7730f 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -65,6 +65,7 @@ config RISCV
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if MMU && 64BIT
 	select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
+	select HAVE_ARCH_KFENCE if MMU && 64BIT
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KGDB_QXFER_PKT
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
new file mode 100644
index 000000000000..d887a54042aa
--- /dev/null
+++ b/arch/riscv/include/asm/kfence.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_KFENCE_H
+#define _ASM_RISCV_KFENCE_H
+
+#include <linux/kfence.h>
+#include <linux/pfn.h>
+#include <asm-generic/pgalloc.h>
+#include <asm/pgtable.h>
+
+static inline int split_pmd_page(unsigned long addr)
+{
+	int i;
+	unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK)));
+	pmd_t *pmd = pmd_off_k(addr);
+	pte_t *pte = pte_alloc_one_kernel(&init_mm);
+
+	if (!pte)
+		return -ENOMEM;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte(pte + i, pfn_pte(pfn + i, PAGE_KERNEL));
+	set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(pte)), PAGE_TABLE));
+
+	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+	return 0;
+}
+
+static inline bool arch_kfence_init_pool(void)
+{
+	int ret;
+	unsigned long addr;
+	pmd_t *pmd;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		pmd = pmd_off_k(addr);
+
+		if (pmd_leaf(*pmd)) {
+			ret = split_pmd_page(addr);
+			if (ret)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	pte_t *pte = virt_to_kpte(addr);
+
+	if (protect)
+		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+	else
+		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	return true;
+}
+
+#endif /* _ASM_RISCV_KFENCE_H */
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 096463cc6fff..aa08dd2f8fae 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -14,6 +14,7 @@
 #include <linux/signal.h>
 #include <linux/uaccess.h>
 #include <linux/kprobes.h>
+#include <linux/kfence.h>
 
 #include <asm/ptrace.h>
 #include <asm/tlbflush.h>
@@ -45,7 +46,15 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
 	 * Oops. The kernel tried to access some bad page. We'll have to
 	 * terminate things with extreme prejudice.
 	 */
-	msg = (addr < PAGE_SIZE) ? "NULL pointer dereference" : "paging request";
+	if (addr < PAGE_SIZE)
+		msg = "NULL pointer dereference";
+	else {
+		if (kfence_handle_page_fault(addr, regs->cause == EXC_STORE_PAGE_FAULT, regs))
+			return;
+
+		msg = "paging request";
+	}
+
 	die_kernel_fault(msg, addr, regs);
 }
 
-- 
2.18.0.huawei.25
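
For context, a hypothetical demo module (not part of the patch; the module and
function names are made up) that exercises the path this patch adds to
no_context(). KFENCE only samples allocations, so the loop waits until
kmalloc() happens to return an object from the KFENCE pool before touching
memory out of bounds:

#include <linux/delay.h>
#include <linux/kfence.h>
#include <linux/module.h>
#include <linux/slab.h>

static int __init kfence_oob_demo_init(void)
{
	int i;

	for (i = 0; i < 200; i++) {
		char *buf = kmalloc(32, GFP_KERNEL);

		if (!buf)
			return -ENOMEM;
		if (is_kfence_address(buf)) {
			/*
			 * One byte out of bounds: depending on where KFENCE
			 * placed the object, this either faults on the guard
			 * page (reported through the new no_context() hook)
			 * or corrupts a canary that is reported at kfree().
			 */
			buf[32] = 'x';
			kfree(buf);
			return 0;
		}
		kfree(buf);
		msleep(10);	/* let the KFENCE sampling timer fire */
	}
	return 0;
}
module_init(kfence_oob_demo_init);

MODULE_LICENSE("GPL");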



* Re: [PATCH -next] riscv: Enable KFENCE for riscv64
  2021-05-29  8:03 [PATCH -next] riscv: Enable KFENCE for riscv64 Liu Shixin
@ 2021-06-09  6:17 ` Liu Shixin
  2021-06-11  1:33   ` Kefeng Wang
  2021-06-11 11:37 ` Alexander Potapenko
  1 sibling, 1 reply; 4+ messages in thread
From: Liu Shixin @ 2021-06-09  6:17 UTC (permalink / raw)
  To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Potapenko,
	Marco Elver, Dmitry Vyukov
  Cc: linux-riscv, linux-kernel, kasan-dev

Hi, everybody,

I have revised the patch based on the previous advice. How about this version?


Thanks,


On 2021/5/29 16:03, Liu Shixin wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the riscv64 architecture. In particular, this implements the
> required interface in <asm/kfence.h>.
>
> KFENCE requires that attributes for pages from its memory pool can
> individually be set. Therefore, force the kfence pool to be mapped at
> page granularity.
>
> Tested this patch with the test cases in kfence_test.c; all passed.
>
> Signed-off-by: Liu Shixin <liushixin2@huawei.com>
> ---
> 1. Add helper function split_pmd_page(), used to split a PMD mapping into PTE mappings.
> 2. Check the return value of pte_alloc_one_kernel().
>
>  arch/riscv/Kconfig              |  1 +
>  arch/riscv/include/asm/kfence.h | 63 +++++++++++++++++++++++++++++++++
>  arch/riscv/mm/fault.c           | 11 +++++-
>  3 files changed, 74 insertions(+), 1 deletion(-)
>  create mode 100644 arch/riscv/include/asm/kfence.h
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 4982130064ef..2f4903a7730f 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -65,6 +65,7 @@ config RISCV
>  	select HAVE_ARCH_JUMP_LABEL_RELATIVE
>  	select HAVE_ARCH_KASAN if MMU && 64BIT
>  	select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
> +	select HAVE_ARCH_KFENCE if MMU && 64BIT
>  	select HAVE_ARCH_KGDB
>  	select HAVE_ARCH_KGDB_QXFER_PKT
>  	select HAVE_ARCH_MMAP_RND_BITS if MMU
> diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
> new file mode 100644
> index 000000000000..d887a54042aa
> --- /dev/null
> +++ b/arch/riscv/include/asm/kfence.h
> @@ -0,0 +1,63 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _ASM_RISCV_KFENCE_H
> +#define _ASM_RISCV_KFENCE_H
> +
> +#include <linux/kfence.h>
> +#include <linux/pfn.h>
> +#include <asm-generic/pgalloc.h>
> +#include <asm/pgtable.h>
> +
> +static inline int split_pmd_page(unsigned long addr)
> +{
> +	int i;
> +	unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK)));
> +	pmd_t *pmd = pmd_off_k(addr);
> +	pte_t *pte = pte_alloc_one_kernel(&init_mm);
> +
> +	if (!pte)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < PTRS_PER_PTE; i++)
> +		set_pte(pte + i, pfn_pte(pfn + i, PAGE_KERNEL));
> +	set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(pte)), PAGE_TABLE));
> +
> +	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
> +	return 0;
> +}
> +
> +static inline bool arch_kfence_init_pool(void)
> +{
> +	int ret;
> +	unsigned long addr;
> +	pmd_t *pmd;
> +
> +	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
> +	     addr += PAGE_SIZE) {
> +		pmd = pmd_off_k(addr);
> +
> +		if (pmd_leaf(*pmd)) {
> +			ret = split_pmd_page(addr);
> +			if (ret)
> +				return false;
> +		}
> +	}
> +
> +	return true;
> +}
> +
> +static inline bool kfence_protect_page(unsigned long addr, bool protect)
> +{
> +	pte_t *pte = virt_to_kpte(addr);
> +
> +	if (protect)
> +		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
> +	else
> +		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
> +
> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +	return true;
> +}
> +
> +#endif /* _ASM_RISCV_KFENCE_H */
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index 096463cc6fff..aa08dd2f8fae 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -14,6 +14,7 @@
>  #include <linux/signal.h>
>  #include <linux/uaccess.h>
>  #include <linux/kprobes.h>
> +#include <linux/kfence.h>
>  
>  #include <asm/ptrace.h>
>  #include <asm/tlbflush.h>
> @@ -45,7 +46,15 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
>  	 * Oops. The kernel tried to access some bad page. We'll have to
>  	 * terminate things with extreme prejudice.
>  	 */
> -	msg = (addr < PAGE_SIZE) ? "NULL pointer dereference" : "paging request";
> +	if (addr < PAGE_SIZE)
> +		msg = "NULL pointer dereference";
> +	else {
> +		if (kfence_handle_page_fault(addr, regs->cause == EXC_STORE_PAGE_FAULT, regs))
> +			return;
> +
> +		msg = "paging request";
> +	}
> +
>  	die_kernel_fault(msg, addr, regs);
>  }
>  



* Re: [PATCH -next] riscv: Enable KFENCE for riscv64
  2021-06-09  6:17 ` Liu Shixin
@ 2021-06-11  1:33   ` Kefeng Wang
  0 siblings, 0 replies; 4+ messages in thread
From: Kefeng Wang @ 2021-06-11  1:33 UTC (permalink / raw)
  To: Liu Shixin, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Alexander Potapenko, Marco Elver, Dmitry Vyukov
  Cc: linux-riscv, linux-kernel, kasan-dev, Palmer Dabbelt


On 2021/6/9 14:17, Liu Shixin wrote:
> Hi, everybody,
>
> I have revised the patch based on the previous advice. How about this version?
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> Thanks,
>
>
> On 2021/5/29 16:03, Liu Shixin wrote:
>> Add architecture specific implementation details for KFENCE and enable
>> KFENCE for the riscv64 architecture. In particular, this implements the
>> required interface in <asm/kfence.h>.
>>
>> KFENCE requires that attributes for pages from its memory pool can
>> individually be set. Therefore, force the kfence pool to be mapped at
>> page granularity.
>>
>> Tested this patch with the test cases in kfence_test.c; all passed.
>>
>> Signed-off-by: Liu Shixin <liushixin2@huawei.com>
>> ---
>> 1. Add helper function split_pmd_page(), used to split a PMD mapping into PTE mappings.
>> 2. Check the return value of pte_alloc_one_kernel().
>>
>>   arch/riscv/Kconfig              |  1 +
>>   arch/riscv/include/asm/kfence.h | 63 +++++++++++++++++++++++++++++++++
>>   arch/riscv/mm/fault.c           | 11 +++++-
>>   3 files changed, 74 insertions(+), 1 deletion(-)
>>   create mode 100644 arch/riscv/include/asm/kfence.h
>>
>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> index 4982130064ef..2f4903a7730f 100644
>> --- a/arch/riscv/Kconfig
>> +++ b/arch/riscv/Kconfig
>> @@ -65,6 +65,7 @@ config RISCV
>>   	select HAVE_ARCH_JUMP_LABEL_RELATIVE
>>   	select HAVE_ARCH_KASAN if MMU && 64BIT
>>   	select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
>> +	select HAVE_ARCH_KFENCE if MMU && 64BIT
>>   	select HAVE_ARCH_KGDB
>>   	select HAVE_ARCH_KGDB_QXFER_PKT
>>   	select HAVE_ARCH_MMAP_RND_BITS if MMU
>> diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
>> new file mode 100644
>> index 000000000000..d887a54042aa
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/kfence.h
>> @@ -0,0 +1,63 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#ifndef _ASM_RISCV_KFENCE_H
>> +#define _ASM_RISCV_KFENCE_H
>> +
>> +#include <linux/kfence.h>
>> +#include <linux/pfn.h>
>> +#include <asm-generic/pgalloc.h>
>> +#include <asm/pgtable.h>
>> +
>> +static inline int split_pmd_page(unsigned long addr)
>> +{
>> +	int i;
>> +	unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK)));
>> +	pmd_t *pmd = pmd_off_k(addr);
>> +	pte_t *pte = pte_alloc_one_kernel(&init_mm);
>> +
>> +	if (!pte)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < PTRS_PER_PTE; i++)
>> +		set_pte(pte + i, pfn_pte(pfn + i, PAGE_KERNEL));
>> +	set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(pte)), PAGE_TABLE));
>> +
>> +	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
>> +	return 0;
>> +}
>> +
>> +static inline bool arch_kfence_init_pool(void)
>> +{
>> +	int ret;
>> +	unsigned long addr;
>> +	pmd_t *pmd;
>> +
>> +	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
>> +	     addr += PAGE_SIZE) {
>> +		pmd = pmd_off_k(addr);
>> +
>> +		if (pmd_leaf(*pmd)) {
>> +			ret = split_pmd_page(addr);
>> +			if (ret)
>> +				return false;
>> +		}
>> +	}
>> +
>> +	return true;
>> +}
>> +
>> +static inline bool kfence_protect_page(unsigned long addr, bool protect)
>> +{
>> +	pte_t *pte = virt_to_kpte(addr);
>> +
>> +	if (protect)
>> +		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
>> +	else
>> +		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
>> +
>> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>> +
>> +	return true;
>> +}
>> +
>> +#endif /* _ASM_RISCV_KFENCE_H */
>> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
>> index 096463cc6fff..aa08dd2f8fae 100644
>> --- a/arch/riscv/mm/fault.c
>> +++ b/arch/riscv/mm/fault.c
>> @@ -14,6 +14,7 @@
>>   #include <linux/signal.h>
>>   #include <linux/uaccess.h>
>>   #include <linux/kprobes.h>
>> +#include <linux/kfence.h>
>>   
>>   #include <asm/ptrace.h>
>>   #include <asm/tlbflush.h>
>> @@ -45,7 +46,15 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
>>   	 * Oops. The kernel tried to access some bad page. We'll have to
>>   	 * terminate things with extreme prejudice.
>>   	 */
>> -	msg = (addr < PAGE_SIZE) ? "NULL pointer dereference" : "paging request";
>> +	if (addr < PAGE_SIZE)
>> +		msg = "NULL pointer dereference";
>> +	else {
>> +		if (kfence_handle_page_fault(addr, regs->cause == EXC_STORE_PAGE_FAULT, regs))
>> +			return;
>> +
>> +		msg = "paging request";
>> +	}
>> +
>>   	die_kernel_fault(msg, addr, regs);
>>   }
>>   
>


* Re: [PATCH -next] riscv: Enable KFENCE for riscv64
  2021-05-29  8:03 [PATCH -next] riscv: Enable KFENCE for riscv64 Liu Shixin
  2021-06-09  6:17 ` Liu Shixin
@ 2021-06-11 11:37 ` Alexander Potapenko
  1 sibling, 0 replies; 4+ messages in thread
From: Alexander Potapenko @ 2021-06-11 11:37 UTC (permalink / raw)
  To: Liu Shixin
  Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Marco Elver,
	Dmitry Vyukov, linux-riscv, LKML, kasan-dev

Hi Liu,

On Sat, May 29, 2021 at 9:31 AM Liu Shixin <liushixin2@huawei.com> wrote:
>
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the riscv64 architecture. In particular, this implements the
> required interface in <asm/kfence.h>.
>
> KFENCE requires that attributes for pages from its memory pool can
> individually be set. Therefore, force the kfence pool to be mapped at
> page granularity.
>
> Tested this patch with the test cases in kfence_test.c; all passed.
>
> Signed-off-by: Liu Shixin <liushixin2@huawei.com>

Looks like you're missing the Acked-by: that Marco gave here:
https://lkml.org/lkml/2021/5/14/588

