kernel-hardening.lists.openwall.com archive mirror
* [PATCH v6 0/5] Implement STRICT_MODULE_RWX for powerpc
@ 2019-12-24  5:55 Russell Currey
  2019-12-24  5:55 ` [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Russell Currey @ 2019-12-24  5:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
	kernel-hardening

v5 cover letter: https://lore.kernel.org/kernel-hardening/20191030073111.140493-1-ruscur@russell.cc/
v4 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198268.html
v3 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198023.html

Changes since v5:
	[1/5]: Addressed review comments from Christophe Leroy (thanks!)
	[2/5]: Use patch_instruction() instead of memcpy() thanks to mpe

Thanks for the feedback, hopefully this is the final iteration.  I have a patch
to remove the STRICT_KERNEL_RWX incompatibility with RELOCATABLE for book3s64
coming soon, so with that we should have a great basis for powerpc RWX going
forward.

Russell Currey (5):
  powerpc/mm: Implement set_memory() routines
  powerpc/kprobes: Mark newly allocated probes as RO
  powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
  powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig

 arch/powerpc/Kconfig                   |  2 +
 arch/powerpc/Kconfig.debug             |  6 +-
 arch/powerpc/configs/skiroot_defconfig |  1 +
 arch/powerpc/include/asm/set_memory.h  | 32 ++++++++++
 arch/powerpc/kernel/kprobes.c          |  6 +-
 arch/powerpc/mm/Makefile               |  1 +
 arch/powerpc/mm/pageattr.c             | 83 ++++++++++++++++++++++++++
 arch/powerpc/mm/ptdump/ptdump.c        | 21 ++++++-
 8 files changed, 147 insertions(+), 5 deletions(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h
 create mode 100644 arch/powerpc/mm/pageattr.c

-- 
2.24.1



* [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines
  2019-12-24  5:55 [PATCH v6 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
@ 2019-12-24  5:55 ` Russell Currey
  2020-01-08 12:52   ` Christophe Leroy
  2020-01-20  8:35   ` Christophe Leroy
  2019-12-24  5:55 ` [PATCH v6 2/5] powerpc/kprobes: Mark newly allocated probes as RO Russell Currey
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 13+ messages in thread
From: Russell Currey @ 2019-12-24  5:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
	kernel-hardening

The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
and are generally useful primitives to have.  This implementation is
designed to be completely generic across powerpc's many MMUs.

It's possible that this could be optimised to be faster for specific
MMUs, but the focus is on having a generic and safe implementation for
now.

This implementation does not handle cases where the caller is attempting
to change the mapping of the page it is executing from, or if another
CPU is concurrently using the page being altered.  These cases likely
shouldn't happen, but a more complex implementation with MMU-specific code
could safely handle them, so that is left as a TODO for now.
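
As an illustration (hypothetical usage, not part of this patch), a caller
that wants to temporarily write-protect a page-aligned kernel address would
do something like:

	/* addr is a page-aligned kernel virtual address (hypothetical) */
	if (set_memory_ro(addr, 1))
		pr_warn("failed to mark page read-only\n");
	/* ... later, when writing needs to be re-enabled ... */
	if (set_memory_rw(addr, 1))
		pr_warn("failed to restore page to read-write\n");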

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/Kconfig                  |  1 +
 arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
 arch/powerpc/mm/Makefile              |  1 +
 arch/powerpc/mm/pageattr.c            | 83 +++++++++++++++++++++++++++
 4 files changed, 117 insertions(+)
 create mode 100644 arch/powerpc/include/asm/set_memory.h
 create mode 100644 arch/powerpc/mm/pageattr.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1ec34e16ed65..f0b9b47b5353 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -133,6 +133,7 @@ config PPC
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
+	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
new file mode 100644
index 000000000000..5230ddb2fefd
--- /dev/null
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SET_MEMORY_H
+#define _ASM_POWERPC_SET_MEMORY_H
+
+#define SET_MEMORY_RO	1
+#define SET_MEMORY_RW	2
+#define SET_MEMORY_NX	3
+#define SET_MEMORY_X	4
+
+int change_memory_attr(unsigned long addr, int numpages, int action);
+
+static inline int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
+}
+
+static inline int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
+}
+
+static inline int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
+}
+
+static inline int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_X);
+}
+
+#endif
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 5e147986400d..d0a0bcbc9289 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM)		+= highmem.o
 obj-$(CONFIG_PPC_COPRO_BASE)	+= copro_fault.o
 obj-$(CONFIG_PPC_PTDUMP)	+= ptdump/
 obj-$(CONFIG_KASAN)		+= kasan/
+obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pageattr.o
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
new file mode 100644
index 000000000000..15d5fb04f531
--- /dev/null
+++ b/arch/powerpc/mm/pageattr.c
@@ -0,0 +1,83 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * MMU-generic set_memory implementation for powerpc
+ *
+ * Copyright 2019, IBM Corporation.
+ */
+
+#include <linux/mm.h>
+#include <linux/set_memory.h>
+
+#include <asm/mmu.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+
+
+/*
+ * Updates the attributes of a page in three steps:
+ *
+ * 1. invalidate the page table entry
+ * 2. flush the TLB
+ * 3. install the new entry with the updated attributes
+ *
+ * This is unsafe if the caller is attempting to change the mapping of the
+ * page it is executing from, or if another CPU is concurrently using the
+ * page being altered.
+ *
+ * TODO make the implementation resistant to this.
+ */
+static int __change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	int action = *((int *)data);
+	pte_t pte_val;
+
+	// invalidate the PTE so it's safe to modify
+	pte_val = ptep_get_and_clear(&init_mm, addr, ptep);
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	// modify the PTE bits as desired, then apply
+	switch (action) {
+	case SET_MEMORY_RO:
+		pte_val = pte_wrprotect(pte_val);
+		break;
+	case SET_MEMORY_RW:
+		pte_val = pte_mkwrite(pte_val);
+		break;
+	case SET_MEMORY_NX:
+		pte_val = pte_exprotect(pte_val);
+		break;
+	case SET_MEMORY_X:
+		pte_val = pte_mkexec(pte_val);
+		break;
+	default:
+		WARN_ON(true);
+		return -EINVAL;
+	}
+
+	set_pte_at(&init_mm, addr, ptep, pte_val);
+
+	return 0;
+}
+
+static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	int ret;
+
+	spin_lock(&init_mm.page_table_lock);
+	ret = __change_page_attr(ptep, addr, data);
+	spin_unlock(&init_mm.page_table_lock);
+
+	return ret;
+}
+
+int change_memory_attr(unsigned long addr, int numpages, int action)
+{
+	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+	unsigned long size = numpages * PAGE_SIZE;
+
+	if (!numpages)
+		return 0;
+
+	return apply_to_page_range(&init_mm, start, size, change_page_attr, &action);
+}
-- 
2.24.1



* [PATCH v6 2/5] powerpc/kprobes: Mark newly allocated probes as RO
  2019-12-24  5:55 [PATCH v6 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
  2019-12-24  5:55 ` [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
@ 2019-12-24  5:55 ` Russell Currey
  2020-01-08 16:48   ` Christophe Leroy
  2019-12-24  5:55 ` [PATCH v6 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Russell Currey
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Russell Currey @ 2019-12-24  5:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
	kernel-hardening

With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
W+X page at boot by default.  This can be tested with
CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
kernel log during boot.

powerpc doesn't implement its own alloc() for kprobes like other
architectures do, but we couldn't mark the page RO at allocation time
anyway, since we memcpy() to it later.  After that, nothing should be
allowed to modify the page, and write permissions are removed well
before the kprobe is armed.

The memcpy() would fail if more than one probe were allocated on the
same page, so use patch_instruction() instead, which is safe for RO.

Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/kernel/kprobes.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 2d27ec4feee4..b72761f0c9e3 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -24,6 +24,7 @@
 #include <asm/sstep.h>
 #include <asm/sections.h>
 #include <linux/uaccess.h>
+#include <linux/set_memory.h>
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -124,13 +125,14 @@ int arch_prepare_kprobe(struct kprobe *p)
 	}
 
 	if (!ret) {
-		memcpy(p->ainsn.insn, p->addr,
-				MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+		patch_instruction(p->ainsn.insn, *p->addr);
 		p->opcode = *p->addr;
 		flush_icache_range((unsigned long)p->ainsn.insn,
 			(unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
 	}
 
+	set_memory_ro((unsigned long)p->ainsn.insn, 1);
+
 	p->ainsn.boostable = 0;
 	return ret;
 }
-- 
2.24.1



* [PATCH v6 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  2019-12-24  5:55 [PATCH v6 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
  2019-12-24  5:55 ` [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
  2019-12-24  5:55 ` [PATCH v6 2/5] powerpc/kprobes: Mark newly allocated probes as RO Russell Currey
@ 2019-12-24  5:55 ` Russell Currey
  2019-12-31 17:14   ` Christophe Leroy
  2019-12-24  5:55 ` [PATCH v6 4/5] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX Russell Currey
  2019-12-24  5:55 ` [PATCH v6 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Russell Currey
  4 siblings, 1 reply; 13+ messages in thread
From: Russell Currey @ 2019-12-24  5:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
	kernel-hardening

Very rudimentary, just

	echo 1 > [debugfs]/check_wx_pages

and check the kernel log.  Useful for testing strict module RWX.
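
For example, assuming debugfs is mounted at the usual /sys/kernel/debug:

	# echo 1 > /sys/kernel/debug/check_wx_pages
	# dmesg | grep 'W+X'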

Updated the Kconfig entry to reflect this.

Also fixed a typo.

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/Kconfig.debug      |  6 ++++--
 arch/powerpc/mm/ptdump/ptdump.c | 21 ++++++++++++++++++++-
 2 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 4e1d39847462..7c14c9728bc0 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -370,7 +370,7 @@ config PPC_PTDUMP
 	  If you are unsure, say N.
 
 config PPC_DEBUG_WX
-	bool "Warn on W+X mappings at boot"
+	bool "Warn on W+X mappings at boot & enable manual checks at runtime"
 	depends on PPC_PTDUMP
 	help
 	  Generate a warning if any W+X mappings are found at boot.
@@ -384,7 +384,9 @@ config PPC_DEBUG_WX
 	  of other unfixed kernel bugs easier.
 
 	  There is no runtime or memory usage effect of this option
-	  once the kernel has booted up - it's a one time check.
+	  once the kernel has booted up, it only automatically checks once.
+
+	  Enables the "check_wx_pages" debugfs entry for checking at runtime.
 
 	  If in doubt, say "Y".
 
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index 2f9ddc29c535..b6cba29ae4a0 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -4,7 +4,7 @@
  *
  * This traverses the kernel pagetables and dumps the
  * information about the used sections of memory to
- * /sys/kernel/debug/kernel_pagetables.
+ * /sys/kernel/debug/kernel_page_tables.
  *
  * Derived from the arm64 implementation:
  * Copyright (c) 2014, The Linux Foundation, Laura Abbott.
@@ -409,6 +409,25 @@ void ptdump_check_wx(void)
 	else
 		pr_info("Checked W+X mappings: passed, no W+X pages found\n");
 }
+
+static int check_wx_debugfs_set(void *data, u64 val)
+{
+	if (val != 1ULL)
+		return -EINVAL;
+
+	ptdump_check_wx();
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(check_wx_fops, NULL, check_wx_debugfs_set, "%llu\n");
+
+static int ptdump_check_wx_init(void)
+{
+	return debugfs_create_file("check_wx_pages", 0200, NULL,
+				   NULL, &check_wx_fops) ? 0 : -ENOMEM;
+}
+device_initcall(ptdump_check_wx_init);
 #endif
 
 static int ptdump_init(void)
-- 
2.24.1



* [PATCH v6 4/5] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
  2019-12-24  5:55 [PATCH v6 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
                   ` (2 preceding siblings ...)
  2019-12-24  5:55 ` [PATCH v6 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Russell Currey
@ 2019-12-24  5:55 ` Russell Currey
  2019-12-24  5:55 ` [PATCH v6 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Russell Currey
  4 siblings, 0 replies; 13+ messages in thread
From: Russell Currey @ 2019-12-24  5:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
	kernel-hardening

To enable strict module RWX on powerpc, set:

    CONFIG_STRICT_MODULE_RWX=y

You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
security benefit.

ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
This is due to a quirk in arch/Kconfig and arch/powerpc/Kconfig that
makes STRICT_MODULE_RWX *on by default* in configurations where
STRICT_KERNEL_RWX is *unavailable*.

Since this doesn't make much sense, and module RWX without kernel RWX
doesn't make much sense, having the same dependencies as kernel RWX
works around this problem.
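
For reference, the relevant logic in arch/Kconfig looks roughly like the
following (paraphrased from memory, not a verbatim quote):

	config STRICT_MODULE_RWX
		bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX
		depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES
		default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT

powerpc only selects ARCH_OPTIONAL_KERNEL_RWX alongside strict kernel RWX,
so selecting ARCH_HAS_STRICT_MODULE_RWX without that dependency would make
the option both forced-on and impossible to disable.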

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f0b9b47b5353..97ea012fdff9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -135,6 +135,7 @@ config PPC
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
+	select ARCH_HAS_STRICT_MODULE_RWX	if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
 	select ARCH_HAS_UACCESS_MCSAFE		if PPC64
-- 
2.24.1



* [PATCH v6 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
  2019-12-24  5:55 [PATCH v6 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
                   ` (3 preceding siblings ...)
  2019-12-24  5:55 ` [PATCH v6 4/5] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX Russell Currey
@ 2019-12-24  5:55 ` Russell Currey
  4 siblings, 0 replies; 13+ messages in thread
From: Russell Currey @ 2019-12-24  5:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
	kernel-hardening, Joel Stanley

skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
enabled, and if you want memory protection for kernel text you'd want it
for modules too, so enable STRICT_MODULE_RWX there.

Acked-by: Joel Stanley <joel@joel.id.au>
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/configs/skiroot_defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
index 069f67f12731..b74358c3ede8 100644
--- a/arch/powerpc/configs/skiroot_defconfig
+++ b/arch/powerpc/configs/skiroot_defconfig
@@ -31,6 +31,7 @@ CONFIG_PERF_EVENTS=y
 CONFIG_SLAB_FREELIST_HARDENED=y
 CONFIG_JUMP_LABEL=y
 CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODULE_SIG=y
-- 
2.24.1



* Re: [PATCH v6 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  2019-12-24  5:55 ` [PATCH v6 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Russell Currey
@ 2019-12-31 17:14   ` Christophe Leroy
  2020-01-07 10:48     ` Michael Ellerman
  0 siblings, 1 reply; 13+ messages in thread
From: Christophe Leroy @ 2019-12-31 17:14 UTC (permalink / raw)
  To: Russell Currey
  Cc: kernel-hardening, npiggin, dja, ajd, mpe, joel, linuxppc-dev

Russell Currey <ruscur@russell.cc> wrote:

> Very rudimentary, just
>
> 	echo 1 > [debugfs]/check_wx_pages
>
> and check the kernel log.  Useful for testing strict module RWX.

For testing strict module RWX you could instead implement  
module_arch_freeing_init() and call ptdump_check_wx() from there.
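
Something like this (an untested sketch; module_arch_freeing_init() is a
weak hook in kernel/module.c called when a module's init memory is freed,
i.e. once loading has finished):

	void module_arch_freeing_init(struct module *mod)
	{
		/* re-run the W+X scan every time a module finishes loading */
		ptdump_check_wx();
	}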

Christophe

>
> Updated the Kconfig entry to reflect this.
>
> Also fixed a typo.
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>  arch/powerpc/Kconfig.debug      |  6 ++++--
>  arch/powerpc/mm/ptdump/ptdump.c | 21 ++++++++++++++++++++-
>  2 files changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
> index 4e1d39847462..7c14c9728bc0 100644
> --- a/arch/powerpc/Kconfig.debug
> +++ b/arch/powerpc/Kconfig.debug
> @@ -370,7 +370,7 @@ config PPC_PTDUMP
>  	  If you are unsure, say N.
>
>  config PPC_DEBUG_WX
> -	bool "Warn on W+X mappings at boot"
> +	bool "Warn on W+X mappings at boot & enable manual checks at runtime"
>  	depends on PPC_PTDUMP
>  	help
>  	  Generate a warning if any W+X mappings are found at boot.
> @@ -384,7 +384,9 @@ config PPC_DEBUG_WX
>  	  of other unfixed kernel bugs easier.
>
>  	  There is no runtime or memory usage effect of this option
> -	  once the kernel has booted up - it's a one time check.
> +	  once the kernel has booted up, it only automatically checks once.
> +
> +	  Enables the "check_wx_pages" debugfs entry for checking at runtime.
>
>  	  If in doubt, say "Y".
>
> diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
> index 2f9ddc29c535..b6cba29ae4a0 100644
> --- a/arch/powerpc/mm/ptdump/ptdump.c
> +++ b/arch/powerpc/mm/ptdump/ptdump.c
> @@ -4,7 +4,7 @@
>   *
>   * This traverses the kernel pagetables and dumps the
>   * information about the used sections of memory to
> - * /sys/kernel/debug/kernel_pagetables.
> + * /sys/kernel/debug/kernel_page_tables.
>   *
>   * Derived from the arm64 implementation:
>   * Copyright (c) 2014, The Linux Foundation, Laura Abbott.
> @@ -409,6 +409,25 @@ void ptdump_check_wx(void)
>  	else
>  		pr_info("Checked W+X mappings: passed, no W+X pages found\n");
>  }
> +
> +static int check_wx_debugfs_set(void *data, u64 val)
> +{
> +	if (val != 1ULL)
> +		return -EINVAL;
> +
> +	ptdump_check_wx();
> +
> +	return 0;
> +}
> +
> +DEFINE_SIMPLE_ATTRIBUTE(check_wx_fops, NULL, check_wx_debugfs_set, "%llu\n");
> +
> +static int ptdump_check_wx_init(void)
> +{
> +	return debugfs_create_file("check_wx_pages", 0200, NULL,
> +				   NULL, &check_wx_fops) ? 0 : -ENOMEM;
> +}
> +device_initcall(ptdump_check_wx_init);
>  #endif
>
>  static int ptdump_init(void)
> --
> 2.24.1




* Re: [PATCH v6 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  2019-12-31 17:14   ` Christophe Leroy
@ 2020-01-07 10:48     ` Michael Ellerman
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2020-01-07 10:48 UTC (permalink / raw)
  To: Christophe Leroy, Russell Currey
  Cc: kernel-hardening, npiggin, dja, ajd, joel, linuxppc-dev

Christophe Leroy <christophe.leroy@c-s.fr> writes:
> Russell Currey <ruscur@russell.cc> wrote:
>
>> Very rudimentary, just
>>
>> 	echo 1 > [debugfs]/check_wx_pages
>>
>> and check the kernel log.  Useful for testing strict module RWX.
>
> For testing strict module RWX you could instead implement  
> module_arch_freeing_init() and call ptdump_check_wx() from there.

That could get expensive on large systems; I'm not sure we'd want it
enabled by default.

cheers


>> diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
>> index 4e1d39847462..7c14c9728bc0 100644
>> --- a/arch/powerpc/Kconfig.debug
>> +++ b/arch/powerpc/Kconfig.debug
>> @@ -370,7 +370,7 @@ config PPC_PTDUMP
>>  	  If you are unsure, say N.
>>
>>  config PPC_DEBUG_WX
>> -	bool "Warn on W+X mappings at boot"
>> +	bool "Warn on W+X mappings at boot & enable manual checks at runtime"
>>  	depends on PPC_PTDUMP
>>  	help
>>  	  Generate a warning if any W+X mappings are found at boot.
>> @@ -384,7 +384,9 @@ config PPC_DEBUG_WX
>>  	  of other unfixed kernel bugs easier.
>>
>>  	  There is no runtime or memory usage effect of this option
>> -	  once the kernel has booted up - it's a one time check.
>> +	  once the kernel has booted up, it only automatically checks once.
>> +
>> +	  Enables the "check_wx_pages" debugfs entry for checking at runtime.
>>
>>  	  If in doubt, say "Y".
>>
>> diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
>> index 2f9ddc29c535..b6cba29ae4a0 100644
>> --- a/arch/powerpc/mm/ptdump/ptdump.c
>> +++ b/arch/powerpc/mm/ptdump/ptdump.c
>> @@ -4,7 +4,7 @@
>>   *
>>   * This traverses the kernel pagetables and dumps the
>>   * information about the used sections of memory to
>> - * /sys/kernel/debug/kernel_pagetables.
>> + * /sys/kernel/debug/kernel_page_tables.
>>   *
>>   * Derived from the arm64 implementation:
>>   * Copyright (c) 2014, The Linux Foundation, Laura Abbott.
>> @@ -409,6 +409,25 @@ void ptdump_check_wx(void)
>>  	else
>>  		pr_info("Checked W+X mappings: passed, no W+X pages found\n");
>>  }
>> +
>> +static int check_wx_debugfs_set(void *data, u64 val)
>> +{
>> +	if (val != 1ULL)
>> +		return -EINVAL;
>> +
>> +	ptdump_check_wx();
>> +
>> +	return 0;
>> +}
>> +
>> +DEFINE_SIMPLE_ATTRIBUTE(check_wx_fops, NULL, check_wx_debugfs_set, "%llu\n");
>> +
>> +static int ptdump_check_wx_init(void)
>> +{
>> +	return debugfs_create_file("check_wx_pages", 0200, NULL,
>> +				   NULL, &check_wx_fops) ? 0 : -ENOMEM;
>> +}
>> +device_initcall(ptdump_check_wx_init);
>>  #endif
>>
>>  static int ptdump_init(void)
>> --
>> 2.24.1


* Re: [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines
  2019-12-24  5:55 ` [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
@ 2020-01-08 12:52   ` Christophe Leroy
  2020-02-03  0:46     ` Russell Currey
  2020-01-20  8:35   ` Christophe Leroy
  1 sibling, 1 reply; 13+ messages in thread
From: Christophe Leroy @ 2020-01-08 12:52 UTC (permalink / raw)
  To: Russell Currey, linuxppc-dev
  Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening



On 24/12/2019 06:55, Russell Currey wrote:
> The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> and are generally useful primitives to have.  This implementation is
> designed to be completely generic across powerpc's many MMUs.
> 
> It's possible that this could be optimised to be faster for specific
> MMUs, but the focus is on having a generic and safe implementation for
> now.
> 
> This implementation does not handle cases where the caller is attempting
> to change the mapping of the page it is executing from, or if another
> CPU is concurrently using the page being altered.  These cases likely
> shouldn't happen, but a more complex implementation with MMU-specific code
> could safely handle them, so that is left as a TODO for now.
> 
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>   arch/powerpc/Kconfig                  |  1 +
>   arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
>   arch/powerpc/mm/Makefile              |  1 +
>   arch/powerpc/mm/pageattr.c            | 83 +++++++++++++++++++++++++++
>   4 files changed, 117 insertions(+)
>   create mode 100644 arch/powerpc/include/asm/set_memory.h
>   create mode 100644 arch/powerpc/mm/pageattr.c
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 1ec34e16ed65..f0b9b47b5353 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -133,6 +133,7 @@ config PPC
>   	select ARCH_HAS_PTE_SPECIAL
>   	select ARCH_HAS_MEMBARRIER_CALLBACKS
>   	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> +	select ARCH_HAS_SET_MEMORY
>   	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
>   	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_HAS_UACCESS_FLUSHCACHE
> diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
> new file mode 100644
> index 000000000000..5230ddb2fefd
> --- /dev/null
> +++ b/arch/powerpc/include/asm/set_memory.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_POWERPC_SET_MEMORY_H
> +#define _ASM_POWERPC_SET_MEMORY_H
> +
> +#define SET_MEMORY_RO	1
> +#define SET_MEMORY_RW	2
> +#define SET_MEMORY_NX	3
> +#define SET_MEMORY_X	4

Maybe going from 0 to 3 would be better than 1 to 4.

> +
> +int change_memory_attr(unsigned long addr, int numpages, int action);

action could be unsigned.

> +
> +static inline int set_memory_ro(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
> +}
> +
> +static inline int set_memory_rw(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
> +}
> +
> +static inline int set_memory_nx(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
> +}
> +
> +static inline int set_memory_x(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_X);
> +}
> +
> +#endif
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 5e147986400d..d0a0bcbc9289 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM)		+= highmem.o
>   obj-$(CONFIG_PPC_COPRO_BASE)	+= copro_fault.o
>   obj-$(CONFIG_PPC_PTDUMP)	+= ptdump/
>   obj-$(CONFIG_KASAN)		+= kasan/
> +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pageattr.o

CONFIG_ARCH_HAS_SET_MEMORY is set unconditionally, so I think you should
add pageattr.o to obj-y instead. CONFIG_ARCH_HAS_XXX options are almost
never used in Makefiles.

> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> new file mode 100644
> index 000000000000..15d5fb04f531
> --- /dev/null
> +++ b/arch/powerpc/mm/pageattr.c
> @@ -0,0 +1,83 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * MMU-generic set_memory implementation for powerpc
> + *
> + * Copyright 2019, IBM Corporation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/set_memory.h>
> +
> +#include <asm/mmu.h>
> +#include <asm/page.h>
> +#include <asm/pgtable.h>
> +
> +
> +/*
> + * Updates the attributes of a page in three steps:
> + *
> + * 1. invalidate the page table entry
> + * 2. flush the TLB
> + * 3. install the new entry with the updated attributes
> + *
> + * This is unsafe if the caller is attempting to change the mapping of the
> + * page it is executing from, or if another CPU is concurrently using the
> + * page being altered.
> + *
> + * TODO make the implementation resistant to this.
> + */
> +static int __change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> +{
> +	int action = *((int *)data);

Don't use pointers for such simple things; a pointer forces the compiler to
set up a stack frame and save the data to the stack. Instead do:

	int action = (int)data;

> +	pte_t pte_val;
> +
> +	// invalidate the PTE so it's safe to modify
> +	pte_val = ptep_get_and_clear(&init_mm, addr, ptep);
> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

Why flush a range for a single page? On most targets this will do a
tlbia which is heavy, while a tlbie would suffice.

I think flush_tlb_kernel_range() should be replaced by something 
flushing only a single page.

> +
> +	// modify the PTE bits as desired, then apply
> +	switch (action) {
> +	case SET_MEMORY_RO:
> +		pte_val = pte_wrprotect(pte_val);
> +		break;
> +	case SET_MEMORY_RW:
> +		pte_val = pte_mkwrite(pte_val);
> +		break;
> +	case SET_MEMORY_NX:
> +		pte_val = pte_exprotect(pte_val);
> +		break;
> +	case SET_MEMORY_X:
> +		pte_val = pte_mkexec(pte_val);
> +		break;
> +	default:
> +		WARN_ON(true);
> +		return -EINVAL;

Is it worth checking that the action is valid for each page? I think the
validity of the action should be checked in change_memory_attr(). All other
functions are static so you know they won't be called from outside.

Once done, you can squash __change_page_attr() into change_page_attr(), 
remove the ret var and return 0 all the time.
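
i.e. something like this (sketch):

	int change_memory_attr(unsigned long addr, int numpages, int action)
	{
		unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
		unsigned long size = numpages * PAGE_SIZE;

		/* validate the action once, up front */
		if (action < SET_MEMORY_RO || action > SET_MEMORY_X)
			return -EINVAL;

		if (!numpages)
			return 0;

		return apply_to_page_range(&init_mm, start, size,
					   change_page_attr, &action);
	}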

> +	}
> +
> +	set_pte_at(&init_mm, addr, ptep, pte_val);
> +
> +	return 0;
> +}
> +
> +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> +{
> +	int ret;
> +
> +	spin_lock(&init_mm.page_table_lock);
> +	ret = __change_page_attr(ptep, addr, data);
> +	spin_unlock(&init_mm.page_table_lock);
> +
> +	return ret;
> +}
> +
> +int change_memory_attr(unsigned long addr, int numpages, int action)
> +{
> +	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> +	unsigned long size = numpages * PAGE_SIZE;
> +
> +	if (!numpages)
> +		return 0;
> +
> +	return apply_to_page_range(&init_mm, start, size, change_page_attr, &action);

Use (void*)action instead of &action (see upper comment)

> +}
> 

Christophe



* Re: [PATCH v6 2/5] powerpc/kprobes: Mark newly allocated probes as RO
  2019-12-24  5:55 ` [PATCH v6 2/5] powerpc/kprobes: Mark newly allocated probes as RO Russell Currey
@ 2020-01-08 16:48   ` Christophe Leroy
  0 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2020-01-08 16:48 UTC (permalink / raw)
  To: Russell Currey, linuxppc-dev
  Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening



On 24/12/2019 06:55, Russell Currey wrote:
> With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
> W+X page at boot by default.  This can be tested with
> CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
> kernel log during boot.
> 
> powerpc doesn't implement its own alloc() for kprobes like other
> architectures do, but we couldn't mark the page RO at allocation time
> anyway, since we memcpy() to it later.  After that, nothing should be
> allowed to modify the page, and write permissions are removed well
> before the kprobe is armed.
> 
> The memcpy() would fail if more than one probe were allocated on the
> same page, so use patch_instruction() instead, which is safe for RO.
> 
> Reviewed-by: Daniel Axtens <dja@axtens.net>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>   arch/powerpc/kernel/kprobes.c | 6 ++++--
>   1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 2d27ec4feee4..b72761f0c9e3 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -24,6 +24,7 @@
>   #include <asm/sstep.h>
>   #include <asm/sections.h>
>   #include <linux/uaccess.h>
> +#include <linux/set_memory.h>
>   
>   DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
>   DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> @@ -124,13 +125,14 @@ int arch_prepare_kprobe(struct kprobe *p)
>   	}
>   
>   	if (!ret) {
> -		memcpy(p->ainsn.insn, p->addr,
> -				MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
> +		patch_instruction(p->ainsn.insn, *p->addr);
>   		p->opcode = *p->addr;
>   		flush_icache_range((unsigned long)p->ainsn.insn,
>   			(unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));

patch_instruction() already does the flush; no need to flush again with
flush_icache_range().

>   	}
>   
> +	set_memory_ro((unsigned long)p->ainsn.insn, 1);
> +

I don't really understand: why do you need to set this RO? Or why do
you need to change the memcpy() to patch_instruction() if the area is
not already RO?

If I understand correctly, p->ainsn.insn is within a special executable
page allocated via module_alloc(). Wouldn't it be more correct to modify the
kprobe get_insn_slot() logic so that the allocated page is ROX instead of RWX?
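
For instance (an untested sketch; alloc_insn_page() is a weak function in
kernel/kprobes.c that architectures can override, and this assumes the
set_memory_ro()/set_memory_x() helpers from patch 1):

	void *alloc_insn_page(void)
	{
		void *page = module_alloc(PAGE_SIZE);

		if (!page)
			return NULL;

		/* make the slot page ROX up front, rather than leaving it RWX */
		set_memory_ro((unsigned long)page, 1);
		set_memory_x((unsigned long)page, 1);

		return page;
	}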

>   	p->ainsn.boostable = 0;
>   	return ret;
>   }
> 

Christophe


* Re: [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines
  2019-12-24  5:55 ` [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
  2020-01-08 12:52   ` Christophe Leroy
@ 2020-01-20  8:35   ` Christophe Leroy
  1 sibling, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2020-01-20  8:35 UTC (permalink / raw)
  To: Russell Currey, linuxppc-dev
  Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening



On 24/12/2019 06:55, Russell Currey wrote:
> The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> and are generally useful primitives to have.  This implementation is
> designed to be completely generic across powerpc's many MMUs.
> 
> It's possible that this could be optimised to be faster for specific
> MMUs, but the focus is on having a generic and safe implementation for
> now.
> 
> This implementation does not handle cases where the caller is attempting
> to change the mapping of the page it is executing from, or if another
> CPU is concurrently using the page being altered.  These cases likely
> shouldn't happen, but a more complex implementation with MMU-specific code
> could safely handle them, so that is left as a TODO for now.
> 
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>   arch/powerpc/Kconfig                  |  1 +
>   arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
>   arch/powerpc/mm/Makefile              |  1 +
>   arch/powerpc/mm/pageattr.c            | 83 +++++++++++++++++++++++++++
>   4 files changed, 117 insertions(+)
>   create mode 100644 arch/powerpc/include/asm/set_memory.h
>   create mode 100644 arch/powerpc/mm/pageattr.c
> 
> +static int __change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> +{
> +	int action = *((int *)data);
> +	pte_t pte_val;

pte_val is really not a good name, because pte_val() is already a
function which returns the value of a pte_t var.

Here you should name it 'pte' as usual.

Christophe

> +
> +	// invalidate the PTE so it's safe to modify
> +	pte_val = ptep_get_and_clear(&init_mm, addr, ptep);
> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +	// modify the PTE bits as desired, then apply
> +	switch (action) {
> +	case SET_MEMORY_RO:
> +		pte_val = pte_wrprotect(pte_val);
> +		break;
> +	case SET_MEMORY_RW:
> +		pte_val = pte_mkwrite(pte_val);
> +		break;
> +	case SET_MEMORY_NX:
> +		pte_val = pte_exprotect(pte_val);
> +		break;
> +	case SET_MEMORY_X:
> +		pte_val = pte_mkexec(pte_val);
> +		break;
> +	default:
> +		WARN_ON(true);
> +		return -EINVAL;
> +	}
> +
> +	set_pte_at(&init_mm, addr, ptep, pte_val);
> +
> +	return 0;
> +}
> +


* Re: [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines
  2020-01-08 12:52   ` Christophe Leroy
@ 2020-02-03  0:46     ` Russell Currey
  2020-02-03  7:06       ` Christophe Leroy
  0 siblings, 1 reply; 13+ messages in thread
From: Russell Currey @ 2020-02-03  0:46 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev
  Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening

On Wed, 2020-01-08 at 13:52 +0100, Christophe Leroy wrote:
> 
> On 24/12/2019 06:55, Russell Currey wrote:
> > The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> > and are generally useful primitives to have.  This implementation is
> > designed to be completely generic across powerpc's many MMUs.
> > 
> > It's possible that this could be optimised to be faster for specific
> > MMUs, but the focus is on having a generic and safe implementation for
> > now.
> > 
> > This implementation does not handle cases where the caller is attempting
> > to change the mapping of the page it is executing from, or if another
> > CPU is concurrently using the page being altered.  These cases likely
> > shouldn't happen, but a more complex implementation with MMU-specific code
> > could safely handle them, so that is left as a TODO for now.
> > 
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > ---
> >   arch/powerpc/Kconfig                  |  1 +
> >   arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
> >   arch/powerpc/mm/Makefile              |  1 +
> >   arch/powerpc/mm/pageattr.c            | 83 +++++++++++++++++++++++++++
> >   4 files changed, 117 insertions(+)
> >   create mode 100644 arch/powerpc/include/asm/set_memory.h
> >   create mode 100644 arch/powerpc/mm/pageattr.c
> > 
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index 1ec34e16ed65..f0b9b47b5353 100644
> > --- a/arch/powerpc/Kconfig
> > +++ b/arch/powerpc/Kconfig
> > @@ -133,6 +133,7 @@ config PPC
> >   	select ARCH_HAS_PTE_SPECIAL
> >   	select ARCH_HAS_MEMBARRIER_CALLBACKS
> >   	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> > +	select ARCH_HAS_SET_MEMORY
> >   	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
> >   	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
> >   	select ARCH_HAS_UACCESS_FLUSHCACHE
> > diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
> > new file mode 100644
> > index 000000000000..5230ddb2fefd
> > --- /dev/null
> > +++ b/arch/powerpc/include/asm/set_memory.h
> > @@ -0,0 +1,32 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _ASM_POWERPC_SET_MEMORY_H
> > +#define _ASM_POWERPC_SET_MEMORY_H
> > +
> > +#define SET_MEMORY_RO	1
> > +#define SET_MEMORY_RW	2
> > +#define SET_MEMORY_NX	3
> > +#define SET_MEMORY_X	4
> 
> Maybe going from 0 to 3 would be better than 1 to 4
> 
> > +
> > +int change_memory_attr(unsigned long addr, int numpages, int action);
> 
> action could be unsigned.
> 
> > +
> > +static inline int set_memory_ro(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
> > +}
> > +
> > +static inline int set_memory_rw(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
> > +}
> > +
> > +static inline int set_memory_nx(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
> > +}
> > +
> > +static inline int set_memory_x(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_attr(addr, numpages, SET_MEMORY_X);
> > +}
> > +
> > +#endif
> > diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> > index 5e147986400d..d0a0bcbc9289 100644
> > --- a/arch/powerpc/mm/Makefile
> > +++ b/arch/powerpc/mm/Makefile
> > @@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM)		+= highmem.o
> >   obj-$(CONFIG_PPC_COPRO_BASE)	+= copro_fault.o
> >   obj-$(CONFIG_PPC_PTDUMP)	+= ptdump/
> >   obj-$(CONFIG_KASAN)		+= kasan/
> > +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pageattr.o
> 
> CONFIG_ARCH_HAS_SET_MEMORY is set unconditionally, so I think you should
> add pageattr.o to obj-y instead. CONFIG_ARCH_HAS_XXX options are almost
> never used in Makefiles.

Fair enough, will keep that in mind.

> 
> > diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> > new file mode 100644
> > index 000000000000..15d5fb04f531
> > --- /dev/null
> > +++ b/arch/powerpc/mm/pageattr.c
> > @@ -0,0 +1,83 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/*
> > + * MMU-generic set_memory implementation for powerpc
> > + *
> > + * Copyright 2019, IBM Corporation.
> > + */
> > +
> > +#include <linux/mm.h>
> > +#include <linux/set_memory.h>
> > +
> > +#include <asm/mmu.h>
> > +#include <asm/page.h>
> > +#include <asm/pgtable.h>
> > +
> > +
> > +/*
> > + * Updates the attributes of a page in three steps:
> > + *
> > + * 1. invalidate the page table entry
> > + * 2. flush the TLB
> > + * 3. install the new entry with the updated attributes
> > + *
> > + * This is unsafe if the caller is attempting to change the mapping of the
> > + * page it is executing from, or if another CPU is concurrently using the
> > + * page being altered.
> > + *
> > + * TODO make the implementation resistant to this.
> > + */
> > +static int __change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> > +{
> > +	int action = *((int *)data);
> 
> Don't use pointers for such simple things; a pointer forces the compiler
> to set up a stack frame and save the data to the stack. Instead do:
> 
> 	int action = (int)data;
> 
> > +	pte_t pte_val;
> > +
> > +	// invalidate the PTE so it's safe to modify
> > +	pte_val = ptep_get_and_clear(&init_mm, addr, ptep);
> > +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> 
> Why flush a range for a single page? On most targets this will do a
> tlbia which is heavy, while a tlbie would suffice.
> 
> I think flush_tlb_kernel_range() should be replaced by something 
> flushing only a single page.

You might be able to help me out here; I wanted to do that, but the only
functions I could find that flushed single pages needed a
vm_area_struct, which I can't get.

> 
> > +
> > +	// modify the PTE bits as desired, then apply
> > +	switch (action) {
> > +	case SET_MEMORY_RO:
> > +		pte_val = pte_wrprotect(pte_val);
> > +		break;
> > +	case SET_MEMORY_RW:
> > +		pte_val = pte_mkwrite(pte_val);
> > +		break;
> > +	case SET_MEMORY_NX:
> > +		pte_val = pte_exprotect(pte_val);
> > +		break;
> > +	case SET_MEMORY_X:
> > +		pte_val = pte_mkexec(pte_val);
> > +		break;
> > +	default:
> > +		WARN_ON(true);
> > +		return -EINVAL;
> 
> Is it worth checking that the action is valid for each page? I think the
> validity of the action should be checked in change_memory_attr(). All other
> functions are static so you know they won't be called from outside.
> 
> Once done, you can squash __change_page_attr() into
> change_page_attr(), 
> remove the ret var and return 0 all the time.

Makes sense to fold things into a single function, but in terms of
performance it shouldn't make a difference, right?  I still have to
check the action to determine what to change (unless I replace passing
SET_MEMORY_RO into apply_to_page_range() with a function pointer to
pte_wrprotect() for example).  
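
Something like this, say (a rough sketch with out-of-line wrappers around
the static inline pte helpers; locking elided for brevity):

	static pte_t pte_ro(pte_t pte) { return pte_wrprotect(pte); }

	static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
	{
		pte_t (*modify)(pte_t) = (pte_t (*)(pte_t))data;
		pte_t pte = ptep_get_and_clear(&init_mm, addr, ptep);

		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		set_pte_at(&init_mm, addr, ptep, modify(pte));

		return 0;
	}

	/* caller:
	 * apply_to_page_range(&init_mm, start, size, change_page_attr,
	 *		       (void *)pte_ro);
	 */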

> 
> > +	}
> > +
> > +	set_pte_at(&init_mm, addr, ptep, pte_val);
> > +
> > +	return 0;
> > +}
> > +
> > +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> > +{
> > +	int ret;
> > +
> > +	spin_lock(&init_mm.page_table_lock);
> > +	ret = __change_page_attr(ptep, addr, data);
> > +	spin_unlock(&init_mm.page_table_lock);
> > +
> > +	return ret;
> > +}
> > +
> > +int change_memory_attr(unsigned long addr, int numpages, int action)
> > +{
> > +	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> > +	unsigned long size = numpages * PAGE_SIZE;
> > +
> > +	if (!numpages)
> > +		return 0;
> > +
> > +	return apply_to_page_range(&init_mm, start, size, change_page_attr, &action);
> 
> Use (void*)action instead of &action (see upper comment)

To get this to work I had to use (void *)(size_t)action to stop the
compiler from complaining about casting an int to a void*; is there a
better way to go about it?  Works fine, just looks gross.

> 
> > +}
> > 
> 
> Christophe
> 



* Re: [PATCH v6 1/5] powerpc/mm: Implement set_memory() routines
  2020-02-03  0:46     ` Russell Currey
@ 2020-02-03  7:06       ` Christophe Leroy
  0 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2020-02-03  7:06 UTC (permalink / raw)
  To: Russell Currey, linuxppc-dev
  Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening



On 03/02/2020 01:46, Russell Currey wrote:
> On Wed, 2020-01-08 at 13:52 +0100, Christophe Leroy wrote:
>>
>> On 24/12/2019 06:55, Russell Currey wrote:
>>> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
>>> index 5e147986400d..d0a0bcbc9289 100644
>>> --- a/arch/powerpc/mm/Makefile
>>> +++ b/arch/powerpc/mm/Makefile
>>> @@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM)		+= highmem.o
>>>    obj-$(CONFIG_PPC_COPRO_BASE)	+= copro_fault.o
>>>    obj-$(CONFIG_PPC_PTDUMP)	+= ptdump/
>>>    obj-$(CONFIG_KASAN)		+= kasan/
>>> +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pageattr.o
>>
>> CONFIG_ARCH_HAS_SET_MEMORY is set unconditionally, so I think you should
>> add pageattr.o to obj-y instead. CONFIG_ARCH_HAS_XXX options are almost
>> never used in Makefiles.
> 
> Fair enough, will keep that in mind.

I forgot I commented that. I'll do it in v3.

>>> +	pte_t pte_val;
>>> +
>>> +	// invalidate the PTE so it's safe to modify
>>> +	pte_val = ptep_get_and_clear(&init_mm, addr, ptep);
>>> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>>
>> Why flush a range for a single page? On most targets this will do a
>> tlbia which is heavy, while a tlbie would suffice.
>>
>> I think flush_tlb_kernel_range() should be replaced by something
>> flushing only a single page.
> 
> You might be able to help me out here; I wanted to do that, but the only
> functions I could find that flushed single pages needed a
> vm_area_struct, which I can't get.

I sent out two patches for that, one for book3s/32 and one for nohash:
https://patchwork.ozlabs.org/patch/1231983/
https://patchwork.ozlabs.org/patch/1232223/

Maybe one for book3s/64 would be needed as well? Can you do it if needed?


> 
>>
>>> +
>>> +	// modify the PTE bits as desired, then apply
>>> +	switch (action) {
>>> +	case SET_MEMORY_RO:
>>> +		pte_val = pte_wrprotect(pte_val);
>>> +		break;
>>> +	case SET_MEMORY_RW:
>>> +		pte_val = pte_mkwrite(pte_val);
>>> +		break;
>>> +	case SET_MEMORY_NX:
>>> +		pte_val = pte_exprotect(pte_val);
>>> +		break;
>>> +	case SET_MEMORY_X:
>>> +		pte_val = pte_mkexec(pte_val);
>>> +		break;
>>> +	default:
>>> +		WARN_ON(true);
>>> +		return -EINVAL;
>>
>> Is it worth checking that the action is valid for each page? I think the
>> validity of the action should be checked in change_memory_attr(). All other
>> functions are static so you know they won't be called from outside.
>>
>> Once done, you can squash __change_page_attr() into
>> change_page_attr(),
>> remove the ret var and return 0 all the time.
> 
> Makes sense to fold things into a single function, but in terms of
> performance it shouldn't make a difference, right?  I still have to
> check the action to determine what to change (unless I replace passing
> SET_MEMORY_RO into apply_to_page_range() with a function pointer to
> pte_wrprotect() for example).

pte_wrprotect() is a static inline.

> 
>>
>>> +	}
>>> +
>>> +	set_pte_at(&init_mm, addr, ptep, pte_val);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
>>> +{
>>> +	int ret;
>>> +
>>> +	spin_lock(&init_mm.page_table_lock);
>>> +	ret = __change_page_attr(ptep, addr, data);
>>> +	spin_unlock(&init_mm.page_table_lock);
>>> +
>>> +	return ret;
>>> +}
>>> +
>>> +int change_memory_attr(unsigned long addr, int numpages, int action)
>>> +{
>>> +	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
>>> +	unsigned long size = numpages * PAGE_SIZE;
>>> +
>>> +	if (!numpages)
>>> +		return 0;
>>> +
>>> +	return apply_to_page_range(&init_mm, start, size, change_page_attr, &action);
>>
>> Use (void*)action instead of &action (see upper comment)
> 
> To get this to work I had to use (void *)(size_t)action to stop the
> compiler from complaining about casting an int to a void*; is there a
> better way to go about it?  Works fine, just looks gross.

Yes, use long instead (see my v3).
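
i.e. roughly (sketch):

	int change_memory_attr(unsigned long addr, int numpages, long action)
	{
		unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
		unsigned long size = numpages * PAGE_SIZE;

		if (!numpages)
			return 0;

		/* a long round-trips through void * without casting gymnastics */
		return apply_to_page_range(&init_mm, start, size,
					   change_page_attr, (void *)action);
	}

	/* ... and in change_page_attr(): long action = (long)data; */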

Christophe

