linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v10 00/10] powerpc: Further Strict RWX support
@ 2021-03-30  4:51 Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines Jordan Niethe
                   ` (9 more replies)
  0 siblings, 10 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, Jordan Niethe, naveen.n.rao, dja

Another revision to this series adding more Strict RWX support on powerpc, in
particular Strict Module RWX.  This revision adds consideration for bpf.

The changes in v10 for each patch:

Christophe Leroy (2):
  powerpc/mm: implement set_memory_attr()
  powerpc/32: use set_memory_attr()

Jordan Niethe (3):
  powerpc/lib/code-patching: Set up Strict RWX patching earlier
  powerpc: Always define MODULES_{VADDR,END}
    v10: - New to series

  powerpc/bpf: Write protect JIT code
    v10: - New to series

Russell Currey (5):
  powerpc/mm: Implement set_memory() routines
    v10: - WARN if trying to change the hash linear map

  powerpc/kprobes: Mark newly allocated probes as ROX
    v10: - Use __vmalloc_node_range()

  powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
    v10: check_wx_pages now affects kernel_page_tables rather
         than triggering its own action.

  powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
    v10: - Predicate on !PPC_BOOK3S_604
         - Make module_alloc() use PAGE_KERNEL protection

  powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig

 arch/powerpc/Kconfig                   |   2 +
 arch/powerpc/Kconfig.debug             |   6 +-
 arch/powerpc/configs/skiroot_defconfig |   1 +
 arch/powerpc/include/asm/pgtable.h     |   5 +
 arch/powerpc/include/asm/set_memory.h  |  34 +++++++
 arch/powerpc/kernel/kprobes.c          |  14 +++
 arch/powerpc/kernel/module.c           |  14 +--
 arch/powerpc/lib/code-patching.c       |  12 +--
 arch/powerpc/mm/Makefile               |   2 +-
 arch/powerpc/mm/pageattr.c             | 121 +++++++++++++++++++++++++
 arch/powerpc/mm/pgtable_32.c           |  60 ++----------
 arch/powerpc/mm/ptdump/ptdump.c        |  34 ++++++-
 arch/powerpc/net/bpf_jit_comp.c        |   5 +-
 arch/powerpc/net/bpf_jit_comp64.c      |   4 +
 14 files changed, 245 insertions(+), 69 deletions(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h
 create mode 100644 arch/powerpc/mm/pageattr.c

-- 
2.25.1


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-30  5:16   ` Christophe Leroy
                     ` (2 more replies)
  2021-03-30  4:51 ` [PATCH v10 02/10] powerpc/lib/code-patching: Set up Strict RWX patching earlier Jordan Niethe
                   ` (8 subsequent siblings)
  9 siblings, 3 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
and are generally useful primitives to have.  This implementation is
designed to be completely generic across powerpc's many MMUs.

It's possible that this could be optimised to be faster for specific
MMUs, but the focus is on having a generic and safe implementation for
now.

This implementation does not handle cases where the caller is attempting
to change the mapping of the page it is executing from, or if another
CPU is concurrently using the page being altered.  These cases likely
shouldn't happen, but a more complex implementation with MMU-specific code
could safely handle them, so that is left as a TODO for now.

On hash the linear mapping is not kept in the linux pagetable, so this
will not change the protection if used on that range. Currently these
functions are not used on the linear map so just WARN for now.

These functions do nothing if STRICT_KERNEL_RWX is not enabled.
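
As a purely illustrative sketch (not part of this patch), a caller that has
finished writing a range of generated code could seal it using the helpers
added below; the helper name here is hypothetical:

#include <linux/set_memory.h>

/* Hypothetical example: addr must be page aligned. */
static int example_seal_generated_code(unsigned long addr, int numpages)
{
        int err;

        err = set_memory_ro(addr, numpages);    /* drop write permission */
        if (err)
                return err;

        return set_memory_x(addr, numpages);    /* allow execution */
}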

Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
      - WARN on hash linear map]
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: WARN if trying to change the hash linear map
---
 arch/powerpc/Kconfig                  |  1 +
 arch/powerpc/include/asm/set_memory.h | 32 ++++++++++
 arch/powerpc/mm/Makefile              |  2 +-
 arch/powerpc/mm/pageattr.c            | 88 +++++++++++++++++++++++++++
 4 files changed, 122 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h
 create mode 100644 arch/powerpc/mm/pageattr.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fc7f5c5933e6..4498a27ac9db 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -135,6 +135,7 @@ config PPC
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
+	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
new file mode 100644
index 000000000000..64011ea444b4
--- /dev/null
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SET_MEMORY_H
+#define _ASM_POWERPC_SET_MEMORY_H
+
+#define SET_MEMORY_RO	0
+#define SET_MEMORY_RW	1
+#define SET_MEMORY_NX	2
+#define SET_MEMORY_X	3
+
+int change_memory_attr(unsigned long addr, int numpages, long action);
+
+static inline int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
+}
+
+static inline int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
+}
+
+static inline int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
+}
+
+static inline int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_X);
+}
+
+#endif
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 3b4e9e4e25ea..d8a08abde1ae 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -5,7 +5,7 @@
 
 ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
-obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o \
+obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
 				   pgtable-frag.o ioremap.o ioremap_$(BITS).o \
 				   init-common.o mmu_context.o drmem.o
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
new file mode 100644
index 000000000000..9efcb01088da
--- /dev/null
+++ b/arch/powerpc/mm/pageattr.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * MMU-generic set_memory implementation for powerpc
+ *
+ * Copyright 2019, IBM Corporation.
+ */
+
+#include <linux/mm.h>
+#include <linux/set_memory.h>
+
+#include <asm/mmu.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+
+
+/*
+ * Updates the attributes of a page in three steps:
+ *
+ * 1. invalidate the page table entry
+ * 2. flush the TLB
+ * 3. install the new entry with the updated attributes
+ *
+ * This is unsafe if the caller is attempting to change the mapping of the
+ * page it is executing from, or if another CPU is concurrently using the
+ * page being altered.
+ *
+ * TODO make the implementation resistant to this.
+ *
+ * NOTE: can be dangerous to call without STRICT_KERNEL_RWX
+ */
+static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	long action = (long)data;
+	pte_t pte;
+
+	spin_lock(&init_mm.page_table_lock);
+
+	/* invalidate the PTE so it's safe to modify */
+	pte = ptep_get_and_clear(&init_mm, addr, ptep);
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	/* modify the PTE bits as desired, then apply */
+	switch (action) {
+	case SET_MEMORY_RO:
+		pte = pte_wrprotect(pte);
+		break;
+	case SET_MEMORY_RW:
+		pte = pte_mkwrite(pte);
+		break;
+	case SET_MEMORY_NX:
+		pte = pte_exprotect(pte);
+		break;
+	case SET_MEMORY_X:
+		pte = pte_mkexec(pte);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		break;
+	}
+
+	set_pte_at(&init_mm, addr, ptep, pte);
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+int change_memory_attr(unsigned long addr, int numpages, long action)
+{
+	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+	unsigned long sz = numpages * PAGE_SIZE;
+
+	if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
+		return 0;
+
+	if (numpages <= 0)
+		return 0;
+
+#ifdef CONFIG_PPC_BOOK3S_64
+	if (WARN_ON_ONCE(!radix_enabled() &&
+		     get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
+		return -1;
+	}
+#endif
+
+	return apply_to_existing_page_range(&init_mm, start, sz,
+					    change_page_attr, (void *)action);
+}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 02/10] powerpc/lib/code-patching: Set up Strict RWX patching earlier
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END} Jordan Niethe
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, Jordan Niethe, naveen.n.rao, dja

setup_text_poke_area() is a late init call so it runs before
mark_rodata_ro() and after the init calls. This lets all the init code
patching simply write to their locations. In the future, kprobes is
going to allocate its instruction pages RO, which means they will need
setup_text_poke_area() to have already been called before their code
patching. However, init_kprobes() (which allocates and patches some
instruction pages) is an early init call, so it happens before
setup_text_poke_area().

start_kernel() calls poking_init() before any of the init calls. On
powerpc, poking_init() is currently a nop. setup_text_poke_area() relies
on kernel virtual memory, cpu hotplug and per_cpu_areas being set up.
setup_per_cpu_areas(), boot_cpu_hotplug_init() and mm_init() are called
before poking_init().

Turn setup_text_poke_area() into poking_init().
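
For reference, the generic kernel provides a weak no-op definition of this
hook in init/main.c, roughly as below; that is what powerpc currently falls
back to, and this patch overrides it instead of registering a late initcall:

/* init/main.c (approximate) */
void __init __weak poking_init(void) { }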

Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v9: New to series
---
 arch/powerpc/lib/code-patching.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 2333625b5e31..b28afa1133db 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -65,14 +65,11 @@ static int text_area_cpu_down(unsigned int cpu)
 }
 
 /*
- * Run as a late init call. This allows all the boot time patching to be done
- * simply by patching the code, and then we're called here prior to
- * mark_rodata_ro(), which happens after all init calls are run. Although
- * BUG_ON() is rude, in this case it should only happen if ENOMEM, and we judge
- * it as being preferable to a kernel that will crash later when someone tries
- * to use patch_instruction().
+ * Although BUG_ON() is rude, in this case it should only happen if ENOMEM, and
+ * we judge it as being preferable to a kernel that will crash later when
+ * someone tries to use patch_instruction().
  */
-static int __init setup_text_poke_area(void)
+int __init poking_init(void)
 {
 	BUG_ON(!cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
 		"powerpc/text_poke:online", text_area_cpu_up,
@@ -80,7 +77,6 @@ static int __init setup_text_poke_area(void)
 
 	return 0;
 }
-late_initcall(setup_text_poke_area);
 
 /*
  * This can be called for kernel text or a module.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END}
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 02/10] powerpc/lib/code-patching: Set up Strict RWX patching earlier Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-30  5:00   ` Christophe Leroy
  2021-04-01 13:36   ` Christophe Leroy
  2021-03-30  4:51 ` [PATCH v10 04/10] powerpc/kprobes: Mark newly allocated probes as ROX Jordan Niethe
                   ` (6 subsequent siblings)
  9 siblings, 2 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, Jordan Niethe, naveen.n.rao, dja

If MODULES_{VADDR,END} are not defined, set them to VMALLOC_START and
VMALLOC_END respectively. This reduces the need for special cases. For
example, powerpc's module_alloc() was previously only defined when
MODULES_VADDR was defined; it is now defined unconditionally.

This will be useful for reducing conditional code in other places that
need to allocate from the module region (e.g., kprobes).
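
As an illustration only (not part of this patch), code can now reason about
the module region without guarding on MODULES_VADDR being defined, e.g. with
a hypothetical helper like:

/* Hypothetical example: no #ifdef MODULES_VADDR needed. */
static bool example_is_module_region_addr(unsigned long addr)
{
        return addr >= MODULES_VADDR && addr < MODULES_END;
}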

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: New to series
---
 arch/powerpc/include/asm/pgtable.h | 5 +++++
 arch/powerpc/kernel/module.c       | 5 +----
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 4eed82172e33..014c2921f26a 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -167,6 +167,11 @@ struct seq_file;
 void arch_report_meminfo(struct seq_file *m);
 #endif /* CONFIG_PPC64 */
 
+#ifndef MODULES_VADDR
+#define MODULES_VADDR VMALLOC_START
+#define MODULES_END VMALLOC_END
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_H */
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index a211b0253cdb..f1fb58389d58 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -14,6 +14,7 @@
 #include <asm/firmware.h>
 #include <linux/sort.h>
 #include <asm/setup.h>
+#include <linux/mm.h>
 
 static LIST_HEAD(module_bug_list);
 
@@ -87,13 +88,9 @@ int module_finalize(const Elf_Ehdr *hdr,
 	return 0;
 }
 
-#ifdef MODULES_VADDR
 void *module_alloc(unsigned long size)
 {
-	BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
-
 	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
 				    PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
-#endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 04/10] powerpc/kprobes: Mark newly allocated probes as ROX
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
                   ` (2 preceding siblings ...)
  2021-03-30  4:51 ` [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END} Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-30  5:05   ` Christophe Leroy
  2021-03-30  4:51 ` [PATCH v10 05/10] powerpc/bpf: Write protect JIT code Jordan Niethe
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

Add the arch specific insn page allocator for powerpc. This allocates
ROX pages if STRICT_KERNEL_RWX is enabled. These pages are only written
to with patch_instruction(), which is able to write to RO pages.
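
For illustration only, writing into one of these ROX pages then goes through
patch_instruction() rather than a plain store. The helper below is
hypothetical, and the patch_instruction() signature shown is the one used in
this tree at the time of writing (treat it as an approximation):

#include <asm/code-patching.h>
#include <asm/inst.h>

/* Hypothetical example: copy one instruction into an ROX insn slot. */
static int example_write_insn_slot(struct ppc_inst *slot, struct ppc_inst insn)
{
        return patch_instruction(slot, insn);
}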

Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[jpn: Reword commit message, switch to __vmalloc_node_range()]
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v9: - vmalloc_exec() no longer exists
    - Set the page to RW before freeing it
v10: - use __vmalloc_node_range()
---
 arch/powerpc/kernel/kprobes.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 01ab2163659e..3ae27af9b094 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -25,6 +25,7 @@
 #include <asm/sections.h>
 #include <asm/inst.h>
 #include <linux/uaccess.h>
+#include <linux/vmalloc.h>
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -103,6 +104,19 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 	return addr;
 }
 
+void *alloc_insn_page(void)
+{
+	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) {
+		return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
+				GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
+				NUMA_NO_NODE, __builtin_return_address(0));
+	} else {
+		return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
+				GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
+				NUMA_NO_NODE, __builtin_return_address(0));
+	}
+}
+
 int arch_prepare_kprobe(struct kprobe *p)
 {
 	int ret = 0;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 05/10] powerpc/bpf: Write protect JIT code
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
                   ` (3 preceding siblings ...)
  2021-03-30  4:51 ` [PATCH v10 04/10] powerpc/kprobes: Mark newly allocated probes as ROX Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-31 10:37   ` Michael Ellerman
  2021-03-30  4:51 ` [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Jordan Niethe
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, Jordan Niethe, naveen.n.rao, dja

Once CONFIG_STRICT_MODULE_RWX is enabled there will be no need to
override bpf_jit_free() because it is now possible to set images
read-only. So use the default implementation.

Also add the necessary call to bpf_jit_binary_lock_ro(), which will
write protect the JIT image and add exec permission to it after it has
finished being written.
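
For reference, bpf_jit_binary_lock_ro() in include/linux/filter.h is roughly
the following, so it builds directly on the set_memory_*() helpers added
earlier in this series:

static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
{
        set_vm_flush_reset_perms(hdr);
        set_memory_ro((unsigned long)hdr, hdr->pages);
        set_memory_x((unsigned long)hdr, hdr->pages);
}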

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: New to series
---
 arch/powerpc/net/bpf_jit_comp.c   | 5 ++++-
 arch/powerpc/net/bpf_jit_comp64.c | 4 ++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index e809cb5a1631..8015e4a7d2d4 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -659,12 +659,15 @@ void bpf_jit_compile(struct bpf_prog *fp)
 		bpf_jit_dump(flen, proglen, pass, code_base);
 
 	bpf_flush_icache(code_base, code_base + (proglen/4));
-
 #ifdef CONFIG_PPC64
 	/* Function descriptor nastiness: Address + TOC */
 	((u64 *)image)[0] = (u64)code_base;
 	((u64 *)image)[1] = local_paca->kernel_toc;
 #endif
+	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) {
+		set_memory_ro((unsigned long)image, alloclen >> PAGE_SHIFT);
+		set_memory_x((unsigned long)image, alloclen >> PAGE_SHIFT);
+	}
 
 	fp->bpf_func = (void *)image;
 	fp->jited = 1;
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index aaf1a887f653..1484ad588685 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1240,6 +1240,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	fp->jited_len = alloclen;
 
 	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
+	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
+		bpf_jit_binary_lock_ro(bpf_hdr);
 	if (!fp->is_func || extra_pass) {
 		bpf_prog_fill_jited_linfo(fp, addrs);
 out_addrs:
@@ -1262,6 +1264,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 }
 
 /* Overriding bpf_jit_free() as we don't set images read-only. */
+#ifndef CONFIG_STRICT_MODULE_RWX
 void bpf_jit_free(struct bpf_prog *fp)
 {
 	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
@@ -1272,3 +1275,4 @@ void bpf_jit_free(struct bpf_prog *fp)
 
 	bpf_prog_unlock_free(fp);
 }
+#endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
                   ` (4 preceding siblings ...)
  2021-03-30  4:51 ` [PATCH v10 05/10] powerpc/bpf: Write protect JIT code Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-31 11:24   ` Michael Ellerman
  2021-03-30  4:51 ` [PATCH v10 07/10] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX Jordan Niethe
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, Kees Cook, cmr, npiggin, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

Optionally run W+X checks when dumping pagetable information to
debugfs' kernel_page_tables.

To use:
    $ echo 1 > /sys/kernel/debug/check_wx_pages
    $ cat /sys/kernel/debug/kernel_page_tables

and check the kernel log.  Useful for testing strict module RWX.

To disable W+X checks:
	$ echo 0 > /sys/kernel/debug/check_wx_pages

Update the Kconfig entry to reflect this.

Also fix a typo.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell Currey <ruscur@russell.cc>
[jpn: Change check_wx_pages to act as mode bit affecting
      kernel_page_tables instead of triggering action on its own]
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: check_wx_pages now affects kernel_page_tables rather than triggering
     its own action.
---
 arch/powerpc/Kconfig.debug      |  6 ++++--
 arch/powerpc/mm/ptdump/ptdump.c | 34 ++++++++++++++++++++++++++++++++-
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index ae084357994e..56e99e9a30d9 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -371,7 +371,7 @@ config PPC_PTDUMP
 	  If you are unsure, say N.
 
 config PPC_DEBUG_WX
-	bool "Warn on W+X mappings at boot"
+	bool "Warn on W+X mappings at boot & enable manual checks at runtime"
 	depends on PPC_PTDUMP && STRICT_KERNEL_RWX
 	help
 	  Generate a warning if any W+X mappings are found at boot.
@@ -385,7 +385,9 @@ config PPC_DEBUG_WX
 	  of other unfixed kernel bugs easier.
 
 	  There is no runtime or memory usage effect of this option
-	  once the kernel has booted up - it's a one time check.
+	  once the kernel has booted up, it only automatically checks once.
+
+	  Enables the "check_wx_pages" debugfs entry for checking at runtime.
 
 	  If in doubt, say "Y".
 
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index aca354fb670b..6592f7a48c96 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -4,7 +4,7 @@
  *
  * This traverses the kernel pagetables and dumps the
  * information about the used sections of memory to
- * /sys/kernel/debug/kernel_pagetables.
+ * /sys/kernel/debug/kernel_page_tables.
  *
  * Derived from the arm64 implementation:
  * Copyright (c) 2014, The Linux Foundation, Laura Abbott.
@@ -27,6 +27,8 @@
 
 #include "ptdump.h"
 
+static bool check_wx;
+
 /*
  * To visualise what is happening,
  *
@@ -410,6 +412,9 @@ static int ptdump_show(struct seq_file *m, void *v)
 	/* Traverse kernel page tables */
 	walk_pagetables(&st);
 	note_page(&st, 0, 0, 0, 0);
+
+	if (check_wx)
+		ptdump_check_wx();
 	return 0;
 }
 
@@ -459,6 +464,33 @@ void ptdump_check_wx(void)
 	else
 		pr_info("Checked W+X mappings: passed, no W+X pages found\n");
 }
+
+static int check_wx_debugfs_set(void *data, u64 val)
+{
+	if (val == 1ULL)
+		check_wx = true;
+	else if (val == 0ULL)
+		check_wx = false;
+	else
+		return -EINVAL;
+
+	return 0;
+}
+
+static int check_wx_debugfs_get(void *data, u64 *val)
+{
+	*val = check_wx ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(check_wx_fops, check_wx_debugfs_get, check_wx_debugfs_set, "%llu\n");
+
+static int ptdump_check_wx_init(void)
+{
+	return debugfs_create_file("check_wx_pages", 0200, NULL,
+				   NULL, &check_wx_fops) ? 0 : -ENOMEM;
+}
+device_initcall(ptdump_check_wx_init);
 #endif
 
 static int ptdump_init(void)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 07/10] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
                   ` (5 preceding siblings ...)
  2021-03-30  4:51 ` [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 08/10] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Jordan Niethe
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

To enable strict module RWX on powerpc, set:

    CONFIG_STRICT_MODULE_RWX=y

You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
security benefit.

ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
This is due to a quirk in arch/Kconfig and arch/powerpc/Kconfig that
makes STRICT_MODULE_RWX *on by default* in configurations where
STRICT_KERNEL_RWX is *unavailable*.

Since this doesn't make much sense, and module RWX without kernel RWX
doesn't make much sense, having the same dependencies as kernel RWX
works around this problem.

With STRICT_MODULE_RWX, make module_alloc() allocate pages with
PAGE_KERNEL protection rather than PAGE_KERNEL_EXEC.

Book3s/32 processors with a hash MMU (i.e. the 604 core) cannot set memory
protection on a page-by-page basis, so do not enable it for them.

Signed-off-by: Russell Currey <ruscur@russell.cc>
[jpn: - predicate on !PPC_BOOK3S_604
      - make module_alloc() use PAGE_KERNEL protection]
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: - Predicate on !PPC_BOOK3S_604
     - Make module_alloc() use PAGE_KERNEL protection
---
 arch/powerpc/Kconfig         |  1 +
 arch/powerpc/kernel/module.c | 11 ++++++++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 4498a27ac9db..97c0c3540bfd 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -137,6 +137,7 @@ config PPC
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
+	select ARCH_HAS_STRICT_MODULE_RWX	if ARCH_HAS_STRICT_KERNEL_RWX && !PPC_BOOK3S_604
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
 	select ARCH_HAS_COPY_MC			if PPC64
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index f1fb58389d58..d086f5534fac 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -90,7 +90,12 @@ int module_finalize(const Elf_Ehdr *hdr,
 
 void *module_alloc(unsigned long size)
 {
-	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
-				    __builtin_return_address(0));
+	pgprot_t prot = PAGE_KERNEL_EXEC;
+
+	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
+		prot = PAGE_KERNEL;
+
+	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
+			GFP_KERNEL, prot, VM_FLUSH_RESET_PERMS,
+			NUMA_NO_NODE, __builtin_return_address(0));
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 08/10] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
                   ` (6 preceding siblings ...)
  2021-03-30  4:51 ` [PATCH v10 07/10] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-30  5:27   ` Christophe Leroy
  2021-03-30  4:51 ` [PATCH v10 09/10] powerpc/mm: implement set_memory_attr() Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 10/10] powerpc/32: use set_memory_attr() Jordan Niethe
  9 siblings, 1 reply; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, Joel Stanley, cmr, npiggin, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
enabled, and if you want memory protection for kernel text you'd want it
for modules too, so enable STRICT_MODULE_RWX there.

Acked-by: Joel Stanley <joel@joel.id.au>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
 arch/powerpc/configs/skiroot_defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
index b806a5d3a695..50fe06cb3a31 100644
--- a/arch/powerpc/configs/skiroot_defconfig
+++ b/arch/powerpc/configs/skiroot_defconfig
@@ -50,6 +50,7 @@ CONFIG_CMDLINE="console=tty0 console=hvc0 ipr.fast_reboot=1 quiet"
 # CONFIG_PPC_MEM_KEYS is not set
 CONFIG_JUMP_LABEL=y
 CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODULE_SIG_FORCE=y
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 09/10] powerpc/mm: implement set_memory_attr()
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
                   ` (7 preceding siblings ...)
  2021-03-30  4:51 ` [PATCH v10 08/10] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  2021-03-30  4:51 ` [PATCH v10 10/10] powerpc/32: use set_memory_attr() Jordan Niethe
  9 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, kbuild test robot, npiggin, naveen.n.rao, Jordan Niethe, dja

From: Christophe Leroy <christophe.leroy@csgroup.eu>

In addition to the set_memory_xx() functions, which allow changing the
memory attributes of memory regions that are not (yet) in use, implement a
set_memory_attr() function to:
- set the final memory protection after init on currently used
kernel regions.
- enable/disable kernel memory regions in the scope of DEBUG_PAGEALLOC.

Unlike set_memory_xx(), which can act in three steps because the regions
are unused, this function must modify the mappings 'on the fly' because the
kernel is executing from them. At the moment only PPC32 will use it, and
changing page attributes on the fly is not an issue there.
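
As a rough usage sketch, mirroring what the final patch in this series does
for PPC32 (the helper name is hypothetical):

#include <linux/pfn.h>
#include <linux/set_memory.h>

/* Hypothetical example: make an in-use kernel text range read-only + exec. */
static void example_mark_text_rox(unsigned long start, unsigned long end)
{
        int numpages = PFN_UP(end) - PFN_DOWN(start);

        set_memory_attr(start, numpages, PAGE_KERNEL_ROX);
}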

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reported-by: kbuild test robot <lkp@intel.com>
[ruscur: cast "data" to unsigned long instead of int]
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
 arch/powerpc/include/asm/set_memory.h |  2 ++
 arch/powerpc/mm/pageattr.c            | 33 +++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
index 64011ea444b4..b040094f7920 100644
--- a/arch/powerpc/include/asm/set_memory.h
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -29,4 +29,6 @@ static inline int set_memory_x(unsigned long addr, int numpages)
 	return change_memory_attr(addr, numpages, SET_MEMORY_X);
 }
 
+int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot);
+
 #endif
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
index 9efcb01088da..9611dfaebd45 100644
--- a/arch/powerpc/mm/pageattr.c
+++ b/arch/powerpc/mm/pageattr.c
@@ -86,3 +86,36 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
 	return apply_to_existing_page_range(&init_mm, start, sz,
 					    change_page_attr, (void *)action);
 }
+
+/*
+ * Set the attributes of a page:
+ *
+ * This function is used by PPC32 at the end of init to set final kernel memory
+ * protection. It includes changing the mapping of the page it is executing from
+ * and data pages it is using.
+ */
+static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	pgprot_t prot = __pgprot((unsigned long)data);
+
+	spin_lock(&init_mm.page_table_lock);
+
+	set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
+{
+	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+	unsigned long sz = numpages * PAGE_SIZE;
+
+	if (numpages <= 0)
+		return 0;
+
+	return apply_to_existing_page_range(&init_mm, start, sz, set_page_attr,
+					    (void *)pgprot_val(prot));
+}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v10 10/10] powerpc/32: use set_memory_attr()
  2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
                   ` (8 preceding siblings ...)
  2021-03-30  4:51 ` [PATCH v10 09/10] powerpc/mm: implement set_memory_attr() Jordan Niethe
@ 2021-03-30  4:51 ` Jordan Niethe
  9 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-03-30  4:51 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: ajd, cmr, npiggin, naveen.n.rao, Jordan Niethe, dja

From: Christophe Leroy <christophe.leroy@csgroup.eu>

Use set_memory_attr() instead of the PPC32-specific change_page_attr().

change_page_attr() was checking that the address was not mapped by
blocks and was handling highmem, but that's unneeded because the
affected pages can't be in highmem and block mapping verification
is already done by the callers.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[ruscur: rebase on powerpc/merge with Christophe's new patches]
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
 arch/powerpc/mm/pgtable_32.c | 60 ++++++------------------------------
 1 file changed, 10 insertions(+), 50 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index e0ec67a16887..dcf5ecca19d9 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -23,6 +23,7 @@
 #include <linux/highmem.h>
 #include <linux/memblock.h>
 #include <linux/slab.h>
+#include <linux/set_memory.h>
 
 #include <asm/pgalloc.h>
 #include <asm/fixmap.h>
@@ -132,64 +133,20 @@ void __init mapin_ram(void)
 	}
 }
 
-static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
-{
-	pte_t *kpte;
-	unsigned long address;
-
-	BUG_ON(PageHighMem(page));
-	address = (unsigned long)page_address(page);
-
-	if (v_block_mapped(address))
-		return 0;
-	kpte = virt_to_kpte(address);
-	if (!kpte)
-		return -EINVAL;
-	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-
-	return 0;
-}
-
-/*
- * Change the page attributes of an page in the linear mapping.
- *
- * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
- */
-static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
-{
-	int i, err = 0;
-	unsigned long flags;
-	struct page *start = page;
-
-	local_irq_save(flags);
-	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr_noflush(page, prot);
-		if (err)
-			break;
-	}
-	wmb();
-	local_irq_restore(flags);
-	flush_tlb_kernel_range((unsigned long)page_address(start),
-			       (unsigned long)page_address(page));
-	return err;
-}
-
 void mark_initmem_nx(void)
 {
-	struct page *page = virt_to_page(_sinittext);
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
 	if (v_block_mapped((unsigned long)_sinittext))
 		mmu_mark_initmem_nx();
 	else
-		change_page_attr(page, numpages, PAGE_KERNEL);
+		set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
 }
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
 void mark_rodata_ro(void)
 {
-	struct page *page;
 	unsigned long numpages;
 
 	if (v_block_mapped((unsigned long)_stext + 1)) {
@@ -198,20 +155,18 @@ void mark_rodata_ro(void)
 		return;
 	}
 
-	page = virt_to_page(_stext);
 	numpages = PFN_UP((unsigned long)_etext) -
 		   PFN_DOWN((unsigned long)_stext);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
-	page = virt_to_page(__start_rodata);
 	numpages = PFN_UP((unsigned long)__init_begin) -
 		   PFN_DOWN((unsigned long)__start_rodata);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+	set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
 
 	// mark_initmem_nx() should have already run by now
 	ptdump_check_wx();
@@ -221,9 +176,14 @@ void mark_rodata_ro(void)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
+	unsigned long addr = (unsigned long)page_address(page);
+
 	if (PageHighMem(page))
 		return;
 
-	change_page_attr(page, numpages, enable ? PAGE_KERNEL : __pgprot(0));
+	if (enable)
+		set_memory_attr(addr, numpages, PAGE_KERNEL);
+	else
+		set_memory_attr(addr, numpages, __pgprot(0));
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END}
  2021-03-30  4:51 ` [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END} Jordan Niethe
@ 2021-03-30  5:00   ` Christophe Leroy
  2021-04-01 13:36   ` Christophe Leroy
  1 sibling, 0 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-03-30  5:00 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev; +Cc: ajd, npiggin, cmr, naveen.n.rao, dja



On 30/03/2021 at 06:51, Jordan Niethe wrote:
> If MODULES_{VADDR,END} are not defined, set them to VMALLOC_START and
> VMALLOC_END respectively. This reduces the need for special cases. For
> example, powerpc's module_alloc() was previously only defined when
> MODULES_VADDR was defined; it is now defined unconditionally.
> 
> This will be useful for reducing conditional code in other places that
> need to allocate from the module region (e.g., kprobes).
> 
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: New to series
> ---
>   arch/powerpc/include/asm/pgtable.h | 5 +++++
>   arch/powerpc/kernel/module.c       | 5 +----
>   2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 4eed82172e33..014c2921f26a 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -167,6 +167,11 @@ struct seq_file;
>   void arch_report_meminfo(struct seq_file *m);
>   #endif /* CONFIG_PPC64 */
>   
> +#ifndef MODULES_VADDR
> +#define MODULES_VADDR VMALLOC_START
> +#define MODULES_END VMALLOC_END
> +#endif
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif /* _ASM_POWERPC_PGTABLE_H */
> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> index a211b0253cdb..f1fb58389d58 100644
> --- a/arch/powerpc/kernel/module.c
> +++ b/arch/powerpc/kernel/module.c
> @@ -14,6 +14,7 @@
>   #include <asm/firmware.h>
>   #include <linux/sort.h>
>   #include <asm/setup.h>
> +#include <linux/mm.h>
>   
>   static LIST_HEAD(module_bug_list);
>   
> @@ -87,13 +88,9 @@ int module_finalize(const Elf_Ehdr *hdr,
>   	return 0;
>   }
>   
> -#ifdef MODULES_VADDR
>   void *module_alloc(unsigned long size)
>   {
> -	BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
> -

This check is important. If we remove it from here, it should be done somewhere
else, for instance in asm/task_size_32.h.

>   	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
>   				    PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
>   				    __builtin_return_address(0));
>   }
> -#endif
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 04/10] powerpc/kprobes: Mark newly allocated probes as ROX
  2021-03-30  4:51 ` [PATCH v10 04/10] powerpc/kprobes: Mark newly allocated probes as ROX Jordan Niethe
@ 2021-03-30  5:05   ` Christophe Leroy
  2021-04-21  2:39     ` Jordan Niethe
  0 siblings, 1 reply; 34+ messages in thread
From: Christophe Leroy @ 2021-03-30  5:05 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev; +Cc: ajd, npiggin, cmr, naveen.n.rao, dja



On 30/03/2021 at 06:51, Jordan Niethe wrote:
> From: Russell Currey <ruscur@russell.cc>
> 
> Add the arch specific insn page allocator for powerpc. This allocates
> ROX pages if STRICT_KERNEL_RWX is enabled. These pages are only written
> to with patch_instruction(), which is able to write to RO pages.
> 
> Reviewed-by: Daniel Axtens <dja@axtens.net>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> [jpn: Reword commit message, switch to __vmalloc_node_range()]
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v9: - vmalloc_exec() no longer exists
>      - Set the page to RW before freeing it
> v10: - use __vmalloc_node_range()
> ---
>   arch/powerpc/kernel/kprobes.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 01ab2163659e..3ae27af9b094 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -25,6 +25,7 @@
>   #include <asm/sections.h>
>   #include <asm/inst.h>
>   #include <linux/uaccess.h>
> +#include <linux/vmalloc.h>
>   
>   DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
>   DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> @@ -103,6 +104,19 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
>   	return addr;
>   }
>   
> +void *alloc_insn_page(void)
> +{
> +	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) {
> +		return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
> +				GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
> +				NUMA_NO_NODE, __builtin_return_address(0));
> +	} else {
> +		return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
> +				GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
> +				NUMA_NO_NODE, __builtin_return_address(0));
> +	}
> +}
> +

What about

void *alloc_insn_page(void)
{
	pgprot_t prot = IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;

	return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
			GFP_KERNEL, prot, VM_FLUSH_RESET_PERMS,
			NUMA_NO_NODE, __builtin_return_address(0));
}

>   int arch_prepare_kprobe(struct kprobe *p)
>   {
>   	int ret = 0;
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-03-30  4:51 ` [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines Jordan Niethe
@ 2021-03-30  5:16   ` Christophe Leroy
  2021-04-21  2:51     ` Jordan Niethe
  2021-03-31 11:16   ` Michael Ellerman
  2021-04-01  4:37   ` Aneesh Kumar K.V
  2 siblings, 1 reply; 34+ messages in thread
From: Christophe Leroy @ 2021-03-30  5:16 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev; +Cc: ajd, npiggin, cmr, naveen.n.rao, dja



On 30/03/2021 at 06:51, Jordan Niethe wrote:
> From: Russell Currey <ruscur@russell.cc>
> 
> The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> and are generally useful primitives to have.  This implementation is
> designed to be completely generic across powerpc's many MMUs.
> 
> It's possible that this could be optimised to be faster for specific
> MMUs, but the focus is on having a generic and safe implementation for
> now.
> 
> This implementation does not handle cases where the caller is attempting
> to change the mapping of the page it is executing from, or if another
> CPU is concurrently using the page being altered.  These cases likely
> shouldn't happen, but a more complex implementation with MMU-specific code
> could safely handle them, so that is left as a TODO for now.
> 
> On hash the linear mapping is not kept in the linux pagetable, so this
> will not change the protection if used on that range. Currently these
> functions are not used on the linear map so just WARN for now.
> 
> These functions do nothing if STRICT_KERNEL_RWX is not enabled.
> 
> Reviewed-by: Daniel Axtens <dja@axtens.net>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> [jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
>        - WARN on hash linear map]
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: WARN if trying to change the hash linear map
> ---
>   arch/powerpc/Kconfig                  |  1 +
>   arch/powerpc/include/asm/set_memory.h | 32 ++++++++++
>   arch/powerpc/mm/Makefile              |  2 +-
>   arch/powerpc/mm/pageattr.c            | 88 +++++++++++++++++++++++++++
>   4 files changed, 122 insertions(+), 1 deletion(-)
>   create mode 100644 arch/powerpc/include/asm/set_memory.h
>   create mode 100644 arch/powerpc/mm/pageattr.c
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index fc7f5c5933e6..4498a27ac9db 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -135,6 +135,7 @@ config PPC
>   	select ARCH_HAS_MEMBARRIER_CALLBACKS
>   	select ARCH_HAS_MEMBARRIER_SYNC_CORE
>   	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> +	select ARCH_HAS_SET_MEMORY
>   	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
>   	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_HAS_UACCESS_FLUSHCACHE
> diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
> new file mode 100644
> index 000000000000..64011ea444b4
> --- /dev/null
> +++ b/arch/powerpc/include/asm/set_memory.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_POWERPC_SET_MEMORY_H
> +#define _ASM_POWERPC_SET_MEMORY_H
> +
> +#define SET_MEMORY_RO	0
> +#define SET_MEMORY_RW	1
> +#define SET_MEMORY_NX	2
> +#define SET_MEMORY_X	3
> +
> +int change_memory_attr(unsigned long addr, int numpages, long action);
> +
> +static inline int set_memory_ro(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
> +}
> +
> +static inline int set_memory_rw(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
> +}
> +
> +static inline int set_memory_nx(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
> +}
> +
> +static inline int set_memory_x(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_X);
> +}
> +
> +#endif
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 3b4e9e4e25ea..d8a08abde1ae 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -5,7 +5,7 @@
>   
>   ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
>   
> -obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o \
> +obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
>   				   init_$(BITS).o pgtable_$(BITS).o \
>   				   pgtable-frag.o ioremap.o ioremap_$(BITS).o \
>   				   init-common.o mmu_context.o drmem.o
> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> new file mode 100644
> index 000000000000..9efcb01088da
> --- /dev/null
> +++ b/arch/powerpc/mm/pageattr.c
> @@ -0,0 +1,88 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * MMU-generic set_memory implementation for powerpc
> + *
> + * Copyright 2019, IBM Corporation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/set_memory.h>
> +
> +#include <asm/mmu.h>
> +#include <asm/page.h>
> +#include <asm/pgtable.h>
> +
> +
> +/*
> + * Updates the attributes of a page in three steps:
> + *
> + * 1. invalidate the page table entry
> + * 2. flush the TLB
> + * 3. install the new entry with the updated attributes
> + *
> + * This is unsafe if the caller is attempting to change the mapping of the
> + * page it is executing from, or if another CPU is concurrently using the
> + * page being altered.
> + *
> + * TODO make the implementation resistant to this.
> + *
> + * NOTE: can be dangerous to call without STRICT_KERNEL_RWX
> + */
> +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> +{
> +	long action = (long)data;
> +	pte_t pte;
> +
> +	spin_lock(&init_mm.page_table_lock);
> +
> +	/* invalidate the PTE so it's safe to modify */
> +	pte = ptep_get_and_clear(&init_mm, addr, ptep);
> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +	/* modify the PTE bits as desired, then apply */
> +	switch (action) {
> +	case SET_MEMORY_RO:
> +		pte = pte_wrprotect(pte);
> +		break;
> +	case SET_MEMORY_RW:
> +		pte = pte_mkwrite(pte);
> +		break;
> +	case SET_MEMORY_NX:
> +		pte = pte_exprotect(pte);
> +		break;
> +	case SET_MEMORY_X:
> +		pte = pte_mkexec(pte);
> +		break;
> +	default:
> +		WARN_ON_ONCE(1);
> +		break;
> +	}
> +
> +	set_pte_at(&init_mm, addr, ptep, pte);
> +	spin_unlock(&init_mm.page_table_lock);
> +
> +	return 0;
> +}
> +
> +int change_memory_attr(unsigned long addr, int numpages, long action)
> +{
> +	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> +	unsigned long sz = numpages * PAGE_SIZE;
> +
> +	if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
> +		return 0;

You should do this in the header file in order to get it optimised out completely when 
CONFIG_STRICT_KERNEL_RWX is not set.

In asm/set_memory.h you could have:

#ifdef CONFIG_STRICT_KERNEL_RWX
int change_memory_attr(unsigned long addr, int numpages, long action);
#else
static inline int change_memory_attr(unsigned long addr, int numpages, long action) { return 0; }
#endif

Or another solution is to only define ARCH_HAS_SET_MEMORY when CONFIG_STRICT_KERNEL_RWX is selected.

> +
> +	if (numpages <= 0)
> +		return 0;
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +	if (WARN_ON_ONCE(!radix_enabled() &&
> +		     get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
> +		return -1;
> +	}
> +#endif
> +
> +	return apply_to_existing_page_range(&init_mm, start, sz,
> +					    change_page_attr, (void *)action);
> +}
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 08/10] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
  2021-03-30  4:51 ` [PATCH v10 08/10] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Jordan Niethe
@ 2021-03-30  5:27   ` Christophe Leroy
  2021-04-21  2:37     ` Jordan Niethe
  0 siblings, 1 reply; 34+ messages in thread
From: Christophe Leroy @ 2021-03-30  5:27 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, Joel Stanley, npiggin, cmr, naveen.n.rao, dja



On 30/03/2021 at 06:51, Jordan Niethe wrote:
> From: Russell Currey <ruscur@russell.cc>
> 
> skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
> enabled, and if you want memory protection for kernel text you'd want it
> for modules too, so enable STRICT_MODULE_RWX there.

Maybe we could now select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT in arch/powerpc/Kconfig.

Then this change would not be necessary.

Would be in line with https://github.com/linuxppc/issues/issues/223


> 
> Acked-by: Joel Stanley <joel@joel.id.au>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
>   arch/powerpc/configs/skiroot_defconfig | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
> index b806a5d3a695..50fe06cb3a31 100644
> --- a/arch/powerpc/configs/skiroot_defconfig
> +++ b/arch/powerpc/configs/skiroot_defconfig
> @@ -50,6 +50,7 @@ CONFIG_CMDLINE="console=tty0 console=hvc0 ipr.fast_reboot=1 quiet"
>   # CONFIG_PPC_MEM_KEYS is not set
>   CONFIG_JUMP_LABEL=y
>   CONFIG_STRICT_KERNEL_RWX=y
> +CONFIG_STRICT_MODULE_RWX=y
>   CONFIG_MODULES=y
>   CONFIG_MODULE_UNLOAD=y
>   CONFIG_MODULE_SIG_FORCE=y
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 05/10] powerpc/bpf: Write protect JIT code
  2021-03-30  4:51 ` [PATCH v10 05/10] powerpc/bpf: Write protect JIT code Jordan Niethe
@ 2021-03-31 10:37   ` Michael Ellerman
  2021-03-31 10:39     ` Christophe Leroy
  2021-04-21  2:35     ` Jordan Niethe
  0 siblings, 2 replies; 34+ messages in thread
From: Michael Ellerman @ 2021-03-31 10:37 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, Jordan Niethe, cmr, npiggin, naveen.n.rao, dja

Jordan Niethe <jniethe5@gmail.com> writes:

> Once CONFIG_STRICT_MODULE_RWX is enabled there will be no need to
> override bpf_jit_free() because it is now possible to set images
> read-only. So use the default implementation.
>
> Also add the necessary call to bpf_jit_binary_lock_ro(), which will
> write protect the JIT image and add exec permission to it after it has
> finished being written.
>
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: New to series
> ---
>  arch/powerpc/net/bpf_jit_comp.c   | 5 ++++-
>  arch/powerpc/net/bpf_jit_comp64.c | 4 ++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index e809cb5a1631..8015e4a7d2d4 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -659,12 +659,15 @@ void bpf_jit_compile(struct bpf_prog *fp)
>  		bpf_jit_dump(flen, proglen, pass, code_base);
>  
>  	bpf_flush_icache(code_base, code_base + (proglen/4));
> -
>  #ifdef CONFIG_PPC64
>  	/* Function descriptor nastiness: Address + TOC */
>  	((u64 *)image)[0] = (u64)code_base;
>  	((u64 *)image)[1] = local_paca->kernel_toc;
>  #endif
> +	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) {
> +		set_memory_ro((unsigned long)image, alloclen >> PAGE_SHIFT);
> +		set_memory_x((unsigned long)image, alloclen >> PAGE_SHIFT);
> +	}

You don't need to check the ifdef in a caller, there are stubs that
compile to nothing when CONFIG_ARCH_HAS_SET_MEMORY=n.
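
For reference, the stubs in include/linux/set_memory.h look roughly like:

#ifdef CONFIG_ARCH_HAS_SET_MEMORY
#include <asm/set_memory.h>
#else
static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
#endif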

> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index aaf1a887f653..1484ad588685 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -1240,6 +1240,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>  	fp->jited_len = alloclen;
>  
>  	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
> +	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
> +		bpf_jit_binary_lock_ro(bpf_hdr);

Do we need the ifdef here either? Looks like it should be safe to call
due to the stubs.

> @@ -1262,6 +1264,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>  }
>  
>  /* Overriding bpf_jit_free() as we don't set images read-only. */
> +#ifndef CONFIG_STRICT_MODULE_RWX

Did you test without this and notice something broken?

Looking at the generic version I can't tell why we need to override
this. Maybe we don't (anymore?) ?
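
For reference, the generic version in kernel/bpf/core.c is roughly (abridged):

void __weak bpf_jit_free(struct bpf_prog *fp)
{
        if (fp->jited)
                bpf_jit_binary_free(bpf_jit_binary_hdr(fp));

        bpf_prog_unlock_free(fp);
}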

cheers

>  void bpf_jit_free(struct bpf_prog *fp)
>  {
>  	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
> @@ -1272,3 +1275,4 @@ void bpf_jit_free(struct bpf_prog *fp)
>  
>  	bpf_prog_unlock_free(fp);
>  }
> +#endif
> -- 
> 2.25.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 05/10] powerpc/bpf: Write protect JIT code
  2021-03-31 10:37   ` Michael Ellerman
@ 2021-03-31 10:39     ` Christophe Leroy
  2021-04-21  2:35     ` Jordan Niethe
  1 sibling, 0 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-03-31 10:39 UTC (permalink / raw)
  To: Michael Ellerman, Jordan Niethe, linuxppc-dev
  Cc: naveen.n.rao, cmr, ajd, npiggin, dja



On 31/03/2021 at 12:37, Michael Ellerman wrote:
> Jordan Niethe <jniethe5@gmail.com> writes:
> 
>> Once CONFIG_STRICT_MODULE_RWX is enabled there will be no need to
>> override bpf_jit_free() because it is now possible to set images
>> read-only. So use the default implementation.
>>
>> Also add the necessary call to bpf_jit_binary_lock_ro() which will
>> remove write protection and add exec protection to the JIT image after
>> it has finished being written.
>>
>> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>> ---
>> v10: New to series
>> ---
>>   arch/powerpc/net/bpf_jit_comp.c   | 5 ++++-
>>   arch/powerpc/net/bpf_jit_comp64.c | 4 ++++
>>   2 files changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
>> index e809cb5a1631..8015e4a7d2d4 100644
>> --- a/arch/powerpc/net/bpf_jit_comp.c
>> +++ b/arch/powerpc/net/bpf_jit_comp.c
>> @@ -659,12 +659,15 @@ void bpf_jit_compile(struct bpf_prog *fp)
>>   		bpf_jit_dump(flen, proglen, pass, code_base);
>>   
>>   	bpf_flush_icache(code_base, code_base + (proglen/4));
>> -
>>   #ifdef CONFIG_PPC64
>>   	/* Function descriptor nastiness: Address + TOC */
>>   	((u64 *)image)[0] = (u64)code_base;
>>   	((u64 *)image)[1] = local_paca->kernel_toc;
>>   #endif
>> +	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) {
>> +		set_memory_ro((unsigned long)image, alloclen >> PAGE_SHIFT);
>> +		set_memory_x((unsigned long)image, alloclen >> PAGE_SHIFT);
>> +	}
> 
> You don't need to check the ifdef in a caller, there are stubs that
> compile to nothing when CONFIG_ARCH_HAS_SET_MEMORY=n.

I was about to do the same comment, but ....

CONFIG_STRICT_MODULE_RWX is not CONFIG_ARCH_HAS_SET_MEMORY

> 
>> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
>> index aaf1a887f653..1484ad588685 100644
>> --- a/arch/powerpc/net/bpf_jit_comp64.c
>> +++ b/arch/powerpc/net/bpf_jit_comp64.c
>> @@ -1240,6 +1240,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>>   	fp->jited_len = alloclen;
>>   
>>   	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
>> +	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
>> +		bpf_jit_binary_lock_ro(bpf_hdr);
> 
> Do we need the ifdef here either? Looks like it should be safe to call
> due to the stubs.

Same

> 
>> @@ -1262,6 +1264,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>>   }
>>   
>>   /* Overriding bpf_jit_free() as we don't set images read-only. */
>> +#ifndef CONFIG_STRICT_MODULE_RWX
> 
> Did you test without this and notice something broken?
> 
> Looking at the generic version I can't tell why we need to override
> this. Maybe we don't (anymore?) ?
> 
> cheers
> 
>>   void bpf_jit_free(struct bpf_prog *fp)
>>   {
>>   	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
>> @@ -1272,3 +1275,4 @@ void bpf_jit_free(struct bpf_prog *fp)
>>   
>>   	bpf_prog_unlock_free(fp);
>>   }
>> +#endif
>> -- 
>> 2.25.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-03-30  4:51 ` [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines Jordan Niethe
  2021-03-30  5:16   ` Christophe Leroy
@ 2021-03-31 11:16   ` Michael Ellerman
  2021-03-31 12:03     ` Christophe Leroy
  2021-04-21  5:03     ` Jordan Niethe
  2021-04-01  4:37   ` Aneesh Kumar K.V
  2 siblings, 2 replies; 34+ messages in thread
From: Michael Ellerman @ 2021-03-31 11:16 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, Jordan Niethe, cmr, npiggin, naveen.n.rao, dja

Hi Jordan,

A few nits below ...

Jordan Niethe <jniethe5@gmail.com> writes:
> From: Russell Currey <ruscur@russell.cc>
>
> The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> and are generally useful primitives to have.  This implementation is
> designed to be completely generic across powerpc's many MMUs.
>
> It's possible that this could be optimised to be faster for specific
> MMUs, but the focus is on having a generic and safe implementation for
> now.
>
> This implementation does not handle cases where the caller is attempting
> to change the mapping of the page it is executing from, or if another
> CPU is concurrently using the page being altered.  These cases likely
> shouldn't happen, but a more complex implementation with MMU-specific code
> could safely handle them, so that is left as a TODO for now.
>
> On hash the linear mapping is not kept in the linux pagetable, so this
> will not change the protection if used on that range. Currently these
> functions are not used on the linear map so just WARN for now.
>
> These functions do nothing if STRICT_KERNEL_RWX is not enabled.
>
> Reviewed-by: Daniel Axtens <dja@axtens.net>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> [jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
>       - WARN on hash linear map]
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: WARN if trying to change the hash linear map
> ---
>  arch/powerpc/Kconfig                  |  1 +
>  arch/powerpc/include/asm/set_memory.h | 32 ++++++++++
>  arch/powerpc/mm/Makefile              |  2 +-
>  arch/powerpc/mm/pageattr.c            | 88 +++++++++++++++++++++++++++
>  4 files changed, 122 insertions(+), 1 deletion(-)
>  create mode 100644 arch/powerpc/include/asm/set_memory.h
>  create mode 100644 arch/powerpc/mm/pageattr.c
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index fc7f5c5933e6..4498a27ac9db 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -135,6 +135,7 @@ config PPC
>  	select ARCH_HAS_MEMBARRIER_CALLBACKS
>  	select ARCH_HAS_MEMBARRIER_SYNC_CORE
>  	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> +	select ARCH_HAS_SET_MEMORY

Below you do:

	if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
		return 0;

Which suggests we should instead only select ARCH_HAS_SET_MEMORY if
STRICT_KERNEL_RWX?


> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 3b4e9e4e25ea..d8a08abde1ae 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -5,7 +5,7 @@
>  
>  ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
>  
> -obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o \
> +obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \

.. and then the file should only be built if ARCH_HAS_SET_MEMORY = y.

>  				   init_$(BITS).o pgtable_$(BITS).o \
>  				   pgtable-frag.o ioremap.o ioremap_$(BITS).o \
>  				   init-common.o mmu_context.o drmem.o
> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> new file mode 100644
> index 000000000000..9efcb01088da
> --- /dev/null
> +++ b/arch/powerpc/mm/pageattr.c
> @@ -0,0 +1,88 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * MMU-generic set_memory implementation for powerpc
> + *
> + * Copyright 2019, IBM Corporation.

Should be 2019-2021.

> + */
> +
> +#include <linux/mm.h>
> +#include <linux/set_memory.h>
> +
> +#include <asm/mmu.h>
> +#include <asm/page.h>
> +#include <asm/pgtable.h>
> +
> +
> +/*
> + * Updates the attributes of a page in three steps:
> + *
> + * 1. invalidate the page table entry
> + * 2. flush the TLB
> + * 3. install the new entry with the updated attributes
> + *
> + * This is unsafe if the caller is attempting to change the mapping of the
> + * page it is executing from, or if another CPU is concurrently using the
> + * page being altered.

Is the 2nd part of that statement true?

Or, I guess maybe it is true depending on what "unsafe" means.

AIUI it's unsafe to use this on the page you're executing from, and by
unsafe we mean the kernel will potentially crash because it will lose
the mapping for the currently executing text.

Using this on a page that another CPU is accessing could be safe, if eg.
the other CPU is reading from the page and we are just changing it from
RW->RO.

So I'm not sure they're the same type of "unsafe".

> + * TODO make the implementation resistant to this.
> + *
> + * NOTE: can be dangerous to call without STRICT_KERNEL_RWX

I don't think we need that anymore?

> + */
> +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> +{
> +	long action = (long)data;
> +	pte_t pte;
> +
> +	spin_lock(&init_mm.page_table_lock);
> +
> +	/* invalidate the PTE so it's safe to modify */
> +	pte = ptep_get_and_clear(&init_mm, addr, ptep);
> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +	/* modify the PTE bits as desired, then apply */
> +	switch (action) {
> +	case SET_MEMORY_RO:
> +		pte = pte_wrprotect(pte);
> +		break;

So set_memory_ro() removes write, but doesn't remove execute.

That doesn't match my mental model of what "set to ro" means, but I
guess I'm wrong because the other implementations seem to do something
similar.


> +	case SET_MEMORY_RW:
> +		pte = pte_mkwrite(pte);

I think we want to add pte_mkdirty() here also to avoid a fault when the
mapping is written to.

eg. pte_mkwrite(pte_mkdirty(pte));

> +		break;
> +	case SET_MEMORY_NX:
> +		pte = pte_exprotect(pte);
> +		break;
> +	case SET_MEMORY_X:
> +		pte = pte_mkexec(pte);
> +		break;
> +	default:
> +		WARN_ON_ONCE(1);
> +		break;
> +	}
> +
> +	set_pte_at(&init_mm, addr, ptep, pte);
> +	spin_unlock(&init_mm.page_table_lock);
> +
> +	return 0;
> +}
> +
> +int change_memory_attr(unsigned long addr, int numpages, long action)
> +{
> +	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> +	unsigned long sz = numpages * PAGE_SIZE;
> +
> +	if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
> +		return 0;
> +
> +	if (numpages <= 0)
> +		return 0;
> +

This ↓ should have a comment explaining what it's doing:

> +#ifdef CONFIG_PPC_BOOK3S_64
> +	if (WARN_ON_ONCE(!radix_enabled() &&
> +		     get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
> +		return -1;
> +	}
> +#endif

Maybe:

	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) &&
	    WARN_ON_ONCE(!radix_enabled() && get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
		return -1;
	}

But then Aneesh pointed out that we should also block VMEMMAP_REGION_ID.

It might be better to just check for the permitted regions.

	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled()) {
        	int region = get_region_id(addr);

	    	if (WARN_ON_ONCE(region != VMALLOC_REGION_ID && region != IO_REGION_ID))
                	return -1;
	}

> +
> +	return apply_to_existing_page_range(&init_mm, start, sz,
> +					    change_page_attr, (void *)action);
> +}


cheers

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  2021-03-30  4:51 ` [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Jordan Niethe
@ 2021-03-31 11:24   ` Michael Ellerman
  2021-04-21  2:23     ` Jordan Niethe
  0 siblings, 1 reply; 34+ messages in thread
From: Michael Ellerman @ 2021-03-31 11:24 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, Kees Cook, Jordan Niethe, cmr, npiggin, naveen.n.rao, dja

Jordan Niethe <jniethe5@gmail.com> writes:
> From: Russell Currey <ruscur@russell.cc>
>
> Optionally run W+X checks when dumping pagetable information to
> debugfs' kernel_page_tables.
>
> To use:
>     $ echo 1 > /sys/kernel/debug/check_wx_pages
>     $ cat /sys/kernel/debug/kernel_page_tables
>
> and check the kernel log.  Useful for testing strict module RWX.
>
> To disable W+X checks:
> 	$ echo 0 > /sys/kernel/debug/check_wx_pages
>
> Update the Kconfig entry to reflect this.
>
> Also fix a typo.
>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> [jpn: Change check_wx_pages to act as mode bit affecting
>       kernel_page_tables instead of triggering action on its own]
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: check_wx_pages now affects kernel_page_tables rather then triggers
>      its own action.

Hmm. I liked the old version better :)

I think you changed it based on Christophe's comment:

  Why not just perform the test everytime someone dumps kernel_page_tables ?


But I think he meant *always* do the check when someone dumps
kernel_page_tables, not have another file to enable checking and then
require someone to dump kernel_page_tables to do the actual check.

Still I like the previous version where you can do the checks
separately, without having to dump the page tables, because dumping can
sometimes take quite a while.

What would be even better is if ptdump_check_wx() returned an error when
wx pages were found, and that was plumbed out to the debugfs file. That
way you can script around it.
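
Something along these lines, say (only a sketch: check_wx_show() is a made-up
name, and an int-returning ptdump_check_wx() is the change being suggested, it
doesn't exist yet):

	/* needs <linux/debugfs.h> and <linux/seq_file.h> */
	static int check_wx_show(struct seq_file *m, void *v)
	{
		/* assumes ptdump_check_wx() is reworked to return the number
		 * of W+X pages found instead of void */
		unsigned long wx = ptdump_check_wx();

		if (wx)
			return -EINVAL;	/* read() fails, so scripts can detect W+X pages */

		seq_puts(m, "no W+X pages\n");
		return 0;
	}
	DEFINE_SHOW_ATTRIBUTE(check_wx);

	/* registered from the ptdump init code with something like:
	 * debugfs_create_file("check_wx_pages", 0400, NULL, NULL, &check_wx_fops);
	 */

Then `cat /sys/kernel/debug/check_wx_pages` only succeeds when no W+X mappings
are found.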

cheers

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-03-31 11:16   ` Michael Ellerman
@ 2021-03-31 12:03     ` Christophe Leroy
  2021-04-21  5:03     ` Jordan Niethe
  1 sibling, 0 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-03-31 12:03 UTC (permalink / raw)
  To: Michael Ellerman, Jordan Niethe, linuxppc-dev
  Cc: ajd, npiggin, cmr, naveen.n.rao, dja



Le 31/03/2021 à 13:16, Michael Ellerman a écrit :
> Hi Jordan,
> 
> A few nits below ...
> 
> Jordan Niethe <jniethe5@gmail.com> writes:
>> From: Russell Currey <ruscur@russell.cc>
>>
>> The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
>> and are generally useful primitives to have.  This implementation is
>> designed to be completely generic across powerpc's many MMUs.
>>
>> It's possible that this could be optimised to be faster for specific
>> MMUs, but the focus is on having a generic and safe implementation for
>> now.
>>
>> This implementation does not handle cases where the caller is attempting
>> to change the mapping of the page it is executing from, or if another
>> CPU is concurrently using the page being altered.  These cases likely
>> shouldn't happen, but a more complex implementation with MMU-specific code
>> could safely handle them, so that is left as a TODO for now.
>>
>> On hash the linear mapping is not kept in the linux pagetable, so this
>> will not change the protection if used on that range. Currently these
>> functions are not used on the linear map so just WARN for now.
>>
>> These functions do nothing if STRICT_KERNEL_RWX is not enabled.
>>
>> Reviewed-by: Daniel Axtens <dja@axtens.net>
>> Signed-off-by: Russell Currey <ruscur@russell.cc>
>> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
>> [jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
>>        - WARN on hash linear map]
>> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>> ---
>> v10: WARN if trying to change the hash linear map
>> ---

> 
> This ↓ should have a comment explaining what it's doing:
> 
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +	if (WARN_ON_ONCE(!radix_enabled() &&
>> +		     get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
>> +		return -1;
>> +	}
>> +#endif
> 
> Maybe:
> 
> 	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) &&
> 	    WARN_ON_ONCE(!radix_enabled() && get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
> 		return -1;
> 	}

get_region_id() only exists for book3s/64 for the time being, and so does LINEAR_MAP_REGION_ID.


> 
> But then Aneesh pointed out that we should also block VMEMMAP_REGION_ID.
> 
> It might be better to just check for the permitted regions.
> 
> 	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled()) {
>          	int region = get_region_id(addr);
> 
> 	    	if (WARN_ON_ONCE(region != VMALLOC_REGION_ID && region != IO_REGION_ID))
>                  	return -1;
> 	}
> 
>> +
>> +	return apply_to_existing_page_range(&init_mm, start, sz,
>> +					    change_page_attr, (void *)action);
>> +}
> 
> 
> cheers
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-03-30  4:51 ` [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines Jordan Niethe
  2021-03-30  5:16   ` Christophe Leroy
  2021-03-31 11:16   ` Michael Ellerman
@ 2021-04-01  4:37   ` Aneesh Kumar K.V
  2021-04-21  5:19     ` Jordan Niethe
  2 siblings, 1 reply; 34+ messages in thread
From: Aneesh Kumar K.V @ 2021-04-01  4:37 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, Jordan Niethe, npiggin, cmr, naveen.n.rao, dja

Jordan Niethe <jniethe5@gmail.com> writes:

> From: Russell Currey <ruscur@russell.cc>
>
> The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> and are generally useful primitives to have.  This implementation is
> designed to be completely generic across powerpc's many MMUs.
>
> It's possible that this could be optimised to be faster for specific
> MMUs, but the focus is on having a generic and safe implementation for
> now.
>
> This implementation does not handle cases where the caller is attempting
> to change the mapping of the page it is executing from, or if another
> CPU is concurrently using the page being altered.  These cases likely
> shouldn't happen, but a more complex implementation with MMU-specific code
> could safely handle them, so that is left as a TODO for now.
>
> On hash the linear mapping is not kept in the linux pagetable, so this
> will not change the protection if used on that range. Currently these
> functions are not used on the linear map so just WARN for now.
>
> These functions do nothing if STRICT_KERNEL_RWX is not enabled.
>
> Reviewed-by: Daniel Axtens <dja@axtens.net>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> [jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
>       - WARN on hash linear map]
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: WARN if trying to change the hash linear map
> ---
>  arch/powerpc/Kconfig                  |  1 +
>  arch/powerpc/include/asm/set_memory.h | 32 ++++++++++
>  arch/powerpc/mm/Makefile              |  2 +-
>  arch/powerpc/mm/pageattr.c            | 88 +++++++++++++++++++++++++++
>  4 files changed, 122 insertions(+), 1 deletion(-)
>  create mode 100644 arch/powerpc/include/asm/set_memory.h
>  create mode 100644 arch/powerpc/mm/pageattr.c
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index fc7f5c5933e6..4498a27ac9db 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -135,6 +135,7 @@ config PPC
>  	select ARCH_HAS_MEMBARRIER_CALLBACKS
>  	select ARCH_HAS_MEMBARRIER_SYNC_CORE
>  	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> +	select ARCH_HAS_SET_MEMORY
>  	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
>  	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
>  	select ARCH_HAS_UACCESS_FLUSHCACHE
> diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
> new file mode 100644
> index 000000000000..64011ea444b4
> --- /dev/null
> +++ b/arch/powerpc/include/asm/set_memory.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_POWERPC_SET_MEMORY_H
> +#define _ASM_POWERPC_SET_MEMORY_H
> +
> +#define SET_MEMORY_RO	0
> +#define SET_MEMORY_RW	1
> +#define SET_MEMORY_NX	2
> +#define SET_MEMORY_X	3
> +
> +int change_memory_attr(unsigned long addr, int numpages, long action);
> +
> +static inline int set_memory_ro(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
> +}
> +
> +static inline int set_memory_rw(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
> +}
> +
> +static inline int set_memory_nx(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
> +}
> +
> +static inline int set_memory_x(unsigned long addr, int numpages)
> +{
> +	return change_memory_attr(addr, numpages, SET_MEMORY_X);
> +}
> +
> +#endif
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 3b4e9e4e25ea..d8a08abde1ae 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -5,7 +5,7 @@
>  
>  ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
>  
> -obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o \
> +obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
>  				   init_$(BITS).o pgtable_$(BITS).o \
>  				   pgtable-frag.o ioremap.o ioremap_$(BITS).o \
>  				   init-common.o mmu_context.o drmem.o
> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> new file mode 100644
> index 000000000000..9efcb01088da
> --- /dev/null
> +++ b/arch/powerpc/mm/pageattr.c
> @@ -0,0 +1,88 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * MMU-generic set_memory implementation for powerpc
> + *
> + * Copyright 2019, IBM Corporation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/set_memory.h>
> +
> +#include <asm/mmu.h>
> +#include <asm/page.h>
> +#include <asm/pgtable.h>
> +
> +
> +/*
> + * Updates the attributes of a page in three steps:
> + *
> + * 1. invalidate the page table entry
> + * 2. flush the TLB
> + * 3. install the new entry with the updated attributes
> + *
> + * This is unsafe if the caller is attempting to change the mapping of the
> + * page it is executing from, or if another CPU is concurrently using the
> + * page being altered.
> + *
> + * TODO make the implementation resistant to this.
> + *
> + * NOTE: can be dangerous to call without STRICT_KERNEL_RWX
> + */
> +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> +{
> +	long action = (long)data;
> +	pte_t pte;
> +
> +	spin_lock(&init_mm.page_table_lock);
> +
> +	/* invalidate the PTE so it's safe to modify */
> +	pte = ptep_get_and_clear(&init_mm, addr, ptep);
> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +	/* modify the PTE bits as desired, then apply */
> +	switch (action) {
> +	case SET_MEMORY_RO:
> +		pte = pte_wrprotect(pte);
> +		break;
> +	case SET_MEMORY_RW:
> +		pte = pte_mkwrite(pte);
> +		break;
> +	case SET_MEMORY_NX:
> +		pte = pte_exprotect(pte);
> +		break;
> +	case SET_MEMORY_X:
> +		pte = pte_mkexec(pte);
> +		break;
> +	default:
> +		WARN_ON_ONCE(1);
> +		break;
> +	}
> +
> +	set_pte_at(&init_mm, addr, ptep, pte);
> +	spin_unlock(&init_mm.page_table_lock);
> +
> +	return 0;
> +}
> +
> +int change_memory_attr(unsigned long addr, int numpages, long action)
> +{
> +	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> +	unsigned long sz = numpages * PAGE_SIZE;
> +
> +	if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
> +		return 0;

What restrictions imposed by that config are we dependent on here? 


> +
> +	if (numpages <= 0)
> +		return 0;
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +	if (WARN_ON_ONCE(!radix_enabled() &&
> +		     get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
> +		return -1;
> +	}
> +#endif

What about VMEMMAP_REGION_ID?

> +
> +	return apply_to_existing_page_range(&init_mm, start, sz,
> +					    change_page_attr, (void *)action);


That handles only 64K mappings. What about the linear map? Also there is a
patchset implementing huge pages for vmalloc mappings.

> +}
> -- 
> 2.25.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END}
  2021-03-30  4:51 ` [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END} Jordan Niethe
  2021-03-30  5:00   ` Christophe Leroy
@ 2021-04-01 13:36   ` Christophe Leroy
  2021-04-21  2:46     ` Jordan Niethe
  1 sibling, 1 reply; 34+ messages in thread
From: Christophe Leroy @ 2021-04-01 13:36 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev; +Cc: ajd, npiggin, cmr, naveen.n.rao, dja



Le 30/03/2021 à 06:51, Jordan Niethe a écrit :
> If MODULES_{VADDR,END} are not defined set them to VMALLOC_START and
> VMALLOC_END respectively. This reduces the need for special cases. For
> example, powerpc's module_alloc() was previously predicated on
> MODULES_VADDR being defined but now is unconditionally defined.
> 
> This will be useful reducing conditional code in other places that need
> to allocate from the module region (i.e., kprobes).
> 
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: New to series
> ---
>   arch/powerpc/include/asm/pgtable.h | 5 +++++
>   arch/powerpc/kernel/module.c       | 5 +----

You probably also have changes to do in kernel/ptdump.c

In mm/book3s32/mmu.c and mm/kasan/kasan_init_32.c as well, although that's harmless here.

>   2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 4eed82172e33..014c2921f26a 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -167,6 +167,11 @@ struct seq_file;
>   void arch_report_meminfo(struct seq_file *m);
>   #endif /* CONFIG_PPC64 */
>   
> +#ifndef MODULES_VADDR
> +#define MODULES_VADDR VMALLOC_START
> +#define MODULES_END VMALLOC_END
> +#endif
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif /* _ASM_POWERPC_PGTABLE_H */
> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> index a211b0253cdb..f1fb58389d58 100644
> --- a/arch/powerpc/kernel/module.c
> +++ b/arch/powerpc/kernel/module.c
> @@ -14,6 +14,7 @@
>   #include <asm/firmware.h>
>   #include <linux/sort.h>
>   #include <asm/setup.h>
> +#include <linux/mm.h>
>   
>   static LIST_HEAD(module_bug_list);
>   
> @@ -87,13 +88,9 @@ int module_finalize(const Elf_Ehdr *hdr,
>   	return 0;
>   }
>   
> -#ifdef MODULES_VADDR
>   void *module_alloc(unsigned long size)
>   {
> -	BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
> -

The above check is needed somewhere; if you remove it from here you have to perform the check 
somewhere else.

>   	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
>   				    PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
>   				    __builtin_return_address(0));
>   }
> -#endif
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  2021-03-31 11:24   ` Michael Ellerman
@ 2021-04-21  2:23     ` Jordan Niethe
  2021-04-21  5:16       ` Christophe Leroy
  0 siblings, 1 reply; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  2:23 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: ajd, Kees Cook, cmr, Nicholas Piggin, naveen.n.rao, linuxppc-dev,
	Daniel Axtens

On Wed, Mar 31, 2021 at 10:24 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Jordan Niethe <jniethe5@gmail.com> writes:
> > From: Russell Currey <ruscur@russell.cc>
> >
> > Optionally run W+X checks when dumping pagetable information to
> > debugfs' kernel_page_tables.
> >
> > To use:
> >     $ echo 1 > /sys/kernel/debug/check_wx_pages
> >     $ cat /sys/kernel/debug/kernel_page_tables
> >
> > and check the kernel log.  Useful for testing strict module RWX.
> >
> > To disable W+X checks:
> >       $ echo 0 > /sys/kernel/debug/check_wx_pages
> >
> > Update the Kconfig entry to reflect this.
> >
> > Also fix a typo.
> >
> > Reviewed-by: Kees Cook <keescook@chromium.org>
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > [jpn: Change check_wx_pages to act as mode bit affecting
> >       kernel_page_tables instead of triggering action on its own]
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v10: check_wx_pages now affects kernel_page_tables rather then triggers
> >      its own action.
>
> Hmm. I liked the old version better :)
>
> I think you changed it based on Christophe's comment:
>
>   Why not just perform the test everytime someone dumps kernel_page_tables ?
>
>
> But I think he meant *always* do the check when someone dumps
> kernel_page_tables, not have another file to enable checking and then
> require someone to dump kernel_page_tables to do the actual check.
Yes, I guess I misinterpreted that.
>
> Still I like the previous version where you can do the checks
> separately, without having to dump the page tables, because dumping can
> sometimes take quite a while.
>
> What would be even better is if ptdump_check_wx() returned an error when
> wx pages were found, and that was plumbed out to the debugs file. That
> way you can script around it.
Ok I'll go back to how it was and add in returning an error.
>
> cheers

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 05/10] powerpc/bpf: Write protect JIT code
  2021-03-31 10:37   ` Michael Ellerman
  2021-03-31 10:39     ` Christophe Leroy
@ 2021-04-21  2:35     ` Jordan Niethe
  2021-04-21  6:51       ` Michael Ellerman
  1 sibling, 1 reply; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  2:35 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: ajd, cmr, Nicholas Piggin, naveen.n.rao, linuxppc-dev, Daniel Axtens

On Wed, Mar 31, 2021 at 9:37 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Jordan Niethe <jniethe5@gmail.com> writes:
>
> > Once CONFIG_STRICT_MODULE_RWX is enabled there will be no need to
> > override bpf_jit_free() because it is now possible to set images
> > read-only. So use the default implementation.
> >
> > Also add the necessary call to bpf_jit_binary_lock_ro() which will
> > remove write protection and add exec protection to the JIT image after
> > it has finished being written.
> >
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v10: New to series
> > ---
> >  arch/powerpc/net/bpf_jit_comp.c   | 5 ++++-
> >  arch/powerpc/net/bpf_jit_comp64.c | 4 ++++
> >  2 files changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> > index e809cb5a1631..8015e4a7d2d4 100644
> > --- a/arch/powerpc/net/bpf_jit_comp.c
> > +++ b/arch/powerpc/net/bpf_jit_comp.c
> > @@ -659,12 +659,15 @@ void bpf_jit_compile(struct bpf_prog *fp)
> >               bpf_jit_dump(flen, proglen, pass, code_base);
> >
> >       bpf_flush_icache(code_base, code_base + (proglen/4));
> > -
> >  #ifdef CONFIG_PPC64
> >       /* Function descriptor nastiness: Address + TOC */
> >       ((u64 *)image)[0] = (u64)code_base;
> >       ((u64 *)image)[1] = local_paca->kernel_toc;
> >  #endif
> > +     if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) {
> > +             set_memory_ro((unsigned long)image, alloclen >> PAGE_SHIFT);
> > +             set_memory_x((unsigned long)image, alloclen >> PAGE_SHIFT);
> > +     }
>
> You don't need to check the ifdef in a caller, there are stubs that
> compile to nothing when CONFIG_ARCH_HAS_SET_MEMORY=n.
As Christophe pointed out, we could have !CONFIG_STRICT_MODULE_RWX and
CONFIG_ARCH_HAS_SET_MEMORY, which would then be wrong here.
Perhaps we could make CONFIG_ARCH_HAS_SET_MEMORY depend on
CONFIG_STRICT_MODULE_RWX?
>
> > diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> > index aaf1a887f653..1484ad588685 100644
> > --- a/arch/powerpc/net/bpf_jit_comp64.c
> > +++ b/arch/powerpc/net/bpf_jit_comp64.c
> > @@ -1240,6 +1240,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
> >       fp->jited_len = alloclen;
> >
> >       bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
> > +     if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
> > +             bpf_jit_binary_lock_ro(bpf_hdr);
>
> Do we need the ifdef here either? Looks like it should be safe to call
> due to the stubs.
>
> > @@ -1262,6 +1264,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
> >  }
> >
> >  /* Overriding bpf_jit_free() as we don't set images read-only. */
> > +#ifndef CONFIG_STRICT_MODULE_RWX
>
> Did you test without this and notice something broken?
>
> Looking at the generic version I can't tell why we need to override
> this. Maybe we don't (anymore?) ?
Yeah we don't.
>
> cheers
>
> >  void bpf_jit_free(struct bpf_prog *fp)
> >  {
> >       unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
> > @@ -1272,3 +1275,4 @@ void bpf_jit_free(struct bpf_prog *fp)
> >
> >       bpf_prog_unlock_free(fp);
> >  }
> > +#endif
> > --
> > 2.25.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 08/10] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
  2021-03-30  5:27   ` Christophe Leroy
@ 2021-04-21  2:37     ` Jordan Niethe
  0 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  2:37 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Joel Stanley, Nicholas Piggin, cmr, naveen.n.rao,
	linuxppc-dev, Daniel Axtens

On Tue, Mar 30, 2021 at 4:27 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> Le 30/03/2021 à 06:51, Jordan Niethe a écrit :
> > From: Russell Currey <ruscur@russell.cc>
> >
> > skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
> > enabled, and if you want memory protection for kernel text you'd want it
> > for modules too, so enable STRICT_MODULE_RWX there.
>
> Maybe we could now select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT in arch/powerpc/Kconfig.
>
> Then this change would not be necessary.
>
> Would be in line with https://github.com/linuxppc/issues/issues/223
Yes, I think that is the way to go.
>
>
> >
> > Acked-by: Joel Stanley <joel@joel.id.au>
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> >   arch/powerpc/configs/skiroot_defconfig | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
> > index b806a5d3a695..50fe06cb3a31 100644
> > --- a/arch/powerpc/configs/skiroot_defconfig
> > +++ b/arch/powerpc/configs/skiroot_defconfig
> > @@ -50,6 +50,7 @@ CONFIG_CMDLINE="console=tty0 console=hvc0 ipr.fast_reboot=1 quiet"
> >   # CONFIG_PPC_MEM_KEYS is not set
> >   CONFIG_JUMP_LABEL=y
> >   CONFIG_STRICT_KERNEL_RWX=y
> > +CONFIG_STRICT_MODULE_RWX=y
> >   CONFIG_MODULES=y
> >   CONFIG_MODULE_UNLOAD=y
> >   CONFIG_MODULE_SIG_FORCE=y
> >

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 04/10] powerpc/kprobes: Mark newly allocated probes as ROX
  2021-03-30  5:05   ` Christophe Leroy
@ 2021-04-21  2:39     ` Jordan Niethe
  0 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  2:39 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Nicholas Piggin, cmr, naveen.n.rao, linuxppc-dev, Daniel Axtens

On Tue, Mar 30, 2021 at 4:05 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> Le 30/03/2021 à 06:51, Jordan Niethe a écrit :
> > From: Russell Currey <ruscur@russell.cc>
> >
> > Add the arch specific insn page allocator for powerpc. This allocates
> > ROX pages if STRICT_KERNEL_RWX is enabled. These pages are only written
> > to with patch_instruction() which is able to write RO pages.
> >
> > Reviewed-by: Daniel Axtens <dja@axtens.net>
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> > [jpn: Reword commit message, switch to __vmalloc_node_range()]
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v9: - vmalloc_exec() no longer exists
> >      - Set the page to RW before freeing it
> > v10: - use __vmalloc_node_range()
> > ---
> >   arch/powerpc/kernel/kprobes.c | 14 ++++++++++++++
> >   1 file changed, 14 insertions(+)
> >
> > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > index 01ab2163659e..3ae27af9b094 100644
> > --- a/arch/powerpc/kernel/kprobes.c
> > +++ b/arch/powerpc/kernel/kprobes.c
> > @@ -25,6 +25,7 @@
> >   #include <asm/sections.h>
> >   #include <asm/inst.h>
> >   #include <linux/uaccess.h>
> > +#include <linux/vmalloc.h>
> >
> >   DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> >   DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> > @@ -103,6 +104,19 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
> >       return addr;
> >   }
> >
> > +void *alloc_insn_page(void)
> > +{
> > +     if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) {
> > +             return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
> > +                             GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
> > +                             NUMA_NO_NODE, __builtin_return_address(0));
> > +     } else {
> > +             return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
> > +                             GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
> > +                             NUMA_NO_NODE, __builtin_return_address(0));
> > +     }
> > +}
> > +
>
> What about
>
> void *alloc_insn_page(void)
> {
>         pgprot_t prot = IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
>
>         return __vmalloc_node_range(PAGE_SIZE, 1, MODULES_VADDR, MODULES_END,
>                         GFP_KERNEL, prot, VM_FLUSH_RESET_PERMS,
>                         NUMA_NO_NODE, __builtin_return_address(0));
> }
Yes, that is better.
>
> >   int arch_prepare_kprobe(struct kprobe *p)
> >   {
> >       int ret = 0;
> >

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END}
  2021-04-01 13:36   ` Christophe Leroy
@ 2021-04-21  2:46     ` Jordan Niethe
  2021-04-21  5:14       ` Christophe Leroy
  0 siblings, 1 reply; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  2:46 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Nicholas Piggin, cmr, naveen.n.rao, linuxppc-dev, Daniel Axtens

On Fri, Apr 2, 2021 at 12:36 AM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> Le 30/03/2021 à 06:51, Jordan Niethe a écrit :
> > If MODULES_{VADDR,END} are not defined set them to VMALLOC_START and
> > VMALLOC_END respectively. This reduces the need for special cases. For
> > example, powerpc's module_alloc() was previously predicated on
> > MODULES_VADDR being defined but now is unconditionally defined.
> >
> > This will be useful reducing conditional code in other places that need
> > to allocate from the module region (i.e., kprobes).
> >
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v10: New to series
> > ---
> >   arch/powerpc/include/asm/pgtable.h | 5 +++++
> >   arch/powerpc/kernel/module.c       | 5 +----
>
> You probably also have changes to do in kernel/ptdump.c
>
> In mm/book3s32/mmu.c and mm/kasan/kasan_init_32.c as well, although that's harmless here.
>
> >   2 files changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> > index 4eed82172e33..014c2921f26a 100644
> > --- a/arch/powerpc/include/asm/pgtable.h
> > +++ b/arch/powerpc/include/asm/pgtable.h
> > @@ -167,6 +167,11 @@ struct seq_file;
> >   void arch_report_meminfo(struct seq_file *m);
> >   #endif /* CONFIG_PPC64 */
> >
> > +#ifndef MODULES_VADDR
> > +#define MODULES_VADDR VMALLOC_START
> > +#define MODULES_END VMALLOC_END
> > +#endif
> > +
> >   #endif /* __ASSEMBLY__ */
> >
> >   #endif /* _ASM_POWERPC_PGTABLE_H */
> > diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> > index a211b0253cdb..f1fb58389d58 100644
> > --- a/arch/powerpc/kernel/module.c
> > +++ b/arch/powerpc/kernel/module.c
> > @@ -14,6 +14,7 @@
> >   #include <asm/firmware.h>
> >   #include <linux/sort.h>
> >   #include <asm/setup.h>
> > +#include <linux/mm.h>
> >
> >   static LIST_HEAD(module_bug_list);
> >
> > @@ -87,13 +88,9 @@ int module_finalize(const Elf_Ehdr *hdr,
> >       return 0;
> >   }
> >
> > -#ifdef MODULES_VADDR
> >   void *module_alloc(unsigned long size)
> >   {
> > -     BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
> > -
>
> The above check is needed somewhere, if you remove it from here you have to perform the check
> somewhere else.

This also introduces the following warning:
fs/proc/kcore.c:626:52: warning: self-comparison always evaluates to
false [-Wtautological-compare]
  626 |  if (MODULES_VADDR != VMALLOC_START && MODULES_END != VMALLOC_END) {
I might leave this patch out of this series and use an #ifdef for now
and make this change separately as a follow-up.
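
(For context, the warning fires because with the new fallback defines that
kcore.c comparison becomes a self-comparison after preprocessing, roughly:

	if (VMALLOC_START != VMALLOC_START && VMALLOC_END != VMALLOC_END) {

which the compiler can see is always false.)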

>
> >       return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
> >                                   PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
> >                                   __builtin_return_address(0));
> >   }
> > -#endif
> >

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-03-30  5:16   ` Christophe Leroy
@ 2021-04-21  2:51     ` Jordan Niethe
  0 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  2:51 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Nicholas Piggin, cmr, naveen.n.rao, linuxppc-dev, Daniel Axtens

On Tue, Mar 30, 2021 at 4:16 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> Le 30/03/2021 à 06:51, Jordan Niethe a écrit :
> > From: Russell Currey <ruscur@russell.cc>
> >
> > The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> > and are generally useful primitives to have.  This implementation is
> > designed to be completely generic across powerpc's many MMUs.
> >
> > It's possible that this could be optimised to be faster for specific
> > MMUs, but the focus is on having a generic and safe implementation for
> > now.
> >
> > This implementation does not handle cases where the caller is attempting
> > to change the mapping of the page it is executing from, or if another
> > CPU is concurrently using the page being altered.  These cases likely
> > shouldn't happen, but a more complex implementation with MMU-specific code
> > could safely handle them, so that is left as a TODO for now.
> >
> > On hash the linear mapping is not kept in the linux pagetable, so this
> > will not change the protection if used on that range. Currently these
> > functions are not used on the linear map so just WARN for now.
> >
> > These functions do nothing if STRICT_KERNEL_RWX is not enabled.
> >
> > Reviewed-by: Daniel Axtens <dja@axtens.net>
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> > [jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
> >        - WARN on hash linear map]
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v10: WARN if trying to change the hash linear map
> > ---
> >   arch/powerpc/Kconfig                  |  1 +
> >   arch/powerpc/include/asm/set_memory.h | 32 ++++++++++
> >   arch/powerpc/mm/Makefile              |  2 +-
> >   arch/powerpc/mm/pageattr.c            | 88 +++++++++++++++++++++++++++
> >   4 files changed, 122 insertions(+), 1 deletion(-)
> >   create mode 100644 arch/powerpc/include/asm/set_memory.h
> >   create mode 100644 arch/powerpc/mm/pageattr.c
> >
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index fc7f5c5933e6..4498a27ac9db 100644
> > --- a/arch/powerpc/Kconfig
> > +++ b/arch/powerpc/Kconfig
> > @@ -135,6 +135,7 @@ config PPC
> >       select ARCH_HAS_MEMBARRIER_CALLBACKS
> >       select ARCH_HAS_MEMBARRIER_SYNC_CORE
> >       select ARCH_HAS_SCALED_CPUTIME          if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> > +     select ARCH_HAS_SET_MEMORY
> >       select ARCH_HAS_STRICT_KERNEL_RWX       if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
> >       select ARCH_HAS_TICK_BROADCAST          if GENERIC_CLOCKEVENTS_BROADCAST
> >       select ARCH_HAS_UACCESS_FLUSHCACHE
> > diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
> > new file mode 100644
> > index 000000000000..64011ea444b4
> > --- /dev/null
> > +++ b/arch/powerpc/include/asm/set_memory.h
> > @@ -0,0 +1,32 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _ASM_POWERPC_SET_MEMORY_H
> > +#define _ASM_POWERPC_SET_MEMORY_H
> > +
> > +#define SET_MEMORY_RO        0
> > +#define SET_MEMORY_RW        1
> > +#define SET_MEMORY_NX        2
> > +#define SET_MEMORY_X 3
> > +
> > +int change_memory_attr(unsigned long addr, int numpages, long action);
> > +
> > +static inline int set_memory_ro(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_RO);
> > +}
> > +
> > +static inline int set_memory_rw(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_RW);
> > +}
> > +
> > +static inline int set_memory_nx(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_NX);
> > +}
> > +
> > +static inline int set_memory_x(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_X);
> > +}
> > +
> > +#endif
> > diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> > index 3b4e9e4e25ea..d8a08abde1ae 100644
> > --- a/arch/powerpc/mm/Makefile
> > +++ b/arch/powerpc/mm/Makefile
> > @@ -5,7 +5,7 @@
> >
> >   ccflags-$(CONFIG_PPC64)     := $(NO_MINIMAL_TOC)
> >
> > -obj-y                                := fault.o mem.o pgtable.o mmap.o maccess.o \
> > +obj-y                                := fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
> >                                  init_$(BITS).o pgtable_$(BITS).o \
> >                                  pgtable-frag.o ioremap.o ioremap_$(BITS).o \
> >                                  init-common.o mmu_context.o drmem.o
> > diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> > new file mode 100644
> > index 000000000000..9efcb01088da
> > --- /dev/null
> > +++ b/arch/powerpc/mm/pageattr.c
> > @@ -0,0 +1,88 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/*
> > + * MMU-generic set_memory implementation for powerpc
> > + *
> > + * Copyright 2019, IBM Corporation.
> > + */
> > +
> > +#include <linux/mm.h>
> > +#include <linux/set_memory.h>
> > +
> > +#include <asm/mmu.h>
> > +#include <asm/page.h>
> > +#include <asm/pgtable.h>
> > +
> > +
> > +/*
> > + * Updates the attributes of a page in three steps:
> > + *
> > + * 1. invalidate the page table entry
> > + * 2. flush the TLB
> > + * 3. install the new entry with the updated attributes
> > + *
> > + * This is unsafe if the caller is attempting to change the mapping of the
> > + * page it is executing from, or if another CPU is concurrently using the
> > + * page being altered.
> > + *
> > + * TODO make the implementation resistant to this.
> > + *
> > + * NOTE: can be dangerous to call without STRICT_KERNEL_RWX
> > + */
> > +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> > +{
> > +     long action = (long)data;
> > +     pte_t pte;
> > +
> > +     spin_lock(&init_mm.page_table_lock);
> > +
> > +     /* invalidate the PTE so it's safe to modify */
> > +     pte = ptep_get_and_clear(&init_mm, addr, ptep);
> > +     flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +
> > +     /* modify the PTE bits as desired, then apply */
> > +     switch (action) {
> > +     case SET_MEMORY_RO:
> > +             pte = pte_wrprotect(pte);
> > +             break;
> > +     case SET_MEMORY_RW:
> > +             pte = pte_mkwrite(pte);
> > +             break;
> > +     case SET_MEMORY_NX:
> > +             pte = pte_exprotect(pte);
> > +             break;
> > +     case SET_MEMORY_X:
> > +             pte = pte_mkexec(pte);
> > +             break;
> > +     default:
> > +             WARN_ON_ONCE(1);
> > +             break;
> > +     }
> > +
> > +     set_pte_at(&init_mm, addr, ptep, pte);
> > +     spin_unlock(&init_mm.page_table_lock);
> > +
> > +     return 0;
> > +}
> > +
> > +int change_memory_attr(unsigned long addr, int numpages, long action)
> > +{
> > +     unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> > +     unsigned long sz = numpages * PAGE_SIZE;
> > +
> > +     if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
> > +             return 0;
>
> You should do this in the header file in order to get it optimised out completely when
> CONFIG_STRICT_KERNEL_RWX is not set.
>
> In asm/set_memory.h you could have:
>
> #ifdef CONFIG_STRICT_KERNEL_RWX
> int change_memory_attr(unsigned long addr, int numpages, long action);
> #else
> static inline int change_memory_attr(unsigned long addr, int numpages, long action) { return 0; }
> #endif
>
> Or another solution is to only define ARCH_HAS_SET_MEMORY when CONFIG_STRICT_KERNEL_RWX is selected.
I think making ARCH_HAS_SET_MEMORY depend on CONFIG_STRICT_KERNEL_RWX
is the way to go.
>
> > +
> > +     if (numpages <= 0)
> > +             return 0;
> > +
> > +#ifdef CONFIG_PPC_BOOK3S_64
> > +     if (WARN_ON_ONCE(!radix_enabled() &&
> > +                  get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
> > +             return -1;
> > +     }
> > +#endif
> > +
> > +     return apply_to_existing_page_range(&init_mm, start, sz,
> > +                                         change_page_attr, (void *)action);
> > +}
> >

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-03-31 11:16   ` Michael Ellerman
  2021-03-31 12:03     ` Christophe Leroy
@ 2021-04-21  5:03     ` Jordan Niethe
  1 sibling, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  5:03 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: ajd, cmr, Nicholas Piggin, naveen.n.rao, linuxppc-dev, Daniel Axtens

On Wed, Mar 31, 2021 at 10:16 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Hi Jordan,
>
> A few nits below ...
>
> Jordan Niethe <jniethe5@gmail.com> writes:
> > From: Russell Currey <ruscur@russell.cc>
> >
> > The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> > and are generally useful primitives to have.  This implementation is
> > designed to be completely generic across powerpc's many MMUs.
> >
> > It's possible that this could be optimised to be faster for specific
> > MMUs, but the focus is on having a generic and safe implementation for
> > now.
> >
> > This implementation does not handle cases where the caller is attempting
> > to change the mapping of the page it is executing from, or if another
> > CPU is concurrently using the page being altered.  These cases likely
> > shouldn't happen, but a more complex implementation with MMU-specific code
> > could safely handle them, so that is left as a TODO for now.
> >
> > On hash the linear mapping is not kept in the linux pagetable, so this
> > will not change the protection if used on that range. Currently these
> > functions are not used on the linear map so just WARN for now.
> >
> > These functions do nothing if STRICT_KERNEL_RWX is not enabled.
> >
> > Reviewed-by: Daniel Axtens <dja@axtens.net>
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> > [jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
> >       - WARN on hash linear map]
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v10: WARN if trying to change the hash linear map
> > ---
> >  arch/powerpc/Kconfig                  |  1 +
> >  arch/powerpc/include/asm/set_memory.h | 32 ++++++++++
> >  arch/powerpc/mm/Makefile              |  2 +-
> >  arch/powerpc/mm/pageattr.c            | 88 +++++++++++++++++++++++++++
> >  4 files changed, 122 insertions(+), 1 deletion(-)
> >  create mode 100644 arch/powerpc/include/asm/set_memory.h
> >  create mode 100644 arch/powerpc/mm/pageattr.c
> >
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index fc7f5c5933e6..4498a27ac9db 100644
> > --- a/arch/powerpc/Kconfig
> > +++ b/arch/powerpc/Kconfig
> > @@ -135,6 +135,7 @@ config PPC
> >       select ARCH_HAS_MEMBARRIER_CALLBACKS
> >       select ARCH_HAS_MEMBARRIER_SYNC_CORE
> >       select ARCH_HAS_SCALED_CPUTIME          if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> > +     select ARCH_HAS_SET_MEMORY
>
> Below you do:
>
>         if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
>                 return 0;
>
> Which suggests we should instead just only select ARCH_HAS_SET_MEMORY if
> STRICT_KERNEL_RWX ?
Yeah, I'm just going to do that.
>
>
> > diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> > index 3b4e9e4e25ea..d8a08abde1ae 100644
> > --- a/arch/powerpc/mm/Makefile
> > +++ b/arch/powerpc/mm/Makefile
> > @@ -5,7 +5,7 @@
> >
> >  ccflags-$(CONFIG_PPC64)      := $(NO_MINIMAL_TOC)
> >
> > -obj-y                                := fault.o mem.o pgtable.o mmap.o maccess.o \
> > +obj-y                                := fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
>
> .. and then the file should only be built if ARCH_HAS_SET_MEMORY = y.
>
> >                                  init_$(BITS).o pgtable_$(BITS).o \
> >                                  pgtable-frag.o ioremap.o ioremap_$(BITS).o \
> >                                  init-common.o mmu_context.o drmem.o
> > diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> > new file mode 100644
> > index 000000000000..9efcb01088da
> > --- /dev/null
> > +++ b/arch/powerpc/mm/pageattr.c
> > @@ -0,0 +1,88 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/*
> > + * MMU-generic set_memory implementation for powerpc
> > + *
> > + * Copyright 2019, IBM Corporation.
>
> Should be 2019-2021.
Right.
>
> > + */
> > +
> > +#include <linux/mm.h>
> > +#include <linux/set_memory.h>
> > +
> > +#include <asm/mmu.h>
> > +#include <asm/page.h>
> > +#include <asm/pgtable.h>
> > +
> > +
> > +/*
> > + * Updates the attributes of a page in three steps:
> > + *
> > + * 1. invalidate the page table entry
> > + * 2. flush the TLB
> > + * 3. install the new entry with the updated attributes
> > + *
> > + * This is unsafe if the caller is attempting to change the mapping of the
> > + * page it is executing from, or if another CPU is concurrently using the
> > + * page being altered.
>
> Is the 2nd part of that statement true?
>
> Or, I guess maybe it is true depending on what "unsafe" means.
>
> AIUI it's unsafe to use this on the page you're executing from, and by
> unsafe we mean the kernel will potentially crash because it will lose
> the mapping for the currently executing text.
>
> Using this on a page that another CPU is accessing could be safe, if eg.
> the other CPU is reading from the page and we are just changing it from
> RW->RO.
>
> So I'm not sure they're the same type of "unsafe".

I think the comment was prompted by your message here:
https://lore.kernel.org/linuxppc-dev/87pnio5fva.fsf@mpe.ellerman.id.au/

So I'll rewrite the comment to separate the two cases and indicate that
the 2nd case only might be an issue.
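Something like this, perhaps (wording is just a sketch of the split, not the
final comment):

	/*
	 * Updates the attributes of a page in three steps:
	 *
	 * 1. invalidate the page table entry
	 * 2. flush the TLB
	 * 3. install the new entry with the updated attributes
	 *
	 * Don't use this on the page you are executing from: the text
	 * mapping disappears between steps 1 and 3 and the kernel will
	 * likely crash.
	 *
	 * Another CPU using the page concurrently may or may not be a
	 * problem, e.g. a reader seeing a RW->RO transition is fine, so
	 * that is a softer kind of "unsafe" than the case above.
	 */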
>
> > + * TODO make the implementation resistant to this.
> > + *
> > + * NOTE: can be dangerous to call without STRICT_KERNEL_RWX
>
> I don't think we need that anymore?
No we don't; change_memory_attr() won't call it without STRICT_KERNEL_RWX.
>
> > + */
> > +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> > +{
> > +     long action = (long)data;
> > +     pte_t pte;
> > +
> > +     spin_lock(&init_mm.page_table_lock);
> > +
> > +     /* invalidate the PTE so it's safe to modify */
> > +     pte = ptep_get_and_clear(&init_mm, addr, ptep);
> > +     flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +
> > +     /* modify the PTE bits as desired, then apply */
> > +     switch (action) {
> > +     case SET_MEMORY_RO:
> > +             pte = pte_wrprotect(pte);
> > +             break;
>
> So set_memory_ro() removes write, but doesn't remove execute.
>
> That doesn't match my mental model of what "set to ro" means, but I
> guess I'm wrong because the other implementations seem to do something
> similar.
Hm, looking at arm and riscv it does seem to make it just RO.
>
>
> > +     case SET_MEMORY_RW:
> > +             pte = pte_mkwrite(pte);
>
> I think we want to add pte_mkdirty() here also to avoid a fault when the
> mapping is written to.
Right.
>
> eg. pte_mkwrite(pte_mkdirty(pte));
>
> > +             break;
> > +     case SET_MEMORY_NX:
> > +             pte = pte_exprotect(pte);
> > +             break;
> > +     case SET_MEMORY_X:
> > +             pte = pte_mkexec(pte);
> > +             break;
> > +     default:
> > +             WARN_ON_ONCE(1);
> > +             break;
> > +     }
> > +
> > +     set_pte_at(&init_mm, addr, ptep, pte);
> > +     spin_unlock(&init_mm.page_table_lock);
> > +
> > +     return 0;
> > +}
> > +
> > +int change_memory_attr(unsigned long addr, int numpages, long action)
> > +{
> > +     unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> > +     unsigned long sz = numpages * PAGE_SIZE;
> > +
> > +     if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
> > +             return 0;
> > +
> > +     if (numpages <= 0)
> > +             return 0;
> > +
>
> This ↓ should have a comment explaining what it's doing:
Sure.
>
> > +#ifdef CONFIG_PPC_BOOK3S_64
> > +     if (WARN_ON_ONCE(!radix_enabled() &&
> > +                  get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
> > +             return -1;
> > +     }
> > +#endif
>
> Maybe:
As Christophe says, we can't do that because those symbols aren't
defined for !CONFIG_PPC_BOOK3S_64.
>
>         if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) &&
>             WARN_ON_ONCE(!radix_enabled() && get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
>                 return -1;
>         }
>
> But then Aneesh pointed out that we should also block VMEMMAP_REGION_ID.
>
> It might be better to just check for the permitted regions.
That would probably work better.
>
>         if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled()) {
>                 int region = get_region_id(addr);
>
>                 if (WARN_ON_ONCE(region != VMALLOC_REGION_ID && region != IO_REGION_ID))
>                         return -1;
>         }
>
> > +
> > +     return apply_to_existing_page_range(&init_mm, start, sz,
> > +                                         change_page_attr, (void *)action);
> > +}
>
>
> cheers

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END}
  2021-04-21  2:46     ` Jordan Niethe
@ 2021-04-21  5:14       ` Christophe Leroy
  2021-04-21  5:22         ` Jordan Niethe
  0 siblings, 1 reply; 34+ messages in thread
From: Christophe Leroy @ 2021-04-21  5:14 UTC (permalink / raw)
  To: Jordan Niethe
  Cc: ajd, Nicholas Piggin, cmr, naveen.n.rao, linuxppc-dev, Daniel Axtens



On 21/04/2021 at 04:46, Jordan Niethe wrote:
> On Fri, Apr 2, 2021 at 12:36 AM Christophe Leroy
> <christophe.leroy@csgroup.eu> wrote:
>>
>>
>>
>> On 30/03/2021 at 06:51, Jordan Niethe wrote:
>>> If MODULES_{VADDR,END} are not defined set them to VMALLOC_START and
>>> VMALLOC_END respectively. This reduces the need for special cases. For
>>> example, powerpc's module_alloc() was previously predicated on
>>> MODULES_VADDR being defined but now is unconditionally defined.
>>>
>>> This will be useful for reducing conditional code in other places that need
>>> to allocate from the module region (i.e., kprobes).
>>>
>>> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>>> ---
>>> v10: New to series
>>> ---
>>>    arch/powerpc/include/asm/pgtable.h | 5 +++++
>>>    arch/powerpc/kernel/module.c       | 5 +----
>>
>> You probably also have changes to do in kernel/ptdump.c
>>
>> In mm/book3s32/mmu.c and mm/kasan/kasan_init_32.c as well, although that's harmless here.
>>
>>>    2 files changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>>> index 4eed82172e33..014c2921f26a 100644
>>> --- a/arch/powerpc/include/asm/pgtable.h
>>> +++ b/arch/powerpc/include/asm/pgtable.h
>>> @@ -167,6 +167,11 @@ struct seq_file;
>>>    void arch_report_meminfo(struct seq_file *m);
>>>    #endif /* CONFIG_PPC64 */
>>>
>>> +#ifndef MODULES_VADDR
>>> +#define MODULES_VADDR VMALLOC_START
>>> +#define MODULES_END VMALLOC_END
>>> +#endif
>>> +
>>>    #endif /* __ASSEMBLY__ */
>>>
>>>    #endif /* _ASM_POWERPC_PGTABLE_H */
>>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
>>> index a211b0253cdb..f1fb58389d58 100644
>>> --- a/arch/powerpc/kernel/module.c
>>> +++ b/arch/powerpc/kernel/module.c
>>> @@ -14,6 +14,7 @@
>>>    #include <asm/firmware.h>
>>>    #include <linux/sort.h>
>>>    #include <asm/setup.h>
>>> +#include <linux/mm.h>
>>>
>>>    static LIST_HEAD(module_bug_list);
>>>
>>> @@ -87,13 +88,9 @@ int module_finalize(const Elf_Ehdr *hdr,
>>>        return 0;
>>>    }
>>>
>>> -#ifdef MODULES_VADDR
>>>    void *module_alloc(unsigned long size)
>>>    {
>>> -     BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
>>> -
>>
>> The above check is needed somewhere, if you remove it from here you have to perform the check
>> somewhere else.
> 
> This also introduces this warning:
> fs/proc/kcore.c:626:52: warning: self-comparison always evaluates to
> false [-Wtautological-compare]
>    626 |  if (MODULES_VADDR != VMALLOC_START && MODULES_END != VMALLOC_END) {
> I might leave this patch out of this series and use an #ifdef for now
> and make this change separately as a follow up.

x86/32 at least does the same (see 
https://elixir.bootlin.com/linux/v5.12-rc8/source/arch/x86/include/asm/pgtable_32_areas.h#L47)

They probably also get the warning, so I think we shouldn't bother.
One day someone will fix fs/proc/kcore.c; that's not a powerpc problem.

> 
>>
>>>        return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
>>>                                    PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
>>>                                    __builtin_return_address(0));
>>>    }
>>> -#endif
>>>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
  2021-04-21  2:23     ` Jordan Niethe
@ 2021-04-21  5:16       ` Christophe Leroy
  0 siblings, 0 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-04-21  5:16 UTC (permalink / raw)
  To: Jordan Niethe, Michael Ellerman
  Cc: ajd, Kees Cook, Nicholas Piggin, cmr, naveen.n.rao, linuxppc-dev,
	Daniel Axtens



On 21/04/2021 at 04:23, Jordan Niethe wrote:
> On Wed, Mar 31, 2021 at 10:24 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>>
>> Jordan Niethe <jniethe5@gmail.com> writes:
>>> From: Russell Currey <ruscur@russell.cc>
>>>
>>> Optionally run W+X checks when dumping pagetable information to
>>> debugfs' kernel_page_tables.
>>>
>>> To use:
>>>      $ echo 1 > /sys/kernel/debug/check_wx_pages
>>>      $ cat /sys/kernel/debug/kernel_page_tables
>>>
>>> and check the kernel log.  Useful for testing strict module RWX.
>>>
>>> To disable W+X checks:
>>>        $ echo 0 > /sys/kernel/debug/check_wx_pages
>>>
>>> Update the Kconfig entry to reflect this.
>>>
>>> Also fix a typo.
>>>
>>> Reviewed-by: Kees Cook <keescook@chromium.org>
>>> Signed-off-by: Russell Currey <ruscur@russell.cc>
>>> [jpn: Change check_wx_pages to act as a mode bit affecting
>>>        kernel_page_tables instead of triggering action on its own]
>>> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>>> ---
>>> v10: check_wx_pages now affects kernel_page_tables rather than triggering
>>>       its own action.
>>
>> Hmm. I liked the old version better :)
>>
>> I think you changed it based on Christophe's comment:
>>
>>    Why not just perform the test every time someone dumps kernel_page_tables?
>>
>>
>> But I think he meant *always* do the check when someone dumps
>> kernel_page_tables, not have another file to enable checking and then
>> require someone to dump kernel_page_tables to do the actual check.
> Yes, I guess I misinterpreted that.
>>
>> Still I like the previous version where you can do the checks
>> separately, without having to dump the page tables, because dumping can
>> sometimes take quite a while.
>>
>> What would be even better is if ptdump_check_wx() returned an error when
>> wx pages were found, and that was plumbed out to the debugfs file. That
>> way you can script around it.
> Ok I'll go back to how it was and add in returning an error.
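If it helps, a rough sketch of one way to plumb the error out of ptdump.c
(completely untested; it assumes ptdump_check_wx() is changed to return
non-zero when W+X pages are found):

        static int check_wx_debugfs_set(void *data, u64 val)
        {
                if (val != 1ULL)
                        return -EINVAL;

                /* assumed: non-zero return means W+X pages were found */
                if (ptdump_check_wx())
                        return -EINVAL;

                return 0;
        }

        DEFINE_SIMPLE_ATTRIBUTE(check_wx_fops, NULL, check_wx_debugfs_set, "%llu\n");

        /* registered from the existing ptdump debugfs init, e.g.: */
        debugfs_create_file("check_wx_pages", 0200, NULL, NULL, &check_wx_fops);

Then a script only needs the exit status of
"echo 1 > /sys/kernel/debug/check_wx_pages".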

I have a series to convert PPC_PTDUMP into GENERIC_PTDUMP, see 
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=239795

>>
>> cheers

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines
  2021-04-01  4:37   ` Aneesh Kumar K.V
@ 2021-04-21  5:19     ` Jordan Niethe
  0 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  5:19 UTC (permalink / raw)
  To: Aneesh Kumar K.V
  Cc: ajd, Nicholas Piggin, cmr, naveen.n.rao, linuxppc-dev, Daniel Axtens

On Thu, Apr 1, 2021 at 3:37 PM Aneesh Kumar K.V
<aneesh.kumar@linux.ibm.com> wrote:
>
> Jordan Niethe <jniethe5@gmail.com> writes:
>
> > From: Russell Currey <ruscur@russell.cc>
> >
> > The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> > and are generally useful primitives to have.  This implementation is
> > designed to be completely generic across powerpc's many MMUs.
> >
> > It's possible that this could be optimised to be faster for specific
> > MMUs, but the focus is on having a generic and safe implementation for
> > now.
> >
> > This implementation does not handle cases where the caller is attempting
> > to change the mapping of the page it is executing from, or if another
> > CPU is concurrently using the page being altered.  These cases likely
> > shouldn't happen, but a more complex implementation with MMU-specific code
> > could safely handle them, so that is left as a TODO for now.
> >
> > On hash the linear mapping is not kept in the linux pagetable, so this
> > will not change the protection if used on that range. Currently these
> > functions are not used on the linear map so just WARN for now.
> >
> > These functions do nothing if STRICT_KERNEL_RWX is not enabled.
> >
> > Reviewed-by: Daniel Axtens <dja@axtens.net>
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> > [jpn: -rebase on next plus "powerpc/mm/64s: Allow STRICT_KERNEL_RWX again"
> >       - WARN on hash linear map]
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v10: WARN if trying to change the hash linear map
> > ---
> >  arch/powerpc/Kconfig                  |  1 +
> >  arch/powerpc/include/asm/set_memory.h | 32 ++++++++++
> >  arch/powerpc/mm/Makefile              |  2 +-
> >  arch/powerpc/mm/pageattr.c            | 88 +++++++++++++++++++++++++++
> >  4 files changed, 122 insertions(+), 1 deletion(-)
> >  create mode 100644 arch/powerpc/include/asm/set_memory.h
> >  create mode 100644 arch/powerpc/mm/pageattr.c
> >
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index fc7f5c5933e6..4498a27ac9db 100644
> > --- a/arch/powerpc/Kconfig
> > +++ b/arch/powerpc/Kconfig
> > @@ -135,6 +135,7 @@ config PPC
> >       select ARCH_HAS_MEMBARRIER_CALLBACKS
> >       select ARCH_HAS_MEMBARRIER_SYNC_CORE
> >       select ARCH_HAS_SCALED_CPUTIME          if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> > +     select ARCH_HAS_SET_MEMORY
> >       select ARCH_HAS_STRICT_KERNEL_RWX       if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
> >       select ARCH_HAS_TICK_BROADCAST          if GENERIC_CLOCKEVENTS_BROADCAST
> >       select ARCH_HAS_UACCESS_FLUSHCACHE
> > diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
> > new file mode 100644
> > index 000000000000..64011ea444b4
> > --- /dev/null
> > +++ b/arch/powerpc/include/asm/set_memory.h
> > @@ -0,0 +1,32 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _ASM_POWERPC_SET_MEMORY_H
> > +#define _ASM_POWERPC_SET_MEMORY_H
> > +
> > +#define SET_MEMORY_RO        0
> > +#define SET_MEMORY_RW        1
> > +#define SET_MEMORY_NX        2
> > +#define SET_MEMORY_X 3
> > +
> > +int change_memory_attr(unsigned long addr, int numpages, long action);
> > +
> > +static inline int set_memory_ro(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_RO);
> > +}
> > +
> > +static inline int set_memory_rw(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_RW);
> > +}
> > +
> > +static inline int set_memory_nx(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_NX);
> > +}
> > +
> > +static inline int set_memory_x(unsigned long addr, int numpages)
> > +{
> > +     return change_memory_attr(addr, numpages, SET_MEMORY_X);
> > +}
> > +
> > +#endif
> > diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> > index 3b4e9e4e25ea..d8a08abde1ae 100644
> > --- a/arch/powerpc/mm/Makefile
> > +++ b/arch/powerpc/mm/Makefile
> > @@ -5,7 +5,7 @@
> >
> >  ccflags-$(CONFIG_PPC64)      := $(NO_MINIMAL_TOC)
> >
> > -obj-y                                := fault.o mem.o pgtable.o mmap.o maccess.o \
> > +obj-y                                := fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
> >                                  init_$(BITS).o pgtable_$(BITS).o \
> >                                  pgtable-frag.o ioremap.o ioremap_$(BITS).o \
> >                                  init-common.o mmu_context.o drmem.o
> > diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> > new file mode 100644
> > index 000000000000..9efcb01088da
> > --- /dev/null
> > +++ b/arch/powerpc/mm/pageattr.c
> > @@ -0,0 +1,88 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/*
> > + * MMU-generic set_memory implementation for powerpc
> > + *
> > + * Copyright 2019, IBM Corporation.
> > + */
> > +
> > +#include <linux/mm.h>
> > +#include <linux/set_memory.h>
> > +
> > +#include <asm/mmu.h>
> > +#include <asm/page.h>
> > +#include <asm/pgtable.h>
> > +
> > +
> > +/*
> > + * Updates the attributes of a page in three steps:
> > + *
> > + * 1. invalidate the page table entry
> > + * 2. flush the TLB
> > + * 3. install the new entry with the updated attributes
> > + *
> > + * This is unsafe if the caller is attempting to change the mapping of the
> > + * page it is executing from, or if another CPU is concurrently using the
> > + * page being altered.
> > + *
> > + * TODO make the implementation resistant to this.
> > + *
> > + * NOTE: can be dangerous to call without STRICT_KERNEL_RWX
> > + */
> > +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
> > +{
> > +     long action = (long)data;
> > +     pte_t pte;
> > +
> > +     spin_lock(&init_mm.page_table_lock);
> > +
> > +     /* invalidate the PTE so it's safe to modify */
> > +     pte = ptep_get_and_clear(&init_mm, addr, ptep);
> > +     flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +
> > +     /* modify the PTE bits as desired, then apply */
> > +     switch (action) {
> > +     case SET_MEMORY_RO:
> > +             pte = pte_wrprotect(pte);
> > +             break;
> > +     case SET_MEMORY_RW:
> > +             pte = pte_mkwrite(pte);
> > +             break;
> > +     case SET_MEMORY_NX:
> > +             pte = pte_exprotect(pte);
> > +             break;
> > +     case SET_MEMORY_X:
> > +             pte = pte_mkexec(pte);
> > +             break;
> > +     default:
> > +             WARN_ON_ONCE(1);
> > +             break;
> > +     }
> > +
> > +     set_pte_at(&init_mm, addr, ptep, pte);
> > +     spin_unlock(&init_mm.page_table_lock);
> > +
> > +     return 0;
> > +}
> > +
> > +int change_memory_attr(unsigned long addr, int numpages, long action)
> > +{
> > +     unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> > +     unsigned long sz = numpages * PAGE_SIZE;
> > +
> > +     if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
> > +             return 0;
>
> What restrictions imposed by that config are we dependent on here?
So the reasons given here
https://lore.kernel.org/linuxppc-dev/20200226062403.63790-9-ruscur@russell.cc/
were:
"
 - The linear mapping is a different size and apply_to_page_range()
may modify a giant section, breaking everything
 - patch_instruction() doesn't know to work around a page being marked
  RO, and will subsequently crash
"
but now I'm not 100% sure about it... we might not actually need to
have that restriction.

>
>
> > +
> > +     if (numpages <= 0)
> > +             return 0;
> > +
> > +#ifdef CONFIG_PPC_BOOK3S_64
> > +     if (WARN_ON_ONCE(!radix_enabled() &&
> > +                  get_region_id(addr) == LINEAR_MAP_REGION_ID)) {
> > +             return -1;
> > +     }
> > +#endif
>
> What about VMEMMAP_REGION_ID?
True.
>
> > +
> > +     return apply_to_existing_page_range(&init_mm, start, sz,
> > +                                         change_page_attr, (void *)action);
>
>
> That handles the 64K mapping. What about the linear map? Also there is a
> patchset implementing hugepage support for vmalloc mappings.
At least for now there is nothing that calls the set memory functions
on the linear map.
Is that this series:
https://lore.kernel.org/linuxppc-dev/20210317062402.533919-15-npiggin@gmail.com/
?
I will test on top of that.
>
> > +}
> > --
> > 2.25.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END}
  2021-04-21  5:14       ` Christophe Leroy
@ 2021-04-21  5:22         ` Jordan Niethe
  0 siblings, 0 replies; 34+ messages in thread
From: Jordan Niethe @ 2021-04-21  5:22 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Nicholas Piggin, cmr, naveen.n.rao, linuxppc-dev, Daniel Axtens

On Wed, Apr 21, 2021 at 3:14 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> > On 21/04/2021 at 04:46, Jordan Niethe wrote:
> > On Fri, Apr 2, 2021 at 12:36 AM Christophe Leroy
> > <christophe.leroy@csgroup.eu> wrote:
> >>
> >>
> >>
> >> On 30/03/2021 at 06:51, Jordan Niethe wrote:
> >>> If MODULES_{VADDR,END} are not defined set them to VMALLOC_START and
> >>> VMALLOC_END respectively. This reduces the need for special cases. For
> >>> example, powerpc's module_alloc() was previously predicated on
> >>> MODULES_VADDR being defined but now is unconditionally defined.
> >>>
> >>> This will be useful for reducing conditional code in other places that need
> >>> to allocate from the module region (i.e., kprobes).
> >>>
> >>> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> >>> ---
> >>> v10: New to series
> >>> ---
> >>>    arch/powerpc/include/asm/pgtable.h | 5 +++++
> >>>    arch/powerpc/kernel/module.c       | 5 +----
> >>
> >> You probably also have changes to do in kernel/ptdump.c
> >>
> >> In mm/book3s32/mmu.c and mm/kasan/kasan_init_32.c as well, although that's harmless here.
> >>
> >>>    2 files changed, 6 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> >>> index 4eed82172e33..014c2921f26a 100644
> >>> --- a/arch/powerpc/include/asm/pgtable.h
> >>> +++ b/arch/powerpc/include/asm/pgtable.h
> >>> @@ -167,6 +167,11 @@ struct seq_file;
> >>>    void arch_report_meminfo(struct seq_file *m);
> >>>    #endif /* CONFIG_PPC64 */
> >>>
> >>> +#ifndef MODULES_VADDR
> >>> +#define MODULES_VADDR VMALLOC_START
> >>> +#define MODULES_END VMALLOC_END
> >>> +#endif
> >>> +
> >>>    #endif /* __ASSEMBLY__ */
> >>>
> >>>    #endif /* _ASM_POWERPC_PGTABLE_H */
> >>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> >>> index a211b0253cdb..f1fb58389d58 100644
> >>> --- a/arch/powerpc/kernel/module.c
> >>> +++ b/arch/powerpc/kernel/module.c
> >>> @@ -14,6 +14,7 @@
> >>>    #include <asm/firmware.h>
> >>>    #include <linux/sort.h>
> >>>    #include <asm/setup.h>
> >>> +#include <linux/mm.h>
> >>>
> >>>    static LIST_HEAD(module_bug_list);
> >>>
> >>> @@ -87,13 +88,9 @@ int module_finalize(const Elf_Ehdr *hdr,
> >>>        return 0;
> >>>    }
> >>>
> >>> -#ifdef MODULES_VADDR
> >>>    void *module_alloc(unsigned long size)
> >>>    {
> >>> -     BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
> >>> -
> >>
> >> The above check is needed somewhere, if you remove it from here you have to perform the check
> >> somewhere else.
> >
> > This also introduces this warning:
> > fs/proc/kcore.c:626:52: warning: self-comparison always evaluates to
> > false [-Wtautological-compare]
> >    626 |  if (MODULES_VADDR != VMALLOC_START && MODULES_END != VMALLOC_END) {
> > I might leave this patch out of this series and use an #ifdef for now
> > and make this change separately as a follow up.
>
> x86/32 at least does the same (see
> https://elixir.bootlin.com/linux/v5.12-rc8/source/arch/x86/include/asm/pgtable_32_areas.h#L47)
>
> They probably also get the warning, so I think we shouldn't bother.
> One day someone will fix fs/proc/kcore.c; that's not a powerpc problem.
Yeah, you are right. I'll add the BUILD_BUG_ON() check to
asm/task_size_32.h and keep the patch.
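i.e. roughly just moving the existing line, something like this (exact
placement still to be worked out; it assumes both values are compile-time
constants wherever it ends up):

        BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);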
>
> >
> >>
> >>>        return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
> >>>                                    PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
> >>>                                    __builtin_return_address(0));
> >>>    }
> >>> -#endif
> >>>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v10 05/10] powerpc/bpf: Write protect JIT code
  2021-04-21  2:35     ` Jordan Niethe
@ 2021-04-21  6:51       ` Michael Ellerman
  0 siblings, 0 replies; 34+ messages in thread
From: Michael Ellerman @ 2021-04-21  6:51 UTC (permalink / raw)
  To: Jordan Niethe
  Cc: ajd, cmr, Nicholas Piggin, naveen.n.rao, linuxppc-dev, Daniel Axtens

Jordan Niethe <jniethe5@gmail.com> writes:
> On Wed, Mar 31, 2021 at 9:37 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>>
>> Jordan Niethe <jniethe5@gmail.com> writes:
>>
>> > Once CONFIG_STRICT_MODULE_RWX is enabled there will be no need to
>> > override bpf_jit_free() because it is now possible to set images
>> > read-only. So use the default implementation.
>> >
>> > Also add the necessary call to bpf_jit_binary_lock_ro() which will
>> > remove write protection and add exec protection to the JIT image after
>> > it has finished being written.
>> >
>> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>> > ---
>> > v10: New to series
>> > ---
>> >  arch/powerpc/net/bpf_jit_comp.c   | 5 ++++-
>> >  arch/powerpc/net/bpf_jit_comp64.c | 4 ++++
>> >  2 files changed, 8 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
>> > index e809cb5a1631..8015e4a7d2d4 100644
>> > --- a/arch/powerpc/net/bpf_jit_comp.c
>> > +++ b/arch/powerpc/net/bpf_jit_comp.c
>> > @@ -659,12 +659,15 @@ void bpf_jit_compile(struct bpf_prog *fp)
>> >               bpf_jit_dump(flen, proglen, pass, code_base);
>> >
>> >       bpf_flush_icache(code_base, code_base + (proglen/4));
>> > -
>> >  #ifdef CONFIG_PPC64
>> >       /* Function descriptor nastiness: Address + TOC */
>> >       ((u64 *)image)[0] = (u64)code_base;
>> >       ((u64 *)image)[1] = local_paca->kernel_toc;
>> >  #endif
>> > +     if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) {
>> > +             set_memory_ro((unsigned long)image, alloclen >> PAGE_SHIFT);
>> > +             set_memory_x((unsigned long)image, alloclen >> PAGE_SHIFT);
>> > +     }
>>
>> You don't need to check the ifdef in a caller, there are stubs that
>> compile to nothing when CONFIG_ARCH_HAS_SET_MEMORY=n.

> As Christophe pointed out, we could have !CONFIG_STRICT_MODULE_RWX and
> CONFIG_ARCH_HAS_SET_MEMORY, which would then be wrong here.
> Probably we could make CONFIG_ARCH_HAS_SET_MEMORY depend on
> CONFIG_STRICT_MODULE_RWX?

I thought it already did depend on it :)

That seems a reasonable dependency to me.
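i.e. patch 1's select would become something like (untested):

        select ARCH_HAS_SET_MEMORY              if STRICT_MODULE_RWX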

cheers

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread

Thread overview: 34+ messages
2021-03-30  4:51 [PATCH v10 00/10] powerpc: Further Strict RWX support Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 01/10] powerpc/mm: Implement set_memory() routines Jordan Niethe
2021-03-30  5:16   ` Christophe Leroy
2021-04-21  2:51     ` Jordan Niethe
2021-03-31 11:16   ` Michael Ellerman
2021-03-31 12:03     ` Christophe Leroy
2021-04-21  5:03     ` Jordan Niethe
2021-04-01  4:37   ` Aneesh Kumar K.V
2021-04-21  5:19     ` Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 02/10] powerpc/lib/code-patching: Set up Strict RWX patching earlier Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 03/10] powerpc: Always define MODULES_{VADDR,END} Jordan Niethe
2021-03-30  5:00   ` Christophe Leroy
2021-04-01 13:36   ` Christophe Leroy
2021-04-21  2:46     ` Jordan Niethe
2021-04-21  5:14       ` Christophe Leroy
2021-04-21  5:22         ` Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 04/10] powerpc/kprobes: Mark newly allocated probes as ROX Jordan Niethe
2021-03-30  5:05   ` Christophe Leroy
2021-04-21  2:39     ` Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 05/10] powerpc/bpf: Write protect JIT code Jordan Niethe
2021-03-31 10:37   ` Michael Ellerman
2021-03-31 10:39     ` Christophe Leroy
2021-04-21  2:35     ` Jordan Niethe
2021-04-21  6:51       ` Michael Ellerman
2021-03-30  4:51 ` [PATCH v10 06/10] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Jordan Niethe
2021-03-31 11:24   ` Michael Ellerman
2021-04-21  2:23     ` Jordan Niethe
2021-04-21  5:16       ` Christophe Leroy
2021-03-30  4:51 ` [PATCH v10 07/10] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 08/10] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Jordan Niethe
2021-03-30  5:27   ` Christophe Leroy
2021-04-21  2:37     ` Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 09/10] powerpc/mm: implement set_memory_attr() Jordan Niethe
2021-03-30  4:51 ` [PATCH v10 10/10] powerpc/32: use set_memory_attr() Jordan Niethe
