* [PATCH v14 0/9] powerpc: Further Strict RWX support
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

This series adds more Strict RWX support on powerpc, in particular Strict
Module RWX. Thanks for all of the feedback, everyone.
It is now rebased on linux-next + powerpc/64s/radix: Enable huge vmalloc mappings
(https://lore.kernel.org/linuxppc-dev/20210503091755.613393-1-npiggin@gmail.com/)

For reference, the previous revision is available here:
https://lore.kernel.org/linuxppc-dev/20210510011828.4006623-1-jniethe5@gmail.com/

The changes in v14 for each patch:

Christophe Leroy (2):
  powerpc/mm: implement set_memory_attr()
  powerpc/32: use set_memory_attr()

Jordan Niethe (4):
  powerpc/lib/code-patching: Set up Strict RWX patching earlier
  powerpc/modules: Make module_alloc() Strict Module RWX aware
    v14: - Split out from powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
    - Add and use strict_module_rwx_enabled() helper
  powerpc/bpf: Remove bpf_jit_free()
  powerpc/bpf: Write protect JIT code

Russell Currey (3):
  powerpc/mm: Implement set_memory() routines
    v14: - only check is_vm_area_hugepages() for virtual memory
  powerpc/kprobes: Mark newly allocated probes as ROX
    v14: - Use strict_module_rwx_enabled()
  powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
    v14: - Make changes to module_alloc() its own commit

 arch/powerpc/Kconfig                  |   2 +
 arch/powerpc/include/asm/mmu.h        |   5 +
 arch/powerpc/include/asm/set_memory.h |  34 +++++++
 arch/powerpc/kernel/kprobes.c         |  17 ++++
 arch/powerpc/kernel/module.c          |   4 +-
 arch/powerpc/lib/code-patching.c      |  12 +--
 arch/powerpc/mm/Makefile              |   2 +-
 arch/powerpc/mm/pageattr.c            | 134 ++++++++++++++++++++++++++
 arch/powerpc/mm/pgtable_32.c          |  60 ++----------
 arch/powerpc/net/bpf_jit_comp.c       |  13 +--
 10 files changed, 211 insertions(+), 72 deletions(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h
 create mode 100644 arch/powerpc/mm/pageattr.c

-- 
2.25.1


* [PATCH v14 1/9] powerpc/mm: Implement set_memory() routines
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

The set_memory_{ro/rw/nx/x}() functions are required for
STRICT_MODULE_RWX, and are generally useful primitives to have.  This
implementation is designed to be generic across powerpc's many MMUs.
It's possible that this could be optimised to be faster for specific
MMUs.

This implementation does not handle cases where the caller is attempting
to change the mapping of the page it is executing from, or if another
CPU is concurrently using the page being altered.  These cases likely
shouldn't happen, but a more complex implementation with MMU-specific code
could safely handle them.

On hash, the linear mapping is not kept in the Linux page table, so this
will not change the protection if used on that range. Currently these
functions are not used on the linear map, so just WARN for now.

apply_to_existing_page_range() does not work on huge pages, so for now
disallow changing the protection of huge pages.
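
As an illustration, a caller that wants a page of ROX memory could use
these routines like this (a minimal sketch, assuming a page-aligned
vmalloc() allocation; alloc_rox_page() is a hypothetical helper, not
part of this series):

	#include <linux/vmalloc.h>
	#include <linux/set_memory.h>

	static void *alloc_rox_page(void)
	{
		void *p = vmalloc(PAGE_SIZE);	/* lands in the vmalloc region */

		if (!p)
			return NULL;

		/* write code into the page here, then lock it down */
		set_memory_ro((unsigned long)p, 1);	/* drop write */
		set_memory_x((unsigned long)p, 1);	/* add exec */

		return p;
	}

This is essentially the pattern the kprobes patch later in this series
uses via module_alloc().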

Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[jpn: - Allow set memory functions to be used without Strict RWX
      - Hash: Disallow certain regions
      - Have change_page_attr() take function pointers to manipulate ptes
      - Radix: Add ptesync after set_pte_at()]
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: WARN if trying to change the hash linear map
v11: - Update copywrite dates
     - Allow set memory functions to be used without Strict RWX
     - Hash: Disallow certain regions and add comment explaining why
     - Have change_page_attr() take function pointers to manipulate ptes
     - Clarify change_page_attr()'s comment
     - Radix: Add ptesync after set_pte_at()
v12: - change_page_attr() back to taking an action value
     - disallow operating on huge pages
v14: - only check is_vm_area_hugepages() for virtual memory
---
 arch/powerpc/Kconfig                  |   1 +
 arch/powerpc/include/asm/set_memory.h |  32 ++++++++
 arch/powerpc/mm/Makefile              |   2 +-
 arch/powerpc/mm/pageattr.c            | 101 ++++++++++++++++++++++++++
 4 files changed, 135 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h
 create mode 100644 arch/powerpc/mm/pageattr.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 3f863dd21374..cce0a137b046 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -138,6 +138,7 @@ config PPC
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
+	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
new file mode 100644
index 000000000000..64011ea444b4
--- /dev/null
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SET_MEMORY_H
+#define _ASM_POWERPC_SET_MEMORY_H
+
+#define SET_MEMORY_RO	0
+#define SET_MEMORY_RW	1
+#define SET_MEMORY_NX	2
+#define SET_MEMORY_X	3
+
+int change_memory_attr(unsigned long addr, int numpages, long action);
+
+static inline int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
+}
+
+static inline int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
+}
+
+static inline int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
+}
+
+static inline int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_X);
+}
+
+#endif
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index c3df3a8501d4..9142cf1fb0d5 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -5,7 +5,7 @@
 
 ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
-obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o \
+obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
 				   pgtable-frag.o ioremap.o ioremap_$(BITS).o \
 				   init-common.o mmu_context.o drmem.o \
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
new file mode 100644
index 000000000000..5e5ae50a7f23
--- /dev/null
+++ b/arch/powerpc/mm/pageattr.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * MMU-generic set_memory implementation for powerpc
+ *
+ * Copyright 2019-2021, IBM Corporation.
+ */
+
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/set_memory.h>
+
+#include <asm/mmu.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+
+
+/*
+ * Updates the attributes of a page in three steps:
+ *
+ * 1. invalidate the page table entry
+ * 2. flush the TLB
+ * 3. install the new entry with the updated attributes
+ *
+ * Invalidating the pte means there are situations where this will not work
+ * when in theory it should.
+ * For example:
+ * - removing write from page whilst it is being executed
+ * - setting a page read-only whilst it is being read by another CPU
+ *
+ */
+static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	long action = (long)data;
+	pte_t pte;
+
+	spin_lock(&init_mm.page_table_lock);
+
+	/* invalidate the PTE so it's safe to modify */
+	pte = ptep_get_and_clear(&init_mm, addr, ptep);
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	/* modify the PTE bits as desired, then apply */
+	switch (action) {
+	case SET_MEMORY_RO:
+		pte = pte_wrprotect(pte);
+		break;
+	case SET_MEMORY_RW:
+		pte = pte_mkwrite(pte_mkdirty(pte));
+		break;
+	case SET_MEMORY_NX:
+		pte = pte_exprotect(pte);
+		break;
+	case SET_MEMORY_X:
+		pte = pte_mkexec(pte);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		break;
+	}
+
+	set_pte_at(&init_mm, addr, ptep, pte);
+
+	/* See ptesync comment in radix__set_pte_at() */
+	if (radix_enabled())
+		asm volatile("ptesync": : :"memory");
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+int change_memory_attr(unsigned long addr, int numpages, long action)
+{
+	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+	unsigned long size = numpages * PAGE_SIZE;
+
+	if (!numpages)
+		return 0;
+
+	if (WARN_ON_ONCE(is_vmalloc_or_module_addr((void *)addr) &&
+			 is_vm_area_hugepages((void *)addr)))
+		return -EINVAL;
+
+#ifdef CONFIG_PPC_BOOK3S_64
+	/*
+	 * On hash, the linear mapping is not in the Linux page table so
+	 * apply_to_existing_page_range() will have no effect. If in the future
+	 * the set_memory_* functions are used on the linear map this will need
+	 * to be updated.
+	 */
+	if (!radix_enabled()) {
+		int region = get_region_id(addr);
+
+		if (WARN_ON_ONCE(region != VMALLOC_REGION_ID && region != IO_REGION_ID))
+			return -EINVAL;
+	}
+#endif
+
+	return apply_to_existing_page_range(&init_mm, start, size,
+					    change_page_attr, (void *)action);
+}
-- 
2.25.1


* [PATCH v14 2/9] powerpc/lib/code-patching: Set up Strict RWX patching earlier
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

setup_text_poke_area() is a late init call, so it runs before
mark_rodata_ro() and after the other init calls. This lets all the init
code patching simply write to its target locations. In the future,
kprobes is going to allocate its instruction pages ROX, which means
setup_text_poke_area() will need to have already been called before
their code can be patched. However, init_kprobes() (which allocates and
patches some instruction pages) is an early init call, so it happens
before setup_text_poke_area().

start_kernel() calls poking_init() before any of the init calls. On
powerpc, poking_init() is currently a nop. setup_text_poke_area() relies
on kernel virtual memory, cpu hotplug and per_cpu_areas being set up.
setup_per_cpu_areas(), boot_cpu_hotplug_init() and mm_init() are called
before poking_init().

Turn setup_text_poke_area() into poking_init().
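
The resulting ordering, as described above, looks roughly like this
(simplified sketch, not the exact init/main.c code):

	start_kernel()
		setup_per_cpu_areas()
		boot_cpu_hotplug_init()
		mm_init()
		...
		poking_init()		<- was setup_text_poke_area()
		...
	initcalls (early ... late), including init_kprobes()
	mark_rodata_ro()		<- after all initcalls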

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v9: New to series
---
 arch/powerpc/lib/code-patching.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 870b30d9be2f..15296207e1ba 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -70,14 +70,11 @@ static int text_area_cpu_down(unsigned int cpu)
 }
 
 /*
- * Run as a late init call. This allows all the boot time patching to be done
- * simply by patching the code, and then we're called here prior to
- * mark_rodata_ro(), which happens after all init calls are run. Although
- * BUG_ON() is rude, in this case it should only happen if ENOMEM, and we judge
- * it as being preferable to a kernel that will crash later when someone tries
- * to use patch_instruction().
+ * Although BUG_ON() is rude, in this case it should only happen if ENOMEM, and
+ * we judge it as being preferable to a kernel that will crash later when
+ * someone tries to use patch_instruction().
  */
-static int __init setup_text_poke_area(void)
+int __init poking_init(void)
 {
 	BUG_ON(!cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
 		"powerpc/text_poke:online", text_area_cpu_up,
@@ -85,7 +82,6 @@ static int __init setup_text_poke_area(void)
 
 	return 0;
 }
-late_initcall(setup_text_poke_area);
 
 /*
  * This can be called for kernel text or a module.
-- 
2.25.1


* [PATCH v14 3/9] powerpc/modules: Make module_alloc() Strict Module RWX aware
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

Make module_alloc() use PAGE_KERNEL protections instead of
PAGE_KERNEL_EXEC if Strict Module RWX is enabled.

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v14: - Split out from powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
     - Add and use strict_module_rwx_enabled() helper
---
 arch/powerpc/include/asm/mmu.h | 5 +++++
 arch/powerpc/kernel/module.c   | 4 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 607168b1aef4..7710bf0cbf8a 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -357,6 +357,11 @@ static inline bool strict_kernel_rwx_enabled(void)
 	return false;
 }
 #endif
+
+static inline bool strict_module_rwx_enabled(void)
+{
+	return IS_ENABLED(CONFIG_STRICT_MODULE_RWX) && strict_kernel_rwx_enabled();
+}
 #endif /* !__ASSEMBLY__ */
 
 /* The kernel use the constants below to index in the page sizes array.
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index 3f35c8d20be7..ed04a3ba66fe 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -92,12 +92,14 @@ int module_finalize(const Elf_Ehdr *hdr,
 static __always_inline void *
 __module_alloc(unsigned long size, unsigned long start, unsigned long end)
 {
+	pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC;
+
 	/*
 	 * Don't do huge page allocations for modules yet until more testing
 	 * is done. STRICT_MODULE_RWX may require extra work to support this
 	 * too.
 	 */
-	return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL, PAGE_KERNEL_EXEC,
+	return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL, prot,
 				    VM_FLUSH_RESET_PERMS | VM_NO_HUGE_VMAP,
 				    NUMA_NO_NODE, __builtin_return_address(0));
 }
-- 
2.25.1


* [PATCH v14 4/9] powerpc/kprobes: Mark newly allocated probes as ROX
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

Add the arch specific insn page allocator for powerpc. This allocates
ROX pages if Strict Module RWX is enabled. These pages are only written
to with patch_instruction(), which is able to write to RO pages.
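
For context, a probe's instruction slot is only ever written through the
patching code, roughly like this (a sketch of the arch_prepare_kprobe()
path, not the exact code):

	struct ppc_inst insn = ppc_inst_read((struct ppc_inst *)p->addr);

	/*
	 * patch_instruction() writes via the text-poke area set up by
	 * poking_init(), so the RO mapping is never written directly.
	 */
	patch_instruction((struct ppc_inst *)p->ainsn.insn, insn);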

Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[jpn: Reword commit message, switch to __vmalloc_node_range()]
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v9: - vmalloc_exec() no longer exists
    - Set the page to RW before freeing it
v10: - use __vmalloc_node_range()
v11: - Neaten up
v12: - Switch from __vmalloc_node_range() to module_alloc()
v13: Use strict_kernel_rwx_enabled()
v14: Use strict_module_rwx_enabled()
---
 arch/powerpc/kernel/kprobes.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 01ab2163659e..937e338053ff 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -19,11 +19,13 @@
 #include <linux/extable.h>
 #include <linux/kdebug.h>
 #include <linux/slab.h>
+#include <linux/moduleloader.h>
 #include <asm/code-patching.h>
 #include <asm/cacheflush.h>
 #include <asm/sstep.h>
 #include <asm/sections.h>
 #include <asm/inst.h>
+#include <asm/set_memory.h>
 #include <linux/uaccess.h>
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
@@ -103,6 +105,21 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 	return addr;
 }
 
+void *alloc_insn_page(void)
+{
+	void *page;
+
+	page = module_alloc(PAGE_SIZE);
+	if (!page)
+		return NULL;
+
+	if (strict_module_rwx_enabled()) {
+		set_memory_ro((unsigned long)page, 1);
+		set_memory_x((unsigned long)page, 1);
+	}
+	return page;
+}
+
 int arch_prepare_kprobe(struct kprobe *p)
 {
 	int ret = 0;
-- 
2.25.1


* [PATCH v14 5/9] powerpc/bpf: Remove bpf_jit_free()
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

Commit 74451e66d516 ("bpf: make jited programs visible in traces") added
a default bpf_jit_free() implementation. Powerpc did not use the default
bpf_jit_free() because powerpc did not set the images read-only. The
default bpf_jit_free() called bpf_jit_binary_unlock_ro(), which is why it
could not be used on powerpc.

Commit d53d2f78cead ("bpf: Use vmalloc special flag") moved keeping
track of read-only memory to vmalloc. This included removing
bpf_jit_binary_unlock_ro(). Therefore there is no reason powerpc needs
its own bpf_jit_free(). Remove it.

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v11: New to series
---
 arch/powerpc/net/bpf_jit_comp.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 798ac4350a82..6c8c268e4fe8 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -257,15 +257,3 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 
 	return fp;
 }
-
-/* Overriding bpf_jit_free() as we don't set images read-only. */
-void bpf_jit_free(struct bpf_prog *fp)
-{
-	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
-	struct bpf_binary_header *bpf_hdr = (void *)addr;
-
-	if (fp->jited)
-		bpf_jit_binary_free(bpf_hdr);
-
-	bpf_prog_unlock_free(fp);
-}
-- 
2.25.1


* [PATCH v14 6/9] powerpc/bpf: Write protect JIT code
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

Add the necessary call to bpf_jit_binary_lock_ro() to remove write and
add exec permissions to the JIT image after it has finished being
written.

Without CONFIG_STRICT_MODULE_RWX the image will be writable and
executable until the call to bpf_jit_binary_lock_ro().
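
For reference, the generic helper boils down to the set_memory_*()
routines added earlier in this series; roughly (from
include/linux/filter.h, paraphrased):

	static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
	{
		set_vm_flush_reset_perms(hdr);
		set_memory_ro((unsigned long)hdr, hdr->pages);
		set_memory_x((unsigned long)hdr, hdr->pages);
	}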

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: New to series
v11: Remove CONFIG_STRICT_MODULE_RWX conditional
---
 arch/powerpc/net/bpf_jit_comp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 6c8c268e4fe8..53aefee3fe70 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -237,6 +237,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	fp->jited_len = alloclen;
 
 	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
+	bpf_jit_binary_lock_ro(bpf_hdr);
 	if (!fp->is_func || extra_pass) {
 		bpf_prog_fill_jited_linfo(fp, addrs);
 out_addrs:
-- 
2.25.1


* [PATCH v14 7/9] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

From: Russell Currey <ruscur@russell.cc>

To enable strict module RWX on powerpc, set:

    CONFIG_STRICT_MODULE_RWX=y

You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
security benefit.

ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
This is due to a quirk in arch/Kconfig and arch/powerpc/Kconfig that
makes STRICT_MODULE_RWX *on by default* in configurations where
STRICT_KERNEL_RWX is *unavailable*.

Since this doesn't make much sense, and module RWX without kernel RWX
doesn't make much sense, having the same dependencies as kernel RWX
works around this problem.

Book32s/32 processors with a hash mmu (i.e. 604 core) can not set memory
protection on a page by page basis so do not enable.

Signed-off-by: Russell Currey <ruscur@russell.cc>
[jpn: - predicate on !PPC_BOOK3S_604
      - make module_alloc() use PAGE_KERNEL protection]
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
v10: - Predicate on !PPC_BOOK3S_604
     - Make module_alloc() use PAGE_KERNEL protection
v11: - Neaten up
v13: Use strict_kernel_rwx_enabled()
v14: Make changes to module_alloc() its own commit
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index cce0a137b046..cb5d9d862c35 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -140,6 +140,7 @@ config PPC
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
+	select ARCH_HAS_STRICT_MODULE_RWX	if ARCH_HAS_STRICT_KERNEL_RWX && !PPC_BOOK3S_604
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
 	select ARCH_HAS_COPY_MC			if PPC64
-- 
2.25.1


* [PATCH v14 8/9] powerpc/mm: implement set_memory_attr()
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, kbuild test robot, npiggin, aneesh.kumar, naveen.n.rao,
	Jordan Niethe, dja

From: Christophe Leroy <christophe.leroy@csgroup.eu>

In addition to the set_memory_xx() functions, which allow changing the
memory attributes of not (yet) used memory regions, implement a
set_memory_attr() function to:
- set the final memory protection after init on currently used
kernel regions.
- enable/disable kernel memory regions in the scope of DEBUG_PAGEALLOC.

Unlike the set_memory_xx() functions, which can act in three steps
because the regions are unused, this function must modify pages 'on the
fly', as the kernel is executing from them. At the moment only PPC32
will use it, and changing page attributes on the fly is not an issue
there.
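
A minimal usage sketch (this is essentially what the next patch does for
the PPC32 kernel text):

	unsigned long numpages = PFN_UP((unsigned long)_etext) -
				 PFN_DOWN((unsigned long)_stext);

	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);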

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reported-by: kbuild test robot <lkp@intel.com>
[ruscur: cast "data" to unsigned long instead of int]
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
 arch/powerpc/include/asm/set_memory.h |  2 ++
 arch/powerpc/mm/pageattr.c            | 33 +++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
index 64011ea444b4..b040094f7920 100644
--- a/arch/powerpc/include/asm/set_memory.h
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -29,4 +29,6 @@ static inline int set_memory_x(unsigned long addr, int numpages)
 	return change_memory_attr(addr, numpages, SET_MEMORY_X);
 }
 
+int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot);
+
 #endif
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
index 5e5ae50a7f23..0876216ceee6 100644
--- a/arch/powerpc/mm/pageattr.c
+++ b/arch/powerpc/mm/pageattr.c
@@ -99,3 +99,36 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
 	return apply_to_existing_page_range(&init_mm, start, size,
 					    change_page_attr, (void *)action);
 }
+
+/*
+ * Set the attributes of a page:
+ *
+ * This function is used by PPC32 at the end of init to set final kernel memory
+ * protection. It includes changing the mapping of the page it is executing from
+ * and data pages it is using.
+ */
+static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+	pgprot_t prot = __pgprot((unsigned long)data);
+
+	spin_lock(&init_mm.page_table_lock);
+
+	set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
+{
+	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+	unsigned long sz = numpages * PAGE_SIZE;
+
+	if (numpages <= 0)
+		return 0;
+
+	return apply_to_existing_page_range(&init_mm, start, sz, set_page_attr,
+					    (void *)pgprot_val(prot));
+}
-- 
2.25.1


* [PATCH v14 9/9] powerpc/32: use set_memory_attr()
From: Jordan Niethe @ 2021-05-17  3:28 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: ajd, cmr, npiggin, aneesh.kumar, naveen.n.rao, Jordan Niethe, dja

From: Christophe Leroy <christophe.leroy@csgroup.eu>

Use set_memory_attr() instead of the PPC32 specific change_page_attr().

change_page_attr() was checking that the address was not mapped by
blocks and was handling highmem, but that's unneeded because the
affected pages can't be in highmem and block mapping verification
is already done by the callers.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[ruscur: rebase on powerpc/merge with Christophe's new patches]
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
---
 arch/powerpc/mm/pgtable_32.c | 60 ++++++------------------------------
 1 file changed, 10 insertions(+), 50 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index e0ec67a16887..dcf5ecca19d9 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -23,6 +23,7 @@
 #include <linux/highmem.h>
 #include <linux/memblock.h>
 #include <linux/slab.h>
+#include <linux/set_memory.h>
 
 #include <asm/pgalloc.h>
 #include <asm/fixmap.h>
@@ -132,64 +133,20 @@ void __init mapin_ram(void)
 	}
 }
 
-static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
-{
-	pte_t *kpte;
-	unsigned long address;
-
-	BUG_ON(PageHighMem(page));
-	address = (unsigned long)page_address(page);
-
-	if (v_block_mapped(address))
-		return 0;
-	kpte = virt_to_kpte(address);
-	if (!kpte)
-		return -EINVAL;
-	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-
-	return 0;
-}
-
-/*
- * Change the page attributes of an page in the linear mapping.
- *
- * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
- */
-static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
-{
-	int i, err = 0;
-	unsigned long flags;
-	struct page *start = page;
-
-	local_irq_save(flags);
-	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr_noflush(page, prot);
-		if (err)
-			break;
-	}
-	wmb();
-	local_irq_restore(flags);
-	flush_tlb_kernel_range((unsigned long)page_address(start),
-			       (unsigned long)page_address(page));
-	return err;
-}
-
 void mark_initmem_nx(void)
 {
-	struct page *page = virt_to_page(_sinittext);
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
 	if (v_block_mapped((unsigned long)_sinittext))
 		mmu_mark_initmem_nx();
 	else
-		change_page_attr(page, numpages, PAGE_KERNEL);
+		set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
 }
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
 void mark_rodata_ro(void)
 {
-	struct page *page;
 	unsigned long numpages;
 
 	if (v_block_mapped((unsigned long)_stext + 1)) {
@@ -198,20 +155,18 @@ void mark_rodata_ro(void)
 		return;
 	}
 
-	page = virt_to_page(_stext);
 	numpages = PFN_UP((unsigned long)_etext) -
 		   PFN_DOWN((unsigned long)_stext);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
-	page = virt_to_page(__start_rodata);
 	numpages = PFN_UP((unsigned long)__init_begin) -
 		   PFN_DOWN((unsigned long)__start_rodata);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+	set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
 
 	// mark_initmem_nx() should have already run by now
 	ptdump_check_wx();
@@ -221,9 +176,14 @@ void mark_rodata_ro(void)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
+	unsigned long addr = (unsigned long)page_address(page);
+
 	if (PageHighMem(page))
 		return;
 
-	change_page_attr(page, numpages, enable ? PAGE_KERNEL : __pgprot(0));
+	if (enable)
+		set_memory_attr(addr, numpages, PAGE_KERNEL);
+	else
+		set_memory_attr(addr, numpages, __pgprot(0));
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
-- 
2.25.1


* Re: [PATCH v14 3/9] powerpc/modules: Make module_alloc() Strict Module RWX aware
From: Christophe Leroy @ 2021-05-17  6:36 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, npiggin, cmr, aneesh.kumar, naveen.n.rao, dja



On 17/05/2021 at 05:28, Jordan Niethe wrote:
> Make module_alloc() use PAGE_KERNEL protections instead of
> PAGE_KERNEL_EXEC if Strict Module RWX is enabled.
> 
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v14: - Split out from powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
>       - Add and use strict_module_rwx_enabled() helper
> ---
>   arch/powerpc/include/asm/mmu.h | 5 +++++
>   arch/powerpc/kernel/module.c   | 4 +++-
>   2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
> index 607168b1aef4..7710bf0cbf8a 100644
> --- a/arch/powerpc/include/asm/mmu.h
> +++ b/arch/powerpc/include/asm/mmu.h
> @@ -357,6 +357,11 @@ static inline bool strict_kernel_rwx_enabled(void)
>   	return false;
>   }
>   #endif
> +
> +static inline bool strict_module_rwx_enabled(void)
> +{
> +	return IS_ENABLED(CONFIG_STRICT_MODULE_RWX) && strict_kernel_rwx_enabled();
> +}

Looking at arch/Kconfig, I have the feeling that it is possible to select CONFIG_STRICT_MODULE_RWX 
without selecting CONFIG_STRICT_KERNEL_RWX.

In that case, strict_kernel_rwx_enabled() will return false.

>   #endif /* !__ASSEMBLY__ */
>   
>   /* The kernel use the constants below to index in the page sizes array.
> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> index 3f35c8d20be7..ed04a3ba66fe 100644
> --- a/arch/powerpc/kernel/module.c
> +++ b/arch/powerpc/kernel/module.c
> @@ -92,12 +92,14 @@ int module_finalize(const Elf_Ehdr *hdr,
>   static __always_inline void *
>   __module_alloc(unsigned long size, unsigned long start, unsigned long end)
>   {
> +	pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC;
> +
>   	/*
>   	 * Don't do huge page allocations for modules yet until more testing
>   	 * is done. STRICT_MODULE_RWX may require extra work to support this
>   	 * too.
>   	 */
> -	return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL, PAGE_KERNEL_EXEC,
> +	return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL, prot,
>   				    VM_FLUSH_RESET_PERMS | VM_NO_HUGE_VMAP,
>   				    NUMA_NO_NODE, __builtin_return_address(0));
>   }
> 

* Re: [PATCH v14 6/9] powerpc/bpf: Write protect JIT code
From: Christophe Leroy @ 2021-05-17  6:39 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, npiggin, cmr, aneesh.kumar, naveen.n.rao, dja



On 17/05/2021 at 05:28, Jordan Niethe wrote:
> Add the necessary call to bpf_jit_binary_lock_ro() to remove write and
> add exec permissions to the JIT image after it has finished being
> written.
> 
> Without CONFIG_STRICT_MODULE_RWX the image will be writable and
> executable until the call to bpf_jit_binary_lock_ro().

And _with_ CONFIG_STRICT_MODULE_RWX what will happen? It will be _writable_ but not _executable_?

> 
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> ---
> v10: New to series
> v11: Remove CONFIG_STRICT_MODULE_RWX conditional
> ---
>   arch/powerpc/net/bpf_jit_comp.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 6c8c268e4fe8..53aefee3fe70 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -237,6 +237,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>   	fp->jited_len = alloclen;
>   
>   	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
> +	bpf_jit_binary_lock_ro(bpf_hdr);
>   	if (!fp->is_func || extra_pass) {
>   		bpf_prog_fill_jited_linfo(fp, addrs);
>   out_addrs:
> 

* Re: [PATCH v14 7/9] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
From: Christophe Leroy @ 2021-05-17  6:48 UTC (permalink / raw)
  To: Jordan Niethe, linuxppc-dev
  Cc: ajd, npiggin, cmr, aneesh.kumar, naveen.n.rao, dja



On 17/05/2021 at 05:28, Jordan Niethe wrote:
> From: Russell Currey <ruscur@russell.cc>
> 
> To enable strict module RWX on powerpc, set:
> 
>      CONFIG_STRICT_MODULE_RWX=y
> 
> You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
> security benefit.
> 
> ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
> This is due to a quirk in arch/Kconfig and arch/powerpc/Kconfig that
> makes STRICT_MODULE_RWX *on by default* in configurations where
> STRICT_KERNEL_RWX is *unavailable*.
> 
> Since this doesn't make much sense, and module RWX without kernel RWX
> doesn't make much sense, having the same dependencies as kernel RWX
> works around this problem.
> 
> Book32s/32 processors with a hash mmu (i.e. 604 core) can not set memory
   ^^^^^^

Book32s ==> Book3s

> protection on a page by page basis so do not enable.

It is not exactly that. The problem on 604 is for _exec_ protection.

Note that on book3s/32, on both 603 and 604 core, it is not possible to write protect kernel pages. 
So maybe it would make sense to disable ARCH_HAS_STRICT_MODULE_RWX on CONFIG_PPC_BOOK3S_32 
completely, I'm not sure.


> 
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> [jpn: - predicate on !PPC_BOOK3S_604
>        - make module_alloc() use PAGE_KERNEL protection]
> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
> v10: - Predicate on !PPC_BOOK3S_604
>       - Make module_alloc() use PAGE_KERNEL protection
> v11: - Neaten up
> v13: Use strict_kernel_rwx_enabled()
> v14: Make changes to module_alloc() its own commit
> ---
>   arch/powerpc/Kconfig | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index cce0a137b046..cb5d9d862c35 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -140,6 +140,7 @@ config PPC
>   	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
>   	select ARCH_HAS_SET_MEMORY
>   	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
> +	select ARCH_HAS_STRICT_MODULE_RWX	if ARCH_HAS_STRICT_KERNEL_RWX && !PPC_BOOK3S_604
>   	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_HAS_UACCESS_FLUSHCACHE
>   	select ARCH_HAS_COPY_MC			if PPC64
> 

* Re: [PATCH v14 3/9] powerpc/modules: Make module_alloc() Strict Module RWX aware
From: Jordan Niethe @ 2021-05-17  6:48 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Nicholas Piggin, cmr, Aneesh Kumar K.V, naveen.n.rao,
	linuxppc-dev, Daniel Axtens

On Mon, May 17, 2021 at 4:37 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> On 17/05/2021 at 05:28, Jordan Niethe wrote:
> > Make module_alloc() use PAGE_KERNEL protections instead of
> > PAGE_KERNEL_EXEC if Strict Module RWX is enabled.
> >
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v14: - Split out from powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
> >       - Add and use strict_module_rwx_enabled() helper
> > ---
> >   arch/powerpc/include/asm/mmu.h | 5 +++++
> >   arch/powerpc/kernel/module.c   | 4 +++-
> >   2 files changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
> > index 607168b1aef4..7710bf0cbf8a 100644
> > --- a/arch/powerpc/include/asm/mmu.h
> > +++ b/arch/powerpc/include/asm/mmu.h
> > @@ -357,6 +357,11 @@ static inline bool strict_kernel_rwx_enabled(void)
> >       return false;
> >   }
> >   #endif
> > +
> > +static inline bool strict_module_rwx_enabled(void)
> > +{
> > +     return IS_ENABLED(CONFIG_STRICT_MODULE_RWX) && strict_kernel_rwx_enabled();
> > +}
>
> Looking at arch/Kconfig, I have the feeling that it is possible to select CONFIG_STRICT_MODULE_RWX
> without selecting CONFIG_STRICT_KERNEL_RWX.
>
> In that case, strict_kernel_rwx_enabled() will return false.
Ok, if someone did that currently it would break things, e.g. code
patching. I think it should be made impossible to select
CONFIG_STRICT_MODULE_RWX without CONFIG_STRICT_KERNEL_RWX?
>
> >   #endif /* !__ASSEMBLY__ */
> >
> >   /* The kernel use the constants below to index in the page sizes array.
> > diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> > index 3f35c8d20be7..ed04a3ba66fe 100644
> > --- a/arch/powerpc/kernel/module.c
> > +++ b/arch/powerpc/kernel/module.c
> > @@ -92,12 +92,14 @@ int module_finalize(const Elf_Ehdr *hdr,
> >   static __always_inline void *
> >   __module_alloc(unsigned long size, unsigned long start, unsigned long end)
> >   {
> > +     pgprot_t prot = strict_module_rwx_enabled() ? PAGE_KERNEL : PAGE_KERNEL_EXEC;
> > +
> >       /*
> >        * Don't do huge page allocations for modules yet until more testing
> >        * is done. STRICT_MODULE_RWX may require extra work to support this
> >        * too.
> >        */
> > -     return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL, PAGE_KERNEL_EXEC,
> > +     return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL, prot,
> >                                   VM_FLUSH_RESET_PERMS | VM_NO_HUGE_VMAP,
> >                                   NUMA_NO_NODE, __builtin_return_address(0));
> >   }
> >

* Re: [PATCH v14 3/9] powerpc/modules: Make module_alloc() Strict Module RWX aware
From: Michael Ellerman @ 2021-05-17 11:01 UTC (permalink / raw)
  To: Jordan Niethe, Christophe Leroy
  Cc: ajd, Aneesh Kumar K.V, Nicholas Piggin, cmr, naveen.n.rao,
	linuxppc-dev, Daniel Axtens

Jordan Niethe <jniethe5@gmail.com> writes:
> On Mon, May 17, 2021 at 4:37 PM Christophe Leroy
> <christophe.leroy@csgroup.eu> wrote:
>> On 17/05/2021 at 05:28, Jordan Niethe wrote:
>> > Make module_alloc() use PAGE_KERNEL protections instead of
>> > PAGE_KERNEL_EXEC if Strict Module RWX is enabled.
>> >
>> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>> > ---
>> > v14: - Split out from powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
>> >       - Add and use strict_module_rwx_enabled() helper
>> > ---
>> >   arch/powerpc/include/asm/mmu.h | 5 +++++
>> >   arch/powerpc/kernel/module.c   | 4 +++-
>> >   2 files changed, 8 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
>> > index 607168b1aef4..7710bf0cbf8a 100644
>> > --- a/arch/powerpc/include/asm/mmu.h
>> > +++ b/arch/powerpc/include/asm/mmu.h
>> > @@ -357,6 +357,11 @@ static inline bool strict_kernel_rwx_enabled(void)
>> >       return false;
>> >   }
>> >   #endif
>> > +
>> > +static inline bool strict_module_rwx_enabled(void)
>> > +{
>> > +     return IS_ENABLED(CONFIG_STRICT_MODULE_RWX) && strict_kernel_rwx_enabled();
>> > +}
>>
>> Looking at arch/Kconfig, I have the feeling that it is possible to select CONFIG_STRICT_MODULE_RWX
>> without selecting CONFIG_STRICT_KERNEL_RWX.
>>
>> In that case, strict_kernel_rwx_enabled() will return false.

> Ok, if someone did that currently it would break things, e.g. code
> patching. I think it should be made impossible to select
> CONFIG_STRICT_MODULE_RWX without CONFIG_STRICT_KERNEL_RWX?

Yeah I don't see any reason to support that combination.

We should be moving to a world where both are on by default, or in fact
are always enabled.

cheers

* Re: [PATCH v14 3/9] powerpc/modules: Make module_alloc() Strict Module RWX aware
From: Christophe Leroy @ 2021-05-17 11:05 UTC (permalink / raw)
  To: Michael Ellerman, Jordan Niethe
  Cc: ajd, Aneesh Kumar K.V, Nicholas Piggin, cmr, naveen.n.rao,
	linuxppc-dev, Daniel Axtens



On 17/05/2021 at 13:01, Michael Ellerman wrote:
> Jordan Niethe <jniethe5@gmail.com> writes:
>> On Mon, May 17, 2021 at 4:37 PM Christophe Leroy
>> <christophe.leroy@csgroup.eu> wrote:
>>> On 17/05/2021 at 05:28, Jordan Niethe wrote:
>>>> Make module_alloc() use PAGE_KERNEL protections instead of
>>>> PAGE_KERNEL_EXEC if Strict Module RWX is enabled.
>>>>
>>>> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>>>> ---
>>>> v14: - Split out from powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
>>>>        - Add and use strict_module_rwx_enabled() helper
>>>> ---
>>>>    arch/powerpc/include/asm/mmu.h | 5 +++++
>>>>    arch/powerpc/kernel/module.c   | 4 +++-
>>>>    2 files changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
>>>> index 607168b1aef4..7710bf0cbf8a 100644
>>>> --- a/arch/powerpc/include/asm/mmu.h
>>>> +++ b/arch/powerpc/include/asm/mmu.h
>>>> @@ -357,6 +357,11 @@ static inline bool strict_kernel_rwx_enabled(void)
>>>>        return false;
>>>>    }
>>>>    #endif
>>>> +
>>>> +static inline bool strict_module_rwx_enabled(void)
>>>> +{
>>>> +     return IS_ENABLED(CONFIG_STRICT_MODULE_RWX) && strict_kernel_rwx_enabled();
>>>> +}
>>>
>>> Looking at arch/Kconfig, I have the feeling that it is possible to select CONFIG_STRICT_MODULE_RWX
>>> without selecting CONFIG_STRICT_KERNEL_RWX.
>>>
>>> In that case, strict_kernel_rwx_enabled() will return false.
> 
>> Ok, if someone did that currently it would break things, e.g. code
>> patching. I think it should be made impossible to select
>> CONFIG_STRICT_MODULE_RWX without CONFIG_STRICT_KERNEL_RWX?
> 
> Yeah I don't see any reason to support that combination.
> 
> We should be moving to a world where both are on by default, or in fact
> are always enabled.
> 

Would it work if we add the following in arch/powerpc/Kconfig ? :

	select STRICT_KERNEL_RWX if STRICT_MODULE_RWX

There should be no dependency issue as powerpc only selects ARCH_HAS_STRICT_MODULE_RWX when 
ARCH_HAS_STRICT_KERNEL_RWX is also selected.

Christophe

* Re: [PATCH v14 3/9] powerpc/modules: Make module_alloc() Strict Module RWX aware
From: Michael Ellerman @ 2021-05-18  1:43 UTC (permalink / raw)
  To: Christophe Leroy, Jordan Niethe
  Cc: ajd, Aneesh Kumar K.V, Nicholas Piggin, cmr, naveen.n.rao,
	linuxppc-dev, Daniel Axtens

Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> On 17/05/2021 at 13:01, Michael Ellerman wrote:
>> Jordan Niethe <jniethe5@gmail.com> writes:
>>> On Mon, May 17, 2021 at 4:37 PM Christophe Leroy
>>> <christophe.leroy@csgroup.eu> wrote:
>>>> On 17/05/2021 at 05:28, Jordan Niethe wrote:
>>>>> Make module_alloc() use PAGE_KERNEL protections instead of
>>>>> PAGE_KERNEL_EXEC if Strict Module RWX is enabled.
>>>>>
>>>>> Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>>>>> ---
>>>>> v14: - Split out from powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
>>>>>        - Add and use strict_module_rwx_enabled() helper
>>>>> ---
>>>>>    arch/powerpc/include/asm/mmu.h | 5 +++++
>>>>>    arch/powerpc/kernel/module.c   | 4 +++-
>>>>>    2 files changed, 8 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
>>>>> index 607168b1aef4..7710bf0cbf8a 100644
>>>>> --- a/arch/powerpc/include/asm/mmu.h
>>>>> +++ b/arch/powerpc/include/asm/mmu.h
>>>>> @@ -357,6 +357,11 @@ static inline bool strict_kernel_rwx_enabled(void)
>>>>>        return false;
>>>>>    }
>>>>>    #endif
>>>>> +
>>>>> +static inline bool strict_module_rwx_enabled(void)
>>>>> +{
>>>>> +     return IS_ENABLED(CONFIG_STRICT_MODULE_RWX) && strict_kernel_rwx_enabled();
>>>>> +}
>>>>
>>>> Looking at arch/Kconfig, I have the feeling that it is possible to select CONFIG_STRICT_MODULE_RWX
>>>> without selecting CONFIG_STRICT_KERNEL_RWX.
>>>>
>>>> In that case, strict_kernel_rwx_enabled() will return false.
>> 
>>> Ok, if someone did that currently it would break things, e.g. code
>>> patching. I think it should be made impossible to select
>>> CONFIG_STRICT_MODULE_RWX without CONFIG_STRICT_KERNEL_RWX?
>> 
>> Yeah I don't see any reason to support that combination.
>> 
>> We should be moving to a world where both are on by default, or in fact
>> are always enabled.
>
> Would it work if we add the following in arch/powerpc/Kconfig ? :
>
> 	select STRICT_KERNEL_RWX if STRICT_MODULE_RWX
>
> There should be no dependency issue as powerpc only selects ARCH_HAS_STRICT_MODULE_RWX when 
> ARCH_HAS_STRICT_KERNEL_RWX is also selected.

I think it will work. It's slightly rude to select things like that, but
I think it's OK for something like this.

Medium term we can possibly just have the generic STRICT_MODULE_RWX
depend on STRICT_KERNEL_RWX.

cheers

* Re: [PATCH v14 7/9] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
From: Jordan Niethe @ 2021-05-20  3:50 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Nicholas Piggin, cmr, Aneesh Kumar K.V, naveen.n.rao,
	linuxppc-dev, Daniel Axtens

On Mon, May 17, 2021 at 4:49 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> > On 17/05/2021 at 05:28, Jordan Niethe wrote:
> > From: Russell Currey <ruscur@russell.cc>
> >
> > To enable strict module RWX on powerpc, set:
> >
> >      CONFIG_STRICT_MODULE_RWX=y
> >
> > You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
> > security benefit.
> >
> > ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
> > This is due to a quirk in arch/Kconfig and arch/powerpc/Kconfig that
> > makes STRICT_MODULE_RWX *on by default* in configurations where
> > STRICT_KERNEL_RWX is *unavailable*.
> >
> > Since this doesn't make much sense, and module RWX without kernel RWX
> > doesn't make much sense, having the same dependencies as kernel RWX
> > works around this problem.
> >
> > Book32s/32 processors with a hash mmu (i.e. 604 core) can not set memory
>    ^^^^^^
>
> Book32s ==> Book3s
Thanks.
>
> > protection on a page by page basis so do not enable.
>
> It is not exactly that. The problem on 604 is for _exec_ protection.
Right.
>
> Note that on book3s/32, on both 603 and 604 core, it is not possible to write protect kernel pages.
> So maybe it would make sense to disable ARCH_HAS_STRICT_MODULE_RWX on CONFIG_PPC_BOOK3S_32
> completely, I'm not sure.
Yeah, that does seem like it would make sense to disable it.
>
>
> >
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > [jpn: - predicate on !PPC_BOOK3S_604
> >        - make module_alloc() use PAGE_KERNEL protection]
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
>
> > ---
> > v10: - Predicate on !PPC_BOOK3S_604
> >       - Make module_alloc() use PAGE_KERNEL protection
> > v11: - Neaten up
> > v13: Use strict_kernel_rwx_enabled()
> > v14: Make changes to module_alloc() its own commit
> > ---
> >   arch/powerpc/Kconfig | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index cce0a137b046..cb5d9d862c35 100644
> > --- a/arch/powerpc/Kconfig
> > +++ b/arch/powerpc/Kconfig
> > @@ -140,6 +140,7 @@ config PPC
> >       select ARCH_HAS_SCALED_CPUTIME          if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> >       select ARCH_HAS_SET_MEMORY
> >       select ARCH_HAS_STRICT_KERNEL_RWX       if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
> > +     select ARCH_HAS_STRICT_MODULE_RWX       if ARCH_HAS_STRICT_KERNEL_RWX && !PPC_BOOK3S_604
> >       select ARCH_HAS_TICK_BROADCAST          if GENERIC_CLOCKEVENTS_BROADCAST
> >       select ARCH_HAS_UACCESS_FLUSHCACHE
> >       select ARCH_HAS_COPY_MC                 if PPC64
> >

* Re: [PATCH v14 6/9] powerpc/bpf: Write protect JIT code
From: Jordan Niethe @ 2021-05-20  4:02 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: ajd, Nicholas Piggin, cmr, Aneesh Kumar K.V, naveen.n.rao,
	linuxppc-dev, Daniel Axtens

On Mon, May 17, 2021 at 4:40 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> > On 17/05/2021 at 05:28, Jordan Niethe wrote:
> > Add the necessary call to bpf_jit_binary_lock_ro() to remove write and
> > add exec permissions to the JIT image after it has finished being
> > written.
> >
> > Without CONFIG_STRICT_MODULE_RWX the image will be writable and
> > executable until the call to bpf_jit_binary_lock_ro().
>
> And _with_ CONFIG_STRICT_MODULE_RWX what will happen? It will be _writable_ but not _executable_?
That's right.
With CONFIG_STRICT_MODULE_RWX the image will initially be PAGE_KERNEL
from bpf_jit_alloc_exec() calling module_alloc(). So not executable.
bpf_jit_binary_lock_ro() will then remove write and add executable.

Without CONFIG_STRICT_MODULE_RWX the image will initially be
PAGE_KERNEL_EXEC from module_alloc().
bpf_jit_binary_lock_ro() will remove write, but until that point it
will have been write + exec.
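
So with CONFIG_STRICT_MODULE_RWX the lifecycle is roughly (a sketch
using the calls from this series, not the exact JIT code):

	image = bpf_jit_alloc_exec(size);	/* module_alloc(): PAGE_KERNEL, rw- */
	/* ... JIT writes the program into the image ... */
	bpf_flush_icache(bpf_hdr, ...);
	bpf_jit_binary_lock_ro(bpf_hdr);	/* set_memory_ro() + set_memory_x(): r-x */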
>
> >
> > Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> > Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
> > ---
> > v10: New to series
> > v11: Remove CONFIG_STRICT_MODULE_RWX conditional
> > ---
> >   arch/powerpc/net/bpf_jit_comp.c | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> > index 6c8c268e4fe8..53aefee3fe70 100644
> > --- a/arch/powerpc/net/bpf_jit_comp.c
> > +++ b/arch/powerpc/net/bpf_jit_comp.c
> > @@ -237,6 +237,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
> >       fp->jited_len = alloclen;
> >
> >       bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
> > +     bpf_jit_binary_lock_ro(bpf_hdr);
> >       if (!fp->is_func || extra_pass) {
> >               bpf_prog_fill_jited_linfo(fp, addrs);
> >   out_addrs:
> >
