* [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc
@ 2019-10-30 7:31 Russell Currey
2019-10-30 7:31 ` [PATCH v5 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
` (5 more replies)
0 siblings, 6 replies; 16+ messages in thread
From: Russell Currey @ 2019-10-30 7:31 UTC (permalink / raw)
To: linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
kernel-hardening
v4 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198268.html
v3 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198023.html
Changes since v4:
[1/5]: Addressed review comments from Michael Ellerman (thanks!)
[4/5]: make ARCH_HAS_STRICT_MODULE_RWX depend on
ARCH_HAS_STRICT_KERNEL_RWX to simplify things and avoid
STRICT_MODULE_RWX being *on by default* in cases where
STRICT_KERNEL_RWX is *unavailable*
[5/5]: split skiroot_defconfig changes out into its own patch
The whole Kconfig situation is really weird and confusing, I believe the
correct resolution is to change arch/Kconfig but the consequences are so
minor that I don't think it's worth it, especially given that I expect
powerpc to have mandatory strict RWX Soon(tm).
Russell Currey (5):
powerpc/mm: Implement set_memory() routines
powerpc/kprobes: Mark newly allocated probes as RO
powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
arch/powerpc/Kconfig | 2 +
arch/powerpc/Kconfig.debug | 6 +-
arch/powerpc/configs/skiroot_defconfig | 1 +
arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
arch/powerpc/kernel/kprobes.c | 3 +
arch/powerpc/mm/Makefile | 1 +
arch/powerpc/mm/pageattr.c | 77 ++++++++++++++++++++++++++
arch/powerpc/mm/ptdump/ptdump.c | 21 ++++++-
8 files changed, 140 insertions(+), 3 deletions(-)
create mode 100644 arch/powerpc/include/asm/set_memory.h
create mode 100644 arch/powerpc/mm/pageattr.c
--
2.23.0
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v5 1/5] powerpc/mm: Implement set_memory() routines
2019-10-30 7:31 [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
@ 2019-10-30 7:31 ` Russell Currey
2019-10-30 8:01 ` Christophe Leroy
2019-10-30 7:31 ` [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO Russell Currey
` (4 subsequent siblings)
5 siblings, 1 reply; 16+ messages in thread
From: Russell Currey @ 2019-10-30 7:31 UTC (permalink / raw)
To: linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
kernel-hardening
The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
and are generally useful primitives to have. This implementation is
designed to be completely generic across powerpc's many MMUs.
It's possible that this could be optimised to be faster for specific
MMUs, but the focus is on having a generic and safe implementation for
now.
This implementation does not handle cases where the caller is attempting
to change the mapping of the page it is executing from, or if another
CPU is concurrently using the page being altered. These cases likely
shouldn't happen, but a more complex implementation with MMU-specific code
could safely handle them, so that is left as a TODO for now.
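The action dispatch these routines use can be modelled in plain userspace C. This is a hypothetical sketch for illustration only: the flag bits and `apply_action()` helper are stand-ins invented here, not the real powerpc PTE helpers (`pte_wrprotect()` and friends) used in the patch below.

```c
#include <assert.h>

#define SET_MEMORY_RO 1
#define SET_MEMORY_RW 2
#define SET_MEMORY_NX 3
#define SET_MEMORY_X  4

/* Simplified permission bits standing in for the real PTE flags. */
#define PTE_WRITE 0x1
#define PTE_EXEC  0x2

static int apply_action(int pte, int action)
{
	switch (action) {
	case SET_MEMORY_RO:
		return pte & ~PTE_WRITE;	/* models pte_wrprotect() */
	case SET_MEMORY_RW:
		return pte | PTE_WRITE;		/* models pte_mkwrite() */
	case SET_MEMORY_NX:
		return pte & ~PTE_EXEC;		/* models pte_exprotect() */
	case SET_MEMORY_X:
		return pte | PTE_EXEC;		/* models pte_mkexec() */
	default:
		return -1;			/* models the -EINVAL path */
	}
}
```

The real implementation additionally clears and reinstalls the PTE around this step, under init_mm's page table lock.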
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
arch/powerpc/mm/Makefile | 1 +
arch/powerpc/mm/pageattr.c | 77 +++++++++++++++++++++++++++
4 files changed, 111 insertions(+)
create mode 100644 arch/powerpc/include/asm/set_memory.h
create mode 100644 arch/powerpc/mm/pageattr.c
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 3e56c9c2f16e..8f7005f0d097 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -133,6 +133,7 @@ config PPC
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_MEMBARRIER_CALLBACKS
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
+ select ARCH_HAS_SET_MEMORY
select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE
diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
new file mode 100644
index 000000000000..5230ddb2fefd
--- /dev/null
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SET_MEMORY_H
+#define _ASM_POWERPC_SET_MEMORY_H
+
+#define SET_MEMORY_RO 1
+#define SET_MEMORY_RW 2
+#define SET_MEMORY_NX 3
+#define SET_MEMORY_X 4
+
+int change_memory_attr(unsigned long addr, int numpages, int action);
+
+static inline int set_memory_ro(unsigned long addr, int numpages)
+{
+ return change_memory_attr(addr, numpages, SET_MEMORY_RO);
+}
+
+static inline int set_memory_rw(unsigned long addr, int numpages)
+{
+ return change_memory_attr(addr, numpages, SET_MEMORY_RW);
+}
+
+static inline int set_memory_nx(unsigned long addr, int numpages)
+{
+ return change_memory_attr(addr, numpages, SET_MEMORY_NX);
+}
+
+static inline int set_memory_x(unsigned long addr, int numpages)
+{
+ return change_memory_attr(addr, numpages, SET_MEMORY_X);
+}
+
+#endif
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 5e147986400d..d0a0bcbc9289 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM) += highmem.o
obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o
obj-$(CONFIG_PPC_PTDUMP) += ptdump/
obj-$(CONFIG_KASAN) += kasan/
+obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pageattr.o
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
new file mode 100644
index 000000000000..aedd79173a44
--- /dev/null
+++ b/arch/powerpc/mm/pageattr.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * MMU-generic set_memory implementation for powerpc
+ *
+ * Copyright 2019, IBM Corporation.
+ */
+
+#include <linux/mm.h>
+#include <linux/set_memory.h>
+
+#include <asm/mmu.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+
+
+/*
+ * Updates the attributes of a page in three steps:
+ *
+ * 1. invalidate the page table entry
+ * 2. flush the TLB
+ * 3. install the new entry with the updated attributes
+ *
+ * This is unsafe if the caller is attempting to change the mapping of the
+ * page it is executing from, or if another CPU is concurrently using the
+ * page being altered.
+ *
+ * TODO make the implementation resistant to this.
+ */
+static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+{
+ int action = *((int *)data);
+ pte_t pte_val;
+ int ret = 0;
+
+ spin_lock(&init_mm.page_table_lock);
+
+ // invalidate the PTE so it's safe to modify
+ pte_val = ptep_get_and_clear(&init_mm, addr, ptep);
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+ // modify the PTE bits as desired, then apply
+ switch (action) {
+ case SET_MEMORY_RO:
+ pte_val = pte_wrprotect(pte_val);
+ break;
+ case SET_MEMORY_RW:
+ pte_val = pte_mkwrite(pte_val);
+ break;
+ case SET_MEMORY_NX:
+ pte_val = pte_exprotect(pte_val);
+ break;
+ case SET_MEMORY_X:
+ pte_val = pte_mkexec(pte_val);
+ break;
+ default:
+ WARN_ON(true);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ set_pte_at(&init_mm, addr, ptep, pte_val);
+out:
+ spin_unlock(&init_mm.page_table_lock);
+ return ret;
+}
+
+int change_memory_attr(unsigned long addr, int numpages, int action)
+{
+ unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+ unsigned long size = numpages * PAGE_SIZE;
+
+ if (!numpages)
+ return 0;
+
+ return apply_to_page_range(&init_mm, start, size, change_page_attr, &action);
+}
--
2.23.0
* [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO
2019-10-30 7:31 [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
2019-10-30 7:31 ` [PATCH v5 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
@ 2019-10-30 7:31 ` Russell Currey
2019-11-01 14:23 ` Daniel Axtens
2019-11-02 10:45 ` Michael Ellerman
2019-10-30 7:31 ` [PATCH v5 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Russell Currey
` (3 subsequent siblings)
5 siblings, 2 replies; 16+ messages in thread
From: Russell Currey @ 2019-10-30 7:31 UTC (permalink / raw)
To: linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
kernel-hardening
With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
W+X page at boot by default. This can be tested with
CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
kernel log during boot.
powerpc doesn't implement its own alloc() for kprobes like other
architectures do, but we couldn't immediately mark RO anyway since we do
a memcpy to the page we allocate later. After that, nothing should be
allowed to modify the page, and write permissions are removed well
before the kprobe is armed.
Thus mark newly allocated probes as read-only once it's safe to do so.
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
arch/powerpc/kernel/kprobes.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 2d27ec4feee4..2610496de7c7 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -24,6 +24,7 @@
#include <asm/sstep.h>
#include <asm/sections.h>
#include <linux/uaccess.h>
+#include <linux/set_memory.h>
DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -131,6 +132,8 @@ int arch_prepare_kprobe(struct kprobe *p)
(unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
}
+ set_memory_ro((unsigned long)p->ainsn.insn, 1);
+
p->ainsn.boostable = 0;
return ret;
}
--
2.23.0
* [PATCH v5 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
2019-10-30 7:31 [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
2019-10-30 7:31 ` [PATCH v5 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
2019-10-30 7:31 ` [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO Russell Currey
@ 2019-10-30 7:31 ` Russell Currey
2019-10-30 7:31 ` [PATCH v5 4/5] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX Russell Currey
` (2 subsequent siblings)
5 siblings, 0 replies; 16+ messages in thread
From: Russell Currey @ 2019-10-30 7:31 UTC (permalink / raw)
To: linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
kernel-hardening
Very rudimentary, just
echo 1 > [debugfs]/check_wx_pages
and check the kernel log. Useful for testing strict module RWX.
Updated the Kconfig entry to reflect this.
Also fixed a typo.
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
arch/powerpc/Kconfig.debug | 6 ++++--
arch/powerpc/mm/ptdump/ptdump.c | 21 ++++++++++++++++++++-
2 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index c59920920ddc..dcfe83d4c211 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -370,7 +370,7 @@ config PPC_PTDUMP
If you are unsure, say N.
config PPC_DEBUG_WX
- bool "Warn on W+X mappings at boot"
+ bool "Warn on W+X mappings at boot & enable manual checks at runtime"
depends on PPC_PTDUMP
help
Generate a warning if any W+X mappings are found at boot.
@@ -384,7 +384,9 @@ config PPC_DEBUG_WX
of other unfixed kernel bugs easier.
There is no runtime or memory usage effect of this option
- once the kernel has booted up - it's a one time check.
+ once the kernel has booted up, it only automatically checks once.
+
+ Enables the "check_wx_pages" debugfs entry for checking at runtime.
If in doubt, say "Y".
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index 2f9ddc29c535..b6cba29ae4a0 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -4,7 +4,7 @@
*
* This traverses the kernel pagetables and dumps the
* information about the used sections of memory to
- * /sys/kernel/debug/kernel_pagetables.
+ * /sys/kernel/debug/kernel_page_tables.
*
* Derived from the arm64 implementation:
* Copyright (c) 2014, The Linux Foundation, Laura Abbott.
@@ -409,6 +409,25 @@ void ptdump_check_wx(void)
else
pr_info("Checked W+X mappings: passed, no W+X pages found\n");
}
+
+static int check_wx_debugfs_set(void *data, u64 val)
+{
+ if (val != 1ULL)
+ return -EINVAL;
+
+ ptdump_check_wx();
+
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(check_wx_fops, NULL, check_wx_debugfs_set, "%llu\n");
+
+static int ptdump_check_wx_init(void)
+{
+ return debugfs_create_file("check_wx_pages", 0200, NULL,
+ NULL, &check_wx_fops) ? 0 : -ENOMEM;
+}
+device_initcall(ptdump_check_wx_init);
#endif
static int ptdump_init(void)
--
2.23.0
* [PATCH v5 4/5] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
2019-10-30 7:31 [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
` (2 preceding siblings ...)
2019-10-30 7:31 ` [PATCH v5 3/5] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime Russell Currey
@ 2019-10-30 7:31 ` Russell Currey
2019-10-30 7:31 ` [PATCH v5 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Russell Currey
2019-10-30 8:58 ` [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Christophe Leroy
5 siblings, 0 replies; 16+ messages in thread
From: Russell Currey @ 2019-10-30 7:31 UTC (permalink / raw)
To: linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
kernel-hardening
To enable strict module RWX on powerpc, set:
CONFIG_STRICT_MODULE_RWX=y
You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
security benefit.
ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
This is due to a quirk in arch/Kconfig and arch/powerpc/Kconfig that
makes STRICT_MODULE_RWX *on by default* in configurations where
STRICT_KERNEL_RWX is *unavailable*.
Since this doesn't make much sense, and module RWX without kernel RWX
doesn't make much sense, having the same dependencies as kernel RWX
works around this problem.
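For context, the arch/Kconfig definition behind this quirk looks approximately like the following (paraphrased from memory of the kernel tree around this era, not quoted verbatim):

```kconfig
config STRICT_MODULE_RWX
	bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
```

If powerpc only selects ARCH_OPTIONAL_KERNEL_RWX together with ARCH_HAS_STRICT_KERNEL_RWX, then selecting ARCH_HAS_STRICT_MODULE_RWX unconditionally would make STRICT_MODULE_RWX default on, with no prompt to disable it, precisely when STRICT_KERNEL_RWX is unavailable.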
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
This means that Daniel's test on book3e64 is pretty useless since we've
gone from being unable to turn it *off* to being unable to turn it *on*.
I think this is the right course of action for now.
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8f7005f0d097..52028e27f2d3 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -135,6 +135,7 @@ config PPC
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
+ select ARCH_HAS_STRICT_MODULE_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE
select ARCH_HAS_UACCESS_MCSAFE if PPC64
--
2.23.0
* [PATCH v5 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
2019-10-30 7:31 [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
` (3 preceding siblings ...)
2019-10-30 7:31 ` [PATCH v5 4/5] powerpc: Set ARCH_HAS_STRICT_MODULE_RWX Russell Currey
@ 2019-10-30 7:31 ` Russell Currey
2019-10-31 0:05 ` Joel Stanley
2019-10-30 8:58 ` [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Christophe Leroy
5 siblings, 1 reply; 16+ messages in thread
From: Russell Currey @ 2019-10-30 7:31 UTC (permalink / raw)
To: linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, dja, npiggin,
kernel-hardening
skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
enabled, and if you want memory protection for kernel text you'd want it
for modules too, so enable STRICT_MODULE_RWX there.
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
arch/powerpc/configs/skiroot_defconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
index 1253482a67c0..719d899081b3 100644
--- a/arch/powerpc/configs/skiroot_defconfig
+++ b/arch/powerpc/configs/skiroot_defconfig
@@ -31,6 +31,7 @@ CONFIG_PERF_EVENTS=y
CONFIG_SLAB_FREELIST_HARDENED=y
CONFIG_JUMP_LABEL=y
CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_SIG=y
--
2.23.0
* Re: [PATCH v5 1/5] powerpc/mm: Implement set_memory() routines
2019-10-30 7:31 ` [PATCH v5 1/5] powerpc/mm: Implement set_memory() routines Russell Currey
@ 2019-10-30 8:01 ` Christophe Leroy
0 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2019-10-30 8:01 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev
Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening
On 30/10/2019 at 08:31, Russell Currey wrote:
> The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
> and are generally useful primitives to have. This implementation is
> designed to be completely generic across powerpc's many MMUs.
>
> It's possible that this could be optimised to be faster for specific
> MMUs, but the focus is on having a generic and safe implementation for
> now.
>
> This implementation does not handle cases where the caller is attempting
> to change the mapping of the page it is executing from, or if another
> CPU is concurrently using the page being altered. These cases likely
> shouldn't happen, but a more complex implementation with MMU-specific code
> could safely handle them, so that is left as a TODO for now.
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
> arch/powerpc/mm/Makefile | 1 +
> arch/powerpc/mm/pageattr.c | 77 +++++++++++++++++++++++++++
> 4 files changed, 111 insertions(+)
> create mode 100644 arch/powerpc/include/asm/set_memory.h
> create mode 100644 arch/powerpc/mm/pageattr.c
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 3e56c9c2f16e..8f7005f0d097 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -133,6 +133,7 @@ config PPC
> select ARCH_HAS_PTE_SPECIAL
> select ARCH_HAS_MEMBARRIER_CALLBACKS
> select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
> + select ARCH_HAS_SET_MEMORY
> select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> select ARCH_HAS_UACCESS_FLUSHCACHE
> diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
> new file mode 100644
> index 000000000000..5230ddb2fefd
> --- /dev/null
> +++ b/arch/powerpc/include/asm/set_memory.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_POWERPC_SET_MEMORY_H
> +#define _ASM_POWERPC_SET_MEMORY_H
> +
> +#define SET_MEMORY_RO 1
> +#define SET_MEMORY_RW 2
> +#define SET_MEMORY_NX 3
> +#define SET_MEMORY_X 4
> +
> +int change_memory_attr(unsigned long addr, int numpages, int action);
> +
> +static inline int set_memory_ro(unsigned long addr, int numpages)
> +{
> + return change_memory_attr(addr, numpages, SET_MEMORY_RO);
> +}
> +
> +static inline int set_memory_rw(unsigned long addr, int numpages)
> +{
> + return change_memory_attr(addr, numpages, SET_MEMORY_RW);
> +}
> +
> +static inline int set_memory_nx(unsigned long addr, int numpages)
> +{
> + return change_memory_attr(addr, numpages, SET_MEMORY_NX);
> +}
> +
> +static inline int set_memory_x(unsigned long addr, int numpages)
> +{
> + return change_memory_attr(addr, numpages, SET_MEMORY_X);
> +}
> +
> +#endif
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 5e147986400d..d0a0bcbc9289 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -20,3 +20,4 @@ obj-$(CONFIG_HIGHMEM) += highmem.o
> obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o
> obj-$(CONFIG_PPC_PTDUMP) += ptdump/
> obj-$(CONFIG_KASAN) += kasan/
> +obj-$(CONFIG_ARCH_HAS_SET_MEMORY) += pageattr.o
> diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
> new file mode 100644
> index 000000000000..aedd79173a44
> --- /dev/null
> +++ b/arch/powerpc/mm/pageattr.c
> @@ -0,0 +1,77 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * MMU-generic set_memory implementation for powerpc
> + *
> + * Copyright 2019, IBM Corporation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/set_memory.h>
> +
> +#include <asm/mmu.h>
> +#include <asm/page.h>
> +#include <asm/pgtable.h>
> +
> +
> +/*
> + * Updates the attributes of a page in three steps:
> + *
> + * 1. invalidate the page table entry
> + * 2. flush the TLB
> + * 3. install the new entry with the updated attributes
> + *
> + * This is unsafe if the caller is attempting to change the mapping of the
> + * page it is executing from, or if another CPU is concurrently using the
> + * page being altered.
> + *
> + * TODO make the implementation resistant to this.
> + */
> +static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
I don't much like the way you are making this function more complex
and less readable with local vars, gotos, etc.
You could just keep the v4 version of change_page_attr(), rename it
__change_page_attr(), then add:
static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
{
	int ret;

	spin_lock(&init_mm.page_table_lock);
	ret = __change_page_attr(ptep, addr, data);
	spin_unlock(&init_mm.page_table_lock);

	return ret;
}
Christophe
> +{
> + int action = *((int *)data);
> + pte_t pte_val;
> + int ret = 0;
> +
> + spin_lock(&init_mm.page_table_lock);
> +
> + // invalidate the PTE so it's safe to modify
> + pte_val = ptep_get_and_clear(&init_mm, addr, ptep);
> + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> + // modify the PTE bits as desired, then apply
> + switch (action) {
> + case SET_MEMORY_RO:
> + pte_val = pte_wrprotect(pte_val);
> + break;
> + case SET_MEMORY_RW:
> + pte_val = pte_mkwrite(pte_val);
> + break;
> + case SET_MEMORY_NX:
> + pte_val = pte_exprotect(pte_val);
> + break;
> + case SET_MEMORY_X:
> + pte_val = pte_mkexec(pte_val);
> + break;
> + default:
> + WARN_ON(true);
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + set_pte_at(&init_mm, addr, ptep, pte_val);
> +out:
> + spin_unlock(&init_mm.page_table_lock);
> + return ret;
> +}
> +
> +int change_memory_attr(unsigned long addr, int numpages, int action)
> +{
> + unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
> + unsigned long size = numpages * PAGE_SIZE;
> +
> + if (!numpages)
> + return 0;
> +
> + return apply_to_page_range(&init_mm, start, size, change_page_attr, &action);
> +}
>
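The split Christophe suggests, a bare wrapper that only handles locking around a helper that does the work, can be modelled in userspace with a pthread mutex. This is an illustrative sketch only; the `int` "pte" and the range check are stand-ins, not kernel code.

```c
#include <assert.h>
#include <pthread.h>

/* Models init_mm.page_table_lock. */
static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Does the actual work; assumes the lock is already held. */
static int __change_page_attr(int *pte, int action)
{
	if (action < 1 || action > 4)
		return -1;	/* models the -EINVAL path */
	*pte = action;		/* stand-in for the real PTE update */
	return 0;
}

/* Thin wrapper that only takes and releases the lock. */
static int change_page_attr(int *pte, int action)
{
	int ret;

	pthread_mutex_lock(&page_table_lock);
	ret = __change_page_attr(pte, action);
	pthread_mutex_unlock(&page_table_lock);

	return ret;
}
```

The benefit of the split is that the worker function stays a straight-line switch with plain returns, and the lock/unlock pairing is visible in one place.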
* Re: [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc
2019-10-30 7:31 [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Russell Currey
` (4 preceding siblings ...)
2019-10-30 7:31 ` [PATCH v5 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Russell Currey
@ 2019-10-30 8:58 ` Christophe Leroy
2019-10-30 18:30 ` Kees Cook
2019-10-31 0:09 ` Russell Currey
5 siblings, 2 replies; 16+ messages in thread
From: Christophe Leroy @ 2019-10-30 8:58 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev
Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening
On 30/10/2019 at 08:31, Russell Currey wrote:
> v4 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198268.html
> v3 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198023.html
>
> Changes since v4:
> [1/5]: Addressed review comments from Michael Ellerman (thanks!)
> [4/5]: make ARCH_HAS_STRICT_MODULE_RWX depend on
> ARCH_HAS_STRICT_KERNEL_RWX to simplify things and avoid
> STRICT_MODULE_RWX being *on by default* in cases where
> STRICT_KERNEL_RWX is *unavailable*
> [5/5]: split skiroot_defconfig changes out into its own patch
>
> The whole Kconfig situation is really weird and confusing, I believe the
> correct resolution is to change arch/Kconfig but the consequences are so
> minor that I don't think it's worth it, especially given that I expect
> powerpc to have mandatory strict RWX Soon(tm).
I'm not sure strict RWX can be made mandatory due to the impact it has
on some subarches:
- On the 8xx, unless all areas are 8Mbytes aligned, there is a
significant overhead on TLB misses. And aligning everything to 8M is a
waste of RAM, which is not acceptable on systems having very little RAM.
- On hash book3s32, we are able to map the kernel BATs. With a few
alignment constraints, we are able to provide STRICT_KERNEL_RWX. But we
are unable to provide exec protection on page granularity. Only on
256Mbytes segments. So for modules, we have to have the vmspace X. It is
also not possible to have a kernel area RO. Only user areas can be made RO.
Christophe
>
> Russell Currey (5):
> powerpc/mm: Implement set_memory() routines
> powerpc/kprobes: Mark newly allocated probes as RO
> powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
> powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
> powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
>
> arch/powerpc/Kconfig | 2 +
> arch/powerpc/Kconfig.debug | 6 +-
> arch/powerpc/configs/skiroot_defconfig | 1 +
> arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
> arch/powerpc/kernel/kprobes.c | 3 +
> arch/powerpc/mm/Makefile | 1 +
> arch/powerpc/mm/pageattr.c | 77 ++++++++++++++++++++++++++
> arch/powerpc/mm/ptdump/ptdump.c | 21 ++++++-
> 8 files changed, 140 insertions(+), 3 deletions(-)
> create mode 100644 arch/powerpc/include/asm/set_memory.h
> create mode 100644 arch/powerpc/mm/pageattr.c
>
* Re: [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc
2019-10-30 8:58 ` [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Christophe Leroy
@ 2019-10-30 18:30 ` Kees Cook
2019-10-30 19:28 ` Christophe Leroy
2019-10-31 0:09 ` Russell Currey
1 sibling, 1 reply; 16+ messages in thread
From: Kees Cook @ 2019-10-30 18:30 UTC (permalink / raw)
To: Christophe Leroy
Cc: Russell Currey, linuxppc-dev, joel, mpe, ajd, dja, npiggin,
kernel-hardening
On Wed, Oct 30, 2019 at 09:58:19AM +0100, Christophe Leroy wrote:
>
>
> On 30/10/2019 at 08:31, Russell Currey wrote:
> > v4 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198268.html
> > v3 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198023.html
> >
> > Changes since v4:
> > [1/5]: Addressed review comments from Michael Ellerman (thanks!)
> > [4/5]: make ARCH_HAS_STRICT_MODULE_RWX depend on
> > ARCH_HAS_STRICT_KERNEL_RWX to simplify things and avoid
> > STRICT_MODULE_RWX being *on by default* in cases where
> > STRICT_KERNEL_RWX is *unavailable*
> > [5/5]: split skiroot_defconfig changes out into its own patch
> >
> > The whole Kconfig situation is really weird and confusing, I believe the
> > correct resolution is to change arch/Kconfig but the consequences are so
> > minor that I don't think it's worth it, especially given that I expect
> > powerpc to have mandatory strict RWX Soon(tm).
>
> I'm not sure strict RWX can be made mandatory due to the impact it has on
> some subarches:
> - On the 8xx, unless all areas are 8Mbytes aligned, there is a significant
> overhead on TLB misses. And aligning everything to 8M is a waste of RAM
> which is not acceptable on systems having very little RAM.
> - On hash book3s32, we are able to map the kernel BATs. With a few alignment
> constraints, we are able to provide STRICT_KERNEL_RWX. But we are unable to
> provide exec protection on page granularity. Only on 256Mbytes segments. So
> for modules, we have to have the vmspace X. It is also not possible to have
> a kernel area RO. Only user areas can be made RO.
As I understand it, the idea was for it to be mandatory (or at least
default-on) only for the subarches where it wasn't totally insane to
accomplish. :) (I'm not familiar with all the details on the subarchs,
but it sounded like the more modern systems would be the targets for
this?)
--
Kees Cook
* Re: [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc
2019-10-30 18:30 ` Kees Cook
@ 2019-10-30 19:28 ` Christophe Leroy
0 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2019-10-30 19:28 UTC (permalink / raw)
To: Kees Cook
Cc: Russell Currey, linuxppc-dev, joel, mpe, ajd, dja, npiggin,
kernel-hardening
On 30/10/2019 at 19:30, Kees Cook wrote:
> On Wed, Oct 30, 2019 at 09:58:19AM +0100, Christophe Leroy wrote:
>>
>>
>> On 30/10/2019 at 08:31, Russell Currey wrote:
>>> v4 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198268.html
>>> v3 cover letter: https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198023.html
>>>
>>> Changes since v4:
>>> [1/5]: Addressed review comments from Michael Ellerman (thanks!)
>>> [4/5]: make ARCH_HAS_STRICT_MODULE_RWX depend on
>>> ARCH_HAS_STRICT_KERNEL_RWX to simplify things and avoid
>>> STRICT_MODULE_RWX being *on by default* in cases where
>>> STRICT_KERNEL_RWX is *unavailable*
>>> [5/5]: split skiroot_defconfig changes out into its own patch
>>>
>>> The whole Kconfig situation is really weird and confusing, I believe the
>>> correct resolution is to change arch/Kconfig but the consequences are so
>>> minor that I don't think it's worth it, especially given that I expect
>>> powerpc to have mandatory strict RWX Soon(tm).
>>
>> I'm not sure strict RWX can be made mandatory due to the impact it has on
>> some subarches:
>> - On the 8xx, unless all areas are 8Mbytes aligned, there is a significant
>> overhead on TLB misses. And aligning everything to 8M is a waste of RAM
>> which is not acceptable on systems having very little RAM.
>> - On hash book3s32, we are able to map the kernel BATs. With a few alignment
>> constraints, we are able to provide STRICT_KERNEL_RWX. But we are unable to
>> provide exec protection on page granularity. Only on 256Mbytes segments. So
>> for modules, we have to have the vmspace X. It is also not possible to have
>> a kernel area RO. Only user areas can be made RO.
>
> As I understand it, the idea was for it to be mandatory (or at least
> default-on) only for the subarches where it wasn't totally insane to
> accomplish. :) (I'm not familiar with all the details on the subarchs,
> but it sounded like the more modern systems would be the targets for
> this?)
>
Yes, I guess so.
I was just worried by "I expect powerpc to have mandatory strict RWX".
By the way, we have an open issue, #223, 'Make strict kernel RWX the
default on 64-bit', so no worries, as 32-bit is set aside for now.
Christophe
* Re: [PATCH v5 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
2019-10-30 7:31 ` [PATCH v5 5/5] powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig Russell Currey
@ 2019-10-31 0:05 ` Joel Stanley
0 siblings, 0 replies; 16+ messages in thread
From: Joel Stanley @ 2019-10-31 0:05 UTC (permalink / raw)
To: Russell Currey
Cc: linuxppc-dev, Christophe LEROY, Michael Ellerman, ajd,
Daniel Axtens, Nicholas Piggin, kernel-hardening
On Wed, 30 Oct 2019 at 07:31, Russell Currey <ruscur@russell.cc> wrote:
>
> skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
> enabled, and if you want memory protection for kernel text you'd want it
> for modules too, so enable STRICT_MODULE_RWX there.
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Joel Stanley <joel@jms.id.au>
> ---
> arch/powerpc/configs/skiroot_defconfig | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
> index 1253482a67c0..719d899081b3 100644
> --- a/arch/powerpc/configs/skiroot_defconfig
> +++ b/arch/powerpc/configs/skiroot_defconfig
> @@ -31,6 +31,7 @@ CONFIG_PERF_EVENTS=y
> CONFIG_SLAB_FREELIST_HARDENED=y
> CONFIG_JUMP_LABEL=y
> CONFIG_STRICT_KERNEL_RWX=y
> +CONFIG_STRICT_MODULE_RWX=y
> CONFIG_MODULES=y
> CONFIG_MODULE_UNLOAD=y
> CONFIG_MODULE_SIG=y
> --
> 2.23.0
>
* Re: [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc
2019-10-30 8:58 ` [PATCH v5 0/5] Implement STRICT_MODULE_RWX for powerpc Christophe Leroy
2019-10-30 18:30 ` Kees Cook
@ 2019-10-31 0:09 ` Russell Currey
1 sibling, 0 replies; 16+ messages in thread
From: Russell Currey @ 2019-10-31 0:09 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev
Cc: joel, mpe, ajd, dja, npiggin, kernel-hardening
On Wed, 2019-10-30 at 09:58 +0100, Christophe Leroy wrote:
>
> Le 30/10/2019 à 08:31, Russell Currey a écrit :
> > v4 cover letter:
> > https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198268.html
> > v3 cover letter:
> > https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-October/198023.html
> >
> > Changes since v4:
> > [1/5]: Addressed review comments from Michael Ellerman (thanks!)
> > [4/5]: make ARCH_HAS_STRICT_MODULE_RWX depend on
> > ARCH_HAS_STRICT_KERNEL_RWX to simplify things and avoid
> > STRICT_MODULE_RWX being *on by default* in cases where
> > STRICT_KERNEL_RWX is *unavailable*
> > [5/5]: split skiroot_defconfig changes out into its own patch
> >
> > The whole Kconfig situation is really weird and confusing, I believe
> > the correct resolution is to change arch/Kconfig but the consequences
> > are so minor that I don't think it's worth it, especially given that
> > I expect powerpc to have mandatory strict RWX Soon(tm).
>
> I'm not sure strict RWX can be made mandatory due to the impact it has
> on some subarches:
> - On the 8xx, unless all areas are 8Mbyte-aligned, there is a
> significant overhead on TLB misses, and aligning everything to 8M is a
> waste of RAM, which is not acceptable on systems with very little RAM.
> - On hash book3s32, we are able to map the kernel with BATs. With a few
> alignment constraints, we are able to provide STRICT_KERNEL_RWX, but we
> are unable to provide exec protection at page granularity, only on
> 256Mbyte segments. So for modules, we have to keep the vm space
> executable. It is also not possible to make a kernel area RO; only user
> areas can be made RO.
>
Yes, sorry, that was thoughtless of me; in my mind I was just thinking
about the platforms I primarily work on (book3s64).
> Christophe
>
> > Russell Currey (5):
> > powerpc/mm: Implement set_memory() routines
> > powerpc/kprobes: Mark newly allocated probes as RO
> > powerpc/mm/ptdump: debugfs handler for W+X checks at runtime
> > powerpc: Set ARCH_HAS_STRICT_MODULE_RWX
> > powerpc/configs: Enable STRICT_MODULE_RWX in skiroot_defconfig
> >
> > arch/powerpc/Kconfig | 2 +
> > arch/powerpc/Kconfig.debug | 6 +-
> > arch/powerpc/configs/skiroot_defconfig | 1 +
> > arch/powerpc/include/asm/set_memory.h | 32 +++++++++++
> > arch/powerpc/kernel/kprobes.c | 3 +
> > arch/powerpc/mm/Makefile | 1 +
> > arch/powerpc/mm/pageattr.c | 77
> > ++++++++++++++++++++++++++
> > arch/powerpc/mm/ptdump/ptdump.c | 21 ++++++-
> > 8 files changed, 140 insertions(+), 3 deletions(-)
> > create mode 100644 arch/powerpc/include/asm/set_memory.h
> > create mode 100644 arch/powerpc/mm/pageattr.c
> >
* Re: [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO
2019-10-30 7:31 ` [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO Russell Currey
@ 2019-11-01 14:23 ` Daniel Axtens
2019-11-02 10:45 ` Michael Ellerman
1 sibling, 0 replies; 16+ messages in thread
From: Daniel Axtens @ 2019-11-01 14:23 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, mpe, ajd, npiggin,
kernel-hardening
Russell Currey <ruscur@russell.cc> writes:
> With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
> W+X page at boot by default. This can be tested with
> CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
> kernel log during boot.
>
> powerpc doesn't implement its own alloc() for kprobes like other
> architectures do, but we couldn't immediately mark RO anyway since we do
> a memcpy to the page we allocate later. After that, nothing should be
> allowed to modify the page, and write permissions are removed well
> before the kprobe is armed.
>
> Thus mark newly allocated probes as read-only once it's safe to do so.
So if I've got the flow right here:
register[_aggr]_kprobe
-> prepare_kprobe
-> arch_prepare_kprobe
perform memcpy
mark as read-only, after which no-one writes to the memory
which all seems right to me.
I have been trying to check if optprobes need special handling: it looks
like the buffer for them lives in kernel text, not dynamically allocated
memory, so it should be protected by the usual Strict RWX protections
without special treatment here.
So lgtm.
Reviewed-by: Daniel Axtens <dja@axtens.net>
Regards,
Daniel
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
> arch/powerpc/kernel/kprobes.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 2d27ec4feee4..2610496de7c7 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -24,6 +24,7 @@
> #include <asm/sstep.h>
> #include <asm/sections.h>
> #include <linux/uaccess.h>
> +#include <linux/set_memory.h>
>
> DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> @@ -131,6 +132,8 @@ int arch_prepare_kprobe(struct kprobe *p)
> (unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
> }
>
> + set_memory_ro((unsigned long)p->ainsn.insn, 1);
> +
> p->ainsn.boostable = 0;
> return ret;
> }
> --
> 2.23.0
* Re: [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO
2019-10-30 7:31 ` [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO Russell Currey
2019-11-01 14:23 ` Daniel Axtens
@ 2019-11-02 10:45 ` Michael Ellerman
2019-12-05 23:47 ` Michael Ellerman
1 sibling, 1 reply; 16+ messages in thread
From: Michael Ellerman @ 2019-11-02 10:45 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev
Cc: Russell Currey, christophe.leroy, joel, ajd, dja, npiggin,
kernel-hardening
Russell Currey <ruscur@russell.cc> writes:
> With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
> W+X page at boot by default. This can be tested with
> CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
> kernel log during boot.
>
> powerpc doesn't implement its own alloc() for kprobes like other
> architectures do, but we couldn't immediately mark RO anyway since we do
> a memcpy to the page we allocate later. After that, nothing should be
> allowed to modify the page, and write permissions are removed well
> before the kprobe is armed.
>
> Thus mark newly allocated probes as read-only once it's safe to do so.
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
> arch/powerpc/kernel/kprobes.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 2d27ec4feee4..2610496de7c7 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -24,6 +24,7 @@
> #include <asm/sstep.h>
> #include <asm/sections.h>
> #include <linux/uaccess.h>
> +#include <linux/set_memory.h>
>
> DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> @@ -131,6 +132,8 @@ int arch_prepare_kprobe(struct kprobe *p)
> (unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
> }
>
> + set_memory_ro((unsigned long)p->ainsn.insn, 1);
> +
That comes from:
p->ainsn.insn = get_insn_slot();
Which ends up in __get_insn_slot() I think. And that looks very much
like it's going to hand out multiple slots per page, which isn't going
to work because you've just marked the whole page RO.
So I would expect this to crash on the 2nd kprobe that's installed. Have
you tested it somehow?
I think this code should just use patch_instruction() rather than
memcpy().
cheers
* Re: [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO
2019-11-02 10:45 ` Michael Ellerman
@ 2019-12-05 23:47 ` Michael Ellerman
2019-12-12 6:43 ` Russell Currey
0 siblings, 1 reply; 16+ messages in thread
From: Michael Ellerman @ 2019-12-05 23:47 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev; +Cc: ajd, kernel-hardening, npiggin, joel, dja
Michael Ellerman <mpe@ellerman.id.au> writes:
> Russell Currey <ruscur@russell.cc> writes:
>> With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
>> W+X page at boot by default. This can be tested with
>> CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
>> kernel log during boot.
>>
>> powerpc doesn't implement its own alloc() for kprobes like other
>> architectures do, but we couldn't immediately mark RO anyway since we do
>> a memcpy to the page we allocate later. After that, nothing should be
>> allowed to modify the page, and write permissions are removed well
>> before the kprobe is armed.
>>
>> Thus mark newly allocated probes as read-only once it's safe to do so.
>>
>> Signed-off-by: Russell Currey <ruscur@russell.cc>
>> ---
>> arch/powerpc/kernel/kprobes.c | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
>> index 2d27ec4feee4..2610496de7c7 100644
>> --- a/arch/powerpc/kernel/kprobes.c
>> +++ b/arch/powerpc/kernel/kprobes.c
>> @@ -24,6 +24,7 @@
>> #include <asm/sstep.h>
>> #include <asm/sections.h>
>> #include <linux/uaccess.h>
>> +#include <linux/set_memory.h>
>>
>> DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
>> DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>> @@ -131,6 +132,8 @@ int arch_prepare_kprobe(struct kprobe *p)
>> (unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
>> }
>>
>> + set_memory_ro((unsigned long)p->ainsn.insn, 1);
>> +
>
> That comes from:
> p->ainsn.insn = get_insn_slot();
>
>
> Which ends up in __get_insn_slot() I think. And that looks very much
> like it's going to hand out multiple slots per page, which isn't going
> to work because you've just marked the whole page RO.
>
> So I would expect this to crash on the 2nd kprobe that's installed. Have
> you tested it somehow?
I'm not sure if this is the issue I was talking about, but it doesn't
survive ftracetest:
[ 1139.576047] ------------[ cut here ]------------
[ 1139.576322] kernel BUG at mm/memory.c:2036!
cpu 0x1f: Vector: 700 (Program Check) at [c000001fd6c675d0]
pc: c00000000035d018: apply_to_page_range+0x318/0x610
lr: c0000000000900bc: change_memory_attr+0x4c/0x70
sp: c000001fd6c67860
msr: 9000000000029033
current = 0xc000001fa4a47880
paca = 0xc000001ffffe5c80 irqmask: 0x03 irq_happened: 0x01
pid = 7168, comm = ftracetest
kernel BUG at mm/memory.c:2036!
Linux version 5.4.0-gcc-8.2.0-11694-gf1f9aa266811 (michael@Raptor-2.ozlabs.ibm.com) (gcc version 8.2.0 (crosstool-NG 1.24.0-rc1.16-9627a04)) #1384 SMP Thu Dec 5 22:11:09 AEDT 2019
enter ? for help
[c000001fd6c67940] c0000000000900bc change_memory_attr+0x4c/0x70
[c000001fd6c67970] c000000000053c48 arch_prepare_kprobe+0xb8/0x120
[c000001fd6c679e0] c00000000022f718 register_kprobe+0x608/0x790
[c000001fd6c67a40] c00000000022fc50 register_kretprobe+0x230/0x350
[c000001fd6c67a80] c0000000002849b4 __register_trace_kprobe+0xf4/0x1a0
[c000001fd6c67af0] c000000000285b18 trace_kprobe_create+0x738/0xf70
[c000001fd6c67c30] c000000000286378 create_or_delete_trace_kprobe+0x28/0x70
[c000001fd6c67c50] c00000000025f024 trace_run_command+0xc4/0xe0
[c000001fd6c67ca0] c00000000025f128 trace_parse_run_command+0xe8/0x230
[c000001fd6c67d40] c0000000002845d0 probes_write+0x20/0x40
[c000001fd6c67d60] c0000000003eef4c __vfs_write+0x3c/0x70
[c000001fd6c67d80] c0000000003f26a0 vfs_write+0xd0/0x200
[c000001fd6c67dd0] c0000000003f2a3c ksys_write+0x7c/0x140
[c000001fd6c67e20] c00000000000b9e0 system_call+0x5c/0x68
--- Exception: c01 (System Call) at 00007fff8f06e420
SP (7ffff93d6830) is in userspace
1f:mon> client_loop: send disconnect: Broken pipe
Sorry I didn't get any more info on the crash, I lost the console and
then some CI bot stole the machine 8)
You should be able to reproduce just by running ftracetest.
cheers
* Re: [PATCH v5 2/5] powerpc/kprobes: Mark newly allocated probes as RO
2019-12-05 23:47 ` Michael Ellerman
@ 2019-12-12 6:43 ` Russell Currey
0 siblings, 0 replies; 16+ messages in thread
From: Russell Currey @ 2019-12-12 6:43 UTC (permalink / raw)
To: Michael Ellerman, linuxppc-dev; +Cc: ajd, kernel-hardening, npiggin, joel, dja
On Fri, 2019-12-06 at 10:47 +1100, Michael Ellerman wrote:
> Michael Ellerman <mpe@ellerman.id.au> writes:
> > Russell Currey <ruscur@russell.cc> writes:
> > > With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will
> > > be one W+X page at boot by default. This can be tested with
> > > CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking
> > > the kernel log during boot.
> > >
> > > powerpc doesn't implement its own alloc() for kprobes like other
> > > architectures do, but we couldn't immediately mark RO anyway since
> > > we do a memcpy to the page we allocate later. After that, nothing
> > > should be allowed to modify the page, and write permissions are
> > > removed well before the kprobe is armed.
> > >
> > > Thus mark newly allocated probes as read-only once it's safe to
> > > do so.
> > >
> > > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > > ---
> > > arch/powerpc/kernel/kprobes.c | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > > index 2d27ec4feee4..2610496de7c7 100644
> > > --- a/arch/powerpc/kernel/kprobes.c
> > > +++ b/arch/powerpc/kernel/kprobes.c
> > > @@ -24,6 +24,7 @@
> > >  #include <asm/sstep.h>
> > >  #include <asm/sections.h>
> > >  #include <linux/uaccess.h>
> > > +#include <linux/set_memory.h>
> > >
> > >  DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> > >  DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> > > @@ -131,6 +132,8 @@ int arch_prepare_kprobe(struct kprobe *p)
> > >  		(unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
> > >  }
> > >
> > > +	set_memory_ro((unsigned long)p->ainsn.insn, 1);
> > > +
> >
> > That comes from:
> > p->ainsn.insn = get_insn_slot();
> >
> >
> > Which ends up in __get_insn_slot() I think. And that looks very much
> > like it's going to hand out multiple slots per page, which isn't
> > going to work because you've just marked the whole page RO.
> >
> > So I would expect this to crash on the 2nd kprobe that's installed.
> > Have you tested it somehow?
>
> I'm not sure if this is the issue I was talking about, but it doesn't
> survive ftracetest:
>
> [ 1139.576047] ------------[ cut here ]------------
> [ 1139.576322] kernel BUG at mm/memory.c:2036!
> cpu 0x1f: Vector: 700 (Program Check) at [c000001fd6c675d0]
> pc: c00000000035d018: apply_to_page_range+0x318/0x610
> lr: c0000000000900bc: change_memory_attr+0x4c/0x70
> sp: c000001fd6c67860
> msr: 9000000000029033
> current = 0xc000001fa4a47880
> paca = 0xc000001ffffe5c80 irqmask: 0x03 irq_happened: 0x01
> pid = 7168, comm = ftracetest
> kernel BUG at mm/memory.c:2036!
> Linux version 5.4.0-gcc-8.2.0-11694-gf1f9aa266811 (michael@Raptor-2.ozlabs.ibm.com) (gcc version 8.2.0 (crosstool-NG 1.24.0-rc1.16-9627a04)) #1384 SMP Thu Dec 5 22:11:09 AEDT 2019
> enter ? for help
> [c000001fd6c67940] c0000000000900bc change_memory_attr+0x4c/0x70
> [c000001fd6c67970] c000000000053c48 arch_prepare_kprobe+0xb8/0x120
> [c000001fd6c679e0] c00000000022f718 register_kprobe+0x608/0x790
> [c000001fd6c67a40] c00000000022fc50 register_kretprobe+0x230/0x350
> [c000001fd6c67a80] c0000000002849b4 __register_trace_kprobe+0xf4/0x1a0
> [c000001fd6c67af0] c000000000285b18 trace_kprobe_create+0x738/0xf70
> [c000001fd6c67c30] c000000000286378 create_or_delete_trace_kprobe+0x28/0x70
> [c000001fd6c67c50] c00000000025f024 trace_run_command+0xc4/0xe0
> [c000001fd6c67ca0] c00000000025f128 trace_parse_run_command+0xe8/0x230
> [c000001fd6c67d40] c0000000002845d0 probes_write+0x20/0x40
> [c000001fd6c67d60] c0000000003eef4c __vfs_write+0x3c/0x70
> [c000001fd6c67d80] c0000000003f26a0 vfs_write+0xd0/0x200
> [c000001fd6c67dd0] c0000000003f2a3c ksys_write+0x7c/0x140
> [c000001fd6c67e20] c00000000000b9e0 system_call+0x5c/0x68
> --- Exception: c01 (System Call) at 00007fff8f06e420
> SP (7ffff93d6830) is in userspace
> 1f:mon> client_loop: send disconnect: Broken pipe
>
>
> Sorry I didn't get any more info on the crash, I lost the console and
> then some CI bot stole the machine 8)
>
> You should be able to reproduce just by running ftracetest.
For the record, the test that blew it up was
test.d/kprobe/probepoint.tc. The crash goes away when the memcpy() is
replaced with patch_instruction().
>
> cheers
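For reference, the change Russell describes might look roughly like the
sketch below: the plain memcpy() into the insn slot is replaced by
patch_instruction(), which writes through a temporary writable alias
and so keeps working once the slot's page is read-only. This is a
hypothetical, untested illustration; the surrounding diff context is
approximated rather than copied from the v5 series:

```diff
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ int arch_prepare_kprobe(struct kprobe *p)
 	if (!ret) {
-		memcpy(p->ainsn.insn, p->addr,
-		       MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+		patch_instruction(p->ainsn.insn, *p->addr);
 		p->opcode = *p->addr;
 	}
```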