kernel-hardening.lists.openwall.com archive mirror
* [PATCH 0/7] Kernel Userspace Protection for radix
@ 2019-02-21  9:35 Russell Currey
  2019-02-21  9:35 ` [PATCH 1/7] powerpc: Add framework for Kernel Userspace Protection Russell Currey
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:35 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mpe, npiggin, christophe.leroy, kernel-hardening, Russell Currey

The first three patches of this series are from Christophe's work and are
the bare minimum framework needed to implement support for radix.

In patch 3, I have removed my implementation of the 64-bit exception
handling code from Christophe's patch, since we don't have an answer for
making nested exceptions work yet.  This is mentioned in the final KUAP
patch.  Regardless, this is still a significant security improvement
and greatly narrows the attack surface.

Here are the patches you will need for this series to work:

http://patchwork.ozlabs.org/patch/1045215/
http://patchwork.ozlabs.org/patch/1045049/
http://patchwork.ozlabs.org/patch/1038568/

(or subsequent revisions, which the latter two will need)

I wouldn't expect this series to be merged without those fixes.

Thanks to Christophe for his great work and to Michael Ellerman for a
ton of feedback as I've worked on this.

Christophe Leroy (3):
  powerpc: Add framework for Kernel Userspace Protection
  powerpc: Add skeleton for Kernel Userspace Execution Prevention
  powerpc/mm: Add a framework for Kernel Userspace Access Protection

Russell Currey (4):
  powerpc/64: Setup KUP on secondary CPUs
  powerpc/mm/radix: Use KUEP API for Radix MMU
  powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
  powerpc/64s: Implement KUAP for Radix MMU

 .../admin-guide/kernel-parameters.txt         |  4 +-
 .../powerpc/include/asm/book3s/64/kup-radix.h | 36 ++++++++++++++++
 arch/powerpc/include/asm/exception-64e.h      |  3 ++
 arch/powerpc/include/asm/exception-64s.h      |  3 ++
 arch/powerpc/include/asm/futex.h              |  4 ++
 arch/powerpc/include/asm/kup.h                | 42 +++++++++++++++++++
 arch/powerpc/include/asm/mmu.h                |  9 +++-
 arch/powerpc/include/asm/paca.h               |  3 ++
 arch/powerpc/include/asm/processor.h          |  3 ++
 arch/powerpc/include/asm/ptrace.h             |  3 ++
 arch/powerpc/include/asm/reg.h                |  1 +
 arch/powerpc/include/asm/uaccess.h            | 38 +++++++++++++----
 arch/powerpc/kernel/asm-offsets.c             |  7 ++++
 arch/powerpc/kernel/entry_32.S                |  8 +++-
 arch/powerpc/kernel/process.c                 |  3 ++
 arch/powerpc/kernel/setup_64.c                | 10 +++++
 arch/powerpc/lib/checksum_wrappers.c          |  4 ++
 arch/powerpc/lib/code-patching.c              |  4 +-
 arch/powerpc/mm/fault.c                       | 20 ++++++---
 arch/powerpc/mm/init-common.c                 | 26 ++++++++++++
 arch/powerpc/mm/init_32.c                     |  3 ++
 arch/powerpc/mm/pgtable-radix.c               | 28 +++++++++++--
 arch/powerpc/mm/pkeys.c                       |  7 +++-
 arch/powerpc/platforms/Kconfig.cputype        | 26 ++++++++++++
 24 files changed, 271 insertions(+), 24 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h
 create mode 100644 arch/powerpc/include/asm/kup.h

-- 
2.20.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 1/7] powerpc: Add framework for Kernel Userspace Protection
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
@ 2019-02-21  9:35 ` Russell Currey
  2019-02-21  9:35 ` [PATCH 2/7] powerpc: Add skeleton for Kernel Userspace Execution Prevention Russell Currey
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:35 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mpe, npiggin, christophe.leroy, kernel-hardening, Russell Currey

From: Christophe Leroy <christophe.leroy@c-s.fr>

This patch adds a skeleton for Kernel Userspace Protection
functionalities like Kernel Userspace Access Protection and
Kernel Userspace Execution Prevention.

The subsequent implementation of KUAP for radix makes use of an MMU
feature in order to patch out assembly when KUAP is disabled or
unsupported.  This won't work unless there's an entry point for
KUP support before the feature magic happens, so for PPC64
setup_kup() is called early in setup.

On PPC32, feature fixups are done too early to allow the same.

Suggested-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/kup.h | 11 +++++++++++
 arch/powerpc/kernel/setup_64.c |  7 +++++++
 arch/powerpc/mm/init-common.c  |  5 +++++
 arch/powerpc/mm/init_32.c      |  3 +++
 4 files changed, 26 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kup.h

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
new file mode 100644
index 000000000000..7a88b8b9b54d
--- /dev/null
+++ b/arch/powerpc/include/asm/kup.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_H_
+#define _ASM_POWERPC_KUP_H_
+
+#ifndef __ASSEMBLY__
+
+void setup_kup(void);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 236c1151a3a7..771f280a6bf6 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -68,6 +68,7 @@
 #include <asm/cputhreads.h>
 #include <asm/hw_irq.h>
 #include <asm/feature-fixups.h>
+#include <asm/kup.h>
 
 #include "setup.h"
 
@@ -331,6 +332,12 @@ void __init early_setup(unsigned long dt_ptr)
 	 */
 	configure_exceptions();
 
+	/*
+	 * Configure Kernel Userspace Protection. This needs to happen before
+	 * feature fixups for platforms that implement this using features.
+	 */
+	setup_kup();
+
 	/* Apply all the dynamic patching */
 	apply_feature_fixups();
 	setup_feature_keys();
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 1e6910eb70ed..36d28e872289 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -24,6 +24,11 @@
 #include <linux/string.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
+#include <asm/kup.h>
+
+void __init setup_kup(void)
+{
+}
 
 #define CTOR(shift) static void ctor_##shift(void *addr) \
 {							\
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 3e59e5d64b01..93cfa8cf015d 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -45,6 +45,7 @@
 #include <asm/tlb.h>
 #include <asm/sections.h>
 #include <asm/hugetlb.h>
+#include <asm/kup.h>
 
 #include "mmu_decl.h"
 
@@ -182,6 +183,8 @@ void __init MMU_init(void)
 	btext_unmap();
 #endif
 
+	setup_kup();
+
 	/* Shortly after that, the entire linear mapping will be available */
 	memblock_set_current_limit(lowmem_end_addr);
 }
-- 
2.20.1


* [PATCH 2/7] powerpc: Add skeleton for Kernel Userspace Execution Prevention
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
  2019-02-21  9:35 ` [PATCH 1/7] powerpc: Add framework for Kernel Userspace Protection Russell Currey
@ 2019-02-21  9:35 ` Russell Currey
  2019-02-21  9:35 ` [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection Russell Currey
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:35 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: mpe, npiggin, christophe.leroy, kernel-hardening

From: Christophe Leroy <christophe.leroy@c-s.fr>

This patch adds a skeleton for Kernel Userspace Execution Prevention.

Subarches implementing it then have to define CONFIG_PPC_HAVE_KUEP
and provide a setup_kuep() function.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 Documentation/admin-guide/kernel-parameters.txt |  2 +-
 arch/powerpc/include/asm/kup.h                  |  6 ++++++
 arch/powerpc/mm/fault.c                         |  3 ++-
 arch/powerpc/mm/init-common.c                   | 11 +++++++++++
 arch/powerpc/platforms/Kconfig.cputype          | 12 ++++++++++++
 5 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 858b6c0b9a15..8448a56a9eb9 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2812,7 +2812,7 @@
 			Disable SMAP (Supervisor Mode Access Prevention)
 			even if it is supported by processor.
 
-	nosmep		[X86]
+	nosmep		[X86,PPC]
 			Disable SMEP (Supervisor Mode Execution Prevention)
 			even if it is supported by processor.
 
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 7a88b8b9b54d..af4b5f854ca4 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -6,6 +6,12 @@
 
 void setup_kup(void);
 
+#ifdef CONFIG_PPC_KUEP
+void setup_kuep(bool disabled);
+#else
+static inline void setup_kuep(bool disabled) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 887f11bcf330..ad65e336721a 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -230,8 +230,9 @@ static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
 	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
 				      DSISR_PROTFAULT))) {
 		printk_ratelimited(KERN_CRIT "kernel tried to execute"
-				   " exec-protected page (%lx) -"
+				   " %s page (%lx) -"
 				   "exploit attempt? (uid: %d)\n",
+				   address >= TASK_SIZE ? "exec-protected" : "user",
 				   address, from_kuid(&init_user_ns,
 						      current_uid()));
 	}
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 36d28e872289..83f95a5565d6 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -26,8 +26,19 @@
 #include <asm/pgtable.h>
 #include <asm/kup.h>
 
+static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
+
+static int __init parse_nosmep(char *p)
+{
+	disable_kuep = true;
+	pr_warn("Disabling Kernel Userspace Execution Prevention\n");
+	return 0;
+}
+early_param("nosmep", parse_nosmep);
+
 void __init setup_kup(void)
 {
+	setup_kuep(disable_kuep);
 }
 
 #define CTOR(shift) static void ctor_##shift(void *addr) \
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 8c7464c3f27f..410c3443652c 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -339,6 +339,18 @@ config PPC_RADIX_MMU_DEFAULT
 
 	  If you're unsure, say Y.
 
+config PPC_HAVE_KUEP
+	bool
+
+config PPC_KUEP
+	bool "Kernel Userspace Execution Prevention"
+	depends on PPC_HAVE_KUEP
+	default y
+	help
+	  Enable support for Kernel Userspace Execution Prevention (KUEP)
+
+	  If you're unsure, say Y.
+
 config ARCH_ENABLE_HUGEPAGE_MIGRATION
 	def_bool y
 	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
-- 
2.20.1


* [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
  2019-02-21  9:35 ` [PATCH 1/7] powerpc: Add framework for Kernel Userspace Protection Russell Currey
  2019-02-21  9:35 ` [PATCH 2/7] powerpc: Add skeleton for Kernel Userspace Execution Prevention Russell Currey
@ 2019-02-21  9:35 ` Russell Currey
  2019-02-21 10:46   ` Christophe Leroy
  2019-02-21 12:56   ` kbuild test robot
  2019-02-21  9:35 ` [PATCH 4/7] powerpc/64: Setup KUP on secondary CPUs Russell Currey
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:35 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mpe, npiggin, christophe.leroy, kernel-hardening, Russell Currey

From: Christophe Leroy <christophe.leroy@c-s.fr>

This patch implements a framework for Kernel Userspace Access
Protection.

Subarches will then have the possibility to provide their own
implementation by defining setup_kuap() and lock/unlock_user_access().
Some platforms will need to know the area accessed and whether it is
accessed for read, write or both. Therefore, the source, destination and
size are handed over to the two functions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[ruscur: remove 64-bit exception handling code]
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 .../admin-guide/kernel-parameters.txt         |  2 +-
 arch/powerpc/include/asm/exception-64e.h      |  3 ++
 arch/powerpc/include/asm/exception-64s.h      |  3 ++
 arch/powerpc/include/asm/futex.h              |  4 ++
 arch/powerpc/include/asm/kup.h                | 21 ++++++++++
 arch/powerpc/include/asm/paca.h               |  3 ++
 arch/powerpc/include/asm/processor.h          |  3 ++
 arch/powerpc/include/asm/ptrace.h             |  3 ++
 arch/powerpc/include/asm/uaccess.h            | 38 +++++++++++++++----
 arch/powerpc/kernel/asm-offsets.c             |  7 ++++
 arch/powerpc/kernel/entry_32.S                |  8 +++-
 arch/powerpc/kernel/process.c                 |  3 ++
 arch/powerpc/lib/checksum_wrappers.c          |  4 ++
 arch/powerpc/mm/fault.c                       | 17 +++++++--
 arch/powerpc/mm/init-common.c                 | 10 +++++
 arch/powerpc/platforms/Kconfig.cputype        | 12 ++++++
 16 files changed, 127 insertions(+), 14 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 8448a56a9eb9..0a76dbb39011 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2808,7 +2808,7 @@
 			noexec=on: enable non-executable mappings (default)
 			noexec=off: disable non-executable mappings
 
-	nosmap		[X86]
+	nosmap		[X86,PPC]
 			Disable SMAP (Supervisor Mode Access Prevention)
 			even if it is supported by processor.
 
diff --git a/arch/powerpc/include/asm/exception-64e.h b/arch/powerpc/include/asm/exception-64e.h
index 555e22d5e07f..bf25015834ee 100644
--- a/arch/powerpc/include/asm/exception-64e.h
+++ b/arch/powerpc/include/asm/exception-64e.h
@@ -215,5 +215,8 @@ exc_##label##_book3e:
 #define RFI_TO_USER							\
 	rfi
 
+#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)
+
 #endif /* _ASM_POWERPC_EXCEPTION_64E_H */
 
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 3b4767ed3ec5..8ae2d7855cfe 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -264,6 +264,9 @@ BEGIN_FTR_SECTION_NESTED(943)						\
 	std	ra,offset(r13);						\
 END_FTR_SECTION_NESTED(ftr,ftr,943)
 
+#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)
+
 #define EXCEPTION_PROLOG_0(area)					\
 	GET_PACA(r13);							\
 	std	r9,area+EX_R9(r13);	/* save r9 */			\
diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h
index 88b38b37c21b..f85facf3739b 100644
--- a/arch/powerpc/include/asm/futex.h
+++ b/arch/powerpc/include/asm/futex.h
@@ -35,6 +35,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
 {
 	int oldval = 0, ret;
 
+	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
 	pagefault_disable();
 
 	switch (op) {
@@ -62,6 +63,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
 	if (!ret)
 		*oval = oldval;
 
+	lock_user_access(uaddr, NULL, sizeof(*uaddr));
 	return ret;
 }
 
@@ -75,6 +77,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
 	if (!access_ok(uaddr, sizeof(u32)))
 		return -EFAULT;
 
+	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
         __asm__ __volatile__ (
         PPC_ATOMIC_ENTRY_BARRIER
 "1:     lwarx   %1,0,%3         # futex_atomic_cmpxchg_inatomic\n\
@@ -95,6 +98,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
         : "cc", "memory");
 
 	*uval = prev;
+	lock_user_access(uaddr, NULL, sizeof(*uaddr));
         return ret;
 }
 
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index af4b5f854ca4..2ac540fb488f 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -4,6 +4,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/pgtable.h>
+
 void setup_kup(void);
 
 #ifdef CONFIG_PPC_KUEP
@@ -12,6 +14,25 @@ void setup_kuep(bool disabled);
 static inline void setup_kuep(bool disabled) { }
 #endif
 
+#ifdef CONFIG_PPC_KUAP
+void setup_kuap(bool disabled);
+#else
+static inline void setup_kuap(bool disabled) { }
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size) { }
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
+#ifndef CONFIG_PPC_KUAP
+
+#ifdef CONFIG_PPC32
+#define LOCK_USER_ACCESS(val, sp, sr, srmax, curr)
+#define REST_USER_ACCESS(val, sp, sr, srmax, curr)
+#endif
+
+#endif
+
 #endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index e843bc5d1a0f..56236f6d8c89 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -169,6 +169,9 @@ struct paca_struct {
 	u64 saved_r1;			/* r1 save for RTAS calls or PM or EE=0 */
 	u64 saved_msr;			/* MSR saved here by enter_rtas */
 	u16 trap_save;			/* Used when bad stack is encountered */
+#ifdef CONFIG_PPC_KUAP
+	u8 user_access_allowed;		/* can the kernel access user memory? */
+#endif
 	u8 irq_soft_mask;		/* mask for irq soft masking */
 	u8 irq_happened;		/* irq happened while soft-disabled */
 	u8 io_sync;			/* writel() needs spin_unlock sync */
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index ee58526cb6c2..4a9a10e86828 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -250,6 +250,9 @@ struct thread_struct {
 #ifdef CONFIG_PPC32
 	void		*pgdir;		/* root of page-table tree */
 	unsigned long	ksp_limit;	/* if ksp <= ksp_limit stack overflow */
+#ifdef CONFIG_PPC_KUAP
+	unsigned long	kuap;		/* state of user access protection */
+#endif
 #endif
 	/* Debug Registers */
 	struct debug_reg debug;
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 0b8a735b6d85..0321ba5b3d12 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -55,6 +55,9 @@ struct pt_regs
 #ifdef CONFIG_PPC64
 	unsigned long ppr;
 	unsigned long __pad;	/* Maintain 16 byte interrupt stack alignment */
+#elif defined(CONFIG_PPC_KUAP)
+	unsigned long kuap;
+	unsigned long __pad[3];	/* Maintain 16 byte interrupt stack alignment */
 #endif
 };
 #endif
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index e3a731793ea2..9db6a4c7c058 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -6,6 +6,7 @@
 #include <asm/processor.h>
 #include <asm/page.h>
 #include <asm/extable.h>
+#include <asm/kup.h>
 
 /*
  * The fs value determines whether argument validity checking should be
@@ -141,6 +142,7 @@ extern long __put_user_bad(void);
 #define __put_user_size(x, ptr, size, retval)			\
 do {								\
 	retval = 0;						\
+	unlock_user_access(ptr, NULL, size);			\
 	switch (size) {						\
 	  case 1: __put_user_asm(x, ptr, retval, "stb"); break;	\
 	  case 2: __put_user_asm(x, ptr, retval, "sth"); break;	\
@@ -148,6 +150,7 @@ do {								\
 	  case 8: __put_user_asm2(x, ptr, retval); break;	\
 	  default: __put_user_bad();				\
 	}							\
+	lock_user_access(ptr, NULL, size);			\
 } while (0)
 
 #define __put_user_nocheck(x, ptr, size)			\
@@ -240,6 +243,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (size > sizeof(x))					\
 		(x) = __get_user_bad();				\
+	unlock_user_access(NULL, ptr, size);			\
 	switch (size) {						\
 	case 1: __get_user_asm(x, ptr, retval, "lbz"); break;	\
 	case 2: __get_user_asm(x, ptr, retval, "lhz"); break;	\
@@ -247,6 +251,7 @@ do {								\
 	case 8: __get_user_asm2(x, ptr, retval);  break;	\
 	default: (x) = __get_user_bad();			\
 	}							\
+	lock_user_access(NULL, ptr, size);			\
 } while (0)
 
 /*
@@ -306,15 +311,21 @@ extern unsigned long __copy_tofrom_user(void __user *to,
 static inline unsigned long
 raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
 {
-	return __copy_tofrom_user(to, from, n);
+	unsigned long ret;
+
+	unlock_user_access(to, from, n);
+	ret = __copy_tofrom_user(to, from, n);
+	lock_user_access(to, from, n);
+	return ret;
 }
 #endif /* __powerpc64__ */
 
 static inline unsigned long raw_copy_from_user(void *to,
 		const void __user *from, unsigned long n)
 {
+	unsigned long ret;
 	if (__builtin_constant_p(n) && (n <= 8)) {
-		unsigned long ret = 1;
+		ret = 1;
 
 		switch (n) {
 		case 1:
@@ -339,14 +350,18 @@ static inline unsigned long raw_copy_from_user(void *to,
 	}
 
 	barrier_nospec();
-	return __copy_tofrom_user((__force void __user *)to, from, n);
+	unlock_user_access(NULL, from, n);
+	ret = __copy_tofrom_user((__force void __user *)to, from, n);
+	lock_user_access(NULL, from, n);
+	return ret;
 }
 
 static inline unsigned long raw_copy_to_user(void __user *to,
 		const void *from, unsigned long n)
 {
+	unsigned long ret;
 	if (__builtin_constant_p(n) && (n <= 8)) {
-		unsigned long ret = 1;
+		ret = 1;
 
 		switch (n) {
 		case 1:
@@ -366,17 +381,24 @@ static inline unsigned long raw_copy_to_user(void __user *to,
 			return 0;
 	}
 
-	return __copy_tofrom_user(to, (__force const void __user *)from, n);
+	unlock_user_access(to, NULL, n);
+	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
+	lock_user_access(to, NULL, n);
+	return ret;
 }
 
 extern unsigned long __clear_user(void __user *addr, unsigned long size);
 
 static inline unsigned long clear_user(void __user *addr, unsigned long size)
 {
+	unsigned long ret = size;
 	might_fault();
-	if (likely(access_ok(addr, size)))
-		return __clear_user(addr, size);
-	return size;
+	if (likely(access_ok(addr, size))) {
+		unlock_user_access(addr, NULL, size);
+		ret = __clear_user(addr, size);
+		lock_user_access(addr, NULL, size);
+	}
+	return ret;
 }
 
 extern long strncpy_from_user(char *dst, const char __user *src, long count);
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 9ffc72ded73a..98e94299e728 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -93,6 +93,9 @@ int main(void)
 	OFFSET(THREAD_INFO, task_struct, stack);
 	DEFINE(THREAD_INFO_GAP, _ALIGN_UP(sizeof(struct thread_info), 16));
 	OFFSET(KSP_LIMIT, thread_struct, ksp_limit);
+#ifdef CONFIG_PPC_KUAP
+	OFFSET(KUAP, thread_struct, kuap);
+#endif
 #endif /* CONFIG_PPC64 */
 
 #ifdef CONFIG_LIVEPATCH
@@ -260,6 +263,7 @@ int main(void)
 	OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user);
 	OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime);
 	OFFSET(ACCOUNT_SYSTEM_TIME, paca_struct, accounting.stime);
+	OFFSET(PACA_USER_ACCESS_ALLOWED, paca_struct, user_access_allowed);
 	OFFSET(PACA_TRAP_SAVE, paca_struct, trap_save);
 	OFFSET(PACA_NAPSTATELOST, paca_struct, nap_state_lost);
 	OFFSET(PACA_SPRG_VDSO, paca_struct, sprg_vdso);
@@ -320,6 +324,9 @@ int main(void)
 	 */
 	STACK_PT_REGS_OFFSET(_DEAR, dar);
 	STACK_PT_REGS_OFFSET(_ESR, dsisr);
+#ifdef CONFIG_PPC_KUAP
+	STACK_PT_REGS_OFFSET(_KUAP, kuap);
+#endif
 #else /* CONFIG_PPC64 */
 	STACK_PT_REGS_OFFSET(SOFTE, softe);
 	STACK_PT_REGS_OFFSET(_PPR, ppr);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 0768dfd8a64e..f8f2268f8b12 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -36,6 +36,7 @@
 #include <asm/asm-405.h>
 #include <asm/feature-fixups.h>
 #include <asm/barrier.h>
+#include <asm/kup.h>
 
 /*
  * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -156,6 +157,7 @@ transfer_to_handler:
 	stw	r12,_CTR(r11)
 	stw	r2,_XER(r11)
 	mfspr	r12,SPRN_SPRG_THREAD
+	LOCK_USER_ACCESS(r2, r11, r9, r0, r12)
 	addi	r2,r12,-THREAD
 	tovirt(r2,r2)			/* set r2 to current */
 	beq	2f			/* if from user, fix up THREAD.regs */
@@ -442,6 +444,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
 	ACCOUNT_CPU_USER_EXIT(r4, r5, r7)
 3:
 #endif
+	REST_USER_ACCESS(r7, r1, r4, r5, r2)
 	lwz	r4,_LINK(r1)
 	lwz	r5,_CCR(r1)
 	mtlr	r4
@@ -739,7 +742,8 @@ fast_exception_return:
 	beq	1f			/* if not, we've got problems */
 #endif
 
-2:	REST_4GPRS(3, r11)
+2:	REST_USER_ACCESS(r3, r11, r4, r5, r2)
+	REST_4GPRS(3, r11)
 	lwz	r10,_CCR(r11)
 	REST_GPR(1, r11)
 	mtcr	r10
@@ -957,6 +961,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_47x)
 1:
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
+	REST_USER_ACCESS(r3, r1, r4, r5, r2)
+
 	lwz	r0,GPR0(r1)
 	lwz	r2,GPR2(r1)
 	REST_4GPRS(3, r1)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index ce393df243aa..995ef82a583d 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1771,6 +1771,9 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
 	regs->mq = 0;
 	regs->nip = start;
 	regs->msr = MSR_USER;
+#ifdef CONFIG_PPC_KUAP
+	regs->kuap = KUAP_START;
+#endif
 #else
 	if (!is_32bit_task()) {
 		unsigned long entry;
diff --git a/arch/powerpc/lib/checksum_wrappers.c b/arch/powerpc/lib/checksum_wrappers.c
index 890d4ddd91d6..48e0c91e0083 100644
--- a/arch/powerpc/lib/checksum_wrappers.c
+++ b/arch/powerpc/lib/checksum_wrappers.c
@@ -28,6 +28,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
 {
 	unsigned int csum;
 
+	unlock_user_access(NULL, src, len);
 	might_sleep();
 
 	*err_ptr = 0;
@@ -60,6 +61,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
 	}
 
 out:
+	lock_user_access(NULL, src, len);
 	return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_from_user);
@@ -69,6 +71,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
 {
 	unsigned int csum;
 
+	unlock_user_access(dst, NULL, len);
 	might_sleep();
 
 	*err_ptr = 0;
@@ -97,6 +100,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
 	}
 
 out:
+	lock_user_access(dst, NULL, len);
 	return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_to_user);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index ad65e336721a..5f4a5391f41e 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -223,9 +223,11 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr,
 }
 
 /* Is this a bad kernel fault ? */
-static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
+static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
 			     unsigned long address)
 {
+	int is_exec = TRAP(regs) == 0x400;
+
 	/* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
 	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
 				      DSISR_PROTFAULT))) {
@@ -236,7 +238,13 @@ static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
 				   address, from_kuid(&init_user_ns,
 						      current_uid()));
 	}
-	return is_exec || (address >= TASK_SIZE);
+	if (!is_exec && address < TASK_SIZE && (error_code & DSISR_PROTFAULT) &&
+	    !search_exception_tables(regs->nip))
+		printk_ratelimited(KERN_CRIT "Kernel attempted to access user"
+				   " page (%lx) - exploit attempt? (uid: %d)\n",
+				   address, from_kuid(&init_user_ns,
+						      current_uid()));
+	return is_exec || (address >= TASK_SIZE) || !search_exception_tables(regs->nip);
 }
 
 static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
@@ -456,9 +464,10 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	/*
 	 * The kernel should never take an execute fault nor should it
-	 * take a page fault to a kernel address.
+	 * take a page fault to a kernel address or a page fault to a user
+	 * address outside of dedicated places
 	 */
-	if (unlikely(!is_user && bad_kernel_fault(is_exec, error_code, address)))
+	if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address)))
 		return SIGSEGV;
 
 	/*
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 83f95a5565d6..ecaedfff9992 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -27,6 +27,7 @@
 #include <asm/kup.h>
 
 static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
+static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
 
 static int __init parse_nosmep(char *p)
 {
@@ -36,9 +37,18 @@ static int __init parse_nosmep(char *p)
 }
 early_param("nosmep", parse_nosmep);
 
+static int __init parse_nosmap(char *p)
+{
+	disable_kuap = true;
+	pr_warn("Disabling Kernel Userspace Access Protection\n");
+	return 0;
+}
+early_param("nosmap", parse_nosmap);
+
 void __init setup_kup(void)
 {
 	setup_kuep(disable_kuep);
+	setup_kuap(disable_kuap);
 }
 
 #define CTOR(shift) static void ctor_##shift(void *addr) \
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 410c3443652c..7fa5ddbdce12 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -351,6 +351,18 @@ config PPC_KUEP
 
 	  If you're unsure, say Y.
 
+config PPC_HAVE_KUAP
+	bool
+
+config PPC_KUAP
+	bool "Kernel Userspace Access Protection"
+	depends on PPC_HAVE_KUAP
+	default y
+	help
+	  Enable support for Kernel Userspace Access Protection (KUAP)
+
+	  If you're unsure, say Y.
+
 config ARCH_ENABLE_HUGEPAGE_MIGRATION
 	def_bool y
 	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
-- 
2.20.1


* [PATCH 4/7] powerpc/64: Setup KUP on secondary CPUs
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
                   ` (2 preceding siblings ...)
  2019-02-21  9:35 ` [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection Russell Currey
@ 2019-02-21  9:35 ` Russell Currey
  2019-02-21  9:35 ` [PATCH 5/7] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:35 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mpe, npiggin, christophe.leroy, kernel-hardening, Russell Currey

Some platforms (e.g. the Radix MMU) need per-CPU initialisation for KUP.

Any platforms that only want to do KUP initialisation once
globally can just check to see if they're running on the boot CPU, or
check if whatever setup they need has already been performed.

Note that this is only for 64-bit.

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/kernel/setup_64.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 771f280a6bf6..17921ee7f3a0 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -390,6 +390,9 @@ void early_setup_secondary(void)
 	/* Initialize the hash table or TLB handling */
 	early_init_mmu_secondary();
 
+	/* Perform any KUP setup that is per-cpu */
+	setup_kup();
+
 	/*
 	 * At this point, we can let interrupts switch to virtual mode
 	 * (the MMU has been setup), so adjust the MSR in the PACA to
-- 
2.20.1


* [PATCH 5/7] powerpc/mm/radix: Use KUEP API for Radix MMU
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
                   ` (3 preceding siblings ...)
  2019-02-21  9:35 ` [PATCH 4/7] powerpc/64: Setup KUP on secondary CPUs Russell Currey
@ 2019-02-21  9:35 ` Russell Currey
  2019-02-21  9:36 ` [PATCH 6/7] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() Russell Currey
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:35 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mpe, npiggin, christophe.leroy, kernel-hardening, Russell Currey

Execution protection already exists on radix; this patch just refactors
the radix init to provide the KUEP setup function instead.

Thus, the only functional change is that execution protection can now be
disabled.

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/mm/pgtable-radix.c        | 12 +++++++++---
 arch/powerpc/platforms/Kconfig.cputype |  1 +
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 931156069a81..224bcd4be5ae 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -535,8 +535,15 @@ static void radix_init_amor(void)
 	mtspr(SPRN_AMOR, (3ul << 62));
 }
 
-static void radix_init_iamr(void)
+#ifdef CONFIG_PPC_KUEP
+void __init setup_kuep(bool disabled)
 {
+	if (disabled || !early_radix_enabled())
+		return;
+
+	if (smp_processor_id() == boot_cpuid)
+		pr_info("Activating Kernel Userspace Execution Prevention\n");
+
 	/*
 	 * Radix always uses key0 of the IAMR to determine if an access is
 	 * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
@@ -544,6 +551,7 @@ static void radix_init_iamr(void)
 	 */
 	mtspr(SPRN_IAMR, (1ul << 62));
 }
+#endif
 
 void __init radix__early_init_mmu(void)
 {
@@ -605,7 +613,6 @@ void __init radix__early_init_mmu(void)
 
 	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 
-	radix_init_iamr();
 	radix_init_pgtable();
 	/* Switch to the guard PID before turning on MMU */
 	radix__switch_mmu_context(NULL, &init_mm);
@@ -627,7 +634,6 @@ void radix__early_init_mmu_secondary(void)
 		      __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
 		radix_init_amor();
 	}
-	radix_init_iamr();
 
 	radix__switch_mmu_context(NULL, &init_mm);
 	if (cpu_has_feature(CPU_FTR_HVMODE))
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 7fa5ddbdce12..25cc7d36b27d 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -320,6 +320,7 @@ config PPC_RADIX_MMU
 	bool "Radix MMU Support"
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
+	select PPC_HAVE_KUEP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.20.1


* [PATCH 6/7] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
                   ` (4 preceding siblings ...)
  2019-02-21  9:35 ` [PATCH 5/7] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
@ 2019-02-21  9:36 ` Russell Currey
  2019-02-21  9:36 ` [PATCH 7/7] powerpc/64s: Implement KUAP for Radix MMU Russell Currey
  2019-02-21 16:07 ` [PATCH 0/7] Kernel Userspace Protection for radix Kees Cook
  7 siblings, 0 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:36 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mpe, npiggin, christophe.leroy, kernel-hardening, Russell Currey

__patch_instruction() is called in early boot, and uses
__put_user_size(), which includes the KUAP lock and unlock calls.  Those
calls could either run too early, or, in the Radix case, force the use
of "early_" versions of functions just to safely handle this one case.

__put_user_asm() does not take those locks, and is thus safe to use both
in early boot and later on, since in this case it should only ever be
touching kernel memory.

__patch_instruction() was previously refactored to use __put_user_size()
in order to be able to return -EFAULT, which would also have allowed the
kernel to patch instructions in userspace, something that should never
happen.  The one functional change here is that, with KUAP turned on,
patching a userspace address now faults, which should never happen in
practice anyway.

A future enhancement could be to verify that the patch address is
actually allowed to be modified by the kernel.

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/lib/code-patching.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 506413a2c25e..42fdadac6587 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -26,9 +26,9 @@
 static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 			       unsigned int *patch_addr)
 {
-	int err;
+	int err = 0;
 
-	__put_user_size(instr, patch_addr, 4, err);
+	__put_user_asm(instr, patch_addr, err, "stw");
 	if (err)
 		return err;
 
-- 
2.20.1


* [PATCH 7/7] powerpc/64s: Implement KUAP for Radix MMU
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
                   ` (5 preceding siblings ...)
  2019-02-21  9:36 ` [PATCH 6/7] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() Russell Currey
@ 2019-02-21  9:36 ` Russell Currey
  2019-02-22  5:14   ` Nicholas Piggin
  2019-02-21 16:07 ` [PATCH 0/7] Kernel Userspace Protection for radix Kees Cook
  7 siblings, 1 reply; 17+ messages in thread
From: Russell Currey @ 2019-02-21  9:36 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mpe, npiggin, christophe.leroy, kernel-hardening, Russell Currey

Kernel Userspace Access Prevention utilises a feature of the Radix MMU
which disallows read and write access to userspace addresses.  With this
enabled, the kernel is prevented from accessing user data outside of
trusted paths that perform proper safety checks, such as
copy_{to/from}_user() and friends.

Userspace access is disabled from early boot and is only enabled when
performing an operation like copy_{to/from}_user().  The register that
controls this (AMR) does not prevent userspace from accessing other
userspace, so there is no need to save and restore when entering and
exiting userspace.

This feature has a slight performance impact, which I roughly measured
to be 3% slower in the worst case (performing 1GB of 1 byte
read()/write() syscalls), so it is gated behind the CONFIG_PPC_KUAP
option so that performance-critical builds can disable it.

This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y)
and performing the following:

  # (echo ACCESS_USERSPACE) > [debugfs]/provoke-crash/DIRECT

If KUAP is enabled, this should send a SIGSEGV to the thread.

A big limitation of the current implementation is that user access
remains unlocked if an exception is taken while user access is unlocked
(i.e. if an interrupt is taken during copy_to_user()).  This should be
resolved in the future, and is why the state is tracked in the PACA even
though nothing currently uses it.

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 .../powerpc/include/asm/book3s/64/kup-radix.h | 36 +++++++++++++++++++
 arch/powerpc/include/asm/kup.h                |  4 +++
 arch/powerpc/include/asm/mmu.h                |  9 ++++-
 arch/powerpc/include/asm/reg.h                |  1 +
 arch/powerpc/mm/pgtable-radix.c               | 16 +++++++++
 arch/powerpc/mm/pkeys.c                       |  7 ++--
 arch/powerpc/platforms/Kconfig.cputype        |  1 +
 7 files changed, 71 insertions(+), 3 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
new file mode 100644
index 000000000000..5cfdea954418
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_RADIX_H
+#define _ASM_POWERPC_KUP_RADIX_H
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_KUAP
+#include <asm/reg.h>
+/*
+ * We do have the ability to individually lock/unlock reads and writes rather
+ * than both at once, however it's a significant performance hit due to needing
+ * to do a read-modify-write, which adds a mfspr, which is slow.  As a result,
+ * locking/unlocking both at once is preferred.
+ */
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, 0);
+	isync();
+	get_paca()->user_access_allowed = 1;
+}
+
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, RADIX_AMR_LOCKED);
+	get_paca()->user_access_allowed = 0;
+}
+#endif /* CONFIG_PPC_KUAP */
+#endif /* __ASSEMBLY__ */
+#endif
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 2ac540fb488f..af583fd5a027 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -2,6 +2,10 @@
 #ifndef _ASM_POWERPC_KUP_H_
 #define _ASM_POWERPC_KUP_H_
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/kup-radix.h>
+#endif
+
 #ifndef __ASSEMBLY__
 
 #include <asm/pgtable.h>
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 25607604a7a5..ea703de9be9b 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -107,6 +107,10 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
 
+/* Supports KUAP (key 0 controlling userspace addresses) on radix
+ */
+#define MMU_FTR_RADIX_KUAP		ASM_CONST(0x80000000)
+
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
 	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -164,7 +168,10 @@ enum {
 #endif
 #ifdef CONFIG_PPC_RADIX_MMU
 		MMU_FTR_TYPE_RADIX |
-#endif
+#ifdef CONFIG_PPC_KUAP
+		MMU_FTR_RADIX_KUAP |
+#endif /* CONFIG_PPC_KUAP */
+#endif /* CONFIG_PPC_RADIX_MMU */
 		0,
 };
 
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 1c98ef1f2d5b..0e789e2c5bc3 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -246,6 +246,7 @@
 #define SPRN_DSCR	0x11
 #define SPRN_CFAR	0x1c	/* Come From Address Register */
 #define SPRN_AMR	0x1d	/* Authority Mask Register */
+#define   RADIX_AMR_LOCKED	0xC000000000000000UL /* Read & Write disabled */
 #define SPRN_UAMOR	0x9d	/* User Authority Mask Override Register */
 #define SPRN_AMOR	0x15d	/* Authority Mask Override Register */
 #define SPRN_ACOP	0x1F	/* Available Coprocessor Register */
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 224bcd4be5ae..b621cef4825e 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -29,6 +29,7 @@
 #include <asm/powernv.h>
 #include <asm/sections.h>
 #include <asm/trace.h>
+#include <asm/uaccess.h>
 
 #include <trace/events/thp.h>
 
@@ -553,6 +554,21 @@ void __init setup_kuep(bool disabled)
 }
 #endif
 
+#ifdef CONFIG_PPC_KUAP
+void __init setup_kuap(bool disabled)
+{
+	if (disabled || !early_radix_enabled())
+		return;
+
+	if (smp_processor_id() == boot_cpuid) {
+		pr_info("Activating Kernel Userspace Access Prevention\n");
+		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+	}
+
+	mtspr(SPRN_AMR, RADIX_AMR_LOCKED);
+}
+#endif
+
 void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 587807763737..2223f4b4b1bf 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -7,6 +7,7 @@
 
 #include <asm/mman.h>
 #include <asm/mmu_context.h>
+#include <asm/mmu.h>
 #include <asm/setup.h>
 #include <linux/pkeys.h>
 #include <linux/of_device.h>
@@ -267,7 +268,8 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 
 void thread_pkey_regs_save(struct thread_struct *thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	/*
@@ -281,7 +283,8 @@ void thread_pkey_regs_save(struct thread_struct *thread)
 void thread_pkey_regs_restore(struct thread_struct *new_thread,
 			      struct thread_struct *old_thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	if (old_thread->amr != new_thread->amr)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 25cc7d36b27d..67b2ed9bb9f3 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -321,6 +321,7 @@ config PPC_RADIX_MMU
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
 	select PPC_HAVE_KUEP
+	select PPC_HAVE_KUAP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.20.1


* Re: [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2019-02-21  9:35 ` [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection Russell Currey
@ 2019-02-21 10:46   ` Christophe Leroy
  2019-02-21 14:48     ` Mark Rutland
  2019-02-21 12:56   ` kbuild test robot
  1 sibling, 1 reply; 17+ messages in thread
From: Christophe Leroy @ 2019-02-21 10:46 UTC (permalink / raw)
  To: Russell Currey, linuxppc-dev; +Cc: mpe, npiggin, kernel-hardening



On 21/02/2019 at 10:35, Russell Currey wrote:
> From: Christophe Leroy <christophe.leroy@c-s.fr>
> 
> This patch implements a framework for Kernel Userspace Access
> Protection.
> 
> Subarches will then have the possibility to provide their own
> implementation by defining setup_kuap() and lock/unlock_user_access().
> 
> Some platforms will need to know the area accessed and whether it is
> accessed for read, write or both. Therefore the source, destination and
> size are handed over to the two functions.

Should we also consider adding user_access_begin() and the 3 other 
associated macros ?

See x86 for details.

Christophe

> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> [ruscur: remove 64-bit exception handling code]
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>   .../admin-guide/kernel-parameters.txt         |  2 +-
>   arch/powerpc/include/asm/exception-64e.h      |  3 ++
>   arch/powerpc/include/asm/exception-64s.h      |  3 ++
>   arch/powerpc/include/asm/futex.h              |  4 ++
>   arch/powerpc/include/asm/kup.h                | 21 ++++++++++
>   arch/powerpc/include/asm/paca.h               |  3 ++
>   arch/powerpc/include/asm/processor.h          |  3 ++
>   arch/powerpc/include/asm/ptrace.h             |  3 ++
>   arch/powerpc/include/asm/uaccess.h            | 38 +++++++++++++++----
>   arch/powerpc/kernel/asm-offsets.c             |  7 ++++
>   arch/powerpc/kernel/entry_32.S                |  8 +++-
>   arch/powerpc/kernel/process.c                 |  3 ++
>   arch/powerpc/lib/checksum_wrappers.c          |  4 ++
>   arch/powerpc/mm/fault.c                       | 17 +++++++--
>   arch/powerpc/mm/init-common.c                 | 10 +++++
>   arch/powerpc/platforms/Kconfig.cputype        | 12 ++++++
>   16 files changed, 127 insertions(+), 14 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 8448a56a9eb9..0a76dbb39011 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2808,7 +2808,7 @@
>   			noexec=on: enable non-executable mappings (default)
>   			noexec=off: disable non-executable mappings
>   
> -	nosmap		[X86]
> +	nosmap		[X86,PPC]
>   			Disable SMAP (Supervisor Mode Access Prevention)
>   			even if it is supported by processor.
>   
> diff --git a/arch/powerpc/include/asm/exception-64e.h b/arch/powerpc/include/asm/exception-64e.h
> index 555e22d5e07f..bf25015834ee 100644
> --- a/arch/powerpc/include/asm/exception-64e.h
> +++ b/arch/powerpc/include/asm/exception-64e.h
> @@ -215,5 +215,8 @@ exc_##label##_book3e:
>   #define RFI_TO_USER							\
>   	rfi
>   
> +#define UNLOCK_USER_ACCESS(reg)
> +#define LOCK_USER_ACCESS(reg)
> +
>   #endif /* _ASM_POWERPC_EXCEPTION_64E_H */
>   
> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> index 3b4767ed3ec5..8ae2d7855cfe 100644
> --- a/arch/powerpc/include/asm/exception-64s.h
> +++ b/arch/powerpc/include/asm/exception-64s.h
> @@ -264,6 +264,9 @@ BEGIN_FTR_SECTION_NESTED(943)						\
>   	std	ra,offset(r13);						\
>   END_FTR_SECTION_NESTED(ftr,ftr,943)
>   
> +#define UNLOCK_USER_ACCESS(reg)
> +#define LOCK_USER_ACCESS(reg)
> +
>   #define EXCEPTION_PROLOG_0(area)					\
>   	GET_PACA(r13);							\
>   	std	r9,area+EX_R9(r13);	/* save r9 */			\
> diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h
> index 88b38b37c21b..f85facf3739b 100644
> --- a/arch/powerpc/include/asm/futex.h
> +++ b/arch/powerpc/include/asm/futex.h
> @@ -35,6 +35,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
>   {
>   	int oldval = 0, ret;
>   
> +	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
>   	pagefault_disable();
>   
>   	switch (op) {
> @@ -62,6 +63,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
>   	if (!ret)
>   		*oval = oldval;
>   
> +	lock_user_access(uaddr, NULL, sizeof(*uaddr));
>   	return ret;
>   }
>   
> @@ -75,6 +77,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
>   	if (!access_ok(uaddr, sizeof(u32)))
>   		return -EFAULT;
>   
> +	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
>           __asm__ __volatile__ (
>           PPC_ATOMIC_ENTRY_BARRIER
>   "1:     lwarx   %1,0,%3         # futex_atomic_cmpxchg_inatomic\n\
> @@ -95,6 +98,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
>           : "cc", "memory");
>   
>   	*uval = prev;
> +	lock_user_access(uaddr, NULL, sizeof(*uaddr));
>           return ret;
>   }
>   
> diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
> index af4b5f854ca4..2ac540fb488f 100644
> --- a/arch/powerpc/include/asm/kup.h
> +++ b/arch/powerpc/include/asm/kup.h
> @@ -4,6 +4,8 @@
>   
>   #ifndef __ASSEMBLY__
>   
> +#include <asm/pgtable.h>
> +
>   void setup_kup(void);
>   
>   #ifdef CONFIG_PPC_KUEP
> @@ -12,6 +14,25 @@ void setup_kuep(bool disabled);
>   static inline void setup_kuep(bool disabled) { }
>   #endif
>   
> +#ifdef CONFIG_PPC_KUAP
> +void setup_kuap(bool disabled);
> +#else
> +static inline void setup_kuap(bool disabled) { }
> +static inline void unlock_user_access(void __user *to, const void __user *from,
> +				      unsigned long size) { }
> +static inline void lock_user_access(void __user *to, const void __user *from,
> +				    unsigned long size) { }
> +#endif
> +
>   #endif /* !__ASSEMBLY__ */
>   
> +#ifndef CONFIG_PPC_KUAP
> +
> +#ifdef CONFIG_PPC32
> +#define LOCK_USER_ACCESS(val, sp, sr, srmax, curr)
> +#define REST_USER_ACCESS(val, sp, sr, srmax, curr)
> +#endif
> +
> +#endif
> +
>   #endif /* _ASM_POWERPC_KUP_H_ */
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index e843bc5d1a0f..56236f6d8c89 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -169,6 +169,9 @@ struct paca_struct {
>   	u64 saved_r1;			/* r1 save for RTAS calls or PM or EE=0 */
>   	u64 saved_msr;			/* MSR saved here by enter_rtas */
>   	u16 trap_save;			/* Used when bad stack is encountered */
> +#ifdef CONFIG_PPC_KUAP
> +	u8 user_access_allowed;		/* can the kernel access user memory? */
> +#endif
>   	u8 irq_soft_mask;		/* mask for irq soft masking */
>   	u8 irq_happened;		/* irq happened while soft-disabled */
>   	u8 io_sync;			/* writel() needs spin_unlock sync */
> diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
> index ee58526cb6c2..4a9a10e86828 100644
> --- a/arch/powerpc/include/asm/processor.h
> +++ b/arch/powerpc/include/asm/processor.h
> @@ -250,6 +250,9 @@ struct thread_struct {
>   #ifdef CONFIG_PPC32
>   	void		*pgdir;		/* root of page-table tree */
>   	unsigned long	ksp_limit;	/* if ksp <= ksp_limit stack overflow */
> +#ifdef CONFIG_PPC_KUAP
> +	unsigned long	kuap;		/* state of user access protection */
> +#endif
>   #endif
>   	/* Debug Registers */
>   	struct debug_reg debug;
> diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
> index 0b8a735b6d85..0321ba5b3d12 100644
> --- a/arch/powerpc/include/asm/ptrace.h
> +++ b/arch/powerpc/include/asm/ptrace.h
> @@ -55,6 +55,9 @@ struct pt_regs
>   #ifdef CONFIG_PPC64
>   	unsigned long ppr;
>   	unsigned long __pad;	/* Maintain 16 byte interrupt stack alignment */
> +#elif defined(CONFIG_PPC_KUAP)
> +	unsigned long kuap;
> +	unsigned long __pad[3];	/* Maintain 16 byte interrupt stack alignment */
>   #endif
>   };
>   #endif
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index e3a731793ea2..9db6a4c7c058 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -6,6 +6,7 @@
>   #include <asm/processor.h>
>   #include <asm/page.h>
>   #include <asm/extable.h>
> +#include <asm/kup.h>
>   
>   /*
>    * The fs value determines whether argument validity checking should be
> @@ -141,6 +142,7 @@ extern long __put_user_bad(void);
>   #define __put_user_size(x, ptr, size, retval)			\
>   do {								\
>   	retval = 0;						\
> +	unlock_user_access(ptr, NULL, size);			\
>   	switch (size) {						\
>   	  case 1: __put_user_asm(x, ptr, retval, "stb"); break;	\
>   	  case 2: __put_user_asm(x, ptr, retval, "sth"); break;	\
> @@ -148,6 +150,7 @@ do {								\
>   	  case 8: __put_user_asm2(x, ptr, retval); break;	\
>   	  default: __put_user_bad();				\
>   	}							\
> +	lock_user_access(ptr, NULL, size);			\
>   } while (0)
>   
>   #define __put_user_nocheck(x, ptr, size)			\
> @@ -240,6 +243,7 @@ do {								\
>   	__chk_user_ptr(ptr);					\
>   	if (size > sizeof(x))					\
>   		(x) = __get_user_bad();				\
> +	unlock_user_access(NULL, ptr, size);			\
>   	switch (size) {						\
>   	case 1: __get_user_asm(x, ptr, retval, "lbz"); break;	\
>   	case 2: __get_user_asm(x, ptr, retval, "lhz"); break;	\
> @@ -247,6 +251,7 @@ do {								\
>   	case 8: __get_user_asm2(x, ptr, retval);  break;	\
>   	default: (x) = __get_user_bad();			\
>   	}							\
> +	lock_user_access(NULL, ptr, size);			\
>   } while (0)
>   
>   /*
> @@ -306,15 +311,21 @@ extern unsigned long __copy_tofrom_user(void __user *to,
>   static inline unsigned long
>   raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
>   {
> -	return __copy_tofrom_user(to, from, n);
> +	unsigned long ret;
> +
> +	unlock_user_access(to, from, n);
> +	ret = __copy_tofrom_user(to, from, n);
> +	lock_user_access(to, from, n);
> +	return ret;
>   }
>   #endif /* __powerpc64__ */
>   
>   static inline unsigned long raw_copy_from_user(void *to,
>   		const void __user *from, unsigned long n)
>   {
> +	unsigned long ret;
>   	if (__builtin_constant_p(n) && (n <= 8)) {
> -		unsigned long ret = 1;
> +		ret = 1;
>   
>   		switch (n) {
>   		case 1:
> @@ -339,14 +350,18 @@ static inline unsigned long raw_copy_from_user(void *to,
>   	}
>   
>   	barrier_nospec();
> -	return __copy_tofrom_user((__force void __user *)to, from, n);
> +	unlock_user_access(NULL, from, n);
> +	ret = __copy_tofrom_user((__force void __user *)to, from, n);
> +	lock_user_access(NULL, from, n);
> +	return ret;
>   }
>   
>   static inline unsigned long raw_copy_to_user(void __user *to,
>   		const void *from, unsigned long n)
>   {
> +	unsigned long ret;
>   	if (__builtin_constant_p(n) && (n <= 8)) {
> -		unsigned long ret = 1;
> +		ret = 1;
>   
>   		switch (n) {
>   		case 1:
> @@ -366,17 +381,24 @@ static inline unsigned long raw_copy_to_user(void __user *to,
>   			return 0;
>   	}
>   
> -	return __copy_tofrom_user(to, (__force const void __user *)from, n);
> +	unlock_user_access(to, NULL, n);
> +	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
> +	lock_user_access(to, NULL, n);
> +	return ret;
>   }
>   
>   extern unsigned long __clear_user(void __user *addr, unsigned long size);
>   
>   static inline unsigned long clear_user(void __user *addr, unsigned long size)
>   {
> +	unsigned long ret = size;
>   	might_fault();
> -	if (likely(access_ok(addr, size)))
> -		return __clear_user(addr, size);
> -	return size;
> +	if (likely(access_ok(addr, size))) {
> +		unlock_user_access(addr, NULL, size);
> +		ret = __clear_user(addr, size);
> +		lock_user_access(addr, NULL, size);
> +	}
> +	return ret;
>   }
>   
>   extern long strncpy_from_user(char *dst, const char __user *src, long count);
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index 9ffc72ded73a..98e94299e728 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -93,6 +93,9 @@ int main(void)
>   	OFFSET(THREAD_INFO, task_struct, stack);
>   	DEFINE(THREAD_INFO_GAP, _ALIGN_UP(sizeof(struct thread_info), 16));
>   	OFFSET(KSP_LIMIT, thread_struct, ksp_limit);
> +#ifdef CONFIG_PPC_KUAP
> +	OFFSET(KUAP, thread_struct, kuap);
> +#endif
>   #endif /* CONFIG_PPC64 */
>   
>   #ifdef CONFIG_LIVEPATCH
> @@ -260,6 +263,7 @@ int main(void)
>   	OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user);
>   	OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime);
>   	OFFSET(ACCOUNT_SYSTEM_TIME, paca_struct, accounting.stime);
> +	OFFSET(PACA_USER_ACCESS_ALLOWED, paca_struct, user_access_allowed);
>   	OFFSET(PACA_TRAP_SAVE, paca_struct, trap_save);
>   	OFFSET(PACA_NAPSTATELOST, paca_struct, nap_state_lost);
>   	OFFSET(PACA_SPRG_VDSO, paca_struct, sprg_vdso);
> @@ -320,6 +324,9 @@ int main(void)
>   	 */
>   	STACK_PT_REGS_OFFSET(_DEAR, dar);
>   	STACK_PT_REGS_OFFSET(_ESR, dsisr);
> +#ifdef CONFIG_PPC_KUAP
> +	STACK_PT_REGS_OFFSET(_KUAP, kuap);
> +#endif
>   #else /* CONFIG_PPC64 */
>   	STACK_PT_REGS_OFFSET(SOFTE, softe);
>   	STACK_PT_REGS_OFFSET(_PPR, ppr);
> diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
> index 0768dfd8a64e..f8f2268f8b12 100644
> --- a/arch/powerpc/kernel/entry_32.S
> +++ b/arch/powerpc/kernel/entry_32.S
> @@ -36,6 +36,7 @@
>   #include <asm/asm-405.h>
>   #include <asm/feature-fixups.h>
>   #include <asm/barrier.h>
> +#include <asm/kup.h>
>   
>   /*
>    * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
> @@ -156,6 +157,7 @@ transfer_to_handler:
>   	stw	r12,_CTR(r11)
>   	stw	r2,_XER(r11)
>   	mfspr	r12,SPRN_SPRG_THREAD
> +	LOCK_USER_ACCESS(r2, r11, r9, r0, r12)
>   	addi	r2,r12,-THREAD
>   	tovirt(r2,r2)			/* set r2 to current */
>   	beq	2f			/* if from user, fix up THREAD.regs */
> @@ -442,6 +444,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
>   	ACCOUNT_CPU_USER_EXIT(r4, r5, r7)
>   3:
>   #endif
> +	REST_USER_ACCESS(r7, r1, r4, r5, r2)
>   	lwz	r4,_LINK(r1)
>   	lwz	r5,_CCR(r1)
>   	mtlr	r4
> @@ -739,7 +742,8 @@ fast_exception_return:
>   	beq	1f			/* if not, we've got problems */
>   #endif
>   
> -2:	REST_4GPRS(3, r11)
> +2:	REST_USER_ACCESS(r3, r11, r4, r5, r2)
> +	REST_4GPRS(3, r11)
>   	lwz	r10,_CCR(r11)
>   	REST_GPR(1, r11)
>   	mtcr	r10
> @@ -957,6 +961,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_47x)
>   1:
>   #endif /* CONFIG_TRACE_IRQFLAGS */
>   
> +	REST_USER_ACCESS(r3, r1, r4, r5, r2)
> +
>   	lwz	r0,GPR0(r1)
>   	lwz	r2,GPR2(r1)
>   	REST_4GPRS(3, r1)
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index ce393df243aa..995ef82a583d 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1771,6 +1771,9 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
>   	regs->mq = 0;
>   	regs->nip = start;
>   	regs->msr = MSR_USER;
> +#ifdef CONFIG_PPC_KUAP
> +	regs->kuap = KUAP_START;
> +#endif
>   #else
>   	if (!is_32bit_task()) {
>   		unsigned long entry;
> diff --git a/arch/powerpc/lib/checksum_wrappers.c b/arch/powerpc/lib/checksum_wrappers.c
> index 890d4ddd91d6..48e0c91e0083 100644
> --- a/arch/powerpc/lib/checksum_wrappers.c
> +++ b/arch/powerpc/lib/checksum_wrappers.c
> @@ -28,6 +28,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
>   {
>   	unsigned int csum;
>   
> +	unlock_user_access(NULL, src, len);
>   	might_sleep();
>   
>   	*err_ptr = 0;
> @@ -60,6 +61,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
>   	}
>   
>   out:
> +	lock_user_access(NULL, src, len);
>   	return (__force __wsum)csum;
>   }
>   EXPORT_SYMBOL(csum_and_copy_from_user);
> @@ -69,6 +71,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
>   {
>   	unsigned int csum;
>   
> +	unlock_user_access(dst, NULL, len);
>   	might_sleep();
>   
>   	*err_ptr = 0;
> @@ -97,6 +100,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
>   	}
>   
>   out:
> +	lock_user_access(dst, NULL, len);
>   	return (__force __wsum)csum;
>   }
>   EXPORT_SYMBOL(csum_and_copy_to_user);
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index ad65e336721a..5f4a5391f41e 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -223,9 +223,11 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr,
>   }
>   
>   /* Is this a bad kernel fault ? */
> -static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
> +static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
>   			     unsigned long address)
>   {
> +	int is_exec = TRAP(regs) == 0x400;
> +
>   	/* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
>   	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
>   				      DSISR_PROTFAULT))) {
> @@ -236,7 +238,13 @@ static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
>   				   address, from_kuid(&init_user_ns,
>   						      current_uid()));
>   	}
> -	return is_exec || (address >= TASK_SIZE);
> +	if (!is_exec && address < TASK_SIZE && (error_code & DSISR_PROTFAULT) &&
> +	    !search_exception_tables(regs->nip))
> +		printk_ratelimited(KERN_CRIT "Kernel attempted to access user"
> +				   " page (%lx) - exploit attempt? (uid: %d)\n",
> +				   address, from_kuid(&init_user_ns,
> +						      current_uid()));
> +	return is_exec || (address >= TASK_SIZE) || !search_exception_tables(regs->nip);
>   }
>   
>   static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
> @@ -456,9 +464,10 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
>   
>   	/*
>   	 * The kernel should never take an execute fault nor should it
> -	 * take a page fault to a kernel address.
> +	 * take a page fault to a kernel address or a page fault to a user
> +	 * address outside of dedicated places
>   	 */
> -	if (unlikely(!is_user && bad_kernel_fault(is_exec, error_code, address)))
> +	if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address)))
>   		return SIGSEGV;
>   
>   	/*
> diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
> index 83f95a5565d6..ecaedfff9992 100644
> --- a/arch/powerpc/mm/init-common.c
> +++ b/arch/powerpc/mm/init-common.c
> @@ -27,6 +27,7 @@
>   #include <asm/kup.h>
>   
>   static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
> +static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
>   
>   static int __init parse_nosmep(char *p)
>   {
> @@ -36,9 +37,18 @@ static int __init parse_nosmep(char *p)
>   }
>   early_param("nosmep", parse_nosmep);
>   
> +static int __init parse_nosmap(char *p)
> +{
> +	disable_kuap = true;
> +	pr_warn("Disabling Kernel Userspace Access Protection\n");
> +	return 0;
> +}
> +early_param("nosmap", parse_nosmap);
> +
>   void __init setup_kup(void)
>   {
>   	setup_kuep(disable_kuep);
> +	setup_kuap(disable_kuap);
>   }
>   
>   #define CTOR(shift) static void ctor_##shift(void *addr) \
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index 410c3443652c..7fa5ddbdce12 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -351,6 +351,18 @@ config PPC_KUEP
>   
>   	  If you're unsure, say Y.
>   
> +config PPC_HAVE_KUAP
> +	bool
> +
> +config PPC_KUAP
> +	bool "Kernel Userspace Access Protection"
> +	depends on PPC_HAVE_KUAP
> +	default y
> +	help
> +	  Enable support for Kernel Userspace Access Protection (KUAP)
> +
> +	  If you're unsure, say Y.
> +
>   config ARCH_ENABLE_HUGEPAGE_MIGRATION
>   	def_bool y
>   	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2019-02-21  9:35 ` [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection Russell Currey
  2019-02-21 10:46   ` Christophe Leroy
@ 2019-02-21 12:56   ` kbuild test robot
  1 sibling, 0 replies; 17+ messages in thread
From: kbuild test robot @ 2019-02-21 12:56 UTC (permalink / raw)
  To: Russell Currey; +Cc: kbuild-all, linuxppc-dev, npiggin, kernel-hardening

[-- Attachment #1: Type: text/plain, Size: 3051 bytes --]

Hi Christophe,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.0-rc4 next-20190221]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Russell-Currey/Kernel-Userspace-Protection-for-radix/20190221-193749
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-allyesconfig (attached as .config)
compiler: powerpc64-linux-gnu-gcc (Debian 8.2.0-11) 8.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=8.2.0 make.cross ARCH=powerpc 

Note: the linux-review/Russell-Currey/Kernel-Userspace-Protection-for-radix/20190221-193749 HEAD 5f2951ce63da41c63227ed597252cea801adb5a4 builds fine.
      It only hurts bisectability.

All errors (new ones prefixed by >>):

   In file included from arch/powerpc/kernel/asm-offsets.c:31:
   arch/powerpc/kernel/asm-offsets.c: In function 'main':
>> include/linux/compiler_types.h:127:35: error: 'struct paca_struct' has no member named 'user_access_allowed'
    #define __compiler_offsetof(a, b) __builtin_offsetof(a, b)
                                      ^~~~~~~~~~~~~~~~~~
   include/linux/kbuild.h:6:62: note: in definition of macro 'DEFINE'
     asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))
                                                                 ^~~
   include/linux/stddef.h:17:32: note: in expansion of macro '__compiler_offsetof'
    #define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER)
                                   ^~~~~~~~~~~~~~~~~~~
   include/linux/kbuild.h:11:14: note: in expansion of macro 'offsetof'
     DEFINE(sym, offsetof(struct str, mem))
                 ^~~~~~~~
   arch/powerpc/kernel/asm-offsets.c:266:2: note: in expansion of macro 'OFFSET'
     OFFSET(PACA_USER_ACCESS_ALLOWED, paca_struct, user_access_allowed);
     ^~~~~~
   make[2]: *** [arch/powerpc/kernel/asm-offsets.s] Error 1
   make[2]: Target '__build' not remade because of errors.
   make[1]: *** [prepare0] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [sub-make] Error 2

vim +127 include/linux/compiler_types.h

71391bdd Xiaozhou Liu 2018-12-14  126  
71391bdd Xiaozhou Liu 2018-12-14 @127  #define __compiler_offsetof(a, b)	__builtin_offsetof(a, b)
71391bdd Xiaozhou Liu 2018-12-14  128  

:::::: The code at line 127 was first introduced by commit
:::::: 71391bdd2e9aab188f86bf1ecd9b232531ec7eea include/linux/compiler_types.h: don't pollute userspace with macro definitions

:::::: TO: Xiaozhou Liu <liuxiaozhou@bytedance.com>
:::::: CC: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 59719 bytes --]


* Re: [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2019-02-21 10:46   ` Christophe Leroy
@ 2019-02-21 14:48     ` Mark Rutland
  2019-02-22  0:11       ` Russell Currey
  0 siblings, 1 reply; 17+ messages in thread
From: Mark Rutland @ 2019-02-21 14:48 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Russell Currey, linuxppc-dev, mpe, npiggin, kernel-hardening

On Thu, Feb 21, 2019 at 11:46:06AM +0100, Christophe Leroy wrote:
> 
> 
> Le 21/02/2019 à 10:35, Russell Currey a écrit :
> > From: Christophe Leroy <christophe.leroy@c-s.fr>
> > 
> > This patch implements a framework for Kernel Userspace Access
> > Protection.
> > 
> > Subarches will then have the possibility to provide their own
> > implementation by providing setup_kuap() and lock/unlock_user_access().
> > 
> > Some platforms will need to know the area accessed and whether it is
> > accessed for read, write or both. Therefore the source, destination and
> > size are handed over to the two functions.
> 
> Should we also consider adding user_access_begin() and the 3 other
> associated macros ?
> 
> See x86 for details.

As a heads-up, there are some potential issues with
user_access_{begin,end}() that may affect PPC. There's a long thread
starting at:

https://lkml.kernel.org/r/1547560709-56207-4-git-send-email-julien.thierry@arm.com/

Thanks,
Mark.
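
For reference, the x86-style macros Christophe mentions could sit on top
of this framework roughly as follows. This is a sketch only, under the
assumptions noted in the comment, and it does nothing about the
nested-exception problem discussed above:

```c
/*
 * Hypothetical sketch -- not part of this series.  It assumes the
 * lock/unlock_user_access() signatures from patch 3 and, since a generic
 * begin/end pair is direction-agnostic, passes the pointer as both source
 * and destination (the radix implementation ignores them anyway).
 */
#define user_access_begin(ptr, len)					\
({									\
	bool __ua_ok = access_ok(ptr, len);				\
	if (__ua_ok)							\
		unlock_user_access((void __user *)(ptr),		\
				   (const void __user *)(ptr), (len));	\
	__ua_ok;							\
})

#define user_access_end()	lock_user_access(NULL, NULL, 0)
```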


* Re: [PATCH 0/7] Kernel Userspace Protection for radix
  2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
                   ` (6 preceding siblings ...)
  2019-02-21  9:36 ` [PATCH 7/7] powerpc/64s: Implement KUAP for Radix MMU Russell Currey
@ 2019-02-21 16:07 ` Kees Cook
  2019-02-22  0:09   ` Russell Currey
  7 siblings, 1 reply; 17+ messages in thread
From: Kees Cook @ 2019-02-21 16:07 UTC (permalink / raw)
  To: Russell Currey
  Cc: PowerPC, Michael Ellerman, Nick Piggin, Christophe Leroy,
	Kernel Hardening

On Thu, Feb 21, 2019 at 1:36 AM Russell Currey <ruscur@russell.cc> wrote:
> The first three patches of these series are from Christophe's work and are
> the bare minimum framework needed to implement the support for radix.
>
> In patch 3, I have removed from Christophe's patch my implementation of
> the 64-bit exception handling code, since we don't have an answer for
> making nested exceptions work yet.  This is mentioned in the final KUAP
> patch.  Regardless, this is still a significant security improvement
> and greatly narrows the attack surface.

Nice! Am I understanding correctly that with this series POWER9 and
later, using radix, will pass the lkdtm tests for KUAP and KUEP (i.e.
EXEC_USERSPACE and ACCESS_USERSPACE)?

-- 
Kees Cook


* Re: [PATCH 0/7] Kernel Userspace Protection for radix
  2019-02-21 16:07 ` [PATCH 0/7] Kernel Userspace Protection for radix Kees Cook
@ 2019-02-22  0:09   ` Russell Currey
  2019-02-22  0:16     ` Kees Cook
  0 siblings, 1 reply; 17+ messages in thread
From: Russell Currey @ 2019-02-22  0:09 UTC (permalink / raw)
  To: Kees Cook
  Cc: PowerPC, Michael Ellerman, Nick Piggin, Christophe Leroy,
	Kernel Hardening

On Thu, 2019-02-21 at 08:07 -0800, Kees Cook wrote:
> On Thu, Feb 21, 2019 at 1:36 AM Russell Currey <ruscur@russell.cc>
> wrote:
> > The first three patches of these series are from Christophe's work
> > and are
> > the bare minimum framework needed to implement the support for
> > radix.
> > 
> > In patch 3, I have removed from Christophe's patch my
> > implementation of
> > the 64-bit exception handling code, since we don't have an answer
> > for
> > making nested exceptions work yet.  This is mentioned in the final
> > KUAP
> > patch.  Regardless, this is still a significant security
> > improvement
> > and greatly narrows the attack surface.
> 
> Nice! Am I understanding correctly that with this series powerpc9 and
> later, using radix, will pass the lkdtm tests for KUAP and KUEP (i.e.
> EXEC_USERSPACE and ACCESS_USERSPACE)?

Yes!  We've had execution prevention for a while on radix (which is
default on POWER9) since 3b10d0095a1e; the only functional change this
series makes there is allowing it to be disabled with nosmep.  This
series adds access prevention.
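
For anyone following along, the boot-time switches involved (as named in
the series' kernel-parameters.txt update; radix only) look like this as
kernel command-line fragments:

```
nosmep    # disable Kernel Userspace Execution Prevention (KUEP)
nosmap    # disable Kernel Userspace Access Protection (KUAP)
```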


* Re: [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2019-02-21 14:48     ` Mark Rutland
@ 2019-02-22  0:11       ` Russell Currey
  0 siblings, 0 replies; 17+ messages in thread
From: Russell Currey @ 2019-02-22  0:11 UTC (permalink / raw)
  To: Mark Rutland, Christophe Leroy
  Cc: linuxppc-dev, mpe, npiggin, kernel-hardening

On Thu, 2019-02-21 at 14:48 +0000, Mark Rutland wrote:
> On Thu, Feb 21, 2019 at 11:46:06AM +0100, Christophe Leroy wrote:
> > 
> > Le 21/02/2019 à 10:35, Russell Currey a écrit :
> > > From: Christophe Leroy <christophe.leroy@c-s.fr>
> > > 
> > > This patch implements a framework for Kernel Userspace Access
> > > Protection.
> > > 
> > > Subarches will then have the possibility to provide their own
> > > implementation by providing setup_kuap() and
> > > lock/unlock_user_access().
> > > 
> > > Some platforms will need to know the area accessed and whether it is
> > > accessed for read, write or both. Therefore the source, destination
> > > and size are handed over to the two functions.
> > 
> > Should we also consider adding user_access_begin() and the 3 other
> > associated macros ?
> > 
> > See x86 for details.
> 
> As a heads-up, there are some potential issues with
> user_access_{begin,end}() that may affect PPC. There's a long thread
> starting at:
> 
> https://lkml.kernel.org/r/1547560709-56207-4-git-send-email-julien.thierry@arm.com/

I'd say we should look at adding those macros, but as follow-up work
after this series (and after we know what's going on with those issues).

- Russell

> 
> Thanks,
> Mark.


* Re: [PATCH 0/7] Kernel Userspace Protection for radix
  2019-02-22  0:09   ` Russell Currey
@ 2019-02-22  0:16     ` Kees Cook
  2019-02-22  3:46       ` Michael Ellerman
  0 siblings, 1 reply; 17+ messages in thread
From: Kees Cook @ 2019-02-22  0:16 UTC (permalink / raw)
  To: Russell Currey
  Cc: PowerPC, Michael Ellerman, Nick Piggin, Christophe Leroy,
	Kernel Hardening

On Thu, Feb 21, 2019 at 4:09 PM Russell Currey <ruscur@russell.cc> wrote:
> On Thu, 2019-02-21 at 08:07 -0800, Kees Cook wrote:
> > On Thu, Feb 21, 2019 at 1:36 AM Russell Currey <ruscur@russell.cc>
> > wrote:
> > > The first three patches of these series are from Christophe's work
> > > and are
> > > the bare minimum framework needed to implement the support for
> > > radix.
> > >
> > > In patch 3, I have removed from Christophe's patch my
> > > implementation of
> > > the 64-bit exception handling code, since we don't have an answer
> > > for
> > > making nested exceptions work yet.  This is mentioned in the final
> > > KUAP
> > > patch.  Regardless, this is still a significant security
> > > improvement
> > > and greatly narrows the attack surface.
> >
> > Nice! Am I understanding correctly that with this series powerpc9 and
> > later, using radix, will pass the lkdtm tests for KUAP and KUEP (i.e.
> > EXEC_USERSPACE and ACCESS_USERSPACE)?
>
> Yes!  We've had execution prevention for a while on radix (which is
> default on POWER9) since 3b10d0095a1e; the only functional change this
> series makes there is allowing it to be disabled with nosmep.  This
> series adds access prevention.

Ah-ha; excellent. And CONFIG_PPC_RADIX_MMU is "default y" already. :)

-- 
Kees Cook


* Re: [PATCH 0/7] Kernel Userspace Protection for radix
  2019-02-22  0:16     ` Kees Cook
@ 2019-02-22  3:46       ` Michael Ellerman
  0 siblings, 0 replies; 17+ messages in thread
From: Michael Ellerman @ 2019-02-22  3:46 UTC (permalink / raw)
  To: Kees Cook, Russell Currey
  Cc: PowerPC, Nick Piggin, Christophe Leroy, Kernel Hardening

Kees Cook <keescook@chromium.org> writes:
> On Thu, Feb 21, 2019 at 4:09 PM Russell Currey <ruscur@russell.cc> wrote:
>> On Thu, 2019-02-21 at 08:07 -0800, Kees Cook wrote:
>> > On Thu, Feb 21, 2019 at 1:36 AM Russell Currey <ruscur@russell.cc>
>> > wrote:
>> > > The first three patches of these series are from Christophe's work
>> > > and are
>> > > the bare minimum framework needed to implement the support for
>> > > radix.
>> > >
>> > > In patch 3, I have removed from Christophe's patch my
>> > > implementation of
>> > > the 64-bit exception handling code, since we don't have an answer
>> > > for
>> > > making nested exceptions work yet.  This is mentioned in the final
>> > > KUAP
>> > > patch.  Regardless, this is still a significant security
>> > > improvement
>> > > and greatly narrows the attack surface.
>> >
>> > Nice! Am I understanding correctly that with this series powerpc9 and
>> > later, using radix, will pass the lkdtm tests for KUAP and KUEP (i.e.
>> > EXEC_USERSPACE and ACCESS_USERSPACE)?
>>
>> Yes!  We've had execution prevention for a while on radix (which is
>> default on POWER9) since 3b10d0095a1e; the only functional change this
>> series makes there is allowing it to be disabled with nosmep.  This
>> series adds access prevention.
>
> Ah-ha; excellent. And CONFIG_PPC_RADIX_MMU is "default y" already. :)

Though on real hardware it doesn't really work yet, at least if there
are any idle states enabled.

Patch under discussion to fix it:

  https://patchwork.ozlabs.org/patch/1038568/


cheers


* Re: [PATCH 7/7] powerpc/64s: Implement KUAP for Radix MMU
  2019-02-21  9:36 ` [PATCH 7/7] powerpc/64s: Implement KUAP for Radix MMU Russell Currey
@ 2019-02-22  5:14   ` Nicholas Piggin
  0 siblings, 0 replies; 17+ messages in thread
From: Nicholas Piggin @ 2019-02-22  5:14 UTC (permalink / raw)
  To: linuxppc-dev, Russell Currey; +Cc: christophe.leroy, kernel-hardening, mpe

Russell Currey wrote on February 21, 2019 7:36 pm:
> Kernel Userspace Access Prevention utilises a feature of the Radix MMU
> which disallows read and write access to userspace addresses. By
> utilising this, the kernel is prevented from accessing user data from
> outside of trusted paths that perform proper safety checks, such as
> copy_{to/from}_user() and friends.
> 
> Userspace access is disabled from early boot and is only enabled when
> performing an operation like copy_{to/from}_user().  The register that
> controls this (AMR) does not prevent userspace from accessing other
> userspace, so there is no need to save and restore when entering and
> exiting userspace.
> 
> This feature has a slight performance impact, which I roughly measured
> to be 3% slower in the worst case (performing 1GB of 1-byte
> read()/write() syscalls), and is gated behind the CONFIG_PPC_KUAP
> option so it can be disabled for performance-critical builds.
> 
> This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y)
> and performing the following:
> 
>   # (echo ACCESS_USERSPACE) > [debugfs]/provoke-crash/DIRECT
> 
> If enabled, this should send SIGSEGV to the thread.
> 
> A big limitation of the current implementation is that user access
> is left unlocked if an exception is taken while user access is unlocked
> (i.e. if an interrupt is taken during copy_to_user()). This should be
> resolved in future, and is why the state is tracked in the PACA even
> though nothing currently uses it.

Did you have an implementation for this in an earlier series?
What's happened to that? If the idea is to add things incrementally
that's fine.

> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>  .../powerpc/include/asm/book3s/64/kup-radix.h | 36 +++++++++++++++++++
>  arch/powerpc/include/asm/kup.h                |  4 +++
>  arch/powerpc/include/asm/mmu.h                |  9 ++++-
>  arch/powerpc/include/asm/reg.h                |  1 +
>  arch/powerpc/mm/pgtable-radix.c               | 16 +++++++++
>  arch/powerpc/mm/pkeys.c                       |  7 ++--
>  arch/powerpc/platforms/Kconfig.cputype        |  1 +
>  7 files changed, 71 insertions(+), 3 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
> new file mode 100644
> index 000000000000..5cfdea954418
> --- /dev/null
> +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
> @@ -0,0 +1,36 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_POWERPC_KUP_RADIX_H
> +#define _ASM_POWERPC_KUP_RADIX_H
> +
> +#ifndef __ASSEMBLY__
> +#ifdef CONFIG_PPC_KUAP
> +#include <asm/reg.h>
> +/*
> + * We do have the ability to individually lock/unlock reads and writes rather
> + * than both at once, however it's a significant performance hit due to needing
> + * to do a read-modify-write, which adds a mfspr, which is slow.  As a result,
> + * locking/unlocking both at once is preferred.
> + */
> +static inline void unlock_user_access(void __user *to, const void __user *from,
> +				      unsigned long size)
> +{
> +	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
> +		return;
> +
> +	mtspr(SPRN_AMR, 0);
> +	isync();
> +	get_paca()->user_access_allowed = 1;

I think this is going to get corrupted when you context switch, isn't
it? I would have thought a per-thread flag would be easier, but maybe 
that's difficult in your exception code... If you've got more code to 
deal with it in a later patch, might be worth just moving all the
user_access_allowed stuff there.

Possibly you could add some debug warnings to catch double lock or 
unpaired unlock? That could be removed or put under a CONFIG option 
after it gets more testing.

> +}
> +
> +static inline void lock_user_access(void __user *to, const void __user *from,
> +				    unsigned long size)
> +{
> +	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
> +		return;
> +
> +	mtspr(SPRN_AMR, RADIX_AMR_LOCKED);
> +	get_paca()->user_access_allowed = 0;

Without the isync here there's some small window in which user
accesses could still execute without faulting, I think. If that's for
performance I won't complain, but a comment would be good.
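
A variant with the synchronisation Nick is asking about might look like
the following. This is a sketch against the quoted patch, not a tested
change; the trade-off between the extra isync on every lock and the
small fail-open window is the open question:

```c
static inline void lock_user_access(void __user *to, const void __user *from,
				    unsigned long size)
{
	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
		return;

	mtspr(SPRN_AMR, RADIX_AMR_LOCKED);
	/*
	 * Context-synchronise so no later user access can execute before
	 * the AMR update takes effect.
	 */
	isync();
	get_paca()->user_access_allowed = 0;
}
```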

Looks good though, no real complaints about the series.

Thanks,
Nick



end of thread, other threads:[~2019-02-22  5:14 UTC | newest]

Thread overview: 17+ messages
2019-02-21  9:35 [PATCH 0/7] Kernel Userspace Protection for radix Russell Currey
2019-02-21  9:35 ` [PATCH 1/7] powerpc: Add framework for Kernel Userspace Protection Russell Currey
2019-02-21  9:35 ` [PATCH 2/7] powerpc: Add skeleton for Kernel Userspace Execution Prevention Russell Currey
2019-02-21  9:35 ` [PATCH 3/7] powerpc/mm: Add a framework for Kernel Userspace Access Protection Russell Currey
2019-02-21 10:46   ` Christophe Leroy
2019-02-21 14:48     ` Mark Rutland
2019-02-22  0:11       ` Russell Currey
2019-02-21 12:56   ` kbuild test robot
2019-02-21  9:35 ` [PATCH 4/7] powerpc/64: Setup KUP on secondary CPUs Russell Currey
2019-02-21  9:35 ` [PATCH 5/7] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
2019-02-21  9:36 ` [PATCH 6/7] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() Russell Currey
2019-02-21  9:36 ` [PATCH 7/7] powerpc/64s: Implement KUAP for Radix MMU Russell Currey
2019-02-22  5:14   ` Nicholas Piggin
2019-02-21 16:07 ` [PATCH 0/7] Kernel Userspace Protection for radix Kees Cook
2019-02-22  0:09   ` Russell Currey
2019-02-22  0:16     ` Kees Cook
2019-02-22  3:46       ` Michael Ellerman
