* [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx
@ 2018-11-28  9:27 Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 02/11] powerpc: Add framework for Kernel Userspace Protection Christophe Leroy
                   ` (10 more replies)
  0 siblings, 11 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

On the 8xx, no-execute is set via PPP bits in the PTE. Therefore
a no-exec fault generates DSISR_PROTFAULT error bits,
not DSISR_NOEXEC_OR_G.

This patch adds DSISR_PROTFAULT to the test mask.

Fixes: d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/fault.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 1697e903bbf2..50e5c790d11e 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -226,7 +226,9 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr,
 static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
 			     unsigned long address)
 {
-	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT))) {
+	/* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
+	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
+				      DSISR_PROTFAULT))) {
 		printk_ratelimited(KERN_CRIT "kernel tried to execute"
 				   " exec-protected page (%lx) -"
 				   "exploit attempt? (uid: %d)\n",
-- 
2.13.3



* [RFC PATCH v2 02/11] powerpc: Add framework for Kernel Userspace Protection
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 03/11] powerpc: Add skeleton for Kernel Userspace Execution Prevention Christophe Leroy
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch adds a skeleton for Kernel Userspace Protection
functionalities such as Kernel Userspace Access Protection and
Kernel Userspace Execution Prevention.

The subsequent implementation of KUAP for radix makes use of an MMU
feature in order to patch out assembly when KUAP is disabled or
unsupported.  This won't work unless there's an entry point for
KUP support before the feature magic happens, so for PPC64
setup_kup() is called early in setup.

On PPC32, feature fixups are done too early to allow the same, so
setup_kup() is called from MMU_init() instead.
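
For reference, the resulting ordering is roughly as follows (a condensed
sketch of the hunks below, not verbatim code):

	/* PPC64: arch/powerpc/kernel/setup_64.c */
	early_setup()
		configure_exceptions();
		setup_kup();		/* new: must run before the fixups */
		apply_feature_fixups();

	/* PPC32: arch/powerpc/mm/init_32.c */
	MMU_init()
		...
		setup_kup();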

Suggested-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/kup.h | 11 +++++++++++
 arch/powerpc/kernel/setup_64.c |  7 +++++++
 arch/powerpc/mm/init-common.c  |  5 +++++
 arch/powerpc/mm/init_32.c      |  3 +++
 4 files changed, 26 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kup.h

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
new file mode 100644
index 000000000000..7a88b8b9b54d
--- /dev/null
+++ b/arch/powerpc/include/asm/kup.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_H_
+#define _ASM_POWERPC_KUP_H_
+
+#ifndef __ASSEMBLY__
+
+void setup_kup(void);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 236c1151a3a7..771f280a6bf6 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -68,6 +68,7 @@
 #include <asm/cputhreads.h>
 #include <asm/hw_irq.h>
 #include <asm/feature-fixups.h>
+#include <asm/kup.h>
 
 #include "setup.h"
 
@@ -331,6 +332,12 @@ void __init early_setup(unsigned long dt_ptr)
 	 */
 	configure_exceptions();
 
+	/*
+	 * Configure Kernel Userspace Protection. This needs to happen before
+	 * feature fixups for platforms that implement this using features.
+	 */
+	setup_kup();
+
 	/* Apply all the dynamic patching */
 	apply_feature_fixups();
 	setup_feature_keys();
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 2b656e67f2ea..a72bbfc3add6 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -24,6 +24,11 @@
 #include <linux/string.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
+#include <asm/kup.h>
+
+void __init setup_kup(void)
+{
+}
 
 static void pgd_ctor(void *addr)
 {
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 3e59e5d64b01..93cfa8cf015d 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -45,6 +45,7 @@
 #include <asm/tlb.h>
 #include <asm/sections.h>
 #include <asm/hugetlb.h>
+#include <asm/kup.h>
 
 #include "mmu_decl.h"
 
@@ -182,6 +183,8 @@ void __init MMU_init(void)
 	btext_unmap();
 #endif
 
+	setup_kup();
+
 	/* Shortly after that, the entire linear mapping will be available */
 	memblock_set_current_limit(lowmem_end_addr);
 }
-- 
2.13.3



* [RFC PATCH v2 03/11] powerpc: Add skeleton for Kernel Userspace Execution Prevention
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 02/11] powerpc: Add framework for Kernel Userspace Protection Christophe Leroy
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 04/11] powerpc/mm: Add a framework for Kernel Userspace Access Protection Christophe Leroy
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch adds a skeleton for Kernel Userspace Execution Prevention.

Subarches implementing it then have to select CONFIG_PPC_HAVE_KUEP
and provide a setup_kuep() function.
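
As an illustration, a minimal subarch hook could look like the sketch
below (a hypothetical platform file, for illustration only; the real
8xx version appears later in this series):

	/* sketch only: a platform selecting PPC_HAVE_KUEP in Kconfig
	 * then provides this hook in its MMU setup code */
	#ifdef CONFIG_PPC_KUEP
	void __init setup_kuep(bool disabled)
	{
		if (disabled)
			return;

		pr_info("Activating Kernel Userspace Execution Prevention\n");
		/* platform specific MMU configuration goes here */
	}
	#endif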

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 Documentation/admin-guide/kernel-parameters.txt |  2 +-
 arch/powerpc/include/asm/kup.h                  |  6 ++++++
 arch/powerpc/mm/fault.c                         |  3 ++-
 arch/powerpc/mm/init-common.c                   | 11 +++++++++++
 arch/powerpc/platforms/Kconfig.cputype          | 12 ++++++++++++
 5 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 81d1d5a74728..1103549363bb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2796,7 +2796,7 @@
 			Disable SMAP (Supervisor Mode Access Prevention)
 			even if it is supported by processor.
 
-	nosmep		[X86]
+	nosmep		[X86,PPC]
 			Disable SMEP (Supervisor Mode Execution Prevention)
 			even if it is supported by processor.
 
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 7a88b8b9b54d..af4b5f854ca4 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -6,6 +6,12 @@
 
 void setup_kup(void);
 
+#ifdef CONFIG_PPC_KUEP
+void setup_kuep(bool disabled);
+#else
+static inline void setup_kuep(bool disabled) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 50e5c790d11e..e57bd46cf25b 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -230,8 +230,9 @@ static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
 	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
 				      DSISR_PROTFAULT))) {
 		printk_ratelimited(KERN_CRIT "kernel tried to execute"
-				   " exec-protected page (%lx) -"
+				   " %s page (%lx) -"
 				   "exploit attempt? (uid: %d)\n",
+				   address >= TASK_SIZE ? "exec-protected" : "user",
 				   address, from_kuid(&init_user_ns,
 						      current_uid()));
 	}
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index a72bbfc3add6..37f84a43b822 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -26,8 +26,19 @@
 #include <asm/pgtable.h>
 #include <asm/kup.h>
 
+static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
+
+static int __init parse_nosmep(char *p)
+{
+	disable_kuep = true;
+	pr_warn("Disabling Kernel Userspace Execution Prevention\n");
+	return 0;
+}
+early_param("nosmep", parse_nosmep);
+
 void __init setup_kup(void)
 {
+	setup_kuep(disable_kuep);
 }
 
 static void pgd_ctor(void *addr)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index f4e2c5729374..70830cb3c18a 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -351,6 +351,18 @@ config PPC_RADIX_MMU_DEFAULT
 
 	  If you're unsure, say Y.
 
+config PPC_HAVE_KUEP
+	bool
+
+config PPC_KUEP
+	bool "Kernel Userspace Execution Prevention"
+	depends on PPC_HAVE_KUEP
+	default y
+	help
+	  Enable support for Kernel Userspace Execution Prevention (KUEP)
+
+	  If you're unsure, say Y.
+
 config ARCH_ENABLE_HUGEPAGE_MIGRATION
 	def_bool y
 	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
-- 
2.13.3



* [RFC PATCH v2 04/11] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 02/11] powerpc: Add framework for Kernel Userspace Protection Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 03/11] powerpc: Add skeleton for Kernel Userspace Execution Prevention Christophe Leroy
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-12-21  5:07   ` Michael Ellerman
  2018-11-28  9:27 ` [RFC PATCH v2 05/11] powerpc/8xx: Add Kernel Userspace Execution Prevention Christophe Leroy
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch implements a framework for Kernel Userspace Access
Protection.

Subarches then have the possibility to provide their own
implementation by providing setup_kuap() and lock/unlock_user_access().

Some platforms will need to know the area being accessed and whether it
is accessed for read, write or both. Therefore the source, destination
and size are handed over to the two functions.
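
The expected usage pattern is to bracket each userspace access, as in
this sketch (mirroring the uaccess.h hunks below):

	unlock_user_access(to, NULL, n);	/* open the user window */
	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
	lock_user_access(to, NULL, n);		/* close it again */

A NULL 'to' or 'from' tells the implementation that only one direction
is involved, so platforms able to protect reads and writes separately
can unlock just what is needed.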

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 Documentation/admin-guide/kernel-parameters.txt |  2 +-
 arch/powerpc/include/asm/exception-64e.h        |  3 ++
 arch/powerpc/include/asm/exception-64s.h        |  9 +++++-
 arch/powerpc/include/asm/futex.h                |  4 +++
 arch/powerpc/include/asm/kup.h                  | 21 ++++++++++++++
 arch/powerpc/include/asm/paca.h                 |  3 ++
 arch/powerpc/include/asm/processor.h            |  3 ++
 arch/powerpc/include/asm/ptrace.h               |  3 ++
 arch/powerpc/include/asm/uaccess.h              | 38 +++++++++++++++++++------
 arch/powerpc/kernel/asm-offsets.c               |  7 +++++
 arch/powerpc/kernel/entry_32.S                  |  8 +++++-
 arch/powerpc/kernel/entry_64.S                  | 16 +++++++++--
 arch/powerpc/kernel/process.c                   |  3 ++
 arch/powerpc/lib/checksum_wrappers.c            |  4 +++
 arch/powerpc/mm/fault.c                         | 17 ++++++++---
 arch/powerpc/mm/init-common.c                   | 10 +++++++
 arch/powerpc/platforms/Kconfig.cputype          | 12 ++++++++
 17 files changed, 146 insertions(+), 17 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 1103549363bb..0d059b141ff8 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2792,7 +2792,7 @@
 			noexec=on: enable non-executable mappings (default)
 			noexec=off: disable non-executable mappings
 
-	nosmap		[X86]
+	nosmap		[X86,PPC]
 			Disable SMAP (Supervisor Mode Access Prevention)
 			even if it is supported by processor.
 
diff --git a/arch/powerpc/include/asm/exception-64e.h b/arch/powerpc/include/asm/exception-64e.h
index 555e22d5e07f..bf25015834ee 100644
--- a/arch/powerpc/include/asm/exception-64e.h
+++ b/arch/powerpc/include/asm/exception-64e.h
@@ -215,5 +215,8 @@ exc_##label##_book3e:
 #define RFI_TO_USER							\
 	rfi
 
+#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)
+
 #endif /* _ASM_POWERPC_EXCEPTION_64E_H */
 
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 3b4767ed3ec5..4d971ca1e69b 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -264,6 +264,9 @@ BEGIN_FTR_SECTION_NESTED(943)						\
 	std	ra,offset(r13);						\
 END_FTR_SECTION_NESTED(ftr,ftr,943)
 
+#define LOCK_USER_ACCESS(reg)
+#define UNLOCK_USER_ACCESS(reg)
+
 #define EXCEPTION_PROLOG_0(area)					\
 	GET_PACA(r13);							\
 	std	r9,area+EX_R9(r13);	/* save r9 */			\
@@ -500,7 +503,11 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	beq	4f;			/* if from kernel mode		*/ \
 	ACCOUNT_CPU_USER_ENTRY(r13, r9, r10);				   \
 	SAVE_PPR(area, r9);						   \
-4:	EXCEPTION_PROLOG_COMMON_2(area)					   \
+4:	lbz	r9,PACA_USER_ACCESS_ALLOWED(r13);			   \
+	cmpwi	cr1,r9,0;						   \
+	beq	cr1,5f;						   \
+	LOCK_USER_ACCESS(r9);						   \
+5:	EXCEPTION_PROLOG_COMMON_2(area)					   \
 	EXCEPTION_PROLOG_COMMON_3(n)					   \
 	ACCOUNT_STOLEN_TIME
 
diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h
index 94542776a62d..32230f9a1c32 100644
--- a/arch/powerpc/include/asm/futex.h
+++ b/arch/powerpc/include/asm/futex.h
@@ -35,6 +35,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
 {
 	int oldval = 0, ret;
 
+	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
 	pagefault_disable();
 
 	switch (op) {
@@ -62,6 +63,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
 	if (!ret)
 		*oval = oldval;
 
+	lock_user_access(uaddr, NULL, sizeof(*uaddr));
 	return ret;
 }
 
@@ -75,6 +77,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
 	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
 		return -EFAULT;
 
+	unlock_user_access(uaddr, NULL, sizeof(*uaddr));
         __asm__ __volatile__ (
         PPC_ATOMIC_ENTRY_BARRIER
 "1:     lwarx   %1,0,%3         # futex_atomic_cmpxchg_inatomic\n\
@@ -95,6 +98,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
         : "cc", "memory");
 
 	*uval = prev;
+	lock_user_access(uaddr, NULL, sizeof(*uaddr));
         return ret;
 }
 
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index af4b5f854ca4..2ac540fb488f 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -4,6 +4,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/pgtable.h>
+
 void setup_kup(void);
 
 #ifdef CONFIG_PPC_KUEP
@@ -12,6 +14,25 @@ void setup_kuep(bool disabled);
 static inline void setup_kuep(bool disabled) { }
 #endif
 
+#ifdef CONFIG_PPC_KUAP
+void setup_kuap(bool disabled);
+#else
+static inline void setup_kuap(bool disabled) { }
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size) { }
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size) { }
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
+#ifndef CONFIG_PPC_KUAP
+
+#ifdef CONFIG_PPC32
+#define LOCK_USER_ACCESS(val, sp, sr, srmax, curr)
+#define REST_USER_ACCESS(val, sp, sr, srmax, curr)
+#endif
+
+#endif
+
 #endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index e843bc5d1a0f..56236f6d8c89 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -169,6 +169,9 @@ struct paca_struct {
 	u64 saved_r1;			/* r1 save for RTAS calls or PM or EE=0 */
 	u64 saved_msr;			/* MSR saved here by enter_rtas */
 	u16 trap_save;			/* Used when bad stack is encountered */
+#ifdef CONFIG_PPC_KUAP
+	u8 user_access_allowed;		/* can the kernel access user memory? */
+#endif
 	u8 irq_soft_mask;		/* mask for irq soft masking */
 	u8 irq_happened;		/* irq happened while soft-disabled */
 	u8 io_sync;			/* writel() needs spin_unlock sync */
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index ee58526cb6c2..4a9a10e86828 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -250,6 +250,9 @@ struct thread_struct {
 #ifdef CONFIG_PPC32
 	void		*pgdir;		/* root of page-table tree */
 	unsigned long	ksp_limit;	/* if ksp <= ksp_limit stack overflow */
+#ifdef CONFIG_PPC_KUAP
+	unsigned long	kuap;		/* state of user access protection */
+#endif
 #endif
 	/* Debug Registers */
 	struct debug_reg debug;
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 0b8a735b6d85..0321ba5b3d12 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -55,6 +55,9 @@ struct pt_regs
 #ifdef CONFIG_PPC64
 	unsigned long ppr;
 	unsigned long __pad;	/* Maintain 16 byte interrupt stack alignment */
+#elif defined(CONFIG_PPC_KUAP)
+	unsigned long kuap;
+	unsigned long __pad[3];	/* Maintain 16 byte interrupt stack alignment */
 #endif
 };
 #endif
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 15bea9a0f260..b8f7f023fcbd 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -6,6 +6,7 @@
 #include <asm/processor.h>
 #include <asm/page.h>
 #include <asm/extable.h>
+#include <asm/kup.h>
 
 /*
  * The fs value determines whether argument validity checking should be
@@ -141,6 +142,7 @@ extern long __put_user_bad(void);
 #define __put_user_size(x, ptr, size, retval)			\
 do {								\
 	retval = 0;						\
+	unlock_user_access(ptr, NULL, size);			\
 	switch (size) {						\
 	  case 1: __put_user_asm(x, ptr, retval, "stb"); break;	\
 	  case 2: __put_user_asm(x, ptr, retval, "sth"); break;	\
@@ -148,6 +150,7 @@ do {								\
 	  case 8: __put_user_asm2(x, ptr, retval); break;	\
 	  default: __put_user_bad();				\
 	}							\
+	lock_user_access(ptr, NULL, size);			\
 } while (0)
 
 #define __put_user_nocheck(x, ptr, size)			\
@@ -240,6 +243,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (size > sizeof(x))					\
 		(x) = __get_user_bad();				\
+	unlock_user_access(NULL, ptr, size);			\
 	switch (size) {						\
 	case 1: __get_user_asm(x, ptr, retval, "lbz"); break;	\
 	case 2: __get_user_asm(x, ptr, retval, "lhz"); break;	\
@@ -247,6 +251,7 @@ do {								\
 	case 8: __get_user_asm2(x, ptr, retval);  break;	\
 	default: (x) = __get_user_bad();			\
 	}							\
+	lock_user_access(NULL, ptr, size);			\
 } while (0)
 
 /*
@@ -306,15 +311,21 @@ extern unsigned long __copy_tofrom_user(void __user *to,
 static inline unsigned long
 raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
 {
-	return __copy_tofrom_user(to, from, n);
+	unsigned long ret;
+
+	unlock_user_access(to, from, n);
+	ret = __copy_tofrom_user(to, from, n);
+	lock_user_access(to, from, n);
+	return ret;
 }
 #endif /* __powerpc64__ */
 
 static inline unsigned long raw_copy_from_user(void *to,
 		const void __user *from, unsigned long n)
 {
+	unsigned long ret;
 	if (__builtin_constant_p(n) && (n <= 8)) {
-		unsigned long ret = 1;
+		ret = 1;
 
 		switch (n) {
 		case 1:
@@ -339,14 +350,18 @@ static inline unsigned long raw_copy_from_user(void *to,
 	}
 
 	barrier_nospec();
-	return __copy_tofrom_user((__force void __user *)to, from, n);
+	unlock_user_access(NULL, from, n);
+	ret = __copy_tofrom_user((__force void __user *)to, from, n);
+	lock_user_access(NULL, from, n);
+	return ret;
 }
 
 static inline unsigned long raw_copy_to_user(void __user *to,
 		const void *from, unsigned long n)
 {
+	unsigned long ret;
 	if (__builtin_constant_p(n) && (n <= 8)) {
-		unsigned long ret = 1;
+		ret = 1;
 
 		switch (n) {
 		case 1:
@@ -366,17 +381,24 @@ static inline unsigned long raw_copy_to_user(void __user *to,
 			return 0;
 	}
 
-	return __copy_tofrom_user(to, (__force const void __user *)from, n);
+	unlock_user_access(to, NULL, n);
+	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
+	lock_user_access(to, NULL, n);
+	return ret;
 }
 
 extern unsigned long __clear_user(void __user *addr, unsigned long size);
 
 static inline unsigned long clear_user(void __user *addr, unsigned long size)
 {
+	unsigned long ret = size;
 	might_fault();
-	if (likely(access_ok(VERIFY_WRITE, addr, size)))
-		return __clear_user(addr, size);
-	return size;
+	if (likely(access_ok(VERIFY_WRITE, addr, size))) {
+		unlock_user_access(addr, NULL, size);
+		ret = __clear_user(addr, size);
+		lock_user_access(addr, NULL, size);
+	}
+	return ret;
 }
 
 extern long strncpy_from_user(char *dst, const char __user *src, long count);
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 9ffc72ded73a..98e94299e728 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -93,6 +93,9 @@ int main(void)
 	OFFSET(THREAD_INFO, task_struct, stack);
 	DEFINE(THREAD_INFO_GAP, _ALIGN_UP(sizeof(struct thread_info), 16));
 	OFFSET(KSP_LIMIT, thread_struct, ksp_limit);
+#ifdef CONFIG_PPC_KUAP
+	OFFSET(KUAP, thread_struct, kuap);
+#endif
 #endif /* CONFIG_PPC64 */
 
 #ifdef CONFIG_LIVEPATCH
@@ -260,6 +263,7 @@ int main(void)
 	OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user);
 	OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime);
 	OFFSET(ACCOUNT_SYSTEM_TIME, paca_struct, accounting.stime);
+	OFFSET(PACA_USER_ACCESS_ALLOWED, paca_struct, user_access_allowed);
 	OFFSET(PACA_TRAP_SAVE, paca_struct, trap_save);
 	OFFSET(PACA_NAPSTATELOST, paca_struct, nap_state_lost);
 	OFFSET(PACA_SPRG_VDSO, paca_struct, sprg_vdso);
@@ -320,6 +324,9 @@ int main(void)
 	 */
 	STACK_PT_REGS_OFFSET(_DEAR, dar);
 	STACK_PT_REGS_OFFSET(_ESR, dsisr);
+#ifdef CONFIG_PPC_KUAP
+	STACK_PT_REGS_OFFSET(_KUAP, kuap);
+#endif
 #else /* CONFIG_PPC64 */
 	STACK_PT_REGS_OFFSET(SOFTE, softe);
 	STACK_PT_REGS_OFFSET(_PPR, ppr);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 77decded1175..64cb6f65ab53 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -36,6 +36,7 @@
 #include <asm/asm-405.h>
 #include <asm/feature-fixups.h>
 #include <asm/barrier.h>
+#include <asm/kup.h>
 
 /*
  * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -156,6 +157,7 @@ transfer_to_handler:
 	stw	r12,_CTR(r11)
 	stw	r2,_XER(r11)
 	mfspr	r12,SPRN_SPRG_THREAD
+	LOCK_USER_ACCESS(r2, r11, r9, r0, r12)
 	addi	r2,r12,-THREAD
 	tovirt(r2,r2)			/* set r2 to current */
 	beq	2f			/* if from user, fix up THREAD.regs */
@@ -442,6 +444,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
 	ACCOUNT_CPU_USER_EXIT(r4, r5, r7)
 3:
 #endif
+	REST_USER_ACCESS(r7, r1, r4, r5, r2)
 	lwz	r4,_LINK(r1)
 	lwz	r5,_CCR(r1)
 	mtlr	r4
@@ -739,7 +742,8 @@ fast_exception_return:
 	beq	1f			/* if not, we've got problems */
 #endif
 
-2:	REST_4GPRS(3, r11)
+2:	REST_USER_ACCESS(r3, r11, r4, r5, r2)
+	REST_4GPRS(3, r11)
 	lwz	r10,_CCR(r11)
 	REST_GPR(1, r11)
 	mtcr	r10
@@ -957,6 +961,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_47x)
 1:
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
+	REST_USER_ACCESS(r3, r1, r4, r5, r2)
+
 	lwz	r0,GPR0(r1)
 	lwz	r2,GPR2(r1)
 	REST_4GPRS(3, r1)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 7b1693adff2a..d5879f32bd34 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -297,7 +297,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	b	.	/* prevent speculative execution */
 
 	/* exit to kernel */
-1:	ld	r2,GPR2(r1)
+1:	/* if the AMR was unlocked before, unlock it again */
+	lbz	r2,PACA_USER_ACCESS_ALLOWED(r13)
+	cmpwi	cr1,r2,0
+	beq	cr1,2f
+	UNLOCK_USER_ACCESS(r2)
+2:	ld	r2,GPR2(r1)
 	ld	r1,GPR1(r1)
 	mtlr	r4
 	mtcr	r5
@@ -983,7 +988,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	RFI_TO_USER
 	b	.	/* prevent speculative execution */
 
-1:	mtspr	SPRN_SRR1,r3
+1:	/* exit to kernel */
+	/* if the AMR was unlocked before, unlock it again */
+	lbz	r2,PACA_USER_ACCESS_ALLOWED(r13)
+	cmpwi	cr1,r2,0
+	beq	cr1,2f
+	UNLOCK_USER_ACCESS(r2)
+
+2:	mtspr	SPRN_SRR1,r3
 
 	ld	r2,_CCR(r1)
 	mtcrf	0xFF,r2
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 96f34730010f..d57bc0d90e18 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1771,6 +1771,9 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
 	regs->mq = 0;
 	regs->nip = start;
 	regs->msr = MSR_USER;
+#ifdef CONFIG_PPC_KUAP
+	regs->kuap = KUAP_START;
+#endif
 #else
 	if (!is_32bit_task()) {
 		unsigned long entry;
diff --git a/arch/powerpc/lib/checksum_wrappers.c b/arch/powerpc/lib/checksum_wrappers.c
index a0cb63fb76a1..6d84d5e14ef5 100644
--- a/arch/powerpc/lib/checksum_wrappers.c
+++ b/arch/powerpc/lib/checksum_wrappers.c
@@ -28,6 +28,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
 {
 	unsigned int csum;
 
+	unlock_user_access(NULL, src, len);
 	might_sleep();
 
 	*err_ptr = 0;
@@ -60,6 +61,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
 	}
 
 out:
+	lock_user_access(NULL, src, len);
 	return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_from_user);
@@ -69,6 +71,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
 {
 	unsigned int csum;
 
+	unlock_user_access(dst, NULL, len);
 	might_sleep();
 
 	*err_ptr = 0;
@@ -97,6 +100,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
 	}
 
 out:
+	lock_user_access(dst, NULL, len);
 	return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_to_user);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index e57bd46cf25b..7cea9d7fc5e8 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -223,9 +223,11 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr,
 }
 
 /* Is this a bad kernel fault ? */
-static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
+static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
 			     unsigned long address)
 {
+	int is_exec = TRAP(regs) == 0x400;
+
 	/* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
 	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
 				      DSISR_PROTFAULT))) {
@@ -236,7 +238,13 @@ static bool bad_kernel_fault(bool is_exec, unsigned long error_code,
 				   address, from_kuid(&init_user_ns,
 						      current_uid()));
 	}
-	return is_exec || (address >= TASK_SIZE);
+	if (!is_exec && address < TASK_SIZE && (error_code & DSISR_PROTFAULT) &&
+	    !search_exception_tables(regs->nip))
+		printk_ratelimited(KERN_CRIT "Kernel attempted to access user"
+				   " page (%lx) - exploit attempt? (uid: %d)\n",
+				   address, from_kuid(&init_user_ns,
+						      current_uid()));
+	return is_exec || (address >= TASK_SIZE) || !search_exception_tables(regs->nip);
 }
 
 static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
@@ -442,9 +450,10 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	/*
 	 * The kernel should never take an execute fault nor should it
-	 * take a page fault to a kernel address.
+	 * take a page fault to a kernel address or a page fault to a user
+	 * address outside of dedicated places
 	 */
-	if (unlikely(!is_user && bad_kernel_fault(is_exec, error_code, address)))
+	if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address)))
 		return SIGSEGV;
 
 	/*
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 37f84a43b822..0fe98f62c58a 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -27,6 +27,7 @@
 #include <asm/kup.h>
 
 static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
+static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
 
 static int __init parse_nosmep(char *p)
 {
@@ -36,9 +37,18 @@ static int __init parse_nosmep(char *p)
 }
 early_param("nosmep", parse_nosmep);
 
+static int __init parse_nosmap(char *p)
+{
+	disable_kuap = true;
+	pr_warn("Disabling Kernel Userspace Access Protection\n");
+	return 0;
+}
+early_param("nosmap", parse_nosmap);
+
 void __init setup_kup(void)
 {
 	setup_kuep(disable_kuep);
+	setup_kuap(disable_kuap);
 }
 
 static void pgd_ctor(void *addr)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 70830cb3c18a..68eaafd54aca 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -363,6 +363,18 @@ config PPC_KUEP
 
 	  If you're unsure, say Y.
 
+config PPC_HAVE_KUAP
+	bool
+
+config PPC_KUAP
+	bool "Kernel Userspace Access Protection"
+	depends on PPC_HAVE_KUAP
+	default y
+	help
+	  Enable support for Kernel Userspace Access Protection (KUAP)
+
+	  If you're unsure, say Y.
+
 config ARCH_ENABLE_HUGEPAGE_MIGRATION
 	def_bool y
 	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
-- 
2.13.3



* [RFC PATCH v2 05/11] powerpc/8xx: Add Kernel Userspace Execution Prevention
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (2 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 04/11] powerpc/mm: Add a framework for Kernel Userspace Access Protection Christophe Leroy
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 06/11] powerpc/8xx: Add Kernel Userspace Access Protection Christophe Leroy
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch adds Kernel Userspace Execution Prevention on the 8xx.

When a page is Executable, it is set Executable for Key 0 and NX
for Key 1.

Up to now, the User group is defined with Key 0 for both User and
Supervisor.

By changing the group to Key 0 for User and Key 1 for Supervisor,
this patch prevents the Kernel from being able to execute user code.
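
As I read the 8xx APG layout (an assumption, not stated in the patch),
each nibble of the MI_AP value packs two 2-bit group entries, which is
how the new constant below encodes the swap:

	#define MI_APG_INIT	0x44444444	/* 0x4 = 0b01'00 per pair */
	#define MI_APG_KUEP	0x66666666	/* 0x6 = 0b01'10 per pair */

so with KUEP the even-numbered groups get "01" (access follows the page
definition) and the odd-numbered ones get "10" (swapped definition),
matching the comment in the hunk below.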

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/mmu-8xx.h     |  6 ++++++
 arch/powerpc/mm/8xx_mmu.c              | 12 ++++++++++++
 arch/powerpc/platforms/Kconfig.cputype |  1 +
 3 files changed, 19 insertions(+)

diff --git a/arch/powerpc/include/asm/mmu-8xx.h b/arch/powerpc/include/asm/mmu-8xx.h
index fa05aa566ece..53dbf0788fce 100644
--- a/arch/powerpc/include/asm/mmu-8xx.h
+++ b/arch/powerpc/include/asm/mmu-8xx.h
@@ -41,6 +41,12 @@
  */
 #define MI_APG_INIT	0x44444444
 
+/*
+ * 0 => No user => 01 (all accesses performed according to page definition)
+ * 1 => User => 10 (all accesses performed according to swapped page definition)
+ */
+#define MI_APG_KUEP	0x66666666
+
 /* The effective page number register.  When read, contains the information
  * about the last instruction TLB miss.  When MI_RPN is written, bits in
  * this register are used to create the TLB entry.
diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c
index 01b7f5107c3a..f14ceb507d98 100644
--- a/arch/powerpc/mm/8xx_mmu.c
+++ b/arch/powerpc/mm/8xx_mmu.c
@@ -194,3 +194,15 @@ void flush_instruction_cache(void)
 	mtspr(SPRN_IC_CST, IDC_INVALL);
 	isync();
 }
+
+#ifdef CONFIG_PPC_KUEP
+void __init setup_kuep(bool disabled)
+{
+	if (disabled)
+		return;
+
+	pr_info("Activating Kernel Userspace Execution Prevention\n");
+
+	mtspr(SPRN_MI_AP, MI_APG_KUEP);
+}
+#endif
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 68eaafd54aca..d1757cedf60b 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -33,6 +33,7 @@ config PPC_8xx
 	bool "Freescale 8xx"
 	select FSL_SOC
 	select SYS_SUPPORTS_HUGETLBFS
+	select PPC_HAVE_KUEP
 
 config 40x
 	bool "AMCC 40x"
-- 
2.13.3



* [RFC PATCH v2 06/11] powerpc/8xx: Add Kernel Userspace Access Protection
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (3 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 05/11] powerpc/8xx: Add Kernel Userspace Execution Prevention Christophe Leroy
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch adds Kernel Userspace Access Protection on the 8xx.

When a page is RO or RW, it is set RO or RW for Key 0 and NA
for Key 1.

Up to now, the User group is defined with Key 0 for both User and
Supervisor.

By changing the group to Key 0 for User and Key 1 for Supervisor,
this patch prevents the Kernel from being able to access user data.

At exception entry, the kernel saves SPRN_MD_AP in the regs struct,
and reapplies the protection. At exception exit it restores SPRN_MD_AP
to the value it had on exception entry.

For the time being, the unused 'mq' field of the pt_regs struct is used
for that.
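
In C-like pseudocode, the exception flow described above amounts to the
following (a sketch only; the real work is done by the LOCK_USER_ACCESS
and REST_USER_ACCESS assembly macros in the hunks below):

	/* exception entry */
	regs->kuap = mfspr(SPRN_MD_AP);	/* save current state (_KUAP slot) */
	mtspr(SPRN_MD_AP, MD_APG_KUAP);	/* re-apply the protection */

	/* exception exit */
	mtspr(SPRN_MD_AP, regs->kuap);	/* restore entry state */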

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/kup.h               |  4 ++++
 arch/powerpc/include/asm/mmu-8xx.h           |  6 +++++
 arch/powerpc/include/asm/nohash/32/kup-8xx.h | 34 ++++++++++++++++++++++++++++
 arch/powerpc/mm/8xx_mmu.c                    | 12 ++++++++++
 arch/powerpc/platforms/Kconfig.cputype       |  1 +
 5 files changed, 57 insertions(+)
 create mode 100644 arch/powerpc/include/asm/nohash/32/kup-8xx.h

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 2ac540fb488f..f7262f4c427e 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -2,6 +2,10 @@
 #ifndef _ASM_POWERPC_KUP_H_
 #define _ASM_POWERPC_KUP_H_
 
+#ifdef CONFIG_PPC_8xx
+#include <asm/nohash/32/kup-8xx.h>
+#endif
+
 #ifndef __ASSEMBLY__
 
 #include <asm/pgtable.h>
diff --git a/arch/powerpc/include/asm/mmu-8xx.h b/arch/powerpc/include/asm/mmu-8xx.h
index 53dbf0788fce..01a0a1694ebd 100644
--- a/arch/powerpc/include/asm/mmu-8xx.h
+++ b/arch/powerpc/include/asm/mmu-8xx.h
@@ -120,6 +120,12 @@
  */
 #define MD_APG_INIT	0x44444444
 
+/*
+ * 0 => No user => 01 (all accesses performed according to page definition)
+ * 1 => User => 10 (all accesses performed according to swapped page definition)
+ */
+#define MD_APG_KUAP	0x66666666
+
 /* The effective page number register.  When read, contains the information
  * about the last instruction TLB miss.  When MD_RPN is written, bits in
  * this register are used to create the TLB entry.
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
new file mode 100644
index 000000000000..8f4975c0de22
--- /dev/null
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_8XX_H_
+#define _ASM_POWERPC_KUP_8XX_H_
+
+#ifdef CONFIG_PPC_KUAP
+#define LOCK_USER_ACCESS(val, sp, sr, srmax, current)	\
+	mfspr	val, SPRN_MD_AP;			\
+	stw	val, _KUAP(sp);				\
+	lis	val, MD_APG_KUAP@h;			\
+	ori	val, val, MD_APG_KUAP@l;		\
+	mtspr	SPRN_MD_AP, val
+
+#define REST_USER_ACCESS(val, sp, sr, srmax, current)	\
+	lwz	val, _KUAP(sp);				\
+	mtspr	SPRN_MD_AP, val
+
+#define KUAP_START			MD_APG_KUAP
+
+#ifndef __ASSEMBLY__
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size)
+{
+	mtspr(SPRN_MD_AP, MD_APG_KUAP);
+}
+
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size)
+{
+	mtspr(SPRN_MD_AP, MD_APG_INIT);
+}
+#endif /* !__ASSEMBLY__ */
+#endif /* CONFIG_PPC_KUAP */
+
+#endif /* _ASM_POWERPC_KUP_8XX_H_ */
diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c
index f14ceb507d98..2bba4fd2eed7 100644
--- a/arch/powerpc/mm/8xx_mmu.c
+++ b/arch/powerpc/mm/8xx_mmu.c
@@ -206,3 +206,15 @@ void __init setup_kuep(bool disabled)
 	mtspr(SPRN_MI_AP, MI_APG_KUEP);
 }
 #endif
+
+#ifdef CONFIG_PPC_KUAP
+void __init setup_kuap(bool disabled)
+{
+	pr_info("Activating Kernel Userspace Access Protection\n");
+
+	if (disabled)
+		pr_warn("KUAP cannot be disabled yet on 8xx when compiled in\n");
+
+	mtspr(SPRN_MD_AP, MD_APG_KUAP);
+}
+#endif
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index d1757cedf60b..a20669a9ec13 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -34,6 +34,7 @@ config PPC_8xx
 	select FSL_SOC
 	select SYS_SUPPORTS_HUGETLBFS
 	select PPC_HAVE_KUEP
+	select PPC_HAVE_KUAP
 
 config 40x
 	bool "AMCC 40x"
-- 
2.13.3



* [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (4 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 06/11] powerpc/8xx: Add Kernel Userspace Access Protection Christophe Leroy
@ 2018-11-28  9:27 ` Russell Currey
  2018-11-28  9:43   ` Christophe Leroy
  2018-11-28  9:46   ` Christophe LEROY
  2018-11-28  9:27 ` [RFC PATCH v2 08/11] powerpc/64s: Implement KUAP " Russell Currey
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 19+ messages in thread
From: Russell Currey @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

Execution protection already exists on radix; this patch just
refactors the radix init to provide the KUEP setup function instead.

Thus, the only functional change is that it can now be disabled.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/pgtable-radix.c        | 9 ++++++---
 arch/powerpc/platforms/Kconfig.cputype | 1 +
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 931156069a81..45aa9e501e76 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -535,8 +535,13 @@ static void radix_init_amor(void)
 	mtspr(SPRN_AMOR, (3ul << 62));
 }
 
-static void radix_init_iamr(void)
+void setup_kuep(bool disabled)
 {
+	if (disabled)
+		return;
+
+	pr_info("Activating Kernel Userspace Execution Prevention\n");
+
 	/*
 	 * Radix always uses key0 of the IAMR to determine if an access is
 	 * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
@@ -605,7 +610,6 @@ void __init radix__early_init_mmu(void)
 
 	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 
-	radix_init_iamr();
 	radix_init_pgtable();
 	/* Switch to the guard PID before turning on MMU */
 	radix__switch_mmu_context(NULL, &init_mm);
@@ -627,7 +631,6 @@ void radix__early_init_mmu_secondary(void)
 		      __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
 		radix_init_amor();
 	}
-	radix_init_iamr();
 
 	radix__switch_mmu_context(NULL, &init_mm);
 	if (cpu_has_feature(CPU_FTR_HVMODE))
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index a20669a9ec13..e6831d0ec159 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -334,6 +334,7 @@ config PPC_RADIX_MMU
 	bool "Radix MMU Support"
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
+	select PPC_HAVE_KUEP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.13.3



* [RFC PATCH v2 08/11] powerpc/64s: Implement KUAP for Radix MMU
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (5 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
@ 2018-11-28  9:27 ` Russell Currey
  2018-11-28  9:43   ` Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 09/11] powerpc/32: add helper to write into segment registers Christophe Leroy
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Russell Currey @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

Kernel Userspace Access Prevention utilises a feature of
the Radix MMU which disallows read and write access to userspace
addresses.  By utilising this, the kernel is prevented from accessing
user data from outside of trusted paths that perform proper safety
checks, such as copy_{to/from}_user() and friends.

Userspace access is disabled from early boot and is only enabled when:

        - exiting the kernel and entering userspace
        - performing an operation like copy_{to/from}_user()
        - context switching to a process that has access enabled

and similarly, access is disabled again when exiting userspace and
entering the kernel.

This feature has a slight performance impact, which I roughly measured
to be 3% slower in the worst case (performing 1GB of 1 byte
read()/write() syscalls), and is gated behind the CONFIG_PPC_KUAP
option so that performance-critical builds can leave it out.

This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y)
and performing the following:

        echo ACCESS_USERSPACE > [debugfs]/provoke-crash/DIRECT

if enabled, this should send SIGSEGV to the thread.

The KUAP state is tracked in the PACA because reading the register
that manages these accesses is costly. This has the unfortunate
downside of another layer of abstraction for platforms that implement
the locks and unlocks, but this could be useful in future for other
things too, like counters for benchmarking or smartly handling lots
of small accesses at once.
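
Concretely, the cached byte lets the exception entry path test a cheap
lbz from the PACA instead of issuing a slow mfspr of the AMR; in C the
decision is roughly (a sketch only, the real version is the
EXCEPTION_PROLOG assembly below):

	if (get_paca()->user_access_allowed) {
		/* AMR was left open by an interrupted user access */
		mtspr(SPRN_AMR, AMR_LOCKED);	/* re-lock it */
	}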

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/64/kup-radix.h | 36 ++++++++++++++++++++++++++
 arch/powerpc/include/asm/exception-64s.h       | 14 ++++++++--
 arch/powerpc/include/asm/kup.h                 |  3 +++
 arch/powerpc/include/asm/mmu.h                 |  9 ++++++-
 arch/powerpc/include/asm/reg.h                 |  1 +
 arch/powerpc/mm/pgtable-radix.c                | 12 +++++++++
 arch/powerpc/mm/pkeys.c                        |  7 +++--
 arch/powerpc/platforms/Kconfig.cputype         |  1 +
 8 files changed, 78 insertions(+), 5 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
new file mode 100644
index 000000000000..93273ca99310
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_RADIX_H
+#define _ASM_POWERPC_KUP_RADIX_H
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_KUAP
+#include <asm/reg.h>
+/*
+ * We do have the ability to individually lock/unlock reads and writes rather
+ * than both at once, however it's a significant performance hit due to needing
+ * to do a read-modify-write, which adds a mfspr, which is slow.  As a result,
+ * locking/unlocking both at once is preferred.
+ */
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, 0);
+	isync();
+	get_paca()->user_access_allowed = 1;
+}
+
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, AMR_LOCKED);
+	get_paca()->user_access_allowed = 0;
+}
+#endif /* CONFIG_PPC_KUAP */
+#endif /* __ASSEMBLY__ */
+#endif
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 4d971ca1e69b..d92614c66d87 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -264,8 +264,18 @@ BEGIN_FTR_SECTION_NESTED(943)						\
 	std	ra,offset(r13);						\
 END_FTR_SECTION_NESTED(ftr,ftr,943)
 
-#define LOCK_USER_ACCESS(reg)
-#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)							\
+BEGIN_MMU_FTR_SECTION_NESTED(944)					\
+	LOAD_REG_IMMEDIATE(reg,AMR_LOCKED);				\
+	mtspr	SPRN_AMR,reg;						\
+END_MMU_FTR_SECTION_NESTED(MMU_FTR_RADIX_KUAP,MMU_FTR_RADIX_KUAP,944)
+
+#define UNLOCK_USER_ACCESS(reg)							\
+BEGIN_MMU_FTR_SECTION_NESTED(945)					\
+	li	reg,0;							\
+	mtspr	SPRN_AMR,reg;						\
+	isync;								\
+END_MMU_FTR_SECTION_NESTED(MMU_FTR_RADIX_KUAP,MMU_FTR_RADIX_KUAP,945)
 
 #define EXCEPTION_PROLOG_0(area)					\
 	GET_PACA(r13);							\
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index f7262f4c427e..d4dd242251bd 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -5,6 +5,9 @@
 #ifdef CONFIG_PPC_8xx
 #include <asm/nohash/32/kup-8xx.h>
 #endif
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/kup-radix.h>
+#endif
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index eb20eb3b8fb0..048df188fc10 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -107,6 +107,10 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
 
+/* Supports KUAP (key 0 controlling userspace addresses) on radix
+ */
+#define MMU_FTR_RADIX_KUAP		ASM_CONST(0x80000000)
+
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
 	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -143,7 +147,10 @@ enum {
 		MMU_FTR_KERNEL_RO | MMU_FTR_68_BIT_VA |
 #ifdef CONFIG_PPC_RADIX_MMU
 		MMU_FTR_TYPE_RADIX |
-#endif
+#ifdef CONFIG_PPC_KUAP
+		MMU_FTR_RADIX_KUAP |
+#endif /* CONFIG_PPC_KUAP */
+#endif /* CONFIG_PPC_RADIX_MMU */
 		0,
 };
 
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index de52c3166ba4..d9598e6790d8 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -246,6 +246,7 @@
 #define SPRN_DSCR	0x11
 #define SPRN_CFAR	0x1c	/* Come From Address Register */
 #define SPRN_AMR	0x1d	/* Authority Mask Register */
+#define   AMR_LOCKED	0xC000000000000000UL /* Read & Write disabled */
 #define SPRN_UAMOR	0x9d	/* User Authority Mask Override Register */
 #define SPRN_AMOR	0x15d	/* Authority Mask Override Register */
 #define SPRN_ACOP	0x1F	/* Available Coprocessor Register */
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 45aa9e501e76..6490067952a0 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -29,6 +29,7 @@
 #include <asm/powernv.h>
 #include <asm/sections.h>
 #include <asm/trace.h>
+#include <asm/uaccess.h>
 
 #include <trace/events/thp.h>
 
@@ -550,6 +551,17 @@ void setup_kuep(bool disabled)
 	mtspr(SPRN_IAMR, (1ul << 62));
 }
 
+void __init setup_kuap(bool disabled)
+{
+	if (disabled)
+		return;
+
+	pr_info("Activating Kernel Userspace Access Prevention\n");
+
+	cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+	mtspr(SPRN_AMR, AMR_LOCKED);
+}
+
 void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index b271b283c785..bb3cf915016f 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -7,6 +7,7 @@
 
 #include <asm/mman.h>
 #include <asm/setup.h>
+#include <asm/uaccess.h>
 #include <linux/pkeys.h>
 #include <linux/of_device.h>
 
@@ -266,7 +267,8 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 
 void thread_pkey_regs_save(struct thread_struct *thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	/*
@@ -280,7 +282,8 @@ void thread_pkey_regs_save(struct thread_struct *thread)
 void thread_pkey_regs_restore(struct thread_struct *new_thread,
 			      struct thread_struct *old_thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	if (old_thread->amr != new_thread->amr)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index e6831d0ec159..5fbfa041194d 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -335,6 +335,7 @@ config PPC_RADIX_MMU
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
 	select PPC_HAVE_KUEP
+	select PPC_HAVE_KUAP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.13.3



* [RFC PATCH v2 09/11] powerpc/32: add helper to write into segment registers
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (6 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 08/11] powerpc/64s: Implement KUAP " Russell Currey
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 10/11] powerpc/book3s32: Prepare Kernel Userspace Access Protection Christophe Leroy
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch adds a helper which wraps the 'mtsrin' instruction
to write into segment registers.
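
On these CPUs the top four bits of the effective address operand select
which of the 16 segment registers is written, so a caller might use the
helper like this (hypothetical values, for illustration only):

	u32 sr;

	sr = mfsrin(addr);	/* read the SR covering 'addr' */
	sr |= 0x40000000;	/* e.g. set the Ks bit */
	mtsrin(sr, addr);	/* write it back */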

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/reg.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index d9598e6790d8..6b5d2a61af5a 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -1424,6 +1424,11 @@ static inline void msr_check_and_clear(unsigned long bits)
 #define mfsrin(v)	({unsigned int rval; \
 			asm volatile("mfsrin %0,%1" : "=r" (rval) : "r" (v)); \
 					rval;})
+
+static inline void mtsrin(u32 val, u32 idx)
+{
+	asm volatile("mtsrin %0, %1" : : "r" (val), "r" (idx));
+}
 #endif
 
 #define proc_trap()	asm volatile("trap")
-- 
2.13.3



* [RFC PATCH v2 10/11] powerpc/book3s32: Prepare Kernel Userspace Access Protection
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (7 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 09/11] powerpc/32: add helper to write into segment registers Christophe Leroy
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-11-28  9:27 ` [RFC PATCH v2 11/11] powerpc/book3s32: Implement " Christophe Leroy
  2018-12-23 13:27 ` [v2, 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Michael Ellerman
  10 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch prepares Kernel Userspace Access Protection for
book3s/32.

Due to limitations of the processor page protection capabilities,
the protection is only against writing. Read protection cannot be
achieved using page protection.

In order to provide the protection, Ku and Ks keys are modified in
Userspace Segment registers, and different PP bits are used to:

PP01 provides RW for Key 0 and RO for Key 1
PP10 provides RW for all
PP11 provides RO for all

Today PP10 is used for RW pages and PP11 for RO pages, with SR Ku and
Ks set to 1. This patch modifies page protection to use PP01 for RW
pages.

Then segment registers are set to Ku 0 and Ks 0. This will allow
setting up userspace write access protection by setting Ks to 1 in the
following patch.
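
Putting the pieces together, the access rights for user pages work out
as follows (my reading of the PP/key rules listed above):

	page type     today (Ku=Ks=1)      after this patch    after next patch (Ks=1)
	user RW page  PP10 -> RW for all   PP01, key 0 -> RW   kernel gets key 1 -> RO
	user RO page  PP11 -> RO for all   PP11 -> RO for all  unchanged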

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S | 20 +++++++++++---------
 arch/powerpc/mm/hash_low_32.S |  6 +++---
 2 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 61ca27929355..1aca0dba0ec1 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -522,13 +522,13 @@ InstructionTLBMiss:
 	 */
 	stw	r0,0(r2)		/* update PTE (accessed bit) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwinm	r1,r0,32-10,31,31	/* _PAGE_RW -> PP lsb */
-	rlwinm	r2,r0,32-7,31,31	/* _PAGE_DIRTY -> PP lsb */
+	rlwinm	r1,r0,32-9,30,30	/* _PAGE_RW -> PP msb */
+	rlwinm	r2,r0,32-6,30,30	/* _PAGE_DIRTY -> PP msb */
 	and	r1,r1,r2		/* writable if _RW and _DIRTY */
 	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
 	rlwimi	r0,r0,32-1,31,31	/* _PAGE_USER -> PP lsb */
 	ori	r1,r1,0xe04		/* clear out reserved bits */
-	andc	r1,r0,r1		/* PP = user? (rw&dirty? 2: 3): 0 */
+	andc	r1,r0,r1		/* PP = user? (rw&dirty? 1: 3): 0 */
 BEGIN_FTR_SECTION
 	rlwinm	r1,r1,0,~_PAGE_COHERENT	/* clear M (coherence not required) */
 END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
@@ -596,8 +596,8 @@ DataLoadTLBMiss:
 	 */
 	stw	r0,0(r2)		/* update PTE (accessed bit) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwinm	r1,r0,32-10,31,31	/* _PAGE_RW -> PP lsb */
-	rlwinm	r2,r0,32-7,31,31	/* _PAGE_DIRTY -> PP lsb */
+	rlwinm	r1,r0,32-9,30,30	/* _PAGE_RW -> PP msb */
+	rlwinm	r2,r0,32-6,30,30	/* _PAGE_DIRTY -> PP msb */
 	and	r1,r1,r2		/* writable if _RW and _DIRTY */
 	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
 	rlwimi	r0,r0,32-1,31,31	/* _PAGE_USER -> PP lsb */
@@ -680,9 +680,9 @@ DataStoreTLBMiss:
 	 */
 	stw	r0,0(r2)		/* update PTE (accessed/dirty bits) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
-	li	r1,0xe05		/* clear out reserved bits & PP lsb */
-	andc	r1,r0,r1		/* PP = user? 2: 0 */
+	rlwimi	r0,r0,32-2,31,31	/* _PAGE_USER -> PP lsb */
+	li	r1,0xe06		/* clear out reserved bits & PP msb */
+	andc	r1,r0,r1		/* PP = user? 1: 0 */
 BEGIN_FTR_SECTION
 	rlwinm	r1,r1,0,~_PAGE_COHERENT	/* clear M (coherence not required) */
 END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
@@ -1014,7 +1014,9 @@ _ENTRY(switch_mmu_context)
 	blt-	4f
 	mulli	r3,r3,897	/* multiply context by skew factor */
 	rlwinm	r3,r3,4,8,27	/* VSID = (context & 0xfffff) << 4 */
-	addis	r3,r3,0x6000	/* Set Ks, Ku bits */
+#ifdef CONFIG_PPC_KUAP
+	addis	r3,r3,0x4000	/* Set Ks, clear Ku bits */
+#endif
 	li	r0,NUM_USER_SEGMENTS
 	mtctr	r0
 
diff --git a/arch/powerpc/mm/hash_low_32.S b/arch/powerpc/mm/hash_low_32.S
index 26acf6c8c20c..0e549eb91823 100644
--- a/arch/powerpc/mm/hash_low_32.S
+++ b/arch/powerpc/mm/hash_low_32.S
@@ -316,13 +316,13 @@ Hash_msk = (((1 << Hash_bits) - 1) * 64)
 
 _GLOBAL(create_hpte)
 	/* Convert linux-style PTE (r5) to low word of PPC-style PTE (r8) */
-	rlwinm	r8,r5,32-10,31,31	/* _PAGE_RW -> PP lsb */
-	rlwinm	r0,r5,32-7,31,31	/* _PAGE_DIRTY -> PP lsb */
+	rlwinm	r8,r5,32-9,30,30	/* _PAGE_RW -> PP msb */
+	rlwinm	r0,r5,32-6,30,30	/* _PAGE_DIRTY -> PP msb */
 	and	r8,r8,r0		/* writable if _RW & _DIRTY */
 	rlwimi	r5,r5,32-1,30,30	/* _PAGE_USER -> PP msb */
 	rlwimi	r5,r5,32-2,31,31	/* _PAGE_USER -> PP lsb */
 	ori	r8,r8,0xe04		/* clear out reserved bits */
-	andc	r8,r5,r8		/* PP = user? (rw&dirty? 2: 3): 0 */
+	andc	r8,r5,r8		/* PP = user? (rw&dirty? 1: 3): 0 */
 BEGIN_FTR_SECTION
 	rlwinm	r8,r8,0,~_PAGE_COHERENT	/* clear M (coherence not required) */
 END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH v2 11/11] powerpc/book3s32: Implement Kernel Userspace Access Protection
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (8 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 10/11] powerpc/book3s32: Prepare Kernel Userspace Access Protection Christophe Leroy
@ 2018-11-28  9:27 ` Christophe Leroy
  2018-12-11  5:25   ` Russell Currey
  2018-12-23 13:27 ` [v2, 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Michael Ellerman
  10 siblings, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

This patch implements Kernel Userspace Access Protection for
book3s/32.

Due to limitations of the processor page protection capabilities,
the protection is only against writing. Read protection cannot be
achieved using page protection.

In order to provide the protection, Ku and Ks keys are modified in
Userspace Segment registers, and different PP bits are used as follows:

PP01 provides RW for Key 0 and RO for Key 1
PP10 provides RW for all
PP11 provides RO for all

Today PP10 is used for RW pages and PP11 for RO pages. This patch
modifies page protection to PP01 for RW pages.

Then segment registers are set to Ku 0 and Ks 1. When the kernel needs
to write to RW pages, the associated segment register is changed to
Ks 0 in order to allow the kernel write access.

In order to avoid having to read all segment registers when
locking/unlocking the access, some data is kept in the thread_struct
and saved on the stack on exceptions. The field identifies both the
first unlocked segment and the first segment following the last
unlocked one. When no segment is unlocked, it contains value 0.
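
To make that encoding concrete, a sketch in C (the helper name
kuap_encode() is hypothetical, for illustration only; the series
open-codes this logic in unlock_user_access() below):

/*
 * Upper nibble: the first unlocked segment.
 * Lower nibble: the first segment after the last unlocked one.
 * Zero means all segments are locked.
 */
static inline unsigned long kuap_encode(unsigned long addr, unsigned long size)
{
	unsigned long first = addr >> 28;			/* segment of first byte */
	unsigned long after = ((addr + size - 1) >> 28) + 1;	/* segment past last byte */

	return (first << 28) | (after & 0xf);
}

For instance, a 4k buffer at 0x21000000 gives 0x20000003: segment 2 is
unlocked, segment 3 is the first one still locked.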

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/32/kup.h | 98 ++++++++++++++++++++++++++++++++
 arch/powerpc/include/asm/kup.h           |  3 +
 arch/powerpc/kernel/head_32.S            |  2 +-
 arch/powerpc/mm/ppc_mmu_32.c             | 10 ++++
 arch/powerpc/platforms/Kconfig.cputype   |  1 +
 5 files changed, 113 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/book3s/32/kup.h

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
new file mode 100644
index 000000000000..7455ecaab3f9
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_BOOK3S_32_KUP_H
+#define _ASM_POWERPC_BOOK3S_32_KUP_H
+
+#ifdef CONFIG_PPC_KUAP
+#define LOCK_USER_ACCESS(val, sp, sr, srmax, thread)			\
+	lwz	sr, KUAP(thread);					\
+	stw	sr, _KUAP(sp);						\
+	cmpli	cr7, sr, 0;						\
+	beq+	cr7, 102f;						\
+	li	val, 0;							\
+	stw	val, KUAP(thread);					\
+	rlwinm	srmax, sr, 28, 0xf0000000;				\
+	mfsrin	val, sr;						\
+	oris	val, val, 0x4000;	/* Set Ks */			\
+101:									\
+	mtsrin	val, sr;						\
+	addi	val, val, 0x111;	/* next VSID */			\
+	rlwinm	val, val, 0, 8, 3;	/* clear VSID overflow */	\
+	addis	sr, sr, 0x1000;		/* address of next segment */	\
+	cmpl	cr7, sr, srmax;						\
+	blt-	cr7, 101b;						\
+102:
+
+#define REST_USER_ACCESS(val, sp, sr, srmax, curr)			\
+	lwz	sr, _KUAP(sp);						\
+	stw	sr, THREAD+KUAP(curr);					\
+	cmpli	cr7, sr, 0;						\
+	beq+	cr7, 102f;						\
+	rlwinm	srmax, sr, 28, 0xf0000000;				\
+	mfsrin	val, sr;						\
+	rlwinm	val, val, 0, ~0x40000000;	/* Clear Ks */		\
+101:									\
+	mtsrin	val, sr;						\
+	addi	val, val, 0x111;	/* next VSID */			\
+	rlwinm	val, val, 0, 8, 3;	/* clear VSID overflow */	\
+	addis	sr, sr, 0x1000;		/* address of next segment */	\
+	cmpl	cr7, sr, srmax;						\
+	blt-	cr7, 101b;						\
+102:
+
+#define KUAP_START			0
+#endif
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_KUAP
+
+#include <linux/sched.h>
+
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size)
+{
+	unsigned long addr = (unsigned long)to;
+	unsigned long end = addr + size;
+	unsigned long sr;
+
+	if (!to)
+		return;
+
+	current->thread.kuap = 0;
+	sr = mfsrin(addr);
+	sr |= 0x40000000;		/* set Ks */
+	mb();	/* make sure all writes are done before SR are updated */
+	while (addr < end) {
+		mtsrin(sr, addr);
+		sr += 0x111;		/* next VSID */
+		sr &= 0xf0ffffff;	/* clear VSID overflow */
+		addr += 0x10000000;	/* address of next segment */
+	}
+}
+
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size)
+{
+	unsigned long addr = (unsigned long)to;
+	unsigned long end = addr + size;
+	unsigned long kuap = addr & 0xf0000000;
+	unsigned long sr;
+
+	if (!to)
+		return;
+
+	sr = mfsrin(addr);
+	sr &= ~0x40000000;		/* clear Ks */
+	while (addr < end) {
+		mtsrin(sr, addr);
+		sr += 0x111;		/* next VSID */
+		sr &= 0xf0ffffff;	/* clear VSID overflow */
+		addr += 0x10000000;	/* address of next segment */
+	}
+	kuap |= (addr >> 28) & 0xf;
+	current->thread.kuap = kuap;
+	mb();	/* make sure SRs are updated before writing */
+}
+#endif
+#endif
+
+#endif /* _ASM_POWERPC_BOOK3S_32_KUP_H */
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index d4dd242251bd..7813e2bcfb7c 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -5,6 +5,9 @@
 #ifdef CONFIG_PPC_8xx
 #include <asm/nohash/32/kup-8xx.h>
 #endif
+#ifdef CONFIG_PPC_BOOK3S_32
+#include <asm/book3s/32/kup.h>
+#endif
 #ifdef CONFIG_PPC_BOOK3S_64
 #include <asm/book3s/64/kup-radix.h>
 #endif
diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 1aca0dba0ec1..f73db6891901 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -1015,7 +1015,7 @@ _ENTRY(switch_mmu_context)
 	mulli	r3,r3,897	/* multiply context by skew factor */
 	rlwinm	r3,r3,4,8,27	/* VSID = (context & 0xfffff) << 4 */
 #ifdef CONFIG_PPC_KUAP
-	addis	r3,r3,0x4000	/* Set Ks, clear Ku bits */
+	addis	r3, r3, 0x4000	/* Set Ks, clear Ku bits */
 #endif
 	li	r0,NUM_USER_SEGMENTS
 	mtctr	r0
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index f6f575bae3bc..5dfebed93ab6 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -287,3 +287,13 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	else /* Anything else has 256M mapped */
 		memblock_set_current_limit(min_t(u64, first_memblock_size, 0x10000000));
 }
+
+#ifdef CONFIG_PPC_KUAP
+void __init setup_kuap(bool disabled)
+{
+	pr_info("Activating Kernel Userspace Access Protection\n");
+
+	if (disabled)
+		pr_warn("KUAP cannot be disabled yet on 6xx when compiled in\n");
+}
+#endif
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 5fbfa041194d..0b9a8eda413a 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -24,6 +24,7 @@ choice
 config PPC_BOOK3S_32
 	bool "512x/52xx/6xx/7xx/74xx/82xx/83xx/86xx"
 	select PPC_FPU
+	select PPC_HAVE_KUAP
 
 config PPC_85xx
 	bool "Freescale 85xx"
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU
  2018-11-28  9:27 ` [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
@ 2018-11-28  9:43   ` Christophe Leroy
  2018-11-28  9:46   ` Christophe LEROY
  1 sibling, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:43 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

From: Russell Currey <ruscur@russell.cc>

Execution protection already exists on radix, this just refactors
the radix init to provide the KUEP setup function instead.

Thus, the only functional change is that it can now be disabled.
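
For context, a rough sketch of how the entry point added earlier in the
series would dispatch here (the disable_kuep/disable_kuap flags are an
assumption for illustration, e.g. set from early command-line parsing):

void __init setup_kup(void)
{
	setup_kuep(disable_kuep);	/* provided by this patch on radix */
	setup_kuap(disable_kuap);	/* the KUAP counterpart, patch 08 */
}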

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/pgtable-radix.c        | 9 ++++++---
 arch/powerpc/platforms/Kconfig.cputype | 1 +
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 931156069a81..45aa9e501e76 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -535,8 +535,13 @@ static void radix_init_amor(void)
 	mtspr(SPRN_AMOR, (3ul << 62));
 }
 
-static void radix_init_iamr(void)
+void setup_kuep(bool disabled)
 {
+	if (disabled)
+		return;
+
+	pr_info("Activating Kernel Userspace Execution Prevention\n");
+
 	/*
 	 * Radix always uses key0 of the IAMR to determine if an access is
 	 * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
@@ -605,7 +610,6 @@ void __init radix__early_init_mmu(void)
 
 	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 
-	radix_init_iamr();
 	radix_init_pgtable();
 	/* Switch to the guard PID before turning on MMU */
 	radix__switch_mmu_context(NULL, &init_mm);
@@ -627,7 +631,6 @@ void radix__early_init_mmu_secondary(void)
 		      __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
 		radix_init_amor();
 	}
-	radix_init_iamr();
 
 	radix__switch_mmu_context(NULL, &init_mm);
 	if (cpu_has_feature(CPU_FTR_HVMODE))
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index a20669a9ec13..e6831d0ec159 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -334,6 +334,7 @@ config PPC_RADIX_MMU
 	bool "Radix MMU Support"
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
+	select PPC_HAVE_KUEP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH v2 08/11] powerpc/64s: Implement KUAP for Radix MMU
  2018-11-28  9:27 ` [RFC PATCH v2 08/11] powerpc/64s: Implement KUAP " Russell Currey
@ 2018-11-28  9:43   ` Christophe Leroy
  0 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-11-28  9:43 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, ruscur
  Cc: linux-kernel, linuxppc-dev

From: Russell Currey <ruscur@russell.cc>

Kernel Userspace Access Prevention utilises a feature of
the Radix MMU which disallows read and write access to userspace
addresses.  By utilising this, the kernel is prevented from accessing
user data from outside of trusted paths that perform proper safety
checks, such as copy_{to/from}_user() and friends.

Userspace access is disabled from early boot and is only enabled when:

        - exiting the kernel and entering userspace
        - performing an operation like copy_{to/from}_user()
        - context switching to a process that has access enabled

and similarly, access is disabled again when exiting userspace and
entering the kernel.

This feature has a slight performance impact: I roughly measured the
worst case (performing 1GB of 1 byte read()/write() syscalls) to be
3% slower. It is gated behind the CONFIG_PPC_KUAP option for
performance-critical builds.

This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y)
and performing the following:

        echo ACCESS_USERSPACE > [debugfs]/provoke-crash/DIRECT

If enabled, this should send SIGSEGV to the thread.

The KUAP state is tracked in the PACA because reading the register
that manages these accesses is costly. This has the unfortunate
downside of another layer of abstraction for platforms that implement
the locks and unlocks, but this could be useful in future for other
things too, like counters for benchmarking or smartly handling lots
of small accesses at once.
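
As an illustration of how these locks and unlocks wrap a copy path, a
sketch follows; the exact call sites in asm/uaccess.h may differ, and
__copy_tofrom_user() is the existing powerpc copy primitive:

static inline unsigned long
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	unsigned long ret;

	/* AMR <- 0: open the user access window */
	unlock_user_access(to, (__force const void __user *)from, n);
	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
	/* AMR <- AMR_LOCKED: close it again */
	lock_user_access(to, (__force const void __user *)from, n);

	return ret;
}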

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/64/kup-radix.h | 36 ++++++++++++++++++++++++++
 arch/powerpc/include/asm/exception-64s.h       | 14 ++++++++--
 arch/powerpc/include/asm/kup.h                 |  3 +++
 arch/powerpc/include/asm/mmu.h                 |  9 ++++++-
 arch/powerpc/include/asm/reg.h                 |  1 +
 arch/powerpc/mm/pgtable-radix.c                | 12 +++++++++
 arch/powerpc/mm/pkeys.c                        |  7 +++--
 arch/powerpc/platforms/Kconfig.cputype         |  1 +
 8 files changed, 78 insertions(+), 5 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
new file mode 100644
index 000000000000..93273ca99310
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_RADIX_H
+#define _ASM_POWERPC_KUP_RADIX_H
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_KUAP
+#include <asm/reg.h>
+/*
+ * We do have the ability to individually lock/unlock reads and writes rather
+ * than both at once, however it's a significant performance hit due to needing
+ * to do a read-modify-write, which adds a mfspr, which is slow.  As a result,
+ * locking/unlocking both at once is preferred.
+ */
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, 0);
+	isync();
+	get_paca()->user_access_allowed = 1;
+}
+
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, AMR_LOCKED);
+	get_paca()->user_access_allowed = 0;
+}
+#endif /* CONFIG_PPC_KUAP */
+#endif /* __ASSEMBLY__ */
+#endif
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 4d971ca1e69b..d92614c66d87 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -264,8 +264,18 @@ BEGIN_FTR_SECTION_NESTED(943)						\
 	std	ra,offset(r13);						\
 END_FTR_SECTION_NESTED(ftr,ftr,943)
 
-#define LOCK_USER_ACCESS(reg)
-#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)							\
+BEGIN_MMU_FTR_SECTION_NESTED(944)					\
+	LOAD_REG_IMMEDIATE(reg,AMR_LOCKED);				\
+	mtspr	SPRN_AMR,reg;						\
+END_MMU_FTR_SECTION_NESTED(MMU_FTR_RADIX_KUAP,MMU_FTR_RADIX_KUAP,944)
+
+#define UNLOCK_USER_ACCESS(reg)							\
+BEGIN_MMU_FTR_SECTION_NESTED(945)					\
+	li	reg,0;							\
+	mtspr	SPRN_AMR,reg;						\
+	isync;								\
+END_MMU_FTR_SECTION_NESTED(MMU_FTR_RADIX_KUAP,MMU_FTR_RADIX_KUAP,945)
 
 #define EXCEPTION_PROLOG_0(area)					\
 	GET_PACA(r13);							\
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index f7262f4c427e..d4dd242251bd 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -5,6 +5,9 @@
 #ifdef CONFIG_PPC_8xx
 #include <asm/nohash/32/kup-8xx.h>
 #endif
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/kup-radix.h>
+#endif
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index eb20eb3b8fb0..048df188fc10 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -107,6 +107,10 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
 
+/* Supports KUAP (key 0 controlling userspace addresses) on radix
+ */
+#define MMU_FTR_RADIX_KUAP		ASM_CONST(0x80000000)
+
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
 	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -143,7 +147,10 @@ enum {
 		MMU_FTR_KERNEL_RO | MMU_FTR_68_BIT_VA |
 #ifdef CONFIG_PPC_RADIX_MMU
 		MMU_FTR_TYPE_RADIX |
-#endif
+#ifdef CONFIG_PPC_KUAP
+		MMU_FTR_RADIX_KUAP |
+#endif /* CONFIG_PPC_KUAP */
+#endif /* CONFIG_PPC_RADIX_MMU */
 		0,
 };
 
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index de52c3166ba4..d9598e6790d8 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -246,6 +246,7 @@
 #define SPRN_DSCR	0x11
 #define SPRN_CFAR	0x1c	/* Come From Address Register */
 #define SPRN_AMR	0x1d	/* Authority Mask Register */
+#define   AMR_LOCKED	0xC000000000000000UL /* Read & Write disabled */
 #define SPRN_UAMOR	0x9d	/* User Authority Mask Override Register */
 #define SPRN_AMOR	0x15d	/* Authority Mask Override Register */
 #define SPRN_ACOP	0x1F	/* Available Coprocessor Register */
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 45aa9e501e76..6490067952a0 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -29,6 +29,7 @@
 #include <asm/powernv.h>
 #include <asm/sections.h>
 #include <asm/trace.h>
+#include <asm/uaccess.h>
 
 #include <trace/events/thp.h>
 
@@ -550,6 +551,17 @@ void setup_kuep(bool disabled)
 	mtspr(SPRN_IAMR, (1ul << 62));
 }
 
+void __init setup_kuap(bool disabled)
+{
+	if (disabled)
+		return;
+
+	pr_info("Activating Kernel Userspace Access Prevention\n");
+
+	cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+	mtspr(SPRN_AMR, AMR_LOCKED);
+}
+
 void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index b271b283c785..bb3cf915016f 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -7,6 +7,7 @@
 
 #include <asm/mman.h>
 #include <asm/setup.h>
+#include <asm/uaccess.h>
 #include <linux/pkeys.h>
 #include <linux/of_device.h>
 
@@ -266,7 +267,8 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 
 void thread_pkey_regs_save(struct thread_struct *thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	/*
@@ -280,7 +282,8 @@ void thread_pkey_regs_save(struct thread_struct *thread)
 void thread_pkey_regs_restore(struct thread_struct *new_thread,
 			      struct thread_struct *old_thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	if (old_thread->amr != new_thread->amr)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index e6831d0ec159..5fbfa041194d 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -335,6 +335,7 @@ config PPC_RADIX_MMU
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
 	select PPC_HAVE_KUEP
+	select PPC_HAVE_KUAP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU
  2018-11-28  9:27 ` [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
  2018-11-28  9:43   ` Christophe Leroy
@ 2018-11-28  9:46   ` Christophe LEROY
  1 sibling, 0 replies; 19+ messages in thread
From: Christophe LEROY @ 2018-11-28  9:46 UTC (permalink / raw)
  To: Russell Currey, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Sorry, I forgot to reset the author, so the patch appears as coming
from you.

Le 28/11/2018 à 10:27, Russell Currey a écrit :
> Execution protection already exists on radix, this just refactors
> the radix init to provide the KUEP setup function instead.
> 
> Thus, the only functional change is that it can now be disabled.
> 
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>   arch/powerpc/mm/pgtable-radix.c        | 9 ++++++---
>   arch/powerpc/platforms/Kconfig.cputype | 1 +
>   2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
> index 931156069a81..45aa9e501e76 100644
> --- a/arch/powerpc/mm/pgtable-radix.c
> +++ b/arch/powerpc/mm/pgtable-radix.c
> @@ -535,8 +535,13 @@ static void radix_init_amor(void)
>   	mtspr(SPRN_AMOR, (3ul << 62));
>   }
>   
> -static void radix_init_iamr(void)
> +void setup_kuep(bool disabled)
>   {
> +	if (disabled)
> +		return;
> +
> +	pr_info("Activating Kernel Userspace Execution Prevention\n");
> +
>   	/*
>   	 * Radix always uses key0 of the IAMR to determine if an access is
>   	 * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
> @@ -605,7 +610,6 @@ void __init radix__early_init_mmu(void)
>   
>   	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
>   
> -	radix_init_iamr();
>   	radix_init_pgtable();
>   	/* Switch to the guard PID before turning on MMU */
>   	radix__switch_mmu_context(NULL, &init_mm);
> @@ -627,7 +631,6 @@ void radix__early_init_mmu_secondary(void)
>   		      __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
>   		radix_init_amor();
>   	}
> -	radix_init_iamr();
>   
>   	radix__switch_mmu_context(NULL, &init_mm);
>   	if (cpu_has_feature(CPU_FTR_HVMODE))
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index a20669a9ec13..e6831d0ec159 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -334,6 +334,7 @@ config PPC_RADIX_MMU
>   	bool "Radix MMU Support"
>   	depends on PPC_BOOK3S_64
>   	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
> +	select PPC_HAVE_KUEP
>   	default y
>   	help
>   	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v2 11/11] powerpc/book3s32: Implement Kernel Userspace Access Protection
  2018-11-28  9:27 ` [RFC PATCH v2 11/11] powerpc/book3s32: Implement " Christophe Leroy
@ 2018-12-11  5:25   ` Russell Currey
  2018-12-11 20:46     ` Christophe Leroy
  0 siblings, 1 reply; 19+ messages in thread
From: Russell Currey @ 2018-12-11  5:25 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

On Wed, 2018-11-28 at 09:27 +0000, Christophe Leroy wrote:
> This patch implements Kernel Userspace Access Protection for
> book3s/32.
> 
> Due to limitations of the processor page protection capabilities,
> the protection is only against writing. Read protection cannot be
> achieved using page protection.
> 
> In order to provide the protection, Ku and Ks keys are modified in
> Userspace Segment registers, and different PP bits are used as follows:
> 
> PP01 provides RW for Key 0 and RO for Key 1
> PP10 provides RW for all
> PP11 provides RO for all
> 
> Today PP10 is used for RW pages and PP11 for RO pages. This patch
> modifies page protection to PP01 for RW pages.
> 
> Then segment registers are set to Ku 0 and Ks 1. When the kernel needs
> to write to RW pages, the associated segment register is changed to
> Ks 0 in order to allow the kernel write access.
> 
> In order to avoid having to read all segment registers when
> locking/unlocking the access, some data is kept in the thread_struct
> and saved on the stack on exceptions. The field identifies both the
> first unlocked segment and the first segment following the last
> unlocked one. When no segment is unlocked, it contains value 0.
> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

Hey Christophe, I tried to test this and got a machine check after the
kernel starts init.

Vector: 700 (Program Check) at [ef0b5e70]
    pc: 00000ca4
    lr: b7e1a030
    sp: ef0b5f30
   msr: 81002
  current = 0xef0b8000
    pid   = 1, comm = init

Testing with mac99 model in qemu.

- Russell


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v2 11/11] powerpc/book3s32: Implement Kernel Userspace Access Protection
  2018-12-11  5:25   ` Russell Currey
@ 2018-12-11 20:46     ` Christophe Leroy
  0 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-12-11 20:46 UTC (permalink / raw)
  To: Russell Currey, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev



On 12/11/2018 05:25 AM, Russell Currey wrote:
> On Wed, 2018-11-28 at 09:27 +0000, Christophe Leroy wrote:
>> This patch implements Kernel Userspace Access Protection for
>> book3s/32.
>>
>> Due to limitations of the processor page protection capabilities,
>> the protection is only against writing. Read protection cannot be
>> achieved using page protection.
>>
>> In order to provide the protection, Ku and Ks keys are modified in
>> Userspace Segment registers, and different PP bits are used as follows:
>>
>> PP01 provides RW for Key 0 and RO for Key 1
>> PP10 provides RW for all
>> PP11 provides RO for all
>>
>> Today PP10 is used for RW pages and PP11 for RO pages. This patch
>> modifies page protection to PP01 for RW pages.
>>
>> Then segment registers are set to Ku 0 and Ks 1. When the kernel needs
>> to write to RW pages, the associated segment register is changed to
>> Ks 0 in order to allow the kernel write access.
>>
>> In order to avoid having to read all segment registers when
>> locking/unlocking the access, some data is kept in the thread_struct
>> and saved on the stack on exceptions. The field identifies both the
>> first unlocked segment and the first segment following the last
>> unlocked one. When no segment is unlocked, it contains value 0.
>>
>> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> 
> Hey Christophe, I tried to test this and got a machine check after the
> kernel starts init.

A program check, you mean?

> 
> Vector: 700 (Program Check) at [ef0b5e70]
>      pc: 00000ca4
>      lr: b7e1a030
>      sp: ef0b5f30
>     msr: 81002
>    current = 0xef0b8000
>      pid   = 1, comm = init
> 
> Testing with mac99 model in qemu.

That's pretty surprising. At 0xca4 there is nothing particular for me;
it is in the system call handler. Do you have the same?
How can this trigger a program check? According to the MSR, the check
is due to an illegal instruction (bit 12). And we are running with the
MMU off.
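
For reference, decoding that MSR value with the usual 32-bit MSR/SRR1
bit assignments (IBM numbering, bit 0 being the MSB):

	0x81002 = 0x80000	SRR1 bit 12: illegal instruction
	        | 0x01000	ME  (machine check enable)
	        | 0x00002	RI  (recoverable interrupt)

IR (0x20) and DR (0x10) are both clear, hence translation (MMU) off.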

c0000c00 <SystemCall>:
c0000c00:       7d 50 43 a6     mtsprg  0,r10
c0000c04:       7d 71 43 a6     mtsprg  1,r11
c0000c08:       7d 40 00 26     mfcr    r10
c0000c0c:       7d 7b 02 a6     mfsrr1  r11
c0000c10:       71 6b 40 00     andi.   r11,r11,16384
c0000c14:       3d 61 40 00     addis   r11,r1,16384
c0000c18:       41 82 00 14     beq     c0000c2c <SystemCall+0x2c>
c0000c1c:       7d 73 42 a6     mfsprg  r11,3
c0000c20:       81 6b fb d8     lwz     r11,-1064(r11)
c0000c24:       39 6b 20 00     addi    r11,r11,8192
c0000c28:       3d 6b 40 00     addis   r11,r11,16384
c0000c2c:       39 6b ff 40     addi    r11,r11,-192
c0000c30:       91 4b 00 a8     stw     r10,168(r11)
c0000c34:       91 8b 00 40     stw     r12,64(r11)
c0000c38:       91 2b 00 34     stw     r9,52(r11)
c0000c3c:       7d 50 42 a6     mfsprg  r10,0
c0000c40:       91 4b 00 38     stw     r10,56(r11)
c0000c44:       7d 91 42 a6     mfsprg  r12,1
c0000c48:       91 8b 00 3c     stw     r12,60(r11)
c0000c4c:       7d 48 02 a6     mflr    r10
c0000c50:       91 4b 00 a0     stw     r10,160(r11)
c0000c54:       7d 9a 02 a6     mfsrr0  r12
c0000c58:       7d 3b 02 a6     mfsrr1  r9
c0000c5c:       90 2b 00 14     stw     r1,20(r11)
c0000c60:       90 2b 00 00     stw     r1,0(r11)
c0000c64:       3c 2b c0 00     addis   r1,r11,-16384
c0000c68:       39 40 10 02     li      r10,4098
c0000c6c:       7d 40 01 24     mtmsr   r10
c0000c70:       90 0b 00 10     stw     r0,16(r11)
c0000c74:       3d 40 72 65     lis     r10,29285
c0000c78:       39 4a 67 73     addi    r10,r10,26483
c0000c7c:       91 4b 00 08     stw     r10,8(r11)
c0000c80:       90 6b 00 1c     stw     r3,28(r11)
c0000c84:       90 8b 00 20     stw     r4,32(r11)
c0000c88:       90 ab 00 24     stw     r5,36(r11)
c0000c8c:       90 cb 00 28     stw     r6,40(r11)
c0000c90:       90 eb 00 2c     stw     r7,44(r11)
c0000c94:       91 0b 00 30     stw     r8,48(r11)
c0000c98:       39 40 0c 01     li      r10,3073
c0000c9c:       91 4b 00 b0     stw     r10,176(r11)
c0000ca0:       39 40 10 32     li      r10,4146
c0000ca4:       51 2a 04 20     rlwimi  r10,r9,0,16,16
c0000ca8:       48 01 13 5d     bl      c0012004 <transfer_to_handler>

Christophe

> 
> - Russell
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v2 04/11] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2018-11-28  9:27 ` [RFC PATCH v2 04/11] powerpc/mm: Add a framework for Kernel Userspace Access Protection Christophe Leroy
@ 2018-12-21  5:07   ` Michael Ellerman
  2018-12-21  6:48     ` Christophe Leroy
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Ellerman @ 2018-12-21  5:07 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, ruscur
  Cc: linux-kernel, linuxppc-dev

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> This patch implements a framework for Kernel Userspace Access
> Protection.
>
> Then subarches will have the possibility to provide their own
> implementation by providing setup_kuap() and lock/unlock_user_access().
>
> Some platforms will need to know the area accessed and whether it is
> accessed for read, write or both. Therefore source, destination and
> size are handed over to the two functions.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

I think some of this code came from Russell's original patch?

In which case we should have his signed-off-by here.

cheers

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v2 04/11] powerpc/mm: Add a framework for Kernel Userspace Access Protection
  2018-12-21  5:07   ` Michael Ellerman
@ 2018-12-21  6:48     ` Christophe Leroy
  0 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2018-12-21  6:48 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, ruscur
  Cc: linux-kernel, linuxppc-dev



Le 21/12/2018 à 06:07, Michael Ellerman a écrit :
> Christophe Leroy <christophe.leroy@c-s.fr> writes:
> 
>> This patch implements a framework for Kernel Userspace Access
>> Protection.
>>
>> Then subarches will have the possibility to provide their own
>> implementation by providing setup_kuap() and lock/unlock_user_access().
>>
>> Some platforms will need to know the area accessed and whether it is
>> accessed for read, write or both. Therefore source, destination and
>> size are handed over to the two functions.
>>
>> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> 
> I think some of this code came from Russell's original patch?
> 
> In which case we should have his signed-off-by here.
> 

Yes, that's right, the ppc64 part is from Russell. As it's still an RFC
and there is still some work to be done, I didn't pay much attention to
Signed-off-by and other tags yet.

Signed-off-by: Russell Currey <ruscur@russell.cc>


Christophe

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [v2, 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx
  2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
                   ` (9 preceding siblings ...)
  2018-11-28  9:27 ` [RFC PATCH v2 11/11] powerpc/book3s32: Implement " Christophe Leroy
@ 2018-12-23 13:27 ` Michael Ellerman
  10 siblings, 0 replies; 19+ messages in thread
From: Michael Ellerman @ 2018-12-23 13:27 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, ruscur
  Cc: linuxppc-dev, linux-kernel

On Wed, 2018-11-28 at 09:27:04 UTC, Christophe Leroy wrote:
> On the 8xx, no-execute is set via PPP bits in the PTE. Therefore
> a no-exec fault generates DSISR_PROTFAULT error bits,
> not DSISR_NOEXEC_OR_G.
> 
> This patch adds DSISR_PROTFAULT in the test mask.
> 
> Fixes: d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults")
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/ffca395b11c4a5a6df6d6345f794b0

cheers

^ permalink raw reply	[flat|nested] 19+ messages in thread

Thread overview: 19+ messages
2018-11-28  9:27 [PATCH v2 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 02/11] powerpc: Add framework for Kernel Userspace Protection Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 03/11] powerpc: Add skeleton for Kernel Userspace Execution Prevention Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 04/11] powerpc/mm: Add a framework for Kernel Userspace Access Protection Christophe Leroy
2018-12-21  5:07   ` Michael Ellerman
2018-12-21  6:48     ` Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 05/11] powerpc/8xx: Add Kernel Userspace Execution Prevention Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 06/11] powerpc/8xx: Add Kernel Userspace Access Protection Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 07/11] powerpc/mm/radix: Use KUEP API for Radix MMU Russell Currey
2018-11-28  9:43   ` Christophe Leroy
2018-11-28  9:46   ` Christophe LEROY
2018-11-28  9:27 ` [RFC PATCH v2 08/11] powerpc/64s: Implement KUAP " Russell Currey
2018-11-28  9:43   ` Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 09/11] powerpc/32: add helper to write into segment registers Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 10/11] powerpc/book3s32: Prepare Kernel Userspace Access Protection Christophe Leroy
2018-11-28  9:27 ` [RFC PATCH v2 11/11] powerpc/book3s32: Implement " Christophe Leroy
2018-12-11  5:25   ` Russell Currey
2018-12-11 20:46     ` Christophe Leroy
2018-12-23 13:27 ` [v2, 01/11] powerpc/mm: Fix reporting of kernel execute faults on the 8xx Michael Ellerman
