kernel-hardening.lists.openwall.com archive mirror
* [PATCH v2 0/3] Kernel Userspace Protection for Radix MMU
@ 2018-12-10  7:00 Russell Currey
  2018-12-10  7:00 ` [PATCH v2 1/3] powerpc/mm/radix: Use KUEP API " Russell Currey
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Russell Currey @ 2018-12-10  7:00 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mikey, mpe, benh, npiggin, christophe.leroy, kernel-hardening,
	Russell Currey

This series is based on Christophe's series:
http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=78469
with some minor changes.

I wanted to move my patches to apply at the tail of his series to make it
easier for the two of us to work on independent parts, so I'm resending my
part of the series with the intent that it applies at the end.  There are
two required changes to his series to make this work:

In patch 04/11, the #ifdef around the paca_struct flag user_access_allowed
needs to be dropped.

With my patches removed, patch 11/11 needs to not reference
asm/book3s/64/kup-radix.h in asm/book3s/64/kup.h (so below the kup.h chunk
in that patch).

Sorry for being a pain; I'd rather not send a gigantic series full of
patches that aren't mine.

This branch shows how I'd imagine it would be pulled together:
https://github.com/ruscur/linux/commits/kuap2

Since the last version of this series:

      - fixed issues booting on hash, so the series now fully bisects
      - dropped some parts which are now part of Christophe's series
      - fixed __patch_instruction() in early boot
      - saved three instructions in LOCK_USER_ACCESS()

Russell Currey (3):
  powerpc/mm/radix: Use KUEP API for Radix MMU
  powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
  powerpc/64s: Implement KUAP for Radix MMU

 .../powerpc/include/asm/book3s/64/kup-radix.h | 36 +++++++++++++++++++
 arch/powerpc/include/asm/exception-64s.h      | 15 ++++++--
 arch/powerpc/include/asm/kup.h                |  3 ++
 arch/powerpc/include/asm/mmu.h                |  9 ++++-
 arch/powerpc/include/asm/reg.h                |  1 +
 arch/powerpc/lib/code-patching.c              |  4 +--
 arch/powerpc/mm/pgtable-radix.c               | 25 +++++++++++--
 arch/powerpc/mm/pkeys.c                       |  7 ++--
 arch/powerpc/platforms/Kconfig.cputype        |  2 ++
 9 files changed, 92 insertions(+), 10 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h

-- 
2.19.2

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v2 1/3] powerpc/mm/radix: Use KUEP API for Radix MMU
  2018-12-10  7:00 [PATCH v2 0/3] Kernel Userspace Protection for Radix MMU Russell Currey
@ 2018-12-10  7:00 ` Russell Currey
  2018-12-10  7:00 ` [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() Russell Currey
  2018-12-10  7:00 ` [PATCH v2 3/3] powerpc/64s: Implement KUAP for Radix MMU Russell Currey
  2 siblings, 0 replies; 7+ messages in thread
From: Russell Currey @ 2018-12-10  7:00 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mikey, mpe, benh, npiggin, christophe.leroy, kernel-hardening,
	Russell Currey

Execution protection already exists on radix; this patch just refactors
the radix init to provide the KUEP setup function instead.

Thus, the only functional change is that it can now be disabled.

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/mm/pgtable-radix.c        | 11 ++++++++---
 arch/powerpc/platforms/Kconfig.cputype |  1 +
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 931156069a81..3565e266994b 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -535,8 +535,14 @@ static void radix_init_amor(void)
 	mtspr(SPRN_AMOR, (3ul << 62));
 }
 
-static void radix_init_iamr(void)
+#ifdef CONFIG_PPC_KUEP
+void __init setup_kuep(bool disabled)
 {
+	if (disabled || !early_radix_enabled())
+		return;
+
+	pr_warn("Activating Kernel Userspace Execution Prevention\n");
+
 	/*
 	 * Radix always uses key0 of the IAMR to determine if an access is
 	 * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
@@ -544,6 +550,7 @@ static void radix_init_iamr(void)
 	 */
 	mtspr(SPRN_IAMR, (1ul << 62));
 }
+#endif
 
 void __init radix__early_init_mmu(void)
 {
@@ -605,7 +612,6 @@ void __init radix__early_init_mmu(void)
 
 	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 
-	radix_init_iamr();
 	radix_init_pgtable();
 	/* Switch to the guard PID before turning on MMU */
 	radix__switch_mmu_context(NULL, &init_mm);
@@ -627,7 +633,6 @@ void radix__early_init_mmu_secondary(void)
 		      __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
 		radix_init_amor();
 	}
-	radix_init_iamr();
 
 	radix__switch_mmu_context(NULL, &init_mm);
 	if (cpu_has_feature(CPU_FTR_HVMODE))
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 9997b5ea5693..48cc8df0fdd2 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -335,6 +335,7 @@ config PPC_RADIX_MMU
 	bool "Radix MMU Support"
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
+	select PPC_HAVE_KUEP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.19.2
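As an aside on the IBM bit numbering used in the IAMR comment above: IBM numbering counts from the most significant bit, so IBM bit n of a 64-bit SPR is (1ul << (63 - n)). A quick host-side sketch in plain C (illustrative only; ibm_bit is not a kernel helper):

```c
#include <assert.h>
#include <stdint.h>

/* IBM bit n of a 64-bit register, counted down from the MSB. */
static uint64_t ibm_bit(unsigned int n)
{
	return 1ULL << (63 - n);
}
```

So the (1ul << 62) written to SPRN_IAMR is IBM bit 1, which per the comment in the patch is the bit of key 0 that prevents instruction fetch.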


* [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
  2018-12-10  7:00 [PATCH v2 0/3] Kernel Userspace Protection for Radix MMU Russell Currey
  2018-12-10  7:00 ` [PATCH v2 1/3] powerpc/mm/radix: Use KUEP API " Russell Currey
@ 2018-12-10  7:00 ` Russell Currey
  2018-12-17  7:09   ` Christophe Leroy
  2018-12-10  7:00 ` [PATCH v2 3/3] powerpc/64s: Implement KUAP for Radix MMU Russell Currey
  2 siblings, 1 reply; 7+ messages in thread
From: Russell Currey @ 2018-12-10  7:00 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mikey, mpe, benh, npiggin, christophe.leroy, kernel-hardening,
	Russell Currey

__patch_instruction() is called in early boot, and uses
__put_user_size(), which includes the locks and unlocks for KUAP.
Those could either be called too early or, in the Radix case, be forced
to use "early_" versions of functions just to safely handle this one case.

__put_user_asm() does not do this, and thus is safe to use both in early
boot, and later on since in this case it should only ever be touching
kernel memory.

__patch_instruction() was previously refactored to use __put_user_size()
in order to be able to return -EFAULT, which would allow the kernel to
patch instructions in userspace, something that should never happen.  The
functional change here is that userspace addresses now fault if KUAP is
turned on, which again should never happen in practice.

A future enhancement could be to double check the patch address is
definitely allowed to be tampered with by the kernel.

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 arch/powerpc/lib/code-patching.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 89502cbccb1b..15e8c6339960 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -26,9 +26,9 @@
 static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 			       unsigned int *patch_addr)
 {
-	int err;
+	int err = 0;
 
-	__put_user_size(instr, patch_addr, 4, err);
+	__put_user_asm(instr, patch_addr, err, "stw");
 	if (err)
 		return err;
 
-- 
2.19.2
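One detail worth spelling out: unlike __put_user_size(), __put_user_asm() only assigns to err on the exception-fixup path, so the caller must pre-initialise it, hence the `int err = 0` in the hunk above. A userspace analogue of that contract (put_asm_sim and the NULL-pointer "fault" are illustrative inventions, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

#define EFAULT 14

/* Mimics __put_user_asm's contract: *err is written only when the
 * store "faults" (simulated here by a NULL destination); on success
 * it is left untouched, so it must start out as 0. */
static void put_asm_sim(unsigned int instr, unsigned int *addr, int *err)
{
	if (!addr) {
		*err = -EFAULT;	/* the exception fixup path */
		return;
	}
	*addr = instr;		/* success: err untouched */
}
```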


* [PATCH v2 3/3] powerpc/64s: Implement KUAP for Radix MMU
  2018-12-10  7:00 [PATCH v2 0/3] Kernel Userspace Protection for Radix MMU Russell Currey
  2018-12-10  7:00 ` [PATCH v2 1/3] powerpc/mm/radix: Use KUEP API " Russell Currey
  2018-12-10  7:00 ` [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() Russell Currey
@ 2018-12-10  7:00 ` Russell Currey
  2 siblings, 0 replies; 7+ messages in thread
From: Russell Currey @ 2018-12-10  7:00 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: mikey, mpe, benh, npiggin, christophe.leroy, kernel-hardening,
	Russell Currey

Kernel Userspace Access Prevention utilises a feature of
the Radix MMU which disallows read and write access to userspace
addresses.  By using this, the kernel is prevented from accessing
user data from outside of trusted paths that perform proper safety
checks, such as copy_{to/from}_user() and friends.

Userspace access is disabled from early boot and is only enabled when:

        - exiting the kernel and entering userspace
        - performing an operation like copy_{to/from}_user()
        - context switching to a process that has access enabled

and similarly, access is disabled again when exiting userspace and
entering the kernel.

This feature has a slight performance impact, which I roughly measured
to be 3% slower in the worst case (performing 1GB of 1 byte
read()/write() syscalls), and is gated behind the CONFIG_PPC_KUAP
option for performance-critical builds.

This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y)
and performing the following:

        echo ACCESS_USERSPACE > [debugfs]/provoke-crash/DIRECT

If enabled, this should send a SIGSEGV to the thread.

The KUAP state is tracked in the PACA because reading the register
that manages these accesses is costly. This has the unfortunate
downside of another layer of abstraction for platforms that implement
the locks and unlocks, but this could be useful in future for other
things too, like counters for benchmarking or smartly handling lots
of small accesses at once.
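As a rough userspace sketch of how the AMR value and the PACA shadow flag move together across an access window: AMR_LOCKED and user_access_allowed are the names from the patch below, while the globals standing in for the SPR and the PACA are illustrative.

```c
#include <assert.h>
#include <stdint.h>

#define AMR_LOCKED 0xC000000000000000ULL	/* read & write disabled for key 0 */

/* Stand-ins for the real SPR and the PACA field. */
static uint64_t amr = AMR_LOCKED;
static int user_access_allowed;

static void unlock_user_access_sim(void)
{
	amr = 0;			/* mtspr(SPRN_AMR, 0); isync() */
	user_access_allowed = 1;	/* cheap-to-read shadow of the SPR */
}

static void lock_user_access_sim(void)
{
	amr = AMR_LOCKED;		/* mtspr(SPRN_AMR, AMR_LOCKED) */
	user_access_allowed = 0;
}
```

The shadow flag is the point of the PACA tracking: reading the real AMR back with mfspr is costly, so kernel paths can test the PACA field instead.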

Signed-off-by: Russell Currey <ruscur@russell.cc>
---
 .../powerpc/include/asm/book3s/64/kup-radix.h | 36 +++++++++++++++++++
 arch/powerpc/include/asm/exception-64s.h      | 15 ++++++--
 arch/powerpc/include/asm/kup.h                |  3 ++
 arch/powerpc/include/asm/mmu.h                |  9 ++++-
 arch/powerpc/include/asm/reg.h                |  1 +
 arch/powerpc/mm/pgtable-radix.c               | 14 ++++++++
 arch/powerpc/mm/pkeys.c                       |  7 ++--
 arch/powerpc/platforms/Kconfig.cputype        |  1 +
 8 files changed, 81 insertions(+), 5 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
new file mode 100644
index 000000000000..93273ca99310
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_RADIX_H
+#define _ASM_POWERPC_KUP_RADIX_H
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_KUAP
+#include <asm/reg.h>
+/*
+ * We do have the ability to individually lock/unlock reads and writes rather
+ * than both at once, however it's a significant performance hit due to needing
+ * to do a read-modify-write, which adds a mfspr, which is slow.  As a result,
+ * locking/unlocking both at once is preferred.
+ */
+static inline void unlock_user_access(void __user *to, const void __user *from,
+				      unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, 0);
+	isync();
+	get_paca()->user_access_allowed = 1;
+}
+
+static inline void lock_user_access(void __user *to, const void __user *from,
+				    unsigned long size)
+{
+	if (!mmu_has_feature(MMU_FTR_RADIX_KUAP))
+		return;
+
+	mtspr(SPRN_AMR, AMR_LOCKED);
+	get_paca()->user_access_allowed = 0;
+}
+#endif /* CONFIG_PPC_KUAP */
+#endif /* __ASSEMBLY__ */
+#endif
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 4d971ca1e69b..0ed4923c1282 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -264,8 +264,19 @@ BEGIN_FTR_SECTION_NESTED(943)						\
 	std	ra,offset(r13);						\
 END_FTR_SECTION_NESTED(ftr,ftr,943)
 
-#define LOCK_USER_ACCESS(reg)
-#define UNLOCK_USER_ACCESS(reg)
+#define LOCK_USER_ACCESS(reg)						\
+BEGIN_MMU_FTR_SECTION_NESTED(944)					\
+	lis	reg,(AMR_LOCKED)@highest;				\
+	rldicr	reg,reg,32,31;						\
+	mtspr	SPRN_AMR,reg;						\
+END_MMU_FTR_SECTION_NESTED(MMU_FTR_RADIX_KUAP,MMU_FTR_RADIX_KUAP,944)
+
+#define UNLOCK_USER_ACCESS(reg)						\
+BEGIN_MMU_FTR_SECTION_NESTED(945)					\
+	li	reg,0;							\
+	mtspr	SPRN_AMR,reg;						\
+	isync;								\
+END_MMU_FTR_SECTION_NESTED(MMU_FTR_RADIX_KUAP,MMU_FTR_RADIX_KUAP,945)
 
 #define EXCEPTION_PROLOG_0(area)					\
 	GET_PACA(r13);							\
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 16ff3f9da29c..7813e2bcfb7c 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -8,6 +8,9 @@
 #ifdef CONFIG_PPC_BOOK3S_32
 #include <asm/book3s/32/kup.h>
 #endif
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/kup-radix.h>
+#endif
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index eb20eb3b8fb0..048df188fc10 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -107,6 +107,10 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
 
+/* Supports KUAP (key 0 controlling userspace addresses) on radix
+ */
+#define MMU_FTR_RADIX_KUAP		ASM_CONST(0x80000000)
+
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
 	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -143,7 +147,10 @@ enum {
 		MMU_FTR_KERNEL_RO | MMU_FTR_68_BIT_VA |
 #ifdef CONFIG_PPC_RADIX_MMU
 		MMU_FTR_TYPE_RADIX |
-#endif
+#ifdef CONFIG_PPC_KUAP
+		MMU_FTR_RADIX_KUAP |
+#endif /* CONFIG_PPC_KUAP */
+#endif /* CONFIG_PPC_RADIX_MMU */
 		0,
 };
 
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index c9c382e57017..6b5d2a61af5a 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -246,6 +246,7 @@
 #define SPRN_DSCR	0x11
 #define SPRN_CFAR	0x1c	/* Come From Address Register */
 #define SPRN_AMR	0x1d	/* Authority Mask Register */
+#define   AMR_LOCKED	0xC000000000000000UL /* Read & Write disabled */
 #define SPRN_UAMOR	0x9d	/* User Authority Mask Override Register */
 #define SPRN_AMOR	0x15d	/* Authority Mask Override Register */
 #define SPRN_ACOP	0x1F	/* Available Coprocessor Register */
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 3565e266994b..e248f330d6fd 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -29,6 +29,7 @@
 #include <asm/powernv.h>
 #include <asm/sections.h>
 #include <asm/trace.h>
+#include <asm/uaccess.h>
 
 #include <trace/events/thp.h>
 
@@ -552,6 +553,19 @@ void __init setup_kuep(bool disabled)
 }
 #endif
 
+#ifdef CONFIG_PPC_KUAP
+void __init setup_kuap(bool disabled)
+{
+	if (disabled || !early_radix_enabled())
+		return;
+
+	pr_info("Activating Kernel Userspace Access Prevention\n");
+
+	cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+	mtspr(SPRN_AMR, AMR_LOCKED);
+}
+#endif
+
 void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index b271b283c785..bb3cf915016f 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -7,6 +7,7 @@
 
 #include <asm/mman.h>
 #include <asm/setup.h>
+#include <asm/uaccess.h>
 #include <linux/pkeys.h>
 #include <linux/of_device.h>
 
@@ -266,7 +267,8 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 
 void thread_pkey_regs_save(struct thread_struct *thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	/*
@@ -280,7 +282,8 @@ void thread_pkey_regs_save(struct thread_struct *thread)
 void thread_pkey_regs_restore(struct thread_struct *new_thread,
 			      struct thread_struct *old_thread)
 {
-	if (static_branch_likely(&pkey_disabled))
+	if (static_branch_likely(&pkey_disabled) &&
+	    !mmu_has_feature(MMU_FTR_RADIX_KUAP))
 		return;
 
 	if (old_thread->amr != new_thread->amr)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 48cc8df0fdd2..0b9a8eda413a 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -336,6 +336,7 @@ config PPC_RADIX_MMU
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
 	select PPC_HAVE_KUEP
+	select PPC_HAVE_KUAP
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.19.2
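The two-instruction constant load in LOCK_USER_ACCESS() above (presumably the three-instruction saving mentioned in the cover letter) can be sanity-checked on the host. lis and rldicr are modelled here in plain C purely to illustrate what the sequence computes; this is not kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define AMR_LOCKED 0xC000000000000000ULL

/* lis reg,SI: load SI into bits 16..31 and sign-extend to 64 bits. */
static uint64_t lis(uint16_t si)
{
	return (uint64_t)(int64_t)(int32_t)((uint32_t)si << 16);
}

/* rldicr reg,reg,32,31: rotate left by 32, keep only the top 32 bits. */
static uint64_t rldicr_32_31(uint64_t r)
{
	uint64_t rot = (r << 32) | (r >> 32);
	return rot & 0xFFFFFFFF00000000ULL;
}
```

(AMR_LOCKED)@highest is bits 48..63 of the constant, i.e. 0xC000, so the pair reconstructs the full 64-bit value despite lis's sign extension.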


* Re: [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
  2018-12-10  7:00 ` [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() Russell Currey
@ 2018-12-17  7:09   ` Christophe Leroy
  2019-01-25 11:45     ` Christophe Leroy
  0 siblings, 1 reply; 7+ messages in thread
From: Christophe Leroy @ 2018-12-17  7:09 UTC (permalink / raw)
  To: Russell Currey, linuxppc-dev; +Cc: mikey, mpe, benh, npiggin, kernel-hardening

Hi Russell,

Le 10/12/2018 à 08:00, Russell Currey a écrit :
> __patch_instruction() is called in early boot, and uses
> __put_user_size(), which includes the locks and unlocks for KUAP,
> which could either be called too early, or in the Radix case, forced to
> use "early_" versions of functions just to safely handle this one case.

Looking at x86, I see that __put_user_size() doesn't include the locks.
The lock/unlock is done by the callers. I'll do the same.


> 
> __put_user_asm() does not do this, and thus is safe to use both in early
> boot, and later on since in this case it should only ever be touching
> kernel memory.
> 
> __patch_instruction() was previously refactored to use __put_user_size()
> in order to be able to return -EFAULT, which would allow the kernel to
> patch instructions in userspace, which should never happen.  This has
> the functional change of causing faults on userspace addresses if KUAP
> is turned on, which should never happen in practice.
> 
> A future enhancement could be to double check the patch address is
> definitely allowed to be tampered with by the kernel.

This makes me realise that we are calling lock_user_access() with kernel 
addresses. That most likely breaks protection on kernel addresses for 
book3s/32. I'll have to work around it.

Another thing I realised is that get_user() at least is called in 
some exceptions/trap handlers, which means it can be called nested with 
an ongoing user access. It means that get_paca()->user_access_allowed 
might be modified during those exceptions/traps.

Christophe

> 
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>   arch/powerpc/lib/code-patching.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 89502cbccb1b..15e8c6339960 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -26,9 +26,9 @@
>   static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
>   			       unsigned int *patch_addr)
>   {
> -	int err;
> +	int err = 0;
>   
> -	__put_user_size(instr, patch_addr, 4, err);
> +	__put_user_asm(instr, patch_addr, err, "stw");
>   	if (err)
>   		return err;
>   
> 


* Re: [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
  2018-12-17  7:09   ` Christophe Leroy
@ 2019-01-25 11:45     ` Christophe Leroy
  2019-02-20 11:57       ` Russell Currey
  0 siblings, 1 reply; 7+ messages in thread
From: Christophe Leroy @ 2019-01-25 11:45 UTC (permalink / raw)
  To: Russell Currey, linuxppc-dev; +Cc: mikey, mpe, benh, npiggin, kernel-hardening

Hi Russell,

Le 17/12/2018 à 08:09, Christophe Leroy a écrit :
> Hi Russel,
> 
> Le 10/12/2018 à 08:00, Russell Currey a écrit :
>> __patch_instruction() is called in early boot, and uses
>> __put_user_size(), which includes the locks and unlocks for KUAP,
>> which could either be called too early, or in the Radix case, forced to
>> use "early_" versions of functions just to safely handle this one case.
> 
> Looking at x86, I see that __put_user_size() doesn't includes the locks. 
> The lock/unlock is do by callers. I'll do the same.
> 
> 
>>
>> __put_user_asm() does not do this, and thus is safe to use both in early
>> boot, and later on since in this case it should only ever be touching
>> kernel memory.
>>
>> __patch_instruction() was previously refactored to use __put_user_size()
>> in order to be able to return -EFAULT, which would allow the kernel to
>> patch instructions in userspace, which should never happen.  This has
>> the functional change of causing faults on userspace addresses if KUAP
>> is turned on, which should never happen in practice.
>>
>> A future enhancement could be to double check the patch address is
>> definitely allowed to be tampered with by the kernel.
> 
> This makes me realise that we are calling lock_user_access() with kernel 
> addresses. That most likely breaks protection on kernel addresses for 
> book3s/32. I'll have to work around it.
> 
> Another thing I realised also is that get_user() at least is called in 
> some exceptions/trap handlers. Which means it can be called nested with 
> an ongoing user access. It means that get_paca()->user_access_allowed 
> might be modified during those exceptions/traps.

Any comment on that? Isn't it a problem?

Christophe

> 
> Christophe
> 
>>
>> Signed-off-by: Russell Currey <ruscur@russell.cc>
>> ---
>>   arch/powerpc/lib/code-patching.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/powerpc/lib/code-patching.c 
>> b/arch/powerpc/lib/code-patching.c
>> index 89502cbccb1b..15e8c6339960 100644
>> --- a/arch/powerpc/lib/code-patching.c
>> +++ b/arch/powerpc/lib/code-patching.c
>> @@ -26,9 +26,9 @@
>>   static int __patch_instruction(unsigned int *exec_addr, unsigned int 
>> instr,
>>                      unsigned int *patch_addr)
>>   {
>> -    int err;
>> +    int err = 0;
>> -    __put_user_size(instr, patch_addr, 4, err);
>> +    __put_user_asm(instr, patch_addr, err, "stw");
>>       if (err)
>>           return err;
>>


* Re: [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()
  2019-01-25 11:45     ` Christophe Leroy
@ 2019-02-20 11:57       ` Russell Currey
  0 siblings, 0 replies; 7+ messages in thread
From: Russell Currey @ 2019-02-20 11:57 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev
  Cc: mikey, mpe, benh, npiggin, kernel-hardening

On Fri, 2019-01-25 at 12:45 +0100, Christophe Leroy wrote:
> Hi Russel,
> 
> Le 17/12/2018 à 08:09, Christophe Leroy a écrit :
> > Hi Russel,
> > 
> > Le 10/12/2018 à 08:00, Russell Currey a écrit :
> > > __patch_instruction() is called in early boot, and uses
> > > __put_user_size(), which includes the locks and unlocks for KUAP,
> > > which could either be called too early, or in the Radix case,
> > > forced to
> > > use "early_" versions of functions just to safely handle this one
> > > case.
> > 
> > Looking at x86, I see that __put_user_size() doesn't includes the
> > locks. 
> > The lock/unlock is do by callers. I'll do the same.
> > 
> > 
> > > __put_user_asm() does not do this, and thus is safe to use both
> > > in early
> > > boot, and later on since in this case it should only ever be
> > > touching
> > > kernel memory.
> > > 
> > > __patch_instruction() was previously refactored to use
> > > __put_user_size()
> > > in order to be able to return -EFAULT, which would allow the
> > > kernel to
> > > patch instructions in userspace, which should never happen.  This
> > > has
> > > the functional change of causing faults on userspace addresses if
> > > KUAP
> > > is turned on, which should never happen in practice.
> > > 
> > > A future enhancement could be to double check the patch address
> > > is
> > > definitely allowed to be tampered with by the kernel.
> > 
> > This makes me realise that we are calling lock_user_access() with
> > kernel 
> > addresses. That most likely breaks protection on kernel addresses
> > for 
> > book3s/32. I'll have to work around it.
> > 
> > Another thing I realised also is that get_user() at least is called
> > in 
> > some exceptions/trap handlers. Which means it can be called nested
> > with 
> > an ongoing user access. It means that get_paca()-
> > >user_access_allowed 
> > might be modified during those exceptions/traps.
> 
> Any comment about that ? Isn't it a problem ?

Yes, I think so.  I wonder why I haven't hit this issue, though.  Which
handlers is this an issue with?

Maybe we could do something like...

unlock_user_access() checks if user access is already unlocked (== 1),
if so sets user_access_allowed to 2

lock_user_access() sees that user_access_allowed is 2, and knows it's
nested and sets user_access_allowed back to 1 instead of its usual 0.

...that's pretty gross, though.  It also means that every
implementation has to figure out how to cope with that.
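The scheme above, sketched in plain C just to pin down the state transitions (illustrative only, and it copes with exactly one level of nesting, which is part of why it's gross):

```c
#include <assert.h>

/* 0 = locked, 1 = unlocked, 2 = unlocked with one nested unlock. */
static int user_access_allowed;

static void unlock_user_access_sim(void)
{
	/* already unlocked? remember the nesting instead */
	user_access_allowed = user_access_allowed ? 2 : 1;
}

static void lock_user_access_sim(void)
{
	/* nested? drop back to plain unlocked rather than locking */
	user_access_allowed = (user_access_allowed == 2) ? 1 : 0;
}
```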

I've done a lot of testing where a) user access hasn't been left
unlocked and b) faults haven't happened where they shouldn't, so I
wonder how I could try and hit such a case.

Could also have a get_user() without locking that's only allowed to be
used by exception handlers...

I dunno.  Open to better ideas.

- Russell
> 
> Christophe
> 
> > Christophe
> > 
> > > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > > ---
> > >   arch/powerpc/lib/code-patching.c | 4 ++--
> > >   1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/powerpc/lib/code-patching.c 
> > > b/arch/powerpc/lib/code-patching.c
> > > index 89502cbccb1b..15e8c6339960 100644
> > > --- a/arch/powerpc/lib/code-patching.c
> > > +++ b/arch/powerpc/lib/code-patching.c
> > > @@ -26,9 +26,9 @@
> > >   static int __patch_instruction(unsigned int *exec_addr,
> > > unsigned int 
> > > instr,
> > >                      unsigned int *patch_addr)
> > >   {
> > > -    int err;
> > > +    int err = 0;
> > > -    __put_user_size(instr, patch_addr, 4, err);
> > > +    __put_user_asm(instr, patch_addr, err, "stw");
> > >       if (err)
> > >           return err;
> > > 


end of thread, other threads:[~2019-02-20 11:57 UTC | newest]

Thread overview: 7+ messages
-- links below jump to the message on this page --
2018-12-10  7:00 [PATCH v2 0/3] Kernel Userspace Protection for Radix MMU Russell Currey
2018-12-10  7:00 ` [PATCH v2 1/3] powerpc/mm/radix: Use KUEP API " Russell Currey
2018-12-10  7:00 ` [PATCH v2 2/3] powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() Russell Currey
2018-12-17  7:09   ` Christophe Leroy
2019-01-25 11:45     ` Christophe Leroy
2019-02-20 11:57       ` Russell Currey
2018-12-10  7:00 ` [PATCH v2 3/3] powerpc/64s: Implement KUAP for Radix MMU Russell Currey
