linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 00/13] arm64: remove set_fs() and friends
@ 2020-09-25 16:07 Mark Rutland
  2020-09-25 16:07 ` [PATCH 01/13] arm64: head.S: rename el2_setup -> init_kernel_el Mark Rutland
                   ` (13 more replies)
  0 siblings, 14 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

This series removes set_fs() from arm64, building atop the core rework
done by Christoph [1]. The series can be found in my arm64/set_fs-removal
branch [2].

The bulk of the rework addresses the way we manipulate PAN and UAO,
much of which is rendered redundant once set_fs() is gone.

The kernel maccess routines (__{get,put}_kernel_nofault) are trivial
wrappers which share code with the uaccess routines, so I expect these
should just work, but they'll need testing in-context, especially where
they're wrapped by the generic copy routines.

So far this has seen some very basic boot testing. I intend to throw
Syzkaller and LTP at this soon.

Mark.

[1] git://git.infradead.org/users/hch/misc.git set_fs-removal
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/set_fs-removal

Mark Rutland (13):
  arm64: head.S: rename el2_setup -> init_kernel_el
  arm64: head.S: cleanup SCTLR_ELx initialization
  arm64: head.S: always initialize PSTATE
  arm64: sdei: move uaccess logic to arch/arm64/
  arm64: uaccess: move uao_* alternatives to asm-uaccess.h
  arm64: uaccess: rename privileged uaccess routines
  arm64: uaccess: simplify __copy_user_flushcache()
  arm64: uaccess: refactor __{get,put}_user
  arm64: uaccess: split user/kernel routines
  arm64: uaccess: cleanup macro naming
  arm64: uaccess: remove set_fs()
  arm64: uaccess: remove addr_limit_user_check()
  arm64: uaccess: remove redundant PAN toggling

 arch/arm64/Kconfig                   |   1 -
 arch/arm64/include/asm/alternative.h |  59 ------------
 arch/arm64/include/asm/asm-uaccess.h |  29 ++++++
 arch/arm64/include/asm/cpucaps.h     |   1 -
 arch/arm64/include/asm/exec.h        |   1 -
 arch/arm64/include/asm/futex.h       |   8 +-
 arch/arm64/include/asm/processor.h   |   4 +-
 arch/arm64/include/asm/ptrace.h      |   8 +-
 arch/arm64/include/asm/sysreg.h      |  16 +++-
 arch/arm64/include/asm/thread_info.h |   8 +-
 arch/arm64/include/asm/uaccess.h     | 180 ++++++++++++++---------------------
 arch/arm64/kernel/armv8_deprecated.c |   4 +-
 arch/arm64/kernel/asm-offsets.c      |   3 +-
 arch/arm64/kernel/cpufeature.c       |  11 ---
 arch/arm64/kernel/entry.S            |  19 +---
 arch/arm64/kernel/head.S             |  46 +++++----
 arch/arm64/kernel/process.c          |  12 ---
 arch/arm64/kernel/sdei.c             |  12 +--
 arch/arm64/kernel/signal.c           |   3 -
 arch/arm64/kernel/sleep.S            |   2 +-
 arch/arm64/kernel/suspend.c          |   3 +-
 arch/arm64/lib/clear_user.S          |   8 +-
 arch/arm64/lib/copy_from_user.S      |   8 +-
 arch/arm64/lib/copy_in_user.S        |  16 ++--
 arch/arm64/lib/copy_to_user.S        |   8 +-
 arch/arm64/lib/uaccess_flushcache.c  |   4 +-
 arch/arm64/mm/fault.c                |   5 -
 arch/arm64/mm/proc.S                 |   2 +-
 drivers/firmware/arm_sdei.c          |  14 ---
 29 files changed, 189 insertions(+), 306 deletions(-)

-- 
2.11.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH 01/13] arm64: head.S: rename el2_setup -> init_kernel_el
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 02/13] arm64: head.S: cleanup SCTLR_ELx initialization Mark Rutland
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

For a while now, el2_setup has performed some basic initialization of
EL1 even when the kernel is booted at EL1, so the name is a little
misleading. Further, some comments are stale: with VHE, it doesn't drop
the CPU to EL1.

To clarify things, rename el2_setup to init_kernel_el, and update
comments to be clearer as to the function's purpose.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/head.S  | 15 ++++++++-------
 arch/arm64/kernel/sleep.S |  2 +-
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 037421c66b147..f367f2828d3d3 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -108,7 +108,7 @@ pe_header:
 	 */
 SYM_CODE_START(primary_entry)
 	bl	preserve_boot_args
-	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
+	bl	init_kernel_el			// w0=cpu_boot_mode
 	adrp	x23, __PHYS_OFFSET
 	and	x23, x23, MIN_KIMG_ALIGN - 1	// KASLR offset, defaults to 0
 	bl	set_cpu_boot_mode_flag
@@ -486,13 +486,14 @@ EXPORT_SYMBOL(kimage_vaddr)
 	.section ".idmap.text","awx"
 
 /*
- * If we're fortunate enough to boot at EL2, ensure that the world is
- * sane before dropping to EL1.
+ * Starting from EL2 or EL1, configure the CPU to execute at the highest
+ * reachable EL supported by the kernel in a chosen default state. If dropping
+ * from EL2 to EL1, configure EL2 before configuring EL1.
  *
  * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in w0 if
  * booted in EL1 or EL2 respectively.
  */
-SYM_FUNC_START(el2_setup)
+SYM_FUNC_START(init_kernel_el)
 	msr	SPsel, #1			// We want to use SP_EL{1,2}
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
@@ -653,7 +654,7 @@ SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 	msr	elr_el2, lr
 	mov	w0, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
 	eret
-SYM_FUNC_END(el2_setup)
+SYM_FUNC_END(init_kernel_el)
 
 /*
  * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
@@ -703,7 +704,7 @@ SYM_DATA_END(__early_cpu_boot_status)
 	 * cores are held until we're ready for them to initialise.
 	 */
 SYM_FUNC_START(secondary_holding_pen)
-	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
+	bl	init_kernel_el			// w0=cpu_boot_mode
 	bl	set_cpu_boot_mode_flag
 	mrs	x0, mpidr_el1
 	mov_q	x1, MPIDR_HWID_BITMASK
@@ -721,7 +722,7 @@ SYM_FUNC_END(secondary_holding_pen)
 	 * be used where CPUs are brought online dynamically by the kernel.
 	 */
 SYM_FUNC_START(secondary_entry)
-	bl	el2_setup			// Drop to EL1
+	bl	init_kernel_el			// w0=cpu_boot_mode
 	bl	set_cpu_boot_mode_flag
 	b	secondary_startup
 SYM_FUNC_END(secondary_entry)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ba40d57757d63..4be7f7eed8750 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -99,7 +99,7 @@ SYM_FUNC_END(__cpu_suspend_enter)
 
 	.pushsection ".idmap.text", "awx"
 SYM_CODE_START(cpu_resume)
-	bl	el2_setup		// if in EL2 drop to EL1 cleanly
+	bl	init_kernel_el
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adrp	x1, swapper_pg_dir
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 02/13] arm64: head.S: cleanup SCTLR_ELx initialization
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
  2020-09-25 16:07 ` [PATCH 01/13] arm64: head.S: rename el2_setup -> init_kernel_el Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 03/13] arm64: head.S: always initialize PSTATE Mark Rutland
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

Let's make SCTLR_ELx initialization a bit clearer by using meaningful
names for the initialization values, following the same scheme for
SCTLR_EL1 and SCTLR_EL2.

These definitions will be used more widely in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 16 +++++++++++-----
 arch/arm64/kernel/head.S        |  6 +++---
 arch/arm64/mm/proc.S            |  2 +-
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 554a7e8ecb074..17061109dcd2f 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -558,6 +558,9 @@
 #define ENDIAN_SET_EL2		0
 #endif
 
+#define INIT_SCTLR_EL2_MMU_OFF \
+	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
+
 /* SCTLR_EL1 specific flags. */
 #define SCTLR_EL1_BT1		(BIT(36))
 #define SCTLR_EL1_BT0		(BIT(35))
@@ -583,11 +586,14 @@
 #define ENDIAN_SET_EL1		0
 #endif
 
-#define SCTLR_EL1_SET	(SCTLR_ELx_M    | SCTLR_ELx_C    | SCTLR_ELx_SA   |\
-			 SCTLR_EL1_SA0  | SCTLR_EL1_SED  | SCTLR_ELx_I    |\
-			 SCTLR_EL1_DZE  | SCTLR_EL1_UCT                   |\
-			 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN |\
-			 ENDIAN_SET_EL1 | SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
+#define INIT_SCTLR_EL1_MMU_OFF \
+	(ENDIAN_SET_EL1 | SCTLR_EL1_RES1)
+
+#define INIT_SCTLR_EL1_MMU_ON \
+	(SCTLR_ELx_M    | SCTLR_ELx_C    | SCTLR_ELx_SA   | SCTLR_EL1_SA0  | \
+	 SCTLR_EL1_SED  | SCTLR_ELx_I    | SCTLR_EL1_DZE  | SCTLR_EL1_UCT  | \
+	 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | ENDIAN_SET_EL1 | \
+	 SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
 
 /* MAIR_ELx memory attributes (used by Linux) */
 #define MAIR_ATTR_DEVICE_nGnRnE		UL(0x00)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index f367f2828d3d3..9bb07be45249a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -498,13 +498,13 @@ SYM_FUNC_START(init_kernel_el)
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
 	b.eq	1f
-	mov_q	x0, (SCTLR_EL1_RES1 | ENDIAN_SET_EL1)
+	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	msr	sctlr_el1, x0
 	mov	w0, #BOOT_CPU_MODE_EL1		// This cpu booted in EL1
 	isb
 	ret
 
-1:	mov_q	x0, (SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
+1:	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
 	msr	sctlr_el2, x0
 
 #ifdef CONFIG_ARM64_VHE
@@ -625,7 +625,7 @@ SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 	 * requires no configuration, and all non-hyp-specific EL2 setup
 	 * will be done via the _EL1 system register aliases in __cpu_setup.
 	 */
-	mov_q	x0, (SCTLR_EL1_RES1 | ENDIAN_SET_EL1)
+	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	msr	sctlr_el1, x0
 
 	/* Coprocessor traps. */
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 796e47a571e66..a72a54246034b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -461,6 +461,6 @@ SYM_FUNC_START(__cpu_setup)
 	/*
 	 * Prepare SCTLR
 	 */
-	mov_q	x0, SCTLR_EL1_SET
+	mov_q	x0, INIT_SCTLR_EL1_MMU_ON
 	ret					// return to head.S
 SYM_FUNC_END(__cpu_setup)
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 03/13] arm64: head.S: always initialize PSTATE
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
  2020-09-25 16:07 ` [PATCH 01/13] arm64: head.S: rename el2_setup -> init_kernel_el Mark Rutland
  2020-09-25 16:07 ` [PATCH 02/13] arm64: head.S: cleanup SCTLR_ELx initialization Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-30 16:09   ` James Morse
  2020-09-25 16:07 ` [PATCH 04/13] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

As with SCTLR_ELx and other control registers, some PSTATE bits are
UNKNOWN out-of-reset, and we may not be able to rely on hardware or
firmware to initialize them to our liking prior to entry to the kernel,
e.g. in the primary/secondary boot paths and on return from idle/suspend.

It would be more robust (and easier to reason about) if we consistently
initialized PSTATE to a default value, as we do with control registers.
This will ensure that the kernel is not adversely affected by bits it is
not aware of, e.g. when support for a feature such as PAN/UAO is
disabled.

This patch ensures that PSTATE is consistently initialized at boot time
via an ERET. This is not intended to relax the existing requirements
(e.g. DAIF bits must still be set prior to entering the kernel). For
features detected dynamically (which may require system-wide support),
it is still necessary to subsequently modify PSTATE.
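As a quick cross-check of the INIT_PSTATE_* values introduced by the
diff below, a user-space sketch (the PSR_* bit positions follow the
architectural PSTATE layout: D/A/I/F in bits 9..6, the mode field in
M[3:0]):

```c
#include <assert.h>
#include <stdint.h>

/* Architectural PSTATE/SPSR bit positions (AArch64) */
#define PSR_F_BIT	(1u << 6)
#define PSR_I_BIT	(1u << 7)
#define PSR_A_BIT	(1u << 8)
#define PSR_D_BIT	(1u << 9)
#define PSR_MODE_EL1h	0x5u	/* EL1, using SP_EL1 */
#define PSR_MODE_EL2h	0x9u	/* EL2, using SP_EL2 */

/* All four DAIF exception masks set, handler mode with SP_ELx */
#define INIT_PSTATE_EL1 \
	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL1h)
#define INIT_PSTATE_EL2 \
	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL2h)
```

These compose to 0x3c5 and 0x3c9 respectively, matching what the ERET
in init_kernel_el will install via SPSR_ELx.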

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/ptrace.h |  5 +++++
 arch/arm64/kernel/head.S        | 27 +++++++++++++++++----------
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 966ed30ed5f7b..366075f228eb3 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -16,6 +16,11 @@
 #define CurrentEL_EL1		(1 << 2)
 #define CurrentEL_EL2		(2 << 2)
 
+#define INIT_PSTATE_EL1 \
+	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL1h)
+#define INIT_PSTATE_EL2 \
+	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL2h)
+
 /*
  * PMR values used to mask/unmask interrupts.
  *
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 9bb07be45249a..b8ee9a62c9b61 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -494,17 +494,22 @@ EXPORT_SYMBOL(kimage_vaddr)
  * booted in EL1 or EL2 respectively.
  */
 SYM_FUNC_START(init_kernel_el)
-	msr	SPsel, #1			// We want to use SP_EL{1,2}
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
-	b.eq	1f
+	b.eq	init_el2
+
+SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	msr	sctlr_el1, x0
-	mov	w0, #BOOT_CPU_MODE_EL1		// This cpu booted in EL1
 	isb
-	ret
+	mov_q	x0, INIT_PSTATE_EL1
+	msr	spsr_el1, x0
+	msr	elr_el1, lr
+	mov	w0, #BOOT_CPU_MODE_EL1
+	eret
 
-1:	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
+SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
+	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
 	msr	sctlr_el2, x0
 
 #ifdef CONFIG_ARM64_VHE
@@ -613,9 +618,12 @@ set_hcr:
 
 	cbz	x2, install_el2_stub
 
-	mov	w0, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
 	isb
-	ret
+	mov_q	x0, INIT_PSTATE_EL2
+	msr	spsr_el2, x0
+	msr	elr_el2, lr
+	mov	w0, #BOOT_CPU_MODE_EL2
+	eret
 
 SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 	/*
@@ -648,11 +656,10 @@ SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 	msr	vbar_el2, x0
 
 	/* spsr */
-	mov	x0, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
-		      PSR_MODE_EL1h)
+	mov	x0, #INIT_PSTATE_EL1
 	msr	spsr_el2, x0
 	msr	elr_el2, lr
-	mov	w0, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
+	mov	w0, #BOOT_CPU_MODE_EL2
 	eret
 SYM_FUNC_END(init_kernel_el)
 
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 04/13] arm64: sdei: move uaccess logic to arch/arm64/
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (2 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 03/13] arm64: head.S: always initialize PSTATE Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 05/13] arm64: uaccess: move uao_* alternatives to asm-uaccess.h Mark Rutland
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

The SDEI support code is split across arch/arm64/ and drivers/firmware/;
largely, the arch-specific portions are under arch/arm64/, and the
management logic is under drivers/firmware/. However, the exception
entry fixups are currently under drivers/firmware/.

Let's move the exception entry fixups under arch/arm64/. This
de-clutters the management logic, and puts all the arch-specific
portions in one place. Doing this also allows the fixups to be applied
earlier, so things like PAN and UAO will be in a known good state before
we run other logic. This will also make subsequent refactoring easier.
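The bracketing this produces can be sketched in plain C (illustrative,
user-space stand-ins only; the names and state variable here are
hypothetical, not the kernel's):

```c
#include <assert.h>

static int uaccess_state = 1;	/* stands in for PAN/UAO/addr_limit */

/* fix up state the normal entry path would have handled, saving it */
static int force_state_begin(void)
{
	int orig = uaccess_state;
	uaccess_state = 0;	/* known-good state */
	return orig;
}

static void force_state_end(int orig)
{
	uaccess_state = orig;	/* restore on the way out */
}

static int handler(void)
{
	/* all other logic runs with the state already known-good */
	assert(uaccess_state == 0);
	return 42;
}

static int sdei_handler_sketch(void)
{
	int orig = force_state_begin();	/* fixups applied first... */
	int ret = handler();		/* ...then the management logic */
	force_state_end(orig);
	return ret;
}
```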

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/sdei.c    | 18 ++++++++++++------
 drivers/firmware/arm_sdei.c | 14 --------------
 2 files changed, 12 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 7689f2031c0c4..4a5f24602aa0c 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -178,12 +178,6 @@ static __kprobes unsigned long _sdei_handler(struct pt_regs *regs,
 		sdei_api_event_context(i, &regs->regs[i]);
 	}
 
-	/*
-	 * We didn't take an exception to get here, set PAN. UAO will be cleared
-	 * by sdei_event_handler()s force_uaccess_begin() call.
-	 */
-	__uaccess_enable_hw_pan();
-
 	err = sdei_event_handler(regs, arg);
 	if (err)
 		return SDEI_EV_FAILED;
@@ -227,6 +221,16 @@ asmlinkage __kprobes notrace unsigned long
 __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 {
 	unsigned long ret;
+	mm_segment_t orig_addr_limit;
+
+	/*
+	 * We didn't take an exception to get here, so the HW hasn't set PAN or
+	 * cleared UAO, and the exception entry code hasn't reset addr_limit.
+	 * Set PAN, then use force_uaccess_begin() to clear UAO and reset
+	 * addr_limit.
+	 */
+	__uaccess_enable_hw_pan();
+	orig_addr_limit = force_uaccess_begin();
 
 	nmi_enter();
 
@@ -234,5 +238,7 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 
 	nmi_exit();
 
+	force_uaccess_end(orig_addr_limit);
+
 	return ret;
 }
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
index b4b9ce97f415e..25a87c729f090 100644
--- a/drivers/firmware/arm_sdei.c
+++ b/drivers/firmware/arm_sdei.c
@@ -31,7 +31,6 @@
 #include <linux/slab.h>
 #include <linux/smp.h>
 #include <linux/spinlock.h>
-#include <linux/uaccess.h>
 
 /*
  * The call to use to reach the firmware.
@@ -1125,26 +1124,13 @@ int sdei_event_handler(struct pt_regs *regs,
 		       struct sdei_registered_event *arg)
 {
 	int err;
-	mm_segment_t orig_addr_limit;
 	u32 event_num = arg->event_num;
 
-	/*
-	 * Save restore 'fs'.
-	 * The architecture's entry code save/restores 'fs' when taking an
-	 * exception from the kernel. This ensures addr_limit isn't inherited
-	 * if you interrupted something that allowed the uaccess routines to
-	 * access kernel memory.
-	 * Do the same here because this doesn't come via the same entry code.
-	*/
-	orig_addr_limit = force_uaccess_begin();
-
 	err = arg->callback(event_num, regs, arg->callback_arg);
 	if (err)
 		pr_err_ratelimited("event %u on CPU %u failed with error: %d\n",
 				   event_num, smp_processor_id(), err);
 
-	force_uaccess_end(orig_addr_limit);
-
 	return err;
 }
 NOKPROBE_SYMBOL(sdei_event_handler);
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 05/13] arm64: uaccess: move uao_* alternatives to asm-uaccess.h
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (3 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 04/13] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 06/13] arm64: uaccess: rename privileged uaccess routines Mark Rutland
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

The uao_* alternative asm macros are only used by the uaccess assembly
routines in arch/arm64/lib/, where they are included indirectly via
asm-uaccess.h. Since they're specific to the uaccess assembly (and will
lose the alternatives in subsequent patches), let's move them into
asm-uaccess.h.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/alternative.h | 59 ------------------------------------
 arch/arm64/include/asm/asm-uaccess.h | 59 ++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 619db9b4c9d5c..5d6b89d26de4b 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -224,65 +224,6 @@ alternative_endif
 	_asm_extable 9999b, \label
 .endm
 
-/*
- * Generate the assembly for UAO alternatives with exception table entries.
- * This is complicated as there is no post-increment or pair versions of the
- * unprivileged instructions, and USER() only works for single instructions.
- */
-#ifdef CONFIG_ARM64_UAO
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			ldtr	\reg1, [\addr];
-			ldtr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-		_asm_extable	8889b,\l;
-	.endm
-
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			stp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			sttr	\reg1, [\addr];
-			sttr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-		_asm_extable	8889b,\l;
-	.endm
-
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			\inst	\reg, [\addr], \post_inc;
-			nop;
-		alternative_else
-			\alt_inst	\reg, [\addr];
-			add		\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-	.endm
-#else
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		USER(\l, \inst \reg, [\addr], \post_inc)
-	.endm
-#endif
-
 #endif  /*  __ASSEMBLY__  */
 
 /*
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index f68a0e64482a1..479222ab82d44 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -58,4 +58,63 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+/*
+ * Generate the assembly for UAO alternatives with exception table entries.
+ * This is complicated as there is no post-increment or pair versions of the
+ * unprivileged instructions, and USER() only works for single instructions.
+ */
+#ifdef CONFIG_ARM64_UAO
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			ldtr	\reg1, [\addr];
+			ldtr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+		_asm_extable	8889b,\l;
+	.endm
+
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			stp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			sttr	\reg1, [\addr];
+			sttr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+		_asm_extable	8889b,\l;
+	.endm
+
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			\inst	\reg, [\addr], \post_inc;
+			nop;
+		alternative_else
+			\alt_inst	\reg, [\addr];
+			add		\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+	.endm
+#else
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		USER(\l, \inst \reg, [\addr], \post_inc)
+	.endm
+#endif
+
 #endif
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 06/13] arm64: uaccess: rename privileged uaccess routines
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (4 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 05/13] arm64: uaccess: move uao_* alternatives to asm-uaccess.h Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 07/13] arm64: uaccess: simplify __copy_user_flushcache() Mark Rutland
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

We currently have many uaccess_*{enable,disable}*() variants, which
subsequent patches will cut down as part of removing set_fs() and
friends. Once this simplification is made, most uaccess routines will
only need to ensure that the user page tables are mapped in TTBR0, as is
currently dealt with by uaccess_ttbr0_{enable,disable}().

The existing uaccess_{enable,disable}() routines ensure that user page
tables are mapped in TTBR0 and also disable PAN protections; this is
necessary to be able to use atomics on user memory, but it also permits
unrelated privileged accesses to user memory.
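As a toy model of the distinction being drawn (illustrative names and
boolean state only, not the kernel's implementation, which toggles
TTBR0/PAN via system registers):

```c
#include <assert.h>
#include <stdbool.h>

static bool ttbr0_user_mapped;
static bool pan_set = true;

/* makes user tables reachable AND drops PAN, so ordinary privileged
 * loads/stores (e.g. for atomics) can touch user memory */
static void uaccess_enable_privileged(void)
{
	ttbr0_user_mapped = true;
	pan_set = false;
}

static void uaccess_disable_privileged(void)
{
	pan_set = true;
	ttbr0_user_mapped = false;
}

/* the common case later patches reserve the plain uaccess_enable()
 * name for: user tables reachable, PAN left set */
static void uaccess_enable_ttbr0_only(void)
{
	ttbr0_user_mapped = true;
}
```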

As a preparatory step, let's rename uaccess_{enable,disable}() to
uaccess_{enable,disable}_privileged(), highlighting this caveat and
discouraging wider misuse. Subsequent patches can then reuse the
uaccess_{enable,disable}() naming for the common case of ensuring the
user page tables are mapped in TTBR0.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/futex.h       | 8 ++++----
 arch/arm64/include/asm/uaccess.h     | 4 ++--
 arch/arm64/kernel/armv8_deprecated.c | 4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 97f6a63810ecb..8e41faa37c691 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -16,7 +16,7 @@
 do {									\
 	unsigned int loops = FUTEX_MAX_LOOPS;				\
 									\
-	uaccess_enable();						\
+	uaccess_enable_privileged();					\
 	asm volatile(							\
 "	prfm	pstl1strm, %2\n"					\
 "1:	ldxr	%w1, %2\n"						\
@@ -39,7 +39,7 @@ do {									\
 	  "+r" (loops)							\
 	: "r" (oparg), "Ir" (-EFAULT), "Ir" (-EAGAIN)			\
 	: "memory");							\
-	uaccess_disable();						\
+	uaccess_disable_privileged();					\
 } while (0)
 
 static inline int
@@ -95,7 +95,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
 		return -EFAULT;
 
 	uaddr = __uaccess_mask_ptr(_uaddr);
-	uaccess_enable();
+	uaccess_enable_privileged();
 	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
 "	prfm	pstl1strm, %2\n"
 "1:	ldxr	%w1, %2\n"
@@ -118,7 +118,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
 	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
 	: "r" (oldval), "r" (newval), "Ir" (-EFAULT), "Ir" (-EAGAIN)
 	: "memory");
-	uaccess_disable();
+	uaccess_disable_privileged();
 
 	if (!ret)
 		*uval = val;
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 991dd5f031e46..d6a4e496ebc64 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -200,12 +200,12 @@ do {									\
 				CONFIG_ARM64_PAN));			\
 } while (0)
 
-static inline void uaccess_disable(void)
+static inline void uaccess_disable_privileged(void)
 {
 	__uaccess_disable(ARM64_HAS_PAN);
 }
 
-static inline void uaccess_enable(void)
+static inline void uaccess_enable_privileged(void)
 {
 	__uaccess_enable(ARM64_HAS_PAN);
 }
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 7364de008bab3..0e86e8b9ceddf 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -277,7 +277,7 @@ static void __init register_insn_emulation_sysctl(void)
 
 #define __user_swpX_asm(data, addr, res, temp, temp2, B)	\
 do {								\
-	uaccess_enable();					\
+	uaccess_enable_privileged();				\
 	__asm__ __volatile__(					\
 	"	mov		%w3, %w7\n"			\
 	"0:	ldxr"B"		%w2, [%4]\n"			\
@@ -302,7 +302,7 @@ do {								\
 	  "i" (-EFAULT),					\
 	  "i" (__SWP_LL_SC_LOOPS)				\
 	: "memory");						\
-	uaccess_disable();					\
+	uaccess_disable_privileged();				\
 } while (0)
 
 #define __user_swp_asm(data, addr, res, temp, temp2) \
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 07/13] arm64: uaccess: simplify __copy_user_flushcache()
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (5 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 06/13] arm64: uaccess: rename privileged uaccess routines Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 08/13] arm64: uaccess: refactor __{get,put}_user Mark Rutland
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and
doesn't use uaccess_mask_ptr() on the user address. Let's have it call
raw_copy_from_user(), which is both a simplification and ensures that
user pointers are masked under speculation.
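The masking in question can be illustrated with a branchless
pointer-clamping sketch in the spirit of __uaccess_mask_ptr() (the
constant and function name below are illustrative; the real helper uses
a bics/csel sequence against the thread's limit):

```c
#include <assert.h>
#include <stdint.h>

#define USER_LIMIT	0x0000ffffffffffffULL	/* hypothetical user VA limit */

static uintptr_t mask_user_ptr(uintptr_t ptr)
{
	/* all-ones if ptr is within the limit, all-zeroes otherwise;
	 * computed without a conditional branch that speculation could
	 * mispredict, so an out-of-range pointer is forced to NULL */
	uintptr_t mask = (uintptr_t)0 - (uintptr_t)(ptr <= USER_LIMIT);

	return ptr & mask;
}
```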

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/lib/uaccess_flushcache.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index bfa30b75b2b8e..c83bb5a4aad2c 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -30,9 +30,7 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 {
 	unsigned long rc;
 
-	uaccess_enable_not_uao();
-	rc = __arch_copy_from_user(to, from, n);
-	uaccess_disable_not_uao();
+	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
 	__clean_dcache_area_pop(to, n - rc);
-- 
2.11.0



* [PATCH 08/13] arm64: uaccess: refactor __{get,put}_user
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (6 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 07/13] arm64: uaccess: simplify __copy_user_flushcache() Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 09/13] arm64: uaccess: split user/kernel routines Mark Rutland
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

As a step towards implementing __{get,put}_kernel_nofault(), this patch
splits most user-memory specific logic out of __{get,put}_user(), with
the memory access and fault handling in new __raw_{get,put}_mem()
helpers.

For now the LDR/LDTR patching is left within the __{get,put}_mem_asm() helpers, and
will be removed in a subsequent patch.
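The layering this introduces can be sketched with stubs. This is an assumed userspace model: the depth counter and the uaccess_enable()/uaccess_disable() stubs stand in for the real PAN/UAO manipulation, and the macro names are simplified from the kernel's:

```c
#include <assert.h>

/* Stand-ins for the real privileged-access controls. */
static int uaccess_depth;

static void uaccess_enable(void)  { uaccess_depth++; }
static void uaccess_disable(void) { uaccess_depth--; }

/* Access-only layer: no privilege manipulation, as __raw_get_mem(). */
#define raw_get_mem(x, p)  do { (x) = *(p); } while (0)

/* User wrapper: brackets the raw access with enable/disable, as
 * __raw_get_user() does after this patch. */
#define raw_get_user(x, p)		\
do {					\
	uaccess_enable();		\
	raw_get_mem(x, p);		\
	uaccess_disable();		\
} while (0)
```

The point of the split is that a kernel-memory accessor can later reuse raw_get_mem() directly, with no enable/disable toggling at all.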

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/uaccess.h | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index d6a4e496ebc64..4ad2990241d78 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -253,7 +253,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
  * The "__xxx_error" versions set the third argument to -EFAULT if an error
  * occurs, and leave it unchanged on success.
  */
-#define __get_user_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __get_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
 	asm volatile(							\
 	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
 			alt_instr " " reg "1, [%2]\n", feature)		\
@@ -268,35 +268,40 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr), "i" (-EFAULT))
 
-#define __raw_get_user(x, ptr, err)					\
+#define __raw_get_mem(x, ptr, err)					\
 do {									\
 	unsigned long __gu_val;						\
 	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__get_user_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),  \
+		__get_mem_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 2:								\
-		__get_user_asm("ldrh", "ldtrh", "%w", __gu_val, (ptr),  \
+		__get_mem_asm("ldrh", "ldtrh", "%w", __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 4:								\
-		__get_user_asm("ldr", "ldtr", "%w", __gu_val, (ptr),	\
+		__get_mem_asm("ldr", "ldtr", "%w", __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__get_user_asm("ldr", "ldtr", "%x",  __gu_val, (ptr),	\
+		__get_mem_asm("ldr", "ldtr", "%x",  __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
 	}								\
-	uaccess_disable_not_uao();					\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 } while (0)
 
+#define __raw_get_user(x, ptr, err)					\
+do {									\
+	uaccess_enable_not_uao();					\
+	__raw_get_mem(x, ptr, err);					\
+	uaccess_disable_not_uao();					\
+} while (0)
+
 #define __get_user_error(x, ptr, err)					\
 do {									\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
@@ -318,7 +323,7 @@ do {									\
 
 #define get_user	__get_user
 
-#define __put_user_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __put_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
 	asm volatile(							\
 	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
 			alt_instr " " reg "1, [%2]\n", feature)		\
@@ -332,31 +337,36 @@ do {									\
 	: "+r" (err)							\
 	: "r" (x), "r" (addr), "i" (-EFAULT))
 
-#define __raw_put_user(x, ptr, err)					\
+#define __raw_put_mem(x, ptr, err)					\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__put_user_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
+		__put_mem_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 2:								\
-		__put_user_asm("strh", "sttrh", "%w", __pu_val, (ptr),	\
+		__put_mem_asm("strh", "sttrh", "%w", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 4:								\
-		__put_user_asm("str", "sttr", "%w", __pu_val, (ptr),	\
+		__put_mem_asm("str", "sttr", "%w", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__put_user_asm("str", "sttr", "%x", __pu_val, (ptr),	\
+		__put_mem_asm("str", "sttr", "%x", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
 	}								\
+} while (0)
+
+#define __raw_put_user(x, ptr, err)					\
+do {									\
+	uaccess_enable_not_uao();					\
+	__raw_put_mem(x, ptr, err);					\
 	uaccess_disable_not_uao();					\
 } while (0)
 
-- 
2.11.0



* [PATCH 09/13] arm64: uaccess: split user/kernel routines
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (7 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 08/13] arm64: uaccess: refactor __{get,put}_user Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 10/13] arm64: uaccess cleanup macro naming Mark Rutland
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

This patch separates arm64's user and kernel memory access primitives
into distinct routines, adding new __{get,put}_kernel_nofault() helpers
to access kernel memory, upon which core code builds larger copy
routines.

The kernel access routines (using LDR/STR) are not affected by PAN (when
legitimately accessing kernel memory), nor are they affected by UAO.
Switching to KERNEL_DS may set UAO, but this does not adversely affect
the kernel access routines.

The user access routines (using LDTR/STTR) are not affected by PAN (when
legitimately accessing user memory), but are affected by UAO. As these
are only legitimate to use under USER_DS with UAO clear, this should not
be problematic.

Routines performing atomics to user memory (futex and deprecated
instruction emulation) still need to transiently clear PAN, and these
are left as-is. These are never used on kernel memory.

Subsequent patches will refactor the uaccess helpers to remove redundant
code, and will also remove the redundant PAN/UAO manipulation.
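The calling convention of the new *_nofault() helpers (failure jumps to a caller-supplied label rather than returning an error) can be modelled in userspace. This is a sketch under stated assumptions: model_read() and get_nofault() are hypothetical stand-ins, with NULL playing the role of an address that would fault:

```c
#include <assert.h>

/*
 * Stand-in for a faulting access: in the kernel a fault would be caught
 * by an exception-table fixup; here NULL models the faulting address.
 */
static int model_read(int *dst, const int *src)
{
	if (!src)
		return -1;
	*dst = *src;
	return 0;
}

/* Models the __get_kernel_nofault(dst, src, type, err_label) shape:
 * on error, control transfers to the caller's label. */
#define get_nofault(dst, src, err_label)	\
do {						\
	if (model_read(&(dst), (src)))		\
		goto err_label;			\
} while (0)
```

Core code builds its larger kernel-copy loops on exactly this shape, bailing out to a single error label on the first faulting access.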

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/asm-uaccess.h | 48 +++++----------------------
 arch/arm64/include/asm/uaccess.h     | 64 +++++++++++++++++++++---------------
 2 files changed, 47 insertions(+), 65 deletions(-)

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 479222ab82d44..046196f08988b 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -59,62 +59,32 @@ alternative_else_nop_endif
 #endif
 
 /*
- * Generate the assembly for UAO alternatives with exception table entries.
+ * Generate the assembly for LDTR/STTR with exception table entries.
  * This is complicated as there is no post-increment or pair versions of the
  * unprivileged instructions, and USER() only works for single instructions.
  */
-#ifdef CONFIG_ARM64_UAO
 	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			ldtr	\reg1, [\addr];
-			ldtr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
+8888:		ldtr	\reg1, [\addr];
+8889:		ldtr	\reg2, [\addr, #8];
+		add	\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
 		_asm_extable	8889b,\l;
 	.endm
 
 	.macro uao_stp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			stp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			sttr	\reg1, [\addr];
-			sttr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
+8888:		sttr	\reg1, [\addr];
+8889:		sttr	\reg2, [\addr, #8];
+		add	\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
 		_asm_extable	8889b,\l;
 	.endm
 
 	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			\inst	\reg, [\addr], \post_inc;
-			nop;
-		alternative_else
-			\alt_inst	\reg, [\addr];
-			add		\addr, \addr, \post_inc;
-		alternative_endif
+8888:		\alt_inst	\reg, [\addr];
+		add		\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
 	.endm
-#else
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		USER(\l, \inst \reg, [\addr], \post_inc)
-	.endm
-#endif
-
 #endif
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 4ad2990241d78..bc7f3ff24ccd5 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -24,6 +24,8 @@
 #include <asm/memory.h>
 #include <asm/extable.h>
 
+#define HAVE_GET_KERNEL_NOFAULT
+
 #define get_fs()	(current_thread_info()->addr_limit)
 
 static inline void set_fs(mm_segment_t fs)
@@ -253,10 +255,9 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
  * The "__xxx_error" versions set the third argument to -EFAULT if an error
  * occurs, and leave it unchanged on success.
  */
-#define __get_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __get_mem_asm(ldr, reg, x, addr, err)				\
 	asm volatile(							\
-	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
-			alt_instr " " reg "1, [%2]\n", feature)		\
+	"1:	" ldr "	" reg "1, [%2]\n"				\
 	"2:\n"								\
 	"	.section .fixup, \"ax\"\n"				\
 	"	.align	2\n"						\
@@ -268,26 +269,22 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr), "i" (-EFAULT))
 
-#define __raw_get_mem(x, ptr, err)					\
+#define __raw_get_mem(ldr, x, ptr, err)					\
 do {									\
 	unsigned long __gu_val;						\
 	__chk_user_ptr(ptr);						\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__get_mem_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr "b", "%w", __gu_val, (ptr), (err));	\
 		break;							\
 	case 2:								\
-		__get_mem_asm("ldrh", "ldtrh", "%w", __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr "h", "%w", __gu_val, (ptr), (err));	\
 		break;							\
 	case 4:								\
-		__get_mem_asm("ldr", "ldtr", "%w", __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr, "%w", __gu_val, (ptr), (err));	\
 		break;							\
 	case 8:								\
-		__get_mem_asm("ldr", "ldtr", "%x",  __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr, "%x",  __gu_val, (ptr), (err));	\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
@@ -298,7 +295,7 @@ do {									\
 #define __raw_get_user(x, ptr, err)					\
 do {									\
 	uaccess_enable_not_uao();					\
-	__raw_get_mem(x, ptr, err);					\
+	__raw_get_mem("ldtr", x, ptr, err);				\
 	uaccess_disable_not_uao();					\
 } while (0)
 
@@ -323,10 +320,19 @@ do {									\
 
 #define get_user	__get_user
 
-#define __put_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __get_kernel_nofault(dst, src, type, err_label)			\
+do {									\
+	int __gkn_err;							\
+									\
+	__raw_get_mem("ldr", *((type *)(dst)),				\
+		      (__force type __user *)(src), __gkn_err);		\
+	if (unlikely(__gkn_err))					\
+		goto err_label;						\
+} while(0)
+
+#define __put_mem_asm(str, reg, x, addr, err)				\
 	asm volatile(							\
-	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
-			alt_instr " " reg "1, [%2]\n", feature)		\
+	"1:	" str "	" reg "1, [%2]\n"				\
 	"2:\n"								\
 	"	.section .fixup,\"ax\"\n"				\
 	"	.align	2\n"						\
@@ -337,26 +343,22 @@ do {									\
 	: "+r" (err)							\
 	: "r" (x), "r" (addr), "i" (-EFAULT))
 
-#define __raw_put_mem(x, ptr, err)					\
+#define __raw_put_mem(str, x, ptr, err)					\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	__chk_user_ptr(ptr);						\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__put_mem_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str "b", "%w", __pu_val, (ptr), (err));	\
 		break;							\
 	case 2:								\
-		__put_mem_asm("strh", "sttrh", "%w", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str "h", "%w", __pu_val, (ptr), (err));	\
 		break;							\
 	case 4:								\
-		__put_mem_asm("str", "sttr", "%w", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str, "%w", __pu_val, (ptr), (err));	\
 		break;							\
 	case 8:								\
-		__put_mem_asm("str", "sttr", "%x", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str, "%x", __pu_val, (ptr), (err));	\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
@@ -366,7 +368,7 @@ do {									\
 #define __raw_put_user(x, ptr, err)					\
 do {									\
 	uaccess_enable_not_uao();					\
-	__raw_put_mem(x, ptr, err);					\
+	__raw_put_mem("sttr", x, ptr, err);				\
 	uaccess_disable_not_uao();					\
 } while (0)
 
@@ -391,6 +393,16 @@ do {									\
 
 #define put_user	__put_user
 
+#define __put_kernel_nofault(dst, src, type, err_label)			\
+do {									\
+	int __pkn_err;							\
+									\
+	__raw_put_mem("str", *((type *)(src)),				\
+		      (__force type __user *)(dst), __pkn_err);		\
+	if (unlikely(__pkn_err))					\
+		goto err_label;						\
+} while(0)
+
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
 #define raw_copy_from_user(to, from, n)					\
 ({									\
-- 
2.11.0



* [PATCH 10/13] arm64: uaccess cleanup macro naming
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (8 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 09/13] arm64: uaccess: split user/kernel routines Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-25 16:07 ` [PATCH 11/13] arm64: uaccess: remove set_fs() Mark Rutland
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

Now that the uaccess primitives use LDTR/STTR unconditionally, the
uao_{ldp,stp,user_alternative} asm macros are misnamed and have a
redundant argument. Let's remove the redundant argument and rename them
to user_{ldp,stp,ldst} respectively to clean this up.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/asm-uaccess.h |  8 ++++----
 arch/arm64/lib/clear_user.S          |  8 ++++----
 arch/arm64/lib/copy_from_user.S      |  8 ++++----
 arch/arm64/lib/copy_in_user.S        | 16 ++++++++--------
 arch/arm64/lib/copy_to_user.S        |  8 ++++----
 5 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 046196f08988b..5e3035fb2c802 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -63,7 +63,7 @@ alternative_else_nop_endif
  * This is complicated as there is no post-increment or pair versions of the
  * unprivileged instructions, and USER() only works for single instructions.
  */
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
+	.macro user_ldp l, reg1, reg2, addr, post_inc
 8888:		ldtr	\reg1, [\addr];
 8889:		ldtr	\reg2, [\addr, #8];
 		add	\addr, \addr, \post_inc;
@@ -72,7 +72,7 @@ alternative_else_nop_endif
 		_asm_extable	8889b,\l;
 	.endm
 
-	.macro uao_stp l, reg1, reg2, addr, post_inc
+	.macro user_stp l, reg1, reg2, addr, post_inc
 8888:		sttr	\reg1, [\addr];
 8889:		sttr	\reg2, [\addr, #8];
 		add	\addr, \addr, \post_inc;
@@ -81,8 +81,8 @@ alternative_else_nop_endif
 		_asm_extable	8889b,\l;
 	.endm
 
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-8888:		\alt_inst	\reg, [\addr];
+	.macro user_ldst l, inst, reg, addr, post_inc
+8888:		\inst		\reg, [\addr];
 		add		\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index 48a3a26eff663..af9afcbec92cd 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -24,20 +24,20 @@ SYM_FUNC_START(__arch_clear_user)
 	subs	x1, x1, #8
 	b.mi	2f
 1:
-uao_user_alternative 9f, str, sttr, xzr, x0, 8
+user_ldst 9f, sttr, xzr, x0, 8
 	subs	x1, x1, #8
 	b.pl	1b
 2:	adds	x1, x1, #4
 	b.mi	3f
-uao_user_alternative 9f, str, sttr, wzr, x0, 4
+user_ldst 9f, sttr, wzr, x0, 4
 	sub	x1, x1, #4
 3:	adds	x1, x1, #2
 	b.mi	4f
-uao_user_alternative 9f, strh, sttrh, wzr, x0, 2
+user_ldst 9f, sttrh, wzr, x0, 2
 	sub	x1, x1, #2
 4:	adds	x1, x1, #1
 	b.mi	5f
-uao_user_alternative 9f, strb, sttrb, wzr, x0, 0
+user_ldst 9f, sttrb, wzr, x0, 0
 5:	mov	x0, #0
 	ret
 SYM_FUNC_END(__arch_clear_user)
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 0f8a3a9e3795b..95cd62d673711 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -21,7 +21,7 @@
  */
 
 	.macro ldrb1 reg, ptr, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
+	user_ldst 9998f, ldtrb, \reg, \ptr, \val
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -29,7 +29,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
+	user_ldst 9998f, ldtrh, \reg, \ptr, \val
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -37,7 +37,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
+	user_ldst 9998f, ldtr, \reg, \ptr, \val
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -45,7 +45,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
+	user_ldp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 80e37ada0ee1a..1f61cd0df0627 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -22,35 +22,35 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
+	user_ldst 9998f, ldtrb, \reg, \ptr, \val
 	.endm
 
 	.macro strb1 reg, ptr, val
-	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
+	user_ldst 9998f, sttrb, \reg, \ptr, \val
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
+	user_ldst 9998f, ldtrh, \reg, \ptr, \val
 	.endm
 
 	.macro strh1 reg, ptr, val
-	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
+	user_ldst 9998f, sttrh, \reg, \ptr, \val
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
+	user_ldst 9998f, ldtr, \reg, \ptr, \val
 	.endm
 
 	.macro str1 reg, ptr, val
-	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
+	user_ldst 9998f, sttr, \reg, \ptr, \val
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
+	user_ldp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
-	uao_stp 9998f, \reg1, \reg2, \ptr, \val
+	user_stp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 end	.req	x5
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 4ec59704b8f2d..043da90f5dd7d 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -24,7 +24,7 @@
 	.endm
 
 	.macro strb1 reg, ptr, val
-	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
+	user_ldst 9998f, sttrb, \reg, \ptr, \val
 	.endm
 
 	.macro ldrh1 reg, ptr, val
@@ -32,7 +32,7 @@
 	.endm
 
 	.macro strh1 reg, ptr, val
-	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
+	user_ldst 9998f, sttrh, \reg, \ptr, \val
 	.endm
 
 	.macro ldr1 reg, ptr, val
@@ -40,7 +40,7 @@
 	.endm
 
 	.macro str1 reg, ptr, val
-	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
+	user_ldst 9998f, sttr, \reg, \ptr, \val
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
@@ -48,7 +48,7 @@
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
-	uao_stp 9998f, \reg1, \reg2, \ptr, \val
+	user_stp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 end	.req	x5
-- 
2.11.0



* [PATCH 11/13] arm64: uaccess: remove set_fs()
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (9 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 10/13] arm64: uaccess cleanup macro naming Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-28  7:04   ` Christoph Hellwig
  2020-09-25 16:07 ` [PATCH 12/13] arm64: uaccess: remove addr_limit_user_check() Mark Rutland
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

Now that the uaccess primitives don't take addr_limit into account, we
have no need to manipulate this via set_fs() and get_fs(). Remove
support for these, along with some infrastructure this renders
redundant.

We no longer need to flip UAO to access kernel memory under KERNEL_DS,
and head.S unconditionally clears UAO for all kernel configurations via
an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
nor do we need to context-switch it. However, we do need to clear UAO
(and set PAN) during SDEI entry.

Masking of __user pointers no longer needs to use the dynamic value of
addr_limit, and can use a constant derived from the maximum possible
userspace task size. A new TASK_SIZE_MAX constant is introduced for
this, which is also used by core code. In configurations supporting
52-bit VAs, this may include a region of unusable VA space above a
48-bit TTBR0 limit, but never includes any portion of TTBR1.

Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
KERNEL_DS were inclusive limits, and is converted to a mask by
subtracting one.
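The __range_ok() semantics described above can be sketched in userspace C. This is an assumed model: the 48-bit limit and unsigned __int128 (a GCC/Clang extension standing in for the 65-bit sum) are illustrative, while the kernel implements the same check with an inline-asm adds/csel sequence against the inclusive limit TASK_SIZE_MAX - 1:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed 48-bit VA limit, standing in for TASK_SIZE_MAX. */
#define MODEL_TASK_SIZE_MAX  (UINT64_C(1) << 48)

/*
 * Model of "(u65)addr + (u65)size <= (u65)TASK_SIZE_MAX": the widened
 * sum means a wrapping addr + size cannot sneak a huge range past the
 * check. TASK_SIZE_MAX is an exclusive limit, so <= is correct for the
 * byte range [addr, addr + size).
 */
static int model_range_ok(uint64_t addr, uint64_t size)
{
	unsigned __int128 end = (unsigned __int128)addr + size;

	return end <= MODEL_TASK_SIZE_MAX;
}
```

The last case below is the one a naive 64-bit `addr + size <= limit` would get wrong, since the sum wraps to a small value.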

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/Kconfig                   |  1 -
 arch/arm64/include/asm/exec.h        |  1 -
 arch/arm64/include/asm/processor.h   |  4 +---
 arch/arm64/include/asm/ptrace.h      |  3 +--
 arch/arm64/include/asm/thread_info.h |  4 ----
 arch/arm64/include/asm/uaccess.h     | 41 ++++++++----------------------------
 arch/arm64/kernel/asm-offsets.c      |  3 +--
 arch/arm64/kernel/cpufeature.c       |  4 ----
 arch/arm64/kernel/entry.S            | 19 +++--------------
 arch/arm64/kernel/process.c          | 12 -----------
 arch/arm64/kernel/sdei.c             | 10 ++-------
 arch/arm64/kernel/suspend.c          |  3 +--
 arch/arm64/mm/fault.c                |  5 -----
 13 files changed, 18 insertions(+), 92 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fbd9e35bef096..6d232837cbeee 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -192,7 +192,6 @@ config ARM64
 	select PCI_SYSCALL if PCI
 	select POWER_RESET
 	select POWER_SUPPLY
-	select SET_FS
 	select SPARSE_IRQ
 	select SWIOTLB
 	select SYSCTL_EXCEPTION_TRACE
diff --git a/arch/arm64/include/asm/exec.h b/arch/arm64/include/asm/exec.h
index 1aae6f9962fc1..9a1c22ce664b5 100644
--- a/arch/arm64/include/asm/exec.h
+++ b/arch/arm64/include/asm/exec.h
@@ -10,6 +10,5 @@
 #include <linux/sched.h>
 
 extern unsigned long arch_align_stack(unsigned long sp);
-void uao_thread_switch(struct task_struct *next);
 
 #endif	/* __ASM_EXEC_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 240fe5e5b7209..fa91351e7f928 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -8,9 +8,6 @@
 #ifndef __ASM_PROCESSOR_H
 #define __ASM_PROCESSOR_H
 
-#define KERNEL_DS		UL(-1)
-#define USER_DS			((UL(1) << VA_BITS) - 1)
-
 /*
  * On arm64 systems, unaligned accesses by the CPU are cheap, and so there is
  * no point in shifting all network buffers by 2 bytes just to make some IP
@@ -47,6 +44,7 @@
 
 #define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_actual)
+#define TASK_SIZE_MAX		(UL(1) << VA_BITS)
 
 #ifdef CONFIG_COMPAT
 #if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 366075f228eb3..ab99512c257a8 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -181,8 +181,7 @@ struct pt_regs {
 	s32 syscallno;
 	u32 unused2;
 #endif
-
-	u64 orig_addr_limit;
+	u64 sdei_ttbr1;
 	/* Only valid when ARM64_HAS_IRQ_PRIO_MASKING is enabled. */
 	u64 pmr_save;
 	u64 stackframe[2];
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 5e784e16ee895..f02181ec7028c 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -18,14 +18,11 @@ struct task_struct;
 #include <asm/stack_pointer.h>
 #include <asm/types.h>
 
-typedef unsigned long mm_segment_t;
-
 /*
  * low level task data that entry.S needs immediate access to.
  */
 struct thread_info {
 	unsigned long		flags;		/* low level flags */
-	mm_segment_t		addr_limit;	/* address limit */
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	u64			ttbr0;		/* saved TTBR0_EL1 */
 #endif
@@ -117,7 +114,6 @@ void arch_release_task_struct(struct task_struct *tsk);
 {									\
 	.flags		= _TIF_FOREIGN_FPSTATE,				\
 	.preempt_count	= INIT_PREEMPT_COUNT,				\
-	.addr_limit	= KERNEL_DS,					\
 	INIT_SCS							\
 }
 
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index bc7f3ff24ccd5..6d3ac598777bc 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -26,44 +26,22 @@
 
 #define HAVE_GET_KERNEL_NOFAULT
 
-#define get_fs()	(current_thread_info()->addr_limit)
-
-static inline void set_fs(mm_segment_t fs)
+static inline void init_hw_uaccess_state(void)
 {
-	current_thread_info()->addr_limit = fs;
-
-	/*
-	 * Prevent a mispredicted conditional call to set_fs from forwarding
-	 * the wrong address limit to access_ok under speculation.
-	 */
-	spec_bar();
-
-	/* On user-mode return, check fs is correct */
-	set_thread_flag(TIF_FSCHECK);
-
-	/*
-	 * Enable/disable UAO so that copy_to_user() etc can access
-	 * kernel memory with the unprivileged instructions.
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_UAO) && fs == KERNEL_DS)
-		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
-	else
-		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO,
-				CONFIG_ARM64_UAO));
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
+	asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
 }
 
-#define uaccess_kernel()	(get_fs() == KERNEL_DS)
-
 /*
  * Test whether a block of memory is a valid user space address.
  * Returns 1 if the range is valid, 0 otherwise.
  *
  * This is equivalent to the following test:
- * (u65)addr + (u65)size <= (u65)current->addr_limit + 1
+ * (u65)addr + (u65)size <= (u65)TASK_SIZE_MAX
  */
 static inline unsigned long __range_ok(const void __user *addr, unsigned long size)
 {
-	unsigned long ret, limit = current_thread_info()->addr_limit;
+	unsigned long ret, limit = TASK_SIZE_MAX - 1;
 
 	/*
 	 * Asynchronous I/O running in a kernel thread does not have the
@@ -96,7 +74,6 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
 }
 
 #define access_ok(addr, size)	__range_ok(addr, size)
-#define user_addr_max			get_fs
 
 #define _ASM_EXTABLE(from, to)						\
 	"	.pushsection	__ex_table, \"a\"\n"			\
@@ -226,9 +203,9 @@ static inline void uaccess_enable_not_uao(void)
 }
 
 /*
- * Sanitise a uaccess pointer such that it becomes NULL if above the
- * current addr_limit. In case the pointer is tagged (has the top byte set),
- * untag the pointer before checking.
+ * Sanitise a uaccess pointer such that it becomes NULL if above the maximum
+ * user address. In case the pointer is tagged (has the top byte set), untag
+ * the pointer before checking.
  */
 #define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
 static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
@@ -239,7 +216,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	"	bics	xzr, %3, %2\n"
 	"	csel	%0, %1, xzr, eq\n"
 	: "=&r" (safe_ptr)
-	: "r" (ptr), "r" (current_thread_info()->addr_limit),
+	: "r" (ptr), "r" (TASK_SIZE_MAX - 1),
 	  "r" (untagged_addr(ptr))
 	: "cc");
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7d32fc959b1a0..679b19b8a7ff2 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -30,7 +30,6 @@ int main(void)
   BLANK();
   DEFINE(TSK_TI_FLAGS,		offsetof(struct task_struct, thread_info.flags));
   DEFINE(TSK_TI_PREEMPT,	offsetof(struct task_struct, thread_info.preempt_count));
-  DEFINE(TSK_TI_ADDR_LIMIT,	offsetof(struct task_struct, thread_info.addr_limit));
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
   DEFINE(TSK_TI_TTBR0,		offsetof(struct task_struct, thread_info.ttbr0));
 #endif
@@ -70,7 +69,7 @@ int main(void)
   DEFINE(S_PSTATE,		offsetof(struct pt_regs, pstate));
   DEFINE(S_PC,			offsetof(struct pt_regs, pc));
   DEFINE(S_SYSCALLNO,		offsetof(struct pt_regs, syscallno));
-  DEFINE(S_ORIG_ADDR_LIMIT,	offsetof(struct pt_regs, orig_addr_limit));
+  DEFINE(S_SDEI_TTBR1,		offsetof(struct pt_regs, sdei_ttbr1));
   DEFINE(S_PMR_SAVE,		offsetof(struct pt_regs, pmr_save));
   DEFINE(S_STACKFRAME,		offsetof(struct pt_regs, stackframe));
   DEFINE(S_FRAME_SIZE,		sizeof(struct pt_regs));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6424584be01e6..3ac79d0935778 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1772,10 +1772,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64MMFR2_EL1,
 		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
 		.min_field_value = 1,
-		/*
-		 * We rely on stop_machine() calling uao_thread_switch() to set
-		 * UAO immediately after patching.
-		 */
 	},
 #endif /* CONFIG_ARM64_UAO */
 #ifdef CONFIG_ARM64_PAN
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 55af8b504b65a..5d6033c606961 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -190,12 +190,6 @@ alternative_cb_end
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
-	/* Save the task's original addr_limit and set USER_DS */
-	ldr	x20, [tsk, #TSK_TI_ADDR_LIMIT]
-	str	x20, [sp, #S_ORIG_ADDR_LIMIT]
-	mov	x20, #USER_DS
-	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]
-	/* No need to reset PSTATE.UAO, hardware's already set it to 0 for us */
 	.endif /* \el == 0 */
 	mrs	x22, elr_el1
 	mrs	x23, spsr_el1
@@ -246,12 +240,6 @@ alternative_else_nop_endif
 	.macro	kernel_exit, el
 	.if	\el != 0
 	disable_daif
-
-	/* Restore the task's original addr_limit. */
-	ldr	x20, [sp, #S_ORIG_ADDR_LIMIT]
-	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]
-
-	/* No need to restore UAO, it will be restored from SPSR_EL1 */
 	.endif
 
 	/* Restore pmr */
@@ -963,10 +951,9 @@ SYM_CODE_START(__sdei_asm_entry_trampoline)
 	mov	x4, xzr
 
 	/*
-	 * Use reg->interrupted_regs.addr_limit to remember whether to unmap
-	 * the kernel on exit.
+	 * Remember whether to unmap the kernel on exit.
 	 */
-1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
+1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
 
 #ifdef CONFIG_RANDOMIZE_BASE
 	adr	x4, tramp_vectors + PAGE_SIZE
@@ -987,7 +974,7 @@ NOKPROBE(__sdei_asm_entry_trampoline)
  * x4: struct sdei_registered_event argument from registration time.
  */
 SYM_CODE_START(__sdei_asm_exit_trampoline)
-	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
+	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
 	cbnz	x4, 1f
 
 	tramp_unmap_kernel	tmp=x4
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index f1804496b9350..6ec12f4cb546f 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -455,17 +455,6 @@ static void tls_thread_switch(struct task_struct *next)
 	write_sysreg(*task_user_tls(next), tpidr_el0);
 }
 
-/* Restore the UAO state depending on next's addr_limit */
-void uao_thread_switch(struct task_struct *next)
-{
-	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
-		if (task_thread_info(next)->addr_limit == KERNEL_DS)
-			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
-		else
-			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
-	}
-}
-
 /*
  * Force SSBS state on context-switch, since it may be lost after migrating
  * from a CPU which treats the bit as RES0 in a heterogeneous system.
@@ -559,7 +548,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	hw_breakpoint_thread_switch(next);
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
-	uao_thread_switch(next);
 	ssbs_thread_switch(next);
 	erratum_1418040_thread_switch(prev, next);
 
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 4a5f24602aa0c..f5e8797ac3b9b 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -221,16 +221,12 @@ asmlinkage __kprobes notrace unsigned long
 __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 {
 	unsigned long ret;
-	mm_segment_t orig_addr_limit;
 
 	/*
 	 * We didn't take an exception to get here, so the HW hasn't set PAN or
-	 * cleared UAO, and the exception entry code hasn't reset addr_limit.
-	 * Set PAN, then use force_uaccess_begin() to clear UAO and reset
-	 * addr_limit.
+	 * cleared UAO. Reset these to their expected state.
 	 */
-	__uaccess_enable_hw_pan();
-	orig_addr_limit = force_uaccess_begin();
+	init_hw_uaccess_state();
 
 	nmi_enter();
 
@@ -238,7 +234,5 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 
 	nmi_exit();
 
-	force_uaccess_end(orig_addr_limit);
-
 	return ret;
 }
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index c1dee9066ff97..8f0cd958b1411 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -56,8 +56,7 @@ void notrace __cpu_suspend_exit(void)
 	 * PSTATE was not saved over suspend/resume, re-enable any detected
 	 * features that might not have been set correctly.
 	 */
-	__uaccess_enable_hw_pan();
-	uao_thread_switch(current);
+	init_hw_uaccess_state();
 
 	/*
 	 * Restore HW breakpoint registers to sane values
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f07333e86c2ff..bf984b8323888 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -477,11 +477,6 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}
 
 	if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
-		/* regs->orig_addr_limit may be 0 if we entered from EL0 */
-		if (regs->orig_addr_limit == KERNEL_DS)
-			die_kernel_fault("access to user memory with fs=KERNEL_DS",
-					 addr, esr, regs);
-
 		if (is_el1_instruction_abort(esr))
 			die_kernel_fault("execution of user memory",
 					 addr, esr, regs);
-- 
2.11.0
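The `__range_ok()` comment earlier in this patch describes the access check as a 65-bit comparison, which the arm64 implementation performs with an `adds`/`ccmp`/`csel` sequence. A hedged portable sketch of the same test — using the GCC/Clang `__builtin_add_overflow` builtin in place of the flag-setting add, with an assumed `FAKE_TASK_SIZE_MAX` and tagged addresses ignored for simplicity:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed user address-space size, for illustration only. */
#define FAKE_TASK_SIZE_MAX (1ULL << 48)

/*
 * Model of the 65-bit test:
 *   (u65)addr + (u65)size <= (u65)TASK_SIZE_MAX
 * An overflow-checked 64-bit add stands in for the extra bit: a
 * carry out of bit 63 means the sum can never be in the user range.
 */
static bool range_ok(uint64_t addr, uint64_t size)
{
	uint64_t end;

	if (__builtin_add_overflow(addr, size, &end))
		return false;	/* wrapped past 2^64: definitely bad */
	return end <= FAKE_TASK_SIZE_MAX;
}
```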


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 12/13] arm64: uaccess: remove addr_limit_user_check()
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (10 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 11/13] arm64: uaccess: remove set_fs() Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-28  7:05   ` Christoph Hellwig
  2020-09-25 16:07 ` [PATCH 13/13] arm64: uaccess: remove redundant PAN toggling Mark Rutland
  2020-09-28  7:16 ` [PATCH 00/13] arm64: remove set_fs() and friends Christoph Hellwig
  13 siblings, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

Subsequent patches will remove addr_limit from arm64; consequently, the
addr_limit checks will become redundant. As a preparatory step, remove
the checks and associated thread flag.


Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/thread_info.h | 4 +---
 arch/arm64/kernel/signal.c           | 3 ---
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index f02181ec7028c..eb4e222e65d16 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -63,7 +63,6 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
 #define TIF_FOREIGN_FPSTATE	3	/* CPU's FP state is not current's */
 #define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
-#define TIF_FSCHECK		5	/* Check FS is USER_DS on return */
 #define TIF_SYSCALL_TRACE	8	/* syscall trace active */
 #define TIF_SYSCALL_AUDIT	9	/* syscall auditing */
 #define TIF_SYSCALL_TRACEPOINT	10	/* syscall tracepoint for ftrace */
@@ -89,14 +88,13 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
-#define _TIF_FSCHECK		(1 << TIF_FSCHECK)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 #define _TIF_SVE		(1 << TIF_SVE)
 
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
-				 _TIF_UPROBE | _TIF_FSCHECK)
+				 _TIF_UPROBE)
 
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 3b4f31f35e458..d8fc49339c813 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -918,9 +918,6 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 	trace_hardirqs_off();
 
 	do {
-		/* Check valid user FS if needed */
-		addr_limit_user_check();
-
 		if (thread_flags & _TIF_NEED_RESCHED) {
 			/* Unmask Debug and SError for the next task */
 			local_daif_restore(DAIF_PROCCTX_NOIRQ);
-- 
2.11.0



* [PATCH 13/13] arm64: uaccess: remove redundant PAN toggling
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (11 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 12/13] arm64: uaccess: remove addr_limit_user_check() Mark Rutland
@ 2020-09-25 16:07 ` Mark Rutland
  2020-09-28  7:16 ` [PATCH 00/13] arm64: remove set_fs() and friends Christoph Hellwig
  13 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-25 16:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, robin.murphy, james.morse, will, hch

Some code (e.g. futex) needs to make privileged accesses to userspace
memory, and uses uaccess_{enable,disable}_privileged() in order to
permit this. All other uaccess primitives use LDTR/STTR, and never need
to toggle PAN.

Remove the redundant PAN toggling.
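As a toy single-threaded model of the surviving privileged-uaccess bracket — the function names mirror the patch, but the PAN state is simulated and the SW TTBR0-PAN path is omitted:

```c
/*
 * Simulated PSTATE.PAN: when set, privileged accesses to user
 * memory fault. This is toy userspace state, not real hardware.
 */
static int pstate_pan = 1;

static void uaccess_enable_privileged(void)  { pstate_pan = 0; }
static void uaccess_disable_privileged(void) { pstate_pan = 1; }

/* futex-style read-modify-write needing a privileged user access */
static int futex_add(int *uaddr, int val)
{
	int old;

	uaccess_enable_privileged();
	old = *uaddr;		/* an LDXR/STLXR loop on real hardware */
	*uaddr = old + val;
	uaccess_disable_privileged();

	return old;
}
```

The bracket is balanced around the access, so PAN protection is restored before returning; every other uaccess primitive relies on LDTR/STTR and needs no such bracket.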

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpucaps.h |  1 -
 arch/arm64/include/asm/uaccess.h | 69 +++++++++++-----------------------------
 arch/arm64/kernel/cpufeature.c   |  7 ----
 3 files changed, 19 insertions(+), 58 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 07b643a707100..6c1c4a960e40c 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -17,7 +17,6 @@
 #define ARM64_WORKAROUND_834220			7
 #define ARM64_HAS_NO_HW_PREFETCH		8
 #define ARM64_HAS_UAO				9
-#define ARM64_ALT_PAN_NOT_UAO			10
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 6d3ac598777bc..ef2d5a90e1815 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -153,53 +153,22 @@ static inline bool uaccess_ttbr0_enable(void)
 }
 #endif
 
-static inline void __uaccess_disable_hw_pan(void)
+static inline void uaccess_disable_privileged(void)
 {
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
-			CONFIG_ARM64_PAN));
-}
+	if (uaccess_ttbr0_disable())
+		return;
 
-static inline void __uaccess_enable_hw_pan(void)
-{
 	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
 			CONFIG_ARM64_PAN));
 }
 
-#define __uaccess_disable(alt)						\
-do {									\
-	if (!uaccess_ttbr0_disable())					\
-		asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), alt,		\
-				CONFIG_ARM64_PAN));			\
-} while (0)
-
-#define __uaccess_enable(alt)						\
-do {									\
-	if (!uaccess_ttbr0_enable())					\
-		asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), alt,		\
-				CONFIG_ARM64_PAN));			\
-} while (0)
-
-static inline void uaccess_disable_privileged(void)
-{
-	__uaccess_disable(ARM64_HAS_PAN);
-}
-
 static inline void uaccess_enable_privileged(void)
 {
-	__uaccess_enable(ARM64_HAS_PAN);
-}
+	if (uaccess_ttbr0_enable())
+		return;
 
-/*
- * These functions are no-ops when UAO is present.
- */
-static inline void uaccess_disable_not_uao(void)
-{
-	__uaccess_disable(ARM64_ALT_PAN_NOT_UAO);
-}
-
-static inline void uaccess_enable_not_uao(void)
-{
-	__uaccess_enable(ARM64_ALT_PAN_NOT_UAO);
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
+			CONFIG_ARM64_PAN));
 }
 
 /*
@@ -271,9 +240,9 @@ do {									\
 
 #define __raw_get_user(x, ptr, err)					\
 do {									\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__raw_get_mem("ldtr", x, ptr, err);				\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 } while (0)
 
 #define __get_user_error(x, ptr, err)					\
@@ -344,9 +313,9 @@ do {									\
 
 #define __raw_put_user(x, ptr, err)					\
 do {									\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__raw_put_mem("sttr", x, ptr, err);				\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 } while (0)
 
 #define __put_user_error(x, ptr, err)					\
@@ -384,10 +353,10 @@ extern unsigned long __must_check __arch_copy_from_user(void *to, const void __u
 #define raw_copy_from_user(to, from, n)					\
 ({									\
 	unsigned long __acfu_ret;					\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__acfu_ret = __arch_copy_from_user((to),			\
 				      __uaccess_mask_ptr(from), (n));	\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 	__acfu_ret;							\
 })
 
@@ -395,10 +364,10 @@ extern unsigned long __must_check __arch_copy_to_user(void __user *to, const voi
 #define raw_copy_to_user(to, from, n)					\
 ({									\
 	unsigned long __actu_ret;					\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__actu_ret = __arch_copy_to_user(__uaccess_mask_ptr(to),	\
 				    (from), (n));			\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 	__actu_ret;							\
 })
 
@@ -406,10 +375,10 @@ extern unsigned long __must_check __arch_copy_in_user(void __user *to, const voi
 #define raw_copy_in_user(to, from, n)					\
 ({									\
 	unsigned long __aciu_ret;					\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__aciu_ret = __arch_copy_in_user(__uaccess_mask_ptr(to),	\
 				    __uaccess_mask_ptr(from), (n));	\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 	__aciu_ret;							\
 })
 
@@ -420,9 +389,9 @@ extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned lo
 static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
 {
 	if (access_ok(to, n)) {
-		uaccess_enable_not_uao();
+		uaccess_ttbr0_enable();
 		n = __arch_clear_user(__uaccess_mask_ptr(to), n);
-		uaccess_disable_not_uao();
+		uaccess_ttbr0_disable();
 	}
 	return n;
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3ac79d0935778..9ca8144f1e6a4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1774,13 +1774,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = 1,
 	},
 #endif /* CONFIG_ARM64_UAO */
-#ifdef CONFIG_ARM64_PAN
-	{
-		.capability = ARM64_ALT_PAN_NOT_UAO,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
-		.matches = cpufeature_pan_not_uao,
-	},
-#endif /* CONFIG_ARM64_PAN */
 #ifdef CONFIG_ARM64_VHE
 	{
 		.desc = "Virtualization Host Extensions",
-- 
2.11.0



* Re: [PATCH 11/13] arm64: uaccess: remove set_fs()
  2020-09-25 16:07 ` [PATCH 11/13] arm64: uaccess: remove set_fs() Mark Rutland
@ 2020-09-28  7:04   ` Christoph Hellwig
  2020-09-28  9:02     ` Mark Rutland
  0 siblings, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2020-09-28  7:04 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, robin.murphy, james.morse, will, hch, linux-arm-kernel

> +static inline void init_hw_uaccess_state(void)
>  {
> +	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
> +	asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
>  }

The way I read this code, it is about per-task state; maybe that
could be reflected in the name?  init_task_hw_access_state or
init_task_uaccess_flags?


* Re: [PATCH 12/13] arm64: uaccess: remove addr_limit_user_check()
  2020-09-25 16:07 ` [PATCH 12/13] arm64: uaccess: remove addr_limit_user_check() Mark Rutland
@ 2020-09-28  7:05   ` Christoph Hellwig
  2020-09-28  9:06     ` Mark Rutland
  0 siblings, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2020-09-28  7:05 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, robin.murphy, james.morse, will, hch, linux-arm-kernel

On Fri, Sep 25, 2020 at 05:07:21PM +0100, Mark Rutland wrote:
> Subsequent patches will remove addr_limit from arm64; consequently, the
> addr_limit checks will become redundant. As a preparatory step, remove
> the checks and associated thread flag.

Wasn't this done in earlier patches?  The only thing the next patch
does is to remove the PAN stuff.  Maybe fold it into the patch
removing the addr_limit checking?


* Re: [PATCH 00/13] arm64: remove set_fs() and friends
  2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
                   ` (12 preceding siblings ...)
  2020-09-25 16:07 ` [PATCH 13/13] arm64: uaccess: remove redundant PAN toggling Mark Rutland
@ 2020-09-28  7:16 ` Christoph Hellwig
  2020-09-28  7:23   ` Christoph Hellwig
  13 siblings, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2020-09-28  7:16 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, robin.murphy, james.morse, will, hch, linux-arm-kernel

On Fri, Sep 25, 2020 at 05:07:09PM +0100, Mark Rutland wrote:
> This series removes set_fs() from arm64, building atop the core rework
> done by Christoph. The series can be found in my arm64/set_fs-removal
> branch [2].
> 
> The bulk of the rework is to address the way we manipulate PAN and UAO,
> which is largely rendered redundant.
> 
> The kernel maccess routines (__{get,put}_kernel_nofault) are trivial
> wrappers which share code with the uaccess routines, so I expect these
> should just work, but they'll need testing in-context, especially where
> they're wrapped by the generic copy routines.
> 
> So far this has seen some very basic boot testing. I intend to throw
> Syzkaller and LTP at this soon.

I'm not an arm64 expert, but this looks reasonable to me.

Also can't we remove all the remaining UAO handling as in the patch
below, or did I totally misunderstand how uaccess works for arm64?

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d232837cbeee8..dd3c8f8a34dae2 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1434,27 +1434,6 @@ endmenu
 
 menu "ARMv8.2 architectural features"
 
-config ARM64_UAO
-	bool "Enable support for User Access Override (UAO)"
-	default y
-	help
-	  User Access Override (UAO; part of the ARMv8.2 Extensions)
-	  causes the 'unprivileged' variant of the load/store instructions to
-	  be overridden to be privileged.
-
-	  This option changes get_user() and friends to use the 'unprivileged'
-	  variant of the load/store instructions. This ensures that user-space
-	  really did have access to the supplied memory. When addr_limit is
-	  set to kernel memory the UAO bit will be set, allowing privileged
-	  access to kernel memory.
-
-	  Choosing this option will cause copy_to_user() et al to use user-space
-	  memory permissions.
-
-	  The feature is detected at runtime, the kernel will use the
-	  regular load/store instructions if the cpu does not implement the
-	  feature.
-
 config ARM64_PMEM
 	bool "Enable support for persistent memory"
 	select ARCH_HAS_PMEM_API
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index ef2d5a90e1815f..1c16e43f035a7a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -29,7 +29,6 @@
 static inline void init_hw_uaccess_state(void)
 {
 	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
-	asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
 }
 
 /*
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 8bc2cb5547346f..c460cd15dc49b3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1760,17 +1760,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
 		.matches = has_no_hw_prefetch,
 	},
-#ifdef CONFIG_ARM64_UAO
-	{
-		.desc = "User Access Override",
-		.capability = ARM64_HAS_UAO,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
-		.matches = has_cpuid_feature,
-		.sys_reg = SYS_ID_AA64MMFR2_EL1,
-		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
-		.min_field_value = 1,
-	},
-#endif /* CONFIG_ARM64_UAO */
 #ifdef CONFIG_ARM64_VHE
 	{
 		.desc = "Virtualization Host Extensions",
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 6ec12f4cb546f4..f223d27d991b3c 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -239,7 +239,7 @@ static void print_pstate(struct pt_regs *regs)
 		const char *btype_str = btypes[(pstate & PSR_BTYPE_MASK) >>
 					       PSR_BTYPE_SHIFT];
 
-		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO BTYPE=%s)\n",
+		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN BTYPE=%s)\n",
 			pstate,
 			pstate & PSR_N_BIT ? 'N' : 'n',
 			pstate & PSR_Z_BIT ? 'Z' : 'z',
@@ -250,7 +250,6 @@ static void print_pstate(struct pt_regs *regs)
 			pstate & PSR_I_BIT ? 'I' : 'i',
 			pstate & PSR_F_BIT ? 'F' : 'f',
 			pstate & PSR_PAN_BIT ? '+' : '-',
-			pstate & PSR_UAO_BIT ? '+' : '-',
 			btype_str);
 	}
 }
@@ -417,10 +416,6 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	} else {
 		memset(childregs, 0, sizeof(struct pt_regs));
 		childregs->pstate = PSR_MODE_EL1h;
-		if (IS_ENABLED(CONFIG_ARM64_UAO) &&
-		    cpus_have_const_cap(ARM64_HAS_UAO))
-			childregs->pstate |= PSR_UAO_BIT;
-
 		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
 			set_ssbs_bit(childregs);
 


* Re: [PATCH 00/13] arm64: remove set_fs() and friends
  2020-09-28  7:16 ` [PATCH 00/13] arm64: remove set_fs() and friends Christoph Hellwig
@ 2020-09-28  7:23   ` Christoph Hellwig
  2020-09-28  9:18     ` Mark Rutland
  0 siblings, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2020-09-28  7:23 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, robin.murphy, james.morse, will, hch, linux-arm-kernel

On Mon, Sep 28, 2020 at 09:16:01AM +0200, Christoph Hellwig wrote:
> On Fri, Sep 25, 2020 at 05:07:09PM +0100, Mark Rutland wrote:
> > This series removes set_fs() from arm64, building atop the core rework
> > done by Christoph. The series can be found in my arm64/set_fs-removal
> > branch [2].
> > 
> > The bulk of the rework is to address the way we manipulate PAN and UAO,
> > which is largely rendered redundant.
> > 
> > The kernel maccess routines (__{get,put}_kernel_nofault) are trivial
> > wrappers which share code with the uaccess routines, so I expect these
> > should just work, but they'll need testing in-context, especially where
> > they're wrapped by the generic copy routines.
> > 
> > So far this has seen some very basic boot testing. I intend to throw
> > Syzkaller and LTP at this soon.
> 
> I'm not an arm64 expert, but this looks reasonable to me.
> 
> Also can't we remove all the remaining UAO handling as in the patch
> below, or did I totally misunderstand how uaccess works for arm64?

Actually the patch was incomplete, here is the full one:

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d232837cbeee8..dd3c8f8a34dae2 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1434,27 +1434,6 @@ endmenu
 
 menu "ARMv8.2 architectural features"
 
-config ARM64_UAO
-	bool "Enable support for User Access Override (UAO)"
-	default y
-	help
-	  User Access Override (UAO; part of the ARMv8.2 Extensions)
-	  causes the 'unprivileged' variant of the load/store instructions to
-	  be overridden to be privileged.
-
-	  This option changes get_user() and friends to use the 'unprivileged'
-	  variant of the load/store instructions. This ensures that user-space
-	  really did have access to the supplied memory. When addr_limit is
-	  set to kernel memory the UAO bit will be set, allowing privileged
-	  access to kernel memory.
-
-	  Choosing this option will cause copy_to_user() et al to use user-space
-	  memory permissions.
-
-	  The feature is detected at runtime, the kernel will use the
-	  regular load/store instructions if the cpu does not implement the
-	  feature.
-
 config ARM64_PMEM
 	bool "Enable support for persistent memory"
 	select ARCH_HAS_PMEM_API
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index ef2d5a90e1815f..1c16e43f035a7a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -29,7 +29,6 @@
 static inline void init_hw_uaccess_state(void)
 {
 	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
-	asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
 }
 
 /*
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ca8144f1e6a45..c460cd15dc49b3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -153,9 +153,6 @@ EXPORT_SYMBOL(cpu_hwcap_keys);
 	}
 
 /* meta feature for alternatives */
-static bool __maybe_unused
-cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
-
 static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
 
 static bool __system_matches_cap(unsigned int n);
@@ -1763,17 +1760,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
 		.matches = has_no_hw_prefetch,
 	},
-#ifdef CONFIG_ARM64_UAO
-	{
-		.desc = "User Access Override",
-		.capability = ARM64_HAS_UAO,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
-		.matches = has_cpuid_feature,
-		.sys_reg = SYS_ID_AA64MMFR2_EL1,
-		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
-		.min_field_value = 1,
-	},
-#endif /* CONFIG_ARM64_UAO */
 #ifdef CONFIG_ARM64_VHE
 	{
 		.desc = "Virtualization Host Extensions",
@@ -2701,12 +2687,6 @@ void __init setup_cpu_features(void)
 			ARCH_DMA_MINALIGN);
 }
 
-static bool __maybe_unused
-cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
-{
-	return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO));
-}
-
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
 {
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 6ec12f4cb546f4..f223d27d991b3c 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -239,7 +239,7 @@ static void print_pstate(struct pt_regs *regs)
 		const char *btype_str = btypes[(pstate & PSR_BTYPE_MASK) >>
 					       PSR_BTYPE_SHIFT];
 
-		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO BTYPE=%s)\n",
+		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN BTYPE=%s)\n",
 			pstate,
 			pstate & PSR_N_BIT ? 'N' : 'n',
 			pstate & PSR_Z_BIT ? 'Z' : 'z',
@@ -250,7 +250,6 @@ static void print_pstate(struct pt_regs *regs)
 			pstate & PSR_I_BIT ? 'I' : 'i',
 			pstate & PSR_F_BIT ? 'F' : 'f',
 			pstate & PSR_PAN_BIT ? '+' : '-',
-			pstate & PSR_UAO_BIT ? '+' : '-',
 			btype_str);
 	}
 }
@@ -417,10 +416,6 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	} else {
 		memset(childregs, 0, sizeof(struct pt_regs));
 		childregs->pstate = PSR_MODE_EL1h;
-		if (IS_ENABLED(CONFIG_ARM64_UAO) &&
-		    cpus_have_const_cap(ARM64_HAS_UAO))
-			childregs->pstate |= PSR_UAO_BIT;
-
 		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
 			set_ssbs_bit(childregs);
 


* Re: [PATCH 11/13] arm64: uaccess: remove set_fs()
  2020-09-28  7:04   ` Christoph Hellwig
@ 2020-09-28  9:02     ` Mark Rutland
  2020-09-28 15:29       ` Mark Rutland
  0 siblings, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2020-09-28  9:02 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: catalin.marinas, will, james.morse, robin.murphy, linux-arm-kernel

On Mon, Sep 28, 2020 at 09:04:43AM +0200, Christoph Hellwig wrote:
> > +static inline void init_hw_uaccess_state(void)
> >  {
> > +	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
> > +	asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
> >  }
> 
> The way I read this code it is about per-task state, maybe that
> could be reflected in the name?  init_task_hw_access_state or
> init_task_uaccess_flags?

My intent in writing this was that this resets the HW into the expected
ambient state common to all tasks, so I don't think about this as a
per-task thing (but I agree the transient PAN toggling for atomics means
PAN is per-task state).

I'd prefer not to put 'task' in the name as to me that implies messing
with something like task_struct or other SW state, but if you have
strong feelings about this, I think something like
init_task_hw_access_state() would be fine.

Thanks,
Mark.


* Re: [PATCH 12/13] arm64: uaccess: remove addr_limit_user_check()
  2020-09-28  7:05   ` Christoph Hellwig
@ 2020-09-28  9:06     ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-28  9:06 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: catalin.marinas, will, james.morse, robin.murphy, linux-arm-kernel

On Mon, Sep 28, 2020 at 09:05:34AM +0200, Christoph Hellwig wrote:
> On Fri, Sep 25, 2020 at 05:07:21PM +0100, Mark Rutland wrote:
> > Subsequent patches will remove addr_limit from arm64; consequently, the
> > addr_limit checks will become redundant. As a preparatory step, remove
> > the checks and associated thread flag.
> 
> Wasn't this done in earlier patches?  The only thing the next patch
> does is to remove the PAN stuff.  Maybe fold it into the patch
> removing the addr_limit checking?

Ah, this is a rebase artifact from when the patches were the other way
around. I'll update the commit message to not be misleading.

Thanks,
Mark.


* Re: [PATCH 00/13] arm64: remove set_fs() and friends
  2020-09-28  7:23   ` Christoph Hellwig
@ 2020-09-28  9:18     ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-28  9:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: catalin.marinas, will, james.morse, robin.murphy, linux-arm-kernel

On Mon, Sep 28, 2020 at 09:23:15AM +0200, Christoph Hellwig wrote:
> On Mon, Sep 28, 2020 at 09:16:01AM +0200, Christoph Hellwig wrote:
> > On Fri, Sep 25, 2020 at 05:07:09PM +0100, Mark Rutland wrote:
> > I'm not an arm64 expert, but this looks reasonable to me.
> > 
> > Also, can't we remove all the remaining UAO handling as in the patch
> > below, or did I totally misunderstand how uaccess works for arm64?

The bits in copy_thread() can go, but I'd like to keep the bits in
print_pstate() since that's only dumping HW state rather than modifying
it and is still useful for diagnosing issues.

The rest is indirectly required by SDEI, and I would also prefer to
remove it, so I'll see about reworking the SDEI entry to handle this
explicitly, given that SDEI is the special case.

Thanks,
Mark.

> 
> Actually the patch was incomplete, here is the full one:
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 6d232837cbeee8..dd3c8f8a34dae2 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1434,27 +1434,6 @@ endmenu
>  
>  menu "ARMv8.2 architectural features"
>  
> -config ARM64_UAO
> -	bool "Enable support for User Access Override (UAO)"
> -	default y
> -	help
> -	  User Access Override (UAO; part of the ARMv8.2 Extensions)
> -	  causes the 'unprivileged' variant of the load/store instructions to
> -	  be overridden to be privileged.
> -
> -	  This option changes get_user() and friends to use the 'unprivileged'
> -	  variant of the load/store instructions. This ensures that user-space
> -	  really did have access to the supplied memory. When addr_limit is
> -	  set to kernel memory the UAO bit will be set, allowing privileged
> -	  access to kernel memory.
> -
> -	  Choosing this option will cause copy_to_user() et al to use user-space
> -	  memory permissions.
> -
> -	  The feature is detected at runtime, the kernel will use the
> -	  regular load/store instructions if the cpu does not implement the
> -	  feature.
> -
>  config ARM64_PMEM
>  	bool "Enable support for persistent memory"
>  	select ARCH_HAS_PMEM_API
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index ef2d5a90e1815f..1c16e43f035a7a 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -29,7 +29,6 @@
>  static inline void init_hw_uaccess_state(void)
>  {
>  	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
> -	asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
>  }
>  
>  /*
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9ca8144f1e6a45..c460cd15dc49b3 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -153,9 +153,6 @@ EXPORT_SYMBOL(cpu_hwcap_keys);
>  	}
>  
>  /* meta feature for alternatives */
> -static bool __maybe_unused
> -cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
> -
>  static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>  
>  static bool __system_matches_cap(unsigned int n);
> @@ -1763,17 +1760,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
>  		.matches = has_no_hw_prefetch,
>  	},
> -#ifdef CONFIG_ARM64_UAO
> -	{
> -		.desc = "User Access Override",
> -		.capability = ARM64_HAS_UAO,
> -		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> -		.matches = has_cpuid_feature,
> -		.sys_reg = SYS_ID_AA64MMFR2_EL1,
> -		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
> -		.min_field_value = 1,
> -	},
> -#endif /* CONFIG_ARM64_UAO */
>  #ifdef CONFIG_ARM64_VHE
>  	{
>  		.desc = "Virtualization Host Extensions",
> @@ -2701,12 +2687,6 @@ void __init setup_cpu_features(void)
>  			ARCH_DMA_MINALIGN);
>  }
>  
> -static bool __maybe_unused
> -cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
> -{
> -	return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO));
> -}
> -
>  static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
>  {
>  	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 6ec12f4cb546f4..f223d27d991b3c 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -239,7 +239,7 @@ static void print_pstate(struct pt_regs *regs)
>  		const char *btype_str = btypes[(pstate & PSR_BTYPE_MASK) >>
>  					       PSR_BTYPE_SHIFT];
>  
> -		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO BTYPE=%s)\n",
> +		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN BTYPE=%s)\n",
>  			pstate,
>  			pstate & PSR_N_BIT ? 'N' : 'n',
>  			pstate & PSR_Z_BIT ? 'Z' : 'z',
> @@ -250,7 +250,6 @@ static void print_pstate(struct pt_regs *regs)
>  			pstate & PSR_I_BIT ? 'I' : 'i',
>  			pstate & PSR_F_BIT ? 'F' : 'f',
>  			pstate & PSR_PAN_BIT ? '+' : '-',
> -			pstate & PSR_UAO_BIT ? '+' : '-',
>  			btype_str);
>  	}
>  }
> @@ -417,10 +416,6 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
>  	} else {
>  		memset(childregs, 0, sizeof(struct pt_regs));
>  		childregs->pstate = PSR_MODE_EL1h;
> -		if (IS_ENABLED(CONFIG_ARM64_UAO) &&
> -		    cpus_have_const_cap(ARM64_HAS_UAO))
> -			childregs->pstate |= PSR_UAO_BIT;
> -
>  		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
>  			set_ssbs_bit(childregs);
>  


* Re: [PATCH 11/13] arm64: uaccess: remove set_fs()
  2020-09-28  9:02     ` Mark Rutland
@ 2020-09-28 15:29       ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-09-28 15:29 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: catalin.marinas, will, james.morse, linux-arm-kernel, robin.murphy

On Mon, Sep 28, 2020 at 10:02:28AM +0100, Mark Rutland wrote:
> On Mon, Sep 28, 2020 at 09:04:43AM +0200, Christoph Hellwig wrote:
> > > +static inline void init_hw_uaccess_state(void)
> > >  {
> > > +	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
> > > +	asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
> > >  }
> > 
> > The way I read this code it is about per-task state, maybe that
> > could be reflected in the name?  init_task_hw_access_state or
> > init_task_uaccess_flags?
> 
> My intent writing this was that this resets the HW into the expected
> ambient state common to all tasks, so I don't think about this as a
> per-task thing (but I agree the transient PAN toggling for atomics means
> PAN is per-task state).
> 
> I'd prefer not to put 'task' in the name as to me that implies messing
> with something like task_struct or other SW state, but if you have
> strong feelings about this, I think something like
> init_task_hw_access_state() would be fine.

Luckily this disappeared entirely as part of removing the vestigial UAO
support, so no need to debate the name. ;)

Thanks,
Mark.


* Re: [PATCH 03/13] arm64: head.S: always initialize PSTATE
  2020-09-25 16:07 ` [PATCH 03/13] arm64: head.S: always initialize PSTATE Mark Rutland
@ 2020-09-30 16:09   ` James Morse
  2020-10-01 11:01     ` Mark Rutland
  0 siblings, 1 reply; 24+ messages in thread
From: James Morse @ 2020-09-30 16:09 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel; +Cc: catalin.marinas, robin.murphy, hch, will

Hi Mark,

On 25/09/2020 17:07, Mark Rutland wrote:
> As with SCTLR_ELx and other control registers, some PSTATE bits are
> UNKNOWN out-of-reset, and we may not be able to rely on hardware of
> firmware

(hardware or firmware)


> to initialize them to our liking prior to entry to the kernel,
> e.g. in the primary/secondary boot paths and return from idle/suspend.
> 
> It would be more robust (and easier to reason about) if we consistently
> initialized PSTATE to a default value, as we do with control registers.
> This will ensure that the kernel is not adversely affected by bits it is
> not aware of, e.g. when support for a feature such as PAN/UAO is
> disabled.
> 
> This patch ensures that PSTATE is consistently initialized at boot time
> via an ERET. This is not intended to relax the existing requirements
> (e.g. DAIF bits must still be set prior to entering the kernel). For
> features detected dynamically (which may require system-wide support),
> it is still necessary to subsequently modify PSTATE.


> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 9bb07be45249a..b8ee9a62c9b61 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -494,17 +494,22 @@ EXPORT_SYMBOL(kimage_vaddr)
>   * booted in EL1 or EL2 respectively.
>   */
>  SYM_FUNC_START(init_kernel_el)
> -	msr	SPsel, #1			// We want to use SP_EL{1,2}
>  	mrs	x0, CurrentEL
>  	cmp	x0, #CurrentEL_EL2
> -	b.eq	1f
> +	b.eq	init_el2
> +
> +SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
>  	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
>  	msr	sctlr_el1, x0
> -	mov	w0, #BOOT_CPU_MODE_EL1		// This cpu booted in EL1

>  	isb

Could you add a comment covering why this ISB isn't rolled into the ERET?
I'm guessing it's in case SCTLR_EL1 previously had v8.5's EOS 'Exception Exit is
Context Synchronizing' clear.


> -	ret
> +	mov_q	x0, INIT_PSTATE_EL1
> +	msr	spsr_el1, x0
> +	msr	elr_el1, lr
> +	mov	w0, #BOOT_CPU_MODE_EL1
> +	eret


Thanks,

James


* Re: [PATCH 03/13] arm64: head.S: always initialize PSTATE
  2020-09-30 16:09   ` James Morse
@ 2020-10-01 11:01     ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-10-01 11:01 UTC (permalink / raw)
  To: James Morse; +Cc: catalin.marinas, will, robin.murphy, hch, linux-arm-kernel

On Wed, Sep 30, 2020 at 05:09:47PM +0100, James Morse wrote:
> Hi Mark,
> 
> On 25/09/2020 17:07, Mark Rutland wrote:
> > As with SCTLR_ELx and other control registers, some PSTATE bits are
> > UNKNOWN out-of-reset, and we may not be able to rely on hardware of
> > firmware
> 
> (hardware or firmware)

Whoops; fixed!

[...]

> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index 9bb07be45249a..b8ee9a62c9b61 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -494,17 +494,22 @@ EXPORT_SYMBOL(kimage_vaddr)
> >   * booted in EL1 or EL2 respectively.
> >   */
> >  SYM_FUNC_START(init_kernel_el)
> > -	msr	SPsel, #1			// We want to use SP_EL{1,2}
> >  	mrs	x0, CurrentEL
> >  	cmp	x0, #CurrentEL_EL2
> > -	b.eq	1f
> > +	b.eq	init_el2
> > +
> > +SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
> >  	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
> >  	msr	sctlr_el1, x0
> > -	mov	w0, #BOOT_CPU_MODE_EL1		// This cpu booted in EL1
> 
> >  	isb
> 
> Could you add a comment covering why this ISB isn't rolled into the ERET?
> I'm guessing it's in case SCTLR_EL1 previously had v8.5's EOS 'Exception Exit is
> Context Synchronizing' clear.

Yup, and I had a vague fear that we may gain new bits in future along
similar lines.

I'll add a note to the top of init_kernel_el, since it applies to all of
the ERET cases. I'll also add a (pessimistic) ISB to the
install_el2_stub return path to make this clear/consistent.

Thanks,
Mark.
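
(Editor's illustration: the reason the ERET path initializes every PSTATE bit can be modeled compactly. The struct below is a toy machine model, not kernel code, and the INIT_PSTATE_EL1 encoding is an assumption — DAIF masked plus EL1h mode.)

```c
/*
 * Toy model of the init_el1 sequence discussed above. ERET restores
 * PSTATE wholesale from SPSR_EL1 and branches to ELR_EL1, so loading a
 * known value into SPSR_EL1 beforehand initializes every PSTATE bit,
 * including any the firmware left UNKNOWN. Illustrative only.
 */
struct cpu {
	unsigned long pstate, pc;
	unsigned long spsr_el1, elr_el1;
};

/* Assumed encoding: DAIF masked (0x3c0) | EL1h mode (0x5). */
#define INIT_PSTATE_EL1 0x3c5UL

static void eret(struct cpu *c)
{
	c->pstate = c->spsr_el1;   /* all PSTATE bits come from SPSR_EL1 */
	c->pc = c->elr_el1;        /* ...and control transfers to ELR_EL1 */
}

static void init_el1(struct cpu *c, unsigned long lr)
{
	c->spsr_el1 = INIT_PSTATE_EL1;
	c->elr_el1 = lr;
	eret(c);
}
```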


end of thread, other threads:[~2020-10-01 11:03 UTC | newest]

Thread overview: 24+ messages
2020-09-25 16:07 [PATCH 00/13] arm64: remove set_fs() and friends Mark Rutland
2020-09-25 16:07 ` [PATCH 01/13] arm64: head.S: rename el2_setup -> init_kernel_el Mark Rutland
2020-09-25 16:07 ` [PATCH 02/13] arm64: head.S: cleanup SCTLR_ELx initialization Mark Rutland
2020-09-25 16:07 ` [PATCH 03/13] arm64: head.S: always initialize PSTATE Mark Rutland
2020-09-30 16:09   ` James Morse
2020-10-01 11:01     ` Mark Rutland
2020-09-25 16:07 ` [PATCH 04/13] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
2020-09-25 16:07 ` [PATCH 05/13] arm64: uaccess: move uao_* alternatives to asm-uaccess.h Mark Rutland
2020-09-25 16:07 ` [PATCH 06/13] arm64: uaccess: rename privileged uaccess routines Mark Rutland
2020-09-25 16:07 ` [PATCH 07/13] arm64: uaccess: simplify __copy_user_flushcache() Mark Rutland
2020-09-25 16:07 ` [PATCH 08/13] arm64: uaccess: refactor __{get,put}_user Mark Rutland
2020-09-25 16:07 ` [PATCH 09/13] arm64: uaccess: split user/kernel routines Mark Rutland
2020-09-25 16:07 ` [PATCH 10/13] arm64: uaccess cleanup macro naming Mark Rutland
2020-09-25 16:07 ` [PATCH 11/13] arm64: uaccess: remove set_fs() Mark Rutland
2020-09-28  7:04   ` Christoph Hellwig
2020-09-28  9:02     ` Mark Rutland
2020-09-28 15:29       ` Mark Rutland
2020-09-25 16:07 ` [PATCH 12/13] arm64: uaccess: remove addr_limit_user_check() Mark Rutland
2020-09-28  7:05   ` Christoph Hellwig
2020-09-28  9:06     ` Mark Rutland
2020-09-25 16:07 ` [PATCH 13/13] arm64: uaccess: remove redundant PAN toggling Mark Rutland
2020-09-28  7:16 ` [PATCH 00/13] arm64: remove set_fs() and friends Christoph Hellwig
2020-09-28  7:23   ` Christoph Hellwig
2020-09-28  9:18     ` Mark Rutland
