linux-arm-kernel.lists.infradead.org archive mirror
* [PATCHv4 00/17] arm64: remove set_fs() and friends
@ 2020-11-13 12:49 Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 01/17] arm64: ensure ERET from kthread is illegal Mark Rutland
                   ` (17 more replies)
  0 siblings, 18 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Building atop Christoph's core rework, now merged in v5.10-rc1, this series
removes set_fs() from arm64. The series can be found in the
arm64/set_fs-removal branch [1].

Since v1 [2]:
* Remove redundant kthread init code
* Remove vestigial bit left over from v1
* Rework SDEI entry
* Comment head.S better
* Fix commit message typos

Since v2 [3]:
* Rebase to v5.10-rc1
* Handle new MTE uaccess asm
* Fix trivial conflict with spectre mitigation rework
* Cleanup commit messages

Since v3 [4]:
* Rebase to v5.10-rc3 (trivial)
* Fix __get_kernel_nofault / __put_kernel_nofault error handling
* Fix include in mte asm
* Fix which patches change includes
* Fix typos

I've given v4 light boot and userspace testing. As v2 saw some light testing
with LTP (both native and compat) and Syzkaller (native only), I'm not
expecting any major issues, but I will be giving this some extended soak
testing just in case.

Catalin, Will, I'd like to get this into next ASAP so that it can soak for a
while. There is a trivial conflict with for-next/lto that git doesn't seem
smart enough to resolve automatically despite our attempts to make this easy.
The right thing to do is take Will's version of asm/alternative.h (which
deletes the macros that got moved elsewhere), and my version of
asm/asm-uaccess.h (which leaves the trivial ldtr/sttr helpers).

Thanks,
Mark.

[1] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/set_fs-removal
[2] https://lore.kernel.org/r/20200925160722.27155-1-mark.rutland@arm.com
[3] https://lore.kernel.org/r/20201006144642.12195-1-mark.rutland@arm.com
[4] https://lore.kernel.org/r/20201026133156.44186-1-mark.rutland@arm.com

Mark Rutland (17):
  arm64: ensure ERET from kthread is illegal
  arm64: add C wrappers for SET_PSTATE_*()
  arm64: head.S: rename el2_setup -> init_kernel_el
  arm64: head.S: cleanup SCTLR_ELx initialization
  arm64: head.S: always initialize PSTATE
  arm64: sdei: move uaccess logic to arch/arm64/
  arm64: sdei: explicitly simulate PAN/UAO entry
  arm64: uaccess: move uao_* alternatives to asm-uaccess.h
  arm64: uaccess: rename privileged uaccess routines
  arm64: uaccess: simplify __copy_user_flushcache()
  arm64: uaccess: refactor __{get,put}_user
  arm64: uaccess: split user/kernel routines
  arm64: uaccess: cleanup macro naming
  arm64: uaccess: remove set_fs()
  arm64: uaccess: remove addr_limit_user_check()
  arm64: uaccess: remove redundant PAN toggling
  arm64: uaccess: remove vestigial UAO support

 arch/arm64/Kconfig                   |  22 -----
 arch/arm64/include/asm/alternative.h |  59 ------------
 arch/arm64/include/asm/asm-uaccess.h |  29 ++++++
 arch/arm64/include/asm/cpucaps.h     |   2 -
 arch/arm64/include/asm/cpufeature.h  |  22 ++++-
 arch/arm64/include/asm/exec.h        |   1 -
 arch/arm64/include/asm/futex.h       |   8 +-
 arch/arm64/include/asm/processor.h   |   4 +-
 arch/arm64/include/asm/ptrace.h      |   8 +-
 arch/arm64/include/asm/sysreg.h      |  22 +++--
 arch/arm64/include/asm/thread_info.h |  10 +-
 arch/arm64/include/asm/uaccess.h     | 174 +++++++++++++++--------------------
 arch/arm64/kernel/armv8_deprecated.c |   4 +-
 arch/arm64/kernel/asm-offsets.c      |   3 +-
 arch/arm64/kernel/cpufeature.c       |  34 +------
 arch/arm64/kernel/entry.S            |  19 +---
 arch/arm64/kernel/head.S             |  51 ++++++----
 arch/arm64/kernel/process.c          |  29 ++----
 arch/arm64/kernel/proton-pack.c      |   4 +-
 arch/arm64/kernel/sdei.c             |  22 +++--
 arch/arm64/kernel/signal.c           |   3 -
 arch/arm64/kernel/sleep.S            |   2 +-
 arch/arm64/kernel/suspend.c          |   1 -
 arch/arm64/lib/clear_user.S          |   8 +-
 arch/arm64/lib/copy_from_user.S      |   8 +-
 arch/arm64/lib/copy_in_user.S        |  16 ++--
 arch/arm64/lib/copy_to_user.S        |   8 +-
 arch/arm64/lib/mte.S                 |   6 +-
 arch/arm64/lib/uaccess_flushcache.c  |   4 +-
 arch/arm64/mm/fault.c                |   5 -
 arch/arm64/mm/proc.S                 |   2 +-
 drivers/firmware/arm_sdei.c          |  14 ---
 32 files changed, 242 insertions(+), 362 deletions(-)

-- 
2.11.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 38+ messages in thread

* [PATCHv4 01/17] arm64: ensure ERET from kthread is illegal
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 02/17] arm64: add C wrappers for SET_PSTATE_*() Mark Rutland
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

For consistency, all tasks have a pt_regs reserved at the highest
portion of their task stack. Among other things, this ensures that a
task's SP is always pointing within its stack rather than pointing
immediately past the end.

While it is never legitimate to ERET from a kthread, we take pains to
initialize pt_regs for kthreads as if this were legitimate. Since such a
return should never happen, the effects of an erroneous return are rarely
tested.

Let's simplify things by initializing a kthread's pt_regs such that an
ERET is caught as an illegal exception return, and removing the explicit
initialization of other exception context. Note that as
spectre_v4_enable_task_mitigation() only manipulates the PSTATE within
the unused regs this is safe to remove.

As user tasks will have their exception context initialized via
start_thread() or start_compat_thread(), this should only impact cases
where something has gone very wrong and we'd like that to be clearly
indicated.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/process.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 4784011cecac..855137daafbf 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -422,16 +422,15 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 		if (clone_flags & CLONE_SETTLS)
 			p->thread.uw.tp_value = tls;
 	} else {
+		/*
+		 * A kthread has no context to ERET to, so ensure any buggy
+		 * ERET is treated as an illegal exception return.
+		 *
+		 * When a user task is created from a kthread, childregs will
+		 * be initialized by start_thread() or start_compat_thread().
+		 */
 		memset(childregs, 0, sizeof(struct pt_regs));
-		childregs->pstate = PSR_MODE_EL1h;
-		if (IS_ENABLED(CONFIG_ARM64_UAO) &&
-		    cpus_have_const_cap(ARM64_HAS_UAO))
-			childregs->pstate |= PSR_UAO_BIT;
-
-		spectre_v4_enable_task_mitigation(p);
-
-		if (system_uses_irq_prio_masking())
-			childregs->pmr_save = GIC_PRIO_IRQON;
+		childregs->pstate = PSR_MODE_EL1h | PSR_IL_BIT;
 
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
-- 
2.11.0



* [PATCHv4 02/17] arm64: add C wrappers for SET_PSTATE_*()
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 01/17] arm64: ensure ERET from kthread is illegal Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 03/17] arm64: head.S: rename el2_setup -> init_kernel_el Mark Rutland
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

To make callsites easier to read, add trivial C wrappers for the
SET_PSTATE_*() helpers, and convert trivial uses over to these. The new
wrappers will be used further in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 4 ++++
 arch/arm64/kernel/cpufeature.c  | 2 +-
 arch/arm64/kernel/proton-pack.c | 4 ++--
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 174817ba119c..24160f7410e8 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -98,6 +98,10 @@
 #define SET_PSTATE_SSBS(x)		__emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift))
 #define SET_PSTATE_TCO(x)		__emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift))
 
+#define set_pstate_pan(x)		asm volatile(SET_PSTATE_PAN(x))
+#define set_pstate_uao(x)		asm volatile(SET_PSTATE_UAO(x))
+#define set_pstate_ssbs(x)		asm volatile(SET_PSTATE_SSBS(x))
+
 #define __SYS_BARRIER_INSN(CRm, op2, Rt) \
 	__emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index dcc165b3fc04..4ff0369bd0b3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1598,7 +1598,7 @@ static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 	WARN_ON_ONCE(in_interrupt());
 
 	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPAN, 0);
-	asm(SET_PSTATE_PAN(1));
+	set_pstate_pan(1);
 }
 #endif /* CONFIG_ARM64_PAN */
 
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index c18eb7d41274..e49ed531ad5c 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -537,12 +537,12 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
 
 	if (spectre_v4_mitigations_off()) {
 		sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS);
-		asm volatile(SET_PSTATE_SSBS(1));
+		set_pstate_ssbs(1);
 		return SPECTRE_VULNERABLE;
 	}
 
 	/* SCTLR_EL1.DSSBS was initialised to 0 during boot */
-	asm volatile(SET_PSTATE_SSBS(0));
+	set_pstate_ssbs(0);
 	return SPECTRE_MITIGATED;
 }
 
-- 
2.11.0



* [PATCHv4 03/17] arm64: head.S: rename el2_setup -> init_kernel_el
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 01/17] arm64: ensure ERET from kthread is illegal Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 02/17] arm64: add C wrappers for SET_PSTATE_*() Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 04/17] arm64: head.S: cleanup SCTLR_ELx initialization Mark Rutland
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

For a while now el2_setup has performed some basic initialization of EL1
even when the kernel is booted at EL1, so the name is a little misleading.
Further, some comments are stale, as with VHE the CPU is not dropped to EL1.

To clarify things, rename el2_setup to init_kernel_el, and update
comments to be clearer as to the function's purpose.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/head.S  | 15 ++++++++-------
 arch/arm64/kernel/sleep.S |  2 +-
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index d8d9caf02834..5a31086e8e85 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -104,7 +104,7 @@ pe_header:
 	 */
 SYM_CODE_START(primary_entry)
 	bl	preserve_boot_args
-	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
+	bl	init_kernel_el			// w0=cpu_boot_mode
 	adrp	x23, __PHYS_OFFSET
 	and	x23, x23, MIN_KIMG_ALIGN - 1	// KASLR offset, defaults to 0
 	bl	set_cpu_boot_mode_flag
@@ -482,13 +482,14 @@ EXPORT_SYMBOL(kimage_vaddr)
 	.section ".idmap.text","awx"
 
 /*
- * If we're fortunate enough to boot at EL2, ensure that the world is
- * sane before dropping to EL1.
+ * Starting from EL2 or EL1, configure the CPU to execute at the highest
+ * reachable EL supported by the kernel in a chosen default state. If dropping
+ * from EL2 to EL1, configure EL2 before configuring EL1.
  *
  * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in w0 if
  * booted in EL1 or EL2 respectively.
  */
-SYM_FUNC_START(el2_setup)
+SYM_FUNC_START(init_kernel_el)
 	msr	SPsel, #1			// We want to use SP_EL{1,2}
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
@@ -649,7 +650,7 @@ SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 	msr	elr_el2, lr
 	mov	w0, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
 	eret
-SYM_FUNC_END(el2_setup)
+SYM_FUNC_END(init_kernel_el)
 
 /*
  * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
@@ -699,7 +700,7 @@ SYM_DATA_END(__early_cpu_boot_status)
 	 * cores are held until we're ready for them to initialise.
 	 */
 SYM_FUNC_START(secondary_holding_pen)
-	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
+	bl	init_kernel_el			// w0=cpu_boot_mode
 	bl	set_cpu_boot_mode_flag
 	mrs	x0, mpidr_el1
 	mov_q	x1, MPIDR_HWID_BITMASK
@@ -717,7 +718,7 @@ SYM_FUNC_END(secondary_holding_pen)
 	 * be used where CPUs are brought online dynamically by the kernel.
 	 */
 SYM_FUNC_START(secondary_entry)
-	bl	el2_setup			// Drop to EL1
+	bl	init_kernel_el			// w0=cpu_boot_mode
 	bl	set_cpu_boot_mode_flag
 	b	secondary_startup
 SYM_FUNC_END(secondary_entry)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ba40d57757d6..4be7f7eed875 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -99,7 +99,7 @@ SYM_FUNC_END(__cpu_suspend_enter)
 
 	.pushsection ".idmap.text", "awx"
 SYM_CODE_START(cpu_resume)
-	bl	el2_setup		// if in EL2 drop to EL1 cleanly
+	bl	init_kernel_el
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adrp	x1, swapper_pg_dir
-- 
2.11.0



* [PATCHv4 04/17] arm64: head.S: cleanup SCTLR_ELx initialization
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (2 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 03/17] arm64: head.S: rename el2_setup -> init_kernel_el Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 05/17] arm64: head.S: always initialize PSTATE Mark Rutland
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Let's make SCTLR_ELx initialization a bit clearer by using meaningful
names for the initialization values, following the same scheme for
SCTLR_EL1 and SCTLR_EL2.

These definitions will be used more widely in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 18 ++++++++++++------
 arch/arm64/kernel/head.S        |  6 +++---
 arch/arm64/mm/proc.S            |  2 +-
 3 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 24160f7410e8..117bf97666ee 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -582,6 +582,9 @@
 #define ENDIAN_SET_EL2		0
 #endif
 
+#define INIT_SCTLR_EL2_MMU_OFF \
+	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
+
 /* SCTLR_EL1 specific flags. */
 #define SCTLR_EL1_ATA0		(BIT(42))
 
@@ -615,12 +618,15 @@
 #define ENDIAN_SET_EL1		0
 #endif
 
-#define SCTLR_EL1_SET	(SCTLR_ELx_M    | SCTLR_ELx_C    | SCTLR_ELx_SA   |\
-			 SCTLR_EL1_SA0  | SCTLR_EL1_SED  | SCTLR_ELx_I    |\
-			 SCTLR_EL1_DZE  | SCTLR_EL1_UCT                   |\
-			 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN |\
-			 SCTLR_ELx_ITFSB| SCTLR_ELx_ATA  | SCTLR_EL1_ATA0 |\
-			 ENDIAN_SET_EL1 | SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
+#define INIT_SCTLR_EL1_MMU_OFF \
+	(ENDIAN_SET_EL1 | SCTLR_EL1_RES1)
+
+#define INIT_SCTLR_EL1_MMU_ON \
+	(SCTLR_ELx_M    | SCTLR_ELx_C    | SCTLR_ELx_SA   | SCTLR_EL1_SA0   | \
+	 SCTLR_EL1_SED  | SCTLR_ELx_I    | SCTLR_EL1_DZE  | SCTLR_EL1_UCT   | \
+	 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \
+	 SCTLR_ELx_ATA  | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI   | \
+	 SCTLR_EL1_RES1)
 
 /* MAIR_ELx memory attributes (used by Linux) */
 #define MAIR_ATTR_DEVICE_nGnRnE		UL(0x00)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5a31086e8e85..4d113a4ef929 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -494,13 +494,13 @@ SYM_FUNC_START(init_kernel_el)
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
 	b.eq	1f
-	mov_q	x0, (SCTLR_EL1_RES1 | ENDIAN_SET_EL1)
+	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	msr	sctlr_el1, x0
 	mov	w0, #BOOT_CPU_MODE_EL1		// This cpu booted in EL1
 	isb
 	ret
 
-1:	mov_q	x0, (SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
+1:	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
 	msr	sctlr_el2, x0
 
 #ifdef CONFIG_ARM64_VHE
@@ -621,7 +621,7 @@ SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 	 * requires no configuration, and all non-hyp-specific EL2 setup
 	 * will be done via the _EL1 system register aliases in __cpu_setup.
 	 */
-	mov_q	x0, (SCTLR_EL1_RES1 | ENDIAN_SET_EL1)
+	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	msr	sctlr_el1, x0
 
 	/* Coprocessor traps. */
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 23c326a06b2d..29f064e117d9 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -489,6 +489,6 @@ SYM_FUNC_START(__cpu_setup)
 	/*
 	 * Prepare SCTLR
 	 */
-	mov_q	x0, SCTLR_EL1_SET
+	mov_q	x0, INIT_SCTLR_EL1_MMU_ON
 	ret					// return to head.S
 SYM_FUNC_END(__cpu_setup)
-- 
2.11.0



* [PATCHv4 05/17] arm64: head.S: always initialize PSTATE
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (3 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 04/17] arm64: head.S: cleanup SCTLR_ELx initialization Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

As with SCTLR_ELx and other control registers, some PSTATE bits are
UNKNOWN out-of-reset, and we may not be able to rely on hardware or
firmware to initialize them to our liking prior to entry to the kernel,
e.g. in the primary/secondary boot paths and return from idle/suspend.

It would be more robust (and easier to reason about) if we consistently
initialized PSTATE to a default value, as we do with control registers.
This will ensure that the kernel is not adversely affected by bits it is
not aware of, e.g. when support for a feature such as PAN/UAO is
disabled.

This patch ensures that PSTATE is consistently initialized at boot time
via an ERET. This is not intended to relax the existing requirements
(e.g. DAIF bits must still be set prior to entering the kernel). For
features detected dynamically (which may require system-wide support),
it is still necessary to subsequently modify PSTATE.

As ERET is not always a Context Synchronization Event, an ISB is placed
before each exception return to ensure updates to control registers have
taken effect. This handles the kernel being entered with SCTLR_ELx.EOS
clear (or any future control bits being in an UNKNOWN state).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/ptrace.h |  5 +++++
 arch/arm64/kernel/head.S        | 32 +++++++++++++++++++++-----------
 2 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 997cf8c8cd52..2547d94634be 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -16,6 +16,11 @@
 #define CurrentEL_EL1		(1 << 2)
 #define CurrentEL_EL2		(2 << 2)
 
+#define INIT_PSTATE_EL1 \
+	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL1h)
+#define INIT_PSTATE_EL2 \
+	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL2h)
+
 /*
  * PMR values used to mask/unmask interrupts.
  *
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4d113a4ef929..0b145bca1b0e 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -486,21 +486,29 @@ EXPORT_SYMBOL(kimage_vaddr)
  * reachable EL supported by the kernel in a chosen default state. If dropping
  * from EL2 to EL1, configure EL2 before configuring EL1.
  *
+ * Since we cannot always rely on ERET synchronizing writes to sysregs (e.g. if
+ * SCTLR_ELx.EOS is clear), we place an ISB prior to ERET.
+ *
  * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in w0 if
  * booted in EL1 or EL2 respectively.
  */
 SYM_FUNC_START(init_kernel_el)
-	msr	SPsel, #1			// We want to use SP_EL{1,2}
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
-	b.eq	1f
+	b.eq	init_el2
+
+SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	msr	sctlr_el1, x0
-	mov	w0, #BOOT_CPU_MODE_EL1		// This cpu booted in EL1
 	isb
-	ret
+	mov_q	x0, INIT_PSTATE_EL1
+	msr	spsr_el1, x0
+	msr	elr_el1, lr
+	mov	w0, #BOOT_CPU_MODE_EL1
+	eret
 
-1:	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
+SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
+	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
 	msr	sctlr_el2, x0
 
 #ifdef CONFIG_ARM64_VHE
@@ -609,9 +617,12 @@ set_hcr:
 
 	cbz	x2, install_el2_stub
 
-	mov	w0, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
 	isb
-	ret
+	mov_q	x0, INIT_PSTATE_EL2
+	msr	spsr_el2, x0
+	msr	elr_el2, lr
+	mov	w0, #BOOT_CPU_MODE_EL2
+	eret
 
 SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 	/*
@@ -643,12 +654,11 @@ SYM_INNER_LABEL(install_el2_stub, SYM_L_LOCAL)
 7:	adr_l	x0, __hyp_stub_vectors
 	msr	vbar_el2, x0
 
-	/* spsr */
-	mov	x0, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
-		      PSR_MODE_EL1h)
+	isb
+	mov	x0, #INIT_PSTATE_EL1
 	msr	spsr_el2, x0
 	msr	elr_el2, lr
-	mov	w0, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
+	mov	w0, #BOOT_CPU_MODE_EL2
 	eret
 SYM_FUNC_END(init_kernel_el)
 
-- 
2.11.0



* [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (4 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 05/17] arm64: head.S: always initialize PSTATE Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-16 17:42   ` Catalin Marinas
  2020-11-26 18:42   ` James Morse
  2020-11-13 12:49 ` [PATCHv4 07/17] arm64: sdei: explicitly simulate PAN/UAO entry Mark Rutland
                   ` (11 subsequent siblings)
  17 siblings, 2 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

The SDEI support code is split across arch/arm64/ and drivers/firmware/;
largely, the arch-specific portions are under arch/arm64/, and the
management logic is under drivers/firmware/. However, exception entry
fixups are currently under drivers/firmware/.

Let's move the exception entry fixups under arch/arm64/. This
de-clutters the management logic, and puts all the arch-specific
portions in one place. Doing this also allows the fixups to be applied
earlier, so things like PAN and UAO will be in a known good state before
we run other logic. This will also make subsequent refactoring easier.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/sdei.c    | 18 ++++++++++++------
 drivers/firmware/arm_sdei.c | 14 --------------
 2 files changed, 12 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 7689f2031c0c..4a5f24602aa0 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -178,12 +178,6 @@ static __kprobes unsigned long _sdei_handler(struct pt_regs *regs,
 		sdei_api_event_context(i, &regs->regs[i]);
 	}
 
-	/*
-	 * We didn't take an exception to get here, set PAN. UAO will be cleared
-	 * by sdei_event_handler()s force_uaccess_begin() call.
-	 */
-	__uaccess_enable_hw_pan();
-
 	err = sdei_event_handler(regs, arg);
 	if (err)
 		return SDEI_EV_FAILED;
@@ -227,6 +221,16 @@ asmlinkage __kprobes notrace unsigned long
 __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 {
 	unsigned long ret;
+	mm_segment_t orig_addr_limit;
+
+	/*
+	 * We didn't take an exception to get here, so the HW hasn't set PAN or
+	 * cleared UAO, and the exception entry code hasn't reset addr_limit.
+	 * Set PAN, then use force_uaccess_begin() to clear UAO and reset
+	 * addr_limit.
+	 */
+	__uaccess_enable_hw_pan();
+	orig_addr_limit = force_uaccess_begin();
 
 	nmi_enter();
 
@@ -234,5 +238,7 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 
 	nmi_exit();
 
+	force_uaccess_end(orig_addr_limit);
+
 	return ret;
 }
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
index 840754dcc6ca..a7e762c352f9 100644
--- a/drivers/firmware/arm_sdei.c
+++ b/drivers/firmware/arm_sdei.c
@@ -31,7 +31,6 @@
 #include <linux/slab.h>
 #include <linux/smp.h>
 #include <linux/spinlock.h>
-#include <linux/uaccess.h>
 
 /*
  * The call to use to reach the firmware.
@@ -1092,26 +1091,13 @@ int sdei_event_handler(struct pt_regs *regs,
 		       struct sdei_registered_event *arg)
 {
 	int err;
-	mm_segment_t orig_addr_limit;
 	u32 event_num = arg->event_num;
 
-	/*
-	 * Save restore 'fs'.
-	 * The architecture's entry code save/restores 'fs' when taking an
-	 * exception from the kernel. This ensures addr_limit isn't inherited
-	 * if you interrupted something that allowed the uaccess routines to
-	 * access kernel memory.
-	 * Do the same here because this doesn't come via the same entry code.
-	*/
-	orig_addr_limit = force_uaccess_begin();
-
 	err = arg->callback(event_num, regs, arg->callback_arg);
 	if (err)
 		pr_err_ratelimited("event %u on CPU %u failed with error: %d\n",
 				   event_num, smp_processor_id(), err);
 
-	force_uaccess_end(orig_addr_limit);
-
 	return err;
 }
 NOKPROBE_SYMBOL(sdei_event_handler);
-- 
2.11.0



* [PATCHv4 07/17] arm64: sdei: explicitly simulate PAN/UAO entry
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (5 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-26 18:42   ` James Morse
  2020-11-13 12:49 ` [PATCHv4 08/17] arm64: uaccess: move uao_* alternatives to asm-uaccess.h Mark Rutland
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

SDEI enters the kernel with a non-architectural exception which does not
manipulate PSTATE bits (e.g. PAN, UAO) in the same way as architectural
exceptions. We currently fix this up with a combination of
__uaccess_enable_hw_pan() and force_uaccess_begin(), but this has
a few problems:

* When set_fs() is removed, force_uaccess_begin() will have no HW
  side-effects, and UAO will need to be reset elsewhere.

* Kernels built without support for PAN or UAO will not reset these bits
  upon SDEI entry, and may inherit the values used by a VM, leading to
  unexpected behaviour.

* Kernels built *with* support for PAN or UAO, when run on systems with
  mismatched support across CPUs, will not reset these bits upon SDEI
  entry, and may inherit the values used by a VM, leading to unexpected
  behaviour.

To deal with all of these, let's always explicitly reset the PAN and UAO
bits when an SDEI event is delivered to the kernel. As above, we must do
so even when the kernel has chosen to not use PAN/UAO, or was not built
with support for PAN/UAO generally.

The existing system_uses_ttbr0_pan() is redefined in terms of
system_uses_hw_pan() both for clarity and as a minor optimization when
HW PAN is not selected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h | 22 +++++++++++++++++++++-
 arch/arm64/kernel/sdei.c            | 19 ++++++++++++++-----
 2 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 97244d4feca9..98aab6138443 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -667,10 +667,16 @@ static __always_inline bool system_supports_fpsimd(void)
 	return !cpus_have_const_cap(ARM64_HAS_NO_FPSIMD);
 }
 
+static inline bool system_uses_hw_pan(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_PAN) &&
+		cpus_have_const_cap(ARM64_HAS_PAN);
+}
+
 static inline bool system_uses_ttbr0_pan(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
-		!cpus_have_const_cap(ARM64_HAS_PAN);
+		!system_uses_hw_pan();
 }
 
 static __always_inline bool system_supports_sve(void)
@@ -762,6 +768,20 @@ static inline bool cpu_has_hw_af(void)
 						ID_AA64MMFR1_HADBS_SHIFT);
 }
 
+static inline bool cpu_has_pan(void)
+{
+	u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
+	return cpuid_feature_extract_unsigned_field(mmfr1,
+						    ID_AA64MMFR1_PAN_SHIFT);
+}
+
+static inline bool cpu_has_uao(void)
+{
+	u64 mmfr2 = read_cpuid(ID_AA64MMFR2_EL1);
+	return cpuid_feature_extract_unsigned_field(mmfr2,
+						    ID_AA64MMFR2_UAO_SHIFT);
+}
+
 #ifdef CONFIG_ARM64_AMU_EXTN
 /* Check whether the cpu supports the Activity Monitors Unit (AMU) */
 extern bool cpu_has_amu_feat(int cpu);
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 4a5f24602aa0..908d7be70eac 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -216,6 +216,16 @@ static __kprobes unsigned long _sdei_handler(struct pt_regs *regs,
 	return vbar + 0x480;
 }
 
+static void __kprobes notrace __sdei_pstate_entry(void)
+{
+	if (system_uses_hw_pan())
+		set_pstate_pan(1);
+	else if (cpu_has_pan())
+		set_pstate_pan(0);
+
+	if (cpu_has_uao())
+		set_pstate_uao(0);
+}
 
 asmlinkage __kprobes notrace unsigned long
 __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
@@ -224,12 +234,11 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 	mm_segment_t orig_addr_limit;
 
 	/*
-	 * We didn't take an exception to get here, so the HW hasn't set PAN or
-	 * cleared UAO, and the exception entry code hasn't reset addr_limit.
-	 * Set PAN, then use force_uaccess_begin() to clear UAO and reset
-	 * addr_limit.
+	 * We didn't take an exception to get here, so the HW hasn't
+	 * set/cleared bits in PSTATE that we may rely on. Initialize PAN/UAO,
+	 * then use force_uaccess_begin() to reset addr_limit.
 	 */
-	__uaccess_enable_hw_pan();
+	__sdei_pstate_entry();
 	orig_addr_limit = force_uaccess_begin();
 
 	nmi_enter();
-- 
2.11.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 08/17] arm64: uaccess: move uao_* alternatives to asm-uaccess.h
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (6 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 07/17] arm64: sdei: explicitly simulate PAN/UAO entry Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-14  9:22   ` Christoph Hellwig
  2020-11-13 12:49 ` [PATCHv4 09/17] arm64: uaccess: rename privileged uaccess routines Mark Rutland
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

The uao_* alternative asm macros are only used by the uaccess assembly
routines in arch/arm64/lib/, where they are included indirectly via
asm-uaccess.h. Since they're specific to the uaccess assembly (and will
lose the alternatives in subsequent patches), let's move them into
asm-uaccess.h.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/alternative.h | 59 ------------------------------------
 arch/arm64/include/asm/asm-uaccess.h | 59 ++++++++++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                 |  2 +-
 3 files changed, 60 insertions(+), 60 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 619db9b4c9d5..5d6b89d26de4 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -224,65 +224,6 @@ alternative_endif
 	_asm_extable 9999b, \label
 .endm
 
-/*
- * Generate the assembly for UAO alternatives with exception table entries.
- * This is complicated as there is no post-increment or pair versions of the
- * unprivileged instructions, and USER() only works for single instructions.
- */
-#ifdef CONFIG_ARM64_UAO
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			ldtr	\reg1, [\addr];
-			ldtr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-		_asm_extable	8889b,\l;
-	.endm
-
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			stp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			sttr	\reg1, [\addr];
-			sttr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-		_asm_extable	8889b,\l;
-	.endm
-
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			\inst	\reg, [\addr], \post_inc;
-			nop;
-		alternative_else
-			\alt_inst	\reg, [\addr];
-			add		\addr, \addr, \post_inc;
-		alternative_endif
-
-		_asm_extable	8888b,\l;
-	.endm
-#else
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		USER(\l, \inst \reg, [\addr], \post_inc)
-	.endm
-#endif
-
 #endif  /*  __ASSEMBLY__  */
 
 /*
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index f68a0e64482a..479222ab82d4 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -58,4 +58,63 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+/*
+ * Generate the assembly for UAO alternatives with exception table entries.
+ * This is complicated as there is no post-increment or pair versions of the
+ * unprivileged instructions, and USER() only works for single instructions.
+ */
+#ifdef CONFIG_ARM64_UAO
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			ldtr	\reg1, [\addr];
+			ldtr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+		_asm_extable	8889b,\l;
+	.endm
+
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			stp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			sttr	\reg1, [\addr];
+			sttr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+		_asm_extable	8889b,\l;
+	.endm
+
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			\inst	\reg, [\addr], \post_inc;
+			nop;
+		alternative_else
+			\alt_inst	\reg, [\addr];
+			add		\addr, \addr, \post_inc;
+		alternative_endif
+
+		_asm_extable	8888b,\l;
+	.endm
+#else
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		USER(\l, \inst \reg, [\addr], \post_inc)
+	.endm
+#endif
+
 #endif
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 03ca6d8b8670..cceed41bba15 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -4,7 +4,7 @@
  */
 #include <linux/linkage.h>
 
-#include <asm/alternative.h>
+#include <asm/asm-uaccess.h>
 #include <asm/assembler.h>
 #include <asm/mte.h>
 #include <asm/page.h>
-- 
2.11.0



* [PATCHv4 09/17] arm64: uaccess: rename privileged uaccess routines
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (7 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 08/17] arm64: uaccess: move uao_* alternatives to asm-uaccess.h Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 10/17] arm64: uaccess: simplify __copy_user_flushcache() Mark Rutland
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

We currently have many uaccess_*{enable,disable}*() variants, which
subsequent patches will cut down as part of removing set_fs() and
friends. Once this simplification is made, most uaccess routines will
only need to ensure that the user page tables are mapped in TTBR0, as is
currently dealt with by uaccess_ttbr0_{enable,disable}().

The existing uaccess_{enable,disable}() routines ensure that user page
tables are mapped in TTBR0, and also disable PAN protections, which is
necessary to use atomics on user memory but also permits unrelated
privileged accesses to user memory.

As a preparatory step, let's rename uaccess_{enable,disable}() to
uaccess_{enable,disable}_privileged(), highlighting this caveat and
discouraging wider misuse. Subsequent patches can reuse the
uaccess_{enable,disable}() naming for the common case of ensuring the
user page tables are mapped in TTBR0.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/futex.h       | 8 ++++----
 arch/arm64/include/asm/uaccess.h     | 4 ++--
 arch/arm64/kernel/armv8_deprecated.c | 4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 97f6a63810ec..8e41faa37c69 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -16,7 +16,7 @@
 do {									\
 	unsigned int loops = FUTEX_MAX_LOOPS;				\
 									\
-	uaccess_enable();						\
+	uaccess_enable_privileged();					\
 	asm volatile(							\
 "	prfm	pstl1strm, %2\n"					\
 "1:	ldxr	%w1, %2\n"						\
@@ -39,7 +39,7 @@ do {									\
 	  "+r" (loops)							\
 	: "r" (oparg), "Ir" (-EFAULT), "Ir" (-EAGAIN)			\
 	: "memory");							\
-	uaccess_disable();						\
+	uaccess_disable_privileged();					\
 } while (0)
 
 static inline int
@@ -95,7 +95,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
 		return -EFAULT;
 
 	uaddr = __uaccess_mask_ptr(_uaddr);
-	uaccess_enable();
+	uaccess_enable_privileged();
 	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
 "	prfm	pstl1strm, %2\n"
 "1:	ldxr	%w1, %2\n"
@@ -118,7 +118,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
 	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
 	: "r" (oldval), "r" (newval), "Ir" (-EFAULT), "Ir" (-EAGAIN)
 	: "memory");
-	uaccess_disable();
+	uaccess_disable_privileged();
 
 	if (!ret)
 		*uval = val;
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 991dd5f031e4..d6a4e496ebc6 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -200,12 +200,12 @@ do {									\
 				CONFIG_ARM64_PAN));			\
 } while (0)
 
-static inline void uaccess_disable(void)
+static inline void uaccess_disable_privileged(void)
 {
 	__uaccess_disable(ARM64_HAS_PAN);
 }
 
-static inline void uaccess_enable(void)
+static inline void uaccess_enable_privileged(void)
 {
 	__uaccess_enable(ARM64_HAS_PAN);
 }
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 7364de008bab..0e86e8b9cedd 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -277,7 +277,7 @@ static void __init register_insn_emulation_sysctl(void)
 
 #define __user_swpX_asm(data, addr, res, temp, temp2, B)	\
 do {								\
-	uaccess_enable();					\
+	uaccess_enable_privileged();				\
 	__asm__ __volatile__(					\
 	"	mov		%w3, %w7\n"			\
 	"0:	ldxr"B"		%w2, [%4]\n"			\
@@ -302,7 +302,7 @@ do {								\
 	  "i" (-EFAULT),					\
 	  "i" (__SWP_LL_SC_LOOPS)				\
 	: "memory");						\
-	uaccess_disable();					\
+	uaccess_disable_privileged();				\
 } while (0)
 
 #define __user_swp_asm(data, addr, res, temp, temp2) \
-- 
2.11.0



* [PATCHv4 10/17] arm64: uaccess: simplify __copy_user_flushcache()
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (8 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 09/17] arm64: uaccess: rename privileged uaccess routines Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 11/17] arm64: uaccess: refactor __{get,put}_user Mark Rutland
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and
doesn't use uaccess_mask_ptr() on the user address. Let's have it call
raw_copy_from_user(), which is both a simplification and ensures that
user pointers are masked under speculation.

There should be no functional change as a result of this patch.
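
The speculation-masking that raw_copy_from_user() gains here comes from
__uaccess_mask_ptr(), which forces an out-of-range pointer to NULL without
a conditional branch the CPU could speculate past. A portable user-space
sketch of the idea (the limit value and function name are illustrative,
not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* illustrative user address limit, not the kernel's TASK_SIZE */
#define FAKE_LIMIT ((uintptr_t)1 << 52)

/*
 * Branchless pointer sanitisation in the spirit of arm64's
 * __uaccess_mask_ptr(): the mask is all-ones for an in-range pointer
 * and zero otherwise, so a bad pointer becomes NULL data-dependently.
 */
static uintptr_t mask_user_ptr(uintptr_t ptr)
{
	uintptr_t mask = (uintptr_t)0 - (ptr < FAKE_LIMIT);

	return ptr & mask;
}
```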

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/lib/uaccess_flushcache.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index bfa30b75b2b8..c83bb5a4aad2 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -30,9 +30,7 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 {
 	unsigned long rc;
 
-	uaccess_enable_not_uao();
-	rc = __arch_copy_from_user(to, from, n);
-	uaccess_disable_not_uao();
+	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
 	__clean_dcache_area_pop(to, n - rc);
-- 
2.11.0



* [PATCHv4 11/17] arm64: uaccess: refactor __{get,put}_user
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (9 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 10/17] arm64: uaccess: simplify __copy_user_flushcache() Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 12/17] arm64: uaccess: split user/kernel routines Mark Rutland
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

As a step towards implementing __{get,put}_kernel_nofault(), this patch
splits most user-memory specific logic out of __{get,put}_user(), with
the memory access and fault handling in new __{raw_get,put}_mem()
helpers.

For now the LDR/LDTR patching is left within the *get_mem() helpers, and
will be removed in a subsequent patch.

There should be no functional change as a result of this patch.
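
The layering this split produces can be modelled in a few lines of plain
user-space C (names mirror the kernel's but this is a toy, not the real
implementation): the raw accessor only moves the data, and only the
user-facing wrapper brackets it with uaccess state changes.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool uaccess_on;	/* stands in for the HW uaccess state */

static void raw_get_mem(void *dst, const void *src, size_t n)
{
	memcpy(dst, src, n);		/* just the access, no state changes */
}

static void get_user_val(void *dst, const void *src, size_t n)
{
	uaccess_on = true;		/* cf. uaccess_enable_not_uao() */
	raw_get_mem(dst, src, n);
	uaccess_on = false;		/* cf. uaccess_disable_not_uao() */
}

static void get_kernel_nofault_val(void *dst, const void *src, size_t n)
{
	raw_get_mem(dst, src, n);	/* kernel access: no uaccess toggling */
}
```

This is why the subsequent __{get,put}_kernel_nofault() helpers can reuse
__raw_{get,put}_mem() directly.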

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/uaccess.h | 44 ++++++++++++++++++++++++----------------
 1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index d6a4e496ebc6..743f209d3fab 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -253,7 +253,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
  * The "__xxx_error" versions set the third argument to -EFAULT if an error
  * occurs, and leave it unchanged on success.
  */
-#define __get_user_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __get_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
 	asm volatile(							\
 	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
 			alt_instr " " reg "1, [%2]\n", feature)		\
@@ -268,35 +268,40 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr), "i" (-EFAULT))
 
-#define __raw_get_user(x, ptr, err)					\
+#define __raw_get_mem(x, ptr, err)					\
 do {									\
 	unsigned long __gu_val;						\
-	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__get_user_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),  \
+		__get_mem_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 2:								\
-		__get_user_asm("ldrh", "ldtrh", "%w", __gu_val, (ptr),  \
+		__get_mem_asm("ldrh", "ldtrh", "%w", __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 4:								\
-		__get_user_asm("ldr", "ldtr", "%w", __gu_val, (ptr),	\
+		__get_mem_asm("ldr", "ldtr", "%w", __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__get_user_asm("ldr", "ldtr", "%x",  __gu_val, (ptr),	\
+		__get_mem_asm("ldr", "ldtr", "%x",  __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
 	}								\
-	uaccess_disable_not_uao();					\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 } while (0)
 
+#define __raw_get_user(x, ptr, err)					\
+do {									\
+	__chk_user_ptr(ptr);						\
+	uaccess_enable_not_uao();					\
+	__raw_get_mem(x, ptr, err);					\
+	uaccess_disable_not_uao();					\
+} while (0)
+
 #define __get_user_error(x, ptr, err)					\
 do {									\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
@@ -318,7 +323,7 @@ do {									\
 
 #define get_user	__get_user
 
-#define __put_user_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __put_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
 	asm volatile(							\
 	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
 			alt_instr " " reg "1, [%2]\n", feature)		\
@@ -332,31 +337,36 @@ do {									\
 	: "+r" (err)							\
 	: "r" (x), "r" (addr), "i" (-EFAULT))
 
-#define __raw_put_user(x, ptr, err)					\
+#define __raw_put_mem(x, ptr, err)					\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
-	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__put_user_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
+		__put_mem_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 2:								\
-		__put_user_asm("strh", "sttrh", "%w", __pu_val, (ptr),	\
+		__put_mem_asm("strh", "sttrh", "%w", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 4:								\
-		__put_user_asm("str", "sttr", "%w", __pu_val, (ptr),	\
+		__put_mem_asm("str", "sttr", "%w", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__put_user_asm("str", "sttr", "%x", __pu_val, (ptr),	\
+		__put_mem_asm("str", "sttr", "%x", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
 	}								\
+} while (0)
+
+#define __raw_put_user(x, ptr, err)					\
+do {									\
+	__chk_user_ptr(ptr);						\
+	uaccess_enable_not_uao();					\
+	__raw_put_mem(x, ptr, err);					\
 	uaccess_disable_not_uao();					\
 } while (0)
 
-- 
2.11.0



* [PATCHv4 12/17] arm64: uaccess: split user/kernel routines
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (10 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 11/17] arm64: uaccess: refactor __{get,put}_user Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-26 18:42   ` James Morse
  2020-11-13 12:49 ` [PATCHv4 13/17] arm64: uaccess: cleanup macro naming Mark Rutland
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

This patch separates arm64's user and kernel memory access primitives
into distinct routines, adding new __{get,put}_kernel_nofault() helpers
to access kernel memory, upon which core code builds larger copy
routines.

The kernel access routines (using LDR/STR) are not affected by PAN (when
legitimately accessing kernel memory), nor are they affected by UAO.
Switching to KERNEL_DS may set UAO, but this does not adversely affect
the kernel access routines.

The user access routines (using LDTR/STTR) are not affected by PAN (when
legitimately accessing user memory), but are affected by UAO. As these
are only legitimate to use under USER_DS with UAO clear, this should not
be problematic.

Routines performing atomics to user memory (futex and deprecated
instruction emulation) still need to transiently clear PAN, and these
are left as-is. These are never used on kernel memory.

Subsequent patches will refactor the uaccess helpers to remove redundant
code, and will also remove the redundant PAN/UAO manipulation.
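
The PAN/UAO interactions summarised in the paragraphs above can be encoded
as a small truth-table model (a simplification of the ARMv8.1-PAN and
ARMv8.2-UAO rules, for legitimate kernel-mode accesses only; the function
is illustrative, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Rules modelled:
 *  - privileged accesses (LDR/STR) to kernel memory always succeed, and
 *    to user memory are blocked while PSTATE.PAN is set;
 *  - unprivileged accesses (LDTR/STTR) behave as user accesses, unless
 *    PSTATE.UAO is set, in which case they act privileged.
 */
static bool access_ok_model(bool unpriv_insn, bool user_mem,
			    bool pan, bool uao)
{
	bool acts_privileged = !unpriv_insn || uao;

	if (!acts_privileged)
		return user_mem;	/* user-style access: user memory only */
	if (user_mem)
		return !pan;		/* privileged access to user memory */
	return true;			/* privileged access to kernel memory */
}
```

The assertions below restate the commit message: LDR/STR to kernel memory
is unaffected by PAN and UAO, while LDTR/STTR to user memory works with
UAO clear even under PAN, and is only sensible while UAO is clear.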

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/asm-uaccess.h | 48 +++++----------------------
 arch/arm64/include/asm/uaccess.h     | 64 +++++++++++++++++++++---------------
 2 files changed, 47 insertions(+), 65 deletions(-)

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 479222ab82d4..046196f08988 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -59,62 +59,32 @@ alternative_else_nop_endif
 #endif
 
 /*
- * Generate the assembly for UAO alternatives with exception table entries.
+ * Generate the assembly for LDTR/STTR with exception table entries.
  * This is complicated as there is no post-increment or pair versions of the
  * unprivileged instructions, and USER() only works for single instructions.
  */
-#ifdef CONFIG_ARM64_UAO
 	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			ldtr	\reg1, [\addr];
-			ldtr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
+8888:		ldtr	\reg1, [\addr];
+8889:		ldtr	\reg2, [\addr, #8];
+		add	\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
 		_asm_extable	8889b,\l;
 	.endm
 
 	.macro uao_stp l, reg1, reg2, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			stp	\reg1, \reg2, [\addr], \post_inc;
-8889:			nop;
-			nop;
-		alternative_else
-			sttr	\reg1, [\addr];
-			sttr	\reg2, [\addr, #8];
-			add	\addr, \addr, \post_inc;
-		alternative_endif
+8888:		sttr	\reg1, [\addr];
+8889:		sttr	\reg2, [\addr, #8];
+		add	\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
 		_asm_extable	8889b,\l;
 	.endm
 
 	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		alternative_if_not ARM64_HAS_UAO
-8888:			\inst	\reg, [\addr], \post_inc;
-			nop;
-		alternative_else
-			\alt_inst	\reg, [\addr];
-			add		\addr, \addr, \post_inc;
-		alternative_endif
+8888:		\alt_inst	\reg, [\addr];
+		add		\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
 	.endm
-#else
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
-		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_stp l, reg1, reg2, addr, post_inc
-		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
-	.endm
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-		USER(\l, \inst \reg, [\addr], \post_inc)
-	.endm
-#endif
-
 #endif
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 743f209d3fab..8687ec27dd00 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -24,6 +24,8 @@
 #include <asm/memory.h>
 #include <asm/extable.h>
 
+#define HAVE_GET_KERNEL_NOFAULT
+
 #define get_fs()	(current_thread_info()->addr_limit)
 
 static inline void set_fs(mm_segment_t fs)
@@ -253,10 +255,9 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
  * The "__xxx_error" versions set the third argument to -EFAULT if an error
  * occurs, and leave it unchanged on success.
  */
-#define __get_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __get_mem_asm(ldr, reg, x, addr, err)				\
 	asm volatile(							\
-	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
-			alt_instr " " reg "1, [%2]\n", feature)		\
+	"1:	" ldr "	" reg "1, [%2]\n"				\
 	"2:\n"								\
 	"	.section .fixup, \"ax\"\n"				\
 	"	.align	2\n"						\
@@ -268,25 +269,21 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr), "i" (-EFAULT))
 
-#define __raw_get_mem(x, ptr, err)					\
+#define __raw_get_mem(ldr, x, ptr, err)					\
 do {									\
 	unsigned long __gu_val;						\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__get_mem_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr "b", "%w", __gu_val, (ptr), (err));	\
 		break;							\
 	case 2:								\
-		__get_mem_asm("ldrh", "ldtrh", "%w", __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr "h", "%w", __gu_val, (ptr), (err));	\
 		break;							\
 	case 4:								\
-		__get_mem_asm("ldr", "ldtr", "%w", __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr, "%w", __gu_val, (ptr), (err));	\
 		break;							\
 	case 8:								\
-		__get_mem_asm("ldr", "ldtr", "%x",  __gu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__get_mem_asm(ldr, "%x",  __gu_val, (ptr), (err));	\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
@@ -298,7 +295,7 @@ do {									\
 do {									\
 	__chk_user_ptr(ptr);						\
 	uaccess_enable_not_uao();					\
-	__raw_get_mem(x, ptr, err);					\
+	__raw_get_mem("ldtr", x, ptr, err);				\
 	uaccess_disable_not_uao();					\
 } while (0)
 
@@ -323,10 +320,19 @@ do {									\
 
 #define get_user	__get_user
 
-#define __put_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
+#define __get_kernel_nofault(dst, src, type, err_label)			\
+do {									\
+	int __gkn_err = 0;						\
+									\
+	__raw_get_mem("ldr", *((type *)(dst)),				\
+		      (__force type __user *)(src), __gkn_err);		\
+	if (unlikely(__gkn_err))					\
+		goto err_label;						\
+} while(0)
+
+#define __put_mem_asm(str, reg, x, addr, err)				\
 	asm volatile(							\
-	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
-			alt_instr " " reg "1, [%2]\n", feature)		\
+	"1:	" str "	" reg "1, [%2]\n"				\
 	"2:\n"								\
 	"	.section .fixup,\"ax\"\n"				\
 	"	.align	2\n"						\
@@ -337,25 +343,21 @@ do {									\
 	: "+r" (err)							\
 	: "r" (x), "r" (addr), "i" (-EFAULT))
 
-#define __raw_put_mem(x, ptr, err)					\
+#define __raw_put_mem(str, x, ptr, err)					\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__put_mem_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str "b", "%w", __pu_val, (ptr), (err));	\
 		break;							\
 	case 2:								\
-		__put_mem_asm("strh", "sttrh", "%w", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str "h", "%w", __pu_val, (ptr), (err));	\
 		break;							\
 	case 4:								\
-		__put_mem_asm("str", "sttr", "%w", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str, "%w", __pu_val, (ptr), (err));	\
 		break;							\
 	case 8:								\
-		__put_mem_asm("str", "sttr", "%x", __pu_val, (ptr),	\
-			       (err), ARM64_HAS_UAO);			\
+		__put_mem_asm(str, "%x", __pu_val, (ptr), (err));	\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
@@ -366,7 +368,7 @@ do {									\
 do {									\
 	__chk_user_ptr(ptr);						\
 	uaccess_enable_not_uao();					\
-	__raw_put_mem(x, ptr, err);					\
+	__raw_put_mem("sttr", x, ptr, err);				\
 	uaccess_disable_not_uao();					\
 } while (0)
 
@@ -391,6 +393,16 @@ do {									\
 
 #define put_user	__put_user
 
+#define __put_kernel_nofault(dst, src, type, err_label)			\
+do {									\
+	int __pkn_err = 0;						\
+									\
+	__raw_put_mem("str", *((type *)(src)),				\
+		      (__force type __user *)(dst), __pkn_err);		\
+	if (unlikely(__pkn_err))					\
+		goto err_label;						\
+} while(0)
+
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
 #define raw_copy_from_user(to, from, n)					\
 ({									\
-- 
2.11.0



* [PATCHv4 13/17] arm64: uaccess: cleanup macro naming
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (11 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 12/17] arm64: uaccess: split user/kernel routines Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 14/17] arm64: uaccess: remove set_fs() Mark Rutland
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Now that the uaccess primitives use LDTR/STTR unconditionally, the
uao_{ldp,stp,user_alternative} asm macros are misnamed, and have a
redundant argument. Let's remove the redundant argument and rename these
to user_{ldp,stp,ldst} respectively to clean this up.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/asm-uaccess.h |  8 ++++----
 arch/arm64/lib/clear_user.S          |  8 ++++----
 arch/arm64/lib/copy_from_user.S      |  8 ++++----
 arch/arm64/lib/copy_in_user.S        | 16 ++++++++--------
 arch/arm64/lib/copy_to_user.S        |  8 ++++----
 arch/arm64/lib/mte.S                 |  4 ++--
 6 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 046196f08988..5e3035fb2c80 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -63,7 +63,7 @@ alternative_else_nop_endif
  * This is complicated as there is no post-increment or pair versions of the
  * unprivileged instructions, and USER() only works for single instructions.
  */
-	.macro uao_ldp l, reg1, reg2, addr, post_inc
+	.macro user_ldp l, reg1, reg2, addr, post_inc
 8888:		ldtr	\reg1, [\addr];
 8889:		ldtr	\reg2, [\addr, #8];
 		add	\addr, \addr, \post_inc;
@@ -72,7 +72,7 @@ alternative_else_nop_endif
 		_asm_extable	8889b,\l;
 	.endm
 
-	.macro uao_stp l, reg1, reg2, addr, post_inc
+	.macro user_stp l, reg1, reg2, addr, post_inc
 8888:		sttr	\reg1, [\addr];
 8889:		sttr	\reg2, [\addr, #8];
 		add	\addr, \addr, \post_inc;
@@ -81,8 +81,8 @@ alternative_else_nop_endif
 		_asm_extable	8889b,\l;
 	.endm
 
-	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
-8888:		\alt_inst	\reg, [\addr];
+	.macro user_ldst l, inst, reg, addr, post_inc
+8888:		\inst		\reg, [\addr];
 		add		\addr, \addr, \post_inc;
 
 		_asm_extable	8888b,\l;
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index 48a3a26eff66..af9afcbec92c 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -24,20 +24,20 @@ SYM_FUNC_START(__arch_clear_user)
 	subs	x1, x1, #8
 	b.mi	2f
 1:
-uao_user_alternative 9f, str, sttr, xzr, x0, 8
+user_ldst 9f, sttr, xzr, x0, 8
 	subs	x1, x1, #8
 	b.pl	1b
 2:	adds	x1, x1, #4
 	b.mi	3f
-uao_user_alternative 9f, str, sttr, wzr, x0, 4
+user_ldst 9f, sttr, wzr, x0, 4
 	sub	x1, x1, #4
 3:	adds	x1, x1, #2
 	b.mi	4f
-uao_user_alternative 9f, strh, sttrh, wzr, x0, 2
+user_ldst 9f, sttrh, wzr, x0, 2
 	sub	x1, x1, #2
 4:	adds	x1, x1, #1
 	b.mi	5f
-uao_user_alternative 9f, strb, sttrb, wzr, x0, 0
+user_ldst 9f, sttrb, wzr, x0, 0
 5:	mov	x0, #0
 	ret
 SYM_FUNC_END(__arch_clear_user)
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 0f8a3a9e3795..95cd62d67371 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -21,7 +21,7 @@
  */
 
 	.macro ldrb1 reg, ptr, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
+	user_ldst 9998f, ldtrb, \reg, \ptr, \val
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -29,7 +29,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
+	user_ldst 9998f, ldtrh, \reg, \ptr, \val
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -37,7 +37,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
+	user_ldst 9998f, ldtr, \reg, \ptr, \val
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -45,7 +45,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
+	user_ldp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 80e37ada0ee1..1f61cd0df062 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -22,35 +22,35 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
+	user_ldst 9998f, ldtrb, \reg, \ptr, \val
 	.endm
 
 	.macro strb1 reg, ptr, val
-	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
+	user_ldst 9998f, sttrb, \reg, \ptr, \val
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
+	user_ldst 9998f, ldtrh, \reg, \ptr, \val
 	.endm
 
 	.macro strh1 reg, ptr, val
-	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
+	user_ldst 9998f, sttrh, \reg, \ptr, \val
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
+	user_ldst 9998f, ldtr, \reg, \ptr, \val
 	.endm
 
 	.macro str1 reg, ptr, val
-	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
+	user_ldst 9998f, sttr, \reg, \ptr, \val
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
+	user_ldp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
-	uao_stp 9998f, \reg1, \reg2, \ptr, \val
+	user_stp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 end	.req	x5
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 4ec59704b8f2..043da90f5dd7 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -24,7 +24,7 @@
 	.endm
 
 	.macro strb1 reg, ptr, val
-	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
+	user_ldst 9998f, sttrb, \reg, \ptr, \val
 	.endm
 
 	.macro ldrh1 reg, ptr, val
@@ -32,7 +32,7 @@
 	.endm
 
 	.macro strh1 reg, ptr, val
-	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
+	user_ldst 9998f, sttrh, \reg, \ptr, \val
 	.endm
 
 	.macro ldr1 reg, ptr, val
@@ -40,7 +40,7 @@
 	.endm
 
 	.macro str1 reg, ptr, val
-	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
+	user_ldst 9998f, sttr, \reg, \ptr, \val
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
@@ -48,7 +48,7 @@
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
-	uao_stp 9998f, \reg1, \reg2, \ptr, \val
+	user_stp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 
 end	.req	x5
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index cceed41bba15..351537c12f36 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -67,7 +67,7 @@ SYM_FUNC_START(mte_copy_tags_from_user)
 	mov	x3, x1
 	cbz	x2, 2f
 1:
-	uao_user_alternative 2f, ldrb, ldtrb, w4, x1, 0
+	user_ldst 2f, ldtrb, w4, x1, 0
 	lsl	x4, x4, #MTE_TAG_SHIFT
 	stg	x4, [x0], #MTE_GRANULE_SIZE
 	add	x1, x1, #1
@@ -94,7 +94,7 @@ SYM_FUNC_START(mte_copy_tags_to_user)
 1:
 	ldg	x4, [x1]
 	ubfx	x4, x4, #MTE_TAG_SHIFT, #MTE_TAG_SIZE
-	uao_user_alternative 2f, strb, sttrb, w4, x0, 0
+	user_ldst 2f, sttrb, w4, x0, 0
 	add	x0, x0, #1
 	add	x1, x1, #MTE_GRANULE_SIZE
 	subs	x2, x2, #1
-- 
2.11.0



* [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (12 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 13/17] arm64: uaccess cleanup macro naming Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-16 17:40   ` Catalin Marinas
  2020-11-26 18:42   ` James Morse
  2020-11-13 12:49 ` [PATCHv4 15/17] arm64: uaccess: remove addr_limit_user_check() Mark Rutland
                   ` (3 subsequent siblings)
  17 siblings, 2 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Now that the uaccess primitives don't take addr_limit into account, we
have no need to manipulate this via set_fs() and get_fs(). Remove
support for these, along with some infrastructure this renders
redundant.

We no longer need to flip UAO to access kernel memory under KERNEL_DS,
and head.S unconditionally clears UAO for all kernel configurations via
an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
nor do we need to context-switch it. However, we do need to clear UAO
(and set PAN) during SDEI entry.

Masking of __user pointers no longer needs to use the dynamic value of
addr_limit, and can use a constant derived from the maximum possible
userspace task size. A new TASK_SIZE_MAX constant is introduced for
this, which is also used by core code. In configurations supporting
52-bit VAs, this may include a region of unusable VA space above a
48-bit TTBR0 limit, but never includes any portion of TTBR1.

Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
KERNEL_DS were inclusive limits, and is converted to a mask by
subtracting one.
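
The limit/mask arithmetic above can be sketched in plain C. This is an
illustrative model only, assuming a 48-bit VA configuration: the kernel's
__range_ok() and __uaccess_mask_ptr() are hand-written asm using
flag-setting adds/bics and csel, and the tag handling here is simplified
to clearing the top byte rather than the kernel's untagged_addr() logic.

```c
#include <assert.h>
#include <stdint.h>

#define VA_BITS       48
#define TASK_SIZE_MAX ((uint64_t)1 << VA_BITS)   /* exclusive limit */

/* (u65)addr + (u65)size <= (u65)TASK_SIZE_MAX, computed without
 * 64-bit overflow by rearranging the comparison. */
static int range_ok(uint64_t addr, uint64_t size)
{
	return size <= TASK_SIZE_MAX && addr <= TASK_SIZE_MAX - size;
}

/* NULL out a pointer whose untagged value exceeds the inclusive mask
 * (TASK_SIZE_MAX - 1); a tagged-but-in-range pointer is passed
 * through unmodified, as in __uaccess_mask_ptr(). */
static uint64_t mask_ptr(uint64_t ptr)
{
	uint64_t mask = TASK_SIZE_MAX - 1;
	uint64_t untagged = ptr & (((uint64_t)1 << 56) - 1); /* strip top byte */

	return (untagged & ~mask) ? 0 : ptr;
}
```

Note how the exclusive limit becomes an inclusive mask by subtracting one,
matching the commit-message description.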

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/Kconfig                   |  1 -
 arch/arm64/include/asm/exec.h        |  1 -
 arch/arm64/include/asm/processor.h   |  4 +---
 arch/arm64/include/asm/ptrace.h      |  3 +--
 arch/arm64/include/asm/thread_info.h |  4 ----
 arch/arm64/include/asm/uaccess.h     | 41 ++++++------------------------------
 arch/arm64/kernel/asm-offsets.c      |  3 +--
 arch/arm64/kernel/cpufeature.c       |  4 ----
 arch/arm64/kernel/entry.S            | 19 +++--------------
 arch/arm64/kernel/process.c          | 12 -----------
 arch/arm64/kernel/sdei.c             |  7 +-----
 arch/arm64/kernel/suspend.c          |  1 -
 arch/arm64/mm/fault.c                |  5 -----
 13 files changed, 13 insertions(+), 92 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1515f6f153a0..ac402c266dd8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -195,7 +195,6 @@ config ARM64
 	select PCI_SYSCALL if PCI
 	select POWER_RESET
 	select POWER_SUPPLY
-	select SET_FS
 	select SPARSE_IRQ
 	select SWIOTLB
 	select SYSCTL_EXCEPTION_TRACE
diff --git a/arch/arm64/include/asm/exec.h b/arch/arm64/include/asm/exec.h
index 1aae6f9962fc..9a1c22ce664b 100644
--- a/arch/arm64/include/asm/exec.h
+++ b/arch/arm64/include/asm/exec.h
@@ -10,6 +10,5 @@
 #include <linux/sched.h>
 
 extern unsigned long arch_align_stack(unsigned long sp);
-void uao_thread_switch(struct task_struct *next);
 
 #endif	/* __ASM_EXEC_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fce8cbecd6bc..724249f37af5 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -8,9 +8,6 @@
 #ifndef __ASM_PROCESSOR_H
 #define __ASM_PROCESSOR_H
 
-#define KERNEL_DS		UL(-1)
-#define USER_DS			((UL(1) << VA_BITS) - 1)
-
 /*
  * On arm64 systems, unaligned accesses by the CPU are cheap, and so there is
  * no point in shifting all network buffers by 2 bytes just to make some IP
@@ -48,6 +45,7 @@
 
 #define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_actual)
+#define TASK_SIZE_MAX		(UL(1) << VA_BITS)
 
 #ifdef CONFIG_COMPAT
 #if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 2547d94634be..2bb53bc3c326 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -193,8 +193,7 @@ struct pt_regs {
 	s32 syscallno;
 	u32 unused2;
 #endif
-
-	u64 orig_addr_limit;
+	u64 sdei_ttbr1;
 	/* Only valid when ARM64_HAS_IRQ_PRIO_MASKING is enabled. */
 	u64 pmr_save;
 	u64 stackframe[2];
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 1fbab854a51b..3a88a55de568 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -18,14 +18,11 @@ struct task_struct;
 #include <asm/stack_pointer.h>
 #include <asm/types.h>
 
-typedef unsigned long mm_segment_t;
-
 /*
  * low level task data that entry.S needs immediate access to.
  */
 struct thread_info {
 	unsigned long		flags;		/* low level flags */
-	mm_segment_t		addr_limit;	/* address limit */
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	u64			ttbr0;		/* saved TTBR0_EL1 */
 #endif
@@ -119,7 +116,6 @@ void arch_release_task_struct(struct task_struct *tsk);
 {									\
 	.flags		= _TIF_FOREIGN_FPSTATE,				\
 	.preempt_count	= INIT_PREEMPT_COUNT,				\
-	.addr_limit	= KERNEL_DS,					\
 	INIT_SCS							\
 }
 
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 8687ec27dd00..8a4d9ed5eb51 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -26,44 +26,16 @@
 
 #define HAVE_GET_KERNEL_NOFAULT
 
-#define get_fs()	(current_thread_info()->addr_limit)
-
-static inline void set_fs(mm_segment_t fs)
-{
-	current_thread_info()->addr_limit = fs;
-
-	/*
-	 * Prevent a mispredicted conditional call to set_fs from forwarding
-	 * the wrong address limit to access_ok under speculation.
-	 */
-	spec_bar();
-
-	/* On user-mode return, check fs is correct */
-	set_thread_flag(TIF_FSCHECK);
-
-	/*
-	 * Enable/disable UAO so that copy_to_user() etc can access
-	 * kernel memory with the unprivileged instructions.
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_UAO) && fs == KERNEL_DS)
-		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
-	else
-		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO,
-				CONFIG_ARM64_UAO));
-}
-
-#define uaccess_kernel()	(get_fs() == KERNEL_DS)
-
 /*
  * Test whether a block of memory is a valid user space address.
  * Returns 1 if the range is valid, 0 otherwise.
  *
  * This is equivalent to the following test:
- * (u65)addr + (u65)size <= (u65)current->addr_limit + 1
+ * (u65)addr + (u65)size <= (u65)TASK_SIZE_MAX
  */
 static inline unsigned long __range_ok(const void __user *addr, unsigned long size)
 {
-	unsigned long ret, limit = current_thread_info()->addr_limit;
+	unsigned long ret, limit = TASK_SIZE_MAX - 1;
 
 	/*
 	 * Asynchronous I/O running in a kernel thread does not have the
@@ -96,7 +68,6 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
 }
 
 #define access_ok(addr, size)	__range_ok(addr, size)
-#define user_addr_max			get_fs
 
 #define _ASM_EXTABLE(from, to)						\
 	"	.pushsection	__ex_table, \"a\"\n"			\
@@ -226,9 +197,9 @@ static inline void uaccess_enable_not_uao(void)
 }
 
 /*
- * Sanitise a uaccess pointer such that it becomes NULL if above the
- * current addr_limit. In case the pointer is tagged (has the top byte set),
- * untag the pointer before checking.
+ * Sanitise a uaccess pointer such that it becomes NULL if above the maximum
+ * user address. In case the pointer is tagged (has the top byte set), untag
+ * the pointer before checking.
  */
 #define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
 static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
@@ -239,7 +210,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	"	bics	xzr, %3, %2\n"
 	"	csel	%0, %1, xzr, eq\n"
 	: "=&r" (safe_ptr)
-	: "r" (ptr), "r" (current_thread_info()->addr_limit),
+	: "r" (ptr), "r" (TASK_SIZE_MAX - 1),
 	  "r" (untagged_addr(ptr))
 	: "cc");
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7d32fc959b1a..679b19b8a7ff 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -30,7 +30,6 @@ int main(void)
   BLANK();
   DEFINE(TSK_TI_FLAGS,		offsetof(struct task_struct, thread_info.flags));
   DEFINE(TSK_TI_PREEMPT,	offsetof(struct task_struct, thread_info.preempt_count));
-  DEFINE(TSK_TI_ADDR_LIMIT,	offsetof(struct task_struct, thread_info.addr_limit));
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
   DEFINE(TSK_TI_TTBR0,		offsetof(struct task_struct, thread_info.ttbr0));
 #endif
@@ -70,7 +69,7 @@ int main(void)
   DEFINE(S_PSTATE,		offsetof(struct pt_regs, pstate));
   DEFINE(S_PC,			offsetof(struct pt_regs, pc));
   DEFINE(S_SYSCALLNO,		offsetof(struct pt_regs, syscallno));
-  DEFINE(S_ORIG_ADDR_LIMIT,	offsetof(struct pt_regs, orig_addr_limit));
+  DEFINE(S_SDEI_TTBR1,		offsetof(struct pt_regs, sdei_ttbr1));
   DEFINE(S_PMR_SAVE,		offsetof(struct pt_regs, pmr_save));
   DEFINE(S_STACKFRAME,		offsetof(struct pt_regs, stackframe));
   DEFINE(S_FRAME_SIZE,		sizeof(struct pt_regs));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4ff0369bd0b3..5e9732f3561e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1777,10 +1777,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64MMFR2_EL1,
 		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
 		.min_field_value = 1,
-		/*
-		 * We rely on stop_machine() calling uao_thread_switch() to set
-		 * UAO immediately after patching.
-		 */
 	},
 #endif /* CONFIG_ARM64_UAO */
 #ifdef CONFIG_ARM64_PAN
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b295fb912b12..bdd3b57b12f5 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -216,12 +216,6 @@ alternative_else_nop_endif
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
-	/* Save the task's original addr_limit and set USER_DS */
-	ldr	x20, [tsk, #TSK_TI_ADDR_LIMIT]
-	str	x20, [sp, #S_ORIG_ADDR_LIMIT]
-	mov	x20, #USER_DS
-	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]
-	/* No need to reset PSTATE.UAO, hardware's already set it to 0 for us */
 	.endif /* \el == 0 */
 	mrs	x22, elr_el1
 	mrs	x23, spsr_el1
@@ -279,12 +273,6 @@ alternative_else_nop_endif
 	.macro	kernel_exit, el
 	.if	\el != 0
 	disable_daif
-
-	/* Restore the task's original addr_limit. */
-	ldr	x20, [sp, #S_ORIG_ADDR_LIMIT]
-	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]
-
-	/* No need to restore UAO, it will be restored from SPSR_EL1 */
 	.endif
 
 	/* Restore pmr */
@@ -999,10 +987,9 @@ SYM_CODE_START(__sdei_asm_entry_trampoline)
 	mov	x4, xzr
 
 	/*
-	 * Use reg->interrupted_regs.addr_limit to remember whether to unmap
-	 * the kernel on exit.
+	 * Remember whether to unmap the kernel on exit.
 	 */
-1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
+1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
 
 #ifdef CONFIG_RANDOMIZE_BASE
 	adr	x4, tramp_vectors + PAGE_SIZE
@@ -1023,7 +1010,7 @@ NOKPROBE(__sdei_asm_entry_trampoline)
  * x4: struct sdei_registered_event argument from registration time.
  */
 SYM_CODE_START(__sdei_asm_exit_trampoline)
-	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
+	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
 	cbnz	x4, 1f
 
 	tramp_unmap_kernel	tmp=x4
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 855137daafbf..569910ac0fd2 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -460,17 +460,6 @@ static void tls_thread_switch(struct task_struct *next)
 	write_sysreg(*task_user_tls(next), tpidr_el0);
 }
 
-/* Restore the UAO state depending on next's addr_limit */
-void uao_thread_switch(struct task_struct *next)
-{
-	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
-		if (task_thread_info(next)->addr_limit == KERNEL_DS)
-			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
-		else
-			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
-	}
-}
-
 /*
  * Force SSBS state on context-switch, since it may be lost after migrating
  * from a CPU which treats the bit as RES0 in a heterogeneous system.
@@ -554,7 +543,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	hw_breakpoint_thread_switch(next);
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
-	uao_thread_switch(next);
 	ssbs_thread_switch(next);
 	erratum_1418040_thread_switch(prev, next);
 
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 908d7be70eac..6c45dad11666 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -231,15 +231,12 @@ asmlinkage __kprobes notrace unsigned long
 __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 {
 	unsigned long ret;
-	mm_segment_t orig_addr_limit;
 
 	/*
 	 * We didn't take an exception to get here, so the HW hasn't
-	 * set/cleared bits in PSTATE that we may rely on. Intialize PAN/UAO,
-	 * then use force_uaccess_begin() to reset addr_limit.
+	 * set/cleared bits in PSTATE that we may rely on. Initialize PAN/UAO.
 	 */
 	__sdei_pstate_entry();
-	orig_addr_limit = force_uaccess_begin();
 
 	nmi_enter();
 
@@ -247,7 +244,5 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
 
 	nmi_exit();
 
-	force_uaccess_end(orig_addr_limit);
-
 	return ret;
 }
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 96cd347c7a46..a67b37a7a47e 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -58,7 +58,6 @@ void notrace __cpu_suspend_exit(void)
 	 * features that might not have been set correctly.
 	 */
 	__uaccess_enable_hw_pan();
-	uao_thread_switch(current);
 
 	/*
 	 * Restore HW breakpoint registers to sane values
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 1ee94002801f..8dc17e650c8e 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -479,11 +479,6 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}
 
 	if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
-		/* regs->orig_addr_limit may be 0 if we entered from EL0 */
-		if (regs->orig_addr_limit == KERNEL_DS)
-			die_kernel_fault("access to user memory with fs=KERNEL_DS",
-					 addr, esr, regs);
-
 		if (is_el1_instruction_abort(esr))
 			die_kernel_fault("execution of user memory",
 					 addr, esr, regs);
-- 
2.11.0



* [PATCHv4 15/17] arm64: uaccess: remove addr_limit_user_check()
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (13 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 14/17] arm64: uaccess: remove set_fs() Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 16/17] arm64: uaccess: remove redundant PAN toggling Mark Rutland
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Now that set_fs() is gone, addr_limit_user_check() is redundant. Remove
the checks and associated thread flag.

To ensure that _TIF_WORK_MASK can be used as an immediate value in an
AND instruction (as it is in `ret_to_user`), TIF_MTE_ASYNC_FAULT is
renumbered to keep the constituent bits of _TIF_WORK_MASK contiguous.

There should be no functional change as a result of this patch.
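The contiguity property this patch preserves can be sketched in C. Bit
numbers follow the renumbering in the hunk below; the TIF_SIGPENDING and
TIF_NEED_RESCHED values are assumed from the surrounding header. The check
`mask & (mask + 1) == 0` is one sufficient (not necessary) condition for a
value to be encodable as an AArch64 logical-instruction immediate, not the
full bitmask-immediate encoder.

```c
#include <assert.h>

#define TIF_SIGPENDING      0
#define TIF_NEED_RESCHED    1
#define TIF_NOTIFY_RESUME   2
#define TIF_FOREIGN_FPSTATE 3
#define TIF_UPROBE          4
#define TIF_MTE_ASYNC_FAULT 5	/* renumbered from 6 by this patch */

#define _TIF_WORK_MASK						\
	((1UL << TIF_NEED_RESCHED) | (1UL << TIF_SIGPENDING) |	\
	 (1UL << TIF_NOTIFY_RESUME) | (1UL << TIF_FOREIGN_FPSTATE) | \
	 (1UL << TIF_UPROBE) | (1UL << TIF_MTE_ASYNC_FAULT))

/* A non-zero mask whose set bits run contiguously from bit 0
 * satisfies mask & (mask + 1) == 0. */
static int contiguous_from_bit0(unsigned long mask)
{
	return mask != 0 && (mask & (mask + 1)) == 0;
}
```

With the old numbering (TIF_MTE_ASYNC_FAULT at bit 6 and TIF_FSCHECK gone),
the mask would have been 0x5f, which fails the check.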

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/thread_info.h | 6 ++----
 arch/arm64/kernel/signal.c           | 3 ---
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 3a88a55de568..015beafe58f5 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -63,8 +63,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
 #define TIF_FOREIGN_FPSTATE	3	/* CPU's FP state is not current's */
 #define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
-#define TIF_FSCHECK		5	/* Check FS is USER_DS on return */
-#define TIF_MTE_ASYNC_FAULT	6	/* MTE Asynchronous Tag Check Fault */
+#define TIF_MTE_ASYNC_FAULT	5	/* MTE Asynchronous Tag Check Fault */
 #define TIF_SYSCALL_TRACE	8	/* syscall trace active */
 #define TIF_SYSCALL_AUDIT	9	/* syscall auditing */
 #define TIF_SYSCALL_TRACEPOINT	10	/* syscall tracepoint for ftrace */
@@ -90,7 +89,6 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
-#define _TIF_FSCHECK		(1 << TIF_FSCHECK)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 #define _TIF_SVE		(1 << TIF_SVE)
@@ -98,7 +96,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
-				 _TIF_UPROBE | _TIF_FSCHECK | _TIF_MTE_ASYNC_FAULT)
+				 _TIF_UPROBE | _TIF_MTE_ASYNC_FAULT)
 
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index a8184cad8890..af5c6c6638f7 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -922,9 +922,6 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 	trace_hardirqs_off();
 
 	do {
-		/* Check valid user FS if needed */
-		addr_limit_user_check();
-
 		if (thread_flags & _TIF_NEED_RESCHED) {
 			/* Unmask Debug and SError for the next task */
 			local_daif_restore(DAIF_PROCCTX_NOIRQ);
-- 
2.11.0



* [PATCHv4 16/17] arm64: uaccess: remove redundant PAN toggling
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (14 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 15/17] arm64: uaccess: remove addr_limit_user_check() Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-13 12:49 ` [PATCHv4 17/17] arm64: uaccess: remove vestigial UAO support Mark Rutland
  2020-11-25 19:10 ` [PATCHv4 00/17] arm64: remove set_fs() and friends Catalin Marinas
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Some code (e.g. futex) needs to make privileged accesses to userspace
memory, and uses uaccess_{enable,disable}_privileged() in order to
permit this. All other uaccess primitives use LDTR/STTR, and never need
to toggle PAN.

Remove the redundant PAN toggling.
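A toy model of the flow this patch leaves behind, in plain C; this is not
kernel code, and the names merely mirror the kernel's. Under SW TTBR0-PAN,
installing the user tables in TTBR0 is sufficient and PSTATE.PAN is left
alone; otherwise the hardware PAN bit is toggled directly, with no
ARM64_ALT_PAN_NOT_UAO alternative in between.

```c
#include <assert.h>
#include <stdbool.h>

struct cpu {
	bool sw_ttbr0_pan;	/* CONFIG_ARM64_SW_TTBR0_PAN analogue */
	bool pan;		/* PSTATE.PAN: set blocks privileged user access */
	bool ttbr0_user;	/* user tables currently installed in TTBR0 */
};

static bool uaccess_ttbr0_enable(struct cpu *c)
{
	if (!c->sw_ttbr0_pan)
		return false;		/* not handled; caller clears PAN */
	c->ttbr0_user = true;
	return true;
}

static void uaccess_enable_privileged(struct cpu *c)
{
	if (uaccess_ttbr0_enable(c))
		return;
	c->pan = false;			/* __uaccess_disable_hw_pan() analogue */
}

/* User memory is reachable from EL1 once either mechanism has fired. */
static bool user_access_ok(const struct cpu *c)
{
	return c->sw_ttbr0_pan ? c->ttbr0_user : !c->pan;
}

/* Predicates so each path can be checked as a single expression. */
static bool hw_pan_path_ok(void)
{
	struct cpu c = { .sw_ttbr0_pan = false, .pan = true };

	uaccess_enable_privileged(&c);
	return user_access_ok(&c) && !c.pan;
}

static bool sw_ttbr0_path_ok(void)
{
	struct cpu c = { .sw_ttbr0_pan = true, .pan = true };

	uaccess_enable_privileged(&c);
	return user_access_ok(&c) && c.pan;	/* PAN untouched here */
}
```

The LDTR/STTR-based primitives need neither mechanism, which is why the
generic __uaccess_{enable,disable}() macros can go away entirely.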

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpucaps.h |  1 -
 arch/arm64/include/asm/uaccess.h | 59 +++++++++++++---------------------------
 arch/arm64/kernel/cpufeature.c   | 17 ------------
 3 files changed, 19 insertions(+), 58 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index e7d98997c09c..ce7b7ada2c82 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -17,7 +17,6 @@
 #define ARM64_WORKAROUND_834220			7
 #define ARM64_HAS_NO_HW_PREFETCH		8
 #define ARM64_HAS_UAO				9
-#define ARM64_ALT_PAN_NOT_UAO			10
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 8a4d9ed5eb51..d81c083967ed 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -159,41 +159,20 @@ static inline void __uaccess_enable_hw_pan(void)
 			CONFIG_ARM64_PAN));
 }
 
-#define __uaccess_disable(alt)						\
-do {									\
-	if (!uaccess_ttbr0_disable())					\
-		asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), alt,		\
-				CONFIG_ARM64_PAN));			\
-} while (0)
-
-#define __uaccess_enable(alt)						\
-do {									\
-	if (!uaccess_ttbr0_enable())					\
-		asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), alt,		\
-				CONFIG_ARM64_PAN));			\
-} while (0)
-
 static inline void uaccess_disable_privileged(void)
 {
-	__uaccess_disable(ARM64_HAS_PAN);
-}
+	if (uaccess_ttbr0_disable())
+		return;
 
-static inline void uaccess_enable_privileged(void)
-{
-	__uaccess_enable(ARM64_HAS_PAN);
+	__uaccess_enable_hw_pan();
 }
 
-/*
- * These functions are no-ops when UAO is present.
- */
-static inline void uaccess_disable_not_uao(void)
+static inline void uaccess_enable_privileged(void)
 {
-	__uaccess_disable(ARM64_ALT_PAN_NOT_UAO);
-}
+	if (uaccess_ttbr0_enable())
+		return;
 
-static inline void uaccess_enable_not_uao(void)
-{
-	__uaccess_enable(ARM64_ALT_PAN_NOT_UAO);
+	__uaccess_disable_hw_pan();
 }
 
 /*
@@ -265,9 +244,9 @@ do {									\
 #define __raw_get_user(x, ptr, err)					\
 do {									\
 	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__raw_get_mem("ldtr", x, ptr, err);				\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 } while (0)
 
 #define __get_user_error(x, ptr, err)					\
@@ -338,9 +317,9 @@ do {									\
 #define __raw_put_user(x, ptr, err)					\
 do {									\
 	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__raw_put_mem("sttr", x, ptr, err);				\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 } while (0)
 
 #define __put_user_error(x, ptr, err)					\
@@ -378,10 +357,10 @@ extern unsigned long __must_check __arch_copy_from_user(void *to, const void __u
 #define raw_copy_from_user(to, from, n)					\
 ({									\
 	unsigned long __acfu_ret;					\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__acfu_ret = __arch_copy_from_user((to),			\
 				      __uaccess_mask_ptr(from), (n));	\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 	__acfu_ret;							\
 })
 
@@ -389,10 +368,10 @@ extern unsigned long __must_check __arch_copy_to_user(void __user *to, const voi
 #define raw_copy_to_user(to, from, n)					\
 ({									\
 	unsigned long __actu_ret;					\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__actu_ret = __arch_copy_to_user(__uaccess_mask_ptr(to),	\
 				    (from), (n));			\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 	__actu_ret;							\
 })
 
@@ -400,10 +379,10 @@ extern unsigned long __must_check __arch_copy_in_user(void __user *to, const voi
 #define raw_copy_in_user(to, from, n)					\
 ({									\
 	unsigned long __aciu_ret;					\
-	uaccess_enable_not_uao();					\
+	uaccess_ttbr0_enable();						\
 	__aciu_ret = __arch_copy_in_user(__uaccess_mask_ptr(to),	\
 				    __uaccess_mask_ptr(from), (n));	\
-	uaccess_disable_not_uao();					\
+	uaccess_ttbr0_disable();					\
 	__aciu_ret;							\
 })
 
@@ -414,9 +393,9 @@ extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned lo
 static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
 {
 	if (access_ok(to, n)) {
-		uaccess_enable_not_uao();
+		uaccess_ttbr0_enable();
 		n = __arch_clear_user(__uaccess_mask_ptr(to), n);
-		uaccess_disable_not_uao();
+		uaccess_ttbr0_disable();
 	}
 	return n;
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5e9732f3561e..5d7032625ff2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -153,10 +153,6 @@ EXPORT_SYMBOL(cpu_hwcap_keys);
 		.width = 0,				\
 	}
 
-/* meta feature for alternatives */
-static bool __maybe_unused
-cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
-
 static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
 
 static bool __system_matches_cap(unsigned int n);
@@ -1779,13 +1775,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = 1,
 	},
 #endif /* CONFIG_ARM64_UAO */
-#ifdef CONFIG_ARM64_PAN
-	{
-		.capability = ARM64_ALT_PAN_NOT_UAO,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
-		.matches = cpufeature_pan_not_uao,
-	},
-#endif /* CONFIG_ARM64_PAN */
 #ifdef CONFIG_ARM64_VHE
 	{
 		.desc = "Virtualization Host Extensions",
@@ -2726,12 +2715,6 @@ void __init setup_cpu_features(void)
 			ARCH_DMA_MINALIGN);
 }
 
-static bool __maybe_unused
-cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
-{
-	return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO));
-}
-
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
 {
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
-- 
2.11.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCHv4 17/17] arm64: uaccess: remove vestigial UAO support
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (15 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 16/17] arm64: uaccess: remove redundant PAN toggling Mark Rutland
@ 2020-11-13 12:49 ` Mark Rutland
  2020-11-25 19:10 ` [PATCHv4 00/17] arm64: remove set_fs() and friends Catalin Marinas
  17 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-13 12:49 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will
  Cc: mark.rutland, robin.murphy, james.morse, hch

Now that arm64 no longer uses UAO, remove the vestigial feature detection
code and Kconfig text.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/Kconfig               | 21 ---------------------
 arch/arm64/include/asm/cpucaps.h |  1 -
 arch/arm64/kernel/cpufeature.c   | 11 -----------
 3 files changed, 33 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ac402c266dd8..71f3f3b9f3d2 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1424,27 +1424,6 @@ endmenu
 
 menu "ARMv8.2 architectural features"
 
-config ARM64_UAO
-	bool "Enable support for User Access Override (UAO)"
-	default y
-	help
-	  User Access Override (UAO; part of the ARMv8.2 Extensions)
-	  causes the 'unprivileged' variant of the load/store instructions to
-	  be overridden to be privileged.
-
-	  This option changes get_user() and friends to use the 'unprivileged'
-	  variant of the load/store instructions. This ensures that user-space
-	  really did have access to the supplied memory. When addr_limit is
-	  set to kernel memory the UAO bit will be set, allowing privileged
-	  access to kernel memory.
-
-	  Choosing this option will cause copy_to_user() et al to use user-space
-	  memory permissions.
-
-	  The feature is detected at runtime, the kernel will use the
-	  regular load/store instructions if the cpu does not implement the
-	  feature.
-
 config ARM64_PMEM
 	bool "Enable support for persistent memory"
 	select ARCH_HAS_PMEM_API
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index ce7b7ada2c82..95064b7d64b2 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -16,7 +16,6 @@
 #define ARM64_WORKAROUND_CAVIUM_23154		6
 #define ARM64_WORKAROUND_834220			7
 #define ARM64_HAS_NO_HW_PREFETCH		8
-#define ARM64_HAS_UAO				9
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5d7032625ff2..3a9e1d672cb0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1764,17 +1764,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
 		.matches = has_no_hw_prefetch,
 	},
-#ifdef CONFIG_ARM64_UAO
-	{
-		.desc = "User Access Override",
-		.capability = ARM64_HAS_UAO,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
-		.matches = has_cpuid_feature,
-		.sys_reg = SYS_ID_AA64MMFR2_EL1,
-		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
-		.min_field_value = 1,
-	},
-#endif /* CONFIG_ARM64_UAO */
 #ifdef CONFIG_ARM64_VHE
 	{
 		.desc = "Virtualization Host Extensions",
-- 
2.11.0



* Re: [PATCHv4 08/17] arm64: uaccess: move uao_* alternatives to asm-uaccess.h
  2020-11-13 12:49 ` [PATCHv4 08/17] arm64: uaccess: move uao_* alternatives to asm-uaccess.h Mark Rutland
@ 2020-11-14  9:22   ` Christoph Hellwig
  2020-11-16 12:25     ` Mark Rutland
  0 siblings, 1 reply; 38+ messages in thread
From: Christoph Hellwig @ 2020-11-14  9:22 UTC (permalink / raw)
  To: Mark Rutland
  Cc: will, catalin.marinas, james.morse, robin.murphy, hch, linux-arm-kernel

> +#ifdef CONFIG_ARM64_UAO
> +	.macro uao_ldp l, reg1, reg2, addr, post_inc
> +		alternative_if_not ARM64_HAS_UAO
> +8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
> +8889:			nop;

Not actually new in this patch, but what is the reason for these horrible
label names?


* Re: [PATCHv4 08/17] arm64: uaccess: move uao_* alternatives to asm-uaccess.h
  2020-11-14  9:22   ` Christoph Hellwig
@ 2020-11-16 12:25     ` Mark Rutland
  0 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-16 12:25 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: catalin.marinas, will, james.morse, linux-arm-kernel, robin.murphy

On Sat, Nov 14, 2020 at 10:22:15AM +0100, Christoph Hellwig wrote:
> > +#ifdef CONFIG_ARM64_UAO
> > +	.macro uao_ldp l, reg1, reg2, addr, post_inc
> > +		alternative_if_not ARM64_HAS_UAO
> > +8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
> > +8889:			nop;
> 
> Not actually new in this patch, but what is the reason for these horrible
> label names?

It's a historic way of having non-conflicting labels. I also hate it,
and intend to clean it up subsequently.

The new/preferred way for labels in asm macros is to use a descriptive
local label with a '\@' suffix to ensure true uniqueness, e.g.

| 	.macro foo
| 	cbz	x0, .Lskip\@
| 	mov	x0, x1
| .Lskip\@:
| 	.endm

... which I assume is what you'd also like here.
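
As a concrete illustration (not code from this series; the label names are
hypothetical), the uao_ldp fragment quoted above could use `\@`-suffixed
labels in place of the numeric ones:

```asm
	.macro uao_ldp l, reg1, reg2, addr, post_inc
		alternative_if_not ARM64_HAS_UAO
.Lldp\@:		ldp	\reg1, \reg2, [\addr], \post_inc;
.Lnop\@:		nop;
		/* rest of the macro body unchanged */
	.endm
```

Since GNU as expands `\@` to the number of macros executed so far, each
expansion gets unique `.Lldp0`, `.Lldp1`, ... labels with no manual numbering.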

Thanks,
Mark.


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-13 12:49 ` [PATCHv4 14/17] arm64: uaccess: remove set_fs() Mark Rutland
@ 2020-11-16 17:40   ` Catalin Marinas
  2020-11-17 10:44     ` Mark Rutland
  2020-11-26 18:42   ` James Morse
  1 sibling, 1 reply; 38+ messages in thread
From: Catalin Marinas @ 2020-11-16 17:40 UTC (permalink / raw)
  To: Mark Rutland; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

Hi Mark,

On Fri, Nov 13, 2020 at 12:49:34PM +0000, Mark Rutland wrote:
> We no longer need to flip UAO to access kernel memory under KERNEL_DS,
> and head.S unconditionally clears UAO for all kernel configurations via
> an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
> nor do we need to context-switch it. However, we do need to clear UAO
> (and set PAN) during SDEI entry.

If the kernel never sets the UAO bit, why do we need to clear it during
SDEI entry?

-- 
Catalin


* Re: [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/
  2020-11-13 12:49 ` [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
@ 2020-11-16 17:42   ` Catalin Marinas
  2020-11-26 18:42   ` James Morse
  1 sibling, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2020-11-16 17:42 UTC (permalink / raw)
  To: Mark Rutland; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

On Fri, Nov 13, 2020 at 12:49:26PM +0000, Mark Rutland wrote:
> The SDEI support code is split across arch/arm64/ and drivers/firmware/,
> largely so that the arch-specific portions are under arch/arm64/ and the
> management logic is under drivers/firmware/. However, exception entry
> fixups are currently under drivers/firmware/.
> 
> Let's move the exception entry fixups under arch/arm64/. This
> de-clutters the management logic, and puts all the arch-specific
> portions in one place. Doing this also allows the fixups to be applied
> earlier, so things like PAN and UAO will be in a known good state before
> we run other logic. This will also make subsequent refactoring easier.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>

James, could you please have a look at the SDEI changes in this series
and give an ack/reviewed-by if all looks fine? I plan to queue them
sometime this week.

Thanks.

-- 
Catalin


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-16 17:40   ` Catalin Marinas
@ 2020-11-17 10:44     ` Mark Rutland
  2020-11-17 10:54       ` Catalin Marinas
  0 siblings, 1 reply; 38+ messages in thread
From: Mark Rutland @ 2020-11-17 10:44 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

On Mon, Nov 16, 2020 at 05:40:48PM +0000, Catalin Marinas wrote:
> Hi Mark,
> 
> On Fri, Nov 13, 2020 at 12:49:34PM +0000, Mark Rutland wrote:
> > We no longer need to flip UAO to access kernel memory under KERNEL_DS,
> > and head.S unconditionally clears UAO for all kernel configurations via
> > an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
> > nor do we need to context-switch it. However, we do need to clear UAO
> > (and set PAN) during SDEI entry.
> 
> If the kernel never sets the UAO bit, why do we need to clear it during
> SDEI entry?

The fear was taking an SDEI event from a VM which had UAO set, with the
SDEI FW not clearing UAO.

That might not happen in practice because while the spec implies that
could happen, TF-A currently generates a new PSTATE from scratch, and
going forward the spec will be aligned with regular exception entry
rules for PSTATE (so UAO will be cleared automatically).

So we can probably drop the clearing of UAO if you prefer.

Thanks,
Mark.


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-17 10:44     ` Mark Rutland
@ 2020-11-17 10:54       ` Catalin Marinas
  2020-11-17 10:57         ` Mark Rutland
  0 siblings, 1 reply; 38+ messages in thread
From: Catalin Marinas @ 2020-11-17 10:54 UTC (permalink / raw)
  To: Mark Rutland; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

On Tue, Nov 17, 2020 at 10:44:54AM +0000, Mark Rutland wrote:
> On Mon, Nov 16, 2020 at 05:40:48PM +0000, Catalin Marinas wrote:
> > On Fri, Nov 13, 2020 at 12:49:34PM +0000, Mark Rutland wrote:
> > > We no longer need to flip UAO to access kernel memory under KERNEL_DS,
> > > and head.S unconditionally clears UAO for all kernel configurations via
> > > an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
> > > nor do we need to context-switch it. However, we do need to clear UAO
> > > (and set PAN) during SDEI entry.
> > 
> > If the kernel never sets the UAO bit, why do we need to clear it during
> > SDEI entry?
> 
> The fear was taking an SDEI event from a VM which had UAO set, with the
> SDEI FW not clearing UAO.
> 
> That might not happen in practice because while the spec implies that
> could happen, TF-A currently generates a new PSTATE from scratch, and
> going forward the spec will be aligned with regular exception entry
> rules for PSTATE (so UAO will be cleared automatically).

Does this requirement apply retrospectively, or is it only for future
SDEI specs?

> So we can probably drop the clearing of UAO if you prefer.

I don't like clearing UAO specifically. There may be other PSTATE bits
in the future we don't know or care about and that are left set by
firmware. If we don't trust firmware to give a clean PSTATE, can we
reset it with an ERET?

-- 
Catalin


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-17 10:54       ` Catalin Marinas
@ 2020-11-17 10:57         ` Mark Rutland
  2020-11-17 11:02           ` Catalin Marinas
  0 siblings, 1 reply; 38+ messages in thread
From: Mark Rutland @ 2020-11-17 10:57 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

On Tue, Nov 17, 2020 at 10:54:18AM +0000, Catalin Marinas wrote:
> On Tue, Nov 17, 2020 at 10:44:54AM +0000, Mark Rutland wrote:
> > On Mon, Nov 16, 2020 at 05:40:48PM +0000, Catalin Marinas wrote:
> > > On Fri, Nov 13, 2020 at 12:49:34PM +0000, Mark Rutland wrote:
> > > > We no longer need to flip UAO to access kernel memory under KERNEL_DS,
> > > > and head.S unconditionally clears UAO for all kernel configurations via
> > > > an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
> > > > nor do we need to context-switch it. However, we do need to clear UAO
> > > > (and set PAN) during SDEI entry.
> > > 
> > > If the kernel never sets the UAO bit, why do we need to clear it during
> > > SDEI entry?
> > 
> > The fear was taking an SDEI event from a VM which had UAO set, with the
> > SDEI FW not clearing UAO.
> > 
> > That might not happen in practice because while the spec implies that
> > could happen, TF-A currently generates a new PSTATE from scratch, and
> > going forward the spec will be aligned with regular exception entry
> > rules for PSTATE (so UAO will be cleared automatically).
> 
> Does this requirement apply retrospectively, or is it only for future
> SDEI specs?

It applies from the current spec, but note that TF-A has always
generated a new PSTATE (with UAO clear).

> > So we can probably drop the clearing of UAO if you prefer.
> 
> I don't like clearing UAO specifically. There may be other PSTATE bits
> in the future we don't know or care about and that are left set by
> firmware. If we don't trust firmware to give a clean PSTATE, can we
> reset it with an ERET?

In future we should get the new behaviour, so I think we should be fine
there. If we can't trust PSTATE at all, then we're not necessarily able
to perform an ERET anyway.

... which I think means I'm arguing in favour of deleting the clearing
of UAO, if you're happy with that?

Thanks,
Mark.


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-17 10:57         ` Mark Rutland
@ 2020-11-17 11:02           ` Catalin Marinas
  2020-11-17 11:07             ` Mark Rutland
  0 siblings, 1 reply; 38+ messages in thread
From: Catalin Marinas @ 2020-11-17 11:02 UTC (permalink / raw)
  To: Mark Rutland; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

On Tue, Nov 17, 2020 at 10:57:40AM +0000, Mark Rutland wrote:
> On Tue, Nov 17, 2020 at 10:54:18AM +0000, Catalin Marinas wrote:
> > On Tue, Nov 17, 2020 at 10:44:54AM +0000, Mark Rutland wrote:
> > > On Mon, Nov 16, 2020 at 05:40:48PM +0000, Catalin Marinas wrote:
> > > > On Fri, Nov 13, 2020 at 12:49:34PM +0000, Mark Rutland wrote:
> > > > > We no longer need to flip UAO to access kernel memory under KERNEL_DS,
> > > > > and head.S unconditionally clears UAO for all kernel configurations via
> > > > > an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
> > > > > nor do we need to context-switch it. However, we do need to clear UAO
> > > > > (and set PAN) during SDEI entry.
> > > > 
> > > > If the kernel never sets the UAO bit, why do we need to clear it during
> > > > SDEI entry?
> > > 
> > > The fear was taking an SDEI event from a VM which had UAO set, with the
> > > SDEI FW not clearing UAO.
> > > 
> > > That might not happen in practice because while the spec implies that
> > > could happen, TF-A currently generates a new PSTATE from scratch, and
> > > going forward the spec will be aligned with regular exception entry
> > > rules for PSTATE (so UAO will be cleared automatically).
> > 
> > Does this requirement apply retrospectively, or is it only for future
> > SDEI specs?
> 
> It applies from the current spec, but note that TF-A has always
> generated a new PSTATE (with UAO clear).

And we can assume that TF-A is the only implementation generating SDEI.

> > > So we can probably drop the clearing of UAO if you prefer.
> > 
> > I don't like clearing UAO specifically. There may be other PSTATE bits
> > in the future we don't know or care about and that are left set by
> > firmware. If we don't trust firmware to give a clean PSTATE, can we
> > reset it with an ERET?
> 
> In future we should get the new behaviour, so I think we should be fine
> there. If we can't trust PSTATE at all, then we're not necessarily able
> to perform an ERET anyway.
> 
> ... which I think means I'm arguing in favour of deleting the clearing
> of UAO, if you're happy with that?

Yes, I'm fine with that. That means that you can also get rid of
cpu_has_uao().

Could we apply the same logic to SDEI+PAN?

-- 
Catalin


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-17 11:02           ` Catalin Marinas
@ 2020-11-17 11:07             ` Mark Rutland
  2020-11-17 11:10               ` Catalin Marinas
  2020-11-26 18:42               ` James Morse
  0 siblings, 2 replies; 38+ messages in thread
From: Mark Rutland @ 2020-11-17 11:07 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

On Tue, Nov 17, 2020 at 11:02:20AM +0000, Catalin Marinas wrote:
> On Tue, Nov 17, 2020 at 10:57:40AM +0000, Mark Rutland wrote:
> > On Tue, Nov 17, 2020 at 10:54:18AM +0000, Catalin Marinas wrote:
> > > On Tue, Nov 17, 2020 at 10:44:54AM +0000, Mark Rutland wrote:
> > > > On Mon, Nov 16, 2020 at 05:40:48PM +0000, Catalin Marinas wrote:
> > > > > On Fri, Nov 13, 2020 at 12:49:34PM +0000, Mark Rutland wrote:
> > > > > > We no longer need to flip UAO to access kernel memory under KERNEL_DS,
> > > > > > and head.S unconditionally clears UAO for all kernel configurations via
> > > > > > an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
> > > > > > nor do we need to context-switch it. However, we do need to clear UAO
> > > > > > (and set PAN) during SDEI entry.
> > > > > 
> > > > > If the kernel never sets the UAO bit, why do we need to clear it during
> > > > > SDEI entry?
> > > > 
> > > > The fear was taking an SDEI event from a VM which had UAO set, with the
> > > > SDEI FW not clearing UAO.
> > > > 
> > > > That might not happen in practice because while the spec implies that
> > > > could happen, TF-A currently generates a new PSTATE from scratch, and
> > > > going forward the spec will be aligned with regular exception entry
> > > > rules for PSTATE (so UAO will be cleared automatically).
> > > 
> > > Does this requirement apply retrospectively, or is it only for future
> > > SDEI specs?
> > 
> > It applies from the current spec, but note that TF-A has always
> > generated a new PSTATE (with UAO clear).
> 
> And we can assume that TF-A is the only implementation generating SDEI.

I *think* so, hopefully James can confirm.

> > > > So we can probably drop the clearing of UAO if you prefer.
> > > 
> > > I don't like clearing UAO specifically. There may be other PSTATE bits
> > > in the future we don't know or care about and that are left set by
> > > firmware. If we don't trust firmware to give a clean PSTATE, can we
> > > reset it with an ERET?
> > 
> > In future we should get the new behaviour, so I think we should be fine
> > there. If we can't trust PSTATE at all, then we're not necessarily able
> > to perform an ERET anyway.
> > 
> > ... which I think means I'm arguing in favour of deleting the clearing
> > of UAO, if you're happy with that?
> 
> Yes, I'm fine with that. That means that you can also get rid of
> cpu_has_uao().

Yup; I can go adjust the prior patches with that in mind.

> Could we apply the same logic to SDEI+PAN?

I don't think we can; existing TF-A FW will clear PAN (since it
generates the new PSTATE from scratch), so we need to set PAN explicitly
to support that.
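
At the instruction level, "set PAN explicitly" is a one-instruction fixup
(a sketch only; kernel code wraps this encoding in the SET_PSTATE_PAN()
helper added earlier in the series, applied behind alternatives):

```asm
	/* On SDEI entry: force PSTATE.PAN on, since existing TF-A
	 * regenerates PSTATE from scratch with PAN clear.
	 * Requires ARMv8.1-PAN support in the assembler. */
	msr	pan, #1
```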

Thanks,
Mark.


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-17 11:07             ` Mark Rutland
@ 2020-11-17 11:10               ` Catalin Marinas
  2020-11-26 18:42               ` James Morse
  1 sibling, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2020-11-17 11:10 UTC (permalink / raw)
  To: Mark Rutland; +Cc: james.morse, will, hch, linux-arm-kernel, robin.murphy

On Tue, Nov 17, 2020 at 11:07:58AM +0000, Mark Rutland wrote:
> On Tue, Nov 17, 2020 at 11:02:20AM +0000, Catalin Marinas wrote:
> > Could we apply the same logic to SDEI+PAN?
> 
> I don't think we can; existing TF-A FW will clear PAN (since it
> generates the new PSTATE from scratch), so we need to set PAN explicitly
> to support that.

Ah, you are right, we need PAN set (and UAO clear).

-- 
Catalin


* Re: [PATCHv4 00/17] arm64: remove set_fs() and friends
  2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
                   ` (16 preceding siblings ...)
  2020-11-13 12:49 ` [PATCHv4 17/17] arm64: uaccess: remove vestigial UAO support Mark Rutland
@ 2020-11-25 19:10 ` Catalin Marinas
  17 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2020-11-25 19:10 UTC (permalink / raw)
  To: will, Mark Rutland, linux-arm-kernel; +Cc: james.morse, robin.murphy, hch

On Fri, 13 Nov 2020 12:49:20 +0000, Mark Rutland wrote:
> Building atop Christoph's core rework now in v5.10-rc1, this series removes
> set_fs() from arm64. The series can be found in arm64/set_fs-removal branch
> [1].
> 
> Since v1 [2]:
> * Remove redundant kthread init code
> * Remove vestigal bit left by v1
> * Rework SDEI entry
> * Comment head.S better
> * Fix commit message typos
> 
> [...]

I picked up the first 5 preparatory patches, pushed to arm64
(for-next/uaccess). I'll wait for James to ack the SDEI patches and for
you to respin the rest (some nitpicks).

[1/17] arm64: ensure ERET from kthread is illegal
      https://git.kernel.org/arm64/c/75ed0ca3281a
[2/17] arm64: add C wrappers for SET_PSTATE_*()
      https://git.kernel.org/arm64/c/db4b0281cc00
[3/17] arm64: head.S: rename el2_setup -> init_kernel_el
      https://git.kernel.org/arm64/c/a4b81a241a77
[4/17] arm64: head.S: cleanup SCTLR_ELx initialization
      https://git.kernel.org/arm64/c/cc0f3417f4f9
[5/17] arm64: head.S: always initialize PSTATE
      https://git.kernel.org/arm64/c/7cdab4d42ac7

Thanks.

-- 
Catalin



* Re: [PATCHv4 12/17] arm64: uaccess: split user/kernel routines
  2020-11-13 12:49 ` [PATCHv4 12/17] arm64: uaccess: split user/kernel routines Mark Rutland
@ 2020-11-26 18:42   ` James Morse
  2020-12-01 11:03     ` Mark Rutland
  0 siblings, 1 reply; 38+ messages in thread
From: James Morse @ 2020-11-26 18:42 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will; +Cc: robin.murphy, hch

Hi Mark,

On 13/11/2020 12:49, Mark Rutland wrote:
> This patch separates arm64's user and kernel memory access primitives
> into distinct routines, adding new __{get,put}_kernel_nofault() helpers
> to acess kernel memory, upon which core code builds larger copy

(access)

> routines.
> 
> The kernel access routines (using LDR/STR) are not affected by PAN (when
> legitimately accessing kernel memory), nor are they affected by UAO.
> Switching to KERNEL_DS may set UAO, but this does not adversely affect
> the kernel access routines.
> 
> The user access routines (using LDTR/STTR) are not affected by PAN (when
> legitimately accessing user memory), but are affected by UAO. As these
> are only legitimate to use under USER_DS with UAO clear, this should not
> be problematic.
> 
> Routines performing atomics to user memory (futex and deprecated
> instruction emulation) still need to transiently clear PAN, and these
> are left as-is. These are never used on kernel memory.
> 
> Subsequent patches will refactor the uaccess helpers to remove redundant
> code, and will also remove the redundant PAN/UAO manipulation.
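
For context, the LDR/LDTR distinction the quoted commit message relies on can
be sketched as follows (behaviour paraphrased from the architecture; not code
from the patch):

```asm
	ldr	x0, [x1]	// regular load: performed at the current (kernel)
				// privilege; faults under PAN if x1 is a user address
	ldtr	x0, [x1]	// "unprivileged" load: treated as an EL0 access,
				// so PAN does not apply; UAO makes it privileged again
```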

> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index 743f209d3fab..8687ec27dd00 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -253,10 +255,9 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
>   * The "__xxx_error" versions set the third argument to -EFAULT if an error
>   * occurs, and leave it unchanged on success.
>   */
> -#define __get_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\

> +#define __get_mem_asm(ldr, reg, x, addr, err)				\

('ldr' as a variable name when you need to output something other than ldr does my head in)


>  	asm volatile(							\
> -	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
> -			alt_instr " " reg "1, [%2]\n", feature)		\
> +	"1:	" ldr "	" reg "1, [%2]\n"				\
>  	"2:\n"								\
>  	"	.section .fixup, \"ax\"\n"				\
>  	"	.align	2\n"						\


> @@ -323,10 +320,19 @@ do {									\
>  
>  #define get_user	__get_user
>  
> -#define __put_mem_asm(instr, alt_instr, reg, x, addr, err, feature)	\
> +#define __get_kernel_nofault(dst, src, type, err_label)			\
> +do {									\
> +	int __gkn_err = 0;						\
> +									\
> +	__raw_get_mem("ldr", *((type *)(dst)),				\
> +		      (__force type __user *)(src), __gkn_err);		\

Is the __user needed as you moved __chk_user_ptr() out of __raw_get_mem() in the previous
patch?

(your branch on kernel.org does something different here, not sure if it's ahead or behind v4)


> +	if (unlikely(__gkn_err))					\
> +		goto err_label;						\
> +} while(0)


Thanks,

James


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-17 11:07             ` Mark Rutland
  2020-11-17 11:10               ` Catalin Marinas
@ 2020-11-26 18:42               ` James Morse
  1 sibling, 0 replies; 38+ messages in thread
From: James Morse @ 2020-11-26 18:42 UTC (permalink / raw)
  To: Mark Rutland, Catalin Marinas; +Cc: robin.murphy, will, hch, linux-arm-kernel

Hi Mark,

On 17/11/2020 11:07, Mark Rutland wrote:
> On Tue, Nov 17, 2020 at 11:02:20AM +0000, Catalin Marinas wrote:
>> On Tue, Nov 17, 2020 at 10:57:40AM +0000, Mark Rutland wrote:
>>> On Tue, Nov 17, 2020 at 10:54:18AM +0000, Catalin Marinas wrote:
>>>> Does this requirement apply retrospectively, or is it only for future
>>>> SDEI specs?
>>>
>>> It applies from the current spec, but note that TF-A has always
>>> generated a new PSTATE (with UAO clear).
>>
>> And we can assume that TF-A is the only implementation generating SDEI.
> 
> I *think* so, hopefully James can confirm.

Yup. Neither the kvmtool nor the QEMU implementation was merged.
We can avoid this with Gavin's in-kernel implementation if asynchronous-page-fault goes
that way.


Thanks,

James


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-13 12:49 ` [PATCHv4 14/17] arm64: uaccess: remove set_fs() Mark Rutland
  2020-11-16 17:40   ` Catalin Marinas
@ 2020-11-26 18:42   ` James Morse
  2020-12-01 12:30     ` Mark Rutland
  1 sibling, 1 reply; 38+ messages in thread
From: James Morse @ 2020-11-26 18:42 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will; +Cc: robin.murphy, hch

Hi Mark,

On 13/11/2020 12:49, Mark Rutland wrote:
> Now that the uaccess primitives dont take addr_limit into acccount, we

(account)


>  #if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
> diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
> index 2547d94634be..2bb53bc3c326 100644
> --- a/arch/arm64/include/asm/ptrace.h
> +++ b/arch/arm64/include/asm/ptrace.h
> @@ -193,8 +193,7 @@ struct pt_regs {
>  	s32 syscallno;
>  	u32 unused2;
>  #endif
> -
> -	u64 orig_addr_limit;
> +	u64 sdei_ttbr1;
>  	/* Only valid when ARM64_HAS_IRQ_PRIO_MASKING is enabled. */
>  	u64 pmr_save;
>  	u64 stackframe[2];

Hmmm,


> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index b295fb912b12..bdd3b57b12f5 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -999,10 +987,9 @@ SYM_CODE_START(__sdei_asm_entry_trampoline)
>  	mov	x4, xzr
>  
>  	/*
> -	 * Use reg->interrupted_regs.addr_limit to remember whether to unmap
> -	 * the kernel on exit.
> +	 * Remember whether to unmap the kernel on exit.
>  	 */
> -1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
> +1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
>  
>  #ifdef CONFIG_RANDOMIZE_BASE
>  	adr	x4, tramp_vectors + PAGE_SIZE
> @@ -1023,7 +1010,7 @@ NOKPROBE(__sdei_asm_entry_trampoline)
>   * x4: struct sdei_registered_event argument from registration time.
>   */
>  SYM_CODE_START(__sdei_asm_exit_trampoline)
> -	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
> +	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
>  	cbnz	x4, 1f
>  
>  	tramp_unmap_kernel	tmp=x4

Ah, that. This was previously abusing orig_addr_limit as the space was already there, and
if you squint....

Now it seems odd to waste that space in every pt_regs given it's only needed here, but
it's still needed for padding. If more padding ever turns up I'll add cleaning this up to
my todo list.


Acked-by: James Morse <james.morse@arm.com>


Thanks,

James


* Re: [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/
  2020-11-13 12:49 ` [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
  2020-11-16 17:42   ` Catalin Marinas
@ 2020-11-26 18:42   ` James Morse
  1 sibling, 0 replies; 38+ messages in thread
From: James Morse @ 2020-11-26 18:42 UTC (permalink / raw)
  To: Mark Rutland; +Cc: catalin.marinas, will, hch, linux-arm-kernel, robin.murphy

Hi Mark,

On 13/11/2020 12:49, Mark Rutland wrote:
> The SDEI support code is split across arch/arm64/ and drivers/firmware/,
> largely so that the arch-specific portions are under arch/arm64/, and the
> management logic is under drivers/firmware/.
> However, exception entry fixups are currently under drivers/firmware.
> 
> Let's move the exception entry fixups under arch/arm64/. This
> de-clutters the management logic, and puts all the arch-specific
> portions in one place. Doing this also allows the fixups to be applied
> earlier, so things like PAN and UAO will be in a known good state before
> we run other logic. This will also make subsequent refactoring easier.

Reviewed-by: James Morse <james.morse@arm.com>


Thanks,

James


* Re: [PATCHv4 07/17] arm64: sdei: explicitly simulate PAN/UAO entry
  2020-11-13 12:49 ` [PATCHv4 07/17] arm64: sdei: explicitly simulate PAN/UAO entry Mark Rutland
@ 2020-11-26 18:42   ` James Morse
  2020-12-01 12:14     ` Mark Rutland
  0 siblings, 1 reply; 38+ messages in thread
From: James Morse @ 2020-11-26 18:42 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will; +Cc: robin.murphy, hch

Hi Mark,

On 13/11/2020 12:49, Mark Rutland wrote:
> SDEI enters the kernel with a non-architectural exception which does not
> manipulate PSTATE bits (e.g. PAN, UAO) in the same way as architectural
> exceptions. We currently fix this up with a combination of
> __uaccess_enable_hw_pan() and force_uaccess_begin(), but this has
> a few problems:
> 
> * When set_fs() is removed, force_uaccess_begin() will have no HW
>   side-effects, and UAO will need to be reset elsewhere.
> 
> * Kernels built without support for PAN or UAO will not reset these bits
>   upon SDEI entry, and may inherit the values used by a VM, leading to
>   unexpected behaviour.
> 
> * Kernels built *with* support for PAN or UAO, when run on systems with
>   mismatched support across CPUs, will not reset these bits upon SDEI
>   entry, and may inherit the values used by a VM, leading to unexpected
>   behaviour.

> To deal with all of these, let's always explicitly reset the PAN and UAO
> bits when an SDEI event is delivered to the kernel. As above, we must do
> so even when the kernel has chosen to not use PAN/UAO, or was not built
> with support for PAN/UAO generally.
> 
> The existing system_uses_ttbr0_pan() is redefined in terms of
> system_uses_hw_pan() both for clarity and as a minor optimization when
> HW PAN is not selected.

Reviewed-by: James Morse <james.morse@arm.com>


> diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
> index 4a5f24602aa0..908d7be70eac 100644
> --- a/arch/arm64/kernel/sdei.c
> +++ b/arch/arm64/kernel/sdei.c
> @@ -224,12 +234,11 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
>  	mm_segment_t orig_addr_limit;
>  
>  	/*
> -	 * We didn't take an exception to get here, so the HW hasn't set PAN or
> -	 * cleared UAO, and the exception entry code hasn't reset addr_limit.
> -	 * Set PAN, then use force_uaccess_begin() to clear UAO and reset
> -	 * addr_limit.
> +	 * We didn't take an exception to get here, so the HW hasn't
> +	 * set/cleared bits in PSTATE that we may rely on. Intialize PAN/UAO,

(Initialize)

> +	 * then use force_uaccess_begin() to reset addr_limit.
>  	 */
> -	__uaccess_enable_hw_pan();
> +	__sdei_pstate_entry();
>  	orig_addr_limit = force_uaccess_begin();
>  
>  	nmi_enter();
> 


Thanks,

James


* Re: [PATCHv4 12/17] arm64: uaccess: split user/kernel routines
  2020-11-26 18:42   ` James Morse
@ 2020-12-01 11:03     ` Mark Rutland
  2020-12-02 12:19       ` Mark Rutland
  0 siblings, 1 reply; 38+ messages in thread
From: Mark Rutland @ 2020-12-01 11:03 UTC (permalink / raw)
  To: James Morse; +Cc: catalin.marinas, will, hch, linux-arm-kernel, robin.murphy

On Thu, Nov 26, 2020 at 06:42:24PM +0000, James Morse wrote:
> Hi Mark,

Hi James,

Thanks for the comments. I've applied most of those below, but I have
one follow-up question.

> On 13/11/2020 12:49, Mark Rutland wrote:
> > +#define __get_mem_asm(ldr, reg, x, addr, err)				\
> 
> ('ldr' as variable name where you need to output something other than ldr does my head in)
>
> >  	asm volatile(							\
> > -	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
> > -			alt_instr " " reg "1, [%2]\n", feature)		\
> > +	"1:	" ldr "	" reg "1, [%2]\n"				\

I've changed this to 'load', and used 'store' in __put_mem_asm().

[...]

> > +#define __get_kernel_nofault(dst, src, type, err_label)			\
> > +do {									\
> > +	int __gkn_err = 0;						\
> > +									\
> > +	__raw_get_mem("ldr", *((type *)(dst)),				\
> > +		      (__force type __user *)(src), __gkn_err);		\
> 
> Is the __user needed as you moved __chk_user_ptr() out of __raw_get_mem() in the previous
> patch?

You're right; the __user portion can go as neither __raw_get_mem() nor
__get_mem_asm() needs it. Likewise for __put_kernel_nofault(),
__raw_put_mem(), and __put_mem_asm().

I've removed those.

> (your branch on kernel.org does something different here, not sure if it's ahead or behind v4)

AFAICT this portion of the patch is up-to-date with what's on
kernel.org. I might have pushed that out-of-order with posting the
series -- does this look the same now, or am I missing something?

> > +	if (unlikely(__gkn_err))					\
> > +		goto err_label;						\
> > +} while(0)

I'll fix the whitespace for the while(0), too...

Thanks,
Mark.


* Re: [PATCHv4 07/17] arm64: sdei: explicitly simulate PAN/UAO entry
  2020-11-26 18:42   ` James Morse
@ 2020-12-01 12:14     ` Mark Rutland
  0 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-12-01 12:14 UTC (permalink / raw)
  To: James Morse; +Cc: catalin.marinas, will, hch, linux-arm-kernel, robin.murphy

On Thu, Nov 26, 2020 at 06:42:46PM +0000, James Morse wrote:
> Hi Mark,
> 
> On 13/11/2020 12:49, Mark Rutland wrote:
> > SDEI enters the kernel with a non-architectural exception which does not
> > manipulate PSTATE bits (e.g. PAN, UAO) in the same way as architectural
> > exceptions. We currently fix this up with a combination of
> > __uaccess_enable_hw_pan() and force_uaccess_begin(), but this has
> > a few problems:
> > 
> > * When set_fs() is removed, force_uaccess_begin() will have no HW
> >   side-effects, and UAO will need to be reset elsewhere.
> > 
> > * Kernels built without support for PAN or UAO will not reset these bits
> >   upon SDEI entry, and may inherit the values used by a VM, leading to
> >   unexpected behaviour.
> > 
> > * Kernels built *with* support for PAN or UAO, when run on systems with
> >   mismatched support across CPUs, will not reset these bits upon SDEI
> >   entry, and may inherit the values used by a VM, leading to unexpected
> >   behaviour.
> 
> > To deal with all of these, let's always explicitly reset the PAN and UAO
> > bits when an SDEI event is delivered to the kernel. As above, we must do
> > so even when the kernel has chosen to not use PAN/UAO, or was not built
> > with support for PAN/UAO generally.
> > 
> > The existing system_uses_ttbr0_pan() is redefined in terms of
> > system_uses_hw_pan() both for clarity and as a minor optimization when
> > HW PAN is not selected.
> 
> Reviewed-by: James Morse <james.morse@arm.com>

Thanks! Per the comments on patch 14, I've reworked this to remove the
UAO bits (and correspondingly, have rewritten the commit message), so
I'll post v5 without your R-b just in case there's anything
objectionable in that rework.

> > diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
> > index 4a5f24602aa0..908d7be70eac 100644
> > --- a/arch/arm64/kernel/sdei.c
> > +++ b/arch/arm64/kernel/sdei.c
> > @@ -224,12 +234,11 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
> >  	mm_segment_t orig_addr_limit;
> >  
> >  	/*
> > -	 * We didn't take an exception to get here, so the HW hasn't set PAN or
> > -	 * cleared UAO, and the exception entry code hasn't reset addr_limit.
> > -	 * Set PAN, then use force_uaccess_begin() to clear UAO and reset
> > -	 * addr_limit.
> > +	 * We didn't take an exception to get here, so the HW hasn't
> > +	 * set/cleared bits in PSTATE that we may rely on. Intialize PAN/UAO,
> 
> (Initialize)

Whoops, fixed.

Thanks,
Mark.


* Re: [PATCHv4 14/17] arm64: uaccess: remove set_fs()
  2020-11-26 18:42   ` James Morse
@ 2020-12-01 12:30     ` Mark Rutland
  0 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-12-01 12:30 UTC (permalink / raw)
  To: James Morse; +Cc: catalin.marinas, will, hch, linux-arm-kernel, robin.murphy

On Thu, Nov 26, 2020 at 06:42:26PM +0000, James Morse wrote:
> On 13/11/2020 12:49, Mark Rutland wrote:
> > Now that the uaccess primitives dont take addr_limit into acccount, we
> 
> (account)

I've fixed that up locally.

[...]

> > @@ -193,8 +193,7 @@ struct pt_regs {
> >  	s32 syscallno;
> >  	u32 unused2;
> >  #endif
> > -
> > -	u64 orig_addr_limit;
> > +	u64 sdei_ttbr1;
> >  	/* Only valid when ARM64_HAS_IRQ_PRIO_MASKING is enabled. */
> >  	u64 pmr_save;
> >  	u64 stackframe[2];
> 
> Hmmm,

> >  	/*
> > -	 * Use reg->interrupted_regs.addr_limit to remember whether to unmap
> > -	 * the kernel on exit.
> > +	 * Remember whether to unmap the kernel on exit.
> >  	 */
> > -1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
> > +1:	str	x4, [x1, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
> >  
> >  #ifdef CONFIG_RANDOMIZE_BASE
> >  	adr	x4, tramp_vectors + PAGE_SIZE
> > @@ -1023,7 +1010,7 @@ NOKPROBE(__sdei_asm_entry_trampoline)
> >   * x4: struct sdei_registered_event argument from registration time.
> >   */
> >  SYM_CODE_START(__sdei_asm_exit_trampoline)
> > -	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
> > +	ldr	x4, [x4, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)]
> >  	cbnz	x4, 1f
> >  
> >  	tramp_unmap_kernel	tmp=x4
> 
> Ah, that. This was previously abusing orig_addr_limit as the space was already there, and
> if you squint....
>
> Now it seems odd to waste that space in every pt_regs given it's only needed here, but
> it's still needed for padding. If more padding ever turns up, I'll add cleaning this up to
> my todo list.

Mhmm. For now, I've added a comment to the commit message to point out
what's going on here.

> Acked-by: James Morse <james.morse@arm.com>

Thanks!

Mark.


* Re: [PATCHv4 12/17] arm64: uaccess: split user/kernel routines
  2020-12-01 11:03     ` Mark Rutland
@ 2020-12-02 12:19       ` Mark Rutland
  0 siblings, 0 replies; 38+ messages in thread
From: Mark Rutland @ 2020-12-02 12:19 UTC (permalink / raw)
  To: James Morse; +Cc: catalin.marinas, will, hch, linux-arm-kernel, robin.murphy

On Tue, Dec 01, 2020 at 11:03:23AM +0000, Mark Rutland wrote:
> On Thu, Nov 26, 2020 at 06:42:24PM +0000, James Morse wrote:
> > On 13/11/2020 12:49, Mark Rutland wrote:
 
> > > +#define __get_kernel_nofault(dst, src, type, err_label)			\
> > > +do {									\
> > > +	int __gkn_err = 0;						\
> > > +									\
> > > +	__raw_get_mem("ldr", *((type *)(dst)),				\
> > > +		      (__force type __user *)(src), __gkn_err);		\
> > 
> > Is the __user needed as you moved __chk_user_ptr() out of __raw_get_mem() in the previous
> > patch?
> 
> You're right; the __user portion can go as neither __raw_get_mem() nor
> __get_mem_asm() needs it. Likewise for __put_kernel_nofault(),
> __raw_put_mem(), and __put_mem_asm().
> 
> I've removed those.
> 
> > (your branch on kernel.org does something different here, not sure if it's ahead or behind v4)
> 
> AFAICT this portion of the patch is up-to-date with what's on
> kernel.org. I might have pushed that out-of-order with posting the
> series -- does this look the same now, or am I missing something?

So that I can update the branch, I've tagged the version I was referring to
above as arm64-set-fs-removal-20201113. The HEAD commit was:

  c765cd9bec3bfac8 ("arm64: uaccess: remove vestigal UAO support")

Thanks,
Mark.


end of thread, other threads:[~2020-12-02 12:21 UTC | newest]

Thread overview: 38+ messages
2020-11-13 12:49 [PATCHv4 00/17] arm64: remove set_fs() and friends Mark Rutland
2020-11-13 12:49 ` [PATCHv4 01/17] arm64: ensure ERET from kthread is illegal Mark Rutland
2020-11-13 12:49 ` [PATCHv4 02/17] arm64: add C wrappers for SET_PSTATE_*() Mark Rutland
2020-11-13 12:49 ` [PATCHv4 03/17] arm64: head.S: rename el2_setup -> init_kernel_el Mark Rutland
2020-11-13 12:49 ` [PATCHv4 04/17] arm64: head.S: cleanup SCTLR_ELx initialization Mark Rutland
2020-11-13 12:49 ` [PATCHv4 05/17] arm64: head.S: always initialize PSTATE Mark Rutland
2020-11-13 12:49 ` [PATCHv4 06/17] arm64: sdei: move uaccess logic to arch/arm64/ Mark Rutland
2020-11-16 17:42   ` Catalin Marinas
2020-11-26 18:42   ` James Morse
2020-11-13 12:49 ` [PATCHv4 07/17] arm64: sdei: explicitly simulate PAN/UAO entry Mark Rutland
2020-11-26 18:42   ` James Morse
2020-12-01 12:14     ` Mark Rutland
2020-11-13 12:49 ` [PATCHv4 08/17] arm64: uaccess: move uao_* alternatives to asm-uaccess.h Mark Rutland
2020-11-14  9:22   ` Christoph Hellwig
2020-11-16 12:25     ` Mark Rutland
2020-11-13 12:49 ` [PATCHv4 09/17] arm64: uaccess: rename privileged uaccess routines Mark Rutland
2020-11-13 12:49 ` [PATCHv4 10/17] arm64: uaccess: simplify __copy_user_flushcache() Mark Rutland
2020-11-13 12:49 ` [PATCHv4 11/17] arm64: uaccess: refactor __{get,put}_user Mark Rutland
2020-11-13 12:49 ` [PATCHv4 12/17] arm64: uaccess: split user/kernel routines Mark Rutland
2020-11-26 18:42   ` James Morse
2020-12-01 11:03     ` Mark Rutland
2020-12-02 12:19       ` Mark Rutland
2020-11-13 12:49 ` [PATCHv4 13/17] arm64: uaccess cleanup macro naming Mark Rutland
2020-11-13 12:49 ` [PATCHv4 14/17] arm64: uaccess: remove set_fs() Mark Rutland
2020-11-16 17:40   ` Catalin Marinas
2020-11-17 10:44     ` Mark Rutland
2020-11-17 10:54       ` Catalin Marinas
2020-11-17 10:57         ` Mark Rutland
2020-11-17 11:02           ` Catalin Marinas
2020-11-17 11:07             ` Mark Rutland
2020-11-17 11:10               ` Catalin Marinas
2020-11-26 18:42               ` James Morse
2020-11-26 18:42   ` James Morse
2020-12-01 12:30     ` Mark Rutland
2020-11-13 12:49 ` [PATCHv4 15/17] arm64: uaccess: remove addr_limit_user_check() Mark Rutland
2020-11-13 12:49 ` [PATCHv4 16/17] arm64: uaccess: remove redundant PAN toggling Mark Rutland
2020-11-13 12:49 ` [PATCHv4 17/17] arm64: uaccess: remove vestigal UAO support Mark Rutland
2020-11-25 19:10 ` [PATCHv4 00/17] arm64: remove set_fs() and friends Catalin Marinas
