* [PATCH 00/17] arm64: entry deasmification and cleanup
@ 2020-01-08 18:56 Mark Rutland
  2020-01-08 18:56 ` [PATCH 01/17] arm64: entry: mark all entry code as notrace Mark Rutland
                   ` (16 more replies)
  0 siblings, 17 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

This series aims to make the arm64 exception handling code easier to
maintain and update, primarily by converting from assembly to C wherever
reasonably possible. This allows us to remove infrastructure we had to
duplicate for C and assembly, and leaves less assembly behind that may
require special treatment (e.g. for BTI).

Previous patches converted syscall management to C, along with the rest
of the synchronous exception vectors. This series converts the remaining
IRQ and error paths, before factoring out the common EL0 exception
entry/return work. Some parts of the existing assembly were somewhat
arcane, and for these I've added more extensive comments than were
present in the assembly.

After converting the bulk of the logic to C, it was clear that a few
structural inconsistencies had crept in over the years, so I've tried to
clean those up and make the remaining assembly simpler and clearer.

There are a couple more things which could be factored out, notably the
SW PAN logic and the GIC prio masking entry/exit work. I've left those
as-is for now.

The series has seen some basic testing, and I'm currently fuzzing it
with a local Syzkaller instance.

I've pushed the patches to my arm64/entry-deasm branch [1,2], based on
v5.5-rc3.

Thanks,
Mark.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/entry-deasm
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/entry-deasm

Mark Rutland (17):
  arm64: entry: mark all entry code as notrace
  arm64: entry: cleanup el0 svc handler naming
  arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
  arm64: entry: move preempt logic to C
  arm64: entry: add a call_on_stack helper
  arm64: entry: convert irq entry to C
  arm64: entry: convert error entry to C
  arm64: entry: Split el0_sync_compat from el0_sync
  arm64: entry: organise handler stubs consistently
  arm64: entry: consolidate EL1 return paths
  stackleak: allow C to call stackleak_erase()
  arm64: debug-monitors: refactor MDSCR manipulation
  arm64: entry: move common el0 entry/return work to C
  arm64: entry: move NO_SYSCALL setup to C
  arm64: entry: move ARM64_ERRATUM_845719 workaround to C
  arm64: entry: move ARM64_ERRATUM_1418040 workaround to C
  arm64: entry: cleanup sp_el0 manipulation

 arch/arm64/include/asm/assembler.h      |  18 --
 arch/arm64/include/asm/debug-monitors.h |  10 +
 arch/arm64/include/asm/exception.h      |   8 +-
 arch/arm64/kernel/asm-offsets.c         |   1 -
 arch/arm64/kernel/debug-monitors.c      |  32 +--
 arch/arm64/kernel/entry-common.c        | 245 ++++++++++++++++++++++-
 arch/arm64/kernel/entry.S               | 333 +++++++-------------------------
 arch/arm64/kernel/irq.c                 |  15 --
 arch/arm64/kernel/process.c             |  17 --
 arch/arm64/kernel/signal.c              |   3 +-
 arch/arm64/kernel/syscall.c             |   4 +-
 arch/arm64/kernel/traps.c               |   2 +-
 arch/arm64/mm/fault.c                   |   7 -
 include/linux/stackleak.h               |   3 +
 14 files changed, 338 insertions(+), 360 deletions(-)

-- 
2.11.0


* [PATCH 01/17] arm64: entry: mark all entry code as notrace
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09  5:21   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 02/17] arm64: entry: cleanup el0 svc handler naming Mark Rutland
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Almost all functions in entry-common.c are marked notrace, with
el1_undef and el1_inv being the only exceptions. We appear to have done
this on the assumption that there were no exception registers that we
needed to snapshot, and thus it was safe to run trace code that might
result in further exceptions and clobber those registers.

However, until we inherit the DAIF flags, our irq flag tracing is stale,
and this discrepancy could set off warnings in some configurations.
Given we don't expect to trigger el1_undef or el1_inv unless something
is already wrong, any irqflag warnings are liable to mask the
information we'd actually care about.

Let's keep things simple and mark el1_undef and el1_inv as notrace.
Developers can trace do_undefinstr and bad_mode if they really want to
monitor these cases.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 5dce5e56995a..67198142a0fc 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -36,14 +36,14 @@ static void notrace el1_pc(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_pc);
 
-static void el1_undef(struct pt_regs *regs)
+static void notrace el1_undef(struct pt_regs *regs)
 {
 	local_daif_inherit(regs);
 	do_undefinstr(regs);
 }
 NOKPROBE_SYMBOL(el1_undef);
 
-static void el1_inv(struct pt_regs *regs, unsigned long esr)
+static void notrace el1_inv(struct pt_regs *regs, unsigned long esr)
 {
 	local_daif_inherit(regs);
 	bad_mode(regs, 0, esr);
-- 
2.11.0


* [PATCH 02/17] arm64: entry: cleanup el0 svc handler naming
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
  2020-01-08 18:56 ` [PATCH 01/17] arm64: entry: mark all entry code as notrace Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09  5:33   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 03/17] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c Mark Rutland
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

For most of the exception entry code, <foo>_handler() is the first C
function called from the entry assembly in entry-common.c, and external
functions handling the bulk of the logic are called do_<foo>().

For consistency, apply this scheme to el0_svc_handler and
el0_svc_compat_handler, renaming them to do_el0_svc and do_el0_svc_compat
respectively.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/exception.h | 4 ++--
 arch/arm64/kernel/entry-common.c   | 4 ++--
 arch/arm64/kernel/syscall.c        | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 4d5f3b5f50cd..b87c6e276ab1 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -45,8 +45,8 @@ void do_sysinstr(unsigned int esr, struct pt_regs *regs);
 void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
 void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
-void el0_svc_handler(struct pt_regs *regs);
-void el0_svc_compat_handler(struct pt_regs *regs);
+void do_el0_svc(struct pt_regs *regs);
+void do_el0_svc_compat(struct pt_regs *regs);
 void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
 			    struct pt_regs *regs);
 
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 67198142a0fc..fde59981445c 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -215,7 +215,7 @@ static void notrace el0_svc(struct pt_regs *regs)
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
 
-	el0_svc_handler(regs);
+	do_el0_svc(regs);
 }
 NOKPROBE_SYMBOL(el0_svc);
 
@@ -281,7 +281,7 @@ static void notrace el0_svc_compat(struct pt_regs *regs)
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
 
-	el0_svc_compat_handler(regs);
+	do_el0_svc_compat(regs);
 }
 NOKPROBE_SYMBOL(el0_svc_compat);
 
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 9a9d98a443fc..a12c0c88d345 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -154,14 +154,14 @@ static inline void sve_user_discard(void)
 	sve_user_disable();
 }
 
-void el0_svc_handler(struct pt_regs *regs)
+void do_el0_svc(struct pt_regs *regs)
 {
 	sve_user_discard();
 	el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table);
 }
 
 #ifdef CONFIG_COMPAT
-void el0_svc_compat_handler(struct pt_regs *regs)
+void do_el0_svc_compat(struct pt_regs *regs)
 {
 	el0_svc_common(regs, regs->regs[7], __NR_compat_syscalls,
 		       compat_sys_call_table);
-- 
2.11.0


* [PATCH 03/17] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
  2020-01-08 18:56 ` [PATCH 01/17] arm64: entry: mark all entry code as notrace Mark Rutland
  2020-01-08 18:56 ` [PATCH 02/17] arm64: entry: cleanup el0 svc handler naming Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09  5:36   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 04/17] arm64: entry: move preempt logic to C Mark Rutland
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Subsequent patches will pull more of the IRQ entry handling into C. To
keep this in one place, let's move arm64_preempt_schedule_irq() into
entry-common.c along with the other entry management functions.

We no longer need to include <linux/lockdep.h> in process.c, so the
include directive is removed.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 19 +++++++++++++++++++
 arch/arm64/kernel/process.c      | 17 -----------------
 2 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index fde59981445c..4fa058453468 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -6,7 +6,10 @@
  */
 
 #include <linux/context_tracking.h>
+#include <linux/linkage.h>
+#include <linux/lockdep.h>
 #include <linux/ptrace.h>
+#include <linux/sched/debug.h>
 #include <linux/thread_info.h>
 
 #include <asm/cpufeature.h>
@@ -330,3 +333,19 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_sync_compat_handler);
 #endif /* CONFIG_COMPAT */
+
+asmlinkage void __sched arm64_preempt_schedule_irq(void)
+{
+	lockdep_assert_irqs_disabled();
+
+	/*
+	 * Preempting a task from an IRQ means we leave copies of PSTATE
+	 * on the stack. cpufeature's enable calls may modify PSTATE, but
+	 * resuming one of these preempted tasks would undo those changes.
+	 *
+	 * Only allow a task to be preempted once cpufeatures have been
+	 * enabled.
+	 */
+	if (static_branch_likely(&arm64_const_caps_ready))
+		preempt_schedule_irq();
+}
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 71f788cd2b18..94b3ae351af9 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -17,7 +17,6 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/kernel.h>
-#include <linux/lockdep.h>
 #include <linux/mm.h>
 #include <linux/stddef.h>
 #include <linux/sysctl.h>
@@ -633,19 +632,3 @@ static int __init tagged_addr_init(void)
 
 core_initcall(tagged_addr_init);
 #endif	/* CONFIG_ARM64_TAGGED_ADDR_ABI */
-
-asmlinkage void __sched arm64_preempt_schedule_irq(void)
-{
-	lockdep_assert_irqs_disabled();
-
-	/*
-	 * Preempting a task from an IRQ means we leave copies of PSTATE
-	 * on the stack. cpufeature's enable calls may modify PSTATE, but
-	 * resuming one of these preempted tasks would undo those changes.
-	 *
-	 * Only allow a task to be preempted once cpufeatures have been
-	 * enabled.
-	 */
-	if (static_branch_likely(&arm64_const_caps_ready))
-		preempt_schedule_irq();
-}
-- 
2.11.0


* [PATCH 04/17] arm64: entry: move preempt logic to C
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (2 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 03/17] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09  6:43   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 05/17] arm64: entry: add a call_on_stack helper Mark Rutland
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Some of our preemption logic is in C, while a portion of it is in
assembly. Let's pull the remainder of it into C so that it lives in one
place.

At the same time, let's improve the comments regarding NMI preemption to
make it clearer why we cannot preempt from NMIs.

Subsequent patches will convert the caller of el1_preempt() to C.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 18 +++++++++++++++++-
 arch/arm64/kernel/entry.S        | 13 +------------
 2 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 4fa058453468..b93ca2148a20 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -8,6 +8,7 @@
 #include <linux/context_tracking.h>
 #include <linux/linkage.h>
 #include <linux/lockdep.h>
+#include <linux/preempt.h>
 #include <linux/ptrace.h>
 #include <linux/sched/debug.h>
 #include <linux/thread_info.h>
@@ -334,8 +335,23 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
 NOKPROBE_SYMBOL(el0_sync_compat_handler);
 #endif /* CONFIG_COMPAT */
 
-asmlinkage void __sched arm64_preempt_schedule_irq(void)
+asmlinkage void __sched el1_preempt(void)
 {
+	if (!IS_ENABLED(CONFIG_PREEMPT) || preempt_count())
+		return;
+
+	/*
+	 * To avoid nesting NMIs and overflowing the stack, we must leave NMIs
+	 * masked until the exception return. We want to context-switch with
+	 * IRQs masked but NMIs enabled, so cannot preempt an NMI.
+	 *
+	 * PSTATE.{D,A,F} are cleared for IRQ and NMI by el1_irq().
+	 * When gic_handle_irq() handles an NMI, it leaves PSTATE.I set.
+	 * If anything is set in DAIF, this is an NMI.
+	 */
+	if (system_uses_irq_prio_masking() && read_sysreg(daif) != 0)
+		return;
+
 	lockdep_assert_irqs_disabled();
 
 	/*
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 7c6a0a41676f..53ce1877a4aa 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -606,18 +606,7 @@ el1_irq:
 	irq_handler
 
 #ifdef CONFIG_PREEMPT
-	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
-	/*
-	 * DA_F were cleared at start of handling. If anything is set in DAIF,
-	 * we come back from an NMI, so skip preemption
-	 */
-	mrs	x0, daif
-	orr	x24, x24, x0
-alternative_else_nop_endif
-	cbnz	x24, 1f				// preempt count != 0 || NMI return path
-	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
-1:
+	bl	el1_preempt
 #endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-- 
2.11.0


* [PATCH 05/17] arm64: entry: add a call_on_stack helper
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (3 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 04/17] arm64: entry: move preempt logic to C Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09  8:00   ` Anshuman Khandual
  2020-01-09 14:30   ` Laura Abbott
  2020-01-08 18:56 ` [PATCH 06/17] arm64: entry: convert irq entry to C Mark Rutland
                   ` (11 subsequent siblings)
  16 siblings, 2 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

In some cases, we want to call a function from C code, using an
alternative stack. Add a helper that we can use in such cases.
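
For illustration, a minimal sketch of the intended use (a later patch
in this series adds the real caller, which also checks
on_thread_stack() so we only switch stacks when necessary):

  /*
   * Sketch only: run handler(regs) on this CPU's IRQ stack. Stacks
   * grow down, so the high end of the IRQ stack is passed as the new
   * stack pointer.
   */
  static void run_on_irq_stack(struct pt_regs *regs,
                               void (*handler)(struct pt_regs *))
  {
          unsigned long sp = (unsigned long)raw_cpu_read(irq_stack_ptr);

          call_on_stack(regs, handler, sp + IRQ_STACK_SIZE);
  }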

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/exception.h |  2 ++
 arch/arm64/kernel/entry.S          | 21 +++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index b87c6e276ab1..a49038fa4faf 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -31,6 +31,8 @@ static inline u32 disr_to_esr(u64 disr)
 	return esr;
 }
 
+asmlinkage void call_on_stack(struct pt_regs *, void (*)(struct pt_regs *),
+			      unsigned long);
 asmlinkage void enter_from_user_mode(void);
 void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
 void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 53ce1877a4aa..184313c773ea 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -901,6 +901,27 @@ ENTRY(ret_from_fork)
 ENDPROC(ret_from_fork)
 NOKPROBE(ret_from_fork)
 
+/*
+ * x0 = argument to function
+ * x1 = function to call
+ * x2 = new stack pointer
+ */
+ENTRY(call_on_stack)
+	/* Create a frame record to save our LR and SP (implicit in FP) */
+	stp	x29, x30, [sp, #-16]!
+	mov	x29, sp
+
+	/* Move to the new stack and call the function there */
+	mov	sp, x2
+	blr	x1
+
+	/* Restore SP from the FP, FP and LR from the record, and return */
+	mov	sp, x29
+	ldp	x29, x30, [sp], #16
+	ret
+ENDPROC(call_on_stack)
+NOKPROBE(call_on_stack)
+
 #ifdef CONFIG_ARM_SDE_INTERFACE
 
 #include <asm/sdei.h>
-- 
2.11.0


* [PATCH 06/17] arm64: entry: convert irq entry to C
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (4 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 05/17] arm64: entry: add a call_on_stack helper Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-08 18:56 ` [PATCH 07/17] arm64: entry: convert error " Mark Rutland
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Portions of our IRQ entry logic are handled in C while other parts are
handled in assembly. Let's migrate the bulk of it to C so that it's
easier to follow and maintain.

The original assembly called trace_hardirqs_off() for IRQs and NMIs, but
did not call trace_hardirqs_on() for NMIs where IRQs were priority
masked. The C code always balances these calls.

The now unused asm_nmi_{enter,exit}() wrappers are removed.

The EL0 BP hardening is moved from fault.c so that it's in the same
compilation unit as its only caller, where the compiler can inline it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c |  81 +++++++++++++++++++++++-
 arch/arm64/kernel/entry.S        | 133 +++------------------------------------
 arch/arm64/kernel/irq.c          |  15 -----
 arch/arm64/mm/fault.c            |   7 ---
 4 files changed, 87 insertions(+), 149 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index b93ca2148a20..45155d9f72da 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -6,8 +6,12 @@
  */
 
 #include <linux/context_tracking.h>
+#include <linux/hardirq.h>
+#include <linux/irq.h>
+#include <linux/irqflags.h>
 #include <linux/linkage.h>
 #include <linux/lockdep.h>
+#include <linux/percpu.h>
 #include <linux/preempt.h>
 #include <linux/ptrace.h>
 #include <linux/sched/debug.h>
@@ -19,6 +23,7 @@
 #include <asm/exception.h>
 #include <asm/kprobes.h>
 #include <asm/mmu.h>
+#include <asm/stacktrace.h>
 #include <asm/sysreg.h>
 
 static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
@@ -335,7 +340,7 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
 NOKPROBE_SYMBOL(el0_sync_compat_handler);
 #endif /* CONFIG_COMPAT */
 
-asmlinkage void __sched el1_preempt(void)
+static void __sched el1_preempt(void)
 {
 	if (!IS_ENABLED(CONFIG_PREEMPT) || preempt_count())
 		return;
@@ -345,7 +350,7 @@ asmlinkage void __sched el1_preempt(void)
 	 * masked until the exception return. We want to context-switch with
 	 * IRQs masked but NMIs enabled, so cannot preempt an NMI.
 	 *
-	 * PSTATE.{D,A,F} are cleared for IRQ and NMI by el1_irq().
+	 * PSTATE.{D,A,F} are cleared for IRQ and NMI by el1_irq_handler().
 	 * When gic_handle_irq() handles an NMI, it leaves PSTATE.I set.
 	 * If anything is set in DAIF, this is an NMI.
 	 */
@@ -365,3 +370,75 @@ asmlinkage void __sched el1_preempt(void)
 	if (static_branch_likely(&arm64_const_caps_ready))
 		preempt_schedule_irq();
 }
+
+static void notrace invoke_irq_handler(struct pt_regs *regs)
+{
+	unsigned long irq_stack = (unsigned long)raw_cpu_read(irq_stack_ptr);
+
+	irq_stack += IRQ_STACK_SIZE;
+
+	if (on_thread_stack())
+		call_on_stack(regs, handle_arch_irq, irq_stack);
+	else
+		handle_arch_irq(regs);
+}
+NOKPROBE_SYMBOL(invoke_irq_handler);
+
+asmlinkage void notrace el1_irq_handler(struct pt_regs *regs)
+{
+	bool masked;
+
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(regs->pmr_save | GIC_PRIO_PSR_I_SET);
+
+	/*
+	 * We can't use local_daif_restore(DAIF_PROCCTX_NOIRQ) here as it will
+	 * see the A flag is clear and try to unmask NMIs.
+	 */
+	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+
+	/*
+	 * If IRQs were masked, this is definitely an NMI. If IRQs were
+	 * unmasked, this may be an IRQ or an NMI, and gic_handle_nmi() will
+	 * handle nmi_{enter,exit} as necessary.
+	 */
+	masked = !irqs_priority_unmasked(regs);
+
+	if (masked)
+		nmi_enter();
+	else
+		trace_hardirqs_off();
+
+	invoke_irq_handler(regs);
+
+	if (masked) {
+		nmi_exit();
+	} else {
+		el1_preempt();
+		trace_hardirqs_on();
+	}
+}
+NOKPROBE_SYMBOL(el1_irq_handler);
+
+static inline void notrace do_el0_irq_bp_hardening(struct pt_regs *regs)
+{
+	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+		return;
+	if (regs->pc & BIT(55))
+		arm64_apply_bp_hardening();
+}
+NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
+
+asmlinkage void notrace el0_irq_handler(struct pt_regs *regs)
+{
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	trace_hardirqs_off();
+	do_el0_irq_bp_hardening(regs);
+	invoke_irq_handler(regs);
+	trace_hardirqs_on();
+}
+NOKPROBE_SYMBOL(el0_irq_handler);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 184313c773ea..55c4be1e996a 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -384,64 +384,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 	sb
 	.endm
 
-	.macro	irq_stack_entry
-	mov	x19, sp			// preserve the original sp
-
-	/*
-	 * Compare sp with the base of the task stack.
-	 * If the top ~(THREAD_SIZE - 1) bits match, we are on a task stack,
-	 * and should switch to the irq stack.
-	 */
-	ldr	x25, [tsk, TSK_STACK]
-	eor	x25, x25, x19
-	and	x25, x25, #~(THREAD_SIZE - 1)
-	cbnz	x25, 9998f
-
-	ldr_this_cpu x25, irq_stack_ptr, x26
-	mov	x26, #IRQ_STACK_SIZE
-	add	x26, x25, x26
-
-	/* switch to the irq stack */
-	mov	sp, x26
-9998:
-	.endm
-
-	/*
-	 * x19 should be preserved between irq_stack_entry and
-	 * irq_stack_exit.
-	 */
-	.macro	irq_stack_exit
-	mov	sp, x19
-	.endm
-
 /* GPRs used by entry code */
 tsk	.req	x28		// current thread_info
 
-/*
- * Interrupt handling.
- */
-	.macro	irq_handler
-	ldr_l	x1, handle_arch_irq
-	mov	x0, sp
-	irq_stack_entry
-	blr	x1
-	irq_stack_exit
-	.endm
-
-#ifdef CONFIG_ARM64_PSEUDO_NMI
-	/*
-	 * Set res to 0 if irqs were unmasked in interrupted context.
-	 * Otherwise set res to non-0 value.
-	 */
-	.macro	test_irqs_unmasked res:req, pmr:req
-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
-	sub	\res, \pmr, #GIC_PRIO_IRQON
-alternative_else
-	mov	\res, xzr
-alternative_endif
-	.endm
-#endif
-
 	.macro	gic_prio_kentry_setup, tmp:req
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	alternative_if ARM64_HAS_IRQ_PRIO_MASKING
@@ -451,15 +396,6 @@ alternative_endif
 #endif
 	.endm
 
-	.macro	gic_prio_irq_setup, pmr:req, tmp:req
-#ifdef CONFIG_ARM64_PSEUDO_NMI
-	alternative_if ARM64_HAS_IRQ_PRIO_MASKING
-	orr	\tmp, \pmr, #GIC_PRIO_PSR_I_SET
-	msr_s	SYS_ICC_PMR_EL1, \tmp
-	alternative_else_nop_endif
-#endif
-	.endm
-
 	.text
 
 /*
@@ -589,47 +525,8 @@ ENDPROC(el1_sync)
 	.align	6
 el1_irq:
 	kernel_entry 1
-	gic_prio_irq_setup pmr=x20, tmp=x1
-	enable_da_f
-
-#ifdef CONFIG_ARM64_PSEUDO_NMI
-	test_irqs_unmasked	res=x0, pmr=x20
-	cbz	x0, 1f
-	bl	asm_nmi_enter
-1:
-#endif
-
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_off
-#endif
-
-	irq_handler
-
-#ifdef CONFIG_PREEMPT
-	bl	el1_preempt
-#endif
-
-#ifdef CONFIG_ARM64_PSEUDO_NMI
-	/*
-	 * When using IRQ priority masking, we can get spurious interrupts while
-	 * PMR is set to GIC_PRIO_IRQOFF. An NMI might also have occurred in a
-	 * section with interrupts disabled. Skip tracing in those cases.
-	 */
-	test_irqs_unmasked	res=x0, pmr=x20
-	cbz	x0, 1f
-	bl	asm_nmi_exit
-1:
-#endif
-
-#ifdef CONFIG_TRACE_IRQFLAGS
-#ifdef CONFIG_ARM64_PSEUDO_NMI
-	test_irqs_unmasked	res=x0, pmr=x20
-	cbnz	x0, 1f
-#endif
-	bl	trace_hardirqs_on
-1:
-#endif
-
+	mov	x0, sp
+	bl	el1_irq_handler
 	kernel_exit 1
 ENDPROC(el1_irq)
 
@@ -655,7 +552,10 @@ ENDPROC(el0_sync)
 	.align	6
 el0_irq_compat:
 	kernel_entry 0, 32
-	b	el0_irq_naked
+	mov	x0, sp
+	bl	el0_irq_handler
+	b	ret_to_user
+ENDPROC(el0_irq_compat)
 
 el0_error_compat:
 	kernel_entry 0, 32
@@ -665,25 +565,8 @@ el0_error_compat:
 	.align	6
 el0_irq:
 	kernel_entry 0
-el0_irq_naked:
-	gic_prio_irq_setup pmr=x20, tmp=x0
-	ct_user_exit_irqoff
-	enable_da_f
-
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_off
-#endif
-
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
-	tbz	x22, #55, 1f
-	bl	do_el0_irq_bp_hardening
-1:
-#endif
-	irq_handler
-
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_on
-#endif
+	mov	x0, sp
+	bl	el0_irq_handler
 	b	ret_to_user
 ENDPROC(el0_irq)
 
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 04a327ccf84d..49a2d7b68531 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -76,18 +76,3 @@ void __init init_IRQ(void)
 		local_daif_restore(DAIF_PROCCTX_NOIRQ);
 	}
 }
-
-/*
- * Stubs to make nmi_enter/exit() code callable from ASM
- */
-asmlinkage void notrace asm_nmi_enter(void)
-{
-	nmi_enter();
-}
-NOKPROBE_SYMBOL(asm_nmi_enter);
-
-asmlinkage void notrace asm_nmi_exit(void)
-{
-	nmi_exit();
-}
-NOKPROBE_SYMBOL(asm_nmi_exit);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 077b02a2d4d3..43aea8012f62 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -745,13 +745,6 @@ void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_mem_abort);
 
-void do_el0_irq_bp_hardening(void)
-{
-	/* PC has already been checked in entry.S */
-	arm64_apply_bp_hardening();
-}
-NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
-
 void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
 	arm64_notify_die("SP/PC alignment exception", regs,
-- 
2.11.0


* [PATCH 07/17] arm64: entry: convert error entry to C
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (5 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 06/17] arm64: entry: convert irq entry to C Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09  9:12   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 08/17] arm64: entry: Split el0_sync_compat from el0_sync Mark Rutland
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Portions of our error entry logic are handled in C while other parts are
handled in assembly. Let's migrate the bulk of it to C so that it's
easier to follow and maintain.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/kernel/entry-common.c   | 26 ++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S          | 19 ++++++-------------
 arch/arm64/kernel/traps.c          |  2 +-
 4 files changed, 34 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index a49038fa4faf..220a7c3971d8 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -51,5 +51,6 @@ void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
 void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
 			    struct pt_regs *regs);
+void do_serror(struct pt_regs *regs, unsigned int esr);
 
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 45155d9f72da..bf9d497e620c 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -442,3 +442,29 @@ asmlinkage void notrace el0_irq_handler(struct pt_regs *regs)
 	trace_hardirqs_on();
 }
 NOKPROBE_SYMBOL(el0_irq_handler);
+
+asmlinkage void el1_error_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	local_daif_restore(DAIF_ERRCTX);
+	do_serror(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_error_handler);
+
+asmlinkage void el0_error_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_ERRCTX);
+	do_serror(regs, esr);
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+}
+NOKPROBE_SYMBOL(el0_error_handler);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 55c4be1e996a..0c5117ef7c3c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -559,7 +559,10 @@ ENDPROC(el0_irq_compat)
 
 el0_error_compat:
 	kernel_entry 0, 32
-	b	el0_error_naked
+	mov	x0, sp
+	bl	el0_error_handler
+	b	ret_to_user
+ENDPROC(el0_error_compat)
 #endif
 
 	.align	6
@@ -572,25 +575,15 @@ ENDPROC(el0_irq)
 
 el1_error:
 	kernel_entry 1
-	mrs	x1, esr_el1
-	gic_prio_kentry_setup tmp=x2
-	enable_dbg
 	mov	x0, sp
-	bl	do_serror
+	bl	el1_error_handler
 	kernel_exit 1
 ENDPROC(el1_error)
 
 el0_error:
 	kernel_entry 0
-el0_error_naked:
-	mrs	x25, esr_el1
-	gic_prio_kentry_setup tmp=x2
-	ct_user_exit_irqoff
-	enable_dbg
 	mov	x0, sp
-	mov	x1, x25
-	bl	do_serror
-	enable_da_f
+	bl	el0_error_handler
 	b	ret_to_user
 ENDPROC(el0_error)
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 73caf35c2262..170e637bb87b 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -901,7 +901,7 @@ bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned int esr)
 	}
 }
 
-asmlinkage void do_serror(struct pt_regs *regs, unsigned int esr)
+void do_serror(struct pt_regs *regs, unsigned int esr)
 {
 	const bool was_in_nmi = in_nmi();
 
-- 
2.11.0


* [PATCH 08/17] arm64: entry: Split el0_sync_compat from el0_sync
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (6 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 07/17] arm64: entry: convert error " Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09  9:50   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 09/17] arm64: entry: organise handler stubs consistently Mark Rutland
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Currently we treat el0_sync_compat as if it's a portion of el0_sync,
which is unlike all the other exception entry stubs. Let's split it out
and give it its own ENDPROC(), so that we can treat it as a separate
path entirely.

Reported-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 0c5117ef7c3c..2c3de577f720 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -539,6 +539,7 @@ el0_sync:
 	mov	x0, sp
 	bl	el0_sync_handler
 	b	ret_to_user
+ENDPROC(el0_sync)
 
 #ifdef CONFIG_COMPAT
 	.align	6
@@ -547,7 +548,7 @@ el0_sync_compat:
 	mov	x0, sp
 	bl	el0_sync_compat_handler
 	b	ret_to_user
-ENDPROC(el0_sync)
+ENDPROC(el0_sync_compat)
 
 	.align	6
 el0_irq_compat:
-- 
2.11.0


* [PATCH 09/17] arm64: entry: organise handler stubs consistently
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (7 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 08/17] arm64: entry: Split el0_sync_compat from el0_sync Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09 10:01   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 10/17] arm64: entry: consolidate EL1 return paths Mark Rutland
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

The exception handler stubs are laid out in a random order, violating
the EL1/EL0 split described by the comments. In an attempt to get
better cache usage for commonly-invoked handlers, some handlers are
given special alignment while others are not.

To make things less surprising, this patch reorganises the handler stubs
so they're in a consistent order, with the EL1 sync/irq/error stubs
first, followed by the EL0 sync/irq/error stubs, then the EL0 compat
sync/irq/error stubs. All the stubs are given the same alignment.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 48 +++++++++++++++++++++++++----------------------
 1 file changed, 26 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 2c3de577f720..35a8c56b0582 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -530,6 +530,14 @@ el1_irq:
 	kernel_exit 1
 ENDPROC(el1_irq)
 
+	.align	6
+el1_error:
+	kernel_entry 1
+	mov	x0, sp
+	bl	el1_error_handler
+	kernel_exit 1
+ENDPROC(el1_error)
+
 /*
  * EL0 mode handlers.
  */
@@ -541,6 +549,23 @@ el0_sync:
 	b	ret_to_user
 ENDPROC(el0_sync)
 
+	.align	6
+el0_irq:
+	kernel_entry 0
+	mov	x0, sp
+	bl	el0_irq_handler
+	b	ret_to_user
+ENDPROC(el0_irq)
+
+	.align	6
+el0_error:
+	kernel_entry 0
+	mov	x0, sp
+	bl	el0_error_handler
+	b	ret_to_user
+ENDPROC(el0_error)
+
+
 #ifdef CONFIG_COMPAT
 	.align	6
 el0_sync_compat:
@@ -558,6 +583,7 @@ el0_irq_compat:
 	b	ret_to_user
 ENDPROC(el0_irq_compat)
 
+	.align	6
 el0_error_compat:
 	kernel_entry 0, 32
 	mov	x0, sp
@@ -566,28 +592,6 @@ el0_error_compat:
 ENDPROC(el0_error_compat)
 #endif
 
-	.align	6
-el0_irq:
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_irq_handler
-	b	ret_to_user
-ENDPROC(el0_irq)
-
-el1_error:
-	kernel_entry 1
-	mov	x0, sp
-	bl	el1_error_handler
-	kernel_exit 1
-ENDPROC(el1_error)
-
-el0_error:
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_error_handler
-	b	ret_to_user
-ENDPROC(el0_error)
-
 /*
  * Ok, we need to do extra processing, enter the slow path.
  */
-- 
2.11.0


* [PATCH 10/17] arm64: entry: consolidate EL1 return paths
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (8 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 09/17] arm64: entry: organise handler stubs consistently Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-10  3:39   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 11/17] stackleak: allow C to call stackleak_erase() Mark Rutland
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Each of the EL1 exception handler stubs has an identical copy of the
kernel_exit code. While each handler needs its own kernel_entry
sequence, there's no need to duplicate this for each handler, and we can
consolidate them for better I-cache usage.

This patch makes the EL1 handlers all use a common kernel_exit stub
called ret_to_kernel, matching the ret_to_user stub used by EL0
handlers.

As with the handlers, ret_to_kernel is aligned for better I-cache and
branch predictor utilization, and for consistency the same alignment is
applied to ret_to_user.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 35a8c56b0582..e76326feb1da 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -519,7 +519,7 @@ el1_sync:
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_sync_handler
-	kernel_exit 1
+	b	ret_to_kernel
 ENDPROC(el1_sync)
 
 	.align	6
@@ -527,7 +527,7 @@ el1_irq:
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_irq_handler
-	kernel_exit 1
+	b	ret_to_kernel
 ENDPROC(el1_irq)
 
 	.align	6
@@ -535,10 +535,18 @@ el1_error:
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_error_handler
-	kernel_exit 1
+	b	ret_to_kernel
 ENDPROC(el1_error)
 
 /*
+ * Common EL1 exception return path
+ */
+	.align 6
+ret_to_kernel:
+	kernel_exit 1
+ENDPROC(ret_to_kernel)
+
+/*
  * EL0 mode handlers.
  */
 	.align	6
@@ -606,6 +614,7 @@ work_pending:
 /*
  * "slow" syscall return path.
  */
+	.align 6
 ret_to_user:
 	disable_daif
 	gic_prio_kentry_setup tmp=x3
-- 
2.11.0


* [PATCH 11/17] stackleak: allow C to call stackleak_erase()
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (9 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 10/17] arm64: entry: consolidate EL1 return paths Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-10  3:45   ` Anshuman Khandual
  2020-01-27 23:00   ` Kees Cook
  2020-01-08 18:56 ` [PATCH 12/17] arm64: debug-monitors: refactor MDSCR manipulation Mark Rutland
                   ` (5 subsequent siblings)
  16 siblings, 2 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Currently, stackleak_erase() has no prototype in a header file, and has
to be called directly from low-level architecture entry assembly code.
This necessitates ifdeffery and complicates the entry assembly.

To ameliorate matters, let's provide a prototype so that architectures
can call stackleak_erase() from slightly higher level C code used as
part of the entry flow. This makes things easier to read and maintain.
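
For example (sketch only; the arm64 conversion later in this series
does this at the end of its C return-to-user path), an architecture
can now simply write:

  static void notrace prepare_return_to_user(struct pt_regs *regs)
  {
          /* ... handle pending work, re-enable single-step, etc ... */
          user_enter();
          stackleak_erase();      /* stubbed out when STACKLEAK is off */
  }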

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Popov <alex.popov@linux.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/stackleak.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/stackleak.h b/include/linux/stackleak.h
index 3d5c3271a9a8..2b09d3759c76 100644
--- a/include/linux/stackleak.h
+++ b/include/linux/stackleak.h
@@ -15,6 +15,8 @@
 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
 #include <asm/stacktrace.h>
 
+asmlinkage void notrace stackleak_erase(void);
+
 static inline void stackleak_task_init(struct task_struct *t)
 {
 	t->lowest_stack = (unsigned long)end_of_stack(t) + sizeof(unsigned long);
@@ -30,6 +32,7 @@ int stack_erasing_sysctl(struct ctl_table *table, int write,
 
 #else /* !CONFIG_GCC_PLUGIN_STACKLEAK */
 static inline void stackleak_task_init(struct task_struct *t) { }
+static inline void stackleak_erase(void) { }
 #endif
 
 #endif
-- 
2.11.0


* [PATCH 12/17] arm64: debug-monitors: refactor MDSCR manipulation
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (10 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 11/17] stackleak: allow C to call stackleak_erase() Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-10  4:35   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 13/17] arm64: entry: move common el0 entry/return work to C Mark Rutland
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

When we convert the ret_to_user/work_pending code to C, we're going to
want to poke the MDSCR to enable/disable single-step. Let's factor out
the existing code for this from debug-monitors.c.

At the same time, we can make use of {read,write}_sysreg() directly, and
get rid of the mdscr_{read,write} wrappers.
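
For reference, sysreg_clear_set(mdscr_el1, clear, set) is a plain
read-modify-write of the named register which only writes back when
the value changes, roughly:

  u64 old = read_sysreg(mdscr_el1);
  u64 new = (old & ~clear) | set;

  if (new != old)
          write_sysreg(new, mdscr_el1);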

The existing code masked DAIF when manipulating MDSCR_EL1, but this
should not be necessary. Exceptions can be taken immediately before DAIF
is masked, and given the lack of an ISB can also be taken after DAIF is
unmasked as writes to DAIF are only self-synchronizing and not
context-synchronizing in general. We may want to add an ISB to ensure
that updates to MDSCR have taken effect, however.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/debug-monitors.h | 10 ++++++++++
 arch/arm64/kernel/debug-monitors.c      | 32 +++++++-------------------------
 2 files changed, 17 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index 7619f473155f..342867e50c54 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -107,6 +107,16 @@ enum dbg_active_el {
 void enable_debug_monitors(enum dbg_active_el el);
 void disable_debug_monitors(enum dbg_active_el el);
 
+static __always_inline void __enable_single_step_nosync(void)
+{
+	sysreg_clear_set(mdscr_el1, 0, DBG_MDSCR_SS);
+}
+
+static __always_inline void __disable_single_step_nosync(void)
+{
+	sysreg_clear_set(mdscr_el1, DBG_MDSCR_SS, 0);
+}
+
 void user_rewind_single_step(struct task_struct *task);
 void user_fastforward_single_step(struct task_struct *task);
 
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 48222a4760c2..fa2d4145bd07 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -32,24 +32,6 @@ u8 debug_monitors_arch(void)
 }
 
 /*
- * MDSCR access routines.
- */
-static void mdscr_write(u32 mdscr)
-{
-	unsigned long flags;
-	flags = local_daif_save();
-	write_sysreg(mdscr, mdscr_el1);
-	local_daif_restore(flags);
-}
-NOKPROBE_SYMBOL(mdscr_write);
-
-static u32 mdscr_read(void)
-{
-	return read_sysreg(mdscr_el1);
-}
-NOKPROBE_SYMBOL(mdscr_read);
-
-/*
  * Allow root to disable self-hosted debug from userspace.
  * This is useful if you want to connect an external JTAG debugger.
  */
@@ -91,9 +73,9 @@ void enable_debug_monitors(enum dbg_active_el el)
 		enable |= DBG_MDSCR_KDE;
 
 	if (enable && debug_enabled) {
-		mdscr = mdscr_read();
+		mdscr = read_sysreg(mdscr_el1);
 		mdscr |= enable;
-		mdscr_write(mdscr);
+		write_sysreg(mdscr, mdscr_el1);
 	}
 }
 NOKPROBE_SYMBOL(enable_debug_monitors);
@@ -112,9 +94,9 @@ void disable_debug_monitors(enum dbg_active_el el)
 		disable &= ~DBG_MDSCR_KDE;
 
 	if (disable) {
-		mdscr = mdscr_read();
+		mdscr = read_sysreg(mdscr_el1);
 		mdscr &= disable;
-		mdscr_write(mdscr);
+		write_sysreg(mdscr, mdscr_el1);
 	}
 }
 NOKPROBE_SYMBOL(disable_debug_monitors);
@@ -409,7 +391,7 @@ void kernel_enable_single_step(struct pt_regs *regs)
 {
 	WARN_ON(!irqs_disabled());
 	set_regs_spsr_ss(regs);
-	mdscr_write(mdscr_read() | DBG_MDSCR_SS);
+	__enable_single_step_nosync();
 	enable_debug_monitors(DBG_ACTIVE_EL1);
 }
 NOKPROBE_SYMBOL(kernel_enable_single_step);
@@ -417,7 +399,7 @@ NOKPROBE_SYMBOL(kernel_enable_single_step);
 void kernel_disable_single_step(void)
 {
 	WARN_ON(!irqs_disabled());
-	mdscr_write(mdscr_read() & ~DBG_MDSCR_SS);
+	__disable_single_step_nosync();
 	disable_debug_monitors(DBG_ACTIVE_EL1);
 }
 NOKPROBE_SYMBOL(kernel_disable_single_step);
@@ -425,7 +407,7 @@ NOKPROBE_SYMBOL(kernel_disable_single_step);
 int kernel_active_single_step(void)
 {
 	WARN_ON(!irqs_disabled());
-	return mdscr_read() & DBG_MDSCR_SS;
+	return read_sysreg(mdscr_el1) & DBG_MDSCR_SS;
 }
 NOKPROBE_SYMBOL(kernel_active_single_step);
 
-- 
2.11.0


* [PATCH 13/17] arm64: entry: move common el0 entry/return work to C
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (11 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 12/17] arm64: debug-monitors: refactor MDSCR manipulation Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-09 15:19   ` Mark Rutland
  2020-01-08 18:56 ` [PATCH 14/17] arm64: entry: move NO_SYSCALL setup " Mark Rutland
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

Portions of common EL0 exception entry/return logic are handled in C
while other parts are handled in assembly. Let's migrate the bulk of it
to C so that it's easier to follow and maintain.

This patch moves the ret_to_user/work_pending logic to C. As that
handled enabling singlestep for userspace, the matching disable is
similarly factored out of the entry code. Additionally user_enter() is
factored out of kernel_exit as all the corresponding user_exit() calls
have already been converted to C.

Rather than add tedious boilerplate to each top-level exception handler
to perform this entry/return work, a new EL0_HANDLER() handles this
automatically. This takes the full name of each handler to keep them
easy to find with grep, and also takes a name for the pt_regs argument
so that we don't have a confusing implicit variable.
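
Concretely, EL0_HANDLER(el0_irq_handler, regs) followed by a braced
body expands to roughly the following (NOKPROBE_SYMBOL() annotations
elided), with the braced body becoming the __raw_ function:

  static __always_inline void notrace __raw_el0_irq_handler(struct pt_regs *regs);

  asmlinkage void notrace el0_irq_handler(struct pt_regs *regs)
  {
          el0_prepare_entry(regs);
          __raw_el0_irq_handler(regs);
          el0_prepare_return(regs);
  }

  static __always_inline void notrace __raw_el0_irq_handler(struct pt_regs *regs)
  /* { handler body as written after the macro invocation } */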

Since local_daif_mask() handles the GIC priority, we don't need to mess
with this explicitly as we did in the old assembly, and the now unused
gic_prio_kentry_setup macro can be removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/assembler.h | 18 ----------
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/kernel/entry-common.c   | 67 +++++++++++++++++++++++++++++++++-----
 arch/arm64/kernel/entry.S          | 56 ++-----------------------------
 arch/arm64/kernel/signal.c         |  3 +-
 5 files changed, 64 insertions(+), 81 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index b8cf7c85ffa2..505beb369f1e 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -67,24 +67,6 @@
 	msr	daifclr, #8
 	.endm
 
-	.macro	disable_step_tsk, flgs, tmp
-	tbz	\flgs, #TIF_SINGLESTEP, 9990f
-	mrs	\tmp, mdscr_el1
-	bic	\tmp, \tmp, #DBG_MDSCR_SS
-	msr	mdscr_el1, \tmp
-	isb	// Synchronise with enable_dbg
-9990:
-	.endm
-
-	/* call with daif masked */
-	.macro	enable_step_tsk, flgs, tmp
-	tbz	\flgs, #TIF_SINGLESTEP, 9990f
-	mrs	\tmp, mdscr_el1
-	orr	\tmp, \tmp, #DBG_MDSCR_SS
-	msr	mdscr_el1, \tmp
-9990:
-	.endm
-
 /*
  * SMP data memory barrier
  */
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 220a7c3971d8..7f367d720ca4 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -52,5 +52,6 @@ void do_el0_svc_compat(struct pt_regs *regs);
 void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
 			    struct pt_regs *regs);
 void do_serror(struct pt_regs *regs, unsigned int esr);
+void do_notify_resume(struct pt_regs *regs,  unsigned long thread_flags);
 
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index bf9d497e620c..17dbfadb2fb8 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -15,10 +15,12 @@
 #include <linux/preempt.h>
 #include <linux/ptrace.h>
 #include <linux/sched/debug.h>
+#include <linux/stackleak.h>
 #include <linux/thread_info.h>
 
 #include <asm/cpufeature.h>
 #include <asm/daifflags.h>
+#include <asm/debug-monitors.h>
 #include <asm/esr.h>
 #include <asm/exception.h>
 #include <asm/kprobes.h>
@@ -107,6 +109,59 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el1_sync_handler);
 
+static void notrace el0_prepare_entry(struct pt_regs *regs)
+{
+	if (test_thread_flag(TIF_SINGLESTEP)) {
+		__disable_single_step_nosync();
+		isb();
+	}
+}
+NOKPROBE_SYMBOL(el0_prepare_entry);
+
+static void notrace el0_prepare_return(struct pt_regs *regs)
+{
+	unsigned long flags;
+
+	local_daif_mask();
+
+	flags = current_thread_info()->flags;
+	if (unlikely(flags & _TIF_WORK_MASK)) {
+		do_notify_resume(regs, flags);
+		trace_hardirqs_on();
+	}
+
+	if (test_thread_flag(TIF_SINGLESTEP))
+		__enable_single_step_nosync();
+
+	user_enter();
+
+	stackleak_erase();
+}
+NOKPROBE_SYMBOL(el0_prepare_return);
+
+asmlinkage void notrace el0_prepare_return_from_fork(void)
+{
+	el0_prepare_return(current_pt_regs());
+}
+NOKPROBE_SYMBOL(el0_prepare_return_from_fork);
+
+/*
+ * Top-level exception handlers need to perform common entry/exit work. Use
+ * this macro when defining a handler for exceptions from EL0, so that work is
+ * handled automatically.
+ */
+#define EL0_HANDLER(handlername, regsname)						\
+static __always_inline void notrace __raw_##handlername(struct pt_regs *regsname);	\
+NOKPROBE_SYMBOL(__raw_##handlername);							\
+asmlinkage void notrace handlername(struct pt_regs *regs)				\
+{											\
+	el0_prepare_entry(regs);							\
+	__raw_##handlername(regs);							\
+	el0_prepare_return(regs);							\
+}											\
+NOKPROBE_SYMBOL(handlername);								\
+static __always_inline void notrace __raw_##handlername(struct pt_regs *regsname)
+
 static void notrace el0_da(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
@@ -228,7 +283,7 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
-asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
+EL0_HANDLER(el0_sync_handler, regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
@@ -274,7 +329,6 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 		el0_inv(regs, esr);
 	}
 }
-NOKPROBE_SYMBOL(el0_sync_handler);
 
 #ifdef CONFIG_COMPAT
 static void notrace el0_cp15(struct pt_regs *regs, unsigned long esr)
@@ -294,7 +348,7 @@ static void notrace el0_svc_compat(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc_compat);
 
-asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
+EL0_HANDLER(el0_sync_compat_handler, regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
@@ -337,7 +391,6 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
 		el0_inv(regs, esr);
 	}
 }
-NOKPROBE_SYMBOL(el0_sync_compat_handler);
 #endif /* CONFIG_COMPAT */
 
 static void __sched el1_preempt(void)
@@ -429,7 +482,7 @@ static inline void notrace do_el0_irq_bp_hardening(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
 
-asmlinkage void notrace el0_irq_handler(struct pt_regs *regs)
+EL0_HANDLER(el0_irq_handler, regs)
 {
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
@@ -441,7 +494,6 @@ asmlinkage void notrace el0_irq_handler(struct pt_regs *regs)
 	invoke_irq_handler(regs);
 	trace_hardirqs_on();
 }
-NOKPROBE_SYMBOL(el0_irq_handler);
 
 asmlinkage void el1_error_handler(struct pt_regs *regs)
 {
@@ -455,7 +507,7 @@ asmlinkage void el1_error_handler(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el1_error_handler);
 
-asmlinkage void el0_error_handler(struct pt_regs *regs)
+EL0_HANDLER(el0_error_handler, regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
@@ -467,4 +519,3 @@ asmlinkage void el0_error_handler(struct pt_regs *regs)
 	do_serror(regs, esr);
 	local_daif_restore(DAIF_PROCCTX_NOIRQ);
 }
-NOKPROBE_SYMBOL(el0_error_handler);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e76326feb1da..e67c8ad94cce 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -26,22 +26,6 @@
 #include <asm/asm-uaccess.h>
 #include <asm/unistd.h>
 
-/*
- * Context tracking subsystem.  Used to instrument transitions
- * between user and kernel mode.
- */
-	.macro ct_user_exit_irqoff
-#ifdef CONFIG_CONTEXT_TRACKING
-	bl	enter_from_user_mode
-#endif
-	.endm
-
-	.macro ct_user_enter
-#ifdef CONFIG_CONTEXT_TRACKING
-	bl	context_tracking_user_enter
-#endif
-	.endm
-
 	.macro	clear_gp_regs
 	.irp	n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
 	mov	x\n, xzr
@@ -167,9 +151,7 @@ alternative_cb_end
 	.if	\el == 0
 	clear_gp_regs
 	mrs	x21, sp_el0
-	ldr_this_cpu	tsk, __entry_task, x20	// Ensure MDSCR_EL1.SS is clear,
-	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
-	disable_step_tsk x19, x20		// exceptions when scheduling.
+	ldr_this_cpu	tsk, __entry_task, x20
 
 	apply_ssbd 1, x22, x23
 
@@ -277,9 +259,6 @@ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
 alternative_else_nop_endif
 
 	ldp	x21, x22, [sp, #S_PC]		// load ELR, SPSR
-	.if	\el == 0
-	ct_user_enter
-	.endif
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
@@ -387,15 +366,6 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 /* GPRs used by entry code */
 tsk	.req	x28		// current thread_info
 
-	.macro	gic_prio_kentry_setup, tmp:req
-#ifdef CONFIG_ARM64_PSEUDO_NMI
-	alternative_if ARM64_HAS_IRQ_PRIO_MASKING
-	mov	\tmp, #(GIC_PRIO_PSR_I_SET | GIC_PRIO_IRQON)
-	msr_s	SYS_ICC_PMR_EL1, \tmp
-	alternative_else_nop_endif
-#endif
-	.endm
-
 	.text
 
 /*
@@ -601,31 +571,10 @@ ENDPROC(el0_error_compat)
 #endif
 
 /*
- * Ok, we need to do extra processing, enter the slow path.
- */
-work_pending:
-	mov	x0, sp				// 'regs'
-	bl	do_notify_resume
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_on		// enabled while in userspace
-#endif
-	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for single-step
-	b	finish_ret_to_user
-/*
- * "slow" syscall return path.
+ * Common EL0 exception return path
  */
 	.align 6
 ret_to_user:
-	disable_daif
-	gic_prio_kentry_setup tmp=x3
-	ldr	x1, [tsk, #TSK_TI_FLAGS]
-	and	x2, x1, #_TIF_WORK_MASK
-	cbnz	x2, work_pending
-finish_ret_to_user:
-	enable_step_tsk x1, x2
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
-	bl	stackleak_erase
-#endif
 	kernel_exit 0
 ENDPROC(ret_to_user)
 
@@ -787,6 +736,7 @@ ENTRY(ret_from_fork)
 	mov	x0, x20
 	blr	x19
 1:	get_current_task tsk
+	bl	el0_prepare_return_from_fork
 	b	ret_to_user
 ENDPROC(ret_from_fork)
 NOKPROBE(ret_from_fork)
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index dd2cdc0d5be2..22e726d57846 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -896,8 +896,7 @@ static void do_signal(struct pt_regs *regs)
 	restore_saved_sigmask();
 }
 
-asmlinkage void do_notify_resume(struct pt_regs *regs,
-				 unsigned long thread_flags)
+void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
 {
 	/*
 	 * The assembly code enters us with IRQs off, but it hasn't
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 14/17] arm64: entry: move NO_SYSCALL setup to C
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (12 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 13/17] arm64: entry: move common el0 entry/return work to C Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-10  5:37   ` Anshuman Khandual
  2020-01-08 18:56 ` [PATCH 15/17] arm64: entry: move ARM64_ERRATUM_845719 workaround " Mark Rutland
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

In the entry assembly we set up pt_regs::syscallno to NO_SYSCALL so that
any ptrace calls will see a sensible value. For real syscalls, the
actual syscall number is set up in C code, in do_el0_svc or
do_el0_svc_compat.

Given that tracing isn't performed until the usual EL0 entry work has
been done, we can move the default syscallno setup to C code, making
things simpler and more legible.
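
For reference, the "real syscall" side that overwrites this default is
already in C; a simplified sketch of el0_svc_common() in syscall.c
(unchanged by this patch):

	static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
				   const syscall_fn_t syscall_table[])
	{
		/* ... */
		/* overwrite the NO_SYSCALL default set at entry time */
		regs->syscallno = scno;
		/* ... */
	}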

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/asm-offsets.c  | 1 -
 arch/arm64/kernel/entry-common.c | 3 +++
 arch/arm64/kernel/entry.S        | 6 ------
 3 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index a5bdce8af65b..0b6ffa9ecc08 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -61,7 +61,6 @@ int main(void)
   DEFINE(S_SP,			offsetof(struct pt_regs, sp));
   DEFINE(S_PSTATE,		offsetof(struct pt_regs, pstate));
   DEFINE(S_PC,			offsetof(struct pt_regs, pc));
-  DEFINE(S_SYSCALLNO,		offsetof(struct pt_regs, syscallno));
   DEFINE(S_ORIG_ADDR_LIMIT,	offsetof(struct pt_regs, orig_addr_limit));
   DEFINE(S_PMR_SAVE,		offsetof(struct pt_regs, pmr_save));
   DEFINE(S_STACKFRAME,		offsetof(struct pt_regs, stackframe));
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 17dbfadb2fb8..fa568284e73f 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -111,6 +111,9 @@ NOKPROBE_SYMBOL(el1_sync_handler);
 
 static void notrace el0_prepare_entry(struct pt_regs *regs)
 {
+	/* Not in a syscall by default; do_el0_svc{,_compat} overwrite this */
+	regs->syscallno = NO_SYSCALL;
+
 	if (test_thread_flag(TIF_SINGLESTEP)) {
 		__disable_single_step_nosync();
 		isb();
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e67c8ad94cce..d84718d272e9 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -208,12 +208,6 @@ alternative_else_nop_endif
 
 	stp	x22, x23, [sp, #S_PC]
 
-	/* Not in a syscall by default (el0_svc overwrites for real syscall) */
-	.if	\el == 0
-	mov	w21, #NO_SYSCALL
-	str	w21, [sp, #S_SYSCALLNO]
-	.endif
-
 	/*
 	 * Set sp_el0 to current thread_info.
 	 */
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 15/17] arm64: entry: move ARM64_ERRATUM_845719 workaround to C
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (13 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 14/17] arm64: entry: move NO_SYSCALL setup " Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-08 18:56 ` [PATCH 16/17] arm64: entry: move ARM64_ERRATUM_1418040 " Mark Rutland
  2020-01-08 18:56 ` [PATCH 17/17] arm64: entry: cleanup sp_el0 manipulation Mark Rutland
  16 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

To make the entry code less of a rat's nest of overlapping labels and
shared state, and to make the code easier to debug and maintain, let's
move the workaround for ARM64_ERRATUM_845719 to C.

The workaround requires us to perform a write to CONTEXTIDR_EL1 at
AArch64 EL1 before returning to an AArch32 EL0 task. There are no
additional requirements on the state of the CPU, or on subsequent
instructions prior to the ERET, so this can safely be performed in C
code.

As with the assembly version, we preserve the value of CONTEXTIDR if
CONFIG_PID_IN_CONTEXTIDR is selected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 17 +++++++++++++++++
 arch/arm64/kernel/entry.S        | 14 --------------
 2 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index fa568284e73f..28b241cfd8f0 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -121,6 +121,21 @@ static void notrace el0_prepare_entry(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_prepare_entry);
 
+static void notrace workaround_arm64_erratum_845719(void)
+{
+	unsigned long val = 0;
+
+	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_845719) ||
+	    !cpus_have_const_cap(ARM64_WORKAROUND_845719) ||
+	    !is_compat_task())
+		return;
+
+	if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
+		val = read_sysreg(contextidr_el1);
+	write_sysreg(val, contextidr_el1);
+}
+NOKPROBE_SYMBOL(workaround_arm64_erratum_845719);
+
 static void notrace el0_prepare_return(struct pt_regs *regs)
 {
 	unsigned long flags;
@@ -138,6 +153,8 @@ static void notrace el0_prepare_return(struct pt_regs *regs)
 
 	user_enter();
 
+	workaround_arm64_erratum_845719();
+
 	stackleak_erase();
 }
 NOKPROBE_SYMBOL(el0_prepare_return);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d84718d272e9..a7340e551589 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -288,20 +288,6 @@ alternative_else_nop_endif
 	.if	\el == 0
 	ldr	x23, [sp, #S_SP]		// load return stack pointer
 	msr	sp_el0, x23
-	tst	x22, #PSR_MODE32_BIT		// native task?
-	b.eq	3f
-
-#ifdef CONFIG_ARM64_ERRATUM_845719
-alternative_if ARM64_WORKAROUND_845719
-#ifdef CONFIG_PID_IN_CONTEXTIDR
-	mrs	x29, contextidr_el1
-	msr	contextidr_el1, x29
-#else
-	msr contextidr_el1, xzr
-#endif
-alternative_else_nop_endif
-#endif
-3:
 #ifdef CONFIG_ARM64_ERRATUM_1418040
 alternative_if_not ARM64_WORKAROUND_1418040
 	b	4f
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 16/17] arm64: entry: move ARM64_ERRATUM_1418040 workaround to C
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (14 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 15/17] arm64: entry: move ARM64_ERRATUM_845719 workaround " Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  2020-01-08 18:56 ` [PATCH 17/17] arm64: entry: cleanup sp_el0 manipulation Mark Rutland
  16 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

To make the entry code less of a rat's nest of overlapping labels and
shared state, and to make the code easier to debug and maintain, let's
move the workaround for ARM64_ERRATUM_1418040 to C.

The workaround requires us to disable EL0 access to the virtual counter,
and emulate these accesses when they are trapped. The assembly code is
only responsible for manipulating the trap control, which we can safely
do in C code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 20 ++++++++++++++++++++
 arch/arm64/kernel/entry.S        | 15 ---------------
 2 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 28b241cfd8f0..a7bebc3ce2a4 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -28,6 +28,8 @@
 #include <asm/stacktrace.h>
 #include <asm/sysreg.h>
 
+#include <clocksource/arm_arch_timer.h>
+
 static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
@@ -136,6 +138,23 @@ static void notrace workaround_arm64_erratum_845719(void)
 }
 NOKPROBE_SYMBOL(workaround_arm64_erratum_845719);
 
+static void notrace workaround_arm64_erratum_1418040(void)
+{
+	unsigned long clear = 0, set = 0;
+
+	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) ||
+	    !cpus_have_const_cap(ARM64_WORKAROUND_1418040))
+		return;
+
+	if (is_compat_task())
+		clear = ARCH_TIMER_USR_VCT_ACCESS_EN;
+	else
+		set = ARCH_TIMER_USR_VCT_ACCESS_EN;
+
+	sysreg_clear_set(cntkctl_el1, clear, set);
+}
+NOKPROBE_SYMBOL(workaround_arm64_erratum_1418040);
+
 static void notrace el0_prepare_return(struct pt_regs *regs)
 {
 	unsigned long flags;
@@ -154,6 +173,7 @@ static void notrace el0_prepare_return(struct pt_regs *regs)
 	user_enter();
 
 	workaround_arm64_erratum_845719();
+	workaround_arm64_erratum_1418040();
 
 	stackleak_erase();
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index a7340e551589..537b44c413df 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -288,21 +288,6 @@ alternative_else_nop_endif
 	.if	\el == 0
 	ldr	x23, [sp, #S_SP]		// load return stack pointer
 	msr	sp_el0, x23
-#ifdef CONFIG_ARM64_ERRATUM_1418040
-alternative_if_not ARM64_WORKAROUND_1418040
-	b	4f
-alternative_else_nop_endif
-	/*
-	 * if (x22.mode32 == cntkctl_el1.el0vcten)
-	 *     cntkctl_el1.el0vcten = ~cntkctl_el1.el0vcten
-	 */
-	mrs	x1, cntkctl_el1
-	eon	x0, x1, x22, lsr #3
-	tbz	x0, #1, 4f
-	eor	x1, x1, #2	// ARCH_TIMER_USR_VCT_ACCESS_EN
-	msr	cntkctl_el1, x1
-4:
-#endif
 	apply_ssbd 0, x0, x1
 	.endif
 
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 17/17] arm64: entry: cleanup sp_el0 manipulation
  2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
                   ` (15 preceding siblings ...)
  2020-01-08 18:56 ` [PATCH 16/17] arm64: entry: move ARM64_ERRATUM_1418040 " Mark Rutland
@ 2020-01-08 18:56 ` Mark Rutland
  16 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-08 18:56 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: mark.rutland, keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

The kernel stashes the current task struct in sp_el0 so that this can be
acquired consistently/cheaply when required. When we take an exception
from EL0 we have to:

1) stash the original sp_el0 value
2) find the current task
3) update sp_el0 with the current task pointer

Currently steps #1 and #2 occur in one place, and step #3 a while later.
As the value of sp_el0 is immaterial between these points, let's move
them together to make the code clearer and minimize ifdeffery.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 537b44c413df..40d1cd8fb4c6 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -152,6 +152,7 @@ alternative_cb_end
 	clear_gp_regs
 	mrs	x21, sp_el0
 	ldr_this_cpu	tsk, __entry_task, x20
+	msr	sp_el0, tsk
 
 	apply_ssbd 1, x22, x23
 
@@ -208,13 +209,6 @@ alternative_else_nop_endif
 
 	stp	x22, x23, [sp, #S_PC]
 
-	/*
-	 * Set sp_el0 to current thread_info.
-	 */
-	.if	\el == 0
-	msr	sp_el0, tsk
-	.endif
-
 	/* Save pmr */
 alternative_if ARM64_HAS_IRQ_PRIO_MASKING
 	mrs_s	x20, SYS_ICC_PMR_EL1
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH 01/17] arm64: entry: mark all entry code as notrace
  2020-01-08 18:56 ` [PATCH 01/17] arm64: entry: mark all entry code as notrace Mark Rutland
@ 2020-01-09  5:21   ` Anshuman Khandual
  2020-01-13 15:44     ` Mark Rutland
  0 siblings, 1 reply; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09  5:21 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> Almost all functions in entry-common.c are marked notrace, with
> el1_undef and el1_inv being the only exceptions. We appear to have done
> this on the assumption that there were no exception registers that we
> needed to snapshot, and thus it was safe to run trace code that might
> result in further exceptions and clobber those registers.
> 
> However, until we inherit the DAIF flags, our irq flag tracing is stale,
> and this discrepancy could set off warnings in some configurations.

Could you give some example scenarios where this might happen?


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 02/17] arm64: entry: cleanup el0 svc handler naming
  2020-01-08 18:56 ` [PATCH 02/17] arm64: entry: cleanup el0 svc handler naming Mark Rutland
@ 2020-01-09  5:33   ` Anshuman Khandual
  0 siblings, 0 replies; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09  5:33 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> For most of the exception entry code, <foo>_handler() is the first C
> function called from the entry assembly in entry-common.c, and external
> functions handling the bulk of the logic are called do_<foo>().
> 
> For consistency, apply this scheme to el0_svc_handler and
> el0_svc_compat_hander, renaming them to do_el0_svc and do_el0_svc_compat

A small nit - s/hander/handler

> respectively.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 03/17] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
  2020-01-08 18:56 ` [PATCH 03/17] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c Mark Rutland
@ 2020-01-09  5:36   ` Anshuman Khandual
  0 siblings, 0 replies; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09  5:36 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> Subsequent patches will pull more of the IRQ entry handling into C. To
> keep this in one place, let's move arm64_preempt_schedule_irq() into
> entry-common.c along with the other entry management functions.
> 
> We no longer need to include <linux/lockdep.h> in process.c, so the
> include directive is removed.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Will Deacon <will@kernel.org>
> ---

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 04/17] arm64: entry: move preempt logic to C
  2020-01-08 18:56 ` [PATCH 04/17] arm64: entry: move preempt logic to C Mark Rutland
@ 2020-01-09  6:43   ` Anshuman Khandual
  2020-01-09 12:22     ` Mark Rutland
  0 siblings, 1 reply; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09  6:43 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> Some of our preemption logic is in C, while a portion of it is in
> assembly. Let's pull the remainder of it into C so that it lives in one
> place.
> 
> At the same time, let's improve the comments regarding NMI preemption to
> make it clearer why we cannot preempt from NMIs.
> 
> Subsequent patches will convert the caller of el1_preempt() to C.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/entry-common.c | 18 +++++++++++++++++-
>  arch/arm64/kernel/entry.S        | 13 +------------
>  2 files changed, 18 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 4fa058453468..b93ca2148a20 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -8,6 +8,7 @@
>  #include <linux/context_tracking.h>
>  #include <linux/linkage.h>
>  #include <linux/lockdep.h>
> +#include <linux/preempt.h>
>  #include <linux/ptrace.h>
>  #include <linux/sched/debug.h>
>  #include <linux/thread_info.h>
> @@ -334,8 +335,23 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
>  NOKPROBE_SYMBOL(el0_sync_compat_handler);
>  #endif /* CONFIG_COMPAT */
>  
> -asmlinkage void __sched arm64_preempt_schedule_irq(void)
> +asmlinkage void __sched el1_preempt(void)
>  {
> +	if (!IS_ENABLED(CONFIG_PREEMPT) || preempt_count())
> +		return;

IS_ENABLED(CONFIG_PREEMPT) is not really required here, as the single
call site for el1_preempt() is still wrapped in CONFIG_PREEMPT, so
only one of the two checks is needed.

> +
> +	/*
> +	 * To avoid nesting NMIs and overflowing the stack, we must leave NMIs
> +	 * masked until the exception return. We want to context-switch with
> +	 * IRQs masked but NMIs enabled, so cannot preempt an NMI.
> +	 *
> +	 * PSTATE.{D,A,F} are cleared for IRQ and NMI by el1_irq().
> +	 * When gic_handle_irq() handles an NMI, it leaves PSTATE.I set.
> +	 * If anything is set in DAIF, this is an NMI.
> +	 */
> +	if (system_uses_irq_prio_masking() && read_sysreg(daif) != 0)

If the CONFIG_PREEMPT check above gets dropped, preempt_count() can be
moved here as well.

> +		return;
> +
>  	lockdep_assert_irqs_disabled();
>  
>  	/*
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 7c6a0a41676f..53ce1877a4aa 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -606,18 +606,7 @@ el1_irq:
>  	irq_handler
>  
>  #ifdef CONFIG_PREEMPT
> -	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
> -alternative_if ARM64_HAS_IRQ_PRIO_MASKING
> -	/*
> -	 * DA_F were cleared at start of handling. If anything is set in DAIF,
> -	 * we come back from an NMI, so skip preemption
> -	 */
> -	mrs	x0, daif
> -	orr	x24, x24, x0
> -alternative_else_nop_endif
> -	cbnz	x24, 1f				// preempt count != 0 || NMI return path
> -	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
> -1:
> +	bl	el1_preempt
>  #endif
>  
>  #ifdef CONFIG_ARM64_PSEUDO_NMI
> 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 05/17] arm64: entry: add a call_on_stack helper
  2020-01-08 18:56 ` [PATCH 05/17] arm64: entry: add a call_on_stack helper Mark Rutland
@ 2020-01-09  8:00   ` Anshuman Khandual
  2020-01-14 18:24     ` Mark Rutland
  2020-01-09 14:30   ` Laura Abbott
  1 sibling, 1 reply; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09  8:00 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> In some cases, we want to call a function from C code, using an
> alternative stack. Add a helper that we can use in such cases.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/exception.h |  2 ++
>  arch/arm64/kernel/entry.S          | 21 +++++++++++++++++++++
>  2 files changed, 23 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index b87c6e276ab1..a49038fa4faf 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -31,6 +31,8 @@ static inline u32 disr_to_esr(u64 disr)
>  	return esr;
>  }
>  
> +asmlinkage void call_on_stack(struct pt_regs *, void (*)(struct pt_regs *),
> +			      unsigned long);
>  asmlinkage void enter_from_user_mode(void);
>  void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
>  void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 53ce1877a4aa..184313c773ea 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -901,6 +901,27 @@ ENTRY(ret_from_fork)
>  ENDPROC(ret_from_fork)
>  NOKPROBE(ret_from_fork)
>  
> +/*
> + * x0 = argument to function

A small nit. Though the definition here itself does not limit the
argument type, it might be worth mentioning that it is a struct pt_regs,
as per the earlier declaration.

> + * x1 = function to call
> + * x2 = new stack pointer
> + */
> +ENTRY(call_on_stack)
> +	/* Create a frame record to save our LR and SP (implicit in FP) */
> +	stp	x29, x30, [sp, #-16]!
> +	mov	x29, sp
> +
> +	/* Move to the new stack and call the function there */
> +	mov	sp, x2
> +	blr	x1
> +
> +	/* Restore SP from the FP, FP and LR from the record, and return */
> +	mov	sp, x29
> +	ldp	x29, x30, [sp], #16
> +	ret
> +ENDPROC(call_on_stack)
> +NOKPROBE(call_on_stack)
> +
>  #ifdef CONFIG_ARM_SDE_INTERFACE
>  
>  #include <asm/sdei.h>
> 

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 07/17] arm64: entry: convert error entry to C
  2020-01-08 18:56 ` [PATCH 07/17] arm64: entry: convert error " Mark Rutland
@ 2020-01-09  9:12   ` Anshuman Khandual
  2020-01-09 12:49     ` Mark Rutland
  0 siblings, 1 reply; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09  9:12 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> Portions of our error entry logic are handled in C while other parts are
> handled in assembly. Let's migrate the bulk of it to C so that it's
> easier to follow and maintain.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/exception.h |  1 +
>  arch/arm64/kernel/entry-common.c   | 26 ++++++++++++++++++++++++++
>  arch/arm64/kernel/entry.S          | 19 ++++++-------------
>  arch/arm64/kernel/traps.c          |  2 +-
>  4 files changed, 34 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index a49038fa4faf..220a7c3971d8 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -51,5 +51,6 @@ void do_el0_svc(struct pt_regs *regs);
>  void do_el0_svc_compat(struct pt_regs *regs);
>  void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
>  			    struct pt_regs *regs);
> +void do_serror(struct pt_regs *regs, unsigned int esr);
>  
>  #endif	/* __ASM_EXCEPTION_H */
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 45155d9f72da..bf9d497e620c 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -442,3 +442,29 @@ asmlinkage void notrace el0_irq_handler(struct pt_regs *regs)
>  	trace_hardirqs_on();
>  }
>  NOKPROBE_SYMBOL(el0_irq_handler);
> +
> +asmlinkage void el1_error_handler(struct pt_regs *regs)
> +{
> +	unsigned long esr = read_sysreg(esr_el1);
> +
> +	if (system_uses_irq_prio_masking())
> +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
> +
> +	local_daif_restore(DAIF_ERRCTX);
> +	do_serror(regs, esr);
> +}
> +NOKPROBE_SYMBOL(el1_error_handler);
> +
> +asmlinkage void el0_error_handler(struct pt_regs *regs)
> +{
> +	unsigned long esr = read_sysreg(esr_el1);
> +
> +	if (system_uses_irq_prio_masking())
> +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
> +
> +	user_exit_irqoff();
> +	local_daif_restore(DAIF_ERRCTX);

Just being curious: does local_daif_restore(DAIF_ERRCTX) have the same
effect as the enable_dbg asm macro did previously?


> +	do_serror(regs, esr);
> +	local_daif_restore(DAIF_PROCCTX_NOIRQ);
> +}
> +NOKPROBE_SYMBOL(el0_error_handler);
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 55c4be1e996a..0c5117ef7c3c 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -559,7 +559,10 @@ ENDPROC(el0_irq_compat)
>  
>  el0_error_compat:
>  	kernel_entry 0, 32
> -	b	el0_error_naked
> +	mov	x0, sp
> +	bl	el0_error_handler
> +	b	ret_to_user
> +ENDPROC(el0_error_compat)
>  #endif
>  
>  	.align	6
> @@ -572,25 +575,15 @@ ENDPROC(el0_irq)
>  
>  el1_error:
>  	kernel_entry 1
> -	mrs	x1, esr_el1
> -	gic_prio_kentry_setup tmp=x2
> -	enable_dbg
>  	mov	x0, sp
> -	bl	do_serror
> +	bl	el1_error_handler
>  	kernel_exit 1
>  ENDPROC(el1_error)
>  
>  el0_error:
>  	kernel_entry 0
> -el0_error_naked:
> -	mrs	x25, esr_el1
> -	gic_prio_kentry_setup tmp=x2
> -	ct_user_exit_irqoff
> -	enable_dbg
>  	mov	x0, sp
> -	mov	x1, x25
> -	bl	do_serror
> -	enable_da_f
> +	bl	el0_error_handler
>  	b	ret_to_user
>  ENDPROC(el0_error)

The enable_da_f and ct_user_exit_irqoff macros could already be removed
here, although they eventually get dropped in a later patch.

>  
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 73caf35c2262..170e637bb87b 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -901,7 +901,7 @@ bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned int esr)
>  	}
>  }
>  
> -asmlinkage void do_serror(struct pt_regs *regs, unsigned int esr)
> +void do_serror(struct pt_regs *regs, unsigned int esr)
>  {
>  	const bool was_in_nmi = in_nmi();
>  
> 

Shouldn't we add NOKPROBE_SYMBOL() for this symbol?


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 08/17] arm64: entry: Split el0_sync_compat from el0_sync
  2020-01-08 18:56 ` [PATCH 08/17] arm64: entry: Split el0_sync_compat from el0_sync Mark Rutland
@ 2020-01-09  9:50   ` Anshuman Khandual
  0 siblings, 0 replies; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09  9:50 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> Currently we treat el0_sync_compat as if it's a portion of el0_sync,
> which is unlike all the other exception entry stubs. Let's split it out
> and give it its own ENDPROC(), so that we can treat it as a separate
> path entirely.
> 
> Reported-by: Mark Brown <broonie@kernel.org>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 09/17] arm64: entry: organise handler stubs consistently
  2020-01-08 18:56 ` [PATCH 09/17] arm64: entry: organise handler stubs consistently Mark Rutland
@ 2020-01-09 10:01   ` Anshuman Khandual
  0 siblings, 0 replies; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-09 10:01 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> The exception handler stubs are laid out in a random order, violating
> the EL1/EL0 split described by the comments. In an attempt to get more
> optimal cache usage for commonly-invoked handlers, some handlers are
> given special alignment while others are not.
> 
> To make things less surprising, this patch reorganises the handler stubs
> so they're in a consistent order, with the EL1 sync/irq/error stubs
> first, followed by the EL0 sync/irq/error stubs, then the EL0 compat
> sync/irq/error stubs. All the stubs are given the same alignment.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 04/17] arm64: entry: move preempt logic to C
  2020-01-09  6:43   ` Anshuman Khandual
@ 2020-01-09 12:22     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-09 12:22 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: keescook, maz, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, catalin.marinas, labbott, will,
	linux-arm-kernel, alex.popov

On Thu, Jan 09, 2020 at 12:13:13PM +0530, Anshuman Khandual wrote:
> On 01/09/2020 12:26 AM, Mark Rutland wrote:
> > Some of our preemption logic is in C, while a portion of it is in
> > assembly. Let's pull the remainder of it into C so that it lives in one
> > place.
> > 
> > At the same time, let's improve the comments regarding NMI preemption to
> > make it clearer why we cannot preempt from NMIs.
> > 
> > Subsequent patches will convert the caller of el1_preempt() to C.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: James Morse <james.morse@arm.com>
> > Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/kernel/entry-common.c | 18 +++++++++++++++++-
> >  arch/arm64/kernel/entry.S        | 13 +------------
> >  2 files changed, 18 insertions(+), 13 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> > index 4fa058453468..b93ca2148a20 100644
> > --- a/arch/arm64/kernel/entry-common.c
> > +++ b/arch/arm64/kernel/entry-common.c
> > @@ -8,6 +8,7 @@
> >  #include <linux/context_tracking.h>
> >  #include <linux/linkage.h>
> >  #include <linux/lockdep.h>
> > +#include <linux/preempt.h>
> >  #include <linux/ptrace.h>
> >  #include <linux/sched/debug.h>
> >  #include <linux/thread_info.h>
> > @@ -334,8 +335,23 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
> >  NOKPROBE_SYMBOL(el0_sync_compat_handler);
> >  #endif /* CONFIG_COMPAT */
> >  
> > -asmlinkage void __sched arm64_preempt_schedule_irq(void)
> > +asmlinkage void __sched el1_preempt(void)
> >  {
> > +	if (!IS_ENABLED(CONFIG_PREEMPT) || preempt_count())
> > +		return;
> 
> IS_ENABLED(CONFIG_PREEMPT) is not really required here, as the single
> call site for el1_preempt() is still wrapped in CONFIG_PREEMPT, so
> only one of the two checks is needed.

Using both was deliberate.

I wanted to avoid ifdeffery here, but also wanted to avoid the
unnecessary bits being built for !CONFIG_PREEMPT builds, in both this
function and the caller.

In a subsequent patch this'll be made static, and called
unconditionally, where we'll definitely need to IS_ENABLED(). I'll
update the commit message to make this part explicit.

> 
> > +
> > +	/*
> > +	 * To avoid nesting NMIs and overflowing the stack, we must leave NMIs
> > +	 * masked until the exception return. We want to context-switch with
> > +	 * IRQs masked but NMIs enabled, so cannot preempt an NMI.
> > +	 *
> > +	 * PSTATE.{D,A,F} are cleared for IRQ and NMI by el1_irq().
> > +	 * When gic_handle_irq() handles an NMI, it leaves PSTATE.I set.
> > +	 * If anything is set in DAIF, this is an NMI.
> > +	 */
> > +	if (system_uses_irq_prio_masking() && read_sysreg(daif) != 0)
> 
> If the CONFIG_PREEMPT check above gets dropped, preempt_count() can be
> moved here as well.

As above, I'm going to leave this as-is.

I also think it's worth keeping this separate from the early return for
!preempt_count() due to the comment, which is specific to the GIC prio
masking logic.

Thanks,
Mark.

> 
> > +		return;
> > +
> >  	lockdep_assert_irqs_disabled();
> >  
> >  	/*
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index 7c6a0a41676f..53ce1877a4aa 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -606,18 +606,7 @@ el1_irq:
> >  	irq_handler
> >  
> >  #ifdef CONFIG_PREEMPT
> > -	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
> > -alternative_if ARM64_HAS_IRQ_PRIO_MASKING
> > -	/*
> > -	 * DA_F were cleared at start of handling. If anything is set in DAIF,
> > -	 * we come back from an NMI, so skip preemption
> > -	 */
> > -	mrs	x0, daif
> > -	orr	x24, x24, x0
> > -alternative_else_nop_endif
> > -	cbnz	x24, 1f				// preempt count != 0 || NMI return path
> > -	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
> > -1:
> > +	bl	el1_preempt
> >  #endif
> >  
> >  #ifdef CONFIG_ARM64_PSEUDO_NMI
> > 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 07/17] arm64: entry: convert error entry to C
  2020-01-09  9:12   ` Anshuman Khandual
@ 2020-01-09 12:49     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-09 12:49 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: keescook, maz, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, catalin.marinas, labbott, will,
	linux-arm-kernel, alex.popov

On Thu, Jan 09, 2020 at 02:42:23PM +0530, Anshuman Khandual wrote:
> 
> 
> On 01/09/2020 12:26 AM, Mark Rutland wrote:
> > Portions of our error entry logic are handled in C while other parts are
> > handled in assembly. Let's migrate the bulk of it to C so that it's
> > easier to follow and maintain.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: James Morse <james.morse@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/exception.h |  1 +
> >  arch/arm64/kernel/entry-common.c   | 26 ++++++++++++++++++++++++++
> >  arch/arm64/kernel/entry.S          | 19 ++++++-------------
> >  arch/arm64/kernel/traps.c          |  2 +-
> >  4 files changed, 34 insertions(+), 14 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> > index a49038fa4faf..220a7c3971d8 100644
> > --- a/arch/arm64/include/asm/exception.h
> > +++ b/arch/arm64/include/asm/exception.h
> > @@ -51,5 +51,6 @@ void do_el0_svc(struct pt_regs *regs);
> >  void do_el0_svc_compat(struct pt_regs *regs);
> >  void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
> >  			    struct pt_regs *regs);
> > +void do_serror(struct pt_regs *regs, unsigned int esr);
> >  
> >  #endif	/* __ASM_EXCEPTION_H */
> > diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> > index 45155d9f72da..bf9d497e620c 100644
> > --- a/arch/arm64/kernel/entry-common.c
> > +++ b/arch/arm64/kernel/entry-common.c
> > @@ -442,3 +442,29 @@ asmlinkage void notrace el0_irq_handler(struct pt_regs *regs)
> >  	trace_hardirqs_on();
> >  }
> >  NOKPROBE_SYMBOL(el0_irq_handler);
> > +
> > +asmlinkage void el1_error_handler(struct pt_regs *regs)
> > +{
> > +	unsigned long esr = read_sysreg(esr_el1);
> > +
> > +	if (system_uses_irq_prio_masking())
> > +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
> > +
> > +	local_daif_restore(DAIF_ERRCTX);
> > +	do_serror(regs, esr);
> > +}
> > +NOKPROBE_SYMBOL(el1_error_handler);
> > +
> > +asmlinkage void el0_error_handler(struct pt_regs *regs)
> > +{
> > +	unsigned long esr = read_sysreg(esr_el1);
> > +
> > +	if (system_uses_irq_prio_masking())
> > +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
> > +
> > +	user_exit_irqoff();
> > +	local_daif_restore(DAIF_ERRCTX);
> 
> Just being curious: does local_daif_restore(DAIF_ERRCTX) have the same
> effect as the enable_dbg asm macro did previously?

Not quite: DAIF_ERRCTX unmasks debug and FIQ, but I believe that was an
oversight when we tightened up the DAIF nesting rules back in commit:

  65be7a1b799f11ff ("arm64: introduce an order for exceptions")

... where the commit message says:

| FIQ is never expected, but we mask it when we mask debug exceptions,
| and unmask it at all other times.

I'll call that out explicitly in the commit message.
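
For reference, the pseudo-DAIF contexts look roughly like this
(paraphrased from asm/daifflags.h as of v5.5; note that D and F are
absent from DAIF_ERRCTX):

	#define DAIF_PROCCTX		0
	#define DAIF_PROCCTX_NOIRQ	PSR_I_BIT
	#define DAIF_ERRCTX		(PSR_I_BIT | PSR_A_BIT)
	#define DAIF_MASK		(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)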

> > +	do_serror(regs, esr);
> > +	local_daif_restore(DAIF_PROCCTX_NOIRQ);
> > +}
> > +NOKPROBE_SYMBOL(el0_error_handler);
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index 55c4be1e996a..0c5117ef7c3c 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -559,7 +559,10 @@ ENDPROC(el0_irq_compat)
> >  
> >  el0_error_compat:
> >  	kernel_entry 0, 32
> > -	b	el0_error_naked
> > +	mov	x0, sp
> > +	bl	el0_error_handler
> > +	b	ret_to_user
> > +ENDPROC(el0_error_compat)
> >  #endif
> >  
> >  	.align	6
> > @@ -572,25 +575,15 @@ ENDPROC(el0_irq)
> >  
> >  el1_error:
> >  	kernel_entry 1
> > -	mrs	x1, esr_el1
> > -	gic_prio_kentry_setup tmp=x2
> > -	enable_dbg
> >  	mov	x0, sp
> > -	bl	do_serror
> > +	bl	el1_error_handler
> >  	kernel_exit 1
> >  ENDPROC(el1_error)
> >  
> >  el0_error:
> >  	kernel_entry 0
> > -el0_error_naked:
> > -	mrs	x25, esr_el1
> > -	gic_prio_kentry_setup tmp=x2
> > -	ct_user_exit_irqoff
> > -	enable_dbg
> >  	mov	x0, sp
> > -	mov	x1, x25
> > -	bl	do_serror
> > -	enable_da_f
> > +	bl	el0_error_handler
> >  	b	ret_to_user
> >  ENDPROC(el0_error)
> 
> The enable_da_f and ct_user_exit_irqoff macros could already be removed
> here, although they eventually get dropped in a later patch.

True. I'll fold those removals here.

I'll prepend a patch to remove inherit_daif, too.

> > diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> > index 73caf35c2262..170e637bb87b 100644
> > --- a/arch/arm64/kernel/traps.c
> > +++ b/arch/arm64/kernel/traps.c
> > @@ -901,7 +901,7 @@ bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned int esr)
> >  	}
> >  }
> >  
> > -asmlinkage void do_serror(struct pt_regs *regs, unsigned int esr)
> > +void do_serror(struct pt_regs *regs, unsigned int esr)
> >  {
> >  	const bool was_in_nmi = in_nmi();
> >  
> > 
> 
> Shouldn't we add NOKPROBE_SYMBOL() for this symbol?

At this point we've already sampled the exception registers and balanced
the IRQ flags, so we don't need to do so, for the same reasons as for
functions in entry-common.c.
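
(If we did want that, it would just be a matter of adding:

	NOKPROBE_SYMBOL(do_serror);

... in traps.c, as we do for the handlers in entry-common.c.)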

If that needs to change, I think that should be a separate patch, as
nothing has changed from the PoV of do_serror() other than asmlinkage.

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 05/17] arm64: entry: add a call_on_stack helper
  2020-01-08 18:56 ` [PATCH 05/17] arm64: entry: add a call_on_stack helper Mark Rutland
  2020-01-09  8:00   ` Anshuman Khandual
@ 2020-01-09 14:30   ` Laura Abbott
  2020-01-09 14:46     ` Mark Rutland
  1 sibling, 1 reply; 41+ messages in thread
From: Laura Abbott @ 2020-01-09 14:30 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, robin.murphy, julien.thierry.kdev, alex.popov

On 1/8/20 1:56 PM, Mark Rutland wrote:
> In some cases, we want to call a function from C code, using an
> alternative stack. Add a helper that we can use in such cases.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>   arch/arm64/include/asm/exception.h |  2 ++
>   arch/arm64/kernel/entry.S          | 21 +++++++++++++++++++++
>   2 files changed, 23 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index b87c6e276ab1..a49038fa4faf 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -31,6 +31,8 @@ static inline u32 disr_to_esr(u64 disr)
>   	return esr;
>   }
>   
> +asmlinkage void call_on_stack(struct pt_regs *, void (*)(struct pt_regs *),
> +			      unsigned long);
>   asmlinkage void enter_from_user_mode(void);
>   void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
>   void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 53ce1877a4aa..184313c773ea 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -901,6 +901,27 @@ ENTRY(ret_from_fork)
>   ENDPROC(ret_from_fork)
>   NOKPROBE(ret_from_fork)
>   
> +/*
> + * x0 = argument to function
> + * x1 = function to call
> + * x2 = new stack pointer
> + */
> +ENTRY(call_on_stack)
> +	/* Create a frame record to save our LR and SP (implicit in FP) */
> +	stp	x29, x30, [sp, #-16]!
> +	mov	x29, sp
> +
> +	/* Move to the new stack and call the function there */
> +	mov	sp, x2
> +	blr	x1
> +
> +	/* Restore SP from the FP, FP and LR from the record, and return */
> +	mov	sp, x29
> +	ldp	x29, x30, [sp], #16
> +	ret
> +ENDPROC(call_on_stack)
> +NOKPROBE(call_on_stack)
> +
>   #ifdef CONFIG_ARM_SDE_INTERFACE
>   
>   #include <asm/sdei.h>
> 

I'm a little worried this makes a very tempting gadget for
attackers to use. Maybe future security features will
make this less vulnerable?

Thanks,
Laura



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 05/17] arm64: entry: add a call_on_stack helper
  2020-01-09 14:30   ` Laura Abbott
@ 2020-01-09 14:46     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-09 14:46 UTC (permalink / raw)
  To: Laura Abbott
  Cc: keescook, catalin.marinas, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, maz, will, linux-arm-kernel, alex.popov

On Thu, Jan 09, 2020 at 09:30:13AM -0500, Laura Abbott wrote:
> On 1/8/20 1:56 PM, Mark Rutland wrote:
> > In some cases, we want to call a function from C code, using an
> > alternative stack. Add a helper that we can use in such cases.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: James Morse <james.morse@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >   arch/arm64/include/asm/exception.h |  2 ++
> >   arch/arm64/kernel/entry.S          | 21 +++++++++++++++++++++
> >   2 files changed, 23 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> > index b87c6e276ab1..a49038fa4faf 100644
> > --- a/arch/arm64/include/asm/exception.h
> > +++ b/arch/arm64/include/asm/exception.h
> > @@ -31,6 +31,8 @@ static inline u32 disr_to_esr(u64 disr)
> >   	return esr;
> >   }
> > +asmlinkage void call_on_stack(struct pt_regs *, void (*)(struct pt_regs *),
> > +			      unsigned long);
> >   asmlinkage void enter_from_user_mode(void);
> >   void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> >   void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index 53ce1877a4aa..184313c773ea 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -901,6 +901,27 @@ ENTRY(ret_from_fork)
> >   ENDPROC(ret_from_fork)
> >   NOKPROBE(ret_from_fork)
> > +/*
> > + * x0 = argument to function
> > + * x1 = function to call
> > + * x2 = new stack pointer
> > + */
> > +ENTRY(call_on_stack)
> > +	/* Create a frame record to save our LR and SP (implicit in FP) */
> > +	stp	x29, x30, [sp, #-16]!
> > +	mov	x29, sp
> > +
> > +	/* Move to the new stack and call the function there */
> > +	mov	sp, x2
> > +	blr	x1
> > +
> > +	/* Restore SP from the FP, FP and LR from the record, and return */
> > +	mov	sp, x29
> > +	ldp	x29, x30, [sp], #16
> > +	ret
> > +ENDPROC(call_on_stack)
> > +NOKPROBE(call_on_stack)
> > +
> >   #ifdef CONFIG_ARM_SDE_INTERFACE
> >   #include <asm/sdei.h>
> > 
> 
> I'm a little worried this makes a very tempting gadget for
> attackers to use. Maybe future security features will
> make this less vulnerable?

With BTI we'll have to add a target identifier to the start of the
function, but that's about it.

As a gadget, I think it's similar to the existing cpu_switch_to(). If we
could protect assembly functions with CFI somehow that'd be great for
both.

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 13/17] arm64: entry: move common el0 entry/return work to C
  2020-01-08 18:56 ` [PATCH 13/17] arm64: entry: move common el0 entry/return work to C Mark Rutland
@ 2020-01-09 15:19   ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-09 15:19 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov

On Wed, Jan 08, 2020 at 06:56:30PM +0000, Mark Rutland wrote:
> Portions of common EL0 exception entry/return logic are handled in C
> while other parts are handled in assembly. Let's migrate the bulk of it
> to C so that it's easier to follow and maintain.
> 
> This patch moves the ret_to_user/work_pending logic to C. As that
> handled enabling singlestep for userspace, the matching disable is
> similarly factored out of the entry code. Additionally user_enter() is
> factored out of kernel_exit as all the corresponding user_enter() calls
> have already been converted to C.
> 
> Rather than add tedious boilerplate to each top-level exception handler
> to perform this entry/return work, a new EL0_HANDLER() handles this
> automatically. This takes the full name of each handler to keep them
> easy to find with grep, and also takes a name for the pt_regs argument
> so that we don't have a confusing implicit variable.
> 
> Since local_daif_mask() handles the GIC priority, we don't need to mess
> with this explicitly as we did in the old assembly, and the now unused
> gic_prio_kentry_setup macro can be removed.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/assembler.h | 18 ----------
>  arch/arm64/include/asm/exception.h |  1 +
>  arch/arm64/kernel/entry-common.c   | 67 +++++++++++++++++++++++++++++++++-----
>  arch/arm64/kernel/entry.S          | 56 ++-----------------------------
>  arch/arm64/kernel/signal.c         |  3 +-
>  5 files changed, 64 insertions(+), 81 deletions(-)

> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index 220a7c3971d8..7f367d720ca4 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -52,5 +52,6 @@ void do_el0_svc_compat(struct pt_regs *regs);
>  void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
>  			    struct pt_regs *regs);
>  void do_serror(struct pt_regs *regs, unsigned int esr);
> +void do_notify_resume(struct pt_regs *regs,  unsigned long thread_flags);

I've just realised it's probably better to move do_notify_resume() into
entry-common.c and make it static, so I'll spin a preparatory patch
which moves it.
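
i.e. a sketch of the intended preparatory change:

	/* entry-common.c */
	static void do_notify_resume(struct pt_regs *regs,
				     unsigned long thread_flags)
	{
		/* ... body as-is from signal.c ... */
	}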

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 10/17] arm64: entry: consolidate EL1 return paths
  2020-01-08 18:56 ` [PATCH 10/17] arm64: entry: consolidate EL1 return paths Mark Rutland
@ 2020-01-10  3:39   ` Anshuman Khandual
  2020-01-10 16:02     ` Mark Rutland
  0 siblings, 1 reply; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-10  3:39 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> Each of the EL1 exception handler stubs has an identical copy of the
> kernel_exit code. While each handler needs its own kernel_entry
> sequence, there's no need to duplicate this for each handler, and we can
> consolidate them for better I-cache usage.
> 
> This patch makes the EL1 handlers all use a common kernel_exit stub
> called ret_to_kernel, matching the ret_to_user stub used by EL0
> handlers.
> 
> As with the handlers, ret_to_kenerl is aligned for better I-cache and

Small nit, s/ret_to_kenerl/ret_to_kernel

> brapnch predictor utilization, and for consistency the same alignment is

Small nit, s/brapnch/branch

> applied to ret_to_user.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  arch/arm64/kernel/entry.S | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 35a8c56b0582..e76326feb1da 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -519,7 +519,7 @@ el1_sync:
>  	kernel_entry 1
>  	mov	x0, sp
>  	bl	el1_sync_handler
> -	kernel_exit 1
> +	b	ret_to_kernel
>  ENDPROC(el1_sync)
>  
>  	.align	6
> @@ -527,7 +527,7 @@ el1_irq:
>  	kernel_entry 1
>  	mov	x0, sp
>  	bl	el1_irq_handler
> -	kernel_exit 1
> +	b	ret_to_kernel
>  ENDPROC(el1_irq)
>  
>  	.align	6
> @@ -535,10 +535,18 @@ el1_error:
>  	kernel_entry 1
>  	mov	x0, sp
>  	bl	el1_error_handler
> -	kernel_exit 1
> +	b	ret_to_kernel
>  ENDPROC(el1_error)
>  
>  /*
> + * Common EL1 exception return path
> + */
> +	.align 6
> +ret_to_kernel:
> +	kernel_exit 1
> +ENDPROC(ret_to_kernel)
> +
> +/*
>   * EL0 mode handlers.
>   */
>  	.align	6
> @@ -606,6 +614,7 @@ work_pending:
>  /*
>   * "slow" syscall return path.
>   */
> +	.align 6
>  ret_to_user:
>  	disable_daif
>  	gic_prio_kentry_setup tmp=x3
> 


* Re: [PATCH 11/17] stackleak: allow C to call stackleak_erase()
  2020-01-08 18:56 ` [PATCH 11/17] stackleak: allow C to call stackleak_erase() Mark Rutland
@ 2020-01-10  3:45   ` Anshuman Khandual
  2020-01-10 16:07     ` Mark Rutland
  2020-01-27 23:00   ` Kees Cook
  1 sibling, 1 reply; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-10  3:45 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> Currently, stackleak_erase() has no prototype in a header file, and has
> to be called directly from low-level architecture entry assembly code.
> This necessitates ifdeffery and complicates the entry assembly.
> 
> To ameliorate matters, let's provide a prototype so that architecture
> code can call stackleak_erase() from slightly higher-level C code used as
> part of the entry flow. This makes things easier to read and maintain.

Does this need to be a separate patch or should it be folded into
"[PATCH 13/17] arm64: entry: move common el0 entry/return work to C"
which actually adds the first C call site for this function?

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Popov <alex.popov@linux.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  include/linux/stackleak.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/include/linux/stackleak.h b/include/linux/stackleak.h
> index 3d5c3271a9a8..2b09d3759c76 100644
> --- a/include/linux/stackleak.h
> +++ b/include/linux/stackleak.h
> @@ -15,6 +15,8 @@
>  #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
>  #include <asm/stacktrace.h>
>  
> +asmlinkage void notrace stackleak_erase(void);
> +
>  static inline void stackleak_task_init(struct task_struct *t)
>  {
>  	t->lowest_stack = (unsigned long)end_of_stack(t) + sizeof(unsigned long);
> @@ -30,6 +32,7 @@ int stack_erasing_sysctl(struct ctl_table *table, int write,
>  
>  #else /* !CONFIG_GCC_PLUGIN_STACKLEAK */
>  static inline void stackleak_task_init(struct task_struct *t) { }
> +static inline void stackleak_erase(void) { }
>  #endif
>  
>  #endif
> 


* Re: [PATCH 12/17] arm64: debug-monitors: refactor MDSCR manipulation
  2020-01-08 18:56 ` [PATCH 12/17] arm64: debug-monitors: refactor MDSCR manipulation Mark Rutland
@ 2020-01-10  4:35   ` Anshuman Khandual
  2020-01-10 16:09     ` Mark Rutland
  0 siblings, 1 reply; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-10  4:35 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> When we convert the ret_to_user/work_pending code to C, we're going to
> want to poke the MDSCR to enable/disable single-step. Let's factor out
> the existing code for this from debug-monitors.c.
> 
> At the same time, we can make use of {read,write}_sysreg() directly, and
> get rid of the mdscr_{read,write} wrappers.
> 
> The existing code masked DAIF when manipulating MDSCR_EL1, but this
> should not be necessary. Exceptions can be taken immediately before DAIF
> is masked, and given the lack of an ISB can also be taken after DAIF is
> unmasked as writes to DAIF are only self-synchronizing and not
> context-synchronizing in general. We may want to add an ISB to ensure
> that updates to MDSCR have taken effect, however.

Any reason this patch chose not to add that ISB for now after writing
the mdscr_el1 register via sysreg_clear_set()?

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/debug-monitors.h | 10 ++++++++++
>  arch/arm64/kernel/debug-monitors.c      | 32 +++++++-------------------------
>  2 files changed, 17 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
> index 7619f473155f..342867e50c54 100644
> --- a/arch/arm64/include/asm/debug-monitors.h
> +++ b/arch/arm64/include/asm/debug-monitors.h
> @@ -107,6 +107,16 @@ enum dbg_active_el {
>  void enable_debug_monitors(enum dbg_active_el el);
>  void disable_debug_monitors(enum dbg_active_el el);
>  
> +static __always_inline void __enable_single_step_nosync(void)
> +{
> +	sysreg_clear_set(mdscr_el1, 0, DBG_MDSCR_SS);
> +}
> +
> +static __always_inline void __disable_single_step_nosync(void)
> +{
> +	sysreg_clear_set(mdscr_el1, DBG_MDSCR_SS, 0);
> +}
> +
>  void user_rewind_single_step(struct task_struct *task);
>  void user_fastforward_single_step(struct task_struct *task);
>  
> diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
> index 48222a4760c2..fa2d4145bd07 100644
> --- a/arch/arm64/kernel/debug-monitors.c
> +++ b/arch/arm64/kernel/debug-monitors.c
> @@ -32,24 +32,6 @@ u8 debug_monitors_arch(void)
>  }
>  
>  /*
> - * MDSCR access routines.
> - */
> -static void mdscr_write(u32 mdscr)
> -{
> -	unsigned long flags;
> -	flags = local_daif_save();
> -	write_sysreg(mdscr, mdscr_el1);
> -	local_daif_restore(flags);
> -}
> -NOKPROBE_SYMBOL(mdscr_write);
> -
> -static u32 mdscr_read(void)
> -{
> -	return read_sysreg(mdscr_el1);
> -}
> -NOKPROBE_SYMBOL(mdscr_read);
> -
> -/*
>   * Allow root to disable self-hosted debug from userspace.
>   * This is useful if you want to connect an external JTAG debugger.
>   */
> @@ -91,9 +73,9 @@ void enable_debug_monitors(enum dbg_active_el el)
>  		enable |= DBG_MDSCR_KDE;
>  
>  	if (enable && debug_enabled) {
> -		mdscr = mdscr_read();
> +		mdscr = read_sysreg(mdscr_el1);
>  		mdscr |= enable;
> -		mdscr_write(mdscr);
> +		write_sysreg(mdscr, mdscr_el1);
>  	}
>  }
>  NOKPROBE_SYMBOL(enable_debug_monitors);
> @@ -112,9 +94,9 @@ void disable_debug_monitors(enum dbg_active_el el)
>  		disable &= ~DBG_MDSCR_KDE;
>  
>  	if (disable) {
> -		mdscr = mdscr_read();
> +		mdscr = read_sysreg(mdscr_el1);
>  		mdscr &= disable;
> -		mdscr_write(mdscr);
> +		write_sysreg(mdscr, mdscr_el1);
>  	}
>  }
>  NOKPROBE_SYMBOL(disable_debug_monitors);
> @@ -409,7 +391,7 @@ void kernel_enable_single_step(struct pt_regs *regs)
>  {
>  	WARN_ON(!irqs_disabled());
>  	set_regs_spsr_ss(regs);
> -	mdscr_write(mdscr_read() | DBG_MDSCR_SS);
> +	__enable_single_step_nosync();
>  	enable_debug_monitors(DBG_ACTIVE_EL1);
>  }
>  NOKPROBE_SYMBOL(kernel_enable_single_step);
> @@ -417,7 +399,7 @@ NOKPROBE_SYMBOL(kernel_enable_single_step);
>  void kernel_disable_single_step(void)
>  {
>  	WARN_ON(!irqs_disabled());
> -	mdscr_write(mdscr_read() & ~DBG_MDSCR_SS);
> +	__disable_single_step_nosync();
>  	disable_debug_monitors(DBG_ACTIVE_EL1);
>  }
>  NOKPROBE_SYMBOL(kernel_disable_single_step);
> @@ -425,7 +407,7 @@ NOKPROBE_SYMBOL(kernel_disable_single_step);
>  int kernel_active_single_step(void)
>  {
>  	WARN_ON(!irqs_disabled());
> -	return mdscr_read() & DBG_MDSCR_SS;
> +	return read_sysreg(mdscr_el1) & DBG_MDSCR_SS;
>  }
>  NOKPROBE_SYMBOL(kernel_active_single_step);
>  
> 


* Re: [PATCH 14/17] arm64: entry: move NO_SYSCALL setup to C
  2020-01-08 18:56 ` [PATCH 14/17] arm64: entry: move NO_SYSCALL setup " Mark Rutland
@ 2020-01-10  5:37   ` Anshuman Khandual
  0 siblings, 0 replies; 41+ messages in thread
From: Anshuman Khandual @ 2020-01-10  5:37 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel, catalin.marinas, will, james.morse
  Cc: keescook, maz, broonie, labbott, robin.murphy,
	julien.thierry.kdev, alex.popov



On 01/09/2020 12:26 AM, Mark Rutland wrote:
> In the entry assembly we set up pt_regs::syscallno to NO_SYSCALL so that
> any ptrace calls will see a sensible value. For real syscalls, the
> actual syscall number is set up in C code, in do_el0_svc or
> do_el0_svc_compat.
> 
> Given that tracing isn't performed until the usual EL0 entry work is
> performed, we can move the default syscallno setup to C code, making
> things simpler and more legible.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


* Re: [PATCH 10/17] arm64: entry: consolidate EL1 return paths
  2020-01-10  3:39   ` Anshuman Khandual
@ 2020-01-10 16:02     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-10 16:02 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: keescook, maz, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, catalin.marinas, labbott, will,
	linux-arm-kernel, alex.popov

On Fri, Jan 10, 2020 at 09:09:03AM +0530, Anshuman Khandual wrote:
> 
> 
> On 01/09/2020 12:26 AM, Mark Rutland wrote:
> > Each of the EL1 exception handler stubs has an identical copy of the
> > kernel_exit code. While each handler needs its own kernel_entry
> > sequence, there's no need to duplicate this for each handler, and we can
> > consolidate them for better I-cache usage.
> > 
> > This patch makes the EL1 handlers all use a common kernel_exit stub
> > called ret_to_kernel, matching the ret_to_user stub used by EL0
> > handlers.
> > 
> > As with the handlers, ret_to_kenerl is aligned for better I-cache and
> 
> Small nit, s/ret_to_kenerl/ret_to_kernel
> 
> > brapnch predictor utilization, and for consistency the same alignment is
> 
> Small nit, s/brapnch/branch

I really need to improve my proofreading; thanks for the corrections!

Mark.

> > applied to ret_to_user.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: James Morse <james.morse@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> 
> > ---
> >  arch/arm64/kernel/entry.S | 15 ++++++++++++---
> >  1 file changed, 12 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index 35a8c56b0582..e76326feb1da 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -519,7 +519,7 @@ el1_sync:
> >  	kernel_entry 1
> >  	mov	x0, sp
> >  	bl	el1_sync_handler
> > -	kernel_exit 1
> > +	b	ret_to_kernel
> >  ENDPROC(el1_sync)
> >  
> >  	.align	6
> > @@ -527,7 +527,7 @@ el1_irq:
> >  	kernel_entry 1
> >  	mov	x0, sp
> >  	bl	el1_irq_handler
> > -	kernel_exit 1
> > +	b	ret_to_kernel
> >  ENDPROC(el1_irq)
> >  
> >  	.align	6
> > @@ -535,10 +535,18 @@ el1_error:
> >  	kernel_entry 1
> >  	mov	x0, sp
> >  	bl	el1_error_handler
> > -	kernel_exit 1
> > +	b	ret_to_kernel
> >  ENDPROC(el1_error)
> >  
> >  /*
> > + * Common EL1 exception return path
> > + */
> > +	.align 6
> > +ret_to_kernel:
> > +	kernel_exit 1
> > +ENDPROC(ret_to_kernel)
> > +
> > +/*
> >   * EL0 mode handlers.
> >   */
> >  	.align	6
> > @@ -606,6 +614,7 @@ work_pending:
> >  /*
> >   * "slow" syscall return path.
> >   */
> > +	.align 6
> >  ret_to_user:
> >  	disable_daif
> >  	gic_prio_kentry_setup tmp=x3
> > 


* Re: [PATCH 11/17] stackleak: allow C to call stackleak_erase()
  2020-01-10  3:45   ` Anshuman Khandual
@ 2020-01-10 16:07     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-10 16:07 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: keescook, maz, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, catalin.marinas, labbott, will,
	linux-arm-kernel, alex.popov

On Fri, Jan 10, 2020 at 09:15:31AM +0530, Anshuman Khandual wrote:
> 
> 
> On 01/09/2020 12:26 AM, Mark Rutland wrote:
> > Currently, stackleak_erase() has no prototype in a header file, and has
> > to be called directly from low-level architecture entry assembly code.
> > This necessitates ifdeffery and complicates the entry assembly.
> > 
> > To ameliorate matters, let's provide a prototype so that architecture
> > code can call stackleak_erase() from slightly higher-level C code used as
> > part of the entry flow. This makes things easier to read and maintain.
> 
> Does this need to be a separate patch or should it be folded into
> "[PATCH 13/17] arm64: entry: move common el0 entry/return work to C"
> which actually adds the first C call site for this function?

This could be folded into patch 13, but I split it out since it touches
architecture-independent code, and I wanted to make the change obvious to
the relevant maintainers. We've done something similar elsewhere (e.g. for
the kasan bits we have to call from assembly).

I'd prefer to leave this separate for now.

Thanks,
Mark.

> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Alexander Popov <alex.popov@linux.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Kees Cook <keescook@chromium.org>
> > Cc: Laura Abbott <labbott@redhat.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  include/linux/stackleak.h | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/include/linux/stackleak.h b/include/linux/stackleak.h
> > index 3d5c3271a9a8..2b09d3759c76 100644
> > --- a/include/linux/stackleak.h
> > +++ b/include/linux/stackleak.h
> > @@ -15,6 +15,8 @@
> >  #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
> >  #include <asm/stacktrace.h>
> >  
> > +asmlinkage void notrace stackleak_erase(void);
> > +
> >  static inline void stackleak_task_init(struct task_struct *t)
> >  {
> >  	t->lowest_stack = (unsigned long)end_of_stack(t) + sizeof(unsigned long);
> > @@ -30,6 +32,7 @@ int stack_erasing_sysctl(struct ctl_table *table, int write,
> >  
> >  #else /* !CONFIG_GCC_PLUGIN_STACKLEAK */
> >  static inline void stackleak_task_init(struct task_struct *t) { }
> > +static inline void stackleak_erase(void) { }
> >  #endif
> >  
> >  #endif
> > 
> 

* Re: [PATCH 12/17] arm64: debug-monitors: refactor MDSCR manipulation
  2020-01-10  4:35   ` Anshuman Khandual
@ 2020-01-10 16:09     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-10 16:09 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: keescook, maz, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, catalin.marinas, labbott, will,
	linux-arm-kernel, alex.popov

On Fri, Jan 10, 2020 at 10:05:48AM +0530, Anshuman Khandual wrote:
> 
> 
> On 01/09/2020 12:26 AM, Mark Rutland wrote:
> > When we convert the ret_to_user/work_pending code to C, we're going to
> > want to poke the MDSCR to enable/disable single-step. Let's factor out
> > the existing code for this from debug-monitors.c.
> > 
> > At the same time, we can make use of {read,write}_sysreg() directly, and
> > get rid of the mdscr_{read,write} wrappers.
> > 
> > The existing code masked DAIF when manipulating MDSCR_EL1, but this
> > should not be necessary. Exceptions can be taken immediately before DAIF
> > is masked, and given the lack of an ISB can also be taken after DAIF is
> > unmasked as writes to DAIF are only self-synchronizing and not
> > context-synchronizing in general. We may want to add an ISB to ensure
> > that updates to MDSCR have taken effect, however.
> 
> Any reason this patch chose not to add that ISB for now after writing
> the mdscr_el1 register via sysreg_clear_set()?

I didn't want to make that functional change without justification. For
example, the ISB wouldn't be needed for changes that only affect
userspace.
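
If a caller does turn up that needs the update synchronized, I'd expect
a wrapper along these lines (the _sync name is just a placeholder):

static __always_inline void __enable_single_step_sync(void)
{
	__enable_single_step_nosync();
	isb();	/* ensure the MDSCR_EL1 update has taken effect locally */
}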

Thanks,
Mark.

> 
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/debug-monitors.h | 10 ++++++++++
> >  arch/arm64/kernel/debug-monitors.c      | 32 +++++++-------------------------
> >  2 files changed, 17 insertions(+), 25 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
> > index 7619f473155f..342867e50c54 100644
> > --- a/arch/arm64/include/asm/debug-monitors.h
> > +++ b/arch/arm64/include/asm/debug-monitors.h
> > @@ -107,6 +107,16 @@ enum dbg_active_el {
> >  void enable_debug_monitors(enum dbg_active_el el);
> >  void disable_debug_monitors(enum dbg_active_el el);
> >  
> > +static __always_inline void __enable_single_step_nosync(void)
> > +{
> > +	sysreg_clear_set(mdscr_el1, 0, DBG_MDSCR_SS);
> > +}
> > +
> > +static __always_inline void __disable_single_step_nosync(void)
> > +{
> > +	sysreg_clear_set(mdscr_el1, DBG_MDSCR_SS, 0);
> > +}
> > +
> >  void user_rewind_single_step(struct task_struct *task);
> >  void user_fastforward_single_step(struct task_struct *task);
> >  
> > diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
> > index 48222a4760c2..fa2d4145bd07 100644
> > --- a/arch/arm64/kernel/debug-monitors.c
> > +++ b/arch/arm64/kernel/debug-monitors.c
> > @@ -32,24 +32,6 @@ u8 debug_monitors_arch(void)
> >  }
> >  
> >  /*
> > - * MDSCR access routines.
> > - */
> > -static void mdscr_write(u32 mdscr)
> > -{
> > -	unsigned long flags;
> > -	flags = local_daif_save();
> > -	write_sysreg(mdscr, mdscr_el1);
> > -	local_daif_restore(flags);
> > -}
> > -NOKPROBE_SYMBOL(mdscr_write);
> > -
> > -static u32 mdscr_read(void)
> > -{
> > -	return read_sysreg(mdscr_el1);
> > -}
> > -NOKPROBE_SYMBOL(mdscr_read);
> > -
> > -/*
> >   * Allow root to disable self-hosted debug from userspace.
> >   * This is useful if you want to connect an external JTAG debugger.
> >   */
> > @@ -91,9 +73,9 @@ void enable_debug_monitors(enum dbg_active_el el)
> >  		enable |= DBG_MDSCR_KDE;
> >  
> >  	if (enable && debug_enabled) {
> > -		mdscr = mdscr_read();
> > +		mdscr = read_sysreg(mdscr_el1);
> >  		mdscr |= enable;
> > -		mdscr_write(mdscr);
> > +		write_sysreg(mdscr, mdscr_el1);
> >  	}
> >  }
> >  NOKPROBE_SYMBOL(enable_debug_monitors);
> > @@ -112,9 +94,9 @@ void disable_debug_monitors(enum dbg_active_el el)
> >  		disable &= ~DBG_MDSCR_KDE;
> >  
> >  	if (disable) {
> > -		mdscr = mdscr_read();
> > +		mdscr = read_sysreg(mdscr_el1);
> >  		mdscr &= disable;
> > -		mdscr_write(mdscr);
> > +		write_sysreg(mdscr, mdscr_el1);
> >  	}
> >  }
> >  NOKPROBE_SYMBOL(disable_debug_monitors);
> > @@ -409,7 +391,7 @@ void kernel_enable_single_step(struct pt_regs *regs)
> >  {
> >  	WARN_ON(!irqs_disabled());
> >  	set_regs_spsr_ss(regs);
> > -	mdscr_write(mdscr_read() | DBG_MDSCR_SS);
> > +	__enable_single_step_nosync();
> >  	enable_debug_monitors(DBG_ACTIVE_EL1);
> >  }
> >  NOKPROBE_SYMBOL(kernel_enable_single_step);
> > @@ -417,7 +399,7 @@ NOKPROBE_SYMBOL(kernel_enable_single_step);
> >  void kernel_disable_single_step(void)
> >  {
> >  	WARN_ON(!irqs_disabled());
> > -	mdscr_write(mdscr_read() & ~DBG_MDSCR_SS);
> > +	__disable_single_step_nosync();
> >  	disable_debug_monitors(DBG_ACTIVE_EL1);
> >  }
> >  NOKPROBE_SYMBOL(kernel_disable_single_step);
> > @@ -425,7 +407,7 @@ NOKPROBE_SYMBOL(kernel_disable_single_step);
> >  int kernel_active_single_step(void)
> >  {
> >  	WARN_ON(!irqs_disabled());
> > -	return mdscr_read() & DBG_MDSCR_SS;
> > +	return read_sysreg(mdscr_el1) & DBG_MDSCR_SS;
> >  }
> >  NOKPROBE_SYMBOL(kernel_active_single_step);
> >  
> > 
> 

* Re: [PATCH 01/17] arm64: entry: mark all entry code as notrace
  2020-01-09  5:21   ` Anshuman Khandual
@ 2020-01-13 15:44     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-13 15:44 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: keescook, maz, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, catalin.marinas, labbott, will,
	linux-arm-kernel, alex.popov

On Thu, Jan 09, 2020 at 10:51:10AM +0530, Anshuman Khandual wrote:
> 
> 
> On 01/09/2020 12:26 AM, Mark Rutland wrote:
> > Almost all functions in entry-common.c are marked notrace, with
> > el1_undef and el1_inv being the only exceptions. We appear to have done
> > this on the assumption that there were no exception registers that we
> > needed to snapshot, and thus it was safe to run trace code that might
> > result in further exceptions and clobber those registers.
> > 
> > However, until we inherit the DAIF flags, our irq flag tracing is stale,
> > and this discrepancy could set off warnings in some configurations.
> 
> Could you give some example scenarios where this might happen?

With CONFIG_DEBUG_LOCKDEP, any lockdep-instrumented function which calls
check_flags() would trigger this. So if your trace function does any
sort of lock manipulation, it's liable to set this off.
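
For a contrived sketch of the failure mode (the lock and helper names
here are hypothetical):

static DEFINE_RAW_SPINLOCK(trace_lock);

static void some_traced_helper(void)	/* anything not marked notrace */
{
	/*
	 * The exception entry masked DAIF in hardware, but lockdep's
	 * view (inherited from before the exception) still says IRQs
	 * are enabled. With CONFIG_DEBUG_LOCKDEP, lock_acquire() runs
	 * check_flags(), sees the mismatch, and WARNs.
	 */
	raw_spin_lock(&trace_lock);
	raw_spin_unlock(&trace_lock);
}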

I'll amend the above:

| However, until we inherit the DAIF flags, our irq flag tracing is
| stale, and this discrepancy could set off warnings in some
| configurations (e.g. with CONFIG_DEBUG_LOCKDEP).

I also hope that flag checking is performed more generally in future,
since at the moment it's only strictly performed for locking operations.

Thanks,
Mark.


* Re: [PATCH 05/17] arm64: entry: add a call_on_stack helper
  2020-01-09  8:00   ` Anshuman Khandual
@ 2020-01-14 18:24     ` Mark Rutland
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2020-01-14 18:24 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: keescook, maz, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, catalin.marinas, labbott, will,
	linux-arm-kernel, alex.popov

On Thu, Jan 09, 2020 at 01:30:02PM +0530, Anshuman Khandual wrote:
> 
> 
> On 01/09/2020 12:26 AM, Mark Rutland wrote:
> > In some cases, we want to call a function from C code, using an
> > alternative stack. Add a helper that we can use in such cases.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: James Morse <james.morse@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/exception.h |  2 ++
> >  arch/arm64/kernel/entry.S          | 21 +++++++++++++++++++++
> >  2 files changed, 23 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> > index b87c6e276ab1..a49038fa4faf 100644
> > --- a/arch/arm64/include/asm/exception.h
> > +++ b/arch/arm64/include/asm/exception.h
> > @@ -31,6 +31,8 @@ static inline u32 disr_to_esr(u64 disr)
> >  	return esr;
> >  }
> >  
> > +asmlinkage void call_on_stack(struct pt_regs *, void (*)(struct pt_regs *),
> > +			      unsigned long);
> >  asmlinkage void enter_from_user_mode(void);
> >  void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> >  void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index 53ce1877a4aa..184313c773ea 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -901,6 +901,27 @@ ENTRY(ret_from_fork)
> >  ENDPROC(ret_from_fork)
> >  NOKPROBE(ret_from_fork)
> >  
> > +/*
> > + * x0 = argument to function
> 
> A small nit. Though the definition here itself does not limit the
> argument type, it might be worth mentioning that it is expected to be
> a struct pt_regs per the previous declaration.
> 

True.

To make this clearer, I've given the C prototype instead, as we do for the
SMCCC wrappers:

/*
 * void call_on_stack(struct pt_regs *regs,
 *                    void (*func)(struct pt_regs *),
 *                    unsigned long new_sp)
 *
 * Calls func(regs) using new_sp as the initial stack pointer.
 */
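
For reference, a caller then ends up looking something like this (the
handler name and stack derivation here are illustrative):

	unsigned long irq_sp = (unsigned long)raw_cpu_read(irq_stack_ptr)
			       + IRQ_STACK_SIZE;

	call_on_stack(regs, el1_irq_handler, irq_sp);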

> > +ENTRY(call_on_stack)
> > +	/* Create a frame record to save our LR and SP (implicit in FP) */
> > +	stp	x29, x30, [sp, #-16]!
> > +	mov	x29, sp
> > +
> > +	/* Move to the new stack and call the function there */
> > +	mov	sp, x2
> > +	blr	x1
> > +
> > +	/* Restore SP from the FP, FP and LR from the record, and return */
> > +	mov	sp, x29
> > +	ldp	x29, x30, [sp], #16
> > +	ret
> > +ENDPROC(call_on_stack)
> > +NOKPROBE(call_on_stack)
> > +
> >  #ifdef CONFIG_ARM_SDE_INTERFACE
> >  
> >  #include <asm/sdei.h>
> > 
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

Thanks!

Mark.


* Re: [PATCH 11/17] stackleak: allow C to call stackleak_erase()
  2020-01-08 18:56 ` [PATCH 11/17] stackleak: allow C to call stackleak_erase() Mark Rutland
  2020-01-10  3:45   ` Anshuman Khandual
@ 2020-01-27 23:00   ` Kees Cook
  1 sibling, 0 replies; 41+ messages in thread
From: Kees Cook @ 2020-01-27 23:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, robin.murphy, broonie, james.morse,
	julien.thierry.kdev, maz, labbott, will, linux-arm-kernel,
	alex.popov

On Wed, Jan 08, 2020 at 06:56:28PM +0000, Mark Rutland wrote:
> Currently, stackleak_erase() has no prototype in a header file, and has
> to be called directly from low-level architecture entry assembly code.
> This necessitates ifdeffery and complicates the entry assembly.
> 
> To ameliorate matters, let's provide a prototype so that architecture
> code can call stackleak_erase() from slightly higher-level C code used as
> part of the entry flow. This makes things easier to read and maintain.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>

Acked-by: Kees Cook <keescook@chromium.org>

-Kees

> Cc: Alexander Popov <alex.popov@linux.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  include/linux/stackleak.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/include/linux/stackleak.h b/include/linux/stackleak.h
> index 3d5c3271a9a8..2b09d3759c76 100644
> --- a/include/linux/stackleak.h
> +++ b/include/linux/stackleak.h
> @@ -15,6 +15,8 @@
>  #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
>  #include <asm/stacktrace.h>
>  
> +asmlinkage void notrace stackleak_erase(void);
> +
>  static inline void stackleak_task_init(struct task_struct *t)
>  {
>  	t->lowest_stack = (unsigned long)end_of_stack(t) + sizeof(unsigned long);
> @@ -30,6 +32,7 @@ int stack_erasing_sysctl(struct ctl_table *table, int write,
>  
>  #else /* !CONFIG_GCC_PLUGIN_STACKLEAK */
>  static inline void stackleak_task_init(struct task_struct *t) { }
> +static inline void stackleak_erase(void) { }
>  #endif
>  
>  #endif
> -- 
> 2.11.0
> 

-- 
Kees Cook


end of thread, other threads:[~2020-01-27 23:00 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-08 18:56 [PATCH 00/17] arm64: entry deasmification and cleanup Mark Rutland
2020-01-08 18:56 ` [PATCH 01/17] arm64: entry: mark all entry code as notrace Mark Rutland
2020-01-09  5:21   ` Anshuman Khandual
2020-01-13 15:44     ` Mark Rutland
2020-01-08 18:56 ` [PATCH 02/17] arm64: entry: cleanup el0 svc handler naming Mark Rutland
2020-01-09  5:33   ` Anshuman Khandual
2020-01-08 18:56 ` [PATCH 03/17] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c Mark Rutland
2020-01-09  5:36   ` Anshuman Khandual
2020-01-08 18:56 ` [PATCH 04/17] arm64: entry: move preempt logic to C Mark Rutland
2020-01-09  6:43   ` Anshuman Khandual
2020-01-09 12:22     ` Mark Rutland
2020-01-08 18:56 ` [PATCH 05/17] arm64: entry: add a call_on_stack helper Mark Rutland
2020-01-09  8:00   ` Anshuman Khandual
2020-01-14 18:24     ` Mark Rutland
2020-01-09 14:30   ` Laura Abbott
2020-01-09 14:46     ` Mark Rutland
2020-01-08 18:56 ` [PATCH 06/17] arm64: entry: convert irq entry to C Mark Rutland
2020-01-08 18:56 ` [PATCH 07/17] arm64: entry: convert error " Mark Rutland
2020-01-09  9:12   ` Anshuman Khandual
2020-01-09 12:49     ` Mark Rutland
2020-01-08 18:56 ` [PATCH 08/17] arm64: entry: Split el0_sync_compat from el0_sync Mark Rutland
2020-01-09  9:50   ` Anshuman Khandual
2020-01-08 18:56 ` [PATCH 09/17] arm64: entry: organise handler stubs consistently Mark Rutland
2020-01-09 10:01   ` Anshuman Khandual
2020-01-08 18:56 ` [PATCH 10/17] arm64: entry: consolidate EL1 return paths Mark Rutland
2020-01-10  3:39   ` Anshuman Khandual
2020-01-10 16:02     ` Mark Rutland
2020-01-08 18:56 ` [PATCH 11/17] stackleak: allow C to call stackleak_erase() Mark Rutland
2020-01-10  3:45   ` Anshuman Khandual
2020-01-10 16:07     ` Mark Rutland
2020-01-27 23:00   ` Kees Cook
2020-01-08 18:56 ` [PATCH 12/17] arm64: debug-monitors: refactor MDSCR manipulation Mark Rutland
2020-01-10  4:35   ` Anshuman Khandual
2020-01-10 16:09     ` Mark Rutland
2020-01-08 18:56 ` [PATCH 13/17] arm64: entry: move common el0 entry/return work to C Mark Rutland
2020-01-09 15:19   ` Mark Rutland
2020-01-08 18:56 ` [PATCH 14/17] arm64: entry: move NO_SYSCALL setup " Mark Rutland
2020-01-10  5:37   ` Anshuman Khandual
2020-01-08 18:56 ` [PATCH 15/17] arm64: entry: move ARM64_ERRATUM_845719 workaround " Mark Rutland
2020-01-08 18:56 ` [PATCH 16/17] arm64: entry: move ARM64_ERRATUM_1418040 " Mark Rutland
2020-01-08 18:56 ` [PATCH 17/17] arm64: entry: cleanup sp_el0 manipulation Mark Rutland
