* [PATCH 00/14] arm64: entry: migrate more code to C
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

This series (based on v5.13-rc1) migrates most of the remaining
exception triage assembly to C. All the exception vectors are given C
handlers, so that we can defer all decision making to C code, and the
assembly code can be made simpler and more uniform.

I've stopped short of converting the ret_to_user / work_pending loop.
Converting this cleanly will probably need something like the wrappers
generated by SYSCALL_DEFINE() to handle the common entry/exit logic, and
this is easier to build once all the handlers have been converted to C.
The same is true for the portions of kernel_entry and kernel_exit that
could be converted to C.

It should also be possible to generate the vectors and their associated
assembly handlers in one go by placing these in separate sections and
using .pushsection and .popsection. I've held off doing this for now as
this probably requires some changes to the linker script, and regardless
it should be easier to make that change atop this series.

So far this has seen some light boot testing, and has survived a few
hours under Syzkaller, which I intend to leave running for a while.

I've pushed the series to my arm64/entry/rework branch on kernel.org:

  https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/entry/rework
  git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/entry/rework

Thanks,
Mark.

Mark Rutland (14):
  arm64: remove redundant local_daif_mask() in bad_mode()
  arm64: entry: unmask IRQ+FIQ after EL0 handling
  arm64: entry: convert SError handlers to C
  arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
  arm64: entry: move preempt logic to C
  arm64: entry: add a call_on_irq_stack helper
  arm64: entry: convert IRQ+FIQ handlers to C
  arm64: entry: organise entry handlers consistently
  arm64: entry: organise entry vectors consistently
  arm64: entry: consolidate EL1 exception returns
  arm64: entry: move bad_mode() to entry-common.c
  arm64: entry: improve bad_mode()
  arm64: entry: template the entry asm functions
  arm64: entry: handle all vectors with C

 arch/arm64/include/asm/exception.h |  27 ++-
 arch/arm64/include/asm/processor.h |   2 -
 arch/arm64/kernel/entry-common.c   | 185 ++++++++++++++++++-
 arch/arm64/kernel/entry.S          | 354 ++++++++++---------------------------
 arch/arm64/kernel/process.c        |  17 --
 arch/arm64/kernel/traps.c          |  26 ---
 arch/arm64/mm/fault.c              |   7 -
 7 files changed, 292 insertions(+), 326 deletions(-)

-- 
2.11.0
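
For orientation: every converted vector ends up with the same shape. The
assembly stub saves state with kernel_entry and immediately calls a C
handler, which makes all the triage decisions. A minimal sketch, taken
from the handlers this series adds in patch 7:

  /* the asm vector does `mov x0, sp; bl el1_irq_handler` */
  asmlinkage void noinstr el1_irq_handler(struct pt_regs *regs)
  {
  	/* all triage happens in C from here on */
  	el1_interrupt(regs, handle_arch_irq);
  }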



* [PATCH 01/14] arm64: remove redundant local_daif_mask() in bad_mode()
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

Upon taking an exception, the CPU sets all the DAIF bits. We never
clear any of these bits prior to calling bad_mode(), and bad_mode()
itself never clears any of these bits, so there's no need to call
local_daif_mask().

This patch removes the redundant call.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/traps.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index a05d34f0e82a..41f0aa92022a 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -765,7 +765,6 @@ asmlinkage void notrace bad_mode(struct pt_regs *regs, int reason, unsigned int
 		esr_get_class_string(esr));
 
 	__show_regs(regs);
-	local_daif_mask();
 	panic("bad mode");
 }
 
-- 
2.11.0
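
For readers less familiar with DAIF: taking an exception causes the CPU
itself to set all four PSTATE.DAIF bits. A rough sketch of what
local_daif_mask() boils down to (the real helper in
arch/arm64/include/asm/daifflags.h also handles tracing and GIC
priority masking):

  static inline void local_daif_mask(void)
  {
  	/* set PSTATE.{D,A,I,F}: mask debug, SError, IRQ and FIQ */
  	asm volatile("msr daifset, #0xf" ::: "memory");
  }

Since nothing on the path to bad_mode() clears any of those bits, the
call removed here could never have changed the DAIF state.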



* [PATCH 02/14] arm64: entry: unmask IRQ+FIQ after EL0 handling
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

For non-fatal exceptions taken from EL0, we expect that at some point
during exception handling it is possible to return to a regular process
context with all exceptions unmasked (e.g. as we do in
do_notify_resume()), and we generally aim to unmask exceptions wherever
possible.

While handling SError and debug exceptions from EL0, we need to leave
some exceptions masked during handling. Handling SError requires us to
mask SError (which also requires masking IRQ+FIQ), and handling debug
exceptions requires us to mask debug (which also requires masking
SError+IRQ+FIQ).

Once do_serror() or do_debug_exception() has returned, we no longer need
to mask exceptions, and can unmask them all, which is what we did prior
to commit:

  9034f6251572a474 ("arm64: Do not enable IRQs for ct_user_exit")

... where we had to mask IRQs, as context_tracking_user_exit() expected
IRQs to be masked.

Since then, we realised that our context tracking wasn't entirely
correct, and reworked the entry code to fix this. As of commit:

  23529049c6842382 ("arm64: entry: fix non-NMI user<->kernel transitions")

... we consistently call context_tracking_user_exit() later as part of
ret_to_user. Prior to that call we may transiently unmask exceptions
(e.g. as part of do_notify_resume()), and we always mask all exceptions
again before calling context_tracking_user_exit().

Thus, there's no longer a reason to leave IRQs or FIQs masked at the end
of el0_dbg() or el0_error(), so let's bring these into line with other
EL0 exceptions handlers and unmask all exceptions after the handler is
finished.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 2 +-
 arch/arm64/kernel/entry.S        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 340d04e13617..02be1517e08f 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -398,7 +398,7 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
 
 	enter_from_user_mode();
 	do_debug_exception(far, esr, regs);
-	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	local_daif_restore(DAIF_PROCCTX);
 }
 
 static void noinstr el0_svc(struct pt_regs *regs)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 3513984a88bd..6b2f6f5c5bb8 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -794,7 +794,7 @@ el0_error_naked:
 	mov	x0, sp
 	mov	x1, x25
 	bl	do_serror
-	enable_da
+	enable_daif
 	b	ret_to_user
 SYM_CODE_END(el0_error)
 
-- 
2.11.0
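
For reference, a sketch of the two DAIF restore values involved,
simplified from arch/arm64/include/asm/daifflags.h of this era (the
real definitions also account for GIC priority masking):

  #define DAIF_PROCCTX		0				/* everything unmasked */
  #define DAIF_PROCCTX_NOIRQ	(PSR_I_BIT | PSR_F_BIT)		/* IRQ+FIQ stay masked */

With this patch, el0_dbg() restores DAIF_PROCCTX rather than
DAIF_PROCCTX_NOIRQ, and el0_error uses enable_daif rather than
enable_da, so both paths finish with IRQ+FIQ unmasked.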



* [PATCH 03/14] arm64: entry: convert SError handlers to C
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

For various reasons we'd like to convert the bulk of arm64's exception
triage logic to C. As a step towards that, this patch converts the EL1
and EL0 SError triage logic to C.

Separate C functions are added for the native and compat cases so that
in subsequent patches we can handle native/compat differences in C.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/exception.h |  4 ++++
 arch/arm64/kernel/entry-common.c   | 28 ++++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S          | 16 +++++-----------
 3 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 6546158d2f2d..3a859d4e8b59 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -32,8 +32,11 @@ static inline u32 disr_to_esr(u64 disr)
 }
 
 asmlinkage void el1_sync_handler(struct pt_regs *regs);
+asmlinkage void el1_error_handler(struct pt_regs *regs);
 asmlinkage void el0_sync_handler(struct pt_regs *regs);
+asmlinkage void el0_error_handler(struct pt_regs *regs);
 asmlinkage void el0_sync_compat_handler(struct pt_regs *regs);
+asmlinkage void el0_error_compat_handler(struct pt_regs *regs);
 
 asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs);
 asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs);
@@ -57,4 +60,5 @@ void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
 void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
+void do_serror(struct pt_regs *regs, unsigned int esr);
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 02be1517e08f..a890ba550af3 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -279,6 +279,14 @@ asmlinkage void noinstr el1_sync_handler(struct pt_regs *regs)
 	}
 }
 
+asmlinkage void noinstr el1_error_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	local_daif_restore(DAIF_ERRCTX);
+	do_serror(regs, esr);
+}
+
 asmlinkage void noinstr enter_from_user_mode(void)
 {
 	lockdep_hardirqs_off(CALLER_ADDR0);
@@ -468,6 +476,21 @@ asmlinkage void noinstr el0_sync_handler(struct pt_regs *regs)
 	}
 }
 
+static void __el0_error_handler_common(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	enter_from_user_mode();
+	local_daif_restore(DAIF_ERRCTX);
+	do_serror(regs, esr);
+	local_daif_restore(DAIF_PROCCTX);
+}
+
+asmlinkage void noinstr el0_error_handler(struct pt_regs *regs)
+{
+	__el0_error_handler_common(regs);
+}
+
 #ifdef CONFIG_COMPAT
 static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
 {
@@ -526,4 +549,9 @@ asmlinkage void noinstr el0_sync_compat_handler(struct pt_regs *regs)
 		el0_inv(regs, esr);
 	}
 }
+
+asmlinkage void noinstr el0_error_compat_handler(struct pt_regs *regs)
+{
+	__el0_error_handler_common(regs);
+}
 #endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 6b2f6f5c5bb8..656f3129bfef 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -757,7 +757,9 @@ SYM_CODE_END(el0_fiq_compat)
 
 SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
 	kernel_entry 0, 32
-	b	el0_error_naked
+	mov	x0, sp
+	bl	el0_error_compat_handler
+	b	ret_to_user
 SYM_CODE_END(el0_error_compat)
 #endif
 
@@ -778,23 +780,15 @@ SYM_CODE_END(el0_fiq)
 
 SYM_CODE_START_LOCAL(el1_error)
 	kernel_entry 1
-	mrs	x1, esr_el1
-	enable_dbg
 	mov	x0, sp
-	bl	do_serror
+	bl	el1_error_handler
 	kernel_exit 1
 SYM_CODE_END(el1_error)
 
 SYM_CODE_START_LOCAL(el0_error)
 	kernel_entry 0
-el0_error_naked:
-	mrs	x25, esr_el1
-	user_exit_irqoff
-	enable_dbg
 	mov	x0, sp
-	mov	x1, x25
-	bl	do_serror
-	enable_daif
+	bl	el0_error_handler
 	b	ret_to_user
 SYM_CODE_END(el0_error)
 
-- 
2.11.0
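
The DAIF_ERRCTX value used by the new C handlers encodes the masking
hierarchy described in the previous patch: while handling SError, debug
exceptions remain usable, but SError, IRQ and FIQ all stay masked. A
sketch, simplified from arch/arm64/include/asm/daifflags.h of this era:

  #define DAIF_ERRCTX	(PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)	/* only debug left unmasked */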



* [PATCH 04/14] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

Subsequent patches will pull more of the IRQ entry handling into C. To
keep this in one place, let's move arm64_preempt_schedule_irq() into
entry-common.c along with the other entry management functions.

We no longer need to include <linux/lockdep.h> in process.c, so the
include directive is removed.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 20 ++++++++++++++++++++
 arch/arm64/kernel/process.c      | 17 -----------------
 2 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index a890ba550af3..33b3660beea3 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -6,7 +6,11 @@
  */
 
 #include <linux/context_tracking.h>
+#include <linux/linkage.h>
+#include <linux/lockdep.h>
 #include <linux/ptrace.h>
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
 #include <linux/thread_info.h>
 
 #include <asm/cpufeature.h>
@@ -113,6 +117,22 @@ asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
 		exit_to_kernel_mode(regs);
 }
 
+asmlinkage void __sched arm64_preempt_schedule_irq(void)
+{
+	lockdep_assert_irqs_disabled();
+
+	/*
+	 * Preempting a task from an IRQ means we leave copies of PSTATE
+	 * on the stack. cpufeature's enable calls may modify PSTATE, but
+	 * resuming one of these preempted tasks would undo those changes.
+	 *
+	 * Only allow a task to be preempted once cpufeatures have been
+	 * enabled.
+	 */
+	if (system_capabilities_finalized())
+		preempt_schedule_irq();
+}
+
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 static DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index b4bb67f17a2c..2e7337709155 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -18,7 +18,6 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/kernel.h>
-#include <linux/lockdep.h>
 #include <linux/mman.h>
 #include <linux/mm.h>
 #include <linux/nospec.h>
@@ -724,22 +723,6 @@ static int __init tagged_addr_init(void)
 core_initcall(tagged_addr_init);
 #endif	/* CONFIG_ARM64_TAGGED_ADDR_ABI */
 
-asmlinkage void __sched arm64_preempt_schedule_irq(void)
-{
-	lockdep_assert_irqs_disabled();
-
-	/*
-	 * Preempting a task from an IRQ means we leave copies of PSTATE
-	 * on the stack. cpufeature's enable calls may modify PSTATE, but
-	 * resuming one of these preempted tasks would undo those changes.
-	 *
-	 * Only allow a task to be preempted once cpufeatures have been
-	 * enabled.
-	 */
-	if (system_capabilities_finalized())
-		preempt_schedule_irq();
-}
-
 #ifdef CONFIG_BINFMT_ELF
 int arch_elf_adjust_prot(int prot, const struct arch_elf_state *state,
 			 bool has_interp, bool is_interp)
-- 
2.11.0



* [PATCH 05/14] arm64: entry: move preempt logic to C
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

Currently portions of our preempt logic are written in C while other
parts are written in assembly. There's no reason any of this needs to
live in assembly, so let's move the rest of the logic to C. At the same
time, let's make the comment a bit clearer.

Other than the increased lockdep coverage there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 12 ++++++++++++
 arch/arm64/kernel/entry.S        | 13 -------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 33b3660beea3..87997a4a0936 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -121,6 +121,18 @@ asmlinkage void __sched arm64_preempt_schedule_irq(void)
 {
 	lockdep_assert_irqs_disabled();
 
+	if (preempt_count() != 0)
+		return;
+
+	/*
+	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
+	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
+	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
+	 * DAIF we must have handled an NMI, so skip preemption.
+	 */
+	if (system_uses_irq_prio_masking() && read_sysreg(daif))
+		return;
+
 	/*
 	 * Preempting a task from an IRQ means we leave copies of PSTATE
 	 * on the stack. cpufeature's enable calls may modify PSTATE, but
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 656f3129bfef..8c7ddd651756 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -561,20 +561,7 @@ tsk	.req	x28		// current thread_info
 	irq_handler	\handler
 
 #ifdef CONFIG_PREEMPTION
-	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
-	/*
-	 * DA were cleared at start of handling, and IF are cleared by
-	 * the GIC irqchip driver using gic_arch_enable_irqs() for
-	 * normal IRQs. If anything is set, it means we come back from
-	 * an NMI instead of a normal IRQ, so skip preemption
-	 */
-	mrs	x0, daif
-	orr	x24, x24, x0
-alternative_else_nop_endif
-	cbnz	x24, 1f				// preempt count != 0 || NMI return path
 	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
-1:
 #endif
 
 	mov	x0, sp
-- 
2.11.0



* [PATCH 06/14] arm64: entry: add a call_on_irq_stack helper
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

When handling IRQ/FIQ exceptions the entry assembly may transition from
a task's stack to that CPU's IRQ stack (and IRQ shadow call stack).

In subsequent patches we want to migrate the IRQ/FIQ triage logic to C,
and as we want to perform some actions on the task stack (e.g. EL1
preemption), we need to switch stacks within the C handler. So that we
can do so, this patch adds a helper to call a function on a CPU's IRQ
stack (and shadow stack as appropriate).

Subsequent patches will make use of the new helper function.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/exception.h |  2 ++
 arch/arm64/kernel/entry.S          | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 3a859d4e8b59..c24b69c0c589 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -40,6 +40,8 @@ asmlinkage void el0_error_compat_handler(struct pt_regs *regs);
 
 asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs);
 asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs);
+asmlinkage void call_on_irq_stack(struct pt_regs *regs,
+				  void (*func)(struct pt_regs *));
 asmlinkage void enter_from_user_mode(void);
 asmlinkage void exit_to_user_mode(void);
 void arm64_enter_nmi(struct pt_regs *regs);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 8c7ddd651756..327a559679f7 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -979,6 +979,42 @@ SYM_CODE_START(ret_from_fork)
 SYM_CODE_END(ret_from_fork)
 NOKPROBE(ret_from_fork)
 
+/*
+ * void call_on_irq_stack(struct pt_regs *regs,
+ * 		          void (*func)(struct pt_regs *));
+ *
+ * Calls func(regs) using this CPU's irq stack and shadow irq stack.
+ */
+SYM_FUNC_START(call_on_irq_stack)
+#ifdef CONFIG_SHADOW_CALL_STACK
+	stp	scs_sp, xzr, [sp, #-16]!
+	adr_this_cpu scs_sp, irq_shadow_call_stack, x17
+#endif
+	/* Create a frame record to save our LR and SP (implicit in FP) */
+	stp	x29, x30, [sp, #-16]!
+	mov	x29, sp
+
+	ldr_this_cpu x16, irq_stack_ptr, x17
+	mov	x15, #IRQ_STACK_SIZE
+	add	x16, x16, x15
+
+	/* Move to the new stack and call the function there */
+	mov	sp, x16
+	blr	x1
+
+	/*
+	 * Restore the SP from the FP, and restore the FP and LR from the frame
+	 * record.
+	 */
+	mov	sp, x29
+	ldp	x29, x30, [sp], #16
+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldp	scs_sp, xzr, [sp], #16
+#endif
+	ret
+SYM_FUNC_END(call_on_irq_stack)
+NOKPROBE(call_on_irq_stack)
+
 #ifdef CONFIG_ARM_SDE_INTERFACE
 
 #include <asm/sdei.h>
-- 
2.11.0
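
A usage sketch, mirroring the do_interrupt_handler() helper that the
next patch adds: C triage code only switches to the IRQ stack when it
is still running on the task stack:

  static void do_interrupt_handler(struct pt_regs *regs,
  				 void (*handler)(struct pt_regs *))
  {
  	if (on_thread_stack())
  		call_on_irq_stack(regs, handler);	/* switch stacks, then call */
  	else
  		handler(regs);				/* already off the task stack */
  }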



* [PATCH 07/14] arm64: entry: convert IRQ+FIQ handlers to C
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

For various reasons we'd like to convert the bulk of arm64's exception
triage logic to C. As a step towards that, this patch converts the EL1
and EL0 IRQ+FIQ triage logic to C.

Separate C functions are added for the native and compat cases so that
in subsequent patches we can handle native/compat differences in C.

Since the triage functions can now call arm64_apply_bp_hardening()
directly, the do_el0_irq_bp_hardening() wrapper function is removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/exception.h |  8 ++-
 arch/arm64/include/asm/processor.h |  2 -
 arch/arm64/kernel/entry-common.c   | 86 +++++++++++++++++++++++++++++++--
 arch/arm64/kernel/entry.S          | 99 ++++++--------------------------------
 arch/arm64/mm/fault.c              |  7 ---
 5 files changed, 102 insertions(+), 100 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index c24b69c0c589..4284ee57a9a5 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -32,14 +32,18 @@ static inline u32 disr_to_esr(u64 disr)
 }
 
 asmlinkage void el1_sync_handler(struct pt_regs *regs);
+asmlinkage void el1_irq_handler(struct pt_regs *regs);
+asmlinkage void el1_fiq_handler(struct pt_regs *regs);
 asmlinkage void el1_error_handler(struct pt_regs *regs);
 asmlinkage void el0_sync_handler(struct pt_regs *regs);
+asmlinkage void el0_irq_handler(struct pt_regs *regs);
+asmlinkage void el0_fiq_handler(struct pt_regs *regs);
 asmlinkage void el0_error_handler(struct pt_regs *regs);
 asmlinkage void el0_sync_compat_handler(struct pt_regs *regs);
+asmlinkage void el0_irq_compat_handler(struct pt_regs *regs);
+asmlinkage void el0_fiq_compat_handler(struct pt_regs *regs);
 asmlinkage void el0_error_compat_handler(struct pt_regs *regs);
 
-asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs);
-asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs);
 asmlinkage void call_on_irq_stack(struct pt_regs *regs,
 				  void (*func)(struct pt_regs *));
 asmlinkage void enter_from_user_mode(void);
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 9df3feeee890..2f21c76324bb 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -257,8 +257,6 @@ void set_task_sctlr_el1(u64 sctlr);
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
 					 struct task_struct *next);
 
-asmlinkage void arm64_preempt_schedule_irq(void);
-
 #define task_pt_regs(p) \
 	((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
 
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 87997a4a0936..0e876062481a 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -19,6 +19,8 @@
 #include <asm/exception.h>
 #include <asm/kprobes.h>
 #include <asm/mmu.h>
+#include <asm/processor.h>
+#include <asm/stacktrace.h>
 #include <asm/sysreg.h>
 
 /*
@@ -101,7 +103,7 @@ void noinstr arm64_exit_nmi(struct pt_regs *regs)
 	__nmi_exit();
 }
 
-asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
+static void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
 {
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
 		arm64_enter_nmi(regs);
@@ -109,7 +111,7 @@ asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
 		enter_from_kernel_mode(regs);
 }
 
-asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
+static void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
 {
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
 		arm64_exit_nmi(regs);
@@ -117,11 +119,11 @@ asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
 		exit_to_kernel_mode(regs);
 }
 
-asmlinkage void __sched arm64_preempt_schedule_irq(void)
+static void __sched arm64_preempt_schedule_irq(void)
 {
 	lockdep_assert_irqs_disabled();
 
-	if (preempt_count() != 0)
+	if (!IS_ENABLED(CONFIG_PREEMPTION) || preempt_count() != 0)
 		return;
 
 	/*
@@ -145,6 +147,18 @@ asmlinkage void __sched arm64_preempt_schedule_irq(void)
 		preempt_schedule_irq();
 }
 
+static void do_interrupt_handler(struct pt_regs *regs,
+				 void (*handler)(struct pt_regs *))
+{
+	if (on_thread_stack())
+		call_on_irq_stack(regs, handler);
+	else
+		handler(regs);
+}
+
+extern void (*handle_arch_irq)(struct pt_regs *);
+extern void (*handle_arch_fiq)(struct pt_regs *);
+
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 static DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
 
@@ -311,6 +325,27 @@ asmlinkage void noinstr el1_sync_handler(struct pt_regs *regs)
 	}
 }
 
+static void noinstr el1_interrupt(struct pt_regs *regs,
+				  void (*handler)(struct pt_regs *))
+{
+	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+
+	enter_el1_irq_or_nmi(regs);
+	do_interrupt_handler(regs, handler);
+	arm64_preempt_schedule_irq();
+	exit_el1_irq_or_nmi(regs);
+}
+
+asmlinkage void noinstr el1_irq_handler(struct pt_regs *regs)
+{
+	el1_interrupt(regs, handle_arch_irq);
+}
+
+asmlinkage void noinstr el1_fiq_handler(struct pt_regs *regs)
+{
+	el1_interrupt(regs, handle_arch_fiq);
+}
+
 asmlinkage void noinstr el1_error_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -508,6 +543,39 @@ asmlinkage void noinstr el0_sync_handler(struct pt_regs *regs)
 	}
 }
 
+static void noinstr el0_interrupt(struct pt_regs *regs,
+				  void (*handler)(struct pt_regs *))
+{
+	enter_from_user_mode();
+
+	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+
+	if (regs->pc & BIT(55))
+		arm64_apply_bp_hardening();
+
+	do_interrupt_handler(regs, handler);
+}
+
+static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
+{
+	el0_interrupt(regs, handle_arch_irq);
+}
+
+asmlinkage void noinstr el0_irq_handler(struct pt_regs *regs)
+{
+	__el0_irq_handler_common(regs);
+}
+
+static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
+{
+	el0_interrupt(regs, handle_arch_fiq);
+}
+
+asmlinkage void noinstr el0_fiq_handler(struct pt_regs *regs)
+{
+	__el0_fiq_handler_common(regs);
+}
+
 static void __el0_error_handler_common(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -582,6 +650,16 @@ asmlinkage void noinstr el0_sync_compat_handler(struct pt_regs *regs)
 	}
 }
 
+asmlinkage void noinstr el0_irq_compat_handler(struct pt_regs *regs)
+{
+	__el0_irq_handler_common(regs);
+}
+
+asmlinkage void noinstr el0_fiq_compat_handler(struct pt_regs *regs)
+{
+	__el0_fiq_handler_common(regs);
+}
+
 asmlinkage void noinstr el0_error_compat_handler(struct pt_regs *regs)
 {
 	__el0_error_handler_common(regs);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 327a559679f7..eebc6e72125c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -486,63 +486,12 @@ SYM_CODE_START_LOCAL(__swpan_exit_el0)
 SYM_CODE_END(__swpan_exit_el0)
 #endif
 
-	.macro	irq_stack_entry
-	mov	x19, sp			// preserve the original sp
-#ifdef CONFIG_SHADOW_CALL_STACK
-	mov	x24, scs_sp		// preserve the original shadow stack
-#endif
-
-	/*
-	 * Compare sp with the base of the task stack.
-	 * If the top ~(THREAD_SIZE - 1) bits match, we are on a task stack,
-	 * and should switch to the irq stack.
-	 */
-	ldr	x25, [tsk, TSK_STACK]
-	eor	x25, x25, x19
-	and	x25, x25, #~(THREAD_SIZE - 1)
-	cbnz	x25, 9998f
-
-	ldr_this_cpu x25, irq_stack_ptr, x26
-	mov	x26, #IRQ_STACK_SIZE
-	add	x26, x25, x26
-
-	/* switch to the irq stack */
-	mov	sp, x26
-
-#ifdef CONFIG_SHADOW_CALL_STACK
-	/* also switch to the irq shadow stack */
-	ldr_this_cpu scs_sp, irq_shadow_call_stack_ptr, x26
-#endif
-
-9998:
-	.endm
-
-	/*
-	 * The callee-saved regs (x19-x29) should be preserved between
-	 * irq_stack_entry and irq_stack_exit, but note that kernel_entry
-	 * uses x20-x23 to store data for later use.
-	 */
-	.macro	irq_stack_exit
-	mov	sp, x19
-#ifdef CONFIG_SHADOW_CALL_STACK
-	mov	scs_sp, x24
-#endif
-	.endm
-
 /* GPRs used by entry code */
 tsk	.req	x28		// current thread_info
 
 /*
  * Interrupt handling.
  */
-	.macro	irq_handler, handler:req
-	ldr_l	x1, \handler
-	mov	x0, sp
-	irq_stack_entry
-	blr	x1
-	irq_stack_exit
-	.endm
-
 	.macro	gic_prio_kentry_setup, tmp:req
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	alternative_if ARM64_HAS_IRQ_PRIO_MASKING
@@ -552,32 +501,6 @@ tsk	.req	x28		// current thread_info
 #endif
 	.endm
 
-	.macro el1_interrupt_handler, handler:req
-	enable_da
-
-	mov	x0, sp
-	bl	enter_el1_irq_or_nmi
-
-	irq_handler	\handler
-
-#ifdef CONFIG_PREEMPTION
-	bl	arm64_preempt_schedule_irq	// irq en/disable is done inside
-#endif
-
-	mov	x0, sp
-	bl	exit_el1_irq_or_nmi
-	.endm
-
-	.macro el0_interrupt_handler, handler:req
-	user_exit_irqoff
-	enable_da
-
-	tbz	x22, #55, 1f
-	bl	do_el0_irq_bp_hardening
-1:
-	irq_handler	\handler
-	.endm
-
 	.text
 
 /*
@@ -701,13 +624,15 @@ SYM_CODE_END(el1_sync)
 	.align	6
 SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
 	kernel_entry 1
-	el1_interrupt_handler handle_arch_irq
+	mov	x0, sp
+	bl	el1_irq_handler
 	kernel_exit 1
 SYM_CODE_END(el1_irq)
 
 SYM_CODE_START_LOCAL_NOALIGN(el1_fiq)
 	kernel_entry 1
-	el1_interrupt_handler handle_arch_fiq
+	mov	x0, sp
+	bl	el1_fiq_handler
 	kernel_exit 1
 SYM_CODE_END(el1_fiq)
 
@@ -734,12 +659,16 @@ SYM_CODE_END(el0_sync_compat)
 	.align	6
 SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
 	kernel_entry 0, 32
-	b	el0_irq_naked
+	mov	x0, sp
+	bl	el0_irq_compat_handler
+	b	ret_to_user
 SYM_CODE_END(el0_irq_compat)
 
 SYM_CODE_START_LOCAL_NOALIGN(el0_fiq_compat)
 	kernel_entry 0, 32
-	b	el0_fiq_naked
+	mov	x0, sp
+	bl	el0_fiq_compat_handler
+	b	ret_to_user
 SYM_CODE_END(el0_fiq_compat)
 
 SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
@@ -753,15 +682,15 @@ SYM_CODE_END(el0_error_compat)
 	.align	6
 SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
 	kernel_entry 0
-el0_irq_naked:
-	el0_interrupt_handler handle_arch_irq
+	mov	x0, sp
+	bl	el0_irq_handler
 	b	ret_to_user
 SYM_CODE_END(el0_irq)
 
 SYM_CODE_START_LOCAL_NOALIGN(el0_fiq)
 	kernel_entry 0
-el0_fiq_naked:
-	el0_interrupt_handler handle_arch_fiq
+	mov	x0, sp
+	bl	el0_fiq_handler
 	b	ret_to_user
 SYM_CODE_END(el0_fiq)
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 871c82ab0a30..3b4a4adfddfd 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -836,13 +836,6 @@ void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_mem_abort);
 
-void do_el0_irq_bp_hardening(void)
-{
-	/* PC has already been checked in entry.S */
-	arm64_apply_bp_hardening();
-}
-NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
-
 void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
 	arm64_notify_die("SP/PC alignment exception", regs, SIGBUS, BUS_ADRALN,
-- 
2.11.0
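
For context, handle_arch_irq and handle_arch_fiq are the root handler
pointers that the irqchip driver installs at boot. A simplified sketch
of the registration side (set_handle_irq() lives in kernel/irq/handle.c
in kernels of this era; shown here without its logging):

  int __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
  {
  	if (handle_arch_irq != default_handle_irq)
  		return -EBUSY;		/* only one root handler allowed */

  	handle_arch_irq = handle_irq;
  	return 0;
  }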



* [PATCH 08/14] arm64: entry: organise entry handlers consistently
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

In entry.S we have two comments which distinguish EL0 and EL1 exception
handlers, but the code isn't actually laid out this way, and there are a
few other inconsistencies that would be good to clear up.

This patch organizes the entry handlers consistently:

* The handlers are laid out in order of the vectors, to make them easier
  to navigate.

* All handlers are given the same alignment, which was previously
  applied inconsistently.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 64 ++++++++++++++++++++++++++---------------------
 1 file changed, 35 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index eebc6e72125c..a1dd730a474d 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -629,6 +629,7 @@ SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
 	kernel_exit 1
 SYM_CODE_END(el1_irq)
 
+	.align 6
 SYM_CODE_START_LOCAL_NOALIGN(el1_fiq)
 	kernel_entry 1
 	mov	x0, sp
@@ -636,6 +637,14 @@ SYM_CODE_START_LOCAL_NOALIGN(el1_fiq)
 	kernel_exit 1
 SYM_CODE_END(el1_fiq)
 
+	.align 6
+SYM_CODE_START_LOCAL_NOALIGN(el1_error)
+	kernel_entry 1
+	mov	x0, sp
+	bl	el1_error_handler
+	kernel_exit 1
+SYM_CODE_END(el1_error)
+
 /*
  * EL0 mode handlers.
  */
@@ -647,6 +656,30 @@ SYM_CODE_START_LOCAL_NOALIGN(el0_sync)
 	b	ret_to_user
 SYM_CODE_END(el0_sync)
 
+	.align	6
+SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
+	kernel_entry 0
+	mov	x0, sp
+	bl	el0_irq_handler
+	b	ret_to_user
+SYM_CODE_END(el0_irq)
+
+	.align 6
+SYM_CODE_START_LOCAL_NOALIGN(el0_fiq)
+	kernel_entry 0
+	mov	x0, sp
+	bl	el0_fiq_handler
+	b	ret_to_user
+SYM_CODE_END(el0_fiq)
+
+	.align 6
+SYM_CODE_START_LOCAL_NOALIGN(el0_error)
+	kernel_entry 0
+	mov	x0, sp
+	bl	el0_error_handler
+	b	ret_to_user
+SYM_CODE_END(el0_error)
+
 #ifdef CONFIG_COMPAT
 	.align	6
 SYM_CODE_START_LOCAL_NOALIGN(el0_sync_compat)
@@ -664,6 +697,7 @@ SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
 	b	ret_to_user
 SYM_CODE_END(el0_irq_compat)
 
+	.align 6
 SYM_CODE_START_LOCAL_NOALIGN(el0_fiq_compat)
 	kernel_entry 0, 32
 	mov	x0, sp
@@ -671,6 +705,7 @@ SYM_CODE_START_LOCAL_NOALIGN(el0_fiq_compat)
 	b	ret_to_user
 SYM_CODE_END(el0_fiq_compat)
 
+	.align 6
 SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
 	kernel_entry 0, 32
 	mov	x0, sp
@@ -679,35 +714,6 @@ SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
 SYM_CODE_END(el0_error_compat)
 #endif
 
-	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_irq_handler
-	b	ret_to_user
-SYM_CODE_END(el0_irq)
-
-SYM_CODE_START_LOCAL_NOALIGN(el0_fiq)
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_fiq_handler
-	b	ret_to_user
-SYM_CODE_END(el0_fiq)
-
-SYM_CODE_START_LOCAL(el1_error)
-	kernel_entry 1
-	mov	x0, sp
-	bl	el1_error_handler
-	kernel_exit 1
-SYM_CODE_END(el1_error)
-
-SYM_CODE_START_LOCAL(el0_error)
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_error_handler
-	b	ret_to_user
-SYM_CODE_END(el0_error)
-
 /*
  * "slow" syscall return path.
  */
-- 
2.11.0



* [PATCH 09/14] arm64: entry: organise entry vectors consistently
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

In subsequent patches we'll rename the entry handlers based on their
original EL, register width, and exception class. To do so, we need to
make all 3 arguments to the `kernel_ventry` macro mandatory, and
distinguish EL1h from EL1t.

In preparation for this, let's make the current set of arguments
mandatory, and move the `regsize` column before the branch label suffix,
making the vectors easier to read column-wise.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index a1dd730a474d..be8e596af75a 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -60,7 +60,7 @@
 #define BAD_FIQ		2
 #define BAD_ERROR	3
 
-	.macro kernel_ventry, el, label, regsize = 64
+	.macro kernel_ventry, el:req, regsize:req, label:req
 	.align 7
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	.if	\el == 0
@@ -510,31 +510,31 @@ tsk	.req	x28		// current thread_info
 
 	.align	11
 SYM_CODE_START(vectors)
-	kernel_ventry	1, sync_invalid			// Synchronous EL1t
-	kernel_ventry	1, irq_invalid			// IRQ EL1t
-	kernel_ventry	1, fiq_invalid			// FIQ EL1t
-	kernel_ventry	1, error_invalid		// Error EL1t
+	kernel_ventry	1, 64, sync_invalid		// Synchronous EL1t
+	kernel_ventry	1, 64, irq_invalid		// IRQ EL1t
+	kernel_ventry	1, 64, fiq_invalid		// FIQ EL1t
+	kernel_ventry	1, 64, error_invalid		// Error EL1t
 
-	kernel_ventry	1, sync				// Synchronous EL1h
-	kernel_ventry	1, irq				// IRQ EL1h
-	kernel_ventry	1, fiq				// FIQ EL1h
-	kernel_ventry	1, error			// Error EL1h
+	kernel_ventry	1, 64, sync			// Synchronous EL1h
+	kernel_ventry	1, 64, irq			// IRQ EL1h
+	kernel_ventry	1, 64, fiq			// FIQ EL1h
+	kernel_ventry	1, 64, error			// Error EL1h
 
-	kernel_ventry	0, sync				// Synchronous 64-bit EL0
-	kernel_ventry	0, irq				// IRQ 64-bit EL0
-	kernel_ventry	0, fiq				// FIQ 64-bit EL0
-	kernel_ventry	0, error			// Error 64-bit EL0
+	kernel_ventry	0, 64, sync			// Synchronous 64-bit EL0
+	kernel_ventry	0, 64, irq			// IRQ 64-bit EL0
+	kernel_ventry	0, 64, fiq			// FIQ 64-bit EL0
+	kernel_ventry	0, 64, error			// Error 64-bit EL0
 
 #ifdef CONFIG_COMPAT
-	kernel_ventry	0, sync_compat, 32		// Synchronous 32-bit EL0
-	kernel_ventry	0, irq_compat, 32		// IRQ 32-bit EL0
-	kernel_ventry	0, fiq_compat, 32		// FIQ 32-bit EL0
-	kernel_ventry	0, error_compat, 32		// Error 32-bit EL0
+	kernel_ventry	0, 32, sync_compat		// Synchronous 32-bit EL0
+	kernel_ventry	0, 32, irq_compat		// IRQ 32-bit EL0
+	kernel_ventry	0, 32, fiq_compat		// FIQ 32-bit EL0
+	kernel_ventry	0, 32, error_compat		// Error 32-bit EL0
 #else
-	kernel_ventry	0, sync_invalid, 32		// Synchronous 32-bit EL0
-	kernel_ventry	0, irq_invalid, 32		// IRQ 32-bit EL0
-	kernel_ventry	0, fiq_invalid, 32		// FIQ 32-bit EL0
-	kernel_ventry	0, error_invalid, 32		// Error 32-bit EL0
+	kernel_ventry	0, 32, sync_invalid		// Synchronous 32-bit EL0
+	kernel_ventry	0, 32, irq_invalid		// IRQ 32-bit EL0
+	kernel_ventry	0, 32, fiq_invalid		// FIQ 32-bit EL0
+	kernel_ventry	0, 32, error_invalid		// Error 32-bit EL0
 #endif
 SYM_CODE_END(vectors)
 
-- 
2.11.0



* [PATCH 10/14] arm64: entry: consolidate EL1 exception returns
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

Following the example of ret_to_user, let's consolidate all the EL1
return paths with a ret_to_kernel helper, rather than each entry point
having its own copy of the return code.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index be8e596af75a..9b662d4c09d8 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -618,7 +618,7 @@ SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_sync_handler
-	kernel_exit 1
+	b	ret_to_kernel
 SYM_CODE_END(el1_sync)
 
 	.align	6
@@ -626,7 +626,7 @@ SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_irq_handler
-	kernel_exit 1
+	b	ret_to_kernel
 SYM_CODE_END(el1_irq)
 
 	.align 6
@@ -634,7 +634,7 @@ SYM_CODE_START_LOCAL_NOALIGN(el1_fiq)
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_fiq_handler
-	kernel_exit 1
+	b	ret_to_kernel
 SYM_CODE_END(el1_fiq)
 
 	.align 6
@@ -642,9 +642,13 @@ SYM_CODE_START_LOCAL_NOALIGN(el1_error)
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_error_handler
-	kernel_exit 1
+	b	ret_to_kernel
 SYM_CODE_END(el1_error)
 
+SYM_CODE_START_LOCAL(ret_to_kernel)
+	kernel_exit 1
+SYM_CODE_END(ret_to_kernel)
+
 /*
  * EL0 mode handlers.
  */
-- 
2.11.0



* [PATCH 11/14] arm64: entry: move bad_mode() to entry-common.c
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

In subsequent patches we'll rework the way bad_mode is called by
exception entry code. In preparation for this, let's move bad_mode()
itself into entry-common.c.

Let's also mark it as noinstr (e.g. to prevent it being kprobed), and
let's also make the `handler` array a local variable, as this is only
used by bad_mode(), and will be removed entirely in a subsequent patch.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 27 +++++++++++++++++++++++++++
 arch/arm64/kernel/traps.c        | 25 -------------------------
 2 files changed, 27 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 0e876062481a..1b63687c38cd 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -22,6 +22,7 @@
 #include <asm/processor.h>
 #include <asm/stacktrace.h>
 #include <asm/sysreg.h>
+#include <asm/system_misc.h>
 
 /*
  * This is intended to match the logic in irqentry_enter(), handling the kernel
@@ -159,6 +160,32 @@ static void do_interrupt_handler(struct pt_regs *regs,
 extern void (*handle_arch_irq)(struct pt_regs *);
 extern void (*handle_arch_fiq)(struct pt_regs *);
 
+/*
+ * bad_mode handles the impossible case in the exception vector. This is always
+ * fatal.
+ */
+asmlinkage void noinstr bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
+{
+	const char *handler[] = {
+		"Synchronous Abort",
+		"IRQ",
+		"FIQ",
+		"Error"
+	};
+
+	arm64_enter_nmi(regs);
+
+	console_verbose();
+
+	pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
+		handler[reason], smp_processor_id(), esr,
+		esr_get_class_string(esr));
+
+	__show_regs(regs);
+	panic("bad mode");
+}
+
+
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 static DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 41f0aa92022a..e62b92b32156 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -45,13 +45,6 @@
 #include <asm/system_misc.h>
 #include <asm/sysreg.h>
 
-static const char *handler[] = {
-	"Synchronous Abort",
-	"IRQ",
-	"FIQ",
-	"Error"
-};
-
 int show_unhandled_signals = 0;
 
 static void dump_kernel_instr(const char *lvl, struct pt_regs *regs)
@@ -751,24 +744,6 @@ const char *esr_get_class_string(u32 esr)
 }
 
 /*
- * bad_mode handles the impossible case in the exception vector. This is always
- * fatal.
- */
-asmlinkage void notrace bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
-{
-	arm64_enter_nmi(regs);
-
-	console_verbose();
-
-	pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
-		handler[reason], smp_processor_id(), esr,
-		esr_get_class_string(esr));
-
-	__show_regs(regs);
-	panic("bad mode");
-}
-
-/*
  * bad_el0_sync handles unexpected, but potentially recoverable synchronous
  * exceptions taken from EL0. Unlike bad_mode, this returns.
  */
-- 
2.11.0
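
On noinstr: a rough sketch of what the attribute expands to (simplified
from include/linux/compiler_types.h; the exact definition varies
between kernel versions). It keeps the function in a non-instrumentable
text section so that tracers, sanitizers and kprobes leave it alone:

  /* assumption: simplified form of the real macro */
  #define noinstr						\
  	noinline notrace __section(".noinstr.text")		\
  	__no_kcsan __no_sanitize_address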



* [PATCH 12/14] arm64: entry: improve bad_mode()
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

Our use of bad_mode() has a few rough edges:

* AArch64 doesn't use the term "mode", and refers to "Execution
  states", "Exception levels", and "Selected stack pointer".

* We log the exception type (SYNC/IRQ/FIQ/SError), but not the actual
  "mode" (though this can be deocded from the SPSR value).

* We use bad_mode() as a second-level handler for unexpected synchronous
  exceptions, where the "mode" is legitimate, but the specific exception
  is not.

* We dump the ESR value, but call this "code", and so it's not clear to
  all readers that this is the ESR.

... and all of this can be somewhat opaque to those who aren't extremely
familiar with the code.

Let's make this a bit clearer by having bad_mode() log "Unhandled
${TYPE} exception" rather than "Bad mode in ${TYPE} handler", using
"ESR" rather than "code", and having the final panic() log "Unhandled
exception" rather than "Bad mode".

In future we'd like to log the specific architectural vector rather than
just the type of exception, so we also split the core of bad_mode() out
into a helper called __panic_unhandled(), which takes the vector as a
string argument.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 1b63687c38cd..d7087e22ffc1 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -160,31 +160,32 @@ static void do_interrupt_handler(struct pt_regs *regs,
 extern void (*handle_arch_irq)(struct pt_regs *);
 extern void (*handle_arch_fiq)(struct pt_regs *);
 
-/*
- * bad_mode handles the impossible case in the exception vector. This is always
- * fatal.
- */
-asmlinkage void noinstr bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
+static void noinstr __panic_unhandled(struct pt_regs *regs, const char *vector,
+				      unsigned int esr)
 {
-	const char *handler[] = {
-		"Synchronous Abort",
-		"IRQ",
-		"FIQ",
-		"Error"
-	};
-
 	arm64_enter_nmi(regs);
 
 	console_verbose();
 
-	pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
-		handler[reason], smp_processor_id(), esr,
+	pr_crit("Unhandled %s exception on CPU%d, ESR 0x%08x -- %s\n",
+		vector, smp_processor_id(), esr,
 		esr_get_class_string(esr));
 
 	__show_regs(regs);
-	panic("bad mode");
+	panic("Unhandled exception");
 }
 
+asmlinkage void noinstr bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
+{
+	const char *handler[] = {
+		"Synchronous Abort",
+		"IRQ",
+		"FIQ",
+		"Error"
+	};
+
+	__panic_unhandled(regs, handler[reason], esr);
+}
 
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 static DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
-- 
2.11.0
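
To make the difference concrete, here is how the same unhandled
exception would be reported before and after this patch. This is an
illustrative example constructed from the two pr_crit() format strings;
the CPU number and ESR value are hypothetical:

  Before: Bad mode in FIQ handler detected on CPU1, code 0x56000000 -- SVC (AArch64)
  After:  Unhandled FIQ exception on CPU1, ESR 0x56000000 -- SVC (AArch64)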



* [PATCH 13/14] arm64: entry: template the entry asm functions
From: Mark Rutland @ 2021-05-10 15:56 UTC
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

Now that the majority of the exception triage logic has been converted
to C, the entry assembly functions all have a uniform structure.

Let's generate them all with an assembly macro to reduce the amount of
code and to ensure they all remain in sync if we make changes in future.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry.S | 124 ++++++++++------------------------------------
 1 file changed, 27 insertions(+), 97 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 9b662d4c09d8..d4f80b9df621 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -610,114 +610,44 @@ SYM_CODE_START_LOCAL(el1_error_invalid)
 	inv_entry 1, BAD_ERROR
 SYM_CODE_END(el1_error_invalid)
 
-/*
- * EL1 mode handlers.
- */
-	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
-	kernel_entry 1
-	mov	x0, sp
-	bl	el1_sync_handler
-	b	ret_to_kernel
-SYM_CODE_END(el1_sync)
-
-	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
-	kernel_entry 1
-	mov	x0, sp
-	bl	el1_irq_handler
-	b	ret_to_kernel
-SYM_CODE_END(el1_irq)
-
-	.align 6
-SYM_CODE_START_LOCAL_NOALIGN(el1_fiq)
-	kernel_entry 1
-	mov	x0, sp
-	bl	el1_fiq_handler
-	b	ret_to_kernel
-SYM_CODE_END(el1_fiq)
-
+	.macro entry_handler el:req, regsize:req, label:req
 	.align 6
-SYM_CODE_START_LOCAL_NOALIGN(el1_error)
-	kernel_entry 1
+SYM_CODE_START_LOCAL_NOALIGN(el\el\()_\label)
+	kernel_entry \el, \regsize
 	mov	x0, sp
-	bl	el1_error_handler
+	bl	el\el\()_\label\()_handler
+	.if \el == 0
+	b	ret_to_user
+	.else
 	b	ret_to_kernel
-SYM_CODE_END(el1_error)
-
-SYM_CODE_START_LOCAL(ret_to_kernel)
-	kernel_exit 1
-SYM_CODE_END(ret_to_kernel)
+	.endif
+SYM_CODE_END(el\el\()_\label)
+	.endm
 
 /*
- * EL0 mode handlers.
+ * Early exception handlers
  */
-	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el0_sync)
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_sync_handler
-	b	ret_to_user
-SYM_CODE_END(el0_sync)
+	entry_handler	1, 64, sync
+	entry_handler	1, 64, irq
+	entry_handler	1, 64, fiq
+	entry_handler	1, 64, error
 
-	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_irq_handler
-	b	ret_to_user
-SYM_CODE_END(el0_irq)
-
-	.align 6
-SYM_CODE_START_LOCAL_NOALIGN(el0_fiq)
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_fiq_handler
-	b	ret_to_user
-SYM_CODE_END(el0_fiq)
-
-	.align 6
-SYM_CODE_START_LOCAL_NOALIGN(el0_error)
-	kernel_entry 0
-	mov	x0, sp
-	bl	el0_error_handler
-	b	ret_to_user
-SYM_CODE_END(el0_error)
+	entry_handler	0, 64, sync
+	entry_handler	0, 64, irq
+	entry_handler	0, 64, fiq
+	entry_handler	0, 64, error
 
 #ifdef CONFIG_COMPAT
-	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el0_sync_compat)
-	kernel_entry 0, 32
-	mov	x0, sp
-	bl	el0_sync_compat_handler
-	b	ret_to_user
-SYM_CODE_END(el0_sync_compat)
-
-	.align	6
-SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
-	kernel_entry 0, 32
-	mov	x0, sp
-	bl	el0_irq_compat_handler
-	b	ret_to_user
-SYM_CODE_END(el0_irq_compat)
-
-	.align 6
-SYM_CODE_START_LOCAL_NOALIGN(el0_fiq_compat)
-	kernel_entry 0, 32
-	mov	x0, sp
-	bl	el0_fiq_compat_handler
-	b	ret_to_user
-SYM_CODE_END(el0_fiq_compat)
-
-	.align 6
-SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
-	kernel_entry 0, 32
-	mov	x0, sp
-	bl	el0_error_compat_handler
-	b	ret_to_user
-SYM_CODE_END(el0_error_compat)
+	entry_handler	0, 32, sync_compat
+	entry_handler	0, 32, irq_compat
+	entry_handler	0, 32, fiq_compat
+	entry_handler	0, 32, error_compat
 #endif
 
+SYM_CODE_START_LOCAL(ret_to_kernel)
+	kernel_exit 1
+SYM_CODE_END(ret_to_kernel)
+
 /*
  * "slow" syscall return path.
  */
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 14/14] arm64: entry: handle all vectors with C
  2021-05-10 15:56 [PATCH 00/14] arm64: entry: migrate more code to C Mark Rutland
                   ` (12 preceding siblings ...)
  2021-05-10 15:56 ` [PATCH 13/14] arm64: entry: template the entry asm functions Mark Rutland
@ 2021-05-10 15:56 ` Mark Rutland
  13 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2021-05-10 15:56 UTC (permalink / raw)
  To: linux-arm-kernel, will, catalin.marinas; +Cc: james.morse, mark.rutland, maz

We have 16 architectural exception vectors (four exception types taken from
each of EL1t, EL1h, 64-bit EL0, and 32-bit EL0). Depending on kernel
configuration we handle 8 or 12 of these with C code, with the remaining 8 or
4 handled as special cases in the entry assembly.

It would be nicer if the entry assembly were uniform for all exceptions,
and we deferred any specific handling of the exceptions to C code. This
way the entry assembly can be more easily templated without ifdeffery or
special cases, and it's easier to modify the handling of these cases in
future (e.g. to dump additional registers or other context).

This patch reworks the entry code so that we always have a C handler for
every architectural exception vector, with the entry assembly being
completely uniform. We now have to handle exceptions from EL1t and EL1h,
and also have to handle exceptions from AArch32 even when the kernel is
built without CONFIG_COMPAT. To make this clear and to simplify
templating, we rename the top-level exception handlers with a consistent
naming scheme:

  asm: <el>_<regsize>_<type>
  c:   <el>_<regsize>_<type>_handler

... where:

  <el> is `el1t`, `el1h`, or `el0`
  <regsize> is `64` or `32`
  <type> is `sync`, `irq`, `fiq`, or `error`

... e.g.

  asm: el1h_64_sync
  c:   el1h_64_sync_handler

... with lower-level handlers simply using "el1" and "compat" as today.

For unexpected exceptions, this information is passed to
__panic_unhandled(), so it can report the specific vector an unexpected
exception was taken from, e.g.

| Unhandled 64-bit el1t sync exception

For vectors we never expect to enter legitimately, the C code is
generated using a macro to avoid code duplication.
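
As a sketch (not part of the patch), UNHANDLED(el1t, 64, sync) pastes its
arguments into a handler name and description string, expanding to roughly:

  asmlinkage void noinstr el1t_64_sync_handler(struct pt_regs *regs)
  {
  	/* desc is built by string pasting: "64" "-bit " "el1t" " " "sync" */
  	const char *desc = "64-bit el1t sync";
  	__panic_unhandled(regs, desc, read_sysreg(esr_el1));
  }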

The `kernel_ventry` and `entry_handler` assembly macros are updated to
handle the new naming scheme. In theory it should be possible to
generate the entry functions at the same time as the vectors using a
single table, but this will require reworking the linker script to split
the two into separate sections, so for now we duplicate the two.
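
For example (illustrative, following the macro arguments in the diff below):

  entry_handler	1, h, 64, sync

... generates el1h_64_sync, which calls el1h_64_sync_handler, while

  entry_handler	0,  , 32, irq

... generates el0_32_irq, which calls el0_32_irq_handler.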

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/exception.h |  31 +++++---
 arch/arm64/kernel/entry-common.c   |  51 +++++++------
 arch/arm64/kernel/entry.S          | 146 ++++++++++++-------------------------
 3 files changed, 92 insertions(+), 136 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 4284ee57a9a5..40a3a20dca1c 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -31,18 +31,25 @@ static inline u32 disr_to_esr(u64 disr)
 	return esr;
 }
 
-asmlinkage void el1_sync_handler(struct pt_regs *regs);
-asmlinkage void el1_irq_handler(struct pt_regs *regs);
-asmlinkage void el1_fiq_handler(struct pt_regs *regs);
-asmlinkage void el1_error_handler(struct pt_regs *regs);
-asmlinkage void el0_sync_handler(struct pt_regs *regs);
-asmlinkage void el0_irq_handler(struct pt_regs *regs);
-asmlinkage void el0_fiq_handler(struct pt_regs *regs);
-asmlinkage void el0_error_handler(struct pt_regs *regs);
-asmlinkage void el0_sync_compat_handler(struct pt_regs *regs);
-asmlinkage void el0_irq_compat_handler(struct pt_regs *regs);
-asmlinkage void el0_fiq_compat_handler(struct pt_regs *regs);
-asmlinkage void el0_error_compat_handler(struct pt_regs *regs);
+asmlinkage void el1t_64_sync_handler(struct pt_regs *regs);
+asmlinkage void el1t_64_irq_handler(struct pt_regs *regs);
+asmlinkage void el1t_64_fiq_handler(struct pt_regs *regs);
+asmlinkage void el1t_64_error_handler(struct pt_regs *regs);
+
+asmlinkage void el1h_64_sync_handler(struct pt_regs *regs);
+asmlinkage void el1h_64_irq_handler(struct pt_regs *regs);
+asmlinkage void el1h_64_fiq_handler(struct pt_regs *regs);
+asmlinkage void el1h_64_error_handler(struct pt_regs *regs);
+
+asmlinkage void el0_64_sync_handler(struct pt_regs *regs);
+asmlinkage void el0_64_irq_handler(struct pt_regs *regs);
+asmlinkage void el0_64_fiq_handler(struct pt_regs *regs);
+asmlinkage void el0_64_error_handler(struct pt_regs *regs);
+
+asmlinkage void el0_32_sync_handler(struct pt_regs *regs);
+asmlinkage void el0_32_irq_handler(struct pt_regs *regs);
+asmlinkage void el0_32_fiq_handler(struct pt_regs *regs);
+asmlinkage void el0_32_error_handler(struct pt_regs *regs);
 
 asmlinkage void call_on_irq_stack(struct pt_regs *regs,
 				  void (*func)(struct pt_regs *));
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index d7087e22ffc1..9b6451e836d8 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -175,16 +175,11 @@ static void noinstr __panic_unhandled(struct pt_regs *regs, const char *vector,
 	panic("Unhandled exception");
 }
 
-asmlinkage void noinstr bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
-{
-	const char *handler[] = {
-		"Synchronous Abort",
-		"IRQ",
-		"FIQ",
-		"Error"
-	};
-
-	__panic_unhandled(regs, handler[reason], esr);
+#define UNHANDLED(el, regsize, vector)							\
+asmlinkage void noinstr el##_##regsize##_##vector##_handler(struct pt_regs *regs)	\
+{											\
+	const char *desc = #regsize "-bit " #el " " #vector;				\
+	__panic_unhandled(regs, desc, read_sysreg(esr_el1));				\
 }
 
 #ifdef CONFIG_ARM64_ERRATUM_1463225
@@ -236,6 +231,11 @@ static bool cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 }
 #endif /* CONFIG_ARM64_ERRATUM_1463225 */
 
+UNHANDLED(el1t, 64, sync)
+UNHANDLED(el1t, 64, irq)
+UNHANDLED(el1t, 64, fiq)
+UNHANDLED(el1t, 64, error)
+
 static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
@@ -271,7 +271,7 @@ static void noinstr el1_inv(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
-	bad_mode(regs, 0, esr);
+	__panic_unhandled(regs, "el1h sync", esr);
 	local_daif_mask();
 	exit_to_kernel_mode(regs);
 }
@@ -319,7 +319,7 @@ static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
 	exit_to_kernel_mode(regs);
 }
 
-asmlinkage void noinstr el1_sync_handler(struct pt_regs *regs)
+asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
@@ -364,17 +364,17 @@ static void noinstr el1_interrupt(struct pt_regs *regs,
 	exit_el1_irq_or_nmi(regs);
 }
 
-asmlinkage void noinstr el1_irq_handler(struct pt_regs *regs)
+asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)
 {
 	el1_interrupt(regs, handle_arch_irq);
 }
 
-asmlinkage void noinstr el1_fiq_handler(struct pt_regs *regs)
+asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
 {
 	el1_interrupt(regs, handle_arch_fiq);
 }
 
-asmlinkage void noinstr el1_error_handler(struct pt_regs *regs)
+asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
@@ -518,7 +518,7 @@ static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
 	do_ptrauth_fault(regs, esr);
 }
 
-asmlinkage void noinstr el0_sync_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_64_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
@@ -589,7 +589,7 @@ static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
 	el0_interrupt(regs, handle_arch_irq);
 }
 
-asmlinkage void noinstr el0_irq_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_64_irq_handler(struct pt_regs *regs)
 {
 	__el0_irq_handler_common(regs);
 }
@@ -599,7 +599,7 @@ static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
 	el0_interrupt(regs, handle_arch_fiq);
 }
 
-asmlinkage void noinstr el0_fiq_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_64_fiq_handler(struct pt_regs *regs)
 {
 	__el0_fiq_handler_common(regs);
 }
@@ -614,7 +614,7 @@ static void __el0_error_handler_common(struct pt_regs *regs)
 	local_daif_restore(DAIF_PROCCTX);
 }
 
-asmlinkage void noinstr el0_error_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_64_error_handler(struct pt_regs *regs)
 {
 	__el0_error_handler_common(regs);
 }
@@ -634,7 +634,7 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
 	do_el0_svc_compat(regs);
 }
 
-asmlinkage void noinstr el0_sync_compat_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_32_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
@@ -678,18 +678,23 @@ asmlinkage void noinstr el0_sync_compat_handler(struct pt_regs *regs)
 	}
 }
 
-asmlinkage void noinstr el0_irq_compat_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_32_irq_handler(struct pt_regs *regs)
 {
 	__el0_irq_handler_common(regs);
 }
 
-asmlinkage void noinstr el0_fiq_compat_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_32_fiq_handler(struct pt_regs *regs)
 {
 	__el0_fiq_handler_common(regs);
 }
 
-asmlinkage void noinstr el0_error_compat_handler(struct pt_regs *regs)
+asmlinkage void noinstr el0_32_error_handler(struct pt_regs *regs)
 {
 	__el0_error_handler_common(regs);
 }
+#else /* CONFIG_COMPAT */
+UNHANDLED(el0, 32, sync)
+UNHANDLED(el0, 32, irq)
+UNHANDLED(el0, 32, fiq)
+UNHANDLED(el0, 32, error)
 #endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d4f80b9df621..257e8192e8d8 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -51,16 +51,7 @@
 	.endr
 	.endm
 
-/*
- * Bad Abort numbers
- *-----------------
- */
-#define BAD_SYNC	0
-#define BAD_IRQ		1
-#define BAD_FIQ		2
-#define BAD_ERROR	3
-
-	.macro kernel_ventry, el:req, regsize:req, label:req
+	.macro kernel_ventry, el:req, ht, regsize:req, label:req
 	.align 7
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	.if	\el == 0
@@ -87,7 +78,7 @@ alternative_else_nop_endif
 	tbnz	x0, #THREAD_SHIFT, 0f
 	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
 	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
-	b	el\()\el\()_\label
+	b	el\el\ht\()_\regsize\()_\label
 
 0:
 	/*
@@ -119,7 +110,7 @@ alternative_else_nop_endif
 	sub	sp, sp, x0
 	mrs	x0, tpidrro_el0
 #endif
-	b	el\()\el\()_\label
+	b	el\el\ht\()_\regsize\()_\label
 	.endm
 
 	.macro tramp_alias, dst, sym
@@ -510,32 +501,25 @@ tsk	.req	x28		// current thread_info
 
 	.align	11
 SYM_CODE_START(vectors)
-	kernel_ventry	1, 64, sync_invalid		// Synchronous EL1t
-	kernel_ventry	1, 64, irq_invalid		// IRQ EL1t
-	kernel_ventry	1, 64, fiq_invalid		// FIQ EL1t
-	kernel_ventry	1, 64, error_invalid		// Error EL1t
-
-	kernel_ventry	1, 64, sync			// Synchronous EL1h
-	kernel_ventry	1, 64, irq			// IRQ EL1h
-	kernel_ventry	1, 64, fiq			// FIQ EL1h
-	kernel_ventry	1, 64, error			// Error EL1h
-
-	kernel_ventry	0, 64, sync			// Synchronous 64-bit EL0
-	kernel_ventry	0, 64, irq			// IRQ 64-bit EL0
-	kernel_ventry	0, 64, fiq			// FIQ 64-bit EL0
-	kernel_ventry	0, 64, error			// Error 64-bit EL0
-
-#ifdef CONFIG_COMPAT
-	kernel_ventry	0, 32, sync_compat		// Synchronous 32-bit EL0
-	kernel_ventry	0, 32, irq_compat		// IRQ 32-bit EL0
-	kernel_ventry	0, 32, fiq_compat		// FIQ 32-bit EL0
-	kernel_ventry	0, 32, error_compat		// Error 32-bit EL0
-#else
-	kernel_ventry	0, 32, sync_invalid		// Synchronous 32-bit EL0
-	kernel_ventry	0, 32, irq_invalid		// IRQ 32-bit EL0
-	kernel_ventry	0, 32, fiq_invalid		// FIQ 32-bit EL0
-	kernel_ventry	0, 32, error_invalid		// Error 32-bit EL0
-#endif
+	kernel_ventry	1, t, 64, sync		// Synchronous EL1t
+	kernel_ventry	1, t, 64, irq		// IRQ EL1t
+	kernel_ventry	1, t, 64, fiq		// FIQ EL1t
+	kernel_ventry	1, t, 64, error		// Error EL1t
+
+	kernel_ventry	1, h, 64, sync		// Synchronous EL1h
+	kernel_ventry	1, h, 64, irq		// IRQ EL1h
+	kernel_ventry	1, h, 64, fiq		// FIQ EL1h
+	kernel_ventry	1, h, 64, error		// Error EL1h
+
+	kernel_ventry	0,  , 64, sync		// Synchronous 64-bit EL0
+	kernel_ventry	0,  , 64, irq		// IRQ 64-bit EL0
+	kernel_ventry	0,  , 64, fiq		// FIQ 64-bit EL0
+	kernel_ventry	0,  , 64, error		// Error 64-bit EL0
+
+	kernel_ventry	0,  , 32, sync		// Synchronous 32-bit EL0
+	kernel_ventry	0,  , 32, irq		// IRQ 32-bit EL0
+	kernel_ventry	0,  , 32, fiq		// FIQ 32-bit EL0
+	kernel_ventry	0,  , 32, error		// Error 32-bit EL0
 SYM_CODE_END(vectors)
 
 #ifdef CONFIG_VMAP_STACK
@@ -566,83 +550,43 @@ __bad_stack:
 	ASM_BUG()
 #endif /* CONFIG_VMAP_STACK */
 
-/*
- * Invalid mode handlers
- */
-	.macro	inv_entry, el, reason, regsize = 64
-	kernel_entry \el, \regsize
-	mov	x0, sp
-	mov	x1, #\reason
-	mrs	x2, esr_el1
-	bl	bad_mode
-	ASM_BUG()
-	.endm
-
-SYM_CODE_START_LOCAL(el0_sync_invalid)
-	inv_entry 0, BAD_SYNC
-SYM_CODE_END(el0_sync_invalid)
-
-SYM_CODE_START_LOCAL(el0_irq_invalid)
-	inv_entry 0, BAD_IRQ
-SYM_CODE_END(el0_irq_invalid)
-
-SYM_CODE_START_LOCAL(el0_fiq_invalid)
-	inv_entry 0, BAD_FIQ
-SYM_CODE_END(el0_fiq_invalid)
-
-SYM_CODE_START_LOCAL(el0_error_invalid)
-	inv_entry 0, BAD_ERROR
-SYM_CODE_END(el0_error_invalid)
 
-SYM_CODE_START_LOCAL(el1_sync_invalid)
-	inv_entry 1, BAD_SYNC
-SYM_CODE_END(el1_sync_invalid)
-
-SYM_CODE_START_LOCAL(el1_irq_invalid)
-	inv_entry 1, BAD_IRQ
-SYM_CODE_END(el1_irq_invalid)
-
-SYM_CODE_START_LOCAL(el1_fiq_invalid)
-	inv_entry 1, BAD_FIQ
-SYM_CODE_END(el1_fiq_invalid)
-
-SYM_CODE_START_LOCAL(el1_error_invalid)
-	inv_entry 1, BAD_ERROR
-SYM_CODE_END(el1_error_invalid)
-
-	.macro entry_handler el:req, regsize:req, label:req
+	.macro entry_handler el:req, ht, regsize:req, label:req
 	.align 6
-SYM_CODE_START_LOCAL_NOALIGN(el\el\()_\label)
+SYM_CODE_START_LOCAL_NOALIGN(el\el\ht\()_\regsize\()_\label)
 	kernel_entry \el, \regsize
 	mov	x0, sp
-	bl	el\el\()_\label\()_handler
+	bl	el\el\ht\()_\regsize\()_\label\()_handler
 	.if \el == 0
 	b	ret_to_user
 	.else
 	b	ret_to_kernel
 	.endif
-SYM_CODE_END(el\el\()_\label)
+SYM_CODE_END(el\el\ht\()_\regsize\()_\label)
 	.endm
 
 /*
  * Early exception handlers
  */
-	entry_handler	1, 64, sync
-	entry_handler	1, 64, irq
-	entry_handler	1, 64, fiq
-	entry_handler	1, 64, error
-
-	entry_handler	0, 64, sync
-	entry_handler	0, 64, irq
-	entry_handler	0, 64, fiq
-	entry_handler	0, 64, error
-
-#ifdef CONFIG_COMPAT
-	entry_handler	0, 32, sync_compat
-	entry_handler	0, 32, irq_compat
-	entry_handler	0, 32, fiq_compat
-	entry_handler	0, 32, error_compat
-#endif
+	entry_handler	1, t, 64, sync
+	entry_handler	1, t, 64, irq
+	entry_handler	1, t, 64, fiq
+	entry_handler	1, t, 64, error
+
+	entry_handler	1, h, 64, sync
+	entry_handler	1, h, 64, irq
+	entry_handler	1, h, 64, fiq
+	entry_handler	1, h, 64, error
+
+	entry_handler	0,  , 64, sync
+	entry_handler	0,  , 64, irq
+	entry_handler	0,  , 64, fiq
+	entry_handler	0,  , 64, error
+
+	entry_handler	0,  , 32, sync
+	entry_handler	0,  , 32, irq
+	entry_handler	0,  , 32, fiq
+	entry_handler	0,  , 32, error
 
 SYM_CODE_START_LOCAL(ret_to_kernel)
 	kernel_exit 1
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-05-10 16:03 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-10 15:56 [PATCH 00/14] arm64: entry: migrate more code to C Mark Rutland
2021-05-10 15:56 ` [PATCH 01/14] arm64: remove redundant local_daif_mask() in bad_mode() Mark Rutland
2021-05-10 15:56 ` [PATCH 02/14] arm64: entry: unmask IRQ+FIQ after EL0 handling Mark Rutland
2021-05-10 15:56 ` [PATCH 03/14] arm64: entry: convert SError handlers to C Mark Rutland
2021-05-10 15:56 ` [PATCH 04/14] arm64: entry: move arm64_preempt_schedule_irq to entry-common.c Mark Rutland
2021-05-10 15:56 ` [PATCH 05/14] arm64: entry: move preempt logic to C Mark Rutland
2021-05-10 15:56 ` [PATCH 06/14] arm64: entry: add a call_on_irq_stack helper Mark Rutland
2021-05-10 15:56 ` [PATCH 07/14] arm64: entry: convert IRQ+FIQ handlers to C Mark Rutland
2021-05-10 15:56 ` [PATCH 08/14] arm64: entry: organise entry handlers consistently Mark Rutland
2021-05-10 15:56 ` [PATCH 09/14] arm64: entry: organise entry vectors consistently Mark Rutland
2021-05-10 15:56 ` [PATCH 10/14] arm64: entry: consolidate EL1 exception returns Mark Rutland
2021-05-10 15:56 ` [PATCH 11/14] arm64: entry: move bad_mode() to entry-common.c Mark Rutland
2021-05-10 15:56 ` [PATCH 12/14] arm64: entry: improve bad_mode() Mark Rutland
2021-05-10 15:56 ` [PATCH 13/14] arm64: entry: template the entry asm functions Mark Rutland
2021-05-10 15:56 ` [PATCH 14/14] arm64: entry: handle all vectors with C Mark Rutland
