linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C
@ 2019-10-03 17:16 James Morse
  2019-10-03 17:16 ` [PATCH 1/8] arm64: Fix incorrect irqflag restore for priority masking for compat James Morse
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

Hi folks,

This series is based on Mark Rutland's 'deasm' series here[0]. This is
just the parts that relate to synchronous exceptions.

To handle v8.2 RAS errors directly in the kernel, we need to read some
CPU system registers immediately after taking a synchronous external
abort.
Just to be awkward, the entry assembly calls 'inherit_daif', so if we
took an external abort from a pre-emptible context, we become pre-emptible
again before calling C code. If we then move to another CPU, we can no
longer read those system registers.

Ideally, for an external abort, the entry code would increase the
preempt count. Doing this in assembly isn't going to improve entry.S
readability.
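
As a rough sketch only (the helper below is made up for this cover
letter; it assumes the SYS_ERXSTATUS_EL1 encoding from asm/sysreg.h and
elides error-record selection and validity checks), the sort of thing
kernel-first handling needs to be able to do is:

  #include <linux/bug.h>
  #include <linux/preempt.h>
  #include <linux/printk.h>
  #include <linux/types.h>
  #include <asm/sysreg.h>

  /* Sketch: read the v8.2 RAS syndrome on the CPU that took the abort. */
  static void example_read_sea_syndrome(void)
  {
          u64 status;

          /* Only meaningful while we're still on the faulting CPU. */
          WARN_ON(preemptible());

          status = read_sysreg_s(SYS_ERXSTATUS_EL1);
          pr_err("SEA: ERXSTATUS_EL1: %016llx\n", status);
  }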


Bite the bullet, and move the synchronous exception paths into C.


This series can be retrieved from:
git://linux-arm.org/linux-jm.git deasm_sync_only/v1

(which has been force-pushed since the end of the merge-window...)

Patch one has already been posted as a fix here:
https://lore.kernel.org/linux-arm-kernel/20191003170127.127278-1-james.morse@arm.com/


Bugs welcome.


Thanks,

James


[0] https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/entry-deasm
[1] https://lore.kernel.org/linux-acpi/1562086280-5351-1-git-send-email-baicar@os.amperecomputing.com/


James Morse (5):
  arm64: Fix incorrect irqflag restore for priority masking for compat
  arm64: remove __exception annotations
  arm64: Add prototypes for functions called by entry.S
  arm64: Remove asmlinkage from updated functions
  arm64: entry-common: don't touch daif before bp-hardening

Mark Rutland (3):
  arm64: add local_daif_inherit()
  arm64: entry: convert el1_sync to C
  arm64: entry: convert el0_sync to C

 arch/arm64/include/asm/asm-uaccess.h |  10 -
 arch/arm64/include/asm/daifflags.h   |  16 ++
 arch/arm64/include/asm/exception.h   |  22 +-
 arch/arm64/include/asm/processor.h   |   7 +
 arch/arm64/include/asm/traps.h       |  10 -
 arch/arm64/kernel/Makefile           |   6 +-
 arch/arm64/kernel/entry-common.c     | 331 +++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S            | 274 +---------------------
 arch/arm64/kernel/fpsimd.c           |   6 +-
 arch/arm64/kernel/probes/kprobes.c   |   4 -
 arch/arm64/kernel/syscall.c          |   4 +-
 arch/arm64/kernel/traps.c            |  12 +-
 arch/arm64/kernel/vmlinux.lds.S      |   3 -
 arch/arm64/mm/fault.c                |  55 ++---
 14 files changed, 409 insertions(+), 351 deletions(-)
 create mode 100644 arch/arm64/kernel/entry-common.c

-- 
2.20.1



* [PATCH 1/8] arm64: Fix incorrect irqflag restore for priority masking for compat
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-03 17:16 ` [PATCH 2/8] arm64: remove __exception annotations James Morse
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

Commit bd82d4bd2188 ("arm64: Fix incorrect irqflag restore for priority
masking") added a macro to the entry.S call paths that leave the
PSTATE.I bit set. This tells the pNMI masking logic that interrupts
are masked by the CPU, not by the PMR. This value is read back by
local_daif_save().

Commit bd82d4bd2188 added this call to el0_svc, as el0_svc_handler
is called with interrupts masked. el0_svc_compat was missed; it should
be covered in the same way, as both of these paths end up in
el0_svc_common(), which expects to unmask interrupts.
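
To illustrate the interaction (this helper is made up and the logic is
paraphrased, not the kernel's exact code): with pseudo-NMI, "are IRQs
masked?" has to consider both PSTATE.I and the PMR, so an entry path
that leaves PSTATE.I set needs to record that, otherwise the wrong
state is saved and later restored:

  #include <linux/types.h>
  #include <asm/ptrace.h>		/* PSR_I_BIT, GIC_PRIO_IRQON */
  #include <asm/sysreg.h>

  /* Sketch: how masked-ness is derived when pseudo-NMI is in use. */
  static inline bool example_irqs_masked(void)
  {
          /* Masked by the CPU: PSTATE.I is set... */
          if (read_sysreg(daif) & PSR_I_BIT)
                  return true;

          /* ...otherwise masked only if the PMR says so. */
          return read_sysreg_s(SYS_ICC_PMR_EL1) != GIC_PRIO_IRQON;
  }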

Fixes: bd82d4bd2188 ("arm64: Fix incorrect irqflag restore for priority masking")
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
---
This patch previously posted as a standalone fix here:
Link: https://lore.kernel.org/linux-arm-kernel/20191003170127.127278-1-james.morse@arm.com/


 arch/arm64/kernel/entry.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 84a822748c84..e304fe04b098 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -775,6 +775,7 @@ el0_sync_compat:
 	b.ge	el0_dbg
 	b	el0_inv
 el0_svc_compat:
+	gic_prio_kentry_setup tmp=x1
 	mov	x0, sp
 	bl	el0_svc_compat_handler
 	b	ret_to_user
-- 
2.20.1



* [PATCH 2/8] arm64: remove __exception annotations
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
  2019-10-03 17:16 ` [PATCH 1/8] arm64: Fix incorrect irqflag restore for priority masking for compat James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-04 10:17   ` Mark Rutland
  2019-10-04 13:03   ` Marc Gonzalez
  2019-10-03 17:16 ` [PATCH 3/8] arm64: Add prototypes for functions called by entry.S James Morse
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
stack frame") arm64 has not used the __exception annotation to dump
the pt_regs during stack tracing, and in_exception_text() has no callers.

This annotation is now only used to blacklist kprobes; it means the same
as __kprobes.

Section annotations like this require the functions to be grouped
together between the start/end markers, and placed according to
the linker script. For kprobes we also have NOKPROBE_SYMBOL() which
logs the symbol address in a section that kprobes parses and
blacklists at boot.

Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of
blacklisted symbols, and saves us from having an arm64 specific
spelling of __kprobes.
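
For reference, the two mechanisms look roughly like this ('some_handler'
is a placeholder, and the NOKPROBE_SYMBOL() definition below is
paraphrased from the asm-generic header rather than quoted exactly):

  /* Section annotation: the function body itself has to live in a
   * dedicated section, between linker-script start/end markers. */
  void __kprobes some_handler(struct pt_regs *regs);

  /* NOKPROBE_SYMBOL(): only the symbol's address is recorded, in a
   * table the kprobes core parses and blacklists at boot. Roughly: */
  #define NOKPROBE_SYMBOL(fname)					\
          static unsigned long __used				\
          __attribute__((__section__("_kprobe_blacklist")))	\
          _kbl_addr_##fname = (unsigned long)fname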

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>

---
(__exception_irq_entry means no-kprobes, and optionally places the function
 in a section ftrace can use to pretty-print interrupt handler boundaries.)
---
 arch/arm64/include/asm/exception.h |  4 ++--
 arch/arm64/include/asm/traps.h     | 10 ---------
 arch/arm64/kernel/probes/kprobes.c |  4 ----
 arch/arm64/kernel/traps.c          | 10 ++++++---
 arch/arm64/kernel/vmlinux.lds.S    |  3 ---
 arch/arm64/mm/fault.c              | 34 +++++++++++++++---------------
 6 files changed, 26 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index a17393ff6677..b0b3ba56e919 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -8,14 +8,14 @@
 #define __ASM_EXCEPTION_H
 
 #include <asm/esr.h>
+#include <asm/kprobes.h>
 
 #include <linux/interrupt.h>
 
-#define __exception	__attribute__((section(".exception.text")))
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 #define __exception_irq_entry	__irq_entry
 #else
-#define __exception_irq_entry	__exception
+#define __exception_irq_entry	__kprobes
 #endif
 
 static inline u32 disr_to_esr(u64 disr)
diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index 59690613ac31..cee5928e1b7d 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -42,16 +42,6 @@ static inline int __in_irqentry_text(unsigned long ptr)
 	       ptr < (unsigned long)&__irqentry_text_end;
 }
 
-static inline int in_exception_text(unsigned long ptr)
-{
-	int in;
-
-	in = ptr >= (unsigned long)&__exception_text_start &&
-	     ptr < (unsigned long)&__exception_text_end;
-
-	return in ? : __in_irqentry_text(ptr);
-}
-
 static inline int in_entry_text(unsigned long ptr)
 {
 	return ptr >= (unsigned long)&__entry_text_start &&
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index c4452827419b..d1c95dcf1d78 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -455,10 +455,6 @@ int __init arch_populate_kprobe_blacklist(void)
 					(unsigned long)__irqentry_text_end);
 	if (ret)
 		return ret;
-	ret = kprobe_add_area_blacklist((unsigned long)__exception_text_start,
-					(unsigned long)__exception_text_end);
-	if (ret)
-		return ret;
 	ret = kprobe_add_area_blacklist((unsigned long)__idmap_text_start,
 					(unsigned long)__idmap_text_end);
 	if (ret)
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 34739e80211b..ba1a571a7774 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -35,6 +35,7 @@
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
 #include <asm/insn.h>
+#include <asm/kprobes.h>
 #include <asm/traps.h>
 #include <asm/smp.h>
 #include <asm/stack_pointer.h>
@@ -393,7 +394,7 @@ void arm64_notify_segfault(unsigned long addr)
 	force_signal_inject(SIGSEGV, code, addr);
 }
 
-asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
+asmlinkage void do_undefinstr(struct pt_regs *regs)
 {
 	/* check for AArch32 breakpoint instructions */
 	if (!aarch32_break_handler(regs))
@@ -405,6 +406,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
 	BUG_ON(!user_mode(regs));
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
 }
+NOKPROBE_SYMBOL(do_undefinstr);
 
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
@@ -667,7 +669,7 @@ static const struct sys64_hook cp15_64_hooks[] = {
 	{},
 };
 
-asmlinkage void __exception do_cp15instr(unsigned int esr, struct pt_regs *regs)
+asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs)
 {
 	const struct sys64_hook *hook, *hook_base;
 
@@ -705,9 +707,10 @@ asmlinkage void __exception do_cp15instr(unsigned int esr, struct pt_regs *regs)
 	 */
 	do_undefinstr(regs);
 }
+NOKPROBE_SYMBOL(do_cp15instr);
 #endif
 
-asmlinkage void __exception do_sysinstr(unsigned int esr, struct pt_regs *regs)
+asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs)
 {
 	const struct sys64_hook *hook;
 
@@ -724,6 +727,7 @@ asmlinkage void __exception do_sysinstr(unsigned int esr, struct pt_regs *regs)
 	 */
 	do_undefinstr(regs);
 }
+NOKPROBE_SYMBOL(do_sysinstr);
 
 static const char *esr_class_str[] = {
 	[0 ... ESR_ELx_EC_MAX]		= "UNRECOGNIZED EC",
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index aa76f7259668..009057517bdd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -111,9 +111,6 @@ SECTIONS
 	}
 	.text : {			/* Real text segment		*/
 		_stext = .;		/* Text and read-only data	*/
-			__exception_text_start = .;
-			*(.exception.text)
-			__exception_text_end = .;
 			IRQENTRY_TEXT
 			SOFTIRQENTRY_TEXT
 			ENTRY_TEXT
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 115d7a0e4b08..ba62098de920 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -33,6 +33,7 @@
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
 #include <asm/kasan.h>
+#include <asm/kprobes.h>
 #include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/pgtable.h>
@@ -723,8 +724,8 @@ static const struct fault_info fault_info[] = {
 	{ do_bad,		SIGKILL, SI_KERNEL,	"unknown 63"			},
 };
 
-asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
-					 struct pt_regs *regs)
+asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
+			     struct pt_regs *regs)
 {
 	const struct fault_info *inf = esr_to_fault_info(esr);
 
@@ -740,16 +741,17 @@ asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
 	arm64_notify_die(inf->name, regs,
 			 inf->sig, inf->code, (void __user *)addr, esr);
 }
+NOKPROBE_SYMBOL(do_mem_abort);
 
-asmlinkage void __exception do_el0_irq_bp_hardening(void)
+asmlinkage void do_el0_irq_bp_hardening(void)
 {
 	/* PC has already been checked in entry.S */
 	arm64_apply_bp_hardening();
 }
+NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
 
-asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
-						   unsigned int esr,
-						   struct pt_regs *regs)
+asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
+				       struct pt_regs *regs)
 {
 	/*
 	 * We've taken an instruction abort from userspace and not yet
@@ -762,11 +764,10 @@ asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
 	local_daif_restore(DAIF_PROCCTX);
 	do_mem_abort(addr, esr, regs);
 }
+NOKPROBE_SYMBOL(do_el0_ia_bp_hardening);
 
-
-asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
-					   unsigned int esr,
-					   struct pt_regs *regs)
+asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
+			       struct pt_regs *regs)
 {
 	if (user_mode(regs)) {
 		if (!is_ttbr0_addr(instruction_pointer(regs)))
@@ -777,6 +778,7 @@ asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
 	arm64_notify_die("SP/PC alignment exception", regs,
 			 SIGBUS, BUS_ADRALN, (void __user *)addr, esr);
 }
+NOKPROBE_SYMBOL(do_sp_pc_abort);
 
 int __init early_brk64(unsigned long addr, unsigned int esr,
 		       struct pt_regs *regs);
@@ -859,8 +861,7 @@ NOKPROBE_SYMBOL(debug_exception_exit);
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
 
-static int __exception
-cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
+static int cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 {
 	if (user_mode(regs))
 		return 0;
@@ -879,16 +880,15 @@ cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 	return 1;
 }
 #else
-static int __exception
-cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
+static int cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 {
 	return 0;
 }
 #endif /* CONFIG_ARM64_ERRATUM_1463225 */
+NOKPROBE_SYMBOL(cortex_a76_erratum_1463225_debug_handler);
 
-asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
-					       unsigned int esr,
-					       struct pt_regs *regs)
+asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
+				   unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
 	unsigned long pc = instruction_pointer(regs);
-- 
2.20.1



* [PATCH 3/8] arm64: Add prototypes for functions called by entry.S
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
  2019-10-03 17:16 ` [PATCH 1/8] arm64: Fix incorrect irqflag restore for priority masking for compat James Morse
  2019-10-03 17:16 ` [PATCH 2/8] arm64: remove __exception annotations James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-04 10:22   ` Mark Rutland
  2019-10-03 17:16 ` [PATCH 4/8] arm64: add local_daif_inherit() James Morse
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

Functions that are only called by assembly don't always have a
C header file prototype.

Add the prototypes before moving the assembly callers to C.

Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/exception.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index b0b3ba56e919..8bb3fe2d71a8 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -31,5 +31,26 @@ static inline u32 disr_to_esr(u64 disr)
 }
 
 asmlinkage void enter_from_user_mode(void);
+asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
+			     struct pt_regs *regs);
+asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
+			       struct pt_regs *regs);
+asmlinkage void do_undefinstr(struct pt_regs *regs);
+asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
+asmlinkage void do_debug_exception(unsigned long addr, unsigned int esr,
+				   struct pt_regs *regs);
+asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
+asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs);
+asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
+asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs);
+asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
+			       struct pt_regs *regs);
+asmlinkage void bad_el0_sync(struct pt_regs *regs, int reason,
+			     unsigned int esr);
+asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs);
+asmlinkage void el0_svc_handler(struct pt_regs *regs);
+asmlinkage void el0_svc_compat_handler(struct pt_regs *regs);
+asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
+				       struct pt_regs *regs);
 
 #endif	/* __ASM_EXCEPTION_H */
-- 
2.20.1



* [PATCH 4/8] arm64: add local_daif_inherit()
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
                   ` (2 preceding siblings ...)
  2019-10-03 17:16 ` [PATCH 3/8] arm64: Add prototypes for functions called by entry.S James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-03 17:16 ` [PATCH 5/8] arm64: entry: convert el1_sync to C James Morse
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

Some synchronous exceptions can be taken from a number of contexts,
e.g. where IRQs may or may not be masked. In the entry assembly for
these exceptions, we use the inherit_daif assembly macro to ensure
that we only mask those exceptions which were masked when the exception
was taken.

So that we can do the same from C code, this patch adds a new
local_daif_inherit() function, following the existing local_daif_*()
naming scheme.
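
For context, the converted EL1 handlers later in this series use it like
so (abridged from the el1_abort() hunk in patch 5):

  static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
  {
          unsigned long far = read_sysreg(far_el1);

          /* Only unmask what was unmasked when the exception was taken. */
          local_daif_inherit(regs);
          do_mem_abort(untagged_addr(far), esr, regs);
  }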

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[moved away from local_daif_restore()]
Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/daifflags.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 063c964af705..9207cd5aa39e 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -9,6 +9,7 @@
 
 #include <asm/arch_gicv3.h>
 #include <asm/cpufeature.h>
+#include <asm/ptrace.h>
 
 #define DAIF_PROCCTX		0
 #define DAIF_PROCCTX_NOIRQ	PSR_I_BIT
@@ -109,4 +110,19 @@ static inline void local_daif_restore(unsigned long flags)
 		trace_hardirqs_off();
 }
 
+/*
+ * Called by synchronous exception handlers to restore the DAIF bits that were
+ * modified by taking an exception.
+ */
+static inline void local_daif_inherit(struct pt_regs *regs)
+{
+	unsigned long flags = regs->pstate & DAIF_MASK;
+
+	/*
+	 * We can't use local_daif_restore(regs->pstate) here as
+	 * system_has_prio_mask_debugging() won't restore the I bit if it can
+	 * use the pmr instead.
+	 */
+	write_sysreg(flags, daif);
+}
 #endif
-- 
2.20.1



* [PATCH 5/8] arm64: entry: convert el1_sync to C
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
                   ` (3 preceding siblings ...)
  2019-10-03 17:16 ` [PATCH 4/8] arm64: add local_daif_inherit() James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-04 10:39   ` Mark Rutland
  2019-10-03 17:16 ` [PATCH 6/8] arm64: entry: convert el0_sync " James Morse
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

This patch converts the EL1 sync entry assembly logic to C code.

Doing this will allow us to make changes in a slightly more
readable way. A case in point is supporting kernel-first RAS:
do_sea() should be called on the CPU that took the fault.

The assembly code is largely converted to C in a relatively
straightforward manner.

Since all sync sites share a common asm entry point, the ASM_BUG()
instances are no longer required for effective backtraces back to
assembly, and we don't need similar BUG() entries.

The ESR_ELx.EC codes for all (supported) debug exceptions are now
checked in the el1_sync_handler's switch statement, which renders the
check in el1_dbg redundant. This both simplifies the el1_dbg handler,
and makes the EL1 exception handling more robust to
currently-unallocated ESR_ELx.EC encodings.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes, moved prototypes]
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
---
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/kernel/Makefile         |  6 +-
 arch/arm64/kernel/entry-common.c   | 98 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S          | 69 +--------------------
 4 files changed, 103 insertions(+), 71 deletions(-)
 create mode 100644 arch/arm64/kernel/entry-common.c

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 8bb3fe2d71a8..e2f87b4ecbfc 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -9,6 +9,7 @@
 
 #include <asm/esr.h>
 #include <asm/kprobes.h>
+#include <asm/ptrace.h>
 
 #include <linux/interrupt.h>
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 478491f07b4f..fc6488660f64 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -13,9 +13,9 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
 
 # Object file lists.
 obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o		\
-			   entry-fpsimd.o process.o ptrace.o setup.o signal.o	\
-			   sys.o stacktrace.o time.o traps.o io.o vdso.o	\
-			   hyp-stub.o psci.o cpu_ops.o insn.o	\
+			   entry-common.o entry-fpsimd.o process.o ptrace.o	\
+			   setup.o signal.o sys.o stacktrace.o time.o traps.o	\
+			   io.o vdso.o hyp-stub.o psci.o cpu_ops.o insn.o	\
 			   return_address.o cpuinfo.o cpu_errata.o		\
 			   cpufeature.o alternative.o cacheinfo.o		\
 			   smp.o smp_spin_table.o topology.o smccc-call.o	\
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
new file mode 100644
index 000000000000..e726d1f4b9e9
--- /dev/null
+++ b/arch/arm64/kernel/entry-common.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Exception handling code
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ */
+
+#include <linux/context_tracking.h>
+#include <linux/ptrace.h>
+#include <linux/thread_info.h>
+
+#include <asm/cpufeature.h>
+#include <asm/daifflags.h>
+#include <asm/esr.h>
+#include <asm/exception.h>
+#include <asm/kprobes.h>
+#include <asm/sysreg.h>
+
+static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	local_daif_inherit(regs);
+	far = untagged_addr(far);
+	do_mem_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_abort);
+
+static void notrace el1_pc(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	local_daif_inherit(regs);
+	do_sp_pc_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_pc);
+
+static void el1_undef(struct pt_regs *regs)
+{
+	local_daif_inherit(regs);
+	do_undefinstr(regs);
+}
+NOKPROBE_SYMBOL(el1_undef);
+
+static void el1_inv(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	bad_mode(regs, 0, esr);
+}
+NOKPROBE_SYMBOL(el1_inv);
+
+static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	/*
+	 * The CPU masked interrupts, and we are leaving them masked during
+	 * do_debug_exception(). Update PMR as if we had called
+	 * local_mask_daif().
+	 */
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	do_debug_exception(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_dbg);
+
+asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	switch (ESR_ELx_EC(esr)) {
+	case ESR_ELx_EC_DABT_CUR:
+	case ESR_ELx_EC_IABT_CUR:
+		el1_abort(regs, esr);
+		break;
+	/*
+	 * We don't handle ESR_ELx_EC_SP_ALIGN, since we will have hit a
+	 * recursive exception when trying to push the initial pt_regs.
+	 */
+	case ESR_ELx_EC_PC_ALIGN:
+		el1_pc(regs, esr);
+		break;
+	case ESR_ELx_EC_SYS64:
+	case ESR_ELx_EC_UNKNOWN:
+		el1_undef(regs);
+		break;
+	case ESR_ELx_EC_BREAKPT_CUR:
+	case ESR_ELx_EC_SOFTSTP_CUR:
+	case ESR_ELx_EC_WATCHPT_CUR:
+	case ESR_ELx_EC_BRK64:
+		el1_dbg(regs, esr);
+		break;
+	default:
+		el1_inv(regs, esr);
+	};
+}
+NOKPROBE_SYMBOL(el1_sync_handler);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e304fe04b098..5d7f42eb0e89 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -578,76 +578,9 @@ ENDPROC(el1_error_invalid)
 	.align	6
 el1_sync:
 	kernel_entry 1
-	mrs	x1, esr_el1			// read the syndrome register
-	lsr	x24, x1, #ESR_ELx_EC_SHIFT	// exception class
-	cmp	x24, #ESR_ELx_EC_DABT_CUR	// data abort in EL1
-	b.eq	el1_da
-	cmp	x24, #ESR_ELx_EC_IABT_CUR	// instruction abort in EL1
-	b.eq	el1_ia
-	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
-	b.eq	el1_undef
-	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el1_pc
-	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL1
-	b.eq	el1_undef
-	cmp	x24, #ESR_ELx_EC_BREAKPT_CUR	// debug exception in EL1
-	b.ge	el1_dbg
-	b	el1_inv
-
-el1_ia:
-	/*
-	 * Fall through to the Data abort case
-	 */
-el1_da:
-	/*
-	 * Data abort handling
-	 */
-	mrs	x3, far_el1
-	inherit_daif	pstate=x23, tmp=x2
-	clear_address_tag x0, x3
-	mov	x2, sp				// struct pt_regs
-	bl	do_mem_abort
-
-	kernel_exit 1
-el1_pc:
-	/*
-	 * PC alignment exception handling. We don't handle SP alignment faults,
-	 * since we will have hit a recursive exception when trying to push the
-	 * initial pt_regs.
-	 */
-	mrs	x0, far_el1
-	inherit_daif	pstate=x23, tmp=x2
-	mov	x2, sp
-	bl	do_sp_pc_abort
-	ASM_BUG()
-el1_undef:
-	/*
-	 * Undefined instruction
-	 */
-	inherit_daif	pstate=x23, tmp=x2
 	mov	x0, sp
-	bl	do_undefinstr
-	kernel_exit 1
-el1_dbg:
-	/*
-	 * Debug exception handling
-	 */
-	cmp	x24, #ESR_ELx_EC_BRK64		// if BRK64
-	cinc	x24, x24, eq			// set bit '0'
-	tbz	x24, #0, el1_inv		// EL1 only
-	gic_prio_kentry_setup tmp=x3
-	mrs	x0, far_el1
-	mov	x2, sp				// struct pt_regs
-	bl	do_debug_exception
+	bl	el1_sync_handler
 	kernel_exit 1
-el1_inv:
-	// TODO: add support for undefined instructions in kernel mode
-	inherit_daif	pstate=x23, tmp=x2
-	mov	x0, sp
-	mov	x2, x1
-	mov	x1, #BAD_SYNC
-	bl	bad_mode
-	ASM_BUG()
 ENDPROC(el1_sync)
 
 	.align	6
-- 
2.20.1



* [PATCH 6/8] arm64: entry: convert el0_sync to C
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
                   ` (4 preceding siblings ...)
  2019-10-03 17:16 ` [PATCH 5/8] arm64: entry: convert el1_sync to C James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-04 12:57   ` Mark Rutland
  2019-10-03 17:16 ` [PATCH 7/8] arm64: Remove asmlinkage from updated functions James Morse
  2019-10-03 17:16 ` [PATCH 8/8] arm64: entry-common: don't touch daif before bp-hardening James Morse
  7 siblings, 1 reply; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

This is largely a 1-1 conversion of asm to C, with a couple of caveats.

The el0_sync{_compat} switches explicitly handle all the EL0 debug
cases, so el0_dbg doesn't have to try to bail out for unexpected EL1
debug ESR values. This also means that an unexpected vector catch from
AArch32 is routed to el0_inv.

We *could* merge the native and compat switches, which would make the
diffstat negative, but I've tried to stay as close to the existing
assembly as possible for the moment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes. removed irq trace
 calls as the C helpers do this. renamed el0_dbg's use of FAR]
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
---
 arch/arm64/include/asm/asm-uaccess.h |  10 --
 arch/arm64/kernel/entry-common.c     | 221 +++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S            | 206 +------------------------
 3 files changed, 226 insertions(+), 211 deletions(-)

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index f74909ba29bd..a70575edae8e 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -74,14 +74,4 @@ alternative_if ARM64_ALT_PAN_NOT_UAO
 	SET_PSTATE_PAN(0)
 alternative_else_nop_endif
 	.endm
-
-/*
- * Remove the address tag from a virtual address, if present.
- */
-	.macro	clear_address_tag, dst, addr
-	tst	\addr, #(1 << 55)
-	bic	\dst, \addr, #(0xff << 56)
-	csel	\dst, \dst, \addr, eq
-	.endm
-
 #endif
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index e726d1f4b9e9..176969e55677 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -96,3 +96,224 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 	};
 }
 NOKPROBE_SYMBOL(el1_sync_handler);
+
+static void notrace el0_da(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	far = untagged_addr(far);
+	do_mem_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_da);
+
+static void notrace el0_ia(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	do_el0_ia_bp_hardening(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_ia);
+
+static void notrace el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_fpsimd_acc(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_fpsimd_acc);
+
+static void notrace el0_sve_acc(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_sve_acc(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_sve_acc);
+
+static void notrace el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_fpsimd_exc(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_fpsimd_exc);
+
+static void notrace el0_sys(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_sysinstr(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_sys);
+
+static void notrace el0_pc(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	do_sp_pc_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_pc);
+
+static void notrace el0_sp(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	do_sp_pc_abort(regs->sp, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_sp);
+
+static void notrace el0_undef(struct pt_regs *regs)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_undefinstr(regs);
+}
+NOKPROBE_SYMBOL(el0_undef);
+
+static void notrace el0_inv(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	bad_el0_sync(regs, 0, esr);
+}
+NOKPROBE_SYMBOL(el0_inv);
+
+static void notrace el0_dbg(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long addr_if_watchpoint = read_sysreg(far_el1);
+
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	user_exit_irqoff();
+	do_debug_exception(addr_if_watchpoint, esr, regs);
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+}
+NOKPROBE_SYMBOL(el0_dbg);
+
+static void notrace el0_svc(struct pt_regs *regs)
+{
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	el0_svc_handler(regs);
+}
+NOKPROBE_SYMBOL(el0_svc);
+
+asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	switch (ESR_ELx_EC(esr)) {
+	case ESR_ELx_EC_SVC64:
+		el0_svc(regs);
+		break;
+	case ESR_ELx_EC_DABT_LOW:
+		el0_da(regs, esr);
+		break;
+	case ESR_ELx_EC_IABT_LOW:
+		el0_ia(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_ASIMD:
+		el0_fpsimd_acc(regs, esr);
+		break;
+	case ESR_ELx_EC_SVE:
+		el0_sve_acc(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_EXC64:
+		el0_fpsimd_exc(regs, esr);
+		break;
+	case ESR_ELx_EC_SYS64:
+	case ESR_ELx_EC_WFx:
+		el0_sys(regs, esr);
+		break;
+	case ESR_ELx_EC_SP_ALIGN:
+		el0_sp(regs, esr);
+		break;
+	case ESR_ELx_EC_PC_ALIGN:
+		el0_pc(regs, esr);
+		break;
+	case ESR_ELx_EC_UNKNOWN:
+		el0_undef(regs);
+		break;
+	case ESR_ELx_EC_BREAKPT_LOW:
+	case ESR_ELx_EC_SOFTSTP_LOW:
+	case ESR_ELx_EC_WATCHPT_LOW:
+	case ESR_ELx_EC_BRK64:
+		el0_dbg(regs, esr);
+		break;
+	default:
+		el0_inv(regs, esr);
+	}
+}
+NOKPROBE_SYMBOL(el0_sync_handler);
+
+#ifdef CONFIG_COMPAT
+static void notrace el0_cp15(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_cp15instr(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_cp15);
+
+static void notrace el0_svc_compat(struct pt_regs *regs)
+{
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	el0_svc_compat_handler(regs);
+}
+NOKPROBE_SYMBOL(el0_svc_compat);
+
+asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	switch (ESR_ELx_EC(esr)) {
+	case ESR_ELx_EC_SVC32:
+		el0_svc_compat(regs);
+		break;
+	case ESR_ELx_EC_DABT_LOW:
+		el0_da(regs, esr);
+		break;
+	case ESR_ELx_EC_IABT_LOW:
+		el0_ia(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_ASIMD:
+		el0_fpsimd_acc(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_EXC32:
+		el0_fpsimd_exc(regs, esr);
+		break;
+	case ESR_ELx_EC_PC_ALIGN:
+		el0_pc(regs, esr);
+		break;
+	case ESR_ELx_EC_UNKNOWN:
+	case ESR_ELx_EC_CP14_MR:
+	case ESR_ELx_EC_CP14_LS:
+	case ESR_ELx_EC_CP14_64:
+		el0_undef(regs);
+		break;
+	case ESR_ELx_EC_CP15_32:
+	case ESR_ELx_EC_CP15_64:
+		el0_cp15(regs, esr);
+		break;
+	case ESR_ELx_EC_BREAKPT_LOW:
+	case ESR_ELx_EC_SOFTSTP_LOW:
+	case ESR_ELx_EC_WATCHPT_LOW:
+	case ESR_ELx_EC_BKPT32:
+		el0_dbg(regs, esr);
+		break;
+	default:
+		el0_inv(regs, esr);
+	}
+}
+NOKPROBE_SYMBOL(el0_sync_compat_handler);
+#endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5d7f42eb0e89..15822a0fe37f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -647,71 +647,18 @@ ENDPROC(el1_irq)
 	.align	6
 el0_sync:
 	kernel_entry 0
-	mrs	x25, esr_el1			// read the syndrome register
-	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
-	cmp	x24, #ESR_ELx_EC_SVC64		// SVC in 64-bit state
-	b.eq	el0_svc
-	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
-	b.eq	el0_da
-	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
-	b.eq	el0_ia
-	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
-	b.eq	el0_fpsimd_acc
-	cmp	x24, #ESR_ELx_EC_SVE		// SVE access
-	b.eq	el0_sve_acc
-	cmp	x24, #ESR_ELx_EC_FP_EXC64	// FP/ASIMD exception
-	b.eq	el0_fpsimd_exc
-	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
-	ccmp	x24, #ESR_ELx_EC_WFx, #4, ne
-	b.eq	el0_sys
-	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
-	b.eq	el0_sp
-	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el0_pc
-	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
-	b.ge	el0_dbg
-	b	el0_inv
+	mov	x0, sp
+	bl	el0_sync_handler
+	b	ret_to_user
 
 #ifdef CONFIG_COMPAT
 	.align	6
 el0_sync_compat:
 	kernel_entry 0, 32
-	mrs	x25, esr_el1			// read the syndrome register
-	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
-	cmp	x24, #ESR_ELx_EC_SVC32		// SVC in 32-bit state
-	b.eq	el0_svc_compat
-	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
-	b.eq	el0_da
-	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
-	b.eq	el0_ia
-	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
-	b.eq	el0_fpsimd_acc
-	cmp	x24, #ESR_ELx_EC_FP_EXC32	// FP/ASIMD exception
-	b.eq	el0_fpsimd_exc
-	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el0_pc
-	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_CP15_32	// CP15 MRC/MCR trap
-	b.eq	el0_cp15
-	cmp	x24, #ESR_ELx_EC_CP15_64	// CP15 MRRC/MCRR trap
-	b.eq	el0_cp15
-	cmp	x24, #ESR_ELx_EC_CP14_MR	// CP14 MRC/MCR trap
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_CP14_LS	// CP14 LDC/STC trap
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_CP14_64	// CP14 MRRC/MCRR trap
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
-	b.ge	el0_dbg
-	b	el0_inv
-el0_svc_compat:
-	gic_prio_kentry_setup tmp=x1
 	mov	x0, sp
-	bl	el0_svc_compat_handler
+	bl	el0_sync_compat_handler
 	b	ret_to_user
+ENDPROC(el0_sync)
 
 	.align	6
 el0_irq_compat:
@@ -721,139 +668,7 @@ el0_irq_compat:
 el0_error_compat:
 	kernel_entry 0, 32
 	b	el0_error_naked
-
-el0_cp15:
-	/*
-	 * Trapped CP15 (MRC, MCR, MRRC, MCRR) instructions
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_cp15instr
-	b	ret_to_user
-#endif
-
-el0_da:
-	/*
-	 * Data abort handling
-	 */
-	mrs	x26, far_el1
-	ct_user_exit_irqoff
-	enable_daif
-	clear_address_tag x0, x26
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_mem_abort
-	b	ret_to_user
-el0_ia:
-	/*
-	 * Instruction abort handling
-	 */
-	mrs	x26, far_el1
-	gic_prio_kentry_setup tmp=x0
-	ct_user_exit_irqoff
-	enable_da_f
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_off
-#endif
-	mov	x0, x26
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_el0_ia_bp_hardening
-	b	ret_to_user
-el0_fpsimd_acc:
-	/*
-	 * Floating Point or Advanced SIMD access
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_fpsimd_acc
-	b	ret_to_user
-el0_sve_acc:
-	/*
-	 * Scalable Vector Extension access
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_sve_acc
-	b	ret_to_user
-el0_fpsimd_exc:
-	/*
-	 * Floating Point, Advanced SIMD or SVE exception
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_fpsimd_exc
-	b	ret_to_user
-el0_sp:
-	ldr	x26, [sp, #S_SP]
-	b	el0_sp_pc
-el0_pc:
-	mrs	x26, far_el1
-el0_sp_pc:
-	/*
-	 * Stack or PC alignment exception handling
-	 */
-	gic_prio_kentry_setup tmp=x0
-	ct_user_exit_irqoff
-	enable_da_f
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_off
 #endif
-	mov	x0, x26
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_sp_pc_abort
-	b	ret_to_user
-el0_undef:
-	/*
-	 * Undefined instruction
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, sp
-	bl	do_undefinstr
-	b	ret_to_user
-el0_sys:
-	/*
-	 * System instructions, for trapped cache maintenance instructions
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_sysinstr
-	b	ret_to_user
-el0_dbg:
-	/*
-	 * Debug exception handling
-	 */
-	tbnz	x24, #0, el0_inv		// EL0 only
-	mrs	x24, far_el1
-	gic_prio_kentry_setup tmp=x3
-	ct_user_exit_irqoff
-	mov	x0, x24
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_debug_exception
-	enable_da_f
-	b	ret_to_user
-el0_inv:
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, sp
-	mov	x1, #BAD_SYNC
-	mov	x2, x25
-	bl	bad_el0_sync
-	b	ret_to_user
-ENDPROC(el0_sync)
 
 	.align	6
 el0_irq:
@@ -932,17 +747,6 @@ finish_ret_to_user:
 	kernel_exit 0
 ENDPROC(ret_to_user)
 
-/*
- * SVC handler.
- */
-	.align	6
-el0_svc:
-	gic_prio_kentry_setup tmp=x1
-	mov	x0, sp
-	bl	el0_svc_handler
-	b	ret_to_user
-ENDPROC(el0_svc)
-
 	.popsection				// .entry.text
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-- 
2.20.1



* [PATCH 7/8] arm64: Remove asmlinkage from updated functions
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
                   ` (5 preceding siblings ...)
  2019-10-03 17:16 ` [PATCH 6/8] arm64: entry: convert el0_sync " James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-04 12:58   ` Mark Rutland
  2019-10-03 17:16 ` [PATCH 8/8] arm64: entry-common: don't touch daif before bp-hardening James Morse
  7 siblings, 1 reply; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

Now that the callers of these functions have moved into C, they no longer
need the asmlinkage annotation. Remove it.
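
(For reference, asmlinkage carries no ABI meaning on arm64; paraphrasing
include/linux/linkage.h, it boils down to:)

  #ifdef __cplusplus
  #define CPP_ASMLINKAGE extern "C"
  #else
  #define CPP_ASMLINKAGE
  #endif

  #ifndef asmlinkage
  #define asmlinkage CPP_ASMLINKAGE
  #endif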

Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/exception.h | 36 +++++++++++++-----------------
 arch/arm64/kernel/fpsimd.c         |  6 ++---
 arch/arm64/kernel/syscall.c        |  4 ++--
 arch/arm64/kernel/traps.c          |  8 +++----
 arch/arm64/mm/fault.c              | 16 ++++++-------
 5 files changed, 32 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index e2f87b4ecbfc..c70061d6b253 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -32,26 +32,22 @@ static inline u32 disr_to_esr(u64 disr)
 }
 
 asmlinkage void enter_from_user_mode(void);
-asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
-			     struct pt_regs *regs);
-asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
-			       struct pt_regs *regs);
-asmlinkage void do_undefinstr(struct pt_regs *regs);
+void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+void do_undefinstr(struct pt_regs *regs);
 asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
-asmlinkage void do_debug_exception(unsigned long addr, unsigned int esr,
-				   struct pt_regs *regs);
-asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
-asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs);
-asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
-asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs);
-asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
-			       struct pt_regs *regs);
-asmlinkage void bad_el0_sync(struct pt_regs *regs, int reason,
-			     unsigned int esr);
-asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs);
-asmlinkage void el0_svc_handler(struct pt_regs *regs);
-asmlinkage void el0_svc_compat_handler(struct pt_regs *regs);
-asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
-				       struct pt_regs *regs);
+void do_debug_exception(unsigned long addr, unsigned int esr,
+			struct pt_regs *regs);
+void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
+void do_sve_acc(unsigned int esr, struct pt_regs *regs);
+void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
+void do_sysinstr(unsigned int esr, struct pt_regs *regs);
+void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
+void do_cp15instr(unsigned int esr, struct pt_regs *regs);
+void el0_svc_handler(struct pt_regs *regs);
+void el0_svc_compat_handler(struct pt_regs *regs);
+void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
+			    struct pt_regs *regs);
 
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 37d3912cfe06..3eb338f14386 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -920,7 +920,7 @@ void fpsimd_release_task(struct task_struct *dead_task)
  * would have disabled the SVE access trap for userspace during
  * ret_to_user, making an SVE access trap impossible in that case.
  */
-asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
+void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 {
 	/* Even if we chose not to use SVE, the hardware could still trap: */
 	if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
@@ -947,7 +947,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 /*
  * Trapped FP/ASIMD access.
  */
-asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
+void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
 {
 	/* TODO: implement lazy context saving/restoring */
 	WARN_ON(1);
@@ -956,7 +956,7 @@ asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
 /*
  * Raise a SIGFPE for the current process.
  */
-asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
+void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
 {
 	unsigned int si_code = FPE_FLTUNK;
 
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 871c739f060a..9a9d98a443fc 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -154,14 +154,14 @@ static inline void sve_user_discard(void)
 	sve_user_disable();
 }
 
-asmlinkage void el0_svc_handler(struct pt_regs *regs)
+void el0_svc_handler(struct pt_regs *regs)
 {
 	sve_user_discard();
 	el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table);
 }
 
 #ifdef CONFIG_COMPAT
-asmlinkage void el0_svc_compat_handler(struct pt_regs *regs)
+void el0_svc_compat_handler(struct pt_regs *regs)
 {
 	el0_svc_common(regs, regs->regs[7], __NR_compat_syscalls,
 		       compat_sys_call_table);
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index ba1a571a7774..54ebe24ef4b1 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -394,7 +394,7 @@ void arm64_notify_segfault(unsigned long addr)
 	force_signal_inject(SIGSEGV, code, addr);
 }
 
-asmlinkage void do_undefinstr(struct pt_regs *regs)
+void do_undefinstr(struct pt_regs *regs)
 {
 	/* check for AArch32 breakpoint instructions */
 	if (!aarch32_break_handler(regs))
@@ -669,7 +669,7 @@ static const struct sys64_hook cp15_64_hooks[] = {
 	{},
 };
 
-asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs)
+void do_cp15instr(unsigned int esr, struct pt_regs *regs)
 {
 	const struct sys64_hook *hook, *hook_base;
 
@@ -710,7 +710,7 @@ asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs)
 NOKPROBE_SYMBOL(do_cp15instr);
 #endif
 
-asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs)
+void do_sysinstr(unsigned int esr, struct pt_regs *regs)
 {
 	const struct sys64_hook *hook;
 
@@ -797,7 +797,7 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
  * bad_el0_sync handles unexpected, but potentially recoverable synchronous
  * exceptions taken from EL0. Unlike bad_mode, this returns.
  */
-asmlinkage void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
+void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
 {
 	void __user *pc = (void __user *)instruction_pointer(regs);
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index ba62098de920..0857c2fc38b9 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -724,8 +724,7 @@ static const struct fault_info fault_info[] = {
 	{ do_bad,		SIGKILL, SI_KERNEL,	"unknown 63"			},
 };
 
-asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
-			     struct pt_regs *regs)
+void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf = esr_to_fault_info(esr);
 
@@ -743,15 +742,15 @@ asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
 }
 NOKPROBE_SYMBOL(do_mem_abort);
 
-asmlinkage void do_el0_irq_bp_hardening(void)
+void do_el0_irq_bp_hardening(void)
 {
 	/* PC has already been checked in entry.S */
 	arm64_apply_bp_hardening();
 }
 NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
 
-asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
-				       struct pt_regs *regs)
+void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
+			    struct pt_regs *regs)
 {
 	/*
 	 * We've taken an instruction abort from userspace and not yet
@@ -766,8 +765,7 @@ asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
 }
 NOKPROBE_SYMBOL(do_el0_ia_bp_hardening);
 
-asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
-			       struct pt_regs *regs)
+void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
 	if (user_mode(regs)) {
 		if (!is_ttbr0_addr(instruction_pointer(regs)))
@@ -887,8 +885,8 @@ static int cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 #endif /* CONFIG_ARM64_ERRATUM_1463225 */
 NOKPROBE_SYMBOL(cortex_a76_erratum_1463225_debug_handler);
 
-asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
-				   unsigned int esr, struct pt_regs *regs)
+void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
+			struct pt_regs *regs)
 {
 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
 	unsigned long pc = instruction_pointer(regs);
-- 
2.20.1



* [PATCH 8/8] arm64: entry-common: don't touch daif before bp-hardening
  2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
                   ` (6 preceding siblings ...)
  2019-10-03 17:16 ` [PATCH 7/8] arm64: Remove asmlinkage from updated functions James Morse
@ 2019-10-03 17:16 ` James Morse
  2019-10-04 13:31   ` Mark Rutland
  7 siblings, 1 reply; 23+ messages in thread
From: James Morse @ 2019-10-03 17:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
	Will Deacon, Julien Thierry

The previous patches mechanically transformed the assembly version of
entry.S to entry-common.c for synchronous exceptions.

The C version of local_daif_restore() doesn't quite do the same thing
as the assembly versions if pseudo-NMI is in use. In particular,
| local_daif_restore(DAIF_PROCCTX_NOIRQ)
will still allow pNMI to be delivered. This is not the behaviour that
do_el0_ia_bp_hardening() and do_sp_pc_abort() want, as it should not
be possible for the PMU handler to run as an NMI until the bp-hardening
sequence has run.

The bp-hardening calls were placed where they are because this was the
first C code to run after the relevant exceptions. As we've now moved
that point earlier, move the checks and calls earlier too.

This makes it clearer that this stuff runs before any kind of exception,
and saves modifying PSTATE twice.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
---
 arch/arm64/include/asm/processor.h |  7 +++++++
 arch/arm64/kernel/entry-common.c   | 18 +++++++++++++++---
 arch/arm64/mm/fault.c              | 29 +----------------------------
 3 files changed, 23 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5623685c7d13..c0c28c4589a8 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -24,6 +24,7 @@
 #include <linux/build_bug.h>
 #include <linux/cache.h>
 #include <linux/init.h>
+#include <linux/thread_info.h>
 #include <linux/stddef.h>
 #include <linux/string.h>
 
@@ -214,6 +215,12 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc,
 	regs->sp = sp;
 }
 
+static inline bool is_ttbr0_addr(unsigned long addr)
+{
+	/* entry assembly clears tags for TTBR0 addrs */
+	return addr < TASK_SIZE;
+}
+
 #ifdef CONFIG_COMPAT
 static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
 				       unsigned long sp)
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 176969e55677..eb73d250a081 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -14,6 +14,7 @@
 #include <asm/esr.h>
 #include <asm/exception.h>
 #include <asm/kprobes.h>
+#include <asm/mmu.h>
 #include <asm/sysreg.h>
 
 static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
@@ -112,9 +113,17 @@ static void notrace el0_ia(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
 
+	/*
+	 * We've taken an instruction abort from userspace and not yet
+	 * re-enabled IRQs. If the address is a kernel address, apply
+	 * BP hardening prior to enabling IRQs and pre-emption.
+	 */
+	if (!is_ttbr0_addr(far))
+		arm64_apply_bp_hardening();
+
 	user_exit_irqoff();
-	local_daif_restore(DAIF_PROCCTX_NOIRQ);
-	do_el0_ia_bp_hardening(far, esr, regs);
+	local_daif_restore(DAIF_PROCCTX);
+	do_mem_abort(far, esr, regs);
 }
 NOKPROBE_SYMBOL(el0_ia);
 
@@ -154,8 +163,11 @@ static void notrace el0_pc(struct pt_regs *regs, unsigned long esr)
 {
 	unsigned long far = read_sysreg(far_el1);
 
+	if (!is_ttbr0_addr(instruction_pointer(regs)))
+		arm64_apply_bp_hardening();
+
 	user_exit_irqoff();
-	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	local_daif_restore(DAIF_PROCCTX);
 	do_sp_pc_abort(far, esr, regs);
 }
 NOKPROBE_SYMBOL(el0_pc);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 0857c2fc38b9..88e4bd4bc103 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -34,6 +34,7 @@
 #include <asm/esr.h>
 #include <asm/kasan.h>
 #include <asm/kprobes.h>
+#include <asm/processor.h>
 #include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/pgtable.h>
@@ -102,12 +103,6 @@ static void mem_abort_decode(unsigned int esr)
 		data_abort_decode(esr);
 }
 
-static inline bool is_ttbr0_addr(unsigned long addr)
-{
-	/* entry assembly clears tags for TTBR0 addrs */
-	return addr < TASK_SIZE;
-}
-
 static inline bool is_ttbr1_addr(unsigned long addr)
 {
 	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
@@ -749,30 +744,8 @@ void do_el0_irq_bp_hardening(void)
 }
 NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
 
-void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
-			    struct pt_regs *regs)
-{
-	/*
-	 * We've taken an instruction abort from userspace and not yet
-	 * re-enabled IRQs. If the address is a kernel address, apply
-	 * BP hardening prior to enabling IRQs and pre-emption.
-	 */
-	if (!is_ttbr0_addr(addr))
-		arm64_apply_bp_hardening();
-
-	local_daif_restore(DAIF_PROCCTX);
-	do_mem_abort(addr, esr, regs);
-}
-NOKPROBE_SYMBOL(do_el0_ia_bp_hardening);
-
 void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
-	if (user_mode(regs)) {
-		if (!is_ttbr0_addr(instruction_pointer(regs)))
-			arm64_apply_bp_hardening();
-		local_daif_restore(DAIF_PROCCTX);
-	}
-
 	arm64_notify_die("SP/PC alignment exception", regs,
 			 SIGBUS, BUS_ADRALN, (void __user *)addr, esr);
 }
-- 
2.20.1



* Re: [PATCH 2/8] arm64: remove __exception annotations
  2019-10-03 17:16 ` [PATCH 2/8] arm64: remove __exception annotations James Morse
@ 2019-10-04 10:17   ` Mark Rutland
  2019-10-04 14:10     ` Masami Hiramatsu
  2019-10-04 13:03   ` Marc Gonzalez
  1 sibling, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 10:17 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Will Deacon, Julien Thierry, linux-arm-kernel,
	Masami Hiramatsu

On Thu, Oct 03, 2019 at 06:16:36PM +0100, James Morse wrote:
> Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
> stack frame") arm64 has has not used the __exception annotation to dump
> the pt_regs during stack tracing. in_exception_text() has no callers.
> 
> This annotation is only used to blacklist kprobes, it means the same as
> __kprobes.
> 
> Section annotations like this require the functions to be grouped
> together between the start/end markers, and placed according to
> the linker script. For kprobes we also have NOKPROBE_SYMBOL() which
> logs the symbol address in a section that kprobes parses and
> blacklists at boot.
> 
> Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of
> blacklisted symbols, and saves us from having an arm64 specific
> spelling of __kprobes.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> 
> ---
> (__exception_irq_entry means no-kprobes and optionally in a section
>  ftrace can use to pretty-print interrupt handler boundaries.)
> ---
>  arch/arm64/include/asm/exception.h |  4 ++--
>  arch/arm64/include/asm/traps.h     | 10 ---------
>  arch/arm64/kernel/probes/kprobes.c |  4 ----
>  arch/arm64/kernel/traps.c          | 10 ++++++---
>  arch/arm64/kernel/vmlinux.lds.S    |  3 ---
>  arch/arm64/mm/fault.c              | 34 +++++++++++++++---------------
>  6 files changed, 26 insertions(+), 39 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index a17393ff6677..b0b3ba56e919 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h

[...]

> -asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
> -					       unsigned int esr,
> -					       struct pt_regs *regs)
> +asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
> +				   unsigned int esr, struct pt_regs *regs)
>  {
>  	const struct fault_info *inf = esr_to_debug_fault_info(esr);
>  	unsigned long pc = instruction_pointer(regs);

I assume you meant to add NOKPROBE_SYMBOL(do_debug_exception) here.

Assuming so, and with that fixed up:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.


* Re: [PATCH 3/8] arm64: Add prototypes for functions called by entry.S
  2019-10-03 17:16 ` [PATCH 3/8] arm64: Add prototypes for functions called by entry.S James Morse
@ 2019-10-04 10:22   ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 10:22 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Will Deacon, Julien Thierry, linux-arm-kernel,
	Masami Hiramatsu

On Thu, Oct 03, 2019 at 06:16:37PM +0100, James Morse wrote:
> Functions that are only called by assembly don't always have a
> C header file prototype.
> 
> Add the prototypes before moving the assembly callers to C.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
>  arch/arm64/include/asm/exception.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index b0b3ba56e919..8bb3fe2d71a8 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -31,5 +31,26 @@ static inline u32 disr_to_esr(u64 disr)
>  }
>  
>  asmlinkage void enter_from_user_mode(void);
> +asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
> +			     struct pt_regs *regs);
> +asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
> +			       struct pt_regs *regs);
> +asmlinkage void do_undefinstr(struct pt_regs *regs);
> +asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
> +asmlinkage void do_debug_exception(unsigned long addr, unsigned int esr,
> +				   struct pt_regs *regs);

Trivial: potentially s/addr/addr_if_watchpoint/, since that got renamed
recently.
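
i.e. just the rename applied to the prototype quoted above:

| asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
| 				   unsigned int esr, struct pt_regs *regs);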

> +asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
> +asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs);
> +asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
> +asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs);
> +asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
> +			       struct pt_regs *regs);
> +asmlinkage void bad_el0_sync(struct pt_regs *regs, int reason,
> +			     unsigned int esr);
> +asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs);
> +asmlinkage void el0_svc_handler(struct pt_regs *regs);
> +asmlinkage void el0_svc_compat_handler(struct pt_regs *regs);
> +asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
> +				       struct pt_regs *regs);

Otherwise those all look right, so either way:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

>  
>  #endif	/* __ASM_EXCEPTION_H */
> -- 
> 2.20.1
> 


* Re: [PATCH 5/8] arm64: entry: convert el1_sync to C
  2019-10-03 17:16 ` [PATCH 5/8] arm64: entry: convert el1_sync to C James Morse
@ 2019-10-04 10:39   ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 10:39 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Will Deacon, Julien Thierry, linux-arm-kernel,
	Masami Hiramatsu

On Thu, Oct 03, 2019 at 06:16:39PM +0100, James Morse wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> This patch converts the EL1 sync entry assembly logic to C code.
> 
> Doing this will allow us to make changes in a slightly more
> readable way. A case in point is supporting kernel-first RAS.
> do_sea() should be called on the CPU that took the fault.
> 
> Largely the assembly code is converted to C in a relatively
> straightforward manner.
> 
> Since all sync sites share a common asm entry point, the ASM_BUG()
> instances are no longer required for effective backtraces back to
> assembly, and we don't need similar BUG() entries.
> 
> The ESR_ELx.EC codes for all (supported) debug exceptions are now
> checked in the el1_sync_handler's switch statement, which renders the
> check in el1_dbg redundant. This both simplifies the el1_dbg handler,
> and makes the EL1 exception handling more robust to
> currently-unallocated ESR_ELx.EC encodings.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [split out of a bigger series, added nokprobes, moved prototypes]
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> ---
>  arch/arm64/include/asm/exception.h |  1 +
>  arch/arm64/kernel/Makefile         |  6 +-
>  arch/arm64/kernel/entry-common.c   | 98 ++++++++++++++++++++++++++++++
>  arch/arm64/kernel/entry.S          | 69 +--------------------
>  4 files changed, 103 insertions(+), 71 deletions(-)
>  create mode 100644 arch/arm64/kernel/entry-common.c
> 
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index 8bb3fe2d71a8..e2f87b4ecbfc 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -9,6 +9,7 @@
>  
>  #include <asm/esr.h>
>  #include <asm/kprobes.h>
> +#include <asm/ptrace.h>

I think this should have been in patch 3. IIUC it's needed for the type
of struct pt_regs in the function prototypes.

>  
>  #include <linux/interrupt.h>
>  
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index 478491f07b4f..fc6488660f64 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -13,9 +13,9 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
>  
>  # Object file lists.
>  obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o		\
> -			   entry-fpsimd.o process.o ptrace.o setup.o signal.o	\
> -			   sys.o stacktrace.o time.o traps.o io.o vdso.o	\
> -			   hyp-stub.o psci.o cpu_ops.o insn.o	\
> +			   entry-common.o entry-fpsimd.o process.o ptrace.o	\
> +			   setup.o signal.o sys.o stacktrace.o time.o traps.o	\
> +			   io.o vdso.o hyp-stub.o psci.o cpu_ops.o insn.o	\
>  			   return_address.o cpuinfo.o cpu_errata.o		\
>  			   cpufeature.o alternative.o cacheinfo.o		\
>  			   smp.o smp_spin_table.o topology.o smccc-call.o	\
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> new file mode 100644
> index 000000000000..e726d1f4b9e9
> --- /dev/null
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -0,0 +1,98 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Exception handling code
> + *
> + * Copyright (C) 2019 ARM Ltd.
> + */
> +
> +#include <linux/context_tracking.h>
> +#include <linux/ptrace.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/cpufeature.h>
> +#include <asm/daifflags.h>
> +#include <asm/esr.h>
> +#include <asm/exception.h>
> +#include <asm/kprobes.h>
> +#include <asm/sysreg.h>
> +
> +static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
> +{
> +	unsigned long far = read_sysreg(far_el1);
> +
> +	local_daif_inherit(regs);
> +	far = untagged_addr(far);
> +	do_mem_abort(far, esr, regs);
> +}
> +NOKPROBE_SYMBOL(el1_abort);
> +
> +static void notrace el1_pc(struct pt_regs *regs, unsigned long esr)
> +{
> +	unsigned long far = read_sysreg(far_el1);
> +
> +	local_daif_inherit(regs);
> +	do_sp_pc_abort(far, esr, regs);
> +}
> +NOKPROBE_SYMBOL(el1_pc);
> +
> +static void el1_undef(struct pt_regs *regs)
> +{
> +	local_daif_inherit(regs);
> +	do_undefinstr(regs);
> +}
> +NOKPROBE_SYMBOL(el1_undef);
> +
> +static void el1_inv(struct pt_regs *regs, unsigned long esr)
> +{
> +	local_daif_inherit(regs);
> +	bad_mode(regs, 0, esr);
> +}
> +NOKPROBE_SYMBOL(el1_inv);
> +
> +static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
> +{
> +	unsigned long far = read_sysreg(far_el1);
> +
> +	/*
> +	 * The CPU masked interrupts, and we are leaving them masked during
> +	 * do_debug_exception(). Update PMR as if we had called
> +	 * local_mask_daif().
> +	 */
> +	if (system_uses_irq_prio_masking())
> +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
> +
> +	do_debug_exception(far, esr, regs);
> +}
> +NOKPROBE_SYMBOL(el1_dbg);
> +
> +asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
> +{
> +	unsigned long esr = read_sysreg(esr_el1);
> +
> +	switch (ESR_ELx_EC(esr)) {
> +	case ESR_ELx_EC_DABT_CUR:
> +	case ESR_ELx_EC_IABT_CUR:
> +		el1_abort(regs, esr);
> +		break;
> +	/*
> +	 * We don't handle ESR_ELx_EC_SP_ALIGN, since we will have hit a
> +	 * recursive exception when trying to push the initial pt_regs.
> +	 */
> +	case ESR_ELx_EC_PC_ALIGN:
> +		el1_pc(regs, esr);
> +		break;
> +	case ESR_ELx_EC_SYS64:
> +	case ESR_ELx_EC_UNKNOWN:
> +		el1_undef(regs);
> +		break;
> +	case ESR_ELx_EC_BREAKPT_CUR:
> +	case ESR_ELx_EC_SOFTSTP_CUR:
> +	case ESR_ELx_EC_WATCHPT_CUR:
> +	case ESR_ELx_EC_BRK64:
> +		el1_dbg(regs, esr);
> +		break;
> +	default:
> +		el1_inv(regs, esr);
> +	};
> +}
> +NOKPROBE_SYMBOL(el1_sync_handler);
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index e304fe04b098..5d7f42eb0e89 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -578,76 +578,9 @@ ENDPROC(el1_error_invalid)
>  	.align	6
>  el1_sync:
>  	kernel_entry 1
> -	mrs	x1, esr_el1			// read the syndrome register
> -	lsr	x24, x1, #ESR_ELx_EC_SHIFT	// exception class
> -	cmp	x24, #ESR_ELx_EC_DABT_CUR	// data abort in EL1
> -	b.eq	el1_da
> -	cmp	x24, #ESR_ELx_EC_IABT_CUR	// instruction abort in EL1
> -	b.eq	el1_ia
> -	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
> -	b.eq	el1_undef
> -	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
> -	b.eq	el1_pc
> -	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL1
> -	b.eq	el1_undef
> -	cmp	x24, #ESR_ELx_EC_BREAKPT_CUR	// debug exception in EL1
> -	b.ge	el1_dbg
> -	b	el1_inv
> -
> -el1_ia:
> -	/*
> -	 * Fall through to the Data abort case
> -	 */
> -el1_da:
> -	/*
> -	 * Data abort handling
> -	 */
> -	mrs	x3, far_el1
> -	inherit_daif	pstate=x23, tmp=x2
> -	clear_address_tag x0, x3
> -	mov	x2, sp				// struct pt_regs
> -	bl	do_mem_abort
> -
> -	kernel_exit 1
> -el1_pc:
> -	/*
> -	 * PC alignment exception handling. We don't handle SP alignment faults,
> -	 * since we will have hit a recursive exception when trying to push the
> -	 * initial pt_regs.
> -	 */
> -	mrs	x0, far_el1
> -	inherit_daif	pstate=x23, tmp=x2
> -	mov	x2, sp
> -	bl	do_sp_pc_abort
> -	ASM_BUG()
> -el1_undef:
> -	/*
> -	 * Undefined instruction
> -	 */
> -	inherit_daif	pstate=x23, tmp=x2
>  	mov	x0, sp
> -	bl	do_undefinstr
> -	kernel_exit 1
> -el1_dbg:
> -	/*
> -	 * Debug exception handling
> -	 */
> -	cmp	x24, #ESR_ELx_EC_BRK64		// if BRK64
> -	cinc	x24, x24, eq			// set bit '0'
> -	tbz	x24, #0, el1_inv		// EL1 only
> -	gic_prio_kentry_setup tmp=x3
> -	mrs	x0, far_el1
> -	mov	x2, sp				// struct pt_regs
> -	bl	do_debug_exception
> +	bl	el1_sync_handler
>  	kernel_exit 1

I've just compared the C and entry.S changes to the v5.4-rc1 entry.S
assembly, and I believe this is correct. It took me a while to spot that
we kept the MOV X0, SP from el1_undef!
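
For anyone following along, el1_sync itself after this patch reduces to
(reconstructed from the diff, nothing beyond it):

| el1_sync:
| 	kernel_entry 1
| 	mov	x0, sp				// struct pt_regs for el1_sync_handler
| 	bl	el1_sync_handler
| 	kernel_exit 1
| ENDPROC(el1_sync)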

Thanks for cleaning this up!

Feel free to add:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

... though it feels funny to review my own patch. ;)

Mark.

> -el1_inv:
> -	// TODO: add support for undefined instructions in kernel mode
> -	inherit_daif	pstate=x23, tmp=x2
> -	mov	x0, sp
> -	mov	x2, x1
> -	mov	x1, #BAD_SYNC
> -	bl	bad_mode
> -	ASM_BUG()
>  ENDPROC(el1_sync)
>  
>  	.align	6
> -- 
> 2.20.1
> 


* Re: [PATCH 6/8] arm64: entry: convert el0_sync to C
  2019-10-03 17:16 ` [PATCH 6/8] arm64: entry: convert el0_sync " James Morse
@ 2019-10-04 12:57   ` Mark Rutland
  2019-10-04 16:09     ` James Morse
  0 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 12:57 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Will Deacon, Julien Thierry, linux-arm-kernel,
	Masami Hiramatsu

On Thu, Oct 03, 2019 at 06:16:40PM +0100, James Morse wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> This is largely a 1-1 conversion of asm to C, with a couple of caveats.
> 
> The el0_sync{_compat} switches explicitly handle all the EL0 debug
> cases, so el0_dbg doesn't have to try to bail out for unexpected EL1
> debug ESR values. This also means that an unexpected vector catch from
> AArch32 is routed to el0_inv.
> 
> We *could* merge the native and compat switches, which would make the
> diffstat negative, but I've tried to stay as close to the existing
> assembly as possible for the moment.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [split out of a bigger series, added nokprobes. removed irq trace
>  calls as the C helpers do this. renamed el0_dbg's use of FAR]
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> ---
>  arch/arm64/include/asm/asm-uaccess.h |  10 --
>  arch/arm64/kernel/entry-common.c     | 221 +++++++++++++++++++++++++++
>  arch/arm64/kernel/entry.S            | 206 +------------------------
>  3 files changed, 226 insertions(+), 211 deletions(-)

[...]

> +static void notrace el0_dbg(struct pt_regs *regs, unsigned long esr)
> +{
> +	unsigned long addr_if_watchpoint = read_sysreg(far_el1);
> +
> +	if (system_uses_irq_prio_masking())
> +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
> +
> +	user_exit_irqoff();
> +	do_debug_exception(addr_if_watchpoint, esr, regs);
> +	local_daif_restore(DAIF_PROCCTX_NOIRQ);
> +}
> +NOKPROBE_SYMBOL(el0_dbg);

I think that it'd be best to stick with 'far' here, and only have the
'addr_if_watchpoint' name within do_debug_exception(), where it used to
be 'addr'. That way all of this code consistently uses 'far'.
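
i.e. the hunk above with only the local variable renamed, nothing else:

| static void notrace el0_dbg(struct pt_regs *regs, unsigned long esr)
| {
| 	unsigned long far = read_sysreg(far_el1);
|
| 	if (system_uses_irq_prio_masking())
| 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
|
| 	user_exit_irqoff();
| 	do_debug_exception(far, esr, regs);
| 	local_daif_restore(DAIF_PROCCTX_NOIRQ);
| }
| NOKPROBE_SYMBOL(el0_dbg);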

Otherwise this all looks good to me:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks for all the rework and cleanup!

Mark.


* Re: [PATCH 7/8] arm64: Remove asmlinkage from updated functions
  2019-10-03 17:16 ` [PATCH 7/8] arm64: Remove asmlinkage from updated functions James Morse
@ 2019-10-04 12:58   ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 12:58 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Will Deacon, Julien Thierry, linux-arm-kernel,
	Masami Hiramatsu

On Thu, Oct 03, 2019 at 06:16:41PM +0100, James Morse wrote:
> Now that the callers of these functions have moved into C, they no longer
> need the asmlinkage annotation. Remove it.
> 
> Signed-off-by: James Morse <james.morse@arm.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/exception.h | 36 +++++++++++++-----------------
>  arch/arm64/kernel/fpsimd.c         |  6 ++---
>  arch/arm64/kernel/syscall.c        |  4 ++--
>  arch/arm64/kernel/traps.c          |  8 +++----
>  arch/arm64/mm/fault.c              | 16 ++++++-------
>  5 files changed, 32 insertions(+), 38 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index e2f87b4ecbfc..c70061d6b253 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -32,26 +32,22 @@ static inline u32 disr_to_esr(u64 disr)
>  }
>  
>  asmlinkage void enter_from_user_mode(void);
> -asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
> -			     struct pt_regs *regs);
> -asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
> -			       struct pt_regs *regs);
> -asmlinkage void do_undefinstr(struct pt_regs *regs);
> +void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> +void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> +void do_undefinstr(struct pt_regs *regs);
>  asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
> -asmlinkage void do_debug_exception(unsigned long addr, unsigned int esr,
> -				   struct pt_regs *regs);
> -asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
> -asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs);
> -asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
> -asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs);
> -asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
> -			       struct pt_regs *regs);
> -asmlinkage void bad_el0_sync(struct pt_regs *regs, int reason,
> -			     unsigned int esr);
> -asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs);
> -asmlinkage void el0_svc_handler(struct pt_regs *regs);
> -asmlinkage void el0_svc_compat_handler(struct pt_regs *regs);
> -asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
> -				       struct pt_regs *regs);
> +void do_debug_exception(unsigned long addr, unsigned int esr,
> +			struct pt_regs *regs);
> +void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
> +void do_sve_acc(unsigned int esr, struct pt_regs *regs);
> +void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
> +void do_sysinstr(unsigned int esr, struct pt_regs *regs);
> +void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
> +void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
> +void do_cp15instr(unsigned int esr, struct pt_regs *regs);
> +void el0_svc_handler(struct pt_regs *regs);
> +void el0_svc_compat_handler(struct pt_regs *regs);
> +void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
> +			    struct pt_regs *regs);
>  
>  #endif	/* __ASM_EXCEPTION_H */
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 37d3912cfe06..3eb338f14386 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -920,7 +920,7 @@ void fpsimd_release_task(struct task_struct *dead_task)
>   * would have disabled the SVE access trap for userspace during
>   * ret_to_user, making an SVE access trap impossible in that case.
>   */
> -asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
> +void do_sve_acc(unsigned int esr, struct pt_regs *regs)
>  {
>  	/* Even if we chose not to use SVE, the hardware could still trap: */
>  	if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
> @@ -947,7 +947,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
>  /*
>   * Trapped FP/ASIMD access.
>   */
> -asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
> +void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
>  {
>  	/* TODO: implement lazy context saving/restoring */
>  	WARN_ON(1);
> @@ -956,7 +956,7 @@ asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
>  /*
>   * Raise a SIGFPE for the current process.
>   */
> -asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
> +void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
>  {
>  	unsigned int si_code = FPE_FLTUNK;
>  
> diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> index 871c739f060a..9a9d98a443fc 100644
> --- a/arch/arm64/kernel/syscall.c
> +++ b/arch/arm64/kernel/syscall.c
> @@ -154,14 +154,14 @@ static inline void sve_user_discard(void)
>  	sve_user_disable();
>  }
>  
> -asmlinkage void el0_svc_handler(struct pt_regs *regs)
> +void el0_svc_handler(struct pt_regs *regs)
>  {
>  	sve_user_discard();
>  	el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table);
>  }
>  
>  #ifdef CONFIG_COMPAT
> -asmlinkage void el0_svc_compat_handler(struct pt_regs *regs)
> +void el0_svc_compat_handler(struct pt_regs *regs)
>  {
>  	el0_svc_common(regs, regs->regs[7], __NR_compat_syscalls,
>  		       compat_sys_call_table);
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index ba1a571a7774..54ebe24ef4b1 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -394,7 +394,7 @@ void arm64_notify_segfault(unsigned long addr)
>  	force_signal_inject(SIGSEGV, code, addr);
>  }
>  
> -asmlinkage void do_undefinstr(struct pt_regs *regs)
> +void do_undefinstr(struct pt_regs *regs)
>  {
>  	/* check for AArch32 breakpoint instructions */
>  	if (!aarch32_break_handler(regs))
> @@ -669,7 +669,7 @@ static const struct sys64_hook cp15_64_hooks[] = {
>  	{},
>  };
>  
> -asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs)
> +void do_cp15instr(unsigned int esr, struct pt_regs *regs)
>  {
>  	const struct sys64_hook *hook, *hook_base;
>  
> @@ -710,7 +710,7 @@ asmlinkage void do_cp15instr(unsigned int esr, struct pt_regs *regs)
>  NOKPROBE_SYMBOL(do_cp15instr);
>  #endif
>  
> -asmlinkage void do_sysinstr(unsigned int esr, struct pt_regs *regs)
> +void do_sysinstr(unsigned int esr, struct pt_regs *regs)
>  {
>  	const struct sys64_hook *hook;
>  
> @@ -797,7 +797,7 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
>   * bad_el0_sync handles unexpected, but potentially recoverable synchronous
>   * exceptions taken from EL0. Unlike bad_mode, this returns.
>   */
> -asmlinkage void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
> +void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
>  {
>  	void __user *pc = (void __user *)instruction_pointer(regs);
>  
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index ba62098de920..0857c2fc38b9 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -724,8 +724,7 @@ static const struct fault_info fault_info[] = {
>  	{ do_bad,		SIGKILL, SI_KERNEL,	"unknown 63"			},
>  };
>  
> -asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
> -			     struct pt_regs *regs)
> +void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
>  {
>  	const struct fault_info *inf = esr_to_fault_info(esr);
>  
> @@ -743,15 +742,15 @@ asmlinkage void do_mem_abort(unsigned long addr, unsigned int esr,
>  }
>  NOKPROBE_SYMBOL(do_mem_abort);
>  
> -asmlinkage void do_el0_irq_bp_hardening(void)
> +void do_el0_irq_bp_hardening(void)
>  {
>  	/* PC has already been checked in entry.S */
>  	arm64_apply_bp_hardening();
>  }
>  NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
>  
> -asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
> -				       struct pt_regs *regs)
> +void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
> +			    struct pt_regs *regs)
>  {
>  	/*
>  	 * We've taken an instruction abort from userspace and not yet
> @@ -766,8 +765,7 @@ asmlinkage void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
>  }
>  NOKPROBE_SYMBOL(do_el0_ia_bp_hardening);
>  
> -asmlinkage void do_sp_pc_abort(unsigned long addr, unsigned int esr,
> -			       struct pt_regs *regs)
> +void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
>  {
>  	if (user_mode(regs)) {
>  		if (!is_ttbr0_addr(instruction_pointer(regs)))
> @@ -887,8 +885,8 @@ static int cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
>  #endif /* CONFIG_ARM64_ERRATUM_1463225 */
>  NOKPROBE_SYMBOL(cortex_a76_erratum_1463225_debug_handler);
>  
> -asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
> -				   unsigned int esr, struct pt_regs *regs)
> +void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
> +			struct pt_regs *regs)
>  {
>  	const struct fault_info *inf = esr_to_debug_fault_info(esr);
>  	unsigned long pc = instruction_pointer(regs);
> -- 
> 2.20.1
> 


* Re: [PATCH 2/8] arm64: remove __exception annotations
  2019-10-03 17:16 ` [PATCH 2/8] arm64: remove __exception annotations James Morse
  2019-10-04 10:17   ` Mark Rutland
@ 2019-10-04 13:03   ` Marc Gonzalez
  2019-10-04 16:08     ` James Morse
  1 sibling, 1 reply; 23+ messages in thread
From: Marc Gonzalez @ 2019-10-04 13:03 UTC (permalink / raw)
  To: James Morse; +Cc: Linux ARM

On 03/10/2019 19:16, James Morse wrote:

> Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
> stack frame") arm64 has has not used the __exception annotation to dump

s/has has not/has not/  ?


* Re: [PATCH 8/8] arm64: entry-common: don't touch daif before bp-hardening
  2019-10-03 17:16 ` [PATCH 8/8] arm64: entry-common: don't touch daif before bp-hardening James Morse
@ 2019-10-04 13:31   ` Mark Rutland
  2019-10-04 16:09     ` James Morse
  0 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 13:31 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Will Deacon, Julien Thierry, linux-arm-kernel,
	Masami Hiramatsu

On Thu, Oct 03, 2019 at 06:16:42PM +0100, James Morse wrote:
> The previous patches mechanically transformed the assembly version of
> entry.S to entry-common.c for synchronous exceptions.
> 
> The C version of local_daif_restore() doesn't quite do the same thing
> as the assembly versions if pseudo-NMI is in use. In particular,
> | local_daif_restore(DAIF_PROCCTX_NOIRQ)
> will still allow pNMI to be delivered. This is not the behaviour
> do_el0_ia_bp_hardening() and do_sp_pc_abort() want as it should not
> be possible for the PMU handler to run as an NMI until the bp-hardening
> sequence has run.
> 
> The bp-hardening calls were placed where they are because this was the
> first C code to run after the relevant exceptions. As we've now moved
> that point earlier, move the checks and calls earlier too.
> 
> This makes it clearer that this stuff runs before any kind of exception,
> and saves modifying PSTATE twice.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> ---
>  arch/arm64/include/asm/processor.h |  7 +++++++
>  arch/arm64/kernel/entry-common.c   | 18 +++++++++++++++---
>  arch/arm64/mm/fault.c              | 29 +----------------------------
>  3 files changed, 23 insertions(+), 31 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 5623685c7d13..c0c28c4589a8 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -24,6 +24,7 @@
>  #include <linux/build_bug.h>
>  #include <linux/cache.h>
>  #include <linux/init.h>
> +#include <linux/thread_info.h>
>  #include <linux/stddef.h>
>  #include <linux/string.h>

Nit: alphabetical order please!

>  
> @@ -214,6 +215,12 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc,
>  	regs->sp = sp;
>  }
>  
> +static inline bool is_ttbr0_addr(unsigned long addr)
> +{
> +	/* entry assembly clears tags for TTBR0 addrs */
> +	return addr < TASK_SIZE;
> +}

Could we move is_ttbr1_addr() here too?

I guess there might be include ordering issues, but if not it would be
nice if they lived in the same place.
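
i.e. something like this (a sketch; the is_ttbr1_addr() body is quoted from
memory, so worth checking against fault.c):

| static inline bool is_ttbr0_addr(unsigned long addr)
| {
| 	/* entry assembly clears tags for TTBR0 addrs */
| 	return addr < TASK_SIZE;
| }
|
| static inline bool is_ttbr1_addr(unsigned long addr)
| {
| 	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
| 	return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
| }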

> +
>  #ifdef CONFIG_COMPAT
>  static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
>  				       unsigned long sp)
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 176969e55677..eb73d250a081 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -14,6 +14,7 @@
>  #include <asm/esr.h>
>  #include <asm/exception.h>
>  #include <asm/kprobes.h>
> +#include <asm/mmu.h>
>  #include <asm/sysreg.h>
>  
>  static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
> @@ -112,9 +113,17 @@ static void notrace el0_ia(struct pt_regs *regs, unsigned long esr)
>  {
>  	unsigned long far = read_sysreg(far_el1);
>  
> +	/*
> +	 * We've taken an instruction abort from userspace and not yet
> +	 * re-enabled IRQs. If the address is a kernel address, apply
> +	 * BP hardening prior to enabling IRQs and pre-emption.
> +	 */
> +	if (!is_ttbr0_addr(far))
> +		arm64_apply_bp_hardening();
> +
>  	user_exit_irqoff();
> -	local_daif_restore(DAIF_PROCCTX_NOIRQ);
> -	do_el0_ia_bp_hardening(far, esr, regs);
> +	local_daif_restore(DAIF_PROCCTX);
> +	do_mem_abort(far, esr, regs);
>  }
>  NOKPROBE_SYMBOL(el0_ia);
>  
> @@ -154,8 +163,11 @@ static void notrace el0_pc(struct pt_regs *regs, unsigned long esr)
>  {
>  	unsigned long far = read_sysreg(far_el1);
>  
> +	if (!is_ttbr0_addr(instruction_pointer(regs)))
> +		arm64_apply_bp_hardening();
> +
>  	user_exit_irqoff();
> -	local_daif_restore(DAIF_PROCCTX_NOIRQ);
> +	local_daif_restore(DAIF_PROCCTX);
>  	do_sp_pc_abort(far, esr, regs);
>  }
>  NOKPROBE_SYMBOL(el0_pc);

This is much nicer, and AFAICT is correct, so:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 0857c2fc38b9..88e4bd4bc103 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -34,6 +34,7 @@
>  #include <asm/esr.h>
>  #include <asm/kasan.h>
>  #include <asm/kprobes.h>
> +#include <asm/processor.h>
>  #include <asm/sysreg.h>
>  #include <asm/system_misc.h>
>  #include <asm/pgtable.h>
> @@ -102,12 +103,6 @@ static void mem_abort_decode(unsigned int esr)
>  		data_abort_decode(esr);
>  }
>  
> -static inline bool is_ttbr0_addr(unsigned long addr)
> -{
> -	/* entry assembly clears tags for TTBR0 addrs */
> -	return addr < TASK_SIZE;
> -}
> -
>  static inline bool is_ttbr1_addr(unsigned long addr)
>  {
>  	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
> @@ -749,30 +744,8 @@ void do_el0_irq_bp_hardening(void)
>  }
>  NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
>  
> -void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
> -			    struct pt_regs *regs)
> -{
> -	/*
> -	 * We've taken an instruction abort from userspace and not yet
> -	 * re-enabled IRQs. If the address is a kernel address, apply
> -	 * BP hardening prior to enabling IRQs and pre-emption.
> -	 */
> -	if (!is_ttbr0_addr(addr))
> -		arm64_apply_bp_hardening();
> -
> -	local_daif_restore(DAIF_PROCCTX);
> -	do_mem_abort(addr, esr, regs);
> -}
> -NOKPROBE_SYMBOL(do_el0_ia_bp_hardening);
> -
>  void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
>  {
> -	if (user_mode(regs)) {
> -		if (!is_ttbr0_addr(instruction_pointer(regs)))
> -			arm64_apply_bp_hardening();
> -		local_daif_restore(DAIF_PROCCTX);
> -	}
> -
>  	arm64_notify_die("SP/PC alignment exception", regs,
>  			 SIGBUS, BUS_ADRALN, (void __user *)addr, esr);
>  }
> -- 
> 2.20.1
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/8] arm64: remove __exception annotations
  2019-10-04 10:17   ` Mark Rutland
@ 2019-10-04 14:10     ` Masami Hiramatsu
  2019-10-04 16:08       ` James Morse
  0 siblings, 1 reply; 23+ messages in thread
From: Masami Hiramatsu @ 2019-10-04 14:10 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, James Morse, linux-arm-kernel, Will Deacon,
	Julien Thierry, Masami Hiramatsu

On Fri, 4 Oct 2019 11:17:17 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> On Thu, Oct 03, 2019 at 06:16:36PM +0100, James Morse wrote:
> > Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
> > stack frame") arm64 has has not used the __exception annotation to dump
> > the pt_regs during stack tracing. in_exception_text() has no callers.
> > 
> > This annotation is only used to blacklist kprobes, it means the same as
> > __kprobes.
> > 
> > Section annotations like this require the functions to be grouped
> > together between the start/end markers, and placed according to
> > the linker script. For kprobes we also have NOKPROBE_SYMBOL() which
> > logs the symbol address in a section that kprobes parses and
> > blacklists at boot.
> > 
> > Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of
> > blacklisted symbols, and saves us from having an arm64 specific
> > spelling of __kprobes.
> > 
> > Signed-off-by: James Morse <james.morse@arm.com>
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > 
> > ---
> > (__exception_irq_entry means no-kprobes and optionally in a section
> >  ftrace can use to pretty-print interrupt handler boundaries.)
> > ---
> >  arch/arm64/include/asm/exception.h |  4 ++--
> >  arch/arm64/include/asm/traps.h     | 10 ---------
> >  arch/arm64/kernel/probes/kprobes.c |  4 ----
> >  arch/arm64/kernel/traps.c          | 10 ++++++---
> >  arch/arm64/kernel/vmlinux.lds.S    |  3 ---
> >  arch/arm64/mm/fault.c              | 34 +++++++++++++++---------------
> >  6 files changed, 26 insertions(+), 39 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> > index a17393ff6677..b0b3ba56e919 100644
> > --- a/arch/arm64/include/asm/exception.h
> > +++ b/arch/arm64/include/asm/exception.h
> 
> [...]
> 
> > -asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
> > -					       unsigned int esr,
> > -					       struct pt_regs *regs)
> > +asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
> > +				   unsigned int esr, struct pt_regs *regs)
> >  {
> >  	const struct fault_info *inf = esr_to_debug_fault_info(esr);
> >  	unsigned long pc = instruction_pointer(regs);
> 
> I assume you meant to add NOKPROBE_SYMBOL(do_debug_exception) here.
> 
> Assuming so, and with that fixed up:
> 
> Acked-by: Mark Rutland <mark.rutland@arm.com>

Good catch, if so, this looks good to me too.
with that fixed up:

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH 2/8] arm64: remove __exception annotations
  2019-10-04 14:10     ` Masami Hiramatsu
@ 2019-10-04 16:08       ` James Morse
  2019-10-04 16:34         ` Mark Rutland
  0 siblings, 1 reply; 23+ messages in thread
From: James Morse @ 2019-10-04 16:08 UTC (permalink / raw)
  To: Masami Hiramatsu, Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Julien Thierry, linux-arm-kernel

Hi guys,

On 04/10/2019 15:10, Masami Hiramatsu wrote:
> On Fri, 4 Oct 2019 11:17:17 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
>> On Thu, Oct 03, 2019 at 06:16:36PM +0100, James Morse wrote:
>>> Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
>>> stack frame") arm64 has has not used the __exception annotation to dump
>>> the pt_regs during stack tracing. in_exception_text() has no callers.
>>>
>>> This annotation is only used to blacklist kprobes, it means the same as
>>> __kprobes.
>>>
>>> Section annotations like this require the functions to be grouped
>>> together between the start/end markers, and placed according to
>>> the linker script. For kprobes we also have NOKPROBE_SYMBOL() which
>>> logs the symbol address in a section that kprobes parses and
>>> blacklists at boot.
>>>
>>> Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of
>>> blacklisted symbols, and saves us from having an arm64 specific
>>> spelling of __kprobes.

>>> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
>>> index a17393ff6677..b0b3ba56e919 100644
>>> --- a/arch/arm64/include/asm/exception.h
>>> +++ b/arch/arm64/include/asm/exception.h
>>
>> [...]
>>
>>> -asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
>>> -					       unsigned int esr,
>>> -					       struct pt_regs *regs)
>>> +asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
>>> +				   unsigned int esr, struct pt_regs *regs)
>>>  {
>>>  	const struct fault_info *inf = esr_to_debug_fault_info(esr);
>>>  	unsigned long pc = instruction_pointer(regs);
>>
>> I assume you meant to add NOKPROBE_SYMBOL(do_debug_exception) here.
>>
>> Assuming so, and with that fixed up:
>>
>> Acked-by: Mark Rutland <mark.rutland@arm.com>
> 
> Good catch, if so, this looks good to me too.

I should have noted it in the commit message, but the NOKPROBE_SYMBOL(do_debug_exception)
is already there! Added by commit 2dd0e8d2d2a15 ("arm64: Kprobes with single stepping
support").

(kprobing the debug handler is so bad, we blacklist it twice!)
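
For reference, the two styles look roughly like this (a sketch with a
made-up handler name, not the exact kernel code):

| /* old: the __exception/__kprobes annotation groups the function into a
|  * dedicated section via the linker script */
| asmlinkage void __exception do_example(struct pt_regs *regs)
| {
| 	/* ... */
| }
|
| /* new: NOKPROBE_SYMBOL() records the address in the _kprobe_blacklist
|  * section, which kprobes parses at boot */
| asmlinkage void do_example(struct pt_regs *regs)
| {
| 	/* ... */
| }
| NOKPROBE_SYMBOL(do_example);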

I'll fix the commit message.


> with that fixed up:
> 
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

I assume you're both happy for me to apply these tags.


Thanks,

James


* Re: [PATCH 2/8] arm64: remove __exception annotations
  2019-10-04 13:03   ` Marc Gonzalez
@ 2019-10-04 16:08     ` James Morse
  0 siblings, 0 replies; 23+ messages in thread
From: James Morse @ 2019-10-04 16:08 UTC (permalink / raw)
  To: Marc Gonzalez; +Cc: Linux ARM

On 04/10/2019 14:03, Marc Gonzalez wrote:
> On 03/10/2019 19:16, James Morse wrote:
> 
>> Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
>> stack frame") arm64 has has not used the __exception annotation to dump
> 
> s/has has not/has not/  ?

oops!


Thanks,

James


* Re: [PATCH 6/8] arm64: entry: convert el0_sync to C
  2019-10-04 12:57   ` Mark Rutland
@ 2019-10-04 16:09     ` James Morse
  2019-10-04 16:37       ` Mark Rutland
  0 siblings, 1 reply; 23+ messages in thread
From: James Morse @ 2019-10-04 16:09 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, Julien Thierry,
	Masami Hiramatsu

Hi Mark,

On 04/10/2019 13:57, Mark Rutland wrote:
> On Thu, Oct 03, 2019 at 06:16:40PM +0100, James Morse wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> This is largely a 1-1 conversion of asm to C, with a couple of caveats.
>>
>> The el0_sync{_compat} switches explicitly handle all the EL0 debug
>> cases, so el0_dbg doesn't have to try to bail out for unexpected EL1
>> debug ESR values. This also means that an unexpected vector catch from
>> AArch32 is routed to el0_inv.
>>
>> We *could* merge the native and compat switches, which would make the
>> diffstat negative, but I've tried to stay as close to the existing
>> assembly as possible for the moment.

>> +static void notrace el0_dbg(struct pt_regs *regs, unsigned long esr)
>> +{
>> +	unsigned long addr_if_watchpoint = read_sysreg(far_el1);
>> +
>> +	if (system_uses_irq_prio_masking())
>> +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
>> +
>> +	user_exit_irqoff();
>> +	do_debug_exception(addr_if_watchpoint, esr, regs);
>> +	local_daif_restore(DAIF_PROCCTX_NOIRQ);
>> +}
>> +NOKPROBE_SYMBOL(el0_dbg);
> 
> I think that it'd be best to stick with 'far' here, and only have the
> 'addr_if_watchpoint' name within do_debug_exception(), where it used to
> be 'addr'. That way all of this code consistently uses 'far'.

I'm nervous about reading junk into a variable called 'far'... someone might assume it's an address.

Of the exceptions that come through here, FAR_EL1 is only written by the CPU for a
watchpoint; for all the others it's UNKNOWN.
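
To spell it out (a hypothetical helper, purely to illustrate which ESR
codes imply a meaningful FAR_EL1 on this path):

| static bool debug_far_is_valid(unsigned long esr)
| {
| 	switch (ESR_ELx_EC(esr)) {
| 	case ESR_ELx_EC_WATCHPT_LOW:	/* watchpoint from EL0 */
| 	case ESR_ELx_EC_WATCHPT_CUR:	/* watchpoint from EL1 */
| 		return true;
| 	default:	/* breakpoint, BRK, software-step: FAR_EL1 is UNKNOWN */
| 		return false;
| 	}
| }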


> Otherwise this all looks good to me:
> 
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>

I'll leave this for now, let me know if my paranoia needs re-tuning!


Thanks,

James


* Re: [PATCH 8/8] arm64: entry-common: don't touch daif before bp-hardening
  2019-10-04 13:31   ` Mark Rutland
@ 2019-10-04 16:09     ` James Morse
  0 siblings, 0 replies; 23+ messages in thread
From: James Morse @ 2019-10-04 16:09 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, Julien Thierry,
	Masami Hiramatsu

Hi Mark,

On 04/10/2019 14:31, Mark Rutland wrote:
> On Thu, Oct 03, 2019 at 06:16:42PM +0100, James Morse wrote:
>> The previous patches mechanically transformed the assembly version of
>> entry.S to entry-common.c for synchronous exceptions.
>>
>> The C version of local_daif_restore() doesn't quite do the same thing
>> as the assembly versions if pseudo-NMI is in use. In particular,
>> | local_daif_restore(DAIF_PROCCTX_NOIRQ)
>> will still allow pNMI to be delivered. This is not the behaviour
>> do_el0_ia_bp_hardening() and do_sp_pc_abort() want as it should not
>> be possible for the PMU handler to run as an NMI until the bp-hardening
>> sequence has run.
>>
>> The bp-hardening calls were placed where they are because this was the
>> first C code to run after the relevant exceptions. As we've now moved
>> that point earlier, move the checks and calls earlier too.
>>
>> This makes it clearer that this stuff runs before any kind of exception,
>> and saves modifying PSTATE twice.

>> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
>> index 5623685c7d13..c0c28c4589a8 100644
>> --- a/arch/arm64/include/asm/processor.h
>> +++ b/arch/arm64/include/asm/processor.h
>> @@ -24,6 +24,7 @@
>>  #include <linux/build_bug.h>
>>  #include <linux/cache.h>
>>  #include <linux/init.h>
>> +#include <linux/thread_info.h>
>>  #include <linux/stddef.h>
>>  #include <linux/string.h>
> 
> Nit: alphabetical order please!

Huh, I should really learn the alphabet...


>> @@ -214,6 +215,12 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc,
>>  	regs->sp = sp;
>>  }
>>  
>> +static inline bool is_ttbr0_addr(unsigned long addr)
>> +{
>> +	/* entry assembly clears tags for TTBR0 addrs */
>> +	return addr < TASK_SIZE;
>> +}
> 
> Could we move is_ttbr1_addr() here too?
> 
> I guess there might be include ordering issues, but if not it would be
> nice if they lived in the same place.

That works; I agree keeping them together makes sense. (I didn't spot there were two.)


>> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
>> index 176969e55677..eb73d250a081 100644
>> --- a/arch/arm64/kernel/entry-common.c
>> +++ b/arch/arm64/kernel/entry-common.c
>> @@ -112,9 +113,17 @@ static void notrace el0_ia(struct pt_regs *regs, unsigned long esr)
>>  {
>>  	unsigned long far = read_sysreg(far_el1);
>>  
>> +	/*
>> +	 * We've taken an instruction abort from userspace and not yet
>> +	 * re-enabled IRQs. If the address is a kernel address, apply
>> +	 * BP hardening prior to enabling IRQs and pre-emption.
>> +	 */
>> +	if (!is_ttbr0_addr(far))
>> +		arm64_apply_bp_hardening();
>> +
>>  	user_exit_irqoff();
>> -	local_daif_restore(DAIF_PROCCTX_NOIRQ);
>> -	do_el0_ia_bp_hardening(far, esr, regs);
>> +	local_daif_restore(DAIF_PROCCTX);
>> +	do_mem_abort(far, esr, regs);
>>  }
>>  NOKPROBE_SYMBOL(el0_ia);
>>  
>> @@ -154,8 +163,11 @@ static void notrace el0_pc(struct pt_regs *regs, unsigned long esr)
>>  {
>>  	unsigned long far = read_sysreg(far_el1);
>>  
>> +	if (!is_ttbr0_addr(instruction_pointer(regs)))
>> +		arm64_apply_bp_hardening();
>> +
>>  	user_exit_irqoff();
>> -	local_daif_restore(DAIF_PROCCTX_NOIRQ);
>> +	local_daif_restore(DAIF_PROCCTX);
>>  	do_sp_pc_abort(far, esr, regs);
>>  }
>>  NOKPROBE_SYMBOL(el0_pc);
> 
> This is much nicer, and AFAICT is correct, so:
> 
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>


Thanks!

James


* Re: [PATCH 2/8] arm64: remove __exception annotations
  2019-10-04 16:08       ` James Morse
@ 2019-10-04 16:34         ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 16:34 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Julien Thierry, Will Deacon, Masami Hiramatsu,
	linux-arm-kernel

On Fri, Oct 04, 2019 at 05:08:11PM +0100, James Morse wrote:
> On 04/10/2019 15:10, Masami Hiramatsu wrote:
> > On Fri, 4 Oct 2019 11:17:17 +0100
> > Mark Rutland <mark.rutland@arm.com> wrote:
> >> On Thu, Oct 03, 2019 at 06:16:36PM +0100, James Morse wrote:
> >>> Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
> >>> stack frame") arm64 has has not used the __exception annotation to dump
> >>> the pt_regs during stack tracing. in_exception_text() has no callers.
> >>>
> >>> This annotation is only used to blacklist kprobes, it means the same as
> >>> __kprobes.
> >>>
> >>> Section annotations like this require the functions to be grouped
> >>> together between the start/end markers, and placed according to
> >>> the linker script. For kprobes we also have NOKPROBE_SYMBOL() which
> >>> logs the symbol address in a section that kprobes parses and
> >>> blacklists at boot.
> >>>
> >>> Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of
> >>> blacklisted symbols, and saves us from having an arm64 specific
> >>> spelling of __kprobes.
> 
> >>> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> >>> index a17393ff6677..b0b3ba56e919 100644
> >>> --- a/arch/arm64/include/asm/exception.h
> >>> +++ b/arch/arm64/include/asm/exception.h
> >>
> >> [...]
> >>
> >>> -asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
> >>> -					       unsigned int esr,
> >>> -					       struct pt_regs *regs)
> >>> +asmlinkage void do_debug_exception(unsigned long addr_if_watchpoint,
> >>> +				   unsigned int esr, struct pt_regs *regs)
> >>>  {
> >>>  	const struct fault_info *inf = esr_to_debug_fault_info(esr);
> >>>  	unsigned long pc = instruction_pointer(regs);
> >>
> >> I assume you meant to add NOKPROBE_SYMBOL(do_debug_exception) here.
> >>
> >> Assuming so, and with that fixed up:
> >>
> >> Acked-by: Mark Rutland <mark.rutland@arm.com>
> > 
> > Good catch, if so, this looks good to me too.
> 
> I should have noted it in the commit message, but the NOKPROBE_SYMBOL(do_debug_exception)
> is already there! Added by commit 2dd0e8d2d2a15 ("arm64: Kprobes with single stepping
> support").
> 
> (kprobing the debug handler is so bad, we blacklist it twice!)

Ah; neat!

> I'll fix the commit message.
> 
> 
> > with that fixed up:
> > 
> > Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
> 
> I assume you're both happy for me to apply these tags.

Certainly!

Mark.


* Re: [PATCH 6/8] arm64: entry: convert el0_sync to C
  2019-10-04 16:09     ` James Morse
@ 2019-10-04 16:37       ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2019-10-04 16:37 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, Julien Thierry,
	Masami Hiramatsu

On Fri, Oct 04, 2019 at 05:09:31PM +0100, James Morse wrote:
> Hi Mark,
> 
> On 04/10/2019 13:57, Mark Rutland wrote:
> > On Thu, Oct 03, 2019 at 06:16:40PM +0100, James Morse wrote:
> >> From: Mark Rutland <mark.rutland@arm.com>
> >>
> >> This is largely a 1-1 conversion of asm to C, with a couple of caveats.
> >>
> >> The el0_sync{_compat} switches explicitly handle all the EL0 debug
> >> cases, so el0_dbg doesn't have to try to bail out for unexpected EL1
> >> debug ESR values. This also means that an unexpected vector catch from
> >> AArch32 is routed to el0_inv.
> >>
> >> We *could* merge the native and compat switches, which would make the
> >> diffstat negative, but I've tried to stay as close to the existing
> >> assembly as possible for the moment.
> 
> >> +static void notrace el0_dbg(struct pt_regs *regs, unsigned long esr)
> >> +{
> >> +	unsigned long addr_if_watchpoint = read_sysreg(far_el1);
> >> +
> >> +	if (system_uses_irq_prio_masking())
> >> +		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
> >> +
> >> +	user_exit_irqoff();
> >> +	do_debug_exception(addr_if_watchpoint, esr, regs);
> >> +	local_daif_restore(DAIF_PROCCTX_NOIRQ);
> >> +}
> >> +NOKPROBE_SYMBOL(el0_dbg);
> > 
> > I think that it'd be best to stick with 'far' here, and only have the
> > 'addr_if_watchpoint' name within do_debug_exception(), where it used to
> > be 'addr'. That way all of this code consistently uses 'far'.
> 
> I'm nervous about reading junk into a variable called 'far'... someone might assume it's an address.
> 
> Of the exceptions that come through here, FAR_EL1 is only written by the CPU for a
> watchpoint; for all the others it's UNKNOWN.

My argument is that "far" just means "the contents of FAR_EL1", and
whether the value has any meaning for the current exception is
context-dependent (i.e. within do_debug_exception()).

> > Otherwise this all looks good to me:
> > 
> > Reviewed-by: Mark Rutland <mark.rutland@arm.com>
> 
> I'll leave this for now, let me know if my paranoia needs re-tuning!

Sure; my R-B holds either way. I just think it'd be nicer to use 'far'
here for consistency.

Thanks,
Mark.


end of thread

Thread overview: 23+ messages
2019-10-03 17:16 [PATCH 0/8] arm64: Convert entry.S synchronous exception handling to C James Morse
2019-10-03 17:16 ` [PATCH 1/8] arm64: Fix incorrect irqflag restore for priority masking for compat James Morse
2019-10-03 17:16 ` [PATCH 2/8] arm64: remove __exception annotations James Morse
2019-10-04 10:17   ` Mark Rutland
2019-10-04 14:10     ` Masami Hiramatsu
2019-10-04 16:08       ` James Morse
2019-10-04 16:34         ` Mark Rutland
2019-10-04 13:03   ` Marc Gonzalez
2019-10-04 16:08     ` James Morse
2019-10-03 17:16 ` [PATCH 3/8] arm64: Add prototypes for functions called by entry.S James Morse
2019-10-04 10:22   ` Mark Rutland
2019-10-03 17:16 ` [PATCH 4/8] arm64: add local_daif_inherit() James Morse
2019-10-03 17:16 ` [PATCH 5/8] arm64: entry: convert el1_sync to C James Morse
2019-10-04 10:39   ` Mark Rutland
2019-10-03 17:16 ` [PATCH 6/8] arm64: entry: convert el0_sync " James Morse
2019-10-04 12:57   ` Mark Rutland
2019-10-04 16:09     ` James Morse
2019-10-04 16:37       ` Mark Rutland
2019-10-03 17:16 ` [PATCH 7/8] arm64: Remove asmlinkage from updated functions James Morse
2019-10-04 12:58   ` Mark Rutland
2019-10-03 17:16 ` [PATCH 8/8] arm64: entry-common: don't touch daif before bp-hardening James Morse
2019-10-04 13:31   ` Mark Rutland
2019-10-04 16:09     ` James Morse
