* [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump
@ 2017-07-26 18:18 Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 1/7] arm64: Add ASM_BUG() Mark Rutland
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

These patches fix our exception stack dumping code so that it correctly
identifies frames associated with exceptions, for which there is a
corresponding pt_regs.

Ard and I ran into these issues while working on vmap'd stacks, and these
patches will be a prerequisite for overflow handling to function correctly.

This is a follow up to Ard's v1 [1]. I've pushed this out to my
arm64/exception-stack branch [2].

Since v1:
* Identify exception frames based on .entry.text
* Add ASM_BUG()
* Use BL to ensure all exception frames can be identified
* Minor fixups

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-July/520705.html
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/exception-stack

Ard Biesheuvel (3):
  arm64: unwind: disregard frame.sp when validating frame pointer
  arm64: unwind: reference pt_regs via embedded stack frame
  arm64: unwind: remove sp from struct stackframe

Mark Rutland (4):
  arm64: Add ASM_BUG()
  arm64: consistently use bl for C exception entry
  arm64: move non-entry code out of .entry.text
  arm64: unwind: avoid percpu indirection for irq stack

 arch/arm64/include/asm/asm-bug.h    |  54 +++++++++++++++++
 arch/arm64/include/asm/assembler.h  |  11 ++++
 arch/arm64/include/asm/bug.h        |  35 +----------
 arch/arm64/include/asm/irq.h        |  39 ++++--------
 arch/arm64/include/asm/ptrace.h     |   1 +
 arch/arm64/include/asm/stacktrace.h |   1 -
 arch/arm64/include/asm/traps.h      |   5 ++
 arch/arm64/kernel/asm-offsets.c     |   1 +
 arch/arm64/kernel/entry.S           | 118 +++++++++++++++++++-----------------
 arch/arm64/kernel/perf_callchain.c  |   1 -
 arch/arm64/kernel/process.c         |   5 +-
 arch/arm64/kernel/ptrace.c          |   2 +-
 arch/arm64/kernel/return_address.c  |   1 -
 arch/arm64/kernel/stacktrace.c      |  54 ++---------------
 arch/arm64/kernel/time.c            |   1 -
 arch/arm64/kernel/traps.c           |  34 +++--------
 16 files changed, 162 insertions(+), 201 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm-bug.h

-- 
1.9.1


* [PATCHv2 1/7] arm64: Add ASM_BUG()
  2017-07-26 18:18 [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump Mark Rutland
@ 2017-07-26 18:18 ` Mark Rutland
  2017-08-08 15:31   ` Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 2/7] arm64: consistently use bl for C exception entry Mark Rutland
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

Currently, we can only use BUG() from C code, though there are
situations where we would like an equivalent mechanism in assembly code.

This patch refactors our BUG() definition such that it can be used in
either C or assembly, in the form of a new ASM_BUG().

The refactoring requires the removal of escape sequences, such as '\n'
and '\t', but these aren't strictly necessary as we can use ';' to
terminate assembler statements.

The low-level assembly is factored out into <asm/asm-bug.h>, with
<asm/bug.h> retained as the C wrapper.
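
For illustration, a minimal usage sketch (the call site is hypothetical,
not part of this patch): in assembly, ASM_BUG() is placed directly after
a call that is not expected to return, while the C-side BUG() simply
stringifies the same macro into an inline asm statement:

	/* assembly: trap if the handler unexpectedly returns */
	bl	bad_mode
	ASM_BUG()

	/* C: BUG() now expands to roughly the following */
	asm volatile (__stringify(ASM_BUG_FLAGS(0)));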

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/asm-bug.h | 54 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/bug.h     | 35 +++-----------------------
 2 files changed, 57 insertions(+), 32 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm-bug.h

diff --git a/arch/arm64/include/asm/asm-bug.h b/arch/arm64/include/asm/asm-bug.h
new file mode 100644
index 0000000..4a0f708
--- /dev/null
+++ b/arch/arm64/include/asm/asm-bug.h
@@ -0,0 +1,54 @@
+#ifndef __ASM_ASM_BUG_H
+/*
+ * Copyright (C) 2017  ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#define __ASM_ASM_BUG_H
+
+#include <asm/brk-imm.h>
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define _BUGVERBOSE_LOCATION(file, line) __BUGVERBOSE_LOCATION(file, line)
+#define __BUGVERBOSE_LOCATION(file, line)			\
+		.pushsection .rodata.str,"aMS",@progbits,1;	\
+	2:	.string file;					\
+		.popsection;					\
+								\
+		.long 2b - 0b;					\
+		.short line;
+#else
+#define _BUGVERBOSE_LOCATION(file, line)
+#endif
+
+#ifdef CONFIG_GENERIC_BUG
+
+#define __BUG_ENTRY(flags) 				\
+		.pushsection __bug_table,"a";		\
+		.align 2;				\
+	0:	.long 1f - 0b;				\
+_BUGVERBOSE_LOCATION(__FILE__, __LINE__)		\
+		.short flags; 				\
+		.popsection;				\
+	1:
+#else
+#define __BUG_ENTRY(flags)
+#endif
+
+#define ASM_BUG_FLAGS(flags)				\
+	__BUG_ENTRY(0)					\
+	brk	BUG_BRK_IMM
+
+#define ASM_BUG()	ASM_BUG_FLAGS(0)
+
+#endif /* __ASM_ASM_BUG_H */
diff --git a/arch/arm64/include/asm/bug.h b/arch/arm64/include/asm/bug.h
index 366448e..d7dc437 100644
--- a/arch/arm64/include/asm/bug.h
+++ b/arch/arm64/include/asm/bug.h
@@ -18,41 +18,12 @@
 #ifndef _ARCH_ARM64_ASM_BUG_H
 #define _ARCH_ARM64_ASM_BUG_H
 
-#include <asm/brk-imm.h>
+#include <linux/stringify.h>
 
-#ifdef CONFIG_DEBUG_BUGVERBOSE
-#define _BUGVERBOSE_LOCATION(file, line) __BUGVERBOSE_LOCATION(file, line)
-#define __BUGVERBOSE_LOCATION(file, line)				\
-		".pushsection .rodata.str,\"aMS\",@progbits,1\n"	\
-	"2:	.string \"" file "\"\n\t"				\
-		".popsection\n\t"					\
-									\
-		".long 2b - 0b\n\t"					\
-		".short " #line "\n\t"
-#else
-#define _BUGVERBOSE_LOCATION(file, line)
-#endif
-
-#ifdef CONFIG_GENERIC_BUG
-
-#define __BUG_ENTRY(flags) 				\
-		".pushsection __bug_table,\"a\"\n\t"	\
-		".align 2\n\t"				\
-	"0:	.long 1f - 0b\n\t"			\
-_BUGVERBOSE_LOCATION(__FILE__, __LINE__)		\
-		".short " #flags "\n\t"			\
-		".popsection\n"				\
-	"1:	"
-#else
-#define __BUG_ENTRY(flags) ""
-#endif
+#include <asm/asm-bug.h>
 
 #define __BUG_FLAGS(flags)				\
-	asm volatile (					\
-		__BUG_ENTRY(flags)			\
-		"brk %[imm]" :: [imm] "i" (BUG_BRK_IMM)	\
-	);
-
+	asm volatile (__stringify(ASM_BUG_FLAGS(flags)));
 
 #define BUG() do {					\
 	__BUG_FLAGS(0);					\
-- 
1.9.1


* [PATCHv2 2/7] arm64: consistently use bl for C exception entry
  2017-07-26 18:18 [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 1/7] arm64: Add ASM_BUG() Mark Rutland
@ 2017-07-26 18:18 ` Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 3/7] arm64: move non-entry code out of .entry.text Mark Rutland
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

In most cases, our exception entry assembly branches to C handlers with
a BL instruction, but in cases where we do not expect to return, we use
B instead.

While this is correct today, it means that backtraces for fatal
exceptions miss the entry assembly (as the LR is stale at the point we
call C code), while non-fatal exceptions have the entry assembly in the
LR. In subsequent patches, we will need the LR to be set in these cases
in order to backtrace reliably.

This patch updates these sites to use a BL, ensuring consistency, and
preparing for backtrace rework. An ASM_BUG() is added after each of
these new BLs, which both catches unexpected returns and ensures that
the LR value doesn't point at another function's label.
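
To make the LR behaviour concrete, a simplified sketch (illustrative,
not verbatim kernel code): with B, the LR still holds whatever the last
BL left in it, so the unwinder never sees the entry assembly; with BL,
the LR points at the instruction following the branch:

	b	bad_mode	// LR stale: backtrace skips this stub

	bl	bad_mode	// LR = address of the ASM_BUG() below
	ASM_BUG()		// also traps any unexpected return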

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b738880..660612a 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -351,7 +351,8 @@ END(vectors)
 	mov	x0, sp
 	mov	x1, #\reason
 	mrs	x2, esr_el1
-	b	bad_mode
+	bl	bad_mode
+	ASM_BUG()
 	.endm
 
 el0_sync_invalid:
@@ -448,14 +449,16 @@ el1_sp_pc:
 	mrs	x0, far_el1
 	enable_dbg
 	mov	x2, sp
-	b	do_sp_pc_abort
+	bl	do_sp_pc_abort
+	ASM_BUG()
 el1_undef:
 	/*
 	 * Undefined instruction
 	 */
 	enable_dbg
 	mov	x0, sp
-	b	do_undefinstr
+	bl	do_undefinstr
+	ASM_BUG()
 el1_dbg:
 	/*
 	 * Debug exception handling
@@ -473,7 +476,8 @@ el1_inv:
 	mov	x0, sp
 	mov	x2, x1
 	mov	x1, #BAD_SYNC
-	b	bad_mode
+	bl	bad_mode
+	ASM_BUG()
 ENDPROC(el1_sync)
 
 	.align	6
-- 
1.9.1


* [PATCHv2 3/7] arm64: move non-entry code out of .entry.text
  2017-07-26 18:18 [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 1/7] arm64: Add ASM_BUG() Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 2/7] arm64: consistently use bl for C exception entry Mark Rutland
@ 2017-07-26 18:18 ` Mark Rutland
  2017-07-26 21:38   ` Stephen Boyd
  2017-07-26 18:18 ` [PATCHv2 4/7] arm64: unwind: avoid percpu indirection for irq stack Mark Rutland
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

Currently, cpu_switch_to and ret_from_fork both live in .entry.text,
though neither form the critical path for an exception entry.

In subsequent patches, we will require that code in .entry.text is part
of the critical path for exception entry, for which we can assume
certain properties (e.g. the presence of exception regs on the stack).

Neither cpu_switch_to nor ret_from_fork will meet these requirements, so
we must most them out of .entry.text. To ensure that neither are kprobed
after being moved out of .entry.text, we must explicitly blacklist them,
requiring a new NOKPROBE() asm helper.
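
For comparison (an aside for context, not part of the patch itself): C
code blacklists functions via NOKPROBE_SYMBOL(), which records the
symbol's address in the _kprobe_blacklist section; the new asm-side
NOKPROBE() emits an equivalent record:

	/* C side (my_handler is a hypothetical example) */
	NOKPROBE_SYMBOL(my_handler);

	/* asm side, as used for cpu_switch_to below */
	ENDPROC(cpu_switch_to)
	NOKPROBE(cpu_switch_to)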

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/assembler.h | 11 +++++
 arch/arm64/kernel/entry.S          | 90 +++++++++++++++++++-------------------
 2 files changed, 57 insertions(+), 44 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 1b67c37..610a420 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -403,6 +403,17 @@
 	.size	__pi_##x, . - x;	\
 	ENDPROC(x)
 
+/*
+ * Annotate a function as being unsuitable for kprobes.
+ */
+#ifdef CONFIG_KPROBES
+#define NOKPROBE(x)				\
+	.pushsection "_kprobe_blacklist", "aw";	\
+	.quad	x;				\
+	.popsection;
+#else
+#define NOKPROBE(x)
+#endif
 	/*
 	 * Emit a 64-bit absolute little endian symbol reference in a way that
 	 * ensures that it will be resolved at build time, even when building a
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 660612a..9e126d3 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -711,38 +711,6 @@ el0_irq_naked:
 ENDPROC(el0_irq)
 
 /*
- * Register switch for AArch64. The callee-saved registers need to be saved
- * and restored. On entry:
- *   x0 = previous task_struct (must be preserved across the switch)
- *   x1 = next task_struct
- * Previous and next are guaranteed not to be the same.
- *
- */
-ENTRY(cpu_switch_to)
-	mov	x10, #THREAD_CPU_CONTEXT
-	add	x8, x0, x10
-	mov	x9, sp
-	stp	x19, x20, [x8], #16		// store callee-saved registers
-	stp	x21, x22, [x8], #16
-	stp	x23, x24, [x8], #16
-	stp	x25, x26, [x8], #16
-	stp	x27, x28, [x8], #16
-	stp	x29, x9, [x8], #16
-	str	lr, [x8]
-	add	x8, x1, x10
-	ldp	x19, x20, [x8], #16		// restore callee-saved registers
-	ldp	x21, x22, [x8], #16
-	ldp	x23, x24, [x8], #16
-	ldp	x25, x26, [x8], #16
-	ldp	x27, x28, [x8], #16
-	ldp	x29, x9, [x8], #16
-	ldr	lr, [x8]
-	mov	sp, x9
-	msr	sp_el0, x1
-	ret
-ENDPROC(cpu_switch_to)
-
-/*
  * This is the fast syscall return path.  We do as little as possible here,
  * and this includes saving x0 back into the kernel stack.
  */
@@ -785,18 +753,6 @@ finish_ret_to_user:
 ENDPROC(ret_to_user)
 
 /*
- * This is how we return from a fork.
- */
-ENTRY(ret_from_fork)
-	bl	schedule_tail
-	cbz	x19, 1f				// not a kernel thread
-	mov	x0, x20
-	blr	x19
-1:	get_thread_info tsk
-	b	ret_to_user
-ENDPROC(ret_from_fork)
-
-/*
  * SVC handler.
  */
 	.align	6
@@ -869,3 +825,49 @@ ENTRY(sys_rt_sigreturn_wrapper)
 	mov	x0, sp
 	b	sys_rt_sigreturn
 ENDPROC(sys_rt_sigreturn_wrapper)
+
+/*
+ * Register switch for AArch64. The callee-saved registers need to be saved
+ * and restored. On entry:
+ *   x0 = previous task_struct (must be preserved across the switch)
+ *   x1 = next task_struct
+ * Previous and next are guaranteed not to be the same.
+ *
+ */
+ENTRY(cpu_switch_to)
+	mov	x10, #THREAD_CPU_CONTEXT
+	add	x8, x0, x10
+	mov	x9, sp
+	stp	x19, x20, [x8], #16		// store callee-saved registers
+	stp	x21, x22, [x8], #16
+	stp	x23, x24, [x8], #16
+	stp	x25, x26, [x8], #16
+	stp	x27, x28, [x8], #16
+	stp	x29, x9, [x8], #16
+	str	lr, [x8]
+	add	x8, x1, x10
+	ldp	x19, x20, [x8], #16		// restore callee-saved registers
+	ldp	x21, x22, [x8], #16
+	ldp	x23, x24, [x8], #16
+	ldp	x25, x26, [x8], #16
+	ldp	x27, x28, [x8], #16
+	ldp	x29, x9, [x8], #16
+	ldr	lr, [x8]
+	mov	sp, x9
+	msr	sp_el0, x1
+	ret
+ENDPROC(cpu_switch_to)
+NOKPROBE(cpu_switch_to)
+
+/*
+ * This is how we return from a fork.
+ */
+ENTRY(ret_from_fork)
+	bl	schedule_tail
+	cbz	x19, 1f				// not a kernel thread
+	mov	x0, x20
+	blr	x19
+1:	get_thread_info tsk
+	b	ret_to_user
+ENDPROC(ret_from_fork)
+NOKPROBE(ret_from_fork)
-- 
1.9.1


* [PATCHv2 4/7] arm64: unwind: avoid percpu indirection for irq stack
  2017-07-26 18:18 [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump Mark Rutland
                   ` (2 preceding siblings ...)
  2017-07-26 18:18 ` [PATCHv2 3/7] arm64: move non-entry code out of .entry.text Mark Rutland
@ 2017-07-26 18:18 ` Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 5/7] arm64: unwind: disregard frame.sp when validating frame pointer Mark Rutland
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

Our IRQ_STACK_PTR() and on_irq_stack() helpers both take a cpu argument,
used to generate a percpu address. In all cases, they are passed
{raw_,}smp_processor_id(), so this parameter is redundant.

Since {raw_,}smp_processor_id() use a percpu variable internally, this
approach means we generate a percpu offset to find the current cpu, then
use this to index an array of percpu offsets, which we then use to find
the current CPU's IRQ stack pointer. Thus, most of the work is
redundant.

Instead, we can consistently use raw_cpu_ptr() to generate the CPU's
irq_stack pointer by simply adding the percpu offset to the irq_stack
address, which is simpler in both respects.
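
Schematically, the before/after lookup paths look like the following
(a simplified sketch, not verbatim kernel code):

	/* before: one percpu read to find the CPU, then a second lookup */
	void *old = per_cpu(irq_stack, raw_smp_processor_id());

	/* after: apply this CPU's percpu offset directly */
	void *new = raw_cpu_ptr(irq_stack);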

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/irq.h   | 6 +++---
 arch/arm64/kernel/ptrace.c     | 2 +-
 arch/arm64/kernel/stacktrace.c | 4 ++--
 arch/arm64/kernel/traps.c      | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
index b77197d..6d6f85e 100644
--- a/arch/arm64/include/asm/irq.h
+++ b/arch/arm64/include/asm/irq.h
@@ -32,7 +32,7 @@
  * from kernel_entry can be found.
  *
  */
-#define IRQ_STACK_PTR(cpu) ((unsigned long)per_cpu(irq_stack, cpu) + IRQ_STACK_START_SP)
+#define IRQ_STACK_PTR() ((unsigned long)raw_cpu_ptr(irq_stack) + IRQ_STACK_START_SP)
 
 /*
  * The offset from irq_stack_ptr where entry.S will store the original
@@ -47,10 +47,10 @@ static inline int nr_legacy_irqs(void)
 	return 0;
 }
 
-static inline bool on_irq_stack(unsigned long sp, int cpu)
+static inline bool on_irq_stack(unsigned long sp)
 {
 	/* variable names the same as kernel/stacktrace.c */
-	unsigned long low = (unsigned long)per_cpu(irq_stack, cpu);
+	unsigned long low = (unsigned long)raw_cpu_ptr(irq_stack);
 	unsigned long high = low + IRQ_STACK_START_SP;
 
 	return (low <= sp && sp <= high);
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 1b38c01..baf0838 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -127,7 +127,7 @@ static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
 {
 	return ((addr & ~(THREAD_SIZE - 1))  ==
 		(kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) ||
-		on_irq_stack(addr, raw_smp_processor_id());
+		on_irq_stack(addr);
 }
 
 /**
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 09d37d6..6ffb965 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -54,13 +54,13 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * non-preemptible context.
 	 */
 	if (tsk == current && !preemptible())
-		irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
+		irq_stack_ptr = IRQ_STACK_PTR();
 	else
 		irq_stack_ptr = 0;
 
 	low  = frame->sp;
 	/* irq stacks are not THREAD_SIZE aligned */
-	if (on_irq_stack(frame->sp, raw_smp_processor_id()))
+	if (on_irq_stack(frame->sp))
 		high = irq_stack_ptr;
 	else
 		high = ALIGN(low, THREAD_SIZE) - 0x20;
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index c7c7088..7ba80f2 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -159,7 +159,7 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 	 * non-preemptible context.
 	 */
 	if (tsk == current && !preemptible())
-		irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
+		irq_stack_ptr = IRQ_STACK_PTR();
 	else
 		irq_stack_ptr = 0;
 
-- 
1.9.1


* [PATCHv2 5/7] arm64: unwind: disregard frame.sp when validating frame pointer
  2017-07-26 18:18 [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump Mark Rutland
                   ` (3 preceding siblings ...)
  2017-07-26 18:18 ` [PATCHv2 4/7] arm64: unwind: avoid percpu indirection for irq stack Mark Rutland
@ 2017-07-26 18:18 ` Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 6/7] arm64: unwind: reference pt_regs via embedded stack frame Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 7/7] arm64: unwind: remove sp from struct stackframe Mark Rutland
  6 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Currently, when unwinding the call stack, we validate the frame pointer
of each frame against frame.sp, whose value is not clearly defined, and
which makes it more difficult to link stack frames together across
different stacks. It is far better to simply check whether the frame
pointer itself points into a valid stack.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/irq.h   | 10 +++++++++-
 arch/arm64/kernel/stacktrace.c | 24 +++++++-----------------
 2 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
index 6d6f85e..8155e48 100644
--- a/arch/arm64/include/asm/irq.h
+++ b/arch/arm64/include/asm/irq.h
@@ -7,6 +7,7 @@
 #ifndef __ASSEMBLER__
 
 #include <linux/percpu.h>
+#include <linux/sched/task_stack.h>
 
 #include <asm-generic/irq.h>
 #include <asm/thread_info.h>
@@ -49,12 +50,19 @@ static inline int nr_legacy_irqs(void)
 
 static inline bool on_irq_stack(unsigned long sp)
 {
-	/* variable names the same as kernel/stacktrace.c */
 	unsigned long low = (unsigned long)raw_cpu_ptr(irq_stack);
 	unsigned long high = low + IRQ_STACK_START_SP;
 
 	return (low <= sp && sp <= high);
 }
 
+static inline bool on_task_stack(struct task_struct *tsk, unsigned long sp)
+{
+	unsigned long low = (unsigned long)task_stack_page(tsk);
+	unsigned long high = low + THREAD_SIZE;
+
+	return (low <= sp && sp < high);
+}
+
 #endif /* !__ASSEMBLER__ */
 #endif
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 6ffb965..beaf51f 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -42,9 +42,10 @@
  */
 int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 {
-	unsigned long high, low;
 	unsigned long fp = frame->fp;
-	unsigned long irq_stack_ptr;
+
+	if (fp & 0xf)
+		return -EINVAL;
 
 	if (!tsk)
 		tsk = current;
@@ -53,19 +54,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * Switching between stacks is valid when tracing current and in
 	 * non-preemptible context.
 	 */
-	if (tsk == current && !preemptible())
-		irq_stack_ptr = IRQ_STACK_PTR();
-	else
-		irq_stack_ptr = 0;
-
-	low  = frame->sp;
-	/* irq stacks are not THREAD_SIZE aligned */
-	if (on_irq_stack(frame->sp))
-		high = irq_stack_ptr;
-	else
-		high = ALIGN(low, THREAD_SIZE) - 0x20;
-
-	if (fp < low || fp > high || fp & 0xf)
+	if (!(tsk == current && !preemptible() && on_irq_stack(fp)) &&
+	    !on_task_stack(tsk, fp))
 		return -EINVAL;
 
 	frame->sp = fp + 0x10;
@@ -94,9 +84,9 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * Check the frame->fp we read from the bottom of the irq_stack,
 	 * and the original task stack pointer are both in current->stack.
 	 */
-	if (frame->sp == irq_stack_ptr) {
+	if (frame->sp == IRQ_STACK_PTR()) {
 		struct pt_regs *irq_args;
-		unsigned long orig_sp = IRQ_STACK_TO_TASK_STACK(irq_stack_ptr);
+		unsigned long orig_sp = IRQ_STACK_TO_TASK_STACK(frame->sp);
 
 		if (object_is_on_stack((void *)orig_sp) &&
 		   object_is_on_stack((void *)frame->fp)) {
-- 
1.9.1


* [PATCHv2 6/7] arm64: unwind: reference pt_regs via embedded stack frame
  2017-07-26 18:18 [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump Mark Rutland
                   ` (4 preceding siblings ...)
  2017-07-26 18:18 ` [PATCHv2 5/7] arm64: unwind: disregard frame.sp when validating frame pointer Mark Rutland
@ 2017-07-26 18:18 ` Mark Rutland
  2017-07-26 18:18 ` [PATCHv2 7/7] arm64: unwind: remove sp from struct stackframe Mark Rutland
  6 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ard Biesheuvel <ard.biesheuvel@linaro.org>

As it turns out, the unwind code is slightly broken, and probably has
been for a while. The problem is in the dumping of the exception stack,
which is intended to dump the contents of the pt_regs struct at each
level in the call stack where an exception was taken and routed to a
routine marked as __exception (which means its stack frame is right
below the pt_regs struct on the stack).

'Right below the pt_regs struct' is ill-defined, though: the unwind
code assigns 'frame pointer + 0x10' to the .sp member of the stackframe
struct at each level, and dump_backtrace() happily dereferences that as
the pt_regs pointer when encountering an __exception routine. However,
the actual size of the stack frame created by this routine (which could
be one of many __exception routines we have in the kernel) is not known,
and so frame.sp is pretty useless to figure out where struct pt_regs
really is.

So it seems the only way to ensure that we can find our struct pt_regs
when walking the stack frames is to put it at a known fixed offset of
the stack frame pointer that is passed to such __exception routines.
The simplest way to do that is to put it inside pt_regs itself, which is
the main change implemented by this patch. As a bonus, doing this allows
us to get rid of a fair amount of cruft related to walking from one stack
to the other, which is especially nice since we intend to introduce yet
another stack for overflow handling once we add support for vmapped
stacks. It also fixes an inconsistency where we only add a stack frame
pointing to ELR_EL1 if we are executing from the IRQ stack but not when
we are executing from the task stack.

To consistently identify exception regs even in the presence of exceptions
taken from entry code, we must check whether the next frame was created
by entry text, rather than whether the current frame was created by
exception text.
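
The resulting unwinder logic is straightforward (sketched here, slightly
simplified; the real traps.c hunk below also guards the irq-stack case
with a preemptibility check): whenever a frame's PC lies in .entry.text,
the frame record is the one embedded in a pt_regs, so the regs sit at a
fixed offset from the frame pointer:

	if (in_entry_text(frame.pc)) {
		stack = frame.fp - offsetof(struct pt_regs, stackframe);
		if (on_task_stack(tsk, stack) || on_irq_stack(stack))
			dump_mem("", "Exception stack", stack,
				 stack + sizeof(struct pt_regs));
	}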

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: compare current frame against .entry.text]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/irq.h    | 25 -------------------------
 arch/arm64/include/asm/ptrace.h |  1 +
 arch/arm64/include/asm/traps.h  |  5 +++++
 arch/arm64/kernel/asm-offsets.c |  1 +
 arch/arm64/kernel/entry.S       | 16 ++++++++--------
 arch/arm64/kernel/stacktrace.c  | 30 ------------------------------
 arch/arm64/kernel/traps.c       | 32 +++++++-------------------------
 7 files changed, 22 insertions(+), 88 deletions(-)

diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
index 8155e48..8ba89c4 100644
--- a/arch/arm64/include/asm/irq.h
+++ b/arch/arm64/include/asm/irq.h
@@ -16,31 +16,6 @@
 
 DECLARE_PER_CPU(unsigned long [IRQ_STACK_SIZE/sizeof(long)], irq_stack);
 
-/*
- * The highest address on the stack, and the first to be used. Used to
- * find the dummy-stack frame put down by el?_irq() in entry.S, which
- * is structured as follows:
- *
- *       ------------
- *       |          |  <- irq_stack_ptr
- *   top ------------
- *       |   x19    | <- irq_stack_ptr - 0x08
- *       ------------
- *       |   x29    | <- irq_stack_ptr - 0x10
- *       ------------
- *
- * where x19 holds a copy of the task stack pointer where the struct pt_regs
- * from kernel_entry can be found.
- *
- */
-#define IRQ_STACK_PTR() ((unsigned long)raw_cpu_ptr(irq_stack) + IRQ_STACK_START_SP)
-
-/*
- * The offset from irq_stack_ptr where entry.S will store the original
- * stack pointer. Used by unwind_frame() and dump_backtrace().
- */
-#define IRQ_STACK_TO_TASK_STACK(ptr) (*((unsigned long *)((ptr) - 0x08)))
-
 extern void set_handle_irq(void (*handle_irq)(struct pt_regs *));
 
 static inline int nr_legacy_irqs(void)
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 11403fd..ee72aa9 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -119,6 +119,7 @@ struct pt_regs {
 	u64 syscallno;
 	u64 orig_addr_limit;
 	u64 unused;	// maintain 16 byte alignment
+	u64 stackframe[2];
 };
 
 #define MAX_REG_OFFSET offsetof(struct pt_regs, pstate)
diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index 02e9035..4136168 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -60,4 +60,9 @@ static inline int in_exception_text(unsigned long ptr)
 	return in ? : __in_irqentry_text(ptr);
 }
 
+static inline int in_entry_text(unsigned long ptr)
+{
+	return ptr >= (unsigned long)&__entry_text_start &&
+	       ptr < (unsigned long)&__entry_text_end;
+}
 #endif
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index b3bb7ef..71bf088 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -75,6 +75,7 @@ int main(void)
   DEFINE(S_ORIG_X0,		offsetof(struct pt_regs, orig_x0));
   DEFINE(S_SYSCALLNO,		offsetof(struct pt_regs, syscallno));
   DEFINE(S_ORIG_ADDR_LIMIT,	offsetof(struct pt_regs, orig_addr_limit));
+  DEFINE(S_STACKFRAME,		offsetof(struct pt_regs, stackframe));
   DEFINE(S_FRAME_SIZE,		sizeof(struct pt_regs));
   BLANK();
   DEFINE(MM_CONTEXT_ID,		offsetof(struct mm_struct, context.id.counter));
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 9e126d3..4ddb8d7 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -111,6 +111,14 @@
 	mrs	x23, spsr_el1
 	stp	lr, x21, [sp, #S_LR]
 
+	/*
+	 * In order to be able to dump the contents of struct pt_regs at the
+	 * time the exception was taken (in case we attempt to walk the call
+	 * stack later), chain it together with the stack frames.
+	 */
+	stp	x29, x22, [sp, #S_STACKFRAME]
+	add	x29, sp, #S_STACKFRAME
+
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Set the TTBR0 PAN bit in SPSR. When the exception is taken from
@@ -265,14 +273,6 @@ alternative_else_nop_endif
 
 	/* switch to the irq stack */
 	mov	sp, x26
-
-	/*
-	 * Add a dummy stack frame, this non-standard format is fixed up
-	 * by unwind_frame()
-	 */
-	stp     x29, x19, [sp, #-16]!
-	mov	x29, sp
-
 9998:
 	.endm
 
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index beaf51f..b079af1 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -75,36 +75,6 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
-	/*
-	 * Check whether we are going to walk through from interrupt stack
-	 * to task stack.
-	 * If we reach the end of the stack - and its an interrupt stack,
-	 * unpack the dummy frame to find the original elr.
-	 *
-	 * Check the frame->fp we read from the bottom of the irq_stack,
-	 * and the original task stack pointer are both in current->stack.
-	 */
-	if (frame->sp == IRQ_STACK_PTR()) {
-		struct pt_regs *irq_args;
-		unsigned long orig_sp = IRQ_STACK_TO_TASK_STACK(frame->sp);
-
-		if (object_is_on_stack((void *)orig_sp) &&
-		   object_is_on_stack((void *)frame->fp)) {
-			frame->sp = orig_sp;
-
-			/* orig_sp is the saved pt_regs, find the elr */
-			irq_args = (struct pt_regs *)orig_sp;
-			frame->pc = irq_args->pc;
-		} else {
-			/*
-			 * This frame has a non-standard format, and we
-			 * didn't fix it, because the data looked wrong.
-			 * Refuse to output this frame.
-			 */
-			return -EINVAL;
-		}
-	}
-
 	return 0;
 }
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 7ba80f2..caf2f79 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -143,7 +143,6 @@ static void dump_instr(const char *lvl, struct pt_regs *regs)
 void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 {
 	struct stackframe frame;
-	unsigned long irq_stack_ptr;
 	int skip;
 
 	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
@@ -154,15 +153,6 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 	if (!try_get_task_stack(tsk))
 		return;
 
-	/*
-	 * Switching between stacks is valid when tracing current and in
-	 * non-preemptible context.
-	 */
-	if (tsk == current && !preemptible())
-		irq_stack_ptr = IRQ_STACK_PTR();
-	else
-		irq_stack_ptr = 0;
-
 	if (tsk == current) {
 		frame.fp = (unsigned long)__builtin_frame_address(0);
 		frame.sp = current_stack_pointer;
@@ -182,13 +172,12 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 	skip = !!regs;
 	printk("Call trace:\n");
 	while (1) {
-		unsigned long where = frame.pc;
 		unsigned long stack;
 		int ret;
 
 		/* skip until specified stack frame */
 		if (!skip) {
-			dump_backtrace_entry(where);
+			dump_backtrace_entry(frame.pc);
 		} else if (frame.fp == regs->regs[29]) {
 			skip = 0;
 			/*
@@ -203,20 +192,13 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 		ret = unwind_frame(tsk, &frame);
 		if (ret < 0)
 			break;
-		stack = frame.sp;
-		if (in_exception_text(where)) {
-			/*
-			 * If we switched to the irq_stack before calling this
-			 * exception handler, then the pt_regs will be on the
-			 * task stack. The easiest way to tell is if the large
-			 * pt_regs would overlap with the end of the irq_stack.
-			 */
-			if (stack < irq_stack_ptr &&
-			    (stack + sizeof(struct pt_regs)) > irq_stack_ptr)
-				stack = IRQ_STACK_TO_TASK_STACK(irq_stack_ptr);
+		if (in_entry_text(frame.pc)) {
+			stack = frame.fp - offsetof(struct pt_regs, stackframe);
 
-			dump_mem("", "Exception stack", stack,
-				 stack + sizeof(struct pt_regs));
+			if (on_task_stack(tsk, stack) ||
+			    (tsk == current && !preemptible() && on_irq_stack(stack)))
+				dump_mem("", "Exception stack", stack,
+					 stack + sizeof(struct pt_regs));
 		}
 	}
 
-- 
1.9.1


* [PATCHv2 7/7] arm64: unwind: remove sp from struct stackframe
  2017-07-26 18:18 [PATCHv2 0/7] arm64: unwind: fix broken exception stack dump Mark Rutland
                   ` (5 preceding siblings ...)
  2017-07-26 18:18 ` [PATCHv2 6/7] arm64: unwind: reference pt_regs via embedded stack frame Mark Rutland
@ 2017-07-26 18:18 ` Mark Rutland
  6 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2017-07-26 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ard Biesheuvel <ard.biesheuvel@linaro.org>

The unwind code sets the sp member of struct stackframe to
'frame pointer + 0x10' unconditionally, without regard for whether
doing so produces a legal value. So let's simply remove it now that
we have stopped using it anyway.
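
For context, an illustrative note on where the 0x10 came from: an
AAPCS64 frame record is two 64-bit words, {fp, lr}, so the unwinder
guessed that the frame's stack pointer sat immediately above it:

	frame->fp = *(unsigned long *)(fp);		/* saved x29 */
	frame->pc = *(unsigned long *)(fp + 8);		/* saved x30 */
	frame->sp = fp + 0x10;				/* a guess, at best */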

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/stacktrace.h | 1 -
 arch/arm64/kernel/perf_callchain.c  | 1 -
 arch/arm64/kernel/process.c         | 5 +----
 arch/arm64/kernel/return_address.c  | 1 -
 arch/arm64/kernel/stacktrace.c      | 4 ----
 arch/arm64/kernel/time.c            | 1 -
 arch/arm64/kernel/traps.c           | 2 --
 7 files changed, 1 insertion(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 5b6eafc..3bebab3 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -20,7 +20,6 @@
 
 struct stackframe {
 	unsigned long fp;
-	unsigned long sp;
 	unsigned long pc;
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	unsigned int graph;
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 713ca82..bcafd7d 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -162,7 +162,6 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 	}
 
 	frame.fp = regs->regs[29];
-	frame.sp = regs->sp;
 	frame.pc = regs->pc;
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	frame.graph = current->curr_ret_stack;
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 659ae80..85b953d 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -382,15 +382,12 @@ unsigned long get_wchan(struct task_struct *p)
 		return 0;
 
 	frame.fp = thread_saved_fp(p);
-	frame.sp = thread_saved_sp(p);
 	frame.pc = thread_saved_pc(p);
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	frame.graph = p->curr_ret_stack;
 #endif
 	do {
-		if (frame.sp < stack_page ||
-		    frame.sp >= stack_page + THREAD_SIZE ||
-		    unwind_frame(p, &frame))
+		if (unwind_frame(p, &frame))
 			goto out;
 		if (!in_sched_functions(frame.pc)) {
 			ret = frame.pc;
diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c
index 12a87f2..933adbc 100644
--- a/arch/arm64/kernel/return_address.c
+++ b/arch/arm64/kernel/return_address.c
@@ -42,7 +42,6 @@ void *return_address(unsigned int level)
 	data.addr = NULL;
 
 	frame.fp = (unsigned long)__builtin_frame_address(0);
-	frame.sp = current_stack_pointer;
 	frame.pc = (unsigned long)return_address; /* dummy */
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	frame.graph = current->curr_ret_stack;
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index b079af1..54f3463 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -58,7 +58,6 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	    !on_task_stack(tsk, fp))
 		return -EINVAL;
 
-	frame->sp = fp + 0x10;
 	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
 	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
 
@@ -127,7 +126,6 @@ void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 	data.no_sched_functions = 0;
 
 	frame.fp = regs->regs[29];
-	frame.sp = regs->sp;
 	frame.pc = regs->pc;
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	frame.graph = current->curr_ret_stack;
@@ -152,12 +150,10 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 	if (tsk != current) {
 		data.no_sched_functions = 1;
 		frame.fp = thread_saved_fp(tsk);
-		frame.sp = thread_saved_sp(tsk);
 		frame.pc = thread_saved_pc(tsk);
 	} else {
 		data.no_sched_functions = 0;
 		frame.fp = (unsigned long)__builtin_frame_address(0);
-		frame.sp = current_stack_pointer;
 		frame.pc = (unsigned long)save_stack_trace_tsk;
 	}
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index da33c90..a439128 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -50,7 +50,6 @@ unsigned long profile_pc(struct pt_regs *regs)
 		return regs->pc;
 
 	frame.fp = regs->regs[29];
-	frame.sp = regs->sp;
 	frame.pc = regs->pc;
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	frame.graph = -1; /* no task info */
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index caf2f79..63cbfb0 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -155,14 +155,12 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 
 	if (tsk == current) {
 		frame.fp = (unsigned long)__builtin_frame_address(0);
-		frame.sp = current_stack_pointer;
 		frame.pc = (unsigned long)dump_backtrace;
 	} else {
 		/*
 		 * task blocked in __switch_to
 		 */
 		frame.fp = thread_saved_fp(tsk);
-		frame.sp = thread_saved_sp(tsk);
 		frame.pc = thread_saved_pc(tsk);
 	}
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-- 
1.9.1


* [PATCHv2 3/7] arm64: move non-entry code out of .entry.text
  2017-07-26 18:18 ` [PATCHv2 3/7] arm64: move non-entry code out of .entry.text Mark Rutland
@ 2017-07-26 21:38   ` Stephen Boyd
  2017-07-31 10:21     ` Mark Rutland
  0 siblings, 1 reply; 16+ messages in thread
From: Stephen Boyd @ 2017-07-26 21:38 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/26/2017 11:18 AM, Mark Rutland wrote:
> Currently, cpu_switch_to and ret_from_fork both live in .entry.text,
> though neither form the critical path for an exception entry.
>
> In subsequent patches, we will require that code in .entry.text is part
> of the critical path for exception entry, for which we can assume
> certain properties (e.g. the presence of exception regs on the stack).
>
> Neither cpu_switch_to nor ret_from_fork will meet these requirements, so
> we must most them out of .entry.text. To ensure that neither are kprobed

s/most/move/ ?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCHv2 3/7] arm64: move non-entry code out of .entry.text
  2017-07-26 21:38   ` Stephen Boyd
@ 2017-07-31 10:21     ` Mark Rutland
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2017-07-31 10:21 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 26, 2017 at 02:38:17PM -0700, Stephen Boyd wrote:
> On 07/26/2017 11:18 AM, Mark Rutland wrote:
> > Currently, cpu_switch_to and ret_from_fork both live in .entry.text,
> > though neither form the critical path for an exception entry.
> >
> > In subsequent patches, we will require that code in .entry.text is part
> > of the critical path for exception entry, for which we can assume
> > certain properties (e.g. the presence of exception regs on the stack).
> >
> > Neither cpu_switch_to nor ret_from_fork will meet these requirements, so
> > we must most them out of .entry.text. To ensure that neither are kprobed
> 
> s/most/move/ ?

Yes; whoops.

I've corrected that now, and pushed the corrected branch.

Thanks,
Mark.


* [PATCHv2 1/7] arm64: Add ASM_BUG()
  2017-07-26 18:18 ` [PATCHv2 1/7] arm64: Add ASM_BUG() Mark Rutland
@ 2017-08-08 15:31   ` Mark Rutland
  2017-08-08 15:58     ` Catalin Marinas
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2017-08-08 15:31 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 26, 2017 at 07:18:24PM +0100, Mark Rutland wrote:
> +#define __BUG_ENTRY(flags) 				\
> +		.pushsection __bug_table,"a";		\
> +		.align 2;				\
> +	0:	.long 1f - 0b;				\
> +_BUGVERBOSE_LOCATION(__FILE__, __LINE__)		\
> +		.short flags; 				\
> +		.popsection;				\
> +	1:
> +#else
> +#define __BUG_ENTRY(flags)
> +#endif
> +
> +#define ASM_BUG_FLAGS(flags)				\
> +	__BUG_ENTRY(0)					\
> +	brk	BUG_BRK_IMM

I accidentally dropped the flags here, which turns all WARN*s into BUGs.

I've fixed this up to pass the flags to __BUG_ENTRY(). I've pushed out
updated arm64/exception-stack and arm64/vmap-stack branches.

Thanks,
Mark.


* [PATCHv2 1/7] arm64: Add ASM_BUG()
  2017-08-08 15:31   ` Mark Rutland
@ 2017-08-08 15:58     ` Catalin Marinas
  2017-08-08 16:10       ` Mark Rutland
  0 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2017-08-08 15:58 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Aug 08, 2017 at 04:31:12PM +0100, Mark Rutland wrote:
> On Wed, Jul 26, 2017 at 07:18:24PM +0100, Mark Rutland wrote:
> > +#define __BUG_ENTRY(flags) 				\
> > +		.pushsection __bug_table,"a";		\
> > +		.align 2;				\
> > +	0:	.long 1f - 0b;				\
> > +_BUGVERBOSE_LOCATION(__FILE__, __LINE__)		\
> > +		.short flags; 				\
> > +		.popsection;				\
> > +	1:
> > +#else
> > +#define __BUG_ENTRY(flags)
> > +#endif
> > +
> > +#define ASM_BUG_FLAGS(flags)				\
> > +	__BUG_ENTRY(0)					\
> > +	brk	BUG_BRK_IMM
> 
> I accidentally dropped the flags here, which turns all WARN*s into BUGs.
> 
> I've fixed this up to pass the flags to __BUG_ENTRY(). I've pushed out
> updated arm64/exception-stack and arm64/vmap-stack branches.

I'll pull arm64/exception-stack into for-next/core (I haven't got to the
vmap-stack series yet).

Thanks.

-- 
Catalin


* [PATCHv2 1/7] arm64: Add ASM_BUG()
  2017-08-08 15:58     ` Catalin Marinas
@ 2017-08-08 16:10       ` Mark Rutland
  2017-08-09 10:07         ` Catalin Marinas
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2017-08-08 16:10 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Aug 08, 2017 at 04:58:53PM +0100, Catalin Marinas wrote:
> On Tue, Aug 08, 2017 at 04:31:12PM +0100, Mark Rutland wrote:
> > On Wed, Jul 26, 2017 at 07:18:24PM +0100, Mark Rutland wrote:
> > > +#define __BUG_ENTRY(flags) 				\
> > > +		.pushsection __bug_table,"a";		\
> > > +		.align 2;				\
> > > +	0:	.long 1f - 0b;				\
> > > +_BUGVERBOSE_LOCATION(__FILE__, __LINE__)		\
> > > +		.short flags; 				\
> > > +		.popsection;				\
> > > +	1:
> > > +#else
> > > +#define __BUG_ENTRY(flags)
> > > +#endif
> > > +
> > > +#define ASM_BUG_FLAGS(flags)				\
> > > +	__BUG_ENTRY(0)					\
> > > +	brk	BUG_BRK_IMM
> > 
> > I accidentally dropped the flags here, which turns all WARN*s into BUGs.
> > 
> > I've fixed this up to pass the flags to __BUG_ENTRY(). I've pushed out
> > updated arm64/exception-stack and arm64/vmap-stack branches.
> 
> I'll pull arm64/exception-stack into for-next/core (I haven't got to the
> vmap-stack series yet).

If you could hold off for a day, I'd like to make one final change and prevent
use of the final record's LR value, where FP is NULL, since that LR isn't
meaningful, and makes the backtrace look weird:

[ 2785.650646] [<ffff000008082cb0>] el0_svc_naked+0x24/0x28
[ 2785.656016] [<0000ffffaf717554>] 0xffffaf717554

Otherwise, I can do that as a fixup.

Thanks,
Mark.


* [PATCHv2 1/7] arm64: Add ASM_BUG()
  2017-08-08 16:10       ` Mark Rutland
@ 2017-08-09 10:07         ` Catalin Marinas
  2017-08-09 13:21           ` Mark Rutland
  0 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2017-08-09 10:07 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Aug 08, 2017 at 05:10:51PM +0100, Mark Rutland wrote:
> On Tue, Aug 08, 2017 at 04:58:53PM +0100, Catalin Marinas wrote:
> > On Tue, Aug 08, 2017 at 04:31:12PM +0100, Mark Rutland wrote:
> > > On Wed, Jul 26, 2017 at 07:18:24PM +0100, Mark Rutland wrote:
> > > > +#define __BUG_ENTRY(flags) 				\
> > > > +		.pushsection __bug_table,"a";		\
> > > > +		.align 2;				\
> > > > +	0:	.long 1f - 0b;				\
> > > > +_BUGVERBOSE_LOCATION(__FILE__, __LINE__)		\
> > > > +		.short flags; 				\
> > > > +		.popsection;				\
> > > > +	1:
> > > > +#else
> > > > +#define __BUG_ENTRY(flags)
> > > > +#endif
> > > > +
> > > > +#define ASM_BUG_FLAGS(flags)				\
> > > > +	__BUG_ENTRY(0)					\
> > > > +	brk	BUG_BRK_IMM
> > > 
> > > I accidentally dropped the flags here, which turns all WARN*s into BUGs.
> > > 
> > > I've fixed this up to pass the flags to __BUG_ENTRY(). I've pushed out
> > > updated arm64/exception-stack and arm64/vmap-stack branches.
> > 
> > I'll pull arm64/exception-stack into for-next/core (I haven't got to the
> > vmap-stack series yet).
> 
> If you could hold off for a day, I'd like to make one final change and prevent
> use of the final record's LR value, where FP is NULL, since that LR isn't
> meaningful, and makes the backtrace look weird:
> 
> [ 2785.650646] [<ffff000008082cb0>] el0_svc_naked+0x24/0x28
> [ 2785.656016] [<0000ffffaf717554>] 0xffffaf717554
> 
> Otherwise, I can do that as a fixup.

I'll hold off, I haven't pushed the for-next/core branch out yet.

-- 
Catalin


* [PATCHv2 1/7] arm64: Add ASM_BUG()
  2017-08-09 10:07         ` Catalin Marinas
@ 2017-08-09 13:21           ` Mark Rutland
  2017-08-09 14:32             ` Catalin Marinas
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2017-08-09 13:21 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Aug 09, 2017 at 11:07:35AM +0100, Catalin Marinas wrote:
> On Tue, Aug 08, 2017 at 05:10:51PM +0100, Mark Rutland wrote:
> > On Tue, Aug 08, 2017 at 04:58:53PM +0100, Catalin Marinas wrote:
> > > I'll pull arm64/exception-stack into for-next/core (I haven't got to the
> > > vmap-stack series yet).
> > 
> > If you could hold off for a day, I'd like to make one final change and prevent
> > use of the final record's LR value, where FP is NULL, since that LR isn't
> > meaningful, and makes the backtrace look weird:
> > 
> > [ 2785.650646] [<ffff000008082cb0>] el0_svc_naked+0x24/0x28
> > [ 2785.656016] [<0000ffffaf717554>] 0xffffaf717554
> > 
> > Otherwise, I can do that as a fixup.
> 
> I'll hold off, I haven't pushed the for-next/core branch out yet.

I've pushed out an updated arm64/exception-stack branch. The HEAD should
be:

  31e43ad3b74a5d7b ("arm64: unwind: remove sp from struct stackframe")

That should have the ASM_BUG() fix, and the below diff folded into the
pt_regs patch, to ensure that backtraces don't use user-controlled PCs
or idmap aliases of startup code.

If you'd like, I can send the updated series as a v3.

Thanks,
Mark.

---->8----
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 4ddb8d7..612a077 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -116,7 +116,11 @@
 	 * time the exception was taken (in case we attempt to walk the call
 	 * stack later), chain it together with the stack frames.
 	 */
+	.if \el == 0
+	stp	xzr, xzr, [sp, #S_STACKFRAME]
+	.else
 	stp	x29, x22, [sp, #S_STACKFRAME]
+	.endif
 	add	x29, sp, #S_STACKFRAME
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 973df7d..f9e4aac 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -362,6 +362,9 @@ __primary_switched:
 	ret					// to __primary_switch()
 0:
 #endif
+	add	sp, sp, #16
+	mov	x29, #0
+	mov	x30, #0
 	b	start_kernel
 ENDPROC(__primary_switched)
 
@@ -617,6 +620,7 @@ __secondary_switched:
 	ldr	x2, [x0, #CPU_BOOT_TASK]
 	msr	sp_el0, x2
 	mov	x29, #0
+	mov	x30, #0
 	b	secondary_start_kernel
 ENDPROC(__secondary_switched)
 
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 54f3463..35588ca 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -74,6 +74,15 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
+	/*
+	 * Frames created upon entry from EL0 have NULL FP and PC values, so
+	 * don't bother reporting these. Frames created by __noreturn functions
+	 * might have a valid FP even if PC is bogus, so only terminate where
+	 * both are NULL.
+	 */
+	if (!frame->fp && !frame->pc)
+		return -EINVAL;
+
 	return 0;
 }
 
-- 
1.9.1


* [PATCHv2 1/7] arm64: Add ASM_BUG()
  2017-08-09 13:21           ` Mark Rutland
@ 2017-08-09 14:32             ` Catalin Marinas
  0 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2017-08-09 14:32 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Aug 09, 2017 at 02:21:31PM +0100, Mark Rutland wrote:
> On Wed, Aug 09, 2017 at 11:07:35AM +0100, Catalin Marinas wrote:
> > On Tue, Aug 08, 2017 at 05:10:51PM +0100, Mark Rutland wrote:
> > > On Tue, Aug 08, 2017 at 04:58:53PM +0100, Catalin Marinas wrote:
> > > > I'll pull arm64/exception-stack into for-next/core (I haven't got to the
> > > > vmap-stack series yet).
> > > 
> > > If you could hold off for a day, I'd like to make one final change and prevent
> > > use of the final record's LR value, where FP is NULL, since that LR isn't
> > > meaningful, and makes the backtrace look weird:
> > > 
> > > [ 2785.650646] [<ffff000008082cb0>] el0_svc_naked+0x24/0x28
> > > [ 2785.656016] [<0000ffffaf717554>] 0xffffaf717554
> > > 
> > > Otherwise, I can do that as a fixup.
> > 
> > I'll hold off, I haven't pushed the for-next/core branch out yet.
> 
> I've pushed out an updated arm64/exception-stack branch. The HEAD should
> be:
> 
>   31e43ad3b74a5d7b ("arm64: unwind: remove sp from struct stackframe")
> 
> That should have tvhe ASM_BUG() fix, and the below diff folded into the
> pt_regs patch, to ensure that backtraces don't use user-controlled PCs
> or idmap aliases of startup code.
> 
> If you'd like, I can send the updated series as a v3.

No need to, I'll just pull the branch.

Thanks.

-- 
Catalin
