linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] mips: fix various issues around backtrace
@ 2017-08-10 18:27 minyard
  2017-08-10 18:27 ` [PATCH 1/4] mips: Fix issues in backtraces minyard
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: minyard @ 2017-08-10 18:27 UTC (permalink / raw)
  To: linux-mips, ralf, linux-kernel

These patches fix problems I ran into while working on backtraces,
both inside Linux and when processing backtraces with gdb on a
kernel dump.

-corey


* [PATCH 1/4] mips: Fix issues in backtraces
  2017-08-10 18:27 [PATCH 0/4] mips: fix various issues around backtrace minyard
@ 2017-08-10 18:27 ` minyard
  2017-08-10 18:27 ` [PATCH 2/4] mips: Make SAVE_SOME more standard minyard
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: minyard @ 2017-08-10 18:27 UTC (permalink / raw)
  To: linux-mips, ralf, linux-kernel; +Cc: Corey Minyard

From: Corey Minyard <cminyard@mvista.com>

I saw two problems when doing backtraces:

The compiler was putting a "fast return" at the top of some
functions, before it set up the frame.  The backtrace code
would stop as soon as it saw a jump instruction, so it would
never get to the stack frame setup and would misinterpret the
frame.  To fix this, don't look for jump instructions until
the frame setup has been seen.

The assembly code here is:

ffffffff80b885a0 <serial8250_handle_irq>:
ffffffff80b885a0:       c8a00003        bbit0   a1,0x0,ffffffff80b885b0 <serial8250_handle_irq+0x10>
ffffffff80b885a4:       0000102d        move    v0,zero
ffffffff80b885a8:       03e00008        jr      ra
ffffffff80b885ac:       00000000        nop
ffffffff80b885b0:       67bdffd0        daddiu  sp,sp,-48
ffffffff80b885b4:       ffb00008        sd      s0,8(sp)

The second problem was that the compiler was putting the last
instruction of the frame save in the delay slot of a jump
instruction.  If the RA save landed there, the backtrace code
would miss it and misinterpret the frame.  To fix this, make
sure to also process the instruction in the delay slot of the
first jump seen.

The assembly code for this is:

ffffffff80806fd0 <plat_irq_dispatch>:
ffffffff80806fd0:       67bdffd0        daddiu  sp,sp,-48
ffffffff80806fd4:       ffb30020        sd      s3,32(sp)
ffffffff80806fd8:       24130018        li      s3,24
ffffffff80806fdc:       ffb20018        sd      s2,24(sp)
ffffffff80806fe0:       3c12811c        lui     s2,0x811c
ffffffff80806fe4:       ffb10010        sd      s1,16(sp)
ffffffff80806fe8:       3c11811c        lui     s1,0x811c
ffffffff80806fec:       ffb00008        sd      s0,8(sp)
ffffffff80806ff0:       3c10811c        lui     s0,0x811c
ffffffff80806ff4:       08201c03        j       ffffffff8080700c <plat_irq_dispatch+0x3c>
ffffffff80806ff8:       ffbf0028        sd      ra,40(sp)
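
Taken together, the two fixes make the scanning loop in
get_frame_info() behave roughly like the simplified pseudo-C
sketch below (an illustration only; the real change is in the
diff that follows):

	for (i = 0; i < max_insns; i++, ip++) {
		if (!frame_size) {
			/* Ignore jumps here: a fast return may come
			 * before the frame setup (see
			 * serial8250_handle_irq above). */
			if (is_sp_move_ins(&insn))
				frame_size = ...;
			continue;
		} else if (!saw_jump && is_jump_ins(ip)) {
			/* The frame save is done, but the RA save may
			 * sit in the jump's delay slot, so look at one
			 * more instruction. */
			saw_jump = true;
			continue;
		}
		if (is_ra_save_ins(&insn, &pc_offset))
			break;		/* found where RA is stored */
		if (saw_jump)
			break;		/* delay slot checked, give up */
	}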

Signed-off-by: Corey Minyard <cminyard@mvista.com>
---
 arch/mips/kernel/process.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index 5351e1f..a1d930a 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -349,6 +349,7 @@ static int get_frame_info(struct mips_frame_info *info)
 	union mips_instruction insn, *ip, *ip_end;
 	const unsigned int max_insns = 128;
 	unsigned int i;
+	bool saw_jump = false;
 
 	info->pc_offset = -1;
 	info->frame_size = 0;
@@ -370,9 +371,6 @@ static int get_frame_info(struct mips_frame_info *info)
 			insn.word = ip->word;
 		}
 
-		if (is_jump_ins(&insn))
-			break;
-
 		if (!info->frame_size) {
 			if (is_sp_move_ins(&insn))
 			{
@@ -396,10 +394,28 @@ static int get_frame_info(struct mips_frame_info *info)
 				info->frame_size = - ip->i_format.simmediate;
 			}
 			continue;
+		} else if (!saw_jump && is_jump_ins(ip)) {
+			/*
+			 * If we see a jump instruction, we are finished
+			 * with the frame save.
+			 *
+			 * Some functions can have a shortcut return at
+			 * the beginning of the function, so don't start
+			 * looking for jump instruction until we see the
+			 * frame setup.
+			 *
+			 * The RA save instruction can get put into the
+			 * delay slot of the jump instruction, so look
+			 * at the next instruction, too.
+			 */
+			saw_jump = true;
+			continue;
 		}
 		if (info->pc_offset == -1 &&
 		    is_ra_save_ins(&insn, &info->pc_offset))
 			break;
+		if (saw_jump)
+			break;
 	}
 	if (info->frame_size && info->pc_offset >= 0) /* nested */
 		return 0;
-- 
2.7.4


* [PATCH 2/4] mips: Make SAVE_SOME more standard
  2017-08-10 18:27 [PATCH 0/4] mips: fix various issues around backtrace minyard
  2017-08-10 18:27 ` [PATCH 1/4] mips: Fix issues in backtraces minyard
@ 2017-08-10 18:27 ` minyard
  2017-08-10 18:27 ` [PATCH 3/4] mips: Add DWARF unwinding to assembly minyard
  2017-08-10 18:27 ` [PATCH 4/4] mips: Save all registers when saving the frame minyard
  3 siblings, 0 replies; 5+ messages in thread
From: minyard @ 2017-08-10 18:27 UTC (permalink / raw)
  To: linux-mips, ralf, linux-kernel; +Cc: Corey Minyard

From: Corey Minyard <cminyard@mvista.com>

Modify the SAVE_SOME macro to look more like a standard
function: do the arithmetic for the frame on the SP register
instead of copying it from K1, and save the EPC value through
the RA register.  This lets get_frame_info() process this code
like any other function.  It also removes an instruction or two
from the kernel entry path, making it slightly more efficient.

unwind_stack_by_address() has special handling for the top of
the interrupt stack, but without this change unwinding still
fails if an interrupt arrives while another interrupt is being
handled and a traceback is attempted from the second interrupt.

This change modifies the get_saved_sp macro so it can optionally
load the fetched kernel stack pointer directly into SP and stash
the old SP value in K0.  Then it is just a matter of subtracting
the frame size from SP and storing the old SP from K0.

This required changing the DADDI workaround a bit: since K0 now
holds the old SP, K1 has to be used for AT.  But it eliminates
some of the special handling for the DADDI workaround.

Saving the RA register was moved up to before fetching the
CP0_EPC register, so CP0_EPC can be read into RA and then saved.
This lets the traceback code know where RA is actually stored.
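
Roughly, the intended user-mode entry flow with the new tosp form
of get_saved_sp looks like the sketch below (an illustration with
the DADDI workaround and other details omitted, not the literal
macro expansion):

	get_saved_sp tosp=1		# k0 <- old SP, sp <- kernelsp[cpu]
8:
	PTR_SUBU sp, PT_SIZE		# open the frame on SP, like any function
	LONG_S	k0, PT_R29(sp)		# save the old SP from k0
	...
	LONG_S	ra, PT_R31(sp)		# save RA first ...
	MFC0	ra, CP0_EPC		# ... then read EPC through RA
	...
	LONG_S	ra, PT_EPC(sp)		# and save it where the unwinder looks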

Signed-off-by: Corey Minyard <cminyard@mvista.com>
---
 arch/mips/include/asm/stackframe.h | 51 +++++++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 14 deletions(-)

diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
index eaa5a4d..d2fb919 100644
--- a/arch/mips/include/asm/stackframe.h
+++ b/arch/mips/include/asm/stackframe.h
@@ -83,8 +83,16 @@
 		LONG_S	$30, PT_R30(sp)
 		.endm
 
+/*
+ * get_saved_sp returns the SP for the current CPU by looking in the
+ * kernelsp array for it.  If tosp is set, it stores the current sp in
+ * k0 and loads the new value in sp.  If not, it clobbers k0 and
+ * stores the new value in k1, leaving sp unaffected.
+ */
 #ifdef CONFIG_SMP
-		.macro	get_saved_sp	/* SMP variation */
+
+		/* SMP variation */
+		.macro	get_saved_sp docfi=0 tosp=0
 		ASM_CPUID_MFC0	k0, ASM_SMP_CPUID_REG
 #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
 		lui	k1, %hi(kernelsp)
@@ -97,7 +105,15 @@
 #endif
 		LONG_SRL	k0, SMP_CPUID_PTRSHIFT
 		LONG_ADDU	k1, k0
+		.if \tosp
+		move	k0, sp
+		.if \docfi
+		.cfi_register sp, k0
+		.endif
+		LONG_L	sp, %lo(kernelsp)(k1)
+		.else
 		LONG_L	k1, %lo(kernelsp)(k1)
+		.endif
 		.endm
 
 		.macro	set_saved_sp stackp temp temp2
@@ -106,7 +122,8 @@
 		LONG_S	\stackp, kernelsp(\temp)
 		.endm
 #else /* !CONFIG_SMP */
-		.macro	get_saved_sp	/* Uniprocessor variation */
+		/* Uniprocessor variation */
+		.macro	get_saved_sp docfi=0 tosp=0
 #ifdef CONFIG_CPU_JUMP_WORKAROUNDS
 		/*
 		 * Clear BTB (branch target buffer), forbid RAS (return address
@@ -135,7 +152,15 @@
 		daddiu	k1, %hi(kernelsp)
 		dsll	k1, k1, 16
 #endif
+		.if \tosp
+		move	k0, sp
+		.if \docfi
+		.cfi_register sp, k0
+		.endif
+		LONG_L	sp, %lo(kernelsp)(k1)
+		.else
 		LONG_L	k1, %lo(kernelsp)(k1)
+		.endif
 		.endm
 
 		.macro	set_saved_sp stackp temp temp2
@@ -151,7 +176,6 @@
 		sll	k0, 3		/* extract cu0 bit */
 		.set	noreorder
 		bltz	k0, 8f
-		 move	k1, sp
 #ifdef CONFIG_EVA
 		/*
 		 * Flush interAptiv's Return Prediction Stack (RPS) by writing
@@ -178,17 +202,16 @@
 		MTC0	k0, CP0_ENTRYHI
 #endif
 		.set	reorder
+		 move	k0, sp
 		/* Called from user mode, new stack. */
 		get_saved_sp
-#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
-8:		move	k0, sp
-		PTR_SUBU sp, k1, PT_SIZE
-#else
-		.set	at=k0
-8:		PTR_SUBU k1, PT_SIZE
+8:
+#ifdef CONFIG_CPU_DADDI_WORKAROUNDS
+		.set	at=k1
+#endif
+		PTR_SUBU sp, PT_SIZE
+#ifdef CONFIG_CPU_DADDI_WORKAROUNDS
 		.set	noat
-		move	k0, sp
-		move	sp, k1
 #endif
 		LONG_S	k0, PT_R29(sp)
 		LONG_S	$3, PT_R3(sp)
@@ -206,16 +229,16 @@
 		LONG_S	$5, PT_R5(sp)
 		LONG_S	v1, PT_CAUSE(sp)
 		LONG_S	$6, PT_R6(sp)
-		MFC0	v1, CP0_EPC
+		LONG_S	ra, PT_R31(sp)
+		MFC0	ra, CP0_EPC
 		LONG_S	$7, PT_R7(sp)
 #ifdef CONFIG_64BIT
 		LONG_S	$8, PT_R8(sp)
 		LONG_S	$9, PT_R9(sp)
 #endif
-		LONG_S	v1, PT_EPC(sp)
+		LONG_S	ra, PT_EPC(sp)
 		LONG_S	$25, PT_R25(sp)
 		LONG_S	$28, PT_R28(sp)
-		LONG_S	$31, PT_R31(sp)
 
 		/* Set thread_info if we're coming from user mode */
 		mfc0	k0, CP0_STATUS
-- 
2.7.4


* [PATCH 3/4] mips: Add DWARF unwinding to assembly
  2017-08-10 18:27 [PATCH 0/4] mips: fix various issues around backtrace minyard
  2017-08-10 18:27 ` [PATCH 1/4] mips: Fix issues in backtraces minyard
  2017-08-10 18:27 ` [PATCH 2/4] mips: Make SAVE_SOME more standard minyard
@ 2017-08-10 18:27 ` minyard
  2017-08-10 18:27 ` [PATCH 4/4] mips: Save all registers when saving the frame minyard
  3 siblings, 0 replies; 5+ messages in thread
From: minyard @ 2017-08-10 18:27 UTC (permalink / raw)
  To: linux-mips, ralf, linux-kernel; +Cc: Corey Minyard

From: Corey Minyard <cminyard@mvista.com>

This will allow kdump dumps to work correctly on MIPS and will let
future DWARF unwinding of the stack give accurate tracebacks.
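
For example, the cfi_st helper introduced here pairs each register
store with the matching CFI annotation; with docfi=1 a line such as

	cfi_st	$16, PT_R16, 1

expands, more or less, to

	LONG_S	$16, PT_R16(sp)
	.cfi_rel_offset $16, PT_R16

so a DWARF unwinder knows at which offset $16 was saved.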

Signed-off-by: Corey Minyard <cminyard@mvista.com>
---
 arch/mips/Makefile                 |   4 +
 arch/mips/include/asm/asm.h        |   3 +
 arch/mips/include/asm/stackframe.h | 231 +++++++++++++++++++++----------------
 arch/mips/kernel/genex.S           |  13 ++-
 arch/mips/mm/tlbex-fault.S         |   7 +-
 arch/mips/vdso/sigreturn.S         |  10 --
 6 files changed, 151 insertions(+), 117 deletions(-)

diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index 0434362..fb0b6dc 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -286,6 +286,10 @@ ifdef CONFIG_64BIT
 bootvars-y	+= ADDR_BITS=64
 endif
 
+# This is required to get dwarf unwinding tables into .debug_frame
+# instead of .eh_frame so we don't discard them.
+KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
+
 LDFLAGS			+= -m $(ld-emul)
 
 ifdef CONFIG_MIPS
diff --git a/arch/mips/include/asm/asm.h b/arch/mips/include/asm/asm.h
index 859cf70..81fae23 100644
--- a/arch/mips/include/asm/asm.h
+++ b/arch/mips/include/asm/asm.h
@@ -55,6 +55,7 @@
 		.type	symbol, @function;		\
 		.ent	symbol, 0;			\
 symbol:		.frame	sp, 0, ra;			\
+		.cfi_startproc;				\
 		.insn
 
 /*
@@ -66,12 +67,14 @@ symbol:		.frame	sp, 0, ra;			\
 		.type	symbol, @function;		\
 		.ent	symbol, 0;			\
 symbol:		.frame	sp, framesize, rpc;		\
+		.cfi_startproc;				\
 		.insn
 
 /*
  * END - mark end of function
  */
 #define END(function)					\
+		.cfi_endproc;				\
 		.end	function;			\
 		.size	function, .-function
 
diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
index d2fb919..5d3563c 100644
--- a/arch/mips/include/asm/stackframe.h
+++ b/arch/mips/include/asm/stackframe.h
@@ -19,20 +19,43 @@
 #include <asm/asm-offsets.h>
 #include <asm/thread_info.h>
 
+/* Make the addition of cfi info a little easier. */
+	.macro cfi_rel_offset reg offset=0 docfi=0
+	.if \docfi
+	.cfi_rel_offset \reg, \offset
+	.endif
+	.endm
+
+	.macro cfi_st reg offset=0 docfi=0
+	LONG_S	\reg, \offset(sp)
+	cfi_rel_offset \reg, \offset, \docfi
+	.endm
+
+	.macro cfi_restore reg offset=0 docfi=0
+	.if \docfi
+	.cfi_restore \reg
+	.endif
+	.endm
+
+	.macro cfi_ld reg offset=0 docfi=0
+	LONG_L	\reg, \offset(sp)
+	cfi_restore \reg \offset \docfi
+	.endm
+
 #if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 #define STATMASK 0x3f
 #else
 #define STATMASK 0x1f
 #endif
 
-		.macro	SAVE_AT
+		.macro	SAVE_AT docfi=0
 		.set	push
 		.set	noat
-		LONG_S	$1, PT_R1(sp)
+		cfi_st	$1, PT_R1, \docfi
 		.set	pop
 		.endm
 
-		.macro	SAVE_TEMP
+		.macro	SAVE_TEMP docfi=0
 #ifdef CONFIG_CPU_HAS_SMARTMIPS
 		mflhxu	v1
 		LONG_S	v1, PT_LO(sp)
@@ -44,20 +67,20 @@
 		mfhi	v1
 #endif
 #ifdef CONFIG_32BIT
-		LONG_S	$8, PT_R8(sp)
-		LONG_S	$9, PT_R9(sp)
+		cfi_st	$8, PT_R8, \docfi
+		cfi_st	$9, PT_R9, \docfi
 #endif
-		LONG_S	$10, PT_R10(sp)
-		LONG_S	$11, PT_R11(sp)
-		LONG_S	$12, PT_R12(sp)
+		cfi_st	$10, PT_R10, \docfi
+		cfi_st	$11, PT_R11, \docfi
+		cfi_st	$12, PT_R12, \docfi
 #if !defined(CONFIG_CPU_HAS_SMARTMIPS) && !defined(CONFIG_CPU_MIPSR6)
 		LONG_S	v1, PT_HI(sp)
 		mflo	v1
 #endif
-		LONG_S	$13, PT_R13(sp)
-		LONG_S	$14, PT_R14(sp)
-		LONG_S	$15, PT_R15(sp)
-		LONG_S	$24, PT_R24(sp)
+		cfi_st	$13, PT_R13, \docfi
+		cfi_st	$14, PT_R14, \docfi
+		cfi_st	$15, PT_R15, \docfi
+		cfi_st	$24, PT_R24, \docfi
 #if !defined(CONFIG_CPU_HAS_SMARTMIPS) && !defined(CONFIG_CPU_MIPSR6)
 		LONG_S	v1, PT_LO(sp)
 #endif
@@ -71,16 +94,16 @@
 #endif
 		.endm
 
-		.macro	SAVE_STATIC
-		LONG_S	$16, PT_R16(sp)
-		LONG_S	$17, PT_R17(sp)
-		LONG_S	$18, PT_R18(sp)
-		LONG_S	$19, PT_R19(sp)
-		LONG_S	$20, PT_R20(sp)
-		LONG_S	$21, PT_R21(sp)
-		LONG_S	$22, PT_R22(sp)
-		LONG_S	$23, PT_R23(sp)
-		LONG_S	$30, PT_R30(sp)
+		.macro	SAVE_STATIC docfi=0
+		cfi_st	$16, PT_R16, \docfi
+		cfi_st	$17, PT_R17, \docfi
+		cfi_st	$18, PT_R18, \docfi
+		cfi_st	$19, PT_R19, \docfi
+		cfi_st	$20, PT_R20, \docfi
+		cfi_st	$21, PT_R21, \docfi
+		cfi_st	$22, PT_R22, \docfi
+		cfi_st	$23, PT_R23, \docfi
+		cfi_st	$30, PT_R30, \docfi
 		.endm
 
 /*
@@ -168,7 +191,7 @@
 		.endm
 #endif
 
-		.macro	SAVE_SOME
+		.macro	SAVE_SOME docfi=0
 		.set	push
 		.set	noat
 		.set	reorder
@@ -203,8 +226,11 @@
 #endif
 		.set	reorder
 		 move	k0, sp
+		.if \docfi
+		.cfi_register sp, k0
+		.endif
 		/* Called from user mode, new stack. */
-		get_saved_sp
+		get_saved_sp docfi=\docfi tosp=1
 8:
 #ifdef CONFIG_CPU_DADDI_WORKAROUNDS
 		.set	at=k1
@@ -213,8 +239,12 @@
 #ifdef CONFIG_CPU_DADDI_WORKAROUNDS
 		.set	noat
 #endif
-		LONG_S	k0, PT_R29(sp)
-		LONG_S	$3, PT_R3(sp)
+		.if \docfi
+		.cfi_def_cfa sp,0
+		.endif
+		cfi_st	k0, PT_R29, \docfi
+		cfi_rel_offset  sp, PT_R29, \docfi
+		cfi_st	v1, PT_R3, \docfi
 		/*
 		 * You might think that you don't need to save $0,
 		 * but the FPU emulator and gdb remote debug stub
@@ -222,23 +252,26 @@
 		 */
 		LONG_S	$0, PT_R0(sp)
 		mfc0	v1, CP0_STATUS
-		LONG_S	$2, PT_R2(sp)
+		cfi_st	v0, PT_R2, \docfi
 		LONG_S	v1, PT_STATUS(sp)
-		LONG_S	$4, PT_R4(sp)
+		cfi_st	$4, PT_R4, \docfi
 		mfc0	v1, CP0_CAUSE
-		LONG_S	$5, PT_R5(sp)
+		cfi_st	$5, PT_R5, \docfi
 		LONG_S	v1, PT_CAUSE(sp)
-		LONG_S	$6, PT_R6(sp)
-		LONG_S	ra, PT_R31(sp)
+		cfi_st	$6, PT_R6, \docfi
+		cfi_st	ra, PT_R31, \docfi
 		MFC0	ra, CP0_EPC
-		LONG_S	$7, PT_R7(sp)
+		cfi_st	$7, PT_R7, \docfi
 #ifdef CONFIG_64BIT
-		LONG_S	$8, PT_R8(sp)
-		LONG_S	$9, PT_R9(sp)
+		cfi_st	$8, PT_R8, \docfi
+		cfi_st	$9, PT_R9, \docfi
 #endif
 		LONG_S	ra, PT_EPC(sp)
-		LONG_S	$25, PT_R25(sp)
-		LONG_S	$28, PT_R28(sp)
+		.if \docfi
+		.cfi_rel_offset ra, PT_EPC
+		.endif
+		cfi_st	$25, PT_R25, \docfi
+		cfi_st	$28, PT_R28, \docfi
 
 		/* Set thread_info if we're coming from user mode */
 		mfc0	k0, CP0_STATUS
@@ -255,21 +288,21 @@
 		.set	pop
 		.endm
 
-		.macro	SAVE_ALL
-		SAVE_SOME
-		SAVE_AT
-		SAVE_TEMP
-		SAVE_STATIC
+		.macro	SAVE_ALL docfi=0
+		SAVE_SOME \docfi
+		SAVE_AT \docfi
+		SAVE_TEMP \docfi
+		SAVE_STATIC \docfi
 		.endm
 
-		.macro	RESTORE_AT
+		.macro	RESTORE_AT docfi=0
 		.set	push
 		.set	noat
-		LONG_L	$1,  PT_R1(sp)
+		cfi_ld	$1, PT_R1, \docfi
 		.set	pop
 		.endm
 
-		.macro	RESTORE_TEMP
+		.macro	RESTORE_TEMP docfi=0
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
 		/* Restore the Octeon multiplier state */
 		jal	octeon_mult_restore
@@ -288,33 +321,37 @@
 		mthi	$24
 #endif
 #ifdef CONFIG_32BIT
-		LONG_L	$8, PT_R8(sp)
-		LONG_L	$9, PT_R9(sp)
+		cfi_ld	$8, PT_R8, \docfi
+		cfi_ld	$9, PT_R9, \docfi
 #endif
-		LONG_L	$10, PT_R10(sp)
-		LONG_L	$11, PT_R11(sp)
-		LONG_L	$12, PT_R12(sp)
-		LONG_L	$13, PT_R13(sp)
-		LONG_L	$14, PT_R14(sp)
-		LONG_L	$15, PT_R15(sp)
-		LONG_L	$24, PT_R24(sp)
+		cfi_ld	$10, PT_R10, \docfi
+		cfi_ld	$11, PT_R11, \docfi
+		cfi_ld	$12, PT_R12, \docfi
+		cfi_ld	$13, PT_R13, \docfi
+		cfi_ld	$14, PT_R14, \docfi
+		cfi_ld	$15, PT_R15, \docfi
+		cfi_ld	$24, PT_R24, \docfi
 		.endm
 
-		.macro	RESTORE_STATIC
-		LONG_L	$16, PT_R16(sp)
-		LONG_L	$17, PT_R17(sp)
-		LONG_L	$18, PT_R18(sp)
-		LONG_L	$19, PT_R19(sp)
-		LONG_L	$20, PT_R20(sp)
-		LONG_L	$21, PT_R21(sp)
-		LONG_L	$22, PT_R22(sp)
-		LONG_L	$23, PT_R23(sp)
-		LONG_L	$30, PT_R30(sp)
+		.macro	RESTORE_STATIC docfi=0
+		cfi_ld	$16, PT_R16, \docfi
+		cfi_ld	$17, PT_R17, \docfi
+		cfi_ld	$18, PT_R18, \docfi
+		cfi_ld	$19, PT_R19, \docfi
+		cfi_ld	$20, PT_R20, \docfi
+		cfi_ld	$21, PT_R21, \docfi
+		cfi_ld	$22, PT_R22, \docfi
+		cfi_ld	$23, PT_R23, \docfi
+		cfi_ld	$30, PT_R30, \docfi
+		.endm
+
+		.macro	RESTORE_SP docfi=0
+		cfi_ld	sp, PT_R29, \docfi
 		.endm
 
 #if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 
-		.macro	RESTORE_SOME
+		.macro	RESTORE_SOME docfi=0
 		.set	push
 		.set	reorder
 		.set	noat
@@ -329,30 +366,30 @@
 		and	v0, v1
 		or	v0, a0
 		mtc0	v0, CP0_STATUS
-		LONG_L	$31, PT_R31(sp)
-		LONG_L	$28, PT_R28(sp)
-		LONG_L	$25, PT_R25(sp)
-		LONG_L	$7,  PT_R7(sp)
-		LONG_L	$6,  PT_R6(sp)
-		LONG_L	$5,  PT_R5(sp)
-		LONG_L	$4,  PT_R4(sp)
-		LONG_L	$3,  PT_R3(sp)
-		LONG_L	$2,  PT_R2(sp)
+		cfi_ld	$31, PT_R31, \docfi
+		cfi_ld	$28, PT_R28, \docfi
+		cfi_ld	$25, PT_R25, \docfi
+		cfi_ld	$7,  PT_R7, \docfi
+		cfi_ld	$6,  PT_R6, \docfi
+		cfi_ld	$5,  PT_R5, \docfi
+		cfi_ld	$4,  PT_R4, \docfi
+		cfi_ld	$3,  PT_R3, \docfi
+		cfi_ld	$2,  PT_R2, \docfi
 		.set	pop
 		.endm
 
-		.macro	RESTORE_SP_AND_RET
+		.macro	RESTORE_SP_AND_RET docfi=0
 		.set	push
 		.set	noreorder
 		LONG_L	k0, PT_EPC(sp)
-		LONG_L	sp, PT_R29(sp)
+		RESTORE_SP \docfi
 		jr	k0
 		 rfe
 		.set	pop
 		.endm
 
 #else
-		.macro	RESTORE_SOME
+		.macro	RESTORE_SOME docfi=0
 		.set	push
 		.set	reorder
 		.set	noat
@@ -369,24 +406,24 @@
 		mtc0	v0, CP0_STATUS
 		LONG_L	v1, PT_EPC(sp)
 		MTC0	v1, CP0_EPC
-		LONG_L	$31, PT_R31(sp)
-		LONG_L	$28, PT_R28(sp)
-		LONG_L	$25, PT_R25(sp)
+		cfi_ld	$31, PT_R31, \docfi
+		cfi_ld	$28, PT_R28, \docfi
+		cfi_ld	$25, PT_R25, \docfi
 #ifdef CONFIG_64BIT
-		LONG_L	$8, PT_R8(sp)
-		LONG_L	$9, PT_R9(sp)
+		cfi_ld	$8, PT_R8, \docfi
+		cfi_ld	$9, PT_R9, \docfi
 #endif
-		LONG_L	$7,  PT_R7(sp)
-		LONG_L	$6,  PT_R6(sp)
-		LONG_L	$5,  PT_R5(sp)
-		LONG_L	$4,  PT_R4(sp)
-		LONG_L	$3,  PT_R3(sp)
-		LONG_L	$2,  PT_R2(sp)
+		cfi_ld	$7,  PT_R7, \docfi
+		cfi_ld	$6,  PT_R6, \docfi
+		cfi_ld	$5,  PT_R5, \docfi
+		cfi_ld	$4,  PT_R4, \docfi
+		cfi_ld	$3,  PT_R3, \docfi
+		cfi_ld	$2,  PT_R2, \docfi
 		.set	pop
 		.endm
 
-		.macro	RESTORE_SP_AND_RET
-		LONG_L	sp, PT_R29(sp)
+		.macro	RESTORE_SP_AND_RET docfi=0
+		RESTORE_SP \docfi
 #ifdef CONFIG_CPU_MIPSR6
 		eretnc
 #else
@@ -398,16 +435,12 @@
 
 #endif
 
-		.macro	RESTORE_SP
-		LONG_L	sp, PT_R29(sp)
-		.endm
-
-		.macro	RESTORE_ALL
-		RESTORE_TEMP
-		RESTORE_STATIC
-		RESTORE_AT
-		RESTORE_SOME
-		RESTORE_SP
+		.macro	RESTORE_ALL docfi=0
+		RESTORE_TEMP \docfi
+		RESTORE_STATIC \docfi
+		RESTORE_AT \docfi
+		RESTORE_SOME \docfi
+		RESTORE_SP \docfi
 		.endm
 
 /*
diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index ae810da..37b9383 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -150,6 +150,7 @@ LEAF(__r4k_wait)
 	.align	5
 BUILD_ROLLBACK_PROLOGUE handle_int
 NESTED(handle_int, PT_SIZE, sp)
+	.cfi_signal_frame
 #ifdef CONFIG_TRACE_IRQFLAGS
 	/*
 	 * Check to see if the interrupted code has just disabled
@@ -181,7 +182,7 @@ NESTED(handle_int, PT_SIZE, sp)
 1:
 	.set pop
 #endif
-	SAVE_ALL
+	SAVE_ALL docfi=1
 	CLI
 	TRACE_IRQS_OFF
 
@@ -269,8 +270,8 @@ NESTED(except_vec_ejtag_debug, 0, sp)
  */
 BUILD_ROLLBACK_PROLOGUE except_vec_vi
 NESTED(except_vec_vi, 0, sp)
-	SAVE_SOME
-	SAVE_AT
+	SAVE_SOME docfi=1
+	SAVE_AT docfi=1
 	.set	push
 	.set	noreorder
 	PTR_LA	v1, except_vec_vi_handler
@@ -396,6 +397,7 @@ NESTED(except_vec_nmi, 0, sp)
 	__FINIT
 
 NESTED(nmi_handler, PT_SIZE, sp)
+	.cfi_signal_frame
 	.set	push
 	.set	noat
 	/*
@@ -478,6 +480,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
 	.macro	__BUILD_HANDLER exception handler clear verbose ext
 	.align	5
 	NESTED(handle_\exception, PT_SIZE, sp)
+	.cfi_signal_frame
 	.set	noat
 	SAVE_ALL
 	FEXPORT(handle_\exception\ext)
@@ -485,8 +488,8 @@ NESTED(nmi_handler, PT_SIZE, sp)
 	.set	at
 	__BUILD_\verbose \exception
 	move	a0, sp
-	PTR_LA	ra, ret_from_exception
-	j	do_\handler
+	jal	do_\handler
+	j	ret_from_exception
 	END(handle_\exception)
 	.endm
 
diff --git a/arch/mips/mm/tlbex-fault.S b/arch/mips/mm/tlbex-fault.S
index 318855e..77db401 100644
--- a/arch/mips/mm/tlbex-fault.S
+++ b/arch/mips/mm/tlbex-fault.S
@@ -12,14 +12,15 @@
 
 	.macro tlb_do_page_fault, write
 	NESTED(tlb_do_page_fault_\write, PT_SIZE, sp)
-	SAVE_ALL
+	.cfi_signal_frame
+	SAVE_ALL docfi=1
 	MFC0	a2, CP0_BADVADDR
 	KMODE
 	move	a0, sp
 	REG_S	a2, PT_BVADDR(sp)
 	li	a1, \write
-	PTR_LA	ra, ret_from_exception
-	j	do_page_fault
+	jal	do_page_fault
+	j	ret_from_exception
 	END(tlb_do_page_fault_\write)
 	.endm
 
diff --git a/arch/mips/vdso/sigreturn.S b/arch/mips/vdso/sigreturn.S
index 715bf59..30c6219 100644
--- a/arch/mips/vdso/sigreturn.S
+++ b/arch/mips/vdso/sigreturn.S
@@ -19,31 +19,21 @@
 	.cfi_sections	.debug_frame
 
 LEAF(__vdso_rt_sigreturn)
-	.cfi_startproc
-	.frame	sp, 0, ra
-	.mask	0x00000000, 0
-	.fmask	0x00000000, 0
 	.cfi_signal_frame
 
 	li	v0, __NR_rt_sigreturn
 	syscall
 
-	.cfi_endproc
 	END(__vdso_rt_sigreturn)
 
 #if _MIPS_SIM == _MIPS_SIM_ABI32
 
 LEAF(__vdso_sigreturn)
-	.cfi_startproc
-	.frame	sp, 0, ra
-	.mask	0x00000000, 0
-	.fmask	0x00000000, 0
 	.cfi_signal_frame
 
 	li	v0, __NR_sigreturn
 	syscall
 
-	.cfi_endproc
 	END(__vdso_sigreturn)
 
 #endif
-- 
2.7.4


* [PATCH 4/4] mips: Save all registers when saving the frame
  2017-08-10 18:27 [PATCH 0/4] mips: fix various issues around backtrace minyard
                   ` (2 preceding siblings ...)
  2017-08-10 18:27 ` [PATCH 3/4] mips: Add DWARF unwinding to assembly minyard
@ 2017-08-10 18:27 ` minyard
  3 siblings, 0 replies; 5+ messages in thread
From: minyard @ 2017-08-10 18:27 UTC (permalink / raw)
  To: linux-mips, ralf, linux-kernel; +Cc: Corey Minyard

From: Corey Minyard <cminyard@mvista.com>

The MIPS frame save code was only saving a few registers, enough to
do a backtrace if every function set up a frame.  However, this does
not work when DWARF unwinding is used, because most of the saved
register values are wrong.  This was causing kdump backtraces to be
short or bogus.

So save all the registers.
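
For reference, a minimal usage sketch of the helper this patch
touches (illustrative only; the real callers are the existing
MIPS stacktrace paths):

	struct pt_regs regs;

	prepare_frametrace(&regs);
	/*
	 * With this change regs.regs[1..31] and regs.cp0_epc all hold
	 * the values at the capture point, not just SP ($29) and RA
	 * ($31), so a DWARF-based unwinder starting from here sees a
	 * consistent register file.
	 */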

Signed-off-by: Corey Minyard <cminyard@mvista.com>
---
 arch/mips/include/asm/stacktrace.h | 64 +++++++++++++++++++++++++++++---------
 1 file changed, 50 insertions(+), 14 deletions(-)

diff --git a/arch/mips/include/asm/stacktrace.h b/arch/mips/include/asm/stacktrace.h
index 780ee2c..10c4e9c 100644
--- a/arch/mips/include/asm/stacktrace.h
+++ b/arch/mips/include/asm/stacktrace.h
@@ -2,6 +2,8 @@
 #define _ASM_STACKTRACE_H
 
 #include <asm/ptrace.h>
+#include <asm/asm.h>
+#include <linux/stringify.h>
 
 #ifdef CONFIG_KALLSYMS
 extern int raw_show_trace;
@@ -20,6 +22,14 @@ static inline unsigned long unwind_stack(struct task_struct *task,
 }
 #endif
 
+#define STR_PTR_LA    __stringify(PTR_LA)
+#define STR_LONG_S    __stringify(LONG_S)
+#define STR_LONG_L    __stringify(LONG_L)
+#define STR_LONGSIZE  __stringify(LONGSIZE)
+
+#define STORE_ONE_REG(r) \
+    STR_LONG_S   " $" __stringify(r)",("STR_LONGSIZE"*"__stringify(r)")(%1)\n\t"
+
 static __always_inline void prepare_frametrace(struct pt_regs *regs)
 {
 #ifndef CONFIG_KALLSYMS
@@ -32,21 +42,47 @@ static __always_inline void prepare_frametrace(struct pt_regs *regs)
 	__asm__ __volatile__(
 		".set push\n\t"
 		".set noat\n\t"
-#ifdef CONFIG_64BIT
-		"1: dla $1, 1b\n\t"
-		"sd $1, %0\n\t"
-		"sd $29, %1\n\t"
-		"sd $31, %2\n\t"
-#else
-		"1: la $1, 1b\n\t"
-		"sw $1, %0\n\t"
-		"sw $29, %1\n\t"
-		"sw $31, %2\n\t"
-#endif
+		/* Store $1 so we can use it */
+		STR_LONG_S " $1,"STR_LONGSIZE"(%1)\n\t"
+		/* Store the PC */
+		"1: " STR_PTR_LA " $1, 1b\n\t"
+		STR_LONG_S " $1,%0\n\t"
+		STORE_ONE_REG(2)
+		STORE_ONE_REG(3)
+		STORE_ONE_REG(4)
+		STORE_ONE_REG(5)
+		STORE_ONE_REG(6)
+		STORE_ONE_REG(7)
+		STORE_ONE_REG(8)
+		STORE_ONE_REG(9)
+		STORE_ONE_REG(10)
+		STORE_ONE_REG(11)
+		STORE_ONE_REG(12)
+		STORE_ONE_REG(13)
+		STORE_ONE_REG(14)
+		STORE_ONE_REG(15)
+		STORE_ONE_REG(16)
+		STORE_ONE_REG(17)
+		STORE_ONE_REG(18)
+		STORE_ONE_REG(19)
+		STORE_ONE_REG(20)
+		STORE_ONE_REG(21)
+		STORE_ONE_REG(22)
+		STORE_ONE_REG(23)
+		STORE_ONE_REG(24)
+		STORE_ONE_REG(25)
+		STORE_ONE_REG(26)
+		STORE_ONE_REG(27)
+		STORE_ONE_REG(28)
+		STORE_ONE_REG(29)
+		STORE_ONE_REG(30)
+		STORE_ONE_REG(31)
+		/* Restore $1 */
+		STR_LONG_L " $1,"STR_LONGSIZE"(%1)\n\t"
 		".set pop\n\t"
-		: "=m" (regs->cp0_epc),
-		"=m" (regs->regs[29]), "=m" (regs->regs[31])
-		: : "memory");
+		: "=m" (regs->cp0_epc)
+		: "r" (regs->regs)
+		: "memory");
 }
 
 #endif /* _ASM_STACKTRACE_H */
-- 
2.7.4

