linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/5] ARM: add vmap'ed stack support
@ 2021-10-08  7:41 Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 1/5] ARM: memcpy: use frame pointer as unwind anchor Ard Biesheuvel
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2021-10-08  7:41 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook,
	Keith Packard, Linus Walleij

This series enables support for vmap'ed task and IRQ stacks in the ARM
kernel. This is an important hardening feature that terminates tasks on
inadvertent or deliberate accesses past the end of the stack, which
might otherwise go completely unnoticed.

Since having an accurate backtrace is especially important in such
cases, this series includes some enhancements to the unwinder and to
some hand-rolled unwind info to increase the likelihood that a backtrace
can be generated when relying on the ARM unwinder. The frame pointer
unwinder turns out to be rather bulletproof in this context, and does
not need any such enhancements.

According to a quick survey I did, compiler-generated code puts a single
stack push as the first instruction in about 2/3 of the cases, which the
unwinder can deal with after applying patch #4, even if this push
faulted because of a stack overflow. In the remaining cases, the
compiler tends to fall back to R11 or R7 as the frame pointer (on ARM
or Thumb-2, respectively), or to emit partial unwind frames covering the
part of the function that runs before the stack frame is set up and the
part that runs inside it. In either case, the unwinder can deal with
such occurrences, as they don't rely on the stack pointer directly.
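
Roughly, the two prologue styles look like this (a sketch of the
pattern, not the output of any particular compiler):

	@ Common case: a single push sets up the entire stack frame
	func:	push	{r4, r5, lr}
		...
		pop	{r4, r5, pc}

	@ Fallback: frame pointer (R11 on ARM, R7 on Thumb-2), so the
	@ unwind state no longer depends on the value of SP
	func:	push	{r7, lr}
		mov	r7, sp
		sub	sp, sp, #24	@ SP is free to move from here
		...
		mov	sp, r7
		pop	{r7, pc}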

Patches #1, #2 and #3 update the ARM asm string routines to align more
closely with the compiler's approach, increasing the likelihood that we
can unwind them in case of a stack overflow.

Patch #5 wires up the generic support, and adds the entry code to detect
and deal with stack overflows.

This series applies on top of my IRQ stacks series sent out earlier:
https://lore.kernel.org/linux-arm-kernel/20211005071542.3127341-1-ardb@kernel.org/

Cc: Russell King <linux@armlinux.org.uk>
Cc: Nicolas Pitre <nico@fluxnic.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Keith Packard <keithpac@amazon.com>
Cc: Linus Walleij <linus.walleij@linaro.org>

Ard Biesheuvel (5):
  ARM: memcpy: use frame pointer as unwind anchor
  ARM: memmove: use frame pointer as unwind anchor
  ARM: memset: clean up unwind annotations
  ARM: unwind: disregard unwind info before stack frame is set up
  ARM: implement support for vmap'ed stacks

 arch/arm/Kconfig                   |  1 +
 arch/arm/include/asm/assembler.h   |  4 ++
 arch/arm/include/asm/page.h        |  4 ++
 arch/arm/include/asm/thread_info.h |  8 +++
 arch/arm/kernel/entry-armv.S       | 75 ++++++++++++++++++--
 arch/arm/kernel/entry-header.S     | 74 +++++++++++++++++++
 arch/arm/kernel/irq.c              |  9 ++-
 arch/arm/kernel/traps.c            | 65 ++++++++++++++++-
 arch/arm/kernel/unwind.c           | 17 ++++-
 arch/arm/kernel/vmlinux.lds.S      |  4 +-
 arch/arm/lib/copy_template.S       | 66 +++++++----------
 arch/arm/lib/memmove.S             | 60 ++++++----------
 arch/arm/lib/memset.S              |  7 +-
 13 files changed, 295 insertions(+), 99 deletions(-)

-- 
2.30.2



* [PATCH 1/5] ARM: memcpy: use frame pointer as unwind anchor
  2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
@ 2021-10-08  7:41 ` Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 2/5] ARM: memmove: " Ard Biesheuvel
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2021-10-08  7:41 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook,
	Keith Packard, Linus Walleij

The memcpy template is a bit unusual in the way it manages the stack
pointer: depending on the execution path through the function, the SP
assumes different values as different subsets of the register file are
preserved and restored again. This is problematic when it comes to EHABI
unwind info, as it is not instruction-accurate and does not allow
tracking the SP value as it changes.

Commit 279f487e0b471 ("ARM: 8225/1: Add unwinding support for memory
copy functions") addressed this by carving up the function into
different chunks as far as the unwinder is concerned, and keeping a set
of unwind directives for each of them, each corresponding to the state
of the stack pointer during execution of the chunk in question. This not
only duplicates unwind info unnecessarily, but also complicates
unwinding the stack upon overflow.

Instead, let's do what the compiler does when the SP is updated halfway
through a function, which is to use a frame pointer and emit the
appropriate unwind directives to communicate this to the unwinder.

Note that Thumb-2 uses R7 for this, while ARM uses R11 aka FP. So let's
avoid touching R7 in the body of the template, so that Thumb-2 can use
it as the frame pointer. R11 was not modified in the first place.
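
Condensed, the prologue and epilogue of the template now follow the
compiler's pattern described above (enter/exit/usave are macros provided
by the file that includes the template, and UNWIND(...) only emits its
argument when CONFIG_ARM_UNWIND is enabled):

	UNWIND(	.fnstart		)
		enter	r4, lr
		usave	r4, lr
	UNWIND(	.save	{fpreg}		)
	UNWIND(	push	{fpreg}		)
	UNWIND(	.setfp	fpreg, sp	)
	UNWIND(	mov	fpreg, sp	)	@ SP is free to move from here
		...
	UNWIND(	pop	{fpreg}		)
		exit	r4, pc
	UNWIND(	.fnend			)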

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/include/asm/assembler.h |  4 ++
 arch/arm/lib/copy_template.S     | 66 ++++++++------------
 2 files changed, 29 insertions(+), 41 deletions(-)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index abb8202ef0da..405950494208 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -86,6 +86,10 @@
 
 #define IMM12_MASK 0xfff
 
+/* the frame pointer used for stack unwinding */
+ARM(	fpreg	.req	r11	)
+THUMB(	fpreg	.req	r7	)
+
 /*
  * Enable and disable interrupts
  */
diff --git a/arch/arm/lib/copy_template.S b/arch/arm/lib/copy_template.S
index 810a805d36dc..08c311fc70e0 100644
--- a/arch/arm/lib/copy_template.S
+++ b/arch/arm/lib/copy_template.S
@@ -69,13 +69,13 @@
  *	than one 32bit instruction in Thumb-2)
  */
 
-
 	UNWIND(	.fnstart			)
 		enter	r4, lr
-	UNWIND(	.fnend				)
-
-	UNWIND(	.fnstart			)
-		usave	r4, lr			  @ in first stmdb block
+		usave	r4, lr
+	UNWIND(	.save	{fpreg}			)
+	UNWIND(	push	{fpreg}			)
+	UNWIND(	.setfp	fpreg, sp		)
+	UNWIND(	mov	fpreg, sp		)
 
 		subs	r2, r2, #4
 		blt	8f
@@ -86,12 +86,7 @@
 		bne	10f
 
 1:		subs	r2, r2, #(28)
-		stmfd	sp!, {r5 - r8}
-	UNWIND(	.fnend				)
-
-	UNWIND(	.fnstart			)
-		usave	r4, lr
-	UNWIND(	.save	{r5 - r8}		) @ in second stmfd block
+		stmfd	sp!, {r5, r6, r8, r9}
 		blt	5f
 
 	CALGN(	ands	ip, r0, #31		)
@@ -110,9 +105,9 @@
 	PLD(	pld	[r1, #92]		)
 
 3:	PLD(	pld	[r1, #124]		)
-4:		ldr8w	r1, r3, r4, r5, r6, r7, r8, ip, lr, abort=20f
+4:		ldr8w	r1, r3, r4, r5, r6, r8, r9, ip, lr, abort=20f
 		subs	r2, r2, #32
-		str8w	r0, r3, r4, r5, r6, r7, r8, ip, lr, abort=20f
+		str8w	r0, r3, r4, r5, r6, r8, r9, ip, lr, abort=20f
 		bge	3b
 	PLD(	cmn	r2, #96			)
 	PLD(	bge	4b			)
@@ -132,8 +127,8 @@
 		ldr1w	r1, r4, abort=20f
 		ldr1w	r1, r5, abort=20f
 		ldr1w	r1, r6, abort=20f
-		ldr1w	r1, r7, abort=20f
 		ldr1w	r1, r8, abort=20f
+		ldr1w	r1, r9, abort=20f
 		ldr1w	r1, lr, abort=20f
 
 #if LDR1W_SHIFT < STR1W_SHIFT
@@ -150,17 +145,14 @@
 		str1w	r0, r4, abort=20f
 		str1w	r0, r5, abort=20f
 		str1w	r0, r6, abort=20f
-		str1w	r0, r7, abort=20f
 		str1w	r0, r8, abort=20f
+		str1w	r0, r9, abort=20f
 		str1w	r0, lr, abort=20f
 
 	CALGN(	bcs	2b			)
 
-7:		ldmfd	sp!, {r5 - r8}
-	UNWIND(	.fnend				) @ end of second stmfd block
+7:		ldmfd	sp!, {r5, r6, r8, r9}
 
-	UNWIND(	.fnstart			)
-		usave	r4, lr			  @ still in first stmdb block
 8:		movs	r2, r2, lsl #31
 		ldr1b	r1, r3, ne, abort=21f
 		ldr1b	r1, r4, cs, abort=21f
@@ -169,6 +161,7 @@
 		str1b	r0, r4, cs, abort=21f
 		str1b	r0, ip, cs, abort=21f
 
+	UNWIND(	pop	{fpreg}			)
 		exit	r4, pc
 
 9:		rsb	ip, ip, #4
@@ -189,13 +182,10 @@
 		ldr1w	r1, lr, abort=21f
 		beq	17f
 		bgt	18f
-	UNWIND(	.fnend				)
 
 
 		.macro	forward_copy_shift pull push
 
-	UNWIND(	.fnstart			)
-		usave	r4, lr			  @ still in first stmdb block
 		subs	r2, r2, #28
 		blt	14f
 
@@ -205,12 +195,8 @@
 	CALGN(	subcc	r2, r2, ip		)
 	CALGN(	bcc	15f			)
 
-11:		stmfd	sp!, {r5 - r9}
-	UNWIND(	.fnend				)
+11:		stmfd	sp!, {r5, r6, r8 - r10}
 
-	UNWIND(	.fnstart			)
-		usave	r4, lr
-	UNWIND(	.save	{r5 - r9}		) @ in new second stmfd block
 	PLD(	pld	[r1, #0]		)
 	PLD(	subs	r2, r2, #96		)
 	PLD(	pld	[r1, #28]		)
@@ -219,35 +205,32 @@
 	PLD(	pld	[r1, #92]		)
 
 12:	PLD(	pld	[r1, #124]		)
-13:		ldr4w	r1, r4, r5, r6, r7, abort=19f
+13:		ldr4w	r1, r4, r5, r6, r8, abort=19f
 		mov	r3, lr, lspull #\pull
 		subs	r2, r2, #32
-		ldr4w	r1, r8, r9, ip, lr, abort=19f
+		ldr4w	r1, r9, r10, ip, lr, abort=19f
 		orr	r3, r3, r4, lspush #\push
 		mov	r4, r4, lspull #\pull
 		orr	r4, r4, r5, lspush #\push
 		mov	r5, r5, lspull #\pull
 		orr	r5, r5, r6, lspush #\push
 		mov	r6, r6, lspull #\pull
-		orr	r6, r6, r7, lspush #\push
-		mov	r7, r7, lspull #\pull
-		orr	r7, r7, r8, lspush #\push
+		orr	r6, r6, r8, lspush #\push
 		mov	r8, r8, lspull #\pull
 		orr	r8, r8, r9, lspush #\push
 		mov	r9, r9, lspull #\pull
-		orr	r9, r9, ip, lspush #\push
+		orr	r9, r9, r10, lspush #\push
+		mov	r10, r10, lspull #\pull
+		orr	r10, r10, ip, lspush #\push
 		mov	ip, ip, lspull #\pull
 		orr	ip, ip, lr, lspush #\push
-		str8w	r0, r3, r4, r5, r6, r7, r8, r9, ip, abort=19f
+		str8w	r0, r3, r4, r5, r6, r8, r9, r10, ip, abort=19f
 		bge	12b
 	PLD(	cmn	r2, #96			)
 	PLD(	bge	13b			)
 
-		ldmfd	sp!, {r5 - r9}
-	UNWIND(	.fnend				) @ end of the second stmfd block
+		ldmfd	sp!, {r5, r6, r8 - r10}
 
-	UNWIND(	.fnstart			)
-		usave	r4, lr			  @ still in first stmdb block
 14:		ands	ip, r2, #28
 		beq	16f
 
@@ -262,7 +245,6 @@
 
 16:		sub	r1, r1, #(\push / 8)
 		b	8b
-	UNWIND(	.fnend				)
 
 		.endm
 
@@ -273,6 +255,7 @@
 
 18:		forward_copy_shift	pull=24	push=8
 
+	UNWIND(	.fnend				)
 
 /*
  * Abort preamble and completion macros.
@@ -282,10 +265,11 @@
  */
 
 	.macro	copy_abort_preamble
-19:	ldmfd	sp!, {r5 - r9}
+19:	ldmfd	sp!, {r5, r6, r8 - r10}
 	b	21f
-20:	ldmfd	sp!, {r5 - r8}
+20:	ldmfd	sp!, {r5, r6, r8, r9}
 21:
+UNWIND( pop	{fpreg}				)
 	.endm
 
 	.macro	copy_abort_end
-- 
2.30.2



* [PATCH 2/5] ARM: memmove: use frame pointer as unwind anchor
  2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 1/5] ARM: memcpy: use frame pointer as unwind anchor Ard Biesheuvel
@ 2021-10-08  7:41 ` Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 3/5] ARM: memset: clean up unwind annotations Ard Biesheuvel
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2021-10-08  7:41 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook,
	Keith Packard, Linus Walleij

The memmove routine is a bit unusual in the way it manages the stack
pointer: depending on the execution path through the function, the SP
assumes different values as different subsets of the register file are
preserved and restored again. This is problematic when it comes to EHABI
unwind info, as it is not instruction-accurate and does not allow
tracking the SP value as it changes.

Commit 207a6cb06990c ("ARM: 8224/1: Add unwinding support for memmove
function") addressed this by carving up the function into different
chunks as far as the unwinder is concerned, and keeping a set of unwind
directives for each of them, each corresponding to the state of the
stack pointer during execution of the chunk in question. This not only
duplicates unwind info unnecessarily, but also complicates unwinding the
stack upon overflow.

Instead, let's do what the compiler does when the SP is updated halfway
through a function, which is to use a frame pointer and emit the
appropriate unwind directives to communicate this to the unwinder.

Note that Thumb-2 uses R7 for this, while ARM uses R11 aka FP. So let's
avoid touching R7 in the body of the function, so that Thumb-2 can use
it as the frame pointer. R11 was not modified in the first place.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/lib/memmove.S | 60 +++++++-------------
 1 file changed, 20 insertions(+), 40 deletions(-)

diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index 6fecc12a1f51..6410554039fd 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -31,12 +31,13 @@ WEAK(memmove)
 		subs	ip, r0, r1
 		cmphi	r2, ip
 		bls	__memcpy
-
-		stmfd	sp!, {r0, r4, lr}
 	UNWIND(	.fnend				)
 
 	UNWIND(	.fnstart			)
-	UNWIND(	.save	{r0, r4, lr}		) @ in first stmfd block
+	UNWIND(	.save	{r0, r4, fpreg, lr}	)
+		stmfd	sp!, {r0, r4, UNWIND(fpreg,) lr}
+	UNWIND(	.setfp	fpreg, sp		)
+	UNWIND(	mov	fpreg, sp		)
 		add	r1, r1, r2
 		add	r0, r0, r2
 		subs	r2, r2, #4
@@ -48,12 +49,7 @@ WEAK(memmove)
 		bne	10f
 
 1:		subs	r2, r2, #(28)
-		stmfd	sp!, {r5 - r8}
-	UNWIND(	.fnend				)
-
-	UNWIND(	.fnstart			)
-	UNWIND(	.save	{r0, r4, lr}		)
-	UNWIND(	.save	{r5 - r8}		) @ in second stmfd block
+		stmfd	sp!, {r5, r6, r8, r9}
 		blt	5f
 
 	CALGN(	ands	ip, r0, #31		)
@@ -72,9 +68,9 @@ WEAK(memmove)
 	PLD(	pld	[r1, #-96]		)
 
 3:	PLD(	pld	[r1, #-128]		)
-4:		ldmdb	r1!, {r3, r4, r5, r6, r7, r8, ip, lr}
+4:		ldmdb	r1!, {r3, r4, r5, r6, r8, r9, ip, lr}
 		subs	r2, r2, #32
-		stmdb	r0!, {r3, r4, r5, r6, r7, r8, ip, lr}
+		stmdb	r0!, {r3, r4, r5, r6, r8, r9, ip, lr}
 		bge	3b
 	PLD(	cmn	r2, #96			)
 	PLD(	bge	4b			)
@@ -88,8 +84,8 @@ WEAK(memmove)
 		W(ldr)	r4, [r1, #-4]!
 		W(ldr)	r5, [r1, #-4]!
 		W(ldr)	r6, [r1, #-4]!
-		W(ldr)	r7, [r1, #-4]!
 		W(ldr)	r8, [r1, #-4]!
+		W(ldr)	r9, [r1, #-4]!
 		W(ldr)	lr, [r1, #-4]!
 
 		add	pc, pc, ip
@@ -99,17 +95,13 @@ WEAK(memmove)
 		W(str)	r4, [r0, #-4]!
 		W(str)	r5, [r0, #-4]!
 		W(str)	r6, [r0, #-4]!
-		W(str)	r7, [r0, #-4]!
 		W(str)	r8, [r0, #-4]!
+		W(str)	r9, [r0, #-4]!
 		W(str)	lr, [r0, #-4]!
 
 	CALGN(	bcs	2b			)
 
-7:		ldmfd	sp!, {r5 - r8}
-	UNWIND(	.fnend				) @ end of second stmfd block
-
-	UNWIND(	.fnstart			)
-	UNWIND(	.save	{r0, r4, lr}		) @ still in first stmfd block
+7:		ldmfd	sp!, {r5, r6, r8, r9}
 
 8:		movs	r2, r2, lsl #31
 		ldrbne	r3, [r1, #-1]!
@@ -118,7 +110,7 @@ WEAK(memmove)
 		strbne	r3, [r0, #-1]!
 		strbcs	r4, [r0, #-1]!
 		strbcs	ip, [r0, #-1]
-		ldmfd	sp!, {r0, r4, pc}
+		ldmfd	sp!, {r0, r4, UNWIND(fpreg,) pc}
 
 9:		cmp	ip, #2
 		ldrbgt	r3, [r1, #-1]!
@@ -137,13 +129,10 @@ WEAK(memmove)
 		ldr	r3, [r1, #0]
 		beq	17f
 		blt	18f
-	UNWIND(	.fnend				)
 
 
 		.macro	backward_copy_shift push pull
 
-	UNWIND(	.fnstart			)
-	UNWIND(	.save	{r0, r4, lr}		) @ still in first stmfd block
 		subs	r2, r2, #28
 		blt	14f
 
@@ -152,12 +141,7 @@ WEAK(memmove)
 	CALGN(	subcc	r2, r2, ip		)
 	CALGN(	bcc	15f			)
 
-11:		stmfd	sp!, {r5 - r9}
-	UNWIND(	.fnend				)
-
-	UNWIND(	.fnstart			)
-	UNWIND(	.save	{r0, r4, lr}		)
-	UNWIND(	.save	{r5 - r9}		) @ in new second stmfd block
+11:		stmfd	sp!, {r5, r6, r8 - r10}
 
 	PLD(	pld	[r1, #-4]		)
 	PLD(	subs	r2, r2, #96		)
@@ -167,35 +151,31 @@ WEAK(memmove)
 	PLD(	pld	[r1, #-96]		)
 
 12:	PLD(	pld	[r1, #-128]		)
-13:		ldmdb   r1!, {r7, r8, r9, ip}
+13:		ldmdb   r1!, {r8, r9, r10, ip}
 		mov     lr, r3, lspush #\push
 		subs    r2, r2, #32
 		ldmdb   r1!, {r3, r4, r5, r6}
 		orr     lr, lr, ip, lspull #\pull
 		mov     ip, ip, lspush #\push
-		orr     ip, ip, r9, lspull #\pull
+		orr     ip, ip, r10, lspull #\pull
+		mov     r10, r10, lspush #\push
+		orr     r10, r10, r9, lspull #\pull
 		mov     r9, r9, lspush #\push
 		orr     r9, r9, r8, lspull #\pull
 		mov     r8, r8, lspush #\push
-		orr     r8, r8, r7, lspull #\pull
-		mov     r7, r7, lspush #\push
-		orr     r7, r7, r6, lspull #\pull
+		orr     r8, r8, r6, lspull #\pull
 		mov     r6, r6, lspush #\push
 		orr     r6, r6, r5, lspull #\pull
 		mov     r5, r5, lspush #\push
 		orr     r5, r5, r4, lspull #\pull
 		mov     r4, r4, lspush #\push
 		orr     r4, r4, r3, lspull #\pull
-		stmdb   r0!, {r4 - r9, ip, lr}
+		stmdb   r0!, {r4 - r6, r8 - r10, ip, lr}
 		bge	12b
 	PLD(	cmn	r2, #96			)
 	PLD(	bge	13b			)
 
-		ldmfd	sp!, {r5 - r9}
-	UNWIND(	.fnend				) @ end of the second stmfd block
-
-	UNWIND(	.fnstart			)
-	UNWIND(	.save {r0, r4, lr}		) @ still in first stmfd block
+		ldmfd	sp!, {r5, r6, r8 - r10}
 
 14:		ands	ip, r2, #28
 		beq	16f
@@ -211,7 +191,6 @@ WEAK(memmove)
 
 16:		add	r1, r1, #(\pull / 8)
 		b	8b
-	UNWIND(	.fnend				)
 
 		.endm
 
@@ -222,5 +201,6 @@ WEAK(memmove)
 
 18:		backward_copy_shift	push=24	pull=8
 
+	UNWIND(	.fnend				)
 ENDPROC(memmove)
 ENDPROC(__memmove)
-- 
2.30.2



* [PATCH 3/5] ARM: memset: clean up unwind annotations
  2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 1/5] ARM: memcpy: use frame pointer as unwind anchor Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 2/5] ARM: memmove: " Ard Biesheuvel
@ 2021-10-08  7:41 ` Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 4/5] ARM: unwind: disregard unwind info before stack frame is set up Ard Biesheuvel
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2021-10-08  7:41 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook,
	Keith Packard, Linus Walleij

The memset implementation carves up the code into different sections,
each covered by its own unwind info. In this case, it is done in a way
similar to how the compiler might do it, to disambiguate between parts
where the return address is in LR and the SP is unmodified, and parts
where a stack frame is live, and the unwinder needs to know the size of
the stack frame and the location of the return address within it.

The placement of the unwind directives is slightly odd, though: the
stack pushes are placed in the wrong sections, which may confuse the
unwinder when attempting to unwind with the PC pointing at the stack
push in question.

So let's fix this up by reordering the directives and instructions as
appropriate.
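
In other words, each push used to sit at the tail of the preceding
section, before the .fnstart/.save pair that describes it:

	stmfd	sp!, {r8, lr}
UNWIND( .fnend              )
UNWIND( .fnstart            )
UNWIND( .save {r8, lr}      )

whereas now it sits inside the section whose unwind info covers it:

UNWIND( .fnstart            )
UNWIND( .save {r8, lr}      )
	stmfd	sp!, {r8, lr}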

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/lib/memset.S | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index 9817cb258c1a..d71ab61430b2 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -28,16 +28,16 @@ UNWIND( .fnstart         )
 	mov	r3, r1
 7:	cmp	r2, #16
 	blt	4f
+UNWIND( .fnend              )
 
 #if ! CALGN(1)+0
 
 /*
  * We need 2 extra registers for this loop - use r8 and the LR
  */
-	stmfd	sp!, {r8, lr}
-UNWIND( .fnend              )
 UNWIND( .fnstart            )
 UNWIND( .save {r8, lr}      )
+	stmfd	sp!, {r8, lr}
 	mov	r8, r1
 	mov	lr, r3
 
@@ -66,10 +66,9 @@ UNWIND( .fnend              )
  * whole cache lines at once.
  */
 
-	stmfd	sp!, {r4-r8, lr}
-UNWIND( .fnend                 )
 UNWIND( .fnstart               )
 UNWIND( .save {r4-r8, lr}      )
+	stmfd	sp!, {r4-r8, lr}
 	mov	r4, r1
 	mov	r5, r3
 	mov	r6, r1
-- 
2.30.2



* [PATCH 4/5] ARM: unwind: disregard unwind info before stack frame is set up
  2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2021-10-08  7:41 ` [PATCH 3/5] ARM: memset: clean up unwind annotations Ard Biesheuvel
@ 2021-10-08  7:41 ` Ard Biesheuvel
  2021-10-08  7:41 ` [PATCH 5/5] ARM: implement support for vmap'ed stacks Ard Biesheuvel
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2021-10-08  7:41 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook,
	Keith Packard, Linus Walleij

When unwinding the stack from a stack overflow, we are likely to start
from a stack push instruction, given that this is the most common way to
grow the stack in compiler-emitted code. This push instruction rarely
appears anywhere other than at offset 0x0 of the function, and when it
does appear later, the compiler tends to split up the unwind
annotations, given that the stack frame layout is apparently not the
same throughout the function.

This means that, in the general case, if the frame's PC points at the
first instruction covered by a certain unwind entry, the stack frame
that the unwind entry describes cannot have been created yet, and so we
are still on the stack frame of the caller. So treat this as a special
case, and return with the new PC taken from the frame's LR, without
applying the unwind transformations to the virtual register set.

This permits us to unwind the call stack on stack overflow when the
overflow was caused by a stack push on function entry.
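
For example (sketch): if the unwinder finds the PC pointing at the push
below, whether because the push itself faulted on a stack overflow or
simply because it has not executed yet, the frame described by .save
does not exist, and the return address is still in LR:

	func:
	UNWIND(	.fnstart		)
	UNWIND(	.save	{r4, fpreg, lr}	)
		push	{r4, fpreg, lr}	@ PC here: caller's frame still live
		...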

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/kernel/unwind.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index b7a6141c342f..de0d26dc73fd 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -411,7 +411,19 @@ int unwind_frame(struct stackframe *frame)
 	if (idx->insn == 1)
 		/* can't unwind */
 		return -URC_FAILURE;
-	else if ((idx->insn & 0x80000000) == 0)
+	else if (frame->pc == prel31_to_addr(&idx->addr_offset)) {
+		/*
+		 * Unwinding is tricky when we're halfway through the prologue,
+		 * since the stack frame that the unwinder expects may not be
+		 * fully set up yet. However, one thing we do know for sure is
+		 * that if we are unwinding from the very first instruction of
+		 * a function, we are still effectively in the stack frame of
+		 * the caller, and the unwind info has no relevance yet.
+		 */
+		frame->sp_low = frame->sp;
+		frame->pc = frame->lr;
+		return URC_OK;
+	} else if ((idx->insn & 0x80000000) == 0)
 		/* prel31 to the unwind table */
 		ctrl.insn = (unsigned long *)prel31_to_addr(&idx->insn);
 	else if ((idx->insn & 0xff000000) == 0x80000000)
-- 
2.30.2



* [PATCH 5/5] ARM: implement support for vmap'ed stacks
  2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2021-10-08  7:41 ` [PATCH 4/5] ARM: unwind: disregard unwind info before stack frame is set up Ard Biesheuvel
@ 2021-10-08  7:41 ` Ard Biesheuvel
  2021-10-08  9:54 ` [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
  2021-10-11 23:32 ` Keith Packard
  6 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2021-10-08  7:41 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook,
	Keith Packard, Linus Walleij

Wire up the generic support for managing task stack allocations via
vmalloc, and implement the entry code that detects whether we faulted
because of a stack overrun (or a future stack overrun caused by pushing
the pt_regs array).
While this adds a fair amount of tricky entry asm code, it should be
noted that it only adds a TST + branch to the svc_entry path (although
Thumb-2 needs a little tweak to get the stack pointer into a GPR). The
code implementing the non-trivial manipulation of the overflow stack
pointer is emitted out-of-line into the .text section, with only a small
trampoline residing in .entry.text (but not on the entry path itself)
that branches out to C code if a kernel stack overflow actually occurs.
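
The TST relies on an alignment trick: task and IRQ stacks are
THREAD_SIZE in size but aligned to THREAD_ALIGN (2 * THREAD_SIZE, see
below), so the bit at position THREAD_SIZE_ORDER + PAGE_SHIFT is clear
for any SP that still points into the stack, and becomes set as soon as
SP drops below its base. For example, with 4 KiB pages and 8 KiB stacks
(i.e., bit 13):

	stack: [0xe9c08000 .. 0xe9c0a000)	@ 8 KiB, aligned to 16 KiB
	SP = 0xe9c08010: SP & (1 << 13) == 0	@ still in range
	SP = 0xe9c07ff0: SP & (1 << 13) != 0	@ dropped below the base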

Since on ARM, we rely on do_translation_fault() to keep PMD level page
table entries that cover the vmalloc region up to date, we need to
ensure that we don't hit such a stale PMD entry when accessing the
stack. So we do a dummy read from the new stack while still running from
the old one on the context switch path, and bump the vmalloc_seq counter
when PMD level entries in the vmalloc range are modified, so that the MM
switch fetches the latest version of the entries.
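
Condensed from the diff below, the tail of __switch_to thus becomes:

	ldmia	r4, {r4 - sl, fp, ip, lr}	@ ip := next task's SP
	ldr	r2, [ip]			@ dummy read from the new stack;
						@ faults here, while still on the
						@ old stack, if the PMD is stale
	set_current r1
	mov	sp, ip				@ only now switch stacks
	bx	lr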

Note that this also adds unwind information to the entry code dealing
with the stack overflow condition, but this is only used when an
additional fault is triggered while attempting to mitigate the original
overflow.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/Kconfig                   |  1 +
 arch/arm/include/asm/page.h        |  4 ++
 arch/arm/include/asm/thread_info.h |  8 +++
 arch/arm/kernel/entry-armv.S       | 75 ++++++++++++++++++--
 arch/arm/kernel/entry-header.S     | 74 +++++++++++++++++++
 arch/arm/kernel/irq.c              |  9 ++-
 arch/arm/kernel/traps.c            | 65 ++++++++++++++++-
 arch/arm/kernel/unwind.c           |  3 +-
 arch/arm/kernel/vmlinux.lds.S      |  4 +-
 9 files changed, 230 insertions(+), 13 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index d46b243e1b26..964cc41a0fed 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -126,6 +126,7 @@ config ARM
 	select RTC_LIB
 	select SYS_SUPPORTS_APM_EMULATION
 	select THREAD_INFO_IN_TASK if CURRENT_POINTER_IN_TPIDRURO
+	select HAVE_ARCH_VMAP_STACK if THREAD_INFO_IN_TASK
 	select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M
 	# Above selects are sorted alphabetically; please add new ones
 	# according to that.  Thanks.
diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index 11b058a72a5b..7b871ed99ccf 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -149,6 +149,10 @@ extern void copy_page(void *to, const void *from);
 #include <asm/pgtable-2level-types.h>
 #endif
 
+#ifdef CONFIG_VMAP_STACK
+#define ARCH_PAGE_TABLE_SYNC_MASK	PGTBL_PMD_MODIFIED
+#endif
+
 #endif /* CONFIG_MMU */
 
 typedef struct page *pgtable_t;
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 787511396f3f..900c8ae484c9 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -25,6 +25,14 @@
 #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
 #define THREAD_START_SP		(THREAD_SIZE - 8)
 
+#ifdef CONFIG_VMAP_STACK
+#define THREAD_ALIGN		(2 * THREAD_SIZE)
+#else
+#define THREAD_ALIGN		THREAD_SIZE
+#endif
+
+#define OVERFLOW_STACK_SIZE	SZ_4K
+
 #ifndef __ASSEMBLY__
 
 struct task_struct;
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 47ba80d7a84a..ca1b01425a1a 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -53,6 +53,14 @@ ARM(	mov	r8, fp		)	@ Preserve original FP
 	@
 	subs	r2, sp, r7		@ SP above bottom of IRQ stack?
 	rsbscs	r2, r2, #THREAD_SIZE	@ ... and below the top?
+#ifdef CONFIG_VMAP_STACK
+	bcs	.L\@
+	mov_l	r2, overflow_stack	@ Take base address
+	add	r2, r2, r3		@ Top of this CPU's overflow stack
+	subs	r2, r7, r2		@ Compare with incoming SP
+	rsbscs	r2, r2, #OVERFLOW_STACK_SIZE
+.L\@:
+#endif
 	movcs	sp, r7			@ If so, revert to incoming SP
 
 	@
@@ -177,10 +185,15 @@ ENDPROC(__und_invalid)
 #define SPFIX(code...)
 #endif
 
-	.macro	svc_entry, stack_hole=0, trace=1, uaccess=1
+	.macro	svc_entry, stack_hole=0, trace=1, uaccess=1, overflow_check=1
+	sub	sp, sp, #(SVC_REGS_SIZE + \stack_hole - 4)
+	.if	\overflow_check
+	do_overflow_check (SVC_REGS_SIZE + \stack_hole - 4)
  UNWIND(.fnstart		)
  UNWIND(.save {r0 - pc}		)
-	sub	sp, sp, #(SVC_REGS_SIZE + \stack_hole - 4)
+	.else
+ UNWIND(.fnstart		)
+	.endif
 #ifdef CONFIG_THUMB2_KERNEL
  SPFIX(	str	r0, [sp]	)	@ temporarily saved
  SPFIX(	mov	r0, sp		)
@@ -812,16 +825,64 @@ ENTRY(__switch_to)
 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_SMP)
 	str	r7, [r8]
 #endif
- THUMB(	mov	ip, r4			   )
 	mov	r0, r5
+#if !defined(CONFIG_THUMB2_KERNEL) && !defined(CONFIG_VMAP_STACK)
 	set_current r7
- ARM(	ldmia	r4, {r4 - sl, fp, sp, pc}  )	@ Load all regs saved previously
- THUMB(	ldmia	ip!, {r4 - sl, fp}	   )	@ Load all regs saved previously
- THUMB(	ldr	sp, [ip], #4		   )
- THUMB(	ldr	pc, [ip]		   )
+	ldmia	r4, {r4 - sl, fp, sp, pc}	@ Load all regs saved previously
+#else
+	mov	r1, r7
+	ldmia	r4, {r4 - sl, fp, ip, lr}	@ Load all regs saved previously
+#ifdef CONFIG_VMAP_STACK
+	@
+	@ Do a dummy read from the new stack while running from the old one so
+	@ that we can rely on do_translation_fault() to fix up any stale PGD
+	@ entries covering the vmalloc region.
+	@
+	ldr	r2, [ip]
+#endif
+	set_current r1
+	mov	sp, ip
+	bx	lr
+#endif
  UNWIND(.fnend		)
 ENDPROC(__switch_to)
 
+#ifdef CONFIG_VMAP_STACK
+__bad_stack:
+	@
+	@ We detected an overflow in svc_entry, which switched to the
+	@ overflow stack. Stash the exception regs, and head to our overflow
+	@ handler.
+	@
+
+	mov	ip, sp
+THUMB(	sub	sp, #4		)
+	push	{fpreg, ip, lr ARM(, pc)}
+
+UNWIND( ldr	lr, [ip, #4]	)		@ restore exception LR
+	mrc	p15, 0, ip, c13, c0, 2		@ reload IP
+
+	@ Store the original GPRs to the new stack.
+	svc_entry uaccess=0, overflow_check=0
+
+UNWIND( .save   {fpreg, sp, lr}	)
+UNWIND( .setfp  fpreg, sp, #12	)
+	ldr	fpreg, [sp, #S_SP]		@ Add our frame record
+	add	fpreg, fpreg, #12		@ to the linked list
+
+	ldr	r1, [fpreg, #4]			@ reload SP at entry
+	str	r1, [sp, #S_SP]			@ store in pt_regs
+
+	@ Stash the regs for handle_bad_stack
+	mov	r0, sp
+
+	@ Time to die
+	bl	handle_bad_stack
+	nop
+UNWIND( .fnend			)
+ENDPROC(__bad_stack)
+#endif
+
 	__INIT
 
 /*
diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index ae24dd54e9ef..083967065688 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -423,3 +423,77 @@ scno	.req	r7		@ syscall number
 tbl	.req	r8		@ syscall table pointer
 why	.req	r8		@ Linux syscall (!= 0)
 tsk	.req	r9		@ current thread_info
+
+	.macro	do_overflow_check, frame_size:req
+#ifdef CONFIG_VMAP_STACK
+	@
+	@ Test whether the SP has overflowed, without corrupting a GPR.
+	@ Task and IRQ stacks are aligned so that
+	@ SP & BIT(THREAD_SIZE_ORDER + PAGE_SHIFT) should always be zero.
+	@
+ARM(	tst	sp, #1 << (THREAD_SIZE_ORDER + PAGE_SHIFT)	)
+
+	@
+	@ Note that Thumb-2 needs this little dance to get the value of SP
+	@ into a register that may be used with TST, and needs the explicit
+	@ IT NE to ensure that the unconditional branch opcode is chosen,
+	@ which guarantees that the cross-section branch will be in range.
+	@
+THUMB(	add	sp, sp, r0					)
+THUMB(	sub	r0, sp, r0					)
+THUMB(	tst	r0, #1 << (THREAD_SIZE_ORDER + PAGE_SHIFT)	)
+THUMB(	sub	r0, sp, r0					)
+THUMB(	sub	sp, sp, r0					)
+THUMB(	it	ne						)
+	bne	stack_overflow_check\@
+
+	.pushsection	.text
+stack_overflow_check\@:
+UNWIND( .fnstart		)
+	@
+	@ Either we've just detected an overflow, or we've taken an exception
+	@ while on the overflow stack. We cannot use the stack until we have
+	@ decided which is the case. However, as we won't return to userspace,
+	@ we can clobber some USR/SYS mode registers to free up GPRs.
+	@
+
+	mcr	p15, 0, ip, c13, c0, 2		@ Stash IP in TPIDRURW
+	mrs	ip, cpsr
+	eor	ip, ip, #(SVC_MODE ^ SYSTEM_MODE)
+	msr	cpsr_c, ip			@ switch to SYS mode
+	eor	lr, ip, #(SVC_MODE ^ SYSTEM_MODE)
+
+	@ Load the overflow stack into IP using SP_usr as a scratch register
+	ldr	sp, .Lit\@			@ Get base address
+	mrc	p15, 0, ip, c13, c0, 4		@ Get CPU offset
+	add	ip, sp, ip			@ IP := this CPU's overflow stack
+	msr	cpsr_c, lr			@ Switch back using mode in LR_usr
+
+	@
+	@ Check whether we are already on the overflow stack. This may happen
+	@ after panic() re-enables interrupts or when performing accesses that
+	@ may fault when dumping the stack.
+	@ The overflow stack is not in the vmalloc space so we only need to
+	@ check whether the incoming SP is below the top of the overflow stack.
+	@
+	subs	ip, sp, ip			@ Delta with top of overflow stack
+	mrclo	p15, 0, ip, c13, c0, 2		@ Restore IP
+	blo	.Lout\@				@ Carry on
+
+	sub	sp, sp, ip			@ Switch to overflow stack
+	add	ip, sp, ip			@ Keep incoming SP value in IP
+	add	ip, ip, #\frame_size		@ Undo svc_entry's SP change
+	push	{ip, lr}			@ Preserve on the stack
+
+UNWIND( .save	{sp, lr}	)		@ Create an unwind frame here to
+UNWIND( mov	lr, pc		)		@ chain the two stacks together
+	b	__bad_stack
+UNWIND( .fnend			)
+ENDPROC(stack_overflow_check\@)
+
+	.align	2
+.Lit\@:	.long		overflow_stack + OVERFLOW_STACK_SIZE
+	.popsection
+.Lout\@:
+#endif
+	.endm
diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c
index feb07f703a98..8718aaf47891 100644
--- a/arch/arm/kernel/irq.c
+++ b/arch/arm/kernel/irq.c
@@ -55,7 +55,14 @@ static void __init init_irq_stacks(void)
 	int cpu;
 
 	for_each_possible_cpu(cpu) {
-		stack = (u8 *)__get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
+		if (!IS_ENABLED(CONFIG_VMAP_STACK))
+			stack = (u8 *)__get_free_pages(GFP_KERNEL,
+						       THREAD_SIZE_ORDER);
+		else
+			stack = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN,
+					       THREADINFO_GFP, NUMA_NO_NODE,
+					       __builtin_return_address(0));
+
 		if (WARN_ON(!stack))
 			break;
 		per_cpu(irq_stack_ptr, cpu) = &stack[THREAD_SIZE];
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index b42c446cec9a..eb8c73be7c81 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -121,7 +121,8 @@ void dump_backtrace_stm(u32 *stack, u32 instruction, const char *loglvl)
 static int verify_stack(unsigned long sp)
 {
 	if (sp < PAGE_OFFSET ||
-	    (sp > (unsigned long)high_memory && high_memory != NULL))
+	    (!IS_ENABLED(CONFIG_VMAP_STACK) &&
+	     sp > (unsigned long)high_memory && high_memory != NULL))
 		return -EFAULT;
 
 	return 0;
@@ -291,7 +292,8 @@ static int __die(const char *str, int err, struct pt_regs *regs)
 
 	if (!user_mode(regs) || in_interrupt()) {
 		dump_mem(KERN_EMERG, "Stack: ", regs->ARM_sp,
-			 ALIGN(regs->ARM_sp, THREAD_SIZE));
+			 ALIGN(regs->ARM_sp - THREAD_SIZE, THREAD_ALIGN)
+			 + THREAD_SIZE);
 		dump_backtrace(regs, tsk, KERN_EMERG);
 		dump_instr(KERN_EMERG, regs);
 	}
@@ -838,3 +840,62 @@ void __init early_trap_init(void *vectors_base)
 	 */
 #endif
 }
+
+#ifdef CONFIG_VMAP_STACK
+
+DECLARE_PER_CPU(u8 *, irq_stack_ptr);
+
+asmlinkage DEFINE_PER_CPU_ALIGNED(u8[OVERFLOW_STACK_SIZE], overflow_stack);
+
+asmlinkage void handle_bad_stack(struct pt_regs *regs)
+{
+	unsigned long tsk_stk = (unsigned long)current->stack;
+	unsigned long irq_stk = (unsigned long)this_cpu_read(irq_stack_ptr);
+	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
+
+	console_verbose();
+	pr_emerg("Insufficient stack space to handle exception!");
+
+	pr_emerg("Task stack:     [0x%08lx..0x%08lx]\n",
+		 tsk_stk, tsk_stk + THREAD_SIZE);
+	pr_emerg("IRQ stack:      [0x%08lx..0x%08lx]\n",
+		 irq_stk, irq_stk + THREAD_SIZE);
+	pr_emerg("Overflow stack: [0x%08lx..0x%08lx]\n",
+		 ovf_stk, ovf_stk + OVERFLOW_STACK_SIZE);
+
+	die("kernel stack overflow", regs, 0);
+}
+
+/*
+ * Normally, we rely on the logic in do_translation_fault() to update stale PMD
+ * entries covering the vmalloc space in a task's page tables when it first
+ * accesses the region in question. Unfortunately, this is not sufficient when
+ * the task stack resides in the vmalloc region, as do_translation_fault() is a
+ * C function that needs a stack to run.
+ *
+ * So we need to ensure that these PMD entries are up to date *before* the MM
+ * switch. As we already have some logic in the MM switch path that takes care
+ * of this, let's trigger it by bumping the counter every time the core vmalloc
+ * code modifies a PMD entry in the vmalloc region.
+ */
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+	if (start > VMALLOC_END || end < VMALLOC_START)
+		return;
+
+	/*
+	 * This hooks into the core vmalloc code to receive notifications of
+	 * any PMD level changes that have been made to the kernel page tables.
+	 * This means it should only be triggered once for every MiB worth of
+	 * vmalloc space, given that we don't support huge vmalloc/vmap on ARM,
+	 * and that kernel PMD level table entries are rarely (if ever)
+	 * updated.
+	 *
+	 * This means that the counter is going to max out at ~250 for the
+	 * typical case. If it overflows, something entirely unexpected has
+	 * occurred so let's throw a warning if that happens.
+	 */
+	WARN_ON(++init_mm.context.vmalloc_seq == UINT_MAX);
+}
+
+#endif
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index de0d26dc73fd..f67d548d2b28 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -389,7 +389,8 @@ int unwind_frame(struct stackframe *frame)
 
 	/* store the highest address on the stack to avoid crossing it*/
 	ctrl.sp_low = frame->sp;
-	ctrl.sp_high = ALIGN(ctrl.sp_low, THREAD_SIZE);
+	ctrl.sp_high = ALIGN(ctrl.sp_low - THREAD_SIZE, THREAD_ALIGN)
+		       + THREAD_SIZE;
 
 	pr_debug("%s(pc = %08lx lr = %08lx sp = %08lx)\n", __func__,
 		 frame->pc, frame->lr, frame->sp);
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 20c4f6d20c7a..924cde084293 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -138,12 +138,12 @@ SECTIONS
 #ifdef CONFIG_STRICT_KERNEL_RWX
 	. = ALIGN(1<<SECTION_SHIFT);
 #else
-	. = ALIGN(THREAD_SIZE);
+	. = ALIGN(THREAD_ALIGN);
 #endif
 	__init_end = .;
 
 	_sdata = .;
-	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
 	_edata = .;
 
 	BSS_SECTION(0, 0, 0)
-- 
2.30.2



* Re: [PATCH 0/5] ARM: add vmap'ed stack support
  2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2021-10-08  7:41 ` [PATCH 5/5] ARM: implement support for vmap'ed stacks Ard Biesheuvel
@ 2021-10-08  9:54 ` Ard Biesheuvel
  2021-10-11 23:32 ` Keith Packard
  6 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2021-10-08  9:54 UTC (permalink / raw)
  To: Linux ARM, Russell King
  Cc: Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij

On Fri, 8 Oct 2021 at 09:41, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> This series enables support for vmap'ed task and IRQ stacks in the ARM
> kernel. This is an important hardening feature that terminates tasks on
> inadvertent or deliberate accesses past the end of the stack, which
> might otherwise go completely unnoticed.
>

Forgot to include the sample output from lkdtm exhausting the stack.
The output below is from a Thumb-2 build, which does not support the
frame pointer unwinder, so it is using the ARM unwinder to unwind the
overflowed stack.

/ # echo EXHAUST_STACK >/sys/kernel/debug/provoke-crash/DIRECT
lkdtm: Performing direct entry EXHAUST_STACK
lkdtm: Calling function with 512 frame size to depth 32 ...
Insufficient stack space to handle exception!
Task stack:     [0xe9c08000..0xe9c0a000]
IRQ stack:      [0xe0802000..0xe0804000]
Overflow stack: [0xdbbc7500..0xdbbc8500]
Internal error: kernel stack overflow: 0 [#1] SMP THUMB2
Modules linked in:
CPU: 0 PID: 120 Comm: sh Not tainted 5.15.0-rc1+ #991
Hardware name: Generic DT based system
PC is at mmioset+0x16/0xa0
LR is at recursive_loop+0x21/0x50
pc : [<c070d0f6>]    lr : [<c091ae9d>]    psr: 20000033
sp : e9c07f40  ip : e9c07f44  fp : 000e3460
r10: 00000004  r9 : c0fb6770  r8 : c4186000
r7 : e9c09f78  r6 : c2142300  r5 : e9c07f44  r4 : 00000012
r3 : 12121212  r2 : 00000200  r1 : 12121212  r0 : e9c07f44
Flags: nzCv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
Control: 70c5387d  Table: 441dd600  DAC: fffffffd
Register r0 information: 2-page vmalloc region starting at 0xe9c08000
allocated at kernel_clone+0x41/0x2d8
Register r1 information: non-paged memory
Register r2 information: non-paged memory
Register r3 information: non-paged memory
Register r4 information: non-paged memory
Register r5 information: 2-page vmalloc region starting at 0xe9c08000
allocated at kernel_clone+0x41/0x2d8
Register r6 information: slab task_struct start c2142300 pointer offset 0
Register r7 information: 2-page vmalloc region starting at 0xe9c08000
allocated at kernel_clone+0x41/0x2d8
Register r8 information: non-slab/vmalloc memory
Register r9 information: non-slab/vmalloc memory
Register r10 information: non-paged memory
Register r11 information: non-paged memory
Register r12 information: 2-page vmalloc region starting at 0xe9c08000
allocated at kernel_clone+0x41/0x2d8
Process sh (pid: 120, stack limit = 0x(ptrval))
Stack: (0xe9c07f40 to 0xe9c0a000)
7f40: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
7f60: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
7f80: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
7fa0: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
7fc0: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
7fe0: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
8000: 57ac6e9d 00000000 00000000 00000000 00000000 00000000 00000000 00000000
8020: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
8040: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
8060: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
8080: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
80a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
80c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
80e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
8100: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
8120: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
8140: 00000000 47ece8b1 00000013 e9c0815c c2142300 c091aec3 00000000 13131313
8160: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8180: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
81a0: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
81c0: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
81e0: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8200: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8220: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8240: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8260: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8280: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
82a0: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
82c0: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
82e0: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8300: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8320: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 13131313
8340: 13131313 13131313 13131313 13131313 13131313 13131313 13131313 47ece8b1
8360: 00000014 e9c08374 c2142300 c091aec3 00000000 14141414 14141414 14141414
8380: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
83a0: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
83c0: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
83e0: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8400: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8420: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8440: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8460: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8480: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
84a0: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
84c0: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
84e0: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8500: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8520: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8540: 14141414 14141414 14141414 14141414 14141414 14141414 14141414 14141414
8560: 14141414 14141414 14141414 14141414 14141414 47ece8b1 00000015 e9c0858c
8580: c2142300 c091aec3 00000000 15151515 15151515 15151515 15151515 15151515
85a0: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
85c0: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
85e0: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8600: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8620: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8640: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8660: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8680: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
86a0: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
86c0: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
86e0: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8700: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8720: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8740: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8760: 15151515 15151515 15151515 15151515 15151515 15151515 15151515 15151515
8780: 15151515 15151515 15151515 47ece8b1 00000016 e9c087a4 c2142300 c091aec3
87a0: 00000000 16161616 16161616 16161616 16161616 16161616 16161616 16161616
87c0: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
87e0: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8800: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8820: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8840: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8860: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8880: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
88a0: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
88c0: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
88e0: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8900: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8920: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8940: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8960: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
8980: 16161616 16161616 16161616 16161616 16161616 16161616 16161616 16161616
89a0: 16161616 47ece8b1 00000017 e9c089bc c2142300 c091aec3 00000000 17171717
89c0: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
89e0: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8a00: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8a20: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8a40: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8a60: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8a80: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8aa0: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8ac0: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8ae0: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8b00: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8b20: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8b40: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8b60: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8b80: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 17171717
8ba0: 17171717 17171717 17171717 17171717 17171717 17171717 17171717 47ece8b1
8bc0: 00000018 e9c08bd4 c2142300 c091aec3 00000000 18181818 18181818 18181818
8be0: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8c00: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8c20: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8c40: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8c60: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8c80: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8ca0: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8cc0: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8ce0: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8d00: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8d20: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8d40: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8d60: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8d80: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8da0: 18181818 18181818 18181818 18181818 18181818 18181818 18181818 18181818
8dc0: 18181818 18181818 18181818 18181818 18181818 47ece8b1 00000019 e9c08dec
8de0: c2142300 c091aec3 00000000 19191919 19191919 19191919 19191919 19191919
8e00: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8e20: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8e40: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8e60: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8e80: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8ea0: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8ec0: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8ee0: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8f00: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8f20: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8f40: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8f60: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8f80: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8fa0: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8fc0: 19191919 19191919 19191919 19191919 19191919 19191919 19191919 19191919
8fe0: 19191919 19191919 19191919 47ece8b1 0000001a e9c09004 c2142300 c091aec3
9000: 00000000 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9020: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9040: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9060: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9080: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
90a0: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
90c0: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
90e0: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9100: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9120: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9140: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9160: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9180: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
91a0: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
91c0: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
91e0: 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a 1a1a1a1a
9200: 1a1a1a1a 47ece8b1 0000001b e9c0921c c2142300 c091aec3 00000000 1b1b1b1b
9220: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9240: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9260: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9280: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
92a0: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
92c0: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
92e0: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9300: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9320: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9340: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9360: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9380: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
93a0: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
93c0: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
93e0: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b
9400: 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 1b1b1b1b 47ece8b1
9420: 0000001c e9c09434 c2142300 c091aec3 00000000 1c1c1c1c 1c1c1c1c 1c1c1c1c
9440: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9460: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9480: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
94a0: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
94c0: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
94e0: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9500: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9520: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9540: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9560: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9580: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
95a0: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
95c0: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
95e0: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9600: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c
9620: 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 1c1c1c1c 47ece8b1 0000001d e9c0964c
9640: c2142300 c091aec3 00000000 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9660: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9680: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
96a0: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
96c0: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
96e0: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9700: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9720: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9740: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9760: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9780: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
97a0: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
97c0: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
97e0: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9800: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9820: 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d 1d1d1d1d
9840: 1d1d1d1d 1d1d1d1d 1d1d1d1d 47ece8b1 0000001e e9c09864 c2142300 c091aec3
9860: 00000000 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9880: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
98a0: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
98c0: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
98e0: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9900: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9920: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9940: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9960: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9980: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
99a0: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
99c0: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
99e0: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9a00: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9a20: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9a40: 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e 1e1e1e1e
9a60: 1e1e1e1e 47ece8b1 0000001f e9c09a7c c2142300 c091aec3 dbbc8980 1f1f1f1f
9a80: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9aa0: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9ac0: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9ae0: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9b00: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9b20: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9b40: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9b60: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9b80: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9ba0: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9bc0: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9be0: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9c00: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9c20: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9c40: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f
9c60: 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 1f1f1f1f 47ece8b1
9c80: 00000020 e9c09c94 c2142300 c091aec3 00000609 20202020 20202020 20202020
9ca0: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9cc0: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9ce0: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9d00: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9d20: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9d40: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9d60: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9d80: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9da0: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9dc0: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9de0: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9e00: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9e20: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9e40: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9e60: 20202020 20202020 20202020 20202020 20202020 20202020 20202020 20202020
9e80: 20202020 20202020 20202020 20202020 20202020 47ece8b1 c17b5920 c1245ebc
9ea0: 0000000e c0d29257 00000006 c091adbf e9c09f78 c091ad19 c33cea80 000e6428
9ec0: 0000000e c2c237f8 e9c09f78 c0690123 e9c09f78 c33cea80 e9c09f78 000e6428
9ee0: c2142300 0000000e c06900f1 c0535171 00000000 00000000 00000000 00000000
9f00: c42203f8 dbc5d414 00000000 00000000 0000000a 47ece8b1 00000000 e9c09fb0
9f20: 0007b5f4 c2142300 80000207 c4195200 c180f5a0 c180f580 c4195240 c040fc2d
9f40: 00000000 00000072 000e3460 c04382d1 47ece8b1 47ece8b1 c33cea80 c33cea80
9f60: 000e6428 c2142300 00000000 00000000 00000004 c0535481 00000000 00000000
9f80: 000e3460 47ece8b1 c0400364 00000001 b6f62070 000e6428 00000004 c0400364
9fa0: c2142300 c0400161 00000001 b6f62070 00000001 000e6428 0000000e 00000000
9fc0: 00000001 b6f62070 000e6428 00000004 000e2c7c 00000020 000e3444 000e3460
9fe0: 000e23b8 be8db878 b6e89960 b6e8997c 60000010 00000001 00000000 00000000
[<c070d0f6>] (mmioset) from [<c091ae9d>] (recursive_loop+0x21/0x50)
[<c091ae9d>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c091aec3>] (recursive_loop+0x47/0x50)
[<c091aec3>] (recursive_loop) from [<c0d29257>] (lkdtm_EXHAUST_STACK+0x23/0x32)
[<c0d29257>] (lkdtm_EXHAUST_STACK) from [<c091adbf>] (direct_entry+0xa7/0xf0)
[<c091adbf>] (direct_entry) from [<c0690123>] (full_proxy_write+0x33/0x54)
[<c0690123>] (full_proxy_write) from [<c0535171>] (vfs_write+0x8d/0x2c8)
[<c0535171>] (vfs_write) from [<c0535481>] (ksys_write+0x41/0x90)
[<c0535481>] (ksys_write) from [<c0400161>] (ret_fast_syscall+0x1/0x54)
Code: 4101 460b 2a10 db1f (e92d) 4100
---[ end trace 5dfadbdbe8c81ed8 ]---
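
Two notes on reading the dump above (my annotations, not part of the
kernel output):

- The runs of 1b1b1b1b, 1c1c1c1c, ... are the stack buffers of
  successive recursive_loop() frames: each frame appears to fill its
  local buffer with the current recursion depth, so every deeper frame
  leaves a run of bytes one higher than the previous one, with the
  frame record (and what is presumably the stack-protector canary, the
  recurring 47ece8b1 word) in between.

- If I am decoding the Code: line correctly, the faulting opcode pair
  e92d 4100 is the 32-bit Thumb-2 encoding of 'stmdb sp!, {r8, lr}',
  i.e. the fault was taken on the stack push at the very start of the
  callee's prologue, and the unwinder still managed to produce the
  clean backtrace above.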


* Re: [PATCH 0/5] ARM: add vmap'ed stack support
  2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2021-10-08  9:54 ` [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
@ 2021-10-11 23:32 ` Keith Packard
  6 siblings, 0 replies; 8+ messages in thread
From: Keith Packard @ 2021-10-11 23:32 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Linus Walleij



Ard Biesheuvel <ardb@kernel.org> writes:

> Ard Biesheuvel (5):
>   ARM: memcpy: use frame pointer as unwind anchor
>   ARM: memmove: use frame pointer as unwind anchor
>   ARM: memset: clean up unwind annotations
>   ARM: unwind: disregard unwind info before stack frame is set up
>   ARM: implement support for vmap'ed stacks

I've been testing these on my Raspberry Pi 2B (quad-core ARMv7),
including provoking kernel stack overflows, which now kill the
offending process rather than taking down the whole system.
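
For anyone who wants to reproduce this kind of overflow: LKDTM's
EXHAUST_STACK test is one way (the lkdtm_EXHAUST_STACK/direct_entry
frames in the backtrace upthread come from that path). Assuming LKDTM
is enabled (CONFIG_LKDTM) and debugfs is mounted, something like:

  # mount -t debugfs none /sys/kernel/debug
  # echo EXHAUST_STACK > /sys/kernel/debug/provoke-crash/DIRECT

should trigger it; with this series applied the write simply kills the
process instead of bringing down the box.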

Tested-by: Keith Packard <keithpac@amazon.com>

-- 
-keith




Thread overview: 8+ messages
2021-10-08  7:41 [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
2021-10-08  7:41 ` [PATCH 1/5] ARM: memcpy: use frame pointer as unwind anchor Ard Biesheuvel
2021-10-08  7:41 ` [PATCH 2/5] ARM: memmove: " Ard Biesheuvel
2021-10-08  7:41 ` [PATCH 3/5] ARM: memset: clean up unwind annotations Ard Biesheuvel
2021-10-08  7:41 ` [PATCH 4/5] ARM: unwind: disregard unwind info before stack frame is set up Ard Biesheuvel
2021-10-08  7:41 ` [PATCH 5/5] ARM: implement support for vmap'ed stacks Ard Biesheuvel
2021-10-08  9:54 ` [PATCH 0/5] ARM: add vmap'ed stack support Ard Biesheuvel
2021-10-11 23:32 ` Keith Packard
