[tip:,objtool/core] x86/crypto/sha1_avx2: Standardize stack alignment prologue

Message ID 161891562230.29796.6626679200764483556.tip-bot2@tip-bot2
State Accepted
Commit 20114c899cafa8313534a841cab0ab1f7ab09672

Commit Message

tip-bot2 for Peter Zijlstra April 20, 2021, 10:47 a.m. UTC
The following commit has been merged into the objtool/core branch of tip:

Commit-ID:     20114c899cafa8313534a841cab0ab1f7ab09672
Gitweb:        https://git.kernel.org/tip/20114c899cafa8313534a841cab0ab1f7ab09672
Author:        Josh Poimboeuf <jpoimboe@redhat.com>
AuthorDate:    Wed, 24 Feb 2021 10:29:21 -06:00
Committer:     Josh Poimboeuf <jpoimboe@redhat.com>
CommitterDate: Mon, 19 Apr 2021 12:36:35 -05:00

x86/crypto/sha1_avx2: Standardize stack alignment prologue

Use a more standard prologue for saving the stack pointer before
realigning the stack.

This enables ORC unwinding by allowing objtool to understand the stack
realignment.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/fdaaf8670ed1f52f55ba9a6bbac98c1afddc1af6.1614182415.git.jpoimboe@redhat.com
 arch/x86/crypto/sha1_avx2_x86_64_asm.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)


diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
index 1e594d6..5eed620 100644
--- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
+++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
@@ -645,9 +645,9 @@  _loop3:
 	RESERVE_STACK  = (W_SIZE*4 + 8+24)
 	/* Align stack */
-	mov	%rsp, %rbx
+	push	%rbp
+	mov	%rsp, %rbp
 	and	$~(0x20-1), %rsp
-	push	%rbx
 	sub	$RESERVE_STACK, %rsp
@@ -665,8 +665,8 @@  _loop3:
-	add	$RESERVE_STACK, %rsp
-	pop	%rsp
+	mov	%rbp, %rsp
+	pop	%rbp
 	pop	%r15
 	pop	%r14