* [PATCH] crypto: chacha20_4block_xor_ssse3: Align stack pointer to 64 bytes
@ 2016-01-21 16:24 Eli Cooper
  2016-01-22  7:55 ` Martin Willi
  0 siblings, 1 reply; 5+ messages in thread
From: Eli Cooper @ 2016-01-21 16:24 UTC (permalink / raw)
  To: linux-crypto, Martin Willi; +Cc: Herbert Xu

This aligns the stack pointer in chacha20_4block_xor_ssse3 to 64 bytes.
The function keeps ChaCha20 state in stack variables that are accessed
with aligned SSE instructions, so an insufficiently aligned stack pointer
causes general protection faults and potential kernel panics.

Cc: stable@vger.kernel.org
Signed-off-by: Eli Cooper <elicooper@gmx.com>
---
 arch/x86/crypto/chacha20-ssse3-x86_64.S | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/chacha20-ssse3-x86_64.S b/arch/x86/crypto/chacha20-ssse3-x86_64.S
index 712b130..3a33124 100644
--- a/arch/x86/crypto/chacha20-ssse3-x86_64.S
+++ b/arch/x86/crypto/chacha20-ssse3-x86_64.S
@@ -157,7 +157,9 @@ ENTRY(chacha20_4block_xor_ssse3)
 	# done with the slightly better performing SSSE3 byte shuffling,
 	# 7/12-bit word rotation uses traditional shift+OR.
 
-	sub		$0x40,%rsp
+	mov		%rsp,%r11
+	sub		$0x80,%rsp
+	and		$~63,%rsp
 
 	# x0..15[0-3] = s0..3[0..3]
 	movq		0x00(%rdi),%xmm1
@@ -620,6 +622,6 @@ ENTRY(chacha20_4block_xor_ssse3)
 	pxor		%xmm1,%xmm15
 	movdqu		%xmm15,0xf0(%rsi)
 
-	add		$0x40,%rsp
+	mov		%r11,%rsp
 	ret
 ENDPROC(chacha20_4block_xor_ssse3)
-- 
2.7.0


Thread overview: 5+ messages
2016-01-21 16:24 [PATCH] crypto: chacha20_4block_xor_ssse3: Align stack pointer to 64 bytes Eli Cooper
2016-01-22  7:55 ` Martin Willi
2016-01-25 13:59   ` Herbert Xu
2016-01-27  0:40     ` Jason A. Donenfeld
2016-01-27  3:19       ` Herbert Xu
