* [PATCH 0/11] crypto: CFI fixes
@ 2022-11-18  9:02 ` Eric Biggers
  0 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

This series fixes some crashes when CONFIG_CFI_CLANG (Control Flow
Integrity) is enabled, with the new CFI implementation that was merged
in 6.1 and is supported on x86.  Some of the crashes were unconditional,
while others occurred only when the compiler did not optimize out the
indirect calls.  This series also simplifies some code that was intended
to work around limitations of the old CFI implementation and is
unnecessary for the new one.

Eric Biggers (11):
  crypto: x86/aegis128 - fix crash with CFI enabled
  crypto: x86/aria - fix crash with CFI enabled
  crypto: x86/nhpoly1305 - eliminate unnecessary CFI wrappers
  crypto: x86/sha1 - fix possible crash with CFI enabled
  crypto: x86/sha256 - fix possible crash with CFI enabled
  crypto: x86/sha512 - fix possible crash with CFI enabled
  crypto: x86/sm3 - fix possible crash with CFI enabled
  crypto: arm64/nhpoly1305 - eliminate unnecessary CFI wrapper
  crypto: arm64/sm3 - fix possible crash with CFI enabled
  crypto: arm/nhpoly1305 - eliminate unnecessary CFI wrapper
  Revert "crypto: shash - avoid comparing pointers to exported functions
    under CFI"

 arch/arm/crypto/nh-neon-core.S           |  2 +-
 arch/arm/crypto/nhpoly1305-neon-glue.c   | 11 ++---------
 arch/arm64/crypto/nh-neon-core.S         |  5 +++--
 arch/arm64/crypto/nhpoly1305-neon-glue.c | 11 ++---------
 arch/arm64/crypto/sm3-neon-core.S        |  4 ++--
 arch/x86/crypto/aegis128-aesni-asm.S     |  9 +++++----
 arch/x86/crypto/aria-aesni-avx-asm_64.S  | 13 +++++++------
 arch/x86/crypto/nh-avx2-x86_64.S         |  5 +++--
 arch/x86/crypto/nh-sse2-x86_64.S         |  5 +++--
 arch/x86/crypto/nhpoly1305-avx2-glue.c   | 11 ++---------
 arch/x86/crypto/nhpoly1305-sse2-glue.c   | 11 ++---------
 arch/x86/crypto/sha1_ni_asm.S            |  3 ++-
 arch/x86/crypto/sha1_ssse3_asm.S         |  3 ++-
 arch/x86/crypto/sha256-avx-asm.S         |  3 ++-
 arch/x86/crypto/sha256-avx2-asm.S        |  3 ++-
 arch/x86/crypto/sha256-ssse3-asm.S       |  3 ++-
 arch/x86/crypto/sha256_ni_asm.S          |  3 ++-
 arch/x86/crypto/sha512-avx-asm.S         |  3 ++-
 arch/x86/crypto/sha512-avx2-asm.S        |  3 ++-
 arch/x86/crypto/sha512-ssse3-asm.S       |  3 ++-
 arch/x86/crypto/sm3-avx-asm_64.S         |  3 ++-
 crypto/shash.c                           | 18 +++---------------
 include/crypto/internal/hash.h           |  8 +++++++-
 23 files changed, 62 insertions(+), 81 deletions(-)


base-commit: 557ffd5a4726f8b6f0dd1d4b632ae02c1c063233
-- 
2.38.1



* [PATCH 01/11] crypto: x86/aegis128 - fix crash with CFI enabled
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

Some functions in aegis128-aesni-asm.S are called via indirect function
calls.  These functions need to use SYM_TYPED_FUNC_START instead of
SYM_FUNC_START to cause type hashes to be emitted when the kernel is
built with CONFIG_CFI_CLANG=y.  Otherwise, the code crashes with a CFI
failure.

Fixes: 3c516f89e17e ("x86: Add support for CONFIG_CFI_CLANG")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/aegis128-aesni-asm.S | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
index b48ddebb47489..cdf3215ec272c 100644
--- a/arch/x86/crypto/aegis128-aesni-asm.S
+++ b/arch/x86/crypto/aegis128-aesni-asm.S
@@ -7,6 +7,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/frame.h>
 
 #define STATE0	%xmm0
@@ -402,7 +403,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_ad)
  * void crypto_aegis128_aesni_enc(void *state, unsigned int length,
  *                                const void *src, void *dst);
  */
-SYM_FUNC_START(crypto_aegis128_aesni_enc)
+SYM_TYPED_FUNC_START(crypto_aegis128_aesni_enc)
 	FRAME_BEGIN
 
 	cmp $0x10, LEN
@@ -499,7 +500,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_enc)
  * void crypto_aegis128_aesni_enc_tail(void *state, unsigned int length,
  *                                     const void *src, void *dst);
  */
-SYM_FUNC_START(crypto_aegis128_aesni_enc_tail)
+SYM_TYPED_FUNC_START(crypto_aegis128_aesni_enc_tail)
 	FRAME_BEGIN
 
 	/* load the state: */
@@ -556,7 +557,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_enc_tail)
  * void crypto_aegis128_aesni_dec(void *state, unsigned int length,
  *                                const void *src, void *dst);
  */
-SYM_FUNC_START(crypto_aegis128_aesni_dec)
+SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec)
 	FRAME_BEGIN
 
 	cmp $0x10, LEN
@@ -653,7 +654,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_dec)
  * void crypto_aegis128_aesni_dec_tail(void *state, unsigned int length,
  *                                     const void *src, void *dst);
  */
-SYM_FUNC_START(crypto_aegis128_aesni_dec_tail)
+SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec_tail)
 	FRAME_BEGIN
 
 	/* load the state: */
-- 
2.38.1



* [PATCH 02/11] crypto: x86/aria - fix crash with CFI enabled
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen, Taehee Yoo

From: Eric Biggers <ebiggers@google.com>

Some functions in aria-aesni-avx-asm_64.S are called via indirect
function calls.  These functions need to use SYM_TYPED_FUNC_START
instead of SYM_FUNC_START to cause type hashes to be emitted when the
kernel is built with CONFIG_CFI_CLANG=y.  Otherwise, the code crashes
with a CFI failure.

Fixes: 3c516f89e17e ("x86: Add support for CONFIG_CFI_CLANG")
Cc: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/aria-aesni-avx-asm_64.S | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/crypto/aria-aesni-avx-asm_64.S b/arch/x86/crypto/aria-aesni-avx-asm_64.S
index c75fd7d015ed8..03ae4cd1d976a 100644
--- a/arch/x86/crypto/aria-aesni-avx-asm_64.S
+++ b/arch/x86/crypto/aria-aesni-avx-asm_64.S
@@ -7,6 +7,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/frame.h>
 
 /* struct aria_ctx: */
@@ -913,7 +914,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_crypt_16way)
 	RET;
 SYM_FUNC_END(__aria_aesni_avx_crypt_16way)
 
-SYM_FUNC_START(aria_aesni_avx_encrypt_16way)
+SYM_TYPED_FUNC_START(aria_aesni_avx_encrypt_16way)
 	/* input:
 	*      %rdi: ctx, CTX
 	*      %rsi: dst
@@ -938,7 +939,7 @@ SYM_FUNC_START(aria_aesni_avx_encrypt_16way)
 	RET;
 SYM_FUNC_END(aria_aesni_avx_encrypt_16way)
 
-SYM_FUNC_START(aria_aesni_avx_decrypt_16way)
+SYM_TYPED_FUNC_START(aria_aesni_avx_decrypt_16way)
 	/* input:
 	*      %rdi: ctx, CTX
 	*      %rsi: dst
@@ -1039,7 +1040,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_ctr_gen_keystream_16way)
 	RET;
 SYM_FUNC_END(__aria_aesni_avx_ctr_gen_keystream_16way)
 
-SYM_FUNC_START(aria_aesni_avx_ctr_crypt_16way)
+SYM_TYPED_FUNC_START(aria_aesni_avx_ctr_crypt_16way)
 	/* input:
 	*      %rdi: ctx
 	*      %rsi: dst
@@ -1208,7 +1209,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_gfni_crypt_16way)
 	RET;
 SYM_FUNC_END(__aria_aesni_avx_gfni_crypt_16way)
 
-SYM_FUNC_START(aria_aesni_avx_gfni_encrypt_16way)
+SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_encrypt_16way)
 	/* input:
 	*      %rdi: ctx, CTX
 	*      %rsi: dst
@@ -1233,7 +1234,7 @@ SYM_FUNC_START(aria_aesni_avx_gfni_encrypt_16way)
 	RET;
 SYM_FUNC_END(aria_aesni_avx_gfni_encrypt_16way)
 
-SYM_FUNC_START(aria_aesni_avx_gfni_decrypt_16way)
+SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_decrypt_16way)
 	/* input:
 	*      %rdi: ctx, CTX
 	*      %rsi: dst
@@ -1258,7 +1259,7 @@ SYM_FUNC_START(aria_aesni_avx_gfni_decrypt_16way)
 	RET;
 SYM_FUNC_END(aria_aesni_avx_gfni_decrypt_16way)
 
-SYM_FUNC_START(aria_aesni_avx_gfni_ctr_crypt_16way)
+SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_ctr_crypt_16way)
 	/* input:
 	*      %rdi: ctx
 	*      %rsi: dst
-- 
2.38.1



* [PATCH 03/11] crypto: x86/nhpoly1305 - eliminate unnecessary CFI wrappers
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

Since the CFI implementation now supports indirect calls to assembly
functions, take advantage of that rather than use wrapper functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/nh-avx2-x86_64.S       |  5 +++--
 arch/x86/crypto/nh-sse2-x86_64.S       |  5 +++--
 arch/x86/crypto/nhpoly1305-avx2-glue.c | 11 ++---------
 arch/x86/crypto/nhpoly1305-sse2-glue.c | 11 ++---------
 4 files changed, 10 insertions(+), 22 deletions(-)

diff --git a/arch/x86/crypto/nh-avx2-x86_64.S b/arch/x86/crypto/nh-avx2-x86_64.S
index 6a0b15e7196a8..ef73a3ab87263 100644
--- a/arch/x86/crypto/nh-avx2-x86_64.S
+++ b/arch/x86/crypto/nh-avx2-x86_64.S
@@ -8,6 +8,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 #define		PASS0_SUMS	%ymm0
 #define		PASS1_SUMS	%ymm1
@@ -65,11 +66,11 @@
 
 /*
  * void nh_avx2(const u32 *key, const u8 *message, size_t message_len,
- *		u8 hash[NH_HASH_BYTES])
+ *		__le64 hash[NH_NUM_PASSES])
  *
  * It's guaranteed that message_len % 16 == 0.
  */
-SYM_FUNC_START(nh_avx2)
+SYM_TYPED_FUNC_START(nh_avx2)
 
 	vmovdqu		0x00(KEY), K0
 	vmovdqu		0x10(KEY), K1
diff --git a/arch/x86/crypto/nh-sse2-x86_64.S b/arch/x86/crypto/nh-sse2-x86_64.S
index 34c567bbcb4fa..75fb994b6d177 100644
--- a/arch/x86/crypto/nh-sse2-x86_64.S
+++ b/arch/x86/crypto/nh-sse2-x86_64.S
@@ -8,6 +8,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 #define		PASS0_SUMS	%xmm0
 #define		PASS1_SUMS	%xmm1
@@ -67,11 +68,11 @@
 
 /*
  * void nh_sse2(const u32 *key, const u8 *message, size_t message_len,
- *		u8 hash[NH_HASH_BYTES])
+ *		__le64 hash[NH_NUM_PASSES])
  *
  * It's guaranteed that message_len % 16 == 0.
  */
-SYM_FUNC_START(nh_sse2)
+SYM_TYPED_FUNC_START(nh_sse2)
 
 	movdqu		0x00(KEY), K0
 	movdqu		0x10(KEY), K1
diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c
index 8ea5ab0f1ca74..46b036204ed91 100644
--- a/arch/x86/crypto/nhpoly1305-avx2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c
@@ -14,14 +14,7 @@
 #include <asm/simd.h>
 
 asmlinkage void nh_avx2(const u32 *key, const u8 *message, size_t message_len,
-			u8 hash[NH_HASH_BYTES]);
-
-/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */
-static void _nh_avx2(const u32 *key, const u8 *message, size_t message_len,
-		     __le64 hash[NH_NUM_PASSES])
-{
-	nh_avx2(key, message, message_len, (u8 *)hash);
-}
+			__le64 hash[NH_NUM_PASSES]);
 
 static int nhpoly1305_avx2_update(struct shash_desc *desc,
 				  const u8 *src, unsigned int srclen)
@@ -33,7 +26,7 @@ static int nhpoly1305_avx2_update(struct shash_desc *desc,
 		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_fpu_begin();
-		crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
+		crypto_nhpoly1305_update_helper(desc, src, n, nh_avx2);
 		kernel_fpu_end();
 		src += n;
 		srclen -= n;
diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c
index 2b353d42ed13f..4a4970d751076 100644
--- a/arch/x86/crypto/nhpoly1305-sse2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c
@@ -14,14 +14,7 @@
 #include <asm/simd.h>
 
 asmlinkage void nh_sse2(const u32 *key, const u8 *message, size_t message_len,
-			u8 hash[NH_HASH_BYTES]);
-
-/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */
-static void _nh_sse2(const u32 *key, const u8 *message, size_t message_len,
-		     __le64 hash[NH_NUM_PASSES])
-{
-	nh_sse2(key, message, message_len, (u8 *)hash);
-}
+			__le64 hash[NH_NUM_PASSES]);
 
 static int nhpoly1305_sse2_update(struct shash_desc *desc,
 				  const u8 *src, unsigned int srclen)
@@ -33,7 +26,7 @@ static int nhpoly1305_sse2_update(struct shash_desc *desc,
 		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_fpu_begin();
-		crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);
+		crypto_nhpoly1305_update_helper(desc, src, n, nh_sse2);
 		kernel_fpu_end();
 		src += n;
 		srclen -= n;
-- 
2.38.1



* [PATCH 04/11] crypto: x86/sha1 - fix possible crash with CFI enabled
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

sha1_transform_ssse3(), sha1_transform_avx(), and sha1_ni_transform()
(but not sha1_transform_avx2()) are called via indirect function calls.
These functions need to use SYM_TYPED_FUNC_START instead of
SYM_FUNC_START to cause type hashes to be emitted when the kernel is
built with CONFIG_CFI_CLANG=y.  Otherwise, the code crashes with a CFI
failure (if the compiler didn't happen to optimize out the indirect
calls).

Fixes: 3c516f89e17e ("x86: Add support for CONFIG_CFI_CLANG")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/sha1_ni_asm.S    | 3 ++-
 arch/x86/crypto/sha1_ssse3_asm.S | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/sha1_ni_asm.S b/arch/x86/crypto/sha1_ni_asm.S
index 2f94ec0e763bf..3cae5a1bb3d6e 100644
--- a/arch/x86/crypto/sha1_ni_asm.S
+++ b/arch/x86/crypto/sha1_ni_asm.S
@@ -54,6 +54,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 #define DIGEST_PTR	%rdi	/* 1st arg */
 #define DATA_PTR	%rsi	/* 2nd arg */
@@ -93,7 +94,7 @@
  */
 .text
 .align 32
-SYM_FUNC_START(sha1_ni_transform)
+SYM_TYPED_FUNC_START(sha1_ni_transform)
 	push		%rbp
 	mov		%rsp, %rbp
 	sub		$FRAME_SIZE, %rsp
diff --git a/arch/x86/crypto/sha1_ssse3_asm.S b/arch/x86/crypto/sha1_ssse3_asm.S
index 263f916362e02..f54988c80eb40 100644
--- a/arch/x86/crypto/sha1_ssse3_asm.S
+++ b/arch/x86/crypto/sha1_ssse3_asm.S
@@ -25,6 +25,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 #define CTX	%rdi	// arg1
 #define BUF	%rsi	// arg2
@@ -67,7 +68,7 @@
  * param: function's name
  */
 .macro SHA1_VECTOR_ASM  name
-	SYM_FUNC_START(\name)
+	SYM_TYPED_FUNC_START(\name)
 
 	push	%rbx
 	push	%r12
-- 
2.38.1



* [PATCH 05/11] crypto: x86/sha256 - fix possible crash with CFI enabled
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

sha256_transform_ssse3(), sha256_transform_avx(),
sha256_transform_rorx(), and sha256_transform_ni() are called via
indirect function calls.  These functions need to use
SYM_TYPED_FUNC_START instead of SYM_FUNC_START to cause type hashes to
be emitted when the kernel is built with CONFIG_CFI_CLANG=y.  Otherwise,
the code crashes with a CFI failure (if the compiler didn't happen to
optimize out the indirect calls).

Fixes: 3c516f89e17e ("x86: Add support for CONFIG_CFI_CLANG")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/sha256-avx-asm.S   | 3 ++-
 arch/x86/crypto/sha256-avx2-asm.S  | 3 ++-
 arch/x86/crypto/sha256-ssse3-asm.S | 3 ++-
 arch/x86/crypto/sha256_ni_asm.S    | 3 ++-
 4 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/crypto/sha256-avx-asm.S b/arch/x86/crypto/sha256-avx-asm.S
index 3baa1ec390974..06ea30c20828d 100644
--- a/arch/x86/crypto/sha256-avx-asm.S
+++ b/arch/x86/crypto/sha256-avx-asm.S
@@ -48,6 +48,7 @@
 ########################################################################
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 ## assume buffers not aligned
 #define    VMOVDQ vmovdqu
@@ -346,7 +347,7 @@ a = TMP_
 ## arg 3 : Num blocks
 ########################################################################
 .text
-SYM_FUNC_START(sha256_transform_avx)
+SYM_TYPED_FUNC_START(sha256_transform_avx)
 .align 32
 	pushq   %rbx
 	pushq   %r12
diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
index 9bcdbc47b8b4b..2d2be531a11ed 100644
--- a/arch/x86/crypto/sha256-avx2-asm.S
+++ b/arch/x86/crypto/sha256-avx2-asm.S
@@ -49,6 +49,7 @@
 ########################################################################
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 ## assume buffers not aligned
 #define	VMOVDQ vmovdqu
@@ -523,7 +524,7 @@ STACK_SIZE	= _CTX      + _CTX_SIZE
 ## arg 3 : Num blocks
 ########################################################################
 .text
-SYM_FUNC_START(sha256_transform_rorx)
+SYM_TYPED_FUNC_START(sha256_transform_rorx)
 .align 32
 	pushq	%rbx
 	pushq	%r12
diff --git a/arch/x86/crypto/sha256-ssse3-asm.S b/arch/x86/crypto/sha256-ssse3-asm.S
index c4a5db612c327..7db28839108dd 100644
--- a/arch/x86/crypto/sha256-ssse3-asm.S
+++ b/arch/x86/crypto/sha256-ssse3-asm.S
@@ -47,6 +47,7 @@
 ########################################################################
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 ## assume buffers not aligned
 #define    MOVDQ movdqu
@@ -355,7 +356,7 @@ a = TMP_
 ## arg 3 : Num blocks
 ########################################################################
 .text
-SYM_FUNC_START(sha256_transform_ssse3)
+SYM_TYPED_FUNC_START(sha256_transform_ssse3)
 .align 32
 	pushq   %rbx
 	pushq   %r12
diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S
index 94d50dd27cb53..47f93937f798a 100644
--- a/arch/x86/crypto/sha256_ni_asm.S
+++ b/arch/x86/crypto/sha256_ni_asm.S
@@ -54,6 +54,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 #define DIGEST_PTR	%rdi	/* 1st arg */
 #define DATA_PTR	%rsi	/* 2nd arg */
@@ -97,7 +98,7 @@
 
 .text
 .align 32
-SYM_FUNC_START(sha256_ni_transform)
+SYM_TYPED_FUNC_START(sha256_ni_transform)
 
 	shl		$6, NUM_BLKS		/*  convert to bytes */
 	jz		.Ldone_hash
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 06/11] crypto: x86/sha512 - fix possible crash with CFI enabled
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

sha512_transform_rorx(), sha512_transform_ssse3(), and
sha512_transform_avx() are called via indirect function calls.  These
functions need to use SYM_TYPED_FUNC_START instead of SYM_FUNC_START to
cause type hashes to be emitted when the kernel is built with
CONFIG_CFI_CLANG=y.  Otherwise, the code crashes with a CFI failure (if
the compiler didn't happen to optimize out the indirect calls).

Fixes: 3c516f89e17e ("x86: Add support for CONFIG_CFI_CLANG")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/sha512-avx-asm.S   | 3 ++-
 arch/x86/crypto/sha512-avx2-asm.S  | 3 ++-
 arch/x86/crypto/sha512-ssse3-asm.S | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/crypto/sha512-avx-asm.S b/arch/x86/crypto/sha512-avx-asm.S
index 1fefe6dd3a9e2..b0984f19fdb40 100644
--- a/arch/x86/crypto/sha512-avx-asm.S
+++ b/arch/x86/crypto/sha512-avx-asm.S
@@ -48,6 +48,7 @@
 ########################################################################
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 .text
 
@@ -273,7 +274,7 @@ frame_size = frame_WK + WK_SIZE
 # of SHA512 message blocks.
 # "blocks" is the message length in SHA512 blocks
 ########################################################################
-SYM_FUNC_START(sha512_transform_avx)
+SYM_TYPED_FUNC_START(sha512_transform_avx)
 	test msglen, msglen
 	je nowork
 
diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/arch/x86/crypto/sha512-avx2-asm.S
index 5cdaab7d69015..b1ca99055ef99 100644
--- a/arch/x86/crypto/sha512-avx2-asm.S
+++ b/arch/x86/crypto/sha512-avx2-asm.S
@@ -50,6 +50,7 @@
 ########################################################################
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 .text
 
@@ -565,7 +566,7 @@ frame_size = frame_CTX + CTX_SIZE
 # of SHA512 message blocks.
 # "blocks" is the message length in SHA512 blocks
 ########################################################################
-SYM_FUNC_START(sha512_transform_rorx)
+SYM_TYPED_FUNC_START(sha512_transform_rorx)
 	# Save GPRs
 	push	%rbx
 	push	%r12
diff --git a/arch/x86/crypto/sha512-ssse3-asm.S b/arch/x86/crypto/sha512-ssse3-asm.S
index b84c22e06c5f7..c06afb5270e5f 100644
--- a/arch/x86/crypto/sha512-ssse3-asm.S
+++ b/arch/x86/crypto/sha512-ssse3-asm.S
@@ -48,6 +48,7 @@
 ########################################################################
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 .text
 
@@ -274,7 +275,7 @@ frame_size = frame_WK + WK_SIZE
 # of SHA512 message blocks.
 # "blocks" is the message length in SHA512 blocks.
 ########################################################################
-SYM_FUNC_START(sha512_transform_ssse3)
+SYM_TYPED_FUNC_START(sha512_transform_ssse3)
 
 	test msglen, msglen
 	je nowork
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 07/11] crypto: x86/sm3 - fix possible crash with CFI enabled
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

sm3_transform_avx() is called via indirect function calls.  This
function needs to use SYM_TYPED_FUNC_START instead of SYM_FUNC_START to
cause type hashes to be emitted when the kernel is built with
CONFIG_CFI_CLANG=y.  Otherwise, the code crashes with a CFI failure (if
the compiler didn't happen to optimize out the indirect call).

Fixes: 3c516f89e17e ("x86: Add support for CONFIG_CFI_CLANG")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/sm3-avx-asm_64.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/crypto/sm3-avx-asm_64.S b/arch/x86/crypto/sm3-avx-asm_64.S
index b12b9efb5ec51..8fc5ac681fd63 100644
--- a/arch/x86/crypto/sm3-avx-asm_64.S
+++ b/arch/x86/crypto/sm3-avx-asm_64.S
@@ -12,6 +12,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/frame.h>
 
 /* Context structure */
@@ -328,7 +329,7 @@
  *                        const u8 *data, int nblocks);
  */
 .align 16
-SYM_FUNC_START(sm3_transform_avx)
+SYM_TYPED_FUNC_START(sm3_transform_avx)
 	/* input:
 	 *	%rdi: ctx, CTX
 	 *	%rsi: data (64*nblks bytes)
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 08/11] crypto: arm64/nhpoly1305 - eliminate unnecessary CFI wrapper
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

Since the CFI implementation now supports indirect calls to assembly
functions, take advantage of that rather than use a wrapper function.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm64/crypto/nh-neon-core.S         |  5 +++--
 arch/arm64/crypto/nhpoly1305-neon-glue.c | 11 ++---------
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/crypto/nh-neon-core.S b/arch/arm64/crypto/nh-neon-core.S
index 51c0a534ef87c..13eda08fda1e5 100644
--- a/arch/arm64/crypto/nh-neon-core.S
+++ b/arch/arm64/crypto/nh-neon-core.S
@@ -8,6 +8,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 	KEY		.req	x0
 	MESSAGE		.req	x1
@@ -58,11 +59,11 @@
 
 /*
  * void nh_neon(const u32 *key, const u8 *message, size_t message_len,
- *		u8 hash[NH_HASH_BYTES])
+ *		__le64 hash[NH_NUM_PASSES])
  *
  * It's guaranteed that message_len % 16 == 0.
  */
-SYM_FUNC_START(nh_neon)
+SYM_TYPED_FUNC_START(nh_neon)
 
 	ld1		{K0.4s,K1.4s}, [KEY], #32
 	  movi		PASS0_SUMS.2d, #0
diff --git a/arch/arm64/crypto/nhpoly1305-neon-glue.c b/arch/arm64/crypto/nhpoly1305-neon-glue.c
index c5405e6a6db76..cd882c35d9252 100644
--- a/arch/arm64/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm64/crypto/nhpoly1305-neon-glue.c
@@ -14,14 +14,7 @@
 #include <linux/module.h>
 
 asmlinkage void nh_neon(const u32 *key, const u8 *message, size_t message_len,
-			u8 hash[NH_HASH_BYTES]);
-
-/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */
-static void _nh_neon(const u32 *key, const u8 *message, size_t message_len,
-		     __le64 hash[NH_NUM_PASSES])
-{
-	nh_neon(key, message, message_len, (u8 *)hash);
-}
+			__le64 hash[NH_NUM_PASSES]);
 
 static int nhpoly1305_neon_update(struct shash_desc *desc,
 				  const u8 *src, unsigned int srclen)
@@ -33,7 +26,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
 		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_neon_begin();
-		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+		crypto_nhpoly1305_update_helper(desc, src, n, nh_neon);
 		kernel_neon_end();
 		src += n;
 		srclen -= n;
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 09/11] crypto: arm64/sm3 - fix possible crash with CFI enabled
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

sm3_neon_transform() is called via indirect function calls.  This
function needs to use SYM_TYPED_FUNC_START instead of SYM_FUNC_START to
cause type hashes to be emitted when the kernel is built with
CONFIG_CFI_CLANG=y.  Otherwise, the code crashes with a CFI failure (if
the compiler didn't happen to optimize out the indirect call).

Fixes: c50d32859e70 ("arm64: Add types to indirect called assembly functions")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm64/crypto/sm3-neon-core.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/crypto/sm3-neon-core.S b/arch/arm64/crypto/sm3-neon-core.S
index 3e3b4e5c736fc..8abea1d39ddd9 100644
--- a/arch/arm64/crypto/sm3-neon-core.S
+++ b/arch/arm64/crypto/sm3-neon-core.S
@@ -9,7 +9,7 @@
  */
 
 #include <linux/linkage.h>
-#include <asm/assembler.h>
+#include <linux/cfi_types.h>
 
 /* Context structure */
 
@@ -351,7 +351,7 @@
 	 */
 	.text
 .align 3
-SYM_FUNC_START(sm3_neon_transform)
+SYM_TYPED_FUNC_START(sm3_neon_transform)
 	ldp		ra, rb, [RSTATE, #0]
 	ldp		rc, rd, [RSTATE, #8]
 	ldp		re, rf, [RSTATE, #16]
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 10/11] crypto: arm/nhpoly1305 - eliminate unnecessary CFI wrapper
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

The arm architecture doesn't support CFI yet, and even if it did, the
new CFI implementation supports indirect calls to assembly functions.
Therefore, there's no need to use a wrapper function for nh_neon().

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm/crypto/nh-neon-core.S         |  2 +-
 arch/arm/crypto/nhpoly1305-neon-glue.c | 11 ++---------
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/arch/arm/crypto/nh-neon-core.S b/arch/arm/crypto/nh-neon-core.S
index 434d80ab531c2..01620a0782ca9 100644
--- a/arch/arm/crypto/nh-neon-core.S
+++ b/arch/arm/crypto/nh-neon-core.S
@@ -69,7 +69,7 @@
 
 /*
  * void nh_neon(const u32 *key, const u8 *message, size_t message_len,
- *		u8 hash[NH_HASH_BYTES])
+ *		__le64 hash[NH_NUM_PASSES])
  *
  * It's guaranteed that message_len % 16 == 0.
  */
diff --git a/arch/arm/crypto/nhpoly1305-neon-glue.c b/arch/arm/crypto/nhpoly1305-neon-glue.c
index ffa8d73fe722c..e93e41ff26566 100644
--- a/arch/arm/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm/crypto/nhpoly1305-neon-glue.c
@@ -14,14 +14,7 @@
 #include <linux/module.h>
 
 asmlinkage void nh_neon(const u32 *key, const u8 *message, size_t message_len,
-			u8 hash[NH_HASH_BYTES]);
-
-/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */
-static void _nh_neon(const u32 *key, const u8 *message, size_t message_len,
-		     __le64 hash[NH_NUM_PASSES])
-{
-	nh_neon(key, message, message_len, (u8 *)hash);
-}
+			__le64 hash[NH_NUM_PASSES]);
 
 static int nhpoly1305_neon_update(struct shash_desc *desc,
 				  const u8 *src, unsigned int srclen)
@@ -33,7 +26,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
 		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_neon_begin();
-		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+		crypto_nhpoly1305_update_helper(desc, src, n, nh_neon);
 		kernel_neon_end();
 		src += n;
 		srclen -= n;
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 11/11] Revert "crypto: shash - avoid comparing pointers to exported functions under CFI"
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:02   ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18  9:02 UTC (permalink / raw)
  To: linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen

From: Eric Biggers <ebiggers@google.com>

This reverts commit 22ca9f4aaf431a9413dcc115dd590123307f274f because CFI
no longer breaks cross-module function address equality, so
crypto_shash_alg_has_setkey() can now be an inline function like before.

This commit should not be backported to kernels that don't have the new
CFI implementation.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/shash.c                 | 18 +++---------------
 include/crypto/internal/hash.h |  8 +++++++-
 2 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index 4c88e63b3350f..0f85431588267 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -20,24 +20,12 @@
 
 static const struct crypto_type crypto_shash_type;
 
-static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
-			   unsigned int keylen)
+int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
+		    unsigned int keylen)
 {
 	return -ENOSYS;
 }
-
-/*
- * Check whether an shash algorithm has a setkey function.
- *
- * For CFI compatibility, this must not be an inline function.  This is because
- * when CFI is enabled, modules won't get the same address for shash_no_setkey
- * (if it were exported, which inlining would require) as the core kernel will.
- */
-bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
-{
-	return alg->setkey != shash_no_setkey;
-}
-EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);
+EXPORT_SYMBOL_GPL(shash_no_setkey);
 
 static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
 				  unsigned int keylen)
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 25806141db591..0a288dddcf5be 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -75,7 +75,13 @@ void crypto_unregister_ahashes(struct ahash_alg *algs, int count);
 int ahash_register_instance(struct crypto_template *tmpl,
 			    struct ahash_instance *inst);
 
-bool crypto_shash_alg_has_setkey(struct shash_alg *alg);
+int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
+		    unsigned int keylen);
+
+static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
+{
+	return alg->setkey != shash_no_setkey;
+}
 
 static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
 {
-- 
2.38.1
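For readers following along, the pattern this revert restores can be sketched in plain C as below. This is a simplified user-space illustration with stand-in types and a made-up errno value, not the kernel's actual code; the point is only that once every translation unit observes the same address for the placeholder function, the pointer comparison can safely live in an inline helper again.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct shash_alg. */
struct shash_alg {
	int (*setkey)(const unsigned char *key, unsigned int keylen);
};

/* Analogous to shash_no_setkey(): the placeholder installed for
 * algorithms that take no key. */
static int no_setkey(const unsigned char *key, unsigned int keylen)
{
	(void)key; (void)keylen;
	return -38;	/* stands in for -ENOSYS */
}

/* With the new kCFI scheme, modules and the core kernel agree on the
 * address of an exported function, so this comparison is reliable even
 * when the helper is inlined into other translation units. */
static inline int alg_has_setkey(const struct shash_alg *alg)
{
	return alg->setkey != no_setkey;
}
```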


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH 0/11] crypto: CFI fixes
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18  9:51   ` Peter Zijlstra
  -1 siblings, 0 replies; 36+ messages in thread
From: Peter Zijlstra @ 2022-11-18  9:51 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, x86, linux-arm-kernel, Sami Tolvanen

On Fri, Nov 18, 2022 at 01:02:09AM -0800, Eric Biggers wrote:
> This series fixes some crashes when CONFIG_CFI_CLANG (Control Flow
> Integrity) is enabled, with the new CFI implementation that was merged
> in 6.1 and is supported on x86.  Some of them were unconditional
> crashes, while others depended on whether the compiler optimized out the
> indirect calls or not.  This series also simplifies some code that was
> intended to work around limitations of the old CFI implementation and is
> unnecessary for the new CFI implementation.
> 
> Eric Biggers (11):
>   crypto: x86/aegis128 - fix crash with CFI enabled
>   crypto: x86/aria - fix crash with CFI enabled
>   crypto: x86/nhpoly1305 - eliminate unnecessary CFI wrappers
>   crypto: x86/sha1 - fix possible crash with CFI enabled
>   crypto: x86/sha256 - fix possible crash with CFI enabled
>   crypto: x86/sha512 - fix possible crash with CFI enabled
>   crypto: x86/sm3 - fix possible crash with CFI enabled
>   crypto: arm64/nhpoly1305 - eliminate unnecessary CFI wrapper
>   crypto: arm64/sm3 - fix possible crash with CFI enabled
>   crypto: arm/nhpoly1305 - eliminate unnecessary CFI wrapper
>   Revert "crypto: shash - avoid comparing pointers to exported functions
>     under CFI"

These all look good. They will however conflict with the alignment
cleanups/changes we've got in tip/x86/core, but there's no helping that,
I suppose.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH 0/11] crypto: CFI fixes
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18 15:43   ` Elliott, Robert (Servers)
  -1 siblings, 0 replies; 36+ messages in thread
From: Elliott, Robert (Servers) @ 2022-11-18 15:43 UTC (permalink / raw)
  To: Eric Biggers, linux-crypto; +Cc: x86, linux-arm-kernel, Sami Tolvanen



> -----Original Message-----
> From: Eric Biggers <ebiggers@kernel.org>
> Sent: Friday, November 18, 2022 3:02 AM
> To: linux-crypto@vger.kernel.org
> Cc: x86@kernel.org; linux-arm-kernel@lists.infradead.org; Sami Tolvanen
> <samitolvanen@google.com>
> Subject: [PATCH 0/11] crypto: CFI fixes
> 
> This series fixes some crashes when CONFIG_CFI_CLANG (Control Flow
> Integrity) is enabled, with the new CFI implementation that was merged
> in 6.1 and is supported on x86.  Some of them were unconditional
> crashes, while others depended on whether the compiler optimized out the
> indirect calls or not.  This series also simplifies some code that was
> intended to work around limitations of the old CFI implementation and is
> unnecessary for the new CFI implementation.

Some of the x86 modules EXPORT their asm functions. Does that leave them
at risk of being called indirectly?

arch/x86/crypto/camellia-aesni-avx-asm_64.S:SYM_FUNC_START(camellia_ecb_dec_16way)
arch/x86/crypto/camellia-aesni-avx-asm_64.S:SYM_FUNC_START(camellia_ecb_enc_16way)
arch/x86/crypto/camellia-aesni-avx-asm_64.S:SYM_FUNC_START(camellia_cbc_dec_16way)
arch/x86/crypto/camellia_aesni_avx_glue.c:asmlinkage void camellia_ecb_enc_16way(const void *ctx, u8 *dst, const u8 *src);
arch/x86/crypto/camellia_aesni_avx_glue.c:EXPORT_SYMBOL_GPL(camellia_ecb_enc_16way);
arch/x86/crypto/camellia_aesni_avx_glue.c:asmlinkage void camellia_ecb_dec_16way(const void *ctx, u8 *dst, const u8 *src);
arch/x86/crypto/camellia_aesni_avx_glue.c:EXPORT_SYMBOL_GPL(camellia_ecb_dec_16way);
arch/x86/crypto/camellia_aesni_avx_glue.c:asmlinkage void camellia_cbc_dec_16way(const void *ctx, u8 *dst, const u8 *src);
arch/x86/crypto/camellia_aesni_avx_glue.c:EXPORT_SYMBOL_GPL(camellia_cbc_dec_16way);

arch/x86/crypto/twofish-x86_64-asm_64-3way.S:SYM_FUNC_START(__twofish_enc_blk_3way)
arch/x86/crypto/twofish.h:asmlinkage void __twofish_enc_blk_3way(const void *ctx, u8 *dst, const u8 *src,
arch/x86/crypto/twofish_glue_3way.c:EXPORT_SYMBOL_GPL(__twofish_enc_blk_3way);


A few of the x86 asm functions called by C code are not declared with
asmlinkage like all the others. They're not EXPORTed, though, so whether
they're called indirectly can be determined by inspection.

u32 crc32_pclmul_le_16(unsigned char const *buffer, size_t len, u32 crc32);

void clmul_ghash_mul(char *dst, const u128 *shash);

void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
                        const u128 *shash);
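The determinability Elliott describes can be illustrated: whether a function is "indirectly used" is a property of whether its address is ever taken (e.g. stored in an ops table), and that is visible in the source. A hypothetical C sketch with invented names, not the actual crypto glue code:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int (*update_fn)(unsigned char *dst,
				  const unsigned char *src,
				  unsigned int len);

/* Only direct call sites exist for this helper, so no CFI check is
 * ever involved when it runs. */
static unsigned int helper_direct(unsigned int x)
{
	return x + 1;
}

/* This helper's address is taken below, so it is reached through an
 * indirect (and, under kCFI, type-checked) call. */
static unsigned int helper_indirect(unsigned char *dst,
				    const unsigned char *src,
				    unsigned int len)
{
	(void)dst; (void)src;
	return len;
}

struct ops { update_fn update; };
static const struct ops my_ops = { .update = helper_indirect };

static unsigned int run(const struct ops *o, unsigned int len)
{
	/* one indirect call, one direct call */
	return helper_direct(o->update(NULL, NULL, len));
}
```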



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 0/11] crypto: CFI fixes
  2022-11-18  9:02 ` Eric Biggers
@ 2022-11-18 17:21   ` Sami Tolvanen
  -1 siblings, 0 replies; 36+ messages in thread
From: Sami Tolvanen @ 2022-11-18 17:21 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, x86, linux-arm-kernel

On Fri, Nov 18, 2022 at 1:04 AM Eric Biggers <ebiggers@kernel.org> wrote:
>
> This series fixes some crashes when CONFIG_CFI_CLANG (Control Flow
> Integrity) is enabled, with the new CFI implementation that was merged
> in 6.1 and is supported on x86.  Some of them were unconditional
> crashes, while others depended on whether the compiler optimized out the
> indirect calls or not.  This series also simplifies some code that was
> intended to work around limitations of the old CFI implementation and is
> unnecessary for the new CFI implementation.
>
> Eric Biggers (11):
>   crypto: x86/aegis128 - fix crash with CFI enabled
>   crypto: x86/aria - fix crash with CFI enabled
>   crypto: x86/nhpoly1305 - eliminate unnecessary CFI wrappers
>   crypto: x86/sha1 - fix possible crash with CFI enabled
>   crypto: x86/sha256 - fix possible crash with CFI enabled
>   crypto: x86/sha512 - fix possible crash with CFI enabled
>   crypto: x86/sm3 - fix possible crash with CFI enabled
>   crypto: arm64/nhpoly1305 - eliminate unnecessary CFI wrapper
>   crypto: arm64/sm3 - fix possible crash with CFI enabled
>   crypto: arm/nhpoly1305 - eliminate unnecessary CFI wrapper
>   Revert "crypto: shash - avoid comparing pointers to exported functions
>     under CFI"

Thanks for the patches, Eric!  These look good to me.

Reviewed-by: Sami Tolvanen <samitolvanen@google.com>

Sami

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 0/11] crypto: CFI fixes
  2022-11-18 15:43   ` Elliott, Robert (Servers)
@ 2022-11-18 18:49     ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18 18:49 UTC (permalink / raw)
  To: Elliott, Robert (Servers)
  Cc: linux-crypto, x86, linux-arm-kernel, Sami Tolvanen

On Fri, Nov 18, 2022 at 03:43:55PM +0000, Elliott, Robert (Servers) wrote:
> 
> > -----Original Message-----
> > From: Eric Biggers <ebiggers@kernel.org>
> > Sent: Friday, November 18, 2022 3:02 AM
> > To: linux-crypto@vger.kernel.org
> > Cc: x86@kernel.org; linux-arm-kernel@lists.infradead.org; Sami Tolvanen
> > <samitolvanen@google.com>
> > Subject: [PATCH 0/11] crypto: CFI fixes
> > 
> > This series fixes some crashes when CONFIG_CFI_CLANG (Control Flow
> > Integrity) is enabled, with the new CFI implementation that was merged
> > in 6.1 and is supported on x86.  Some of them were unconditional
> > crashes, while others depended on whether the compiler optimized out the
> > indirect calls or not.  This series also simplifies some code that was
> > intended to work around limitations of the old CFI implementation and is
> > unnecessary for the new CFI implementation.
> 
> Some of the x86 modules EXPORT their asm functions. Does that leave them
> at risk of being called indirectly?
> 
> arch/x86/crypto/camellia-aesni-avx-asm_64.S:SYM_FUNC_START(camellia_ecb_dec_16way)
> arch/x86/crypto/camellia-aesni-avx-asm_64.S:SYM_FUNC_START(camellia_ecb_enc_16way)
> arch/x86/crypto/camellia-aesni-avx-asm_64.S:SYM_FUNC_START(camellia_cbc_dec_16way)
> arch/x86/crypto/camellia_aesni_avx_glue.c:asmlinkage void camellia_ecb_enc_16way(const void *ctx, u8 *dst, const u8 *src);
> arch/x86/crypto/camellia_aesni_avx_glue.c:EXPORT_SYMBOL_GPL(camellia_ecb_enc_16way);
> arch/x86/crypto/camellia_aesni_avx_glue.c:asmlinkage void camellia_ecb_dec_16way(const void *ctx, u8 *dst, const u8 *src);
> arch/x86/crypto/camellia_aesni_avx_glue.c:EXPORT_SYMBOL_GPL(camellia_ecb_dec_16way);
> arch/x86/crypto/camellia_aesni_avx_glue.c:asmlinkage void camellia_cbc_dec_16way(const void *ctx, u8 *dst, const u8 *src);
> arch/x86/crypto/camellia_aesni_avx_glue.c:EXPORT_SYMBOL_GPL(camellia_cbc_dec_16way);
> 
> arch/x86/crypto/twofish-x86_64-asm_64-3way.S:SYM_FUNC_START(__twofish_enc_blk_3way)
> arch/x86/crypto/twofish.h:asmlinkage void __twofish_enc_blk_3way(const void *ctx, u8 *dst, const u8 *src,
> arch/x86/crypto/twofish_glue_3way.c:EXPORT_SYMBOL_GPL(__twofish_enc_blk_3way);

No, that doesn't matter at all.  Whether a symbol is exported or not just has to
do with how the code is divided into modules.  It doesn't have anything to do
with indirect calls.

> A few of the x86 asm functions used by C code are not referenced with
> asmlinkage like all the others. They're not EXPORTed, though, so whether
> they're indirectly used can be determined.
> 
> u32 crc32_pclmul_le_16(unsigned char const *buffer, size_t len, u32 crc32);
> 
> void clmul_ghash_mul(char *dst, const u128 *shash);
> 
> void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
>                         const u128 *shash);

No, the above functions are only called directly.

I did do another search and found that some of the sm4 functions are called
indirectly, though, so I'll send out an updated patchset that fixes those too.

- Eric
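The distinction Eric draws here (exported vs. called indirectly) comes down to where kCFI inserts its check: only indirect call sites compare a type hash recorded with the callee; direct calls are never checked. A toy user-space emulation of that check, with made-up hash constants and function names (not the compiler's actual generated sequence):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef void (*blk_fn)(void *ctx, unsigned char *dst,
		       const unsigned char *src);

/* Emulated kCFI metadata: a hash of the function's type signature is
 * stored alongside each indirectly-callable function. */
struct cfi_target {
	uint32_t type_hash;
	blk_fn   fn;
};

#define HASH_OF_BLK_FN 0x9a7f3c21u	/* made-up constant */

static void noop_blk(void *ctx, unsigned char *dst, const unsigned char *src)
{
	(void)ctx; (void)dst; (void)src;
}

static const struct cfi_target ok_target  = { HASH_OF_BLK_FN, noop_blk };
static const struct cfi_target bad_target = { 0x11111111u,    noop_blk };

/* Emulates the check the compiler emits at an indirect call site:
 * the call proceeds only if the callee's recorded hash matches the
 * hash expected for this function-pointer type. */
static int cfi_indirect_call(const struct cfi_target *t, uint32_t expected,
			     void *ctx, unsigned char *dst,
			     const unsigned char *src)
{
	if (t->type_hash != expected)
		return -1;	/* the real kernel would trap here */
	t->fn(ctx, dst, src);
	return 0;
}
```

This is why exporting an asm symbol by itself creates no hazard, while storing its address in an ops structure does: only the latter routes execution through the checked path.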

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH 0/11] crypto: CFI fixes
  2022-11-18 18:49     ` Eric Biggers
@ 2022-11-18 19:14       ` Elliott, Robert (Servers)
  -1 siblings, 0 replies; 36+ messages in thread
From: Elliott, Robert (Servers) @ 2022-11-18 19:14 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, x86, linux-arm-kernel, Sami Tolvanen


> arch/x86/crypto/twofish_glue_3way.c:EXPORT_SYMBOL_GPL(__twofish_enc_blk_3w
> ay);
> 
> No, that doesn't matter at all.  Whether a symbol is exported or not just
> has to do with how the code is divided into modules.  It doesn't have
> anything to do with indirect calls.
 
I thought that makes them available to external modules, and there is no
control over how they use them.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 0/11] crypto: CFI fixes
  2022-11-18 19:14       ` Elliott, Robert (Servers)
@ 2022-11-18 19:18         ` Eric Biggers
  -1 siblings, 0 replies; 36+ messages in thread
From: Eric Biggers @ 2022-11-18 19:18 UTC (permalink / raw)
  To: Elliott, Robert (Servers)
  Cc: linux-crypto, x86, linux-arm-kernel, Sami Tolvanen

On Fri, Nov 18, 2022 at 07:14:13PM +0000, Elliott, Robert (Servers) wrote:
> 
> > arch/x86/crypto/twofish_glue_3way.c:EXPORT_SYMBOL_GPL(__twofish_enc_blk_3w
> > ay);
> > 
> > No, that doesn't matter at all.  Whether a symbol is exported or not just
> > has to do with how the code is divided into modules.  It doesn't have
> > anything to do with indirect calls.
>  
> I thought that makes them available to external modules, and there is no
> control over how they use them.
> 

The upstream kernel doesn't support out of tree modules.  In the highly unlikely
event that someone wants to make these low-level implementation details
available to out of tree modules, they can just patch the kernel themselves.

- Eric

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2022-11-18 19:24 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-18  9:02 [PATCH 0/11] crypto: CFI fixes Eric Biggers
2022-11-18  9:02 ` Eric Biggers
2022-11-18  9:02 ` [PATCH 01/11] crypto: x86/aegis128 - fix crash with CFI enabled Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 02/11] crypto: x86/aria " Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 03/11] crypto: x86/nhpoly1305 - eliminate unnecessary CFI wrappers Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 04/11] crypto: x86/sha1 - fix possible crash with CFI enabled Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 05/11] crypto: x86/sha256 " Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 06/11] crypto: x86/sha512 " Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 07/11] crypto: x86/sm3 " Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 08/11] crypto: arm64/nhpoly1305 - eliminate unnecessary CFI wrapper Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 09/11] crypto: arm64/sm3 - fix possible crash with CFI enabled Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 10/11] crypto: arm/nhpoly1305 - eliminate unnecessary CFI wrapper Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:02 ` [PATCH 11/11] Revert "crypto: shash - avoid comparing pointers to exported functions under CFI" Eric Biggers
2022-11-18  9:02   ` Eric Biggers
2022-11-18  9:51 ` [PATCH 0/11] crypto: CFI fixes Peter Zijlstra
2022-11-18  9:51   ` Peter Zijlstra
2022-11-18 15:43 ` Elliott, Robert (Servers)
2022-11-18 15:43   ` Elliott, Robert (Servers)
2022-11-18 18:49   ` Eric Biggers
2022-11-18 18:49     ` Eric Biggers
2022-11-18 19:14     ` Elliott, Robert (Servers)
2022-11-18 19:14       ` Elliott, Robert (Servers)
2022-11-18 19:18       ` Eric Biggers
2022-11-18 19:18         ` Eric Biggers
2022-11-18 17:21 ` Sami Tolvanen
2022-11-18 17:21   ` Sami Tolvanen
