* [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
@ 2019-05-07 16:13 Kees Cook
  2019-05-07 16:13 ` [PATCH v3 1/7] crypto: x86/glue_helper: Add static inline function glue macros Kees Cook
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

By using void pointer types or casts, it is possible to indirectly
invoke functions through function pointers whose prototypes do not
match those of the functions being called. This is frequently used to
relax function invocation, making it possible to pass different data
structures to different functions through the same pointer type.

Despite the convenience, this leads to functions with a given prototype
being invoked through pointers with a different prototype. This is
undesirable because it prevents the use of defenses such as
prototype-matching-based Control-Flow Integrity (CFI), which can be
used to mitigate ROP-based attacks.

One way to fix this is to use inline helper functions whose prototypes
match that of the respective invoking function pointer.
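
For illustration only (a minimal standalone sketch; the type and
function names below are invented and do not appear in the patches),
the problem and the helper-based fix look roughly like this:

typedef unsigned char u8;

struct cipher_ctx;

/* The common glue code calls everything through a void-context pointer. */
typedef void (*ecb_fn_t)(void *ctx, u8 *dst, const u8 *src);

/* The cipher implementation takes a concrete context type. */
void cipher_enc_blk(struct cipher_ctx *ctx, u8 *dst, const u8 *src);

/* Today: the mismatch is hidden behind a prototype cast, which a
 * prototype-matching CFI scheme rejects at the indirect call site. */
static ecb_fn_t enc_cast = (ecb_fn_t)cipher_enc_blk;

/* Instead: an inline helper whose prototype matches the pointer. */
static inline void cipher_enc_blk_glue(void *ctx, u8 *dst, const u8 *src)
{
	cipher_enc_blk((struct cipher_ctx *)ctx, dst, src);
}

static ecb_fn_t enc_glue = cipher_enc_blk_glue;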

Given the above, the ongoing efforts to improve Linux kernel security,
and the upcoming kernel support for compilers with CFI features, this
series adds macros for building the needed function definitions and
uses them in camellia, cast6, serpent, twofish, and aesni.

-Kees (and Joao)

v3:
- no longer RFC
- consolidate macros into glue_helper.h
- include aesni which was using casts as well
- remove XTS_TWEAK_CAST while we're at it

v2:
- update cast macros for clarity

v1:
- initial prototype

Joao Moreira (4):
  crypto: x86/crypto: Use new glue function macros
  crypto: x86/camellia: Use new glue function macros
  crypto: x86/twofish: Use new glue function macros
  crypto: x86/cast6: Use new glue function macros

Kees Cook (3):
  crypto: x86/glue_helper: Add static inline function glue macros
  crypto: x86/aesni: Use new glue function macros
  crypto: x86/glue_helper: Remove function prototype cast helpers

 arch/x86/crypto/aesni-intel_glue.c         | 31 ++++-----
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 73 +++++++++-------------
 arch/x86/crypto/camellia_aesni_avx_glue.c  | 63 +++++++------------
 arch/x86/crypto/camellia_glue.c            | 21 +++----
 arch/x86/crypto/cast6_avx_glue.c           | 65 +++++++++----------
 arch/x86/crypto/serpent_avx2_glue.c        | 65 +++++++++----------
 arch/x86/crypto/serpent_avx_glue.c         | 58 ++++++-----------
 arch/x86/crypto/serpent_sse2_glue.c        | 27 +++++---
 arch/x86/crypto/twofish_avx_glue.c         | 71 ++++++++-------------
 arch/x86/crypto/twofish_glue_3way.c        | 28 ++++-----
 arch/x86/include/asm/crypto/camellia.h     | 64 ++++++-------------
 arch/x86/include/asm/crypto/glue_helper.h  | 34 ++++++++--
 arch/x86/include/asm/crypto/serpent-avx.h  | 28 ++++-----
 arch/x86/include/asm/crypto/twofish.h      | 22 ++++---
 include/crypto/xts.h                       |  2 -
 15 files changed, 283 insertions(+), 369 deletions(-)

-- 
2.17.1



* [PATCH v3 1/7] crypto: x86/glue_helper: Add static inline function glue macros
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
@ 2019-05-07 16:13 ` Kees Cook
  2019-05-07 16:13 ` [PATCH v3 2/7] crypto: x86/crypto: Use new glue function macros Kees Cook
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

By using void pointer types or casts, it is possible to indirectly
invoke functions through function pointers whose prototypes do not
match those of the functions being called. This is frequently used to
relax function invocation, making it possible to pass different data
structures to different functions through the same pointer type.

Despite the convenience, this leads to functions with a given prototype
being invoked through pointers with a different prototype. This is
undesirable because it prevents the use of defenses such as
prototype-matching-based Control-Flow Integrity (CFI), which can be
used to mitigate ROP-based attacks.

One way to fix this is to use inline helper functions whose prototypes
match that of the respective invoking function pointer.

Given the above, the ongoing efforts to improve Linux kernel security,
and the upcoming kernel support for compilers with CFI features, this
adds macros for building the needed function definitions, to be used
in later patches for camellia, cast6, serpent, twofish, and aesni.

Co-developed-by: Joao Moreira <jmoreira@suse.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/crypto/glue_helper.h | 32 +++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/include/asm/crypto/glue_helper.h b/arch/x86/include/asm/crypto/glue_helper.h
index d1818634ae7e..3b039d563809 100644
--- a/arch/x86/include/asm/crypto/glue_helper.h
+++ b/arch/x86/include/asm/crypto/glue_helper.h
@@ -23,6 +23,38 @@ typedef void (*common_glue_xts_func_t)(void *ctx, u128 *dst, const u128 *src,
 #define GLUE_CTR_FUNC_CAST(fn) ((common_glue_ctr_func_t)(fn))
 #define GLUE_XTS_FUNC_CAST(fn) ((common_glue_xts_func_t)(fn))
 
+
+#define GLUE_CAST(func, context)					\
+asmlinkage void func(struct context *ctx, u8 *dst, const u8 *src);	\
+asmlinkage static inline						\
+void func ## _glue(void *ctx, u8 *dst, const u8 *src)			\
+{ func((struct context *) ctx, dst, src); }
+
+#define GLUE_CAST_XOR(func, context)					\
+asmlinkage void __ ## func(struct context *ctx, u8 *dst, const u8 *src,	\
+			   bool y);					\
+asmlinkage static inline						\
+void func(void *ctx, u8 *dst, const u8 *src)				\
+{ __ ## func((struct context *) ctx, dst, src, false); }		\
+asmlinkage static inline						\
+void func ## _xor(void *ctx, u8 *dst, const u8 *src)			\
+{ __ ## func((struct context *) ctx, dst, src, true); }
+
+#define GLUE_CAST_CBC(func, context)					\
+asmlinkage void func(struct context *ctx, u8 *dst, const u8 *src);	\
+asmlinkage static inline						\
+void func ## _cbc_glue(void *ctx, u128 *dst, const u128 *src)		\
+{ func((struct context *) ctx, (u8 *) dst, (u8 *) src); }
+
+#define GLUE_CAST_CTR(func, context)					\
+asmlinkage void func(struct context *ctx, u128 *dst,			\
+		     const u128 *src, le128 *iv);			\
+asmlinkage static inline						\
+void func ## _glue(void *ctx, u128 *dst, const u128 *src, le128 *iv)	\
+{ func((struct context *) ctx, dst, src, iv); }
+
+#define GLUE_CAST_XTS(func, context) GLUE_CAST_CTR(func, context)
+
 struct common_glue_func_entry {
 	unsigned int num_blocks; /* number of blocks that @fn will process */
 	union {
-- 
2.17.1



* [PATCH v3 2/7] crypto: x86/crypto: Use new glue function macros
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
  2019-05-07 16:13 ` [PATCH v3 1/7] crypto: x86/glue_helper: Add static inline function glue macros Kees Cook
@ 2019-05-07 16:13 ` Kees Cook
  2019-05-07 16:13 ` [PATCH v3 3/7] crypto: x86/camellia: " Kees Cook
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

From: Joao Moreira <jmoreira@suse.de>

Convert from function prototype casts to the new function declaration
macros so that indirect function calls no longer violate
prototype-matching Control-Flow Integrity checks.

Signed-off-by: Joao Moreira <jmoreira@suse.de>
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/crypto/serpent_avx2_glue.c       | 65 ++++++++++-------------
 arch/x86/crypto/serpent_avx_glue.c        | 58 +++++++-------------
 arch/x86/crypto/serpent_sse2_glue.c       | 27 ++++++----
 arch/x86/include/asm/crypto/serpent-avx.h | 28 +++++-----
 4 files changed, 80 insertions(+), 98 deletions(-)

diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c
index 03347b16ac9d..36a0cd694792 100644
--- a/arch/x86/crypto/serpent_avx2_glue.c
+++ b/arch/x86/crypto/serpent_avx2_glue.c
@@ -24,18 +24,12 @@
 #define SERPENT_AVX2_PARALLEL_BLOCKS 16
 
 /* 16-way AVX2 parallel cipher functions */
-asmlinkage void serpent_ecb_enc_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src);
-asmlinkage void serpent_ecb_dec_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src);
-asmlinkage void serpent_cbc_dec_16way(void *ctx, u128 *dst, const u128 *src);
-
-asmlinkage void serpent_ctr_16way(void *ctx, u128 *dst, const u128 *src,
-				  le128 *iv);
-asmlinkage void serpent_xts_enc_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src, le128 *iv);
-asmlinkage void serpent_xts_dec_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src, le128 *iv);
+SERPENT_GLUE(serpent_ecb_enc_16way);
+SERPENT_GLUE(serpent_ecb_dec_16way);
+SERPENT_GLUE_CBC(serpent_cbc_dec_16way);
+SERPENT_GLUE_CTR(serpent_ctr_16way);
+SERPENT_GLUE_XTS(serpent_xts_enc_16way);
+SERPENT_GLUE_XTS(serpent_xts_dec_16way);
 
 static int serpent_setkey_skcipher(struct crypto_skcipher *tfm,
 				   const u8 *key, unsigned int keylen)
@@ -49,13 +43,13 @@ static const struct common_glue_ctx serpent_enc = {
 
 	.funcs = { {
 		.num_blocks = 16,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_enc_16way) }
+		.fn_u = { .ecb = serpent_ecb_enc_16way_glue }
 	}, {
 		.num_blocks = 8,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_enc_8way_avx) }
+		.fn_u = { .ecb = serpent_ecb_enc_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_encrypt) }
+		.fn_u = { .ecb = __serpent_encrypt_glue }
 	} }
 };
 
@@ -65,13 +59,13 @@ static const struct common_glue_ctx serpent_ctr = {
 
 	.funcs = { {
 		.num_blocks = 16,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_ctr_16way) }
+		.fn_u = { .ctr = serpent_ctr_16way_glue }
 	},  {
 		.num_blocks = 8,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_ctr_8way_avx) }
+		.fn_u = { .ctr = serpent_ctr_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(__serpent_crypt_ctr) }
+		.fn_u = { .ctr = __serpent_crypt_ctr }
 	} }
 };
 
@@ -81,13 +75,13 @@ static const struct common_glue_ctx serpent_enc_xts = {
 
 	.funcs = { {
 		.num_blocks = 16,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc_16way) }
+		.fn_u = { .xts = serpent_xts_enc_16way_glue }
 	}, {
 		.num_blocks = 8,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc_8way_avx) }
+		.fn_u = { .xts = serpent_xts_enc_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc) }
+		.fn_u = { .xts = serpent_xts_enc }
 	} }
 };
 
@@ -97,13 +91,13 @@ static const struct common_glue_ctx serpent_dec = {
 
 	.funcs = { {
 		.num_blocks = 16,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_dec_16way) }
+		.fn_u = { .ecb = serpent_ecb_dec_16way_glue }
 	}, {
 		.num_blocks = 8,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_dec_8way_avx) }
+		.fn_u = { .ecb = serpent_ecb_dec_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_decrypt) }
+		.fn_u = { .ecb = __serpent_decrypt_glue }
 	} }
 };
 
@@ -113,13 +107,13 @@ static const struct common_glue_ctx serpent_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = 16,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_cbc_dec_16way) }
+		.fn_u = { .cbc = serpent_cbc_dec_16way_cbc_glue }
 	}, {
 		.num_blocks = 8,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_cbc_dec_8way_avx) }
+		.fn_u = { .cbc = serpent_cbc_dec_8way_avx_cbc_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__serpent_decrypt) }
+		.fn_u = { .cbc = __serpent_decrypt_cbc_glue }
 	} }
 };
 
@@ -129,13 +123,13 @@ static const struct common_glue_ctx serpent_dec_xts = {
 
 	.funcs = { {
 		.num_blocks = 16,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec_16way) }
+		.fn_u = { .xts = serpent_xts_dec_16way_glue }
 	}, {
 		.num_blocks = 8,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec_8way_avx) }
+		.fn_u = { .xts = serpent_xts_dec_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec) }
+		.fn_u = { .xts = serpent_xts_dec }
 	} }
 };
 
@@ -151,8 +145,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__serpent_encrypt),
-					   req);
+	return glue_cbc_encrypt_req_128bit(__serpent_encrypt_glue, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
@@ -171,8 +164,8 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&serpent_enc_xts, req,
-				   XTS_TWEAK_CAST(__serpent_encrypt),
-				   &ctx->tweak_ctx, &ctx->crypt_ctx);
+				   __serpent_encrypt_glue, &ctx->tweak_ctx,
+				   &ctx->crypt_ctx);
 }
 
 static int xts_decrypt(struct skcipher_request *req)
@@ -181,8 +174,8 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&serpent_dec_xts, req,
-				   XTS_TWEAK_CAST(__serpent_encrypt),
-				   &ctx->tweak_ctx, &ctx->crypt_ctx);
+				   __serpent_encrypt_glue, &ctx->tweak_ctx,
+				   &ctx->crypt_ctx);
 }
 
 static struct skcipher_alg serpent_algs[] = {
diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c
index 458567ecf76c..897bb3f0116d 100644
--- a/arch/x86/crypto/serpent_avx_glue.c
+++ b/arch/x86/crypto/serpent_avx_glue.c
@@ -35,28 +35,11 @@
 #include <asm/crypto/serpent-avx.h>
 
 /* 8-way parallel cipher functions */
-asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_ecb_enc_8way_avx);
-
-asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_ecb_dec_8way_avx);
-
-asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_cbc_dec_8way_avx);
-
-asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-				     const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_ctr_8way_avx);
-
-asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_xts_enc_8way_avx);
-
-asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_xts_dec_8way_avx);
 
 void __serpent_crypt_ctr(void *ctx, u128 *dst, const u128 *src, le128 *iv)
@@ -73,15 +56,13 @@ EXPORT_SYMBOL_GPL(__serpent_crypt_ctr);
 
 void serpent_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(__serpent_encrypt));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, __serpent_encrypt_glue);
 }
 EXPORT_SYMBOL_GPL(serpent_xts_enc);
 
 void serpent_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(__serpent_decrypt));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, __serpent_decrypt_glue);
 }
 EXPORT_SYMBOL_GPL(serpent_xts_dec);
 
@@ -117,10 +98,10 @@ static const struct common_glue_ctx serpent_enc = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_enc_8way_avx) }
+		.fn_u = { .ecb = serpent_ecb_enc_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_encrypt) }
+		.fn_u = { .ecb = __serpent_encrypt_glue }
 	} }
 };
 
@@ -130,10 +111,10 @@ static const struct common_glue_ctx serpent_ctr = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_ctr_8way_avx) }
+		.fn_u = { .ctr = serpent_ctr_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(__serpent_crypt_ctr) }
+		.fn_u = { .ctr = __serpent_crypt_ctr }
 	} }
 };
 
@@ -143,10 +124,10 @@ static const struct common_glue_ctx serpent_enc_xts = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc_8way_avx) }
+		.fn_u = { .xts = serpent_xts_enc_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc) }
+		.fn_u = { .xts = serpent_xts_enc }
 	} }
 };
 
@@ -156,10 +137,10 @@ static const struct common_glue_ctx serpent_dec = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_dec_8way_avx) }
+		.fn_u = { .ecb = serpent_ecb_dec_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_decrypt) }
+		.fn_u = { .ecb = __serpent_decrypt_glue }
 	} }
 };
 
@@ -169,10 +150,10 @@ static const struct common_glue_ctx serpent_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_cbc_dec_8way_avx) }
+		.fn_u = { .cbc = serpent_cbc_dec_8way_avx_cbc_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__serpent_decrypt) }
+		.fn_u = { .cbc = __serpent_decrypt_cbc_glue }
 	} }
 };
 
@@ -182,10 +163,10 @@ static const struct common_glue_ctx serpent_dec_xts = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec_8way_avx) }
+		.fn_u = { .xts = serpent_xts_dec_8way_avx_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec) }
+		.fn_u = { .xts = serpent_xts_dec }
 	} }
 };
 
@@ -201,8 +182,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__serpent_encrypt),
-					   req);
+	return glue_cbc_encrypt_req_128bit(__serpent_encrypt_glue, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
@@ -221,8 +201,8 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&serpent_enc_xts, req,
-				   XTS_TWEAK_CAST(__serpent_encrypt),
-				   &ctx->tweak_ctx, &ctx->crypt_ctx);
+			__serpent_encrypt_glue, &ctx->tweak_ctx,
+			&ctx->crypt_ctx);
 }
 
 static int xts_decrypt(struct skcipher_request *req)
@@ -231,8 +211,8 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&serpent_dec_xts, req,
-				   XTS_TWEAK_CAST(__serpent_encrypt),
-				   &ctx->tweak_ctx, &ctx->crypt_ctx);
+			__serpent_encrypt_glue, &ctx->tweak_ctx,
+			&ctx->crypt_ctx);
 }
 
 static struct skcipher_alg serpent_algs[] = {
diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c
index 3dafe137596a..135f6b616bc6 100644
--- a/arch/x86/crypto/serpent_sse2_glue.c
+++ b/arch/x86/crypto/serpent_sse2_glue.c
@@ -40,6 +40,15 @@
 #include <asm/crypto/serpent-sse2.h>
 #include <asm/crypto/glue_helper.h>
 
+#define SERPENT_GLUE(func)	GLUE_CAST(func, serpent_ctx)
+#define SERPENT_GLUE_CBC(func)	GLUE_CAST_CBC(func, serpent_ctx)
+
+SERPENT_GLUE(__serpent_encrypt);
+SERPENT_GLUE(__serpent_decrypt);
+SERPENT_GLUE_CBC(__serpent_decrypt);
+SERPENT_GLUE(serpent_enc_blk_xway);
+SERPENT_GLUE(serpent_dec_blk_xway);
+
 static int serpent_setkey_skcipher(struct crypto_skcipher *tfm,
 				   const u8 *key, unsigned int keylen)
 {
@@ -94,10 +103,10 @@ static const struct common_glue_ctx serpent_enc = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_enc_blk_xway) }
+		.fn_u = { .ecb = serpent_enc_blk_xway_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_encrypt) }
+		.fn_u = { .ecb = __serpent_encrypt_glue }
 	} }
 };
 
@@ -107,10 +116,10 @@ static const struct common_glue_ctx serpent_ctr = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_crypt_ctr_xway) }
+		.fn_u = { .ctr = serpent_crypt_ctr_xway }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_crypt_ctr) }
+		.fn_u = { .ctr = serpent_crypt_ctr }
 	} }
 };
 
@@ -120,10 +129,10 @@ static const struct common_glue_ctx serpent_dec = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(serpent_dec_blk_xway) }
+		.fn_u = { .ecb = serpent_dec_blk_xway_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_decrypt) }
+		.fn_u = { .ecb = __serpent_decrypt_glue }
 	} }
 };
 
@@ -133,10 +142,10 @@ static const struct common_glue_ctx serpent_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = SERPENT_PARALLEL_BLOCKS,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_decrypt_cbc_xway) }
+		.fn_u = { .cbc = serpent_decrypt_cbc_xway }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__serpent_decrypt) }
+		.fn_u = { .cbc = __serpent_decrypt_cbc_glue }
 	} }
 };
 
@@ -152,7 +161,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__serpent_encrypt),
+	return glue_cbc_encrypt_req_128bit(__serpent_encrypt_glue,
 					   req);
 }
 
diff --git a/arch/x86/include/asm/crypto/serpent-avx.h b/arch/x86/include/asm/crypto/serpent-avx.h
index db7c9cc32234..c95059be3ae6 100644
--- a/arch/x86/include/asm/crypto/serpent-avx.h
+++ b/arch/x86/include/asm/crypto/serpent-avx.h
@@ -15,20 +15,20 @@ struct serpent_xts_ctx {
 	struct serpent_ctx crypt_ctx;
 };
 
-asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
-asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
-
-asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
-asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-				     const u8 *src, le128 *iv);
-
-asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
-asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
+#define SERPENT_GLUE(func)	GLUE_CAST(func, serpent_ctx)
+#define SERPENT_GLUE_CBC(func)	GLUE_CAST_CBC(func, serpent_ctx)
+#define SERPENT_GLUE_CTR(func)	GLUE_CAST_CTR(func, serpent_ctx)
+#define SERPENT_GLUE_XTS(func)	GLUE_CAST_XTS(func, serpent_ctx)
+
+SERPENT_GLUE(__serpent_encrypt);
+SERPENT_GLUE(__serpent_decrypt);
+SERPENT_GLUE_CBC(__serpent_decrypt);
+SERPENT_GLUE(serpent_ecb_enc_8way_avx);
+SERPENT_GLUE(serpent_ecb_dec_8way_avx);
+SERPENT_GLUE_CBC(serpent_cbc_dec_8way_avx);
+SERPENT_GLUE_CTR(serpent_ctr_8way_avx);
+SERPENT_GLUE_XTS(serpent_xts_enc_8way_avx);
+SERPENT_GLUE_XTS(serpent_xts_dec_8way_avx);
 
 extern void __serpent_crypt_ctr(void *ctx, u128 *dst, const u128 *src,
 				le128 *iv);
-- 
2.17.1



* [PATCH v3 3/7] crypto: x86/camellia: Use new glue function macros
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
  2019-05-07 16:13 ` [PATCH v3 1/7] crypto: x86/glue_helper: Add static inline function glue macros Kees Cook
  2019-05-07 16:13 ` [PATCH v3 2/7] crypto: x86/crypto: Use new glue function macros Kees Cook
@ 2019-05-07 16:13 ` Kees Cook
  2019-05-07 16:13 ` [PATCH v3 4/7] crypto: x86/twofish: " Kees Cook
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

From: Joao Moreira <jmoreira@suse.de>

Convert from function prototype casts to the new function declaration
macros so that indirect function calls no longer violate
prototype-matching Control-Flow Integrity checks.

Signed-off-by: Joao Moreira <jmoreira@suse.de>
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 73 +++++++++-------------
 arch/x86/crypto/camellia_aesni_avx_glue.c  | 63 +++++++------------
 arch/x86/crypto/camellia_glue.c            | 21 +++----
 arch/x86/include/asm/crypto/camellia.h     | 64 ++++++-------------
 4 files changed, 80 insertions(+), 141 deletions(-)

diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c
index d4992e458f92..863a336fd4f5 100644
--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c
@@ -24,20 +24,12 @@
 #define CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS 32
 
 /* 32-way AVX2/AES-NI parallel cipher functions */
-asmlinkage void camellia_ecb_enc_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ecb_dec_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-
-asmlinkage void camellia_cbc_dec_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ctr_32way(struct camellia_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
-
-asmlinkage void camellia_xts_enc_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
-asmlinkage void camellia_xts_dec_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
+CAMELLIA_GLUE(camellia_ecb_enc_32way);
+CAMELLIA_GLUE(camellia_ecb_dec_32way);
+CAMELLIA_GLUE_CBC(camellia_cbc_dec_32way);
+CAMELLIA_GLUE_CTR(camellia_ctr_32way);
+CAMELLIA_GLUE_XTS(camellia_xts_enc_32way);
+CAMELLIA_GLUE_XTS(camellia_xts_dec_32way);
 
 static const struct common_glue_ctx camellia_enc = {
 	.num_funcs = 4,
@@ -45,16 +37,16 @@ static const struct common_glue_ctx camellia_enc = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_enc_32way) }
+		.fn_u = { .ecb = camellia_ecb_enc_32way_glue }
 	}, {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_enc_16way) }
+		.fn_u = { .ecb = camellia_ecb_enc_16way_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk_2way) }
+		.fn_u = { .ecb = camellia_enc_blk_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk) }
+		.fn_u = { .ecb = camellia_enc_blk }
 	} }
 };
 
@@ -64,16 +56,16 @@ static const struct common_glue_ctx camellia_ctr = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_ctr_32way) }
+		.fn_u = { .ctr = camellia_ctr_32way_glue }
 	}, {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_ctr_16way) }
+		.fn_u = { .ctr = camellia_ctr_16way_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr_2way) }
+		.fn_u = { .ctr = camellia_crypt_ctr_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr) }
+		.fn_u = { .ctr = camellia_crypt_ctr }
 	} }
 };
 
@@ -83,13 +75,13 @@ static const struct common_glue_ctx camellia_enc_xts = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc_32way) }
+		.fn_u = { .xts = camellia_xts_enc_32way_glue }
 	}, {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc_16way) }
+		.fn_u = { .xts = camellia_xts_enc_16way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc) }
+		.fn_u = { .xts = camellia_xts_enc }
 	} }
 };
 
@@ -99,16 +91,16 @@ static const struct common_glue_ctx camellia_dec = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_dec_32way) }
+		.fn_u = { .ecb = camellia_ecb_dec_32way_glue }
 	}, {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_dec_16way) }
+		.fn_u = { .ecb = camellia_ecb_dec_16way_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk_2way) }
+		.fn_u = { .ecb = camellia_dec_blk_2way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk) }
+		.fn_u = { .ecb = camellia_dec_blk_glue }
 	} }
 };
 
@@ -118,16 +110,16 @@ static const struct common_glue_ctx camellia_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_cbc_dec_32way) }
+		.fn_u = { .cbc = camellia_cbc_dec_32way_cbc_glue }
 	}, {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_cbc_dec_16way) }
+		.fn_u = { .cbc = camellia_cbc_dec_16way_cbc_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_decrypt_cbc_2way) }
+		.fn_u = { .cbc = camellia_decrypt_cbc_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_dec_blk) }
+		.fn_u = { .cbc = camellia_dec_blk_cbc_glue }
 	} }
 };
 
@@ -137,13 +129,13 @@ static const struct common_glue_ctx camellia_dec_xts = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec_32way) }
+		.fn_u = { .xts = camellia_xts_dec_32way_glue }
 	}, {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec_16way) }
+		.fn_u = { .xts = camellia_xts_dec_16way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec) }
+		.fn_u = { .xts = camellia_xts_dec }
 	} }
 };
 
@@ -166,8 +158,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(camellia_enc_blk),
-					   req);
+	return glue_cbc_encrypt_req_128bit(camellia_enc_blk, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
@@ -185,8 +176,7 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	return glue_xts_req_128bit(&camellia_enc_xts, req,
-				   XTS_TWEAK_CAST(camellia_enc_blk),
+	return glue_xts_req_128bit(&camellia_enc_xts, req, camellia_enc_blk,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
@@ -195,8 +185,7 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	return glue_xts_req_128bit(&camellia_dec_xts, req,
-				   XTS_TWEAK_CAST(camellia_enc_blk),
+	return glue_xts_req_128bit(&camellia_dec_xts, req, camellia_enc_blk,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c
index d09f6521466a..182c23180377 100644
--- a/arch/x86/crypto/camellia_aesni_avx_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
@@ -11,7 +11,6 @@
  */
 
 #include <asm/crypto/camellia.h>
-#include <asm/crypto/glue_helper.h>
 #include <crypto/algapi.h>
 #include <crypto/internal/simd.h>
 #include <crypto/xts.h>
@@ -23,41 +22,22 @@
 #define CAMELLIA_AESNI_PARALLEL_BLOCKS 16
 
 /* 16-way parallel cipher functions (avx/aes-ni) */
-asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_ecb_enc_16way);
-
-asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_ecb_dec_16way);
-
-asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_cbc_dec_16way);
-
-asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_ctr_16way);
-
-asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_xts_enc_16way);
-
-asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_xts_dec_16way);
 
 void camellia_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(camellia_enc_blk));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, camellia_enc_blk);
 }
 EXPORT_SYMBOL_GPL(camellia_xts_enc);
 
 void camellia_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(camellia_dec_blk));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, camellia_dec_blk_glue);
 }
 EXPORT_SYMBOL_GPL(camellia_xts_dec);
 
@@ -67,13 +47,13 @@ static const struct common_glue_ctx camellia_enc = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_enc_16way) }
+		.fn_u = { .ecb = camellia_ecb_enc_16way_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk_2way) }
+		.fn_u = { .ecb = camellia_enc_blk_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk) }
+		.fn_u = { .ecb = camellia_enc_blk }
 	} }
 };
 
@@ -83,13 +63,13 @@ static const struct common_glue_ctx camellia_ctr = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_ctr_16way) }
+		.fn_u = { .ctr = camellia_ctr_16way_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr_2way) }
+		.fn_u = { .ctr = camellia_crypt_ctr_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr) }
+		.fn_u = { .ctr = camellia_crypt_ctr }
 	} }
 };
 
@@ -99,10 +79,10 @@ static const struct common_glue_ctx camellia_enc_xts = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc_16way) }
+		.fn_u = { .xts = camellia_xts_enc_16way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc) }
+		.fn_u = { .xts = camellia_xts_enc }
 	} }
 };
 
@@ -112,13 +92,13 @@ static const struct common_glue_ctx camellia_dec = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_dec_16way) }
+		.fn_u = { .ecb = camellia_ecb_dec_16way_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk_2way) }
+		.fn_u = { .ecb = camellia_dec_blk_2way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk) }
+		.fn_u = { .ecb = camellia_dec_blk_glue }
 	} }
 };
 
@@ -128,13 +108,13 @@ static const struct common_glue_ctx camellia_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_cbc_dec_16way) }
+		.fn_u = { .cbc = camellia_cbc_dec_16way_cbc_glue }
 	}, {
 		.num_blocks = 2,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_decrypt_cbc_2way) }
+		.fn_u = { .cbc = camellia_decrypt_cbc_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_dec_blk) }
+		.fn_u = { .cbc = camellia_dec_blk_cbc_glue }
 	} }
 };
 
@@ -144,10 +124,10 @@ static const struct common_glue_ctx camellia_dec_xts = {
 
 	.funcs = { {
 		.num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec_16way) }
+		.fn_u = { .xts = camellia_xts_dec_16way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec) }
+		.fn_u = { .xts = camellia_xts_dec }
 	} }
 };
 
@@ -170,8 +150,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(camellia_enc_blk),
-					   req);
+	return glue_cbc_encrypt_req_128bit(camellia_enc_blk, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
@@ -212,7 +191,7 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&camellia_enc_xts, req,
-				   XTS_TWEAK_CAST(camellia_enc_blk),
+				   camellia_enc_blk,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
@@ -222,7 +201,7 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&camellia_dec_xts, req,
-				   XTS_TWEAK_CAST(camellia_enc_blk),
+				   camellia_enc_blk,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c
index dcd5e0f71b00..23173046a609 100644
--- a/arch/x86/crypto/camellia_glue.c
+++ b/arch/x86/crypto/camellia_glue.c
@@ -1320,7 +1320,7 @@ void camellia_crypt_ctr_2way(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 	le128_to_be128(&ctrblks[1], iv);
 	le128_inc(iv);
 
-	camellia_enc_blk_xor_2way(ctx, (u8 *)dst, (u8 *)ctrblks);
+	camellia_enc_blk_2way_xor(ctx, (u8 *)dst, (u8 *)ctrblks);
 }
 EXPORT_SYMBOL_GPL(camellia_crypt_ctr_2way);
 
@@ -1330,10 +1330,10 @@ static const struct common_glue_ctx camellia_enc = {
 
 	.funcs = { {
 		.num_blocks = 2,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk_2way) }
+		.fn_u = { .ecb = camellia_enc_blk_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk) }
+		.fn_u = { .ecb = camellia_enc_blk }
 	} }
 };
 
@@ -1343,10 +1343,10 @@ static const struct common_glue_ctx camellia_ctr = {
 
 	.funcs = { {
 		.num_blocks = 2,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr_2way) }
+		.fn_u = { .ctr = camellia_crypt_ctr_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr) }
+		.fn_u = { .ctr = camellia_crypt_ctr }
 	} }
 };
 
@@ -1356,10 +1356,10 @@ static const struct common_glue_ctx camellia_dec = {
 
 	.funcs = { {
 		.num_blocks = 2,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk_2way) }
+		.fn_u = { .ecb = camellia_dec_blk_2way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk) }
+		.fn_u = { .ecb = camellia_dec_blk_glue }
 	} }
 };
 
@@ -1369,10 +1369,10 @@ static const struct common_glue_ctx camellia_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = 2,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_decrypt_cbc_2way) }
+		.fn_u = { .cbc = camellia_decrypt_cbc_2way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_dec_blk) }
+		.fn_u = { .cbc = camellia_dec_blk_cbc_glue }
 	} }
 };
 
@@ -1388,8 +1388,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(camellia_enc_blk),
-					   req);
+	return glue_cbc_encrypt_req_128bit(camellia_enc_blk, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
diff --git a/arch/x86/include/asm/crypto/camellia.h b/arch/x86/include/asm/crypto/camellia.h
index a5d86fc0593f..4a55b037c422 100644
--- a/arch/x86/include/asm/crypto/camellia.h
+++ b/arch/x86/include/asm/crypto/camellia.h
@@ -2,6 +2,7 @@
 #ifndef ASM_X86_CAMELLIA_H
 #define ASM_X86_CAMELLIA_H
 
+#include <asm/crypto/glue_helper.h>
 #include <crypto/b128ops.h>
 #include <linux/crypto.h>
 #include <linux/kernel.h>
@@ -24,6 +25,12 @@ struct camellia_xts_ctx {
 	struct camellia_ctx crypt_ctx;
 };
 
+#define CAMELLIA_GLUE(func)	GLUE_CAST(func, camellia_ctx)
+#define CAMELLIA_GLUE_XOR(func)	GLUE_CAST_XOR(func, camellia_ctx)
+#define CAMELLIA_GLUE_CBC(func)	GLUE_CAST_CBC(func, camellia_ctx)
+#define CAMELLIA_GLUE_CTR(func)	GLUE_CAST_CTR(func, camellia_ctx)
+#define CAMELLIA_GLUE_XTS(func)	GLUE_CAST_XTS(func, camellia_ctx)
+
 extern int __camellia_setkey(struct camellia_ctx *cctx,
 			     const unsigned char *key,
 			     unsigned int key_len, u32 *flags);
@@ -32,56 +39,21 @@ extern int xts_camellia_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			       unsigned int keylen);
 
 /* regular block cipher functions */
-asmlinkage void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
-				   const u8 *src, bool xor);
-asmlinkage void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst,
-				 const u8 *src);
+CAMELLIA_GLUE_XOR(camellia_enc_blk);
+CAMELLIA_GLUE(camellia_dec_blk);
+CAMELLIA_GLUE_CBC(camellia_dec_blk);
 
 /* 2-way parallel cipher functions */
-asmlinkage void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
-					const u8 *src, bool xor);
-asmlinkage void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst,
-				      const u8 *src);
+CAMELLIA_GLUE_XOR(camellia_enc_blk_2way);
+CAMELLIA_GLUE(camellia_dec_blk_2way);
 
 /* 16-way parallel cipher functions (avx/aes-ni) */
-asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-
-asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
-
-asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
-asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
-
-static inline void camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
-				    const u8 *src)
-{
-	__camellia_enc_blk(ctx, dst, src, false);
-}
-
-static inline void camellia_enc_blk_xor(struct camellia_ctx *ctx, u8 *dst,
-					const u8 *src)
-{
-	__camellia_enc_blk(ctx, dst, src, true);
-}
-
-static inline void camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
-					 const u8 *src)
-{
-	__camellia_enc_blk_2way(ctx, dst, src, false);
-}
-
-static inline void camellia_enc_blk_xor_2way(struct camellia_ctx *ctx, u8 *dst,
-					     const u8 *src)
-{
-	__camellia_enc_blk_2way(ctx, dst, src, true);
-}
+CAMELLIA_GLUE(camellia_ecb_enc_16way);
+CAMELLIA_GLUE(camellia_ecb_dec_16way);
+CAMELLIA_GLUE_CBC(camellia_cbc_dec_16way);
+CAMELLIA_GLUE_CTR(camellia_ctr_16way);
+CAMELLIA_GLUE_XTS(camellia_xts_enc_16way);
+CAMELLIA_GLUE_XTS(camellia_xts_dec_16way);
 
 /* glue helpers */
 extern void camellia_decrypt_cbc_2way(void *ctx, u128 *dst, const u128 *src);
-- 
2.17.1



* [PATCH v3 4/7] crypto: x86/twofish: Use new glue function macros
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
                   ` (2 preceding siblings ...)
  2019-05-07 16:13 ` [PATCH v3 3/7] crypto: x86/camellia: " Kees Cook
@ 2019-05-07 16:13 ` Kees Cook
  2019-05-07 16:13 ` [PATCH v3 5/7] crypto: x86/cast6: " Kees Cook
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

From: Joao Moreira <jmoreira@suse.de>

Convert from function prototype casts to the new function declaration
macros so that indirect function calls no longer violate
prototype-matching Control-Flow Integrity checks.

Signed-off-by: Joao Moreira <jmoreira@suse.de>
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/crypto/twofish_avx_glue.c    | 71 ++++++++++-----------------
 arch/x86/crypto/twofish_glue_3way.c   | 28 +++++------
 arch/x86/include/asm/crypto/twofish.h | 22 +++++----
 3 files changed, 50 insertions(+), 71 deletions(-)

diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c
index 66d989230d10..68dc52291c5d 100644
--- a/arch/x86/crypto/twofish_avx_glue.c
+++ b/arch/x86/crypto/twofish_avx_glue.c
@@ -37,20 +37,12 @@
 #define TWOFISH_PARALLEL_BLOCKS 8
 
 /* 8-way parallel cipher functions */
-asmlinkage void twofish_ecb_enc_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
-asmlinkage void twofish_ecb_dec_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
-
-asmlinkage void twofish_cbc_dec_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
-asmlinkage void twofish_ctr_8way(struct twofish_ctx *ctx, u8 *dst,
-				 const u8 *src, le128 *iv);
-
-asmlinkage void twofish_xts_enc_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src, le128 *iv);
-asmlinkage void twofish_xts_dec_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src, le128 *iv);
+TWOFISH_GLUE(twofish_ecb_enc_8way);
+TWOFISH_GLUE(twofish_ecb_dec_8way);
+TWOFISH_GLUE_CBC(twofish_cbc_dec_8way);
+TWOFISH_GLUE_CTR(twofish_ctr_8way);
+TWOFISH_GLUE_XTS(twofish_xts_enc_8way);
+TWOFISH_GLUE_XTS(twofish_xts_dec_8way);
 
 static int twofish_setkey_skcipher(struct crypto_skcipher *tfm,
 				   const u8 *key, unsigned int keylen)
@@ -58,22 +50,14 @@ static int twofish_setkey_skcipher(struct crypto_skcipher *tfm,
 	return twofish_setkey(&tfm->base, key, keylen);
 }
 
-static inline void twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
-					const u8 *src)
-{
-	__twofish_enc_blk_3way(ctx, dst, src, false);
-}
-
 static void twofish_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(twofish_enc_blk));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, twofish_enc_blk_glue);
 }
 
 static void twofish_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(twofish_dec_blk));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, twofish_dec_blk_glue);
 }
 
 struct twofish_xts_ctx {
@@ -108,13 +92,13 @@ static const struct common_glue_ctx twofish_enc = {
 
 	.funcs = { {
 		.num_blocks = TWOFISH_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_ecb_enc_8way) }
+		.fn_u = { .ecb = twofish_ecb_enc_8way_glue }
 	}, {
 		.num_blocks = 3,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_3way) }
+		.fn_u = { .ecb = twofish_enc_blk_3way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk) }
+		.fn_u = { .ecb = twofish_enc_blk_glue }
 	} }
 };
 
@@ -124,13 +108,13 @@ static const struct common_glue_ctx twofish_ctr = {
 
 	.funcs = { {
 		.num_blocks = TWOFISH_PARALLEL_BLOCKS,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(twofish_ctr_8way) }
+		.fn_u = { .ctr = twofish_ctr_8way_glue }
 	}, {
 		.num_blocks = 3,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(twofish_enc_blk_ctr_3way) }
+		.fn_u = { .ctr = twofish_enc_blk_ctr_3way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(twofish_enc_blk_ctr) }
+		.fn_u = { .ctr = twofish_enc_blk_ctr_glue }
 	} }
 };
 
@@ -140,10 +124,10 @@ static const struct common_glue_ctx twofish_enc_xts = {
 
 	.funcs = { {
 		.num_blocks = TWOFISH_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(twofish_xts_enc_8way) }
+		.fn_u = { .xts = twofish_xts_enc_8way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(twofish_xts_enc) }
+		.fn_u = { .xts = twofish_xts_enc }
 	} }
 };
 
@@ -153,13 +137,13 @@ static const struct common_glue_ctx twofish_dec = {
 
 	.funcs = { {
 		.num_blocks = TWOFISH_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_ecb_dec_8way) }
+		.fn_u = { .ecb = twofish_ecb_dec_8way_glue }
 	}, {
 		.num_blocks = 3,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk_3way) }
+		.fn_u = { .ecb = twofish_dec_blk_3way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk) }
+		.fn_u = { .ecb = twofish_dec_blk_glue }
 	} }
 };
 
@@ -169,13 +153,13 @@ static const struct common_glue_ctx twofish_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = TWOFISH_PARALLEL_BLOCKS,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_cbc_dec_8way) }
+		.fn_u = { .cbc = twofish_cbc_dec_8way_cbc_glue }
 	}, {
 		.num_blocks = 3,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk_cbc_3way) }
+		.fn_u = { .cbc = twofish_dec_blk_cbc_3way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk) }
+		.fn_u = { .cbc = twofish_dec_blk_cbc_glue }
 	} }
 };
 
@@ -185,10 +169,10 @@ static const struct common_glue_ctx twofish_dec_xts = {
 
 	.funcs = { {
 		.num_blocks = TWOFISH_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(twofish_xts_dec_8way) }
+		.fn_u = { .xts = twofish_xts_dec_8way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(twofish_xts_dec) }
+		.fn_u = { .xts = twofish_xts_dec }
 	} }
 };
 
@@ -204,8 +188,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(twofish_enc_blk),
-					   req);
+	return glue_cbc_encrypt_req_128bit(twofish_enc_blk_glue, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
@@ -224,7 +207,7 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct twofish_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&twofish_enc_xts, req,
-				   XTS_TWEAK_CAST(twofish_enc_blk),
+				   twofish_enc_blk_glue,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
@@ -234,7 +217,7 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct twofish_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&twofish_dec_xts, req,
-				   XTS_TWEAK_CAST(twofish_enc_blk),
+				   twofish_enc_blk_glue,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c
index 571485502ec8..e58236435735 100644
--- a/arch/x86/crypto/twofish_glue_3way.c
+++ b/arch/x86/crypto/twofish_glue_3way.c
@@ -40,12 +40,6 @@ static int twofish_setkey_skcipher(struct crypto_skcipher *tfm,
 	return twofish_setkey(&tfm->base, key, keylen);
 }
 
-static inline void twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
-					const u8 *src)
-{
-	__twofish_enc_blk_3way(ctx, dst, src, false);
-}
-
 static inline void twofish_enc_blk_xor_3way(struct twofish_ctx *ctx, u8 *dst,
 					    const u8 *src)
 {
@@ -66,7 +60,8 @@ void twofish_dec_blk_cbc_3way(void *ctx, u128 *dst, const u128 *src)
 }
 EXPORT_SYMBOL_GPL(twofish_dec_blk_cbc_3way);
 
-void twofish_enc_blk_ctr(void *ctx, u128 *dst, const u128 *src, le128 *iv)
+void twofish_enc_blk_ctr(struct twofish_ctx *ctx, u128 *dst, const u128 *src,
+		le128 *iv)
 {
 	be128 ctrblk;
 
@@ -109,10 +104,10 @@ static const struct common_glue_ctx twofish_enc = {
 
 	.funcs = { {
 		.num_blocks = 3,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_3way) }
+		.fn_u = { .ecb = twofish_enc_blk_3way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk) }
+		.fn_u = { .ecb = twofish_enc_blk_glue }
 	} }
 };
 
@@ -122,10 +117,10 @@ static const struct common_glue_ctx twofish_ctr = {
 
 	.funcs = { {
 		.num_blocks = 3,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_ctr_3way) }
+		.fn_u = { .ctr = twofish_enc_blk_ctr_3way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_ctr) }
+		.fn_u = { .ctr = twofish_enc_blk_ctr_glue }
 	} }
 };
 
@@ -135,10 +130,10 @@ static const struct common_glue_ctx twofish_dec = {
 
 	.funcs = { {
 		.num_blocks = 3,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk_3way) }
+		.fn_u = { .ecb = twofish_dec_blk_3way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk) }
+		.fn_u = { .ecb = twofish_dec_blk_glue }
 	} }
 };
 
@@ -148,10 +143,10 @@ static const struct common_glue_ctx twofish_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = 3,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk_cbc_3way) }
+		.fn_u = { .cbc = twofish_dec_blk_cbc_3way }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk) }
+		.fn_u = { .cbc = twofish_dec_blk_cbc_glue }
 	} }
 };
 
@@ -167,8 +162,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(twofish_enc_blk),
-					   req);
+	return glue_cbc_encrypt_req_128bit(twofish_enc_blk_glue, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
diff --git a/arch/x86/include/asm/crypto/twofish.h b/arch/x86/include/asm/crypto/twofish.h
index f618bf272b90..59f3228fbc5d 100644
--- a/arch/x86/include/asm/crypto/twofish.h
+++ b/arch/x86/include/asm/crypto/twofish.h
@@ -6,22 +6,24 @@
 #include <crypto/twofish.h>
 #include <crypto/b128ops.h>
 
+#define TWOFISH_GLUE(func)	GLUE_CAST(func, twofish_ctx)
+#define TWOFISH_GLUE_XOR(func)	GLUE_CAST_XOR(func, twofish_ctx)
+#define TWOFISH_GLUE_CBC(func)	GLUE_CAST_CBC(func, twofish_ctx)
+#define TWOFISH_GLUE_CTR(func)	GLUE_CAST_CTR(func, twofish_ctx)
+#define TWOFISH_GLUE_XTS(func)	GLUE_CAST_XTS(func, twofish_ctx)
+
 /* regular block cipher functions from twofish_x86_64 module */
-asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
-				const u8 *src);
-asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
-				const u8 *src);
+TWOFISH_GLUE(twofish_enc_blk);
+TWOFISH_GLUE(twofish_dec_blk);
+TWOFISH_GLUE_CBC(twofish_dec_blk);
 
 /* 3-way parallel cipher functions */
-asmlinkage void __twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
-				       const u8 *src, bool xor);
-asmlinkage void twofish_dec_blk_3way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
+TWOFISH_GLUE_XOR(twofish_enc_blk_3way);
+TWOFISH_GLUE(twofish_dec_blk_3way);
 
 /* helpers from twofish_x86_64-3way module */
 extern void twofish_dec_blk_cbc_3way(void *ctx, u128 *dst, const u128 *src);
-extern void twofish_enc_blk_ctr(void *ctx, u128 *dst, const u128 *src,
-				le128 *iv);
+TWOFISH_GLUE_CTR(twofish_enc_blk_ctr);
 extern void twofish_enc_blk_ctr_3way(void *ctx, u128 *dst, const u128 *src,
 				     le128 *iv);
 
-- 
2.17.1



* [PATCH v3 5/7] crypto: x86/cast6: Use new glue function macros
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
                   ` (3 preceding siblings ...)
  2019-05-07 16:13 ` [PATCH v3 4/7] crypto: x86/twofish: " Kees Cook
@ 2019-05-07 16:13 ` Kees Cook
  2019-05-07 16:13 ` [PATCH v3 6/7] crypto: x86/aesni: " Kees Cook
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

From: Joao Moreira <jmoreira@suse.de>

Convert from function prototype casts to the new function declaration
macros so that indirect function calls no longer violate
prototype-matching Control-Flow Integrity checks.

Signed-off-by: Joao Moreira <jmoreira@suse.de>
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/crypto/cast6_avx_glue.c | 65 +++++++++++++++-----------------
 1 file changed, 31 insertions(+), 34 deletions(-)

diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c
index 18965c39305e..4735cd0ef379 100644
--- a/arch/x86/crypto/cast6_avx_glue.c
+++ b/arch/x86/crypto/cast6_avx_glue.c
@@ -35,20 +35,20 @@
 
 #define CAST6_PARALLEL_BLOCKS 8
 
-asmlinkage void cast6_ecb_enc_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src);
-asmlinkage void cast6_ecb_dec_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src);
-
-asmlinkage void cast6_cbc_dec_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src);
-asmlinkage void cast6_ctr_8way(struct cast6_ctx *ctx, u8 *dst, const u8 *src,
-			       le128 *iv);
-
-asmlinkage void cast6_xts_enc_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
-asmlinkage void cast6_xts_dec_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
+#define CAST6_GLUE(func)	GLUE_CAST(func, cast6_ctx)
+#define CAST6_GLUE_CBC(func)	GLUE_CAST_CBC(func, cast6_ctx)
+#define CAST6_GLUE_CTR(func)	GLUE_CAST_CTR(func, cast6_ctx)
+#define CAST6_GLUE_XTS(func)	GLUE_CAST_XTS(func, cast6_ctx)
+
+CAST6_GLUE(__cast6_encrypt);
+CAST6_GLUE(__cast6_decrypt);
+CAST6_GLUE(cast6_ecb_enc_8way);
+CAST6_GLUE(cast6_ecb_dec_8way);
+CAST6_GLUE_CBC(cast6_cbc_dec_8way);
+CAST6_GLUE_CBC(__cast6_decrypt);
+CAST6_GLUE_CTR(cast6_ctr_8way);
+CAST6_GLUE_XTS(cast6_xts_enc_8way);
+CAST6_GLUE_XTS(cast6_xts_dec_8way);
 
 static int cast6_setkey_skcipher(struct crypto_skcipher *tfm,
 				 const u8 *key, unsigned int keylen)
@@ -58,14 +58,12 @@ static int cast6_setkey_skcipher(struct crypto_skcipher *tfm,
 
 static void cast6_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(__cast6_encrypt));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, __cast6_encrypt_glue);
 }
 
 static void cast6_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv,
-				  GLUE_FUNC_CAST(__cast6_decrypt));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, __cast6_decrypt_glue);
 }
 
 static void cast6_crypt_ctr(void *ctx, u128 *dst, const u128 *src, le128 *iv)
@@ -85,10 +83,10 @@ static const struct common_glue_ctx cast6_enc = {
 
 	.funcs = { {
 		.num_blocks = CAST6_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(cast6_ecb_enc_8way) }
+		.fn_u = { .ecb = cast6_ecb_enc_8way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__cast6_encrypt) }
+		.fn_u = { .ecb = __cast6_encrypt_glue }
 	} }
 };
 
@@ -98,10 +96,10 @@ static const struct common_glue_ctx cast6_ctr = {
 
 	.funcs = { {
 		.num_blocks = CAST6_PARALLEL_BLOCKS,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(cast6_ctr_8way) }
+		.fn_u = { .ctr = cast6_ctr_8way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ctr = GLUE_CTR_FUNC_CAST(cast6_crypt_ctr) }
+		.fn_u = { .ctr = cast6_crypt_ctr }
 	} }
 };
 
@@ -111,10 +109,10 @@ static const struct common_glue_ctx cast6_enc_xts = {
 
 	.funcs = { {
 		.num_blocks = CAST6_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_enc_8way) }
+		.fn_u = { .xts = cast6_xts_enc_8way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_enc) }
+		.fn_u = { .xts = cast6_xts_enc }
 	} }
 };
 
@@ -124,10 +122,10 @@ static const struct common_glue_ctx cast6_dec = {
 
 	.funcs = { {
 		.num_blocks = CAST6_PARALLEL_BLOCKS,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(cast6_ecb_dec_8way) }
+		.fn_u = { .ecb = cast6_ecb_dec_8way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .ecb = GLUE_FUNC_CAST(__cast6_decrypt) }
+		.fn_u = { .ecb = __cast6_decrypt_glue }
 	} }
 };
 
@@ -137,10 +135,10 @@ static const struct common_glue_ctx cast6_dec_cbc = {
 
 	.funcs = { {
 		.num_blocks = CAST6_PARALLEL_BLOCKS,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(cast6_cbc_dec_8way) }
+		.fn_u = { .cbc = cast6_cbc_dec_8way_cbc_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__cast6_decrypt) }
+		.fn_u = { .cbc = __cast6_decrypt_cbc_glue }
 	} }
 };
 
@@ -150,10 +148,10 @@ static const struct common_glue_ctx cast6_dec_xts = {
 
 	.funcs = { {
 		.num_blocks = CAST6_PARALLEL_BLOCKS,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_dec_8way) }
+		.fn_u = { .xts = cast6_xts_dec_8way_glue }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_dec) }
+		.fn_u = { .xts = cast6_xts_dec }
 	} }
 };
 
@@ -169,8 +167,7 @@ static int ecb_decrypt(struct skcipher_request *req)
 
 static int cbc_encrypt(struct skcipher_request *req)
 {
-	return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__cast6_encrypt),
-					   req);
+	return glue_cbc_encrypt_req_128bit(__cast6_encrypt_glue, req);
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
@@ -215,7 +212,7 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct cast6_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&cast6_enc_xts, req,
-				   XTS_TWEAK_CAST(__cast6_encrypt),
+				   __cast6_encrypt_glue,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
@@ -225,7 +222,7 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct cast6_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&cast6_dec_xts, req,
-				   XTS_TWEAK_CAST(__cast6_encrypt),
+				   __cast6_encrypt_glue,
 				   &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread
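
A quick note on why this conversion removes the cast macros entirely:
the fn_u union members are typed function pointers, and the generated
*_glue wrappers carry exactly those types. Illustratively (the typedef
is quoted from glue_helper.h, the initializers from the hunks above):

typedef void (*common_glue_func_t)(void *ctx, u8 *dst, const u8 *src);

/* before: only assignable via a prototype-punning cast */
.fn_u = { .ecb = GLUE_FUNC_CAST(__cast6_encrypt) }

/* after: __cast6_encrypt_glue already has the common_glue_func_t
 * prototype, so no cast is needed and the eventual indirect call
 * passes a prototype-based CFI check */
.fn_u = { .ecb = __cast6_encrypt_glue }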

* [PATCH v3 6/7] crypto: x86/aesni: Use new glue function macros
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
                   ` (4 preceding siblings ...)
  2019-05-07 16:13 ` [PATCH v3 5/7] crypto: x86/cast6: " Kees Cook
@ 2019-05-07 16:13 ` Kees Cook
  2019-05-07 16:13 ` [PATCH v3 7/7] crypto: x86/glue_helper: Remove function prototype cast helpers Kees Cook
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

Convert from function prototype casts to function declaration macros
to avoid triggering Control-Flow Integrity checks during indirect
function calls.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/crypto/aesni-intel_glue.c | 31 ++++++++++++------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 1e3d2102033a..350286235a47 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -39,9 +39,7 @@
 #include <crypto/internal/skcipher.h>
 #include <linux/workqueue.h>
 #include <linux/spinlock.h>
-#ifdef CONFIG_X86_64
 #include <asm/crypto/glue_helper.h>
-#endif
 
 
 #define AESNI_ALIGN	16
@@ -52,6 +50,8 @@
 #define CRYPTO_AES_CTX_SIZE (sizeof(struct crypto_aes_ctx) + AESNI_ALIGN_EXTRA)
 #define XTS_AES_CTX_SIZE (sizeof(struct aesni_xts_ctx) + AESNI_ALIGN_EXTRA)
 
+#define AESNI_GLUE(func)       GLUE_CAST(func, crypto_aes_ctx)
+
 /* This data is stored at the end of the crypto_tfm struct.
  * It's a type of per "session" data storage location.
  * This needs to be 16 byte aligned.
@@ -89,10 +89,8 @@ struct gcm_context_data {
 
 asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
 			     unsigned int key_len);
-asmlinkage void aesni_enc(struct crypto_aes_ctx *ctx, u8 *out,
-			  const u8 *in);
-asmlinkage void aesni_dec(struct crypto_aes_ctx *ctx, u8 *out,
-			  const u8 *in);
+AESNI_GLUE(aesni_enc);
+AESNI_GLUE(aesni_dec);
 asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len);
 asmlinkage void aesni_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out,
@@ -570,19 +568,14 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
 }
 
 
-static void aesni_xts_tweak(void *ctx, u8 *out, const u8 *in)
-{
-	aesni_enc(ctx, out, in);
-}
-
 static void aesni_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv, GLUE_FUNC_CAST(aesni_enc));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, aesni_enc_glue);
 }
 
 static void aesni_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-	glue_xts_crypt_128bit_one(ctx, dst, src, iv, GLUE_FUNC_CAST(aesni_dec));
+	glue_xts_crypt_128bit_one(ctx, dst, src, iv, aesni_dec_glue);
 }
 
 static void aesni_xts_enc8(void *ctx, u128 *dst, const u128 *src, le128 *iv)
@@ -601,10 +594,10 @@ static const struct common_glue_ctx aesni_enc_xts = {
 
 	.funcs = { {
 		.num_blocks = 8,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_enc8) }
+		.fn_u = { .xts = aesni_xts_enc8 }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_enc) }
+		.fn_u = { .xts = aesni_xts_enc }
 	} }
 };
 
@@ -614,10 +607,10 @@ static const struct common_glue_ctx aesni_dec_xts = {
 
 	.funcs = { {
 		.num_blocks = 8,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_dec8) }
+		.fn_u = { .xts = aesni_xts_dec8 }
 	}, {
 		.num_blocks = 1,
-		.fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_dec) }
+		.fn_u = { .xts = aesni_xts_dec }
 	} }
 };
 
@@ -627,7 +620,7 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&aesni_enc_xts, req,
-				   XTS_TWEAK_CAST(aesni_xts_tweak),
+				   aesni_enc_glue,
 				   aes_ctx(ctx->raw_tweak_ctx),
 				   aes_ctx(ctx->raw_crypt_ctx));
 }
@@ -638,7 +631,7 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	return glue_xts_req_128bit(&aesni_dec_xts, req,
-				   XTS_TWEAK_CAST(aesni_xts_tweak),
+				   aesni_enc_glue,
 				   aes_ctx(ctx->raw_tweak_ctx),
 				   aes_ctx(ctx->raw_crypt_ctx));
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread
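
Worth spelling out why aesni_xts_tweak() can be deleted here: its only
job was to give aesni_enc() a (void *ctx, u8 *, const u8 *) shape for
the XTS tweak callback, and the AESNI_GLUE(aesni_enc) expansion now
provides a wrapper with that exact shape. A sketch of the relationship,
with the wrapper name following the _glue convention used in the hunks
above:

/* old hand-rolled stub, removed by this patch */
static void aesni_xts_tweak(void *ctx, u8 *out, const u8 *in)
{
	aesni_enc(ctx, out, in);
}

/* new: aesni_enc_glue(void *ctx, u8 *dst, const u8 *src), generated
 * by AESNI_GLUE(aesni_enc), is passed directly to
 * glue_xts_req_128bit() as the tweak function, so XTS_TWEAK_CAST()
 * is no longer needed. */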

* [PATCH v3 7/7] crypto: x86/glue_helper: Remove function prototype cast helpers
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
                   ` (5 preceding siblings ...)
  2019-05-07 16:13 ` [PATCH v3 6/7] crypto: x86/aesni: " Kees Cook
@ 2019-05-07 16:13 ` Kees Cook
  2019-05-07 17:00 ` [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Eric Biggers
  2019-05-09  1:53 ` Eric Biggers
  8 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-07 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Joao Moreira, Eric Biggers, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, x86, linux-crypto,
	linux-kernel, kernel-hardening

Now that all users of the function prototype casting helpers have been
removed, this deletes the unused macros.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/crypto/glue_helper.h | 6 ------
 include/crypto/xts.h                      | 2 --
 2 files changed, 8 deletions(-)

diff --git a/arch/x86/include/asm/crypto/glue_helper.h b/arch/x86/include/asm/crypto/glue_helper.h
index 3b039d563809..2b2d8d4a5081 100644
--- a/arch/x86/include/asm/crypto/glue_helper.h
+++ b/arch/x86/include/asm/crypto/glue_helper.h
@@ -18,12 +18,6 @@ typedef void (*common_glue_ctr_func_t)(void *ctx, u128 *dst, const u128 *src,
 typedef void (*common_glue_xts_func_t)(void *ctx, u128 *dst, const u128 *src,
 				       le128 *iv);
 
-#define GLUE_FUNC_CAST(fn) ((common_glue_func_t)(fn))
-#define GLUE_CBC_FUNC_CAST(fn) ((common_glue_cbc_func_t)(fn))
-#define GLUE_CTR_FUNC_CAST(fn) ((common_glue_ctr_func_t)(fn))
-#define GLUE_XTS_FUNC_CAST(fn) ((common_glue_xts_func_t)(fn))
-
-
 #define GLUE_CAST(func, context)					\
 asmlinkage void func(struct context *ctx, u8 *dst, const u8 *src);	\
 asmlinkage static inline						\
diff --git a/include/crypto/xts.h b/include/crypto/xts.h
index 75fd96ff976b..15ae7fdc0478 100644
--- a/include/crypto/xts.h
+++ b/include/crypto/xts.h
@@ -8,8 +8,6 @@
 
 #define XTS_BLOCK_SIZE 16
 
-#define XTS_TWEAK_CAST(x) ((void (*)(void *, u8*, const u8*))(x))
-
 static inline int xts_check_key(struct crypto_tfm *tfm,
 				const u8 *key, unsigned int keylen)
 {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
                   ` (6 preceding siblings ...)
  2019-05-07 16:13 ` [PATCH v3 7/7] crypto: x86/glue_helper: Remove function prototype cast helpers Kees Cook
@ 2019-05-07 17:00 ` Eric Biggers
  2019-05-07 21:07   ` Kees Cook
  2019-05-09  1:53 ` Eric Biggers
  8 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2019-05-07 17:00 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Joao Moreira, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, x86, linux-crypto, linux-kernel,
	kernel-hardening

On Tue, May 07, 2019 at 09:13:14AM -0700, Kees Cook wrote:
> It is possible to indirectly invoke functions with prototypes that do
> not match those of the respectively used function pointers by using void
> types or casts. This feature is frequently used as a way of relaxing
> function invocation, making it possible that different data structures
> are passed to different functions through the same pointer.
> 
> Despite the benefits, this can lead to a situation where functions with a
> given prototype are invoked by pointers with a different prototype. This
> is undesirable as it may prevent the use of heuristics such as prototype
> matching-based Control-Flow Integrity, which can be used to prevent
> ROP-based attacks.
> 
> One way of fixing this situation is through the use of inline helper
> functions with prototypes that match the one in the respective invoking
> pointer.
> 
> Given the above, the current efforts to improve the Linux security,
> and the upcoming kernel support to compilers with CFI features, this
> creates macros to be used to build the needed function definitions,
> to be used in camellia, cast6, serpent, twofish, and aesni.

So why not change the function prototypes to be compatible with common_glue_*_t
instead, rather than wrapping them with another layer of functions?  Is it
because indirect calls into asm code won't be allowed with CFI?

> 
> -Kees (and Joao)
> 
> v3:
> - no longer RFC
> - consolidate macros into glue_helper.h
> - include aesni which was using casts as well
> - remove XTS_TWEAK_CAST while we're at it
> 
> v2:
> - update cast macros for clarity
> 
> v1:
> - initial prototype
> 
> Joao Moreira (4):
>   crypto: x86/crypto: Use new glue function macros

This one should be "x86/serpent", not "x86/crypto".

>   crypto: x86/camellia: Use new glue function macros
>   crypto: x86/twofish: Use new glue function macros
>   crypto: x86/cast6: Use new glue function macros
> 
> Kees Cook (3):
>   crypto: x86/glue_helper: Add static inline function glue macros
>   crypto: x86/aesni: Use new glue function macros
>   crypto: x86/glue_helper: Remove function prototype cast helpers
> 
>  arch/x86/crypto/aesni-intel_glue.c         | 31 ++++-----
>  arch/x86/crypto/camellia_aesni_avx2_glue.c | 73 +++++++++-------------
>  arch/x86/crypto/camellia_aesni_avx_glue.c  | 63 +++++++------------
>  arch/x86/crypto/camellia_glue.c            | 21 +++----
>  arch/x86/crypto/cast6_avx_glue.c           | 65 +++++++++----------
>  arch/x86/crypto/serpent_avx2_glue.c        | 65 +++++++++----------
>  arch/x86/crypto/serpent_avx_glue.c         | 58 ++++++-----------
>  arch/x86/crypto/serpent_sse2_glue.c        | 27 +++++---
>  arch/x86/crypto/twofish_avx_glue.c         | 71 ++++++++-------------
>  arch/x86/crypto/twofish_glue_3way.c        | 28 ++++-----
>  arch/x86/include/asm/crypto/camellia.h     | 64 ++++++-------------
>  arch/x86/include/asm/crypto/glue_helper.h  | 34 ++++++++--
>  arch/x86/include/asm/crypto/serpent-avx.h  | 28 ++++-----
>  arch/x86/include/asm/crypto/twofish.h      | 22 ++++---
>  include/crypto/xts.h                       |  2 -
>  15 files changed, 283 insertions(+), 369 deletions(-)
> 
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-07 17:00 ` [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Eric Biggers
@ 2019-05-07 21:07   ` Kees Cook
  2019-05-07 21:50     ` Eric Biggers
  0 siblings, 1 reply; 21+ messages in thread
From: Kees Cook @ 2019-05-07 21:07 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Herbert Xu, Joao Moreira, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, X86 ML, linux-crypto, LKML, Kernel Hardening

On Tue, May 7, 2019 at 10:00 AM Eric Biggers <ebiggers@kernel.org> wrote:
> > Given the above, the current efforts to improve the Linux security,
> > and the upcoming kernel support to compilers with CFI features, this
> > creates macros to be used to build the needed function definitions,
> > to be used in camellia, cast6, serpent, twofish, and aesni.
>
> So why not change the function prototypes to be compatible with common_glue_*_t
> instead, rather than wrapping them with another layer of functions?  Is it
> because indirect calls into asm code won't be allowed with CFI?

I don't know why they're not that way to begin with. But given that
the casting was already happening, this is just moving it to a place
where CFI won't be angry. :)

> >   crypto: x86/crypto: Use new glue function macros
>
> This one should be "x86/serpent", not "x86/crypto".

Oops, yes, that's my typo. I'll fix for v4. Do the conversions
themselves look okay (the changes are pretty mechanical)? If so,
Herbert, do you want a v4 with the typo fix, or do you want to fix
that up yourself?

Thanks!

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-07 21:07   ` Kees Cook
@ 2019-05-07 21:50     ` Eric Biggers
  2019-05-08 13:36       ` Herbert Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2019-05-07 21:50 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Joao Moreira, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, X86 ML, linux-crypto, LKML, Kernel Hardening

On Tue, May 07, 2019 at 02:07:51PM -0700, Kees Cook wrote:
> On Tue, May 7, 2019 at 10:00 AM Eric Biggers <ebiggers@kernel.org> wrote:
> > > Given the above, the current efforts to improve the Linux security,
> > > and the upcoming kernel support to compilers with CFI features, this
> > > creates macros to be used to build the needed function definitions,
> > > to be used in camellia, cast6, serpent, twofish, and aesni.
> >
> > So why not change the function prototypes to be compatible with common_glue_*_t
> > instead, rather than wrapping them with another layer of functions?  Is it
> > because indirect calls into asm code won't be allowed with CFI?
> 
> I don't know why they're not that way to begin with. But given that
> the casting was already happening, this is just moving it to a place
> where CFI won't be angry. :)
> 
> > >   crypto: x86/crypto: Use new glue function macros
> >
> > This one should be "x86/serpent", not "x86/crypto".
> 
> Oops, yes, that's my typo. I'll fix for v4. Do the conversions
> themselves look okay (the changes are pretty mechanical)? If so,
> Herbert, do you want a v4 with the typo fix, or do you want to fix
> that up yourself?
> 
> Thanks!
> 

I don't know yet.  It's difficult to read the code with 2 layers of macros.

Hence why I asked why you didn't just change the prototypes to be compatible.

- Eric

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-07 21:50     ` Eric Biggers
@ 2019-05-08 13:36       ` Herbert Xu
  2019-05-08 21:08         ` Kees Cook
  0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2019-05-08 13:36 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Kees Cook, Joao Moreira, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, X86 ML, linux-crypto, LKML, Kernel Hardening

On Tue, May 07, 2019 at 02:50:46PM -0700, Eric Biggers wrote:
>
> I don't know yet.  It's difficult to read the code with 2 layers of macros.
> 
> Hence why I asked why you didn't just change the prototypes to be compatible.

I agree.  Kees, since you're changing this anyway please make it
look better not worse.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-08 13:36       ` Herbert Xu
@ 2019-05-08 21:08         ` Kees Cook
  2019-05-09  1:39           ` Herbert Xu
  2019-05-09  2:04           ` Eric Biggers
  0 siblings, 2 replies; 21+ messages in thread
From: Kees Cook @ 2019-05-08 21:08 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Eric Biggers, Kees Cook, Joao Moreira, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, X86 ML, linux-crypto, LKML,
	Kernel Hardening

On Wed, May 8, 2019 at 6:36 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Tue, May 07, 2019 at 02:50:46PM -0700, Eric Biggers wrote:
> >
> > I don't know yet.  It's difficult to read the code with 2 layers of macros.
> >
> > Hence why I asked why you didn't just change the prototypes to be compatible.
>
> I agree.  Kees, since you're changing this anyway please make it
> look better not worse.

Do you mean I should use the typedefs in the new macros? I'm not aware
of a way to use a typedef to declare a function body, so I had to
repeat them. I'm open to suggestions!

As far as "fixing the prototypes", the API is agnostic of the context
type, and uses void *. And also it provides a way to call the same
function with different pointer types on the other arguments:

For example, quoting the existing code:

asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
                                const u8 *src);

Which is used for ecb and cbc:

#define GLUE_FUNC_CAST(fn) ((common_glue_func_t)(fn))
#define GLUE_CBC_FUNC_CAST(fn) ((common_glue_cbc_func_t)(fn))
...
static const struct common_glue_ctx twofish_dec = {
...
                .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk) }

static const struct common_glue_ctx twofish_dec_cbc = {
...
                .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk) }

which have different prototypes:

typedef void (*common_glue_func_t)(void *ctx, u8 *dst, const u8 *src);
typedef void (*common_glue_cbc_func_t)(void *ctx, u128 *dst, const u128 *src);
...
struct common_glue_func_entry {
        unsigned int num_blocks; /* number of blocks that @fn will process */
        union {
                common_glue_func_t ecb;
                common_glue_cbc_func_t cbc;
                common_glue_ctr_func_t ctr;
                common_glue_xts_func_t xts;
        } fn_u;
};

What CFI dislikes is calling a func(void *ctx, ...) when the actual
function is, for example, func(struct twofish_ctx *ctx, ...).

This needs to be fixed at the call site, not the static initializers,
and since the call site uses the void * prototype, there needs to be a
static inline that will satisfy the types.

I'm open to suggestions! :)

Thanks,

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 21+ messages in thread
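
To make the mismatch concrete, a minimal illustration of what a
prototype-checking CFI scheme verifies at an indirect call, using the
twofish names from above (the _glue wrapper name follows the series'
naming convention; this is a sketch, not the exact kernel code):

typedef void (*common_glue_func_t)(void *ctx, u8 *dst, const u8 *src);

static void demo(struct twofish_ctx *tctx, u8 *dst, const u8 *src)
{
	common_glue_func_t fp;

	/* today: the target really takes struct twofish_ctx *, so this
	 * only compiles because of the cast; prototype-matching CFI
	 * rejects the indirect call at runtime */
	fp = (common_glue_func_t)twofish_dec_blk;
	fp(tctx, dst, src);

	/* with the wrapper: the indirect target's prototype matches fp
	 * exactly, and the context cast moves into the wrapper's direct
	 * call, which CFI does not police */
	fp = twofish_dec_blk_glue;
	fp(tctx, dst, src);
}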

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-08 21:08         ` Kees Cook
@ 2019-05-09  1:39           ` Herbert Xu
  2019-05-09  2:04           ` Eric Biggers
  1 sibling, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2019-05-09  1:39 UTC (permalink / raw)
  To: Kees Cook
  Cc: Eric Biggers, Joao Moreira, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, X86 ML, linux-crypto, LKML, Kernel Hardening

On Wed, May 08, 2019 at 02:08:25PM -0700, Kees Cook wrote:
>
> For example, quoting the existing code:
> 
> asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
>                                 const u8 *src);

So just make it

	asmlinkage void twofish_dec_blk(void *ctx, u8 *dst, const u8 *src);

and you won't need any of these casts.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-07 16:13 [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Kees Cook
                   ` (7 preceding siblings ...)
  2019-05-07 17:00 ` [PATCH v3 0/7] crypto: x86: Fix indirect function call casts Eric Biggers
@ 2019-05-09  1:53 ` Eric Biggers
  8 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2019-05-09  1:53 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Joao Moreira, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, x86, linux-crypto, linux-kernel,
	kernel-hardening

On Tue, May 07, 2019 at 09:13:14AM -0700, Kees Cook wrote:
> It is possible to indirectly invoke functions with prototypes that do
> not match those of the respectively used function pointers by using void
> types or casts. This feature is frequently used as a way of relaxing
> function invocation, making it possible that different data structures
> are passed to different functions through the same pointer.
> 
> Despite the benefits, this can lead to a situation where functions with a
> given prototype are invoked by pointers with a different prototype. This
> is undesirable as it may prevent the use of heuristics such as prototype
> matching-based Control-Flow Integrity, which can be used to prevent
> ROP-based attacks.
> 
> One way of fixing this situation is through the use of inline helper
> functions with prototypes that match the one in the respective invoking
> pointer.
> 
> Given the above, the current efforts to improve the Linux security,
> and the upcoming kernel support to compilers with CFI features, this
> creates macros to be used to build the needed function definitions,
> to be used in camellia, cast6, serpent, twofish, and aesni.
> 
> -Kees (and Joao)

Did you try enabling -Wcast-function-type?  It seems you missed some cases:

arch/x86/crypto/sha256_ssse3_glue.c: In function ‘sha256_update’:
arch/x86/crypto/sha256_ssse3_glue.c:62:10: warning: cast between incompatible function types from ‘void (*)(u32 *, const char *, u64)’ {aka ‘void (*)(unsigned int *, const char *, long long unsigned int)’} to ‘void (*)(struct sha256_state *, const u8 *, int)’ {aka ‘void (*)(struct sha256_state *, const unsigned char *, int)’} [-Wcast-function-type]
          (sha256_block_fn *)sha256_xform);
          ^
arch/x86/crypto/sha256_ssse3_glue.c: In function ‘sha256_finup’:
arch/x86/crypto/sha256_ssse3_glue.c:77:11: warning: cast between incompatible function types from ‘void (*)(u32 *, const char *, u64)’ {aka ‘void (*)(unsigned int *, const char *, long long unsigned int)’} to ‘void (*)(struct sha256_state *, const u8 *, int)’ {aka ‘void (*)(struct sha256_state *, const unsigned char *, int)’} [-Wcast-function-type]
           (sha256_block_fn *)sha256_xform);
           ^
arch/x86/crypto/sha256_ssse3_glue.c:78:32: warning: cast between incompatible function types from ‘void (*)(u32 *, const char *, u64)’ {aka ‘void (*)(unsigned int *, const char *, long long unsigned int)’} to ‘void (*)(struct sha256_state *, const u8 *, int)’ {aka ‘void (*)(struct sha256_state *, const unsigned char *, int)’} [-Wcast-function-type]
  sha256_base_do_finalize(desc, (sha256_block_fn *)sha256_xform);
                                ^
  CC      arch/x86/crypto/sha512_ssse3_glue.o
arch/x86/crypto/sha512_ssse3_glue.c: In function ‘sha512_update’:
arch/x86/crypto/sha512_ssse3_glue.c:61:10: warning: cast between incompatible function types from ‘void (*)(u64 *, const char *, u64)’ {aka ‘void (*)(long long unsigned int *, const char *, long long unsigned int)’} to ‘void (*)(struct sha512_state *, const u8 *, int)’ {aka ‘void (*)(struct sha512_state *, const unsigned char *, int)’} [-Wcast-function-type]
          (sha512_block_fn *)sha512_xform);
          ^
arch/x86/crypto/sha512_ssse3_glue.c: In function ‘sha512_finup’:
arch/x86/crypto/sha512_ssse3_glue.c:76:11: warning: cast between incompatible function types from ‘void (*)(u64 *, const char *, u64)’ {aka ‘void (*)(long long unsigned int *, const char *, long long unsigned int)’} to ‘void (*)(struct sha512_state *, const u8 *, int)’ {aka ‘void (*)(struct sha512_state *, const unsigned char *, int)’} [-Wcast-function-type]
           (sha512_block_fn *)sha512_xform);
           ^
arch/x86/crypto/sha512_ssse3_glue.c:77:32: warning: cast between incompatible function types from ‘void (*)(u64 *, const char *, u64)’ {aka ‘void (*)(long long unsigned int *, const char *, long long unsigned int)’} to ‘void (*)(struct sha512_state *, const u8 *, int)’ {aka ‘void (*)(struct sha512_state *, const unsigned char *, int)’} [-Wcast-function-type]
  sha512_base_do_finalize(desc, (sha512_block_fn *)sha512_xform);
                                ^

^ permalink raw reply	[flat|nested] 21+ messages in thread
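
For what it's worth, the sha256/sha512 warnings quoted above would
presumably take the same kind of fix as the rest of this series: a
small wrapper with the prototype that sha256_base_do_update() expects.
A hedged sketch, with the asm prototype taken from the warning text;
sha256_transform_ssse3 stands in for whichever transform sha256_xform
points to, and the sst->state field access is an assumption about
struct sha256_state:

asmlinkage void sha256_transform_ssse3(u32 *digest, const char *data,
				       u64 rounds);

static void sha256_ssse3_block(struct sha256_state *sst, const u8 *src,
			       int blocks)
{
	/* the digest words live in sst->state; the existing cast relies
	 * on that being the first member of the struct */
	sha256_transform_ssse3(sst->state, (const char *)src, blocks);
}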

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-08 21:08         ` Kees Cook
  2019-05-09  1:39           ` Herbert Xu
@ 2019-05-09  2:04           ` Eric Biggers
  2019-05-09  3:12             ` Joao Moreira
  2019-05-09 15:38             ` Sami Tolvanen
  1 sibling, 2 replies; 21+ messages in thread
From: Eric Biggers @ 2019-05-09  2:04 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Joao Moreira, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, X86 ML, linux-crypto, LKML, Kernel Hardening

On Wed, May 08, 2019 at 02:08:25PM -0700, Kees Cook wrote:
> On Wed, May 8, 2019 at 6:36 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
> > On Tue, May 07, 2019 at 02:50:46PM -0700, Eric Biggers wrote:
> > >
> > > I don't know yet.  It's difficult to read the code with 2 layers of macros.
> > >
> > > Hence why I asked why you didn't just change the prototypes to be compatible.
> >
> > I agree.  Kees, since you're changing this anyway please make it
> > look better not worse.
> 
> Do you mean I should use the typedefs in the new macros? I'm not aware
> of a way to use a typedef to declare a function body, so I had to
> repeat them. I'm open to suggestions!
> 
> As far as "fixing the prototypes", the API is agnostic of the context
> type, and uses void *. And also it provides a way to call the same
> function with different pointer types on the other arguments:
> 
> For example, quoting the existing code:
> 
> asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
>                                 const u8 *src);
> 
> Which is used for ecb and cbc:
> 
> #define GLUE_FUNC_CAST(fn) ((common_glue_func_t)(fn))
> #define GLUE_CBC_FUNC_CAST(fn) ((common_glue_cbc_func_t)(fn))
> ...
> static const struct common_glue_ctx twofish_dec = {
> ...
>                 .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk) }
> 
> static const struct common_glue_ctx twofish_dec_cbc = {
> ...
>                 .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk) }
> 
> which have different prototypes:
> 
> typedef void (*common_glue_func_t)(void *ctx, u8 *dst, const u8 *src);
> typedef void (*common_glue_cbc_func_t)(void *ctx, u128 *dst, const u128 *src);
> ...
> struct common_glue_func_entry {
>         unsigned int num_blocks; /* number of blocks that @fn will process */
>         union {
>                 common_glue_func_t ecb;
>                 common_glue_cbc_func_t cbc;
>                 common_glue_ctr_func_t ctr;
>                 common_glue_xts_func_t xts;
>         } fn_u;
> };
> 

As Herbert said, the ctx parameters could be made 'void *'.

And I also asked whether indirect calls to asm code are even allowed with CFI.
IIRC, the AOSP kernels have been patched to remove them from arm64.  It would be
helpful if you would answer that question, since it would inform the best
approach here.

As for the "ecb" functions taking 'u8 *' but the "cbc" ones taking 'u128 *' and
the same function being used in the blocks==1 case, you could just pick one of
the types to use for both.  'u8 *' probably makes more sense since both ecb and
cbc operate on blocks of 16 bytes but don't interpret them as 128-bit integers.

- Eric

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-09  2:04           ` Eric Biggers
@ 2019-05-09  3:12             ` Joao Moreira
  2019-05-09  3:16               ` Herbert Xu
  2019-05-09 15:38             ` Sami Tolvanen
  1 sibling, 1 reply; 21+ messages in thread
From: Joao Moreira @ 2019-05-09  3:12 UTC (permalink / raw)
  To: Eric Biggers, Kees Cook
  Cc: Herbert Xu, Ingo Molnar, Thomas Gleixner, Borislav Petkov,
	X86 ML, linux-crypto, LKML, Kernel Hardening



On 5/8/19 11:04 PM, Eric Biggers wrote:
> On Wed, May 08, 2019 at 02:08:25PM -0700, Kees Cook wrote:
>> On Wed, May 8, 2019 at 6:36 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
>>> On Tue, May 07, 2019 at 02:50:46PM -0700, Eric Biggers wrote:
>>>>
>>>> I don't know yet.  It's difficult to read the code with 2 layers of macros.
>>>>
>>>> Hence why I asked why you didn't just change the prototypes to be compatible.
>>>
>>> I agree.  Kees, since you're changing this anyway please make it
>>> look better not worse.
>>
>> Do you mean I should use the typedefs in the new macros? I'm not aware
>> of a way to use a typedef to declare a function body, so I had to
>> repeat them. I'm open to suggestions!
>>
>> As far as "fixing the prototypes", the API is agnostic of the context
>> type, and uses void *. And also it provides a way to call the same
>> function with different pointer types on the other arguments:
>>
>> For example, quoting the existing code:
>>
>> asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
>>                                  const u8 *src);
>>
>> Which is used for ecb and cbc:
>>
>> #define GLUE_FUNC_CAST(fn) ((common_glue_func_t)(fn))
>> #define GLUE_CBC_FUNC_CAST(fn) ((common_glue_cbc_func_t)(fn))
>> ...
>> static const struct common_glue_ctx twofish_dec = {
>> ...
>>                  .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk) }
>>
>> static const struct common_glue_ctx twofish_dec_cbc = {
>> ...
>>                  .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk) }
>>
>> which have different prototypes:
>>
>> typedef void (*common_glue_func_t)(void *ctx, u8 *dst, const u8 *src);
>> typedef void (*common_glue_cbc_func_t)(void *ctx, u128 *dst, const u128 *src);
>> ...
>> struct common_glue_func_entry {
>>          unsigned int num_blocks; /* number of blocks that @fn will process */
>>          union {
>>                  common_glue_func_t ecb;
>>                  common_glue_cbc_func_t cbc;
>>                  common_glue_ctr_func_t ctr;
>>                  common_glue_xts_func_t xts;
>>          } fn_u;
>> };
>>
> 
> As Herbert said, the ctx parameters could be made 'void *'.
> 

This is how things were done in the original patch set, but some 
concerns were raised about this approach:

https://lkml.org/lkml/2018/4/16/74

Tks,
Joao.

> And I also asked whether indirect calls to asm code are even allowed with CFI.
> IIRC, the AOSP kernels have been patched to remove them from arm64.  It would be
> helpful if you would answer that question, since it would inform the best
> approach here.
> 
> As for the "ecb" functions taking 'u8 *' but the "cbc" ones taking 'u128 *' and
> the same function being used in the blocks==1 case, you could just pick one of
> the types to use for both.  'u8 *' probably makes more sense since both ecb and
> cbc operate on blocks of 16 bytes but don't interpret them as 128-bit integers.
> 
> - Eric
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-09  3:12             ` Joao Moreira
@ 2019-05-09  3:16               ` Herbert Xu
  0 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2019-05-09  3:16 UTC (permalink / raw)
  To: Joao Moreira
  Cc: Eric Biggers, Kees Cook, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, X86 ML, linux-crypto, LKML, Kernel Hardening

On Thu, May 09, 2019 at 12:12:54AM -0300, Joao Moreira wrote:
>
> This is how things were done in the original patch set, but some concerns
> were raised about this approach:
> 
> https://lkml.org/lkml/2018/4/16/74

No that's not what I'm suggesting at all.  Just get rid of those
wrapper functions and change the underlying asm functions to take
a void *.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-09  2:04           ` Eric Biggers
  2019-05-09  3:12             ` Joao Moreira
@ 2019-05-09 15:38             ` Sami Tolvanen
  2019-05-09 17:58               ` Eric Biggers
  1 sibling, 1 reply; 21+ messages in thread
From: Sami Tolvanen @ 2019-05-09 15:38 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Kees Cook, Herbert Xu, Joao Moreira, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, X86 ML, linux-crypto, LKML,
	Kernel Hardening

On Wed, May 08, 2019 at 07:04:40PM -0700, Eric Biggers wrote:
> And I also asked whether indirect calls to asm code are even allowed
> with CFI. IIRC, the AOSP kernels have been patched to remove them from
> arm64

At least with clang, indirect calls to stand-alone assembly functions
trip CFI checks, which is why Android kernels use static inline stubs
to convert these to direct calls instead.

Sami

^ permalink raw reply	[flat|nested] 21+ messages in thread
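
For anyone unfamiliar with the Android approach Sami describes, the
stub pattern looks roughly like this (names are made up for
illustration):

/* asm routine; calling it indirectly would trip clang's CFI check */
asmlinkage void cipher_round_asm(void *ctx, u8 *dst, const u8 *src);

/* C stub: it is the stub's address that lands in function pointer
 * tables, and the stub reaches the asm code via a direct call, which
 * CFI does not instrument */
static inline void cipher_round(void *ctx, u8 *dst, const u8 *src)
{
	cipher_round_asm(ctx, dst, src);
}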

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-09 15:38             ` Sami Tolvanen
@ 2019-05-09 17:58               ` Eric Biggers
  2019-05-09 19:27                 ` Sami Tolvanen
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2019-05-09 17:58 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: Kees Cook, Herbert Xu, Joao Moreira, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, X86 ML, linux-crypto, LKML,
	Kernel Hardening

On Thu, May 09, 2019 at 08:38:28AM -0700, Sami Tolvanen wrote:
> On Wed, May 08, 2019 at 07:04:40PM -0700, Eric Biggers wrote:
> > And I also asked whether indirect calls to asm code are even allowed
> > with CFI. IIRC, the AOSP kernels have been patched to remove them from
> > arm64
> 
> At least with clang, indirect calls to stand-alone assembly functions
> trip CFI checks, which is why Android kernels use static inline stubs
> to convert these to direct calls instead.
> 
> Sami

Thanks Sami.  Is there any way to annotate assembly functions such that they
work directly with CFI?  Otherwise, we need the wrapper functions.

Kees and Joao, it would be helpful if you'd explain this in the patchset.

- Eric

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 0/7] crypto: x86: Fix indirect function call casts
  2019-05-09 17:58               ` Eric Biggers
@ 2019-05-09 19:27                 ` Sami Tolvanen
  0 siblings, 0 replies; 21+ messages in thread
From: Sami Tolvanen @ 2019-05-09 19:27 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Kees Cook, Herbert Xu, Joao Moreira, Ingo Molnar,
	Thomas Gleixner, Borislav Petkov, X86 ML, linux-crypto, LKML,
	Kernel Hardening

On Thu, May 09, 2019 at 10:58:23AM -0700, Eric Biggers wrote:
> Is there any way to annotate assembly functions such that they work
> directly with CFI?

Not to my knowledge.

Sami

^ permalink raw reply	[flat|nested] 21+ messages in thread

