* [PATCH 0/4] x86/crypto: Fix function prototypes
@ 2018-04-15  4:15 Joao Moreira
  2018-04-15  4:15 ` [PATCH 1/4] x86/crypto: camellia: " Joao Moreira
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Joao Moreira @ 2018-04-15  4:15 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, x86, herbert, davem, tglx, mingo, hpa, gregkh, keescook

By using 'void *' types, it is possible to indirectly invoke functions
whose prototypes do not match those of the function pointers used to call
them. This is frequently used as a way of relaxing function invocation,
allowing different data structures to be passed to different functions
through the same pointer type.
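
In the x86 crypto code this pattern shows up, for example, in the glue
helper definitions (quoted here for illustration):

  typedef void (*common_glue_func_t)(void *ctx, u8 *dst, const u8 *src);

  #define GLUE_FUNC_CAST(fn) ((common_glue_func_t)(fn))

where GLUE_FUNC_CAST() allows a function taking a concrete context pointer,
such as 'struct camellia_ctx *', to be stored in a 'common_glue_func_t'
whose first parameter is 'void *'.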

Despite the benefits, this can lead to situations where functions with a
given prototype are invoked through pointers with a different prototype,
which is undesirable as it may prevent the use of heuristics such as
prototype-matching-based Control-Flow Integrity (CFI), which can be used
to prevent ROP-based attacks.

One way of fixing this is to convert the function prototypes to the one
used in the pointer declaration, which means converting function arguments
from a given '<type> *' to a 'void *' and then casting their uses
accordingly throughout the function scope.
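
As an illustration, for one of the camellia helpers touched below, such a
conversion looks roughly like this (a sketch of the pattern applied
throughout the series):

  /* Before: the prototype is tied to the concrete context type. */
  static inline void camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
				      const u8 *src)
  {
	__camellia_enc_blk(ctx, dst, src, false);
  }

  /* After: the prototype matches the 'void *ctx' function pointer type,
   * and the cast happens inside the wrapper. */
  static inline void camellia_enc_blk(void *ctx, u8 *dst, const u8 *src)
  {
	__camellia_enc_blk((struct camellia_ctx *) ctx, dst, src, false);
  }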

Given the above, the ongoing efforts to improve Linux kernel security, and
the upcoming kernel support for compilers with CFI features, fix the
prototypes in the x86/crypto implementations of the camellia, cast6,
serpent, and twofish algorithms.

This series does not introduce semantic changes to the cryptographic
algorithms; nevertheless, the affected algorithms were tested with the
help of tcrypt.ko and showed no visible regressions.

Joao Moreira (4):
  x86/crypto: camellia: Fix function prototypes
  x86/crypto: cast6: Fix function prototypes
  x86/crypto: serpent: Fix function prototypes
  x86/crypto: twofish: Fix function prototypes

 arch/x86/crypto/camellia_aesni_avx2_glue.c | 25 ++++++++---------
 arch/x86/crypto/camellia_aesni_avx_glue.c  | 21 +++++++--------
 arch/x86/crypto/camellia_glue.c            |  6 ++---
 arch/x86/crypto/cast6_avx_glue.c           | 22 ++++++---------
 arch/x86/crypto/serpent_avx2_glue.c        | 14 +++++-----
 arch/x86/crypto/serpent_avx_glue.c         | 19 ++++++-------
 arch/x86/crypto/twofish_avx_glue.c         | 30 +++++++++------------
 arch/x86/crypto/twofish_glue.c             |  7 +++--
 arch/x86/crypto/twofish_glue_3way.c        | 10 +++----
 arch/x86/include/asm/crypto/camellia.h     | 43 +++++++++++++-----------------
 arch/x86/include/asm/crypto/serpent-avx.h  | 25 ++++++++---------
 arch/x86/include/asm/crypto/serpent-sse2.h | 30 +++++++++------------
 arch/x86/include/asm/crypto/twofish.h      |  9 +++----
 13 files changed, 108 insertions(+), 153 deletions(-)

-- 
2.12.0

* [PATCH 1/4] x86/crypto: camellia: Fix function prototypes
  2018-04-15  4:15 [PATCH 0/4] x86/crypto: Fix function prototypes Joao Moreira
@ 2018-04-15  4:15 ` Joao Moreira
  2018-04-16  6:10   ` Ingo Molnar
  2018-04-15  4:15 ` [PATCH 2/4] x86/crypto: cast6: " Joao Moreira
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Joao Moreira @ 2018-04-15  4:15 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, x86, herbert, davem, tglx, mingo, hpa, gregkh, keescook

Convert the use of 'struct camellia_ctx *' to 'void *' in the prototypes
of functions which are referenced through 'struct common_glue_func_entry',
so that their prototypes match those of the struct's function pointers
and, consequently, become compatible with CFI requirements.

Whenever needed, cast 'void *' to 'struct camellia_ctx *'.

Signed-off-by: João Moreira <jmoreira@suse.de>
---
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 25 ++++++++---------
 arch/x86/crypto/camellia_aesni_avx_glue.c  | 21 +++++++--------
 arch/x86/crypto/camellia_glue.c            |  6 ++---
 arch/x86/include/asm/crypto/camellia.h     | 43 +++++++++++++-----------------
 4 files changed, 40 insertions(+), 55 deletions(-)

diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c
index 60907c139c4e..6fe248ee5efa 100644
--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c
@@ -27,20 +27,17 @@
 #define CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS 32
 
 /* 32-way AVX2/AES-NI parallel cipher functions */
-asmlinkage void camellia_ecb_enc_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ecb_dec_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-
-asmlinkage void camellia_cbc_dec_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ctr_32way(struct camellia_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
-
-asmlinkage void camellia_xts_enc_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
-asmlinkage void camellia_xts_dec_32way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
+asmlinkage void camellia_ecb_enc_32way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void camellia_ecb_dec_32way(void *ctx, u8 *dst, const u8 *src);
+
+asmlinkage void camellia_cbc_dec_32way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void camellia_ctr_32way(void *ctx, u8 *dst, const u8 *src,
+				   le128 *iv);
+
+asmlinkage void camellia_xts_enc_32way(void *ctx, u8 *dst, const u8 *src,
+				       le128 *iv);
+asmlinkage void camellia_xts_dec_32way(void *ctx, u8 *dst, const u8 *src,
+				       le128 *iv);
 
 static const struct common_glue_ctx camellia_enc = {
 	.num_funcs = 4,
diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c
index d96429da88eb..3ccfd75b4af4 100644
--- a/arch/x86/crypto/camellia_aesni_avx_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
@@ -26,28 +26,25 @@
 #define CAMELLIA_AESNI_PARALLEL_BLOCKS 16
 
 /* 16-way parallel cipher functions (avx/aes-ni) */
-asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
+asmlinkage void camellia_ecb_enc_16way(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_ecb_enc_16way);
 
-asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
+asmlinkage void camellia_ecb_dec_16way(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_ecb_dec_16way);
 
-asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
+asmlinkage void camellia_cbc_dec_16way(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_cbc_dec_16way);
 
-asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
+asmlinkage void camellia_ctr_16way(void *ctx, u8 *dst, const u8 *src,
+				   le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_ctr_16way);
 
-asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
+asmlinkage void camellia_xts_enc_16way(void *ctx, u8 *dst, const u8 *src,
+				       le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_xts_enc_16way);
 
-asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
+asmlinkage void camellia_xts_dec_16way(void *ctx, u8 *dst, const u8 *src,
+				       le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_xts_dec_16way);
 
 void camellia_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c
index af4840ab2a3d..0942b3998de5 100644
--- a/arch/x86/crypto/camellia_glue.c
+++ b/arch/x86/crypto/camellia_glue.c
@@ -39,16 +39,14 @@
 asmlinkage void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
 				   const u8 *src, bool xor);
 EXPORT_SYMBOL_GPL(__camellia_enc_blk);
-asmlinkage void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst,
-				 const u8 *src);
+asmlinkage void camellia_dec_blk(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_dec_blk);
 
 /* 2-way parallel cipher functions */
 asmlinkage void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
 					const u8 *src, bool xor);
 EXPORT_SYMBOL_GPL(__camellia_enc_blk_2way);
-asmlinkage void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst,
-				      const u8 *src);
+asmlinkage void camellia_dec_blk_2way(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_dec_blk_2way);
 
 static void camellia_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
diff --git a/arch/x86/include/asm/crypto/camellia.h b/arch/x86/include/asm/crypto/camellia.h
index 10f8d590bcfe..974bea896d96 100644
--- a/arch/x86/include/asm/crypto/camellia.h
+++ b/arch/x86/include/asm/crypto/camellia.h
@@ -40,35 +40,29 @@ extern int xts_camellia_setkey(struct crypto_tfm *tfm, const u8 *key,
 /* regular block cipher functions */
 asmlinkage void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
 				   const u8 *src, bool xor);
-asmlinkage void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst,
-				 const u8 *src);
+asmlinkage void camellia_dec_blk(void *ctx, u8 *dst, const u8 *src);
 
 /* 2-way parallel cipher functions */
 asmlinkage void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
 					const u8 *src, bool xor);
-asmlinkage void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst,
-				      const u8 *src);
+asmlinkage void camellia_dec_blk_2way(void *ctx, u8 *dst, const u8 *src);
 
 /* 16-way parallel cipher functions (avx/aes-ni) */
-asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-
-asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src);
-asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
-
-asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
-asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
-				       const u8 *src, le128 *iv);
-
-static inline void camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
-				    const u8 *src)
+asmlinkage void camellia_ecb_enc_16way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void camellia_ecb_dec_16way(void *ctx, u8 *dst, const u8 *src);
+
+asmlinkage void camellia_cbc_dec_16way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void camellia_ctr_16way(void *ctx, u8 *dst, const u8 *src,
+				   le128 *iv);
+
+asmlinkage void camellia_xts_enc_16way(void *ctx, u8 *dst, const u8 *src,
+				       le128 *iv);
+asmlinkage void camellia_xts_dec_16way(void *ctx, u8 *dst, const u8 *src,
+				       le128 *iv);
+
+static inline void camellia_enc_blk(void *ctx, u8 *dst, const u8 *src)
 {
-	__camellia_enc_blk(ctx, dst, src, false);
+	__camellia_enc_blk((struct camellia_ctx *) ctx, dst, src, false);
 }
 
 static inline void camellia_enc_blk_xor(struct camellia_ctx *ctx, u8 *dst,
@@ -77,10 +71,9 @@ static inline void camellia_enc_blk_xor(struct camellia_ctx *ctx, u8 *dst,
 	__camellia_enc_blk(ctx, dst, src, true);
 }
 
-static inline void camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
-					 const u8 *src)
+static inline void camellia_enc_blk_2way(void *ctx, u8 *dst, const u8 *src)
 {
-	__camellia_enc_blk_2way(ctx, dst, src, false);
+	__camellia_enc_blk_2way((struct camellia_ctx *) ctx, dst, src, false);
 }
 
 static inline void camellia_enc_blk_xor_2way(struct camellia_ctx *ctx, u8 *dst,
-- 
2.12.0

* [PATCH 2/4] x86/crypto: cast6: Fix function prototypes
  2018-04-15  4:15 [PATCH 0/4] x86/crypto: Fix function prototypes Joao Moreira
  2018-04-15  4:15 ` [PATCH 1/4] x86/crypto: camellia: " Joao Moreira
@ 2018-04-15  4:15 ` Joao Moreira
  2018-04-15  4:15 ` [PATCH 3/4] x86/crypto: serpent: " Joao Moreira
  2018-04-15  4:15 ` [PATCH 4/4] x86/crypto: twofish: " Joao Moreira
  3 siblings, 0 replies; 9+ messages in thread
From: Joao Moreira @ 2018-04-15  4:15 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, x86, herbert, davem, tglx, mingo, hpa, gregkh, keescook

Convert the use of 'struct cast6_ctx *' to 'void *' in the prototypes of
functions which are referenced through 'struct common_glue_func_entry',
so that their prototypes match those of the struct's function pointers
and, consequently, become compatible with CFI requirements.

Signed-off-by: João Moreira <jmoreira@suse.de>
---
 arch/x86/crypto/cast6_avx_glue.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c
index 50e684768c55..f4e05388fbbb 100644
--- a/arch/x86/crypto/cast6_avx_glue.c
+++ b/arch/x86/crypto/cast6_avx_glue.c
@@ -41,20 +41,16 @@
 
 #define CAST6_PARALLEL_BLOCKS 8
 
-asmlinkage void cast6_ecb_enc_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src);
-asmlinkage void cast6_ecb_dec_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src);
-
-asmlinkage void cast6_cbc_dec_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src);
-asmlinkage void cast6_ctr_8way(struct cast6_ctx *ctx, u8 *dst, const u8 *src,
-			       le128 *iv);
-
-asmlinkage void cast6_xts_enc_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
-asmlinkage void cast6_xts_dec_8way(struct cast6_ctx *ctx, u8 *dst,
-				   const u8 *src, le128 *iv);
+asmlinkage void cast6_ecb_enc_8way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void cast6_ecb_dec_8way(void *ctx, u8 *dst, const u8 *src);
+
+asmlinkage void cast6_cbc_dec_8way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void cast6_ctr_8way(void *ctx, u8 *dst, const u8 *src, le128 *iv);
+
+asmlinkage void cast6_xts_enc_8way(void *ctx, u8 *dst, const u8 *src,
+				  le128 *iv);
+asmlinkage void cast6_xts_dec_8way(void *ctx, u8 *dst, const u8 *src,
+				  le128 *iv);
 
 static void cast6_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
 {
-- 
2.12.0

* [PATCH 3/4] x86/crypto: serpent: Fix function prototypes
  2018-04-15  4:15 [PATCH 0/4] x86/crypto: Fix function prototypes Joao Moreira
  2018-04-15  4:15 ` [PATCH 1/4] x86/crypto: camellia: " Joao Moreira
  2018-04-15  4:15 ` [PATCH 2/4] x86/crypto: cast6: " Joao Moreira
@ 2018-04-15  4:15 ` Joao Moreira
  2018-04-15  4:15 ` [PATCH 4/4] x86/crypto: twofish: " Joao Moreira
  3 siblings, 0 replies; 9+ messages in thread
From: Joao Moreira @ 2018-04-15  4:15 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, x86, herbert, davem, tglx, mingo, hpa, gregkh, keescook

Convert the use of 'struct serpent_ctx *' to 'void *' in the prototypes
of functions which are referenced through 'struct common_glue_func_entry',
so that their prototypes match those of the struct's function pointers
and, consequently, become compatible with CFI requirements.

Whenever needed, cast 'void *' to 'struct serpent_ctx *'.

Signed-off-by: João Moreira <jmoreira@suse.de>
---
 arch/x86/crypto/serpent_avx2_glue.c        | 14 ++++++--------
 arch/x86/crypto/serpent_avx_glue.c         | 19 ++++++++-----------
 arch/x86/include/asm/crypto/serpent-avx.h  | 25 +++++++++++--------------
 arch/x86/include/asm/crypto/serpent-sse2.h | 30 ++++++++++++------------------
 4 files changed, 37 insertions(+), 51 deletions(-)

diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c
index 870f6d812a2d..17d89b4521ac 100644
--- a/arch/x86/crypto/serpent_avx2_glue.c
+++ b/arch/x86/crypto/serpent_avx2_glue.c
@@ -27,18 +27,16 @@
 #define SERPENT_AVX2_PARALLEL_BLOCKS 16
 
 /* 16-way AVX2 parallel cipher functions */
-asmlinkage void serpent_ecb_enc_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src);
-asmlinkage void serpent_ecb_dec_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src);
+asmlinkage void serpent_ecb_enc_16way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void serpent_ecb_dec_16way(void *ctx, u8 *dst, const u8 *src);
 asmlinkage void serpent_cbc_dec_16way(void *ctx, u128 *dst, const u128 *src);
 
 asmlinkage void serpent_ctr_16way(void *ctx, u128 *dst, const u128 *src,
 				  le128 *iv);
-asmlinkage void serpent_xts_enc_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src, le128 *iv);
-asmlinkage void serpent_xts_dec_16way(struct serpent_ctx *ctx, u8 *dst,
-				      const u8 *src, le128 *iv);
+asmlinkage void serpent_xts_enc_16way(void *ctx, u8 *dst, const u8 *src,
+				      le128 *iv);
+asmlinkage void serpent_xts_dec_16way(void *ctx, u8 *dst, const u8 *src,
+				      le128 *iv);
 
 static const struct common_glue_ctx serpent_enc = {
 	.num_funcs = 3,
diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c
index 6f778d3daa22..0d234458a734 100644
--- a/arch/x86/crypto/serpent_avx_glue.c
+++ b/arch/x86/crypto/serpent_avx_glue.c
@@ -41,28 +41,25 @@
 #include <asm/crypto/glue_helper.h>
 
 /* 8-way parallel cipher functions */
-asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
+asmlinkage void serpent_ecb_enc_8way_avx(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_ecb_enc_8way_avx);
 
-asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
+asmlinkage void serpent_ecb_dec_8way_avx(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_ecb_dec_8way_avx);
 
-asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
+asmlinkage void serpent_cbc_dec_8way_avx(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_cbc_dec_8way_avx);
 
-asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage void serpent_ctr_8way_avx(void *ctx, u8 *dst,
 				     const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_ctr_8way_avx);
 
-asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
+asmlinkage void serpent_xts_enc_8way_avx(void *ctx, u8 *dst, const u8 *src,
+					 le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_xts_enc_8way_avx);
 
-asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
+asmlinkage void serpent_xts_dec_8way_avx(void *ctx, u8 *dst, const u8 *src,
+					 le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_xts_dec_8way_avx);
 
 void __serpent_crypt_ctr(void *ctx, u128 *dst, const u128 *src, le128 *iv)
diff --git a/arch/x86/include/asm/crypto/serpent-avx.h b/arch/x86/include/asm/crypto/serpent-avx.h
index c958b7bd0fcb..c92cb0cc1f8b 100644
--- a/arch/x86/include/asm/crypto/serpent-avx.h
+++ b/arch/x86/include/asm/crypto/serpent-avx.h
@@ -17,20 +17,17 @@ struct serpent_xts_ctx {
 	struct serpent_ctx crypt_ctx;
 };
 
-asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
-asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
-
-asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src);
-asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-				     const u8 *src, le128 *iv);
-
-asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
-asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
-					 const u8 *src, le128 *iv);
+asmlinkage void serpent_ecb_enc_8way_avx(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void serpent_ecb_dec_8way_avx(void *ctx, u8 *dst, const u8 *src);
+
+asmlinkage void serpent_cbc_dec_8way_avx(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void serpent_ctr_8way_avx(void *ctx, u8 *dst, const u8 *src,
+				     le128 *iv);
+
+asmlinkage void serpent_xts_enc_8way_avx(void *ctx, u8 *dst, const u8 *src,
+					 le128 *iv);
+asmlinkage void serpent_xts_dec_8way_avx(void *ctx, u8 *dst, const u8 *src,
+					 le128 *iv);
 
 extern void __serpent_crypt_ctr(void *ctx, u128 *dst, const u128 *src,
 				le128 *iv);
diff --git a/arch/x86/include/asm/crypto/serpent-sse2.h b/arch/x86/include/asm/crypto/serpent-sse2.h
index 1a345e8a7496..69b14fda1f4c 100644
--- a/arch/x86/include/asm/crypto/serpent-sse2.h
+++ b/arch/x86/include/asm/crypto/serpent-sse2.h
@@ -14,22 +14,19 @@ asmlinkage void __serpent_enc_blk_4way(struct serpent_ctx *ctx, u8 *dst,
 asmlinkage void serpent_dec_blk_4way(struct serpent_ctx *ctx, u8 *dst,
 				     const u8 *src);
 
-static inline void serpent_enc_blk_xway(struct serpent_ctx *ctx, u8 *dst,
-					const u8 *src)
+static inline void serpent_enc_blk_xway(void *ctx, u8 *dst, const u8 *src)
 {
-	__serpent_enc_blk_4way(ctx, dst, src, false);
+	__serpent_enc_blk_4way((struct serpent_ctx *) ctx, dst, src, false);
 }
 
-static inline void serpent_enc_blk_xway_xor(struct serpent_ctx *ctx, u8 *dst,
-					    const u8 *src)
+static inline void serpent_enc_blk_xway_xor(void *ctx, u8 *dst, const u8 *src)
 {
-	__serpent_enc_blk_4way(ctx, dst, src, true);
+	__serpent_enc_blk_4way((struct serpent_ctx *) ctx, dst, src, true);
 }
 
-static inline void serpent_dec_blk_xway(struct serpent_ctx *ctx, u8 *dst,
-					const u8 *src)
+static inline void serpent_dec_blk_xway(void *ctx, u8 *dst, const u8 *src)
 {
-	serpent_dec_blk_4way(ctx, dst, src);
+	serpent_dec_blk_4way((struct serpent_ctx *) ctx, dst, src);
 }
 
 #else
@@ -41,22 +38,19 @@ asmlinkage void __serpent_enc_blk_8way(struct serpent_ctx *ctx, u8 *dst,
 asmlinkage void serpent_dec_blk_8way(struct serpent_ctx *ctx, u8 *dst,
 				     const u8 *src);
 
-static inline void serpent_enc_blk_xway(struct serpent_ctx *ctx, u8 *dst,
-				   const u8 *src)
+static inline void serpent_enc_blk_xway(void *ctx, u8 *dst, const u8 *src)
 {
-	__serpent_enc_blk_8way(ctx, dst, src, false);
+	__serpent_enc_blk_8way((struct serpent_ctx *) ctx, dst, src, false);
 }
 
-static inline void serpent_enc_blk_xway_xor(struct serpent_ctx *ctx, u8 *dst,
-				       const u8 *src)
+static inline void serpent_enc_blk_xway_xor(void *ctx, u8 *dst, const u8 *src)
 {
-	__serpent_enc_blk_8way(ctx, dst, src, true);
+	__serpent_enc_blk_8way((struct serpent_ctx *) ctx, dst, src, true);
 }
 
-static inline void serpent_dec_blk_xway(struct serpent_ctx *ctx, u8 *dst,
-				   const u8 *src)
+static inline void serpent_dec_blk_xway(void *ctx, u8 *dst, const u8 *src)
 {
-	serpent_dec_blk_8way(ctx, dst, src);
+	serpent_dec_blk_8way((struct serpent_ctx *) ctx, dst, src);
 }
 
 #endif
-- 
2.12.0

* [PATCH 4/4] x86/crypto: twofish: Fix function prototypes
  2018-04-15  4:15 [PATCH 0/4] x86/crypto: Fix function prototypes Joao Moreira
                   ` (2 preceding siblings ...)
  2018-04-15  4:15 ` [PATCH 3/4] x86/crypto: serpent: " Joao Moreira
@ 2018-04-15  4:15 ` Joao Moreira
  3 siblings, 0 replies; 9+ messages in thread
From: Joao Moreira @ 2018-04-15  4:15 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, x86, herbert, davem, tglx, mingo, hpa, gregkh, keescook

Convert the use of 'struct twofish_ctx *' to 'void *' in the prototypes
of functions which are referenced through 'struct common_glue_func_entry',
so that their prototypes match those of the struct's function pointers
and, consequently, become compatible with CFI requirements.

Whenever needed, cast 'void *' to 'struct twofish_ctx *'.

Signed-off-by: João Moreira <jmoreira@suse.de>
---
 arch/x86/crypto/twofish_avx_glue.c    | 30 +++++++++++++-----------------
 arch/x86/crypto/twofish_glue.c        |  7 +++----
 arch/x86/crypto/twofish_glue_3way.c   | 10 ++++------
 arch/x86/include/asm/crypto/twofish.h |  9 +++------
 4 files changed, 23 insertions(+), 33 deletions(-)

diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c
index b7a3904b953c..916a4904fa25 100644
--- a/arch/x86/crypto/twofish_avx_glue.c
+++ b/arch/x86/crypto/twofish_avx_glue.c
@@ -46,25 +46,21 @@
 #define TWOFISH_PARALLEL_BLOCKS 8
 
 /* 8-way parallel cipher functions */
-asmlinkage void twofish_ecb_enc_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
-asmlinkage void twofish_ecb_dec_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
-
-asmlinkage void twofish_cbc_dec_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
-asmlinkage void twofish_ctr_8way(struct twofish_ctx *ctx, u8 *dst,
-				 const u8 *src, le128 *iv);
-
-asmlinkage void twofish_xts_enc_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src, le128 *iv);
-asmlinkage void twofish_xts_dec_8way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src, le128 *iv);
-
-static inline void twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage void twofish_ecb_enc_8way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void twofish_ecb_dec_8way(void *ctx, u8 *dst, const u8 *src);
+
+asmlinkage void twofish_cbc_dec_8way(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void twofish_ctr_8way(void *ctx, u8 *dst, const u8 *src, le128 *iv);
+
+asmlinkage void twofish_xts_enc_8way(void *ctx, u8 *dst, const u8 *src,
+				     le128 *iv);
+asmlinkage void twofish_xts_dec_8way(void *ctx, u8 *dst, const u8 *src,
+				     le128 *iv);
+
+static inline void twofish_enc_blk_3way(void *ctx, u8 *dst,
 					const u8 *src)
 {
-	__twofish_enc_blk_3way(ctx, dst, src, false);
+	__twofish_enc_blk_3way((struct twofish_ctx *) ctx, dst, src, false);
 }
 
 static void twofish_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
diff --git a/arch/x86/crypto/twofish_glue.c b/arch/x86/crypto/twofish_glue.c
index 77e06c2da83d..3dee618c2dd2 100644
--- a/arch/x86/crypto/twofish_glue.c
+++ b/arch/x86/crypto/twofish_glue.c
@@ -44,11 +44,10 @@
 #include <linux/module.h>
 #include <linux/types.h>
 
-asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
-				const u8 *src);
+asmlinkage void twofish_enc_blk(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(twofish_enc_blk);
-asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
-				const u8 *src);
+
+asmlinkage void twofish_dec_blk(void *ctx, u8 *dst, const u8 *src);
 EXPORT_SYMBOL_GPL(twofish_dec_blk);
 
 static void twofish_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c
index 243e90a4b5d9..ce25be1bc260 100644
--- a/arch/x86/crypto/twofish_glue_3way.c
+++ b/arch/x86/crypto/twofish_glue_3way.c
@@ -36,16 +36,14 @@
 EXPORT_SYMBOL_GPL(__twofish_enc_blk_3way);
 EXPORT_SYMBOL_GPL(twofish_dec_blk_3way);
 
-static inline void twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
-					const u8 *src)
+static inline void twofish_enc_blk_3way(void *ctx, u8 *dst, const u8 *src)
 {
-	__twofish_enc_blk_3way(ctx, dst, src, false);
+	__twofish_enc_blk_3way((struct twofish_ctx *) ctx, dst, src, false);
 }
 
-static inline void twofish_enc_blk_xor_3way(struct twofish_ctx *ctx, u8 *dst,
-					    const u8 *src)
+static inline void twofish_enc_blk_xor_3way(void *ctx, u8 *dst, const u8 *src)
 {
-	__twofish_enc_blk_3way(ctx, dst, src, true);
+	__twofish_enc_blk_3way((struct twofish_ctx *) ctx, dst, src, true);
 }
 
 void twofish_dec_blk_cbc_3way(void *ctx, u128 *dst, const u128 *src)
diff --git a/arch/x86/include/asm/crypto/twofish.h b/arch/x86/include/asm/crypto/twofish.h
index 65bb80adba3e..871eaa7bacdc 100644
--- a/arch/x86/include/asm/crypto/twofish.h
+++ b/arch/x86/include/asm/crypto/twofish.h
@@ -18,16 +18,13 @@ struct twofish_xts_ctx {
 };
 
 /* regular block cipher functions from twofish_x86_64 module */
-asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
-				const u8 *src);
-asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
-				const u8 *src);
+asmlinkage void twofish_enc_blk(void *ctx, u8 *dst, const u8 *src);
+asmlinkage void twofish_dec_blk(void *ctx, u8 *dst, const u8 *src);
 
 /* 3-way parallel cipher functions */
 asmlinkage void __twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
 				       const u8 *src, bool xor);
-asmlinkage void twofish_dec_blk_3way(struct twofish_ctx *ctx, u8 *dst,
-				     const u8 *src);
+asmlinkage void twofish_dec_blk_3way(void *ctx, u8 *dst, const u8 *src);
 
 /* helpers from twofish_x86_64-3way module */
 extern void twofish_dec_blk_cbc_3way(void *ctx, u128 *dst, const u128 *src);
-- 
2.12.0

* Re: [PATCH 1/4] x86/crypto: camellia: Fix function prototypes
  2018-04-15  4:15 ` [PATCH 1/4] x86/crypto: camellia: " Joao Moreira
@ 2018-04-16  6:10   ` Ingo Molnar
  2018-04-16 14:35     ` João Moreira
  2018-04-18 16:07     ` Kees Cook
  0 siblings, 2 replies; 9+ messages in thread
From: Ingo Molnar @ 2018-04-16  6:10 UTC (permalink / raw)
  To: Joao Moreira
  Cc: kernel-hardening, linux-kernel, x86, herbert, davem, tglx, mingo,
	hpa, gregkh, keescook


* Joao Moreira <jmoreira@suse.de> wrote:

> Convert the use of 'struct camellia_ctx *' to 'void *' in the prototypes
> of functions which are referenced through 'struct common_glue_func_entry',
> so that their prototypes match those of the struct's function pointers
> and, consequently, become compatible with CFI requirements.
> 
> Whenever needed, cast 'void *' to 'struct camellia_ctx *'.

> +static inline void camellia_enc_blk(void *ctx, u8 *dst, const u8 *src)
>  {
> -	__camellia_enc_blk(ctx, dst, src, false);
> +	__camellia_enc_blk((struct camellia_ctx *) ctx, dst, src, false);
>  }

I don't see how this is an improvement: instead of having a proper type there's 
now an opaque type and an ugly (and dangerous) type cast.

What does "compatible with CFI requirements" mean?

Thanks,

	Ingo

* Re: [PATCH 1/4] x86/crypto: camellia: Fix function prototypes
  2018-04-16  6:10   ` Ingo Molnar
@ 2018-04-16 14:35     ` João Moreira
  2018-04-18 16:07     ` Kees Cook
  1 sibling, 0 replies; 9+ messages in thread
From: João Moreira @ 2018-04-16 14:35 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: kernel-hardening, linux-kernel, x86, herbert, davem, tglx, mingo,
	hpa, gregkh, keescook

> 
>> +static inline void camellia_enc_blk(void *ctx, u8 *dst, const u8 *src)
>>   {
>> -	__camellia_enc_blk(ctx, dst, src, false);
>> +	__camellia_enc_blk((struct camellia_ctx *) ctx, dst, src, false);
>>   }
> 
> I don't see how this is an improvement: instead of having a proper type there's
> now an opaque type and an ugly (and dangerous) type cast.
> 
> What does "compatible with CFI requirements" mean?

For "compatible with CFI requirements", I meant: given a set of 
functions that are invoked through a given set of pointers, all 
functions and pointers in these sets have the same prototype. I added a 
note about this CFI heuristic in the cover letter, but I guess I should 
have been clearer there and added something more substantial to the 
commit message.

Regarding the changes, this was the most straightforward way I found to
make the sources compliant with the above without requiring deeper
semantic changes to the underlying invoking code. On the other hand, I
agree that there is collateral damage and that an ideal fix would be the
other way around -- removing the opaque type from the pointer instead of
adding it to the functions.

I'll try to think of another approach and send it if I come up with
something.

Tks,
João.

* Re: [PATCH 1/4] x86/crypto: camellia: Fix function prototypes
  2018-04-16  6:10   ` Ingo Molnar
  2018-04-16 14:35     ` João Moreira
@ 2018-04-18 16:07     ` Kees Cook
  2018-04-19 10:35       ` Thomas Gleixner
  1 sibling, 1 reply; 9+ messages in thread
From: Kees Cook @ 2018-04-18 16:07 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Joao Moreira, Kernel Hardening, LKML, X86 ML, Herbert Xu,
	David S. Miller, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Greg KH

On Sun, Apr 15, 2018 at 11:10 PM, Ingo Molnar <mingo@kernel.org> wrote:
>
> * Joao Moreira <jmoreira@suse.de> wrote:
>
>> Convert the use of 'struct camellia_ctx *' to 'void *' in the prototypes
>> of functions which are referenced through 'struct common_glue_func_entry',
>> so that their prototypes match those of the struct's function pointers
>> and, consequently, become compatible with CFI requirements.
>>
>> Whenever needed, cast 'void *' to 'struct camellia_ctx *'.
>
>> +static inline void camellia_enc_blk(void *ctx, u8 *dst, const u8 *src)
>>  {
>> -     __camellia_enc_blk(ctx, dst, src, false);
>> +     __camellia_enc_blk((struct camellia_ctx *) ctx, dst, src, false);
>>  }
>
> I don't see how this is an improvement: instead of having a proper type there's
> now an opaque type and an ugly (and dangerous) type cast.

I agree with your instinct, as the existing best-practice in the
kernel is to always use correct types and avoid casts to let the
compiler check things, avoid nasty surprises, etc, but for callbacks
with context pointers, we're already just hiding the casts in various
places. For example, even without this patch:

typedef void (*common_glue_func_t)(void *ctx, u8 *dst, const u8 *src);

#define GLUE_FUNC_CAST(fn) ((common_glue_func_t)(fn))

static const struct common_glue_ctx camellia_enc = {
...
        .funcs = { {
...
                .num_blocks = 1,
                .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk) }
        } }
};

While it is nicely wrapped up in macros, this is actually casting away
the _entire_ function prototype, not just the first argument. The
"camellia_enc_blk" function could point to anything (even "void
(*func)(void)") and the compiler wouldn't complain. And this was done
just to mask the ctx argument.

> What does "compatible with CFI requirements" mean?

As Joao mentioned, he wrote this up pretty well in his 0/n patch:
https://lkml.kernel.org/r/20180415041542.5364-1-jmoreira@suse.de

To further clarify, fine-grained function-prototype-checking CFI makes
sure that indirect function call sites only call functions with a
matching expected prototype (to protect against having function
pointers rewritten during an attack, etc). In this case, callers of
.fn_u.ecb() expect the target function to have the prototype "void
(*func)(void *ctx, u8 *dst, const u8 *src)" (as defined by struct
common_glue_ctx), however, camellia_enc_blk() is "void (*func)(struct
camellia_ctx *ctx, u8 *dst, const u8 *src)" and will trip the CFI
check.

If we rearrange our macro magic to defining the callbacks rather than
defining the function pointers, I think we'll gain several things:

- we will cast only the ctx argument instead of the entire function prototype
- we'll retain (and improve) source-code robustness against generally miscasting
- CFI will be happy

So, instead of the original series, how about something like this:

Switch function pointers to not use the function cast, and define a
new GLUE_FUNC macro that kinda looks like the tricks used for
syscalls:

static const struct common_glue_ctx camellia_enc = {
...
        .funcs = { {
...
                .num_blocks = 1,
                .fn_u = { .ecb = camellia_enc_blk }
        } }
};

#define GLUE_FUNC(func, context) \
static inline void func(void *ctx, u8 *dst, const u8 *src) \
{ __##func((context)ctx, dst, src); } \
__##func(context ctx, u8 *dst, const u8 *src)

GLUE_FUNC(camellia_enc_blk, struct camellia_ctx *)
{
...original function...
}
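
(For concreteness, GLUE_FUNC(camellia_enc_blk, struct camellia_ctx *) would
expand to roughly:

static inline void camellia_enc_blk(void *ctx, u8 *dst, const u8 *src)
{ __camellia_enc_blk((struct camellia_ctx *)ctx, dst, src); }
__camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst, const u8 *src)
{
...original function...
}

modulo the return type, which the sketch above leaves out of the
__-prefixed definition.)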


-Kees

-- 
Kees Cook
Pixel Security

* Re: [PATCH 1/4] x86/crypto: camellia: Fix function prototypes
  2018-04-18 16:07     ` Kees Cook
@ 2018-04-19 10:35       ` Thomas Gleixner
  0 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2018-04-19 10:35 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ingo Molnar, Joao Moreira, Kernel Hardening, LKML, X86 ML,
	Herbert Xu, David S. Miller, Ingo Molnar, H. Peter Anvin,
	Greg KH

On Wed, 18 Apr 2018, Kees Cook wrote:
> So, instead of the original series, how about something like this:
> 
> Switch function pointers to not use the function cast, and define a
> new GLUE_FUNC macro that kinda looks like the tricks used for
> syscalls:
> 
> static const struct common_glue_ctx camellia_enc = {
> ...
>         .funcs = { {
> ...
>                 .num_blocks = 1,
>                 .fn_u = { .ecb = camellia_enc_blk }
>         } }
> };
> 
> #define GLUE_FUNC(func, context) \
> static inline void func(void *ctx, u8 *dst, const u8 *src) \
> { __##func((context)ctx, dst, src); } \
> __##func(context ctx, u8 *dst, const u8 *src)
> 
> GLUE_FUNC(camellia_enc_blk, struct camellia_ctx *)
> {
> ...original function...
> }

I was about to suggest this before I read to the end of the mail. Yes, that
is much more palatable.

Thanks,

	tglx
