* [PATCH] ARM: crypto: fix function cast warnings
@ 2024-02-13 10:13 Arnd Bergmann
From: Arnd Bergmann @ 2024-02-13 10:13 UTC (permalink / raw)
To: Herbert Xu, David S. Miller, Russell King, Ard Biesheuvel
Cc: Arnd Bergmann, Nathan Chancellor, Nick Desaulniers,
Bill Wendling, Justin Stitt, Jussi Kivilinna, linux-crypto,
linux-arm-kernel, linux-kernel, llvm
From: Arnd Bergmann <arnd@arndb.de>
clang-16 warns about casting between incompatible function types:
arch/arm/crypto/sha256_glue.c:37:5: error: cast from 'void (*)(u32 *, const void *, unsigned int)' (aka 'void (*)(unsigned int *, const void *, unsigned int)') to 'sha256_block_fn *' (aka 'void (*)(struct sha256_state *, const unsigned char *, int)') converts to incompatible function type [-Werror,-Wcast-function-type-strict]
37 | (sha256_block_fn *)sha256_block_data_order);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/arm/crypto/sha512-glue.c:34:3: error: cast from 'void (*)(u64 *, const u8 *, int)' (aka 'void (*)(unsigned long long *, const unsigned char *, int)') to 'sha512_block_fn *' (aka 'void (*)(struct sha512_state *, const unsigned char *, int)') converts to incompatible function type [-Werror,-Wcast-function-type-strict]
34 | (sha512_block_fn *)sha512_block_data_order);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rework the sha256/sha512 code to instead go through a trivial helper
function to preserve the calling conventions.
Fixes: c80ae7ca3726 ("crypto: arm/sha512 - accelerated SHA-512 using ARM generic ASM and NEON")
Fixes: b59e2ae3690c ("crypto: arm/sha256 - move SHA-224/256 ASM/NEON implementation to base layer")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
arch/arm/crypto/sha256_glue.c | 18 ++++++++++--------
arch/arm/crypto/sha512-glue.c | 11 ++++++++---
2 files changed, 18 insertions(+), 11 deletions(-)
diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c
index 433ee4ddce6c..d80448d96ab3 100644
--- a/arch/arm/crypto/sha256_glue.c
+++ b/arch/arm/crypto/sha256_glue.c
@@ -27,29 +27,31 @@
asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
unsigned int num_blks);
-int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
- unsigned int len)
+static void sha256_block_data_order_wrapper(struct sha256_state *sst, u8 const *src, int blocks)
{
/* make sure casting to sha256_block_fn() is safe */
BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
- return sha256_base_do_update(desc, data, len,
- (sha256_block_fn *)sha256_block_data_order);
+ return sha256_block_data_order((u32 *)sst, src, blocks);
+}
+
+int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
+{
+ return sha256_base_do_update(desc, data, len, sha256_block_data_order_wrapper);
}
EXPORT_SYMBOL(crypto_sha256_arm_update);
static int crypto_sha256_arm_final(struct shash_desc *desc, u8 *out)
{
- sha256_base_do_finalize(desc,
- (sha256_block_fn *)sha256_block_data_order);
+ sha256_base_do_finalize(desc, sha256_block_data_order_wrapper);
return sha256_base_finish(desc, out);
}
int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
- sha256_base_do_update(desc, data, len,
- (sha256_block_fn *)sha256_block_data_order);
+ sha256_base_do_update(desc, data, len, sha256_block_data_order_wrapper);
return crypto_sha256_arm_final(desc, out);
}
EXPORT_SYMBOL(crypto_sha256_arm_finup);
diff --git a/arch/arm/crypto/sha512-glue.c b/arch/arm/crypto/sha512-glue.c
index 0635a65aa488..1b2c9c0c8a5f 100644
--- a/arch/arm/crypto/sha512-glue.c
+++ b/arch/arm/crypto/sha512-glue.c
@@ -27,17 +27,22 @@ MODULE_ALIAS_CRYPTO("sha512-arm");
asmlinkage void sha512_block_data_order(u64 *state, u8 const *src, int blocks);
+static void sha512_block_data_order_wrapper(struct sha512_state *sst, u8 const *src, int blocks)
+{
+ return sha512_block_data_order((u64 *)sst, src, blocks);
+}
+
int sha512_arm_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update(desc, data, len,
- (sha512_block_fn *)sha512_block_data_order);
+ sha512_block_data_order_wrapper);
}
static int sha512_arm_final(struct shash_desc *desc, u8 *out)
{
sha512_base_do_finalize(desc,
- (sha512_block_fn *)sha512_block_data_order);
+ sha512_block_data_order_wrapper);
return sha512_base_finish(desc, out);
}
@@ -45,7 +50,7 @@ int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha512_base_do_update(desc, data, len,
- (sha512_block_fn *)sha512_block_data_order);
+ sha512_block_data_order_wrapper);
return sha512_arm_final(desc, out);
}
--
2.39.2
* Re: [PATCH] ARM: crypto: fix function cast warnings
From: Herbert Xu @ 2024-02-13 12:09 UTC (permalink / raw)
To: Arnd Bergmann
Cc: David S. Miller, Russell King, Ard Biesheuvel, Arnd Bergmann,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
Jussi Kivilinna, linux-crypto, linux-arm-kernel, linux-kernel,
llvm
On Tue, Feb 13, 2024 at 11:13:44AM +0100, Arnd Bergmann wrote:
>
> Rework the sha256/sha512 code to instead go through a trivial helper
> function to preserve the calling conventions.
Why not just change the assembly function prototype?
Thanks,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH] ARM: crypto: fix function cast warnings
From: Arnd Bergmann @ 2024-02-13 13:50 UTC (permalink / raw)
To: Herbert Xu, Arnd Bergmann
Cc: David S. Miller, Russell King, Ard Biesheuvel,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
Jussi Kivilinna, linux-crypto, linux-arm-kernel, linux-kernel,
llvm
On Tue, Feb 13, 2024, at 13:09, Herbert Xu wrote:
> On Tue, Feb 13, 2024 at 11:13:44AM +0100, Arnd Bergmann wrote:
>>
>> Rework the sha256/sha512 code to instead go through a trivial helper
>> function to preserve the calling conventions.
>
> Why not just change the assembly function prototype?
Good idea, sent v2 now.
Arnd