linux-crypto.vger.kernel.org archive mirror
* crypto: x86/chacha20 - Manually align stack buffer
@ 2017-01-11 12:08 Herbert Xu
  2017-01-11 12:14 ` Ard Biesheuvel
  0 siblings, 1 reply; 4+ messages in thread
From: Herbert Xu @ 2017-01-11 12:08 UTC (permalink / raw)
  To: Ard Biesheuvel, Linux Crypto Mailing List

The kernel on x86-64 cannot use the gcc align attribute to align
stack variables to a 16-byte boundary.  This patch reverts to the
old way of aligning them by hand.

Incidentally, the old way was actually broken: it did not allocate
enough space and would silently corrupt the stack.  This patch
fixes that by allocating an extra 8 bytes.

Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c
index 78f75b0..054306d 100644
--- a/arch/x86/crypto/chacha20_glue.c
+++ b/arch/x86/crypto/chacha20_glue.c
@@ -67,10 +67,13 @@ static int chacha20_simd(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm);
-	u32 state[16] __aligned(CHACHA20_STATE_ALIGN);
+	u32 *state, state_buf[16 + 8] __aligned(8);
 	struct skcipher_walk walk;
 	int err;
 
+	BUILD_BUG_ON(CHACHA20_STATE_ALIGN != 16);
+	state = PTR_ALIGN(state_buf + 0, CHACHA20_STATE_ALIGN);
+
 	if (req->cryptlen <= CHACHA20_BLOCK_SIZE || !may_use_simd())
 		return crypto_chacha20_crypt(req);
 
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: crypto: x86/chacha20 - Manually align stack buffer
  2017-01-11 12:08 crypto: x86/chacha20 - Manually align stack buffer Herbert Xu
@ 2017-01-11 12:14 ` Ard Biesheuvel
  2017-01-11 12:28   ` [PATCH v2] " Herbert Xu
  0 siblings, 1 reply; 4+ messages in thread
From: Ard Biesheuvel @ 2017-01-11 12:14 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On 11 January 2017 at 12:08, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> The kernel on x86-64 cannot use the gcc align attribute to align
> stack variables to a 16-byte boundary.  This patch reverts to the
> old way of aligning them by hand.
>
> Incidentally the old way was actually broken in not allocating
> enough space and would silently corrupt the stack.  This patch
> fixes it by allocating an extra 8 bytes.
>

I think the old code was fine, actually:

u32 *state, state_buf[16 + (CHACHA20_STATE_ALIGN / sizeof(u32)) - 1];

ends up allocating 16 + 3 *words* == 64 + 12 bytes, which, given the
guaranteed 4-byte alignment, is sufficient to ensure the pointer can
be 16-byte aligned.

So [16 + 2] should be sufficient here, given the __aligned(8) annotation.

> Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and...")
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
>
> diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c
> index 78f75b0..054306d 100644
> --- a/arch/x86/crypto/chacha20_glue.c
> +++ b/arch/x86/crypto/chacha20_glue.c
> @@ -67,10 +67,13 @@ static int chacha20_simd(struct skcipher_request *req)
>  {
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>         struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm);
> -       u32 state[16] __aligned(CHACHA20_STATE_ALIGN);
> +       u32 *state, state_buf[16 + 8] __aligned(8);
>         struct skcipher_walk walk;
>         int err;
>
> +       BUILD_BUG_ON(CHACHA20_STATE_ALIGN != 16);
> +       state = PTR_ALIGN(state_buf + 0, CHACHA20_STATE_ALIGN);
> +
>         if (req->cryptlen <= CHACHA20_BLOCK_SIZE || !may_use_simd())
>                 return crypto_chacha20_crypt(req);
>
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* [PATCH v2] crypto: x86/chacha20 - Manually align stack buffer
  2017-01-11 12:14 ` Ard Biesheuvel
@ 2017-01-11 12:28   ` Herbert Xu
  2017-01-11 12:31     ` Ard Biesheuvel
  0 siblings, 1 reply; 4+ messages in thread
From: Herbert Xu @ 2017-01-11 12:28 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Linux Crypto Mailing List

On Wed, Jan 11, 2017 at 12:14:24PM +0000, Ard Biesheuvel wrote:
> 
> I think the old code was fine, actually:
> 
> u32 *state, state_buf[16 + (CHACHA20_STATE_ALIGN / sizeof(u32)) - 1];
> 
> ends up allocating 16 + 3 *words* == 64 + 12 bytes, which, given the
> guaranteed 4-byte alignment, is sufficient to ensure the pointer can
> be 16-byte aligned.

Ah yes you're right, it's a u32.

> So [16 + 2] should be sufficient here, given the __aligned(8) annotation.

Here's an updated version.

---8<---
The kernel on x86-64 cannot use the gcc align attribute to align
stack variables to a 16-byte boundary.  This patch reverts to the
old way of aligning them by hand.

Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c
index 78f75b0..1e6af1b 100644
--- a/arch/x86/crypto/chacha20_glue.c
+++ b/arch/x86/crypto/chacha20_glue.c
@@ -67,10 +67,13 @@ static int chacha20_simd(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm);
-	u32 state[16] __aligned(CHACHA20_STATE_ALIGN);
+	u32 *state, state_buf[16 + 2] __aligned(8);
 	struct skcipher_walk walk;
 	int err;
 
+	BUILD_BUG_ON(CHACHA20_STATE_ALIGN != 16);
+	state = PTR_ALIGN(state_buf + 0, CHACHA20_STATE_ALIGN);
+
 	if (req->cryptlen <= CHACHA20_BLOCK_SIZE || !may_use_simd())
 		return crypto_chacha20_crypt(req);
 
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH v2] crypto: x86/chacha20 - Manually align stack buffer
  2017-01-11 12:28   ` [PATCH v2] " Herbert Xu
@ 2017-01-11 12:31     ` Ard Biesheuvel
  0 siblings, 0 replies; 4+ messages in thread
From: Ard Biesheuvel @ 2017-01-11 12:31 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On 11 January 2017 at 12:28, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Wed, Jan 11, 2017 at 12:14:24PM +0000, Ard Biesheuvel wrote:
>>
>> I think the old code was fine, actually:
>>
>> u32 *state, state_buf[16 + (CHACHA20_STATE_ALIGN / sizeof(u32)) - 1];
>>
>> ends up allocating 16 + 3 *words* == 64 + 12 bytes, which, given the
>> guaranteed 4-byte alignment, is sufficient to ensure the pointer can
>> be 16-byte aligned.
>
> Ah yes you're right, it's a u32.
>
>> So [16 + 2] should be sufficient here, given the __aligned(8) annotation.
>
> Here's an updated version.
>
> ---8<---
> The kernel on x86-64 cannot use the gcc align attribute to align
> stack variables to a 16-byte boundary.  This patch reverts to the
> old way of aligning them by hand.
>
> Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and...")
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
>
> diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c
> index 78f75b0..1e6af1b 100644
> --- a/arch/x86/crypto/chacha20_glue.c
> +++ b/arch/x86/crypto/chacha20_glue.c
> @@ -67,10 +67,13 @@ static int chacha20_simd(struct skcipher_request *req)
>  {
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>         struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm);
> -       u32 state[16] __aligned(CHACHA20_STATE_ALIGN);
> +       u32 *state, state_buf[16 + 2] __aligned(8);
>         struct skcipher_walk walk;
>         int err;
>
> +       BUILD_BUG_ON(CHACHA20_STATE_ALIGN != 16);
> +       state = PTR_ALIGN(state_buf + 0, CHACHA20_STATE_ALIGN);
> +
>         if (req->cryptlen <= CHACHA20_BLOCK_SIZE || !may_use_simd())
>                 return crypto_chacha20_crypt(req);
>

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

