* [PATCH 1/2] arm64 aes: fix encryption of unaligned data
@ 2014-07-25 23:40 Mikulas Patocka
  2014-07-26 13:13 ` Ard Biesheuvel
  2014-07-28 14:02 ` Herbert Xu
  0 siblings, 2 replies; 6+ messages in thread
From: Mikulas Patocka @ 2014-07-25 23:40 UTC (permalink / raw)
  To: Ard Biesheuvel, Herbert Xu, David S. Miller
  Cc: Andy Polyakov, Nicolas Pitre, Alasdair G. Kergon, linux-crypto, dm-devel

cryptsetup fails on arm64 when using kernel encryption via an AF_ALG
socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937

The bug is caused by incorrect handling of unaligned data in
arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
to 8 bytes, but not to 16 bytes. It opens an AF_ALG socket and uses the
socket to encrypt data in the buffer. The arm64 crypto driver then causes
data corruption or crashes in scatterwalk_pagedone.

This patch fixes the bug by passing the number of residue bytes that were
not processed as the last parameter to blkcipher_walk_done.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
===================================================================
--- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm64/crypto/aes-glue.c
+++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
@@ -106,7 +106,7 @@ static int ecb_encrypt(struct blkcipher_
 	for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
 		aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				(u8 *)ctx->key_enc, rounds, blocks, first);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
 	kernel_neon_end();
 	return err;
@@ -128,7 +128,7 @@ static int ecb_decrypt(struct blkcipher_
 	for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
 		aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				(u8 *)ctx->key_dec, rounds, blocks, first);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
 	kernel_neon_end();
 	return err;
@@ -151,7 +151,7 @@ static int cbc_encrypt(struct blkcipher_
 		aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				(u8 *)ctx->key_enc, rounds, blocks, walk.iv,
 				first);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
 	kernel_neon_end();
 	return err;
@@ -174,7 +174,7 @@ static int cbc_decrypt(struct blkcipher_
 		aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				(u8 *)ctx->key_dec, rounds, blocks, walk.iv,
 				first);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
 	kernel_neon_end();
 	return err;
@@ -243,7 +243,7 @@ static int xts_encrypt(struct blkcipher_
 		aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				(u8 *)ctx->key1.key_enc, rounds, blocks,
 				(u8 *)ctx->key2.key_enc, walk.iv, first);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
 	kernel_neon_end();
 
@@ -267,7 +267,7 @@ static int xts_decrypt(struct blkcipher_
 		aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				(u8 *)ctx->key1.key_dec, rounds, blocks,
 				(u8 *)ctx->key2.key_enc, walk.iv, first);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
 	kernel_neon_end();
 


* Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data
  2014-07-25 23:40 [PATCH 1/2] arm64 aes: fix encryption of unaligned data Mikulas Patocka
@ 2014-07-26 13:13 ` Ard Biesheuvel
  2014-07-26 13:19   ` Ard Biesheuvel
  2014-07-28 14:02 ` Herbert Xu
  1 sibling, 1 reply; 6+ messages in thread
From: Ard Biesheuvel @ 2014-07-26 13:13 UTC (permalink / raw)
  To: Mikulas Patocka, Herbert Xu, Catalin Marinas, Russell King - ARM Linux
  Cc: David S. Miller, Nicolas Pitre, Alasdair G. Kergon, linux-crypto,
	dm-devel

On 26 July 2014 01:40, Mikulas Patocka <mpatocka@redhat.com> wrote:
> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG
> socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>
> The bug is caused by incorrect handling of unaligned data in
> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
> to 8 bytes, but not to 16 bytes. It opens an AF_ALG socket and uses the
> socket to encrypt data in the buffer. The arm64 crypto driver then causes
> data corruption or crashes in scatterwalk_pagedone.
>
> This patch fixes the bug by passing the number of residue bytes that were
> not processed as the last parameter to blkcipher_walk_done.
>
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
>

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Thanks for the patch. This correctly fixes a thinko on my part
regarding the guarantees offered by the blkcipher API. Unfortunately,
this wasn't caught by the tcrypt test suite, so I will propose some
patches later to address cases like this.

BTW, using kernel crypto via AF_ALG is fairly pointless when the cipher
is implemented with CPU crypto instructions rather than a crypto
accelerator peripheral. Should we change anything on the kernel side so
that we don't expose these drivers?

@Catalin: this is a bug fix for code that was merged this cycle. I
would recommend merging it for 3.16, but if not, could you please
add a Cc: stable? Or ack it so that perhaps Herbert can take both?
(There is a similar patch for ARM as well.)

Regards,
Ard.



> Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
> ===================================================================
> --- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm64/crypto/aes-glue.c
> +++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
> @@ -106,7 +106,7 @@ static int ecb_encrypt(struct blkcipher_
>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>                 aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                 (u8 *)ctx->key_enc, rounds, blocks, first);
> -               err = blkcipher_walk_done(desc, &walk, 0);
> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>         }
>         kernel_neon_end();
>         return err;
> @@ -128,7 +128,7 @@ static int ecb_decrypt(struct blkcipher_
>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>                 aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                 (u8 *)ctx->key_dec, rounds, blocks, first);
> -               err = blkcipher_walk_done(desc, &walk, 0);
> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>         }
>         kernel_neon_end();
>         return err;
> @@ -151,7 +151,7 @@ static int cbc_encrypt(struct blkcipher_
>                 aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
>                                 first);
> -               err = blkcipher_walk_done(desc, &walk, 0);
> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>         }
>         kernel_neon_end();
>         return err;
> @@ -174,7 +174,7 @@ static int cbc_decrypt(struct blkcipher_
>                 aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                 (u8 *)ctx->key_dec, rounds, blocks, walk.iv,
>                                 first);
> -               err = blkcipher_walk_done(desc, &walk, 0);
> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>         }
>         kernel_neon_end();
>         return err;
> @@ -243,7 +243,7 @@ static int xts_encrypt(struct blkcipher_
>                 aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                 (u8 *)ctx->key1.key_enc, rounds, blocks,
>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
> -               err = blkcipher_walk_done(desc, &walk, 0);
> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>         }
>         kernel_neon_end();
>
> @@ -267,7 +267,7 @@ static int xts_decrypt(struct blkcipher_
>                 aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                 (u8 *)ctx->key1.key_dec, rounds, blocks,
>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
> -               err = blkcipher_walk_done(desc, &walk, 0);
> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>         }
>         kernel_neon_end();
>


* Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data
  2014-07-26 13:13 ` Ard Biesheuvel
@ 2014-07-26 13:19   ` Ard Biesheuvel
  2014-07-26 15:31     ` Mikulas Patocka
  0 siblings, 1 reply; 6+ messages in thread
From: Ard Biesheuvel @ 2014-07-26 13:19 UTC (permalink / raw)
  To: Mikulas Patocka, Herbert Xu, Catalin Marinas, Russell King - ARM Linux
  Cc: David S. Miller, Nicolas Pitre, Alasdair G. Kergon, linux-crypto,
	dm-devel

On 26 July 2014 15:13, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 26 July 2014 01:40, Mikulas Patocka <mpatocka@redhat.com> wrote:
>> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG
>> socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>>
>> The bug is caused by incorrect handling of unaligned data in
>> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
>> to 8 bytes, but not to 16 bytes. It opens an AF_ALG socket and uses the
>> socket to encrypt data in the buffer. The arm64 crypto driver then causes
>> data corruption or crashes in scatterwalk_pagedone.
>>
>> This patch fixes the bug by passing the number of residue bytes that were
>> not processed as the last parameter to blkcipher_walk_done.
>>
>> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
>>
>
> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> Thanks for the patch. This correctly fixes a thinko on my part
> regarding the guarantees offered by the blkcipher API. Unfortunately,
> this wasn't caught by the tcrypt test suite, so I will propose some
> patches later to address cases like this.
>
> BTW, using kernel crypto via AF_ALG is fairly pointless when the cipher
> is implemented with CPU crypto instructions rather than a crypto
> accelerator peripheral. Should we change anything on the kernel side so
> that we don't expose these drivers?
>
> @Catalin: this is a bug fix for code that was merged this cycle. I
> would recommend merging it for 3.16, but if not, could you please
> add a Cc: stable? Or ack it so that perhaps Herbert can take both?
> (There is a similar patch for ARM as well.)
>

... except that this patch does not seem to cover the ECB case.
@Mikulas: could you do a v2 that includes ECB encryption/decryption?

Cheers,
Ard.


>> Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
>> ===================================================================
>> --- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm64/crypto/aes-glue.c
>> +++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
>> @@ -106,7 +106,7 @@ static int ecb_encrypt(struct blkcipher_
>>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>>                 aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_enc, rounds, blocks, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -128,7 +128,7 @@ static int ecb_decrypt(struct blkcipher_
>>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>>                 aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_dec, rounds, blocks, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -151,7 +151,7 @@ static int cbc_encrypt(struct blkcipher_
>>                 aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
>>                                 first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -174,7 +174,7 @@ static int cbc_decrypt(struct blkcipher_
>>                 aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_dec, rounds, blocks, walk.iv,
>>                                 first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -243,7 +243,7 @@ static int xts_encrypt(struct blkcipher_
>>                 aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key1.key_enc, rounds, blocks,
>>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>
>> @@ -267,7 +267,7 @@ static int xts_decrypt(struct blkcipher_
>>                 aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key1.key_dec, rounds, blocks,
>>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>


* Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data
  2014-07-26 13:19   ` Ard Biesheuvel
@ 2014-07-26 15:31     ` Mikulas Patocka
  2014-07-26 16:57       ` Ard Biesheuvel
  0 siblings, 1 reply; 6+ messages in thread
From: Mikulas Patocka @ 2014-07-26 15:31 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Herbert Xu, Catalin Marinas, Russell King - ARM Linux,
	David S. Miller, Nicolas Pitre, Alasdair G. Kergon, linux-crypto,
	dm-devel



On Sat, 26 Jul 2014, Ard Biesheuvel wrote:

> On 26 July 2014 15:13, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > On 26 July 2014 01:40, Mikulas Patocka <mpatocka@redhat.com> wrote:
> >> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG
> >> socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
> >>
> >> The bug is caused by incorrect handling of unaligned data in
> >> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
> >> to 8 bytes, but not to 16 bytes. It opens an AF_ALG socket and uses the
> >> socket to encrypt data in the buffer. The arm64 crypto driver then causes
> >> data corruption or crashes in scatterwalk_pagedone.
> >>
> >> This patch fixes the bug by passing the number of residue bytes that were
> >> not processed as the last parameter to blkcipher_walk_done.
> >>
> >> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> >>
> >
> > Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >
> > Thanks for the patch. This correctly fixes a thinko on my part
> > regarding the guarantees offered by the blkcipher API. Unfortunately,
> > this wasn't caught by the tcrypt test suite, so I will propose some
> > patches later to address cases like this.
> >
> > BTW, using kernel crypto via AF_ALG is fairly pointless when the cipher
> > is implemented with CPU crypto instructions rather than a crypto
> > accelerator peripheral. Should we change anything on the kernel side so
> > that we don't expose these drivers?

The userspace library (gcrypt in this case) doesn't support ARM64, so it's 
still better to call the kernel.

> > @Catalin: this is a bug fix for code that was merged this cycle. I
> > would recommend merging it for 3.16, but if not, could you please
> > add a Cc: stable? Or ack it so that perhaps Herbert can take both?
> > (There is a similar patch for ARM as well.)
> >
> 
> ... except that this patch does not seem to cover the ECB case.
> @Mikulas: could you do a v2 that includes ECB encryption/decryption?
> 
> Cheers,
> Ard.

The patch changes ecb_encrypt and ecb_decrypt just like the cbc and xts 
functions. What is the problem with ECB?

Mikulas

> >> Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
> >> ===================================================================
> >> --- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm64/crypto/aes-glue.c
> >> +++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
> >> @@ -106,7 +106,7 @@ static int ecb_encrypt(struct blkcipher_
> >>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
> >>                 aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
> >>                                 (u8 *)ctx->key_enc, rounds, blocks, first);
> >> -               err = blkcipher_walk_done(desc, &walk, 0);
> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
> >>         }
> >>         kernel_neon_end();
> >>         return err;
> >> @@ -128,7 +128,7 @@ static int ecb_decrypt(struct blkcipher_
> >>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
> >>                 aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
> >>                                 (u8 *)ctx->key_dec, rounds, blocks, first);
> >> -               err = blkcipher_walk_done(desc, &walk, 0);
> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
> >>         }
> >>         kernel_neon_end();
> >>         return err;
> >> @@ -151,7 +151,7 @@ static int cbc_encrypt(struct blkcipher_
> >>                 aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
> >>                                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
> >>                                 first);
> >> -               err = blkcipher_walk_done(desc, &walk, 0);
> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
> >>         }
> >>         kernel_neon_end();
> >>         return err;
> >> @@ -174,7 +174,7 @@ static int cbc_decrypt(struct blkcipher_
> >>                 aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
> >>                                 (u8 *)ctx->key_dec, rounds, blocks, walk.iv,
> >>                                 first);
> >> -               err = blkcipher_walk_done(desc, &walk, 0);
> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
> >>         }
> >>         kernel_neon_end();
> >>         return err;
> >> @@ -243,7 +243,7 @@ static int xts_encrypt(struct blkcipher_
> >>                 aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
> >>                                 (u8 *)ctx->key1.key_enc, rounds, blocks,
> >>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
> >> -               err = blkcipher_walk_done(desc, &walk, 0);
> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
> >>         }
> >>         kernel_neon_end();
> >>
> >> @@ -267,7 +267,7 @@ static int xts_decrypt(struct blkcipher_
> >>                 aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
> >>                                 (u8 *)ctx->key1.key_dec, rounds, blocks,
> >>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
> >> -               err = blkcipher_walk_done(desc, &walk, 0);
> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
> >>         }
> >>         kernel_neon_end();
> >>
> 


* Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data
  2014-07-26 15:31     ` Mikulas Patocka
@ 2014-07-26 16:57       ` Ard Biesheuvel
  0 siblings, 0 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2014-07-26 16:57 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Herbert Xu, Catalin Marinas, Russell King - ARM Linux,
	David S. Miller, Nicolas Pitre, Alasdair G. Kergon, linux-crypto,
	dm-devel

On 26 July 2014 17:31, Mikulas Patocka <mpatocka@redhat.com> wrote:
>
>
> On Sat, 26 Jul 2014, Ard Biesheuvel wrote:
>
>> On 26 July 2014 15:13, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> > On 26 July 2014 01:40, Mikulas Patocka <mpatocka@redhat.com> wrote:
>> >> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG
>> >> socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>> >>
>> >> The bug is caused by incorrect handling of unaligned data in
>> >> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
>> >> to 8 bytes, but not to 16 bytes. It opens an AF_ALG socket and uses the
>> >> socket to encrypt data in the buffer. The arm64 crypto driver then causes
>> >> data corruption or crashes in scatterwalk_pagedone.
>> >>
>> >> This patch fixes the bug by passing the number of residue bytes that were
>> >> not processed as the last parameter to blkcipher_walk_done.
>> >>
>> >> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
>> >>
>> >
>> > Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> >
>> > Thanks for the patch. This correctly fixes a thinko on my part
>> > regarding the guarantees offered by the blkcipher API. Unfortunately,
>> > this wasn't caught by the tcrypt test suite, so I will propose some
>> > patches later to address cases like this.
>> >
>> > BTW, using kernel crypto via AF_ALG is fairly pointless when the cipher
>> > is implemented with CPU crypto instructions rather than a crypto
>> > accelerator peripheral. Should we change anything on the kernel side so
>> > that we don't expose these drivers?
>
> The userspace library (gcrypt in this case) doesn't support ARM64, so it's
> still better to call the kernel.
>

OK

>> > @Catalin: this is a bug fix for code that was merged this cycle. I
>> > would recommend merging it for 3.16, but if not, could you please
>> > add a Cc: stable? Or ack it so that perhaps Herbert can take both?
>> > (There is a similar patch for ARM as well.)
>> >
>>
>> ... except that this patch does not seem to cover the ECB case.
>> @Mikulas: could you do a v2 that includes ECB encryption/decryption?
>>
>> Cheers,
>> Ard.
>
> The patch changes ecb_encrypt and ecb_decrypt just like the cbc and xts
> functions. What is the problem with ECB?
>

You are right, I got the patches mixed up (this one and the ARM one).
Both look fine and complete.

-- 
Ard.


>> >> Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
>> >> ===================================================================
>> >> --- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm64/crypto/aes-glue.c
>> >> +++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
>> >> @@ -106,7 +106,7 @@ static int ecb_encrypt(struct blkcipher_
>> >>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>> >>                 aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>> >>                                 (u8 *)ctx->key_enc, rounds, blocks, first);
>> >> -               err = blkcipher_walk_done(desc, &walk, 0);
>> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>> >>         }
>> >>         kernel_neon_end();
>> >>         return err;
>> >> @@ -128,7 +128,7 @@ static int ecb_decrypt(struct blkcipher_
>> >>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>> >>                 aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>> >>                                 (u8 *)ctx->key_dec, rounds, blocks, first);
>> >> -               err = blkcipher_walk_done(desc, &walk, 0);
>> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>> >>         }
>> >>         kernel_neon_end();
>> >>         return err;
>> >> @@ -151,7 +151,7 @@ static int cbc_encrypt(struct blkcipher_
>> >>                 aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>> >>                                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
>> >>                                 first);
>> >> -               err = blkcipher_walk_done(desc, &walk, 0);
>> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>> >>         }
>> >>         kernel_neon_end();
>> >>         return err;
>> >> @@ -174,7 +174,7 @@ static int cbc_decrypt(struct blkcipher_
>> >>                 aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>> >>                                 (u8 *)ctx->key_dec, rounds, blocks, walk.iv,
>> >>                                 first);
>> >> -               err = blkcipher_walk_done(desc, &walk, 0);
>> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>> >>         }
>> >>         kernel_neon_end();
>> >>         return err;
>> >> @@ -243,7 +243,7 @@ static int xts_encrypt(struct blkcipher_
>> >>                 aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>> >>                                 (u8 *)ctx->key1.key_enc, rounds, blocks,
>> >>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
>> >> -               err = blkcipher_walk_done(desc, &walk, 0);
>> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>> >>         }
>> >>         kernel_neon_end();
>> >>
>> >> @@ -267,7 +267,7 @@ static int xts_decrypt(struct blkcipher_
>> >>                 aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>> >>                                 (u8 *)ctx->key1.key_dec, rounds, blocks,
>> >>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
>> >> -               err = blkcipher_walk_done(desc, &walk, 0);
>> >> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>> >>         }
>> >>         kernel_neon_end();
>> >>
>>


* Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data
  2014-07-25 23:40 [PATCH 1/2] arm64 aes: fix encryption of unaligned data Mikulas Patocka
  2014-07-26 13:13 ` Ard Biesheuvel
@ 2014-07-28 14:02 ` Herbert Xu
  1 sibling, 0 replies; 6+ messages in thread
From: Herbert Xu @ 2014-07-28 14:02 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Ard Biesheuvel, David S. Miller, Andy Polyakov, Nicolas Pitre,
	Alasdair G. Kergon, linux-crypto, dm-devel

On Fri, Jul 25, 2014 at 07:40:20PM -0400, Mikulas Patocka wrote:
> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG
> socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>
> The bug is caused by incorrect handling of unaligned data in
> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
> to 8 bytes, but not to 16 bytes. It opens an AF_ALG socket and uses the
> socket to encrypt data in the buffer. The arm64 crypto driver then causes
> data corruption or crashes in scatterwalk_pagedone.
>
> This patch fixes the bug by passing the number of residue bytes that were
> not processed as the last parameter to blkcipher_walk_done.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

Both patches applied to crypto.  Thanks for catching this!
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

