Subject: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption
From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Date: 2023-01-18 14:19 UTC
To: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
    linux-crypto, linux-arm-kernel, linux-kernel
Cc: Tianjia Zhang

When the total cryption length is zero, GCM cryption's call to
skcipher_walk_done() will cause an unexpected crash, so skip calling
this function when the GCM cryption length is equal to zero.

Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
index c450a2025ca9..9b63bcf9aa85 100644
--- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
 
 		kernel_neon_end();
 
-		err = skcipher_walk_done(walk, tail);
-		if (err)
-			return err;
-		if (walk->nbytes)
-			kernel_neon_begin();
+		if (walk->nbytes) {
+			err = skcipher_walk_done(walk, tail);
+			if (err)
+				return err;
+			if (walk->nbytes)
+				kernel_neon_begin();
+		}
 	} while (walk->nbytes > 0);
 
 	return 0;
-- 
2.24.3 (Apple Git-128)
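To make the control-flow change concrete, here is a small self-contained toy model of the walk loop. The struct and functions below are hypothetical simplified stand-ins, not the real kernel `skcipher_walk` API: finalization is only valid once the walker has actually produced data, and the patch's `if (walk->nbytes)` guard keeps the zero-length case away from it.

```c
#include <assert.h>

/* Toy stand-in for struct skcipher_walk; NOT the real kernel API. */
struct toy_walk {
	unsigned int nbytes;   /* bytes available in the current step */
	unsigned int total;    /* bytes remaining overall */
	int started;           /* set once the walker produced a step */
};

static void toy_walk_start(struct toy_walk *w, unsigned int total)
{
	w->total = total;
	w->nbytes = total > 16 ? 16 : total;
	w->started = w->nbytes > 0;
}

/* Models skcipher_walk_done(): only valid on a walk that produced data. */
static int toy_walk_done(struct toy_walk *w)
{
	assert(w->started);   /* calling this on an empty walk is the bug */
	w->total -= w->nbytes;
	w->nbytes = w->total > 16 ? 16 : w->total;
	return 0;
}

/* The patched loop shape: the if-guard skips finalization when the
 * walker gave us nothing (i.e. a zero-length request). */
static int toy_gcm_crypt(struct toy_walk *w)
{
	do {
		/* ... cryption of w->nbytes bytes would happen here ... */
		if (w->nbytes) {
			if (toy_walk_done(w))
				return -1;
		}
	} while (w->nbytes > 0);
	return 0;
}
```

With a zero-byte request the guard is never taken and the loop exits cleanly; with a 48-byte request the walker is drained in 16-byte steps.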
Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: 2023-01-18 14:54 UTC
To: Tianjia Zhang
Cc: David S. Miller, Catalin Marinas, Will Deacon, linux-crypto,
    linux-arm-kernel, linux-kernel, Ard Biesheuvel

On Wed, Jan 18, 2023 at 10:19:28PM +0800, Tianjia Zhang wrote:
> When the total cryption length is zero, GCM cryption's call to
> skcipher_walk_done() will cause an unexpected crash, so skip calling
> this function when the GCM cryption length is equal to zero.
>
> Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> index c450a2025ca9..9b63bcf9aa85 100644
> --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
> +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> @@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
>
>  		kernel_neon_end();
>
> -		err = skcipher_walk_done(walk, tail);
> -		if (err)
> -			return err;
> -		if (walk->nbytes)
> -			kernel_neon_begin();
> +		if (walk->nbytes) {

Please do

	if (!walk->nbytes)
		break;

As an additional improvement, the tail calculation can be removed
entirely: you already set the chunksize, so the walker should only be
feeding you multiples of chunksize except at the end.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption
From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Date: 2023-01-30 7:34 UTC
To: Herbert Xu
Cc: David S. Miller, Catalin Marinas, Will Deacon, linux-crypto,
    linux-arm-kernel, linux-kernel, Ard Biesheuvel

Hi Herbert,

On 1/18/23 10:54 PM, Herbert Xu wrote:
> On Wed, Jan 18, 2023 at 10:19:28PM +0800, Tianjia Zhang wrote:
>> When the total cryption length is zero, GCM cryption's call to
>> skcipher_walk_done() will cause an unexpected crash, so skip calling
>> this function when the GCM cryption length is equal to zero.
>>
>> Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
>> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
>> ---
>>  arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
>>  1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> index c450a2025ca9..9b63bcf9aa85 100644
>> --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> @@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
>>
>>  		kernel_neon_end();
>>
>> -		err = skcipher_walk_done(walk, tail);
>> -		if (err)
>> -			return err;
>> -		if (walk->nbytes)
>> -			kernel_neon_begin();
>> +		if (walk->nbytes) {
>
> Please do
>
> 	if (!walk->nbytes)
> 		break;

Thanks for the suggestion, a new patch has been sent.

> As an additional improvement, the tail calculation can be removed
> entirely: you already set the chunksize, so the walker should only be
> feeding you multiples of chunksize except at the end.
>
> Cheers

I printed walk->nbytes in each iteration of the walker: with the
algorithm test manager turned on, it is not always a multiple of
chunksize except at the end. For example, during one GCM encryption I
get data like this:

	total = 4014, nbytes = 2078, tail = 14
	total = 1950, nbytes =   16, tail =  0
	total = 1934, nbytes =  311, tail =  7
	total = 1630, nbytes =   16, tail =  0
	total = 1614, nbytes =   16, tail =  0
	total = 1598, nbytes = 1598, tail = 14

Is my understanding wrong?

Best regards,
Tianjia
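The printed tail values are consistent with computing the tail as the remainder modulo the 16-byte SM4 block size, which is presumably what the glue code does; a minimal sketch under that assumption:

```c
#define SM4_BLOCK_SIZE 16

/* Partial trailing block a walker step would hold back; a sketch of
 * the tail computation, assuming tail = nbytes % SM4_BLOCK_SIZE. */
static unsigned int gcm_tail(unsigned int nbytes)
{
	return nbytes % SM4_BLOCK_SIZE;
}
```

For the values above: 2078 % 16 = 14, 311 % 16 = 7, 1598 % 16 = 14, matching the printed tails.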
Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: 2023-01-30 8:15 UTC
To: Tianjia Zhang
Cc: David S. Miller, Catalin Marinas, Will Deacon, linux-crypto,
    linux-arm-kernel, linux-kernel, Ard Biesheuvel

On Mon, Jan 30, 2023 at 03:34:42PM +0800, Tianjia Zhang wrote:
>
> I printed walk->nbytes in each iteration of the walker: with the
> algorithm test manager turned on, it is not always a multiple of
> chunksize except at the end.

Sorry, I was mistaken. We only guarantee that a minimum of chunksize
bytes is given to you until the very end, not that it is exactly a
multiple of chunksize.

While you still need to compute tail, you could get rid of the else-if
check, as walk->nbytes - tail cannot be zero (we must provide you with
at least one chunk before the end):

	if (walk->nbytes == walk->total) {
		tail = 0;
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes, ghash,
				       ctx->ghash_table,
				       (const u8 *)&lengths);
	} else {
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes - tail, ghash,
				       ctx->ghash_table, NULL);
	}

In fact we could rewrite it like this:

	unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
	unsigned int nbytes = walk->nbytes - tail;
	const u8 *src = walk->src.virt.addr;
	u8 *dst = walk->dst.virt.addr;
	u8 *lp = NULL;

	if (walk->nbytes == walk->total) {
		nbytes = walk->nbytes;
		tail = 0;
		lp = (u8 *)&lengths;
	}

	sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
			       nbytes, ghash, ctx->ghash_table, lp);

The second part of that loop could also be rewritten as:

		kernel_neon_end();

		err = skcipher_walk_done(walk, tail);
		if (!walk->nbytes)
			return err;

		kernel_neon_begin();
	} while (1);

Actually, I think there is a serious bug here. If you're doing an
empty message, you must not call skcipher_walk_done as that may then
free random uninitialised stack memory.

Did you copy this code from somewhere else? If so, wherever you got it
from needs to be fixed too.

The loop should look like this:

	if (!walk->nbytes) {
		/* iv may be unaligned as the walker didn't run at all. */
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, NULL, NULL, iv, 0,
				       ghash, ctx->ghash_table,
				       (u8 *)&lengths);
		kernel_neon_end();
		return 0;
	}

	do {
		...
	}

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
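The per-step argument selection in the suggested rewrite can be modeled in isolation. The struct and the `select_args` helper below are illustrative stand-ins, not kernel code: on the final step everything is processed and the lengths block is passed; otherwise the partial tail is held back and NULL is passed.

```c
#include <stddef.h>

#define SM4_BLOCK_SIZE 16

/* Hypothetical simplified view of one walker step; NOT the kernel API. */
struct step {
	unsigned int nbytes;  /* bytes in this step */
	unsigned int total;   /* bytes remaining, including this step */
};

/* Pick per-step arguments the way the suggested rewrite does. */
static void select_args(const struct step *s, const unsigned char *lengths,
			unsigned int *nbytes, unsigned int *tail,
			const unsigned char **lp)
{
	*tail = s->nbytes % SM4_BLOCK_SIZE;
	*nbytes = s->nbytes - *tail;
	*lp = NULL;

	if (s->nbytes == s->total) {  /* final step: take everything */
		*nbytes = s->nbytes;
		*tail = 0;
		*lp = lengths;
	}
}
```

Fed the values from the earlier trace, a middle step (nbytes = 2078, total = 4014) holds back a 14-byte tail and passes NULL, while the final step (nbytes = total = 1598) takes all bytes and passes the lengths block.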
Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: 2023-01-30 9:01 UTC
To: Tianjia Zhang
Cc: David S. Miller, Catalin Marinas, Will Deacon, linux-crypto,
    linux-arm-kernel, linux-kernel, Ard Biesheuvel

On Mon, Jan 30, 2023 at 04:15:33PM +0800, Herbert Xu wrote:
>
> Actually, I think there is a serious bug here. If you're doing an
> empty message, you must not call skcipher_walk_done as that may then
> free random uninitialised stack memory.

Hah, I had forgotten that this thread started with your patch to fix
this exact bug :)

Could you confirm that you did copy this from ccm?

It would be nice if you could rewrite your loop in a form similar to
my patch to ccm.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption
From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Date: 2023-01-31 9:39 UTC
To: Herbert Xu
Cc: David S. Miller, Catalin Marinas, Will Deacon, linux-crypto,
    linux-arm-kernel, linux-kernel, Ard Biesheuvel

Hi Herbert,

On 1/30/23 5:01 PM, Herbert Xu wrote:
> On Mon, Jan 30, 2023 at 04:15:33PM +0800, Herbert Xu wrote:
>>
>> Actually, I think there is a serious bug here. If you're doing an
>> empty message, you must not call skcipher_walk_done as that may then
>> free random uninitialised stack memory.
>
> Hah, I had forgotten that this thread started with your patch to fix
> this exact bug :)
>
> Could you confirm that you did copy this from ccm?
>
> It would be nice if you could rewrite your loop in a form similar to
> my patch to ccm.
>
> Thanks,

This code was copied from both gcm and ccm at the same time; I am not
sure which contributed more. I will rewrite the gcm and ccm encryption
loops of sm4 as soon as possible.

Cheers,
Tianjia
Subject: [PATCH v2] crypto: arm64/sm4 - Fix possible crash in GCM cryption
From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Date: 2023-01-30 7:35 UTC
To: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
    linux-crypto, linux-arm-kernel, linux-kernel
Cc: Tianjia Zhang

When the total cryption length is zero, GCM cryption's call to
skcipher_walk_done() will cause an unexpected crash, so skip calling
this function when the GCM cryption length is equal to zero.

Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/arm64/crypto/sm4-ce-gcm-glue.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
index c450a2025ca9..29aa7470281d 100644
--- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -178,6 +178,9 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
 
 		kernel_neon_end();
 
+		if (unlikely(!walk->nbytes))
+			break;
+
 		err = skcipher_walk_done(walk, tail);
 		if (err)
 			return err;
-- 
2.24.3 (Apple Git-128)
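The v2 guard can be exercised with the same kind of toy walker as before (hypothetical stand-ins, not kernel code): a zero-length request must never reach the finalization call, while a normal request finalizes once per step.

```c
#include <assert.h>

/* Toy stand-in for struct skcipher_walk; NOT the real kernel API. */
struct toy_walk {
	unsigned int nbytes;
	unsigned int total;
};

static int done_calls;  /* counts finalization calls, for checking below */

static void toy_walk_start(struct toy_walk *w, unsigned int total)
{
	w->total = total;
	w->nbytes = total > 16 ? 16 : total;
}

static int toy_walk_done(struct toy_walk *w)
{
	done_calls++;
	w->total -= w->nbytes;
	w->nbytes = w->total > 16 ? 16 : w->total;
	return 0;
}

/* v2 loop shape: bail out before finalizing when the walker gave us
 * nothing, mirroring `if (unlikely(!walk->nbytes)) break;`. */
static int toy_gcm_crypt(struct toy_walk *w)
{
	do {
		/* ... cryption of w->nbytes bytes would happen here ... */
		if (!w->nbytes)
			break;
		if (toy_walk_done(w))
			return -1;
	} while (w->nbytes > 0);
	return 0;
}
```

A zero-byte request triggers no finalization at all; a 48-byte request is drained in three 16-byte steps, finalizing after each.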