From: Eric Biggers <ebiggers@kernel.org>
To: Iuliana Prodan <iuliana.prodan@nxp.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>,
	"David S. Miller" <davem@davemloft.net>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Horia Geanta <horia.geanta@nxp.com>,
	Sascha Hauer <s.hauer@pengutronix.de>,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-imx <linux-imx@nxp.com>
Subject: Re: [PATCH] crypto: gcm - fix cacheline sharing
Date: Wed, 29 May 2019 13:27:28 -0700
Message-ID: <20190529202728.GA35103@gmail.com>
In-Reply-To: <1559149856-7938-1-git-send-email-iuliana.prodan@nxp.com>

On Wed, May 29, 2019 at 08:10:56PM +0300, Iuliana Prodan wrote:
> The generic GCM driver should ensure that whatever it passes into
> scatterlists is safe for non-cache coherent DMA.
> The issue was seen while running GCM on CAAM driver. But, since CAAM
> does not support GHASH on i.MX6, only CTR skcipher part of the GCM is
> offloaded.
> The skcipher request received by CAAM has req->src pointing to
> auth_tag[16] and req->iv pointing to iv[16]. The problem is that when
> the IV is updated (the crypto API requires skcipher implementations to
> update the IV with the last ciphertext block), it is written to iv[16],
> which is on the same cacheline as the auth_tag[16] that was previously
> DMA mapped.
> The solution is to replace the auth_tag buffer with a separately
> allocated, cacheline-aligned buffer for encryption/decryption, and to
> free it on completion.
> 
> Link: https://lore.kernel.org/linux-crypto/20190208114459.5nixe76xmmkhur75@gondor.apana.org.au/
> Cc: <stable@vger.kernel.org> # v4.19+
> Fixes: adcbc688fe2f ("crypto: gcm - Convert to new AEAD interface")
> Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
> 
> ---
> I've checked the reproducibility of this issue starting with 4.19.y.
> ---
>  crypto/gcm.c         | 26 +++++++++++++++++---------
>  include/crypto/gcm.h |  1 +
>  2 files changed, 18 insertions(+), 9 deletions(-)
> 
> diff --git a/crypto/gcm.c b/crypto/gcm.c
> index 33f45a9..53e3ce5 100644
> --- a/crypto/gcm.c
> +++ b/crypto/gcm.c
> @@ -66,7 +66,7 @@ struct crypto_gcm_ghash_ctx {
>  
>  struct crypto_gcm_req_priv_ctx {
>  	u8 iv[16];
> -	u8 auth_tag[16];
> +	u8 *auth_tag;
>  	u8 iauth_tag[16];
>  	struct scatterlist src[3];
>  	struct scatterlist dst[3];
> @@ -177,19 +177,23 @@ static void crypto_gcm_init_common(struct aead_request *req)
>  	__be32 counter = cpu_to_be32(1);
>  	struct scatterlist *sg;
>  
> -	memset(pctx->auth_tag, 0, sizeof(pctx->auth_tag));
> +	/*
> +	 * kzalloc alignment is at least the cache line size
> +	 * for non-cache coherent architectures.
> +	 */
> +	pctx->auth_tag = kzalloc(GCM_MAX_AUTH_SIZE, GFP_KERNEL);
>  	memcpy(pctx->iv, req->iv, GCM_AES_IV_SIZE);
>  	memcpy(pctx->iv + GCM_AES_IV_SIZE, &counter, 4);
>  
>  	sg_init_table(pctx->src, 3);
> -	sg_set_buf(pctx->src, pctx->auth_tag, sizeof(pctx->auth_tag));
> +	sg_set_buf(pctx->src, pctx->auth_tag, GCM_MAX_AUTH_SIZE);
>  	sg = scatterwalk_ffwd(pctx->src + 1, req->src, req->assoclen);
>  	if (sg != pctx->src + 1)
>  		sg_chain(pctx->src, 2, sg);
>  
>  	if (req->src != req->dst) {
>  		sg_init_table(pctx->dst, 3);
> -		sg_set_buf(pctx->dst, pctx->auth_tag, sizeof(pctx->auth_tag));
> +		sg_set_buf(pctx->dst, pctx->auth_tag, GCM_MAX_AUTH_SIZE);
>  		sg = scatterwalk_ffwd(pctx->dst + 1, req->dst, req->assoclen);
>  		if (sg != pctx->dst + 1)
>  			sg_chain(pctx->dst, 2, sg);
> @@ -208,9 +212,8 @@ static void crypto_gcm_init_crypt(struct aead_request *req,
>  	dst = req->src == req->dst ? pctx->src : pctx->dst;
>  
>  	skcipher_request_set_tfm(skreq, ctx->ctr);
> -	skcipher_request_set_crypt(skreq, pctx->src, dst,
> -				     cryptlen + sizeof(pctx->auth_tag),
> -				     pctx->iv);
> +	skcipher_request_set_crypt(skreq, pctx->src, dst, cryptlen +
> +				   GCM_MAX_AUTH_SIZE, pctx->iv);
>  }
>  
>  static inline unsigned int gcm_remain(unsigned int len)
> @@ -440,6 +443,7 @@ static int gcm_enc_copy_hash(struct aead_request *req, u32 flags)
>  	scatterwalk_map_and_copy(auth_tag, req->dst,
>  				 req->assoclen + req->cryptlen,
>  				 crypto_aead_authsize(aead), 1);
> +	kfree(auth_tag);
>  	return 0;
>  }
>  
> @@ -492,11 +496,15 @@ static int crypto_gcm_verify(struct aead_request *req)
>  	u8 *iauth_tag = pctx->iauth_tag;
>  	unsigned int authsize = crypto_aead_authsize(aead);
>  	unsigned int cryptlen = req->cryptlen - authsize;
> +	int err;
>  
>  	crypto_xor(auth_tag, iauth_tag, 16);
>  	scatterwalk_map_and_copy(iauth_tag, req->src,
>  				 req->assoclen + cryptlen, authsize, 0);
> -	return crypto_memneq(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
> +	err = crypto_memneq(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
> +	kfree(auth_tag);
> +
> +	return err;
>  }
>  
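To restate the failure mode with the layout from the patch (assuming a
non-cache-coherent architecture and, say, 64-byte cachelines):

	struct crypto_gcm_req_priv_ctx {
		u8 iv[16];		/* CPU writes the output IV here */
		u8 auth_tag[16];	/* DMA mapped via pctx->src[0] */
		u8 iauth_tag[16];
		struct scatterlist src[3];
		struct scatterlist dst[3];
		/* ... remaining members elided ... */
	};

iv[] and auth_tag[] land in the same cacheline, so the CPU's IV update
dirties a line the device owns, and the eventual writeback clobbers the
tag the device wrote via DMA.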

So what about the other places that also pass an IV located next to the data,
like crypto/ccm.c and crypto/adiantum.c?  If we're actually going to make this a
new API requirement, then we need to add a debugging option that makes the API
detect this violation so that the other places can be fixed too.
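
As a very rough sketch of what such a check might look like (the config
option name and the hook point are hypothetical, purely to illustrate
the idea):

	#ifdef CONFIG_CRYPTO_DEBUG_DMA_SAFE_IV	/* hypothetical option */
	/*
	 * Warn if the IV shares a cacheline with the start of the data
	 * scatterlist -- exactly the layout that broke here, where
	 * iv[16] and auth_tag[16] are adjacent in the request context.
	 */
	static void crypto_debug_check_iv(struct skcipher_request *req)
	{
		unsigned long iv  = (unsigned long)req->iv;
		unsigned long buf = (unsigned long)sg_virt(req->src);

		WARN_ONCE(iv / L1_CACHE_BYTES == buf / L1_CACHE_BYTES,
			  "IV shares a cacheline with DMA-able data\n");
	}
	#endif

This wouldn't catch every violation (the IV could collide with a later
scatterlist entry), but it would have flagged this one.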

Also, doing a kmalloc() per request is inefficient and very error-prone.  In
fact there are at least 3 bugs here: (1) not checking the return value, (2)
incorrectly using GFP_KERNEL when it may be atomic context, and (3) not always
freeing the memory.  Why not use cacheline-aligned memory within the request
context, so that a separate kmalloc() isn't needed?
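
If a per-request allocation really were kept, at a minimum it would need
to look something like this (sketch only; crypto_gcm_init_common() would
also have to grow an error return):

	gfp_t gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
		    GFP_KERNEL : GFP_ATOMIC;

	pctx->auth_tag = kzalloc(GCM_MAX_AUTH_SIZE, gfp);
	if (!pctx->auth_tag)
		return -ENOMEM;

But keeping the tag in the request context and just giving it its own
cacheline avoids all of that, roughly:

	struct crypto_gcm_req_priv_ctx {
		u8 iv[16];
		u8 iauth_tag[16];
		struct scatterlist src[3];
		struct scatterlist dst[3];
		/* ... */
		/*
		 * Aligned, and placed last, so that no CPU-written field
		 * shares a cacheline with the DMA-mapped tag.
		 */
		u8 auth_tag[16] ____cacheline_aligned;
	};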

Also, did you consider whether there's any way to make the crypto API handle
this automatically, so that all the individual users don't have to?

- Eric
