linux-crypto.vger.kernel.org archive mirror
* [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver
@ 2021-01-20 18:48 Thara Gopinath
  2021-01-20 18:48 ` [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import Thara Gopinath
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Thara Gopinath @ 2021-01-20 18:48 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

This patch series is a result of running the kernel crypto fuzz tests (by
enabling CONFIG_CRYPTO_MANAGER_EXTRA_TESTS) on the transformations
currently supported via the Qualcomm crypto engine on sdm845. The first
four patches fix various regressions found during testing. The last two
patches are minor clean-ups that remove an unused struct member and unused
function parameters.

v2->v3:
	- Made the comparison that checks whether any two keys are the
	  same for the triple DES algorithms constant-time, as per
	  Nym Seddon's suggestion.
	- Rebased to 5.11-rc4.
v1->v2:
	- Introduced custom struct qce_sha_saved_state to store and restore
	  partial sha transformation.
	- Rebased to 5.11-rc3.

Thara Gopinath (6):
  drivers: crypto: qce: sha: Restore/save ahash state with custom struct
    in export/import
  drivers: crypto: qce: sha: Hold back a block of data to be transferred
    as part of final
  drivers: crypto: qce: skcipher: Fix regressions found during fuzz
    testing
  drivers: crypto: qce: common: Set data unit size to message length for
    AES XTS transformation
  drivers: crypto: qce: Remover src_tbl from qce_cipher_reqctx
  drivers: crypto: qce: Remove totallen and offset in qce_start

 drivers/crypto/qce/cipher.h   |   1 -
 drivers/crypto/qce/common.c   |  25 +++---
 drivers/crypto/qce/common.h   |   3 +-
 drivers/crypto/qce/sha.c      | 143 +++++++++++++---------------------
 drivers/crypto/qce/skcipher.c |  70 ++++++++++++++---
 5 files changed, 127 insertions(+), 115 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import
  2021-01-20 18:48 [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver Thara Gopinath
@ 2021-01-20 18:48 ` Thara Gopinath
  2021-01-25 16:07   ` Bjorn Andersson
  2021-02-02 23:49   ` kernel test robot
  2021-01-20 18:48 ` [PATCH v3 2/6] drivers: crypto: qce: sha: Hold back a block of data to be transferred as part of final Thara Gopinath
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 14+ messages in thread
From: Thara Gopinath @ 2021-01-20 18:48 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

Export and import interfaces save and restore partial transformation
states. The partial states were being stored and restored in struct
sha1_state for sha1/hmac(sha1) transformations and struct sha256_state for
sha256/hmac(sha256) transformations. This led to a bunch of corner cases
where improper state was being stored and restored. A few of the corner
cases that turned up during testing are:

- wrong byte_count restored if export/import is called twice without a h/w
  transaction in between
- wrong buflen restored if the pending buffer length is exactly the block
  size
- wrong state restored if the buffer length is 0

To fix these issues, save and restore the partial transformation state
using the newly introduced struct qce_sha_saved_state. This ensures that
all the pieces required to properly restart the transformation are
captured and restored.
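
To make the first corner case concrete, a rough sketch of the kind of
sequence the extra self-tests generate (illustrative only; request setup
omitted, state is a buffer of crypto_ahash_statesize() bytes):

	crypto_ahash_init(req);
	crypto_ahash_update(req);		/* hash some data               */
	crypto_ahash_export(req, state);	/* partial state out            */
	crypto_ahash_import(req, state);	/* ...and straight back in      */
	crypto_ahash_export(req, state);	/* no h/w transaction in between:
						 * the old code derived a wrong
						 * byte_count here             */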

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v1->v2:
	- Introduced custom struct qce_sha_saved_state to store and
	  restore partial sha transformation. v1 was re-using
	  qce_sha_reqctx to save and restore partial states and this
	  could lead to potential memcpy issues around pointer copying.

 drivers/crypto/qce/sha.c | 122 +++++++++++----------------------------
 1 file changed, 34 insertions(+), 88 deletions(-)

diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index 61c418c12345..08aed03e2b59 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -12,9 +12,15 @@
 #include "core.h"
 #include "sha.h"
 
-/* crypto hw padding constant for first operation */
-#define SHA_PADDING		64
-#define SHA_PADDING_MASK	(SHA_PADDING - 1)
+struct qce_sha_saved_state {
+	u8 pending_buf[QCE_SHA_MAX_BLOCKSIZE];
+	u8 partial_digest[QCE_SHA_MAX_DIGESTSIZE];
+	__be32 byte_count[2];
+	unsigned int pending_buflen;
+	unsigned int flags;
+	u64 count;
+	bool first_blk;
+};
 
 static LIST_HEAD(ahash_algs);
 
@@ -139,97 +145,37 @@ static int qce_ahash_init(struct ahash_request *req)
 
 static int qce_ahash_export(struct ahash_request *req, void *out)
 {
-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
-	unsigned long flags = rctx->flags;
-	unsigned int digestsize = crypto_ahash_digestsize(ahash);
-	unsigned int blocksize =
-			crypto_tfm_alg_blocksize(crypto_ahash_tfm(ahash));
-
-	if (IS_SHA1(flags) || IS_SHA1_HMAC(flags)) {
-		struct sha1_state *out_state = out;
-
-		out_state->count = rctx->count;
-		qce_cpu_to_be32p_array((__be32 *)out_state->state,
-				       rctx->digest, digestsize);
-		memcpy(out_state->buffer, rctx->buf, blocksize);
-	} else if (IS_SHA256(flags) || IS_SHA256_HMAC(flags)) {
-		struct sha256_state *out_state = out;
-
-		out_state->count = rctx->count;
-		qce_cpu_to_be32p_array((__be32 *)out_state->state,
-				       rctx->digest, digestsize);
-		memcpy(out_state->buf, rctx->buf, blocksize);
-	} else {
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int qce_import_common(struct ahash_request *req, u64 in_count,
-			     const u32 *state, const u8 *buffer, bool hmac)
-{
-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
-	struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
-	unsigned int digestsize = crypto_ahash_digestsize(ahash);
-	unsigned int blocksize;
-	u64 count = in_count;
-
-	blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(ahash));
-	rctx->count = in_count;
-	memcpy(rctx->buf, buffer, blocksize);
-
-	if (in_count <= blocksize) {
-		rctx->first_blk = 1;
-	} else {
-		rctx->first_blk = 0;
-		/*
-		 * For HMAC, there is a hardware padding done when first block
-		 * is set. Therefore the byte_count must be incremened by 64
-		 * after the first block operation.
-		 */
-		if (hmac)
-			count += SHA_PADDING;
-	}
+	struct qce_sha_saved_state *export_state = out;
 
-	rctx->byte_count[0] = (__force __be32)(count & ~SHA_PADDING_MASK);
-	rctx->byte_count[1] = (__force __be32)(count >> 32);
-	qce_cpu_to_be32p_array((__be32 *)rctx->digest, (const u8 *)state,
-			       digestsize);
-	rctx->buflen = (unsigned int)(in_count & (blocksize - 1));
+	memcpy(export_state->pending_buf, rctx->buf, rctx->buflen);
+	memcpy(export_state->partial_digest, rctx->digest,
+	       sizeof(rctx->digest));
+	memcpy(export_state->byte_count, rctx->byte_count, 2);
+	export_state->pending_buflen = rctx->buflen;
+	export_state->count = rctx->count;
+	export_state->first_blk = rctx->first_blk;
+	export_state->flags = rctx->flags;
 
 	return 0;
 }
 
 static int qce_ahash_import(struct ahash_request *req, const void *in)
 {
-	struct qce_sha_reqctx *rctx;
-	unsigned long flags;
-	bool hmac;
-	int ret;
-
-	ret = qce_ahash_init(req);
-	if (ret)
-		return ret;
-
-	rctx = ahash_request_ctx(req);
-	flags = rctx->flags;
-	hmac = IS_SHA_HMAC(flags);
-
-	if (IS_SHA1(flags) || IS_SHA1_HMAC(flags)) {
-		const struct sha1_state *state = in;
-
-		ret = qce_import_common(req, state->count, state->state,
-					state->buffer, hmac);
-	} else if (IS_SHA256(flags) || IS_SHA256_HMAC(flags)) {
-		const struct sha256_state *state = in;
+	struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+	struct qce_sha_saved_state *import_state = in;
 
-		ret = qce_import_common(req, state->count, state->state,
-					state->buf, hmac);
-	}
+	memset(rctx, 0, sizeof(*rctx));
+	rctx->count = import_state->count;
+	rctx->buflen = import_state->pending_buflen;
+	rctx->first_blk = import_state->first_blk;
+	rctx->flags = import_state->flags;
+	memcpy(rctx->buf, import_state->pending_buf, rctx->buflen);
+	memcpy(rctx->digest, import_state->partial_digest,
+	       sizeof(rctx->digest));
+	memcpy(rctx->byte_count, import_state->byte_count, 2);
 
-	return ret;
+	return 0;
 }
 
 static int qce_ahash_update(struct ahash_request *req)
@@ -450,7 +396,7 @@ static const struct qce_ahash_def ahash_def[] = {
 		.drv_name	= "sha1-qce",
 		.digestsize	= SHA1_DIGEST_SIZE,
 		.blocksize	= SHA1_BLOCK_SIZE,
-		.statesize	= sizeof(struct sha1_state),
+		.statesize	= sizeof(struct qce_sha_saved_state),
 		.std_iv		= std_iv_sha1,
 	},
 	{
@@ -459,7 +405,7 @@ static const struct qce_ahash_def ahash_def[] = {
 		.drv_name	= "sha256-qce",
 		.digestsize	= SHA256_DIGEST_SIZE,
 		.blocksize	= SHA256_BLOCK_SIZE,
-		.statesize	= sizeof(struct sha256_state),
+		.statesize	= sizeof(struct qce_sha_saved_state),
 		.std_iv		= std_iv_sha256,
 	},
 	{
@@ -468,7 +414,7 @@ static const struct qce_ahash_def ahash_def[] = {
 		.drv_name	= "hmac-sha1-qce",
 		.digestsize	= SHA1_DIGEST_SIZE,
 		.blocksize	= SHA1_BLOCK_SIZE,
-		.statesize	= sizeof(struct sha1_state),
+		.statesize	= sizeof(struct qce_sha_saved_state),
 		.std_iv		= std_iv_sha1,
 	},
 	{
@@ -477,7 +423,7 @@ static const struct qce_ahash_def ahash_def[] = {
 		.drv_name	= "hmac-sha256-qce",
 		.digestsize	= SHA256_DIGEST_SIZE,
 		.blocksize	= SHA256_BLOCK_SIZE,
-		.statesize	= sizeof(struct sha256_state),
+		.statesize	= sizeof(struct qce_sha_saved_state),
 		.std_iv		= std_iv_sha256,
 	},
 };
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 2/6] drivers: crypto: qce: sha: Hold back a block of data to be transferred as part of final
  2021-01-20 18:48 [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver Thara Gopinath
  2021-01-20 18:48 ` [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import Thara Gopinath
@ 2021-01-20 18:48 ` Thara Gopinath
  2021-01-25 16:19   ` Bjorn Andersson
  2021-01-20 18:48 ` [PATCH v3 3/6] drivers: crypto: qce: skcipher: Fix regressions found during fuzz testing Thara Gopinath
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Thara Gopinath @ 2021-01-20 18:48 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

If the data available to transfer is exactly a multiple of the block size,
hold back the last block so that it is transferred in qce_ahash_final (with
the last-block bit set) if this is indeed the end of the data stream. If it
is not, the saved block is simply transferred as part of the next update.
If this block were not held back and this were indeed the end of the data
stream, the digest obtained would be wrong, since qce_ahash_final would see
that rctx->buflen is 0 and return without doing anything, which in turn
means that no digest would be copied to the destination result buffer.
qce_ahash_final cannot simply be changed to proceed when rctx->buflen is 0,
because the crypto engine BAM does not allow zero length transfers.
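
As a concrete example (illustrative only; request setup omitted), with
sha256-qce (64 byte blocks) and a single 128 byte request, before this
patch:

	crypto_ahash_init(req);
	crypto_ahash_update(req);	/* nbytes == 128: both blocks go to the
					 * engine and rctx->buflen ends up 0  */
	crypto_ahash_final(req);	/* sees rctx->buflen == 0 and returns
					 * without writing req->result        */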

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/sha.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index 08aed03e2b59..dd263c5e4dd8 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -216,6 +216,25 @@ static int qce_ahash_update(struct ahash_request *req)
 
 	/* calculate how many bytes will be hashed later */
 	hash_later = total % blocksize;
+
+	/*
+	 * At this point, there is more than one block size of data.  If
+	 * the available data to transfer is exactly a multiple of block
+	 * size, save the last block to be transferred in qce_ahash_final
+	 * (with the last block bit set) if this is indeed the end of data
+	 * stream. If not this saved block will be transferred as part of
+	 * next update. If this block is not held back and if this is
+	 * indeed the end of data stream, the digest obtained will be wrong
+	 * since qce_ahash_final will see that rctx->buflen is 0 and return
+	 * doing nothing which in turn means that a digest will not be
+	 * copied to the destination result buffer.  qce_ahash_final cannot
+	 * be made to alter this behavior and allowed to proceed if
+	 * rctx->buflen is 0 because the crypto engine BAM does not allow
+	 * for zero length transfers.
+	 */
+	if (!hash_later)
+		hash_later = blocksize;
+
 	if (hash_later) {
 		unsigned int src_offset = req->nbytes - hash_later;
 		scatterwalk_map_and_copy(rctx->buf, req->src, src_offset,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 3/6] drivers: crypto: qce: skcipher: Fix regressions found during fuzz testing
  2021-01-20 18:48 [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver Thara Gopinath
  2021-01-20 18:48 ` [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import Thara Gopinath
  2021-01-20 18:48 ` [PATCH v3 2/6] drivers: crypto: qce: sha: Hold back a block of data to be transferred as part of final Thara Gopinath
@ 2021-01-20 18:48 ` Thara Gopinath
  2021-01-25 16:25   ` Bjorn Andersson
  2021-01-20 18:48 ` [PATCH v3 4/6] drivers: crypto: qce: common: Set data unit size to message length for AES XTS transformation Thara Gopinath
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Thara Gopinath @ 2021-01-20 18:48 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

This patch contains the following fixes for the supported encryption
algorithms in the Qualcomm crypto engine (CE):
1. Return unsupported if key1 = key2 for the AES XTS algorithm, since the
CE does not support this and the operation causes the engine to hang.
2. Return unsupported if any two of the three keys are the same for the
DES3 algorithms, since the CE does not support this and the operation
causes the engine to hang.
3. Return unsupported for 0 length plaintexts, since the crypto engine BAM
DMA does not support 0 length data.
4. ECB messages do not have an IV, so set the ivsize to 0.
5. Ensure that the data passed for ECB/CBC encryption/decryption is
blocksize aligned. Otherwise the CE hangs on the operation.
6. Allow messages of length less than 512 bytes for all encryption
algorithms other than AES XTS. The recommendation to use data sizes larger
than 512 bytes applies only to AES XTS.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v2->v3:
	- Made the comparison that checks whether any two keys are the
	  same for the triple DES algorithms constant-time, as per
	  Nym Seddon's suggestion.

 drivers/crypto/qce/skcipher.c | 68 ++++++++++++++++++++++++++++++-----
 1 file changed, 60 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
index a2d3da0ad95f..d78b932441ab 100644
--- a/drivers/crypto/qce/skcipher.c
+++ b/drivers/crypto/qce/skcipher.c
@@ -167,16 +167,32 @@ static int qce_skcipher_setkey(struct crypto_skcipher *ablk, const u8 *key,
 	struct crypto_tfm *tfm = crypto_skcipher_tfm(ablk);
 	struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
 	unsigned long flags = to_cipher_tmpl(ablk)->alg_flags;
+	unsigned int __keylen;
 	int ret;
 
 	if (!key || !keylen)
 		return -EINVAL;
 
-	switch (IS_XTS(flags) ? keylen >> 1 : keylen) {
+	/*
+	 * AES XTS key1 = key2 not supported by crypto engine.
+	 * Revisit to request a fallback cipher in this case.
+	 */
+	if (IS_XTS(flags)) {
+		__keylen = keylen >> 1;
+		if (!memcmp(key, key + __keylen, __keylen))
+			return -EINVAL;
+	} else {
+		__keylen = keylen;
+	}
+	switch (__keylen) {
 	case AES_KEYSIZE_128:
 	case AES_KEYSIZE_256:
 		memcpy(ctx->enc_key, key, keylen);
 		break;
+	case AES_KEYSIZE_192:
+		break;
+	default:
+		return -EINVAL;
 	}
 
 	ret = crypto_skcipher_setkey(ctx->fallback, key, keylen);
@@ -204,12 +220,27 @@ static int qce_des3_setkey(struct crypto_skcipher *ablk, const u8 *key,
 			   unsigned int keylen)
 {
 	struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(ablk);
+	u32 _key[6];
 	int err;
 
 	err = verify_skcipher_des3_key(ablk, key);
 	if (err)
 		return err;
 
+	/*
+	 * The crypto engine does not support any two keys
+	 * being the same for triple des algorithms. The
+	 * verify_skcipher_des3_key does not check for all the
+	 * below conditions. Return -ENOKEY in case any two keys
+	 * are the same. Revisit to see if a fallback cipher
+	 * is needed to handle this condition.
+	 */
+	memcpy(_key, key, DES3_EDE_KEY_SIZE);
+	if (!((_key[0] ^ _key[2]) | (_key[1] ^ _key[3])) |
+	    !((_key[2] ^ _key[4]) | (_key[3] ^ _key[5])) |
+	    !((_key[0] ^ _key[4]) | (_key[1] ^ _key[5])))
+		return -ENOKEY;
+
 	ctx->enc_keylen = keylen;
 	memcpy(ctx->enc_key, key, keylen);
 	return 0;
@@ -221,6 +252,7 @@ static int qce_skcipher_crypt(struct skcipher_request *req, int encrypt)
 	struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req);
 	struct qce_alg_template *tmpl = to_cipher_tmpl(tfm);
+	unsigned int blocksize = crypto_skcipher_blocksize(tfm);
 	int keylen;
 	int ret;
 
@@ -228,14 +260,34 @@ static int qce_skcipher_crypt(struct skcipher_request *req, int encrypt)
 	rctx->flags |= encrypt ? QCE_ENCRYPT : QCE_DECRYPT;
 	keylen = IS_XTS(rctx->flags) ? ctx->enc_keylen >> 1 : ctx->enc_keylen;
 
-	/* qce is hanging when AES-XTS request len > QCE_SECTOR_SIZE and
-	 * is not a multiple of it; pass such requests to the fallback
+	/* CE does not handle 0 length messages */
+	if (!req->cryptlen)
+		return -EINVAL;
+
+	/*
+	 * ECB and CBC algorithms require message lengths to be
+	 * multiples of block size.
+	 * TODO: The spec says AES CBC mode for certain versions
+	 * of crypto engine can handle partial blocks as well.
+	 * Test and enable such messages.
+	 */
+	if (IS_ECB(rctx->flags) || IS_CBC(rctx->flags))
+		if (!IS_ALIGNED(req->cryptlen, blocksize))
+			return -EINVAL;
+
+	/*
+	 * Conditions for requesting a fallback cipher
+	 * AES-192 (not supported by crypto engine (CE))
+	 * AES-XTS request with len <= 512 byte (not recommended to use CE)
+	 * AES-XTS request with len > QCE_SECTOR_SIZE and
+	 * is not a multiple of it.(Revisit this condition to check if it is
+	 * needed in all versions of CE)
 	 */
 	if (IS_AES(rctx->flags) &&
-	    (((keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256) ||
-	      req->cryptlen <= aes_sw_max_len) ||
-	     (IS_XTS(rctx->flags) && req->cryptlen > QCE_SECTOR_SIZE &&
-	      req->cryptlen % QCE_SECTOR_SIZE))) {
+	    ((keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256) ||
+	    (IS_XTS(rctx->flags) && ((req->cryptlen <= aes_sw_max_len) ||
+	    (req->cryptlen > QCE_SECTOR_SIZE &&
+	    req->cryptlen % QCE_SECTOR_SIZE))))) {
 		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
 		skcipher_request_set_callback(&rctx->fallback_req,
 					      req->base.flags,
@@ -307,7 +359,7 @@ static const struct qce_skcipher_def skcipher_def[] = {
 		.name		= "ecb(aes)",
 		.drv_name	= "ecb-aes-qce",
 		.blocksize	= AES_BLOCK_SIZE,
-		.ivsize		= AES_BLOCK_SIZE,
+		.ivsize		= 0,
 		.min_keysize	= AES_MIN_KEY_SIZE,
 		.max_keysize	= AES_MAX_KEY_SIZE,
 	},
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 4/6] drivers: crypto: qce: common: Set data unit size to message length for AES XTS transformation
  2021-01-20 18:48 [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver Thara Gopinath
                   ` (2 preceding siblings ...)
  2021-01-20 18:48 ` [PATCH v3 3/6] drivers: crypto: qce: skcipher: Fix regressions found during fuzz testing Thara Gopinath
@ 2021-01-20 18:48 ` Thara Gopinath
  2021-01-25 16:31   ` Bjorn Andersson
  2021-01-20 18:48 ` [PATCH v3 5/6] drivers: crypto: qce: Remover src_tbl from qce_cipher_reqctx Thara Gopinath
  2021-01-20 18:48 ` [PATCH v3 6/6] drivers: crypto: qce: Remove totallen and offset in qce_start Thara Gopinath
  5 siblings, 1 reply; 14+ messages in thread
From: Thara Gopinath @ 2021-01-20 18:48 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

Set the register REG_ENCR_XTS_DU_SIZE to cryptlen for the AES XTS
transformation. Anything else causes the engine to return wrong results.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/common.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index a73db2a5637f..f7bc701a4aa2 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -295,15 +295,15 @@ static void qce_xtskey(struct qce_device *qce, const u8 *enckey,
 {
 	u32 xtskey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(u32)] = {0};
 	unsigned int xtsklen = enckeylen / (2 * sizeof(u32));
-	unsigned int xtsdusize;
 
 	qce_cpu_to_be32p_array((__be32 *)xtskey, enckey + enckeylen / 2,
 			       enckeylen / 2);
 	qce_write_array(qce, REG_ENCR_XTS_KEY0, xtskey, xtsklen);
 
-	/* xts du size 512B */
-	xtsdusize = min_t(u32, QCE_SECTOR_SIZE, cryptlen);
-	qce_write(qce, REG_ENCR_XTS_DU_SIZE, xtsdusize);
+	/* Set data unit size to cryptlen. Anything else causes
+	 * crypto engine to return back incorrect results.
+	 */
+	qce_write(qce, REG_ENCR_XTS_DU_SIZE, cryptlen);
 }
 
 static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 5/6] drivers: crypto: qce: Remover src_tbl from qce_cipher_reqctx
  2021-01-20 18:48 [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver Thara Gopinath
                   ` (3 preceding siblings ...)
  2021-01-20 18:48 ` [PATCH v3 4/6] drivers: crypto: qce: common: Set data unit size to message length for AES XTS transformation Thara Gopinath
@ 2021-01-20 18:48 ` Thara Gopinath
  2021-01-25 16:32   ` Bjorn Andersson
  2021-01-20 18:48 ` [PATCH v3 6/6] drivers: crypto: qce: Remove totallen and offset in qce_start Thara Gopinath
  5 siblings, 1 reply; 14+ messages in thread
From: Thara Gopinath @ 2021-01-20 18:48 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

src_tbl is unused, so remove it from struct qce_cipher_reqctx.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/cipher.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/crypto/qce/cipher.h b/drivers/crypto/qce/cipher.h
index cffa9fc628ff..850f257d00f3 100644
--- a/drivers/crypto/qce/cipher.h
+++ b/drivers/crypto/qce/cipher.h
@@ -40,7 +40,6 @@ struct qce_cipher_reqctx {
 	struct scatterlist result_sg;
 	struct sg_table dst_tbl;
 	struct scatterlist *dst_sg;
-	struct sg_table src_tbl;
 	struct scatterlist *src_sg;
 	unsigned int cryptlen;
 	struct skcipher_request fallback_req;	// keep at the end
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 6/6] drivers: crypto: qce: Remove totallen and offset in qce_start
  2021-01-20 18:48 [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver Thara Gopinath
                   ` (4 preceding siblings ...)
  2021-01-20 18:48 ` [PATCH v3 5/6] drivers: crypto: qce: Remover src_tbl from qce_cipher_reqctx Thara Gopinath
@ 2021-01-20 18:48 ` Thara Gopinath
  2021-01-25 16:34   ` Bjorn Andersson
  5 siblings, 1 reply; 14+ messages in thread
From: Thara Gopinath @ 2021-01-20 18:48 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

totallen is used to get the size of the data to be transformed. This is
also available via nbytes or cryptlen in qce_sha_reqctx and
qce_cipher_reqctx. Similarly, offset conveys nothing for the supported
encryption and authentication transformations and is always 0. Remove
these two redundant parameters from qce_start.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/common.c   | 17 +++++++----------
 drivers/crypto/qce/common.h   |  3 +--
 drivers/crypto/qce/sha.c      |  2 +-
 drivers/crypto/qce/skcipher.c |  2 +-
 4 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index f7bc701a4aa2..dceb9579d87a 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -140,8 +140,7 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
 	return cfg;
 }
 
-static int qce_setup_regs_ahash(struct crypto_async_request *async_req,
-				u32 totallen, u32 offset)
+static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 {
 	struct ahash_request *req = ahash_request_cast(async_req);
 	struct crypto_ahash *ahash = __crypto_ahash_cast(async_req->tfm);
@@ -306,8 +305,7 @@ static void qce_xtskey(struct qce_device *qce, const u8 *enckey,
 	qce_write(qce, REG_ENCR_XTS_DU_SIZE, cryptlen);
 }
 
-static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
-				     u32 totallen, u32 offset)
+static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
 {
 	struct skcipher_request *req = skcipher_request_cast(async_req);
 	struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req);
@@ -367,7 +365,7 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
 
 	qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
 	qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
-	qce_write(qce, REG_ENCR_SEG_START, offset & 0xffff);
+	qce_write(qce, REG_ENCR_SEG_START, 0);
 
 	if (IS_CTR(flags)) {
 		qce_write(qce, REG_CNTR_MASK, ~0);
@@ -376,7 +374,7 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
 		qce_write(qce, REG_CNTR_MASK2, ~0);
 	}
 
-	qce_write(qce, REG_SEG_SIZE, totallen);
+	qce_write(qce, REG_SEG_SIZE, rctx->cryptlen);
 
 	/* get little endianness */
 	config = qce_config_reg(qce, 1);
@@ -388,17 +386,16 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
 }
 #endif
 
-int qce_start(struct crypto_async_request *async_req, u32 type, u32 totallen,
-	      u32 offset)
+int qce_start(struct crypto_async_request *async_req, u32 type)
 {
 	switch (type) {
 #ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
 	case CRYPTO_ALG_TYPE_SKCIPHER:
-		return qce_setup_regs_skcipher(async_req, totallen, offset);
+		return qce_setup_regs_skcipher(async_req);
 #endif
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
 	case CRYPTO_ALG_TYPE_AHASH:
-		return qce_setup_regs_ahash(async_req, totallen, offset);
+		return qce_setup_regs_ahash(async_req);
 #endif
 	default:
 		return -EINVAL;
diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h
index 85ba16418a04..3bc244bcca2d 100644
--- a/drivers/crypto/qce/common.h
+++ b/drivers/crypto/qce/common.h
@@ -94,7 +94,6 @@ struct qce_alg_template {
 void qce_cpu_to_be32p_array(__be32 *dst, const u8 *src, unsigned int len);
 int qce_check_status(struct qce_device *qce, u32 *status);
 void qce_get_version(struct qce_device *qce, u32 *major, u32 *minor, u32 *step);
-int qce_start(struct crypto_async_request *async_req, u32 type, u32 totallen,
-	      u32 offset);
+int qce_start(struct crypto_async_request *async_req, u32 type);
 
 #endif /* _COMMON_H_ */
diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index dd263c5e4dd8..a079e92b4e75 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -113,7 +113,7 @@ static int qce_ahash_async_req_handle(struct crypto_async_request *async_req)
 
 	qce_dma_issue_pending(&qce->dma);
 
-	ret = qce_start(async_req, tmpl->crypto_alg_type, 0, 0);
+	ret = qce_start(async_req, tmpl->crypto_alg_type);
 	if (ret)
 		goto error_terminate;
 
diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
index d78b932441ab..a93fd3fd5f1a 100644
--- a/drivers/crypto/qce/skcipher.c
+++ b/drivers/crypto/qce/skcipher.c
@@ -143,7 +143,7 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
 
 	qce_dma_issue_pending(&qce->dma);
 
-	ret = qce_start(async_req, tmpl->crypto_alg_type, req->cryptlen, 0);
+	ret = qce_start(async_req, tmpl->crypto_alg_type);
 	if (ret)
 		goto error_terminate;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import
  2021-01-20 18:48 ` [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import Thara Gopinath
@ 2021-01-25 16:07   ` Bjorn Andersson
  2021-02-02 23:49   ` kernel test robot
  1 sibling, 0 replies; 14+ messages in thread
From: Bjorn Andersson @ 2021-01-25 16:07 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: herbert, davem, ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

On Wed 20 Jan 12:48 CST 2021, Thara Gopinath wrote:

Please drop "drivers: " from $subject.

> Export and import interfaces save and restore partial transformation
> states. The partial states were being stored and restored in struct
> sha1_state for sha1/hmac(sha1) transformations and struct sha256_state for
> sha256/hmac(sha256) transformations. This led to a bunch of corner cases
> where improper state was being stored and restored. A few of the corner
> cases that turned up during testing are:
> 
> - wrong byte_count restored if export/import is called twice without a h/w
>   transaction in between
> - wrong buflen restored if the pending buffer length is exactly the block
>   size
> - wrong state restored if the buffer length is 0
> 
> To fix these issues, save and restore the partial transformation state
> using the newly introduced struct qce_sha_saved_state. This ensures that
> all the pieces required to properly restart the transformation are
> captured and restored.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
> 
> v1->v2:
> 	- Introduced custom struct qce_sha_saved_state to store and
> 	  restore partial sha transformation. v1 was re-using
> 	  qce_sha_reqctx to save and restore partial states and this
> 	  could lead to potential memcpy issues around pointer copying.
> 
>  drivers/crypto/qce/sha.c | 122 +++++++++++----------------------------
>  1 file changed, 34 insertions(+), 88 deletions(-)
> 
> diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
> index 61c418c12345..08aed03e2b59 100644
> --- a/drivers/crypto/qce/sha.c
> +++ b/drivers/crypto/qce/sha.c
> @@ -12,9 +12,15 @@
>  #include "core.h"
>  #include "sha.h"
>  
> -/* crypto hw padding constant for first operation */
> -#define SHA_PADDING		64
> -#define SHA_PADDING_MASK	(SHA_PADDING - 1)
> +struct qce_sha_saved_state {
> +	u8 pending_buf[QCE_SHA_MAX_BLOCKSIZE];
> +	u8 partial_digest[QCE_SHA_MAX_DIGESTSIZE];
> +	__be32 byte_count[2];
> +	unsigned int pending_buflen;
> +	unsigned int flags;
> +	u64 count;
> +	bool first_blk;
> +};
>  
>  static LIST_HEAD(ahash_algs);
>  
> @@ -139,97 +145,37 @@ static int qce_ahash_init(struct ahash_request *req)
>  
>  static int qce_ahash_export(struct ahash_request *req, void *out)
>  {
> -	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
>  	struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
> -	unsigned long flags = rctx->flags;
> -	unsigned int digestsize = crypto_ahash_digestsize(ahash);
> -	unsigned int blocksize =
> -			crypto_tfm_alg_blocksize(crypto_ahash_tfm(ahash));
> -
> -	if (IS_SHA1(flags) || IS_SHA1_HMAC(flags)) {
> -		struct sha1_state *out_state = out;
> -
> -		out_state->count = rctx->count;
> -		qce_cpu_to_be32p_array((__be32 *)out_state->state,
> -				       rctx->digest, digestsize);
> -		memcpy(out_state->buffer, rctx->buf, blocksize);
> -	} else if (IS_SHA256(flags) || IS_SHA256_HMAC(flags)) {
> -		struct sha256_state *out_state = out;
> -
> -		out_state->count = rctx->count;
> -		qce_cpu_to_be32p_array((__be32 *)out_state->state,
> -				       rctx->digest, digestsize);
> -		memcpy(out_state->buf, rctx->buf, blocksize);
> -	} else {
> -		return -EINVAL;
> -	}
> -
> -	return 0;
> -}
> -
> -static int qce_import_common(struct ahash_request *req, u64 in_count,
> -			     const u32 *state, const u8 *buffer, bool hmac)
> -{
> -	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
> -	struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
> -	unsigned int digestsize = crypto_ahash_digestsize(ahash);
> -	unsigned int blocksize;
> -	u64 count = in_count;
> -
> -	blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(ahash));
> -	rctx->count = in_count;
> -	memcpy(rctx->buf, buffer, blocksize);
> -
> -	if (in_count <= blocksize) {
> -		rctx->first_blk = 1;
> -	} else {
> -		rctx->first_blk = 0;
> -		/*
> -		 * For HMAC, there is a hardware padding done when first block
> -		 * is set. Therefore the byte_count must be incremened by 64
> -		 * after the first block operation.
> -		 */
> -		if (hmac)
> -			count += SHA_PADDING;
> -	}
> +	struct qce_sha_saved_state *export_state = out;
>  
> -	rctx->byte_count[0] = (__force __be32)(count & ~SHA_PADDING_MASK);
> -	rctx->byte_count[1] = (__force __be32)(count >> 32);
> -	qce_cpu_to_be32p_array((__be32 *)rctx->digest, (const u8 *)state,
> -			       digestsize);
> -	rctx->buflen = (unsigned int)(in_count & (blocksize - 1));
> +	memcpy(export_state->pending_buf, rctx->buf, rctx->buflen);
> +	memcpy(export_state->partial_digest, rctx->digest,
> +	       sizeof(rctx->digest));

No need to wrap this line.

> +	memcpy(export_state->byte_count, rctx->byte_count, 2);

You're only stashing 2 of the 8 bytes here. So you should either copy
sizeof(byte_count) bytes, or perhaps it's more obvious if you just
assigned byte_count[0] and byte_count[1]?
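
Something along these lines (untested, just to illustrate the two
options):

	memcpy(export_state->byte_count, rctx->byte_count,
	       sizeof(rctx->byte_count));

or

	export_state->byte_count[0] = rctx->byte_count[0];
	export_state->byte_count[1] = rctx->byte_count[1];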

> +	export_state->pending_buflen = rctx->buflen;
> +	export_state->count = rctx->count;
> +	export_state->first_blk = rctx->first_blk;
> +	export_state->flags = rctx->flags;
>  
>  	return 0;
>  }
>  
>  static int qce_ahash_import(struct ahash_request *req, const void *in)
>  {
> -	struct qce_sha_reqctx *rctx;
> -	unsigned long flags;
> -	bool hmac;
> -	int ret;
> -
> -	ret = qce_ahash_init(req);
> -	if (ret)
> -		return ret;
> -
> -	rctx = ahash_request_ctx(req);
> -	flags = rctx->flags;
> -	hmac = IS_SHA_HMAC(flags);
> -
> -	if (IS_SHA1(flags) || IS_SHA1_HMAC(flags)) {
> -		const struct sha1_state *state = in;
> -
> -		ret = qce_import_common(req, state->count, state->state,
> -					state->buffer, hmac);
> -	} else if (IS_SHA256(flags) || IS_SHA256_HMAC(flags)) {
> -		const struct sha256_state *state = in;
> +	struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
> +	struct qce_sha_saved_state *import_state = in;
>  
> -		ret = qce_import_common(req, state->count, state->state,
> -					state->buf, hmac);
> -	}
> +	memset(rctx, 0, sizeof(*rctx));
> +	rctx->count = import_state->count;
> +	rctx->buflen = import_state->pending_buflen;
> +	rctx->first_blk = import_state->first_blk;
> +	rctx->flags = import_state->flags;
> +	memcpy(rctx->buf, import_state->pending_buf, rctx->buflen);
> +	memcpy(rctx->digest, import_state->partial_digest,
> +	       sizeof(rctx->digest));
> +	memcpy(rctx->byte_count, import_state->byte_count, 2);

Same as above, you're just restoring 2 of the 8 bytes.

Regards,
Bjorn

>  
> -	return ret;
> +	return 0;
>  }
>  
>  static int qce_ahash_update(struct ahash_request *req)
> @@ -450,7 +396,7 @@ static const struct qce_ahash_def ahash_def[] = {
>  		.drv_name	= "sha1-qce",
>  		.digestsize	= SHA1_DIGEST_SIZE,
>  		.blocksize	= SHA1_BLOCK_SIZE,
> -		.statesize	= sizeof(struct sha1_state),
> +		.statesize	= sizeof(struct qce_sha_saved_state),
>  		.std_iv		= std_iv_sha1,
>  	},
>  	{
> @@ -459,7 +405,7 @@ static const struct qce_ahash_def ahash_def[] = {
>  		.drv_name	= "sha256-qce",
>  		.digestsize	= SHA256_DIGEST_SIZE,
>  		.blocksize	= SHA256_BLOCK_SIZE,
> -		.statesize	= sizeof(struct sha256_state),
> +		.statesize	= sizeof(struct qce_sha_saved_state),
>  		.std_iv		= std_iv_sha256,
>  	},
>  	{
> @@ -468,7 +414,7 @@ static const struct qce_ahash_def ahash_def[] = {
>  		.drv_name	= "hmac-sha1-qce",
>  		.digestsize	= SHA1_DIGEST_SIZE,
>  		.blocksize	= SHA1_BLOCK_SIZE,
> -		.statesize	= sizeof(struct sha1_state),
> +		.statesize	= sizeof(struct qce_sha_saved_state),
>  		.std_iv		= std_iv_sha1,
>  	},
>  	{
> @@ -477,7 +423,7 @@ static const struct qce_ahash_def ahash_def[] = {
>  		.drv_name	= "hmac-sha256-qce",
>  		.digestsize	= SHA256_DIGEST_SIZE,
>  		.blocksize	= SHA256_BLOCK_SIZE,
> -		.statesize	= sizeof(struct sha256_state),
> +		.statesize	= sizeof(struct qce_sha_saved_state),
>  		.std_iv		= std_iv_sha256,
>  	},
>  };
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 2/6] drivers: crypto: qce: sha: Hold back a block of data to be transferred as part of final
  2021-01-20 18:48 ` [PATCH v3 2/6] drivers: crypto: qce: sha: Hold back a block of data to be transferred as part of final Thara Gopinath
@ 2021-01-25 16:19   ` Bjorn Andersson
  0 siblings, 0 replies; 14+ messages in thread
From: Bjorn Andersson @ 2021-01-25 16:19 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: herbert, davem, ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

On Wed 20 Jan 12:48 CST 2021, Thara Gopinath wrote:

> If the data available to transfer is exactly a multiple of the block size,
> hold back the last block so that it is transferred in qce_ahash_final (with
> the last-block bit set) if this is indeed the end of the data stream. If it
> is not, the saved block is simply transferred as part of the next update.
> If this block were not held back and this were indeed the end of the data
> stream, the digest obtained would be wrong, since qce_ahash_final would see
> that rctx->buflen is 0 and return without doing anything, which in turn
> means that no digest would be copied to the destination result buffer.
> qce_ahash_final cannot simply be changed to proceed when rctx->buflen is 0,
> because the crypto engine BAM does not allow zero length transfers.
> 

Please drop "drivers: " from $subject.

Apart from that this looks good.

Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>

Regards,
Bjorn

> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>  drivers/crypto/qce/sha.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
> index 08aed03e2b59..dd263c5e4dd8 100644
> --- a/drivers/crypto/qce/sha.c
> +++ b/drivers/crypto/qce/sha.c
> @@ -216,6 +216,25 @@ static int qce_ahash_update(struct ahash_request *req)
>  
>  	/* calculate how many bytes will be hashed later */
>  	hash_later = total % blocksize;
> +
> +	/*
> +	 * At this point, there is more than one block size of data.  If
> +	 * the available data to transfer is exactly a multiple of block
> +	 * size, save the last block to be transferred in qce_ahash_final
> +	 * (with the last block bit set) if this is indeed the end of data
> +	 * stream. If not this saved block will be transferred as part of
> +	 * next update. If this block is not held back and if this is
> +	 * indeed the end of data stream, the digest obtained will be wrong
> +	 * since qce_ahash_final will see that rctx->buflen is 0 and return
> +	 * doing nothing which in turn means that a digest will not be
> +	 * copied to the destination result buffer.  qce_ahash_final cannot
> +	 * be made to alter this behavior and allowed to proceed if
> +	 * rctx->buflen is 0 because the crypto engine BAM does not allow
> +	 * for zero length transfers.
> +	 */
> +	if (!hash_later)
> +		hash_later = blocksize;
> +
>  	if (hash_later) {
>  		unsigned int src_offset = req->nbytes - hash_later;
>  		scatterwalk_map_and_copy(rctx->buf, req->src, src_offset,
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 3/6] drivers: crypto: qce: skcipher: Fix regressions found during fuzz testing
  2021-01-20 18:48 ` [PATCH v3 3/6] drivers: crypto: qce: skcipher: Fix regressions found during fuzz testing Thara Gopinath
@ 2021-01-25 16:25   ` Bjorn Andersson
  0 siblings, 0 replies; 14+ messages in thread
From: Bjorn Andersson @ 2021-01-25 16:25 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: herbert, davem, ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

On Wed 20 Jan 12:48 CST 2021, Thara Gopinath wrote:

> This patch contains the following fixes for the supported encryption
> algorithms in the Qualcomm crypto engine (CE):
> 1. Return unsupported if key1 = key2 for the AES XTS algorithm, since the
> CE does not support this and the operation causes the engine to hang.
> 2. Return unsupported if any two of the three keys are the same for the
> DES3 algorithms, since the CE does not support this and the operation
> causes the engine to hang.
> 3. Return unsupported for 0 length plaintexts, since the crypto engine BAM
> DMA does not support 0 length data.
> 4. ECB messages do not have an IV, so set the ivsize to 0.
> 5. Ensure that the data passed for ECB/CBC encryption/decryption is
> blocksize aligned. Otherwise the CE hangs on the operation.
> 6. Allow messages of length less than 512 bytes for all encryption
> algorithms other than AES XTS. The recommendation to use data sizes larger
> than 512 bytes applies only to AES XTS.
> 

This seems like 6 trivial changes that, if sent individually, will be
easy to reason about, and if there are ever any regressions it will be
easy to bisect.

So please split this patch.

Regards,
Bjorn

> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
> 
> v2->v3:
> 	- Made the comparison that checks whether any two keys are the
> 	  same for the triple DES algorithms constant-time, as per
> 	  Nym Seddon's suggestion.
> 
>  drivers/crypto/qce/skcipher.c | 68 ++++++++++++++++++++++++++++++-----
>  1 file changed, 60 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
> index a2d3da0ad95f..d78b932441ab 100644
> --- a/drivers/crypto/qce/skcipher.c
> +++ b/drivers/crypto/qce/skcipher.c
> @@ -167,16 +167,32 @@ static int qce_skcipher_setkey(struct crypto_skcipher *ablk, const u8 *key,
>  	struct crypto_tfm *tfm = crypto_skcipher_tfm(ablk);
>  	struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
>  	unsigned long flags = to_cipher_tmpl(ablk)->alg_flags;
> +	unsigned int __keylen;
>  	int ret;
>  
>  	if (!key || !keylen)
>  		return -EINVAL;
>  
> -	switch (IS_XTS(flags) ? keylen >> 1 : keylen) {
> +	/*
> +	 * AES XTS key1 = key2 not supported by crypto engine.
> +	 * Revisit to request a fallback cipher in this case.
> +	 */
> +	if (IS_XTS(flags)) {
> +		__keylen = keylen >> 1;
> +		if (!memcmp(key, key + __keylen, __keylen))
> +			return -EINVAL;
> +	} else {
> +		__keylen = keylen;
> +	}
> +	switch (__keylen) {
>  	case AES_KEYSIZE_128:
>  	case AES_KEYSIZE_256:
>  		memcpy(ctx->enc_key, key, keylen);
>  		break;
> +	case AES_KEYSIZE_192:
> +		break;
> +	default:
> +		return -EINVAL;
>  	}
>  
>  	ret = crypto_skcipher_setkey(ctx->fallback, key, keylen);
> @@ -204,12 +220,27 @@ static int qce_des3_setkey(struct crypto_skcipher *ablk, const u8 *key,
>  			   unsigned int keylen)
>  {
>  	struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(ablk);
> +	u32 _key[6];
>  	int err;
>  
>  	err = verify_skcipher_des3_key(ablk, key);
>  	if (err)
>  		return err;
>  
> +	/*
> +	 * The crypto engine does not support any two keys
> +	 * being the same for triple des algorithms. The
> +	 * verify_skcipher_des3_key does not check for all the
> +	 * below conditions. Return -ENOKEY in case any two keys
> +	 * are the same. Revisit to see if a fallback cipher
> +	 * is needed to handle this condition.
> +	 */
> +	memcpy(_key, key, DES3_EDE_KEY_SIZE);
> +	if (!((_key[0] ^ _key[2]) | (_key[1] ^ _key[3])) |
> +	    !((_key[2] ^ _key[4]) | (_key[3] ^ _key[5])) |
> +	    !((_key[0] ^ _key[4]) | (_key[1] ^ _key[5])))
> +		return -ENOKEY;
> +
>  	ctx->enc_keylen = keylen;
>  	memcpy(ctx->enc_key, key, keylen);
>  	return 0;
> @@ -221,6 +252,7 @@ static int qce_skcipher_crypt(struct skcipher_request *req, int encrypt)
>  	struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
>  	struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req);
>  	struct qce_alg_template *tmpl = to_cipher_tmpl(tfm);
> +	unsigned int blocksize = crypto_skcipher_blocksize(tfm);
>  	int keylen;
>  	int ret;
>  
> @@ -228,14 +260,34 @@ static int qce_skcipher_crypt(struct skcipher_request *req, int encrypt)
>  	rctx->flags |= encrypt ? QCE_ENCRYPT : QCE_DECRYPT;
>  	keylen = IS_XTS(rctx->flags) ? ctx->enc_keylen >> 1 : ctx->enc_keylen;
>  
> -	/* qce is hanging when AES-XTS request len > QCE_SECTOR_SIZE and
> -	 * is not a multiple of it; pass such requests to the fallback
> +	/* CE does not handle 0 length messages */
> +	if (!req->cryptlen)
> +		return -EINVAL;
> +
> +	/*
> +	 * ECB and CBC algorithms require message lengths to be
> +	 * multiples of block size.
> +	 * TODO: The spec says AES CBC mode for certain versions
> +	 * of crypto engine can handle partial blocks as well.
> +	 * Test and enable such messages.
> +	 */
> +	if (IS_ECB(rctx->flags) || IS_CBC(rctx->flags))
> +		if (!IS_ALIGNED(req->cryptlen, blocksize))
> +			return -EINVAL;
> +
> +	/*
> +	 * Conditions for requesting a fallback cipher
> +	 * AES-192 (not supported by crypto engine (CE))
> +	 * AES-XTS request with len <= 512 byte (not recommended to use CE)
> +	 * AES-XTS request with len > QCE_SECTOR_SIZE and
> +	 * is not a multiple of it.(Revisit this condition to check if it is
> +	 * needed in all versions of CE)
>  	 */
>  	if (IS_AES(rctx->flags) &&
> -	    (((keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256) ||
> -	      req->cryptlen <= aes_sw_max_len) ||
> -	     (IS_XTS(rctx->flags) && req->cryptlen > QCE_SECTOR_SIZE &&
> -	      req->cryptlen % QCE_SECTOR_SIZE))) {
> +	    ((keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256) ||
> +	    (IS_XTS(rctx->flags) && ((req->cryptlen <= aes_sw_max_len) ||
> +	    (req->cryptlen > QCE_SECTOR_SIZE &&
> +	    req->cryptlen % QCE_SECTOR_SIZE))))) {
>  		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
>  		skcipher_request_set_callback(&rctx->fallback_req,
>  					      req->base.flags,
> @@ -307,7 +359,7 @@ static const struct qce_skcipher_def skcipher_def[] = {
>  		.name		= "ecb(aes)",
>  		.drv_name	= "ecb-aes-qce",
>  		.blocksize	= AES_BLOCK_SIZE,
> -		.ivsize		= AES_BLOCK_SIZE,
> +		.ivsize		= 0,
>  		.min_keysize	= AES_MIN_KEY_SIZE,
>  		.max_keysize	= AES_MAX_KEY_SIZE,
>  	},
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 4/6] drivers: crypto: qce: common: Set data unit size to message length for AES XTS transformation
  2021-01-20 18:48 ` [PATCH v3 4/6] drivers: crypto: qce: common: Set data unit size to message length for AES XTS transformation Thara Gopinath
@ 2021-01-25 16:31   ` Bjorn Andersson
  0 siblings, 0 replies; 14+ messages in thread
From: Bjorn Andersson @ 2021-01-25 16:31 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: herbert, davem, ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

On Wed 20 Jan 12:48 CST 2021, Thara Gopinath wrote:

> Set the register REG_ENCR_XTS_DU_SIZE to cryptlen for the AES XTS
> transformation. Anything else causes the engine to return wrong results.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>  drivers/crypto/qce/common.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
> index a73db2a5637f..f7bc701a4aa2 100644
> --- a/drivers/crypto/qce/common.c
> +++ b/drivers/crypto/qce/common.c
> @@ -295,15 +295,15 @@ static void qce_xtskey(struct qce_device *qce, const u8 *enckey,
>  {
>  	u32 xtskey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(u32)] = {0};
>  	unsigned int xtsklen = enckeylen / (2 * sizeof(u32));
> -	unsigned int xtsdusize;
>  
>  	qce_cpu_to_be32p_array((__be32 *)xtskey, enckey + enckeylen / 2,
>  			       enckeylen / 2);
>  	qce_write_array(qce, REG_ENCR_XTS_KEY0, xtskey, xtsklen);
>  
> -	/* xts du size 512B */
> -	xtsdusize = min_t(u32, QCE_SECTOR_SIZE, cryptlen);

I wonder if this is a hardware limitation that has gone away in the
newer chips. I am however not able to find anything about it, so I'm in
favor of merging this patch and if anyone actually uses the driver on
the older hardware we'd have to go back and quirk it somehow.

Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>

Regards,
Bjorn

> -	qce_write(qce, REG_ENCR_XTS_DU_SIZE, xtsdusize);
> +	/* Set data unit size to cryptlen. Anything else causes
> +	 * crypto engine to return back incorrect results.
> +	 */
> +	qce_write(qce, REG_ENCR_XTS_DU_SIZE, cryptlen);
>  }
>  
>  static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 5/6] drivers: crypto: qce: Remover src_tbl from qce_cipher_reqctx
  2021-01-20 18:48 ` [PATCH v3 5/6] drivers: crypto: qce: Remover src_tbl from qce_cipher_reqctx Thara Gopinath
@ 2021-01-25 16:32   ` Bjorn Andersson
  0 siblings, 0 replies; 14+ messages in thread
From: Bjorn Andersson @ 2021-01-25 16:32 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: herbert, davem, ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

On Wed 20 Jan 12:48 CST 2021, Thara Gopinath wrote:

> src_tbl is unused, so remove it from struct qce_cipher_reqctx.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>  drivers/crypto/qce/cipher.h | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/crypto/qce/cipher.h b/drivers/crypto/qce/cipher.h
> index cffa9fc628ff..850f257d00f3 100644
> --- a/drivers/crypto/qce/cipher.h
> +++ b/drivers/crypto/qce/cipher.h
> @@ -40,7 +40,6 @@ struct qce_cipher_reqctx {
>  	struct scatterlist result_sg;
>  	struct sg_table dst_tbl;
>  	struct scatterlist *dst_sg;
> -	struct sg_table src_tbl;

Please also remove the associated kerneldoc entry.
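
For reference, that is the @src_tbl line in the kerneldoc comment above
the struct, roughly the following (wording from memory, please check
cipher.h for the exact text):

	- * @src_tbl: source sg table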

Regards,
Bjorn

>  	struct scatterlist *src_sg;
>  	unsigned int cryptlen;
>  	struct skcipher_request fallback_req;	// keep at the end
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 6/6] drivers: crypto: qce: Remove totallen and offset in qce_start
  2021-01-20 18:48 ` [PATCH v3 6/6] drivers: crypto: qce: Remove totallen and offset in qce_start Thara Gopinath
@ 2021-01-25 16:34   ` Bjorn Andersson
  0 siblings, 0 replies; 14+ messages in thread
From: Bjorn Andersson @ 2021-01-25 16:34 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: herbert, davem, ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

On Wed 20 Jan 12:48 CST 2021, Thara Gopinath wrote:

> totallen is used to get the size of the data to be transformed. This is
> also available via nbytes or cryptlen in qce_sha_reqctx and
> qce_cipher_reqctx. Similarly, offset conveys nothing for the supported
> encryption and authentication transformations and is always 0. Remove
> these two redundant parameters from qce_start.
> 

Please drop "drivers: " from $subject.

Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>

Regards,
Bjorn

> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>  drivers/crypto/qce/common.c   | 17 +++++++----------
>  drivers/crypto/qce/common.h   |  3 +--
>  drivers/crypto/qce/sha.c      |  2 +-
>  drivers/crypto/qce/skcipher.c |  2 +-
>  4 files changed, 10 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
> index f7bc701a4aa2..dceb9579d87a 100644
> --- a/drivers/crypto/qce/common.c
> +++ b/drivers/crypto/qce/common.c
> @@ -140,8 +140,7 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
>  	return cfg;
>  }
>  
> -static int qce_setup_regs_ahash(struct crypto_async_request *async_req,
> -				u32 totallen, u32 offset)
> +static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
>  {
>  	struct ahash_request *req = ahash_request_cast(async_req);
>  	struct crypto_ahash *ahash = __crypto_ahash_cast(async_req->tfm);
> @@ -306,8 +305,7 @@ static void qce_xtskey(struct qce_device *qce, const u8 *enckey,
>  	qce_write(qce, REG_ENCR_XTS_DU_SIZE, cryptlen);
>  }
>  
> -static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
> -				     u32 totallen, u32 offset)
> +static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
>  {
>  	struct skcipher_request *req = skcipher_request_cast(async_req);
>  	struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req);
> @@ -367,7 +365,7 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
>  
>  	qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
>  	qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
> -	qce_write(qce, REG_ENCR_SEG_START, offset & 0xffff);
> +	qce_write(qce, REG_ENCR_SEG_START, 0);
>  
>  	if (IS_CTR(flags)) {
>  		qce_write(qce, REG_CNTR_MASK, ~0);
> @@ -376,7 +374,7 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
>  		qce_write(qce, REG_CNTR_MASK2, ~0);
>  	}
>  
> -	qce_write(qce, REG_SEG_SIZE, totallen);
> +	qce_write(qce, REG_SEG_SIZE, rctx->cryptlen);
>  
>  	/* get little endianness */
>  	config = qce_config_reg(qce, 1);
> @@ -388,17 +386,16 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req,
>  }
>  #endif
>  
> -int qce_start(struct crypto_async_request *async_req, u32 type, u32 totallen,
> -	      u32 offset)
> +int qce_start(struct crypto_async_request *async_req, u32 type)
>  {
>  	switch (type) {
>  #ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
>  	case CRYPTO_ALG_TYPE_SKCIPHER:
> -		return qce_setup_regs_skcipher(async_req, totallen, offset);
> +		return qce_setup_regs_skcipher(async_req);
>  #endif
>  #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
>  	case CRYPTO_ALG_TYPE_AHASH:
> -		return qce_setup_regs_ahash(async_req, totallen, offset);
> +		return qce_setup_regs_ahash(async_req);
>  #endif
>  	default:
>  		return -EINVAL;
> diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h
> index 85ba16418a04..3bc244bcca2d 100644
> --- a/drivers/crypto/qce/common.h
> +++ b/drivers/crypto/qce/common.h
> @@ -94,7 +94,6 @@ struct qce_alg_template {
>  void qce_cpu_to_be32p_array(__be32 *dst, const u8 *src, unsigned int len);
>  int qce_check_status(struct qce_device *qce, u32 *status);
>  void qce_get_version(struct qce_device *qce, u32 *major, u32 *minor, u32 *step);
> -int qce_start(struct crypto_async_request *async_req, u32 type, u32 totallen,
> -	      u32 offset);
> +int qce_start(struct crypto_async_request *async_req, u32 type);
>  
>  #endif /* _COMMON_H_ */
> diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
> index dd263c5e4dd8..a079e92b4e75 100644
> --- a/drivers/crypto/qce/sha.c
> +++ b/drivers/crypto/qce/sha.c
> @@ -113,7 +113,7 @@ static int qce_ahash_async_req_handle(struct crypto_async_request *async_req)
>  
>  	qce_dma_issue_pending(&qce->dma);
>  
> -	ret = qce_start(async_req, tmpl->crypto_alg_type, 0, 0);
> +	ret = qce_start(async_req, tmpl->crypto_alg_type);
>  	if (ret)
>  		goto error_terminate;
>  
> diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
> index d78b932441ab..a93fd3fd5f1a 100644
> --- a/drivers/crypto/qce/skcipher.c
> +++ b/drivers/crypto/qce/skcipher.c
> @@ -143,7 +143,7 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
>  
>  	qce_dma_issue_pending(&qce->dma);
>  
> -	ret = qce_start(async_req, tmpl->crypto_alg_type, req->cryptlen, 0);
> +	ret = qce_start(async_req, tmpl->crypto_alg_type);
>  	if (ret)
>  		goto error_terminate;
>  
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import
  2021-01-20 18:48 ` [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import Thara Gopinath
  2021-01-25 16:07   ` Bjorn Andersson
@ 2021-02-02 23:49   ` kernel test robot
  1 sibling, 0 replies; 14+ messages in thread
From: kernel test robot @ 2021-02-02 23:49 UTC (permalink / raw)
  To: Thara Gopinath, herbert, davem, bjorn.andersson
  Cc: kbuild-all, ebiggers, ardb, sivaprak, linux-crypto, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2754 bytes --]

Hi Thara,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on cryptodev/master]
[also build test WARNING on crypto/master v5.11-rc6 next-20210125]
[cannot apply to sparc-next/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thara-Gopinath/Regression-fixes-clean-ups-in-the-Qualcomm-crypto-engine-driver/20210121-032302
base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
config: arm64-randconfig-r024-20210202 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/b282823110c3b59ae881393d33df0b0e7e0eb90b
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Thara-Gopinath/Regression-fixes-clean-ups-in-the-Qualcomm-crypto-engine-driver/20210121-032302
        git checkout b282823110c3b59ae881393d33df0b0e7e0eb90b
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/crypto/qce/sha.c: In function 'qce_ahash_import':
>> drivers/crypto/qce/sha.c:166:45: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
     166 |  struct qce_sha_saved_state *import_state = in;
         |                                             ^~


vim +/const +166 drivers/crypto/qce/sha.c

   162	
   163	static int qce_ahash_import(struct ahash_request *req, const void *in)
   164	{
   165		struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
 > 166		struct qce_sha_saved_state *import_state = in;
   167	
   168		memset(rctx, 0, sizeof(*rctx));
   169		rctx->count = import_state->count;
   170		rctx->buflen = import_state->pending_buflen;
   171		rctx->first_blk = import_state->first_blk;
   172		rctx->flags = import_state->flags;
   173		memcpy(rctx->buf, import_state->pending_buf, rctx->buflen);
   174		memcpy(rctx->digest, import_state->partial_digest,
   175		       sizeof(rctx->digest));
   176		memcpy(rctx->byte_count, import_state->byte_count, 2);
   177	
   178		return 0;
   179	}
   180	
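
A likely fix, assuming nothing else in qce_ahash_import() needs a
writable pointer, is simply to carry the const through the local:

	const struct qce_sha_saved_state *import_state = in;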

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 40082 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2021-02-02 23:50 UTC | newest]

Thread overview: 14+ messages
2021-01-20 18:48 [PATCH v3 0/6] Regression fixes/clean ups in the Qualcomm crypto engine driver Thara Gopinath
2021-01-20 18:48 ` [PATCH v3 1/6] drivers: crypto: qce: sha: Restore/save ahash state with custom struct in export/import Thara Gopinath
2021-01-25 16:07   ` Bjorn Andersson
2021-02-02 23:49   ` kernel test robot
2021-01-20 18:48 ` [PATCH v3 2/6] drivers: crypto: qce: sha: Hold back a block of data to be transferred as part of final Thara Gopinath
2021-01-25 16:19   ` Bjorn Andersson
2021-01-20 18:48 ` [PATCH v3 3/6] drivers: crypto: qce: skcipher: Fix regressions found during fuzz testing Thara Gopinath
2021-01-25 16:25   ` Bjorn Andersson
2021-01-20 18:48 ` [PATCH v3 4/6] drivers: crypto: qce: common: Set data unit size to message length for AES XTS transformation Thara Gopinath
2021-01-25 16:31   ` Bjorn Andersson
2021-01-20 18:48 ` [PATCH v3 5/6] drivers: crypto: qce: Remover src_tbl from qce_cipher_reqctx Thara Gopinath
2021-01-25 16:32   ` Bjorn Andersson
2021-01-20 18:48 ` [PATCH v3 6/6] drivers: crypto: qce: Remove totallen and offset in qce_start Thara Gopinath
2021-01-25 16:34   ` Bjorn Andersson
