* [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver
@ 2021-04-20  3:35 Thara Gopinath
  2021-04-20  3:35 ` [Patch v3 1/7] crypto: qce: common: Add MAC failed error checking Thara Gopinath
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:35 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

Enable support for AEAD algorithms in the Qualcomm CE driver.  The first
three patches in this series are cleanups and add a few missing pieces
required to support AEAD algorithms.  Patch 4 introduces the supported
AEAD transformations on the Qualcomm CE.  Patches 5 and 6 implement the
h/w infrastructure needed to enable and run the AEAD transformations on
the Qualcomm CE.  Patch 7 adds support for queueing fallback algorithms in
case of unsupported special inputs.

This patch series has been tested with the in-kernel crypto testing module
tcrypt.ko, with fuzz tests enabled as well.

Thara Gopinath (7):
  crypto: qce: common: Add MAC failed error checking
  crypto: qce: common: Make result dump optional
  crypto: qce: Add mode for rfc4309
  crypto: qce: Add support for AEAD algorithms
  crypto: qce: common: Clean up qce_auth_cfg
  crypto: qce: common: Add support for AEAD algorithms
  crypto: qce: aead: Schedule fallback algorithm

 drivers/crypto/Kconfig      |  15 +
 drivers/crypto/qce/Makefile |   1 +
 drivers/crypto/qce/aead.c   | 841 ++++++++++++++++++++++++++++++++++++
 drivers/crypto/qce/aead.h   |  56 +++
 drivers/crypto/qce/common.c | 196 ++++++++-
 drivers/crypto/qce/common.h |   9 +-
 drivers/crypto/qce/core.c   |   4 +
 7 files changed, 1102 insertions(+), 20 deletions(-)
 create mode 100644 drivers/crypto/qce/aead.c
 create mode 100644 drivers/crypto/qce/aead.h

-- 
2.25.1



* [Patch v3 1/7] crypto: qce: common: Add MAC failed error checking
  2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
@ 2021-04-20  3:35 ` Thara Gopinath
  2021-04-20  3:35 ` [Patch v3 2/7] crypto: qce: common: Make result dump optional Thara Gopinath
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:35 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

MAC_FAILED gets set in the status register if authentication fails
for CCM algorithms (during decryption). Add support to catch and flag
this error.

Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v1->v2:
	- Split the error checking for -ENXIO and -EBADMSG into an if-else
	  clause so that the code is more readable, as per Bjorn's review
	  comment.
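
For reference, a sketch of how the new return value can be consumed
(illustrative only; it mirrors what the AEAD completion path added in
patch 4 of this series does):

	u32 status;
	int error = qce_check_status(qce, &status);

	if (error == -ENXIO)	/* engine error or operation not done */
		dev_err(qce->dev, "operation error (%x)\n", status);

	/*
	 * -EBADMSG (MAC mismatch on CCM decrypt) is not a device error;
	 * it is simply propagated to the crypto API caller.
	 */
	qce->async_req_done(qce, error);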

 drivers/crypto/qce/common.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index dceb9579d87a..dd76175d5c62 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -419,6 +419,8 @@ int qce_check_status(struct qce_device *qce, u32 *status)
 	 */
 	if (*status & STATUS_ERRORS || !(*status & BIT(OPERATION_DONE_SHIFT)))
 		ret = -ENXIO;
+	else if (*status & BIT(MAC_FAILED_SHIFT))
+		ret = -EBADMSG;
 
 	return ret;
 }
-- 
2.25.1



* [Patch v3 2/7] crypto: qce: common: Make result dump optional
  2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
  2021-04-20  3:35 ` [Patch v3 1/7] crypto: qce: common: Add MAC failed error checking Thara Gopinath
@ 2021-04-20  3:35 ` Thara Gopinath
  2021-04-20  3:35 ` [Patch v3 3/7] crypto: qce: Add mode for rfc4309 Thara Gopinath
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:35 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

The Qualcomm crypto engine allows the IV registers and the status register
to be concatenated to the output. This option is enabled by setting the
RESULTS_DUMP field in the GOPROC register. This is useful for most
algorithms, either to retrieve the status of the operation or, in the case
of authentication algorithms, to retrieve the MAC. But for CCM algorithms
the MAC is part of the output stream and not retrieved from the IV
registers, thus needing a separate buffer to retrieve it. Make enabling
the RESULTS_DUMP field optional so that algorithms can choose whether or
not to enable the option.
Note that in this patch the enabled algorithms always choose RESULTS_DUMP
to be enabled. But later, with the introduction of the CCM algorithms,
this changes.

Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/common.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
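
Illustration (not part of this patch): existing callers keep the result
dump enabled, while the CCM support added later in this series derives the
flag from the algorithm:

	/* existing ahash/skcipher paths: MAC/status read from result dump */
	qce_crypto_go(qce, true);

	/* CCM (patch 6): the MAC arrives in the output stream instead */
	qce_crypto_go(qce, !IS_CCM(flags));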

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index dd76175d5c62..7b5bc5a6ae81 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -88,9 +88,12 @@ static void qce_setup_config(struct qce_device *qce)
 	qce_write(qce, REG_CONFIG, config);
 }
 
-static inline void qce_crypto_go(struct qce_device *qce)
+static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
 {
-	qce_write(qce, REG_GOPROC, BIT(GO_SHIFT) | BIT(RESULTS_DUMP_SHIFT));
+	if (result_dump)
+		qce_write(qce, REG_GOPROC, BIT(GO_SHIFT) | BIT(RESULTS_DUMP_SHIFT));
+	else
+		qce_write(qce, REG_GOPROC, BIT(GO_SHIFT));
 }
 
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
@@ -219,7 +222,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 	config = qce_config_reg(qce, 1);
 	qce_write(qce, REG_CONFIG, config);
 
-	qce_crypto_go(qce);
+	qce_crypto_go(qce, true);
 
 	return 0;
 }
@@ -380,7 +383,7 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
 	config = qce_config_reg(qce, 1);
 	qce_write(qce, REG_CONFIG, config);
 
-	qce_crypto_go(qce);
+	qce_crypto_go(qce, true);
 
 	return 0;
 }
-- 
2.25.1



* [Patch v3 3/7] crypto: qce: Add mode for rfc4309
  2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
  2021-04-20  3:35 ` [Patch v3 1/7] crypto: qce: common: Add MAC failed error checking Thara Gopinath
  2021-04-20  3:35 ` [Patch v3 2/7] crypto: qce: common: Make result dump optional Thara Gopinath
@ 2021-04-20  3:35 ` Thara Gopinath
  2021-04-20  3:35 ` [Patch v3 4/7] crypto: qce: Add support for AEAD algorithms Thara Gopinath
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:35 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

RFC 4309 is the specification that uses AES CCM algorithms with IPsec
security packets. Add a submode to identify the rfc4309 ccm(aes) algorithm
in the crypto driver.

Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v1->v2:
	- Moved up the QCE_ENCRYPT and QCE_DECRYPT bit positions so that
	  addition of other algorithms in future will not affect these
	  macros.
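
An illustrative flag combination (not part of this patch), showing how the
new submode composes with the existing algorithm and mode bits:

	unsigned long flags = QCE_ALG_AES | QCE_MODE_CCM | QCE_MODE_CCM_RFC4309;

	/* both predicates hold: an rfc4309 request is still a CCM request */
	WARN_ON(!IS_CCM(flags) || !IS_CCM_RFC4309(flags));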

 drivers/crypto/qce/common.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h
index 3bc244bcca2d..b135440bf72b 100644
--- a/drivers/crypto/qce/common.h
+++ b/drivers/crypto/qce/common.h
@@ -51,9 +51,11 @@
 #define QCE_MODE_CCM			BIT(12)
 #define QCE_MODE_MASK			GENMASK(12, 8)
 
+#define QCE_MODE_CCM_RFC4309		BIT(13)
+
 /* cipher encryption/decryption operations */
-#define QCE_ENCRYPT			BIT(13)
-#define QCE_DECRYPT			BIT(14)
+#define QCE_ENCRYPT			BIT(30)
+#define QCE_DECRYPT			BIT(31)
 
 #define IS_DES(flags)			(flags & QCE_ALG_DES)
 #define IS_3DES(flags)			(flags & QCE_ALG_3DES)
@@ -73,6 +75,7 @@
 #define IS_CTR(mode)			(mode & QCE_MODE_CTR)
 #define IS_XTS(mode)			(mode & QCE_MODE_XTS)
 #define IS_CCM(mode)			(mode & QCE_MODE_CCM)
+#define IS_CCM_RFC4309(mode)		((mode) & QCE_MODE_CCM_RFC4309)
 
 #define IS_ENCRYPT(dir)			(dir & QCE_ENCRYPT)
 #define IS_DECRYPT(dir)			(dir & QCE_DECRYPT)
-- 
2.25.1



* [Patch v3 4/7] crypto: qce: Add support for AEAD algorithms
  2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
                   ` (2 preceding siblings ...)
  2021-04-20  3:35 ` [Patch v3 3/7] crypto: qce: Add mode for rfc4309 Thara Gopinath
@ 2021-04-20  3:35 ` Thara Gopinath
  2021-04-20  3:36 ` [Patch v3 5/7] crypto: qce: common: Clean up qce_auth_cfg Thara Gopinath
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:35 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

Introduce support to enable the following algorithms in the Qualcomm
Crypto Engine:

- authenc(hmac(sha1),cbc(des))
- authenc(hmac(sha1),cbc(des3_ede))
- authenc(hmac(sha256),cbc(des))
- authenc(hmac(sha256),cbc(des3_ede))
- authenc(hmac(sha256),cbc(aes))
- ccm(aes)
- rfc4309(ccm(aes))

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v1->v2:
	- Removed spurious totallen from qce_aead_async_req_handle since
	  it was unused as pointed out by Hubert.
	- Updated qce_dma_prep_sgs to use #nents returned by dma_map_sg rather than
	  using precomputed #nents.
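
For context, a sketch of how one of the transforms listed above can be
exercised from kernel code through the generic AEAD API (illustrative
only; qce_aead_demo() is a hypothetical helper, and the key, IV and
scatterlist are assumed to be set up by the caller):

	#include <crypto/aead.h>
	#include <crypto/aes.h>

	/* sg must cover assoclen + cryptlen bytes, plus room for the tag */
	static int qce_aead_demo(struct scatterlist *sg, unsigned int cryptlen,
				 unsigned int assoclen, const u8 *key, u8 *iv)
	{
		DECLARE_CRYPTO_WAIT(wait);
		struct crypto_aead *tfm;
		struct aead_request *req;
		int err;

		tfm = crypto_alloc_aead("ccm(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_aead_setkey(tfm, key, AES_KEYSIZE_128);
		if (!err)
			err = crypto_aead_setauthsize(tfm, 8);
		if (err)
			goto out_free_tfm;

		req = aead_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
			goto out_free_tfm;
		}

		aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
					  crypto_req_done, &wait);
		aead_request_set_crypt(req, sg, sg, cryptlen, iv);
		aead_request_set_ad(req, assoclen);

		/* the driver completes asynchronously; wait for the result */
		err = crypto_wait_req(crypto_aead_encrypt(req), &wait);

		aead_request_free(req);
	out_free_tfm:
		crypto_free_aead(tfm);
		return err;
	}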


 drivers/crypto/Kconfig      |  15 +
 drivers/crypto/qce/Makefile |   1 +
 drivers/crypto/qce/aead.c   | 799 ++++++++++++++++++++++++++++++++++++
 drivers/crypto/qce/aead.h   |  53 +++
 drivers/crypto/qce/common.h |   2 +
 drivers/crypto/qce/core.c   |   4 +
 6 files changed, 874 insertions(+)
 create mode 100644 drivers/crypto/qce/aead.c
 create mode 100644 drivers/crypto/qce/aead.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 9a4c275a1335..1fe5b7eafc02 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -627,6 +627,12 @@ config CRYPTO_DEV_QCE_SHA
 	select CRYPTO_SHA1
 	select CRYPTO_SHA256
 
+config CRYPTO_DEV_QCE_AEAD
+	bool
+	depends on CRYPTO_DEV_QCE
+	select CRYPTO_AUTHENC
+	select CRYPTO_LIB_DES
+
 choice
 	prompt "Algorithms enabled for QCE acceleration"
 	default CRYPTO_DEV_QCE_ENABLE_ALL
@@ -647,6 +653,7 @@ choice
 		bool "All supported algorithms"
 		select CRYPTO_DEV_QCE_SKCIPHER
 		select CRYPTO_DEV_QCE_SHA
+		select CRYPTO_DEV_QCE_AEAD
 		help
 		  Enable all supported algorithms:
 			- AES (CBC, CTR, ECB, XTS)
@@ -672,6 +679,14 @@ choice
 			- SHA1, HMAC-SHA1
 			- SHA256, HMAC-SHA256
 
+	config CRYPTO_DEV_QCE_ENABLE_AEAD
+		bool "AEAD algorithms only"
+		select CRYPTO_DEV_QCE_AEAD
+		help
+		  Enable AEAD algorithms only:
+			- authenc()
+			- ccm(aes)
+			- rfc4309(ccm(aes))
 endchoice
 
 config CRYPTO_DEV_QCE_SW_MAX_LEN
diff --git a/drivers/crypto/qce/Makefile b/drivers/crypto/qce/Makefile
index 14ade8a7d664..2cf8984e1b85 100644
--- a/drivers/crypto/qce/Makefile
+++ b/drivers/crypto/qce/Makefile
@@ -6,3 +6,4 @@ qcrypto-objs := core.o \
 
 qcrypto-$(CONFIG_CRYPTO_DEV_QCE_SHA) += sha.o
 qcrypto-$(CONFIG_CRYPTO_DEV_QCE_SKCIPHER) += skcipher.o
+qcrypto-$(CONFIG_CRYPTO_DEV_QCE_AEAD) += aead.o
diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
new file mode 100644
index 000000000000..ef66ae21eae3
--- /dev/null
+++ b/drivers/crypto/qce/aead.c
@@ -0,0 +1,799 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Copyright (C) 2021, Linaro Limited. All rights reserved.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <crypto/gcm.h>
+#include <crypto/authenc.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/des.h>
+#include <crypto/sha1.h>
+#include <crypto/sha2.h>
+#include <crypto/scatterwalk.h>
+#include "aead.h"
+
+#define CCM_NONCE_ADATA_SHIFT		6
+#define CCM_NONCE_AUTHSIZE_SHIFT	3
+#define MAX_CCM_ADATA_HEADER_LEN        6
+
+static LIST_HEAD(aead_algs);
+
+static void qce_aead_done(void *data)
+{
+	struct crypto_async_request *async_req = data;
+	struct aead_request *req = aead_request_cast(async_req);
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
+	struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+	struct qce_device *qce = tmpl->qce;
+	struct qce_result_dump *result_buf = qce->dma.result_buf;
+	enum dma_data_direction dir_src, dir_dst;
+	bool diff_dst;
+	int error;
+	u32 status;
+	unsigned int totallen;
+	unsigned char tag[SHA256_DIGEST_SIZE] = {0};
+	int ret = 0;
+
+	diff_dst = (req->src != req->dst) ? true : false;
+	dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
+	dir_dst = diff_dst ? DMA_FROM_DEVICE : DMA_BIDIRECTIONAL;
+
+	error = qce_dma_terminate_all(&qce->dma);
+	if (error)
+		dev_dbg(qce->dev, "aead dma termination error (%d)\n",
+			error);
+	if (diff_dst)
+		dma_unmap_sg(qce->dev, rctx->src_sg, rctx->src_nents, dir_src);
+
+	dma_unmap_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
+
+	if (IS_CCM(rctx->flags)) {
+		if (req->assoclen) {
+			sg_free_table(&rctx->src_tbl);
+			if (diff_dst)
+				sg_free_table(&rctx->dst_tbl);
+		} else {
+			if (!(IS_DECRYPT(rctx->flags) && !diff_dst))
+				sg_free_table(&rctx->dst_tbl);
+		}
+	} else {
+		sg_free_table(&rctx->dst_tbl);
+	}
+
+	error = qce_check_status(qce, &status);
+	if (error < 0 && (error != -EBADMSG))
+		dev_err(qce->dev, "aead operation error (%x)\n", status);
+
+	if (IS_ENCRYPT(rctx->flags)) {
+		totallen = req->cryptlen + req->assoclen;
+		if (IS_CCM(rctx->flags))
+			scatterwalk_map_and_copy(rctx->ccmresult_buf, req->dst,
+						 totallen, ctx->authsize, 1);
+		else
+			scatterwalk_map_and_copy(result_buf->auth_iv, req->dst,
+						 totallen, ctx->authsize, 1);
+
+	} else if (!IS_CCM(rctx->flags)) {
+		totallen = req->cryptlen + req->assoclen - ctx->authsize;
+		scatterwalk_map_and_copy(tag, req->src, totallen, ctx->authsize, 0);
+		ret = memcmp(result_buf->auth_iv, tag, ctx->authsize);
+		if (ret) {
+			pr_err("Bad message error\n");
+			error = -EBADMSG;
+		}
+	}
+
+	qce->async_req_done(qce, error);
+}
+
+static struct scatterlist *
+qce_aead_prepare_result_buf(struct sg_table *tbl, struct aead_request *req)
+{
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+	struct qce_device *qce = tmpl->qce;
+
+	sg_init_one(&rctx->result_sg, qce->dma.result_buf, QCE_RESULT_BUF_SZ);
+	return qce_sgtable_add(tbl, &rctx->result_sg, QCE_RESULT_BUF_SZ);
+}
+
+static struct scatterlist *
+qce_aead_prepare_ccm_result_buf(struct sg_table *tbl, struct aead_request *req)
+{
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+
+	sg_init_one(&rctx->result_sg, rctx->ccmresult_buf, QCE_BAM_BURST_SIZE);
+	return qce_sgtable_add(tbl, &rctx->result_sg, QCE_BAM_BURST_SIZE);
+}
+
+static struct scatterlist *
+qce_aead_prepare_dst_buf(struct aead_request *req)
+{
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+	struct qce_device *qce = tmpl->qce;
+	struct scatterlist *sg, *msg_sg, __sg[2];
+	gfp_t gfp;
+	unsigned int assoclen = req->assoclen;
+	unsigned int totallen;
+	int ret;
+
+	totallen = rctx->cryptlen + assoclen;
+	rctx->dst_nents = sg_nents_for_len(req->dst, totallen);
+	if (rctx->dst_nents < 0) {
+		dev_err(qce->dev, "Invalid numbers of dst SG.\n");
+		return ERR_PTR(-EINVAL);
+	}
+	if (IS_CCM(rctx->flags))
+		rctx->dst_nents += 2;
+	else
+		rctx->dst_nents += 1;
+
+	gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+						GFP_KERNEL : GFP_ATOMIC;
+	ret = sg_alloc_table(&rctx->dst_tbl, rctx->dst_nents, gfp);
+	if (ret)
+		return ERR_PTR(ret);
+
+	if (IS_CCM(rctx->flags) && assoclen) {
+		/* Get the dst buffer */
+		msg_sg = scatterwalk_ffwd(__sg, req->dst, assoclen);
+
+		sg = qce_sgtable_add(&rctx->dst_tbl, &rctx->adata_sg,
+				     rctx->assoclen);
+		if (IS_ERR(sg)) {
+			ret = PTR_ERR(sg);
+			goto dst_tbl_free;
+		}
+		/* dst buffer */
+		sg = qce_sgtable_add(&rctx->dst_tbl, msg_sg, rctx->cryptlen);
+		if (IS_ERR(sg)) {
+			ret = PTR_ERR(sg);
+			goto dst_tbl_free;
+		}
+		totallen = rctx->cryptlen + rctx->assoclen;
+	} else {
+		if (totallen) {
+			sg = qce_sgtable_add(&rctx->dst_tbl, req->dst, totallen);
+			if (IS_ERR(sg))
+				goto dst_tbl_free;
+		}
+	}
+	if (IS_CCM(rctx->flags))
+		sg = qce_aead_prepare_ccm_result_buf(&rctx->dst_tbl, req);
+	else
+		sg = qce_aead_prepare_result_buf(&rctx->dst_tbl, req);
+
+	if (IS_ERR(sg))
+		goto dst_tbl_free;
+
+	sg_mark_end(sg);
+	rctx->dst_sg = rctx->dst_tbl.sgl;
+	rctx->dst_nents = sg_nents_for_len(rctx->dst_sg, totallen) + 1;
+
+	return sg;
+
+dst_tbl_free:
+	sg_free_table(&rctx->dst_tbl);
+	return sg;
+}
+
+static int
+qce_aead_ccm_prepare_buf_assoclen(struct aead_request *req)
+{
+	struct scatterlist *sg, *msg_sg, __sg[2];
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	unsigned int assoclen = rctx->assoclen;
+	unsigned int adata_header_len, cryptlen, totallen;
+	gfp_t gfp;
+	bool diff_dst;
+	int ret;
+
+	if (IS_DECRYPT(rctx->flags))
+		cryptlen = rctx->cryptlen + ctx->authsize;
+	else
+		cryptlen = rctx->cryptlen;
+	totallen = cryptlen + req->assoclen;
+
+	/* Get the msg */
+	msg_sg = scatterwalk_ffwd(__sg, req->src, req->assoclen);
+
+	rctx->adata = kzalloc((ALIGN(assoclen, 16) + MAX_CCM_ADATA_HEADER_LEN) *
+			       sizeof(unsigned char), GFP_ATOMIC);
+	if (!rctx->adata)
+		return -ENOMEM;
+
+	/*
+	 * Format associated data (RFC3610 and NIST 800-38C)
+	 * Even though specification allows for AAD to be up to 2^64 - 1 bytes,
+	 * the assoclen field in aead_request is unsigned int and thus limits
+	 * the AAD to be up to 2^32 - 1 bytes. So we handle only two scenarios
+	 * while forming the header for AAD.
+	 */
+	if (assoclen < 0xff00) {
+		adata_header_len = 2;
+		*(__be16 *)rctx->adata = cpu_to_be16(assoclen);
+	} else {
+		adata_header_len = 6;
+		*(__be16 *)rctx->adata = cpu_to_be16(0xfffe);
+		*(__be32 *)(rctx->adata + 2) = cpu_to_be32(assoclen);
+	}
+
+	/* Copy the associated data */
+	if (sg_copy_to_buffer(req->src, sg_nents_for_len(req->src, assoclen),
+			      rctx->adata + adata_header_len,
+			      assoclen) != assoclen)
+		return -EINVAL;
+
+	/* Pad associated data to block size */
+	rctx->assoclen = ALIGN(assoclen + adata_header_len, 16);
+
+	diff_dst = (req->src != req->dst) ? true : false;
+
+	if (diff_dst)
+		rctx->src_nents = sg_nents_for_len(req->src, totallen) + 1;
+	else
+		rctx->src_nents = sg_nents_for_len(req->src, totallen) + 2;
+
+	gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
+	ret = sg_alloc_table(&rctx->src_tbl, rctx->src_nents, gfp);
+	if (ret)
+		return ret;
+
+	/* Associated Data */
+	sg_init_one(&rctx->adata_sg, rctx->adata, rctx->assoclen);
+	sg = qce_sgtable_add(&rctx->src_tbl, &rctx->adata_sg,
+			     rctx->assoclen);
+	if (IS_ERR(sg)) {
+		ret = PTR_ERR(sg);
+		goto err_free;
+	}
+	/* src msg */
+	sg = qce_sgtable_add(&rctx->src_tbl, msg_sg, cryptlen);
+	if (IS_ERR(sg)) {
+		ret = PTR_ERR(sg);
+		goto err_free;
+	}
+	if (!diff_dst) {
+		/*
+		 * For decrypt, when src and dst buffers are same, there is already space
+		 * in the buffer for padded 0's which is output in lieu of
+		 * the MAC that is input. So skip the below.
+		 */
+		if (!IS_DECRYPT(rctx->flags)) {
+			sg = qce_aead_prepare_ccm_result_buf(&rctx->src_tbl, req);
+			if (IS_ERR(sg)) {
+				ret = PTR_ERR(sg);
+				goto err_free;
+			}
+		}
+	}
+	sg_mark_end(sg);
+	rctx->src_sg = rctx->src_tbl.sgl;
+	totallen = cryptlen + rctx->assoclen;
+	rctx->src_nents = sg_nents_for_len(rctx->src_sg, totallen);
+
+	if (diff_dst) {
+		sg = qce_aead_prepare_dst_buf(req);
+		if (IS_ERR(sg))
+			goto err_free;
+	} else {
+		if (IS_ENCRYPT(rctx->flags))
+			rctx->dst_nents = rctx->src_nents + 1;
+		else
+			rctx->dst_nents = rctx->src_nents;
+		rctx->dst_sg = rctx->src_sg;
+	}
+
+	return 0;
+err_free:
+	sg_free_table(&rctx->src_tbl);
+	return ret;
+}
+
+static int qce_aead_prepare_buf(struct aead_request *req)
+{
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+	struct qce_device *qce = tmpl->qce;
+	struct scatterlist *sg;
+	bool diff_dst = (req->src != req->dst) ? true : false;
+	unsigned int totallen;
+
+	totallen = rctx->cryptlen + rctx->assoclen;
+
+	sg = qce_aead_prepare_dst_buf(req);
+	if (IS_ERR(sg))
+		return PTR_ERR(sg);
+	if (diff_dst) {
+		rctx->src_nents = sg_nents_for_len(req->src, totallen);
+		if (rctx->src_nents < 0) {
+			dev_err(qce->dev, "Invalid numbers of src SG.\n");
+			return -EINVAL;
+		}
+		rctx->src_sg = req->src;
+	} else {
+		rctx->src_nents = rctx->dst_nents - 1;
+		rctx->src_sg = rctx->dst_sg;
+	}
+	return 0;
+}
+
+static int qce_aead_ccm_prepare_buf(struct aead_request *req)
+{
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct scatterlist *sg;
+	bool diff_dst = (req->src != req->dst) ? true : false;
+	unsigned int cryptlen;
+
+	if (rctx->assoclen)
+		return qce_aead_ccm_prepare_buf_assoclen(req);
+
+	if (IS_ENCRYPT(rctx->flags))
+		return qce_aead_prepare_buf(req);
+
+	cryptlen = rctx->cryptlen + ctx->authsize;
+	if (diff_dst) {
+		rctx->src_nents = sg_nents_for_len(req->src, cryptlen);
+		rctx->src_sg = req->src;
+		sg = qce_aead_prepare_dst_buf(req);
+		if (IS_ERR(sg))
+			return PTR_ERR(sg);
+	} else {
+		rctx->src_nents = sg_nents_for_len(req->src, cryptlen);
+		rctx->src_sg = req->src;
+		rctx->dst_nents = rctx->src_nents;
+		rctx->dst_sg = rctx->src_sg;
+	}
+
+	return 0;
+}
+
+static int qce_aead_create_ccm_nonce(struct qce_aead_reqctx *rctx, struct qce_aead_ctx *ctx)
+{
+	unsigned int msglen_size, ivsize;
+	u8 msg_len[4];
+	int i;
+
+	if (!rctx || !rctx->iv)
+		return -EINVAL;
+
+	msglen_size = rctx->iv[0] + 1;
+
+	/* Verify that msg len size is valid */
+	if (msglen_size < 2 || msglen_size > 8)
+		return -EINVAL;
+
+	ivsize = rctx->ivsize;
+
+	/*
+	 * Clear the msglen bytes in IV.
+	 * Else the h/w engine and nonce will use any stray value pending there.
+	 */
+	if (!IS_CCM_RFC4309(rctx->flags)) {
+		for (i = 0; i < msglen_size; i++)
+			rctx->iv[ivsize - i - 1] = 0;
+	}
+
+	/*
+	 * The crypto framework encodes cryptlen as unsigned int. Thus, even though
+	 * the spec allows for up to 8 bytes to encode msg_len, only 4 bytes are needed.
+	 */
+	if (msglen_size > 4)
+		msglen_size = 4;
+
+	memcpy(&msg_len[0], &rctx->cryptlen, 4);
+
+	memcpy(&rctx->ccm_nonce[0], rctx->iv, rctx->ivsize);
+	if (rctx->assoclen)
+		rctx->ccm_nonce[0] |= 1 << CCM_NONCE_ADATA_SHIFT;
+	rctx->ccm_nonce[0] |= ((ctx->authsize - 2) / 2) <<
+				CCM_NONCE_AUTHSIZE_SHIFT;
+	for (i = 0; i < msglen_size; i++)
+		rctx->ccm_nonce[QCE_MAX_NONCE - i - 1] = msg_len[i];
+
+	return 0;
+}
+
+static int
+qce_aead_async_req_handle(struct crypto_async_request *async_req)
+{
+	struct aead_request *req = aead_request_cast(async_req);
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
+	struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+	struct qce_device *qce = tmpl->qce;
+	enum dma_data_direction dir_src, dir_dst;
+	bool diff_dst;
+	int dst_nents, src_nents, ret;
+
+	if (IS_CCM_RFC4309(rctx->flags)) {
+		memset(rctx->ccm_rfc4309_iv, 0, QCE_MAX_IV_SIZE);
+		rctx->ccm_rfc4309_iv[0] = 3;
+		memcpy(&rctx->ccm_rfc4309_iv[1], ctx->ccm4309_salt, QCE_CCM4309_SALT_SIZE);
+		memcpy(&rctx->ccm_rfc4309_iv[4], req->iv, 8);
+		rctx->iv = rctx->ccm_rfc4309_iv;
+		rctx->ivsize = AES_BLOCK_SIZE;
+	} else {
+		rctx->iv = req->iv;
+		rctx->ivsize = crypto_aead_ivsize(tfm);
+	}
+	if (IS_CCM_RFC4309(rctx->flags))
+		rctx->assoclen = req->assoclen - 8;
+	else
+		rctx->assoclen = req->assoclen;
+
+	diff_dst = (req->src != req->dst) ? true : false;
+	dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
+	dir_dst = diff_dst ? DMA_FROM_DEVICE : DMA_BIDIRECTIONAL;
+
+	if (IS_CCM(rctx->flags)) {
+		ret = qce_aead_create_ccm_nonce(rctx, ctx);
+		if (ret)
+			return ret;
+	}
+	if (IS_CCM(rctx->flags))
+		ret = qce_aead_ccm_prepare_buf(req);
+	else
+		ret = qce_aead_prepare_buf(req);
+
+	if (ret)
+		return ret;
+	dst_nents = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
+	if (dst_nents < 0)
+		goto error_free;
+
+	if (diff_dst) {
+		src_nents = dma_map_sg(qce->dev, rctx->src_sg, rctx->src_nents, dir_src);
+		if (src_nents < 0)
+			goto error_unmap_dst;
+	} else {
+		if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
+			src_nents = dst_nents;
+		else
+			src_nents = dst_nents - 1;
+	}
+
+	ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, src_nents, rctx->dst_sg, dst_nents,
+			       qce_aead_done, async_req);
+	if (ret)
+		goto error_unmap_src;
+
+	qce_dma_issue_pending(&qce->dma);
+
+	ret = qce_start(async_req, tmpl->crypto_alg_type);
+	if (ret)
+		goto error_terminate;
+
+	return 0;
+
+error_terminate:
+	qce_dma_terminate_all(&qce->dma);
+error_unmap_src:
+	if (diff_dst)
+		dma_unmap_sg(qce->dev, req->src, rctx->src_nents, dir_src);
+error_unmap_dst:
+	dma_unmap_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
+error_free:
+	if (IS_CCM(rctx->flags) && rctx->assoclen) {
+		sg_free_table(&rctx->src_tbl);
+		if (diff_dst)
+			sg_free_table(&rctx->dst_tbl);
+	} else {
+		sg_free_table(&rctx->dst_tbl);
+	}
+	return ret;
+}
+
+static int qce_aead_crypt(struct aead_request *req, int encrypt)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct qce_alg_template *tmpl = to_aead_tmpl(tfm);
+	unsigned int blocksize = crypto_aead_blocksize(tfm);
+
+	rctx->flags  = tmpl->alg_flags;
+	rctx->flags |= encrypt ? QCE_ENCRYPT : QCE_DECRYPT;
+
+	if (encrypt)
+		rctx->cryptlen = req->cryptlen;
+	else
+		rctx->cryptlen = req->cryptlen - ctx->authsize;
+
+	/* CE does not handle 0 length messages */
+	if (!rctx->cryptlen) {
+		if (!(IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags)))
+			return -EINVAL;
+	}
+
+	/*
+	 * CBC algorithms require message lengths to be
+	 * multiples of block size.
+	 */
+	if (IS_CBC(rctx->flags) && !IS_ALIGNED(rctx->cryptlen, blocksize))
+		return -EINVAL;
+
+	/* RFC4309 supported AAD size 16 bytes/20 bytes */
+	if (IS_CCM_RFC4309(rctx->flags))
+		if (crypto_ipsec_check_assoclen(req->assoclen))
+			return -EINVAL;
+
+	return tmpl->qce->async_req_enqueue(tmpl->qce, &req->base);
+}
+
+static int qce_aead_encrypt(struct aead_request *req)
+{
+	return qce_aead_crypt(req, 1);
+}
+
+static int qce_aead_decrypt(struct aead_request *req)
+{
+	return qce_aead_crypt(req, 0);
+}
+
+static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
+			       unsigned int keylen)
+{
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	unsigned long flags = to_aead_tmpl(tfm)->alg_flags;
+
+	if (IS_CCM_RFC4309(flags)) {
+		if (keylen < QCE_CCM4309_SALT_SIZE)
+			return -EINVAL;
+		keylen -= QCE_CCM4309_SALT_SIZE;
+		memcpy(ctx->ccm4309_salt, key + keylen, QCE_CCM4309_SALT_SIZE);
+	}
+
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+
+	ctx->enc_keylen = keylen;
+	ctx->auth_keylen = keylen;
+
+	memcpy(ctx->enc_key, key, keylen);
+	memcpy(ctx->auth_key, key, keylen);
+
+	return 0;
+}
+
+static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct crypto_authenc_keys authenc_keys;
+	unsigned long flags = to_aead_tmpl(tfm)->alg_flags;
+	u32 _key[6];
+	int err;
+
+	err = crypto_authenc_extractkeys(&authenc_keys, key, keylen);
+	if (err)
+		return err;
+
+	if (authenc_keys.enckeylen > QCE_MAX_KEY_SIZE ||
+	    authenc_keys.authkeylen > QCE_MAX_KEY_SIZE)
+		return -EINVAL;
+
+	if (IS_DES(flags)) {
+		err = verify_aead_des_key(tfm, authenc_keys.enckey, authenc_keys.enckeylen);
+		if (err)
+			return err;
+	} else if (IS_3DES(flags)) {
+		err = verify_aead_des3_key(tfm, authenc_keys.enckey, authenc_keys.enckeylen);
+		if (err)
+			return err;
+		/*
+		 * The crypto engine does not support any two keys
+		 * being the same for triple des algorithms. The
+		 * verify_skcipher_des3_key does not check for all the
+		 * below conditions. Return -EINVAL in case any two keys
+		 * are the same. Revisit to see if a fallback cipher
+		 * is needed to handle this condition.
+		 */
+		memcpy(_key, authenc_keys.enckey, DES3_EDE_KEY_SIZE);
+		if (!((_key[0] ^ _key[2]) | (_key[1] ^ _key[3])) ||
+		    !((_key[2] ^ _key[4]) | (_key[3] ^ _key[5])) ||
+		    !((_key[0] ^ _key[4]) | (_key[1] ^ _key[5])))
+			return -EINVAL;
+	} else if (IS_AES(flags)) {
+		/* No random key sizes */
+		if (authenc_keys.enckeylen != AES_KEYSIZE_128 &&
+		    authenc_keys.enckeylen != AES_KEYSIZE_256)
+			return -EINVAL;
+	}
+
+	ctx->enc_keylen = authenc_keys.enckeylen;
+	ctx->auth_keylen = authenc_keys.authkeylen;
+
+	memcpy(ctx->enc_key, authenc_keys.enckey, authenc_keys.enckeylen);
+
+	memset(ctx->auth_key, 0, sizeof(ctx->auth_key));
+	memcpy(ctx->auth_key, authenc_keys.authkey, authenc_keys.authkeylen);
+
+	return 0;
+}
+
+static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+{
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	unsigned long flags = to_aead_tmpl(tfm)->alg_flags;
+
+	if (IS_CCM(flags)) {
+		if (authsize < 4 || authsize > 16 || authsize % 2)
+			return -EINVAL;
+		if (IS_CCM_RFC4309(flags) && (authsize < 8 || authsize % 4))
+			return -EINVAL;
+	}
+	ctx->authsize = authsize;
+	return 0;
+}
+
+static int qce_aead_init(struct crypto_aead *tfm)
+{
+	crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx));
+	return 0;
+}
+
+struct qce_aead_def {
+	unsigned long flags;
+	const char *name;
+	const char *drv_name;
+	unsigned int blocksize;
+	unsigned int chunksize;
+	unsigned int ivsize;
+	unsigned int maxauthsize;
+};
+
+static const struct qce_aead_def aead_def[] = {
+	{
+		.flags          = QCE_ALG_DES | QCE_MODE_CBC | QCE_HASH_SHA1_HMAC,
+		.name           = "authenc(hmac(sha1),cbc(des))",
+		.drv_name       = "authenc-hmac-sha1-cbc-des-qce",
+		.blocksize      = DES_BLOCK_SIZE,
+		.ivsize         = DES_BLOCK_SIZE,
+		.maxauthsize	= SHA1_DIGEST_SIZE,
+	},
+	{
+		.flags          = QCE_ALG_3DES | QCE_MODE_CBC | QCE_HASH_SHA1_HMAC,
+		.name           = "authenc(hmac(sha1),cbc(des3_ede))",
+		.drv_name       = "authenc-hmac-sha1-cbc-3des-qce",
+		.blocksize      = DES3_EDE_BLOCK_SIZE,
+		.ivsize         = DES3_EDE_BLOCK_SIZE,
+		.maxauthsize	= SHA1_DIGEST_SIZE,
+	},
+	{
+		.flags          = QCE_ALG_DES | QCE_MODE_CBC | QCE_HASH_SHA256_HMAC,
+		.name           = "authenc(hmac(sha256),cbc(des))",
+		.drv_name       = "authenc-hmac-sha256-cbc-des-qce",
+		.blocksize      = DES_BLOCK_SIZE,
+		.ivsize         = DES_BLOCK_SIZE,
+		.maxauthsize	= SHA256_DIGEST_SIZE,
+	},
+	{
+		.flags          = QCE_ALG_3DES | QCE_MODE_CBC | QCE_HASH_SHA256_HMAC,
+		.name           = "authenc(hmac(sha256),cbc(des3_ede))",
+		.drv_name       = "authenc-hmac-sha256-cbc-3des-qce",
+		.blocksize      = DES3_EDE_BLOCK_SIZE,
+		.ivsize         = DES3_EDE_BLOCK_SIZE,
+		.maxauthsize	= SHA256_DIGEST_SIZE,
+	},
+	{
+		.flags          =  QCE_ALG_AES | QCE_MODE_CBC | QCE_HASH_SHA256_HMAC,
+		.name           = "authenc(hmac(sha256),cbc(aes))",
+		.drv_name       = "authenc-hmac-sha256-cbc-aes-qce",
+		.blocksize      = AES_BLOCK_SIZE,
+		.ivsize         = AES_BLOCK_SIZE,
+		.maxauthsize	= SHA256_DIGEST_SIZE,
+	},
+	{
+		.flags          =  QCE_ALG_AES | QCE_MODE_CCM,
+		.name           = "ccm(aes)",
+		.drv_name       = "ccm-aes-qce",
+		.blocksize	= 1,
+		.ivsize         = AES_BLOCK_SIZE,
+		.maxauthsize	= AES_BLOCK_SIZE,
+	},
+	{
+		.flags          =  QCE_ALG_AES | QCE_MODE_CCM | QCE_MODE_CCM_RFC4309,
+		.name           = "rfc4309(ccm(aes))",
+		.drv_name       = "rfc4309-ccm-aes-qce",
+		.blocksize	= 1,
+		.ivsize         = 8,
+		.maxauthsize	= AES_BLOCK_SIZE,
+	},
+};
+
+static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_device *qce)
+{
+	struct qce_alg_template *tmpl;
+	struct aead_alg *alg;
+	int ret;
+
+	tmpl = kzalloc(sizeof(*tmpl), GFP_KERNEL);
+	if (!tmpl)
+		return -ENOMEM;
+
+	alg = &tmpl->alg.aead;
+
+	snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name);
+	snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+		 def->drv_name);
+
+	alg->base.cra_blocksize		= def->blocksize;
+	alg->chunksize			= def->chunksize;
+	alg->ivsize			= def->ivsize;
+	alg->maxauthsize		= def->maxauthsize;
+	if (IS_CCM(def->flags))
+		alg->setkey		= qce_aead_ccm_setkey;
+	else
+		alg->setkey		= qce_aead_setkey;
+	alg->setauthsize		= qce_aead_setauthsize;
+	alg->encrypt			= qce_aead_encrypt;
+	alg->decrypt			= qce_aead_decrypt;
+	alg->init			= qce_aead_init;
+
+	alg->base.cra_priority		= 300;
+	alg->base.cra_flags		= CRYPTO_ALG_ASYNC |
+					  CRYPTO_ALG_ALLOCATES_MEMORY |
+					  CRYPTO_ALG_KERN_DRIVER_ONLY;
+	alg->base.cra_ctxsize		= sizeof(struct qce_aead_ctx);
+	alg->base.cra_alignmask		= 0;
+	alg->base.cra_module		= THIS_MODULE;
+
+	INIT_LIST_HEAD(&tmpl->entry);
+	tmpl->crypto_alg_type = CRYPTO_ALG_TYPE_AEAD;
+	tmpl->alg_flags = def->flags;
+	tmpl->qce = qce;
+
+	ret = crypto_register_aead(alg);
+	if (ret) {
+		kfree(tmpl);
+		dev_err(qce->dev, "%s registration failed\n", alg->base.cra_name);
+		return ret;
+	}
+
+	list_add_tail(&tmpl->entry, &aead_algs);
+	dev_dbg(qce->dev, "%s is registered\n", alg->base.cra_name);
+	return 0;
+}
+
+static void qce_aead_unregister(struct qce_device *qce)
+{
+	struct qce_alg_template *tmpl, *n;
+
+	list_for_each_entry_safe(tmpl, n, &aead_algs, entry) {
+		crypto_unregister_aead(&tmpl->alg.aead);
+		list_del(&tmpl->entry);
+		kfree(tmpl);
+	}
+}
+
+static int qce_aead_register(struct qce_device *qce)
+{
+	int ret, i;
+
+	for (i = 0; i < ARRAY_SIZE(aead_def); i++) {
+		ret = qce_aead_register_one(&aead_def[i], qce);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+err:
+	qce_aead_unregister(qce);
+	return ret;
+}
+
+const struct qce_algo_ops aead_ops = {
+	.type = CRYPTO_ALG_TYPE_AEAD,
+	.register_algs = qce_aead_register,
+	.unregister_algs = qce_aead_unregister,
+	.async_req_handle = qce_aead_async_req_handle,
+};
diff --git a/drivers/crypto/qce/aead.h b/drivers/crypto/qce/aead.h
new file mode 100644
index 000000000000..3d1f2039930b
--- /dev/null
+++ b/drivers/crypto/qce/aead.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021, Linaro Limited. All rights reserved.
+ */
+
+#ifndef _AEAD_H_
+#define _AEAD_H_
+
+#include "common.h"
+#include "core.h"
+
+#define QCE_MAX_KEY_SIZE		64
+#define QCE_CCM4309_SALT_SIZE		3
+
+struct qce_aead_ctx {
+	u8 enc_key[QCE_MAX_KEY_SIZE];
+	u8 auth_key[QCE_MAX_KEY_SIZE];
+	u8 ccm4309_salt[QCE_CCM4309_SALT_SIZE];
+	unsigned int enc_keylen;
+	unsigned int auth_keylen;
+	unsigned int authsize;
+};
+
+struct qce_aead_reqctx {
+	unsigned long flags;
+	u8 *iv;
+	unsigned int ivsize;
+	int src_nents;
+	int dst_nents;
+	struct scatterlist result_sg;
+	struct scatterlist adata_sg;
+	struct sg_table dst_tbl;
+	struct sg_table src_tbl;
+	struct scatterlist *dst_sg;
+	struct scatterlist *src_sg;
+	unsigned int cryptlen;
+	unsigned int assoclen;
+	unsigned char *adata;
+	u8 ccm_nonce[QCE_MAX_NONCE];
+	u8 ccmresult_buf[QCE_BAM_BURST_SIZE];
+	u8 ccm_rfc4309_iv[QCE_MAX_IV_SIZE];
+};
+
+static inline struct qce_alg_template *to_aead_tmpl(struct crypto_aead *tfm)
+{
+	struct aead_alg *alg = crypto_aead_alg(tfm);
+
+	return container_of(alg, struct qce_alg_template, alg.aead);
+}
+
+extern const struct qce_algo_ops aead_ops;
+
+#endif /* _AEAD_H_ */
diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h
index b135440bf72b..02e63ad9f245 100644
--- a/drivers/crypto/qce/common.h
+++ b/drivers/crypto/qce/common.h
@@ -11,6 +11,7 @@
 #include <crypto/aes.h>
 #include <crypto/hash.h>
 #include <crypto/internal/skcipher.h>
+#include <crypto/internal/aead.h>
 
 /* xts du size */
 #define QCE_SECTOR_SIZE			512
@@ -88,6 +89,7 @@ struct qce_alg_template {
 	union {
 		struct skcipher_alg skcipher;
 		struct ahash_alg ahash;
+		struct aead_alg aead;
 	} alg;
 	struct qce_device *qce;
 	const u8 *hash_zero;
diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c
index 80b75085c265..d3780be44a76 100644
--- a/drivers/crypto/qce/core.c
+++ b/drivers/crypto/qce/core.c
@@ -17,6 +17,7 @@
 #include "core.h"
 #include "cipher.h"
 #include "sha.h"
+#include "aead.h"
 
 #define QCE_MAJOR_VERSION5	0x05
 #define QCE_QUEUE_LENGTH	1
@@ -28,6 +29,9 @@ static const struct qce_algo_ops *qce_ops[] = {
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
 	&ahash_ops,
 #endif
+#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD
+	&aead_ops,
+#endif
 };
 
 static void qce_unregister_algs(struct qce_device *qce)
-- 
2.25.1



* [Patch v3 5/7] crypto: qce: common: Clean up qce_auth_cfg
  2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
                   ` (3 preceding siblings ...)
  2021-04-20  3:35 ` [Patch v3 4/7] crypto: qce: Add support for AEAD algorithms Thara Gopinath
@ 2021-04-20  3:36 ` Thara Gopinath
  2021-04-20  3:36 ` [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms Thara Gopinath
  2021-04-20  3:36 ` [Patch v3 7/7] crypto: qce: aead: Schedule fallback algorithm Thara Gopinath
  6 siblings, 0 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:36 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

Remove various redundant checks in qce_auth_cfg. Also allow qce_auth_cfg
to take auth_size as a parameter, which is a required setting for ccm(aes)
algorithms.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/common.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)
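
Illustration (not part of this patch): for CCM the requested MAC length
now feeds straight into the AUTH_SIZE field, while the hash paths keep
passing the digest size:

	/* ccm(aes) with an 8-byte MAC: AUTH_SIZE is programmed as authsize - 1 */
	auth_cfg = qce_auth_cfg(QCE_ALG_AES | QCE_MODE_CCM, enc_keylen, 8);

	/* ahash path (see hunk below): auth_size is the digest size */
	auth_cfg = qce_auth_cfg(rctx->flags, rctx->authklen, digestsize);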

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 7b5bc5a6ae81..7b3d6caec1b2 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -97,11 +97,11 @@ static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
 }
 
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
-static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
+static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)
 {
 	u32 cfg = 0;
 
-	if (IS_AES(flags) && (IS_CCM(flags) || IS_CMAC(flags)))
+	if (IS_CCM(flags) || IS_CMAC(flags))
 		cfg |= AUTH_ALG_AES << AUTH_ALG_SHIFT;
 	else
 		cfg |= AUTH_ALG_SHA << AUTH_ALG_SHIFT;
@@ -119,15 +119,16 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
 		cfg |= AUTH_SIZE_SHA256 << AUTH_SIZE_SHIFT;
 	else if (IS_CMAC(flags))
 		cfg |= AUTH_SIZE_ENUM_16_BYTES << AUTH_SIZE_SHIFT;
+	else if (IS_CCM(flags))
+		cfg |= (auth_size - 1) << AUTH_SIZE_SHIFT;
 
 	if (IS_SHA1(flags) || IS_SHA256(flags))
 		cfg |= AUTH_MODE_HASH << AUTH_MODE_SHIFT;
-	else if (IS_SHA1_HMAC(flags) || IS_SHA256_HMAC(flags) ||
-		 IS_CBC(flags) || IS_CTR(flags))
+	else if (IS_SHA1_HMAC(flags) || IS_SHA256_HMAC(flags))
 		cfg |= AUTH_MODE_HMAC << AUTH_MODE_SHIFT;
-	else if (IS_AES(flags) && IS_CCM(flags))
+	else if (IS_CCM(flags))
 		cfg |= AUTH_MODE_CCM << AUTH_MODE_SHIFT;
-	else if (IS_AES(flags) && IS_CMAC(flags))
+	else if (IS_CMAC(flags))
 		cfg |= AUTH_MODE_CMAC << AUTH_MODE_SHIFT;
 
 	if (IS_SHA(flags) || IS_SHA_HMAC(flags))
@@ -136,10 +137,6 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
 	if (IS_CCM(flags))
 		cfg |= QCE_MAX_NONCE_WORDS << AUTH_NONCE_NUM_WORDS_SHIFT;
 
-	if (IS_CBC(flags) || IS_CTR(flags) || IS_CCM(flags) ||
-	    IS_CMAC(flags))
-		cfg |= BIT(AUTH_LAST_SHIFT) | BIT(AUTH_FIRST_SHIFT);
-
 	return cfg;
 }
 
@@ -171,7 +168,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 		qce_clear_array(qce, REG_AUTH_KEY0, 16);
 		qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
 
-		auth_cfg = qce_auth_cfg(rctx->flags, rctx->authklen);
+		auth_cfg = qce_auth_cfg(rctx->flags, rctx->authklen, digestsize);
 	}
 
 	if (IS_SHA_HMAC(rctx->flags) || IS_CMAC(rctx->flags)) {
@@ -199,7 +196,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 		qce_write_array(qce, REG_AUTH_BYTECNT0,
 				(u32 *)rctx->byte_count, 2);
 
-	auth_cfg = qce_auth_cfg(rctx->flags, 0);
+	auth_cfg = qce_auth_cfg(rctx->flags, 0, digestsize);
 
 	if (rctx->last_blk)
 		auth_cfg |= BIT(AUTH_LAST_SHIFT);
-- 
2.25.1



* [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms
  2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
                   ` (4 preceding siblings ...)
  2021-04-20  3:36 ` [Patch v3 5/7] crypto: qce: common: Clean up qce_auth_cfg Thara Gopinath
@ 2021-04-20  3:36 ` Thara Gopinath
  2021-04-21 20:15   ` kernel test robot
  2021-04-29  5:25   ` [kbuild] " Dan Carpenter
  2021-04-20  3:36 ` [Patch v3 7/7] crypto: qce: aead: Schedule fallback algorithm Thara Gopinath
  6 siblings, 2 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:36 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

Add register programming sequence for enabling AEAD
algorithms on the Qualcomm crypto engine.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v2->v3:
	- Made qce_be32_to_cpu_array truly be32 to cpu endian by using be32_to_cpup
	  instead of cpu_to_be32p. Also remove the (u32 *) typecasting of arrays obtained
	  as output from qce_be32_to_cpu_array as per Bjorn's review comments.
	- Wrapped newly introduced std_iv_sha1, std_iv_sha256 and qce_be32_to_cpu_array
	  in CONFIG_CRYPTO_DEV_QCE_AEAD to prevent W=1 warnings as reported by kernel
	  test robot <lkp@intel.com>.

v1->v2:
	- Minor fixes like removing unneeded initialization of variables
	  and using bool values in lieu of 0 and 1, as pointed out by Bjorn.
	- Introduced qce_be32_to_cpu_array, which converts a u8 string in big
	  endian order to an array of u32 and returns the total number of words,
	  as per Bjorn's review comments. Presently this function is used only by
	  qce_setup_regs_aead to format keys, IV and nonce. The cipher and hash
	  algorithms can be made to use this function in a separate cleanup patch.
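
A worked example of the new helper's behaviour (illustrative, not part of
the patch):

	/* 8 key bytes in big endian order */
	const u8 key[8] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77 };
	u32 words[2];
	unsigned int n;

	n = qce_be32_to_cpu_array(words, key, sizeof(key));
	/* n == 2; words[0] == 0x00112233, words[1] == 0x44556677 (cpu order) */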

 drivers/crypto/qce/common.c | 162 +++++++++++++++++++++++++++++++++++-
 1 file changed, 160 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 7b3d6caec1b2..6d6b3792323b 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -15,6 +15,7 @@
 #include "core.h"
 #include "regs-v5.h"
 #include "sha.h"
+#include "aead.h"
 
 static inline u32 qce_read(struct qce_device *qce, u32 offset)
 {
@@ -96,7 +97,7 @@ static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
 		qce_write(qce, REG_GOPROC, BIT(GO_SHIFT));
 }
 
-#ifdef CONFIG_CRYPTO_DEV_QCE_SHA
+#if defined(CONFIG_CRYPTO_DEV_QCE_SHA) || defined(CONFIG_CRYPTO_DEV_QCE_AEAD)
 static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)
 {
 	u32 cfg = 0;
@@ -139,7 +140,9 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)
 
 	return cfg;
 }
+#endif
 
+#ifdef CONFIG_CRYPTO_DEV_QCE_SHA
 static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 {
 	struct ahash_request *req = ahash_request_cast(async_req);
@@ -225,7 +228,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 }
 #endif
 
-#ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
+#if defined(CONFIG_CRYPTO_DEV_QCE_SKCIPHER) || defined(CONFIG_CRYPTO_DEV_QCE_AEAD)
 static u32 qce_encr_cfg(unsigned long flags, u32 aes_key_size)
 {
 	u32 cfg = 0;
@@ -271,7 +274,9 @@ static u32 qce_encr_cfg(unsigned long flags, u32 aes_key_size)
 
 	return cfg;
 }
+#endif
 
+#ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
 static void qce_xts_swapiv(__be32 *dst, const u8 *src, unsigned int ivsize)
 {
 	u8 swap[QCE_AES_IV_LENGTH];
@@ -386,6 +391,155 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
 }
 #endif
 
+#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD
+static const u32 std_iv_sha1[SHA256_DIGEST_SIZE / sizeof(u32)] = {
+	SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4, 0, 0, 0
+};
+
+static const u32 std_iv_sha256[SHA256_DIGEST_SIZE / sizeof(u32)] = {
+	SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
+	SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7
+};
+
+static unsigned int qce_be32_to_cpu_array(u32 *dst, const u8 *src, unsigned int len)
+{
+	u32 *d = dst;
+	const u8 *s = src;
+	unsigned int n;
+
+	n = len / sizeof(u32);
+	for (; n > 0; n--) {
+		*d = be32_to_cpup((const __be32 *)s);
+		s += sizeof(u32);
+		d++;
+	}
+	return DIV_ROUND_UP(len, sizeof(u32));
+}
+
+static int qce_setup_regs_aead(struct crypto_async_request *async_req)
+{
+	struct aead_request *req = aead_request_cast(async_req);
+	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+	struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
+	struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+	struct qce_device *qce = tmpl->qce;
+	u32 enckey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(u32)] = {0};
+	u32 enciv[QCE_MAX_IV_SIZE / sizeof(u32)] = {0};
+	u32 authkey[QCE_SHA_HMAC_KEY_SIZE / sizeof(u32)] = {0};
+	u32 authiv[SHA256_DIGEST_SIZE / sizeof(u32)] = {0};
+	u32 authnonce[QCE_MAX_NONCE / sizeof(u32)] = {0};
+	unsigned int enc_keylen = ctx->enc_keylen;
+	unsigned int auth_keylen = ctx->auth_keylen;
+	unsigned int enc_ivsize = rctx->ivsize;
+	unsigned int auth_ivsize;
+	unsigned int enckey_words, enciv_words;
+	unsigned int authkey_words, authiv_words, authnonce_words;
+	unsigned long flags = rctx->flags;
+	u32 encr_cfg, auth_cfg, config, totallen;
+	u32 iv_last_word;
+
+	qce_setup_config(qce);
+
+	/* Write encryption key */
+	enckey_words = qce_be32_to_cpu_array(enckey, ctx->enc_key, enc_keylen);
+	qce_write_array(qce, REG_ENCR_KEY0, enckey, enckey_words);
+
+	/* Write encryption iv */
+	enciv_words = qce_be32_to_cpu_array(enciv, rctx->iv, enc_ivsize);
+	qce_write_array(qce, REG_CNTR0_IV0, enciv, enciv_words);
+
+	if (IS_CCM(rctx->flags)) {
+		iv_last_word = enciv[enciv_words - 1];
+		qce_write(qce, REG_CNTR3_IV3, iv_last_word + 1);
+		qce_write_array(qce, REG_ENCR_CCM_INT_CNTR0, (u32 *)enciv, enciv_words);
+		qce_write(qce, REG_CNTR_MASK, ~0);
+		qce_write(qce, REG_CNTR_MASK0, ~0);
+		qce_write(qce, REG_CNTR_MASK1, ~0);
+		qce_write(qce, REG_CNTR_MASK2, ~0);
+	}
+
+	/* Clear authentication IV and KEY registers of previous values */
+	qce_clear_array(qce, REG_AUTH_IV0, 16);
+	qce_clear_array(qce, REG_AUTH_KEY0, 16);
+
+	/* Clear byte count */
+	qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
+
+	/* Write authentication key */
+	authkey_words = qce_be32_to_cpu_array(authkey, ctx->auth_key, auth_keylen);
+	qce_write_array(qce, REG_AUTH_KEY0, (u32 *)authkey, authkey_words);
+
+	/* Write initial authentication IV only for HMAC algorithms */
+	if (IS_SHA_HMAC(rctx->flags)) {
+		/* Write default authentication iv */
+		if (IS_SHA1_HMAC(rctx->flags)) {
+			auth_ivsize = SHA1_DIGEST_SIZE;
+			memcpy(authiv, std_iv_sha1, auth_ivsize);
+		} else if (IS_SHA256_HMAC(rctx->flags)) {
+			auth_ivsize = SHA256_DIGEST_SIZE;
+			memcpy(authiv, std_iv_sha256, auth_ivsize);
+		}
+		authiv_words = auth_ivsize / sizeof(u32);
+		qce_write_array(qce, REG_AUTH_IV0, (u32 *)authiv, authiv_words);
+	} else if (IS_CCM(rctx->flags)) {
+		/* Write nonce for CCM algorithms */
+		authnonce_words = qce_be32_to_cpu_array(authnonce, rctx->ccm_nonce, QCE_MAX_NONCE);
+		qce_write_array(qce, REG_AUTH_INFO_NONCE0, authnonce, authnonce_words);
+	}
+
+	/* Set up ENCR_SEG_CFG */
+	encr_cfg = qce_encr_cfg(flags, enc_keylen);
+	if (IS_ENCRYPT(flags))
+		encr_cfg |= BIT(ENCODE_SHIFT);
+	qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
+
+	/* Set up AUTH_SEG_CFG */
+	auth_cfg = qce_auth_cfg(rctx->flags, auth_keylen, ctx->authsize);
+	auth_cfg |= BIT(AUTH_LAST_SHIFT);
+	auth_cfg |= BIT(AUTH_FIRST_SHIFT);
+	if (IS_ENCRYPT(flags)) {
+		if (IS_CCM(rctx->flags))
+			auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
+		else
+			auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
+	} else {
+		if (IS_CCM(rctx->flags))
+			auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
+		else
+			auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
+	}
+	qce_write(qce, REG_AUTH_SEG_CFG, auth_cfg);
+
+	totallen = rctx->cryptlen + rctx->assoclen;
+
+	/* Set the encryption size and start offset */
+	if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
+		qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen + ctx->authsize);
+	else
+		qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
+	qce_write(qce, REG_ENCR_SEG_START, rctx->assoclen & 0xffff);
+
+	/* Set the authentication size and start offset */
+	qce_write(qce, REG_AUTH_SEG_SIZE, totallen);
+	qce_write(qce, REG_AUTH_SEG_START, 0);
+
+	/* Write total length */
+	if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
+		qce_write(qce, REG_SEG_SIZE, totallen + ctx->authsize);
+	else
+		qce_write(qce, REG_SEG_SIZE, totallen);
+
+	/* get little endianness */
+	config = qce_config_reg(qce, 1);
+	qce_write(qce, REG_CONFIG, config);
+
+	/* Start the process */
+	qce_crypto_go(qce, !IS_CCM(flags));
+
+	return 0;
+}
+#endif
+
 int qce_start(struct crypto_async_request *async_req, u32 type)
 {
 	switch (type) {
@@ -396,6 +550,10 @@ int qce_start(struct crypto_async_request *async_req, u32 type)
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
 	case CRYPTO_ALG_TYPE_AHASH:
 		return qce_setup_regs_ahash(async_req);
+#endif
+#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD
+	case CRYPTO_ALG_TYPE_AEAD:
+		return qce_setup_regs_aead(async_req);
 #endif
 	default:
 		return -EINVAL;
-- 
2.25.1



* [Patch v3 7/7] crypto: qce: aead: Schedule fallback algorithm
  2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
                   ` (5 preceding siblings ...)
  2021-04-20  3:36 ` [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms Thara Gopinath
@ 2021-04-20  3:36 ` Thara Gopinath
  6 siblings, 0 replies; 10+ messages in thread
From: Thara Gopinath @ 2021-04-20  3:36 UTC (permalink / raw)
  To: herbert, davem, bjorn.andersson
  Cc: ebiggers, ardb, sivaprak, linux-crypto, linux-kernel, linux-arm-msm

The Qualcomm crypto engine does not handle the following scenarios and
will issue an abort. In such cases, pass the transformation on to a
fallback algorithm.

- DES3 algorithms where any two of the three keys are the same.
- AES-192 algorithms.
- Zero-length messages.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v1->v2:
	- Updated crypto_aead_set_reqsize to include the size of fallback
	  request as well.
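
Illustration (not part of this patch) of inputs that now take the fallback
path instead of failing with -EINVAL:

	/* AES-192 is not handled by the engine; setkey flags the ctx */
	err = crypto_aead_setkey(tfm, key, AES_KEYSIZE_192);
	/* ctx->need_fallback == true; the key is also set on ctx->fallback */

	/* a zero-length message (except CCM decrypt) is also redirected */
	aead_request_set_crypt(req, src, dst, 0, iv);
	err = crypto_aead_encrypt(req);	/* serviced by ctx->fallback */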

 drivers/crypto/qce/aead.c | 64 ++++++++++++++++++++++++++++++++-------
 drivers/crypto/qce/aead.h |  3 ++
 2 files changed, 56 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
index ef66ae21eae3..6d06a19b48e4 100644
--- a/drivers/crypto/qce/aead.c
+++ b/drivers/crypto/qce/aead.c
@@ -512,7 +512,23 @@ static int qce_aead_crypt(struct aead_request *req, int encrypt)
 	/* CE does not handle 0 length messages */
 	if (!rctx->cryptlen) {
 		if (!(IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags)))
-			return -EINVAL;
+			ctx->need_fallback = true;
+	}
+
+	/* If fallback is needed, schedule and exit */
+	if (ctx->need_fallback) {
+		/* Reset need_fallback in case the same ctx is used for another transaction */
+		ctx->need_fallback = false;
+
+		aead_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		aead_request_set_callback(&rctx->fallback_req, req->base.flags,
+					  req->base.complete, req->base.data);
+		aead_request_set_crypt(&rctx->fallback_req, req->src,
+				       req->dst, req->cryptlen, req->iv);
+		aead_request_set_ad(&rctx->fallback_req, req->assoclen);
+
+		return encrypt ? crypto_aead_encrypt(&rctx->fallback_req) :
+				 crypto_aead_decrypt(&rctx->fallback_req);
 	}
 
 	/*
@@ -553,7 +569,7 @@ static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
 		memcpy(ctx->ccm4309_salt, key + keylen, QCE_CCM4309_SALT_SIZE);
 	}
 
-	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256)
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256 && keylen != AES_KEYSIZE_192)
 		return -EINVAL;
 
 	ctx->enc_keylen = keylen;
@@ -562,7 +578,12 @@ static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
 	memcpy(ctx->enc_key, key, keylen);
 	memcpy(ctx->auth_key, key, keylen);
 
-	return 0;
+	if (keylen == AES_KEYSIZE_192)
+		ctx->need_fallback = true;
+
+	return IS_CCM_RFC4309(flags) ?
+		crypto_aead_setkey(ctx->fallback, key, keylen + QCE_CCM4309_SALT_SIZE) :
+		crypto_aead_setkey(ctx->fallback, key, keylen);
 }
 
 static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
@@ -593,20 +614,21 @@ static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int
 		 * The crypto engine does not support any two keys
 		 * being the same for triple des algorithms. The
 		 * verify_skcipher_des3_key does not check for all the
-		 * below conditions. Return -EINVAL in case any two keys
-		 * are the same. Revisit to see if a fallback cipher
-		 * is needed to handle this condition.
+		 * below conditions. Schedule fallback in this case.
 		 */
 		memcpy(_key, authenc_keys.enckey, DES3_EDE_KEY_SIZE);
 		if (!((_key[0] ^ _key[2]) | (_key[1] ^ _key[3])) ||
 		    !((_key[2] ^ _key[4]) | (_key[3] ^ _key[5])) ||
 		    !((_key[0] ^ _key[4]) | (_key[1] ^ _key[5])))
-			return -EINVAL;
+			ctx->need_fallback = true;
 	} else if (IS_AES(flags)) {
 		/* No random key sizes */
 		if (authenc_keys.enckeylen != AES_KEYSIZE_128 &&
+		    authenc_keys.enckeylen != AES_KEYSIZE_192 &&
 		    authenc_keys.enckeylen != AES_KEYSIZE_256)
 			return -EINVAL;
+		if (authenc_keys.enckeylen == AES_KEYSIZE_192)
+			ctx->need_fallback = true;
 	}
 
 	ctx->enc_keylen = authenc_keys.enckeylen;
@@ -617,7 +639,7 @@ static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int
 	memset(ctx->auth_key, 0, sizeof(ctx->auth_key));
 	memcpy(ctx->auth_key, authenc_keys.authkey, authenc_keys.authkeylen);
 
-	return 0;
+	return crypto_aead_setkey(ctx->fallback, key, keylen);
 }
 
 static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
@@ -632,15 +654,33 @@ static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
 			return -EINVAL;
 	}
 	ctx->authsize = authsize;
-	return 0;
+
+	return crypto_aead_setauthsize(ctx->fallback, authsize);
 }
 
 static int qce_aead_init(struct crypto_aead *tfm)
 {
-	crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx));
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+	ctx->need_fallback = false;
+	ctx->fallback = crypto_alloc_aead(crypto_tfm_alg_name(&tfm->base),
+					  0, CRYPTO_ALG_NEED_FALLBACK);
+
+	if (IS_ERR(ctx->fallback))
+		return PTR_ERR(ctx->fallback);
+
+	crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx) +
+				crypto_aead_reqsize(ctx->fallback));
 	return 0;
 }
 
+static void qce_aead_exit(struct crypto_aead *tfm)
+{
+	struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+	crypto_free_aead(ctx->fallback);
+}
+
 struct qce_aead_def {
 	unsigned long flags;
 	const char *name;
@@ -738,11 +778,13 @@ static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_devi
 	alg->encrypt			= qce_aead_encrypt;
 	alg->decrypt			= qce_aead_decrypt;
 	alg->init			= qce_aead_init;
+	alg->exit			= qce_aead_exit;
 
 	alg->base.cra_priority		= 300;
 	alg->base.cra_flags		= CRYPTO_ALG_ASYNC |
 					  CRYPTO_ALG_ALLOCATES_MEMORY |
-					  CRYPTO_ALG_KERN_DRIVER_ONLY;
+					  CRYPTO_ALG_KERN_DRIVER_ONLY |
+					  CRYPTO_ALG_NEED_FALLBACK;
 	alg->base.cra_ctxsize		= sizeof(struct qce_aead_ctx);
 	alg->base.cra_alignmask		= 0;
 	alg->base.cra_module		= THIS_MODULE;
diff --git a/drivers/crypto/qce/aead.h b/drivers/crypto/qce/aead.h
index 3d1f2039930b..efb8477cc088 100644
--- a/drivers/crypto/qce/aead.h
+++ b/drivers/crypto/qce/aead.h
@@ -19,6 +19,8 @@ struct qce_aead_ctx {
 	unsigned int enc_keylen;
 	unsigned int auth_keylen;
 	unsigned int authsize;
+	bool need_fallback;
+	struct crypto_aead *fallback;
 };
 
 struct qce_aead_reqctx {
@@ -39,6 +41,7 @@ struct qce_aead_reqctx {
 	u8 ccm_nonce[QCE_MAX_NONCE];
 	u8 ccmresult_buf[QCE_BAM_BURST_SIZE];
 	u8 ccm_rfc4309_iv[QCE_MAX_IV_SIZE];
+	struct aead_request fallback_req;
 };
 
 static inline struct qce_alg_template *to_aead_tmpl(struct crypto_aead *tfm)
-- 
2.25.1
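
The XOR test in the triple-DES branch above is a word-wise equality check:
each DES key occupies two 32-bit words of _key, so (k[0] ^ k[2]) | (k[1] ^ k[3])
is zero exactly when K1 == K2. Spelled out as a standalone predicate
(des3_has_equal_keys is a hypothetical helper name, shown only to make the
three comparisons explicit):

	/* True if any two of the three DES keys match: K1==K2, K2==K3, or K1==K3 */
	static bool des3_has_equal_keys(const u32 k[6])
	{
		return !((k[0] ^ k[2]) | (k[1] ^ k[3])) ||
		       !((k[2] ^ k[4]) | (k[3] ^ k[5])) ||
		       !((k[0] ^ k[4]) | (k[1] ^ k[5]));
	}

The first two cases collapse EDE to single DES; per the comment in the patch
the engine cannot handle the K1 == K3 (two-key 3DES) case either, which is
why all three are now routed to the fallback rather than failing setkey.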


^ permalink raw reply related	[flat|nested] 10+ messages in thread
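
For context on how the pieces added here fit together: need_fallback marks a
transform whose key the engine cannot handle (e.g. AES-192), and fallback_req
gives the request context room for a sub-request on the software fallback.
The usual dispatch pattern with the standard kernel crypto AEAD helpers looks
roughly like the sketch below (qce_aead_do_fallback is a hypothetical name;
the driver's actual encrypt/decrypt path lives in earlier hunks of this patch
that are not quoted in this excerpt):

	static int qce_aead_do_fallback(struct aead_request *req, bool encrypt)
	{
		struct crypto_aead *tfm = crypto_aead_reqtfm(req);
		struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
		struct qce_aead_reqctx *rctx = aead_request_ctx(req);
		struct aead_request *subreq = &rctx->fallback_req;

		/* Hand the whole request to the software fallback transform */
		aead_request_set_tfm(subreq, ctx->fallback);
		aead_request_set_callback(subreq, req->base.flags,
					  req->base.complete, req->base.data);
		aead_request_set_crypt(subreq, req->src, req->dst,
				       req->cryptlen, req->iv);
		aead_request_set_ad(subreq, req->assoclen);

		return encrypt ? crypto_aead_encrypt(subreq) :
				 crypto_aead_decrypt(subreq);
	}

Because fallback_req is embedded in qce_aead_reqctx and qce_aead_init grows
the request size by crypto_aead_reqsize(ctx->fallback), no extra allocation
is needed on the fallback path.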

* Re: [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms
  2021-04-20  3:36 ` [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms Thara Gopinath
@ 2021-04-21 20:15   ` kernel test robot
  2021-04-29  5:25   ` [kbuild] " Dan Carpenter
  1 sibling, 0 replies; 10+ messages in thread
From: kernel test robot @ 2021-04-21 20:15 UTC (permalink / raw)
  To: Thara Gopinath, herbert, davem, bjorn.andersson
  Cc: kbuild-all, clang-built-linux, ebiggers, ardb, sivaprak,
	linux-crypto, linux-kernel, linux-arm-msm

[-- Attachment #1: Type: text/plain, Size: 8247 bytes --]

Hi Thara,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on cryptodev/master]
[also build test WARNING on next-20210421]
[cannot apply to crypto/master v5.12-rc8]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thara-Gopinath/Add-support-for-AEAD-algorithms-in-Qualcomm-Crypto-Engine-driver/20210420-113944
base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
config: x86_64-randconfig-a013-20210421 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project d87b9b81ccb95217181ce75515c6c68bbb408ca4)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/b152c1b17bb6ad7923f0f3f8bc5ef81fb4cd054a
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Thara-Gopinath/Add-support-for-AEAD-algorithms-in-Qualcomm-Crypto-Engine-driver/20210420-113944
        git checkout b152c1b17bb6ad7923f0f3f8bc5ef81fb4cd054a
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 ARCH=x86_64 

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/crypto/qce/common.c:478:14: warning: variable 'auth_ivsize' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                   } else if (IS_SHA256_HMAC(rctx->flags)) {
                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/crypto/qce/common.h:68:32: note: expanded from macro 'IS_SHA256_HMAC'
   #define IS_SHA256_HMAC(flags)           (flags & QCE_HASH_SHA256_HMAC)
                                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/crypto/qce/common.c:482:18: note: uninitialized use occurs here
                   authiv_words = auth_ivsize / sizeof(u32);
                                  ^~~~~~~~~~~
   drivers/crypto/qce/common.c:478:10: note: remove the 'if' if its condition is always true
                   } else if (IS_SHA256_HMAC(rctx->flags)) {
                          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/crypto/qce/common.c:434:26: note: initialize the variable 'auth_ivsize' to silence this warning
           unsigned int auth_ivsize;
                                   ^
                                    = 0
   1 warning generated.


vim +478 drivers/crypto/qce/common.c

   418	
   419	static int qce_setup_regs_aead(struct crypto_async_request *async_req)
   420	{
   421		struct aead_request *req = aead_request_cast(async_req);
   422		struct qce_aead_reqctx *rctx = aead_request_ctx(req);
   423		struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
   424		struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
   425		struct qce_device *qce = tmpl->qce;
   426		u32 enckey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(u32)] = {0};
   427		u32 enciv[QCE_MAX_IV_SIZE / sizeof(u32)] = {0};
   428		u32 authkey[QCE_SHA_HMAC_KEY_SIZE / sizeof(u32)] = {0};
   429		u32 authiv[SHA256_DIGEST_SIZE / sizeof(u32)] = {0};
   430		u32 authnonce[QCE_MAX_NONCE / sizeof(u32)] = {0};
   431		unsigned int enc_keylen = ctx->enc_keylen;
   432		unsigned int auth_keylen = ctx->auth_keylen;
   433		unsigned int enc_ivsize = rctx->ivsize;
   434		unsigned int auth_ivsize;
   435		unsigned int enckey_words, enciv_words;
   436		unsigned int authkey_words, authiv_words, authnonce_words;
   437		unsigned long flags = rctx->flags;
   438		u32 encr_cfg, auth_cfg, config, totallen;
   439		u32 iv_last_word;
   440	
   441		qce_setup_config(qce);
   442	
   443		/* Write encryption key */
   444		enckey_words = qce_be32_to_cpu_array(enckey, ctx->enc_key, enc_keylen);
   445		qce_write_array(qce, REG_ENCR_KEY0, enckey, enckey_words);
   446	
   447		/* Write encryption iv */
   448		enciv_words = qce_be32_to_cpu_array(enciv, rctx->iv, enc_ivsize);
   449		qce_write_array(qce, REG_CNTR0_IV0, enciv, enciv_words);
   450	
   451		if (IS_CCM(rctx->flags)) {
   452			iv_last_word = enciv[enciv_words - 1];
   453			qce_write(qce, REG_CNTR3_IV3, iv_last_word + 1);
   454			qce_write_array(qce, REG_ENCR_CCM_INT_CNTR0, (u32 *)enciv, enciv_words);
   455			qce_write(qce, REG_CNTR_MASK, ~0);
   456			qce_write(qce, REG_CNTR_MASK0, ~0);
   457			qce_write(qce, REG_CNTR_MASK1, ~0);
   458			qce_write(qce, REG_CNTR_MASK2, ~0);
   459		}
   460	
   461		/* Clear authentication IV and KEY registers of previous values */
   462		qce_clear_array(qce, REG_AUTH_IV0, 16);
   463		qce_clear_array(qce, REG_AUTH_KEY0, 16);
   464	
   465		/* Clear byte count */
   466		qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
   467	
   468		/* Write authentication key */
   469		authkey_words = qce_be32_to_cpu_array(authkey, ctx->auth_key, auth_keylen);
   470		qce_write_array(qce, REG_AUTH_KEY0, (u32 *)authkey, authkey_words);
   471	
   472		/* Write initial authentication IV only for HMAC algorithms */
   473		if (IS_SHA_HMAC(rctx->flags)) {
   474			/* Write default authentication iv */
   475			if (IS_SHA1_HMAC(rctx->flags)) {
   476				auth_ivsize = SHA1_DIGEST_SIZE;
   477				memcpy(authiv, std_iv_sha1, auth_ivsize);
 > 478			} else if (IS_SHA256_HMAC(rctx->flags)) {
   479				auth_ivsize = SHA256_DIGEST_SIZE;
   480				memcpy(authiv, std_iv_sha256, auth_ivsize);
   481			}
   482			authiv_words = auth_ivsize / sizeof(u32);
   483			qce_write_array(qce, REG_AUTH_IV0, (u32 *)authiv, authiv_words);
   484		} else if (IS_CCM(rctx->flags)) {
   485			/* Write nonce for CCM algorithms */
   486			authnonce_words = qce_be32_to_cpu_array(authnonce, rctx->ccm_nonce, QCE_MAX_NONCE);
   487			qce_write_array(qce, REG_AUTH_INFO_NONCE0, authnonce, authnonce_words);
   488		}
   489	
   490		/* Set up ENCR_SEG_CFG */
   491		encr_cfg = qce_encr_cfg(flags, enc_keylen);
   492		if (IS_ENCRYPT(flags))
   493			encr_cfg |= BIT(ENCODE_SHIFT);
   494		qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
   495	
   496		/* Set up AUTH_SEG_CFG */
   497		auth_cfg = qce_auth_cfg(rctx->flags, auth_keylen, ctx->authsize);
   498		auth_cfg |= BIT(AUTH_LAST_SHIFT);
   499		auth_cfg |= BIT(AUTH_FIRST_SHIFT);
   500		if (IS_ENCRYPT(flags)) {
   501			if (IS_CCM(rctx->flags))
   502				auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
   503			else
   504				auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
   505		} else {
   506			if (IS_CCM(rctx->flags))
   507				auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
   508			else
   509				auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
   510		}
   511		qce_write(qce, REG_AUTH_SEG_CFG, auth_cfg);
   512	
   513		totallen = rctx->cryptlen + rctx->assoclen;
   514	
   515		/* Set the encryption size and start offset */
   516		if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
   517			qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen + ctx->authsize);
   518		else
   519			qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
   520		qce_write(qce, REG_ENCR_SEG_START, rctx->assoclen & 0xffff);
   521	
   522		/* Set the authentication size and start offset */
   523		qce_write(qce, REG_AUTH_SEG_SIZE, totallen);
   524		qce_write(qce, REG_AUTH_SEG_START, 0);
   525	
   526		/* Write total length */
   527		if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
   528			qce_write(qce, REG_SEG_SIZE, totallen + ctx->authsize);
   529		else
   530			qce_write(qce, REG_SEG_SIZE, totallen);
   531	
   532		/* get little endianness */
   533		config = qce_config_reg(qce, 1);
   534		qce_write(qce, REG_CONFIG, config);
   535	
   536		/* Start the process */
   537		qce_crypto_go(qce, !IS_CCM(flags));
   538	
   539		return 0;
   540	}
   541	#endif
   542	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 40386 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [kbuild] Re: [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms
  2021-04-20  3:36 ` [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms Thara Gopinath
  2021-04-21 20:15   ` kernel test robot
@ 2021-04-29  5:25   ` Dan Carpenter
  1 sibling, 0 replies; 10+ messages in thread
From: Dan Carpenter @ 2021-04-29  5:25 UTC (permalink / raw)
  To: kbuild, Thara Gopinath, herbert, davem, bjorn.andersson
  Cc: lkp, kbuild-all, ebiggers, ardb, sivaprak, linux-crypto,
	linux-kernel, linux-arm-msm

[-- Attachment #1: Type: text/plain, Size: 6907 bytes --]

Hi Thara,

url:    https://github.com/0day-ci/linux/commits/Thara-Gopinath/Add-support-for-AEAD-algorithms-in-Qualcomm-Crypto-Engine-driver/20210420-113944 
base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git  master
config: arm-randconfig-m031-20210428 (attached as .config)
compiler: arm-linux-gnueabi-gcc (GCC) 9.3.0

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
drivers/crypto/qce/common.c:482 qce_setup_regs_aead() error: uninitialized symbol 'auth_ivsize'.

vim +/auth_ivsize +482 drivers/crypto/qce/common.c

b152c1b17bb6ad Thara Gopinath 2021-04-19  419  static int qce_setup_regs_aead(struct crypto_async_request *async_req)
b152c1b17bb6ad Thara Gopinath 2021-04-19  420  {
b152c1b17bb6ad Thara Gopinath 2021-04-19  421  	struct aead_request *req = aead_request_cast(async_req);
b152c1b17bb6ad Thara Gopinath 2021-04-19  422  	struct qce_aead_reqctx *rctx = aead_request_ctx(req);
b152c1b17bb6ad Thara Gopinath 2021-04-19  423  	struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
b152c1b17bb6ad Thara Gopinath 2021-04-19  424  	struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
b152c1b17bb6ad Thara Gopinath 2021-04-19  425  	struct qce_device *qce = tmpl->qce;
b152c1b17bb6ad Thara Gopinath 2021-04-19  426  	u32 enckey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(u32)] = {0};
b152c1b17bb6ad Thara Gopinath 2021-04-19  427  	u32 enciv[QCE_MAX_IV_SIZE / sizeof(u32)] = {0};
b152c1b17bb6ad Thara Gopinath 2021-04-19  428  	u32 authkey[QCE_SHA_HMAC_KEY_SIZE / sizeof(u32)] = {0};
b152c1b17bb6ad Thara Gopinath 2021-04-19  429  	u32 authiv[SHA256_DIGEST_SIZE / sizeof(u32)] = {0};
b152c1b17bb6ad Thara Gopinath 2021-04-19  430  	u32 authnonce[QCE_MAX_NONCE / sizeof(u32)] = {0};
b152c1b17bb6ad Thara Gopinath 2021-04-19  431  	unsigned int enc_keylen = ctx->enc_keylen;
b152c1b17bb6ad Thara Gopinath 2021-04-19  432  	unsigned int auth_keylen = ctx->auth_keylen;
b152c1b17bb6ad Thara Gopinath 2021-04-19  433  	unsigned int enc_ivsize = rctx->ivsize;
b152c1b17bb6ad Thara Gopinath 2021-04-19  434  	unsigned int auth_ivsize;
b152c1b17bb6ad Thara Gopinath 2021-04-19  435  	unsigned int enckey_words, enciv_words;
b152c1b17bb6ad Thara Gopinath 2021-04-19  436  	unsigned int authkey_words, authiv_words, authnonce_words;
b152c1b17bb6ad Thara Gopinath 2021-04-19  437  	unsigned long flags = rctx->flags;
b152c1b17bb6ad Thara Gopinath 2021-04-19  438  	u32 encr_cfg, auth_cfg, config, totallen;
b152c1b17bb6ad Thara Gopinath 2021-04-19  439  	u32 iv_last_word;
b152c1b17bb6ad Thara Gopinath 2021-04-19  440  
b152c1b17bb6ad Thara Gopinath 2021-04-19  441  	qce_setup_config(qce);
b152c1b17bb6ad Thara Gopinath 2021-04-19  442  
b152c1b17bb6ad Thara Gopinath 2021-04-19  443  	/* Write encryption key */
b152c1b17bb6ad Thara Gopinath 2021-04-19  444  	enckey_words = qce_be32_to_cpu_array(enckey, ctx->enc_key, enc_keylen);
b152c1b17bb6ad Thara Gopinath 2021-04-19  445  	qce_write_array(qce, REG_ENCR_KEY0, enckey, enckey_words);
b152c1b17bb6ad Thara Gopinath 2021-04-19  446  
b152c1b17bb6ad Thara Gopinath 2021-04-19  447  	/* Write encryption iv */
b152c1b17bb6ad Thara Gopinath 2021-04-19  448  	enciv_words = qce_be32_to_cpu_array(enciv, rctx->iv, enc_ivsize);
b152c1b17bb6ad Thara Gopinath 2021-04-19  449  	qce_write_array(qce, REG_CNTR0_IV0, enciv, enciv_words);
b152c1b17bb6ad Thara Gopinath 2021-04-19  450  
b152c1b17bb6ad Thara Gopinath 2021-04-19  451  	if (IS_CCM(rctx->flags)) {
b152c1b17bb6ad Thara Gopinath 2021-04-19  452  		iv_last_word = enciv[enciv_words - 1];
b152c1b17bb6ad Thara Gopinath 2021-04-19  453  		qce_write(qce, REG_CNTR3_IV3, iv_last_word + 1);
b152c1b17bb6ad Thara Gopinath 2021-04-19  454  		qce_write_array(qce, REG_ENCR_CCM_INT_CNTR0, (u32 *)enciv, enciv_words);
b152c1b17bb6ad Thara Gopinath 2021-04-19  455  		qce_write(qce, REG_CNTR_MASK, ~0);
b152c1b17bb6ad Thara Gopinath 2021-04-19  456  		qce_write(qce, REG_CNTR_MASK0, ~0);
b152c1b17bb6ad Thara Gopinath 2021-04-19  457  		qce_write(qce, REG_CNTR_MASK1, ~0);
b152c1b17bb6ad Thara Gopinath 2021-04-19  458  		qce_write(qce, REG_CNTR_MASK2, ~0);
b152c1b17bb6ad Thara Gopinath 2021-04-19  459  	}
b152c1b17bb6ad Thara Gopinath 2021-04-19  460  
b152c1b17bb6ad Thara Gopinath 2021-04-19  461  	/* Clear authentication IV and KEY registers of previous values */
b152c1b17bb6ad Thara Gopinath 2021-04-19  462  	qce_clear_array(qce, REG_AUTH_IV0, 16);
b152c1b17bb6ad Thara Gopinath 2021-04-19  463  	qce_clear_array(qce, REG_AUTH_KEY0, 16);
b152c1b17bb6ad Thara Gopinath 2021-04-19  464  
b152c1b17bb6ad Thara Gopinath 2021-04-19  465  	/* Clear byte count */
b152c1b17bb6ad Thara Gopinath 2021-04-19  466  	qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
b152c1b17bb6ad Thara Gopinath 2021-04-19  467  
b152c1b17bb6ad Thara Gopinath 2021-04-19  468  	/* Write authentication key */
b152c1b17bb6ad Thara Gopinath 2021-04-19  469  	authkey_words = qce_be32_to_cpu_array(authkey, ctx->auth_key, auth_keylen);
b152c1b17bb6ad Thara Gopinath 2021-04-19  470  	qce_write_array(qce, REG_AUTH_KEY0, (u32 *)authkey, authkey_words);
b152c1b17bb6ad Thara Gopinath 2021-04-19  471  
b152c1b17bb6ad Thara Gopinath 2021-04-19  472  	/* Write initial authentication IV only for HMAC algorithms */
b152c1b17bb6ad Thara Gopinath 2021-04-19  473  	if (IS_SHA_HMAC(rctx->flags)) {
b152c1b17bb6ad Thara Gopinath 2021-04-19  474  		/* Write default authentication iv */
b152c1b17bb6ad Thara Gopinath 2021-04-19  475  		if (IS_SHA1_HMAC(rctx->flags)) {
b152c1b17bb6ad Thara Gopinath 2021-04-19  476  			auth_ivsize = SHA1_DIGEST_SIZE;
b152c1b17bb6ad Thara Gopinath 2021-04-19  477  			memcpy(authiv, std_iv_sha1, auth_ivsize);
b152c1b17bb6ad Thara Gopinath 2021-04-19  478  		} else if (IS_SHA256_HMAC(rctx->flags)) {
b152c1b17bb6ad Thara Gopinath 2021-04-19  479  			auth_ivsize = SHA256_DIGEST_SIZE;
b152c1b17bb6ad Thara Gopinath 2021-04-19  480  			memcpy(authiv, std_iv_sha256, auth_ivsize);
b152c1b17bb6ad Thara Gopinath 2021-04-19  481  		}

No else path.

b152c1b17bb6ad Thara Gopinath 2021-04-19 @482  		authiv_words = auth_ivsize / sizeof(u32);
b152c1b17bb6ad Thara Gopinath 2021-04-19  483  		qce_write_array(qce, REG_AUTH_IV0, (u32 *)authiv, authiv_words);
b152c1b17bb6ad Thara Gopinath 2021-04-19  484  	} else if (IS_CCM(rctx->flags)) {
b152c1b17bb6ad Thara Gopinath 2021-04-19  485  		/* Write nonce for CCM algorithms */
b152c1b17bb6ad Thara Gopinath 2021-04-19  486  		authnonce_words = qce_be32_to_cpu_array(authnonce, rctx->ccm_nonce, QCE_MAX_NONCE);
b152c1b17bb6ad Thara Gopinath 2021-04-19  487  		qce_write_array(qce, REG_AUTH_INFO_NONCE0, authnonce, authnonce_words);
b152c1b17bb6ad Thara Gopinath 2021-04-19  488  	}
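
A minimal way to resolve this, since qce_setup_regs_aead() already returns
int, is an else branch that fails loudly when neither HMAC flavour matches
(a sketch of one possible fix, not necessarily the change the author made in
the next revision):

		} else if (IS_SHA256_HMAC(rctx->flags)) {
			auth_ivsize = SHA256_DIGEST_SIZE;
			memcpy(authiv, std_iv_sha256, auth_ivsize);
		} else {
			/* Only SHA1/SHA256 HMAC reach this path */
			return -EINVAL;
		}

Initializing auth_ivsize to 0 would also silence the warning, but an explicit
error keeps an unsupported flag combination from programming the engine with
an empty IV.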

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org 

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 41010 bytes --]


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2021-04-29  5:26 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-20  3:35 [Patch v3 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver Thara Gopinath
2021-04-20  3:35 ` [Patch v3 1/7] crypto: qce: common: Add MAC failed error checking Thara Gopinath
2021-04-20  3:35 ` [Patch v3 2/7] crypto: qce: common: Make result dump optional Thara Gopinath
2021-04-20  3:35 ` [Patch v3 3/7] crypto: qce: Add mode for rfc4309 Thara Gopinath
2021-04-20  3:35 ` [Patch v3 4/7] crypto: qce: Add support for AEAD algorithms Thara Gopinath
2021-04-20  3:36 ` [Patch v3 5/7] crypto: qce: common: Clean up qce_auth_cfg Thara Gopinath
2021-04-20  3:36 ` [Patch v3 6/7] crypto: qce: common: Add support for AEAD algorithms Thara Gopinath
2021-04-21 20:15   ` kernel test robot
2021-04-29  5:25   ` [kbuild] " Dan Carpenter
2021-04-20  3:36 ` [Patch v3 7/7] crypto: qce: aead: Schedule fallback algorithm Thara Gopinath
