linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/12] crypto: caam - backlogging support
@ 2019-11-17 22:30 Iuliana Prodan
  2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
                   ` (11 more replies)
  0 siblings, 12 replies; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Integrate the crypto_engine framework into CAAM, to make use of
the engine queue.
Support is added for the SKCIPHER, HASH, RSA and AEAD algorithms.
This is intended to be used for CAAM backlogging support.
Requests with the backlog flag set (e.g. from dm-crypt) are queued
in the crypto-engine queue and processed by CAAM when free.
For better performance, the crypto-engine software queue is bypassed
when empty, and the request is sent directly to hardware. If the
hardware returns -ENOSPC, the request is transferred to crypto-engine,
which then handles it.
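The "bypass the software queue when empty" policy can be sketched in
userspace as below. All names here (dispatch, hw_enqueue,
engine_transfer, the fake state flags) are illustrative stand-ins, not
the actual driver or crypto-engine symbols:

```c
/*
 * Illustrative model of the dispatch policy: go straight to hardware
 * while the crypto-engine software queue is empty; hand the request to
 * the engine when the hardware reports -ENOSPC and the request may be
 * backlogged. Hypothetical names, not real CAAM code.
 */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

struct fake_req { bool backlog; };

static bool engine_queue_empty = true; /* assumed sw-queue state */
static bool hw_busy;                   /* assumed job-ring state  */

/* Pretend hardware enqueue: -EINPROGRESS on success, -ENOSPC when busy. */
static int hw_enqueue(struct fake_req *req)
{
	(void)req;
	return hw_busy ? -ENOSPC : -EINPROGRESS;
}

/* Pretend hand-off to the crypto-engine software queue. */
static int engine_transfer(struct fake_req *req)
{
	(void)req;
	engine_queue_empty = false;
	return -EBUSY; /* backlogged; the engine owns the request now */
}

static int dispatch(struct fake_req *req)
{
	int ret;

	/* Requests behind others must stay ordered: use the engine. */
	if (!engine_queue_empty)
		return engine_transfer(req);

	ret = hw_enqueue(req);
	if (ret == -ENOSPC && req->backlog)
		return engine_transfer(req);

	return ret;
}
```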

While here, I've also done some refactoring.
Patch #1 adds a helper function, akcipher_request_cast, to get
an akcipher_request struct from a crypto_async_request struct.
Patches #2 - #5 refactor parts of caamalg, caamhash and caampkc.
Patches #6 and #7 change the return code of the caam_jr_enqueue
function to -EINPROGRESS on success, -ENOSPC when the CAAM is
busy, and -EIO when it cannot map the caller's descriptor, and add
a new enqueue function for requests without the backlog flag. Also,
to keep per-request information, such as the backlog flag, a new
struct is passed as an argument to the enqueue functions.
Patches #8 - #12 integrate crypto_engine into CAAM for the
SKCIPHER/AEAD/RSA/HASH algorithms.
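The caam_jr_enqueue return-code contract introduced by patches #6/#7
can be summarized as a small sketch (values only; the job-ring
internals are elided and the helper name is illustrative, not the real
driver function):

```c
/*
 * Sketch of the enqueue return-code contract: -EINPROGRESS means the
 * descriptor was accepted and will complete asynchronously, -ENOSPC
 * means the CAAM is busy (the caller may backlog via crypto-engine),
 * -EIO means the caller's descriptor could not be DMA-mapped.
 * JR_* states and the function are hypothetical stand-ins.
 */
#include <assert.h>
#include <errno.h>

enum jr_state { JR_OK, JR_FULL, JR_MAP_FAIL };

static int caam_jr_enqueue_sketch(enum jr_state state)
{
	switch (state) {
	case JR_FULL:
		return -ENOSPC;      /* CAAM busy; caller may backlog   */
	case JR_MAP_FAIL:
		return -EIO;         /* cannot map caller's descriptor  */
	default:
		return -EINPROGRESS; /* accepted; completion is async   */
	}
}
```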


Iuliana Prodan (12):
  crypto: add helper function for akcipher_request
  crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt
    functions
  crypto: caam - refactor ahash_done callbacks
  crypto: caam - refactor ahash_edesc_alloc
  crypto: caam - refactor RSA private key _done callbacks
  crypto: caam - change return code in caam_jr_enqueue function
  crypto: caam - refactor caam_jr_enqueue
  crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  crypto: caam - bypass crypto-engine sw queue, if empty
  crypto: caam - add crypto_engine support for AEAD algorithms
  crypto: caam - add crypto_engine support for RSA algorithms
  crypto: caam - add crypto_engine support for HASH algorithms

 drivers/crypto/caam/Kconfig         |   1 +
 drivers/crypto/caam/caamalg.c       | 411 ++++++++++++++++--------------------
 drivers/crypto/caam/caamhash.c      | 323 +++++++++++++++-------------
 drivers/crypto/caam/caampkc.c       | 177 ++++++++++------
 drivers/crypto/caam/caampkc.h       |  11 +
 drivers/crypto/caam/caamrng.c       |   7 +-
 drivers/crypto/caam/intern.h        |  12 ++
 drivers/crypto/caam/jr.c            | 136 +++++++++++-
 drivers/crypto/caam/jr.h            |   4 +
 drivers/crypto/caam/key_gen.c       |   4 +-
 drivers/crypto/ccp/ccp-crypto-rsa.c |   6 -
 include/crypto/akcipher.h           |   6 +
 12 files changed, 635 insertions(+), 463 deletions(-)

-- 
2.1.0



* [PATCH 01/12] crypto: add helper function for akcipher_request
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-18 13:29   ` Corentin Labbe
                     ` (3 more replies)
  2019-11-17 22:30 ` [PATCH 02/12] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions Iuliana Prodan
                   ` (10 subsequent siblings)
  11 siblings, 4 replies; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Add akcipher_request_cast function to get an akcipher_request struct from
a crypto_async_request struct.

Remove this function from the ccp driver.
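The helper is a thin container_of() wrapper. A userspace model of what
it does (with simplified stand-in struct layouts, not the real crypto
API types) is:

```c
/*
 * Userspace model of akcipher_request_cast: recover the enclosing
 * struct from a pointer to its embedded base member via container_of.
 * The struct bodies below are simplified stand-ins for illustration.
 */
#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct crypto_async_request { int flags; };

struct akcipher_request {
	struct crypto_async_request base; /* embedded member */
	unsigned int src_len;
};

static inline struct akcipher_request *akcipher_request_cast(
	struct crypto_async_request *req)
{
	return container_of(req, struct akcipher_request, base);
}
```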

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/ccp/ccp-crypto-rsa.c | 6 ------
 include/crypto/akcipher.h           | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
index 649c91d..3ab659d 100644
--- a/drivers/crypto/ccp/ccp-crypto-rsa.c
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -19,12 +19,6 @@
 
 #include "ccp-crypto.h"
 
-static inline struct akcipher_request *akcipher_request_cast(
-	struct crypto_async_request *req)
-{
-	return container_of(req, struct akcipher_request, base);
-}
-
 static inline int ccp_copy_and_save_keypart(u8 **kpbuf, unsigned int *kplen,
 					    const u8 *buf, size_t sz)
 {
diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
index 6924b09..4365edd 100644
--- a/include/crypto/akcipher.h
+++ b/include/crypto/akcipher.h
@@ -170,6 +170,12 @@ static inline struct crypto_akcipher *crypto_akcipher_reqtfm(
 	return __crypto_akcipher_tfm(req->base.tfm);
 }
 
+static inline struct akcipher_request *akcipher_request_cast(
+	struct crypto_async_request *req)
+{
+	return container_of(req, struct akcipher_request, base);
+}
+
 /**
  * crypto_free_akcipher() - free AKCIPHER tfm handle
  *
-- 
2.1.0



* [PATCH 02/12] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
  2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-19 14:41   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 03/12] crypto: caam - refactor ahash_done callbacks Iuliana Prodan
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Create a common crypt function for each of the skcipher/aead/gcm/
chachapoly algorithms and call it from encrypt/decrypt with the
appropriate boolean: true for encrypt, false for decrypt.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caamalg.c | 268 +++++++++---------------------------------
 1 file changed, 53 insertions(+), 215 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 2912006..6e021692 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -960,8 +960,8 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc,
 		   edesc->sec4_sg_dma, edesc->sec4_sg_bytes);
 }
 
-static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				   void *context)
+static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
+			    void *context)
 {
 	struct aead_request *req = context;
 	struct aead_edesc *edesc;
@@ -981,69 +981,8 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	aead_request_complete(req, ecode);
 }
 
-static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				   void *context)
-{
-	struct aead_request *req = context;
-	struct aead_edesc *edesc;
-	int ecode = 0;
-
-	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
-	edesc = container_of(desc, struct aead_edesc, hw_desc[0]);
-
-	if (err)
-		ecode = caam_jr_strstatus(jrdev, err);
-
-	aead_unmap(jrdev, edesc, req);
-
-	kfree(edesc);
-
-	aead_request_complete(req, ecode);
-}
-
-static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				  void *context)
-{
-	struct skcipher_request *req = context;
-	struct skcipher_edesc *edesc;
-	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
-	int ivsize = crypto_skcipher_ivsize(skcipher);
-	int ecode = 0;
-
-	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
-	edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]);
-
-	if (err)
-		ecode = caam_jr_strstatus(jrdev, err);
-
-	skcipher_unmap(jrdev, edesc, req);
-
-	/*
-	 * The crypto API expects us to set the IV (req->iv) to the last
-	 * ciphertext block (CBC mode) or last counter (CTR mode).
-	 * This is used e.g. by the CTS mode.
-	 */
-	if (ivsize && !ecode) {
-		memcpy(req->iv, (u8 *)edesc->sec4_sg + edesc->sec4_sg_bytes,
-		       ivsize);
-		print_hex_dump_debug("dstiv  @"__stringify(__LINE__)": ",
-				     DUMP_PREFIX_ADDRESS, 16, 4, req->iv,
-				     edesc->src_nents > 1 ? 100 : ivsize, 1);
-	}
-
-	caam_dump_sg("dst    @" __stringify(__LINE__)": ",
-		     DUMP_PREFIX_ADDRESS, 16, 4, req->dst,
-		     edesc->dst_nents > 1 ? 100 : req->cryptlen, 1);
-
-	kfree(edesc);
-
-	skcipher_request_complete(req, ecode);
-}
-
-static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				  void *context)
+static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
+				void *context)
 {
 	struct skcipher_request *req = context;
 	struct skcipher_edesc *edesc;
@@ -1455,41 +1394,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 	return edesc;
 }
 
-static int gcm_encrypt(struct aead_request *req)
-{
-	struct aead_edesc *edesc;
-	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
-	bool all_contig;
-	u32 *desc;
-	int ret = 0;
-
-	/* allocate extended descriptor */
-	edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, true);
-	if (IS_ERR(edesc))
-		return PTR_ERR(edesc);
-
-	/* Create and submit job descriptor */
-	init_gcm_job(req, edesc, all_contig, true);
-
-	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
-			     desc_bytes(edesc->hw_desc), 1);
-
-	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
-		aead_unmap(jrdev, edesc, req);
-		kfree(edesc);
-	}
-
-	return ret;
-}
-
-static int chachapoly_encrypt(struct aead_request *req)
+static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
 {
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -1500,18 +1405,18 @@ static int chachapoly_encrypt(struct aead_request *req)
 	int ret;
 
 	edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig,
-				 true);
+				 encrypt);
 	if (IS_ERR(edesc))
 		return PTR_ERR(edesc);
 
 	desc = edesc->hw_desc;
 
-	init_chachapoly_job(req, edesc, all_contig, true);
+	init_chachapoly_job(req, edesc, all_contig, encrypt);
 	print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ",
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1522,45 +1427,17 @@ static int chachapoly_encrypt(struct aead_request *req)
 	return ret;
 }
 
-static int chachapoly_decrypt(struct aead_request *req)
+static int chachapoly_encrypt(struct aead_request *req)
 {
-	struct aead_edesc *edesc;
-	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
-	bool all_contig;
-	u32 *desc;
-	int ret;
-
-	edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig,
-				 false);
-	if (IS_ERR(edesc))
-		return PTR_ERR(edesc);
-
-	desc = edesc->hw_desc;
-
-	init_chachapoly_job(req, edesc, all_contig, false);
-	print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
-			     1);
-
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
-		aead_unmap(jrdev, edesc, req);
-		kfree(edesc);
-	}
-
-	return ret;
+	return chachapoly_crypt(req, true);
 }
 
-static int ipsec_gcm_encrypt(struct aead_request *req)
+static int chachapoly_decrypt(struct aead_request *req)
 {
-	return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_encrypt(req);
+	return chachapoly_crypt(req, false);
 }
 
-static int aead_encrypt(struct aead_request *req)
+static inline int aead_crypt(struct aead_request *req, bool encrypt)
 {
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -1572,19 +1449,19 @@ static int aead_encrypt(struct aead_request *req)
 
 	/* allocate extended descriptor */
 	edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN,
-				 &all_contig, true);
+				 &all_contig, encrypt);
 	if (IS_ERR(edesc))
 		return PTR_ERR(edesc);
 
 	/* Create and submit job descriptor */
-	init_authenc_job(req, edesc, all_contig, true);
+	init_authenc_job(req, edesc, all_contig, encrypt);
 
 	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
 			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1595,7 +1472,17 @@ static int aead_encrypt(struct aead_request *req)
 	return ret;
 }
 
-static int gcm_decrypt(struct aead_request *req)
+static int aead_encrypt(struct aead_request *req)
+{
+	return aead_crypt(req, true);
+}
+
+static int aead_decrypt(struct aead_request *req)
+{
+	return aead_crypt(req, false);
+}
+
+static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 {
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -1606,19 +1493,20 @@ static int gcm_decrypt(struct aead_request *req)
 	int ret = 0;
 
 	/* allocate extended descriptor */
-	edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, false);
+	edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig,
+				 encrypt);
 	if (IS_ERR(edesc))
 		return PTR_ERR(edesc);
 
-	/* Create and submit job descriptor*/
-	init_gcm_job(req, edesc, all_contig, false);
+	/* Create and submit job descriptor */
+	init_gcm_job(req, edesc, all_contig, encrypt);
 
 	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
 			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1629,48 +1517,24 @@ static int gcm_decrypt(struct aead_request *req)
 	return ret;
 }
 
-static int ipsec_gcm_decrypt(struct aead_request *req)
+static int gcm_encrypt(struct aead_request *req)
 {
-	return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_decrypt(req);
+	return gcm_crypt(req, true);
 }
 
-static int aead_decrypt(struct aead_request *req)
+static int gcm_decrypt(struct aead_request *req)
 {
-	struct aead_edesc *edesc;
-	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
-	bool all_contig;
-	u32 *desc;
-	int ret = 0;
-
-	caam_dump_sg("dec src@" __stringify(__LINE__)": ",
-		     DUMP_PREFIX_ADDRESS, 16, 4, req->src,
-		     req->assoclen + req->cryptlen, 1);
-
-	/* allocate extended descriptor */
-	edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN,
-				 &all_contig, false);
-	if (IS_ERR(edesc))
-		return PTR_ERR(edesc);
-
-	/* Create and submit job descriptor*/
-	init_authenc_job(req, edesc, all_contig, false);
-
-	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
-			     desc_bytes(edesc->hw_desc), 1);
+	return gcm_crypt(req, false);
+}
 
-	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
-		aead_unmap(jrdev, edesc, req);
-		kfree(edesc);
-	}
+static int ipsec_gcm_encrypt(struct aead_request *req)
+{
+	return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_encrypt(req);
+}
 
-	return ret;
+static int ipsec_gcm_decrypt(struct aead_request *req)
+{
+	return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_decrypt(req);
 }
 
 /*
@@ -1834,7 +1698,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 	return edesc;
 }
 
-static int skcipher_encrypt(struct skcipher_request *req)
+static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 {
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
@@ -1852,14 +1716,14 @@ static int skcipher_encrypt(struct skcipher_request *req)
 		return PTR_ERR(edesc);
 
 	/* Create and submit job descriptor*/
-	init_skcipher_job(req, edesc, true);
+	init_skcipher_job(req, edesc, encrypt);
 
 	print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ",
 			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, skcipher_encrypt_done, req);
+	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);
 
 	if (!ret) {
 		ret = -EINPROGRESS;
@@ -1871,40 +1735,14 @@ static int skcipher_encrypt(struct skcipher_request *req)
 	return ret;
 }
 
-static int skcipher_decrypt(struct skcipher_request *req)
+static int skcipher_encrypt(struct skcipher_request *req)
 {
-	struct skcipher_edesc *edesc;
-	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
-	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct device *jrdev = ctx->jrdev;
-	u32 *desc;
-	int ret = 0;
-
-	if (!req->cryptlen)
-		return 0;
-
-	/* allocate extended descriptor */
-	edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
-	if (IS_ERR(edesc))
-		return PTR_ERR(edesc);
-
-	/* Create and submit job descriptor*/
-	init_skcipher_job(req, edesc, false);
-	desc = edesc->hw_desc;
-
-	print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
-			     desc_bytes(edesc->hw_desc), 1);
-
-	ret = caam_jr_enqueue(jrdev, desc, skcipher_decrypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
-		skcipher_unmap(jrdev, edesc, req);
-		kfree(edesc);
-	}
+	return skcipher_crypt(req, true);
+}
 
-	return ret;
+static int skcipher_decrypt(struct skcipher_request *req)
+{
+	return skcipher_crypt(req, false);
 }
 
 static struct caam_skcipher_alg driver_algs[] = {
-- 
2.1.0



* [PATCH 03/12] crypto: caam - refactor ahash_done callbacks
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
  2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
  2019-11-17 22:30 ` [PATCH 02/12] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-19 14:56   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 04/12] crypto: caam - refactor ahash_edesc_alloc Iuliana Prodan
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Create two common ahash_done_* functions that take the DMA
direction as a parameter. These are then called with the proper
direction for unmap.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caamhash.c | 80 ++++++++++++------------------------------
 1 file changed, 22 insertions(+), 58 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 65399cb..3d6e978 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -597,8 +597,8 @@ static inline void ahash_unmap_ctx(struct device *dev,
 	ahash_unmap(dev, edesc, req, dst_len);
 }
 
-static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
-		       void *context)
+static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
+				  void *context, enum dma_data_direction dir)
 {
 	struct ahash_request *req = context;
 	struct ahash_edesc *edesc;
@@ -614,7 +614,7 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
 	if (err)
 		ecode = caam_jr_strstatus(jrdev, err);
 
-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+	ahash_unmap_ctx(jrdev, edesc, req, digestsize, dir);
 	memcpy(req->result, state->caam_ctx, digestsize);
 	kfree(edesc);
 
@@ -625,68 +625,20 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
 	req->base.complete(&req->base, ecode);
 }
 
-static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
-			    void *context)
+static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+		       void *context)
 {
-	struct ahash_request *req = context;
-	struct ahash_edesc *edesc;
-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
-	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
-	struct caam_hash_state *state = ahash_request_ctx(req);
-	int digestsize = crypto_ahash_digestsize(ahash);
-	int ecode = 0;
-
-	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
-	edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
-	if (err)
-		ecode = caam_jr_strstatus(jrdev, err);
-
-	ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL);
-	switch_buf(state);
-	kfree(edesc);
-
-	print_hex_dump_debug("ctx@"__stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
-			     ctx->ctx_len, 1);
-	if (req->result)
-		print_hex_dump_debug("result@"__stringify(__LINE__)": ",
-				     DUMP_PREFIX_ADDRESS, 16, 4, req->result,
-				     digestsize, 1);
-
-	req->base.complete(&req->base, ecode);
+	ahash_done_cpy(jrdev, desc, err, context, DMA_FROM_DEVICE);
 }
 
 static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
 			       void *context)
 {
-	struct ahash_request *req = context;
-	struct ahash_edesc *edesc;
-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
-	int digestsize = crypto_ahash_digestsize(ahash);
-	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
-	int ecode = 0;
-
-	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
-	edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
-	if (err)
-		ecode = caam_jr_strstatus(jrdev, err);
-
-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
-	memcpy(req->result, state->caam_ctx, digestsize);
-	kfree(edesc);
-
-	print_hex_dump_debug("ctx@"__stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
-			     ctx->ctx_len, 1);
-
-	req->base.complete(&req->base, ecode);
+	ahash_done_cpy(jrdev, desc, err, context, DMA_BIDIRECTIONAL);
 }
 
-static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
-			       void *context)
+static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
+				     void *context, enum dma_data_direction dir)
 {
 	struct ahash_request *req = context;
 	struct ahash_edesc *edesc;
@@ -702,7 +654,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
 	if (err)
 		ecode = caam_jr_strstatus(jrdev, err);
 
-	ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_FROM_DEVICE);
+	ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, dir);
 	switch_buf(state);
 	kfree(edesc);
 
@@ -717,6 +669,18 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
 	req->base.complete(&req->base, ecode);
 }
 
+static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
+			  void *context)
+{
+	ahash_done_switch(jrdev, desc, err, context, DMA_BIDIRECTIONAL);
+}
+
+static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
+			       void *context)
+{
+	ahash_done_switch(jrdev, desc, err, context, DMA_FROM_DEVICE);
+}
+
 /*
  * Allocate an enhanced descriptor, which contains the hardware descriptor
  * and space for hardware scatter table containing sg_num entries.
-- 
2.1.0



* [PATCH 04/12] crypto: caam - refactor ahash_edesc_alloc
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (2 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 03/12] crypto: caam - refactor ahash_done callbacks Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-19 15:05   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 05/12] crypto: caam - refactor RSA private key _done callbacks Iuliana Prodan
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Change the parameters of the ahash_edesc_alloc function:
- remove flags, since they can be computed in
ahash_edesc_alloc, the only place they are needed;
- pass ahash_request instead of caam_hash_ctx, which
can be obtained from the request.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caamhash.c | 62 +++++++++++++++---------------------------
 1 file changed, 22 insertions(+), 40 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 3d6e978..5f9f16c 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -685,11 +685,14 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
  * Allocate an enhanced descriptor, which contains the hardware descriptor
  * and space for hardware scatter table containing sg_num entries.
  */
-static struct ahash_edesc *ahash_edesc_alloc(struct caam_hash_ctx *ctx,
+static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
 					     int sg_num, u32 *sh_desc,
-					     dma_addr_t sh_desc_dma,
-					     gfp_t flags)
+					     dma_addr_t sh_desc_dma)
 {
+	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+		       GFP_KERNEL : GFP_ATOMIC;
 	struct ahash_edesc *edesc;
 	unsigned int sg_size = sg_num * sizeof(struct sec4_sg_entry);
 
@@ -748,8 +751,6 @@ static int ahash_update_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
 	int *buflen = current_buflen(state);
 	u8 *next_buf = alt_buf(state);
@@ -805,8 +806,8 @@ static int ahash_update_ctx(struct ahash_request *req)
 		 * allocate space for base edesc and hw desc commands,
 		 * link tables
 		 */
-		edesc = ahash_edesc_alloc(ctx, pad_nents, ctx->sh_desc_update,
-					  ctx->sh_desc_update_dma, flags);
+		edesc = ahash_edesc_alloc(req, pad_nents, ctx->sh_desc_update,
+					  ctx->sh_desc_update_dma);
 		if (!edesc) {
 			dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 			return -ENOMEM;
@@ -887,8 +888,6 @@ static int ahash_final_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
 	u32 *desc;
 	int sec4_sg_bytes;
@@ -900,8 +899,8 @@ static int ahash_final_ctx(struct ahash_request *req)
 			sizeof(struct sec4_sg_entry);
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, 4, ctx->sh_desc_fin,
-				  ctx->sh_desc_fin_dma, flags);
+	edesc = ahash_edesc_alloc(req, 4, ctx->sh_desc_fin,
+				  ctx->sh_desc_fin_dma);
 	if (!edesc)
 		return -ENOMEM;
 
@@ -953,8 +952,6 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
 	u32 *desc;
 	int sec4_sg_src_index;
@@ -983,9 +980,8 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	sec4_sg_src_index = 1 + (buflen ? 1 : 0);
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
-				  ctx->sh_desc_fin, ctx->sh_desc_fin_dma,
-				  flags);
+	edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents,
+				  ctx->sh_desc_fin, ctx->sh_desc_fin_dma);
 	if (!edesc) {
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1033,8 +1029,6 @@ static int ahash_digest(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	u32 *desc;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	int src_nents, mapped_nents;
@@ -1061,9 +1055,8 @@ static int ahash_digest(struct ahash_request *req)
 	}
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ? mapped_nents : 0,
-				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma,
-				  flags);
+	edesc = ahash_edesc_alloc(req, mapped_nents > 1 ? mapped_nents : 0,
+				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma);
 	if (!edesc) {
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1110,8 +1103,6 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
 	int buflen = *current_buflen(state);
 	u32 *desc;
@@ -1120,8 +1111,8 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	int ret;
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, 0, ctx->sh_desc_digest,
-				  ctx->sh_desc_digest_dma, flags);
+	edesc = ahash_edesc_alloc(req, 0, ctx->sh_desc_digest,
+				  ctx->sh_desc_digest_dma);
 	if (!edesc)
 		return -ENOMEM;
 
@@ -1169,8 +1160,6 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
 	int *buflen = current_buflen(state);
 	int blocksize = crypto_ahash_blocksize(ahash);
@@ -1224,10 +1213,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 		 * allocate space for base edesc and hw desc commands,
 		 * link tables
 		 */
-		edesc = ahash_edesc_alloc(ctx, pad_nents,
+		edesc = ahash_edesc_alloc(req, pad_nents,
 					  ctx->sh_desc_update_first,
-					  ctx->sh_desc_update_first_dma,
-					  flags);
+					  ctx->sh_desc_update_first_dma);
 		if (!edesc) {
 			dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 			return -ENOMEM;
@@ -1304,8 +1292,6 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
 	u32 *desc;
 	int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents;
@@ -1335,9 +1321,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			 sizeof(struct sec4_sg_entry);
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
-				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma,
-				  flags);
+	edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents,
+				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma);
 	if (!edesc) {
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1390,8 +1375,6 @@ static int ahash_update_first(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *next_buf = alt_buf(state);
 	int *next_buflen = alt_buflen(state);
 	int to_hash;
@@ -1438,11 +1421,10 @@ static int ahash_update_first(struct ahash_request *req)
 		 * allocate space for base edesc and hw desc commands,
 		 * link tables
 		 */
-		edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ?
+		edesc = ahash_edesc_alloc(req, mapped_nents > 1 ?
 					  mapped_nents : 0,
 					  ctx->sh_desc_update_first,
-					  ctx->sh_desc_update_first_dma,
-					  flags);
+					  ctx->sh_desc_update_first_dma);
 		if (!edesc) {
 			dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 			return -ENOMEM;
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 05/12] crypto: caam - refactor RSA private key _done callbacks
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (3 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 04/12] crypto: caam - refactor ahash_edesc_alloc Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-19 15:06   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function Iuliana Prodan
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Create a common rsa_priv_f_done function which, based on the
private key form, calls the specific unmap function.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caampkc.c | 61 +++++++++++++------------------------------
 1 file changed, 18 insertions(+), 43 deletions(-)

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 6619c51..ebf1677 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -132,29 +132,13 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
 	akcipher_request_complete(req, ecode);
 }
 
-static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
-{
-	struct akcipher_request *req = context;
-	struct rsa_edesc *edesc;
-	int ecode = 0;
-
-	if (err)
-		ecode = caam_jr_strstatus(dev, err);
-
-	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
-
-	rsa_priv_f1_unmap(dev, edesc, req);
-	rsa_io_unmap(dev, edesc, req);
-	kfree(edesc);
-
-	akcipher_request_complete(req, ecode);
-}
-
-static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
+static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
+			    void *context)
 {
 	struct akcipher_request *req = context;
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct caam_rsa_key *key = &ctx->key;
 	struct rsa_edesc *edesc;
 	int ecode = 0;
 
@@ -163,26 +147,17 @@ static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
 
 	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
 
-	rsa_priv_f2_unmap(dev, edesc, req);
-	rsa_io_unmap(dev, edesc, req);
-	kfree(edesc);
-
-	akcipher_request_complete(req, ecode);
-}
-
-static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
-{
-	struct akcipher_request *req = context;
-	struct rsa_edesc *edesc;
-	int ecode = 0;
-
-	if (err)
-		ecode = caam_jr_strstatus(dev, err);
-
-	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
+	switch (key->priv_form) {
+	case FORM1:
+		rsa_priv_f1_unmap(dev, edesc, req);
+		break;
+	case FORM2:
+		rsa_priv_f2_unmap(dev, edesc, req);
+		break;
+	case FORM3:
+		rsa_priv_f3_unmap(dev, edesc, req);
+	}
 
-	rsa_priv_f3_unmap(dev, edesc, req);
 	rsa_io_unmap(dev, edesc, req);
 	kfree(edesc);
 
@@ -691,7 +666,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f1_done, req);
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
 	if (!ret)
 		return -EINPROGRESS;
 
@@ -724,7 +699,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f2_done, req);
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
 	if (!ret)
 		return -EINPROGRESS;
 
@@ -757,7 +732,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f3_done, req);
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
 	if (!ret)
 		return -EINPROGRESS;
 
-- 
2.1.0



* [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (4 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 05/12] crypto: caam - refactor RSA private key _done callbacks Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-19 15:21   ` Horia Geanta
  2019-12-10 11:56   ` Bastian Krause
  2019-11-17 22:30 ` [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue Iuliana Prodan
                   ` (5 subsequent siblings)
  11 siblings, 2 replies; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Change the return code of the caam_jr_enqueue function to -EINPROGRESS
in case of success, -ENOSPC when the CAAM is busy (no space left in the
job ring queue), and -EIO if it cannot map the caller's descriptor.

Also, update the resource-freeing paths accordingly for each algorithm
type.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caamalg.c  | 16 ++++------------
 drivers/crypto/caam/caamhash.c | 34 +++++++++++-----------------------
 drivers/crypto/caam/caampkc.c  | 16 ++++++++--------
 drivers/crypto/caam/caamrng.c  |  4 ++--
 drivers/crypto/caam/jr.c       |  8 ++++----
 drivers/crypto/caam/key_gen.c  |  2 +-
 6 files changed, 30 insertions(+), 50 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 6e021692..21b6172 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -1417,9 +1417,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
 			     1);
 
 	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1462,9 +1460,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 
 	desc = edesc->hw_desc;
 	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1507,9 +1503,7 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 
 	desc = edesc->hw_desc;
 	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1725,9 +1719,7 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 	desc = edesc->hw_desc;
 	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);
 
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		skcipher_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 5f9f16c..baf4ab1 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -422,7 +422,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
 	init_completion(&result.completion);
 
 	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
-	if (!ret) {
+	if (ret == -EINPROGRESS) {
 		/* in progress */
 		wait_for_completion(&result.completion);
 		ret = result.err;
@@ -858,10 +858,8 @@ static int ahash_update_ctx(struct ahash_request *req)
 				     desc_bytes(desc), 1);
 
 		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
-		if (ret)
+		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
-
-		ret = -EINPROGRESS;
 	} else if (*next_buflen) {
 		scatterwalk_map_and_copy(buf + *buflen, req->src, 0,
 					 req->nbytes, 0);
@@ -936,10 +934,9 @@ static int ahash_final_ctx(struct ahash_request *req)
 			     1);
 
 	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
-	if (ret)
-		goto unmap_ctx;
+	if (ret == -EINPROGRESS)
+		return ret;
 
-	return -EINPROGRESS;
  unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
@@ -1013,10 +1010,9 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			     1);
 
 	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
-	if (ret)
-		goto unmap_ctx;
+	if (ret == -EINPROGRESS)
+		return ret;
 
-	return -EINPROGRESS;
  unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
@@ -1086,9 +1082,7 @@ static int ahash_digest(struct ahash_request *req)
 			     1);
 
 	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1138,9 +1132,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			     1);
 
 	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1258,10 +1250,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 				     desc_bytes(desc), 1);
 
 		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
-		if (ret)
+		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
 
-		ret = -EINPROGRESS;
 		state->update = ahash_update_ctx;
 		state->finup = ahash_finup_ctx;
 		state->final = ahash_final_ctx;
@@ -1353,9 +1344,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			     1);
 
 	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1452,10 +1441,9 @@ static int ahash_update_first(struct ahash_request *req)
 				     desc_bytes(desc), 1);
 
 		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
-		if (ret)
+		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
 
-		ret = -EINPROGRESS;
 		state->update = ahash_update_ctx;
 		state->finup = ahash_finup_ctx;
 		state->final = ahash_final_ctx;
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index ebf1677..7f7ea32 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -634,8 +634,8 @@ static int caam_rsa_enc(struct akcipher_request *req)
 	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
 
 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;
 
 	rsa_pub_unmap(jrdev, edesc, req);
 
@@ -667,8 +667,8 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
 
 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;
 
 	rsa_priv_f1_unmap(jrdev, edesc, req);
 
@@ -700,8 +700,8 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
 
 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;
 
 	rsa_priv_f2_unmap(jrdev, edesc, req);
 
@@ -733,8 +733,8 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
 
 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;
 
 	rsa_priv_f3_unmap(jrdev, edesc, req);
 
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index e8baaca..e3e4bf2 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -133,7 +133,7 @@ static inline int submit_job(struct caam_rng_ctx *ctx, int to_current)
 	dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf));
 	init_completion(&bd->filled);
 	err = caam_jr_enqueue(jrdev, desc, rng_done, ctx);
-	if (err)
+	if (err != -EINPROGRESS)
 		complete(&bd->filled); /* don't wait on failed job*/
 	else
 		atomic_inc(&bd->empty); /* note if pending */
@@ -153,7 +153,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
 		if (atomic_read(&bd->empty) == BUF_EMPTY) {
 			err = submit_job(ctx, 1);
 			/* if can't submit job, can't even wait */
-			if (err)
+			if (err != -EINPROGRESS)
 				return 0;
 		}
 		/* no immediate data, so exit if not waiting */
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index fc97cde..df2a050 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -324,8 +324,8 @@ void caam_jr_free(struct device *rdev)
 EXPORT_SYMBOL(caam_jr_free);
 
 /**
- * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,
- * -EBUSY if the queue is full, -EIO if it cannot map the caller's
+ * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
+ * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
  * descriptor.
  * @dev:  device of the job ring to be used. This device should have
  *        been assigned prior by caam_jr_register().
@@ -377,7 +377,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 	    CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {
 		spin_unlock_bh(&jrp->inplock);
 		dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
-		return -EBUSY;
+		return -ENOSPC;
 	}
 
 	head_entry = &jrp->entinfo[head];
@@ -414,7 +414,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 
 	spin_unlock_bh(&jrp->inplock);
 
-	return 0;
+	return -EINPROGRESS;
 }
 EXPORT_SYMBOL(caam_jr_enqueue);
 
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index 5a851dd..b0e8a49 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -108,7 +108,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
 	init_completion(&result.completion);
 
 	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
-	if (!ret) {
+	if (ret == -EINPROGRESS) {
 		/* in progress */
 		wait_for_completion(&result.completion);
 		ret = result.err;
-- 
2.1.0



* [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (5 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-19 17:55   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms Iuliana Prodan
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Add a new struct, caam_jr_request_entry, to keep information for each
request. It holds a crypto_async_request, used to determine the
request type, and a bool that records whether the request has the
backlog flag set. This struct is passed to CAAM via the enqueue
function, caam_jr_enqueue.

The newly added caam_jr_enqueue_no_bklog function enqueues a job
descriptor head for cases like caamrng, key_gen and digest_key, where
there are no backlogged requests.

This is done for later use, for the backlogging support in CAAM.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caamalg.c  | 29 +++++++++++++++++-----
 drivers/crypto/caam/caamhash.c | 56 ++++++++++++++++++++++++++++++++----------
 drivers/crypto/caam/caampkc.c  | 32 +++++++++++++++++++-----
 drivers/crypto/caam/caampkc.h  |  3 +++
 drivers/crypto/caam/caamrng.c  |  3 ++-
 drivers/crypto/caam/intern.h   | 10 ++++++++
 drivers/crypto/caam/jr.c       | 53 +++++++++++++++++++++++++++++++++------
 drivers/crypto/caam/jr.h       |  4 +++
 drivers/crypto/caam/key_gen.c  |  2 +-
 9 files changed, 158 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 21b6172..abebcfc 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -878,6 +878,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
  * @mapped_dst_nents: number of segments in output h/w link table
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
  * @sec4_sg_dma: bus physical mapped address of h/w link table
+ * @jrentry: information about the current request that is processed by a ring
  * @sec4_sg: pointer to h/w link table
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
  */
@@ -888,6 +889,7 @@ struct aead_edesc {
 	int mapped_dst_nents;
 	int sec4_sg_bytes;
 	dma_addr_t sec4_sg_dma;
+	struct caam_jr_request_entry jrentry;
 	struct sec4_sg_entry *sec4_sg;
 	u32 hw_desc[];
 };
@@ -901,6 +903,7 @@ struct aead_edesc {
  * @iv_dma: dma address of iv for checking continuity and link table
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
  * @sec4_sg_dma: bus physical mapped address of h/w link table
+ * @jrentry: information about the current request that is processed by a ring
  * @sec4_sg: pointer to h/w link table
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
  *	     and IV
@@ -913,6 +916,7 @@ struct skcipher_edesc {
 	dma_addr_t iv_dma;
 	int sec4_sg_bytes;
 	dma_addr_t sec4_sg_dma;
+	struct caam_jr_request_entry jrentry;
 	struct sec4_sg_entry *sec4_sg;
 	u32 hw_desc[0];
 };
@@ -963,7 +967,8 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc,
 static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 			    void *context)
 {
-	struct aead_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct aead_request *req = aead_request_cast(jrentry->base);
 	struct aead_edesc *edesc;
 	int ecode = 0;
 
@@ -984,7 +989,8 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 				void *context)
 {
-	struct skcipher_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct skcipher_request *req = skcipher_request_cast(jrentry->base);
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	int ivsize = crypto_skcipher_ivsize(skcipher);
@@ -1364,6 +1370,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 	edesc->mapped_dst_nents = mapped_dst_nents;
 	edesc->sec4_sg = (void *)edesc + sizeof(struct aead_edesc) +
 			 desc_bytes;
+	edesc->jrentry.base = &req->base;
+
 	*all_contig_ptr = !(mapped_src_nents > 1);
 
 	sec4_sg_index = 0;
@@ -1416,7 +1424,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
 	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
@@ -1440,6 +1448,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
+	struct caam_jr_request_entry *jrentry;
 	struct device *jrdev = ctx->jrdev;
 	bool all_contig;
 	u32 *desc;
@@ -1459,7 +1468,9 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
@@ -1484,6 +1495,7 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_jr_request_entry *jrentry;
 	bool all_contig;
 	u32 *desc;
 	int ret = 0;
@@ -1502,7 +1514,9 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
@@ -1637,6 +1651,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 	edesc->sec4_sg_bytes = sec4_sg_bytes;
 	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
 						  desc_bytes);
+	edesc->jrentry.base = &req->base;
 
 	/* Make sure IV is located in a DMAable area */
 	if (ivsize) {
@@ -1698,6 +1713,7 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_jr_request_entry *jrentry;
 	u32 *desc;
 	int ret = 0;
 
@@ -1717,8 +1733,9 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);
+	jrentry = &edesc->jrentry;
 
+	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		skcipher_unmap(jrdev, edesc, req);
 		kfree(edesc);
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index baf4ab1..d9de3dc 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -421,7 +421,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
 	result.err = 0;
 	init_completion(&result.completion);
 
-	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
+	ret = caam_jr_enqueue_no_bklog(jrdev, desc, split_key_done, &result);
 	if (ret == -EINPROGRESS) {
 		/* in progress */
 		wait_for_completion(&result.completion);
@@ -553,6 +553,7 @@ static int acmac_setkey(struct crypto_ahash *ahash, const u8 *key,
  * @sec4_sg_dma: physical mapped address of h/w link table
  * @src_nents: number of segments in input scatterlist
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
+ * @jrentry:  information about the current request that is processed by a ring
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
  * @sec4_sg: h/w link table
  */
@@ -560,6 +561,7 @@ struct ahash_edesc {
 	dma_addr_t sec4_sg_dma;
 	int src_nents;
 	int sec4_sg_bytes;
+	struct caam_jr_request_entry jrentry;
 	u32 hw_desc[DESC_JOB_IO_LEN_MAX / sizeof(u32)] ____cacheline_aligned;
 	struct sec4_sg_entry sec4_sg[0];
 };
@@ -600,7 +602,8 @@ static inline void ahash_unmap_ctx(struct device *dev,
 static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
 				  void *context, enum dma_data_direction dir)
 {
-	struct ahash_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct ahash_request *req = ahash_request_cast(jrentry->base);
 	struct ahash_edesc *edesc;
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	int digestsize = crypto_ahash_digestsize(ahash);
@@ -640,7 +643,8 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
 static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
 				     void *context, enum dma_data_direction dir)
 {
-	struct ahash_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct ahash_request *req = ahash_request_cast(jrentry->base);
 	struct ahash_edesc *edesc;
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
@@ -702,6 +706,8 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
 		return NULL;
 	}
 
+	edesc->jrentry.base = &req->base;
+
 	init_job_desc_shared(edesc->hw_desc, sh_desc_dma, desc_len(sh_desc),
 			     HDR_SHARE_DEFER | HDR_REVERSE);
 
@@ -760,6 +766,7 @@ static int ahash_update_ctx(struct ahash_request *req)
 	u32 *desc;
 	int src_nents, mapped_nents, sec4_sg_bytes, sec4_sg_src_index;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret = 0;
 
 	last_buflen = *next_buflen;
@@ -857,7 +864,9 @@ static int ahash_update_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
+		jrentry = &edesc->jrentry;
+
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, jrentry);
 		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
 	} else if (*next_buflen) {
@@ -891,6 +900,7 @@ static int ahash_final_ctx(struct ahash_request *req)
 	int sec4_sg_bytes;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	sec4_sg_bytes = pad_sg_nents(1 + (buflen ? 1 : 0)) *
@@ -933,11 +943,13 @@ static int ahash_final_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;
 
- unmap_ctx:
+unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
 	return ret;
@@ -955,6 +967,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	int src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1009,11 +1022,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;
 
- unmap_ctx:
+unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
 	return ret;
@@ -1029,6 +1044,7 @@ static int ahash_digest(struct ahash_request *req)
 	int digestsize = crypto_ahash_digestsize(ahash);
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	state->buf_dma = 0;
@@ -1081,7 +1097,9 @@ static int ahash_digest(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
@@ -1102,6 +1120,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	u32 *desc;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	/* allocate space for base edesc and hw desc commands, link tables */
@@ -1131,7 +1150,9 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
@@ -1160,6 +1181,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	int in_len = *buflen + req->nbytes, to_hash;
 	int sec4_sg_bytes, src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	u32 *desc;
 	int ret = 0;
 
@@ -1249,7 +1271,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+		jrentry = &edesc->jrentry;
+
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry);
 		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
 
@@ -1288,6 +1312,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1343,7 +1368,9 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
@@ -1371,6 +1398,7 @@ static int ahash_update_first(struct ahash_request *req)
 	u32 *desc;
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret = 0;
 
 	*next_buflen = req->nbytes & (blocksize - 1);
@@ -1440,7 +1468,9 @@ static int ahash_update_first(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+		jrentry = &edesc->jrentry;
+
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry);
 		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
 
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 7f7ea32..bb0e4b9 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -116,7 +116,8 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
 /* RSA Job Completion handler */
 static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
 {
-	struct akcipher_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct akcipher_request *req = akcipher_request_cast(jrentry->base);
 	struct rsa_edesc *edesc;
 	int ecode = 0;
 
@@ -135,7 +136,8 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
 static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
 			    void *context)
 {
-	struct akcipher_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct akcipher_request *req = akcipher_request_cast(jrentry->base);
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
@@ -315,6 +317,8 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
 	edesc->mapped_src_nents = mapped_src_nents;
 	edesc->mapped_dst_nents = mapped_dst_nents;
 
+	edesc->jrentry.base = &req->base;
+
 	edesc->sec4_sg_dma = dma_map_single(dev, edesc->sec4_sg,
 					    sec4_sg_bytes, DMA_TO_DEVICE);
 	if (dma_mapping_error(dev, edesc->sec4_sg_dma)) {
@@ -609,6 +613,7 @@ static int caam_rsa_enc(struct akcipher_request *req)
 	struct caam_rsa_key *key = &ctx->key;
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	if (unlikely(!key->n || !key->e))
@@ -633,7 +638,10 @@ static int caam_rsa_enc(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
+	jrentry = &edesc->jrentry;
+	jrentry->base = &req->base;
+
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;
 
@@ -651,6 +659,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	/* Allocate extended descriptor */
@@ -666,7 +675,10 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
+	jrentry = &edesc->jrentry;
+	jrentry->base = &req->base;
+
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;
 
@@ -684,6 +696,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	/* Allocate extended descriptor */
@@ -699,7 +712,10 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
+	jrentry = &edesc->jrentry;
+	jrentry->base = &req->base;
+
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;
 
@@ -717,6 +733,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	/* Allocate extended descriptor */
@@ -732,7 +749,10 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
+	jrentry = &edesc->jrentry;
+	jrentry->base = &req->base;
+
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;
 
diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h
index c68fb4c..fe46d73 100644
--- a/drivers/crypto/caam/caampkc.h
+++ b/drivers/crypto/caam/caampkc.h
@@ -11,6 +11,7 @@
 #ifndef _PKC_DESC_H_
 #define _PKC_DESC_H_
 #include "compat.h"
+#include "intern.h"
 #include "pdb.h"
 
 /**
@@ -118,6 +119,7 @@ struct caam_rsa_req_ctx {
  * @mapped_dst_nents: number of segments in output h/w link table
  * @sec4_sg_bytes : length of h/w link table
  * @sec4_sg_dma   : dma address of h/w link table
+ * @jrentry       : info about the current request that is processed by a ring
  * @sec4_sg       : pointer to h/w link table
  * @pdb           : specific RSA Protocol Data Block (PDB)
  * @hw_desc       : descriptor followed by link tables if any
@@ -129,6 +131,7 @@ struct rsa_edesc {
 	int mapped_dst_nents;
 	int sec4_sg_bytes;
 	dma_addr_t sec4_sg_dma;
+	struct caam_jr_request_entry jrentry;
 	struct sec4_sg_entry *sec4_sg;
 	union {
 		struct rsa_pub_pdb pub;
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index e3e4bf2..96891b6 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -132,7 +132,8 @@ static inline int submit_job(struct caam_rng_ctx *ctx, int to_current)
 
 	dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf));
 	init_completion(&bd->filled);
-	err = caam_jr_enqueue(jrdev, desc, rng_done, ctx);
+
+	err = caam_jr_enqueue_no_bklog(jrdev, desc, rng_done, ctx);
 	if (err != -EINPROGRESS)
 		complete(&bd->filled); /* don't wait on failed job*/
 	else
diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
index c7c10c9..58be66c 100644
--- a/drivers/crypto/caam/intern.h
+++ b/drivers/crypto/caam/intern.h
@@ -11,6 +11,7 @@
 #define INTERN_H
 
 #include "ctrl.h"
+#include "regs.h"
 
 /* Currently comes from Kconfig param as a ^2 (driver-required) */
 #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE)
@@ -104,6 +105,15 @@ struct caam_drv_private {
 #endif
 };
 
+/*
+ * Storage for tracking each request that is processed by a ring
+ */
+struct caam_jr_request_entry {
+	/* Common attributes for async crypto requests */
+	struct crypto_async_request *base;
+	bool bklog;	/* Stored to determine if the request needs backlog */
+};
+
 #ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API
 
 int caam_algapi_init(struct device *dev);
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index df2a050..544cafa 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -324,9 +324,9 @@ void caam_jr_free(struct device *rdev)
 EXPORT_SYMBOL(caam_jr_free);
 
 /**
- * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
- * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
- * descriptor.
+ * caam_jr_enqueue_no_bklog() - Enqueue a job descriptor head for no
+ * backlogging requests. Returns -EINPROGRESS if OK, -ENOSPC if the queue
+ * is full, -EIO if it cannot map the caller's descriptor.
  * @dev:  device of the job ring to be used. This device should have
  *        been assigned prior by caam_jr_register().
  * @desc: points to a job descriptor that execute our request. All
@@ -351,10 +351,10 @@ EXPORT_SYMBOL(caam_jr_free);
  * @areq: optional pointer to a user argument for use at callback
  *        time.
  **/
-int caam_jr_enqueue(struct device *dev, u32 *desc,
-		    void (*cbk)(struct device *dev, u32 *desc,
-				u32 status, void *areq),
-		    void *areq)
+int caam_jr_enqueue_no_bklog(struct device *dev, u32 *desc,
+			     void (*cbk)(struct device *dev, u32 *desc,
+					 u32 status, void *areq),
+			     void *areq)
 {
 	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
 	struct caam_jrentry_info *head_entry;
@@ -416,6 +416,45 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 
 	return -EINPROGRESS;
 }
+EXPORT_SYMBOL(caam_jr_enqueue_no_bklog);
+
+/**
+ * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
+ * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
+ * descriptor.
+ * @dev:  device of the job ring to be used. This device should have
+ *        been assigned prior by caam_jr_register().
+ * @desc: points to a job descriptor that execute our request. All
+ *        descriptors (and all referenced data) must be in a DMAable
+ *        region, and all data references must be physical addresses
+ *        accessible to CAAM (i.e. within a PAMU window granted
+ *        to it).
+ * @cbk:  pointer to a callback function to be invoked upon completion
+ *        of this request. This has the form:
+ *        callback(struct device *dev, u32 *desc, u32 stat, void *arg)
+ *        where:
+ *        @dev:    contains the job ring device that processed this
+ *                 response.
+ *        @desc:   descriptor that initiated the request, same as
+ *                 "desc" being argued to caam_jr_enqueue().
+ *        @status: untranslated status received from CAAM. See the
+ *                 reference manual for a detailed description of
+ *                 error meaning, or see the JRSTA definitions in the
+ *                 register header file
+ *        @areq:   optional pointer to an argument passed with the
+ *                 original request
+ * @areq: optional pointer to a user argument for use at callback
+ *        time.
+ **/
+int caam_jr_enqueue(struct device *dev, u32 *desc,
+		    void (*cbk)(struct device *dev, u32 *desc,
+				u32 status, void *areq),
+		    void *areq)
+{
+	struct caam_jr_request_entry *jrentry = areq;
+
+	return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry);
+}
 EXPORT_SYMBOL(caam_jr_enqueue);
 
 /*
diff --git a/drivers/crypto/caam/jr.h b/drivers/crypto/caam/jr.h
index eab6115..c47a0cd 100644
--- a/drivers/crypto/caam/jr.h
+++ b/drivers/crypto/caam/jr.h
@@ -11,6 +11,10 @@
 /* Prototypes for backend-level services exposed to APIs */
 struct device *caam_jr_alloc(void);
 void caam_jr_free(struct device *rdev);
+int caam_jr_enqueue_no_bklog(struct device *dev, u32 *desc,
+			     void (*cbk)(struct device *dev, u32 *desc,
+					 u32 status, void *areq),
+			     void *areq);
 int caam_jr_enqueue(struct device *dev, u32 *desc,
 		    void (*cbk)(struct device *dev, u32 *desc, u32 status,
 				void *areq),
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index b0e8a49..854e718 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -107,7 +107,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
 	result.err = 0;
 	init_completion(&result.completion);
 
-	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
+	ret = caam_jr_enqueue_no_bklog(jrdev, desc, split_key_done, &result);
 	if (ret == -EINPROGRESS) {
 		/* in progress */
 		wait_for_completion(&result.completion);
-- 
2.1.0



* [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (6 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-21 11:46   ` Horia Geanta
                     ` (2 more replies)
  2019-11-17 22:30 ` [PATCH 09/12] crypto: caam - bypass crypto-engine sw queue, if empty Iuliana Prodan
                   ` (3 subsequent siblings)
  11 siblings, 3 replies; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Integrate crypto_engine into CAAM, to make use of the engine queue.
Add support for SKCIPHER algorithms.

This is intended to be used for CAAM backlogging support.
Requests with the backlog flag (e.g. from dm-crypt) will be listed
in the crypto-engine queue and processed by CAAM when it is free.
This changes the return codes of caam_jr_enqueue:
-EINPROGRESS if OK, -EBUSY if the request is backlogged,
-ENOSPC if the queue is full, -EIO if it cannot map the caller's
descriptor, and -EINVAL if the crypto_tfm is not supported by
crypto_engine.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Signed-off-by: Franck LENORMAND <franck.lenormand@nxp.com>
---
 drivers/crypto/caam/Kconfig   |  1 +
 drivers/crypto/caam/caamalg.c | 84 +++++++++++++++++++++++++++++++++++--------
 drivers/crypto/caam/intern.h  |  2 ++
 drivers/crypto/caam/jr.c      | 51 ++++++++++++++++++++++++--
 4 files changed, 122 insertions(+), 16 deletions(-)

diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
index 87053e4..1930e19 100644
--- a/drivers/crypto/caam/Kconfig
+++ b/drivers/crypto/caam/Kconfig
@@ -33,6 +33,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG
 
 menuconfig CRYPTO_DEV_FSL_CAAM_JR
 	tristate "Freescale CAAM Job Ring driver backend"
+	select CRYPTO_ENGINE
 	default y
 	help
 	  Enables the driver module for Job Rings which are part of
diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index abebcfc..23de94d 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -56,6 +56,7 @@
 #include "sg_sw_sec4.h"
 #include "key_gen.h"
 #include "caamalg_desc.h"
+#include <crypto/engine.h>
 
 /*
  * crypto alg
@@ -101,6 +102,7 @@ struct caam_skcipher_alg {
  * per-session context
  */
 struct caam_ctx {
+	struct crypto_engine_ctx enginectx;
 	u32 sh_desc_enc[DESC_MAX_USED_LEN];
 	u32 sh_desc_dec[DESC_MAX_USED_LEN];
 	u8 key[CAAM_MAX_KEY_SIZE];
@@ -114,6 +116,12 @@ struct caam_ctx {
 	unsigned int authsize;
 };
 
+struct caam_skcipher_req_ctx {
+	struct skcipher_edesc *edesc;
+	void (*skcipher_op_done)(struct device *jrdev, u32 *desc, u32 err,
+				 void *context);
+};
+
 static int aead_null_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
@@ -992,13 +1000,15 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 	struct caam_jr_request_entry *jrentry = context;
 	struct skcipher_request *req = skcipher_request_cast(jrentry->base);
 	struct skcipher_edesc *edesc;
+	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+	struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
 	int ivsize = crypto_skcipher_ivsize(skcipher);
 	int ecode = 0;
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
-	edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]);
+	edesc = rctx->edesc;
 	if (err)
 		ecode = caam_jr_strstatus(jrdev, err);
 
@@ -1024,7 +1034,14 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 
 	kfree(edesc);
 
-	skcipher_request_complete(req, ecode);
+	/*
+	 * If no backlog flag, the completion of the request is done
+	 * by CAAM, not crypto engine.
+	 */
+	if (!jrentry->bklog)
+		skcipher_request_complete(req, ecode);
+	else
+		crypto_finalize_skcipher_request(jrp->engine, req, ecode);
 }
 
 /*
@@ -1553,6 +1570,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
+	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
@@ -1653,6 +1671,9 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 						  desc_bytes);
 	edesc->jrentry.base = &req->base;
 
+	rctx->edesc = edesc;
+	rctx->skcipher_op_done = skcipher_crypt_done;
+
 	/* Make sure IV is located in a DMAable area */
 	if (ivsize) {
 		iv = (u8 *)edesc->sec4_sg + sec4_sg_bytes;
@@ -1707,13 +1728,37 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 	return edesc;
 }
 
+static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
+{
+	struct skcipher_request *req = skcipher_request_cast(areq);
+	struct caam_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
+	struct caam_jr_request_entry *jrentry;
+	u32 *desc = rctx->edesc->hw_desc;
+	int ret;
+
+	jrentry = &rctx->edesc->jrentry;
+	jrentry->bklog = true;
+
+	ret = caam_jr_enqueue_no_bklog(ctx->jrdev, desc,
+				       rctx->skcipher_op_done, jrentry);
+
+	if (ret != -EINPROGRESS) {
+		skcipher_unmap(ctx->jrdev, rctx->edesc, req);
+		kfree(rctx->edesc);
+	} else {
+		ret = 0;
+	}
+
+	return ret;
+}
+
 static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 {
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
 	struct device *jrdev = ctx->jrdev;
-	struct caam_jr_request_entry *jrentry;
 	u32 *desc;
 	int ret = 0;
 
@@ -1727,16 +1772,15 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 
 	/* Create and submit job descriptor*/
 	init_skcipher_job(req, edesc, encrypt);
+	desc = edesc->hw_desc;
 
 	print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
-			     desc_bytes(edesc->hw_desc), 1);
-
-	desc = edesc->hw_desc;
-	jrentry = &edesc->jrentry;
+			     DUMP_PREFIX_ADDRESS, 16, 4, desc,
+			     desc_bytes(desc), 1);
 
-	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done,
+			      &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		skcipher_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -3272,7 +3316,9 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
 
 	dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_enc,
 					offsetof(struct caam_ctx,
-						 sh_desc_enc_dma),
+						 sh_desc_enc_dma) -
+					offsetof(struct caam_ctx,
+						 sh_desc_enc),
 					ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (dma_mapping_error(ctx->jrdev, dma_addr)) {
 		dev_err(ctx->jrdev, "unable to map key, shared descriptors\n");
@@ -3282,8 +3328,12 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
 
 	ctx->sh_desc_enc_dma = dma_addr;
 	ctx->sh_desc_dec_dma = dma_addr + offsetof(struct caam_ctx,
-						   sh_desc_dec);
-	ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key);
+						   sh_desc_dec) -
+					offsetof(struct caam_ctx,
+						 sh_desc_enc);
+	ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key) -
+					offsetof(struct caam_ctx,
+						 sh_desc_enc);
 
 	/* copy descriptor header template value */
 	ctx->cdata.algtype = OP_TYPE_CLASS1_ALG | caam->class1_alg_type;
@@ -3297,6 +3347,11 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct caam_skcipher_alg *caam_alg =
 		container_of(alg, typeof(*caam_alg), skcipher);
+	struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx));
+
+	ctx->enginectx.op.do_one_request = skcipher_do_one_req;
 
 	return caam_init_common(crypto_skcipher_ctx(tfm), &caam_alg->caam,
 				false);
@@ -3315,7 +3370,8 @@ static int caam_aead_init(struct crypto_aead *tfm)
 static void caam_exit_common(struct caam_ctx *ctx)
 {
 	dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_enc_dma,
-			       offsetof(struct caam_ctx, sh_desc_enc_dma),
+			       offsetof(struct caam_ctx, sh_desc_enc_dma) -
+			       offsetof(struct caam_ctx, sh_desc_enc),
 			       ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
 	caam_jr_free(ctx->jrdev);
 }
diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
index 58be66c..31abb94 100644
--- a/drivers/crypto/caam/intern.h
+++ b/drivers/crypto/caam/intern.h
@@ -12,6 +12,7 @@
 
 #include "ctrl.h"
 #include "regs.h"
+#include <crypto/engine.h>
 
 /* Currently comes from Kconfig param as a ^2 (driver-required) */
 #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE)
@@ -61,6 +62,7 @@ struct caam_drv_private_jr {
 	int out_ring_read_index;	/* Output index "tail" */
 	int tail;			/* entinfo (s/w ring) tail index */
 	void *outring;			/* Base of output ring, DMA-safe */
+	struct crypto_engine *engine;
 };
 
 /*
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 544cafa..5c55d3d 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -62,6 +62,15 @@ static void unregister_algs(void)
 	mutex_unlock(&algs_lock);
 }
 
+static void caam_jr_crypto_engine_exit(void *data)
+{
+	struct device *jrdev = data;
+	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
+
+	/* Free the resources of crypto-engine */
+	crypto_engine_exit(jrpriv->engine);
+}
+
 static int caam_reset_hw_jr(struct device *dev)
 {
 	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
@@ -418,10 +427,23 @@ int caam_jr_enqueue_no_bklog(struct device *dev, u32 *desc,
 }
 EXPORT_SYMBOL(caam_jr_enqueue_no_bklog);
 
+static int transfer_request_to_engine(struct crypto_engine *engine,
+				      struct crypto_async_request *req)
+{
+	switch (crypto_tfm_alg_type(req->tfm)) {
+	case CRYPTO_ALG_TYPE_SKCIPHER:
+		return crypto_transfer_skcipher_request_to_engine(engine,
+								  skcipher_request_cast(req));
+	default:
+		return -EINVAL;
+	}
+}
+
 /**
  * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
- * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
- * descriptor.
+ * if OK, -EBUSY if request is backlogged, -ENOSPC if the queue is full,
+ * -EIO if it cannot map the caller's descriptor, -EINVAL if crypto_tfm
+ * not supported by crypto_engine.
  * @dev:  device of the job ring to be used. This device should have
  *        been assigned prior by caam_jr_register().
  * @desc: points to a job descriptor that execute our request. All
@@ -451,7 +473,12 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 				u32 status, void *areq),
 		    void *areq)
 {
+	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(dev);
 	struct caam_jr_request_entry *jrentry = areq;
+	struct crypto_async_request *req = jrentry->base;
+
+	if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+		return transfer_request_to_engine(jrpriv->engine, req);
 
 	return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry);
 }
@@ -577,6 +604,26 @@ static int caam_jr_probe(struct platform_device *pdev)
 		return error;
 	}
 
+	/* Initialize crypto engine */
+	jrpriv->engine = crypto_engine_alloc_init(jrdev, false);
+	if (!jrpriv->engine) {
+		dev_err(jrdev, "Could not init crypto-engine\n");
+		return -ENOMEM;
+	}
+
+	/* Start crypto engine */
+	error = crypto_engine_start(jrpriv->engine);
+	if (error) {
+		dev_err(jrdev, "Could not start crypto-engine\n");
+		crypto_engine_exit(jrpriv->engine);
+		return error;
+	}
+
+	error = devm_add_action_or_reset(jrdev, caam_jr_crypto_engine_exit,
+					 jrdev);
+	if (error)
+		return error;
+
 	/* Identify the interrupt */
 	jrpriv->irq = irq_of_parse_and_map(nprop, 0);
 	if (!jrpriv->irq) {
-- 
2.1.0



* [PATCH 09/12] crypto: caam - bypass crypto-engine sw queue, if empty
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (7 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-21 11:53   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 10/12] crypto: caam - add crypto_engine support for AEAD algorithms Iuliana Prodan
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Bypass the crypto-engine software queue when it is empty and send the
request directly to hardware. If the hardware returns -ENOSPC, transfer
the request to crypto-engine and let it handle the backlogging.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/jr.c | 29 ++++++++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 5c55d3d..ddf3d39 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -476,10 +476,33 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(dev);
 	struct caam_jr_request_entry *jrentry = areq;
 	struct crypto_async_request *req = jrentry->base;
+	int ret;
 
-	if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
-		return transfer_request_to_engine(jrpriv->engine, req);
-
+	if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		if (crypto_queue_len(&jrpriv->engine->queue) == 0) {
+			/*
+			 * send the request to CAAM, if crypto-engine queue
+			 * is empty
+			 */
+			ret = caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry);
+			if (ret == -ENOSPC)
+				/*
+				 * CAAM has no space, so transfer the request
+				 * to crypto-engine
+				 */
+				return transfer_request_to_engine(jrpriv->engine,
+								  req);
+			else
+				return ret;
+		} else {
+			/*
+			 * crypto-engine queue is not empty, so transfer the
+			 * request to crypto-engine, to keep the order
+			 * of requests
+			 */
+			return transfer_request_to_engine(jrpriv->engine, req);
+		}
+	}
 	return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry);
 }
 EXPORT_SYMBOL(caam_jr_enqueue);
-- 
2.1.0



* [PATCH 10/12] crypto: caam - add crypto_engine support for AEAD algorithms
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (8 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 09/12] crypto: caam - bypass crypto-engine sw queue, if empty Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-21 16:46   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 11/12] crypto: caam - add crypto_engine support for RSA algorithms Iuliana Prodan
  2019-11-17 22:30 ` [PATCH 12/12] crypto: caam - add crypto_engine support for HASH algorithms Iuliana Prodan
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Add crypto_engine support for AEAD algorithms, to make use of
the engine queue.
Requests with the backlog flag will be listed in the crypto-engine
queue and processed by CAAM when it is free. If the queue is empty,
the request is sent directly to CAAM.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caamalg.c | 80 +++++++++++++++++++++++++++++++++----------
 drivers/crypto/caam/jr.c      |  3 ++
 2 files changed, 64 insertions(+), 19 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 23de94d..786713a 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -122,6 +122,12 @@ struct caam_skcipher_req_ctx {
 				 void *context);
 };
 
+struct caam_aead_req_ctx {
+	struct aead_edesc *edesc;
+	void (*aead_op_done)(struct device *jrdev, u32 *desc, u32 err,
+			     void *context);
+};
+
 static int aead_null_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
@@ -977,12 +983,14 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 {
 	struct caam_jr_request_entry *jrentry = context;
 	struct aead_request *req = aead_request_cast(jrentry->base);
+	struct caam_aead_req_ctx *rctx = aead_request_ctx(req);
+	struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
 	struct aead_edesc *edesc;
 	int ecode = 0;
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
-	edesc = container_of(desc, struct aead_edesc, hw_desc[0]);
+	edesc = rctx->edesc;
 
 	if (err)
 		ecode = caam_jr_strstatus(jrdev, err);
@@ -991,7 +999,14 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 
 	kfree(edesc);
 
-	aead_request_complete(req, ecode);
+	/*
+	 * If no backlog flag, the completion of the request is done
+	 * by CAAM, not crypto engine.
+	 */
+	if (!jrentry->bklog)
+		aead_request_complete(req, ecode);
+	else
+		crypto_finalize_aead_request(jrp->engine, req, ecode);
 }
 
 static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
@@ -1287,6 +1302,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_aead_req_ctx *rctx = aead_request_ctx(req);
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
@@ -1389,6 +1405,9 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 			 desc_bytes;
 	edesc->jrentry.base = &req->base;
 
+	rctx->edesc = edesc;
+	rctx->aead_op_done = aead_crypt_done;
+
 	*all_contig_ptr = !(mapped_src_nents > 1);
 
 	sec4_sg_index = 0;
@@ -1442,7 +1461,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
 			     1);
 
 	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
-	if (ret != -EINPROGRESS) {
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1465,7 +1484,6 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct caam_jr_request_entry *jrentry;
 	struct device *jrdev = ctx->jrdev;
 	bool all_contig;
 	u32 *desc;
@@ -1479,16 +1497,14 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 
 	/* Create and submit job descriptor */
 	init_authenc_job(req, edesc, all_contig, encrypt);
+	desc = edesc->hw_desc;
 
 	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
-			     desc_bytes(edesc->hw_desc), 1);
-
-	desc = edesc->hw_desc;
-	jrentry = &edesc->jrentry;
+			     DUMP_PREFIX_ADDRESS, 16, 4, desc,
+			     desc_bytes(desc), 1);
 
-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1506,13 +1522,37 @@ static int aead_decrypt(struct aead_request *req)
 	return aead_crypt(req, false);
 }
 
+static int aead_do_one_req(struct crypto_engine *engine, void *areq)
+{
+	struct aead_request *req = aead_request_cast(areq);
+	struct caam_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct caam_aead_req_ctx *rctx = aead_request_ctx(req);
+	struct caam_jr_request_entry *jrentry;
+	u32 *desc = rctx->edesc->hw_desc;
+	int ret;
+
+	jrentry = &rctx->edesc->jrentry;
+	jrentry->bklog = true;
+
+	ret = caam_jr_enqueue_no_bklog(ctx->jrdev, desc, rctx->aead_op_done,
+				       jrentry);
+
+	if (ret != -EINPROGRESS) {
+		aead_unmap(ctx->jrdev, rctx->edesc, req);
+		kfree(rctx->edesc);
+	} else {
+		ret = 0;
+	}
+
+	return ret;
+}
+
 static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 {
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
-	struct caam_jr_request_entry *jrentry;
 	bool all_contig;
 	u32 *desc;
 	int ret = 0;
@@ -1525,16 +1565,14 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 
 	/* Create and submit job descriptor */
 	init_gcm_job(req, edesc, all_contig, encrypt);
+	desc = edesc->hw_desc;
 
 	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
-			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
-			     desc_bytes(edesc->hw_desc), 1);
-
-	desc = edesc->hw_desc;
-	jrentry = &edesc->jrentry;
+			     DUMP_PREFIX_ADDRESS, 16, 4, desc,
+			     desc_bytes(desc), 1);
 
-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -3364,6 +3402,10 @@ static int caam_aead_init(struct crypto_aead *tfm)
 		 container_of(alg, struct caam_aead_alg, aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(tfm);
 
+	crypto_aead_set_reqsize(tfm, sizeof(struct caam_aead_req_ctx));
+
+	ctx->enginectx.op.do_one_request = aead_do_one_req;
+
 	return caam_init_common(ctx, &caam_alg->caam, !caam_alg->caam.nodkp);
 }
 
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index ddf3d39..7e6632d 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -434,6 +434,9 @@ static int transfer_request_to_engine(struct crypto_engine *engine,
 	case CRYPTO_ALG_TYPE_SKCIPHER:
 		return crypto_transfer_skcipher_request_to_engine(engine,
 								  skcipher_request_cast(req));
+	case CRYPTO_ALG_TYPE_AEAD:
+		return crypto_transfer_aead_request_to_engine(engine,
+							      aead_request_cast(req));
 	default:
 		return -EINVAL;
 	}
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

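The `aead_do_one_req()` callback above (mirrored later in the series for RSA and HASH) always marks the job-ring entry as backlogged, submits directly with `caam_jr_enqueue_no_bklog()`, and converts `-EINPROGRESS` into 0 so crypto-engine keeps the request in flight until the completion callback finalizes it. A reduced user-space model of that control flow (all names, the ring size, and the failure code are hypothetical stubs, not the kernel API):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define RING_SLOTS 2

struct jr_request { bool bklog; bool freed; };

static int ring_used;

/* stand-in for caam_jr_enqueue_no_bklog() */
static int enqueue_no_bklog(struct jr_request *req)
{
	if (ring_used >= RING_SLOTS)
		return -EIO;          /* submission failed outright */
	ring_used++;
	return -EINPROGRESS;
}

/* stand-in for {aead,akcipher,ahash}_do_one_req() */
static int do_one_req(struct jr_request *req)
{
	int ret;

	req->bklog = true;            /* completion routes via crypto-engine */
	ret = enqueue_no_bklog(req);
	if (ret != -EINPROGRESS) {
		req->freed = true;    /* unmap + kfree on the error path */
		return ret;
	}
	return 0;                     /* engine treats 0 as "accepted" */
}
```

Returning 0 rather than `-EINPROGRESS` matters here: crypto-engine interprets 0 as "request accepted and in flight", while any other value makes it complete the request immediately.
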
* [PATCH 11/12] crypto: caam - add crypto_engine support for RSA algorithms
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (9 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 10/12] crypto: caam - add crypto_engine support for AEAD algorithms Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-21 16:53   ` Horia Geanta
  2019-11-17 22:30 ` [PATCH 12/12] crypto: caam - add crypto_engine support for HASH algorithms Iuliana Prodan
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Add crypto_engine support for RSA algorithms, to make use of
the engine queue.
Requests with the backlog flag set are placed on the crypto-engine
queue and processed by CAAM once it is free. If the queue is empty,
the request is sent directly to CAAM.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caampkc.c | 124 ++++++++++++++++++++++++++++++------------
 drivers/crypto/caam/caampkc.h |   8 +++
 drivers/crypto/caam/jr.c      |   3 +
 3 files changed, 101 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index bb0e4b9..8ffce06 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -118,19 +118,28 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
 {
 	struct caam_jr_request_entry *jrentry = context;
 	struct akcipher_request *req = akcipher_request_cast(jrentry->base);
+	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
+	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
 	struct rsa_edesc *edesc;
 	int ecode = 0;
 
 	if (err)
 		ecode = caam_jr_strstatus(dev, err);
 
-	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
+	edesc = req_ctx->edesc;
 
 	rsa_pub_unmap(dev, edesc, req);
 	rsa_io_unmap(dev, edesc, req);
 	kfree(edesc);
 
-	akcipher_request_complete(req, ecode);
+	/*
+	 * If no backlog flag, the completion of the request is done
+	 * by CAAM, not crypto engine.
+	 */
+	if (!jrentry->bklog)
+		akcipher_request_complete(req, ecode);
+	else
+		crypto_finalize_akcipher_request(jrp->engine, req, ecode);
 }
 
 static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
@@ -139,15 +148,17 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
 	struct caam_jr_request_entry *jrentry = context;
 	struct akcipher_request *req = akcipher_request_cast(jrentry->base);
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
+	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
 	struct rsa_edesc *edesc;
 	int ecode = 0;
 
 	if (err)
 		ecode = caam_jr_strstatus(dev, err);
 
-	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
+	edesc = req_ctx->edesc;
 
 	switch (key->priv_form) {
 	case FORM1:
@@ -163,7 +174,14 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
 	rsa_io_unmap(dev, edesc, req);
 	kfree(edesc);
 
-	akcipher_request_complete(req, ecode);
+	/*
+	 * If no backlog flag, the completion of the request is done
+	 * by CAAM, not crypto engine.
+	 */
+	if (!jrentry->bklog)
+		akcipher_request_complete(req, ecode);
+	else
+		crypto_finalize_akcipher_request(jrp->engine, req, ecode);
 }
 
 /**
@@ -311,14 +329,16 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
 	edesc->src_nents = src_nents;
 	edesc->dst_nents = dst_nents;
 
+	edesc->jrentry.base = &req->base;
+
+	req_ctx->edesc = edesc;
+
 	if (!sec4_sg_bytes)
 		return edesc;
 
 	edesc->mapped_src_nents = mapped_src_nents;
 	edesc->mapped_dst_nents = mapped_dst_nents;
 
-	edesc->jrentry.base = &req->base;
-
 	edesc->sec4_sg_dma = dma_map_single(dev, edesc->sec4_sg,
 					    sec4_sg_bytes, DMA_TO_DEVICE);
 	if (dma_mapping_error(dev, edesc->sec4_sg_dma)) {
@@ -343,6 +363,34 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
 	return ERR_PTR(-ENOMEM);
 }
 
+static int akcipher_do_one_req(struct crypto_engine *engine, void *areq)
+{
+	int ret;
+	struct akcipher_request *req = akcipher_request_cast(areq);
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
+	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct caam_jr_request_entry *jrentry;
+	struct device *jrdev = ctx->dev;
+	u32 *desc = req_ctx->edesc->hw_desc;
+
+	jrentry = &req_ctx->edesc->jrentry;
+	jrentry->bklog = true;
+
+	ret = caam_jr_enqueue_no_bklog(jrdev, desc, req_ctx->akcipher_op_done,
+				       jrentry);
+
+	if (ret != -EINPROGRESS) {
+		rsa_pub_unmap(jrdev, req_ctx->edesc, req);
+		rsa_io_unmap(jrdev, req_ctx->edesc, req);
+		kfree(req_ctx->edesc);
+	} else {
+		ret = 0;
+	}
+
+	return ret;
+}
+
 static int set_rsa_pub_pdb(struct akcipher_request *req,
 			   struct rsa_edesc *edesc)
 {
@@ -610,10 +658,11 @@ static int caam_rsa_enc(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
 	struct caam_rsa_key *key = &ctx->key;
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
+	u32 *desc;
 	int ret;
 
 	if (unlikely(!key->n || !key->e))
@@ -635,14 +684,14 @@ static int caam_rsa_enc(struct akcipher_request *req)
 	if (ret)
 		goto init_fail;
 
-	/* Initialize Job Descriptor */
-	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
+	desc = edesc->hw_desc;
 
-	jrentry = &edesc->jrentry;
-	jrentry->base = &req->base;
+	/* Initialize Job Descriptor */
+	init_rsa_pub_desc(desc, &edesc->pdb.pub);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, jrentry);
-	if (ret == -EINPROGRESS)
+	req_ctx->akcipher_op_done = rsa_pub_done;
+	ret = caam_jr_enqueue(jrdev, desc, rsa_pub_done, &edesc->jrentry);
+	if (ret == -EINPROGRESS || ret == -EBUSY)
 		return ret;
 
 	rsa_pub_unmap(jrdev, edesc, req);
@@ -657,9 +706,10 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
+	u32 *desc;
 	int ret;
 
 	/* Allocate extended descriptor */
@@ -672,14 +722,14 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	if (ret)
 		goto init_fail;
 
-	/* Initialize Job Descriptor */
-	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
+	desc = edesc->hw_desc;
 
-	jrentry = &edesc->jrentry;
-	jrentry->base = &req->base;
+	/* Initialize Job Descriptor */
+	init_rsa_priv_f1_desc(desc, &edesc->pdb.priv_f1);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry);
-	if (ret == -EINPROGRESS)
+	req_ctx->akcipher_op_done = rsa_priv_f_done;
+	ret = caam_jr_enqueue(jrdev, desc, rsa_priv_f_done, &edesc->jrentry);
+	if (ret == -EINPROGRESS || ret == -EBUSY)
 		return ret;
 
 	rsa_priv_f1_unmap(jrdev, edesc, req);
@@ -694,9 +744,10 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
+	u32 *desc;
 	int ret;
 
 	/* Allocate extended descriptor */
@@ -709,14 +760,14 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	if (ret)
 		goto init_fail;
 
-	/* Initialize Job Descriptor */
-	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
+	desc = edesc->hw_desc;
 
-	jrentry = &edesc->jrentry;
-	jrentry->base = &req->base;
+	/* Initialize Job Descriptor */
+	init_rsa_priv_f2_desc(desc, &edesc->pdb.priv_f2);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry);
-	if (ret == -EINPROGRESS)
+	req_ctx->akcipher_op_done = rsa_priv_f_done;
+	ret = caam_jr_enqueue(jrdev, desc, rsa_priv_f_done, &edesc->jrentry);
+	if (ret == -EINPROGRESS || ret == -EBUSY)
 		return ret;
 
 	rsa_priv_f2_unmap(jrdev, edesc, req);
@@ -731,9 +782,10 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
 	struct device *jrdev = ctx->dev;
 	struct rsa_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
+	u32 *desc;
 	int ret;
 
 	/* Allocate extended descriptor */
@@ -746,14 +798,14 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	if (ret)
 		goto init_fail;
 
-	/* Initialize Job Descriptor */
-	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
+	desc = edesc->hw_desc;
 
-	jrentry = &edesc->jrentry;
-	jrentry->base = &req->base;
+	/* Initialize Job Descriptor */
+	init_rsa_priv_f3_desc(desc, &edesc->pdb.priv_f3);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry);
-	if (ret == -EINPROGRESS)
+	req_ctx->akcipher_op_done = rsa_priv_f_done;
+	ret = caam_jr_enqueue(jrdev, desc, rsa_priv_f_done, &edesc->jrentry);
+	if (ret == -EINPROGRESS || ret == -EBUSY)
 		return ret;
 
 	rsa_priv_f3_unmap(jrdev, edesc, req);
@@ -1049,6 +1101,10 @@ static int caam_rsa_init_tfm(struct crypto_akcipher *tfm)
 		return -ENOMEM;
 	}
 
+	ctx->enginectx.op.do_one_request = akcipher_do_one_req;
+
+	akcipher_set_reqsize(tfm, sizeof(struct caam_rsa_req_ctx));
+
 	return 0;
 }
 
diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h
index fe46d73..d31b040 100644
--- a/drivers/crypto/caam/caampkc.h
+++ b/drivers/crypto/caam/caampkc.h
@@ -13,6 +13,7 @@
 #include "compat.h"
 #include "intern.h"
 #include "pdb.h"
+#include <crypto/engine.h>
 
 /**
  * caam_priv_key_form - CAAM RSA private key representation
@@ -88,11 +89,13 @@ struct caam_rsa_key {
 
 /**
  * caam_rsa_ctx - per session context.
+ * @enginectx   : crypto engine context
  * @key         : RSA key in DMA zone
  * @dev         : device structure
  * @padding_dma : dma address of padding, for adding it to the input
  */
 struct caam_rsa_ctx {
+	struct crypto_engine_ctx enginectx;
 	struct caam_rsa_key key;
 	struct device *dev;
 	dma_addr_t padding_dma;
@@ -104,11 +107,16 @@ struct caam_rsa_ctx {
  * @src           : input scatterlist (stripped of leading zeros)
  * @fixup_src     : input scatterlist (that might be stripped of leading zeros)
  * @fixup_src_len : length of the fixup_src input scatterlist
+ * @edesc         : s/w-extended rsa descriptor
+ * @akcipher_op_done : callback used when operation is done
  */
 struct caam_rsa_req_ctx {
 	struct scatterlist src[2];
 	struct scatterlist *fixup_src;
 	unsigned int fixup_src_len;
+	struct rsa_edesc *edesc;
+	void (*akcipher_op_done)(struct device *jrdev, u32 *desc, u32 err,
+				 void *context);
 };
 
 /**
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 7e6632d..579b1ba 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -437,6 +437,9 @@ static int transfer_request_to_engine(struct crypto_engine *engine,
 	case CRYPTO_ALG_TYPE_AEAD:
 		return crypto_transfer_aead_request_to_engine(engine,
 							      aead_request_cast(req));
+	case CRYPTO_ALG_TYPE_AKCIPHER:
+		return crypto_transfer_akcipher_request_to_engine(engine,
+								  akcipher_request_cast(req));
 	default:
 		return -EINVAL;
 	}
-- 
2.1.0


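Two conventions from this series show up in every algorithm patch: the reworked `caam_jr_enqueue()` bypasses the software queue while the hardware has room and only transfers backloggable requests to crypto-engine when it is full, and callers then accept both `-EINPROGRESS` (hardware path) and `-EBUSY` (engine path) as success. A minimal sketch of the enqueue side (the constant names, slot count, and stubs are hypothetical, not the driver's actual symbols):

```c
#include <assert.h>
#include <errno.h>

#define MAY_BACKLOG 0x1         /* stand-in for CRYPTO_TFM_REQ_MAY_BACKLOG */

static int hw_free_slots = 1;   /* pretend the job ring has one free slot */
static int engine_queue_len;    /* requests parked on the crypto engine */

static int hw_submit(void)
{
	if (!hw_free_slots)
		return -ENOSPC; /* ring full */
	hw_free_slots--;
	return -EINPROGRESS;
}

/* stand-in for the reworked caam_jr_enqueue() */
static int jr_enqueue(unsigned int flags)
{
	int ret = hw_submit();

	if (ret == -ENOSPC && (flags & MAY_BACKLOG)) {
		/* crypto_transfer_*_request_to_engine() in the driver */
		engine_queue_len++;
		return -EBUSY;  /* engine now owns the request */
	}
	return ret;
}
```

This is why the callers in these patches test `(ret != -EINPROGRESS) && (ret != -EBUSY)` before unmapping and freeing the edesc: both values mean the request is still in flight.
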

* [PATCH 12/12] crypto: caam - add crypto_engine support for HASH algorithms
  2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
                   ` (10 preceding siblings ...)
  2019-11-17 22:30 ` [PATCH 11/12] crypto: caam - add crypto_engine support for RSA algorithms Iuliana Prodan
@ 2019-11-17 22:30 ` Iuliana Prodan
  2019-11-21 17:06   ` Horia Geanta
  11 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-17 22:30 UTC (permalink / raw)
  To: Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, Iuliana Prodan

Add crypto_engine support for HASH algorithms, to make use of
the engine queue.
Requests with the backlog flag set are placed on the crypto-engine
queue and processed by CAAM once it is free. If the queue is empty,
the request is sent directly to CAAM.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
---
 drivers/crypto/caam/caamhash.c | 155 +++++++++++++++++++++++++++++------------
 drivers/crypto/caam/jr.c       |   3 +
 2 files changed, 113 insertions(+), 45 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index d9de3dc..7f9ffde 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -65,6 +65,7 @@
 #include "sg_sw_sec4.h"
 #include "key_gen.h"
 #include "caamhash_desc.h"
+#include <crypto/engine.h>
 
 #define CAAM_CRA_PRIORITY		3000
 
@@ -86,6 +87,7 @@ static struct list_head hash_list;
 
 /* ahash per-session context */
 struct caam_hash_ctx {
+	struct crypto_engine_ctx enginectx;
 	u32 sh_desc_update[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned;
 	u32 sh_desc_update_first[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned;
 	u32 sh_desc_fin[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned;
@@ -112,10 +114,13 @@ struct caam_hash_state {
 	u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
 	int buflen_1;
 	u8 caam_ctx[MAX_CTX_LEN] ____cacheline_aligned;
-	int (*update)(struct ahash_request *req);
+	int (*update)(struct ahash_request *req) ____cacheline_aligned;
 	int (*final)(struct ahash_request *req);
 	int (*finup)(struct ahash_request *req);
 	int current_buf;
+	struct ahash_edesc *edesc;
+	void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err,
+			      void *context);
 };
 
 struct caam_export_state {
@@ -125,6 +130,9 @@ struct caam_export_state {
 	int (*update)(struct ahash_request *req);
 	int (*final)(struct ahash_request *req);
 	int (*finup)(struct ahash_request *req);
+	struct ahash_edesc *edesc;
+	void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err,
+			      void *context);
 };
 
 static inline void switch_buf(struct caam_hash_state *state)
@@ -604,6 +612,7 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
 {
 	struct caam_jr_request_entry *jrentry = context;
 	struct ahash_request *req = ahash_request_cast(jrentry->base);
+	struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
 	struct ahash_edesc *edesc;
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	int digestsize = crypto_ahash_digestsize(ahash);
@@ -613,7 +622,8 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
-	edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
+	edesc = state->edesc;
+
 	if (err)
 		ecode = caam_jr_strstatus(jrdev, err);
 
@@ -625,7 +635,14 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
 			     DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
 			     ctx->ctx_len, 1);
 
-	req->base.complete(&req->base, ecode);
+	/*
+	 * If no backlog flag, the completion of the request is done
+	 * by CAAM, not crypto engine.
+	 */
+	if (!jrentry->bklog)
+		req->base.complete(&req->base, ecode);
+	else
+		crypto_finalize_hash_request(jrp->engine, req, ecode);
 }
 
 static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
@@ -645,6 +662,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
 {
 	struct caam_jr_request_entry *jrentry = context;
 	struct ahash_request *req = ahash_request_cast(jrentry->base);
+	struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
 	struct ahash_edesc *edesc;
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
@@ -654,7 +672,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 
-	edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
+	edesc = state->edesc;
 	if (err)
 		ecode = caam_jr_strstatus(jrdev, err);
 
@@ -670,7 +688,15 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
 				     DUMP_PREFIX_ADDRESS, 16, 4, req->result,
 				     digestsize, 1);
 
-	req->base.complete(&req->base, ecode);
+	/*
+	 * If no backlog flag, the completion of the request is done
+	 * by CAAM, not crypto engine.
+	 */
+	if (!jrentry->bklog)
+		req->base.complete(&req->base, ecode);
+	else
+		crypto_finalize_hash_request(jrp->engine, req, ecode);
+
 }
 
 static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
@@ -695,6 +721,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
 {
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct caam_hash_state *state = ahash_request_ctx(req);
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	struct ahash_edesc *edesc;
@@ -707,6 +734,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
 	}
 
 	edesc->jrentry.base = &req->base;
+	state->edesc = edesc;
 
 	init_job_desc_shared(edesc->hw_desc, sh_desc_dma, desc_len(sh_desc),
 			     HDR_SHARE_DEFER | HDR_REVERSE);
@@ -750,6 +778,32 @@ static int ahash_edesc_add_src(struct caam_hash_ctx *ctx,
 	return 0;
 }
 
+static int ahash_do_one_req(struct crypto_engine *engine, void *areq)
+{
+	struct ahash_request *req = ahash_request_cast(areq);
+	struct caam_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+	struct caam_hash_state *state = ahash_request_ctx(req);
+	struct caam_jr_request_entry *jrentry;
+	struct device *jrdev = ctx->jrdev;
+	u32 *desc = state->edesc->hw_desc;
+	int ret;
+
+	jrentry = &state->edesc->jrentry;
+	jrentry->bklog = true;
+
+	ret = caam_jr_enqueue_no_bklog(jrdev, desc, state->ahash_op_done,
+				       jrentry);
+
+	if (ret != -EINPROGRESS) {
+		ahash_unmap(jrdev, state->edesc, req, 0);
+		kfree(state->edesc);
+	} else {
+		ret = 0;
+	}
+
+	return ret;
+}
+
 /* submit update job descriptor */
 static int ahash_update_ctx(struct ahash_request *req)
 {
@@ -766,7 +820,6 @@ static int ahash_update_ctx(struct ahash_request *req)
 	u32 *desc;
 	int src_nents, mapped_nents, sec4_sg_bytes, sec4_sg_src_index;
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret = 0;
 
 	last_buflen = *next_buflen;
@@ -864,10 +917,11 @@ static int ahash_update_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		jrentry = &edesc->jrentry;
+		state->ahash_op_done = ahash_done_bi;
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, jrentry);
-		if (ret != -EINPROGRESS)
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi,
+				      &edesc->jrentry);
+		if ((ret != -EINPROGRESS) && (ret != -EBUSY))
 			goto unmap_ctx;
 	} else if (*next_buflen) {
 		scatterwalk_map_and_copy(buf + *buflen, req->src, 0,
@@ -900,7 +954,6 @@ static int ahash_final_ctx(struct ahash_request *req)
 	int sec4_sg_bytes;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	sec4_sg_bytes = pad_sg_nents(1 + (buflen ? 1 : 0)) *
@@ -943,10 +996,11 @@ static int ahash_final_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done_ctx_src;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
-	if (ret == -EINPROGRESS)
+	if ((ret == -EINPROGRESS) || (ret == -EBUSY))
 		return ret;
 
 unmap_ctx:
@@ -967,7 +1021,6 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	int src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1022,10 +1075,10 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done_ctx_src;
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
-	if (ret == -EINPROGRESS)
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);
+	if ((ret == -EINPROGRESS) || (ret == -EBUSY))
 		return ret;
 
 unmap_ctx:
@@ -1044,7 +1097,6 @@ static int ahash_digest(struct ahash_request *req)
 	int digestsize = crypto_ahash_digestsize(ahash);
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	state->buf_dma = 0;
@@ -1097,10 +1149,10 @@ static int ahash_digest(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done;
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1120,7 +1172,6 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	u32 *desc;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	/* allocate space for base edesc and hw desc commands, link tables */
@@ -1150,20 +1201,19 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done;
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
 
 	return ret;
- unmap:
+unmap:
 	ahash_unmap(jrdev, edesc, req, digestsize);
 	kfree(edesc);
 	return -ENOMEM;
-
 }
 
 /* submit ahash update if it the first job descriptor after update */
@@ -1181,7 +1231,6 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	int in_len = *buflen + req->nbytes, to_hash;
 	int sec4_sg_bytes, src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	u32 *desc;
 	int ret = 0;
 
@@ -1271,10 +1320,11 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		jrentry = &edesc->jrentry;
+		state->ahash_op_done = ahash_done_ctx_dst;
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry);
-		if (ret != -EINPROGRESS)
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+				      &edesc->jrentry);
+		if ((ret != -EINPROGRESS) && (ret != -EBUSY))
 			goto unmap_ctx;
 
 		state->update = ahash_update_ctx;
@@ -1294,7 +1344,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 			     1);
 
 	return ret;
- unmap_ctx:
+unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE);
 	kfree(edesc);
 	return ret;
@@ -1312,7 +1362,6 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;
 
 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1368,10 +1417,10 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done;
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1398,7 +1447,6 @@ static int ahash_update_first(struct ahash_request *req)
 	u32 *desc;
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret = 0;
 
 	*next_buflen = req->nbytes & (blocksize - 1);
@@ -1468,10 +1516,11 @@ static int ahash_update_first(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		jrentry = &edesc->jrentry;
+		state->ahash_op_done = ahash_done_ctx_dst;
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry);
-		if (ret != -EINPROGRESS)
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+				      &edesc->jrentry);
+		if ((ret != -EINPROGRESS) && (ret != -EBUSY))
 			goto unmap_ctx;
 
 		state->update = ahash_update_ctx;
@@ -1509,6 +1558,7 @@ static int ahash_init(struct ahash_request *req)
 	state->update = ahash_update_first;
 	state->finup = ahash_finup_first;
 	state->final = ahash_final_no_ctx;
+	state->ahash_op_done = ahash_done;
 
 	state->ctx_dma = 0;
 	state->ctx_dma_len = 0;
@@ -1562,6 +1612,8 @@ static int ahash_export(struct ahash_request *req, void *out)
 	export->update = state->update;
 	export->final = state->final;
 	export->finup = state->finup;
+	export->edesc = state->edesc;
+	export->ahash_op_done = state->ahash_op_done;
 
 	return 0;
 }
@@ -1578,6 +1630,8 @@ static int ahash_import(struct ahash_request *req, const void *in)
 	state->update = export->update;
 	state->final = export->final;
 	state->finup = export->finup;
+	state->edesc = export->edesc;
+	state->ahash_op_done = export->ahash_op_done;
 
 	return 0;
 }
@@ -1837,7 +1891,9 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 	}
 
 	dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_update,
-					offsetof(struct caam_hash_ctx, key),
+					offsetof(struct caam_hash_ctx, key) -
+					offsetof(struct caam_hash_ctx,
+						 sh_desc_update),
 					ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (dma_mapping_error(ctx->jrdev, dma_addr)) {
 		dev_err(ctx->jrdev, "unable to map shared descriptors\n");
@@ -1855,11 +1911,19 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 	ctx->sh_desc_update_dma = dma_addr;
 	ctx->sh_desc_update_first_dma = dma_addr +
 					offsetof(struct caam_hash_ctx,
-						 sh_desc_update_first);
+						 sh_desc_update_first) -
+					offsetof(struct caam_hash_ctx,
+						 sh_desc_update);
 	ctx->sh_desc_fin_dma = dma_addr + offsetof(struct caam_hash_ctx,
-						   sh_desc_fin);
+						   sh_desc_fin) -
+					  offsetof(struct caam_hash_ctx,
+						   sh_desc_update);
 	ctx->sh_desc_digest_dma = dma_addr + offsetof(struct caam_hash_ctx,
-						      sh_desc_digest);
+						      sh_desc_digest) -
+					     offsetof(struct caam_hash_ctx,
+						      sh_desc_update);
+
+	ctx->enginectx.op.do_one_request = ahash_do_one_req;
 
 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
 				 sizeof(struct caam_hash_state));
@@ -1876,7 +1940,8 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
 	struct caam_hash_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_update_dma,
-			       offsetof(struct caam_hash_ctx, key),
+			       offsetof(struct caam_hash_ctx, key) -
+			       offsetof(struct caam_hash_ctx, sh_desc_update),
 			       ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (ctx->key_dir != DMA_NONE)
 		dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 579b1ba..5f7b797 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -440,6 +440,9 @@ static int transfer_request_to_engine(struct crypto_engine *engine,
 	case CRYPTO_ALG_TYPE_AKCIPHER:
 		return crypto_transfer_akcipher_request_to_engine(engine,
 								  akcipher_request_cast(req));
+	case CRYPTO_ALG_TYPE_AHASH:
+		return crypto_transfer_hash_request_to_engine(engine,
+							      ahash_request_cast(req));
 	default:
 		return -EINVAL;
 	}
-- 
2.1.0


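The `offsetof()` arithmetic in the final caamhash.c hunks exists because `struct caam_hash_ctx` now begins with the crypto-engine context, which must not be covered by the shared-descriptor DMA mapping; every offset is therefore taken relative to `sh_desc_update` instead of the start of the struct. A reduced layout (hypothetical member sizes, without the real struct's `____cacheline_aligned` padding) demonstrates the arithmetic:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* reduced model of struct caam_hash_ctx after this patch */
struct hash_ctx {
	void *enginectx;                  /* new first member, not DMA-mapped */
	uint32_t sh_desc_update[4];       /* start of the mapped region */
	uint32_t sh_desc_update_first[4];
	uint32_t sh_desc_fin[4];
	uint32_t sh_desc_digest[4];
	uint8_t key[16];                  /* mapped separately */
};

/* length passed to dma_map_single_attrs(): span of the four descriptors */
static size_t mapped_len(void)
{
	return offsetof(struct hash_ctx, key) -
	       offsetof(struct hash_ctx, sh_desc_update);
}
```

Using the old length, plain `offsetof(struct hash_ctx, key)`, would wrongly include `enginectx` in the mapped region; the per-descriptor DMA addresses are adjusted with the same subtraction for consistency.
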

* Re: [PATCH 01/12] crypto: add helper function for akcipher_request
  2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
@ 2019-11-18 13:29   ` Corentin Labbe
  2019-11-19 14:27   ` Horia Geanta
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 42+ messages in thread
From: Corentin Labbe @ 2019-11-18 13:29 UTC (permalink / raw)
  To: Iuliana Prodan
  Cc: Herbert Xu, Horia Geanta, Aymen Sghaier, David S. Miller,
	Tom Lendacky, Gary Hook, linux-crypto, linux-kernel, linux-imx

On Mon, Nov 18, 2019 at 12:30:34AM +0200, Iuliana Prodan wrote:
> Add akcipher_request_cast function to get an akcipher_request struct from
> a crypto_async_request struct.
> 
> Remove this function from ccp driver.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
> ---
>  drivers/crypto/ccp/ccp-crypto-rsa.c | 6 ------
>  include/crypto/akcipher.h           | 6 ++++++
>  2 files changed, 6 insertions(+), 6 deletions(-)
> 

I need (and did) the same for future sun8i-ss/sun8i-ce RSA support.
Thanks

Reviewed-by: Corentin Labbe <clabbe.montjoie@gmail.com>


* Re: [PATCH 01/12] crypto: add helper function for akcipher_request
  2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
  2019-11-18 13:29   ` Corentin Labbe
@ 2019-11-19 14:27   ` Horia Geanta
  2019-11-19 15:10   ` Gary R Hook
  2019-11-22  9:08   ` Herbert Xu
  3 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-19 14:27 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Add akcipher_request_cast function to get an akcipher_request struct from
> a crypto_async_request struct.
> 
> Remove this function from ccp driver.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia


* Re: [PATCH 02/12] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions
  2019-11-17 22:30 ` [PATCH 02/12] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions Iuliana Prodan
@ 2019-11-19 14:41   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-19 14:41 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Create a common crypt function for each skcipher/aead/gcm/chachapoly
> algorithms and call it for encrypt/decrypt with the specific boolean -
> true for encrypt and false for decrypt.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia


* Re: [PATCH 03/12] crypto: caam - refactor ahash_done callbacks
  2019-11-17 22:30 ` [PATCH 03/12] crypto: caam - refactor ahash_done callbacks Iuliana Prodan
@ 2019-11-19 14:56   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-19 14:56 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Create two common ahash_done_* functions with the dma
> direction as parameter. Then, these 2 are called with
> the proper direction for unmap.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia


* Re: [PATCH 04/12] crypto: caam - refactor ahash_edesc_alloc
  2019-11-17 22:30 ` [PATCH 04/12] crypto: caam - refactor ahash_edesc_alloc Iuliana Prodan
@ 2019-11-19 15:05   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-19 15:05 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Changed parameters for ahash_edesc_alloc function:
> - remove flags since they can be computed in
> ahash_edesc_alloc, the only place they are needed;
> - use ahash_request instead of caam_hash_ctx, which
> can be obtained from request.
> 
Technically, the use of ahash_request is to allow for access to
request flags. The change is needed only to be able to refactor
the computation of gfp flags.

> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia


* Re: [PATCH 05/12] crypto: caam - refactor RSA private key _done callbacks
  2019-11-17 22:30 ` [PATCH 05/12] crypto: caam - refactor RSA private key _done callbacks Iuliana Prodan
@ 2019-11-19 15:06   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-19 15:06 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Create a common rsa_priv_f_done function, which based
> on private key form calls the specific unmap function.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia


* Re: [PATCH 01/12] crypto: add helper function for akcipher_request
  2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
  2019-11-18 13:29   ` Corentin Labbe
  2019-11-19 14:27   ` Horia Geanta
@ 2019-11-19 15:10   ` Gary R Hook
  2019-11-22  9:08   ` Herbert Xu
  3 siblings, 0 replies; 42+ messages in thread
From: Gary R Hook @ 2019-11-19 15:10 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, linux-crypto, linux-kernel, linux-imx

On 11/17/19 4:30 PM, Iuliana Prodan wrote:
> Add akcipher_request_cast function to get an akcipher_request struct from
> a crypto_async_request struct.
> 
> Remove this function from ccp driver.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>

Acked-by: Gary R Hook <gary.hook@amd.com>

> ---
>   drivers/crypto/ccp/ccp-crypto-rsa.c | 6 ------
>   include/crypto/akcipher.h           | 6 ++++++
>   2 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
> index 649c91d..3ab659d 100644
> --- a/drivers/crypto/ccp/ccp-crypto-rsa.c
> +++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
> @@ -19,12 +19,6 @@
>   
>   #include "ccp-crypto.h"
>   
> -static inline struct akcipher_request *akcipher_request_cast(
> -	struct crypto_async_request *req)
> -{
> -	return container_of(req, struct akcipher_request, base);
> -}
> -
>   static inline int ccp_copy_and_save_keypart(u8 **kpbuf, unsigned int *kplen,
>   					    const u8 *buf, size_t sz)
>   {
> diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
> index 6924b09..4365edd 100644
> --- a/include/crypto/akcipher.h
> +++ b/include/crypto/akcipher.h
> @@ -170,6 +170,12 @@ static inline struct crypto_akcipher *crypto_akcipher_reqtfm(
>   	return __crypto_akcipher_tfm(req->base.tfm);
>   }
>   
> +static inline struct akcipher_request *akcipher_request_cast(
> +	struct crypto_async_request *req)
> +{
> +	return container_of(req, struct akcipher_request, base);
> +}
> +
>   /**
>    * crypto_free_akcipher() - free AKCIPHER tfm handle
>    *
> 



* Re: [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function
  2019-11-17 22:30 ` [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function Iuliana Prodan
@ 2019-11-19 15:21   ` Horia Geanta
  2019-12-10 11:56   ` Bastian Krause
  1 sibling, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-19 15:21 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Change the return code of caam_jr_enqueue function to -EINPROGRESS, in
> case of success, -ENOSPC in case the CAAM is busy (has no space left
> in job ring queue), -EIO if it cannot map the caller's descriptor.
> 
> Update, also, the cases for resource-freeing for each algorithm type.
> 
It probably would've been worth saying *why* these changes are needed.

Even though the patch is part of a patch set adding "backlogging support",
this grouping won't be visible in git log.

There's another reason however for the -EBUSY -> -ENOSPC change,
i.e. commit 6b80ea389a0b ("crypto: change transient busy return code to -ENOSPC")

> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia
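The caller-side contract discussed in this patch — -EINPROGRESS on success, -ENOSPC when the job ring is full, -EIO when the descriptor cannot be DMA-mapped — can be sketched in user-space C. The function and parameter names below are hypothetical stand-ins, not the real driver API; the point is only the cleanup rule the patch description mentions: the caller releases its resources on every outcome except -EINPROGRESS.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for caam_jr_enqueue() under the new return-code convention. */
static int fake_jr_enqueue(bool ring_full, bool map_ok)
{
	if (!map_ok)
		return -EIO;		/* cannot map the caller's descriptor */
	if (ring_full)
		return -ENOSPC;		/* transient busy: no space in the ring */
	return -EINPROGRESS;		/* job accepted, completion is async */
}

/* Caller pattern: free resources on any outcome except -EINPROGRESS,
 * where the completion callback owns the cleanup instead.
 * Returns true if resources were released synchronously. */
static bool fake_submit_and_cleanup(bool ring_full, bool map_ok)
{
	int ret = fake_jr_enqueue(ring_full, map_ok);

	if (ret != -EINPROGRESS) {
		/* unmap + kfree of the edesc would happen here */
		return true;
	}
	return false;
}
```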


* Re: [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue
  2019-11-17 22:30 ` [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue Iuliana Prodan
@ 2019-11-19 17:55   ` Horia Geanta
  2019-11-19 22:49     ` Iuliana Prodan
  0 siblings, 1 reply; 42+ messages in thread
From: Horia Geanta @ 2019-11-19 17:55 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Added a new struct - caam_jr_request_entry, to keep each request
> information. This has a crypto_async_request, used to determine
> the request type, and a bool to check if the request has backlog
> flag or not.
> This struct is passed to CAAM, via enqueue function - caam_jr_enqueue.
> 
> The new added caam_jr_enqueue_no_bklog function is used to enqueue a job
> descriptor head for cases like caamrng, key_gen, digest_key, where we
Enqueuing terminology: either generic "job" or more HW-specific
"job descriptor".
Job descriptor *head* has no meaning.

> don't have backlogged requests.
> 
...because the "requests" are not crypto requests - they are either coming
from hwrng (caamrng's case) or are driver-internal (key_gen, digest_key -
used for key hashing / derivation during .setkey callback).

> diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
> index 21b6172..abebcfc 100644
> --- a/drivers/crypto/caam/caamalg.c
> +++ b/drivers/crypto/caam/caamalg.c
[...]
> @@ -1416,7 +1424,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
>  			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>  			     1);
>  
> -	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
> +	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
>  	if (ret != -EINPROGRESS) {
>  		aead_unmap(jrdev, edesc, req);
>  		kfree(edesc);
> @@ -1440,6 +1448,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
>  	struct aead_edesc *edesc;
>  	struct crypto_aead *aead = crypto_aead_reqtfm(req);
>  	struct caam_ctx *ctx = crypto_aead_ctx(aead);
> +	struct caam_jr_request_entry *jrentry;
>  	struct device *jrdev = ctx->jrdev;
>  	bool all_contig;
>  	u32 *desc;
> @@ -1459,7 +1468,9 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
>  			     desc_bytes(edesc->hw_desc), 1);
>  
>  	desc = edesc->hw_desc;
> -	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
> +	jrentry = &edesc->jrentry;
> +
> +	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
Let's avoid adding a new local variable by using &edesc->jrentry directly,
like in chachapoly_crypt().
Similar for the other places.

> diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
> index baf4ab1..d9de3dc 100644
> --- a/drivers/crypto/caam/caamhash.c
> +++ b/drivers/crypto/caam/caamhash.c
[...]
> @@ -933,11 +943,13 @@ static int ahash_final_ctx(struct ahash_request *req)
>  			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>  			     1);
>  
> -	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
> +	jrentry = &edesc->jrentry;
> +
> +	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
>  	if (ret == -EINPROGRESS)
>  		return ret;
>  
> - unmap_ctx:
> +unmap_ctx:
That's correct, however whitespace fixing should be done separately.

> @@ -1009,11 +1022,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
>  			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>  			     1);
>  
> -	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
> +	jrentry = &edesc->jrentry;
> +
> +	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
>  	if (ret == -EINPROGRESS)
>  		return ret;
>  
> - unmap_ctx:
> +unmap_ctx:
Again, unrelated whitespace fix.

> diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
> index 7f7ea32..bb0e4b9 100644
> --- a/drivers/crypto/caam/caampkc.c
> +++ b/drivers/crypto/caam/caampkc.c
[...]
> @@ -315,6 +317,8 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
>  	edesc->mapped_src_nents = mapped_src_nents;
>  	edesc->mapped_dst_nents = mapped_dst_nents;
>  
> +	edesc->jrentry.base = &req->base;
> +
>  	edesc->sec4_sg_dma = dma_map_single(dev, edesc->sec4_sg,
>  					    sec4_sg_bytes, DMA_TO_DEVICE);
>  	if (dma_mapping_error(dev, edesc->sec4_sg_dma)) {
[...]
> @@ -633,7 +638,10 @@ static int caam_rsa_enc(struct akcipher_request *req)
>  	/* Initialize Job Descriptor */
>  	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
>  
> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
> +	jrentry = &edesc->jrentry;
> +	jrentry->base = &req->base;
This field is already set in rsa_edesc_alloc().

> @@ -666,7 +675,10 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
>  	/* Initialize Job Descriptor */
>  	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
>  
> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
> +	jrentry = &edesc->jrentry;
> +	jrentry->base = &req->base;
The same here.

> @@ -699,7 +712,10 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
>  	/* Initialize Job Descriptor */
>  	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
>  
> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
> +	jrentry = &edesc->jrentry;
> +	jrentry->base = &req->base;
And here.

> @@ -732,7 +749,10 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
>  	/* Initialize Job Descriptor */
>  	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
>  
> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
> +	jrentry = &edesc->jrentry;
> +	jrentry->base = &req->base;
Also here.

> diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
> index c7c10c9..58be66c 100644
> --- a/drivers/crypto/caam/intern.h
> +++ b/drivers/crypto/caam/intern.h
[...]
> @@ -104,6 +105,15 @@ struct caam_drv_private {
>  #endif
>  };
>  
> +/*
> + * Storage for tracking each request that is processed by a ring
> + */
> +struct caam_jr_request_entry {
> +	/* Common attributes for async crypto requests */
> +	struct crypto_async_request *base;
> +	bool bklog;	/* Stored to determine if the request needs backlog */
> +};
> +
Could we use kernel-doc here?
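One possible kernel-doc form of the struct quoted above, compilable stand-alone here with the referenced type left opaque. The member names come from the patch; the comment wording itself is only a sketch of what a v2 might use:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct crypto_async_request;	/* opaque in this sketch */

/**
 * struct caam_jr_request_entry - tracks a request processed by a job ring
 * @base:  common attributes of the async crypto request
 * @bklog: true if the request was submitted with the backlog flag set
 */
struct caam_jr_request_entry {
	struct crypto_async_request *base;
	bool bklog;
};
```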

Horia


* Re: [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue
  2019-11-19 17:55   ` Horia Geanta
@ 2019-11-19 22:49     ` Iuliana Prodan
  2019-11-20  6:48       ` Horia Geanta
  0 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-19 22:49 UTC (permalink / raw)
  To: Horia Geanta, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/19/2019 7:55 PM, Horia Geanta wrote:
> On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
>> Added a new struct - caam_jr_request_entry, to keep each request
>> information. This has a crypto_async_request, used to determine
>> the request type, and a bool to check if the request has backlog
>> flag or not.
>> This struct is passed to CAAM, via enqueue function - caam_jr_enqueue.
>>
>> The new added caam_jr_enqueue_no_bklog function is used to enqueue a job
>> descriptor head for cases like caamrng, key_gen, digest_key, where we
> Enqueuing terminology: either generic "job" or more HW-specific
> "job descriptor".
> Job descriptor *head* has no meaning.
> 
>> don't have backlogged requests.
>>
> ...because the "requests" are not crypto requests - they are either coming
> from hwrng (caamrng's case) or are driver-internal (key_gen, digest_key -
> used for key hashing / derivation during .setkey callback).
> 

Right, I'll update the patch description in v2.

>> diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
>> index 21b6172..abebcfc 100644
>> --- a/drivers/crypto/caam/caamalg.c
>> +++ b/drivers/crypto/caam/caamalg.c
> [...]
>> @@ -1416,7 +1424,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
>>   			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>>   			     1);
>>   
>> -	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
>> +	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
>>   	if (ret != -EINPROGRESS) {
>>   		aead_unmap(jrdev, edesc, req);
>>   		kfree(edesc);
>> @@ -1440,6 +1448,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
>>   	struct aead_edesc *edesc;
>>   	struct crypto_aead *aead = crypto_aead_reqtfm(req);
>>   	struct caam_ctx *ctx = crypto_aead_ctx(aead);
>> +	struct caam_jr_request_entry *jrentry;
>>   	struct device *jrdev = ctx->jrdev;
>>   	bool all_contig;
>>   	u32 *desc;
>> @@ -1459,7 +1468,9 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
>>   			     desc_bytes(edesc->hw_desc), 1);
>>   
>>   	desc = edesc->hw_desc;
>> -	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
>> +	jrentry = &edesc->jrentry;
>> +
>> +	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
> Let's avoid adding a new local variable by using &edesc->jrentry directly,
> like in chachapoly_crypt().
> Similar for the other places.
> 
I've removed jrentry and req (as mentioned below) in patch #11 of this
series, but I'll remove them from this patch in v2.

>> diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
>> index baf4ab1..d9de3dc 100644
>> --- a/drivers/crypto/caam/caamhash.c
>> +++ b/drivers/crypto/caam/caamhash.c
> [...]
>> @@ -933,11 +943,13 @@ static int ahash_final_ctx(struct ahash_request *req)
>>   			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>>   			     1);
>>   
>> -	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
>> +	jrentry = &edesc->jrentry;
>> +
>> +	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
>>   	if (ret == -EINPROGRESS)
>>   		return ret;
>>   
>> - unmap_ctx:
>> +unmap_ctx:
> That's correct, however whitespace fixing should be done separately.
> 
Should I make a separate patch for these two whitespaces?

>> @@ -1009,11 +1022,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
>>   			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>>   			     1);
>>   
>> -	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
>> +	jrentry = &edesc->jrentry;
>> +
>> +	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
>>   	if (ret == -EINPROGRESS)
>>   		return ret;
>>   
>> - unmap_ctx:
>> +unmap_ctx:
> Again, unrelated whitespace fix.
> 
>> diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
>> index 7f7ea32..bb0e4b9 100644
>> --- a/drivers/crypto/caam/caampkc.c
>> +++ b/drivers/crypto/caam/caampkc.c
> [...]
>> @@ -315,6 +317,8 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
>>   	edesc->mapped_src_nents = mapped_src_nents;
>>   	edesc->mapped_dst_nents = mapped_dst_nents;
>>   
>> +	edesc->jrentry.base = &req->base;
>> +
>>   	edesc->sec4_sg_dma = dma_map_single(dev, edesc->sec4_sg,
>>   					    sec4_sg_bytes, DMA_TO_DEVICE);
>>   	if (dma_mapping_error(dev, edesc->sec4_sg_dma)) {
> [...]
>> @@ -633,7 +638,10 @@ static int caam_rsa_enc(struct akcipher_request *req)
>>   	/* Initialize Job Descriptor */
>>   	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
>>   
>> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
>> +	jrentry = &edesc->jrentry;
>> +	jrentry->base = &req->base;
> This field is already set in rsa_edesc_alloc().
> 
>> @@ -666,7 +675,10 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
>>   	/* Initialize Job Descriptor */
>>   	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
>>   
>> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
>> +	jrentry = &edesc->jrentry;
>> +	jrentry->base = &req->base;
> The same here.
> 
>> @@ -699,7 +712,10 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
>>   	/* Initialize Job Descriptor */
>>   	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
>>   
>> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
>> +	jrentry = &edesc->jrentry;
>> +	jrentry->base = &req->base;
> And here.
> 
>> @@ -732,7 +749,10 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
>>   	/* Initialize Job Descriptor */
>>   	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
>>   
>> -	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
>> +	jrentry = &edesc->jrentry;
>> +	jrentry->base = &req->base;
> Also here.
> 
>> diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
>> index c7c10c9..58be66c 100644
>> --- a/drivers/crypto/caam/intern.h
>> +++ b/drivers/crypto/caam/intern.h
> [...]
>> @@ -104,6 +105,15 @@ struct caam_drv_private {
>>   #endif
>>   };
>>   
>> +/*
>> + * Storage for tracking each request that is processed by a ring
>> + */
>> +struct caam_jr_request_entry {
>> +	/* Common attributes for async crypto requests */
>> +	struct crypto_async_request *base;
>> +	bool bklog;	/* Stored to determine if the request needs backlog */
>> +};
>> +
> Could we use kernel-doc here?

Sure, will do in v2.

Thanks,
Iulia

> 
> Horia
> 



* Re: [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue
  2019-11-19 22:49     ` Iuliana Prodan
@ 2019-11-20  6:48       ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-20  6:48 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/20/2019 12:49 AM, Iuliana Prodan wrote:
> On 11/19/2019 7:55 PM, Horia Geanta wrote:
>> On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
>>> diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
>>> index baf4ab1..d9de3dc 100644
>>> --- a/drivers/crypto/caam/caamhash.c
>>> +++ b/drivers/crypto/caam/caamhash.c
>> [...]
>>> @@ -933,11 +943,13 @@ static int ahash_final_ctx(struct ahash_request *req)
>>>   			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>>>   			     1);
>>>   
>>> -	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
>>> +	jrentry = &edesc->jrentry;
>>> +
>>> +	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
>>>   	if (ret == -EINPROGRESS)
>>>   		return ret;
>>>   
>>> - unmap_ctx:
>>> +unmap_ctx:
>> That's correct, however whitespace fixing should be done separately.
>>
> Should I make a separate patch for these two whitespaces?
> 
Whitespace fixes should be moved out of this patch set.
In general, patches handling this go through the whole file / driver.

Horia


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-17 22:30 ` [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms Iuliana Prodan
@ 2019-11-21 11:46   ` Horia Geanta
  2019-11-22 10:33   ` Herbert Xu
  2019-12-10 15:27   ` Bastian Krause
  2 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-21 11:46 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Integrate crypto_engine into CAAM, to make use of the engine queue.
> Add support for SKCIPHER algorithms.
> 
> This is intended to be used for CAAM backlogging support.
> The requests, with backlog flag (e.g. from dm-crypt) will be listed
> into crypto-engine queue and processed by CAAM when free.
> This changes the return codes for caam_jr_enqueue:
> -EINPROGRESS if OK, -EBUSY if request is backlogged,
> -ENOSPC if the queue is full, -EIO if it cannot map the caller's
> descriptor, -EINVAL if crypto_tfm not supported by crypto_engine.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
> Signed-off-by: Franck LENORMAND <franck.lenormand@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia


* Re: [PATCH 09/12] crypto: caam - bypass crypto-engine sw queue, if empty
  2019-11-17 22:30 ` [PATCH 09/12] crypto: caam - bypass crypto-engine sw queue, if empty Iuliana Prodan
@ 2019-11-21 11:53   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-21 11:53 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier, Baolin Wang
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> Bypass crypto-engine software queue, if empty, and send the request
> directly to hardware. If this returns -ENOSPC, transfer the request to
> crypto-engine and let it handle it.
> 
Could this optimization be added directly into the crypto engine?

> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
> ---
>  drivers/crypto/caam/jr.c | 29 ++++++++++++++++++++++++++---
>  1 file changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
> index 5c55d3d..ddf3d39 100644
> --- a/drivers/crypto/caam/jr.c
> +++ b/drivers/crypto/caam/jr.c
> @@ -476,10 +476,33 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
>  	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(dev);
>  	struct caam_jr_request_entry *jrentry = areq;
>  	struct crypto_async_request *req = jrentry->base;
> +	int ret;
>  
> -	if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
> -		return transfer_request_to_engine(jrpriv->engine, req);
> -
> +	if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
> +		if (crypto_queue_len(&jrpriv->engine->queue) == 0) {
> +			/*
> +			 * send the request to CAAM, if crypto-engine queue
> +			 * is empty
> +			 */
> +			ret = caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry);
> +			if (ret == -ENOSPC)
> +				/*
> +				 * CAAM has no space, so transfer the request
> +				 * to crypto-engine
> +				 */
> +				return transfer_request_to_engine(jrpriv->engine,
> +								  req);
> +			else
> +				return ret;
> +		} else {
> +			/*
> +			 * crypto-engine queue is not empty, so transfer the
> +			 * request to crypto-engine, to keep the order
> +			 * of requests
> +			 */
> +			return transfer_request_to_engine(jrpriv->engine, req);
> +		}
> +	}
>  	return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry);
>  }
>  EXPORT_SYMBOL(caam_jr_enqueue);
> 
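For reference, the bypass decision in the hunk above can be modelled as a small state-free function. This is a simplified user-space sketch with hypothetical names, not the driver code: a backlogged request goes straight to hardware only when the crypto-engine software queue is empty; on -ENOSPC, or whenever the queue is non-empty (to preserve request ordering), it is transferred to the engine instead.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

enum fake_path { FAKE_PATH_HW_DIRECT, FAKE_PATH_ENGINE_QUEUE };

/* Models the backlog branch of caam_jr_enqueue() quoted above. */
static int fake_dispatch_backlog(unsigned int engine_qlen, bool hw_has_space,
				 enum fake_path *taken)
{
	if (engine_qlen == 0) {
		if (hw_has_space) {
			*taken = FAKE_PATH_HW_DIRECT;
			return -EINPROGRESS;	/* accepted by hardware */
		}
		/* hardware returned -ENOSPC: fall back to the engine */
	}
	*taken = FAKE_PATH_ENGINE_QUEUE;	/* keeps request ordering */
	return -EBUSY;				/* backlogged by crypto-engine */
}
```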


* Re: [PATCH 10/12] crypto: caam - add crypto_engine support for AEAD algorithms
  2019-11-17 22:30 ` [PATCH 10/12] crypto: caam - add crypto_engine support for AEAD algorithms Iuliana Prodan
@ 2019-11-21 16:46   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-21 16:46 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> @@ -1465,7 +1484,6 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
>  	struct aead_edesc *edesc;
>  	struct crypto_aead *aead = crypto_aead_reqtfm(req);
>  	struct caam_ctx *ctx = crypto_aead_ctx(aead);
> -	struct caam_jr_request_entry *jrentry;
>  	struct device *jrdev = ctx->jrdev;
>  	bool all_contig;
>  	u32 *desc;
> @@ -1479,16 +1497,14 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
>  
>  	/* Create and submit job descriptor */
>  	init_authenc_job(req, edesc, all_contig, encrypt);
> +	desc = edesc->hw_desc;
>  
This change is unrelated.

>  	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
> -			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
> -			     desc_bytes(edesc->hw_desc), 1);
> -
> -	desc = edesc->hw_desc;
> -	jrentry = &edesc->jrentry;
> +			     DUMP_PREFIX_ADDRESS, 16, 4, desc,
> +			     desc_bytes(desc), 1);
>  
[...]
>  static inline int gcm_crypt(struct aead_request *req, bool encrypt)
>  {
>  	struct aead_edesc *edesc;
>  	struct crypto_aead *aead = crypto_aead_reqtfm(req);
>  	struct caam_ctx *ctx = crypto_aead_ctx(aead);
>  	struct device *jrdev = ctx->jrdev;
> -	struct caam_jr_request_entry *jrentry;
>  	bool all_contig;
>  	u32 *desc;
>  	int ret = 0;
> @@ -1525,16 +1565,14 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
>  
>  	/* Create and submit job descriptor */
>  	init_gcm_job(req, edesc, all_contig, encrypt);
> +	desc = edesc->hw_desc;
>  
Same here.

>  	print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
> -			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
> -			     desc_bytes(edesc->hw_desc), 1);
> -
> -	desc = edesc->hw_desc;
> -	jrentry = &edesc->jrentry;
> +			     DUMP_PREFIX_ADDRESS, 16, 4, desc,
> +			     desc_bytes(desc), 1);
>  

Horia


* Re: [PATCH 11/12] crypto: caam - add crypto_engine support for RSA algorithms
  2019-11-17 22:30 ` [PATCH 11/12] crypto: caam - add crypto_engine support for RSA algorithms Iuliana Prodan
@ 2019-11-21 16:53   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-21 16:53 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> @@ -311,14 +329,16 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
>  	edesc->src_nents = src_nents;
>  	edesc->dst_nents = dst_nents;
>  
> +	edesc->jrentry.base = &req->base;
> +
> +	req_ctx->edesc = edesc;
> +
>  	if (!sec4_sg_bytes)
>  		return edesc;
>  
>  	edesc->mapped_src_nents = mapped_src_nents;
>  	edesc->mapped_dst_nents = mapped_dst_nents;
>  
> -	edesc->jrentry.base = &req->base;
> -
This is a bug fix - edesc->jrentry.base must be set earlier,
before the function gets a chance to return early (in the case where
no S/G table needs to be generated).

It should be squashed into
[PATCH 07/12] crypto: caam - refactor caam_jr_enqueue

Horia


* Re: [PATCH 12/12] crypto: caam - add crypto_engine support for HASH algorithms
  2019-11-17 22:30 ` [PATCH 12/12] crypto: caam - add crypto_engine support for HASH algorithms Iuliana Prodan
@ 2019-11-21 17:06   ` Horia Geanta
  0 siblings, 0 replies; 42+ messages in thread
From: Horia Geanta @ 2019-11-21 17:06 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx

On 11/18/2019 12:31 AM, Iuliana Prodan wrote:
> @@ -1150,20 +1201,19 @@ static int ahash_final_no_ctx(struct ahash_request *req)
>  			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
>  			     1);
>  
> -	jrentry = &edesc->jrentry;
> +	state->ahash_op_done = ahash_done;
>  
> -	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
> -	if (ret != -EINPROGRESS) {
> +	ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
> +	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
>  		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
>  		kfree(edesc);
>  	}
>  
>  	return ret;
> - unmap:
> +unmap:
>  	ahash_unmap(jrdev, edesc, req, digestsize);
>  	kfree(edesc);
>  	return -ENOMEM;
> -
>  }
Unrelated whitespace changes.

> @@ -1294,7 +1344,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
>  			     1);
>  
>  	return ret;
> - unmap_ctx:
> +unmap_ctx:
Same here.

> @@ -1509,6 +1558,7 @@ static int ahash_init(struct ahash_request *req)
>  	state->update = ahash_update_first;
>  	state->finup = ahash_finup_first;
>  	state->final = ahash_final_no_ctx;
> +	state->ahash_op_done = ahash_done;
>  
Is this initialization really needed?

Horia



* Re: [PATCH 01/12] crypto: add helper function for akcipher_request
  2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
                     ` (2 preceding siblings ...)
  2019-11-19 15:10   ` Gary R Hook
@ 2019-11-22  9:08   ` Herbert Xu
  2019-11-22 10:29     ` Iuliana Prodan
  3 siblings, 1 reply; 42+ messages in thread
From: Herbert Xu @ 2019-11-22  9:08 UTC (permalink / raw)
  To: Iuliana Prodan
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, linux-imx

On Mon, Nov 18, 2019 at 12:30:34AM +0200, Iuliana Prodan wrote:
>
> diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
> index 6924b09..4365edd 100644
> --- a/include/crypto/akcipher.h
> +++ b/include/crypto/akcipher.h
> @@ -170,6 +170,12 @@ static inline struct crypto_akcipher *crypto_akcipher_reqtfm(
>  	return __crypto_akcipher_tfm(req->base.tfm);
>  }
>  
> +static inline struct akcipher_request *akcipher_request_cast(
> +	struct crypto_async_request *req)
> +{
> +	return container_of(req, struct akcipher_request, base);
> +}

This should go into include/crypto/internal/akcipher.h as it's
only used by implementors.

But having reviewed the subsequent patches I think we shouldn't
have this function at all.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 01/12] crypto: add helper function for akcipher_request
  2019-11-22  9:08   ` Herbert Xu
@ 2019-11-22 10:29     ` Iuliana Prodan
  2019-11-22 10:34       ` Herbert Xu
  0 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-22 10:29 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, dl-linux-imx

On 11/22/2019 11:08 AM, Herbert Xu wrote:
> On Mon, Nov 18, 2019 at 12:30:34AM +0200, Iuliana Prodan wrote:
>>
>> diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
>> index 6924b09..4365edd 100644
>> --- a/include/crypto/akcipher.h
>> +++ b/include/crypto/akcipher.h
>> @@ -170,6 +170,12 @@ static inline struct crypto_akcipher *crypto_akcipher_reqtfm(
>>   	return __crypto_akcipher_tfm(req->base.tfm);
>>   }
>>   
>> +static inline struct akcipher_request *akcipher_request_cast(
>> +	struct crypto_async_request *req)
>> +{
>> +	return container_of(req, struct akcipher_request, base);
>> +}
> 
> This should go into include/crypto/internal/akcipher.h as it's
> only used by implementors.
> 
> But having reviewed the subsequent patches I think we shouldn't
> have this function at all.
> 

Why can't we use this? There are similar functions for 
skcipher/aead/ahash and they are all in include/crypto.

Thanks,
Iulia


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-17 22:30 ` [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms Iuliana Prodan
  2019-11-21 11:46   ` Horia Geanta
@ 2019-11-22 10:33   ` Herbert Xu
  2019-11-22 11:05     ` Iuliana Prodan
  2019-12-10 15:27   ` Bastian Krause
  2 siblings, 1 reply; 42+ messages in thread
From: Herbert Xu @ 2019-11-22 10:33 UTC (permalink / raw)
  To: Iuliana Prodan
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, linux-imx

On Mon, Nov 18, 2019 at 12:30:41AM +0200, Iuliana Prodan wrote:
>
> +static int transfer_request_to_engine(struct crypto_engine *engine,
> +				      struct crypto_async_request *req)
> +{
> +	switch (crypto_tfm_alg_type(req->tfm)) {
> +	case CRYPTO_ALG_TYPE_SKCIPHER:
> +		return crypto_transfer_skcipher_request_to_engine(engine,
> +								  skcipher_request_cast(req));
> +	default:
> +		return -EINVAL;
> +	}
> +}

Please don't do this.  As you can see, the crypto engine interface
wants you to use the correct type for the request object.  That's
what you should do too.

In fact I don't understand why you're only using the crypto engine
for the backlog case.  Wouldn't it be much simpler if you used the
engine unconditionally?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 01/12] crypto: add helper function for akcipher_request
  2019-11-22 10:29     ` Iuliana Prodan
@ 2019-11-22 10:34       ` Herbert Xu
  0 siblings, 0 replies; 42+ messages in thread
From: Herbert Xu @ 2019-11-22 10:34 UTC (permalink / raw)
  To: Iuliana Prodan
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, dl-linux-imx

On Fri, Nov 22, 2019 at 10:29:01AM +0000, Iuliana Prodan wrote:
>
> Why can't we use this? There are similar functions for 
> skcipher/aead/ahash and they are all in include/crypto.

Because we don't want drivers to use the underlying crypto_request
at all.  All drivers should be using the aead_request and others.

Only infrastructure code such as crypto_engine may use the base
type internally.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-22 10:33   ` Herbert Xu
@ 2019-11-22 11:05     ` Iuliana Prodan
  2019-11-22 11:09       ` Herbert Xu
  0 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-22 11:05 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, dl-linux-imx

On 11/22/2019 12:33 PM, Herbert Xu wrote:
> On Mon, Nov 18, 2019 at 12:30:41AM +0200, Iuliana Prodan wrote:
>>
>> +static int transfer_request_to_engine(struct crypto_engine *engine,
>> +				      struct crypto_async_request *req)
>> +{
>> +	switch (crypto_tfm_alg_type(req->tfm)) {
>> +	case CRYPTO_ALG_TYPE_SKCIPHER:
>> +		return crypto_transfer_skcipher_request_to_engine(engine,
>> +								  skcipher_request_cast(req));
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +}
> 
> Please don't do this.  As you can see, the crypto engine interface
> wants you to use the correct type for the request object.  That's
> what you should do too.

Sorry, but I don't understand what is wrong here. I'm using the correct
type, the specific type, for the request when sending it to the crypto
engine. This transfer_request_to_engine function is called from
caam_jr_enqueue, where I have all types of requests, so I use the
async_request, and when transferring to the crypto engine I cast it to
the specific type.


> In fact I don't understand why you're only using the crypto engine
> for the backlog case.  Wouldn't it be much simpler if you used the
> engine unconditionally?

I believe it is an overhead to send all requests to the crypto engine,
since most of them can be executed directly by the hardware.
Also, in case there is no need for backlog and the hardware is busy, we
can drop the request.

Thanks,
Iulia


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-22 11:05     ` Iuliana Prodan
@ 2019-11-22 11:09       ` Herbert Xu
  2019-11-22 14:11         ` Iuliana Prodan
  0 siblings, 1 reply; 42+ messages in thread
From: Herbert Xu @ 2019-11-22 11:09 UTC (permalink / raw)
  To: Iuliana Prodan
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, dl-linux-imx

On Fri, Nov 22, 2019 at 11:05:59AM +0000, Iuliana Prodan wrote:
>
> Sorry, but I don't understand what is wrong here. I'm using the correct
> type, the specific type, for the request when sending it to the crypto
> engine. This transfer_request_to_engine function is called from
> caam_jr_enqueue, where I have all types of requests, so I use the
> async_request, and when transferring to the crypto engine I cast it to
> the specific type.

These internal types are only for use by the crypto API and helper
code such as crypto_engine.  They should not be used by drivers in
general.

> I believe it is an overhead to send all requests to the crypto engine,
> since most of them can be executed directly by the hardware.
> Also, in case there is no need for backlog and the hardware is busy, we
> can drop the request.

If the crypto_engine has so much overhead then you should work on
fixing crypto_engine and not work around it like this.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-22 11:09       ` Herbert Xu
@ 2019-11-22 14:11         ` Iuliana Prodan
  2019-11-22 14:31           ` Herbert Xu
  0 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-11-22 14:11 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, dl-linux-imx

On 11/22/2019 1:09 PM, Herbert Xu wrote:
> On Fri, Nov 22, 2019 at 11:05:59AM +0000, Iuliana Prodan wrote:
>>
>> Sorry, but I don't understand what is wrong here. I'm using the correct
>> type, the specific type, for the request when sending it to the crypto
>> engine. This transfer_request_to_engine function is called from
>> caam_jr_enqueue, where I have all types of requests, so I use the
>> async_request, and when transferring to the crypto engine I cast it to
>> the specific type.
> 
> These internal types are only for use by the crypto API and helper
> code such as crypto_engine.  They should not be used by drivers in
> general.
> 
So, just to be clear, I shouldn't use crypto_async_request in driver code?
I see that this generic crypto request is used in multiple drivers.

>> I believe it is an overhead to send all requests to the crypto engine,
>> since most of them can be executed directly by the hardware.
>> Also, in case there is no need for backlog and the hardware is busy, we
>> can drop the request.
> 
> If the crypto_engine has so much overhead then you should work on
> fixing crypto_engine and not work around it like this.
> 
I can try sending _all_ requests to crypto engine and make some 
performance measurements to see which solution is best.

Thanks,
Iulia


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-22 14:11         ` Iuliana Prodan
@ 2019-11-22 14:31           ` Herbert Xu
  0 siblings, 0 replies; 42+ messages in thread
From: Herbert Xu @ 2019-11-22 14:31 UTC (permalink / raw)
  To: Iuliana Prodan
  Cc: Horia Geanta, Aymen Sghaier, David S. Miller, Tom Lendacky,
	Gary Hook, linux-crypto, linux-kernel, dl-linux-imx

On Fri, Nov 22, 2019 at 02:11:46PM +0000, Iuliana Prodan wrote:
>
> So, just to be clear, I shouldn't use crypto_async_request in driver code?
> I see that this generic crypto request is used in multiple drivers.

I understand that a number of drivers do this in order to share
common code.  However, this is definitely not the preferred way
of handling this.  Ideally such code should be abstracted into
a higher layer such as crypto_engine so that the driver itself
never references these internal types.

> I can try sending _all_ requests to crypto engine and make some 
> performance measurements to see which solution is best.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function
  2019-11-17 22:30 ` [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function Iuliana Prodan
  2019-11-19 15:21   ` Horia Geanta
@ 2019-12-10 11:56   ` Bastian Krause
  2019-12-10 12:28     ` Iuliana Prodan
  1 sibling, 1 reply; 42+ messages in thread
From: Bastian Krause @ 2019-12-10 11:56 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, kernel

Hi,

On 11/17/19 11:30 PM, Iuliana Prodan wrote:
> Change the return code of caam_jr_enqueue function to -EINPROGRESS, in
> case of success, -ENOSPC in case the CAAM is busy (has no space left
> in job ring queue), -EIO if it cannot map the caller's descriptor.
> 
> Update, also, the cases for resource-freeing for each algorithm type.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
> Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
> ---
>  drivers/crypto/caam/caamalg.c  | 16 ++++------------
>  drivers/crypto/caam/caamhash.c | 34 +++++++++++-----------------------
>  drivers/crypto/caam/caampkc.c  | 16 ++++++++--------
>  drivers/crypto/caam/caamrng.c  |  4 ++--
>  drivers/crypto/caam/jr.c       |  8 ++++----
>  drivers/crypto/caam/key_gen.c  |  2 +-
>  6 files changed, 30 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
> index 6e021692..21b6172 100644
> --- a/drivers/crypto/caam/caamalg.c
> +++ b/drivers/crypto/caam/caamalg.c
> @@ -1417,9 +1417,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
>  			     1);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
> -	if (!ret) {
> -		ret = -EINPROGRESS;
> -	} else {
> +	if (ret != -EINPROGRESS) {
>  		aead_unmap(jrdev, edesc, req);
>  		kfree(edesc);
>  	}
> @@ -1462,9 +1460,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
>  
>  	desc = edesc->hw_desc;
>  	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
> -	if (!ret) {
> -		ret = -EINPROGRESS;
> -	} else {
> +	if (ret != -EINPROGRESS) {
>  		aead_unmap(jrdev, edesc, req);
>  		kfree(edesc);
>  	}
> @@ -1507,9 +1503,7 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
>  
>  	desc = edesc->hw_desc;
>  	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
> -	if (!ret) {
> -		ret = -EINPROGRESS;
> -	} else {
> +	if (ret != -EINPROGRESS) {
>  		aead_unmap(jrdev, edesc, req);
>  		kfree(edesc);
>  	}
> @@ -1725,9 +1719,7 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
>  	desc = edesc->hw_desc;
>  	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);
>  
> -	if (!ret) {
> -		ret = -EINPROGRESS;
> -	} else {
> +	if (ret != -EINPROGRESS) {
>  		skcipher_unmap(jrdev, edesc, req);
>  		kfree(edesc);
>  	}
> diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
> index 5f9f16c..baf4ab1 100644
> --- a/drivers/crypto/caam/caamhash.c
> +++ b/drivers/crypto/caam/caamhash.c
> @@ -422,7 +422,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
>  	init_completion(&result.completion);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
> -	if (!ret) {
> +	if (ret == -EINPROGRESS) {
>  		/* in progress */
>  		wait_for_completion(&result.completion);
>  		ret = result.err;
> @@ -858,10 +858,8 @@ static int ahash_update_ctx(struct ahash_request *req)
>  				     desc_bytes(desc), 1);
>  
>  		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
> -		if (ret)
> +		if (ret != -EINPROGRESS)
>  			goto unmap_ctx;
> -
> -		ret = -EINPROGRESS;
>  	} else if (*next_buflen) {
>  		scatterwalk_map_and_copy(buf + *buflen, req->src, 0,
>  					 req->nbytes, 0);
> @@ -936,10 +934,9 @@ static int ahash_final_ctx(struct ahash_request *req)
>  			     1);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
> -	if (ret)
> -		goto unmap_ctx;
> +	if (ret == -EINPROGRESS)
> +		return ret;
>  
> -	return -EINPROGRESS;
>   unmap_ctx:
>  	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
>  	kfree(edesc);
> @@ -1013,10 +1010,9 @@ static int ahash_finup_ctx(struct ahash_request *req)
>  			     1);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
> -	if (ret)
> -		goto unmap_ctx;
> +	if (ret == -EINPROGRESS)
> +		return ret;
>  
> -	return -EINPROGRESS;
>   unmap_ctx:
>  	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
>  	kfree(edesc);
> @@ -1086,9 +1082,7 @@ static int ahash_digest(struct ahash_request *req)
>  			     1);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
> -	if (!ret) {
> -		ret = -EINPROGRESS;
> -	} else {
> +	if (ret != -EINPROGRESS) {
>  		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
>  		kfree(edesc);
>  	}
> @@ -1138,9 +1132,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
>  			     1);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
> -	if (!ret) {
> -		ret = -EINPROGRESS;
> -	} else {
> +	if (ret != -EINPROGRESS) {
>  		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
>  		kfree(edesc);
>  	}
> @@ -1258,10 +1250,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
>  				     desc_bytes(desc), 1);
>  
>  		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
> -		if (ret)
> +		if (ret != -EINPROGRESS)
>  			goto unmap_ctx;
>  
> -		ret = -EINPROGRESS;
>  		state->update = ahash_update_ctx;
>  		state->finup = ahash_finup_ctx;
>  		state->final = ahash_final_ctx;
> @@ -1353,9 +1344,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
>  			     1);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
> -	if (!ret) {
> -		ret = -EINPROGRESS;
> -	} else {
> +	if (ret != -EINPROGRESS) {
>  		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
>  		kfree(edesc);
>  	}
> @@ -1452,10 +1441,9 @@ static int ahash_update_first(struct ahash_request *req)
>  				     desc_bytes(desc), 1);
>  
>  		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
> -		if (ret)
> +		if (ret != -EINPROGRESS)
>  			goto unmap_ctx;
>  
> -		ret = -EINPROGRESS;
>  		state->update = ahash_update_ctx;
>  		state->finup = ahash_finup_ctx;
>  		state->final = ahash_final_ctx;
> diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
> index ebf1677..7f7ea32 100644
> --- a/drivers/crypto/caam/caampkc.c
> +++ b/drivers/crypto/caam/caampkc.c
> @@ -634,8 +634,8 @@ static int caam_rsa_enc(struct akcipher_request *req)
>  	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
>  
>  	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
> -	if (!ret)
> -		return -EINPROGRESS;
> +	if (ret == -EINPROGRESS)
> +		return ret;
>  
>  	rsa_pub_unmap(jrdev, edesc, req);
>  
> @@ -667,8 +667,8 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
>  	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
>  
>  	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
> -	if (!ret)
> -		return -EINPROGRESS;
> +	if (ret == -EINPROGRESS)
> +		return ret;
>  
>  	rsa_priv_f1_unmap(jrdev, edesc, req);
>  
> @@ -700,8 +700,8 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
>  	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
>  
>  	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
> -	if (!ret)
> -		return -EINPROGRESS;
> +	if (ret == -EINPROGRESS)
> +		return ret;
>  
>  	rsa_priv_f2_unmap(jrdev, edesc, req);
>  
> @@ -733,8 +733,8 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
>  	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
>  
>  	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
> -	if (!ret)
> -		return -EINPROGRESS;
> +	if (ret == -EINPROGRESS)
> +		return ret;
>  
>  	rsa_priv_f3_unmap(jrdev, edesc, req);
>  
> diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
> index e8baaca..e3e4bf2 100644
> --- a/drivers/crypto/caam/caamrng.c
> +++ b/drivers/crypto/caam/caamrng.c
> @@ -133,7 +133,7 @@ static inline int submit_job(struct caam_rng_ctx *ctx, int to_current)
>  	dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf));
>  	init_completion(&bd->filled);
>  	err = caam_jr_enqueue(jrdev, desc, rng_done, ctx);
> -	if (err)
> +	if (err != EINPROGRESS)

Shouldn't this be -EINPROGRESS ?

>  		complete(&bd->filled); /* don't wait on failed job*/
>  	else
>  		atomic_inc(&bd->empty); /* note if pending */
> @@ -153,7 +153,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
>  		if (atomic_read(&bd->empty) == BUF_EMPTY) {
>  			err = submit_job(ctx, 1);
>  			/* if can't submit job, can't even wait */
> -			if (err)
> +			if (err != EINPROGRESS)

And here the same?

Regards,
Bastian

>  				return 0;
>  		}
>  		/* no immediate data, so exit if not waiting */
> diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
> index fc97cde..df2a050 100644
> --- a/drivers/crypto/caam/jr.c
> +++ b/drivers/crypto/caam/jr.c
> @@ -324,8 +324,8 @@ void caam_jr_free(struct device *rdev)
>  EXPORT_SYMBOL(caam_jr_free);
>  
>  /**
> - * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,
> - * -EBUSY if the queue is full, -EIO if it cannot map the caller's
> + * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
> + * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
>   * descriptor.
>   * @dev:  device of the job ring to be used. This device should have
>   *        been assigned prior by caam_jr_register().
> @@ -377,7 +377,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
>  	    CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {
>  		spin_unlock_bh(&jrp->inplock);
>  		dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
> -		return -EBUSY;
> +		return -ENOSPC;
>  	}
>  
>  	head_entry = &jrp->entinfo[head];
> @@ -414,7 +414,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
>  
>  	spin_unlock_bh(&jrp->inplock);
>  
> -	return 0;
> +	return -EINPROGRESS;
>  }
>  EXPORT_SYMBOL(caam_jr_enqueue);
>  
> diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
> index 5a851dd..b0e8a49 100644
> --- a/drivers/crypto/caam/key_gen.c
> +++ b/drivers/crypto/caam/key_gen.c
> @@ -108,7 +108,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
>  	init_completion(&result.completion);
>  
>  	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
> -	if (!ret) {
> +	if (ret == -EINPROGRESS) {
>  		/* in progress */
>  		wait_for_completion(&result.completion);
>  		ret = result.err;
> 


-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |


* Re: [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function
  2019-12-10 11:56   ` Bastian Krause
@ 2019-12-10 12:28     ` Iuliana Prodan
  0 siblings, 0 replies; 42+ messages in thread
From: Iuliana Prodan @ 2019-12-10 12:28 UTC (permalink / raw)
  To: Bastian Krause, Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx, kernel

On 12/10/2019 1:56 PM, Bastian Krause wrote:
> Hi,
> 
> On 11/17/19 11:30 PM, Iuliana Prodan wrote:
>> Change the return code of caam_jr_enqueue function to -EINPROGRESS, in
>> case of success, -ENOSPC in case the CAAM is busy (has no space left
>> in job ring queue), -EIO if it cannot map the caller's descriptor.
>>
>> Update, also, the cases for resource-freeing for each algorithm type.
>>
>> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
>> Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
>> [...]
>> diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
>> index e8baaca..e3e4bf2 100644
>> --- a/drivers/crypto/caam/caamrng.c
>> +++ b/drivers/crypto/caam/caamrng.c
>> @@ -133,7 +133,7 @@ static inline int submit_job(struct caam_rng_ctx *ctx, int to_current)
>>   	dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf));
>>   	init_completion(&bd->filled);
>>   	err = caam_jr_enqueue(jrdev, desc, rng_done, ctx);
>> -	if (err)
>> +	if (err != EINPROGRESS)
> 
> Shouldn't this be -EINPROGRESS ?
> 
Yes, it should be -EINPROGRESS.
I'm working on a v2 and will fix this there as well.

Thanks,
Iulia

>>   		complete(&bd->filled); /* don't wait on failed job*/
>>   	else
>>   		atomic_inc(&bd->empty); /* note if pending */
>> @@ -153,7 +153,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
>>   		if (atomic_read(&bd->empty) == BUF_EMPTY) {
>>   			err = submit_job(ctx, 1);
>>   			/* if can't submit job, can't even wait */
>> -			if (err)
>> +			if (err != EINPROGRESS)
> 
> And here the same?
> 
> Regards,
> Bastian
> 
>>   				return 0;
>>   		}
>>   		/* no immediate data, so exit if not waiting */
>> diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
>> index fc97cde..df2a050 100644
>> --- a/drivers/crypto/caam/jr.c
>> +++ b/drivers/crypto/caam/jr.c
>> @@ -324,8 +324,8 @@ void caam_jr_free(struct device *rdev)
>>   EXPORT_SYMBOL(caam_jr_free);
>>   
>>   /**
>> - * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,
>> - * -EBUSY if the queue is full, -EIO if it cannot map the caller's
>> + * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
>> + * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
>>    * descriptor.
>>    * @dev:  device of the job ring to be used. This device should have
>>    *        been assigned prior by caam_jr_register().
>> @@ -377,7 +377,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
>>   	    CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {
>>   		spin_unlock_bh(&jrp->inplock);
>>   		dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
>> -		return -EBUSY;
>> +		return -ENOSPC;
>>   	}
>>   
>>   	head_entry = &jrp->entinfo[head];
>> @@ -414,7 +414,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
>>   
>>   	spin_unlock_bh(&jrp->inplock);
>>   
>> -	return 0;
>> +	return -EINPROGRESS;
>>   }
>>   EXPORT_SYMBOL(caam_jr_enqueue);
>>   
>> diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
>> index 5a851dd..b0e8a49 100644
>> --- a/drivers/crypto/caam/key_gen.c
>> +++ b/drivers/crypto/caam/key_gen.c
>> @@ -108,7 +108,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
>>   	init_completion(&result.completion);
>>   
>>   	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
>> -	if (!ret) {
>> +	if (ret == -EINPROGRESS) {
>>   		/* in progress */
>>   		wait_for_completion(&result.completion);
>>   		ret = result.err;
>>
> 
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-11-17 22:30 ` [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms Iuliana Prodan
  2019-11-21 11:46   ` Horia Geanta
  2019-11-22 10:33   ` Herbert Xu
@ 2019-12-10 15:27   ` Bastian Krause
  2019-12-11 12:20     ` Iuliana Prodan
  2 siblings, 1 reply; 42+ messages in thread
From: Bastian Krause @ 2019-12-10 15:27 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, linux-imx, kernel


Hi Iulia,

On 11/17/19 11:30 PM, Iuliana Prodan wrote:
> Integrate crypto_engine into CAAM, to make use of the engine queue.
> Add support for SKCIPHER algorithms.
> 
> This is intended to be used for CAAM backlogging support.
> The requests, with backlog flag (e.g. from dm-crypt) will be listed
> into crypto-engine queue and processed by CAAM when free.
> This changes the return codes for caam_jr_enqueue:
> -EINPROGRESS if OK, -EBUSY if request is backlogged,
> -ENOSPC if the queue is full, -EIO if it cannot map the caller's
> descriptor, -EINVAL if crypto_tfm not supported by crypto_engine.
> 
> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
> Signed-off-by: Franck LENORMAND <franck.lenormand@nxp.com>
> Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
> ---
>  drivers/crypto/caam/Kconfig   |  1 +
>  drivers/crypto/caam/caamalg.c | 84 +++++++++++++++++++++++++++++++++++--------
>  drivers/crypto/caam/intern.h  |  2 ++
>  drivers/crypto/caam/jr.c      | 51 ++++++++++++++++++++++++--
>  4 files changed, 122 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
> index 87053e4..1930e19 100644
> --- a/drivers/crypto/caam/Kconfig
> +++ b/drivers/crypto/caam/Kconfig
> @@ -33,6 +33,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG
>  
>  menuconfig CRYPTO_DEV_FSL_CAAM_JR
>  	tristate "Freescale CAAM Job Ring driver backend"
> +	select CRYPTO_ENGINE
>  	default y
>  	help
>  	  Enables the driver module for Job Rings which are part of
> diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
> index abebcfc..23de94d 100644
> --- a/drivers/crypto/caam/caamalg.c
> +++ b/drivers/crypto/caam/caamalg.c
> @@ -56,6 +56,7 @@
>  #include "sg_sw_sec4.h"
>  #include "key_gen.h"
>  #include "caamalg_desc.h"
> +#include <crypto/engine.h>
>  
>  /*
>   * crypto alg
> @@ -101,6 +102,7 @@ struct caam_skcipher_alg {
>   * per-session context
>   */
>  struct caam_ctx {
> +	struct crypto_engine_ctx enginectx;
>  	u32 sh_desc_enc[DESC_MAX_USED_LEN];
>  	u32 sh_desc_dec[DESC_MAX_USED_LEN];
>  	u8 key[CAAM_MAX_KEY_SIZE];
> @@ -114,6 +116,12 @@ struct caam_ctx {
>  	unsigned int authsize;
>  };
>  
> +struct caam_skcipher_req_ctx {
> +	struct skcipher_edesc *edesc;
> +	void (*skcipher_op_done)(struct device *jrdev, u32 *desc, u32 err,
> +				 void *context);
> +};
> +
>  static int aead_null_set_sh_desc(struct crypto_aead *aead)
>  {
>  	struct caam_ctx *ctx = crypto_aead_ctx(aead);
> @@ -992,13 +1000,15 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
>  	struct caam_jr_request_entry *jrentry = context;
>  	struct skcipher_request *req = skcipher_request_cast(jrentry->base);
>  	struct skcipher_edesc *edesc;
> +	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
>  	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
> +	struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
>  	int ivsize = crypto_skcipher_ivsize(skcipher);
>  	int ecode = 0;
>  
>  	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
>  
> -	edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]);
> +	edesc = rctx->edesc;
>  	if (err)
>  		ecode = caam_jr_strstatus(jrdev, err);
>  
> @@ -1024,7 +1034,14 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
>  
>  	kfree(edesc);
>  
> -	skcipher_request_complete(req, ecode);
> +	/*
> +	 * If no backlog flag, the completion of the request is done
> +	 * by CAAM, not crypto engine.
> +	 */
> +	if (!jrentry->bklog)
> +		skcipher_request_complete(req, ecode);
> +	else
> +		crypto_finalize_skcipher_request(jrp->engine, req, ecode);
>  }
>  
>  /*
> @@ -1553,6 +1570,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
>  {
>  	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
>  	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
> +	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
>  	struct device *jrdev = ctx->jrdev;
>  	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
>  		       GFP_KERNEL : GFP_ATOMIC;
> @@ -1653,6 +1671,9 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
>  						  desc_bytes);
>  	edesc->jrentry.base = &req->base;
>  
> +	rctx->edesc = edesc;
> +	rctx->skcipher_op_done = skcipher_crypt_done;
> +
>  	/* Make sure IV is located in a DMAable area */
>  	if (ivsize) {
>  		iv = (u8 *)edesc->sec4_sg + sec4_sg_bytes;
> @@ -1707,13 +1728,37 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
>  	return edesc;
>  }
>  
> +static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
> +{
> +	struct skcipher_request *req = skcipher_request_cast(areq);
> +	struct caam_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
> +	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
> +	struct caam_jr_request_entry *jrentry;
> +	u32 *desc = rctx->edesc->hw_desc;
> +	int ret;
> +
> +	jrentry = &rctx->edesc->jrentry;
> +	jrentry->bklog = true;
> +
> +	ret = caam_jr_enqueue_no_bklog(ctx->jrdev, desc,
> +				       rctx->skcipher_op_done, jrentry);
> +
> +	if (ret != -EINPROGRESS) {
> +		skcipher_unmap(ctx->jrdev, rctx->edesc, req);
> +		kfree(rctx->edesc);
> +	} else {
> +		ret = 0;
> +	}
> +
> +	return ret;

While testing this on a i.MX6 DualLite I see -ENOSPC being returned here
after a couple of GiB of data being encrypted (via dm-crypt with LUKS
extension). This results in these messages from crypto_engine:

  caam_jr 2101000.jr0: Failed to do one request from queue: -28

And later..

  Buffer I/O error on device dm-0, logical block 59392
  JBD2: Detected IO errors while flushing file data on dm-0-8

Reproducible with something like this:

  echo "testkey" | cryptsetup luksFormat \
    --cipher=aes-cbc-essiv:sha256 \
    --key-file=- \
    --key-size=256 \
    /dev/mmcblk1p8
  echo "testkey" | cryptsetup open \
    --type luks \
    --key-file=- \
    /dev/mmcblk1p8 data

  mkfs.ext4 /dev/mapper/data
  mount /dev/mapper/data /mnt

  set -x
  while [ true ]; do
    dd if=/dev/zero of=/mnt/big_file bs=1M count=1024
    sync
  done

Any ideas?

Regards,
Bastian


> +}
> +
>  static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
>  {
>  	struct skcipher_edesc *edesc;
>  	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
>  	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
>  	struct device *jrdev = ctx->jrdev;
> -	struct caam_jr_request_entry *jrentry;
>  	u32 *desc;
>  	int ret = 0;
>  
> @@ -1727,16 +1772,15 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
>  
>  	/* Create and submit job descriptor*/
>  	init_skcipher_job(req, edesc, encrypt);
> +	desc = edesc->hw_desc;
>  
>  	print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ",
> -			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
> -			     desc_bytes(edesc->hw_desc), 1);
> -
> -	desc = edesc->hw_desc;
> -	jrentry = &edesc->jrentry;
> +			     DUMP_PREFIX_ADDRESS, 16, 4, desc,
> +			     desc_bytes(desc), 1);
>  
> -	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, jrentry);
> -	if (ret != -EINPROGRESS) {
> +	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done,
> +			      &edesc->jrentry);
> +	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
>  		skcipher_unmap(jrdev, edesc, req);
>  		kfree(edesc);
>  	}
> @@ -3272,7 +3316,9 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
>  
>  	dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_enc,
>  					offsetof(struct caam_ctx,
> -						 sh_desc_enc_dma),
> +						 sh_desc_enc_dma) -
> +					offsetof(struct caam_ctx,
> +						 sh_desc_enc),
>  					ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
>  	if (dma_mapping_error(ctx->jrdev, dma_addr)) {
>  		dev_err(ctx->jrdev, "unable to map key, shared descriptors\n");
> @@ -3282,8 +3328,12 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
>  
>  	ctx->sh_desc_enc_dma = dma_addr;
>  	ctx->sh_desc_dec_dma = dma_addr + offsetof(struct caam_ctx,
> -						   sh_desc_dec);
> -	ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key);
> +						   sh_desc_dec) -
> +					offsetof(struct caam_ctx,
> +						 sh_desc_enc);
> +	ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key) -
> +					offsetof(struct caam_ctx,
> +						 sh_desc_enc);
>  
>  	/* copy descriptor header template value */
>  	ctx->cdata.algtype = OP_TYPE_CLASS1_ALG | caam->class1_alg_type;
> @@ -3297,6 +3347,11 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
>  	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
>  	struct caam_skcipher_alg *caam_alg =
>  		container_of(alg, typeof(*caam_alg), skcipher);
> +	struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
> +
> +	crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx));
> +
> +	ctx->enginectx.op.do_one_request = skcipher_do_one_req;
>  
>  	return caam_init_common(crypto_skcipher_ctx(tfm), &caam_alg->caam,
>  				false);
> @@ -3315,7 +3370,8 @@ static int caam_aead_init(struct crypto_aead *tfm)
>  static void caam_exit_common(struct caam_ctx *ctx)
>  {
>  	dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_enc_dma,
> -			       offsetof(struct caam_ctx, sh_desc_enc_dma),
> +			       offsetof(struct caam_ctx, sh_desc_enc_dma) -
> +			       offsetof(struct caam_ctx, sh_desc_enc),
>  			       ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
>  	caam_jr_free(ctx->jrdev);
>  }
> diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
> index 58be66c..31abb94 100644
> --- a/drivers/crypto/caam/intern.h
> +++ b/drivers/crypto/caam/intern.h
> @@ -12,6 +12,7 @@
>  
>  #include "ctrl.h"
>  #include "regs.h"
> +#include <crypto/engine.h>
>  
>  /* Currently comes from Kconfig param as a ^2 (driver-required) */
>  #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE)
> @@ -61,6 +62,7 @@ struct caam_drv_private_jr {
>  	int out_ring_read_index;	/* Output index "tail" */
>  	int tail;			/* entinfo (s/w ring) tail index */
>  	void *outring;			/* Base of output ring, DMA-safe */
> +	struct crypto_engine *engine;
>  };
>  
>  /*
> diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
> index 544cafa..5c55d3d 100644
> --- a/drivers/crypto/caam/jr.c
> +++ b/drivers/crypto/caam/jr.c
> @@ -62,6 +62,15 @@ static void unregister_algs(void)
>  	mutex_unlock(&algs_lock);
>  }
>  
> +static void caam_jr_crypto_engine_exit(void *data)
> +{
> +	struct device *jrdev = data;
> +	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
> +
> +	/* Free the resources of crypto-engine */
> +	crypto_engine_exit(jrpriv->engine);
> +}
> +
>  static int caam_reset_hw_jr(struct device *dev)
>  {
>  	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
> @@ -418,10 +427,23 @@ int caam_jr_enqueue_no_bklog(struct device *dev, u32 *desc,
>  }
>  EXPORT_SYMBOL(caam_jr_enqueue_no_bklog);
>  
> +static int transfer_request_to_engine(struct crypto_engine *engine,
> +				      struct crypto_async_request *req)
> +{
> +	switch (crypto_tfm_alg_type(req->tfm)) {
> +	case CRYPTO_ALG_TYPE_SKCIPHER:
> +		return crypto_transfer_skcipher_request_to_engine(engine,
> +								  skcipher_request_cast(req));
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
>  /**
>   * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
> - * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
> - * descriptor.
> + * if OK, -EBUSY if request is backlogged, -ENOSPC if the queue is full,
> + * -EIO if it cannot map the caller's descriptor, -EINVAL if crypto_tfm
> + * not supported by crypto_engine.
>   * @dev:  device of the job ring to be used. This device should have
>   *        been assigned prior by caam_jr_register().
>   * @desc: points to a job descriptor that execute our request. All
> @@ -451,7 +473,12 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
>  				u32 status, void *areq),
>  		    void *areq)
>  {
> +	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(dev);
>  	struct caam_jr_request_entry *jrentry = areq;
> +	struct crypto_async_request *req = jrentry->base;
> +
> +	if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
> +		return transfer_request_to_engine(jrpriv->engine, req);
>  
>  	return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry);
>  }
> @@ -577,6 +604,26 @@ static int caam_jr_probe(struct platform_device *pdev)
>  		return error;
>  	}
>  
> +	/* Initialize crypto engine */
> +	jrpriv->engine = crypto_engine_alloc_init(jrdev, false);
> +	if (!jrpriv->engine) {
> +		dev_err(jrdev, "Could not init crypto-engine\n");
> +		return -ENOMEM;
> +	}
> +
> +	/* Start crypto engine */
> +	error = crypto_engine_start(jrpriv->engine);
> +	if (error) {
> +		dev_err(jrdev, "Could not start crypto-engine\n");
> +		crypto_engine_exit(jrpriv->engine);
> +		return error;
> +	}
> +
> +	error = devm_add_action_or_reset(jrdev, caam_jr_crypto_engine_exit,
> +					 jrdev);
> +	if (error)
> +		return error;
> +
>  	/* Identify the interrupt */
>  	jrpriv->irq = irq_of_parse_and_map(nprop, 0);
>  	if (!jrpriv->irq) {
> 


-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-12-10 15:27   ` Bastian Krause
@ 2019-12-11 12:20     ` Iuliana Prodan
  2019-12-11 13:33       ` Bastian Krause
  0 siblings, 1 reply; 42+ messages in thread
From: Iuliana Prodan @ 2019-12-11 12:20 UTC (permalink / raw)
  To: Bastian Krause, Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx, kernel

Hi,

On 12/10/2019 5:27 PM, Bastian Krause wrote:
> 
> Hi Iulia,
> 
> On 11/17/19 11:30 PM, Iuliana Prodan wrote:
>> Integrate crypto_engine into CAAM, to make use of the engine queue.
>> Add support for SKCIPHER algorithms.
>>
>> This is intended to be used for CAAM backlogging support.
>> The requests, with backlog flag (e.g. from dm-crypt) will be listed
>> into crypto-engine queue and processed by CAAM when free.
>> This changes the return codes for caam_jr_enqueue:
>> -EINPROGRESS if OK, -EBUSY if request is backlogged,
>> -ENOSPC if the queue is full, -EIO if it cannot map the caller's
>> descriptor, -EINVAL if crypto_tfm not supported by crypto_engine.
>>
>> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
>> Signed-off-by: Franck LENORMAND <franck.lenormand@nxp.com>
>> Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
>> ---
>>   drivers/crypto/caam/Kconfig   |  1 +
>>   drivers/crypto/caam/caamalg.c | 84 +++++++++++++++++++++++++++++++++++--------
>>   drivers/crypto/caam/intern.h  |  2 ++
>>   drivers/crypto/caam/jr.c      | 51 ++++++++++++++++++++++++--
>>   4 files changed, 122 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
>> index 87053e4..1930e19 100644
>> --- a/drivers/crypto/caam/Kconfig
>> +++ b/drivers/crypto/caam/Kconfig
>> @@ -33,6 +33,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG
>>   
...
>>   
>> +static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
>> +{
>> +	struct skcipher_request *req = skcipher_request_cast(areq);
>> +	struct caam_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
>> +	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
>> +	struct caam_jr_request_entry *jrentry;
>> +	u32 *desc = rctx->edesc->hw_desc;
>> +	int ret;
>> +
>> +	jrentry = &rctx->edesc->jrentry;
>> +	jrentry->bklog = true;
>> +
>> +	ret = caam_jr_enqueue_no_bklog(ctx->jrdev, desc,
>> +				       rctx->skcipher_op_done, jrentry);
>> +
>> +	if (ret != -EINPROGRESS) {
>> +		skcipher_unmap(ctx->jrdev, rctx->edesc, req);
>> +		kfree(rctx->edesc);
>> +	} else {
>> +		ret = 0;
>> +	}
>> +
>> +	return ret;
> 
> While testing this on a i.MX6 DualLite I see -ENOSPC being returned here
> after a couple of GiB of data being encrypted (via dm-crypt with LUKS
> extension). This results in these messages from crypto_engine:
> 
>    caam_jr 2101000.jr0: Failed to do one request from queue: -28
> 
> And later..
> 
>    Buffer I/O error on device dm-0, logical block 59392
>    JBD2: Detected IO errors while flushing file data on dm-0-8
> 
> Reproducible with something like this:
> 
>    echo "testkey" | cryptsetup luksFormat \
>      --cipher=aes-cbc-essiv:sha256 \
>      --key-file=- \
>      --key-size=256 \
>      /dev/mmcblk1p8
>    echo "testkey" | cryptsetup open \
>      --type luks \
>      --key-file=- \
>      /dev/mmcblk1p8 data
> 
>    mkfs.ext4 /dev/mapper/data
>    mount /dev/mapper/data /mnt
> 
>    set -x
>    while [ true ]; do
>      dd if=/dev/zero of=/mnt/big_file bs=1M count=1024
>      sync
>    done
> 
> Any ideas?
> 

Thanks for testing this!
I reproduced this issue on imx6dl, _but_ only with the bypass sw queue 
patch. It only reproduces on some targets, e.g. on imx7d I don't get the 
-ENOSPC error. So, I believe there is a timing issue between 
crypto-engine and CAAM driver, both sending requests to CAAM hw.
I'm debugging this and I'll let you know my findings.

Best regards,
Iulia


* Re: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
  2019-12-11 12:20     ` Iuliana Prodan
@ 2019-12-11 13:33       ` Bastian Krause
  0 siblings, 0 replies; 42+ messages in thread
From: Bastian Krause @ 2019-12-11 13:33 UTC (permalink / raw)
  To: Iuliana Prodan, Herbert Xu, Horia Geanta, Aymen Sghaier
  Cc: David S. Miller, Tom Lendacky, Gary Hook, linux-crypto,
	linux-kernel, dl-linux-imx, kernel


Hi,

On 12/11/19 1:20 PM, Iuliana Prodan wrote:
> On 12/10/2019 5:27 PM, Bastian Krause wrote:
>> On 11/17/19 11:30 PM, Iuliana Prodan wrote:
>>> Integrate crypto_engine into CAAM, to make use of the engine queue.
>>> Add support for SKCIPHER algorithms.
>>>
>>> This is intended to be used for CAAM backlogging support.
>>> The requests, with backlog flag (e.g. from dm-crypt) will be listed
>>> into crypto-engine queue and processed by CAAM when free.
>>> This changes the return codes for caam_jr_enqueue:
>>> -EINPROGRESS if OK, -EBUSY if request is backlogged,
>>> -ENOSPC if the queue is full, -EIO if it cannot map the caller's
>>> descriptor, -EINVAL if crypto_tfm not supported by crypto_engine.
>>>
>>> Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
>>> Signed-off-by: Franck LENORMAND <franck.lenormand@nxp.com>
>>> Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
>>> ---
>>>   drivers/crypto/caam/Kconfig   |  1 +
>>>   drivers/crypto/caam/caamalg.c | 84 +++++++++++++++++++++++++++++++++++--------
>>>   drivers/crypto/caam/intern.h  |  2 ++
>>>   drivers/crypto/caam/jr.c      | 51 ++++++++++++++++++++++++--
>>>   4 files changed, 122 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
>>> index 87053e4..1930e19 100644
>>> --- a/drivers/crypto/caam/Kconfig
>>> +++ b/drivers/crypto/caam/Kconfig
>>> @@ -33,6 +33,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG
>>>   
> ...
>>>   
>>> +static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
>>> +{
>>> +	struct skcipher_request *req = skcipher_request_cast(areq);
>>> +	struct caam_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
>>> +	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
>>> +	struct caam_jr_request_entry *jrentry;
>>> +	u32 *desc = rctx->edesc->hw_desc;
>>> +	int ret;
>>> +
>>> +	jrentry = &rctx->edesc->jrentry;
>>> +	jrentry->bklog = true;
>>> +
>>> +	ret = caam_jr_enqueue_no_bklog(ctx->jrdev, desc,
>>> +				       rctx->skcipher_op_done, jrentry);
>>> +
>>> +	if (ret != -EINPROGRESS) {
>>> +		skcipher_unmap(ctx->jrdev, rctx->edesc, req);
>>> +		kfree(rctx->edesc);
>>> +	} else {
>>> +		ret = 0;
>>> +	}
>>> +
>>> +	return ret;
>>
>> While testing this on a i.MX6 DualLite I see -ENOSPC being returned here
>> after a couple of GiB of data being encrypted (via dm-crypt with LUKS
>> extension). This results in these messages from crypto_engine:
>>
>>    caam_jr 2101000.jr0: Failed to do one request from queue: -28
>>
>> And later..
>>
>>    Buffer I/O error on device dm-0, logical block 59392
>>    JBD2: Detected IO errors while flushing file data on dm-0-8
>>
>> Reproducible with something like this:
>>
>>    echo "testkey" | cryptsetup luksFormat \
>>      --cipher=aes-cbc-essiv:sha256 \
>>      --key-file=- \
>>      --key-size=256 \
>>      /dev/mmcblk1p8
>>    echo "testkey" | cryptsetup open \
>>      --type luks \
>>      --key-file=- \
>>      /dev/mmcblk1p8 data
>>
>>    mkfs.ext4 /dev/mapper/data
>>    mount /dev/mapper/data /mnt
>>
>>    set -x
>>    while [ true ]; do
>>      dd if=/dev/zero of=/mnt/big_file bs=1M count=1024
>>      sync
>>    done
>>
>> Any ideas?
>>
> 
> Thanks for testing this!

Sure :)

> I reproduced this issue on imx6dl, _but_ only with the bypass sw queue 
> patch. It only reproduces on some targets, e.g. on imx7d I don't get the 
> -ENOSPC error. So, I believe there is a timing issue between 
> crypto-engine and CAAM driver, both sending requests to CAAM hw.
> I'm debugging this and I'll let you know my findings.

I can't even use this without the "crypto: caam - bypass crypto-engine
sw queue, if empty" patch. The mkfs.ext4 command does not even finish
and I see hung task warnings. Am I holding it wrong?

Regards,
Bastian

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |


end of thread, other threads:[~2019-12-11 13:33 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-17 22:30 [PATCH 00/12] crypto: caam - backlogging support Iuliana Prodan
2019-11-17 22:30 ` [PATCH 01/12] crypto: add helper function for akcipher_request Iuliana Prodan
2019-11-18 13:29   ` Corentin Labbe
2019-11-19 14:27   ` Horia Geanta
2019-11-19 15:10   ` Gary R Hook
2019-11-22  9:08   ` Herbert Xu
2019-11-22 10:29     ` Iuliana Prodan
2019-11-22 10:34       ` Herbert Xu
2019-11-17 22:30 ` [PATCH 02/12] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions Iuliana Prodan
2019-11-19 14:41   ` Horia Geanta
2019-11-17 22:30 ` [PATCH 03/12] crypto: caam - refactor ahash_done callbacks Iuliana Prodan
2019-11-19 14:56   ` Horia Geanta
2019-11-17 22:30 ` [PATCH 04/12] crypto: caam - refactor ahash_edesc_alloc Iuliana Prodan
2019-11-19 15:05   ` Horia Geanta
2019-11-17 22:30 ` [PATCH 05/12] crypto: caam - refactor RSA private key _done callbacks Iuliana Prodan
2019-11-19 15:06   ` Horia Geanta
2019-11-17 22:30 ` [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue function Iuliana Prodan
2019-11-19 15:21   ` Horia Geanta
2019-12-10 11:56   ` Bastian Krause
2019-12-10 12:28     ` Iuliana Prodan
2019-11-17 22:30 ` [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue Iuliana Prodan
2019-11-19 17:55   ` Horia Geanta
2019-11-19 22:49     ` Iuliana Prodan
2019-11-20  6:48       ` Horia Geanta
2019-11-17 22:30 ` [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms Iuliana Prodan
2019-11-21 11:46   ` Horia Geanta
2019-11-22 10:33   ` Herbert Xu
2019-11-22 11:05     ` Iuliana Prodan
2019-11-22 11:09       ` Herbert Xu
2019-11-22 14:11         ` Iuliana Prodan
2019-11-22 14:31           ` Herbert Xu
2019-12-10 15:27   ` Bastian Krause
2019-12-11 12:20     ` Iuliana Prodan
2019-12-11 13:33       ` Bastian Krause
2019-11-17 22:30 ` [PATCH 09/12] crypto: caam - bypass crypto-engine sw queue, if empty Iuliana Prodan
2019-11-21 11:53   ` Horia Geanta
2019-11-17 22:30 ` [PATCH 10/12] crypto: caam - add crypto_engine support for AEAD algorithms Iuliana Prodan
2019-11-21 16:46   ` Horia Geanta
2019-11-17 22:30 ` [PATCH 11/12] crypto: caam - add crypto_engine support for RSA algorithms Iuliana Prodan
2019-11-21 16:53   ` Horia Geanta
2019-11-17 22:30 ` [PATCH 12/12] crypto: caam - add crypto_engine support for HASH algorithms Iuliana Prodan
2019-11-21 17:06   ` Horia Geanta
