* [PATCH 0/32] crypto: api - Prepare to change callback argument to void star
@ 2023-01-31  8:00 Herbert Xu
  2023-01-31  8:01 ` [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature Herbert Xu
                   ` (31 more replies)
  0 siblings, 32 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:00 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Hi:

The crypto completion function currently takes a pointer to a
struct crypto_async_request object.  However, in reality the API
does not allow the use of any part of the object apart from the
data field.  For example, ahash/shash will create a fake object
on the stack to pass along a different data field.

This leads to potential bugs where the user may try to dereference
or otherwise use the crypto_async_request object.

This series lays the groundwork for converting the completion
function to take a void * argument instead of crypto_async_request.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
@ 2023-01-31  8:01 ` Herbert Xu
  2023-02-01 16:41   ` Giovanni Cabiddu
  2023-01-31  8:01 ` [PATCH 2/32] crypto: cryptd - Use subreq for AEAD Herbert Xu
                   ` (30 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:01 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

The crypto completion function currently takes a pointer to a
struct crypto_async_request object.  However, in reality the API
does not allow the use of any part of the object apart from the
data field.  For example, ahash/shash will create a fake object
on the stack to pass along a different data field.

This leads to potential bugs where the user may try to dereference
or otherwise use the crypto_async_request object.

This patch adds some temporary scaffolding so that the completion
function can take a void * instead.  Once affected users have been
converted, this can be removed.

The helper crypto_request_complete will remain even after the
conversion is complete.  It should be used instead of calling
the completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/algapi.h |    7 +++++++
 include/linux/crypto.h  |    6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 61b327206b55..1fd81e74a174 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -302,4 +302,11 @@ enum {
 	CRYPTO_MSG_ALG_LOADED,
 };
 
+static inline void crypto_request_complete(struct crypto_async_request *req,
+					   int err)
+{
+	crypto_completion_t complete = req->complete;
+	complete(req, err);
+}
+
 #endif	/* _CRYPTO_ALGAPI_H */
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 5d1e961f810e..b18f6e669fb1 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -176,6 +176,7 @@ struct crypto_async_request;
 struct crypto_tfm;
 struct crypto_type;
 
+typedef struct crypto_async_request crypto_completion_data_t;
 typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);
 
 /**
@@ -595,6 +596,11 @@ struct crypto_wait {
 /*
  * Async ops completion helper functioons
  */
+static inline void *crypto_get_completion_data(crypto_completion_data_t *req)
+{
+	return req->data;
+}
+
 void crypto_req_done(struct crypto_async_request *req, int err);
 
 static inline int crypto_wait_req(int err, struct crypto_wait *wait)


* [PATCH 2/32] crypto: cryptd - Use subreq for AEAD
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
  2023-01-31  8:01 ` [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature Herbert Xu
@ 2023-01-31  8:01 ` Herbert Xu
  2023-02-08  5:53   ` [v2 PATCH " Herbert Xu
  2023-01-31  8:01 ` [PATCH 3/32] crypto: acompress - Use crypto_request_complete Herbert Xu
                   ` (29 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:01 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

AEAD reuses the existing request object for its child.  This is
error-prone and unnecessary.  This patch adds a subrequest object
just like we do for skcipher and hash.

This patch also restores the original completion function as we
do for skcipher/hash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/cryptd.c |   18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 1ff58a021d57..c0c416eda8e8 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -93,6 +93,7 @@ struct cryptd_aead_ctx {
 
 struct cryptd_aead_request_ctx {
 	crypto_completion_t complete;
+	struct aead_request req;
 };
 
 static void cryptd_queue_worker(struct work_struct *work);
@@ -715,6 +716,7 @@ static void cryptd_aead_crypt(struct aead_request *req,
 			int (*crypt)(struct aead_request *req))
 {
 	struct cryptd_aead_request_ctx *rctx;
+	struct aead_request *subreq;
 	struct cryptd_aead_ctx *ctx;
 	crypto_completion_t compl;
 	struct crypto_aead *tfm;
@@ -722,14 +724,24 @@ static void cryptd_aead_crypt(struct aead_request *req,
 
 	rctx = aead_request_ctx(req);
 	compl = rctx->complete;
+	subreq = &rctx->req;
 
 	tfm = crypto_aead_reqtfm(req);
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
-	aead_request_set_tfm(req, child);
+
+	aead_request_set_tfm(subreq, child);
+	aead_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+				  NULL, NULL);
+	aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+			       req->iv);
+	aead_request_set_ad(subreq, req->assoclen);
+
 	err = crypt( req );
 
+	req->base.complete = compl;
+
 out:
 	ctx = crypto_aead_ctx(tfm);
 	refcnt = refcount_read(&ctx->refcnt);
@@ -798,8 +810,8 @@ static int cryptd_aead_init_tfm(struct crypto_aead *tfm)
 
 	ctx->child = cipher;
 	crypto_aead_set_reqsize(
-		tfm, max((unsigned)sizeof(struct cryptd_aead_request_ctx),
-			 crypto_aead_reqsize(cipher)));
+		tfm, sizeof(struct cryptd_aead_request_ctx) +
+		     crypto_aead_reqsize(cipher));
 	return 0;
 }
 


* [PATCH 3/32] crypto: acompress - Use crypto_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
  2023-01-31  8:01 ` [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature Herbert Xu
  2023-01-31  8:01 ` [PATCH 2/32] crypto: cryptd - Use subreq for AEAD Herbert Xu
@ 2023-01-31  8:01 ` Herbert Xu
  2023-02-01 16:45   ` Giovanni Cabiddu
  2023-01-31  8:01 ` [PATCH 4/32] crypto: aead " Herbert Xu
                   ` (28 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:01 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/internal/acompress.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 49339003bd2c..978b57a3f4f0 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -28,7 +28,7 @@ static inline void *acomp_tfm_ctx(struct crypto_acomp *tfm)
 static inline void acomp_request_complete(struct acomp_req *req,
 					  int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline const char *acomp_alg_name(struct crypto_acomp *tfm)


* [PATCH 4/32] crypto: aead - Use crypto_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (2 preceding siblings ...)
  2023-01-31  8:01 ` [PATCH 3/32] crypto: acompress - Use crypto_request_complete Herbert Xu
@ 2023-01-31  8:01 ` Herbert Xu
  2023-01-31  8:01 ` [PATCH 5/32] crypto: akcipher " Herbert Xu
                   ` (27 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:01 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/internal/aead.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/crypto/internal/aead.h b/include/crypto/internal/aead.h
index cd8cb1e921b7..28a95eb3182d 100644
--- a/include/crypto/internal/aead.h
+++ b/include/crypto/internal/aead.h
@@ -82,7 +82,7 @@ static inline void *aead_request_ctx_dma(struct aead_request *req)
 
 static inline void aead_request_complete(struct aead_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline u32 aead_request_flags(struct aead_request *req)


* [PATCH 5/32] crypto: akcipher - Use crypto_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (3 preceding siblings ...)
  2023-01-31  8:01 ` [PATCH 4/32] crypto: aead " Herbert Xu
@ 2023-01-31  8:01 ` Herbert Xu
  2023-01-31  8:01 ` [PATCH 6/32] crypto: hash " Herbert Xu
                   ` (26 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:01 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/internal/akcipher.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/crypto/internal/akcipher.h b/include/crypto/internal/akcipher.h
index aaf1092b93b8..a0fba4b2eccf 100644
--- a/include/crypto/internal/akcipher.h
+++ b/include/crypto/internal/akcipher.h
@@ -69,7 +69,7 @@ static inline void *akcipher_tfm_ctx_dma(struct crypto_akcipher *tfm)
 static inline void akcipher_request_complete(struct akcipher_request *req,
 					     int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline const char *akcipher_alg_name(struct crypto_akcipher *tfm)


* [PATCH 6/32] crypto: hash - Use crypto_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (4 preceding siblings ...)
  2023-01-31  8:01 ` [PATCH 5/32] crypto: akcipher " Herbert Xu
@ 2023-01-31  8:01 ` Herbert Xu
  2023-02-10 12:20   ` [v2 PATCH " Herbert Xu
  2023-01-31  8:01 ` [PATCH 7/32] crypto: kpp " Herbert Xu
                   ` (25 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:01 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the crypto_request_complete helper instead of calling the
completion function directly.

This patch also removes the voodoo programming previously used
for unaligned ahash operations and replaces it with a sub-request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/ahash.c                 |  137 +++++++++++++----------------------------
 include/crypto/internal/hash.h |    2 
 2 files changed, 45 insertions(+), 94 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 4b089f1b770f..369447e483cd 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -190,121 +190,69 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
 
-static inline unsigned int ahash_align_buffer_size(unsigned len,
-						   unsigned long mask)
-{
-	return len + (mask & ~(crypto_tfm_ctx_alignment() - 1));
-}
-
 static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	unsigned long alignmask = crypto_ahash_alignmask(tfm);
 	unsigned int ds = crypto_ahash_digestsize(tfm);
-	struct ahash_request_priv *priv;
+	struct ahash_request *subreq;
+	unsigned int subreq_size;
+	unsigned int reqsize;
+	u8 *result;
+	u32 flags;
 
-	priv = kmalloc(sizeof(*priv) + ahash_align_buffer_size(ds, alignmask),
-		       (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC);
-	if (!priv)
+	subreq_size = sizeof(*subreq);
+	reqsize = crypto_ahash_reqsize(tfm);
+	reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
+	subreq_size += reqsize;
+	subreq_size += ds;
+	subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
+
+	flags = ahash_request_flags(req);
+	subreq = kmalloc(subreq_size, (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+				      GFP_KERNEL : GFP_ATOMIC);
+	if (!subreq)
 		return -ENOMEM;
 
-	/*
-	 * WARNING: Voodoo programming below!
-	 *
-	 * The code below is obscure and hard to understand, thus explanation
-	 * is necessary. See include/crypto/hash.h and include/linux/crypto.h
-	 * to understand the layout of structures used here!
-	 *
-	 * The code here will replace portions of the ORIGINAL request with
-	 * pointers to new code and buffers so the hashing operation can store
-	 * the result in aligned buffer. We will call the modified request
-	 * an ADJUSTED request.
-	 *
-	 * The newly mangled request will look as such:
-	 *
-	 * req {
-	 *   .result        = ADJUSTED[new aligned buffer]
-	 *   .base.complete = ADJUSTED[pointer to completion function]
-	 *   .base.data     = ADJUSTED[*req (pointer to self)]
-	 *   .priv          = ADJUSTED[new priv] {
-	 *           .result   = ORIGINAL(result)
-	 *           .complete = ORIGINAL(base.complete)
-	 *           .data     = ORIGINAL(base.data)
-	 *   }
-	 */
-
-	priv->result = req->result;
-	priv->complete = req->base.complete;
-	priv->data = req->base.data;
-	priv->flags = req->base.flags;
-
-	/*
-	 * WARNING: We do not backup req->priv here! The req->priv
-	 *          is for internal use of the Crypto API and the
-	 *          user must _NOT_ _EVER_ depend on it's content!
-	 */
-
-	req->result = PTR_ALIGN((u8 *)priv->ubuf, alignmask + 1);
-	req->base.complete = cplt;
-	req->base.data = req;
-	req->priv = priv;
+	ahash_request_set_tfm(subreq, tfm);
+	ahash_request_set_callback(subreq, flags, cplt, req);
+
+	result = (u8 *)(subreq + 1) + reqsize;
+	result = PTR_ALIGN(result, alignmask + 1);
+
+	ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
+
+	req->priv = subreq;
 
 	return 0;
 }
 
 static void ahash_restore_req(struct ahash_request *req, int err)
 {
-	struct ahash_request_priv *priv = req->priv;
+	struct ahash_request *subreq = req->priv;
 
 	if (!err)
-		memcpy(priv->result, req->result,
+		memcpy(req->result, subreq->result,
 		       crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
 
-	/* Restore the original crypto request. */
-	req->result = priv->result;
-
-	ahash_request_set_callback(req, priv->flags,
-				   priv->complete, priv->data);
 	req->priv = NULL;
 
-	/* Free the req->priv.priv from the ADJUSTED request. */
-	kfree_sensitive(priv);
-}
-
-static void ahash_notify_einprogress(struct ahash_request *req)
-{
-	struct ahash_request_priv *priv = req->priv;
-	struct crypto_async_request oreq;
-
-	oreq.data = priv->data;
-
-	priv->complete(&oreq, -EINPROGRESS);
+	kfree_sensitive(subreq);
 }
 
 static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
 {
 	struct ahash_request *areq = req->data;
 
-	if (err == -EINPROGRESS) {
-		ahash_notify_einprogress(areq);
-		return;
-	}
-
-	/*
-	 * Restore the original request, see ahash_op_unaligned() for what
-	 * goes where.
-	 *
-	 * The "struct ahash_request *req" here is in fact the "req.base"
-	 * from the ADJUSTED request from ahash_op_unaligned(), thus as it
-	 * is a pointer to self, it is also the ADJUSTED "req" .
-	 */
+	if (err == -EINPROGRESS)
+		goto out;
 
 	/* First copy req->result into req->priv.result */
 	ahash_restore_req(areq, err);
 
+out:
 	/* Complete the ORIGINAL request. */
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_op_unaligned(struct ahash_request *req,
@@ -391,15 +339,17 @@ static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
 
 	ahash_restore_req(areq, err);
 
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 {
+	struct ahash_request *subreq = req->priv;
+
 	if (err)
 		goto out;
 
-	req->base.complete = ahash_def_finup_done2;
+	subreq->base.complete = ahash_def_finup_done2;
 
 	err = crypto_ahash_reqtfm(req)->final(req);
 	if (err == -EINPROGRESS || err == -EBUSY)
@@ -413,19 +363,20 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 static void ahash_def_finup_done1(struct crypto_async_request *req, int err)
 {
 	struct ahash_request *areq = req->data;
+	struct ahash_request *subreq;
 
-	if (err == -EINPROGRESS) {
-		ahash_notify_einprogress(areq);
-		return;
-	}
+	if (err == -EINPROGRESS)
+		goto out;
 
-	areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+	subreq = areq->priv;
+	subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
 
 	err = ahash_def_finup_finish1(areq, err);
-	if (areq->priv)
+	if (err == -EINPROGRESS || err == -EBUSY)
 		return;
 
-	areq->base.complete(&areq->base, err);
+out:
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_def_finup(struct ahash_request *req)
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 1a2a41b79253..0b259dbb97af 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -199,7 +199,7 @@ static inline void *ahash_request_ctx_dma(struct ahash_request *req)
 
 static inline void ahash_request_complete(struct ahash_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline u32 ahash_request_flags(struct ahash_request *req)


* [PATCH 7/32] crypto: kpp - Use crypto_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (5 preceding siblings ...)
  2023-01-31  8:01 ` [PATCH 6/32] crypto: hash " Herbert Xu
@ 2023-01-31  8:01 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 8/32] crypto: skcipher " Herbert Xu
                   ` (24 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:01 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/internal/kpp.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/crypto/internal/kpp.h b/include/crypto/internal/kpp.h
index 3c9726e89f53..0a6db8c4a9a0 100644
--- a/include/crypto/internal/kpp.h
+++ b/include/crypto/internal/kpp.h
@@ -85,7 +85,7 @@ static inline void *kpp_tfm_ctx_dma(struct crypto_kpp *tfm)
 
 static inline void kpp_request_complete(struct kpp_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline const char *kpp_alg_name(struct crypto_kpp *tfm)


* [PATCH 8/32] crypto: skcipher - Use crypto_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (6 preceding siblings ...)
  2023-01-31  8:01 ` [PATCH 7/32] crypto: kpp " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 9/32] crypto: engine " Herbert Xu
                   ` (23 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/internal/skcipher.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index 06d0a5491cf3..fb3d9e899f52 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -94,7 +94,7 @@ static inline void *skcipher_instance_ctx(struct skcipher_instance *inst)
 
 static inline void skcipher_request_complete(struct skcipher_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,


* [PATCH 9/32] crypto: engine - Use crypto_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (7 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 8/32] crypto: skcipher " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 10/32] crypto: rsa-pkcs1pad - Use akcipher_request_complete Herbert Xu
                   ` (22 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/crypto_engine.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 64dc9aa3ca24..21f791615114 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -54,7 +54,7 @@ static void crypto_finalize_request(struct crypto_engine *engine,
 		}
 	}
 	lockdep_assert_in_softirq();
-	req->complete(req, err);
+	crypto_request_complete(req, err);
 
 	kthread_queue_work(engine->kworker, &engine->pump_requests);
 }
@@ -130,7 +130,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		engine->cur_req = async_req;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	if (engine->busy)
 		was_busy = true;
@@ -214,7 +214,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 	}
 
 req_err_2:
-	async_req->complete(async_req, ret);
+	crypto_request_complete(async_req, ret);
 
 retry:
 	/* If retry mechanism is supported, send new requests to engine */


* [PATCH 10/32] crypto: rsa-pkcs1pad - Use akcipher_request_complete
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (8 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 9/32] crypto: engine " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 11/32] crypto: cryptd - Use request_complete helpers Herbert Xu
                   ` (21 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the akcipher_request_complete helper instead of calling the
completion function directly.  In fact the previous code was buggy
in that EINPROGRESS was never passed back to the original caller.

Fixes: 3d5b1ecdea6f ("crypto: rsa - RSA padding algorithm")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/rsa-pkcs1pad.c |   34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index 6ee5b8a060c0..4e9d2244ee31 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -214,16 +214,14 @@ static void pkcs1pad_encrypt_sign_complete_cb(
 		struct crypto_async_request *child_async_req, int err)
 {
 	struct akcipher_request *req = child_async_req->data;
-	struct crypto_async_request async_req;
 
 	if (err == -EINPROGRESS)
-		return;
+		goto out;
+
+	err = pkcs1pad_encrypt_sign_complete(req, err);
 
-	async_req.data = req->base.data;
-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
-	async_req.flags = child_async_req->flags;
-	req->base.complete(&async_req,
-			pkcs1pad_encrypt_sign_complete(req, err));
+out:
+	akcipher_request_complete(req, err);
 }
 
 static int pkcs1pad_encrypt(struct akcipher_request *req)
@@ -332,15 +330,14 @@ static void pkcs1pad_decrypt_complete_cb(
 		struct crypto_async_request *child_async_req, int err)
 {
 	struct akcipher_request *req = child_async_req->data;
-	struct crypto_async_request async_req;
 
 	if (err == -EINPROGRESS)
-		return;
+		goto out;
+
+	err = pkcs1pad_decrypt_complete(req, err);
 
-	async_req.data = req->base.data;
-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
-	async_req.flags = child_async_req->flags;
-	req->base.complete(&async_req, pkcs1pad_decrypt_complete(req, err));
+out:
+	akcipher_request_complete(req, err);
 }
 
 static int pkcs1pad_decrypt(struct akcipher_request *req)
@@ -513,15 +510,14 @@ static void pkcs1pad_verify_complete_cb(
 		struct crypto_async_request *child_async_req, int err)
 {
 	struct akcipher_request *req = child_async_req->data;
-	struct crypto_async_request async_req;
 
 	if (err == -EINPROGRESS)
-		return;
+		goto out;
 
-	async_req.data = req->base.data;
-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
-	async_req.flags = child_async_req->flags;
-	req->base.complete(&async_req, pkcs1pad_verify_complete(req, err));
+	err = pkcs1pad_verify_complete(req, err);
+
+out:
+	akcipher_request_complete(req, err);
 }
 
 /*


* [PATCH 11/32] crypto: cryptd - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (9 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 10/32] crypto: rsa-pkcs1pad - Use akcipher_request_complete Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-02-08  5:56   ` [v2 PATCH " Herbert Xu
  2023-01-31  8:02 ` [PATCH 12/32] crypto: atmel " Herbert Xu
                   ` (20 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/cryptd.c |  228 +++++++++++++++++++++++++++++---------------------------
 1 file changed, 120 insertions(+), 108 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index c0c416eda8e8..06ef3fcbe4ae 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -72,7 +72,6 @@ struct cryptd_skcipher_ctx {
 };
 
 struct cryptd_skcipher_request_ctx {
-	crypto_completion_t complete;
 	struct skcipher_request req;
 };
 
@@ -83,6 +82,7 @@ struct cryptd_hash_ctx {
 
 struct cryptd_hash_request_ctx {
 	crypto_completion_t complete;
+	void *data;
 	struct shash_desc desc;
 };
 
@@ -92,7 +92,6 @@ struct cryptd_aead_ctx {
 };
 
 struct cryptd_aead_request_ctx {
-	crypto_completion_t complete;
 	struct aead_request req;
 };
 
@@ -178,8 +177,8 @@ static void cryptd_queue_worker(struct work_struct *work)
 		return;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
-	req->complete(req, 0);
+		crypto_request_complete(backlog, -EINPROGRESS);
+	crypto_request_complete(req, 0);
 
 	if (cpu_queue->queue.qlen)
 		queue_work(cryptd_wq, &cpu_queue->work);
@@ -238,18 +237,47 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent,
 	return crypto_skcipher_setkey(child, key, keylen);
 }
 
-static void cryptd_skcipher_complete(struct skcipher_request *req, int err)
+static struct skcipher_request *cryptd_skcipher_prepare(
+	struct skcipher_request *req, int err)
+{
+	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->req;
+	struct cryptd_skcipher_ctx *ctx;
+	struct crypto_skcipher *child;
+
+	req->base.complete = subreq->base.complete;
+	req->base.data = subreq->base.data;
+
+	if (unlikely(err == -EINPROGRESS))
+		return NULL;
+
+	ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	child = ctx->child;
+
+	skcipher_request_set_tfm(subreq, child);
+	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+				      NULL, NULL);
+	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+				   req->iv);
+
+	return subreq;
+}
+
+static void cryptd_skcipher_complete(struct skcipher_request *req, int err,
+				     crypto_completion_t complete)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	int refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	rctx->complete(&req->base, err);
+	skcipher_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (unlikely(err == -EINPROGRESS)) {
+		req->base.complete = complete;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_skcipher(tfm);
 }
 
@@ -257,54 +285,26 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base,
 				    int err)
 {
 	struct skcipher_request *req = skcipher_request_cast(base);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct skcipher_request *subreq = &rctx->req;
-	struct crypto_skcipher *child = ctx->child;
-
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	skcipher_request_set_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
-	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
-				   req->iv);
-
-	err = crypto_skcipher_encrypt(subreq);
+	struct skcipher_request *subreq;
 
-	req->base.complete = rctx->complete;
+	subreq = cryptd_skcipher_prepare(req, err);
+	if (likely(subreq))
+		err = crypto_skcipher_encrypt(subreq);
 
-out:
-	cryptd_skcipher_complete(req, err);
+	cryptd_skcipher_complete(req, err, cryptd_skcipher_encrypt);
 }
 
 static void cryptd_skcipher_decrypt(struct crypto_async_request *base,
 				    int err)
 {
 	struct skcipher_request *req = skcipher_request_cast(base);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct skcipher_request *subreq = &rctx->req;
-	struct crypto_skcipher *child = ctx->child;
+	struct skcipher_request *subreq;
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	skcipher_request_set_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
-	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
-				   req->iv);
-
-	err = crypto_skcipher_decrypt(subreq);
-
-	req->base.complete = rctx->complete;
+	subreq = cryptd_skcipher_prepare(req, err);
+	if (likely(subreq))
+		err = crypto_skcipher_decrypt(subreq);
 
-out:
-	cryptd_skcipher_complete(req, err);
+	cryptd_skcipher_complete(req, err, cryptd_skcipher_decrypt);
 }
 
 static int cryptd_skcipher_enqueue(struct skcipher_request *req,
@@ -312,11 +312,14 @@ static int cryptd_skcipher_enqueue(struct skcipher_request *req,
 {
 	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_request *subreq = &rctx->req;
 	struct cryptd_queue *queue;
 
 	queue = cryptd_get_queue(crypto_skcipher_tfm(tfm));
-	rctx->complete = req->base.complete;
+	subreq->base.complete = req->base.complete;
+	subreq->base.data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 
 	return cryptd_enqueue_request(queue, &req->base);
 }
@@ -469,45 +472,63 @@ static int cryptd_hash_enqueue(struct ahash_request *req,
 		cryptd_get_queue(crypto_ahash_tfm(tfm));
 
 	rctx->complete = req->base.complete;
+	rctx->data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 
 	return cryptd_enqueue_request(queue, &req->base);
 }
 
-static void cryptd_hash_complete(struct ahash_request *req, int err)
+static struct shash_desc *cryptd_hash_prepare(struct ahash_request *req,
+					      int err)
+{
+	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+
+	req->base.complete = rctx->complete;
+	req->base.data = rctx->data;
+
+	if (unlikely(err == -EINPROGRESS))
+		return NULL;
+
+	return &rctx->desc;
+}
+
+static void cryptd_hash_complete(struct ahash_request *req, int err,
+				 crypto_completion_t complete)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
 	int refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	rctx->complete(&req->base, err);
+	ahash_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (err == -EINPROGRESS) {
+		req->base.complete = complete;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_ahash(tfm);
 }
 
 static void cryptd_hash_init(struct crypto_async_request *req_async, int err)
 {
-	struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
-	struct crypto_shash *child = ctx->child;
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-	struct shash_desc *desc = &rctx->desc;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_shash *child = ctx->child;
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
+	desc = cryptd_hash_prepare(req, err);
+	if (unlikely(!desc))
 		goto out;
 
 	desc->tfm = child;
 
 	err = crypto_shash_init(desc);
 
-	req->base.complete = rctx->complete;
-
 out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_init);
 }
 
 static int cryptd_hash_init_enqueue(struct ahash_request *req)
@@ -518,19 +539,13 @@ static int cryptd_hash_init_enqueue(struct ahash_request *req)
 static void cryptd_hash_update(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx;
-
-	rctx = ahash_request_ctx(req);
-
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = shash_ahash_update(req, &rctx->desc);
+	struct shash_desc *desc;
 
-	req->base.complete = rctx->complete;
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = shash_ahash_update(req, desc);
 
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_update);
 }
 
 static int cryptd_hash_update_enqueue(struct ahash_request *req)
@@ -541,17 +556,13 @@ static int cryptd_hash_update_enqueue(struct ahash_request *req)
 static void cryptd_hash_final(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = crypto_shash_final(&rctx->desc, req->result);
+	struct shash_desc *desc;
 
-	req->base.complete = rctx->complete;
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = crypto_shash_final(desc, req->result);
 
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_final);
 }
 
 static int cryptd_hash_final_enqueue(struct ahash_request *req)
@@ -562,17 +573,13 @@ static int cryptd_hash_final_enqueue(struct ahash_request *req)
 static void cryptd_hash_finup(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = shash_ahash_finup(req, &rctx->desc);
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = shash_ahash_finup(req, desc);
 
-	req->base.complete = rctx->complete;
-
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_finup);
 }
 
 static int cryptd_hash_finup_enqueue(struct ahash_request *req)
@@ -582,23 +589,22 @@ static int cryptd_hash_finup_enqueue(struct ahash_request *req)
 
 static void cryptd_hash_digest(struct crypto_async_request *req_async, int err)
 {
-	struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
-	struct crypto_shash *child = ctx->child;
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-	struct shash_desc *desc = &rctx->desc;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_shash *child = ctx->child;
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
+	desc = cryptd_hash_prepare(req, err);
+	if (unlikely(!desc))
 		goto out;
 
 	desc->tfm = child;
 
 	err = shash_ahash_digest(req, desc);
 
-	req->base.complete = rctx->complete;
-
 out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_digest);
 }
 
 static int cryptd_hash_digest_enqueue(struct ahash_request *req)
@@ -711,20 +717,20 @@ static int cryptd_aead_setauthsize(struct crypto_aead *parent,
 }
 
 static void cryptd_aead_crypt(struct aead_request *req,
-			struct crypto_aead *child,
-			int err,
-			int (*crypt)(struct aead_request *req))
+			      struct crypto_aead *child, int err,
+			      int (*crypt)(struct aead_request *req),
+			      crypto_completion_t compl)
 {
 	struct cryptd_aead_request_ctx *rctx;
 	struct aead_request *subreq;
 	struct cryptd_aead_ctx *ctx;
-	crypto_completion_t compl;
 	struct crypto_aead *tfm;
 	int refcnt;
 
 	rctx = aead_request_ctx(req);
-	compl = rctx->complete;
 	subreq = &rctx->req;
+	req->base.complete = subreq->base.complete;
+	req->base.data = subreq->base.data;
 
 	tfm = crypto_aead_reqtfm(req);
 
@@ -740,17 +746,18 @@ static void cryptd_aead_crypt(struct aead_request *req,
 
 	err = crypt( req );
 
-	req->base.complete = compl;
-
 out:
 	ctx = crypto_aead_ctx(tfm);
 	refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	compl(&req->base, err);
+	aead_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (err == -EINPROGRESS) {
+		req->base.complete = compl;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_aead(tfm);
 }
 
@@ -761,7 +768,8 @@ static void cryptd_aead_encrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt,
+			  cryptd_aead_encrypt);
 }
 
 static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
@@ -771,7 +779,8 @@ static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt,
+			  cryptd_aead_decrypt);
 }
 
 static int cryptd_aead_enqueue(struct aead_request *req,
@@ -780,9 +789,12 @@ static int cryptd_aead_enqueue(struct aead_request *req,
 	struct cryptd_aead_request_ctx *rctx = aead_request_ctx(req);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cryptd_queue *queue = cryptd_get_queue(crypto_aead_tfm(tfm));
+	struct aead_request *subreq = &rctx->req;
 
-	rctx->complete = req->base.complete;
+	subreq->base.complete = req->base.complete;
+	subreq->base.data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 	return cryptd_enqueue_request(queue, &req->base);
 }
 


* [PATCH 12/32] crypto: atmel - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (10 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 11/32] crypto: cryptd - Use request_complete helpers Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 13/32] crypto: artpec6 " Herbert Xu
                   ` (19 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/atmel-aes.c  |    4 ++--
 drivers/crypto/atmel-sha.c  |    4 ++--
 drivers/crypto/atmel-tdes.c |    4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index e90e4a6cc37a..ed10f2ae4523 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -554,7 +554,7 @@ static inline int atmel_aes_complete(struct atmel_aes_dev *dd, int err)
 	}
 
 	if (dd->is_async)
-		dd->areq->complete(dd->areq, err);
+		crypto_request_complete(dd->areq, err);
 
 	tasklet_schedule(&dd->queue_task);
 
@@ -955,7 +955,7 @@ static int atmel_aes_handle_queue(struct atmel_aes_dev *dd,
 		return ret;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	ctx = crypto_tfm_ctx(areq->tfm);
 
diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 00be792e605c..a77cf0da0816 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -292,7 +292,7 @@ static inline int atmel_sha_complete(struct atmel_sha_dev *dd, int err)
 	clk_disable(dd->iclk);
 
 	if ((dd->is_async || dd->force_complete) && req->base.complete)
-		req->base.complete(&req->base, err);
+		ahash_request_complete(req, err);
 
 	/* handle new request */
 	tasklet_schedule(&dd->queue_task);
@@ -1080,7 +1080,7 @@ static int atmel_sha_handle_queue(struct atmel_sha_dev *dd,
 		return ret;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	ctx = crypto_tfm_ctx(async_req->tfm);
 
diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index 8b7bc1076e0d..b2d48c1649b9 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -590,7 +590,7 @@ static void atmel_tdes_finish_req(struct atmel_tdes_dev *dd, int err)
 	if (!err && (rctx->mode & TDES_FLAGS_OPMODE_MASK) != TDES_FLAGS_ECB)
 		atmel_tdes_set_iv_as_last_ciphertext_block(dd);
 
-	req->base.complete(&req->base, err);
+	skcipher_request_complete(req, err);
 }
 
 static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd,
@@ -619,7 +619,7 @@ static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd,
 		return ret;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	req = skcipher_request_cast(async_req);
 


* [PATCH 13/32] crypto: artpec6 - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (11 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 12/32] crypto: atmel " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-02-08  7:20   ` Jesper Nilsson
  2023-01-31  8:02 ` [PATCH 14/32] crypto: bcm " Herbert Xu
                   ` (18 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/axis/artpec6_crypto.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c
index f6f41e316dfe..8493a45e1bd4 100644
--- a/drivers/crypto/axis/artpec6_crypto.c
+++ b/drivers/crypto/axis/artpec6_crypto.c
@@ -2143,13 +2143,13 @@ static void artpec6_crypto_task(unsigned long data)
 
 	list_for_each_entry_safe(req, n, &complete_in_progress,
 				 complete_in_progress) {
-		req->req->complete(req->req, -EINPROGRESS);
+		crypto_request_complete(req->req, -EINPROGRESS);
 	}
 }
 
 static void artpec6_crypto_complete_crypto(struct crypto_async_request *req)
 {
-	req->complete(req, 0);
+	crypto_request_complete(req, 0);
 }
 
 static void
@@ -2161,7 +2161,7 @@ artpec6_crypto_complete_cbc_decrypt(struct crypto_async_request *req)
 	scatterwalk_map_and_copy(cipher_req->iv, cipher_req->src,
 				 cipher_req->cryptlen - AES_BLOCK_SIZE,
 				 AES_BLOCK_SIZE, 0);
-	req->complete(req, 0);
+	skcipher_request_complete(cipher_req, 0);
 }
 
 static void
@@ -2173,7 +2173,7 @@ artpec6_crypto_complete_cbc_encrypt(struct crypto_async_request *req)
 	scatterwalk_map_and_copy(cipher_req->iv, cipher_req->dst,
 				 cipher_req->cryptlen - AES_BLOCK_SIZE,
 				 AES_BLOCK_SIZE, 0);
-	req->complete(req, 0);
+	skcipher_request_complete(cipher_req, 0);
 }
 
 static void artpec6_crypto_complete_aead(struct crypto_async_request *req)
@@ -2211,12 +2211,12 @@ static void artpec6_crypto_complete_aead(struct crypto_async_request *req)
 		}
 	}
 
-	req->complete(req, result);
+	aead_request_complete(areq, result);
 }
 
 static void artpec6_crypto_complete_hash(struct crypto_async_request *req)
 {
-	req->complete(req, 0);
+	crypto_request_complete(req, 0);
 }
 
 


* [PATCH 14/32] crypto: bcm - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (12 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 13/32] crypto: artpec6 " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 15/32] crypto: cpt " Herbert Xu
                   ` (17 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/bcm/cipher.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
index f8e035039aeb..70b911baab26 100644
--- a/drivers/crypto/bcm/cipher.c
+++ b/drivers/crypto/bcm/cipher.c
@@ -1614,7 +1614,7 @@ static void finish_req(struct iproc_reqctx_s *rctx, int err)
 	spu_chunk_cleanup(rctx);
 
 	if (areq)
-		areq->complete(areq, err);
+		crypto_request_complete(areq, err);
 }
 
 /**


* [PATCH 15/32] crypto: cpt - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (13 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 14/32] crypto: bcm " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 16/32] crypto: nitrox " Herbert Xu
                   ` (16 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/cavium/cpt/cptvf_algs.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/cavium/cpt/cptvf_algs.c b/drivers/crypto/cavium/cpt/cptvf_algs.c
index 0b38c2600b86..ee476c6c7f82 100644
--- a/drivers/crypto/cavium/cpt/cptvf_algs.c
+++ b/drivers/crypto/cavium/cpt/cptvf_algs.c
@@ -28,7 +28,7 @@ static void cvm_callback(u32 status, void *arg)
 {
 	struct crypto_async_request *req = (struct crypto_async_request *)arg;
 
-	req->complete(req, !status);
+	crypto_request_complete(req, !status);
 }
 
 static inline void update_input_iv(struct cpt_request_info *req_info,


* [PATCH 16/32] crypto: nitrox - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (14 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 15/32] crypto: cpt " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 17/32] crypto: ccp " Herbert Xu
                   ` (15 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/cavium/nitrox/nitrox_aead.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/cavium/nitrox/nitrox_aead.c b/drivers/crypto/cavium/nitrox/nitrox_aead.c
index 0653484df23f..b0e53034164a 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_aead.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_aead.c
@@ -199,7 +199,7 @@ static void nitrox_aead_callback(void *arg, int err)
 		err = -EINVAL;
 	}
 
-	areq->base.complete(&areq->base, err);
+	aead_request_complete(areq, err);
 }
 
 static inline bool nitrox_aes_gcm_assoclen_supported(unsigned int assoclen)
@@ -434,7 +434,7 @@ static void nitrox_rfc4106_callback(void *arg, int err)
 		err = -EINVAL;
 	}
 
-	areq->base.complete(&areq->base, err);
+	aead_request_complete(areq, err);
 }
 
 static int nitrox_rfc4106_enc(struct aead_request *areq)


* [PATCH 17/32] crypto: ccp - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (15 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 16/32] crypto: nitrox " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31 15:21   ` Tom Lendacky
  2023-01-31  8:02 ` [PATCH 18/32] crypto: chelsio " Herbert Xu
                   ` (14 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/ccp/ccp-crypto-main.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index 73442a382f68..ecd58b38c46e 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -146,7 +146,7 @@ static void ccp_crypto_complete(void *data, int err)
 		/* Only propagate the -EINPROGRESS if necessary */
 		if (crypto_cmd->ret == -EBUSY) {
 			crypto_cmd->ret = -EINPROGRESS;
-			req->complete(req, -EINPROGRESS);
+			crypto_request_complete(req, -EINPROGRESS);
 		}
 
 		return;
@@ -159,18 +159,18 @@ static void ccp_crypto_complete(void *data, int err)
 	held = ccp_crypto_cmd_complete(crypto_cmd, &backlog);
 	if (backlog) {
 		backlog->ret = -EINPROGRESS;
-		backlog->req->complete(backlog->req, -EINPROGRESS);
+		crypto_request_complete(backlog->req, -EINPROGRESS);
 	}
 
 	/* Transition the state from -EBUSY to -EINPROGRESS first */
 	if (crypto_cmd->ret == -EBUSY)
-		req->complete(req, -EINPROGRESS);
+		crypto_request_complete(req, -EINPROGRESS);
 
 	/* Completion callbacks */
 	ret = err;
 	if (ctx->complete)
 		ret = ctx->complete(req, ret);
-	req->complete(req, ret);
+	crypto_request_complete(req, ret);
 
 	/* Submit the next cmd */
 	while (held) {
@@ -186,12 +186,12 @@ static void ccp_crypto_complete(void *data, int err)
 		ctx = crypto_tfm_ctx_dma(held->req->tfm);
 		if (ctx->complete)
 			ret = ctx->complete(held->req, ret);
-		held->req->complete(held->req, ret);
+		crypto_request_complete(held->req, ret);
 
 		next = ccp_crypto_cmd_complete(held, &backlog);
 		if (backlog) {
 			backlog->ret = -EINPROGRESS;
-			backlog->req->complete(backlog->req, -EINPROGRESS);
+			crypto_request_complete(backlog->req, -EINPROGRESS);
 		}
 
 		kfree(held);


* [PATCH 18/32] crypto: chelsio - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (16 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 17/32] crypto: ccp " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 19/32] crypto: hifn_795x " Herbert Xu
                   ` (13 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/chelsio/chcr_algo.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 68d65773ef2b..0eade4fa6695 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -220,7 +220,7 @@ static inline int chcr_handle_aead_resp(struct aead_request *req,
 		reqctx->verify = VERIFY_HW;
 	}
 	chcr_dec_wrcount(dev);
-	req->base.complete(&req->base, err);
+	aead_request_complete(req, err);
 
 	return err;
 }
@@ -1235,7 +1235,7 @@ static int chcr_handle_cipher_resp(struct skcipher_request *req,
 		complete(&ctx->cbc_aes_aio_done);
 	}
 	chcr_dec_wrcount(dev);
-	req->base.complete(&req->base, err);
+	skcipher_request_complete(req, err);
 	return err;
 }
 
@@ -2132,7 +2132,7 @@ static inline void chcr_handle_ahash_resp(struct ahash_request *req,
 
 out:
 	chcr_dec_wrcount(dev);
-	req->base.complete(&req->base, err);
+	ahash_request_complete(req, err);
 }
 
 /*


* [PATCH 19/32] crypto: hifn_795x - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (17 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 18/32] crypto: chelsio " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 20/32] crypto: hisilicon " Herbert Xu
                   ` (12 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/hifn_795x.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/hifn_795x.c b/drivers/crypto/hifn_795x.c
index 7e7a8f01ea6b..5a7f6611803c 100644
--- a/drivers/crypto/hifn_795x.c
+++ b/drivers/crypto/hifn_795x.c
@@ -1705,7 +1705,7 @@ static void hifn_process_ready(struct skcipher_request *req, int error)
 		hifn_cipher_walk_exit(&rctx->walk);
 	}
 
-	req->base.complete(&req->base, error);
+	skcipher_request_complete(req, error);
 }
 
 static void hifn_clear_rings(struct hifn_device *dev, int error)
@@ -2054,7 +2054,7 @@ static int hifn_process_queue(struct hifn_device *dev)
 			break;
 
 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);
 
 		req = skcipher_request_cast(async_req);
 


* [PATCH 20/32] crypto: hisilicon - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (18 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 19/32] crypto: hifn_795x " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 21/32] crypto: img-hash " Herbert Xu
                   ` (11 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/hisilicon/sec/sec_algs.c    |    6 +++---
 drivers/crypto/hisilicon/sec2/sec_crypto.c |   10 ++++------
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c
index 490e1542305e..1189effcdad0 100644
--- a/drivers/crypto/hisilicon/sec/sec_algs.c
+++ b/drivers/crypto/hisilicon/sec/sec_algs.c
@@ -504,8 +504,8 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
 		     kfifo_avail(&ctx->queue->softqueue) >
 		     backlog_req->num_elements)) {
 			sec_send_request(backlog_req, ctx->queue);
-			backlog_req->req_base->complete(backlog_req->req_base,
-							-EINPROGRESS);
+			crypto_request_complete(backlog_req->req_base,
+						-EINPROGRESS);
 			list_del(&backlog_req->backlog_head);
 		}
 	}
@@ -534,7 +534,7 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
 		if (skreq->src != skreq->dst)
 			dma_unmap_sg(dev, skreq->dst, sec_req->len_out,
 				     DMA_BIDIRECTIONAL);
-		skreq->base.complete(&skreq->base, sec_req->err);
+		skcipher_request_complete(skreq, sec_req->err);
 	}
 }
 
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index f5bfc9755a4a..074e50ef512c 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -1459,12 +1459,11 @@ static void sec_skcipher_callback(struct sec_ctx *ctx, struct sec_req *req,
 			break;
 
 		backlog_sk_req = backlog_req->c_req.sk_req;
-		backlog_sk_req->base.complete(&backlog_sk_req->base,
-						-EINPROGRESS);
+		skcipher_request_complete(backlog_sk_req, -EINPROGRESS);
 		atomic64_inc(&ctx->sec->debug.dfx.recv_busy_cnt);
 	}
 
-	sk_req->base.complete(&sk_req->base, err);
+	skcipher_request_complete(sk_req, err);
 }
 
 static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req)
@@ -1736,12 +1735,11 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
 			break;
 
 		backlog_aead_req = backlog_req->aead_req.aead_req;
-		backlog_aead_req->base.complete(&backlog_aead_req->base,
-						-EINPROGRESS);
+		aead_request_complete(backlog_aead_req, -EINPROGRESS);
 		atomic64_inc(&c->sec->debug.dfx.recv_busy_cnt);
 	}
 
-	a_req->base.complete(&a_req->base, err);
+	aead_request_complete(a_req, err);
 }
 
 static void sec_request_uninit(struct sec_ctx *ctx, struct sec_req *req)


* [PATCH 21/32] crypto: img-hash - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (19 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 20/32] crypto: hisilicon " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 22/32] crypto: safexcel " Herbert Xu
                   ` (10 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/img-hash.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
index 9629e98bd68b..b4155993a7fc 100644
--- a/drivers/crypto/img-hash.c
+++ b/drivers/crypto/img-hash.c
@@ -308,7 +308,7 @@ static void img_hash_finish_req(struct ahash_request *req, int err)
 		DRIVER_FLAGS_CPU | DRIVER_FLAGS_BUSY | DRIVER_FLAGS_FINAL);
 
 	if (req->base.complete)
-		req->base.complete(&req->base, err);
+		ahash_request_complete(req, err);
 }
 
 static int img_hash_write_via_dma(struct img_hash_dev *hdev)
@@ -526,7 +526,7 @@ static int img_hash_handle_queue(struct img_hash_dev *hdev,
 		return res;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	req = ahash_request_cast(async_req);
 	hdev->req = req;


* [PATCH 22/32] crypto: safexcel - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (20 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 21/32] crypto: img-hash " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 23/32] crypto: ixp4xx " Herbert Xu
                   ` (9 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/inside-secure/safexcel.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
index ae6110376e21..5fac36d98070 100644
--- a/drivers/crypto/inside-secure/safexcel.c
+++ b/drivers/crypto/inside-secure/safexcel.c
@@ -850,7 +850,7 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
 			goto request_failed;
 
 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);
 
 		/* In case the send() helper did not issue any command to push
 		 * to the engine because the input data was cached, continue to
@@ -1050,7 +1050,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
 
 		if (should_complete) {
 			local_bh_disable();
-			req->complete(req, ret);
+			crypto_request_complete(req, ret);
 			local_bh_enable();
 		}
 


* [PATCH 23/32] crypto: ixp4xx - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (21 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 22/32] crypto: safexcel " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 24/32] crypto: marvell/cesa " Herbert Xu
                   ` (8 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/ixp4xx_crypto.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c
index 984b3cc0237c..b63e2359a133 100644
--- a/drivers/crypto/ixp4xx_crypto.c
+++ b/drivers/crypto/ixp4xx_crypto.c
@@ -382,7 +382,7 @@ static void one_packet(dma_addr_t phys)
 		if (req_ctx->hmac_virt)
 			finish_scattered_hmac(crypt);
 
-		req->base.complete(&req->base, failed);
+		aead_request_complete(req, failed);
 		break;
 	}
 	case CTL_FLAG_PERFORM_ABLK: {
@@ -407,7 +407,7 @@ static void one_packet(dma_addr_t phys)
 			free_buf_chain(dev, req_ctx->dst, crypt->dst_buf);
 
 		free_buf_chain(dev, req_ctx->src, crypt->src_buf);
-		req->base.complete(&req->base, failed);
+		skcipher_request_complete(req, failed);
 		break;
 	}
 	case CTL_FLAG_GEN_ICV:


* [PATCH 24/32] crypto: marvell/cesa - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (22 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 23/32] crypto: ixp4xx " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 25/32] crypto: octeontx " Herbert Xu
                   ` (7 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/marvell/cesa/cesa.c |    4 ++--
 drivers/crypto/marvell/cesa/tdma.c |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/marvell/cesa/cesa.c b/drivers/crypto/marvell/cesa/cesa.c
index 5cd332880653..b61e35b932e5 100644
--- a/drivers/crypto/marvell/cesa/cesa.c
+++ b/drivers/crypto/marvell/cesa/cesa.c
@@ -66,7 +66,7 @@ static void mv_cesa_rearm_engine(struct mv_cesa_engine *engine)
 		return;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	ctx = crypto_tfm_ctx(req->tfm);
 	ctx->ops->step(req);
@@ -106,7 +106,7 @@ mv_cesa_complete_req(struct mv_cesa_ctx *ctx, struct crypto_async_request *req,
 {
 	ctx->ops->cleanup(req);
 	local_bh_disable();
-	req->complete(req, res);
+	crypto_request_complete(req, res);
 	local_bh_enable();
 }
 
diff --git a/drivers/crypto/marvell/cesa/tdma.c b/drivers/crypto/marvell/cesa/tdma.c
index f0b5537038c2..388a06e180d6 100644
--- a/drivers/crypto/marvell/cesa/tdma.c
+++ b/drivers/crypto/marvell/cesa/tdma.c
@@ -168,7 +168,7 @@ int mv_cesa_tdma_process(struct mv_cesa_engine *engine, u32 status)
 									req);
 
 			if (backlog)
-				backlog->complete(backlog, -EINPROGRESS);
+				crypto_request_complete(backlog, -EINPROGRESS);
 		}
 
 		if (res || tdma->cur_dma == tdma_cur)


* [PATCH 25/32] crypto: octeontx - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (23 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 24/32] crypto: marvell/cesa " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 26/32] crypto: octeontx2 " Herbert Xu
                   ` (6 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/marvell/octeontx/otx_cptvf_algs.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
index 83493dd0416f..1c2c870e887a 100644
--- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
@@ -138,7 +138,7 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2)
 
 complete:
 	if (areq)
-		areq->complete(areq, status);
+		crypto_request_complete(areq, status);
 }
 
 static void output_iv_copyback(struct crypto_async_request *areq)
@@ -188,7 +188,7 @@ static void otx_cpt_skcipher_callback(int status, void *arg1, void *arg2)
 			pdev = cpt_info->pdev;
 			do_request_cleanup(pdev, cpt_info);
 		}
-		areq->complete(areq, status);
+		crypto_request_complete(areq, status);
 	}
 }
 


* [PATCH 26/32] crypto: octeontx2 - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (24 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 25/32] crypto: octeontx " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 27/32] crypto: mxs-dcp " Herbert Xu
                   ` (5 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
index 443202caa140..e27ddd3c4e55 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
@@ -120,7 +120,7 @@ static void otx2_cpt_aead_callback(int status, void *arg1, void *arg2)
 		otx2_cpt_info_destroy(pdev, inst_info);
 	}
 	if (areq)
-		areq->complete(areq, status);
+		crypto_request_complete(areq, status);
 }
 
 static void output_iv_copyback(struct crypto_async_request *areq)
@@ -170,7 +170,7 @@ static void otx2_cpt_skcipher_callback(int status, void *arg1, void *arg2)
 			pdev = inst_info->pdev;
 			otx2_cpt_info_destroy(pdev, inst_info);
 		}
-		areq->complete(areq, status);
+		crypto_request_complete(areq, status);
 	}
 }
 


* [PATCH 27/32] crypto: mxs-dcp - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (25 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 26/32] crypto: octeontx2 " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 28/32] crypto: qat " Herbert Xu
                   ` (4 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/mxs-dcp.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
index d6f9e2fe863d..1c11946a4f0b 100644
--- a/drivers/crypto/mxs-dcp.c
+++ b/drivers/crypto/mxs-dcp.c
@@ -413,11 +413,11 @@ static int dcp_chan_thread_aes(void *data)
 		set_current_state(TASK_RUNNING);
 
 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);
 
 		if (arq) {
 			ret = mxs_dcp_aes_block_crypt(arq);
-			arq->complete(arq, ret);
+			crypto_request_complete(arq, ret);
 		}
 	}
 
@@ -709,11 +709,11 @@ static int dcp_chan_thread_sha(void *data)
 		set_current_state(TASK_RUNNING);
 
 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);
 
 		if (arq) {
 			ret = dcp_sha_req_to_buf(arq);
-			arq->complete(arq, ret);
+			crypto_request_complete(arq, ret);
 		}
 	}
 


* [PATCH 28/32] crypto: qat - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (26 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 27/32] crypto: mxs-dcp " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-02-01 16:57   ` Giovanni Cabiddu
  2023-01-31  8:02 ` [PATCH 29/32] crypto: qce " Herbert Xu
                   ` (3 subsequent siblings)
  31 siblings, 1 reply; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/qat/qat_common/qat_algs.c      |    4 ++--
 drivers/crypto/qat/qat_common/qat_algs_send.c |    3 ++-
 drivers/crypto/qat/qat_common/qat_comp_algs.c |    4 ++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index b4b9f0aa59b9..bcd239b11ec0 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -676,7 +676,7 @@ static void qat_aead_alg_callback(struct icp_qat_fw_la_resp *qat_resp,
 	qat_bl_free_bufl(inst->accel_dev, &qat_req->buf);
 	if (unlikely(qat_res != ICP_QAT_FW_COMN_STATUS_FLAG_OK))
 		res = -EBADMSG;
-	areq->base.complete(&areq->base, res);
+	aead_request_complete(areq, res);
 }
 
 static void qat_alg_update_iv_ctr_mode(struct qat_crypto_request *qat_req)
@@ -752,7 +752,7 @@ static void qat_skcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp,
 
 	memcpy(sreq->iv, qat_req->iv, AES_BLOCK_SIZE);
 
-	sreq->base.complete(&sreq->base, res);
+	skcipher_request_complete(sreq, res);
 }
 
 void qat_alg_callback(void *resp)
diff --git a/drivers/crypto/qat/qat_common/qat_algs_send.c b/drivers/crypto/qat/qat_common/qat_algs_send.c
index ff5b4347f783..bb80455b3e81 100644
--- a/drivers/crypto/qat/qat_common/qat_algs_send.c
+++ b/drivers/crypto/qat/qat_common/qat_algs_send.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
 /* Copyright(c) 2022 Intel Corporation */
+#include <crypto/algapi.h>
 #include "adf_transport.h"
 #include "qat_algs_send.h"
 #include "qat_crypto.h"
@@ -34,7 +35,7 @@ void qat_alg_send_backlog(struct qat_instance_backlog *backlog)
 			break;
 		}
 		list_del(&req->list);
-		req->base->complete(req->base, -EINPROGRESS);
+		crypto_request_complete(req->base, -EINPROGRESS);
 	}
 	spin_unlock_bh(&backlog->lock);
 }
diff --git a/drivers/crypto/qat/qat_common/qat_comp_algs.c b/drivers/crypto/qat/qat_common/qat_comp_algs.c
index 1480d36a8d2b..cf0ac9f8ea83 100644
--- a/drivers/crypto/qat/qat_common/qat_comp_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_comp_algs.c
@@ -94,7 +94,7 @@ static void qat_comp_resubmit(struct work_struct *work)
 
 err:
 	qat_bl_free_bufl(accel_dev, qat_bufs);
-	areq->base.complete(&areq->base, ret);
+	acomp_request_complete(areq, ret);
 }
 
 static void qat_comp_generic_callback(struct qat_compression_req *qat_req,
@@ -169,7 +169,7 @@ static void qat_comp_generic_callback(struct qat_compression_req *qat_req,
 
 end:
 	qat_bl_free_bufl(accel_dev, &qat_req->buf);
-	areq->base.complete(&areq->base, res);
+	acomp_request_complete(areq, res);
 }
 
 void qat_comp_alg_callback(void *resp)


* [PATCH 29/32] crypto: qce - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (27 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 28/32] crypto: qat " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 30/32] crypto: s5p-sss " Herbert Xu
                   ` (2 subsequent siblings)
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/qce/core.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c
index d3780be44a76..74deca4f96e0 100644
--- a/drivers/crypto/qce/core.c
+++ b/drivers/crypto/qce/core.c
@@ -107,7 +107,7 @@ static int qce_handle_queue(struct qce_device *qce,
 
 	if (backlog) {
 		spin_lock_bh(&qce->lock);
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 		spin_unlock_bh(&qce->lock);
 	}
 
@@ -132,7 +132,7 @@ static void qce_tasklet_req_done(unsigned long data)
 	spin_unlock_irqrestore(&qce->lock, flags);
 
 	if (req)
-		req->complete(req, qce->result);
+		crypto_request_complete(req, qce->result);
 
 	qce_handle_queue(qce, NULL);
 }


* [PATCH 30/32] crypto: s5p-sss - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (28 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 29/32] crypto: qce " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 31/32] crypto: sahara " Herbert Xu
  2023-01-31  8:02 ` [PATCH 32/32] crypto: talitos " Herbert Xu
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/s5p-sss.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
index b79e49aa724f..1c4d5fb05d69 100644
--- a/drivers/crypto/s5p-sss.c
+++ b/drivers/crypto/s5p-sss.c
@@ -499,7 +499,7 @@ static void s5p_sg_done(struct s5p_aes_dev *dev)
 /* Calls the completion. Cannot be called with dev->lock hold. */
 static void s5p_aes_complete(struct skcipher_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	skcipher_request_complete(req, err);
 }
 
 static void s5p_unset_outdata(struct s5p_aes_dev *dev)
@@ -1355,7 +1355,7 @@ static void s5p_hash_finish_req(struct ahash_request *req, int err)
 	spin_unlock_irqrestore(&dd->hash_lock, flags);
 
 	if (req->base.complete)
-		req->base.complete(&req->base, err);
+		ahash_request_complete(req, err);
 }
 
 /**
@@ -1397,7 +1397,7 @@ static int s5p_hash_handle_queue(struct s5p_aes_dev *dd,
 		return ret;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	req = ahash_request_cast(async_req);
 	dd->hash_req = req;
@@ -1991,7 +1991,7 @@ static void s5p_tasklet_cb(unsigned long data)
 	spin_unlock_irqrestore(&dev->lock, flags);
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	dev->req = skcipher_request_cast(async_req);
 	dev->ctx = crypto_tfm_ctx(dev->req->base.tfm);


* [PATCH 31/32] crypto: sahara - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (29 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 30/32] crypto: s5p-sss " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  2023-01-31  8:02 ` [PATCH 32/32] crypto: talitos " Herbert Xu
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/sahara.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
index 7ab20fb95166..dd4c703cd855 100644
--- a/drivers/crypto/sahara.c
+++ b/drivers/crypto/sahara.c
@@ -1049,7 +1049,7 @@ static int sahara_queue_manage(void *data)
 		spin_unlock_bh(&dev->queue_spinlock);
 
 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);
 
 		if (async_req) {
 			if (crypto_tfm_alg_type(async_req->tfm) ==
@@ -1065,7 +1065,7 @@ static int sahara_queue_manage(void *data)
 				ret = sahara_aes_process(req);
 			}
 
-			async_req->complete(async_req, ret);
+			crypto_request_complete(async_req, ret);
 
 			continue;
 		}


* [PATCH 32/32] crypto: talitos - Use request_complete helpers
  2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
                   ` (30 preceding siblings ...)
  2023-01-31  8:02 ` [PATCH 31/32] crypto: sahara " Herbert Xu
@ 2023-01-31  8:02 ` Herbert Xu
  31 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-01-31  8:02 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/talitos.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index d62ec68e3183..bb27f011cf31 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -1560,7 +1560,7 @@ static void skcipher_done(struct device *dev,
 
 	kfree(edesc);
 
-	areq->base.complete(&areq->base, err);
+	skcipher_request_complete(areq, err);
 }
 
 static int common_nonsnoop(struct talitos_edesc *edesc,
@@ -1759,7 +1759,7 @@ static void ahash_done(struct device *dev,
 
 	kfree(edesc);
 
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 /*


* Re: [PATCH 17/32] crypto: ccp - Use request_complete helpers
  2023-01-31  8:02 ` [PATCH 17/32] crypto: ccp " Herbert Xu
@ 2023-01-31 15:21   ` Tom Lendacky
  0 siblings, 0 replies; 41+ messages in thread
From: Tom Lendacky @ 2023-01-31 15:21 UTC (permalink / raw)
  To: Herbert Xu, Linux Crypto Mailing List, Tudor Ambarus,
	Jesper Nilsson, Lars Persson, linux-arm-kernel,
	Raveendra Padasalagi, George Cherian, John Allen, Ayush Sawal,
	Kai Ye, Longfang Liu, Antoine Tenart, Corentin Labbe,
	Boris Brezillon, Arnaud Ebalard, Srujana Challa,
	Giovanni Cabiddu, qat-linux, Thara Gopinath, Krzysztof Kozlowski,
	Vladimir Zapolskiy

On 1/31/23 02:02, Herbert Xu wrote:
> Use the request_complete helpers instead of calling the completion
> function directly.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Acked-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
> 
>   drivers/crypto/ccp/ccp-crypto-main.c |   12 ++++++------
>   1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
> index 73442a382f68..ecd58b38c46e 100644
> --- a/drivers/crypto/ccp/ccp-crypto-main.c
> +++ b/drivers/crypto/ccp/ccp-crypto-main.c
> @@ -146,7 +146,7 @@ static void ccp_crypto_complete(void *data, int err)
>   		/* Only propagate the -EINPROGRESS if necessary */
>   		if (crypto_cmd->ret == -EBUSY) {
>   			crypto_cmd->ret = -EINPROGRESS;
> -			req->complete(req, -EINPROGRESS);
> +			crypto_request_complete(req, -EINPROGRESS);
>   		}
>   
>   		return;
> @@ -159,18 +159,18 @@ static void ccp_crypto_complete(void *data, int err)
>   	held = ccp_crypto_cmd_complete(crypto_cmd, &backlog);
>   	if (backlog) {
>   		backlog->ret = -EINPROGRESS;
> -		backlog->req->complete(backlog->req, -EINPROGRESS);
> +		crypto_request_complete(backlog->req, -EINPROGRESS);
>   	}
>   
>   	/* Transition the state from -EBUSY to -EINPROGRESS first */
>   	if (crypto_cmd->ret == -EBUSY)
> -		req->complete(req, -EINPROGRESS);
> +		crypto_request_complete(req, -EINPROGRESS);
>   
>   	/* Completion callbacks */
>   	ret = err;
>   	if (ctx->complete)
>   		ret = ctx->complete(req, ret);
> -	req->complete(req, ret);
> +	crypto_request_complete(req, ret);
>   
>   	/* Submit the next cmd */
>   	while (held) {
> @@ -186,12 +186,12 @@ static void ccp_crypto_complete(void *data, int err)
>   		ctx = crypto_tfm_ctx_dma(held->req->tfm);
>   		if (ctx->complete)
>   			ret = ctx->complete(held->req, ret);
> -		held->req->complete(held->req, ret);
> +		crypto_request_complete(held->req, ret);
>   
>   		next = ccp_crypto_cmd_complete(held, &backlog);
>   		if (backlog) {
>   			backlog->ret = -EINPROGRESS;
> -			backlog->req->complete(backlog->req, -EINPROGRESS);
> +			crypto_request_complete(backlog->req, -EINPROGRESS);
>   		}
>   
>   		kfree(held);

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature
  2023-01-31  8:01 ` [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature Herbert Xu
@ 2023-02-01 16:41   ` Giovanni Cabiddu
  0 siblings, 0 replies; 41+ messages in thread
From: Giovanni Cabiddu @ 2023-02-01 16:41 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, qat-linux, Thara Gopinath,
	Krzysztof Kozlowski, Vladimir Zapolskiy

On Tue, Jan 31, 2023 at 04:01:45PM +0800, Herbert Xu wrote:
> The crypto completion function currently takes a pointer to a
> struct crypto_async_request object.  However, in reality the API
> does not allow the use of any part of the object apart from the
> data field.  For example, ahash/shash will create a fake object
> on the stack to pass along a different data field.
> 
> This leads to potential bugs where the user may try to dereference
> or otherwise use the crypto_async_request object.
> 
> This patch adds some temporary scaffolding so that the completion
> function can take a void * instead.  Once affected users have been
> converted this can be removed.
> 
> The helper crypto_request_complete will remain even after the
> conversion is complete.  It should be used instead of calling
> the completion functino directly.
Typo
/s/functino/function

> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 3/32] crypto: acompress - Use crypto_request_complete
  2023-01-31  8:01 ` [PATCH 3/32] crypto: acompress - Use crypto_request_complete Herbert Xu
@ 2023-02-01 16:45   ` Giovanni Cabiddu
  0 siblings, 0 replies; 41+ messages in thread
From: Giovanni Cabiddu @ 2023-02-01 16:45 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, qat-linux, Thara Gopinath,
	Krzysztof Kozlowski, Vladimir Zapolskiy

On Tue, Jan 31, 2023 at 04:01:49PM +0800, Herbert Xu wrote:
> Use the crypto_request_complete helper instead of calling the
> completion function directly.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 28/32] crypto: qat - Use request_complete helpers
  2023-01-31  8:02 ` [PATCH 28/32] crypto: qat " Herbert Xu
@ 2023-02-01 16:57   ` Giovanni Cabiddu
  0 siblings, 0 replies; 41+ messages in thread
From: Giovanni Cabiddu @ 2023-02-01 16:57 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, qat-linux, Thara Gopinath,
	Krzysztof Kozlowski, Vladimir Zapolskiy

On Tue, Jan 31, 2023 at 04:02:42PM +0800, Herbert Xu wrote:
> Use the request_complete helpers instead of calling the completion
> function directly.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [v2 PATCH 2/32] crypto: cryptd - Use subreq for AEAD
  2023-01-31  8:01 ` [PATCH 2/32] crypto: cryptd - Use subreq for AEAD Herbert Xu
@ 2023-02-08  5:53   ` Herbert Xu
  0 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-02-08  5:53 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

v2 fixes the patch so that we actually call crypt with subreq and
not the original request.

---8<---
AEAD reuses the existing request object for its child.  This is
error-prone and unnecessary.  This patch adds a subrequest object
just like we do for skcipher and hash.

This patch also restores the original completion function as we
do for skcipher/hash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/cryptd.c |   20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 1ff58a021d57..9d60acc920cb 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -93,6 +93,7 @@ struct cryptd_aead_ctx {
 
 struct cryptd_aead_request_ctx {
 	crypto_completion_t complete;
+	struct aead_request req;
 };
 
 static void cryptd_queue_worker(struct work_struct *work);
@@ -715,6 +716,7 @@ static void cryptd_aead_crypt(struct aead_request *req,
 			int (*crypt)(struct aead_request *req))
 {
 	struct cryptd_aead_request_ctx *rctx;
+	struct aead_request *subreq;
 	struct cryptd_aead_ctx *ctx;
 	crypto_completion_t compl;
 	struct crypto_aead *tfm;
@@ -722,13 +724,23 @@ static void cryptd_aead_crypt(struct aead_request *req,
 
 	rctx = aead_request_ctx(req);
 	compl = rctx->complete;
+	subreq = &rctx->req;
 
 	tfm = crypto_aead_reqtfm(req);
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
-	aead_request_set_tfm(req, child);
-	err = crypt( req );
+
+	aead_request_set_tfm(subreq, child);
+	aead_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+				  NULL, NULL);
+	aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+			       req->iv);
+	aead_request_set_ad(subreq, req->assoclen);
+
+	err = crypt(subreq);
+
+	req->base.complete = compl;
 
 out:
 	ctx = crypto_aead_ctx(tfm);
@@ -798,8 +810,8 @@ static int cryptd_aead_init_tfm(struct crypto_aead *tfm)
 
 	ctx->child = cipher;
 	crypto_aead_set_reqsize(
-		tfm, max((unsigned)sizeof(struct cryptd_aead_request_ctx),
-			 crypto_aead_reqsize(cipher)));
+		tfm, sizeof(struct cryptd_aead_request_ctx) +
+		     crypto_aead_reqsize(cipher));
 	return 0;
 }
 
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [v2 PATCH 11/32] crypto: cryptd - Use request_complete helpers
  2023-01-31  8:02 ` [PATCH 11/32] crypto: cryptd - Use request_complete helpers Herbert Xu
@ 2023-02-08  5:56   ` Herbert Xu
  0 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-02-08  5:56 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

v2 adds some missing subreq assignments; without them the callback
breaks in case of EINPROGRESS.

---8<---
Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/cryptd.c |  234 ++++++++++++++++++++++++++++++--------------------------
 1 file changed, 126 insertions(+), 108 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 9d60acc920cb..29890fc0eab7 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -72,7 +72,6 @@ struct cryptd_skcipher_ctx {
 };
 
 struct cryptd_skcipher_request_ctx {
-	crypto_completion_t complete;
 	struct skcipher_request req;
 };
 
@@ -83,6 +82,7 @@ struct cryptd_hash_ctx {
 
 struct cryptd_hash_request_ctx {
 	crypto_completion_t complete;
+	void *data;
 	struct shash_desc desc;
 };
 
@@ -92,7 +92,6 @@ struct cryptd_aead_ctx {
 };
 
 struct cryptd_aead_request_ctx {
-	crypto_completion_t complete;
 	struct aead_request req;
 };
 
@@ -178,8 +177,8 @@ static void cryptd_queue_worker(struct work_struct *work)
 		return;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
-	req->complete(req, 0);
+		crypto_request_complete(backlog, -EINPROGRESS);
+	crypto_request_complete(req, 0);
 
 	if (cpu_queue->queue.qlen)
 		queue_work(cryptd_wq, &cpu_queue->work);
@@ -238,18 +237,51 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent,
 	return crypto_skcipher_setkey(child, key, keylen);
 }
 
-static void cryptd_skcipher_complete(struct skcipher_request *req, int err)
+static struct skcipher_request *cryptd_skcipher_prepare(
+	struct skcipher_request *req, int err)
+{
+	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->req;
+	struct cryptd_skcipher_ctx *ctx;
+	struct crypto_skcipher *child;
+
+	req->base.complete = subreq->base.complete;
+	req->base.data = subreq->base.data;
+
+	if (unlikely(err == -EINPROGRESS))
+		return NULL;
+
+	ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	child = ctx->child;
+
+	skcipher_request_set_tfm(subreq, child);
+	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+				      NULL, NULL);
+	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+				   req->iv);
+
+	return subreq;
+}
+
+static void cryptd_skcipher_complete(struct skcipher_request *req, int err,
+				     crypto_completion_t complete)
 {
+	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->req;
 	int refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	rctx->complete(&req->base, err);
+	skcipher_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (unlikely(err == -EINPROGRESS)) {
+		subreq->base.complete = req->base.complete;
+		subreq->base.data = req->base.data;
+		req->base.complete = complete;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_skcipher(tfm);
 }
 
@@ -257,54 +289,26 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base,
 				    int err)
 {
 	struct skcipher_request *req = skcipher_request_cast(base);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct skcipher_request *subreq = &rctx->req;
-	struct crypto_skcipher *child = ctx->child;
+	struct skcipher_request *subreq;
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	skcipher_request_set_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
-	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
-				   req->iv);
-
-	err = crypto_skcipher_encrypt(subreq);
-
-	req->base.complete = rctx->complete;
+	subreq = cryptd_skcipher_prepare(req, err);
+	if (likely(subreq))
+		err = crypto_skcipher_encrypt(subreq);
 
-out:
-	cryptd_skcipher_complete(req, err);
+	cryptd_skcipher_complete(req, err, cryptd_skcipher_encrypt);
 }
 
 static void cryptd_skcipher_decrypt(struct crypto_async_request *base,
 				    int err)
 {
 	struct skcipher_request *req = skcipher_request_cast(base);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct skcipher_request *subreq = &rctx->req;
-	struct crypto_skcipher *child = ctx->child;
-
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	skcipher_request_set_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
-	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
-				   req->iv);
-
-	err = crypto_skcipher_decrypt(subreq);
+	struct skcipher_request *subreq;
 
-	req->base.complete = rctx->complete;
+	subreq = cryptd_skcipher_prepare(req, err);
+	if (likely(subreq))
+		err = crypto_skcipher_decrypt(subreq);
 
-out:
-	cryptd_skcipher_complete(req, err);
+	cryptd_skcipher_complete(req, err, cryptd_skcipher_decrypt);
 }
 
 static int cryptd_skcipher_enqueue(struct skcipher_request *req,
@@ -312,11 +316,14 @@ static int cryptd_skcipher_enqueue(struct skcipher_request *req,
 {
 	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_request *subreq = &rctx->req;
 	struct cryptd_queue *queue;
 
 	queue = cryptd_get_queue(crypto_skcipher_tfm(tfm));
-	rctx->complete = req->base.complete;
+	subreq->base.complete = req->base.complete;
+	subreq->base.data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 
 	return cryptd_enqueue_request(queue, &req->base);
 }
@@ -469,45 +476,63 @@ static int cryptd_hash_enqueue(struct ahash_request *req,
 		cryptd_get_queue(crypto_ahash_tfm(tfm));
 
 	rctx->complete = req->base.complete;
+	rctx->data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 
 	return cryptd_enqueue_request(queue, &req->base);
 }
 
-static void cryptd_hash_complete(struct ahash_request *req, int err)
+static struct shash_desc *cryptd_hash_prepare(struct ahash_request *req,
+					      int err)
+{
+	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+
+	req->base.complete = rctx->complete;
+	req->base.data = rctx->data;
+
+	if (unlikely(err == -EINPROGRESS))
+		return NULL;
+
+	return &rctx->desc;
+}
+
+static void cryptd_hash_complete(struct ahash_request *req, int err,
+				 crypto_completion_t complete)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
 	int refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	rctx->complete(&req->base, err);
+	ahash_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (err == -EINPROGRESS) {
+		req->base.complete = complete;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_ahash(tfm);
 }
 
 static void cryptd_hash_init(struct crypto_async_request *req_async, int err)
 {
-	struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
-	struct crypto_shash *child = ctx->child;
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-	struct shash_desc *desc = &rctx->desc;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_shash *child = ctx->child;
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
+	desc = cryptd_hash_prepare(req, err);
+	if (unlikely(!desc))
 		goto out;
 
 	desc->tfm = child;
 
 	err = crypto_shash_init(desc);
 
-	req->base.complete = rctx->complete;
-
 out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_init);
 }
 
 static int cryptd_hash_init_enqueue(struct ahash_request *req)
@@ -518,19 +543,13 @@ static int cryptd_hash_init_enqueue(struct ahash_request *req)
 static void cryptd_hash_update(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx;
+	struct shash_desc *desc;
 
-	rctx = ahash_request_ctx(req);
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = shash_ahash_update(req, desc);
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = shash_ahash_update(req, &rctx->desc);
-
-	req->base.complete = rctx->complete;
-
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_update);
 }
 
 static int cryptd_hash_update_enqueue(struct ahash_request *req)
@@ -541,17 +560,13 @@ static int cryptd_hash_update_enqueue(struct ahash_request *req)
 static void cryptd_hash_final(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = crypto_shash_final(&rctx->desc, req->result);
+	struct shash_desc *desc;
 
-	req->base.complete = rctx->complete;
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = crypto_shash_final(desc, req->result);
 
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_final);
 }
 
 static int cryptd_hash_final_enqueue(struct ahash_request *req)
@@ -562,17 +577,13 @@ static int cryptd_hash_final_enqueue(struct ahash_request *req)
 static void cryptd_hash_finup(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = shash_ahash_finup(req, &rctx->desc);
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = shash_ahash_finup(req, desc);
 
-	req->base.complete = rctx->complete;
-
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_finup);
 }
 
 static int cryptd_hash_finup_enqueue(struct ahash_request *req)
@@ -582,23 +593,22 @@ static int cryptd_hash_finup_enqueue(struct ahash_request *req)
 
 static void cryptd_hash_digest(struct crypto_async_request *req_async, int err)
 {
-	struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
-	struct crypto_shash *child = ctx->child;
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-	struct shash_desc *desc = &rctx->desc;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_shash *child = ctx->child;
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
+	desc = cryptd_hash_prepare(req, err);
+	if (unlikely(!desc))
 		goto out;
 
 	desc->tfm = child;
 
 	err = shash_ahash_digest(req, desc);
 
-	req->base.complete = rctx->complete;
-
 out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_digest);
 }
 
 static int cryptd_hash_digest_enqueue(struct ahash_request *req)
@@ -711,20 +721,20 @@ static int cryptd_aead_setauthsize(struct crypto_aead *parent,
 }
 
 static void cryptd_aead_crypt(struct aead_request *req,
-			struct crypto_aead *child,
-			int err,
-			int (*crypt)(struct aead_request *req))
+			      struct crypto_aead *child, int err,
+			      int (*crypt)(struct aead_request *req),
+			      crypto_completion_t compl)
 {
 	struct cryptd_aead_request_ctx *rctx;
 	struct aead_request *subreq;
 	struct cryptd_aead_ctx *ctx;
-	crypto_completion_t compl;
 	struct crypto_aead *tfm;
 	int refcnt;
 
 	rctx = aead_request_ctx(req);
-	compl = rctx->complete;
 	subreq = &rctx->req;
+	req->base.complete = subreq->base.complete;
+	req->base.data = subreq->base.data;
 
 	tfm = crypto_aead_reqtfm(req);
 
@@ -740,17 +750,20 @@ static void cryptd_aead_crypt(struct aead_request *req,
 
 	err = crypt(subreq);
 
-	req->base.complete = compl;
-
 out:
 	ctx = crypto_aead_ctx(tfm);
 	refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	compl(&req->base, err);
+	aead_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (err == -EINPROGRESS) {
+		subreq->base.complete = req->base.complete;
+		subreq->base.data = req->base.data;
+		req->base.complete = compl;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_aead(tfm);
 }
 
@@ -761,7 +774,8 @@ static void cryptd_aead_encrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt,
+			  cryptd_aead_encrypt);
 }
 
 static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
@@ -771,7 +785,8 @@ static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt,
+			  cryptd_aead_decrypt);
 }
 
 static int cryptd_aead_enqueue(struct aead_request *req,
@@ -780,9 +795,12 @@ static int cryptd_aead_enqueue(struct aead_request *req,
 	struct cryptd_aead_request_ctx *rctx = aead_request_ctx(req);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cryptd_queue *queue = cryptd_get_queue(crypto_aead_tfm(tfm));
+	struct aead_request *subreq = &rctx->req;
 
-	rctx->complete = req->base.complete;
+	subreq->base.complete = req->base.complete;
+	subreq->base.data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 	return cryptd_enqueue_request(queue, &req->base);
 }
 
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH 13/32] crypto: artpec6 - Use request_complete helpers
  2023-01-31  8:02 ` [PATCH 13/32] crypto: artpec6 " Herbert Xu
@ 2023-02-08  7:20   ` Jesper Nilsson
  0 siblings, 0 replies; 41+ messages in thread
From: Jesper Nilsson @ 2023-02-08  7:20 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

On Tue, Jan 31, 2023 at 09:02:10AM +0100, Herbert Xu wrote:
> Use the request_complete helpers instead of calling the completion
> function directly.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>

/^JN - Jesper Nilsson
-- 
               Jesper Nilsson -- jesper.nilsson@axis.com

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [v2 PATCH 6/32] crypto: hash - Use crypto_request_complete
  2023-01-31  8:01 ` [PATCH 6/32] crypto: hash " Herbert Xu
@ 2023-02-10 12:20   ` Herbert Xu
  0 siblings, 0 replies; 41+ messages in thread
From: Herbert Xu @ 2023-02-10 12:20 UTC (permalink / raw)
  To: Linux Crypto Mailing List, Tudor Ambarus, Jesper Nilsson,
	Lars Persson, linux-arm-kernel, Raveendra Padasalagi,
	George Cherian, Tom Lendacky, John Allen, Ayush Sawal, Kai Ye,
	Longfang Liu, Antoine Tenart, Corentin Labbe, Boris Brezillon,
	Arnaud Ebalard, Srujana Challa, Giovanni Cabiddu, qat-linux,
	Thara Gopinath, Krzysztof Kozlowski, Vladimir Zapolskiy

v2 fixes the problem of the broken hash state in subreq.

---8<---
Use the crypto_request_complete helper instead of calling the
completion function directly.

This patch also removes the voodoo programming previously used
for unaligned ahash operations and replaces it with a sub-request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/ahash.c                 |  179 ++++++++++++++++-------------------------
 include/crypto/internal/hash.h |    2 
 2 files changed, 75 insertions(+), 106 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 4b089f1b770f..19241b18a4d1 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -190,133 +190,98 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
 
-static inline unsigned int ahash_align_buffer_size(unsigned len,
-						   unsigned long mask)
-{
-	return len + (mask & ~(crypto_tfm_ctx_alignment() - 1));
-}
-
-static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
+static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
+			  bool has_state)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	unsigned long alignmask = crypto_ahash_alignmask(tfm);
 	unsigned int ds = crypto_ahash_digestsize(tfm);
-	struct ahash_request_priv *priv;
+	struct ahash_request *subreq;
+	unsigned int subreq_size;
+	unsigned int reqsize;
+	u8 *result;
+	gfp_t gfp;
+	u32 flags;
 
-	priv = kmalloc(sizeof(*priv) + ahash_align_buffer_size(ds, alignmask),
-		       (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC);
-	if (!priv)
+	subreq_size = sizeof(*subreq);
+	reqsize = crypto_ahash_reqsize(tfm);
+	reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
+	subreq_size += reqsize;
+	subreq_size += ds;
+	subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
+
+	flags = ahash_request_flags(req);
+	gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?  GFP_KERNEL : GFP_ATOMIC;
+	subreq = kmalloc(subreq_size, gfp);
+	if (!subreq)
 		return -ENOMEM;
 
-	/*
-	 * WARNING: Voodoo programming below!
-	 *
-	 * The code below is obscure and hard to understand, thus explanation
-	 * is necessary. See include/crypto/hash.h and include/linux/crypto.h
-	 * to understand the layout of structures used here!
-	 *
-	 * The code here will replace portions of the ORIGINAL request with
-	 * pointers to new code and buffers so the hashing operation can store
-	 * the result in aligned buffer. We will call the modified request
-	 * an ADJUSTED request.
-	 *
-	 * The newly mangled request will look as such:
-	 *
-	 * req {
-	 *   .result        = ADJUSTED[new aligned buffer]
-	 *   .base.complete = ADJUSTED[pointer to completion function]
-	 *   .base.data     = ADJUSTED[*req (pointer to self)]
-	 *   .priv          = ADJUSTED[new priv] {
-	 *           .result   = ORIGINAL(result)
-	 *           .complete = ORIGINAL(base.complete)
-	 *           .data     = ORIGINAL(base.data)
-	 *   }
-	 */
-
-	priv->result = req->result;
-	priv->complete = req->base.complete;
-	priv->data = req->base.data;
-	priv->flags = req->base.flags;
-
-	/*
-	 * WARNING: We do not backup req->priv here! The req->priv
-	 *          is for internal use of the Crypto API and the
-	 *          user must _NOT_ _EVER_ depend on it's content!
-	 */
-
-	req->result = PTR_ALIGN((u8 *)priv->ubuf, alignmask + 1);
-	req->base.complete = cplt;
-	req->base.data = req;
-	req->priv = priv;
+	ahash_request_set_tfm(subreq, tfm);
+	ahash_request_set_callback(subreq, flags, cplt, req);
+
+	result = (u8 *)(subreq + 1) + reqsize;
+	result = PTR_ALIGN(result, alignmask + 1);
+
+	ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
+
+	if (has_state) {
+		void *state;
+
+		state = kmalloc(crypto_ahash_statesize(tfm), gfp);
+		if (!state) {
+			kfree(subreq);
+			return -ENOMEM;
+		}
+
+		crypto_ahash_export(req, state);
+		crypto_ahash_import(subreq, state);
+		kfree_sensitive(state);
+	}
+
+	req->priv = subreq;
 
 	return 0;
 }
 
 static void ahash_restore_req(struct ahash_request *req, int err)
 {
-	struct ahash_request_priv *priv = req->priv;
+	struct ahash_request *subreq = req->priv;
 
 	if (!err)
-		memcpy(priv->result, req->result,
+		memcpy(req->result, subreq->result,
 		       crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
 
-	/* Restore the original crypto request. */
-	req->result = priv->result;
-
-	ahash_request_set_callback(req, priv->flags,
-				   priv->complete, priv->data);
 	req->priv = NULL;
 
-	/* Free the req->priv.priv from the ADJUSTED request. */
-	kfree_sensitive(priv);
-}
-
-static void ahash_notify_einprogress(struct ahash_request *req)
-{
-	struct ahash_request_priv *priv = req->priv;
-	struct crypto_async_request oreq;
-
-	oreq.data = priv->data;
-
-	priv->complete(&oreq, -EINPROGRESS);
+	kfree_sensitive(subreq);
 }
 
 static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
 {
 	struct ahash_request *areq = req->data;
 
-	if (err == -EINPROGRESS) {
-		ahash_notify_einprogress(areq);
-		return;
-	}
-
-	/*
-	 * Restore the original request, see ahash_op_unaligned() for what
-	 * goes where.
-	 *
-	 * The "struct ahash_request *req" here is in fact the "req.base"
-	 * from the ADJUSTED request from ahash_op_unaligned(), thus as it
-	 * is a pointer to self, it is also the ADJUSTED "req" .
-	 */
+	if (err == -EINPROGRESS)
+		goto out;
 
 	/* First copy req->result into req->priv.result */
 	ahash_restore_req(areq, err);
 
+out:
 	/* Complete the ORIGINAL request. */
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_op_unaligned(struct ahash_request *req,
-			      int (*op)(struct ahash_request *))
+			      int (*op)(struct ahash_request *),
+			      bool has_state)
 {
 	int err;
 
-	err = ahash_save_req(req, ahash_op_unaligned_done);
+	err = ahash_save_req(req, ahash_op_unaligned_done, has_state);
 	if (err)
 		return err;
 
-	err = op(req);
+	err = op(req->priv);
 	if (err == -EINPROGRESS || err == -EBUSY)
 		return err;
 
@@ -326,13 +291,14 @@ static int ahash_op_unaligned(struct ahash_request *req,
 }
 
 static int crypto_ahash_op(struct ahash_request *req,
-			   int (*op)(struct ahash_request *))
+			   int (*op)(struct ahash_request *),
+			   bool has_state)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	unsigned long alignmask = crypto_ahash_alignmask(tfm);
 
 	if ((unsigned long)req->result & alignmask)
-		return ahash_op_unaligned(req, op);
+		return ahash_op_unaligned(req, op, has_state);
 
 	return op(req);
 }
@@ -345,7 +311,7 @@ int crypto_ahash_final(struct ahash_request *req)
 	int ret;
 
 	crypto_stats_get(alg);
-	ret = crypto_ahash_op(req, crypto_ahash_reqtfm(req)->final);
+	ret = crypto_ahash_op(req, crypto_ahash_reqtfm(req)->final, true);
 	crypto_stats_ahash_final(nbytes, ret, alg);
 	return ret;
 }
@@ -359,7 +325,7 @@ int crypto_ahash_finup(struct ahash_request *req)
 	int ret;
 
 	crypto_stats_get(alg);
-	ret = crypto_ahash_op(req, crypto_ahash_reqtfm(req)->finup);
+	ret = crypto_ahash_op(req, crypto_ahash_reqtfm(req)->finup, true);
 	crypto_stats_ahash_final(nbytes, ret, alg);
 	return ret;
 }
@@ -376,7 +342,7 @@ int crypto_ahash_digest(struct ahash_request *req)
 	if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		ret = -ENOKEY;
 	else
-		ret = crypto_ahash_op(req, tfm->digest);
+		ret = crypto_ahash_op(req, tfm->digest, false);
 	crypto_stats_ahash_final(nbytes, ret, alg);
 	return ret;
 }
@@ -391,17 +357,19 @@ static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
 
 	ahash_restore_req(areq, err);
 
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 {
+	struct ahash_request *subreq = req->priv;
+
 	if (err)
 		goto out;
 
-	req->base.complete = ahash_def_finup_done2;
+	subreq->base.complete = ahash_def_finup_done2;
 
-	err = crypto_ahash_reqtfm(req)->final(req);
+	err = crypto_ahash_reqtfm(req)->final(subreq);
 	if (err == -EINPROGRESS || err == -EBUSY)
 		return err;
 
@@ -413,19 +381,20 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 static void ahash_def_finup_done1(struct crypto_async_request *req, int err)
 {
 	struct ahash_request *areq = req->data;
+	struct ahash_request *subreq;
 
-	if (err == -EINPROGRESS) {
-		ahash_notify_einprogress(areq);
-		return;
-	}
+	if (err == -EINPROGRESS)
+		goto out;
 
-	areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+	subreq = areq->priv;
+	subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
 
 	err = ahash_def_finup_finish1(areq, err);
-	if (areq->priv)
+	if (err == -EINPROGRESS || err == -EBUSY)
 		return;
 
-	areq->base.complete(&areq->base, err);
+out:
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_def_finup(struct ahash_request *req)
@@ -433,11 +402,11 @@ static int ahash_def_finup(struct ahash_request *req)
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	int err;
 
-	err = ahash_save_req(req, ahash_def_finup_done1);
+	err = ahash_save_req(req, ahash_def_finup_done1, true);
 	if (err)
 		return err;
 
-	err = tfm->update(req);
+	err = tfm->update(req->priv);
 	if (err == -EINPROGRESS || err == -EBUSY)
 		return err;
 
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 1a2a41b79253..0b259dbb97af 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -199,7 +199,7 @@ static inline void *ahash_request_ctx_dma(struct ahash_request *req)
 
 static inline void ahash_request_complete(struct ahash_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline u32 ahash_request_flags(struct ahash_request *req)
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



Thread overview: 41+ messages
2023-01-31  8:00 [PATCH 0/32] crypto: api - Prepare to change callback argument to void star Herbert Xu
2023-01-31  8:01 ` [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature Herbert Xu
2023-02-01 16:41   ` Giovanni Cabiddu
2023-01-31  8:01 ` [PATCH 2/32] crypto: cryptd - Use subreq for AEAD Herbert Xu
2023-02-08  5:53   ` [v2 PATCH " Herbert Xu
2023-01-31  8:01 ` [PATCH 3/32] crypto: acompress - Use crypto_request_complete Herbert Xu
2023-02-01 16:45   ` Giovanni Cabiddu
2023-01-31  8:01 ` [PATCH 4/32] crypto: aead " Herbert Xu
2023-01-31  8:01 ` [PATCH 5/32] crypto: akcipher " Herbert Xu
2023-01-31  8:01 ` [PATCH 6/32] crypto: hash " Herbert Xu
2023-02-10 12:20   ` [v2 PATCH " Herbert Xu
2023-01-31  8:01 ` [PATCH 7/32] crypto: kpp " Herbert Xu
2023-01-31  8:02 ` [PATCH 8/32] crypto: skcipher " Herbert Xu
2023-01-31  8:02 ` [PATCH 9/32] crypto: engine " Herbert Xu
2023-01-31  8:02 ` [PATCH 10/32] crypto: rsa-pkcs1pad - Use akcipher_request_complete Herbert Xu
2023-01-31  8:02 ` [PATCH 11/32] crypto: cryptd - Use request_complete helpers Herbert Xu
2023-02-08  5:56   ` [v2 PATCH " Herbert Xu
2023-01-31  8:02 ` [PATCH 12/32] crypto: atmel " Herbert Xu
2023-01-31  8:02 ` [PATCH 13/32] crypto: artpec6 " Herbert Xu
2023-02-08  7:20   ` Jesper Nilsson
2023-01-31  8:02 ` [PATCH 14/32] crypto: bcm " Herbert Xu
2023-01-31  8:02 ` [PATCH 15/32] crypto: cpt " Herbert Xu
2023-01-31  8:02 ` [PATCH 16/32] crypto: nitrox " Herbert Xu
2023-01-31  8:02 ` [PATCH 17/32] crypto: ccp " Herbert Xu
2023-01-31 15:21   ` Tom Lendacky
2023-01-31  8:02 ` [PATCH 18/32] crypto: chelsio " Herbert Xu
2023-01-31  8:02 ` [PATCH 19/32] crypto: hifn_795x " Herbert Xu
2023-01-31  8:02 ` [PATCH 20/32] crypto: hisilicon " Herbert Xu
2023-01-31  8:02 ` [PATCH 21/32] crypto: img-hash " Herbert Xu
2023-01-31  8:02 ` [PATCH 22/32] crypto: safexcel " Herbert Xu
2023-01-31  8:02 ` [PATCH 23/32] crypto: ixp4xx " Herbert Xu
2023-01-31  8:02 ` [PATCH 24/32] crypto: marvell/cesa " Herbert Xu
2023-01-31  8:02 ` [PATCH 25/32] crypto: octeontx " Herbert Xu
2023-01-31  8:02 ` [PATCH 26/32] crypto: octeontx2 " Herbert Xu
2023-01-31  8:02 ` [PATCH 27/32] crypto: mxs-dcp " Herbert Xu
2023-01-31  8:02 ` [PATCH 28/32] crypto: qat " Herbert Xu
2023-02-01 16:57   ` Giovanni Cabiddu
2023-01-31  8:02 ` [PATCH 29/32] crypto: qce " Herbert Xu
2023-01-31  8:02 ` [PATCH 30/32] crypto: s5p-sss " Herbert Xu
2023-01-31  8:02 ` [PATCH 31/32] crypto: sahara " Herbert Xu
2023-01-31  8:02 ` [PATCH 32/32] crypto: talitos " Herbert Xu
