* [PATCH RFC 0/4] crypto: engine - Permit to enqueue all async requests
From: Corentin Labbe @ 2017-11-29  8:41 UTC (permalink / raw)
  To: herbert, alexandre.torgue, arei.gonglei, davem, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-kernel, virtualization,
	Corentin Labbe

Hello

The current crypto_engine supports only ahash and ablkcipher requests.
My first patch, which tried to add skcipher, was nacked because it would add too many functions,
and adding other algorithms (aead, asymmetric_key) would make the situation worse.

This patchset removes all algorithm-specific code; the engine now processes only generic crypto_async_request.

The request handler function pointers are moved out of struct crypto_engine and
are now stored directly in a crypto_engine_reqctx.

Herbert's original proposal [1] cannot be implemented completely, since the crypto_engine
can only dequeue a crypto_async_request, and it is impossible to access any request_ctx
without knowing the underlying request type.

So I did something close to what was requested: adding a crypto_engine_reqctx to the TFM context.
Note that the current implementation expects crypto_engine_reqctx
to be the first member of the context.
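
As a minimal illustration (not part of the series; all my_drv_* names are
hypothetical, and the layout follows the omap conversion in patch 2/4), a
converted driver would embed the engine context first and register its
callbacks at init time:

struct my_drv_ctx {
	struct crypto_engine_reqctx enginectx;	/* must be first */
	int keylen;				/* driver-specific state */
};

static int my_drv_cra_init(struct crypto_tfm *tfm)
{
	struct my_drv_ctx *ctx = crypto_tfm_ctx(tfm);

	/* The engine reaches these ops via crypto_tfm_ctx(), which only
	 * works because enginectx is the first member of the context. */
	ctx->enginectx.op.prepare_request = my_drv_prepare_req;
	ctx->enginectx.op.unprepare_request = NULL;
	ctx->enginectx.op.do_one_request = my_drv_do_one_req;
	return 0;
}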

The first patch converts the crypto engine to the new scheme,
while the following patches convert the three existing users of crypto_engine.
Note that this split breaks bisection, so the final commits will probably be merged into one.

The last three patches were compile-tested only, but the first was tested successfully
with my new sun8i-ce driver.

Regards

[1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html

Corentin Labbe (4):
  crypto: engine - Permit to enqueue all async requests
  crypto: omap: convert to new crypto engine API
  crypto: virtio: convert to new crypto engine API
  crypto: stm32: convert to the new crypto engine API

 crypto/crypto_engine.c                       | 188 ++++++---------------------
 drivers/crypto/omap-aes.c                    |  21 ++-
 drivers/crypto/omap-aes.h                    |   3 +
 drivers/crypto/omap-des.c                    |  24 +++-
 drivers/crypto/stm32/stm32-hash.c            |  22 +++-
 drivers/crypto/virtio/virtio_crypto_algs.c   |  15 ++-
 drivers/crypto/virtio/virtio_crypto_common.h |   2 +-
 drivers/crypto/virtio/virtio_crypto_core.c   |   3 -
 include/crypto/engine.h                      |  46 +++----
 9 files changed, 122 insertions(+), 202 deletions(-)

-- 
2.13.6

* [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
From: Corentin Labbe @ 2017-11-29  8:41 UTC (permalink / raw)
  To: herbert, alexandre.torgue, arei.gonglei, davem, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: virtualization, Corentin Labbe, linux-crypto, linux-arm-kernel,
	linux-kernel

The crypto engine can currently only enqueue hash and ablkcipher requests.
This patch permits it to enqueue any type of crypto_async_request.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
---
 crypto/crypto_engine.c  | 188 +++++++++++-------------------------------------
 include/crypto/engine.h |  46 +++++-------
 2 files changed, 60 insertions(+), 174 deletions(-)
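
As a usage sketch (not part of this patch; the my_drv_* names are
hypothetical), a driver built on the generic API below would queue a
request to the engine and finalize it from its completion path roughly
as follows:

static int my_drv_encrypt(struct ablkcipher_request *req)
{
	struct my_drv_dev *dd = my_drv_find_dev();	/* hypothetical lookup */

	/* &req->base is the embedded generic crypto_async_request. */
	return crypto_transfer_request_to_engine(dd->engine, &req->base);
}

static void my_drv_done(struct my_drv_dev *dd,
			struct ablkcipher_request *req, int err)
{
	/* Runs unprepare_request (if set), completes the request and
	 * pumps the next one. */
	crypto_finalize_request(dd->engine, &req->base, err);
}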

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 61e7c4e02fd2..f7c4c4c1f41b 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -34,11 +34,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 				 bool in_kthread)
 {
 	struct crypto_async_request *async_req, *backlog;
-	struct ahash_request *hreq;
-	struct ablkcipher_request *breq;
 	unsigned long flags;
 	bool was_busy = false;
-	int ret, rtype;
+	int ret;
+	struct crypto_engine_reqctx *enginectx;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -94,7 +93,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
-	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
 	/* Until here we get the request need to be encrypted successfully */
 	if (!was_busy && engine->prepare_crypt_hardware) {
 		ret = engine->prepare_crypt_hardware(engine);
@@ -104,57 +102,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		}
 	}
 
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		if (engine->prepare_hash_request) {
-			ret = engine->prepare_hash_request(engine, hreq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->hash_one_request(engine, hreq);
-		if (ret) {
-			dev_err(engine->dev, "failed to hash one request from queue\n");
-			goto req_err;
-		}
-		return;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		if (engine->prepare_cipher_request) {
-			ret = engine->prepare_cipher_request(engine, breq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->cipher_one_request(engine, breq);
+	enginectx = crypto_tfm_ctx(async_req->tfm);
+
+	if (enginectx->op.prepare_request) {
+		ret = enginectx->op.prepare_request(engine, async_req);
 		if (ret) {
-			dev_err(engine->dev, "failed to cipher one request from queue\n");
+			dev_err(engine->dev, "failed to prepare request: %d\n",
+				ret);
 			goto req_err;
 		}
-		return;
-	default:
-		dev_err(engine->dev, "failed to prepare request of unknown type\n");
-		return;
+		engine->cur_req_prepared = true;
+	}
+	if (!enginectx->op.do_one_request) {
+		dev_err(engine->dev, "failed to do request\n");
+		ret = -EINVAL;
+		goto req_err;
+	}
+	ret = enginectx->op.do_one_request(engine, async_req);
+	if (ret) {
+		dev_err(engine->dev, "failed to hash one request from queue\n");
+		goto req_err;
 	}
+	return;
 
 req_err:
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		crypto_finalize_hash_request(engine, hreq, ret);
-		break;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		crypto_finalize_cipher_request(engine, breq, ret);
-		break;
-	}
+	crypto_finalize_request(engine, async_req, ret);
 	return;
 
 out:
@@ -170,59 +142,16 @@ static void crypto_pump_work(struct kthread_work *work)
 }
 
 /**
- * crypto_transfer_cipher_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_request - transfer the new request into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
-				   bool need_pump)
+int crypto_transfer_request(struct crypto_engine *engine,
+			    struct crypto_async_request *req, bool need_pump)
 {
 	unsigned long flags;
 	int ret;
 
-	spin_lock_irqsave(&engine->queue_lock, flags);
-
-	if (!engine->running) {
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-		return -ESHUTDOWN;
-	}
-
-	ret = ablkcipher_enqueue_request(&engine->queue, req);
-
-	if (!engine->busy && need_pump)
-		kthread_queue_work(engine->kworker, &engine->pump_requests);
-
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
-
-/**
- * crypto_transfer_cipher_request_to_engine - transfer one request to list
- * into the engine queue
- * @engine: the hardware engine
- * @req: the request need to be listed into the engine queue
- */
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req)
-{
-	return crypto_transfer_cipher_request(engine, req, true);
-}
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
-
-/**
- * crypto_transfer_hash_request - transfer the new request into the
- * enginequeue
- * @engine: the hardware engine
- * @req: the request need to be listed into the engine queue
- */
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump)
-{
-	unsigned long flags;
-	int ret;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -231,7 +160,7 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
 		return -ESHUTDOWN;
 	}
 
-	ret = ahash_enqueue_request(&engine->queue, req);
+	ret = crypto_enqueue_request(&engine->queue, req);
 
 	if (!engine->busy && need_pump)
 		kthread_queue_work(engine->kworker, &engine->pump_requests);
@@ -239,80 +168,45 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_request);
 
 /**
- * crypto_transfer_hash_request_to_engine - transfer one request to list
+ * crypto_transfer_request_to_engine - transfer one request to list
  * into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
-					   struct ahash_request *req)
-{
-	return crypto_transfer_hash_request(engine, req, true);
-}
-EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
-
-/**
- * crypto_finalize_cipher_request - finalize one request if the request is done
- * @engine: the hardware engine
- * @req: the request need to be finalized
- * @err: error number
- */
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err)
+int crypto_transfer_request_to_engine(struct crypto_engine *engine,
+				      struct crypto_async_request *req)
 {
-	unsigned long flags;
-	bool finalize_cur_req = false;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
-		finalize_cur_req = true;
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-
-	if (finalize_cur_req) {
-		if (engine->cur_req_prepared &&
-		    engine->unprepare_cipher_request) {
-			ret = engine->unprepare_cipher_request(engine, req);
-			if (ret)
-				dev_err(engine->dev, "failed to unprepare request\n");
-		}
-		spin_lock_irqsave(&engine->queue_lock, flags);
-		engine->cur_req = NULL;
-		engine->cur_req_prepared = false;
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-	}
-
-	req->base.complete(&req->base, err);
-
-	kthread_queue_work(engine->kworker, &engine->pump_requests);
+	return crypto_transfer_request(engine, req, true);
 }
-EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
 
 /**
- * crypto_finalize_hash_request - finalize one request if the request is done
+ * crypto_finalize_request - finalize one request if the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
  * @err: error number
  */
-void crypto_finalize_hash_request(struct crypto_engine *engine,
-				  struct ahash_request *req, int err)
+void crypto_finalize_request(struct crypto_engine *engine,
+			     struct crypto_async_request *req, int err)
 {
 	unsigned long flags;
 	bool finalize_cur_req = false;
 	int ret;
+	struct crypto_engine_reqctx *enginectx;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
+	if (engine->cur_req == req)
 		finalize_cur_req = true;
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
 	if (finalize_cur_req) {
+		enginectx = crypto_tfm_ctx(req->tfm);
 		if (engine->cur_req_prepared &&
-		    engine->unprepare_hash_request) {
-			ret = engine->unprepare_hash_request(engine, req);
+		    enginectx->op.unprepare_request) {
+			ret = enginectx->op.unprepare_request(engine, req);
 			if (ret)
 				dev_err(engine->dev, "failed to unprepare request\n");
 		}
@@ -322,11 +216,11 @@ void crypto_finalize_hash_request(struct crypto_engine *engine,
 		spin_unlock_irqrestore(&engine->queue_lock, flags);
 	}
 
-	req->base.complete(&req->base, err);
+	req->complete(req, err);
 
 	kthread_queue_work(engine->kworker, &engine->pump_requests);
 }
-EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
+EXPORT_SYMBOL_GPL(crypto_finalize_request);
 
 /**
  * crypto_engine_start - start the hardware engine
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index dd04c1699b51..2e45db45849b 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -17,7 +17,6 @@
 #include <linux/kernel.h>
 #include <linux/kthread.h>
 #include <crypto/algapi.h>
-#include <crypto/hash.h>
 
 #define ENGINE_NAME_LEN	30
 /*
@@ -65,19 +64,6 @@ struct crypto_engine {
 	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
 	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
 
-	int (*prepare_cipher_request)(struct crypto_engine *engine,
-				      struct ablkcipher_request *req);
-	int (*unprepare_cipher_request)(struct crypto_engine *engine,
-					struct ablkcipher_request *req);
-	int (*prepare_hash_request)(struct crypto_engine *engine,
-				    struct ahash_request *req);
-	int (*unprepare_hash_request)(struct crypto_engine *engine,
-				      struct ahash_request *req);
-	int (*cipher_one_request)(struct crypto_engine *engine,
-				  struct ablkcipher_request *req);
-	int (*hash_one_request)(struct crypto_engine *engine,
-				struct ahash_request *req);
-
 	struct kthread_worker           *kworker;
 	struct kthread_work             pump_requests;
 
@@ -85,19 +71,25 @@ struct crypto_engine {
 	struct crypto_async_request	*cur_req;
 };
 
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
-				   bool need_pump);
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req);
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump);
-int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
-					   struct ahash_request *req);
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err);
-void crypto_finalize_hash_request(struct crypto_engine *engine,
-				  struct ahash_request *req, int err);
+struct crypto_engine_op {
+	int (*prepare_request)(struct crypto_engine *engine,
+			       struct crypto_async_request *areq);
+	int (*unprepare_request)(struct crypto_engine *engine,
+				 struct crypto_async_request *areq);
+	int (*do_one_request)(struct crypto_engine *engine,
+			      struct crypto_async_request *areq);
+};
+
+struct crypto_engine_reqctx {
+	struct crypto_engine_op op;
+};
+
+int crypto_transfer_request(struct crypto_engine *engine,
+			    struct crypto_async_request *req, bool need_pump);
+int crypto_transfer_request_to_engine(struct crypto_engine *engine,
+				      struct crypto_async_request *req);
+void crypto_finalize_request(struct crypto_engine *engine,
+			     struct crypto_async_request *req, int err);
 int crypto_engine_start(struct crypto_engine *engine);
 int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
-- 
2.13.6

* [PATCH RFC 2/4] crypto: omap: convert to new crypto engine API
From: Corentin Labbe @ 2017-11-29  8:41 UTC (permalink / raw)
  To: herbert, alexandre.torgue, arei.gonglei, davem, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: virtualization, Corentin Labbe, linux-crypto, linux-arm-kernel,
	linux-kernel

This patch converts the driver to the new crypto engine API.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
---
 drivers/crypto/omap-aes.c | 21 +++++++++++++++------
 drivers/crypto/omap-aes.h |  3 +++
 drivers/crypto/omap-des.c | 24 ++++++++++++++++++------
 3 files changed, 36 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index fbec0a2e76dd..4a31bbfca9a4 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -388,7 +388,7 @@ static void omap_aes_finish_req(struct omap_aes_dev *dd, int err)
 
 	pr_debug("err: %d\n", err);
 
-	crypto_finalize_cipher_request(dd->engine, req, err);
+	crypto_finalize_request(dd->engine, &req->base, err);
 
 	pm_runtime_mark_last_busy(dd->dev);
 	pm_runtime_put_autosuspend(dd->dev);
@@ -408,14 +408,15 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
 				 struct ablkcipher_request *req)
 {
 	if (req)
-		return crypto_transfer_cipher_request_to_engine(dd->engine, req);
+		return crypto_transfer_request_to_engine(dd->engine, &req->base);
 
 	return 0;
 }
 
 static int omap_aes_prepare_req(struct crypto_engine *engine,
-				struct ablkcipher_request *req)
+				struct crypto_async_request *areq)
 {
+	struct ablkcipher_request *req = ablkcipher_request_cast(areq);
 	struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req);
@@ -468,8 +469,9 @@ static int omap_aes_prepare_req(struct crypto_engine *engine,
 }
 
 static int omap_aes_crypt_req(struct crypto_engine *engine,
-			      struct ablkcipher_request *req)
+			      struct crypto_async_request *areq)
 {
+	struct ablkcipher_request *req = ablkcipher_request_cast(areq);
 	struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req);
 	struct omap_aes_dev *dd = rctx->dd;
 
@@ -601,6 +603,11 @@ static int omap_aes_ctr_decrypt(struct ablkcipher_request *req)
 	return omap_aes_crypt(req, FLAGS_CTR);
 }
 
+static int omap_aes_prepare_req(struct crypto_engine *engine,
+				struct crypto_async_request *req);
+static int omap_aes_crypt_req(struct crypto_engine *engine,
+			      struct crypto_async_request *req);
+
 static int omap_aes_cra_init(struct crypto_tfm *tfm)
 {
 	const char *name = crypto_tfm_alg_name(tfm);
@@ -616,6 +623,10 @@ static int omap_aes_cra_init(struct crypto_tfm *tfm)
 
 	tfm->crt_ablkcipher.reqsize = sizeof(struct omap_aes_reqctx);
 
+	ctx->enginectx.op.prepare_request = omap_aes_prepare_req;
+	ctx->enginectx.op.unprepare_request = NULL;
+	ctx->enginectx.op.do_one_request = omap_aes_crypt_req;
+
 	return 0;
 }
 
@@ -1119,8 +1130,6 @@ static int omap_aes_probe(struct platform_device *pdev)
 		goto err_engine;
 	}
 
-	dd->engine->prepare_cipher_request = omap_aes_prepare_req;
-	dd->engine->cipher_one_request = omap_aes_crypt_req;
 	err = crypto_engine_start(dd->engine);
 	if (err)
 		goto err_engine;
diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h
index 8906342e2b9a..f6ce94907ade 100644
--- a/drivers/crypto/omap-aes.h
+++ b/drivers/crypto/omap-aes.h
@@ -13,6 +13,8 @@
 #ifndef __OMAP_AES_H__
 #define __OMAP_AES_H__
 
+#include <crypto/engine.h>
+
 #define DST_MAXBURST			4
 #define DMA_MIN				(DST_MAXBURST * sizeof(u32))
 
@@ -95,6 +97,7 @@ struct omap_aes_gcm_result {
 };
 
 struct omap_aes_ctx {
+	struct crypto_engine_reqctx enginectx;
 	int		keylen;
 	u32		key[AES_KEYSIZE_256 / sizeof(u32)];
 	u8		nonce[4];
diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
index ebc5c0f11f03..94212248159e 100644
--- a/drivers/crypto/omap-des.c
+++ b/drivers/crypto/omap-des.c
@@ -86,6 +86,7 @@
 #define FLAGS_OUT_DATA_ST_SHIFT	10
 
 struct omap_des_ctx {
+	struct crypto_engine_reqctx enginectx;
 	struct omap_des_dev *dd;
 
 	int		keylen;
@@ -498,7 +499,7 @@ static void omap_des_finish_req(struct omap_des_dev *dd, int err)
 
 	pr_debug("err: %d\n", err);
 
-	crypto_finalize_cipher_request(dd->engine, req, err);
+	crypto_finalize_request(dd->engine, &req->base, err);
 
 	pm_runtime_mark_last_busy(dd->dev);
 	pm_runtime_put_autosuspend(dd->dev);
@@ -520,14 +521,15 @@ static int omap_des_handle_queue(struct omap_des_dev *dd,
 				 struct ablkcipher_request *req)
 {
 	if (req)
-		return crypto_transfer_cipher_request_to_engine(dd->engine, req);
+		return crypto_transfer_request_to_engine(dd->engine, &req->base);
 
 	return 0;
 }
 
 static int omap_des_prepare_req(struct crypto_engine *engine,
-				struct ablkcipher_request *req)
+				struct crypto_async_request *areq)
 {
+	struct ablkcipher_request *req = ablkcipher_request_cast(areq);
 	struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct omap_des_dev *dd = omap_des_find_dev(ctx);
@@ -582,8 +584,9 @@ static int omap_des_prepare_req(struct crypto_engine *engine,
 }
 
 static int omap_des_crypt_req(struct crypto_engine *engine,
-			      struct ablkcipher_request *req)
+			      struct crypto_async_request *areq)
 {
+	struct ablkcipher_request *req = ablkcipher_request_cast(areq);
 	struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct omap_des_dev *dd = omap_des_find_dev(ctx);
@@ -695,12 +698,23 @@ static int omap_des_cbc_decrypt(struct ablkcipher_request *req)
 	return omap_des_crypt(req, FLAGS_CBC);
 }
 
+static int omap_des_prepare_req(struct crypto_engine *engine,
+				struct crypto_async_request *areq);
+static int omap_des_crypt_req(struct crypto_engine *engine,
+			      struct crypto_async_request *areq);
+
 static int omap_des_cra_init(struct crypto_tfm *tfm)
 {
+	struct omap_des_ctx *ctx = crypto_tfm_ctx(tfm);
+
 	pr_debug("enter\n");
 
 	tfm->crt_ablkcipher.reqsize = sizeof(struct omap_des_reqctx);
 
+	ctx->enginectx.op.prepare_request = omap_des_prepare_req;
+	ctx->enginectx.op.unprepare_request = NULL;
+	ctx->enginectx.op.do_one_request = omap_des_crypt_req;
+
 	return 0;
 }
 
@@ -1046,8 +1060,6 @@ static int omap_des_probe(struct platform_device *pdev)
 		goto err_engine;
 	}
 
-	dd->engine->prepare_cipher_request = omap_des_prepare_req;
-	dd->engine->cipher_one_request = omap_des_crypt_req;
 	err = crypto_engine_start(dd->engine);
 	if (err)
 		goto err_engine;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH RFC 3/4] crypto: virtio: convert to new crypto engine API
  2017-11-29  8:41 ` Corentin Labbe
@ 2017-11-29  8:41   ` Corentin Labbe
  0 siblings, 0 replies; 47+ messages in thread
From: Corentin Labbe @ 2017-11-29  8:41 UTC (permalink / raw)
  To: herbert, alexandre.torgue, arei.gonglei, davem, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-kernel, virtualization,
	Corentin Labbe

This patch converts the driver to the new crypto engine API.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
---
 drivers/crypto/virtio/virtio_crypto_algs.c   | 15 ++++++++++-----
 drivers/crypto/virtio/virtio_crypto_common.h |  2 +-
 drivers/crypto/virtio/virtio_crypto_core.c   |  3 ---
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
index abe8c15450df..a2ac7a08247f 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -29,6 +29,7 @@
 
 
 struct virtio_crypto_ablkcipher_ctx {
+	struct crypto_engine_reqctx enginectx;
 	struct virtio_crypto *vcrypto;
 	struct crypto_tfm *tfm;
 
@@ -491,7 +492,7 @@ static int virtio_crypto_ablkcipher_encrypt(struct ablkcipher_request *req)
 	vc_sym_req->ablkcipher_req = req;
 	vc_sym_req->encrypt = true;
 
-	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
+	return crypto_transfer_request_to_engine(data_vq->engine, &req->base);
 }
 
 static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
@@ -511,7 +512,7 @@ static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
 	vc_sym_req->ablkcipher_req = req;
 	vc_sym_req->encrypt = false;
 
-	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
+	return crypto_transfer_request_to_engine(data_vq->engine, &req->base);
 }
 
 static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
@@ -521,6 +522,9 @@ static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
 	tfm->crt_ablkcipher.reqsize = sizeof(struct virtio_crypto_sym_request);
 	ctx->tfm = tfm;
 
+	ctx->enginectx.op.do_one_request = virtio_crypto_ablkcipher_crypt_req;
+	ctx->enginectx.op.prepare_request = NULL;
+	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
@@ -539,8 +543,9 @@ static void virtio_crypto_ablkcipher_exit(struct crypto_tfm *tfm)
 
 int virtio_crypto_ablkcipher_crypt_req(
 	struct crypto_engine *engine,
-	struct ablkcipher_request *req)
+	struct crypto_async_request *areq)
 {
+	struct ablkcipher_request *req = ablkcipher_request_cast(areq);
 	struct virtio_crypto_sym_request *vc_sym_req =
 				ablkcipher_request_ctx(req);
 	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
@@ -561,8 +566,8 @@ static void virtio_crypto_ablkcipher_finalize_req(
 	struct ablkcipher_request *req,
 	int err)
 {
-	crypto_finalize_cipher_request(vc_sym_req->base.dataq->engine,
-					req, err);
+	crypto_finalize_request(vc_sym_req->base.dataq->engine,
+				&req->base, err);
 	kzfree(vc_sym_req->iv);
 	virtcrypto_clear_request(&vc_sym_req->base);
 }
diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
index e976539a05d9..0d51f6ce226d 100644
--- a/drivers/crypto/virtio/virtio_crypto_common.h
+++ b/drivers/crypto/virtio/virtio_crypto_common.h
@@ -108,7 +108,7 @@ int virtcrypto_dev_start(struct virtio_crypto *vcrypto);
 void virtcrypto_dev_stop(struct virtio_crypto *vcrypto);
 int virtio_crypto_ablkcipher_crypt_req(
 	struct crypto_engine *engine,
-	struct ablkcipher_request *req);
+	struct crypto_async_request *req);
 
 void
 virtcrypto_clear_request(struct virtio_crypto_request *vc_req);
diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
index ff1410a32c2b..83326986c113 100644
--- a/drivers/crypto/virtio/virtio_crypto_core.c
+++ b/drivers/crypto/virtio/virtio_crypto_core.c
@@ -111,9 +111,6 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
 			ret = -ENOMEM;
 			goto err_engine;
 		}
-
-		vi->data_vq[i].engine->cipher_one_request =
-			virtio_crypto_ablkcipher_crypt_req;
 	}
 
 	kfree(names);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH RFC 4/4] crypto: stm32: convert to the new crypto engine API
  2017-11-29  8:41 ` Corentin Labbe
@ 2017-11-29  8:41   ` Corentin Labbe
  0 siblings, 0 replies; 47+ messages in thread
From: Corentin Labbe @ 2017-11-29  8:41 UTC (permalink / raw)
  To: herbert, alexandre.torgue, arei.gonglei, davem, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-kernel, virtualization,
	Corentin Labbe

This patch converts the driver to the new crypto engine API.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
---
 drivers/crypto/stm32/stm32-hash.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
index 4ca4a264a833..e3f9f7b04ce2 100644
--- a/drivers/crypto/stm32/stm32-hash.c
+++ b/drivers/crypto/stm32/stm32-hash.c
@@ -122,6 +122,7 @@ enum stm32_hash_data_format {
 #define HASH_DMA_THRESHOLD		50
 
 struct stm32_hash_ctx {
+	struct crypto_engine_reqctx enginectx;
 	struct stm32_hash_dev	*hdev;
 	unsigned long		flags;
 
@@ -811,7 +812,7 @@ static void stm32_hash_finish_req(struct ahash_request *req, int err)
 		rctx->flags |= HASH_FLAGS_ERRORS;
 	}
 
-	crypto_finalize_hash_request(hdev->engine, req, err);
+	crypto_finalize_request(hdev->engine, &req->base, err);
 }
 
 static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
@@ -828,15 +829,21 @@ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
 	return 0;
 }
 
+static int stm32_hash_one_request(struct crypto_engine *engine,
+				  struct crypto_async_request *areq);
+static int stm32_hash_prepare_req(struct crypto_engine *engine,
+				  struct crypto_async_request *areq);
+
 static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
 				   struct ahash_request *req)
 {
-	return crypto_transfer_hash_request_to_engine(hdev->engine, req);
+	return crypto_transfer_request_to_engine(hdev->engine, &req->base);
 }
 
 static int stm32_hash_prepare_req(struct crypto_engine *engine,
-				  struct ahash_request *req)
+				  struct crypto_async_request *areq)
 {
+	struct ahash_request *req = ahash_request_cast(areq);
 	struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
 	struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
 	struct stm32_hash_request_ctx *rctx;
@@ -855,8 +862,9 @@ static int stm32_hash_prepare_req(struct crypto_engine *engine,
 }
 
 static int stm32_hash_one_request(struct crypto_engine *engine,
-				  struct ahash_request *req)
+				  struct crypto_async_request *areq)
 {
+	struct ahash_request *req = ahash_request_cast(areq);
 	struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
 	struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
 	struct stm32_hash_request_ctx *rctx;
@@ -1033,6 +1041,9 @@ static int stm32_hash_cra_init_algs(struct crypto_tfm *tfm,
 	if (algs_hmac_name)
 		ctx->flags |= HASH_FLAGS_HMAC;
 
+	ctx->enginectx.op.do_one_request = stm32_hash_one_request;
+	ctx->enginectx.op.prepare_request = stm32_hash_prepare_req;
+	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
@@ -1493,9 +1504,6 @@ static int stm32_hash_probe(struct platform_device *pdev)
 		goto err_engine;
 	}
 
-	hdev->engine->prepare_hash_request = stm32_hash_prepare_req;
-	hdev->engine->hash_one_request = stm32_hash_one_request;
-
 	ret = crypto_engine_start(hdev->engine);
 	if (ret)
 		goto err_engine_start;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH RFC 0/4] crypto: engine - Permit to enqueue all async requests
  2017-11-29  8:41 ` Corentin Labbe
@ 2017-12-06 10:59   ` Fabien DESSENNE
  0 siblings, 0 replies; 47+ messages in thread
From: Fabien DESSENNE @ 2017-12-06 10:59 UTC (permalink / raw)
  To: Corentin Labbe, herbert, Alexandre TORGUE, arei.gonglei, davem,
	jasowang, mcoquelin.stm32, mst, Lionel DEBIEVE,
	Benjamin GAIGNARD
  Cc: linux-arm-kernel, linux-crypto, linux-kernel, virtualization

Hi Corentin,


I am fine with this proposal: it is generic enough, and I have been able 
to test and run the crypto engine with aead_request without changing a 
single line of code.

This is what I need to be able to send the AEAD extension of the 
stm32-cryp driver (successfully tested with your engine upgrade proposal).
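
The AEAD hook-up being described would look roughly like this (a minimal, hypothetical sketch: the my_* names are invented, error handling is omitted, and nothing here is taken from a posted driver; it only illustrates that the generic engine never sees anything but the request base):

#include <crypto/internal/aead.h>
#include <crypto/engine.h>

/* hypothetical: obtained at probe time */
static struct crypto_engine *my_engine;

struct my_aead_ctx {
	struct crypto_engine_reqctx enginectx;	/* first member, as the engine expects */
	/* driver-private state follows */
};

static int my_aead_do_one(struct crypto_engine *engine,
			  struct crypto_async_request *areq)
{
	struct aead_request *req = container_of(areq, struct aead_request, base);

	/* kick the hardware; on completion call
	 * crypto_finalize_request(engine, &req->base, err);
	 */
	return 0;
}

static int my_aead_init(struct crypto_aead *tfm)
{
	struct my_aead_ctx *ctx = crypto_aead_ctx(tfm);

	ctx->enginectx.op.prepare_request = NULL;
	ctx->enginectx.op.unprepare_request = NULL;
	ctx->enginectx.op.do_one_request = my_aead_do_one;
	return 0;
}

static int my_aead_encrypt(struct aead_request *req)
{
	return crypto_transfer_request_to_engine(my_engine, &req->base);
}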


I have also tested the stm32-hash patch.

Note that stm32-cryp (a new driver recently applied by Herbert 
[https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git/commit/?id=9e054ec21ef8344345b28603fb272fe999f735db]) 
would also need to be converted to the new crypto engine API: this is a 
trivial patch.

Thank you for your proposal; I hope it is fine for 
Herbert too.

BR


Fabien


On 29/11/17 09:41, Corentin Labbe wrote:
> Hello
>
> The current crypto_engine support only ahash and ablkcipher.
> My first patch which try to add skcipher was Nacked, it will add too many functions
> and adding other algs(aead, asymetric_key) will make the situation worst.
>
> This patchset remove all algs specific stuff and now only process generic crypto_async_request.
>
> The requests handler function pointer are now moved out of struct engine and
> are now stored directly in a crypto_engine_reqctx.
>
> The original proposal of Herbert [1] cannot be done completly since the crypto_engine
> could only dequeue crypto_async_request and it is impossible to access any request_ctx
> without knowing the underlying request type.
>
> So I do something near that was requested: adding crypto_engine_reqctx in TFM context.
> Note that the current implementation expect that crypto_engine_reqctx
> is the first member of the context.
>
> The first patch convert the crypto engine with the new way,
> while the following patchs convert the 3 existing users of crypto_engine.
> Note that this split break bisection, so probably the final commit will be all merged.
>
> The 3 latest patch were compile tested only, but the first is tested successfully
> with my new sun8i-ce driver.
>
> Regards
>
> [1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html
>
> Corentin Labbe (4):
>    crypto: engine - Permit to enqueue all async requests
>    crypto: omap: convert to new crypto engine API
>    crypto: virtio: convert to new crypto engine API
>    crypto: stm32: convert to the new crypto engine API
>
>   crypto/crypto_engine.c                       | 188 ++++++---------------------
>   drivers/crypto/omap-aes.c                    |  21 ++-
>   drivers/crypto/omap-aes.h                    |   3 +
>   drivers/crypto/omap-des.c                    |  24 +++-
>   drivers/crypto/stm32/stm32-hash.c            |  22 +++-
>   drivers/crypto/virtio/virtio_crypto_algs.c   |  15 ++-
>   drivers/crypto/virtio/virtio_crypto_common.h |   2 +-
>   drivers/crypto/virtio/virtio_crypto_core.c   |   3 -
>   include/crypto/engine.h                      |  46 +++----
>   9 files changed, 122 insertions(+), 202 deletions(-)
>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
  2017-11-29  8:41   ` Corentin Labbe
@ 2017-12-06 11:02     ` Fabien DESSENNE
  0 siblings, 0 replies; 47+ messages in thread
From: Fabien DESSENNE @ 2017-12-06 11:02 UTC (permalink / raw)
  To: Corentin Labbe, herbert, Alexandre TORGUE, arei.gonglei, davem,
	jasowang, mcoquelin.stm32, mst
  Cc: linux-arm-kernel, linux-crypto, linux-kernel, virtualization



On 29/11/17 09:41, Corentin Labbe wrote:
> The crypto engine could actually only enqueue hash and ablkcipher request.
> This patch permit it to enqueue any type of crypto_async_request.
>
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
> ---
>   crypto/crypto_engine.c  | 188 +++++++++++-------------------------------------
>   include/crypto/engine.h |  46 +++++-------
>   2 files changed, 60 insertions(+), 174 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index 61e7c4e02fd2..f7c4c4c1f41b 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -34,11 +34,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   				 bool in_kthread)
>   {
>   	struct crypto_async_request *async_req, *backlog;
> -	struct ahash_request *hreq;
> -	struct ablkcipher_request *breq;
>   	unsigned long flags;
>   	bool was_busy = false;
> -	int ret, rtype;
> +	int ret;
> +	struct crypto_engine_reqctx *enginectx;
>   
>   	spin_lock_irqsave(&engine->queue_lock, flags);
>   
> @@ -94,7 +93,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   
> -	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
>   	/* Until here we get the request need to be encrypted successfully */
>   	if (!was_busy && engine->prepare_crypt_hardware) {
>   		ret = engine->prepare_crypt_hardware(engine);
> @@ -104,57 +102,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   		}
>   	}
>   
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		if (engine->prepare_hash_request) {
> -			ret = engine->prepare_hash_request(engine, hreq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->hash_one_request(engine, hreq);
> -		if (ret) {
> -			dev_err(engine->dev, "failed to hash one request from queue\n");
> -			goto req_err;
> -		}
> -		return;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		if (engine->prepare_cipher_request) {
> -			ret = engine->prepare_cipher_request(engine, breq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->cipher_one_request(engine, breq);
> +	enginectx = crypto_tfm_ctx(async_req->tfm);
> +
> +	if (enginectx->op.prepare_request) {
> +		ret = enginectx->op.prepare_request(engine, async_req);
>   		if (ret) {
> -			dev_err(engine->dev, "failed to cipher one request from queue\n");
> +			dev_err(engine->dev, "failed to prepare request: %d\n",
> +				ret);
>   			goto req_err;
>   		}
> -		return;
> -	default:
> -		dev_err(engine->dev, "failed to prepare request of unknown type\n");
> -		return;
> +		engine->cur_req_prepared = true;
> +	}
> +	if (!enginectx->op.do_one_request) {
> +		dev_err(engine->dev, "failed to do request\n");
> +		ret = -EINVAL;
> +		goto req_err;
> +	}
> +	ret = enginectx->op.do_one_request(engine, async_req);
> +	if (ret) {
> +		dev_err(engine->dev, "failed to hash one request from queue\n");
> +		goto req_err;
>   	}
> +	return;
>   
>   req_err:
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		crypto_finalize_hash_request(engine, hreq, ret);
> -		break;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		crypto_finalize_cipher_request(engine, breq, ret);
> -		break;
> -	}
> +	crypto_finalize_request(engine, async_req, ret);
>   	return;
>   
>   out:
> @@ -170,59 +142,16 @@ static void crypto_pump_work(struct kthread_work *work)
>   }
>   
>   /**
> - * crypto_transfer_cipher_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_request - transfer the new request into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> -				   bool need_pump)
> +int crypto_transfer_request(struct crypto_engine *engine,
> +			    struct crypto_async_request *req, bool need_pump)
>   {
>   	unsigned long flags;
>   	int ret;
>   
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -
> -	if (!engine->running) {
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -		return -ESHUTDOWN;
> -	}
> -
> -	ret = ablkcipher_enqueue_request(&engine->queue, req);
> -
> -	if (!engine->busy && need_pump)
> -		kthread_queue_work(engine->kworker, &engine->pump_requests);
> -
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	return ret;
> -}
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
> -
> -/**
> - * crypto_transfer_cipher_request_to_engine - transfer one request to list
> - * into the engine queue
> - * @engine: the hardware engine
> - * @req: the request need to be listed into the engine queue
> - */
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req)
> -{
> -	return crypto_transfer_cipher_request(engine, req, true);
> -}
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
> -
> -/**
> - * crypto_transfer_hash_request - transfer the new request into the
> - * enginequeue
> - * @engine: the hardware engine
> - * @req: the request need to be listed into the engine queue
> - */
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump)
> -{
> -	unsigned long flags;
> -	int ret;
>   
>   	spin_lock_irqsave(&engine->queue_lock, flags);
>   
> @@ -231,7 +160,7 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
>   		return -ESHUTDOWN;
>   	}
>   
> -	ret = ahash_enqueue_request(&engine->queue, req);
> +	ret = crypto_enqueue_request(&engine->queue, req);
>   
>   	if (!engine->busy && need_pump)
>   		kthread_queue_work(engine->kworker, &engine->pump_requests);
> @@ -239,80 +168,45 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_request);
>   
>   /**
> - * crypto_transfer_hash_request_to_engine - transfer one request to list
> + * crypto_transfer_request_to_engine - transfer one request to list
>    * into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> -					   struct ahash_request *req)
> -{
> -	return crypto_transfer_hash_request(engine, req, true);
> -}
> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
> -
> -/**
> - * crypto_finalize_cipher_request - finalize one request if the request is done
> - * @engine: the hardware engine
> - * @req: the request need to be finalized
> - * @err: error number
> - */
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err)
> +int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> +				      struct crypto_async_request *req)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_cipher_request) {
> -			ret = engine->unprepare_cipher_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> -
> -	req->base.complete(&req->base, err);
> -
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +	return crypto_transfer_request(engine, req, true);
>   }
> -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
>   
>   /**
> - * crypto_finalize_hash_request - finalize one request if the request is done
> + * crypto_finalize_request - finalize one request if the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
>    */
> -void crypto_finalize_hash_request(struct crypto_engine *engine,
> -				  struct ahash_request *req, int err)
> +void crypto_finalize_request(struct crypto_engine *engine,
> +			     struct crypto_async_request *req, int err)
>   {
>   	unsigned long flags;
>   	bool finalize_cur_req = false;
>   	int ret;
> +	struct crypto_engine_reqctx *enginectx;
>   
>   	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> +	if (engine->cur_req == req)
>   		finalize_cur_req = true;
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   
>   	if (finalize_cur_req) {
> +		enginectx = crypto_tfm_ctx(req->tfm);
>   		if (engine->cur_req_prepared &&
> -		    engine->unprepare_hash_request) {
> -			ret = engine->unprepare_hash_request(engine, req);
> +		    enginectx->op.unprepare_request) {
> +			ret = enginectx->op.unprepare_request(engine, req);
>   			if (ret)
>   				dev_err(engine->dev, "failed to unprepare request\n");
>   		}
> @@ -322,11 +216,11 @@ void crypto_finalize_hash_request(struct crypto_engine *engine,
>   		spin_unlock_irqrestore(&engine->queue_lock, flags);
>   	}
>   
> -	req->base.complete(&req->base, err);
> +	req->complete(req, err);
>   
>   	kthread_queue_work(engine->kworker, &engine->pump_requests);
>   }
> -EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
> +EXPORT_SYMBOL_GPL(crypto_finalize_request);
>   
>   /**
>    * crypto_engine_start - start the hardware engine
> diff --git a/include/crypto/engine.h b/include/crypto/engine.h
> index dd04c1699b51..2e45db45849b 100644
> --- a/include/crypto/engine.h
> +++ b/include/crypto/engine.h
> @@ -17,7 +17,6 @@
>   #include <linux/kernel.h>
>   #include <linux/kthread.h>
>   #include <crypto/algapi.h>
> -#include <crypto/hash.h>
>   
>   #define ENGINE_NAME_LEN	30
>   /*
> @@ -65,19 +64,6 @@ struct crypto_engine {

You also need to remove these six function pointers from the kernel-doc 
comment header of that structure; roughly the following lines (wording 
approximate) would go away:
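
 * @prepare_cipher_request: do some prepare if need before handle the current request
 * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
 * @prepare_hash_request: do some prepare if need before handle the current request
 * @unprepare_hash_request: undo any work done by prepare_hash_request()
 * @cipher_one_request: do encryption for current request
 * @hash_one_request: do hash for current request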

>   	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
>   	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>   
> -	int (*prepare_cipher_request)(struct crypto_engine *engine,
> -				      struct ablkcipher_request *req);
> -	int (*unprepare_cipher_request)(struct crypto_engine *engine,
> -					struct ablkcipher_request *req);
> -	int (*prepare_hash_request)(struct crypto_engine *engine,
> -				    struct ahash_request *req);
> -	int (*unprepare_hash_request)(struct crypto_engine *engine,
> -				      struct ahash_request *req);
> -	int (*cipher_one_request)(struct crypto_engine *engine,
> -				  struct ablkcipher_request *req);
> -	int (*hash_one_request)(struct crypto_engine *engine,
> -				struct ahash_request *req);
> -
>   	struct kthread_worker           *kworker;
>   	struct kthread_work             pump_requests;
>   
> @@ -85,19 +71,25 @@ struct crypto_engine {
>   	struct crypto_async_request	*cur_req;
>   };
>   
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> -				   bool need_pump);
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req);
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump);
> -int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> -					   struct ahash_request *req);
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err);
> -void crypto_finalize_hash_request(struct crypto_engine *engine,
> -				  struct ahash_request *req, int err);
> +struct crypto_engine_op {
> +	int (*prepare_request)(struct crypto_engine *engine,
> +			       struct crypto_async_request *areq);
> +	int (*unprepare_request)(struct crypto_engine *engine,
> +				 struct crypto_async_request *areq);
> +	int (*do_one_request)(struct crypto_engine *engine,
> +			      struct crypto_async_request *areq);
> +};
> +
> +struct crypto_engine_reqctx {
> +	struct crypto_engine_op op;
> +};
> +
> +int crypto_transfer_request(struct crypto_engine *engine,
> +			    struct crypto_async_request *req, bool need_pump);
> +int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> +				      struct crypto_async_request *req);
> +void crypto_finalize_request(struct crypto_engine *engine,
> +			     struct crypto_async_request *req, int err);
>   int crypto_engine_start(struct crypto_engine *engine);
>   int crypto_engine_stop(struct crypto_engine *engine);
>   struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
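
As a usage note on the renamed helpers above, here is a minimal
driver-side sketch (hypothetical foo_* names, ablkcipher flavour;
hardware programming and error handling elided):

#include <linux/crypto.h>
#include <crypto/engine.h>

struct foo_dev {
	struct crypto_engine *engine;
	struct ablkcipher_request *req;	/* request currently on the hardware */
};

struct foo_ctx {
	struct crypto_engine_reqctx enginectx;	/* must stay the first member */
	struct foo_dev *dev;
};

/* Submission path: queue the request on the engine; the engine later
 * invokes the op.do_one_request hook stored in the TFM context. */
static int foo_encrypt(struct ablkcipher_request *req)
{
	struct foo_ctx *ctx = crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req));

	return crypto_transfer_request_to_engine(ctx->dev->engine, &req->base);
}

/* Completion path, e.g. from the DMA or IRQ handler: the generic
 * finalize helper also runs op.unprepare_request when one was set. */
static void foo_irq_done(struct foo_dev *dev, int err)
{
	crypto_finalize_request(dev->engine, &dev->req->base, err);
}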

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH RFC 4/4] crypto: stm32: convert to the new crypto engine API
  2017-11-29  8:41   ` Corentin Labbe
  (?)
@ 2017-12-06 11:03     ` Fabien DESSENNE
  -1 siblings, 0 replies; 47+ messages in thread
From: Fabien DESSENNE @ 2017-12-06 11:03 UTC (permalink / raw)
  To: Corentin Labbe, herbert, Alexandre TORGUE, arei.gonglei, davem,
	jasowang, mcoquelin.stm32, mst, Lionel DEBIEVE
  Cc: linux-arm-kernel, linux-crypto, linux-kernel, virtualization

Hi,


On 29/11/17 09:41, Corentin Labbe wrote:
> This patch converts the driver to the new crypto engine API.
>
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>

Tested-by: Fabien Dessenne <fabien.dessenne@st.com>

> ---
>   drivers/crypto/stm32/stm32-hash.c | 22 +++++++++++++++-------
>   1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
> index 4ca4a264a833..e3f9f7b04ce2 100644
> --- a/drivers/crypto/stm32/stm32-hash.c
> +++ b/drivers/crypto/stm32/stm32-hash.c
> @@ -122,6 +122,7 @@ enum stm32_hash_data_format {
>   #define HASH_DMA_THRESHOLD		50
>   
>   struct stm32_hash_ctx {
> +	struct crypto_engine_reqctx enginectx;
>   	struct stm32_hash_dev	*hdev;
>   	unsigned long		flags;
>   
> @@ -811,7 +812,7 @@ static void stm32_hash_finish_req(struct ahash_request *req, int err)
>   		rctx->flags |= HASH_FLAGS_ERRORS;
>   	}
>   
> -	crypto_finalize_hash_request(hdev->engine, req, err);
> +	crypto_finalize_request(hdev->engine, &req->base, err);
>   }
>   
>   static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
> @@ -828,15 +829,21 @@ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
>   	return 0;
>   }
>   
> +static int stm32_hash_one_request(struct crypto_engine *engine,
> +				  struct crypto_async_request *areq);
> +static int stm32_hash_prepare_req(struct crypto_engine *engine,
> +				  struct crypto_async_request *areq);
> +
>   static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
>   				   struct ahash_request *req)
>   {
> -	return crypto_transfer_hash_request_to_engine(hdev->engine, req);
> +	return crypto_transfer_request_to_engine(hdev->engine, &req->base);
>   }
>   
>   static int stm32_hash_prepare_req(struct crypto_engine *engine,
> -				  struct ahash_request *req)
> +				  struct crypto_async_request *areq)
>   {
> +	struct ahash_request *req = ahash_request_cast(areq);
>   	struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
>   	struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
>   	struct stm32_hash_request_ctx *rctx;
> @@ -855,8 +862,9 @@ static int stm32_hash_prepare_req(struct crypto_engine *engine,
>   }
>   
>   static int stm32_hash_one_request(struct crypto_engine *engine,
> -				  struct ahash_request *req)
> +				  struct crypto_async_request *areq)
>   {
> +	struct ahash_request *req = ahash_request_cast(areq);
>   	struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
>   	struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
>   	struct stm32_hash_request_ctx *rctx;
> @@ -1033,6 +1041,9 @@ static int stm32_hash_cra_init_algs(struct crypto_tfm *tfm,
>   	if (algs_hmac_name)
>   		ctx->flags |= HASH_FLAGS_HMAC;
>   
> +	ctx->enginectx.op.do_one_request = stm32_hash_one_request;
> +	ctx->enginectx.op.prepare_request = stm32_hash_prepare_req;
> +	ctx->enginectx.op.unprepare_request = NULL;
>   	return 0;
>   }
>   
> @@ -1493,9 +1504,6 @@ static int stm32_hash_probe(struct platform_device *pdev)
>   		goto err_engine;
>   	}
>   
> -	hdev->engine->prepare_hash_request = stm32_hash_prepare_req;
> -	hdev->engine->hash_one_request = stm32_hash_one_request;
> -
>   	ret = crypto_engine_start(hdev->engine);
>   	if (ret)
>   		goto err_engine_start;

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
  2017-12-06 11:02     ` Fabien DESSENNE
@ 2017-12-07  9:24       ` Corentin Labbe
  -1 siblings, 0 replies; 47+ messages in thread
From: Corentin Labbe @ 2017-12-07  9:24 UTC (permalink / raw)
  To: Fabien DESSENNE
  Cc: herbert, mst, linux-kernel, virtualization, linux-crypto,
	mcoquelin.stm32, davem, linux-arm-kernel, Alexandre TORGUE

On Wed, Dec 06, 2017 at 11:02:23AM +0000, Fabien DESSENNE wrote:
> 
> 
> On 29/11/17 09:41, Corentin Labbe wrote:
> > The crypto engine can currently only enqueue hash and ablkcipher requests.
> > This patch permits it to enqueue any type of crypto_async_request.
> >
> > Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
> > ---
> >   crypto/crypto_engine.c  | 188 +++++++++++-------------------------------------
> >   include/crypto/engine.h |  46 +++++-------
> >   2 files changed, 60 insertions(+), 174 deletions(-)
> >
> > diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> > index 61e7c4e02fd2..f7c4c4c1f41b 100644
> > --- a/crypto/crypto_engine.c
> > +++ b/crypto/crypto_engine.c
> > @@ -34,11 +34,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
> >   				 bool in_kthread)
> >   {
> >   	struct crypto_async_request *async_req, *backlog;
> > -	struct ahash_request *hreq;
> > -	struct ablkcipher_request *breq;
> >   	unsigned long flags;
> >   	bool was_busy = false;
> > -	int ret, rtype;
> > +	int ret;
> > +	struct crypto_engine_reqctx *enginectx;
> >   
> >   	spin_lock_irqsave(&engine->queue_lock, flags);
> >   
> > @@ -94,7 +93,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
> >   
> >   	spin_unlock_irqrestore(&engine->queue_lock, flags);
> >   
> > -	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
> >   	/* Until here we get the request need to be encrypted successfully */
> >   	if (!was_busy && engine->prepare_crypt_hardware) {
> >   		ret = engine->prepare_crypt_hardware(engine);
> > @@ -104,57 +102,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
> >   		}
> >   	}
> >   
> > -	switch (rtype) {
> > -	case CRYPTO_ALG_TYPE_AHASH:
> > -		hreq = ahash_request_cast(engine->cur_req);
> > -		if (engine->prepare_hash_request) {
> > -			ret = engine->prepare_hash_request(engine, hreq);
> > -			if (ret) {
> > -				dev_err(engine->dev, "failed to prepare request: %d\n",
> > -					ret);
> > -				goto req_err;
> > -			}
> > -			engine->cur_req_prepared = true;
> > -		}
> > -		ret = engine->hash_one_request(engine, hreq);
> > -		if (ret) {
> > -			dev_err(engine->dev, "failed to hash one request from queue\n");
> > -			goto req_err;
> > -		}
> > -		return;
> > -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> > -		breq = ablkcipher_request_cast(engine->cur_req);
> > -		if (engine->prepare_cipher_request) {
> > -			ret = engine->prepare_cipher_request(engine, breq);
> > -			if (ret) {
> > -				dev_err(engine->dev, "failed to prepare request: %d\n",
> > -					ret);
> > -				goto req_err;
> > -			}
> > -			engine->cur_req_prepared = true;
> > -		}
> > -		ret = engine->cipher_one_request(engine, breq);
> > +	enginectx = crypto_tfm_ctx(async_req->tfm);
> > +
> > +	if (enginectx->op.prepare_request) {
> > +		ret = enginectx->op.prepare_request(engine, async_req);
> >   		if (ret) {
> > -			dev_err(engine->dev, "failed to cipher one request from queue\n");
> > +			dev_err(engine->dev, "failed to prepare request: %d\n",
> > +				ret);
> >   			goto req_err;
> >   		}
> > -		return;
> > -	default:
> > -		dev_err(engine->dev, "failed to prepare request of unknown type\n");
> > -		return;
> > +		engine->cur_req_prepared = true;
> > +	}
> > +	if (!enginectx->op.do_one_request) {
> > +		dev_err(engine->dev, "failed to do request\n");
> > +		ret = -EINVAL;
> > +		goto req_err;
> > +	}
> > +	ret = enginectx->op.do_one_request(engine, async_req);
> > +	if (ret) {
> > +		dev_err(engine->dev, "failed to hash one request from queue\n");
> > +		goto req_err;
> >   	}
> > +	return;
> >   
> >   req_err:
> > -	switch (rtype) {
> > -	case CRYPTO_ALG_TYPE_AHASH:
> > -		hreq = ahash_request_cast(engine->cur_req);
> > -		crypto_finalize_hash_request(engine, hreq, ret);
> > -		break;
> > -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> > -		breq = ablkcipher_request_cast(engine->cur_req);
> > -		crypto_finalize_cipher_request(engine, breq, ret);
> > -		break;
> > -	}
> > +	crypto_finalize_request(engine, async_req, ret);
> >   	return;
> >   
> >   out:
> > @@ -170,59 +142,16 @@ static void crypto_pump_work(struct kthread_work *work)
> >   }
> >   
> >   /**
> > - * crypto_transfer_cipher_request - transfer the new request into the
> > - * enginequeue
> > + * crypto_transfer_request - transfer the new request into the engine queue
> >    * @engine: the hardware engine
> >    * @req: the request need to be listed into the engine queue
> >    */
> > -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> > -				   struct ablkcipher_request *req,
> > -				   bool need_pump)
> > +int crypto_transfer_request(struct crypto_engine *engine,
> > +			    struct crypto_async_request *req, bool need_pump)
> >   {
> >   	unsigned long flags;
> >   	int ret;
> >   
> > -	spin_lock_irqsave(&engine->queue_lock, flags);
> > -
> > -	if (!engine->running) {
> > -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> > -		return -ESHUTDOWN;
> > -	}
> > -
> > -	ret = ablkcipher_enqueue_request(&engine->queue, req);
> > -
> > -	if (!engine->busy && need_pump)
> > -		kthread_queue_work(engine->kworker, &engine->pump_requests);
> > -
> > -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> > -	return ret;
> > -}
> > -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
> > -
> > -/**
> > - * crypto_transfer_cipher_request_to_engine - transfer one request to list
> > - * into the engine queue
> > - * @engine: the hardware engine
> > - * @req: the request need to be listed into the engine queue
> > - */
> > -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> > -					     struct ablkcipher_request *req)
> > -{
> > -	return crypto_transfer_cipher_request(engine, req, true);
> > -}
> > -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
> > -
> > -/**
> > - * crypto_transfer_hash_request - transfer the new request into the
> > - * enginequeue
> > - * @engine: the hardware engine
> > - * @req: the request need to be listed into the engine queue
> > - */
> > -int crypto_transfer_hash_request(struct crypto_engine *engine,
> > -				 struct ahash_request *req, bool need_pump)
> > -{
> > -	unsigned long flags;
> > -	int ret;
> >   
> >   	spin_lock_irqsave(&engine->queue_lock, flags);
> >   
> > @@ -231,7 +160,7 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
> >   		return -ESHUTDOWN;
> >   	}
> >   
> > -	ret = ahash_enqueue_request(&engine->queue, req);
> > +	ret = crypto_enqueue_request(&engine->queue, req);
> >   
> >   	if (!engine->busy && need_pump)
> >   		kthread_queue_work(engine->kworker, &engine->pump_requests);
> > @@ -239,80 +168,45 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
> >   	spin_unlock_irqrestore(&engine->queue_lock, flags);
> >   	return ret;
> >   }
> > -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
> > +EXPORT_SYMBOL_GPL(crypto_transfer_request);
> >   
> >   /**
> > - * crypto_transfer_hash_request_to_engine - transfer one request to list
> > + * crypto_transfer_request_to_engine - transfer one request to list
> >    * into the engine queue
> >    * @engine: the hardware engine
> >    * @req: the request need to be listed into the engine queue
> >    */
> > -int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> > -					   struct ahash_request *req)
> > -{
> > -	return crypto_transfer_hash_request(engine, req, true);
> > -}
> > -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
> > -
> > -/**
> > - * crypto_finalize_cipher_request - finalize one request if the request is done
> > - * @engine: the hardware engine
> > - * @req: the request need to be finalized
> > - * @err: error number
> > - */
> > -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> > -				    struct ablkcipher_request *req, int err)
> > +int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> > +				      struct crypto_async_request *req)
> >   {
> > -	unsigned long flags;
> > -	bool finalize_cur_req = false;
> > -	int ret;
> > -
> > -	spin_lock_irqsave(&engine->queue_lock, flags);
> > -	if (engine->cur_req == &req->base)
> > -		finalize_cur_req = true;
> > -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> > -
> > -	if (finalize_cur_req) {
> > -		if (engine->cur_req_prepared &&
> > -		    engine->unprepare_cipher_request) {
> > -			ret = engine->unprepare_cipher_request(engine, req);
> > -			if (ret)
> > -				dev_err(engine->dev, "failed to unprepare request\n");
> > -		}
> > -		spin_lock_irqsave(&engine->queue_lock, flags);
> > -		engine->cur_req = NULL;
> > -		engine->cur_req_prepared = false;
> > -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> > -	}
> > -
> > -	req->base.complete(&req->base, err);
> > -
> > -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> > +	return crypto_transfer_request(engine, req, true);
> >   }
> > -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
> > +EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
> >   
> >   /**
> > - * crypto_finalize_hash_request - finalize one request if the request is done
> > + * crypto_finalize_request - finalize one request if the request is done
> >    * @engine: the hardware engine
> >    * @req: the request need to be finalized
> >    * @err: error number
> >    */
> > -void crypto_finalize_hash_request(struct crypto_engine *engine,
> > -				  struct ahash_request *req, int err)
> > +void crypto_finalize_request(struct crypto_engine *engine,
> > +			     struct crypto_async_request *req, int err)
> >   {
> >   	unsigned long flags;
> >   	bool finalize_cur_req = false;
> >   	int ret;
> > +	struct crypto_engine_reqctx *enginectx;
> >   
> >   	spin_lock_irqsave(&engine->queue_lock, flags);
> > -	if (engine->cur_req == &req->base)
> > +	if (engine->cur_req == req)
> >   		finalize_cur_req = true;
> >   	spin_unlock_irqrestore(&engine->queue_lock, flags);
> >   
> >   	if (finalize_cur_req) {
> > +		enginectx = crypto_tfm_ctx(req->tfm);
> >   		if (engine->cur_req_prepared &&
> > -		    engine->unprepare_hash_request) {
> > -			ret = engine->unprepare_hash_request(engine, req);
> > +		    enginectx->op.unprepare_request) {
> > +			ret = enginectx->op.unprepare_request(engine, req);
> >   			if (ret)
> >   				dev_err(engine->dev, "failed to unprepare request\n");
> >   		}
> > @@ -322,11 +216,11 @@ void crypto_finalize_hash_request(struct crypto_engine *engine,
> >   		spin_unlock_irqrestore(&engine->queue_lock, flags);
> >   	}
> >   
> > -	req->base.complete(&req->base, err);
> > +	req->complete(req, err);
> >   
> >   	kthread_queue_work(engine->kworker, &engine->pump_requests);
> >   }
> > -EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
> > +EXPORT_SYMBOL_GPL(crypto_finalize_request);
> >   
> >   /**
> >    * crypto_engine_start - start the hardware engine
> > diff --git a/include/crypto/engine.h b/include/crypto/engine.h
> > index dd04c1699b51..2e45db45849b 100644
> > --- a/include/crypto/engine.h
> > +++ b/include/crypto/engine.h
> > @@ -17,7 +17,6 @@
> >   #include <linux/kernel.h>
> >   #include <linux/kthread.h>
> >   #include <crypto/algapi.h>
> > -#include <crypto/hash.h>
> >   
> >   #define ENGINE_NAME_LEN	30
> >   /*
> > @@ -65,19 +64,6 @@ struct crypto_engine {
> 
> You also need to remove these 6 functions from the comment header of 
> that structure
> 

Thanks, fixed for next version.

Regards

^ permalink raw reply	[flat|nested] 47+ messages in thread
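
For reference, the six entries Fabien points at are the kernel-doc lines for
the callbacks this patch deletes from struct crypto_engine:
prepare_cipher_request, unprepare_cipher_request, cipher_one_request,
prepare_hash_request, unprepare_hash_request and hash_one_request. After the
cleanup only the generic hooks keep a kernel-doc line, roughly (a sketch of
the shape, not the exact upstream text):

	/*
	 * struct crypto_engine - crypto hardware engine
	 * ...
	 * @prepare_crypt_hardware: a request is about to be processed, so
	 *	the driver may prepare its hardware (optional)
	 * @unprepare_crypt_hardware: the queue has drained, so the driver
	 *	may release its hardware (optional)
	 * ...
	 */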

* Re: [PATCH RFC 0/4] crypto: engine - Permit to enqueue all async requests
  2017-12-06 10:59   ` Fabien DESSENNE
@ 2017-12-07  9:37     ` Corentin Labbe
  -1 siblings, 0 replies; 47+ messages in thread
From: Corentin Labbe @ 2017-12-07  9:37 UTC (permalink / raw)
  To: Fabien DESSENNE
  Cc: herbert, Alexandre TORGUE, arei.gonglei, davem, jasowang,
	mcoquelin.stm32, mst, Lionel DEBIEVE, Benjamin GAIGNARD,
	linux-arm-kernel, linux-crypto, linux-kernel, virtualization

On Wed, Dec 06, 2017 at 10:59:47AM +0000, Fabien DESSENNE wrote:
> Hi Corentin,
> 
> 
> I am fine with this proposal: it is generic enough and I have been able
> to test and run the crypto engine with aead_request without changing a
> single line of code.
> 
> This is what I need to be able to send the AEAD extension of the 
> stm32-cryp driver (successfully tested with your engine upgrade proposal).
> 
> 
> I have also tested the stm32-hash patch.
> 
> Note that stm32-cryp (the new driver recently applied by Herbert
> [https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git/commit/?id=9e054ec21ef8344345b28603fb272fe999f735db])
> would also need to be converted to the new crypto engine API: this is a
> trivial patch.

Yes, the patch converting it is already done.

> 
> Thank you for your proposal; I hope it is fine for Herbert too.
> 

Thanks for your test; I hope the other maintainers will test it as well.

Regards
Corentin Labbe

^ permalink raw reply	[flat|nested] 47+ messages in thread
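
Fabien's AEAD result follows from the API being type-agnostic: an AEAD driver
fills in the same three function pointers and passes &req->base around. A
hypothetical sketch (the mydrv_* names are invented; crypto_aead_ctx() comes
from <crypto/internal/aead.h>):

	struct mydrv_aead_ctx {
		struct crypto_engine_reqctx enginectx;	/* must stay first */
		/* ... */
	};

	static int mydrv_aead_do_one(struct crypto_engine *engine,
				     struct crypto_async_request *areq)
	{
		struct aead_request *req =
			container_of(areq, struct aead_request, base);

		/* program the AEAD hardware from req; the completion path
		 * calls crypto_finalize_request(engine, areq, err)
		 */
		return 0;
	}

	static int mydrv_aead_init(struct crypto_aead *tfm)
	{
		struct mydrv_aead_ctx *ctx = crypto_aead_ctx(tfm);

		ctx->enginectx.op.do_one_request = mydrv_aead_do_one;
		ctx->enginectx.op.prepare_request = NULL;
		ctx->enginectx.op.unprepare_request = NULL;
		return 0;
	}

	/* submission: crypto_transfer_request_to_engine(engine, &req->base) */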

* Re: [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
  2017-11-29  8:41   ` Corentin Labbe
@ 2017-12-22  6:57     ` Herbert Xu
  -1 siblings, 0 replies; 47+ messages in thread
From: Herbert Xu @ 2017-12-22  6:57 UTC (permalink / raw)
  To: Corentin Labbe
  Cc: alexandre.torgue, arei.gonglei, davem, jasowang, mcoquelin.stm32,
	mst, fabien.dessenne, linux-arm-kernel, linux-crypto,
	linux-kernel, virtualization

On Wed, Nov 29, 2017 at 09:41:18AM +0100, Corentin Labbe wrote:
> The crypto engine can currently only enqueue hash and ablkcipher requests.
> This patch permits it to enqueue any type of crypto_async_request.
> 
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>

This is going the wrong way.  We do not want to expose any of the
base types such as crypto_alg, crypto_async_request to end-users
and that includes drivers.  Only core API code should touch these
base types.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 47+ messages in thread
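
One way to satisfy this constraint while keeping a single generic queue, and
roughly the direction the engine API later took, is to keep the
crypto_async_request handling private to the core and export only typed
wrappers. A sketch (not code from this thread):

	/* core-internal, no longer exported */
	static int crypto_transfer_request(struct crypto_engine *engine,
					   struct crypto_async_request *req,
					   bool need_pump);

	int crypto_transfer_ahash_request_to_engine(struct crypto_engine *engine,
						    struct ahash_request *req)
	{
		return crypto_transfer_request(engine, &req->base, true);
	}
	EXPORT_SYMBOL_GPL(crypto_transfer_ahash_request_to_engine);

	int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
						   struct aead_request *req)
	{
		return crypto_transfer_request(engine, &req->base, true);
	}
	EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);

Drivers then never see the base type, while the engine still queues
everything through one code path.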

* Re: [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
  2017-12-22  6:57     ` Herbert Xu
@ 2017-12-22  8:41       ` Corentin Labbe
  -1 siblings, 0 replies; 47+ messages in thread
From: Corentin Labbe @ 2017-12-22  8:41 UTC (permalink / raw)
  To: Herbert Xu
  Cc: alexandre.torgue, arei.gonglei, davem, jasowang, mcoquelin.stm32,
	mst, fabien.dessenne, linux-arm-kernel, linux-crypto,
	linux-kernel, virtualization

On Fri, Dec 22, 2017 at 05:57:24PM +1100, Herbert Xu wrote:
> On Wed, Nov 29, 2017 at 09:41:18AM +0100, Corentin Labbe wrote:
> > The crypto engine can currently only enqueue hash and ablkcipher requests.
> > This patch permits it to enqueue any type of crypto_async_request.
> > 
> > Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
> 
> This is going the wrong way.  We do not want to expose any of the
> base types such as crypto_alg, crypto_async_request to end-users
> and that includes drivers.  Only core API code should touch these
> base types.
> 

Hello

It was you who suggested using crypto_async_request:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html
"The only wart with this scheme is that the drivers end up seeing
struct crypto_async_request and will need to convert that to the
respective request types but I couldn't really find a better way."

So I am waiting for suggestions.

Regards
Corentin Labbe

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
  2017-12-22  8:41       ` Corentin Labbe
  (?)
@ 2017-12-22  9:06         ` Herbert Xu
  -1 siblings, 0 replies; 47+ messages in thread
From: Herbert Xu @ 2017-12-22  9:06 UTC (permalink / raw)
  To: Corentin Labbe
  Cc: alexandre.torgue, mst, linux-kernel, fabien.dessenne,
	virtualization, linux-crypto, mcoquelin.stm32, davem,
	linux-arm-kernel

On Fri, Dec 22, 2017 at 09:41:48AM +0100, Corentin Labbe wrote:
>
> It was you who suggested using crypto_async_request:
> https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html
> "The only wart with this scheme is that the drivers end up seeing
> struct crypto_async_request and will need to convert that to the
> respective request types but I couldn't really find a better way."
> 
> So I am waiting for suggestions.

The core engine code obviously will use the base type but it should
not be exposed to the driver authors.  IOW all exposed APIs should
take the final types such as aead_request before casting it.
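
For example, a minimal sketch (hypothetical naming, building on the
crypto_transfer_request_to_engine helper from this RFC) of such a
typed wrapper:

	#include <crypto/engine.h>
	#include <crypto/aead.h>

	/* The driver-facing API takes the final type; the cast down to
	 * the base type happens once, inside the engine code. */
	static inline int crypto_transfer_aead_request_to_engine(
			struct crypto_engine *engine,
			struct aead_request *req)
	{
		return crypto_transfer_request_to_engine(engine, &req->base);
	}

That way crypto_async_request stays an implementation detail of the
engine, and drivers only ever see aead_request and friends.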

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
  2017-12-22  9:06         ` Herbert Xu
  (?)
@ 2017-12-22  9:34           ` Corentin Labbe
  -1 siblings, 0 replies; 47+ messages in thread
From: Corentin Labbe @ 2017-12-22  9:34 UTC (permalink / raw)
  To: Herbert Xu
  Cc: alexandre.torgue, mst, linux-kernel, fabien.dessenne,
	virtualization, linux-crypto, mcoquelin.stm32, davem,
	linux-arm-kernel

On Fri, Dec 22, 2017 at 08:06:03PM +1100, Herbert Xu wrote:
> On Fri, Dec 22, 2017 at 09:41:48AM +0100, Corentin Labbe wrote:
> >
> > It was you who suggested using crypto_async_request:
> > https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html
> > "The only wart with this scheme is that the drivers end up seeing
> > struct crypto_async_request and will need to convert that to the
> > respective request types but I couldn't really find a better way."
> > 
> > So I am waiting for suggestions.
> 
> The core engine code obviously will use the base type but it should
> not be exposed to the driver authors.  IOW all exposed APIs should
> take the final types such as aead_request before casting it.
> 

For the driver->engine calls (crypto_finalize_request/crypto_transfer_request_to_engine) it's easy.

But I do not see how to do it for crypto_engine_op apart from re-introducing the big if/then/else
that you didn't want.
Or do you agree to set the request parameter of the crypto_engine_op callbacks
(prepare_request/unprepare_request/do_one_request) to void *?
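
That is, something like this rough sketch (illustrative only, not the
actual RFC code):

	struct crypto_engine_op {
		int (*prepare_request)(struct crypto_engine *engine,
				       void *areq);
		int (*unprepare_request)(struct crypto_engine *engine,
					 void *areq);
		int (*do_one_request)(struct crypto_engine *engine,
				      void *areq);
	};

where each driver casts the void * back to the one request type it
registered its callbacks for.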

Regards

^ permalink raw reply	[flat|nested] 47+ messages in thread
