* [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests
@ 2018-01-26 19:15 ` Corentin Labbe
  0 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi, Corentin Labbe

Hello

The current crypto_engine supports only ahash and ablkcipher requests.
My first patch, which tried to add skcipher support, was NACKed: it would have
added too many functions, and adding other algorithms (aead, asymmetric_key)
would have made the situation worse.

This patchset removes all algorithm-specific code; the engine now processes only generic crypto_async_request.

The request handler function pointers are moved out of struct crypto_engine
and are now stored directly in a crypto_engine_ctx.

Herbert's original proposal [1] cannot be implemented completely, since the
crypto_engine can only dequeue a crypto_async_request, and it is impossible to
access any request_ctx without knowing the underlying request type.

So I do something close to what was requested: adding a crypto_engine_ctx to
the TFM context. Note that the current implementation expects crypto_engine_ctx
to be the first member of that context, as the sketch below illustrates.
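
To illustrate, here is a minimal sketch of a driver context under this scheme
(your_tfm_ctx, your_do_one_request and your_init_tfm are hypothetical names;
only crypto_engine_ctx and the crypto_engine_op callbacks come from this
patchset):

	#include <crypto/engine.h>
	#include <crypto/internal/skcipher.h>

	/* enginectx must be the first member: the engine only sees the
	 * TFM context and assumes this layout.
	 */
	struct your_tfm_ctx {
		struct crypto_engine_ctx enginectx;
		/* ... driver-private state ... */
	};

	/* The engine hands back a bare crypto_async_request; recover
	 * the real request type with container_of() on its base member.
	 */
	static int your_do_one_request(struct crypto_engine *engine, void *areq)
	{
		struct skcipher_request *req =
			container_of(areq, struct skcipher_request, base);

		/* ... program the hardware for req ... */
		return 0;
	}

	static int your_init_tfm(struct crypto_skcipher *tfm)
	{
		struct your_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);

		ctx->enginectx.op.do_one_request = your_do_one_request;
		ctx->enginectx.op.prepare_request = NULL;
		ctx->enginectx.op.unprepare_request = NULL;
		return 0;
	}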

The first patch attempts to document the crypto engine API.
The second patch converts the crypto engine to the new scheme,
while the following patches convert the four existing users of crypto_engine.
Note that this split breaks bisection, so the final commits will probably all
be merged into one.

Apart from virtio, the last four patches were compile-tested only,
but the crypto engine itself has been tested with my new sun8i-ce driver.

Regards

[1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html

Changes since V1:
- Renamed crypto_engine_reqctx to crypto_engine_ctx
- Fixed indentation in function parameters
- No longer export crypto_transfer_request
- Added aead support
- Made crypto_finalize_request static

Changes since RFC:
- Added a documentation patch
- Added a patch for stm32-cryp
- Changed parameter of all crypto_engine_op functions from
	crypto_async_request to void*
- Reintroduced crypto_transfer_xxx_request_to_engine functions

Corentin Labbe (6):
  Documentation: crypto: document crypto engine API
  crypto: engine - Permit to enqueue all async requests
  crypto: omap: convert to new crypto engine API
  crypto: virtio: convert to new crypto engine API
  crypto: stm32-hash: convert to the new crypto engine API
  crypto: stm32-cryp: convert to the new crypto engine API

 Documentation/crypto/crypto_engine.rst       |  48 +++++
 crypto/crypto_engine.c                       | 301 +++++++++++++++------------
 drivers/crypto/omap-aes.c                    |  21 +-
 drivers/crypto/omap-aes.h                    |   3 +
 drivers/crypto/omap-des.c                    |  24 ++-
 drivers/crypto/stm32/stm32-cryp.c            |  29 ++-
 drivers/crypto/stm32/stm32-hash.c            |  20 +-
 drivers/crypto/virtio/virtio_crypto_algs.c   |  16 +-
 drivers/crypto/virtio/virtio_crypto_common.h |   3 +-
 drivers/crypto/virtio/virtio_crypto_core.c   |   3 -
 include/crypto/engine.h                      |  68 +++---
 11 files changed, 332 insertions(+), 204 deletions(-)
 create mode 100644 Documentation/crypto/crypto_engine.rst

-- 
2.13.6

* [PATCH v2 1/6] Documentation: crypto: document crypto engine API
@ 2018-01-26 19:15     ` Corentin Labbe
  0 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi, Corentin Labbe

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
---
 Documentation/crypto/crypto_engine.rst | 48 ++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100644 Documentation/crypto/crypto_engine.rst

diff --git a/Documentation/crypto/crypto_engine.rst b/Documentation/crypto/crypto_engine.rst
new file mode 100644
index 000000000000..8272ac92a14f
--- /dev/null
+++ b/Documentation/crypto/crypto_engine.rst
@@ -0,0 +1,48 @@
+=============
+CRYPTO ENGINE
+=============
+
+Overview
+--------
+The crypto engine (CE) API is a crypto queue manager.
+
+Requirement
+-----------
+You have to put the struct crypto_engine_ctx at the start of your tfm_ctx:
+struct your_tfm_ctx {
+        struct crypto_engine_ctx enginectx;
+        ...
+};
+Why: since the CE manages only crypto_async_request, it cannot know the
+underlying request type, and so it has access only to the TFM; using
+container_of() for accessing __ctx is therefore impossible.
+Furthermore, the crypto engine cannot know "struct your_tfm_ctx",
+so it must assume that crypto_engine_ctx is at the start of it.
+
+Order of operations
+-------------------
+You have to obtain a struct crypto_engine via crypto_engine_alloc_init(),
+and start it via crypto_engine_start().
+
+Before transferring any request, you have to fill the enginectx:
+- prepare_request: a function pointer, for any processing needed before handling the request
+- unprepare_request: a function pointer, undoing what was done in prepare_request
+- do_one_request: a function pointer, doing the encryption of the current request
+
+Note that those three functions get the crypto_async_request associated with the
+received request, so you need to retrieve the original request via container_of(areq, struct yourrequesttype_request, base);
+
+When your driver receives a crypto request, you have to transfer it to
+the crypto engine via one of:
+- crypto_transfer_ablkcipher_request_to_engine()
+- crypto_transfer_aead_request_to_engine()
+- crypto_transfer_akcipher_request_to_engine()
+- crypto_transfer_hash_request_to_engine()
+- crypto_transfer_skcipher_request_to_engine()
+
+At the end of the request processing, a call to one of the following functions is needed:
+- crypto_finalize_ablkcipher_request
+- crypto_finalize_aead_request
+- crypto_finalize_akcipher_request
+- crypto_finalize_hash_request
+- crypto_finalize_skcipher_request
-- 
2.13.6

* [PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
@ 2018-01-26 19:15     ` Corentin Labbe
  -1 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi, Corentin Labbe

The crypto engine can currently only enqueue hash and ablkcipher requests.
This patch permits it to enqueue any type of crypto_async_request.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Tested-by: Fabien Dessenne <fabien.dessenne@st.com>
---
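As an illustration, here is a rough sketch of how a converted driver uses the
new API (my_dev, my_skcipher_encrypt and my_irq_handler are hypothetical
names; only the crypto_transfer_*/crypto_finalize_* helpers come from this
patch):

	#include <crypto/engine.h>
	#include <crypto/skcipher.h>
	#include <linux/interrupt.h>

	/* hypothetical per-device state */
	struct my_dev {
		struct crypto_engine *engine;
		struct skcipher_request *cur_req;
	};

	static struct my_dev *my_dd;	/* set up at probe time */

	/* .encrypt entry point: just queue the request on the engine */
	static int my_skcipher_encrypt(struct skcipher_request *req)
	{
		return crypto_transfer_skcipher_request_to_engine(my_dd->engine, req);
	}

	/* completion interrupt: report the current request as done */
	static irqreturn_t my_irq_handler(int irq, void *data)
	{
		struct my_dev *dd = data;

		crypto_finalize_skcipher_request(dd->engine, dd->cur_req, 0);
		return IRQ_HANDLED;
	}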
 crypto/crypto_engine.c  | 301 ++++++++++++++++++++++++++----------------------
 include/crypto/engine.h |  68 ++++++-----
 2 files changed, 203 insertions(+), 166 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 61e7c4e02fd2..992e8d8dcdd9 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -15,13 +15,50 @@
 #include <linux/err.h>
 #include <linux/delay.h>
 #include <crypto/engine.h>
-#include <crypto/internal/hash.h>
 #include <uapi/linux/sched/types.h>
 #include "internal.h"
 
 #define CRYPTO_ENGINE_MAX_QLEN 10
 
 /**
+ * crypto_finalize_request - finalize one request if the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+static void crypto_finalize_request(struct crypto_engine *engine,
+			     struct crypto_async_request *req, int err)
+{
+	unsigned long flags;
+	bool finalize_cur_req = false;
+	int ret;
+	struct crypto_engine_ctx *enginectx;
+
+	spin_lock_irqsave(&engine->queue_lock, flags);
+	if (engine->cur_req == req)
+		finalize_cur_req = true;
+	spin_unlock_irqrestore(&engine->queue_lock, flags);
+
+	if (finalize_cur_req) {
+		enginectx = crypto_tfm_ctx(req->tfm);
+		if (engine->cur_req_prepared &&
+		    enginectx->op.unprepare_request) {
+			ret = enginectx->op.unprepare_request(engine, req);
+			if (ret)
+				dev_err(engine->dev, "failed to unprepare request\n");
+		}
+		spin_lock_irqsave(&engine->queue_lock, flags);
+		engine->cur_req = NULL;
+		engine->cur_req_prepared = false;
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+	}
+
+	req->complete(req, err);
+
+	kthread_queue_work(engine->kworker, &engine->pump_requests);
+}
+
+/**
  * crypto_pump_requests - dequeue one request from engine queue to process
  * @engine: the hardware engine
  * @in_kthread: true if we are in the context of the request pump thread
@@ -34,11 +71,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 				 bool in_kthread)
 {
 	struct crypto_async_request *async_req, *backlog;
-	struct ahash_request *hreq;
-	struct ablkcipher_request *breq;
 	unsigned long flags;
 	bool was_busy = false;
-	int ret, rtype;
+	int ret;
+	struct crypto_engine_ctx *enginectx;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -94,7 +130,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
-	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
 	/* Until here we get the request need to be encrypted successfully */
 	if (!was_busy && engine->prepare_crypt_hardware) {
 		ret = engine->prepare_crypt_hardware(engine);
@@ -104,57 +139,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		}
 	}
 
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		if (engine->prepare_hash_request) {
-			ret = engine->prepare_hash_request(engine, hreq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->hash_one_request(engine, hreq);
-		if (ret) {
-			dev_err(engine->dev, "failed to hash one request from queue\n");
-			goto req_err;
-		}
-		return;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		if (engine->prepare_cipher_request) {
-			ret = engine->prepare_cipher_request(engine, breq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->cipher_one_request(engine, breq);
+	enginectx = crypto_tfm_ctx(async_req->tfm);
+
+	if (enginectx->op.prepare_request) {
+		ret = enginectx->op.prepare_request(engine, async_req);
 		if (ret) {
-			dev_err(engine->dev, "failed to cipher one request from queue\n");
+			dev_err(engine->dev, "failed to prepare request: %d\n",
+				ret);
 			goto req_err;
 		}
-		return;
-	default:
-		dev_err(engine->dev, "failed to prepare request of unknown type\n");
-		return;
+		engine->cur_req_prepared = true;
+	}
+	if (!enginectx->op.do_one_request) {
+		dev_err(engine->dev, "failed to do request\n");
+		ret = -EINVAL;
+		goto req_err;
 	}
+	ret = enginectx->op.do_one_request(engine, async_req);
+	if (ret) {
+		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
+		goto req_err;
+	}
+	return;
 
 req_err:
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		crypto_finalize_hash_request(engine, hreq, ret);
-		break;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		crypto_finalize_cipher_request(engine, breq, ret);
-		break;
-	}
+	crypto_finalize_request(engine, async_req, ret);
 	return;
 
 out:
@@ -170,13 +179,12 @@ static void crypto_pump_work(struct kthread_work *work)
 }
 
 /**
- * crypto_transfer_cipher_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_request - transfer the new request into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
+static int crypto_transfer_request(struct crypto_engine *engine,
+				   struct crypto_async_request *req,
 				   bool need_pump)
 {
 	unsigned long flags;
@@ -189,7 +197,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
 		return -ESHUTDOWN;
 	}
 
-	ret = ablkcipher_enqueue_request(&engine->queue, req);
+	ret = crypto_enqueue_request(&engine->queue, req);
 
 	if (!engine->busy && need_pump)
 		kthread_queue_work(engine->kworker, &engine->pump_requests);
@@ -197,102 +205,131 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
 
 /**
- * crypto_transfer_cipher_request_to_engine - transfer one request to list
+ * crypto_transfer_request_to_engine - transfer one request to list
  * into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req)
+static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
+					     struct crypto_async_request *req)
 {
-	return crypto_transfer_cipher_request(engine, req, true);
+	return crypto_transfer_request(engine, req, true);
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
 
 /**
- * crypto_transfer_hash_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_ablkcipher_request_to_engine - transfer one ablkcipher_request
+ * to list into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
+ * TODO: Remove this function when skcipher conversion is finished
  */
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump)
+int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
+						 struct ablkcipher_request *req)
 {
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-
-	if (!engine->running) {
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-		return -ESHUTDOWN;
-	}
-
-	ret = ahash_enqueue_request(&engine->queue, req);
+	return crypto_transfer_request_to_engine(engine, &req->base);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_ablkcipher_request_to_engine);
 
-	if (!engine->busy && need_pump)
-		kthread_queue_work(engine->kworker, &engine->pump_requests);
+/**
+ * crypto_transfer_aead_request_to_engine - transfer one aead_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
+					   struct aead_request *req)
+{
+	return crypto_transfer_request_to_engine(engine, &req->base);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);
 
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-	return ret;
+/**
+ * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
+					       struct akcipher_request *req)
+{
+	return crypto_transfer_request_to_engine(engine, &req->base);
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
 
 /**
- * crypto_transfer_hash_request_to_engine - transfer one request to list
- * into the engine queue
+ * crypto_transfer_hash_request_to_engine - transfer one ahash_request
+ * to list into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
 int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
 					   struct ahash_request *req)
 {
-	return crypto_transfer_hash_request(engine, req, true);
+	return crypto_transfer_request_to_engine(engine, &req->base);
 }
 EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
 
 /**
- * crypto_finalize_cipher_request - finalize one request if the request is done
+ * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+					       struct skcipher_request *req)
+{
+	return crypto_transfer_request_to_engine(engine, &req->base);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
+
+/**
+ * crypto_finalize_ablkcipher_request - finalize one ablkcipher_request if
+ * the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
  * @err: error number
+ * TODO: Remove this function when skcipher conversion is finished
  */
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err)
+void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
+					struct ablkcipher_request *req, int err)
 {
-	unsigned long flags;
-	bool finalize_cur_req = false;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
-		finalize_cur_req = true;
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-
-	if (finalize_cur_req) {
-		if (engine->cur_req_prepared &&
-		    engine->unprepare_cipher_request) {
-			ret = engine->unprepare_cipher_request(engine, req);
-			if (ret)
-				dev_err(engine->dev, "failed to unprepare request\n");
-		}
-		spin_lock_irqsave(&engine->queue_lock, flags);
-		engine->cur_req = NULL;
-		engine->cur_req_prepared = false;
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-	}
+	return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_ablkcipher_request);
 
-	req->base.complete(&req->base, err);
+/**
+ * crypto_finalize_aead_request - finalize one aead_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_aead_request(struct crypto_engine *engine,
+				  struct aead_request *req, int err)
+{
+	return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_aead_request);
 
-	kthread_queue_work(engine->kworker, &engine->pump_requests);
+/**
+ * crypto_finalize_akcipher_request - finalize one akcipher_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_akcipher_request(struct crypto_engine *engine,
+				      struct akcipher_request *req, int err)
+{
+	return crypto_finalize_request(engine, &req->base, err);
 }
-EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
+EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
 
 /**
- * crypto_finalize_hash_request - finalize one request if the request is done
+ * crypto_finalize_hash_request - finalize one ahash_request if
+ * the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
  * @err: error number
@@ -300,35 +337,25 @@ EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
 void crypto_finalize_hash_request(struct crypto_engine *engine,
 				  struct ahash_request *req, int err)
 {
-	unsigned long flags;
-	bool finalize_cur_req = false;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
-		finalize_cur_req = true;
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-
-	if (finalize_cur_req) {
-		if (engine->cur_req_prepared &&
-		    engine->unprepare_hash_request) {
-			ret = engine->unprepare_hash_request(engine, req);
-			if (ret)
-				dev_err(engine->dev, "failed to unprepare request\n");
-		}
-		spin_lock_irqsave(&engine->queue_lock, flags);
-		engine->cur_req = NULL;
-		engine->cur_req_prepared = false;
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-	}
-
-	req->base.complete(&req->base, err);
-
-	kthread_queue_work(engine->kworker, &engine->pump_requests);
+	return crypto_finalize_request(engine, &req->base, err);
 }
 EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
 
 /**
+ * crypto_finalize_skcipher_request - finalize one skcipher_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+				      struct skcipher_request *req, int err)
+{
+	return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
+
+/**
  * crypto_engine_start - start the hardware engine
  * @engine: the hardware engine need to be started
  *
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index dd04c1699b51..1cbec29af3d6 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -17,7 +17,10 @@
 #include <linux/kernel.h>
 #include <linux/kthread.h>
 #include <crypto/algapi.h>
+#include <crypto/aead.h>
+#include <crypto/akcipher.h>
 #include <crypto/hash.h>
+#include <crypto/skcipher.h>
 
 #define ENGINE_NAME_LEN	30
 /*
@@ -37,12 +40,6 @@
  * @unprepare_crypt_hardware: there are currently no more requests on the
  * queue so the subsystem notifies the driver that it may relax the
  * hardware by issuing this call
- * @prepare_cipher_request: do some prepare if need before handle the current request
- * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
- * @cipher_one_request: do encryption for current request
- * @prepare_hash_request: do some prepare if need before handle the current request
- * @unprepare_hash_request: undo any work done by prepare_hash_request()
- * @hash_one_request: do hash for current request
  * @kworker: kthread worker struct for request pump
  * @pump_requests: work struct for scheduling work to the request pump
  * @priv_data: the engine private data
@@ -65,19 +62,6 @@ struct crypto_engine {
 	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
 	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
 
-	int (*prepare_cipher_request)(struct crypto_engine *engine,
-				      struct ablkcipher_request *req);
-	int (*unprepare_cipher_request)(struct crypto_engine *engine,
-					struct ablkcipher_request *req);
-	int (*prepare_hash_request)(struct crypto_engine *engine,
-				    struct ahash_request *req);
-	int (*unprepare_hash_request)(struct crypto_engine *engine,
-				      struct ahash_request *req);
-	int (*cipher_one_request)(struct crypto_engine *engine,
-				  struct ablkcipher_request *req);
-	int (*hash_one_request)(struct crypto_engine *engine,
-				struct ahash_request *req);
-
 	struct kthread_worker           *kworker;
 	struct kthread_work             pump_requests;
 
@@ -85,19 +69,45 @@ struct crypto_engine {
 	struct crypto_async_request	*cur_req;
 };
 
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
-				   bool need_pump);
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req);
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump);
+/*
+ * struct crypto_engine_op - crypto hardware engine operations
+ * @prepare_request: do some preparation if needed before handling the current request
+ * @unprepare_request: undo any work done by prepare_request()
+ * @do_one_request: do encryption for current request
+ */
+struct crypto_engine_op {
+	int (*prepare_request)(struct crypto_engine *engine,
+			       void *areq);
+	int (*unprepare_request)(struct crypto_engine *engine,
+				 void *areq);
+	int (*do_one_request)(struct crypto_engine *engine,
+			      void *areq);
+};
+
+struct crypto_engine_ctx {
+	struct crypto_engine_op op;
+};
+
+int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
+						 struct ablkcipher_request *req);
+int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
+					   struct aead_request *req);
+int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
+					       struct akcipher_request *req);
 int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
-					   struct ahash_request *req);
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err);
+					       struct ahash_request *req);
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+					       struct skcipher_request *req);
+void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
+					struct ablkcipher_request *req, int err);
+void crypto_finalize_aead_request(struct crypto_engine *engine,
+				  struct aead_request *req, int err);
+void crypto_finalize_akcipher_request(struct crypto_engine *engine,
+				      struct akcipher_request *req, int err);
 void crypto_finalize_hash_request(struct crypto_engine *engine,
 				  struct ahash_request *req, int err);
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+				      struct skcipher_request *req, int err);
 int crypto_engine_start(struct crypto_engine *engine);
 int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
@ 2018-01-26 19:15     ` Corentin Labbe
  0 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi, Corentin Labbe

The crypto engine could actually only enqueue hash and ablkcipher request.
This patch permit it to enqueue any type of crypto_async_request.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Tested-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 crypto/crypto_engine.c  | 301 ++++++++++++++++++++++++++----------------------
 include/crypto/engine.h |  68 ++++++-----
 2 files changed, 203 insertions(+), 166 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 61e7c4e02fd2..992e8d8dcdd9 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -15,13 +15,50 @@
 #include <linux/err.h>
 #include <linux/delay.h>
 #include <crypto/engine.h>
-#include <crypto/internal/hash.h>
 #include <uapi/linux/sched/types.h>
 #include "internal.h"
 
 #define CRYPTO_ENGINE_MAX_QLEN 10
 
 /**
+ * crypto_finalize_request - finalize one request if the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+static void crypto_finalize_request(struct crypto_engine *engine,
+			     struct crypto_async_request *req, int err)
+{
+	unsigned long flags;
+	bool finalize_cur_req = false;
+	int ret;
+	struct crypto_engine_ctx *enginectx;
+
+	spin_lock_irqsave(&engine->queue_lock, flags);
+	if (engine->cur_req == req)
+		finalize_cur_req = true;
+	spin_unlock_irqrestore(&engine->queue_lock, flags);
+
+	if (finalize_cur_req) {
+		enginectx = crypto_tfm_ctx(req->tfm);
+		if (engine->cur_req_prepared &&
+		    enginectx->op.unprepare_request) {
+			ret = enginectx->op.unprepare_request(engine, req);
+			if (ret)
+				dev_err(engine->dev, "failed to unprepare request\n");
+		}
+		spin_lock_irqsave(&engine->queue_lock, flags);
+		engine->cur_req = NULL;
+		engine->cur_req_prepared = false;
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+	}
+
+	req->complete(req, err);
+
+	kthread_queue_work(engine->kworker, &engine->pump_requests);
+}
+
+/**
  * crypto_pump_requests - dequeue one request from engine queue to process
  * @engine: the hardware engine
  * @in_kthread: true if we are in the context of the request pump thread
@@ -34,11 +71,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 				 bool in_kthread)
 {
 	struct crypto_async_request *async_req, *backlog;
-	struct ahash_request *hreq;
-	struct ablkcipher_request *breq;
 	unsigned long flags;
 	bool was_busy = false;
-	int ret, rtype;
+	int ret;
+	struct crypto_engine_ctx *enginectx;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -94,7 +130,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
-	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
 	/* Until here we get the request need to be encrypted successfully */
 	if (!was_busy && engine->prepare_crypt_hardware) {
 		ret = engine->prepare_crypt_hardware(engine);
@@ -104,57 +139,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		}
 	}
 
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		if (engine->prepare_hash_request) {
-			ret = engine->prepare_hash_request(engine, hreq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->hash_one_request(engine, hreq);
-		if (ret) {
-			dev_err(engine->dev, "failed to hash one request from queue\n");
-			goto req_err;
-		}
-		return;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		if (engine->prepare_cipher_request) {
-			ret = engine->prepare_cipher_request(engine, breq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->cipher_one_request(engine, breq);
+	enginectx = crypto_tfm_ctx(async_req->tfm);
+
+	if (enginectx->op.prepare_request) {
+		ret = enginectx->op.prepare_request(engine, async_req);
 		if (ret) {
-			dev_err(engine->dev, "failed to cipher one request from queue\n");
+			dev_err(engine->dev, "failed to prepare request: %d\n",
+				ret);
 			goto req_err;
 		}
-		return;
-	default:
-		dev_err(engine->dev, "failed to prepare request of unknown type\n");
-		return;
+		engine->cur_req_prepared = true;
+	}
+	if (!enginectx->op.do_one_request) {
+		dev_err(engine->dev, "failed to do request\n");
+		ret = -EINVAL;
+		goto req_err;
 	}
+	ret = enginectx->op.do_one_request(engine, async_req);
+	if (ret) {
+		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
+		goto req_err;
+	}
+	return;
 
 req_err:
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		crypto_finalize_hash_request(engine, hreq, ret);
-		break;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		crypto_finalize_cipher_request(engine, breq, ret);
-		break;
-	}
+	crypto_finalize_request(engine, async_req, ret);
 	return;
 
 out:
@@ -170,13 +179,12 @@ static void crypto_pump_work(struct kthread_work *work)
 }
 
 /**
- * crypto_transfer_cipher_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_request - transfer the new request into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
+static int crypto_transfer_request(struct crypto_engine *engine,
+				   struct crypto_async_request *req,
 				   bool need_pump)
 {
 	unsigned long flags;
@@ -189,7 +197,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
 		return -ESHUTDOWN;
 	}
 
-	ret = ablkcipher_enqueue_request(&engine->queue, req);
+	ret = crypto_enqueue_request(&engine->queue, req);
 
 	if (!engine->busy && need_pump)
 		kthread_queue_work(engine->kworker, &engine->pump_requests);
@@ -197,102 +205,131 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
 
 /**
- * crypto_transfer_cipher_request_to_engine - transfer one request to list
+ * crypto_transfer_request_to_engine - transfer one request to list
  * into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req)
+static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
+					     struct crypto_async_request *req)
 {
-	return crypto_transfer_cipher_request(engine, req, true);
+	return crypto_transfer_request(engine, req, true);
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
 
 /**
- * crypto_transfer_hash_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_ablkcipher_request_to_engine - transfer one ablkcipher_request
+ * to list into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
+ * TODO: Remove this function when skcipher conversion is finished
  */
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump)
+int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
+						 struct ablkcipher_request *req)
 {
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-
-	if (!engine->running) {
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-		return -ESHUTDOWN;
-	}
-
-	ret = ahash_enqueue_request(&engine->queue, req);
+	return crypto_transfer_request_to_engine(engine, &req->base);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_ablkcipher_request_to_engine);
 
-	if (!engine->busy && need_pump)
-		kthread_queue_work(engine->kworker, &engine->pump_requests);
+/**
+ * crypto_transfer_aead_request_to_engine - transfer one aead_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
+					   struct aead_request *req)
+{
+	return crypto_transfer_request_to_engine(engine, &req->base);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);
 
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-	return ret;
+/**
+ * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
+					       struct akcipher_request *req)
+{
+	return crypto_transfer_request_to_engine(engine, &req->base);
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
 
 /**
- * crypto_transfer_hash_request_to_engine - transfer one request to list
- * into the engine queue
+ * crypto_transfer_hash_request_to_engine - transfer one ahash_request
+ * to list into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
 int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
 					   struct ahash_request *req)
 {
-	return crypto_transfer_hash_request(engine, req, true);
+	return crypto_transfer_request_to_engine(engine, &req->base);
 }
 EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
 
 /**
- * crypto_finalize_cipher_request - finalize one request if the request is done
+ * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+					       struct skcipher_request *req)
+{
+	return crypto_transfer_request_to_engine(engine, &req->base);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
+
+/**
+ * crypto_finalize_ablkcipher_request - finalize one ablkcipher_request if
+ * the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
  * @err: error number
+ * TODO: Remove this function when skcipher conversion is finished
  */
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err)
+void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
+					struct ablkcipher_request *req, int err)
 {
-	unsigned long flags;
-	bool finalize_cur_req = false;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
-		finalize_cur_req = true;
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-
-	if (finalize_cur_req) {
-		if (engine->cur_req_prepared &&
-		    engine->unprepare_cipher_request) {
-			ret = engine->unprepare_cipher_request(engine, req);
-			if (ret)
-				dev_err(engine->dev, "failed to unprepare request\n");
-		}
-		spin_lock_irqsave(&engine->queue_lock, flags);
-		engine->cur_req = NULL;
-		engine->cur_req_prepared = false;
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-	}
+	return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_ablkcipher_request);
 
-	req->base.complete(&req->base, err);
+/**
+ * crypto_finalize_aead_request - finalize one aead_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_aead_request(struct crypto_engine *engine,
+				  struct aead_request *req, int err)
+{
+	return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_aead_request);
 
-	kthread_queue_work(engine->kworker, &engine->pump_requests);
+/**
+ * crypto_finalize_akcipher_request - finalize one akcipher_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_akcipher_request(struct crypto_engine *engine,
+				      struct akcipher_request *req, int err)
+{
+	return crypto_finalize_request(engine, &req->base, err);
 }
-EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
+EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
 
 /**
- * crypto_finalize_hash_request - finalize one request if the request is done
+ * crypto_finalize_hash_request - finalize one ahash_request if
+ * the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
  * @err: error number
@@ -300,35 +337,25 @@ EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
 void crypto_finalize_hash_request(struct crypto_engine *engine,
 				  struct ahash_request *req, int err)
 {
-	unsigned long flags;
-	bool finalize_cur_req = false;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
-		finalize_cur_req = true;
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-
-	if (finalize_cur_req) {
-		if (engine->cur_req_prepared &&
-		    engine->unprepare_hash_request) {
-			ret = engine->unprepare_hash_request(engine, req);
-			if (ret)
-				dev_err(engine->dev, "failed to unprepare request\n");
-		}
-		spin_lock_irqsave(&engine->queue_lock, flags);
-		engine->cur_req = NULL;
-		engine->cur_req_prepared = false;
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-	}
-
-	req->base.complete(&req->base, err);
-
-	kthread_queue_work(engine->kworker, &engine->pump_requests);
+	return crypto_finalize_request(engine, &req->base, err);
 }
 EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
 
 /**
+ * crypto_finalize_skcipher_request - finalize one skcipher_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request that needs to be finalized
+ * @err: error number
+ */
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+				      struct skcipher_request *req, int err)
+{
+	return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
+
+/**
  * crypto_engine_start - start the hardware engine
  * @engine: the hardware engine need to be started
  *
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index dd04c1699b51..1cbec29af3d6 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -17,7 +17,10 @@
 #include <linux/kernel.h>
 #include <linux/kthread.h>
 #include <crypto/algapi.h>
+#include <crypto/aead.h>
+#include <crypto/akcipher.h>
 #include <crypto/hash.h>
+#include <crypto/skcipher.h>
 
 #define ENGINE_NAME_LEN	30
 /*
@@ -37,12 +40,6 @@
  * @unprepare_crypt_hardware: there are currently no more requests on the
  * queue so the subsystem notifies the driver that it may relax the
  * hardware by issuing this call
- * @prepare_cipher_request: do some prepare if need before handle the current request
- * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
- * @cipher_one_request: do encryption for current request
- * @prepare_hash_request: do some prepare if need before handle the current request
- * @unprepare_hash_request: undo any work done by prepare_hash_request()
- * @hash_one_request: do hash for current request
  * @kworker: kthread worker struct for request pump
  * @pump_requests: work struct for scheduling work to the request pump
  * @priv_data: the engine private data
@@ -65,19 +62,6 @@ struct crypto_engine {
 	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
 	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
 
-	int (*prepare_cipher_request)(struct crypto_engine *engine,
-				      struct ablkcipher_request *req);
-	int (*unprepare_cipher_request)(struct crypto_engine *engine,
-					struct ablkcipher_request *req);
-	int (*prepare_hash_request)(struct crypto_engine *engine,
-				    struct ahash_request *req);
-	int (*unprepare_hash_request)(struct crypto_engine *engine,
-				      struct ahash_request *req);
-	int (*cipher_one_request)(struct crypto_engine *engine,
-				  struct ablkcipher_request *req);
-	int (*hash_one_request)(struct crypto_engine *engine,
-				struct ahash_request *req);
-
 	struct kthread_worker           *kworker;
 	struct kthread_work             pump_requests;
 
@@ -85,19 +69,45 @@ struct crypto_engine {
 	struct crypto_async_request	*cur_req;
 };
 
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
-				   bool need_pump);
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req);
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump);
+/*
+ * struct crypto_engine_op - crypto hardware engine operations
+ * @prepare_request: do some preparation if needed before handling the current request
+ * @unprepare_request: undo any work done by prepare_request()
+ * @do_one_request: do the operation for the current request
+ */
+struct crypto_engine_op {
+	int (*prepare_request)(struct crypto_engine *engine,
+			       void *areq);
+	int (*unprepare_request)(struct crypto_engine *engine,
+				 void *areq);
+	int (*do_one_request)(struct crypto_engine *engine,
+			      void *areq);
+};
+
+struct crypto_engine_ctx {
+	struct crypto_engine_op op;
+};
+
+int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
+						 struct ablkcipher_request *req);
+int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
+					   struct aead_request *req);
+int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
+					       struct akcipher_request *req);
 int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
-					   struct ahash_request *req);
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err);
+					       struct ahash_request *req);
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+					       struct skcipher_request *req);
+void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
+					struct ablkcipher_request *req, int err);
+void crypto_finalize_aead_request(struct crypto_engine *engine,
+				  struct aead_request *req, int err);
+void crypto_finalize_akcipher_request(struct crypto_engine *engine,
+				      struct akcipher_request *req, int err);
 void crypto_finalize_hash_request(struct crypto_engine *engine,
 				  struct ahash_request *req, int err);
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+				      struct skcipher_request *req, int err);
 int crypto_engine_start(struct crypto_engine *engine);
 int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread
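
With the ops moved into the per-TFM context, a converted driver no longer
pokes function pointers into struct crypto_engine at probe time; it fills
in enginectx.op from its init hook instead. Below is a minimal sketch of
that wiring, assuming a hypothetical "foo" driver (every foo_* name is
illustrative; only the crypto_engine_* symbols come from this series):

	#include <crypto/engine.h>

	struct foo_tfm_ctx {
		/* the engine finds this via crypto_tfm_ctx(), so it
		 * must sit at the start of the TFM context */
		struct crypto_engine_ctx enginectx;
		/* driver-private state follows */
	};

	static int foo_do_one_request(struct crypto_engine *engine, void *areq)
	{
		struct ablkcipher_request *req =
			container_of(areq, struct ablkcipher_request, base);

		/* program the hardware; once the operation completes
		 * (e.g. in the IRQ handler), report the result with
		 * crypto_finalize_ablkcipher_request(engine, req, err) */
		return 0;
	}

	static int foo_cra_init(struct crypto_tfm *tfm)
	{
		struct foo_tfm_ctx *ctx = crypto_tfm_ctx(tfm);

		ctx->enginectx.op.prepare_request = NULL;
		ctx->enginectx.op.unprepare_request = NULL;
		ctx->enginectx.op.do_one_request = foo_do_one_request;
		return 0;
	}

Since crypto_pump_requests() only dereferences op.do_one_request (plus the
two optional prepare hooks), the same pump now serves ablkcipher, aead,
akcipher, ahash and skcipher requests without any per-type switch.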

* [PATCH v2 3/6] crypto: omap: convert to new crypto engine API
  2018-01-26 19:15 ` Corentin Labbe
  (?)
@ 2018-01-26 19:15     ` Corentin Labbe
  -1 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue-qxv4g6HH51o,
	arei.gonglei-hv44wF8Li93QT0dZR+AlfA, corbet-T1hC0tSOHrs,
	davem-fT/PcQaiUtIeIZ0/mPfg9Q,
	herbert-lOAM2aK0SrRLBo1qDEOMRrpzq4S04n8Q,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	mcoquelin.stm32-Re5JQEeQqe8AvxtiuMwx3w,
	mst-H+wXaHxf7aLQT0dZR+AlfA, fabien.dessenne-qxv4g6HH51o
  Cc: linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-crypto-u79uwXL29TY76Z2rM5mHXA,
	linux-doc-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Corentin Labbe

This patch converts the driver to the new crypto engine API.

Signed-off-by: Corentin Labbe <clabbe.montjoie-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
---
 drivers/crypto/omap-aes.c | 21 +++++++++++++++------
 drivers/crypto/omap-aes.h |  3 +++
 drivers/crypto/omap-des.c | 24 ++++++++++++++++++------
 3 files changed, 36 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index fbec0a2e76dd..5bd383ed3dec 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -388,7 +388,7 @@ static void omap_aes_finish_req(struct omap_aes_dev *dd, int err)
 
 	pr_debug("err: %d\n", err);
 
-	crypto_finalize_cipher_request(dd->engine, req, err);
+	crypto_finalize_ablkcipher_request(dd->engine, req, err);
 
 	pm_runtime_mark_last_busy(dd->dev);
 	pm_runtime_put_autosuspend(dd->dev);
@@ -408,14 +408,15 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
 				 struct ablkcipher_request *req)
 {
 	if (req)
-		return crypto_transfer_cipher_request_to_engine(dd->engine, req);
+		return crypto_transfer_ablkcipher_request_to_engine(dd->engine, req);
 
 	return 0;
 }
 
 static int omap_aes_prepare_req(struct crypto_engine *engine,
-				struct ablkcipher_request *req)
+				void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
 	struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req);
@@ -468,8 +469,9 @@ static int omap_aes_prepare_req(struct crypto_engine *engine,
 }
 
 static int omap_aes_crypt_req(struct crypto_engine *engine,
-			      struct ablkcipher_request *req)
+			      void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
 	struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req);
 	struct omap_aes_dev *dd = rctx->dd;
 
@@ -601,6 +603,11 @@ static int omap_aes_ctr_decrypt(struct ablkcipher_request *req)
 	return omap_aes_crypt(req, FLAGS_CTR);
 }
 
+static int omap_aes_prepare_req(struct crypto_engine *engine,
+				void *req);
+static int omap_aes_crypt_req(struct crypto_engine *engine,
+			      void *req);
+
 static int omap_aes_cra_init(struct crypto_tfm *tfm)
 {
 	const char *name = crypto_tfm_alg_name(tfm);
@@ -616,6 +623,10 @@ static int omap_aes_cra_init(struct crypto_tfm *tfm)
 
 	tfm->crt_ablkcipher.reqsize = sizeof(struct omap_aes_reqctx);
 
+	ctx->enginectx.op.prepare_request = omap_aes_prepare_req;
+	ctx->enginectx.op.unprepare_request = NULL;
+	ctx->enginectx.op.do_one_request = omap_aes_crypt_req;
+
 	return 0;
 }
 
@@ -1119,8 +1130,6 @@ static int omap_aes_probe(struct platform_device *pdev)
 		goto err_engine;
 	}
 
-	dd->engine->prepare_cipher_request = omap_aes_prepare_req;
-	dd->engine->cipher_one_request = omap_aes_crypt_req;
 	err = crypto_engine_start(dd->engine);
 	if (err)
 		goto err_engine;
diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h
index 8906342e2b9a..fc3b46a85809 100644
--- a/drivers/crypto/omap-aes.h
+++ b/drivers/crypto/omap-aes.h
@@ -13,6 +13,8 @@
 #ifndef __OMAP_AES_H__
 #define __OMAP_AES_H__
 
+#include <crypto/engine.h>
+
 #define DST_MAXBURST			4
 #define DMA_MIN				(DST_MAXBURST * sizeof(u32))
 
@@ -95,6 +97,7 @@ struct omap_aes_gcm_result {
 };
 
 struct omap_aes_ctx {
+	struct crypto_engine_ctx enginectx;
 	int		keylen;
 	u32		key[AES_KEYSIZE_256 / sizeof(u32)];
 	u8		nonce[4];
diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
index ebc5c0f11f03..eb95b0d7f184 100644
--- a/drivers/crypto/omap-des.c
+++ b/drivers/crypto/omap-des.c
@@ -86,6 +86,7 @@
 #define FLAGS_OUT_DATA_ST_SHIFT	10
 
 struct omap_des_ctx {
+	struct crypto_engine_ctx enginectx;
 	struct omap_des_dev *dd;
 
 	int		keylen;
@@ -498,7 +499,7 @@ static void omap_des_finish_req(struct omap_des_dev *dd, int err)
 
 	pr_debug("err: %d\n", err);
 
-	crypto_finalize_cipher_request(dd->engine, req, err);
+	crypto_finalize_ablkcipher_request(dd->engine, req, err);
 
 	pm_runtime_mark_last_busy(dd->dev);
 	pm_runtime_put_autosuspend(dd->dev);
@@ -520,14 +521,15 @@ static int omap_des_handle_queue(struct omap_des_dev *dd,
 				 struct ablkcipher_request *req)
 {
 	if (req)
-		return crypto_transfer_cipher_request_to_engine(dd->engine, req);
+		return crypto_transfer_ablkcipher_request_to_engine(dd->engine, req);
 
 	return 0;
 }
 
 static int omap_des_prepare_req(struct crypto_engine *engine,
-				struct ablkcipher_request *req)
+				void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
 	struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct omap_des_dev *dd = omap_des_find_dev(ctx);
@@ -582,8 +584,9 @@ static int omap_des_prepare_req(struct crypto_engine *engine,
 }
 
 static int omap_des_crypt_req(struct crypto_engine *engine,
-			      struct ablkcipher_request *req)
+			      void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
 	struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct omap_des_dev *dd = omap_des_find_dev(ctx);
@@ -695,12 +698,23 @@ static int omap_des_cbc_decrypt(struct ablkcipher_request *req)
 	return omap_des_crypt(req, FLAGS_CBC);
 }
 
+static int omap_des_prepare_req(struct crypto_engine *engine,
+				void *areq);
+static int omap_des_crypt_req(struct crypto_engine *engine,
+			      void *areq);
+
 static int omap_des_cra_init(struct crypto_tfm *tfm)
 {
+	struct omap_des_ctx *ctx = crypto_tfm_ctx(tfm);
+
 	pr_debug("enter\n");
 
 	tfm->crt_ablkcipher.reqsize = sizeof(struct omap_des_reqctx);
 
+	ctx->enginectx.op.prepare_request = omap_des_prepare_req;
+	ctx->enginectx.op.unprepare_request = NULL;
+	ctx->enginectx.op.do_one_request = omap_des_crypt_req;
+
 	return 0;
 }
 
@@ -1046,8 +1060,6 @@ static int omap_des_probe(struct platform_device *pdev)
 		goto err_engine;
 	}
 
-	dd->engine->prepare_cipher_request = omap_des_prepare_req;
-	dd->engine->cipher_one_request = omap_des_crypt_req;
 	err = crypto_engine_start(dd->engine);
 	if (err)
 		goto err_engine;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread
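
Where a driver does need per-request setup, the optional prepare/unprepare
pair brackets do_one_request. A short sketch of how that could look (again
with hypothetical foo_* names; the error-path behaviour described in the
comments follows from crypto_pump_requests() and crypto_finalize_request()
in patch 2/6):

	static int foo_prepare_req(struct crypto_engine *engine, void *areq)
	{
		struct ablkcipher_request *req =
			container_of(areq, struct ablkcipher_request, base);

		/* e.g. map scatterlists for DMA; if this returns an
		 * error, the engine finalizes the request with it */
		return 0;
	}

	static int foo_unprepare_req(struct crypto_engine *engine, void *areq)
	{
		/* undo foo_prepare_req(); called at finalize time when
		 * the request was marked prepared */
		return 0;
	}

As in the omap conversion above, unprepare_request may simply stay NULL
when prepare_request leaves nothing to undo.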

* [PATCH v2 4/6] crypto: virtio: convert to new crypto engine API
@ 2018-01-26 19:15     ` Corentin Labbe
  0 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi, Corentin Labbe

This patch converts the virtio-crypto driver to the new crypto engine API.

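The conversion pattern is the same for every user of the engine: a
struct crypto_engine_ctx holding the operation pointers becomes the
first member of the TFM context, and each callback recovers its typed
request from the generic crypto_async_request with container_of(). A
minimal sketch of that pattern (the my_* names are illustrative, not
taken from this driver):

	struct my_cipher_ctx {
		struct crypto_engine_ctx enginectx;	/* must be first */
		/* driver-specific state follows */
	};

	static int my_one_request(struct crypto_engine *engine, void *areq)
	{
		struct ablkcipher_request *req =
			container_of(areq, struct ablkcipher_request, base);

		/* program the hardware for this request */
		return 0;
	}

	static int my_cra_init(struct crypto_tfm *tfm)
	{
		struct my_cipher_ctx *ctx = crypto_tfm_ctx(tfm);

		ctx->enginectx.op.do_one_request = my_one_request;
		ctx->enginectx.op.prepare_request = NULL;
		ctx->enginectx.op.unprepare_request = NULL;
		return 0;
	}
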
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
---
 drivers/crypto/virtio/virtio_crypto_algs.c   | 16 ++++++++++------
 drivers/crypto/virtio/virtio_crypto_common.h |  3 +--
 drivers/crypto/virtio/virtio_crypto_core.c   |  3 ---
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
index abe8c15450df..ba190cfa7aa1 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -29,6 +29,7 @@
 
 
 struct virtio_crypto_ablkcipher_ctx {
+	struct crypto_engine_ctx enginectx;
 	struct virtio_crypto *vcrypto;
 	struct crypto_tfm *tfm;
 
@@ -491,7 +492,7 @@ static int virtio_crypto_ablkcipher_encrypt(struct ablkcipher_request *req)
 	vc_sym_req->ablkcipher_req = req;
 	vc_sym_req->encrypt = true;
 
-	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
+	return crypto_transfer_ablkcipher_request_to_engine(data_vq->engine, req);
 }
 
 static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
@@ -511,7 +512,7 @@ static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
 	vc_sym_req->ablkcipher_req = req;
 	vc_sym_req->encrypt = false;
 
-	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
+	return crypto_transfer_ablkcipher_request_to_engine(data_vq->engine, req);
 }
 
 static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
@@ -521,6 +522,9 @@ static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
 	tfm->crt_ablkcipher.reqsize = sizeof(struct virtio_crypto_sym_request);
 	ctx->tfm = tfm;
 
+	ctx->enginectx.op.do_one_request = virtio_crypto_ablkcipher_crypt_req;
+	ctx->enginectx.op.prepare_request = NULL;
+	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
@@ -538,9 +542,9 @@ static void virtio_crypto_ablkcipher_exit(struct crypto_tfm *tfm)
 }
 
 int virtio_crypto_ablkcipher_crypt_req(
-	struct crypto_engine *engine,
-	struct ablkcipher_request *req)
+	struct crypto_engine *engine, void *vreq)
 {
+	struct ablkcipher_request *req = container_of(vreq, struct ablkcipher_request, base);
 	struct virtio_crypto_sym_request *vc_sym_req =
 				ablkcipher_request_ctx(req);
 	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
@@ -561,8 +565,8 @@ static void virtio_crypto_ablkcipher_finalize_req(
 	struct ablkcipher_request *req,
 	int err)
 {
-	crypto_finalize_cipher_request(vc_sym_req->base.dataq->engine,
-					req, err);
+	crypto_finalize_ablkcipher_request(vc_sym_req->base.dataq->engine,
+					   req, err);
 	kzfree(vc_sym_req->iv);
 	virtcrypto_clear_request(&vc_sym_req->base);
 }
diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
index e976539a05d9..72621bd67211 100644
--- a/drivers/crypto/virtio/virtio_crypto_common.h
+++ b/drivers/crypto/virtio/virtio_crypto_common.h
@@ -107,8 +107,7 @@ struct virtio_crypto *virtcrypto_get_dev_node(int node);
 int virtcrypto_dev_start(struct virtio_crypto *vcrypto);
 void virtcrypto_dev_stop(struct virtio_crypto *vcrypto);
 int virtio_crypto_ablkcipher_crypt_req(
-	struct crypto_engine *engine,
-	struct ablkcipher_request *req);
+	struct crypto_engine *engine, void *vreq);
 
 void
 virtcrypto_clear_request(struct virtio_crypto_request *vc_req);
diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
index ff1410a32c2b..83326986c113 100644
--- a/drivers/crypto/virtio/virtio_crypto_core.c
+++ b/drivers/crypto/virtio/virtio_crypto_core.c
@@ -111,9 +111,6 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
 			ret = -ENOMEM;
 			goto err_engine;
 		}
-
-		vi->data_vq[i].engine->cipher_one_request =
-			virtio_crypto_ablkcipher_crypt_req;
 	}
 
 	kfree(names);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 5/6] crypto: stm32-hash: convert to the new crypto engine API
@ 2018-01-26 19:15     ` Corentin Labbe
  0 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi, Corentin Labbe

This patch converts the stm32-hash driver to the new crypto engine API.

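The hash path differs from the cipher conversions only in the request
type recovered inside the engine callbacks; enqueueing through
crypto_transfer_hash_request_to_engine() is unchanged. A minimal sketch
(the my_* name is illustrative):

	static int my_hash_one_request(struct crypto_engine *engine, void *areq)
	{
		struct ahash_request *req =
			container_of(areq, struct ahash_request, base);

		/* start the hash operation described by req */
		return 0;
	}
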
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Tested-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 drivers/crypto/stm32/stm32-hash.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
index 4ca4a264a833..89b0c2490d80 100644
--- a/drivers/crypto/stm32/stm32-hash.c
+++ b/drivers/crypto/stm32/stm32-hash.c
@@ -122,6 +122,7 @@ enum stm32_hash_data_format {
 #define HASH_DMA_THRESHOLD		50
 
 struct stm32_hash_ctx {
+	struct crypto_engine_ctx enginectx;
 	struct stm32_hash_dev	*hdev;
 	unsigned long		flags;
 
@@ -828,15 +829,19 @@ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
 	return 0;
 }
 
+static int stm32_hash_one_request(struct crypto_engine *engine, void *areq);
+static int stm32_hash_prepare_req(struct crypto_engine *engine, void *areq);
+
 static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
 				   struct ahash_request *req)
 {
 	return crypto_transfer_hash_request_to_engine(hdev->engine, req);
 }
 
-static int stm32_hash_prepare_req(struct crypto_engine *engine,
-				  struct ahash_request *req)
+static int stm32_hash_prepare_req(struct crypto_engine *engine, void *areq)
 {
+	struct ahash_request *req = container_of(areq, struct ahash_request,
+						 base);
 	struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
 	struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
 	struct stm32_hash_request_ctx *rctx;
@@ -854,9 +859,10 @@ static int stm32_hash_prepare_req(struct crypto_engine *engine,
 	return stm32_hash_hw_init(hdev, rctx);
 }
 
-static int stm32_hash_one_request(struct crypto_engine *engine,
-				  struct ahash_request *req)
+static int stm32_hash_one_request(struct crypto_engine *engine, void *areq)
 {
+	struct ahash_request *req = container_of(areq, struct ahash_request,
+						 base);
 	struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
 	struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
 	struct stm32_hash_request_ctx *rctx;
@@ -1033,6 +1039,9 @@ static int stm32_hash_cra_init_algs(struct crypto_tfm *tfm,
 	if (algs_hmac_name)
 		ctx->flags |= HASH_FLAGS_HMAC;
 
+	ctx->enginectx.op.do_one_request = stm32_hash_one_request;
+	ctx->enginectx.op.prepare_request = stm32_hash_prepare_req;
+	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
@@ -1493,9 +1502,6 @@ static int stm32_hash_probe(struct platform_device *pdev)
 		goto err_engine;
 	}
 
-	hdev->engine->prepare_hash_request = stm32_hash_prepare_req;
-	hdev->engine->hash_one_request = stm32_hash_one_request;
-
 	ret = crypto_engine_start(hdev->engine);
 	if (ret)
 		goto err_engine_start;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 6/6] crypto: stm32-cryp: convert to the new crypto engine API
  2018-01-26 19:15 ` Corentin Labbe
@ 2018-01-26 19:15   ` Corentin Labbe
  -1 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi, Corentin Labbe

This patch converts the stm32-cryp driver to the new crypto engine API.
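
The request life cycle seen by the driver reduces to three steps; a
rough sketch using the names from the diff below (callback order as
registered in stm32_cryp_cra_init):

	/* submission: hand the request over to the engine */
	return crypto_transfer_ablkcipher_request_to_engine(cryp->engine, req);

	/*
	 * the engine then invokes the registered ops in order:
	 * prepare_request() if set, then do_one_request()
	 */

	/* completion: give the finished request back to the crypto API */
	crypto_finalize_ablkcipher_request(cryp->engine, cryp->req, err);
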
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Tested-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 drivers/crypto/stm32/stm32-cryp.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index cf1dddbeaa2c..a816b2ffcaad 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -91,6 +91,7 @@
 #define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
 
 struct stm32_cryp_ctx {
+	struct crypto_engine_ctx enginectx;
 	struct stm32_cryp       *cryp;
 	int                     keylen;
 	u32                     key[AES_KEYSIZE_256 / sizeof(u32)];
@@ -478,7 +479,7 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp)
 		free_pages((unsigned long)buf_out, pages);
 	}
 
-	crypto_finalize_cipher_request(cryp->engine, cryp->req, err);
+	crypto_finalize_ablkcipher_request(cryp->engine, cryp->req, err);
 	cryp->req = NULL;
 
 	memset(cryp->ctx->key, 0, cryp->ctx->keylen);
@@ -494,10 +495,19 @@ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
 	return 0;
 }
 
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq);
+static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
+					 void *areq);
+
 static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
 {
+	struct stm32_cryp_ctx *ctx = crypto_tfm_ctx(tfm);
+
 	tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
 
+	ctx->enginectx.op.do_one_request = stm32_cryp_cipher_one_req;
+	ctx->enginectx.op.prepare_request = stm32_cryp_prepare_cipher_req;
+	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
@@ -513,7 +523,7 @@ static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode)
 
 	rctx->mode = mode;
 
-	return crypto_transfer_cipher_request_to_engine(cryp->engine, req);
+	return crypto_transfer_ablkcipher_request_to_engine(cryp->engine, req);
 }
 
 static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
@@ -695,14 +705,20 @@ static int stm32_cryp_prepare_req(struct crypto_engine *engine,
 }
 
 static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
-					 struct ablkcipher_request *req)
+					 void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq,
+						      struct ablkcipher_request,
+						      base);
+
 	return stm32_cryp_prepare_req(engine, req);
 }
 
-static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
-				     struct ablkcipher_request *req)
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq,
+						      struct ablkcipher_request,
+						      base);
 	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct stm32_cryp *cryp = ctx->cryp;
@@ -1104,9 +1120,6 @@ static int stm32_cryp_probe(struct platform_device *pdev)
 		goto err_engine1;
 	}
 
-	cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
-	cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
-
 	ret = crypto_engine_start(cryp->engine);
 	if (ret) {
 		dev_err(dev, "Could not start crypto engine\n");
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 6/6] crypto: stm32-cryp: convert to the new crypto engine API
  2018-01-26 19:15 ` Corentin Labbe
                   ` (7 preceding siblings ...)
  (?)
@ 2018-01-26 19:15 ` Corentin Labbe
  -1 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: alexandre.torgue, arei.gonglei, corbet, davem, herbert, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne
  Cc: linux-doc, linux-kernel, virtualization, linux-sunxi,
	Corentin Labbe, linux-crypto, linux-arm-kernel

This patch convert the stm32-cryp driver to the new crypto engine API.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Tested-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 drivers/crypto/stm32/stm32-cryp.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index cf1dddbeaa2c..a816b2ffcaad 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -91,6 +91,7 @@
 #define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
 
 struct stm32_cryp_ctx {
+	struct crypto_engine_ctx enginectx;
 	struct stm32_cryp       *cryp;
 	int                     keylen;
 	u32                     key[AES_KEYSIZE_256 / sizeof(u32)];
@@ -478,7 +479,7 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp)
 		free_pages((unsigned long)buf_out, pages);
 	}
 
-	crypto_finalize_cipher_request(cryp->engine, cryp->req, err);
+	crypto_finalize_ablkcipher_request(cryp->engine, cryp->req, err);
 	cryp->req = NULL;
 
 	memset(cryp->ctx->key, 0, cryp->ctx->keylen);
@@ -494,10 +495,19 @@ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
 	return 0;
 }
 
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq);
+static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
+					 void *areq);
+
 static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
 {
+	struct stm32_cryp_ctx *ctx = crypto_tfm_ctx(tfm);
+
 	tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
 
+	ctx->enginectx.op.do_one_request = stm32_cryp_cipher_one_req;
+	ctx->enginectx.op.prepare_request = stm32_cryp_prepare_cipher_req;
+	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
@@ -513,7 +523,7 @@ static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode)
 
 	rctx->mode = mode;
 
-	return crypto_transfer_cipher_request_to_engine(cryp->engine, req);
+	return crypto_transfer_ablkcipher_request_to_engine(cryp->engine, req);
 }
 
 static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
@@ -695,14 +705,20 @@ static int stm32_cryp_prepare_req(struct crypto_engine *engine,
 }
 
 static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
-					 struct ablkcipher_request *req)
+					 void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq,
+						      struct ablkcipher_request,
+						      base);
+
 	return stm32_cryp_prepare_req(engine, req);
 }
 
-static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
-				     struct ablkcipher_request *req)
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq,
+						      struct ablkcipher_request,
+						      base);
 	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct stm32_cryp *cryp = ctx->cryp;
@@ -1104,9 +1120,6 @@ static int stm32_cryp_probe(struct platform_device *pdev)
 		goto err_engine1;
 	}
 
-	cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
-	cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
-
 	ret = crypto_engine_start(cryp->engine);
 	if (ret) {
 		dev_err(dev, "Could not start crypto engine\n");
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 6/6] crypto: stm32-cryp: convert to the new crypto engine API
@ 2018-01-26 19:15   ` Corentin Labbe
  0 siblings, 0 replies; 44+ messages in thread
From: Corentin Labbe @ 2018-01-26 19:15 UTC (permalink / raw)
  To: linux-arm-kernel

This patch convert the stm32-cryp driver to the new crypto engine API.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Tested-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 drivers/crypto/stm32/stm32-cryp.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index cf1dddbeaa2c..a816b2ffcaad 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -91,6 +91,7 @@
 #define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
 
 struct stm32_cryp_ctx {
+	struct crypto_engine_ctx enginectx;
 	struct stm32_cryp       *cryp;
 	int                     keylen;
 	u32                     key[AES_KEYSIZE_256 / sizeof(u32)];
@@ -478,7 +479,7 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp)
 		free_pages((unsigned long)buf_out, pages);
 	}
 
-	crypto_finalize_cipher_request(cryp->engine, cryp->req, err);
+	crypto_finalize_ablkcipher_request(cryp->engine, cryp->req, err);
 	cryp->req = NULL;
 
 	memset(cryp->ctx->key, 0, cryp->ctx->keylen);
@@ -494,10 +495,19 @@ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
 	return 0;
 }
 
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq);
+static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
+					 void *areq);
+
 static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
 {
+	struct stm32_cryp_ctx *ctx = crypto_tfm_ctx(tfm);
+
 	tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
 
+	ctx->enginectx.op.do_one_request = stm32_cryp_cipher_one_req;
+	ctx->enginectx.op.prepare_request = stm32_cryp_prepare_cipher_req;
+	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
@@ -513,7 +523,7 @@ static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode)
 
 	rctx->mode = mode;
 
-	return crypto_transfer_cipher_request_to_engine(cryp->engine, req);
+	return crypto_transfer_ablkcipher_request_to_engine(cryp->engine, req);
 }
 
 static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
@@ -695,14 +705,20 @@ static int stm32_cryp_prepare_req(struct crypto_engine *engine,
 }
 
 static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
-					 struct ablkcipher_request *req)
+					 void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq,
+						      struct ablkcipher_request,
+						      base);
+
 	return stm32_cryp_prepare_req(engine, req);
 }
 
-static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
-				     struct ablkcipher_request *req)
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq)
 {
+	struct ablkcipher_request *req = container_of(areq,
+						      struct ablkcipher_request,
+						      base);
 	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
 			crypto_ablkcipher_reqtfm(req));
 	struct stm32_cryp *cryp = ctx->cryp;
@@ -1104,9 +1120,6 @@ static int stm32_cryp_probe(struct platform_device *pdev)
 		goto err_engine1;
 	}
 
-	cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
-	cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
-
 	ret = crypto_engine_start(cryp->engine);
 	if (ret) {
 		dev_err(dev, "Could not start crypto engine\n");
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 44+ messages in thread
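
For readers following the conversion, the driver-side pattern reduces to
embedding struct crypto_engine_ctx as the first member of the TFM context
and filling in its ops at init time. A minimal sketch, with hypothetical
names (my_ctx, my_one_req) that are not part of the patch:

	#include <linux/crypto.h>
	#include <crypto/engine.h>

	struct my_ctx {
		struct crypto_engine_ctx enginectx;	/* must stay first */
		/* driver-private state follows */
	};

	/* called by the engine kthread for each dequeued request */
	static int my_one_req(struct crypto_engine *engine, void *areq)
	{
		/* recover the typed request from the opaque pointer */
		struct ablkcipher_request *req =
			container_of(areq, struct ablkcipher_request, base);

		/* program the hardware from req; report completion later
		 * with crypto_finalize_ablkcipher_request(engine, req, err)
		 */
		return 0;
	}

	static int my_cra_init(struct crypto_tfm *tfm)
	{
		struct my_ctx *ctx = crypto_tfm_ctx(tfm);

		ctx->enginectx.op.do_one_request = my_one_req;
		ctx->enginectx.op.prepare_request = NULL;
		ctx->enginectx.op.unprepare_request = NULL;
		return 0;
	}

With this shape, the probe path no longer assigns per-algorithm engine
callbacks; it only allocates the engine and calls crypto_engine_start(),
as the last hunk above shows.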

* Re: [PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
  2018-01-26 19:15     ` Corentin Labbe
  (?)
@ 2018-02-14 13:31       ` Fabien DESSENNE
  -1 siblings, 0 replies; 44+ messages in thread
From: Fabien DESSENNE @ 2018-02-14 13:31 UTC (permalink / raw)
  To: Corentin Labbe, Alexandre TORGUE, arei.gonglei, corbet, davem,
	herbert, jasowang, mcoquelin.stm32, mst
  Cc: linux-doc, linux-kernel, virtualization, linux-sunxi,
	linux-crypto, linux-arm-kernel

Adding my Tested-by for the AEAD part, which is new in v2.


On 26/01/18 20:15, Corentin Labbe wrote:
> The crypto engine can currently only enqueue hash and ablkcipher requests.
> This patch permits it to enqueue any type of crypto_async_request.
>
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
> Tested-by: Fabien Dessenne <fabien.dessenne@st.com>

Tested-by: Fabien Dessenne <fabien.dessenne@st.com>


> ---
>   crypto/crypto_engine.c  | 301 ++++++++++++++++++++++++++----------------------
>   include/crypto/engine.h |  68 ++++++-----
>   2 files changed, 203 insertions(+), 166 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index 61e7c4e02fd2..992e8d8dcdd9 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -15,13 +15,50 @@
>   #include <linux/err.h>
>   #include <linux/delay.h>
>   #include <crypto/engine.h>
> -#include <crypto/internal/hash.h>
>   #include <uapi/linux/sched/types.h>
>   #include "internal.h"
>   
>   #define CRYPTO_ENGINE_MAX_QLEN 10
>   
>   /**
> + * crypto_finalize_request - finalize one request if the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +static void crypto_finalize_request(struct crypto_engine *engine,
> +			     struct crypto_async_request *req, int err)
> +{
> +	unsigned long flags;
> +	bool finalize_cur_req = false;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
> +
> +	spin_lock_irqsave(&engine->queue_lock, flags);
> +	if (engine->cur_req == req)
> +		finalize_cur_req = true;
> +	spin_unlock_irqrestore(&engine->queue_lock, flags);
> +
> +	if (finalize_cur_req) {
> +		enginectx = crypto_tfm_ctx(req->tfm);
> +		if (engine->cur_req_prepared &&
> +		    enginectx->op.unprepare_request) {
> +			ret = enginectx->op.unprepare_request(engine, req);
> +			if (ret)
> +				dev_err(engine->dev, "failed to unprepare request\n");
> +		}
> +		spin_lock_irqsave(&engine->queue_lock, flags);
> +		engine->cur_req = NULL;
> +		engine->cur_req_prepared = false;
> +		spin_unlock_irqrestore(&engine->queue_lock, flags);
> +	}
> +
> +	req->complete(req, err);
> +
> +	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +}
> +
> +/**
>    * crypto_pump_requests - dequeue one request from engine queue to process
>    * @engine: the hardware engine
>    * @in_kthread: true if we are in the context of the request pump thread
> @@ -34,11 +71,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   				 bool in_kthread)
>   {
>   	struct crypto_async_request *async_req, *backlog;
> -	struct ahash_request *hreq;
> -	struct ablkcipher_request *breq;
>   	unsigned long flags;
>   	bool was_busy = false;
> -	int ret, rtype;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
>   
>   	spin_lock_irqsave(&engine->queue_lock, flags);
>   
> @@ -94,7 +130,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   
> -	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
>   	/* Until here we get the request need to be encrypted successfully */
>   	if (!was_busy && engine->prepare_crypt_hardware) {
>   		ret = engine->prepare_crypt_hardware(engine);
> @@ -104,57 +139,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   		}
>   	}
>   
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		if (engine->prepare_hash_request) {
> -			ret = engine->prepare_hash_request(engine, hreq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->hash_one_request(engine, hreq);
> -		if (ret) {
> -			dev_err(engine->dev, "failed to hash one request from queue\n");
> -			goto req_err;
> -		}
> -		return;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		if (engine->prepare_cipher_request) {
> -			ret = engine->prepare_cipher_request(engine, breq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->cipher_one_request(engine, breq);
> +	enginectx = crypto_tfm_ctx(async_req->tfm);
> +
> +	if (enginectx->op.prepare_request) {
> +		ret = enginectx->op.prepare_request(engine, async_req);
>   		if (ret) {
> -			dev_err(engine->dev, "failed to cipher one request from queue\n");
> +			dev_err(engine->dev, "failed to prepare request: %d\n",
> +				ret);
>   			goto req_err;
>   		}
> -		return;
> -	default:
> -		dev_err(engine->dev, "failed to prepare request of unknown type\n");
> -		return;
> +		engine->cur_req_prepared = true;
> +	}
> +	if (!enginectx->op.do_one_request) {
> +		dev_err(engine->dev, "failed to do request\n");
> +		ret = -EINVAL;
> +		goto req_err;
>   	}
> +	ret = enginectx->op.do_one_request(engine, async_req);
> +	if (ret) {
> +		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
> +		goto req_err;
> +	}
> +	return;
>   
>   req_err:
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		crypto_finalize_hash_request(engine, hreq, ret);
> -		break;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		crypto_finalize_cipher_request(engine, breq, ret);
> -		break;
> -	}
> +	crypto_finalize_request(engine, async_req, ret);
>   	return;
>   
>   out:
> @@ -170,13 +179,12 @@ static void crypto_pump_work(struct kthread_work *work)
>   }
>   
>   /**
> - * crypto_transfer_cipher_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_request - transfer the new request into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> +static int crypto_transfer_request(struct crypto_engine *engine,
> +				   struct crypto_async_request *req,
>   				   bool need_pump)
>   {
>   	unsigned long flags;
> @@ -189,7 +197,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   		return -ESHUTDOWN;
>   	}
>   
> -	ret = ablkcipher_enqueue_request(&engine->queue, req);
> +	ret = crypto_enqueue_request(&engine->queue, req);
>   
>   	if (!engine->busy && need_pump)
>   		kthread_queue_work(engine->kworker, &engine->pump_requests);
> @@ -197,102 +205,131 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
>   
>   /**
> - * crypto_transfer_cipher_request_to_engine - transfer one request to list
> + * crypto_transfer_request_to_engine - transfer one request to list
>    * into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req)
> +static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> +					     struct crypto_async_request *req)
>   {
> -	return crypto_transfer_cipher_request(engine, req, true);
> +	return crypto_transfer_request(engine, req, true);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_ablkcipher_request_to_engine - transfer one ablkcipher_request
> + * to list into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump)
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req)
>   {
> -	unsigned long flags;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -
> -	if (!engine->running) {
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -		return -ESHUTDOWN;
> -	}
> -
> -	ret = ahash_enqueue_request(&engine->queue, req);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_ablkcipher_request_to_engine);
>   
> -	if (!engine->busy && need_pump)
> -		kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_transfer_aead_request_to_engine - transfer one aead_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);
>   
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	return ret;
> +/**
> + * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request_to_engine - transfer one request to list
> - * into the engine queue
> + * crypto_transfer_hash_request_to_engine - transfer one ahash_request
> + * to list into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
>   					   struct ahash_request *req)
>   {
> -	return crypto_transfer_hash_request(engine, req, true);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
>   EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
>   
>   /**
> - * crypto_finalize_cipher_request - finalize one request if the request is done
> + * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
> +
> +/**
> + * crypto_finalize_ablkcipher_request - finalize one ablkcipher_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err)
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_cipher_request) {
> -			ret = engine->unprepare_cipher_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_ablkcipher_request);
>   
> -	req->base.complete(&req->base, err);
> +/**
> + * crypto_finalize_aead_request - finalize one aead_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_aead_request);
>   
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_finalize_akcipher_request - finalize one akcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
> -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
> +EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
>   
>   /**
> - * crypto_finalize_hash_request - finalize one request if the request is done
> + * crypto_finalize_hash_request - finalize one ahash_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> @@ -300,35 +337,25 @@ EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_hash_request) {
> -			ret = engine->unprepare_hash_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> -
> -	req->base.complete(&req->base, err);
> -
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
>   EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
>   
>   /**
> + * crypto_finalize_skcipher_request - finalize one skcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
> +
> +/**
>    * crypto_engine_start - start the hardware engine
>    * @engine: the hardware engine need to be started
>    *
> diff --git a/include/crypto/engine.h b/include/crypto/engine.h
> index dd04c1699b51..1cbec29af3d6 100644
> --- a/include/crypto/engine.h
> +++ b/include/crypto/engine.h
> @@ -17,7 +17,10 @@
>   #include <linux/kernel.h>
>   #include <linux/kthread.h>
>   #include <crypto/algapi.h>
> +#include <crypto/aead.h>
> +#include <crypto/akcipher.h>
>   #include <crypto/hash.h>
> +#include <crypto/skcipher.h>
>   
>   #define ENGINE_NAME_LEN	30
>   /*
> @@ -37,12 +40,6 @@
>    * @unprepare_crypt_hardware: there are currently no more requests on the
>    * queue so the subsystem notifies the driver that it may relax the
>    * hardware by issuing this call
> - * @prepare_cipher_request: do some prepare if need before handle the current request
> - * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
> - * @cipher_one_request: do encryption for current request
> - * @prepare_hash_request: do some prepare if need before handle the current request
> - * @unprepare_hash_request: undo any work done by prepare_hash_request()
> - * @hash_one_request: do hash for current request
>    * @kworker: kthread worker struct for request pump
>    * @pump_requests: work struct for scheduling work to the request pump
>    * @priv_data: the engine private data
> @@ -65,19 +62,6 @@ struct crypto_engine {
>   	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
>   	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>   
> -	int (*prepare_cipher_request)(struct crypto_engine *engine,
> -				      struct ablkcipher_request *req);
> -	int (*unprepare_cipher_request)(struct crypto_engine *engine,
> -					struct ablkcipher_request *req);
> -	int (*prepare_hash_request)(struct crypto_engine *engine,
> -				    struct ahash_request *req);
> -	int (*unprepare_hash_request)(struct crypto_engine *engine,
> -				      struct ahash_request *req);
> -	int (*cipher_one_request)(struct crypto_engine *engine,
> -				  struct ablkcipher_request *req);
> -	int (*hash_one_request)(struct crypto_engine *engine,
> -				struct ahash_request *req);
> -
>   	struct kthread_worker           *kworker;
>   	struct kthread_work             pump_requests;
>   
> @@ -85,19 +69,45 @@ struct crypto_engine {
>   	struct crypto_async_request	*cur_req;
>   };
>   
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> -				   bool need_pump);
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req);
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump);
> +/*
> + * struct crypto_engine_op - crypto hardware engine operations
> + * @prepare_request: do some preparation if needed before handling the current request
> + * @unprepare_request: undo any work done by prepare_request()
> + * @do_one_request: do encryption for current request
> + */
> +struct crypto_engine_op {
> +	int (*prepare_request)(struct crypto_engine *engine,
> +			       void *areq);
> +	int (*unprepare_request)(struct crypto_engine *engine,
> +				 void *areq);
> +	int (*do_one_request)(struct crypto_engine *engine,
> +			      void *areq);
> +};
> +
> +struct crypto_engine_ctx {
> +	struct crypto_engine_op op;
> +};
> +
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req);
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req);
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req);
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> -					   struct ahash_request *req);
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err);
> +					       struct ahash_request *req);
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req);
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err);
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err);
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err);
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err);
>   int crypto_engine_start(struct crypto_engine *engine);
>   int crypto_engine_stop(struct crypto_engine *engine);
>   struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);

^ permalink raw reply	[flat|nested] 44+ messages in thread
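
Since the AEAD support is the part that is new in v2, a minimal sketch of
an AEAD user of the generic API may help; the names (my_aead_ctx,
my_aead_encrypt) are hypothetical, only the transfer/finalize helpers come
from this patch:

	#include <crypto/aead.h>
	#include <crypto/engine.h>

	struct my_aead_ctx {
		struct crypto_engine_ctx enginectx;	/* must stay first */
		struct crypto_engine *engine;
	};

	/* entry point: hand the request over to the engine queue */
	static int my_aead_encrypt(struct aead_request *req)
	{
		struct my_aead_ctx *ctx = crypto_tfm_ctx(req->base.tfm);

		return crypto_transfer_aead_request_to_engine(ctx->engine, req);
	}

	/* engine callback: recover the aead_request from the opaque pointer */
	static int my_aead_do_one(struct crypto_engine *engine, void *areq)
	{
		struct aead_request *req =
			container_of(areq, struct aead_request, base);

		/* start the hardware; once it is done, call
		 * crypto_finalize_aead_request(engine, req, err)
		 */
		return 0;
	}

The same shape applies to akcipher and skcipher requests through their
respective transfer and finalize helpers exported by this patch.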

* Re: [PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
@ 2018-02-14 13:31       ` Fabien DESSENNE
  0 siblings, 0 replies; 44+ messages in thread
From: Fabien DESSENNE @ 2018-02-14 13:31 UTC (permalink / raw)
  To: Corentin Labbe, Alexandre TORGUE, arei.gonglei, corbet, davem,
	herbert, jasowang, mcoquelin.stm32, mst
  Cc: linux-arm-kernel, linux-crypto, linux-doc, linux-kernel,
	virtualization, linux-sunxi

Adding my tested-by for the AEAD part which is new in v2


On 26/01/18 20:15, Corentin Labbe wrote:
> The crypto engine could actually only enqueue hash and ablkcipher request.
> This patch permit it to enqueue any type of crypto_async_request.
>
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
> Tested-by: Fabien Dessenne <fabien.dessenne@st.com>

Tested-by: Fabien Dessenne <fabien.dessenne@st.com>


> ---
>   crypto/crypto_engine.c  | 301 ++++++++++++++++++++++++++----------------------
>   include/crypto/engine.h |  68 ++++++-----
>   2 files changed, 203 insertions(+), 166 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index 61e7c4e02fd2..992e8d8dcdd9 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -15,13 +15,50 @@
>   #include <linux/err.h>
>   #include <linux/delay.h>
>   #include <crypto/engine.h>
> -#include <crypto/internal/hash.h>
>   #include <uapi/linux/sched/types.h>
>   #include "internal.h"
>   
>   #define CRYPTO_ENGINE_MAX_QLEN 10
>   
>   /**
> + * crypto_finalize_request - finalize one request if the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +static void crypto_finalize_request(struct crypto_engine *engine,
> +			     struct crypto_async_request *req, int err)
> +{
> +	unsigned long flags;
> +	bool finalize_cur_req = false;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
> +
> +	spin_lock_irqsave(&engine->queue_lock, flags);
> +	if (engine->cur_req == req)
> +		finalize_cur_req = true;
> +	spin_unlock_irqrestore(&engine->queue_lock, flags);
> +
> +	if (finalize_cur_req) {
> +		enginectx = crypto_tfm_ctx(req->tfm);
> +		if (engine->cur_req_prepared &&
> +		    enginectx->op.unprepare_request) {
> +			ret = enginectx->op.unprepare_request(engine, req);
> +			if (ret)
> +				dev_err(engine->dev, "failed to unprepare request\n");
> +		}
> +		spin_lock_irqsave(&engine->queue_lock, flags);
> +		engine->cur_req = NULL;
> +		engine->cur_req_prepared = false;
> +		spin_unlock_irqrestore(&engine->queue_lock, flags);
> +	}
> +
> +	req->complete(req, err);
> +
> +	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +}
> +
> +/**
>    * crypto_pump_requests - dequeue one request from engine queue to process
>    * @engine: the hardware engine
>    * @in_kthread: true if we are in the context of the request pump thread
> @@ -34,11 +71,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   				 bool in_kthread)
>   {
>   	struct crypto_async_request *async_req, *backlog;
> -	struct ahash_request *hreq;
> -	struct ablkcipher_request *breq;
>   	unsigned long flags;
>   	bool was_busy = false;
> -	int ret, rtype;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
>   
>   	spin_lock_irqsave(&engine->queue_lock, flags);
>   
> @@ -94,7 +130,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   
> -	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
>   	/* Until here we get the request need to be encrypted successfully */
>   	if (!was_busy && engine->prepare_crypt_hardware) {
>   		ret = engine->prepare_crypt_hardware(engine);
> @@ -104,57 +139,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   		}
>   	}
>   
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		if (engine->prepare_hash_request) {
> -			ret = engine->prepare_hash_request(engine, hreq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->hash_one_request(engine, hreq);
> -		if (ret) {
> -			dev_err(engine->dev, "failed to hash one request from queue\n");
> -			goto req_err;
> -		}
> -		return;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		if (engine->prepare_cipher_request) {
> -			ret = engine->prepare_cipher_request(engine, breq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->cipher_one_request(engine, breq);
> +	enginectx = crypto_tfm_ctx(async_req->tfm);
> +
> +	if (enginectx->op.prepare_request) {
> +		ret = enginectx->op.prepare_request(engine, async_req);
>   		if (ret) {
> -			dev_err(engine->dev, "failed to cipher one request from queue\n");
> +			dev_err(engine->dev, "failed to prepare request: %d\n",
> +				ret);
>   			goto req_err;
>   		}
> -		return;
> -	default:
> -		dev_err(engine->dev, "failed to prepare request of unknown type\n");
> -		return;
> +		engine->cur_req_prepared = true;
> +	}
> +	if (!enginectx->op.do_one_request) {
> +		dev_err(engine->dev, "failed to do request\n");
> +		ret = -EINVAL;
> +		goto req_err;
>   	}
> +	ret = enginectx->op.do_one_request(engine, async_req);
> +	if (ret) {
> +		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
> +		goto req_err;
> +	}
> +	return;
>   
>   req_err:
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		crypto_finalize_hash_request(engine, hreq, ret);
> -		break;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		crypto_finalize_cipher_request(engine, breq, ret);
> -		break;
> -	}
> +	crypto_finalize_request(engine, async_req, ret);
>   	return;
>   
>   out:
> @@ -170,13 +179,12 @@ static void crypto_pump_work(struct kthread_work *work)
>   }
>   
>   /**
> - * crypto_transfer_cipher_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_request - transfer the new request into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> +static int crypto_transfer_request(struct crypto_engine *engine,
> +				   struct crypto_async_request *req,
>   				   bool need_pump)
>   {
>   	unsigned long flags;
> @@ -189,7 +197,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   		return -ESHUTDOWN;
>   	}
>   
> -	ret = ablkcipher_enqueue_request(&engine->queue, req);
> +	ret = crypto_enqueue_request(&engine->queue, req);
>   
>   	if (!engine->busy && need_pump)
>   		kthread_queue_work(engine->kworker, &engine->pump_requests);
> @@ -197,102 +205,131 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
>   
>   /**
> - * crypto_transfer_cipher_request_to_engine - transfer one request to list
> + * crypto_transfer_request_to_engine - transfer one request to list
>    * into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req)
> +static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> +					     struct crypto_async_request *req)
>   {
> -	return crypto_transfer_cipher_request(engine, req, true);
> +	return crypto_transfer_request(engine, req, true);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_ablkcipher_request_to_engine - transfer one ablkcipher_request
> + * to list into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump)
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req)
>   {
> -	unsigned long flags;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -
> -	if (!engine->running) {
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -		return -ESHUTDOWN;
> -	}
> -
> -	ret = ahash_enqueue_request(&engine->queue, req);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_ablkcipher_request_to_engine);
>   
> -	if (!engine->busy && need_pump)
> -		kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_transfer_aead_request_to_engine - transfer one aead_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);
>   
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	return ret;
> +/**
> + * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request_to_engine - transfer one request to list
> - * into the engine queue
> + * crypto_transfer_hash_request_to_engine - transfer one ahash_request
> + * to list into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
>   					   struct ahash_request *req)
>   {
> -	return crypto_transfer_hash_request(engine, req, true);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
>   EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
>   
>   /**
> - * crypto_finalize_cipher_request - finalize one request if the request is done
> + * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
> +
> +/**
> + * crypto_finalize_ablkcipher_request - finalize one ablkcipher_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err)
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_cipher_request) {
> -			ret = engine->unprepare_cipher_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_ablkcipher_request);
>   
> -	req->base.complete(&req->base, err);
> +/**
> + * crypto_finalize_aead_request - finalize one aead_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_aead_request);
>   
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_finalize_akcipher_request - finalize one akcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
> -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
> +EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
>   
>   /**
> - * crypto_finalize_hash_request - finalize one request if the request is done
> + * crypto_finalize_hash_request - finalize one ahash_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> @@ -300,35 +337,25 @@ EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_hash_request) {
> -			ret = engine->unprepare_hash_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> -
> -	req->base.complete(&req->base, err);
> -
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
>   EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
>   
>   /**
> + * crypto_finalize_skcipher_request - finalize one skcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
> +
> +/**
>    * crypto_engine_start - start the hardware engine
>    * @engine: the hardware engine need to be started
>    *
> diff --git a/include/crypto/engine.h b/include/crypto/engine.h
> index dd04c1699b51..1cbec29af3d6 100644
> --- a/include/crypto/engine.h
> +++ b/include/crypto/engine.h
> @@ -17,7 +17,10 @@
>   #include <linux/kernel.h>
>   #include <linux/kthread.h>
>   #include <crypto/algapi.h>
> +#include <crypto/aead.h>
> +#include <crypto/akcipher.h>
>   #include <crypto/hash.h>
> +#include <crypto/skcipher.h>
>   
>   #define ENGINE_NAME_LEN	30
>   /*
> @@ -37,12 +40,6 @@
>    * @unprepare_crypt_hardware: there are currently no more requests on the
>    * queue so the subsystem notifies the driver that it may relax the
>    * hardware by issuing this call
> - * @prepare_cipher_request: do some prepare if need before handle the current request
> - * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
> - * @cipher_one_request: do encryption for current request
> - * @prepare_hash_request: do some prepare if need before handle the current request
> - * @unprepare_hash_request: undo any work done by prepare_hash_request()
> - * @hash_one_request: do hash for current request
>    * @kworker: kthread worker struct for request pump
>    * @pump_requests: work struct for scheduling work to the request pump
>    * @priv_data: the engine private data
> @@ -65,19 +62,6 @@ struct crypto_engine {
>   	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
>   	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>   
> -	int (*prepare_cipher_request)(struct crypto_engine *engine,
> -				      struct ablkcipher_request *req);
> -	int (*unprepare_cipher_request)(struct crypto_engine *engine,
> -					struct ablkcipher_request *req);
> -	int (*prepare_hash_request)(struct crypto_engine *engine,
> -				    struct ahash_request *req);
> -	int (*unprepare_hash_request)(struct crypto_engine *engine,
> -				      struct ahash_request *req);
> -	int (*cipher_one_request)(struct crypto_engine *engine,
> -				  struct ablkcipher_request *req);
> -	int (*hash_one_request)(struct crypto_engine *engine,
> -				struct ahash_request *req);
> -
>   	struct kthread_worker           *kworker;
>   	struct kthread_work             pump_requests;
>   
> @@ -85,19 +69,45 @@ struct crypto_engine {
>   	struct crypto_async_request	*cur_req;
>   };
>   
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> -				   bool need_pump);
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req);
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump);
> +/*
> + * struct crypto_engine_op - crypto hardware engine operations
> + * @prepare__request: do some prepare if need before handle the current request
> + * @unprepare_request: undo any work done by prepare_request()
> + * @do_one_request: do encryption for current request
> + */
> +struct crypto_engine_op {
> +	int (*prepare_request)(struct crypto_engine *engine,
> +			       void *areq);
> +	int (*unprepare_request)(struct crypto_engine *engine,
> +				 void *areq);
> +	int (*do_one_request)(struct crypto_engine *engine,
> +			      void *areq);
> +};
> +
> +struct crypto_engine_ctx {
> +	struct crypto_engine_op op;
> +};
> +
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req);
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req);
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req);
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> -					   struct ahash_request *req);
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err);
> +					       struct ahash_request *req);
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req);
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err);
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err);
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err);
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err);
>   int crypto_engine_start(struct crypto_engine *engine);
>   int crypto_engine_stop(struct crypto_engine *engine);
>   struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
  2018-01-26 19:15     ` Corentin Labbe
  (?)
  (?)
@ 2018-02-14 13:31     ` Fabien DESSENNE
  -1 siblings, 0 replies; 44+ messages in thread
From: Fabien DESSENNE @ 2018-02-14 13:31 UTC (permalink / raw)
  To: Corentin Labbe, Alexandre TORGUE, arei.gonglei, corbet, davem,
	herbert, jasowang, mcoquelin.stm32, mst
  Cc: linux-doc, linux-kernel, virtualization, linux-sunxi,
	linux-crypto, linux-arm-kernel

Adding my tested-by for the AEAD part which is new in v2


On 26/01/18 20:15, Corentin Labbe wrote:
> The crypto engine could actually only enqueue hash and ablkcipher request.
> This patch permit it to enqueue any type of crypto_async_request.
>
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
> Tested-by: Fabien Dessenne <fabien.dessenne@st.com>

Tested-by: Fabien Dessenne <fabien.dessenne@st.com>


> ---
>   crypto/crypto_engine.c  | 301 ++++++++++++++++++++++++++----------------------
>   include/crypto/engine.h |  68 ++++++-----
>   2 files changed, 203 insertions(+), 166 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index 61e7c4e02fd2..992e8d8dcdd9 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -15,13 +15,50 @@
>   #include <linux/err.h>
>   #include <linux/delay.h>
>   #include <crypto/engine.h>
> -#include <crypto/internal/hash.h>
>   #include <uapi/linux/sched/types.h>
>   #include "internal.h"
>   
>   #define CRYPTO_ENGINE_MAX_QLEN 10
>   
>   /**
> + * crypto_finalize_request - finalize one request if the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +static void crypto_finalize_request(struct crypto_engine *engine,
> +			     struct crypto_async_request *req, int err)
> +{
> +	unsigned long flags;
> +	bool finalize_cur_req = false;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
> +
> +	spin_lock_irqsave(&engine->queue_lock, flags);
> +	if (engine->cur_req == req)
> +		finalize_cur_req = true;
> +	spin_unlock_irqrestore(&engine->queue_lock, flags);
> +
> +	if (finalize_cur_req) {
> +		enginectx = crypto_tfm_ctx(req->tfm);
> +		if (engine->cur_req_prepared &&
> +		    enginectx->op.unprepare_request) {
> +			ret = enginectx->op.unprepare_request(engine, req);
> +			if (ret)
> +				dev_err(engine->dev, "failed to unprepare request\n");
> +		}
> +		spin_lock_irqsave(&engine->queue_lock, flags);
> +		engine->cur_req = NULL;
> +		engine->cur_req_prepared = false;
> +		spin_unlock_irqrestore(&engine->queue_lock, flags);
> +	}
> +
> +	req->complete(req, err);
> +
> +	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +}
> +
> +/**
>    * crypto_pump_requests - dequeue one request from engine queue to process
>    * @engine: the hardware engine
>    * @in_kthread: true if we are in the context of the request pump thread
> @@ -34,11 +71,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   				 bool in_kthread)
>   {
>   	struct crypto_async_request *async_req, *backlog;
> -	struct ahash_request *hreq;
> -	struct ablkcipher_request *breq;
>   	unsigned long flags;
>   	bool was_busy = false;
> -	int ret, rtype;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
>   
>   	spin_lock_irqsave(&engine->queue_lock, flags);
>   
> @@ -94,7 +130,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   
> -	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
>   	/* Until here we get the request need to be encrypted successfully */
>   	if (!was_busy && engine->prepare_crypt_hardware) {
>   		ret = engine->prepare_crypt_hardware(engine);
> @@ -104,57 +139,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   		}
>   	}
>   
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		if (engine->prepare_hash_request) {
> -			ret = engine->prepare_hash_request(engine, hreq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->hash_one_request(engine, hreq);
> -		if (ret) {
> -			dev_err(engine->dev, "failed to hash one request from queue\n");
> -			goto req_err;
> -		}
> -		return;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		if (engine->prepare_cipher_request) {
> -			ret = engine->prepare_cipher_request(engine, breq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->cipher_one_request(engine, breq);
> +	enginectx = crypto_tfm_ctx(async_req->tfm);
> +
> +	if (enginectx->op.prepare_request) {
> +		ret = enginectx->op.prepare_request(engine, async_req);
>   		if (ret) {
> -			dev_err(engine->dev, "failed to cipher one request from queue\n");
> +			dev_err(engine->dev, "failed to prepare request: %d\n",
> +				ret);
>   			goto req_err;
>   		}
> -		return;
> -	default:
> -		dev_err(engine->dev, "failed to prepare request of unknown type\n");
> -		return;
> +		engine->cur_req_prepared = true;
> +	}
> +	if (!enginectx->op.do_one_request) {
> +		dev_err(engine->dev, "failed to do request\n");
> +		ret = -EINVAL;
> +		goto req_err;
>   	}
> +	ret = enginectx->op.do_one_request(engine, async_req);
> +	if (ret) {
> +		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
> +		goto req_err;
> +	}
> +	return;
>   
>   req_err:
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		crypto_finalize_hash_request(engine, hreq, ret);
> -		break;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		crypto_finalize_cipher_request(engine, breq, ret);
> -		break;
> -	}
> +	crypto_finalize_request(engine, async_req, ret);
>   	return;
>   
>   out:
> @@ -170,13 +179,12 @@ static void crypto_pump_work(struct kthread_work *work)
>   }
>   
>   /**
> - * crypto_transfer_cipher_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_request - transfer the new request into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> +static int crypto_transfer_request(struct crypto_engine *engine,
> +				   struct crypto_async_request *req,
>   				   bool need_pump)
>   {
>   	unsigned long flags;
> @@ -189,7 +197,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   		return -ESHUTDOWN;
>   	}
>   
> -	ret = ablkcipher_enqueue_request(&engine->queue, req);
> +	ret = crypto_enqueue_request(&engine->queue, req);
>   
>   	if (!engine->busy && need_pump)
>   		kthread_queue_work(engine->kworker, &engine->pump_requests);
> @@ -197,102 +205,131 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
>   
>   /**
> - * crypto_transfer_cipher_request_to_engine - transfer one request to list
> + * crypto_transfer_request_to_engine - transfer one request to list
>    * into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req)
> +static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> +					     struct crypto_async_request *req)
>   {
> -	return crypto_transfer_cipher_request(engine, req, true);
> +	return crypto_transfer_request(engine, req, true);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_ablkcipher_request_to_engine - transfer one ablkcipher_request
> + * to list into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump)
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req)
>   {
> -	unsigned long flags;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -
> -	if (!engine->running) {
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -		return -ESHUTDOWN;
> -	}
> -
> -	ret = ahash_enqueue_request(&engine->queue, req);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_ablkcipher_request_to_engine);
>   
> -	if (!engine->busy && need_pump)
> -		kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_transfer_aead_request_to_engine - transfer one aead_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);
>   
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	return ret;
> +/**
> + * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request_to_engine - transfer one request to list
> - * into the engine queue
> + * crypto_transfer_hash_request_to_engine - transfer one ahash_request
> + * to list into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
>   					   struct ahash_request *req)
>   {
> -	return crypto_transfer_hash_request(engine, req, true);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
>   EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
>   
>   /**
> - * crypto_finalize_cipher_request - finalize one request if the request is done
> + * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
> +
> +/**
> + * crypto_finalize_ablkcipher_request - finalize one ablkcipher_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err)
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_cipher_request) {
> -			ret = engine->unprepare_cipher_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_ablkcipher_request);
>   
> -	req->base.complete(&req->base, err);
> +/**
> + * crypto_finalize_aead_request - finalize one aead_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_aead_request);
>   
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_finalize_akcipher_request - finalize one akcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
> -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
> +EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
>   
>   /**
> - * crypto_finalize_hash_request - finalize one request if the request is done
> + * crypto_finalize_hash_request - finalize one ahash_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> @@ -300,35 +337,25 @@ EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_hash_request) {
> -			ret = engine->unprepare_hash_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> -
> -	req->base.complete(&req->base, err);
> -
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
>   EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
>   
>   /**
> + * crypto_finalize_skcipher_request - finalize one skcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request that needs to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
> +
> +/**
>    * crypto_engine_start - start the hardware engine
>    * @engine: the hardware engine need to be started
>    *
> diff --git a/include/crypto/engine.h b/include/crypto/engine.h
> index dd04c1699b51..1cbec29af3d6 100644
> --- a/include/crypto/engine.h
> +++ b/include/crypto/engine.h
> @@ -17,7 +17,10 @@
>   #include <linux/kernel.h>
>   #include <linux/kthread.h>
>   #include <crypto/algapi.h>
> +#include <crypto/aead.h>
> +#include <crypto/akcipher.h>
>   #include <crypto/hash.h>
> +#include <crypto/skcipher.h>
>   
>   #define ENGINE_NAME_LEN	30
>   /*
> @@ -37,12 +40,6 @@
>    * @unprepare_crypt_hardware: there are currently no more requests on the
>    * queue so the subsystem notifies the driver that it may relax the
>    * hardware by issuing this call
> - * @prepare_cipher_request: do some prepare if need before handle the current request
> - * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
> - * @cipher_one_request: do encryption for current request
> - * @prepare_hash_request: do some prepare if need before handle the current request
> - * @unprepare_hash_request: undo any work done by prepare_hash_request()
> - * @hash_one_request: do hash for current request
>    * @kworker: kthread worker struct for request pump
>    * @pump_requests: work struct for scheduling work to the request pump
>    * @priv_data: the engine private data
> @@ -65,19 +62,6 @@ struct crypto_engine {
>   	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
>   	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>   
> -	int (*prepare_cipher_request)(struct crypto_engine *engine,
> -				      struct ablkcipher_request *req);
> -	int (*unprepare_cipher_request)(struct crypto_engine *engine,
> -					struct ablkcipher_request *req);
> -	int (*prepare_hash_request)(struct crypto_engine *engine,
> -				    struct ahash_request *req);
> -	int (*unprepare_hash_request)(struct crypto_engine *engine,
> -				      struct ahash_request *req);
> -	int (*cipher_one_request)(struct crypto_engine *engine,
> -				  struct ablkcipher_request *req);
> -	int (*hash_one_request)(struct crypto_engine *engine,
> -				struct ahash_request *req);
> -
>   	struct kthread_worker           *kworker;
>   	struct kthread_work             pump_requests;
>   
> @@ -85,19 +69,45 @@ struct crypto_engine {
>   	struct crypto_async_request	*cur_req;
>   };
>   
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> -				   bool need_pump);
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req);
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump);
> +/*
> + * struct crypto_engine_op - crypto hardware engine operations
> + * @prepare_request: do any preparation needed before handling the current request
> + * @unprepare_request: undo any work done by prepare_request()
> + * @do_one_request: process the current request
> + */
> +struct crypto_engine_op {
> +	int (*prepare_request)(struct crypto_engine *engine,
> +			       void *areq);
> +	int (*unprepare_request)(struct crypto_engine *engine,
> +				 void *areq);
> +	int (*do_one_request)(struct crypto_engine *engine,
> +			      void *areq);
> +};
> +
> +struct crypto_engine_ctx {
> +	struct crypto_engine_op op;
> +};
> +
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req);
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req);
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req);
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> -					   struct ahash_request *req);
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err);
> +					       struct ahash_request *req);
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req);
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err);
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err);
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err);
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err);
>   int crypto_engine_start(struct crypto_engine *engine);
>   int crypto_engine_stop(struct crypto_engine *engine);
>   struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
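
To make the new contract in engine.h concrete, here is a minimal sketch of
the driver side (hypothetical foo_* names, not taken from this series; only
the crypto_engine_ctx placement and the three ops come from the patch):

	#include <crypto/engine.h>
	#include <crypto/internal/skcipher.h>

	struct foo_dev {
		struct crypto_engine *engine;
		/* ... hardware state ... */
	};

	/*
	 * Per-TFM context: crypto_engine_ctx must be the first member,
	 * since the engine retrieves it with crypto_tfm_ctx() on the
	 * dequeued request's tfm.
	 */
	struct foo_tfm_ctx {
		struct crypto_engine_ctx enginectx;
		struct foo_dev *dev;
	};

	static int foo_do_one_request(struct crypto_engine *engine, void *areq)
	{
		/* The engine hands over a void *; recover the typed request. */
		struct skcipher_request *req =
			container_of(areq, struct skcipher_request, base);
		struct foo_tfm_ctx *ctx =
			crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));

		/* ... program ctx->dev's hardware for req here ... */
		return 0;	/* completion is reported later through
				 * crypto_finalize_skcipher_request() */
	}

	static int foo_skcipher_init(struct crypto_skcipher *tfm)
	{
		struct foo_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);

		ctx->enginectx.op.prepare_request = NULL;	/* optional */
		ctx->enginectx.op.unprepare_request = NULL;	/* optional */
		ctx->enginectx.op.do_one_request = foo_do_one_request;
		return 0;
	}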

^ permalink raw reply	[flat|nested] 44+ messages in thread

* [PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
@ 2018-02-14 13:31       ` Fabien DESSENNE
  0 siblings, 0 replies; 44+ messages in thread
From: Fabien DESSENNE @ 2018-02-14 13:31 UTC (permalink / raw)
  To: linux-arm-kernel

Adding my Tested-by for the AEAD part, which is new in v2.


On 26/01/18 20:15, Corentin Labbe wrote:
> The crypto engine can currently only enqueue hash and ablkcipher requests.
> This patch permits it to enqueue any type of crypto_async_request.
>
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
> Tested-by: Fabien Dessenne <fabien.dessenne@st.com>

Tested-by: Fabien Dessenne <fabien.dessenne@st.com>


> ---
>   crypto/crypto_engine.c  | 301 ++++++++++++++++++++++++++----------------------
>   include/crypto/engine.h |  68 ++++++-----
>   2 files changed, 203 insertions(+), 166 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index 61e7c4e02fd2..992e8d8dcdd9 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -15,13 +15,50 @@
>   #include <linux/err.h>
>   #include <linux/delay.h>
>   #include <crypto/engine.h>
> -#include <crypto/internal/hash.h>
>   #include <uapi/linux/sched/types.h>
>   #include "internal.h"
>   
>   #define CRYPTO_ENGINE_MAX_QLEN 10
>   
>   /**
> + * crypto_finalize_request - finalize one request if the request is done
> + * @engine: the hardware engine
> + * @req: the request that needs to be finalized
> + * @err: error number
> + */
> +static void crypto_finalize_request(struct crypto_engine *engine,
> +			     struct crypto_async_request *req, int err)
> +{
> +	unsigned long flags;
> +	bool finalize_cur_req = false;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
> +
> +	spin_lock_irqsave(&engine->queue_lock, flags);
> +	if (engine->cur_req == req)
> +		finalize_cur_req = true;
> +	spin_unlock_irqrestore(&engine->queue_lock, flags);
> +
> +	if (finalize_cur_req) {
> +		enginectx = crypto_tfm_ctx(req->tfm);
> +		if (engine->cur_req_prepared &&
> +		    enginectx->op.unprepare_request) {
> +			ret = enginectx->op.unprepare_request(engine, req);
> +			if (ret)
> +				dev_err(engine->dev, "failed to unprepare request\n");
> +		}
> +		spin_lock_irqsave(&engine->queue_lock, flags);
> +		engine->cur_req = NULL;
> +		engine->cur_req_prepared = false;
> +		spin_unlock_irqrestore(&engine->queue_lock, flags);
> +	}
> +
> +	req->complete(req, err);
> +
> +	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +}
> +
> +/**
>    * crypto_pump_requests - dequeue one request from engine queue to process
>    * @engine: the hardware engine
>    * @in_kthread: true if we are in the context of the request pump thread
> @@ -34,11 +71,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   				 bool in_kthread)
>   {
>   	struct crypto_async_request *async_req, *backlog;
> -	struct ahash_request *hreq;
> -	struct ablkcipher_request *breq;
>   	unsigned long flags;
>   	bool was_busy = false;
> -	int ret, rtype;
> +	int ret;
> +	struct crypto_engine_ctx *enginectx;
>   
>   	spin_lock_irqsave(&engine->queue_lock, flags);
>   
> @@ -94,7 +130,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   
> -	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
>   	/* Until here we get the request need to be encrypted successfully */
>   	if (!was_busy && engine->prepare_crypt_hardware) {
>   		ret = engine->prepare_crypt_hardware(engine);
> @@ -104,57 +139,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>   		}
>   	}
>   
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		if (engine->prepare_hash_request) {
> -			ret = engine->prepare_hash_request(engine, hreq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->hash_one_request(engine, hreq);
> -		if (ret) {
> -			dev_err(engine->dev, "failed to hash one request from queue\n");
> -			goto req_err;
> -		}
> -		return;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		if (engine->prepare_cipher_request) {
> -			ret = engine->prepare_cipher_request(engine, breq);
> -			if (ret) {
> -				dev_err(engine->dev, "failed to prepare request: %d\n",
> -					ret);
> -				goto req_err;
> -			}
> -			engine->cur_req_prepared = true;
> -		}
> -		ret = engine->cipher_one_request(engine, breq);
> +	enginectx = crypto_tfm_ctx(async_req->tfm);
> +
> +	if (enginectx->op.prepare_request) {
> +		ret = enginectx->op.prepare_request(engine, async_req);
>   		if (ret) {
> -			dev_err(engine->dev, "failed to cipher one request from queue\n");
> +			dev_err(engine->dev, "failed to prepare request: %d\n",
> +				ret);
>   			goto req_err;
>   		}
> -		return;
> -	default:
> -		dev_err(engine->dev, "failed to prepare request of unknown type\n");
> -		return;
> +		engine->cur_req_prepared = true;
> +	}
> +	if (!enginectx->op.do_one_request) {
> +		dev_err(engine->dev, "failed to do request\n");
> +		ret = -EINVAL;
> +		goto req_err;
>   	}
> +	ret = enginectx->op.do_one_request(engine, async_req);
> +	if (ret) {
> +		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
> +		goto req_err;
> +	}
> +	return;
>   
>   req_err:
> -	switch (rtype) {
> -	case CRYPTO_ALG_TYPE_AHASH:
> -		hreq = ahash_request_cast(engine->cur_req);
> -		crypto_finalize_hash_request(engine, hreq, ret);
> -		break;
> -	case CRYPTO_ALG_TYPE_ABLKCIPHER:
> -		breq = ablkcipher_request_cast(engine->cur_req);
> -		crypto_finalize_cipher_request(engine, breq, ret);
> -		break;
> -	}
> +	crypto_finalize_request(engine, async_req, ret);
>   	return;
>   
>   out:
> @@ -170,13 +179,12 @@ static void crypto_pump_work(struct kthread_work *work)
>   }
>   
>   /**
> - * crypto_transfer_cipher_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_request - transfer the new request into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> +static int crypto_transfer_request(struct crypto_engine *engine,
> +				   struct crypto_async_request *req,
>   				   bool need_pump)
>   {
>   	unsigned long flags;
> @@ -189,7 +197,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   		return -ESHUTDOWN;
>   	}
>   
> -	ret = ablkcipher_enqueue_request(&engine->queue, req);
> +	ret = crypto_enqueue_request(&engine->queue, req);
>   
>   	if (!engine->busy && need_pump)
>   		kthread_queue_work(engine->kworker, &engine->pump_requests);
> @@ -197,102 +205,131 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>   	spin_unlock_irqrestore(&engine->queue_lock, flags);
>   	return ret;
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
>   
>   /**
> - * crypto_transfer_cipher_request_to_engine - transfer one request to list
> + * crypto_transfer_request_to_engine - transfer one request
>    * into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req)
> +static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> +					     struct crypto_async_request *req)
>   {
> -	return crypto_transfer_cipher_request(engine, req, true);
> +	return crypto_transfer_request(engine, req, true);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_ablkcipher_request_to_engine - transfer one ablkcipher_request
> + * into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump)
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req)
>   {
> -	unsigned long flags;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -
> -	if (!engine->running) {
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -		return -ESHUTDOWN;
> -	}
> -
> -	ret = ahash_enqueue_request(&engine->queue, req);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_ablkcipher_request_to_engine);
>   
> -	if (!engine->busy && need_pump)
> -		kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_transfer_aead_request_to_engine - transfer one aead_request
> + * into the engine queue
> + * @engine: the hardware engine
> + * @req: the request that needs to be enqueued
> + */
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);
>   
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	return ret;
> +/**
> + * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
> + * into the engine queue
> + * @engine: the hardware engine
> + * @req: the request that needs to be enqueued
> + */
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
>   
>   /**
> - * crypto_transfer_hash_request_to_engine - transfer one request to list
> - * into the engine queue
> + * crypto_transfer_hash_request_to_engine - transfer one ahash_request
> + * into the engine queue
>    * @engine: the hardware engine
>    * @req: the request need to be listed into the engine queue
>    */
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
>   					   struct ahash_request *req)
>   {
> -	return crypto_transfer_hash_request(engine, req, true);
> +	return crypto_transfer_request_to_engine(engine, &req->base);
>   }
>   EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
>   
>   /**
> - * crypto_finalize_cipher_request - finalize one request if the request is done
> + * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
> + * into the engine queue
> + * @engine: the hardware engine
> + * @req: the request that needs to be enqueued
> + */
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req)
> +{
> +	return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
> +
> +/**
> + * crypto_finalize_ablkcipher_request - finalize one ablkcipher_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> + * TODO: Remove this function when skcipher conversion is finished
>    */
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err)
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_cipher_request) {
> -			ret = engine->unprepare_cipher_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_ablkcipher_request);
>   
> -	req->base.complete(&req->base, err);
> +/**
> + * crypto_finalize_aead_request - finalize one aead_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request that needs to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_aead_request);
>   
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_finalize_akcipher_request - finalize one akcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request that needs to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
> -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
> +EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
>   
>   /**
> - * crypto_finalize_hash_request - finalize one request if the request is done
> + * crypto_finalize_hash_request - finalize one ahash_request if
> + * the request is done
>    * @engine: the hardware engine
>    * @req: the request need to be finalized
>    * @err: error number
> @@ -300,35 +337,25 @@ EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err)
>   {
> -	unsigned long flags;
> -	bool finalize_cur_req = false;
> -	int ret;
> -
> -	spin_lock_irqsave(&engine->queue_lock, flags);
> -	if (engine->cur_req == &req->base)
> -		finalize_cur_req = true;
> -	spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> -	if (finalize_cur_req) {
> -		if (engine->cur_req_prepared &&
> -		    engine->unprepare_hash_request) {
> -			ret = engine->unprepare_hash_request(engine, req);
> -			if (ret)
> -				dev_err(engine->dev, "failed to unprepare request\n");
> -		}
> -		spin_lock_irqsave(&engine->queue_lock, flags);
> -		engine->cur_req = NULL;
> -		engine->cur_req_prepared = false;
> -		spin_unlock_irqrestore(&engine->queue_lock, flags);
> -	}
> -
> -	req->base.complete(&req->base, err);
> -
> -	kthread_queue_work(engine->kworker, &engine->pump_requests);
> +	return crypto_finalize_request(engine, &req->base, err);
>   }
>   EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
>   
>   /**
> + * crypto_finalize_skcipher_request - finalize one skcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request that needs to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err)
> +{
> +	return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
> +
> +/**
>    * crypto_engine_start - start the hardware engine
>    * @engine: the hardware engine need to be started
>    *
> diff --git a/include/crypto/engine.h b/include/crypto/engine.h
> index dd04c1699b51..1cbec29af3d6 100644
> --- a/include/crypto/engine.h
> +++ b/include/crypto/engine.h
> @@ -17,7 +17,10 @@
>   #include <linux/kernel.h>
>   #include <linux/kthread.h>
>   #include <crypto/algapi.h>
> +#include <crypto/aead.h>
> +#include <crypto/akcipher.h>
>   #include <crypto/hash.h>
> +#include <crypto/skcipher.h>
>   
>   #define ENGINE_NAME_LEN	30
>   /*
> @@ -37,12 +40,6 @@
>    * @unprepare_crypt_hardware: there are currently no more requests on the
>    * queue so the subsystem notifies the driver that it may relax the
>    * hardware by issuing this call
> - * @prepare_cipher_request: do some prepare if need before handle the current request
> - * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
> - * @cipher_one_request: do encryption for current request
> - * @prepare_hash_request: do some prepare if need before handle the current request
> - * @unprepare_hash_request: undo any work done by prepare_hash_request()
> - * @hash_one_request: do hash for current request
>    * @kworker: kthread worker struct for request pump
>    * @pump_requests: work struct for scheduling work to the request pump
>    * @priv_data: the engine private data
> @@ -65,19 +62,6 @@ struct crypto_engine {
>   	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
>   	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>   
> -	int (*prepare_cipher_request)(struct crypto_engine *engine,
> -				      struct ablkcipher_request *req);
> -	int (*unprepare_cipher_request)(struct crypto_engine *engine,
> -					struct ablkcipher_request *req);
> -	int (*prepare_hash_request)(struct crypto_engine *engine,
> -				    struct ahash_request *req);
> -	int (*unprepare_hash_request)(struct crypto_engine *engine,
> -				      struct ahash_request *req);
> -	int (*cipher_one_request)(struct crypto_engine *engine,
> -				  struct ablkcipher_request *req);
> -	int (*hash_one_request)(struct crypto_engine *engine,
> -				struct ahash_request *req);
> -
>   	struct kthread_worker           *kworker;
>   	struct kthread_work             pump_requests;
>   
> @@ -85,19 +69,45 @@ struct crypto_engine {
>   	struct crypto_async_request	*cur_req;
>   };
>   
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> -				   struct ablkcipher_request *req,
> -				   bool need_pump);
> -int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> -					     struct ablkcipher_request *req);
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> -				 struct ahash_request *req, bool need_pump);
> +/*
> + * struct crypto_engine_op - crypto hardware engine operations
> + * @prepare_request: do any preparation needed before handling the current request
> + * @unprepare_request: undo any work done by prepare_request()
> + * @do_one_request: process the current request
> + */
> +struct crypto_engine_op {
> +	int (*prepare_request)(struct crypto_engine *engine,
> +			       void *areq);
> +	int (*unprepare_request)(struct crypto_engine *engine,
> +				 void *areq);
> +	int (*do_one_request)(struct crypto_engine *engine,
> +			      void *areq);
> +};
> +
> +struct crypto_engine_ctx {
> +	struct crypto_engine_op op;
> +};
> +
> +int crypto_transfer_ablkcipher_request_to_engine(struct crypto_engine *engine,
> +						 struct ablkcipher_request *req);
> +int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> +					   struct aead_request *req);
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct akcipher_request *req);
>   int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> -					   struct ahash_request *req);
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> -				    struct ablkcipher_request *req, int err);
> +					       struct ahash_request *req);
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> +					       struct skcipher_request *req);
> +void crypto_finalize_ablkcipher_request(struct crypto_engine *engine,
> +					struct ablkcipher_request *req, int err);
> +void crypto_finalize_aead_request(struct crypto_engine *engine,
> +				  struct aead_request *req, int err);
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> +				      struct akcipher_request *req, int err);
>   void crypto_finalize_hash_request(struct crypto_engine *engine,
>   				  struct ahash_request *req, int err);
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> +				      struct skcipher_request *req, int err);
>   int crypto_engine_start(struct crypto_engine *engine);
>   int crypto_engine_stop(struct crypto_engine *engine);
>   struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
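
Against this version of the core, the request life cycle for a converted
driver becomes uniform. A sketch for the skcipher case (hypothetical foo_*
names again; a sketch under the assumption that the TFM context starts with
struct crypto_engine_ctx, as the patch requires):

	#include <crypto/engine.h>
	#include <crypto/internal/skcipher.h>

	struct foo_dev {
		struct crypto_engine *engine;
	};

	struct foo_tfm_ctx {
		struct crypto_engine_ctx enginectx;	/* must stay first */
		struct foo_dev *dev;
	};

	/*
	 * .encrypt entry point: only queue the request; the engine pump
	 * will call the do_one_request() op registered in the TFM context
	 * when this request reaches the head of the queue.
	 */
	static int foo_skcipher_encrypt(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		struct foo_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);

		return crypto_transfer_skcipher_request_to_engine(ctx->dev->engine,
								  req);
	}

	/*
	 * Completion path (IRQ handler, tasklet, ...): this runs the
	 * unprepare_request() op if one was registered, completes the
	 * request and pumps the next one from the queue.
	 */
	static void foo_done(struct foo_dev *dev, struct skcipher_request *req,
			     int err)
	{
		crypto_finalize_skcipher_request(dev->engine, req, err);
	}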

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 4/6] crypto: virtio: convert to new crypto engine API
  2018-01-26 19:15     ` Corentin Labbe
@ 2018-02-14 15:51       ` Michael S. Tsirkin
  0 siblings, 0 replies; 44+ messages in thread
From: Michael S. Tsirkin @ 2018-02-14 15:51 UTC (permalink / raw)
  To: Corentin Labbe
  Cc: herbert, corbet, linux-doc, linux-kernel, fabien.dessenne,
	virtualization, linux-sunxi, linux-crypto, mcoquelin.stm32,
	davem, linux-arm-kernel, alexandre.torgue

On Fri, Jan 26, 2018 at 08:15:32PM +0100, Corentin Labbe wrote:
> This patch converts the driver to the new crypto engine API.
> 
> Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>

Acked-by: Michael S. Tsirkin <mst@redhat.com>

Please queue when/if the rest of the changes go in.

> ---
>  drivers/crypto/virtio/virtio_crypto_algs.c   | 16 ++++++++++------
>  drivers/crypto/virtio/virtio_crypto_common.h |  3 +--
>  drivers/crypto/virtio/virtio_crypto_core.c   |  3 ---
>  3 files changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
> index abe8c15450df..ba190cfa7aa1 100644
> --- a/drivers/crypto/virtio/virtio_crypto_algs.c
> +++ b/drivers/crypto/virtio/virtio_crypto_algs.c
> @@ -29,6 +29,7 @@
>  
>  
>  struct virtio_crypto_ablkcipher_ctx {
> +	struct crypto_engine_ctx enginectx;
>  	struct virtio_crypto *vcrypto;
>  	struct crypto_tfm *tfm;
>  
> @@ -491,7 +492,7 @@ static int virtio_crypto_ablkcipher_encrypt(struct ablkcipher_request *req)
>  	vc_sym_req->ablkcipher_req = req;
>  	vc_sym_req->encrypt = true;
>  
> -	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
> +	return crypto_transfer_ablkcipher_request_to_engine(data_vq->engine, req);
>  }
>  
>  static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
> @@ -511,7 +512,7 @@ static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
>  	vc_sym_req->ablkcipher_req = req;
>  	vc_sym_req->encrypt = false;
>  
> -	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
> +	return crypto_transfer_ablkcipher_request_to_engine(data_vq->engine, req);
>  }
>  
>  static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
> @@ -521,6 +522,9 @@ static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
>  	tfm->crt_ablkcipher.reqsize = sizeof(struct virtio_crypto_sym_request);
>  	ctx->tfm = tfm;
>  
> +	ctx->enginectx.op.do_one_request = virtio_crypto_ablkcipher_crypt_req;
> +	ctx->enginectx.op.prepare_request = NULL;
> +	ctx->enginectx.op.unprepare_request = NULL;
>  	return 0;
>  }
>  
> @@ -538,9 +542,9 @@ static void virtio_crypto_ablkcipher_exit(struct crypto_tfm *tfm)
>  }
>  
>  int virtio_crypto_ablkcipher_crypt_req(
> -	struct crypto_engine *engine,
> -	struct ablkcipher_request *req)
> +	struct crypto_engine *engine, void *vreq)
>  {
> +	struct ablkcipher_request *req = container_of(vreq, struct ablkcipher_request, base);
>  	struct virtio_crypto_sym_request *vc_sym_req =
>  				ablkcipher_request_ctx(req);
>  	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
> @@ -561,8 +565,8 @@ static void virtio_crypto_ablkcipher_finalize_req(
>  	struct ablkcipher_request *req,
>  	int err)
>  {
> -	crypto_finalize_cipher_request(vc_sym_req->base.dataq->engine,
> -					req, err);
> +	crypto_finalize_ablkcipher_request(vc_sym_req->base.dataq->engine,
> +					   req, err);
>  	kzfree(vc_sym_req->iv);
>  	virtcrypto_clear_request(&vc_sym_req->base);
>  }
> diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
> index e976539a05d9..72621bd67211 100644
> --- a/drivers/crypto/virtio/virtio_crypto_common.h
> +++ b/drivers/crypto/virtio/virtio_crypto_common.h
> @@ -107,8 +107,7 @@ struct virtio_crypto *virtcrypto_get_dev_node(int node);
>  int virtcrypto_dev_start(struct virtio_crypto *vcrypto);
>  void virtcrypto_dev_stop(struct virtio_crypto *vcrypto);
>  int virtio_crypto_ablkcipher_crypt_req(
> -	struct crypto_engine *engine,
> -	struct ablkcipher_request *req);
> +	struct crypto_engine *engine, void *vreq);
>  
>  void
>  virtcrypto_clear_request(struct virtio_crypto_request *vc_req);
> diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
> index ff1410a32c2b..83326986c113 100644
> --- a/drivers/crypto/virtio/virtio_crypto_core.c
> +++ b/drivers/crypto/virtio/virtio_crypto_core.c
> @@ -111,9 +111,6 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
>  			ret = -ENOMEM;
>  			goto err_engine;
>  		}
> -
> -		vi->data_vq[i].engine->cipher_one_request =
> -			virtio_crypto_ablkcipher_crypt_req;
>  	}
>  
>  	kfree(names);
> -- 
> 2.13.6
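
virtio leaves prepare_request and unprepare_request NULL because it has no
per-request setup to do. For drivers that do (DMA mapping is the typical
case), the two optional ops pair up around do_one_request(): the engine
calls prepare just before processing and unprepare from the finalize path.
A purely illustrative sketch with hypothetical names:

	#include <crypto/engine.h>
	#include <linux/crypto.h>

	/* Called by the engine just before do_one_request(). */
	static int foo_prepare_request(struct crypto_engine *engine, void *areq)
	{
		struct ablkcipher_request *req =
			container_of(areq, struct ablkcipher_request, base);

		/* e.g. dma_map_sg() on req->src / req->dst */
		return 0;	/* non-zero finalizes the request with that error */
	}

	/* Called from the finalize path if the request was prepared. */
	static int foo_unprepare_request(struct crypto_engine *engine, void *areq)
	{
		struct ablkcipher_request *req =
			container_of(areq, struct ablkcipher_request, base);

		/* undo the prepare step, e.g. dma_unmap_sg() */
		return 0;
	}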

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests
@ 2018-02-15 15:51     ` Herbert Xu
  0 siblings, 0 replies; 44+ messages in thread
From: Herbert Xu @ 2018-02-15 15:51 UTC (permalink / raw)
  To: Corentin Labbe
  Cc: alexandre.torgue, arei.gonglei, corbet, davem, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne, linux-arm-kernel,
	linux-crypto, linux-doc, linux-kernel, virtualization,
	linux-sunxi

On Fri, Jan 26, 2018 at 08:15:28PM +0100, Corentin Labbe wrote:
> Hello
> 
> The current crypto_engine support only ahash and ablkcipher request.
> My first patch which try to add skcipher was Nacked, it will add too many functions
> and adding other algs(aead, asymetric_key) will make the situation worst.
> 
> This patchset remove all algs specific stuff and now only process generic crypto_async_request.
> 
> The requests handler function pointer are now moved out of struct engine and
> are now stored directly in a crypto_engine_reqctx.
> 
> The original proposal of Herbert [1] cannot be done completly since the crypto_engine
> could only dequeue crypto_async_request and it is impossible to access any request_ctx
> without knowing the underlying request type.
> 
> So I do something near that was requested: adding crypto_engine_reqctx in TFM context.
> Note that the current implementation expect that crypto_engine_reqctx
> is the first member of the context.
> 
> The first patch is a try to document the crypto engine API.
> The second patch convert the crypto engine with the new way,
> while the following patchs convert the 4 existing users of crypto_engine.
> Note that this split break bisection, so probably the final commit will be all merged.
> 
> Appart from virtio, all 4 latest patch were compile tested only.
> But the crypto engine is tested with my new sun8i-ce driver.
> 
> Regards
> 
> [1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html
> 
> Changes since V1:
> - renamed crypto_engine_reqctx to crypto_engine_ctx
> - indentation fix in function parameter
> - do not export crypto_transfer_request
> - Add aead support
> - crypto_finalize_request is now static
> 
> Changes since RFC:
> - Added a documentation patch
> - Added patch for stm32-cryp
> - Changed parameter of all crypto_engine_op functions from
> 	crypto_async_request to void*
> - Reintroduced crypto_transfer_xxx_request_to_engine functions
> 
> Corentin Labbe (6):
>   Documentation: crypto: document crypto engine API
>   crypto: engine - Permit to enqueue all async requests
>   crypto: omap: convert to new crypto engine API
>   crypto: virtio: convert to new crypto engine API
>   crypto: stm32-hash: convert to the new crypto engine API
>   crypto: stm32-cryp: convert to the new crypto engine API

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests
  2018-02-16 15:36         ` Corentin Labbe
@ 2018-02-17  4:42           ` Herbert Xu
  -1 siblings, 0 replies; 44+ messages in thread
From: Herbert Xu @ 2018-02-17  4:42 UTC (permalink / raw)
  To: Corentin Labbe
  Cc: alexandre.torgue, arei.gonglei, corbet, davem, jasowang,
	mcoquelin.stm32, mst, fabien.dessenne, linux-arm-kernel,
	linux-crypto, linux-doc, linux-kernel, virtualization,
	linux-sunxi

On Fri, Feb 16, 2018 at 04:36:56PM +0100, Corentin Labbe wrote:
>
> As mentioned in the cover letter, all the patches (except the documentation one) should be squashed.
> A kbuild robot reported a build error on cryptodev caused by the split.

It's too late now.  In the future, if you want the patches to be
squashed, please send them in one email.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

end of thread, other threads:[~2018-02-17  4:42 UTC | newest]

Thread overview: 44+ messages
2018-01-26 19:15 [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests Corentin Labbe
2018-01-26 19:15 ` [PATCH v2 1/6] Documentation: crypto: document crypto engine API Corentin Labbe
2018-01-26 19:15 ` [PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests Corentin Labbe
2018-02-14 13:31   ` Fabien DESSENNE
2018-01-26 19:15 ` [PATCH v2 3/6] crypto: omap: convert to new crypto engine API Corentin Labbe
2018-01-26 19:15 ` [PATCH v2 4/6] crypto: virtio: convert to new crypto engine API Corentin Labbe
2018-02-14 15:51   ` Michael S. Tsirkin
2018-01-26 19:15 ` [PATCH v2 5/6] crypto: stm32-hash: convert to the new crypto engine API Corentin Labbe
2018-01-26 19:15 ` [PATCH v2 6/6] crypto: stm32-cryp: convert to the new crypto engine API Corentin Labbe
2018-02-15 15:51 ` [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests Herbert Xu
2018-02-16 15:36   ` Corentin Labbe
2018-02-17  4:42     ` Herbert Xu