* [PATCH 0/3] STM32 CRYP crypto driver
@ 2017-07-13  9:59 ` Fabien Dessenne
  0 siblings, 0 replies; 16+ messages in thread
From: Fabien Dessenne @ 2017-07-13  9:59 UTC (permalink / raw)
  To: Herbert Xu, David S . Miller, Rob Herring, Mark Rutland,
	Maxime Coquelin, Alexandre Torgue, linux-crypto, devicetree,
	linux-arm-kernel, linux-kernel
  Cc: Benjamin Gaignard, Lionel Debieve, Ludovic Barre

This set of patches adds a new crypto driver for the STMicroelectronics STM32
hardware. The driver uses the crypto API and provides HW-enabled AEAD and
block cipher algorithms.
It builds on the crypto engine, which is extended in this series to support
AEAD requests.
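
As an illustration, here is a minimal, driver-side sketch of how an AEAD
request is handed over to the crypto engine with the helpers introduced in
patch 1 (the example_* names below are placeholders, not the actual
stm32-cryp symbols):

#include <crypto/engine.h>
#include <crypto/internal/aead.h>

struct example_ctx {
	struct crypto_engine *engine;	/* set up at probe time */
};

/* AEAD .encrypt entry point: defer the request to the crypto engine,
 * which later calls back into the driver to program the hardware.
 */
static int example_gcm_aes_encrypt(struct aead_request *req)
{
	struct example_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));

	return crypto_transfer_aead_request_to_engine(ctx->engine, req);
}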

This driver was successfully tested with tcrypt / testmgr.

Note:
Since two other sets of patches (an update of STM32 CRC32 and the addition of
STM32 HASH) are currently under review, minor conflicts may appear in
'Kconfig' and 'Makefile'. If so, I will resolve them in due course.

Fabien Dessenne (3):
  crypto: engine - permit to enqueue aead_request
  dt-bindings: Document STM32 CRYP bindings
  crypto: stm32 - Support for STM32 CRYP crypto module

 .../devicetree/bindings/crypto/st,stm32-cryp.txt   |   20 +
 crypto/crypto_engine.c                             |  101 +
 drivers/crypto/stm32/Kconfig                       |    9 +
 drivers/crypto/stm32/Makefile                      |    1 +
 drivers/crypto/stm32/stm32-cryp.c                  | 1962 ++++++++++++++++++++
 include/crypto/engine.h                            |   16 +
 6 files changed, 2109 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt
 create mode 100644 drivers/crypto/stm32/stm32-cryp.c

-- 
2.7.4

* [PATCH 1/3] crypto: engine - permit to enqueue aead_request
@ 2017-07-13  9:59   ` Fabien Dessenne
  0 siblings, 0 replies; 16+ messages in thread
From: Fabien Dessenne @ 2017-07-13  9:59 UTC (permalink / raw)
  To: Herbert Xu, David S . Miller, Rob Herring, Mark Rutland,
	Maxime Coquelin, Alexandre Torgue, linux-crypto, devicetree,
	linux-arm-kernel, linux-kernel
  Cc: Benjamin Gaignard, Lionel Debieve, Ludovic Barre

The current crypto engine allows ablkcipher_request and ahash_request to
be enqueued. Extend this to aead_request.
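
For reference, here is a minimal usage sketch from a driver's point of view
(the my_* names are illustrative and not part of this patch):

#include <crypto/engine.h>
#include <crypto/internal/aead.h>

/* Called by the engine for each dequeued AEAD request. */
static int my_aead_one_request(struct crypto_engine *engine,
			       struct aead_request *req)
{
	/* Program the hardware here; then, typically from the completion
	 * interrupt, report the result back to the engine:
	 */
	crypto_finalize_aead_request(engine, req, 0);
	return 0;
}

static int my_engine_setup(struct device *dev, struct crypto_engine **pengine)
{
	struct crypto_engine *engine = crypto_engine_alloc_init(dev, true);

	if (!engine)
		return -ENOMEM;

	engine->aead_one_request = my_aead_one_request;
	/* ->prepare_aead_request / ->unprepare_aead_request are optional */

	*pengine = engine;
	return crypto_engine_start(engine);
}

AEAD requests are then queued with crypto_transfer_aead_request_to_engine()
and the engine serializes them just like cipher and hash requests.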

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 crypto/crypto_engine.c  | 101 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/crypto/engine.h |  16 ++++++++
 2 files changed, 117 insertions(+)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 727bd5c..01701ac 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -15,6 +15,7 @@
 #include <linux/err.h>
 #include <linux/delay.h>
 #include <crypto/engine.h>
+#include <crypto/internal/aead.h>
 #include <crypto/internal/hash.h>
 #include <uapi/linux/sched/types.h>
 #include "internal.h"
@@ -35,6 +36,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 {
 	struct crypto_async_request *async_req, *backlog;
 	struct ahash_request *hreq;
+	struct aead_request *areq;
 	struct ablkcipher_request *breq;
 	unsigned long flags;
 	bool was_busy = false;
@@ -121,6 +123,22 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 			goto req_err;
 		}
 		return;
+	case CRYPTO_ALG_TYPE_AEAD:
+		areq = aead_request_cast(engine->cur_req);
+		if (engine->prepare_aead_request) {
+			ret = engine->prepare_aead_request(engine, areq);
+			if (ret) {
+				pr_err("failed to prepare request: %d\n", ret);
+				goto req_err;
+			}
+			engine->cur_req_prepared = true;
+		}
+		ret = engine->aead_one_request(engine, areq);
+		if (ret) {
+			pr_err("failed to do aead one request from queue\n");
+			goto req_err;
+		}
+		return;
 	case CRYPTO_ALG_TYPE_ABLKCIPHER:
 		breq = ablkcipher_request_cast(engine->cur_req);
 		if (engine->prepare_cipher_request) {
@@ -148,6 +166,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		hreq = ahash_request_cast(engine->cur_req);
 		crypto_finalize_hash_request(engine, hreq, ret);
 		break;
+	case CRYPTO_ALG_TYPE_AEAD:
+		areq = aead_request_cast(engine->cur_req);
+		crypto_finalize_aead_request(engine, areq, ret);
+		break;
 	case CRYPTO_ALG_TYPE_ABLKCIPHER:
 		breq = ablkcipher_request_cast(engine->cur_req);
 		crypto_finalize_cipher_request(engine, breq, ret);
@@ -253,6 +275,48 @@ int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
 EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
 
 /**
+ * crypto_transfer_aead_request - transfer the new request into the
+ * engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_aead_request(struct crypto_engine *engine,
+				 struct aead_request *req, bool need_pump)
+{
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&engine->queue_lock, flags);
+
+	if (!engine->running) {
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+		return -ESHUTDOWN;
+	}
+
+	ret = aead_enqueue_request((struct aead_queue *)&engine->queue, req);
+
+	if (!engine->busy && need_pump)
+		kthread_queue_work(engine->kworker, &engine->pump_requests);
+
+	spin_unlock_irqrestore(&engine->queue_lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_aead_request);
+
+/**
+ * crypto_transfer_aead_request_to_engine - transfer one request to list
+ * into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
+					   struct aead_request *req)
+{
+	return crypto_transfer_aead_request(engine, req, true);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_aead_request_to_engine);
+
+/**
  * crypto_finalize_cipher_request - finalize one request if the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
@@ -327,6 +391,43 @@ void crypto_finalize_hash_request(struct crypto_engine *engine,
 EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
 
 /**
+ * crypto_finalize_aead_request - finalize one request if the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_aead_request(struct crypto_engine *engine,
+				  struct aead_request *req, int err)
+{
+	unsigned long flags;
+	bool finalize_cur_req = false;
+	int ret;
+
+	spin_lock_irqsave(&engine->queue_lock, flags);
+	if (engine->cur_req == &req->base)
+		finalize_cur_req = true;
+	spin_unlock_irqrestore(&engine->queue_lock, flags);
+
+	if (finalize_cur_req) {
+		if (engine->cur_req_prepared &&
+		    engine->unprepare_aead_request) {
+			ret = engine->unprepare_aead_request(engine, req);
+			if (ret)
+				pr_err("failed to unprepare request\n");
+		}
+		spin_lock_irqsave(&engine->queue_lock, flags);
+		engine->cur_req = NULL;
+		engine->cur_req_prepared = false;
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+	}
+
+	req->base.complete(&req->base, err);
+
+	kthread_queue_work(engine->kworker, &engine->pump_requests);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_aead_request);
+
+/**
  * crypto_engine_start - start the hardware engine
  * @engine: the hardware engine need to be started
  *
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index 1bf600f..a1c4a92 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -16,6 +16,7 @@
 #include <linux/list.h>
 #include <linux/kernel.h>
 #include <linux/kthread.h>
+#include <crypto/aead.h>
 #include <crypto/algapi.h>
 #include <crypto/hash.h>
 
@@ -43,6 +44,9 @@
  * @prepare_hash_request: do some prepare if need before handle the current request
  * @unprepare_hash_request: undo any work done by prepare_hash_request()
  * @hash_one_request: do hash for current request
+ * @prepare_aead_request: do some prepare if need before handle the current request
+ * @unprepare_aead_request: undo any work done by prepare_aead_request()
+ * @aead_one_request: do aead for current request
  * @kworker: kthread worker struct for request pump
  * @pump_requests: work struct for scheduling work to the request pump
  * @priv_data: the engine private data
@@ -72,10 +76,16 @@ struct crypto_engine {
 				    struct ahash_request *req);
 	int (*unprepare_hash_request)(struct crypto_engine *engine,
 				      struct ahash_request *req);
+	int (*prepare_aead_request)(struct crypto_engine *engine,
+				    struct aead_request *req);
+	int (*unprepare_aead_request)(struct crypto_engine *engine,
+				      struct aead_request *req);
 	int (*cipher_one_request)(struct crypto_engine *engine,
 				  struct ablkcipher_request *req);
 	int (*hash_one_request)(struct crypto_engine *engine,
 				struct ahash_request *req);
+	int (*aead_one_request)(struct crypto_engine *engine,
+				struct aead_request *req);
 
 	struct kthread_worker           *kworker;
 	struct kthread_work             pump_requests;
@@ -93,10 +103,16 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
 				 struct ahash_request *req, bool need_pump);
 int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
 					   struct ahash_request *req);
+int crypto_transfer_aead_request(struct crypto_engine *engine,
+				 struct aead_request *req, bool need_pump);
+int crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
+					   struct aead_request *req);
 void crypto_finalize_cipher_request(struct crypto_engine *engine,
 				    struct ablkcipher_request *req, int err);
 void crypto_finalize_hash_request(struct crypto_engine *engine,
 				  struct ahash_request *req, int err);
+void crypto_finalize_aead_request(struct crypto_engine *engine,
+				  struct aead_request *req, int err);
 int crypto_engine_start(struct crypto_engine *engine);
 int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
-- 
2.7.4

* [PATCH 2/3] dt-bindings: Document STM32 CRYP bindings
@ 2017-07-13  9:59     ` Fabien Dessenne
  0 siblings, 0 replies; 16+ messages in thread
From: Fabien Dessenne @ 2017-07-13  9:59 UTC (permalink / raw)
  To: Herbert Xu, David S . Miller, Rob Herring, Mark Rutland,
	Maxime Coquelin, Alexandre Torgue, linux-crypto, devicetree,
	linux-arm-kernel, linux-kernel
  Cc: Benjamin Gaignard, Lionel Debieve, Ludovic Barre

Document device tree bindings for the STM32 CRYP.

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 .../devicetree/bindings/crypto/st,stm32-cryp.txt     | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt

diff --git a/Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt b/Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt
new file mode 100644
index 0000000..f631c37
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt
@@ -0,0 +1,20 @@
+* STMicroelectronics STM32 CRYP
+
+Required properties:
+- compatible: Should be "st,stm32f756-cryp".
+- reg: The address and length of the peripheral register space
+- clocks: The input clock of the CRYP instance
+- interrupts: The CRYP interrupts
+
+Optional properties:
+- resets: The input reset of the CRYP instance
+
+Example:
+cryp1: cryp@50060000 {
+	compatible = "st,stm32f756-cryp";
+	reg = <0x50060000 0x400>;
+	interrupts = <79>;
+	clocks = <&rcc 0 STM32F7_AHB2_CLOCK(CRYP)>;
+	resets = <&rcc STM32F7_AHB2_RESET(CRYP)>;
+	status = "disabled";
+};
-- 
2.7.4

* [PATCH 3/3] crypto: stm32 - Support for STM32 CRYP crypto module
@ 2017-07-13  9:59   ` Fabien Dessenne
  0 siblings, 0 replies; 16+ messages in thread
From: Fabien Dessenne @ 2017-07-13  9:59 UTC (permalink / raw)
  To: Herbert Xu, David S . Miller, Rob Herring, Mark Rutland,
	Maxime Coquelin, Alexandre Torgue, linux-crypto, devicetree,
	linux-arm-kernel, linux-kernel
  Cc: Benjamin Gaignard, Lionel Debieve, Ludovic Barre

This module registers block and AEAD cipher algorithms that make use of
the STMicroelectronics STM32 crypto "CRYP1" hardware.
The following algorithms are supported:
- aes: ecb, cbc, ctr, gcm, ccm
- des: ecb, cbc
- tdes: ecb, cbc
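
For context, a minimal sketch of how a kernel user reaches one of these
algorithms through the regular crypto API once this driver is loaded (the
function below is illustrative; depending on priorities, "gcm(aes)" may also
be served by a software implementation):

#include <linux/err.h>
#include <crypto/aead.h>

static int example_use_gcm_aes(const u8 *key, unsigned int keylen)
{
	struct crypto_aead *tfm;
	int ret;

	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_aead_setkey(tfm, key, keylen);
	if (!ret)
		ret = crypto_aead_setauthsize(tfm, 16);

	/* Next: allocate an aead_request, set assoclen / src / dst / iv and
	 * call crypto_aead_encrypt(); completion may be asynchronous.
	 */

	crypto_free_aead(tfm);
	return ret;
}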

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 drivers/crypto/stm32/Kconfig      |    9 +
 drivers/crypto/stm32/Makefile     |    1 +
 drivers/crypto/stm32/stm32-cryp.c | 1962 +++++++++++++++++++++++++++++++++++++
 3 files changed, 1972 insertions(+)
 create mode 100644 drivers/crypto/stm32/stm32-cryp.c

diff --git a/drivers/crypto/stm32/Kconfig b/drivers/crypto/stm32/Kconfig
index 09b4ec8..c89d651 100644
--- a/drivers/crypto/stm32/Kconfig
+++ b/drivers/crypto/stm32/Kconfig
@@ -5,3 +5,12 @@ config CRYPTO_DEV_STM32
 	help
           This enables support for the CRC32 hw accelerator which can be found
 	  on STMicroelectronis STM32 SOC.
+
+config CRYP_DEV_STM32
+	tristate "Support for STM32 cryp accelerators"
+	depends on ARCH_STM32
+	select CRYPTO_HASH
+	select CRYPTO_ENGINE
+	help
+          This enables support for the CRYP (AES/DES/TDES) hw accelerator which
+	  can be found on STMicroelectronics STM32 SOC.
diff --git a/drivers/crypto/stm32/Makefile b/drivers/crypto/stm32/Makefile
index 73b4c6e..06b51c6 100644
--- a/drivers/crypto/stm32/Makefile
+++ b/drivers/crypto/stm32/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_CRYPTO_DEV_STM32) += stm32_cryp.o
 stm32_cryp-objs := stm32_crc32.o
+obj-$(CONFIG_CRYP_DEV_STM32) += stm32-cryp.o
diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
new file mode 100644
index 0000000..9a02d7c
--- /dev/null
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -0,0 +1,1962 @@
+/*
+ * Copyright (C) STMicroelectronics SA 2017
+ * Author: Fabien Dessenne <fabien.dessenne@st.com>
+ * License terms:  GNU General Public License (GPL), version 2
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+
+#include <crypto/aes.h>
+#include <crypto/des.h>
+#include <crypto/engine.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/aead.h>
+
+#define DRIVER_NAME             "stm32-cryp"
+
+/* Bit [0] encrypt / decrypt */
+#define FLG_ENCRYPT             BIT(0)
+/* Bit [8..1] algo & operation mode */
+#define FLG_AES                 BIT(1)
+#define FLG_DES                 BIT(2)
+#define FLG_TDES                BIT(3)
+#define FLG_ECB                 BIT(4)
+#define FLG_CBC                 BIT(5)
+#define FLG_CTR                 BIT(6)
+#define FLG_GCM                 BIT(7)
+#define FLG_CCM                 BIT(8)
+/* Mode mask = bits [15..0] */
+#define FLG_MODE_MASK           GENMASK(15, 0)
+/* Bit [31..16] status  */
+#define FLG_CCM_PADDED_WA       BIT(16)
+
+/* Registers */
+#define CRYP_CR                 0x00000000
+#define CRYP_SR                 0x00000004
+#define CRYP_DIN                0x00000008
+#define CRYP_DOUT               0x0000000C
+#define CRYP_DMACR              0x00000010
+#define CRYP_IMSCR              0x00000014
+#define CRYP_RISR               0x00000018
+#define CRYP_MISR               0x0000001C
+#define CRYP_K0LR               0x00000020
+#define CRYP_K0RR               0x00000024
+#define CRYP_K1LR               0x00000028
+#define CRYP_K1RR               0x0000002C
+#define CRYP_K2LR               0x00000030
+#define CRYP_K2RR               0x00000034
+#define CRYP_K3LR               0x00000038
+#define CRYP_K3RR               0x0000003C
+#define CRYP_IV0LR              0x00000040
+#define CRYP_IV0RR              0x00000044
+#define CRYP_IV1LR              0x00000048
+#define CRYP_IV1RR              0x0000004C
+#define CRYP_CSGCMCCM0R         0x00000050
+#define CRYP_CSGCM0R            0x00000070
+
+/* Registers values */
+#define CR_DEC_NOT_ENC          0x00000004
+#define CR_TDES_ECB             0x00000000
+#define CR_TDES_CBC             0x00000008
+#define CR_DES_ECB              0x00000010
+#define CR_DES_CBC              0x00000018
+#define CR_AES_ECB              0x00000020
+#define CR_AES_CBC              0x00000028
+#define CR_AES_CTR              0x00000030
+#define CR_AES_KP               0x00000038
+#define CR_AES_GCM              0x00080000
+#define CR_AES_CCM              0x00080008
+#define CR_AES_UNKNOWN          0xFFFFFFFF
+#define CR_ALGO_MASK            0x00080038
+#define CR_DATA32               0x00000000
+#define CR_DATA16               0x00000040
+#define CR_DATA8                0x00000080
+#define CR_DATA1                0x000000C0
+#define CR_KEY128               0x00000000
+#define CR_KEY192               0x00000100
+#define CR_KEY256               0x00000200
+#define CR_FFLUSH               0x00004000
+#define CR_CRYPEN               0x00008000
+#define CR_PH_INIT              0x00000000
+#define CR_PH_HEADER            0x00010000
+#define CR_PH_PAYLOAD           0x00020000
+#define CR_PH_FINAL             0x00030000
+#define CR_PH_MASK              0x00030000
+
+#define SR_BUSY                 0x00000010
+#define SR_OFNE                 0x00000004
+
+#define IMSCR_IN                BIT(0)
+#define IMSCR_OUT               BIT(1)
+
+#define MISR_IN                 BIT(0)
+#define MISR_OUT                BIT(1)
+
+/* Misc */
+#define AES_BLOCK_32            (AES_BLOCK_SIZE / sizeof(u32))
+#define GCM_CTR_INIT            2
+#define _walked_in              (cryp->in_walk.offset - cryp->in_sg->offset)
+#define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
+
+struct stm32_cryp_caps {
+	bool                    swap_final;
+	bool                    padding_wa;
+};
+
+struct stm32_cryp_ctx {
+	struct stm32_cryp       *cryp;
+	int                     keylen;
+	u32                     key[AES_KEYSIZE_256 / sizeof(u32)];
+	unsigned long           flags;
+};
+
+struct stm32_cryp_reqctx {
+	unsigned long mode;
+};
+
+struct stm32_cryp {
+	struct list_head        list;
+	struct device           *dev;
+	void __iomem            *regs;
+	struct clk              *clk;
+	unsigned long           flags;
+	u32                     irq_status;
+	const struct stm32_cryp_caps *caps;
+	struct stm32_cryp_ctx   *ctx;
+
+	struct crypto_engine    *engine;
+
+	struct mutex            lock; /* protects req / areq */
+	struct ablkcipher_request *req;
+	struct aead_request     *areq;
+
+	size_t                  authsize;
+	size_t                  hw_blocksize;
+
+	size_t                  total_in;
+	size_t                  total_in_save;
+	size_t                  total_out;
+	size_t                  total_out_save;
+
+	struct scatterlist      *in_sg;
+	struct scatterlist      *out_sg;
+	struct scatterlist      *out_sg_save;
+
+	struct scatterlist      in_sgl;
+	struct scatterlist      out_sgl;
+	bool                    sgs_copied;
+
+	int                     in_sg_len;
+	int                     out_sg_len;
+
+	struct scatter_walk     in_walk;
+	struct scatter_walk     out_walk;
+
+	u32                     last_ctr[4];
+	u32                     gcm_ctr;
+};
+
+struct stm32_cryp_list {
+	struct list_head        dev_list;
+	spinlock_t              lock; /* protect dev_list */
+};
+
+static struct stm32_cryp_list cryp_list = {
+	.dev_list = LIST_HEAD_INIT(cryp_list.dev_list),
+	.lock     = __SPIN_LOCK_UNLOCKED(cryp_list.lock),
+};
+
+static inline bool is_aes(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_AES;
+}
+
+static inline bool is_des(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_DES;
+}
+
+static inline bool is_tdes(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_TDES;
+}
+
+static inline bool is_ecb(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_ECB;
+}
+
+static inline bool is_cbc(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CBC;
+}
+
+static inline bool is_ctr(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CTR;
+}
+
+static inline bool is_gcm(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_GCM;
+}
+
+static inline bool is_ccm(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CCM;
+}
+
+static inline bool is_encrypt(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_ENCRYPT;
+}
+
+static inline bool is_decrypt(struct stm32_cryp *cryp)
+{
+	return !is_encrypt(cryp);
+}
+
+static inline u32 stm32_cryp_read(struct stm32_cryp *cryp, u32 ofst)
+{
+	return readl_relaxed(cryp->regs + ofst);
+}
+
+static inline void stm32_cryp_write(struct stm32_cryp *cryp, u32 ofst, u32 val)
+{
+	writel_relaxed(val, cryp->regs + ofst);
+}
+
+static inline void stm32_cryp_wait_enable(struct stm32_cryp *cryp)
+{
+	while (stm32_cryp_read(cryp, CRYP_CR) & CR_CRYPEN)
+		cpu_relax();
+}
+
+static inline void stm32_cryp_wait_busy(struct stm32_cryp *cryp)
+{
+	while (stm32_cryp_read(cryp, CRYP_SR) & SR_BUSY)
+		cpu_relax();
+}
+
+static inline void stm32_cryp_wait_output(struct stm32_cryp *cryp)
+{
+	while (!(stm32_cryp_read(cryp, CRYP_SR) & SR_OFNE))
+		cpu_relax();
+}
+
+static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp);
+
+static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
+{
+	struct stm32_cryp *tmp, *cryp = NULL;
+
+	spin_lock_bh(&cryp_list.lock);
+	if (!ctx->cryp) {
+		list_for_each_entry(tmp, &cryp_list.dev_list, list) {
+			cryp = tmp;
+			break;
+		}
+		ctx->cryp = cryp;
+	} else {
+		cryp = ctx->cryp;
+	}
+
+	spin_unlock_bh(&cryp_list.lock);
+
+	return cryp;
+}
+
+static int stm32_cryp_check_aligned(struct scatterlist *sg, size_t total,
+				    size_t align)
+{
+	int len = 0;
+
+	if (!total)
+		return 0;
+
+	if (!IS_ALIGNED(total, align))
+		return -EINVAL;
+
+	while (sg) {
+		if (!IS_ALIGNED(sg->offset, sizeof(u32)))
+			return -1;
+
+		if (!IS_ALIGNED(sg->length, align))
+			return -1;
+
+		len += sg->length;
+		sg = sg_next(sg);
+	}
+
+	if (len != total)
+		return -1;
+
+	return 0;
+}
+
+static int stm32_cryp_check_io_aligned(struct stm32_cryp *cryp)
+{
+	int ret;
+
+	ret = stm32_cryp_check_aligned(cryp->in_sg, cryp->total_in,
+				       cryp->hw_blocksize);
+	if (ret)
+		return ret;
+
+	ret = stm32_cryp_check_aligned(cryp->out_sg, cryp->total_out,
+				       cryp->hw_blocksize);
+
+	return ret;
+}
+
+static void sg_copy_buf(void *buf, struct scatterlist *sg,
+			unsigned int start, unsigned int nbytes, int out)
+{
+	struct scatter_walk walk;
+
+	if (!nbytes)
+		return;
+
+	scatterwalk_start(&walk, sg);
+	scatterwalk_advance(&walk, start);
+	scatterwalk_copychunks(buf, &walk, nbytes, out);
+	scatterwalk_done(&walk, out, 0);
+}
+
+static int stm32_cryp_copy_sgs(struct stm32_cryp *cryp)
+{
+	void *buf_in, *buf_out;
+	int pages, total_in, total_out;
+
+	if (!stm32_cryp_check_io_aligned(cryp)) {
+		cryp->sgs_copied = 0;
+		return 0;
+	}
+
+	total_in = ALIGN(cryp->total_in, cryp->hw_blocksize);
+	pages = total_in ? get_order(total_in) : 1;
+	buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages);
+
+	total_out = ALIGN(cryp->total_out, cryp->hw_blocksize);
+	pages = total_out ? get_order(total_out) : 1;
+	buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages);
+
+	if (!buf_in || !buf_out) {
+		pr_err("Couldn't allocate pages for unaligned cases.\n");
+		cryp->sgs_copied = 0;
+		return -1;
+	}
+
+	sg_copy_buf(buf_in, cryp->in_sg, 0, cryp->total_in, 0);
+
+	sg_init_one(&cryp->in_sgl, buf_in, total_in);
+	cryp->in_sg = &cryp->in_sgl;
+	cryp->in_sg_len = 1;
+
+	sg_init_one(&cryp->out_sgl, buf_out, total_out);
+	cryp->out_sg_save = cryp->out_sg;
+	cryp->out_sg = &cryp->out_sgl;
+	cryp->out_sg_len = 1;
+
+	cryp->sgs_copied = 1;
+
+	return 0;
+}
+
+static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, u32 *iv)
+{
+	if (!iv)
+		return;
+
+	stm32_cryp_write(cryp, CRYP_IV0LR, cpu_to_be32(*iv++));
+	stm32_cryp_write(cryp, CRYP_IV0RR, cpu_to_be32(*iv++));
+
+	if (is_aes(cryp)) {
+		stm32_cryp_write(cryp, CRYP_IV1LR, cpu_to_be32(*iv++));
+		stm32_cryp_write(cryp, CRYP_IV1RR, cpu_to_be32(*iv++));
+	}
+}
+
+static void stm32_cryp_hw_write_key(struct stm32_cryp *c)
+{
+	unsigned int i;
+	int r_id;
+
+	if (is_des(c)) {
+		stm32_cryp_write(c, CRYP_K1LR, cpu_to_be32(c->ctx->key[0]));
+		stm32_cryp_write(c, CRYP_K1RR, cpu_to_be32(c->ctx->key[1]));
+	} else {
+		r_id = CRYP_K3RR;
+		for (i = c->ctx->keylen / sizeof(u32); i > 0; i--, r_id -= 4)
+			stm32_cryp_write(c, r_id,
+					 cpu_to_be32(c->ctx->key[i - 1]));
+	}
+}
+
+static u32 stm32_cryp_get_hw_mode(struct stm32_cryp *cryp)
+{
+	if (is_aes(cryp) && is_ecb(cryp))
+		return CR_AES_ECB;
+
+	if (is_aes(cryp) && is_cbc(cryp))
+		return CR_AES_CBC;
+
+	if (is_aes(cryp) && is_ctr(cryp))
+		return CR_AES_CTR;
+
+	if (is_aes(cryp) && is_gcm(cryp))
+		return CR_AES_GCM;
+
+	if (is_aes(cryp) && is_ccm(cryp))
+		return CR_AES_CCM;
+
+	if (is_des(cryp) && is_ecb(cryp))
+		return CR_DES_ECB;
+
+	if (is_des(cryp) && is_cbc(cryp))
+		return CR_DES_CBC;
+
+	if (is_tdes(cryp) && is_ecb(cryp))
+		return CR_TDES_ECB;
+
+	if (is_tdes(cryp) && is_cbc(cryp))
+		return CR_TDES_CBC;
+
+	dev_err(cryp->dev, "Unknown mode\n");
+	return CR_AES_UNKNOWN;
+}
+
+static void stm32_cryp_gcm_init(struct stm32_cryp *cryp, u32 cfg)
+{
+	u32 iv[4];
+
+	/* Phase 1 : init */
+	memcpy(iv, cryp->areq->iv, 12);
+	iv[3] = cpu_to_be32(GCM_CTR_INIT);
+	cryp->gcm_ctr = GCM_CTR_INIT;
+	stm32_cryp_hw_write_iv(cryp, iv);
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg | CR_PH_INIT | CR_CRYPEN);
+
+	/* Wait for end of processing */
+	stm32_cryp_wait_enable(cryp);
+}
+
+static void stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
+{
+	u8 iv[AES_BLOCK_SIZE], b0[AES_BLOCK_SIZE];
+	u32 *d;
+	unsigned int i, textlen;
+
+	/* Phase 1 : init. Firstly set the CTR value to 1 (not 0) */
+	memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+	memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+	iv[AES_BLOCK_SIZE - 1] = 1;
+	stm32_cryp_hw_write_iv(cryp, (u32 *)iv);
+
+	/* Build B0 */
+	memcpy(b0, iv, AES_BLOCK_SIZE);
+
+	b0[0] |= (8 * ((cryp->authsize - 2) / 2));
+
+	if (cryp->areq->assoclen)
+		b0[0] |= 0x40;
+
+	if (is_encrypt(cryp))
+		textlen = cryp->areq->cryptlen;
+	else
+		textlen = cryp->areq->cryptlen - cryp->authsize;
+
+	b0[AES_BLOCK_SIZE - 2] = textlen >> 8;
+	b0[AES_BLOCK_SIZE - 1] = textlen & 0xFF;
+
+	/* Enable HW */
+	stm32_cryp_write(cryp, CRYP_CR, cfg | CR_PH_INIT | CR_CRYPEN);
+
+	/* Write B0 */
+	d = (u32 *)b0;
+	for (i = 0; i < AES_BLOCK_32; i++)
+		stm32_cryp_write(cryp, CRYP_DIN, *d++);
+
+	/* Wait for end of processing */
+	stm32_cryp_wait_enable(cryp);
+}
+
+static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+{
+	u32 cfg, hw_mode;
+
+	/* Disable interrupt */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+
+	/* Set key */
+	stm32_cryp_hw_write_key(cryp);
+
+	/* Set configuration */
+	cfg = CR_DATA8 | CR_FFLUSH;
+
+	switch (cryp->ctx->keylen) {
+	case AES_KEYSIZE_128:
+		cfg |= CR_KEY128;
+		break;
+
+	case AES_KEYSIZE_192:
+		cfg |= CR_KEY192;
+		break;
+
+	default:
+	case AES_KEYSIZE_256:
+		cfg |= CR_KEY256;
+		break;
+	}
+
+	hw_mode = stm32_cryp_get_hw_mode(cryp);
+	if (hw_mode == CR_AES_UNKNOWN)
+		return -EINVAL;
+
+	/* AES ECB/CBC decrypt: run key preparation first */
+	if (is_decrypt(cryp) &&
+	    ((hw_mode == CR_AES_ECB) || (hw_mode == CR_AES_CBC))) {
+		stm32_cryp_write(cryp, CRYP_CR, cfg | CR_AES_KP | CR_CRYPEN);
+
+		/* Wait for end of processing */
+		stm32_cryp_wait_busy(cryp);
+	}
+
+	cfg |= hw_mode;
+
+	if (is_decrypt(cryp))
+		cfg |= CR_DEC_NOT_ENC;
+
+	/* Apply config and flush (valid when CRYPEN = 0) */
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	switch (hw_mode) {
+	case CR_AES_GCM:
+	case CR_AES_CCM:
+		/* Phase 1 : init */
+		if (hw_mode == CR_AES_CCM)
+			stm32_cryp_ccm_init(cryp, cfg);
+		else
+			stm32_cryp_gcm_init(cryp, cfg);
+
+		/* Phase 2 : header (authenticated data) */
+		if (cryp->areq->assoclen) {
+			cfg |= CR_PH_HEADER;
+		} else if (cryp->areq->cryptlen) {
+			/* Phase 3 : payload */
+			cfg |= CR_PH_PAYLOAD;
+			stm32_cryp_write(cryp, CRYP_CR, cfg);
+		} else {
+			cfg |= CR_PH_INIT;
+		}
+
+		break;
+
+	case CR_DES_CBC:
+	case CR_TDES_CBC:
+	case CR_AES_CBC:
+	case CR_AES_CTR:
+		stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->req->info);
+		break;
+
+	default:
+		break;
+	}
+
+	/* Enable now */
+	cfg |= CR_CRYPEN;
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	cryp->flags &= ~FLG_CCM_PADDED_WA;
+
+	return 0;
+}
+
+static void stm32_cryp_finish_req(struct stm32_cryp *cryp)
+{
+	int err = 0;
+
+	if (is_gcm(cryp) || is_ccm(cryp))
+		/* Phase 4 : output tag */
+		err = stm32_cryp_read_auth_tag(cryp);
+
+	if (cryp->sgs_copied) {
+		void *buf_in, *buf_out;
+		int pages, len;
+
+		buf_in = sg_virt(&cryp->in_sgl);
+		buf_out = sg_virt(&cryp->out_sgl);
+
+		sg_copy_buf(buf_out, cryp->out_sg_save, 0,
+			    cryp->total_out_save, 1);
+
+		len = ALIGN(cryp->total_in_save, cryp->hw_blocksize);
+		pages = len ? get_order(len) : 1;
+		free_pages((unsigned long)buf_in, pages);
+
+		len = ALIGN(cryp->total_out_save, cryp->hw_blocksize);
+		pages = len ? get_order(len) : 1;
+		free_pages((unsigned long)buf_out, pages);
+	}
+
+	if (is_gcm(cryp) || is_ccm(cryp)) {
+		crypto_finalize_aead_request(cryp->engine, cryp->areq, err);
+		cryp->areq = NULL;
+	} else {
+		crypto_finalize_cipher_request(cryp->engine, cryp->req, err);
+		cryp->req = NULL;
+	}
+
+	mutex_unlock(&cryp->lock);
+}
+
+static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
+{
+	if ((stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+	    !cryp->areq->assoclen && !cryp->areq->cryptlen)
+		/* No input data, get output tag (phase 4) and finish */
+		stm32_cryp_finish_req(cryp);
+	else
+		/* Enable interrupt and let the IRQ handler do everything */
+		stm32_cryp_write(cryp, CRYP_IMSCR, IMSCR_IN | IMSCR_OUT);
+
+	return 0;
+}
+
+static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
+{
+	tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
+
+	return 0;
+}
+
+static int stm32_cryp_aes_aead_init(struct crypto_aead *tfm)
+{
+	tfm->reqsize = sizeof(struct stm32_cryp_reqctx);
+
+	return 0;
+}
+
+static void stm32_cryp_cra_exit(struct crypto_tfm *tfm)
+{
+}
+
+static void stm32_cryp_aes_aead_exit(struct crypto_aead *tfm)
+{
+}
+
+static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
+			crypto_ablkcipher_reqtfm(req));
+	struct stm32_cryp_reqctx *rctx = ablkcipher_request_ctx(req);
+	struct stm32_cryp *cryp = stm32_cryp_find_dev(ctx);
+
+	if (!cryp)
+		return -ENODEV;
+
+	rctx->mode = mode;
+
+	return crypto_transfer_cipher_request_to_engine(cryp->engine, req);
+}
+
+static int stm32_cryp_aead_crypt(struct aead_request *req, unsigned long mode)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct stm32_cryp_reqctx *rctx = aead_request_ctx(req);
+	struct stm32_cryp *cryp = stm32_cryp_find_dev(ctx);
+
+	if (!cryp)
+		return -ENODEV;
+
+	rctx->mode = mode;
+
+	return crypto_transfer_aead_request_to_engine(cryp->engine, req);
+}
+
+static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+			     unsigned int keylen)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return 0;
+}
+
+static int stm32_cryp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				 unsigned int keylen)
+{
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
+	    keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_des_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				 unsigned int keylen)
+{
+	if (keylen != DES_KEY_SIZE)
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_tdes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				  unsigned int keylen)
+{
+	if (keylen != (3 * DES_KEY_SIZE))
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+				      unsigned int keylen)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
+	    keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return 0;
+}
+
+static int stm32_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+					  unsigned int authsize)
+{
+	return authsize == AES_BLOCK_SIZE ? 0 : -EINVAL;
+}
+
+static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
+					  unsigned int authsize)
+{
+	switch (authsize) {
+	case 4:
+	case 6:
+	case 8:
+	case 10:
+	case 12:
+	case 14:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int stm32_cryp_aes_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB);
+}
+
+static int stm32_cryp_aes_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC);
+}
+
+static int stm32_cryp_aes_ctr_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ctr_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR);
+}
+
+static int stm32_cryp_aes_gcm_encrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_gcm_decrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM);
+}
+
+static int stm32_cryp_aes_ccm_encrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM);
+}
+
+static int stm32_cryp_des_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_des_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB);
+}
+
+static int stm32_cryp_des_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_des_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC);
+}
+
+static int stm32_cryp_tdes_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_tdes_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB);
+}
+
+static int stm32_cryp_tdes_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_tdes_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC);
+}
+
+static int stm32_cryp_prepare_req(struct crypto_engine *engine,
+				  struct ablkcipher_request *req,
+				  struct aead_request *areq)
+{
+	struct stm32_cryp_ctx *ctx;
+	struct stm32_cryp *cryp;
+	struct stm32_cryp_reqctx *rctx;
+	int ret;
+
+	if (!req && !areq)
+		return -EINVAL;
+
+	ctx = req ? crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)) :
+		    crypto_aead_ctx(crypto_aead_reqtfm(areq));
+
+	cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	mutex_lock(&cryp->lock);
+
+	rctx = req ? ablkcipher_request_ctx(req) : aead_request_ctx(areq);
+	rctx->mode &= FLG_MODE_MASK;
+
+	ctx->cryp = cryp;
+
+	cryp->flags = (cryp->flags & ~FLG_MODE_MASK) | rctx->mode;
+	cryp->hw_blocksize = is_aes(cryp) ? AES_BLOCK_SIZE : DES_BLOCK_SIZE;
+	cryp->ctx = ctx;
+
+	if (req) {
+		cryp->req = req;
+		cryp->total_in = req->nbytes;
+		cryp->total_out = cryp->total_in;
+	} else {
+		/*
+		 * Length of input and output data:
+		 * Encryption case:
+		 *  INPUT  =   AssocData  ||   PlainText
+		 *          <- assoclen ->  <- cryptlen ->
+		 *          <------- total_in ----------->
+		 *
+		 *  OUTPUT =   AssocData  ||  CipherText  ||   AuthTag
+		 *          <- assoclen ->  <- cryptlen ->  <- authsize ->
+		 *          <---------------- total_out ----------------->
+		 *
+		 * Decryption case:
+		 *  INPUT  =   AssocData  ||  CipherText  ||  AuthTag
+		 *          <- assoclen ->  <--------- cryptlen --------->
+		 *                                          <- authsize ->
+		 *          <---------------- total_in ------------------>
+		 *
+		 *  OUTPUT =   AssocData  ||   PlainText
+		 *          <- assoclen ->  <- cryptlen - authsize ->
+		 *          <---------- total_out ----------------->
+		 */
+		cryp->areq = areq;
+		cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq));
+		cryp->total_in = areq->assoclen + areq->cryptlen;
+		if (is_encrypt(cryp))
+			/* Append auth tag to output */
+			cryp->total_out = cryp->total_in + cryp->authsize;
+		else
+			/* No auth tag in output */
+			cryp->total_out = cryp->total_in - cryp->authsize;
+	}
+
+	cryp->total_in_save = cryp->total_in;
+	cryp->total_out_save = cryp->total_out;
+
+	cryp->in_sg = req ? req->src : areq->src;
+	cryp->out_sg = req ? req->dst : areq->dst;
+	cryp->out_sg_save = cryp->out_sg;
+
+	cryp->in_sg_len = sg_nents_for_len(cryp->in_sg, cryp->total_in);
+	if (cryp->in_sg_len < 0) {
+		dev_err(cryp->dev, "Cannot get in_sg_len\n");
+		ret = cryp->in_sg_len;
+		goto out;
+	}
+
+	cryp->out_sg_len = sg_nents_for_len(cryp->out_sg, cryp->total_out);
+	if (cryp->out_sg_len < 0) {
+		dev_err(cryp->dev, "Cannot get out_sg_len\n");
+		ret = cryp->out_sg_len;
+		goto out;
+	}
+
+	stm32_cryp_copy_sgs(cryp);
+
+	scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+	scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+
+	if (is_gcm(cryp) || is_ccm(cryp)) {
+		/* In output, jump after assoc data */
+		scatterwalk_advance(&cryp->out_walk, cryp->areq->assoclen);
+		cryp->total_out -= cryp->areq->assoclen;
+	}
+
+	ret = stm32_cryp_hw_init(cryp);
+out:
+	if (ret)
+		mutex_unlock(&cryp->lock);
+
+	return ret;
+}
+
+static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
+					 struct ablkcipher_request *req)
+{
+	return stm32_cryp_prepare_req(engine, req, NULL);
+}
+
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
+				     struct ablkcipher_request *req)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
+			crypto_ablkcipher_reqtfm(req));
+	struct stm32_cryp *cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	return stm32_cryp_cpu_start(cryp);
+}
+
+static int stm32_cryp_prepare_aead_req(struct crypto_engine *engine,
+				       struct aead_request *areq)
+{
+	return stm32_cryp_prepare_req(engine, NULL, areq);
+}
+
+static int stm32_cryp_aead_one_req(struct crypto_engine *engine,
+				   struct aead_request *areq)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(areq));
+	struct stm32_cryp *cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	return stm32_cryp_cpu_start(cryp);
+}
+
+static u32 *stm32_cryp_next_out(struct stm32_cryp *cryp, u32 *dst,
+				unsigned int n)
+{
+	scatterwalk_advance(&cryp->out_walk, n);
+
+	if (unlikely(cryp->out_sg->length == _walked_out)) {
+		cryp->out_sg = sg_next(cryp->out_sg);
+		if (cryp->out_sg) {
+			scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+			return (sg_virt(cryp->out_sg) + _walked_out);
+		}
+	}
+
+	return (u32 *)((u8 *)dst + n);
+}
+
+static u32 *stm32_cryp_next_in(struct stm32_cryp *cryp, u32 *src,
+			       unsigned int n)
+{
+	scatterwalk_advance(&cryp->in_walk, n);
+
+	if (unlikely(cryp->in_sg->length == _walked_in)) {
+		cryp->in_sg = sg_next(cryp->in_sg);
+		if (cryp->in_sg) {
+			scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+			return (sg_virt(cryp->in_sg) + _walked_in);
+		}
+	}
+
+	return (u32 *)((u8 *)src + n);
+}
+
+static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+{
+	u32 cfg, size_bit, *dst, d32;
+	u8 *d8;
+	unsigned int i, j;
+	int ret = 0;
+
+	/* Update Config */
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_FINAL;
+	cfg &= ~CR_DEC_NOT_ENC;
+	cfg |= CR_CRYPEN;
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	if (is_gcm(cryp)) {
+		/* GCM: write aad and payload size (in bits) */
+		size_bit = cryp->areq->assoclen * 8;
+		if (cryp->caps->swap_final)
+			size_bit = cpu_to_be32(size_bit);
+
+		stm32_cryp_write(cryp, CRYP_DIN, 0);
+		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+
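+		/*
+		 * On decryption, cryptlen includes the 16-byte tag (GCM
+		 * authsize is fixed to AES_BLOCK_SIZE by setauthsize), so
+		 * subtract it to get the actual payload length.
+		 */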
+		size_bit = is_encrypt(cryp) ? cryp->areq->cryptlen :
+				cryp->areq->cryptlen - AES_BLOCK_SIZE;
+		size_bit *= 8;
+		if (cryp->caps->swap_final)
+			size_bit = cpu_to_be32(size_bit);
+
+		stm32_cryp_write(cryp, CRYP_DIN, 0);
+		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+	} else {
+		/* CCM: write CTR0 */
+		u8 iv[AES_BLOCK_SIZE];
+		u32 *iv32 = (u32 *)iv;
+
+		memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+		memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+
+		for (i = 0; i < AES_BLOCK_32; i++)
+			stm32_cryp_write(cryp, CRYP_DIN, *iv32++);
+	}
+
+	/* Wait for output data */
+	stm32_cryp_wait_output(cryp);
+
+	if (is_encrypt(cryp)) {
+		/* Get and write tag */
+		dst = sg_virt(cryp->out_sg) + _walked_out;
+
+		for (i = 0; i < AES_BLOCK_32; i++) {
+			if (cryp->total_out >= sizeof(u32)) {
+				/* Read a full u32 */
+				*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+
+				dst = stm32_cryp_next_out(cryp, dst,
+							  sizeof(u32));
+				cryp->total_out -= sizeof(u32);
+			} else if (!cryp->total_out) {
+				/* Empty fifo out (data from input padding) */
+				stm32_cryp_read(cryp, CRYP_DOUT);
+			} else {
+				/* Read less than an u32 */
+				d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+				d8 = (u8 *)&d32;
+
+				for (j = 0; j < cryp->total_out; j++) {
+					*((u8 *)dst) = *(d8++);
+					dst = stm32_cryp_next_out(cryp, dst, 1);
+				}
+				cryp->total_out = 0;
+			}
+		}
+	} else if (!(cryp->flags & FLG_CCM_PADDED_WA)) {
+		/*
+		 *  FIXME: when CCM workaround has been run, the tag is wrongly
+		 *  computed. Hence it shall not be compared with the expected
+		 *  input tag.
+		 */
+		u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32];
+
+		scatterwalk_map_and_copy(in_tag, cryp->in_sg,
+					 cryp->total_in_save - cryp->authsize,
+					 cryp->authsize, 0);
+
+		for (i = 0; i < AES_BLOCK_32; i++)
+			out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+
+		if (crypto_memneq(in_tag, out_tag, cryp->authsize))
+			ret = -EBADMSG;
+	}
+
+	/* Disable cryp */
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	return ret;
+}
+
+static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp)
+{
+	u32 cr;
+
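+	/*
+	 * Only the 32 LSB of the counter appear to be incremented by the
+	 * peripheral: when they are about to wrap, propagate the carry into
+	 * the upper IV words in software and reload the IV with the
+	 * peripheral temporarily disabled.
+	 */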
+	if (unlikely(cryp->last_ctr[3] == 0xFFFFFFFF)) {
+		cryp->last_ctr[3] = 0;
+		cryp->last_ctr[2]++;
+		if (!cryp->last_ctr[2]) {
+			cryp->last_ctr[1]++;
+			if (!cryp->last_ctr[1])
+				cryp->last_ctr[0]++;
+		}
+
+		cr = stm32_cryp_read(cryp, CRYP_CR);
+		stm32_cryp_write(cryp, CRYP_CR, cr & ~CR_CRYPEN);
+
+		stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->last_ctr);
+
+		stm32_cryp_write(cryp, CRYP_CR, cr);
+	}
+
+	cryp->last_ctr[0] = stm32_cryp_read(cryp, CRYP_IV0LR);
+	cryp->last_ctr[1] = stm32_cryp_read(cryp, CRYP_IV0RR);
+	cryp->last_ctr[2] = stm32_cryp_read(cryp, CRYP_IV1LR);
+	cryp->last_ctr[3] = stm32_cryp_read(cryp, CRYP_IV1RR);
+}
+
+static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 d32, *dst;
+	u8 *d8;
+	size_t tag_size;
+
+	/* Do not read the tag now (if any) */
+	if (is_encrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+		tag_size = cryp->authsize;
+	else
+		tag_size = 0;
+
+	dst = sg_virt(cryp->out_sg) + _walked_out;
+
+	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+		if (likely(cryp->total_out - tag_size >= sizeof(u32))) {
+			/* Read a full u32 */
+			*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+
+			dst = stm32_cryp_next_out(cryp, dst, sizeof(u32));
+			cryp->total_out -= sizeof(u32);
+		} else if (cryp->total_out == tag_size) {
+			/* Empty fifo out (data from input padding) */
+			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+		} else {
+			/* Read less than an u32 */
+			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+			d8 = (u8 *)&d32;
+
+			for (j = 0; j < cryp->total_out - tag_size; j++) {
+				*((u8 *)dst) = *(d8++);
+				dst = stm32_cryp_next_out(cryp, dst, 1);
+			}
+			cryp->total_out = tag_size;
+		}
+	}
+
+	return !(cryp->total_out - tag_size) || !cryp->total_in;
+}
+
+static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 *src;
+	u8 d8[4];
+	size_t tag_size;
+
+	/* Do not write the tag (if any) */
+	if (is_decrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+		tag_size = cryp->authsize;
+	else
+		tag_size = 0;
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+
+	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+		if (likely(cryp->total_in - tag_size >= sizeof(u32))) {
+			/* Write a full u32 */
+			stm32_cryp_write(cryp, CRYP_DIN, *src);
+
+			src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+			cryp->total_in -= sizeof(u32);
+		} else if (cryp->total_in == tag_size) {
+			/* Write padding data */
+			stm32_cryp_write(cryp, CRYP_DIN, 0);
+		} else {
+			/* Write less than an u32 */
+			memset(d8, 0, sizeof(u32));
+			for (j = 0; j < cryp->total_in - tag_size; j++) {
+				d8[j] = *((u8 *)src);
+				src = stm32_cryp_next_in(cryp, src, 1);
+			}
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			cryp->total_in = tag_size;
+		}
+	}
+}
+
+static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+{
+	u32 cfg, tmp[AES_BLOCK_32];
+	size_t total_in_ori = cryp->total_in;
+	struct scatterlist *out_sg_ori = cryp->out_sg;
+	unsigned int i;
+
+	/* 'Special workaround' procedure described in the datasheet */
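+	/*
+	 * The last, partial block is encrypted in plain CTR mode (GCM uses
+	 * CTR for the payload), then the zero-padded ciphertext is fed back
+	 * in the final phase so that the tag computation covers it.
+	 */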
+
+	/* a) disable ip */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) Update IV1R */
+	stm32_cryp_write(cryp, CRYP_IV1RR, cryp->gcm_ctr - 2);
+
+	/* c) change mode to CTR */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CTR;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* a) enable IP */
+	cfg |= CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) pad and write the last block */
+	stm32_cryp_irq_write_block(cryp);
+	cryp->total_in = total_in_ori;
+	stm32_cryp_wait_output(cryp);
+
+	/* c) get and store encrypted data */
+	stm32_cryp_irq_read_data(cryp);
+	scatterwalk_map_and_copy(tmp, out_sg_ori,
+				 cryp->total_in_save - total_in_ori,
+				 total_in_ori, 0);
+
+	/* d) change mode back to AES GCM */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_GCM;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* e) change phase to Final */
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_FINAL;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* f) write padded data */
+	for (i = 0; i < AES_BLOCK_32; i++) {
+		if (cryp->total_in)
+			stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+		else
+			stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+	}
+
+	/* g) Empty fifo out */
+	stm32_cryp_wait_output(cryp);
+
+	for (i = 0; i < AES_BLOCK_32; i++)
+		stm32_cryp_read(cryp, CRYP_DOUT);
+
+	/* h) run the normal Final phase */
+	stm32_cryp_finish_req(cryp);
+}
+
+static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+{
+	u32 cfg, iv1tmp;
+	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32];
+	size_t last_total_out, total_in_ori = cryp->total_in;
+	struct scatterlist *out_sg_ori = cryp->out_sg;
+	unsigned int i;
+
+	/* 'Special workaround' procedure described in the datasheet */
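+	/*
+	 * The last, partial block is decrypted in plain CTR mode; the result,
+	 * zero-padded and combined with the CSGCMCCM context saved before and
+	 * after, is then re-injected in header phase so that the MAC
+	 * computation accounts for it.
+	 */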
+	cryp->flags |= FLG_CCM_PADDED_WA;
+
+	/* a) disable ip */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) get IV1 from CRYP_CSGCMCCM7 */
+	iv1tmp = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + 7 * 4);
+
+	/* c) Load CRYP_CSGCMCCMxR */
+	for (i = 0; i < ARRAY_SIZE(cstmp1); i++)
+		cstmp1[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4);
+
+	/* d) Write IV1R */
+	stm32_cryp_write(cryp, CRYP_IV1RR, iv1tmp);
+
+	/* e) change mode to CTR */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CTR;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* a) enable IP */
+	cfg |= CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) pad and write the last block */
+	stm32_cryp_irq_write_block(cryp);
+	cryp->total_in = total_in_ori;
+	stm32_cryp_wait_output(cryp);
+
+	/* c) get and store decrypted data */
+	last_total_out = cryp->total_out;
+	stm32_cryp_irq_read_data(cryp);
+
+	memset(tmp, 0, sizeof(tmp));
+	scatterwalk_map_and_copy(tmp, out_sg_ori,
+				 cryp->total_out_save - last_total_out,
+				 last_total_out, 0);
+
+	/* d) Load again CRYP_CSGCMCCMxR */
+	for (i = 0; i < ARRAY_SIZE(cstmp2); i++)
+		cstmp2[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4);
+
+	/* e) change mode back to AES CCM */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CCM;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* f) change phase to header */
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_HEADER;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* g) XOR and write padded data */
+	for (i = 0; i < ARRAY_SIZE(tmp); i++) {
+		tmp[i] ^= cstmp1[i];
+		tmp[i] ^= cstmp2[i];
+		stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+	}
+
+	/* h) wait for completion */
+	stm32_cryp_wait_busy(cryp);
+
+	/* i) run the normal Final phase */
+	stm32_cryp_finish_req(cryp);
+}
+
+static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
+{
+	if (unlikely(!cryp->total_in)) {
+		dev_warn(cryp->dev, "No more data to process\n");
+		return;
+	}
+
+	if (unlikely(cryp->caps->padding_wa &&
+		     (cryp->total_in < AES_BLOCK_SIZE) &&
+		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+		     (is_encrypt(cryp))))
+		/* Special case 1: padding for AES GCM encryption */
+		return stm32_cryp_irq_write_gcm_padded_data(cryp);
+
+	if (unlikely(cryp->caps->padding_wa &&
+		     (cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) &&
+		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_CCM) &&
+		     (is_decrypt(cryp))))
+		/* Special case 2: padding for AES CCM decryption */
+		return stm32_cryp_irq_write_ccm_padded_data(cryp);
+
+	if (is_aes(cryp) && is_ctr(cryp))
+		stm32_cryp_check_ctr_counter(cryp);
+
+	stm32_cryp_irq_write_block(cryp);
+}
+
+static void stm32_cryp_irq_write_gcm_header(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 cfg, *src;
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+
+	for (i = 0; i < AES_BLOCK_32; i++) {
+		stm32_cryp_write(cryp, CRYP_DIN, *src);
+
+		src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+
+		/* Check if whole header written */
+		if ((cryp->total_in_save - cryp->total_in) ==
+				cryp->areq->assoclen) {
+			/* Write padding if needed */
+			for (j = i + 1; j < AES_BLOCK_32; j++)
+				stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+			/* Wait for completion */
+			stm32_cryp_wait_busy(cryp);
+
+			if (cryp->areq->cryptlen) {
+				/* Phase 3 : payload */
+				cfg = stm32_cryp_read(cryp, CRYP_CR);
+				cfg &= ~CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+				cfg &= ~CR_PH_MASK;
+				cfg |= CR_PH_PAYLOAD;
+				cfg |= CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+			} else {
+				/* Phase 4 : tag */
+				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+				stm32_cryp_finish_req(cryp);
+			}
+
+			break;
+		}
+
+		if (!cryp->total_in)
+			break;
+	}
+}
+
+static void stm32_cryp_irq_write_ccm_header(struct stm32_cryp *cryp)
+{
+	unsigned int i = 0, j, k;
+	u32 alen, cfg, *src;
+	u8 d8[4];
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+	alen = cryp->areq->assoclen;
+
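+	/*
+	 * CCM (NIST SP 800-38C): the AAD length is encoded on 2 bytes when it
+	 * is smaller than 2^16 - 2^8 (65280), otherwise as 0xff 0xfe followed
+	 * by a 4-byte length.
+	 */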
+	if (!_walked_in) {
+		if (cryp->areq->assoclen < 65280) {
+			/* Write first u32 of B1 */
+			d8[0] = (alen >> 8) & 0xFF;
+			d8[1] = alen & 0xFF;
+			d8[2] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+			d8[3] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+		} else {
+			/* Build the two first u32 of B1 */
+			d8[0] = 0xFF;
+			d8[1] = 0xFE;
+			d8[2] = (alen & 0xFF000000) >> 24;
+			d8[3] = (alen & 0x00FF0000) >> 16;
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			d8[0] = (alen & 0x0000FF00) >> 8;
+			d8[1] = alen & 0x000000FF;
+			d8[2] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+			d8[3] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+		}
+	}
+
+	/* Write next u32 */
+	for (; i < AES_BLOCK_32; i++) {
+		/* Build an u32 */
+		memset(d8, 0, sizeof(u32));
+		for (k = 0; k < sizeof(u32); k++) {
+			d8[k] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			cryp->total_in -= min_t(size_t, 1, cryp->total_in);
+			if ((cryp->total_in_save - cryp->total_in) == alen)
+				break;
+		}
+
+		stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+
+		if ((cryp->total_in_save - cryp->total_in) == alen) {
+			/* Write padding if needed */
+			for (j = i + 1; j < AES_BLOCK_32; j++)
+				stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+			/* Wait for completion */
+			stm32_cryp_wait_busy(cryp);
+
+			if (cryp->areq->cryptlen) {
+				/* Phase 3 : payload */
+				cfg = stm32_cryp_read(cryp, CRYP_CR);
+				cfg &= ~CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+				cfg &= ~CR_PH_MASK;
+				cfg |= CR_PH_PAYLOAD;
+				cfg |= CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+			} else {
+				/* Phase 4 : tag */
+				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+				stm32_cryp_finish_req(cryp);
+			}
+
+			break;
+		}
+	}
+}
+
+static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
+{
+	struct stm32_cryp *cryp = arg;
+	u32 ph;
+
+	if (cryp->irq_status & MISR_OUT)
+		/* Output FIFO IRQ: read data */
+		if (unlikely(stm32_cryp_irq_read_data(cryp))) {
+			/* All bytes processed, finish */
+			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+			stm32_cryp_finish_req(cryp);
+			return IRQ_HANDLED;
+		}
+
+	if (cryp->irq_status & MISR_IN) {
+		if (is_gcm(cryp)) {
+			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+			if (unlikely(ph == CR_PH_HEADER))
+				/* Write Header */
+				stm32_cryp_irq_write_gcm_header(cryp);
+			else
+				/* Input FIFO IRQ: write data */
+				stm32_cryp_irq_write_data(cryp);
+			cryp->gcm_ctr++;
+		} else if (is_ccm(cryp)) {
+			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+			if (unlikely(ph == CR_PH_HEADER))
+				/* Write Header */
+				stm32_cryp_irq_write_ccm_header(cryp);
+			else
+				/* Input FIFO IRQ: write data */
+				stm32_cryp_irq_write_data(cryp);
+		} else {
+			/* Input FIFO IRQ: write data */
+			stm32_cryp_irq_write_data(cryp);
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t stm32_cryp_irq(int irq, void *arg)
+{
+	struct stm32_cryp *cryp = arg;
+
+	cryp->irq_status = stm32_cryp_read(cryp, CRYP_MISR);
+
+	return IRQ_WAKE_THREAD;
+}
+
+static struct crypto_alg crypto_algs[] = {
+{
+	.cra_name		= "ecb(aes)",
+	.cra_driver_name	= "stm32-ecb-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_ecb_encrypt,
+		.decrypt	= stm32_cryp_aes_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(aes)",
+	.cra_driver_name	= "stm32-cbc-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_cbc_encrypt,
+		.decrypt	= stm32_cryp_aes_cbc_decrypt,
+	}
+},
+{
+	.cra_name		= "ctr(aes)",
+	.cra_driver_name	= "stm32-ctr-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= 1,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_ctr_encrypt,
+		.decrypt	= stm32_cryp_aes_ctr_decrypt,
+	}
+},
+{
+	.cra_name		= "ecb(des)",
+	.cra_driver_name	= "stm32-ecb-des",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= DES_BLOCK_SIZE,
+		.max_keysize	= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_des_setkey,
+		.encrypt	= stm32_cryp_des_ecb_encrypt,
+		.decrypt	= stm32_cryp_des_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(des)",
+	.cra_driver_name	= "stm32-cbc-des",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= DES_BLOCK_SIZE,
+		.max_keysize	= DES_BLOCK_SIZE,
+		.ivsize		= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_des_setkey,
+		.encrypt	= stm32_cryp_des_cbc_encrypt,
+		.decrypt	= stm32_cryp_des_cbc_decrypt,
+	}
+},
+{
+	.cra_name		= "ecb(des3_ede)",
+	.cra_driver_name	= "stm32-ecb-des3",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= 3 * DES_BLOCK_SIZE,
+		.max_keysize	= 3 * DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_tdes_setkey,
+		.encrypt	= stm32_cryp_tdes_ecb_encrypt,
+		.decrypt	= stm32_cryp_tdes_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(des3_ede)",
+	.cra_driver_name	= "stm32-cbc-des3",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= 3 * DES_BLOCK_SIZE,
+		.max_keysize	= 3 * DES_BLOCK_SIZE,
+		.ivsize		= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_tdes_setkey,
+		.encrypt	= stm32_cryp_tdes_cbc_encrypt,
+		.decrypt	= stm32_cryp_tdes_cbc_decrypt,
+	}
+},
+};
+
+static struct aead_alg aead_algs[] = {
+{
+	.setkey		= stm32_cryp_aes_aead_setkey,
+	.setauthsize	= stm32_cryp_aes_gcm_setauthsize,
+	.encrypt	= stm32_cryp_aes_gcm_encrypt,
+	.decrypt	= stm32_cryp_aes_gcm_decrypt,
+	.init		= stm32_cryp_aes_aead_init,
+	.exit		= stm32_cryp_aes_aead_exit,
+	.ivsize		= 12,
+	.maxauthsize	= AES_BLOCK_SIZE,
+
+	.base = {
+		.cra_name		= "gcm(aes)",
+		.cra_driver_name	= "stm32-gcm-aes",
+		.cra_priority		= 200,
+		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+		.cra_alignmask		= 0xf,
+		.cra_module		= THIS_MODULE,
+	},
+},
+{
+	.setkey		= stm32_cryp_aes_aead_setkey,
+	.setauthsize	= stm32_cryp_aes_ccm_setauthsize,
+	.encrypt	= stm32_cryp_aes_ccm_encrypt,
+	.decrypt	= stm32_cryp_aes_ccm_decrypt,
+	.init		= stm32_cryp_aes_aead_init,
+	.exit		= stm32_cryp_aes_aead_exit,
+	.ivsize		= AES_BLOCK_SIZE,
+	.maxauthsize	= AES_BLOCK_SIZE,
+
+	.base = {
+		.cra_name		= "ccm(aes)",
+		.cra_driver_name	= "stm32-ccm-aes",
+		.cra_priority		= 200,
+		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+		.cra_alignmask		= 0xf,
+		.cra_module		= THIS_MODULE,
+	},
+},
+};
+
+static const struct stm32_cryp_caps f7_data = {
+	.swap_final = true,
+	.padding_wa = true,
+};
+
+static const struct of_device_id stm32_dt_ids[] = {
+	{ .compatible = "st,stm32f756-cryp", .data = &f7_data},
+	{},
+};
+MODULE_DEVICE_TABLE(of, stm32_dt_ids);
+
+static int stm32_cryp_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct stm32_cryp *cryp;
+	struct resource *res;
+	struct reset_control *rst;
+	const struct of_device_id *match;
+	int irq, ret;
+
+	cryp = devm_kzalloc(dev, sizeof(*cryp), GFP_KERNEL);
+	if (!cryp)
+		return -ENOMEM;
+
+	match = of_match_device(stm32_dt_ids, dev);
+	if (!match)
+		return -ENODEV;
+
+	cryp->caps = match->data;
+	cryp->dev = dev;
+
+	mutex_init(&cryp->lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	cryp->regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(cryp->regs)) {
+		dev_err(dev, "Cannot map CRYP IO\n");
+		return PTR_ERR(cryp->regs);
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "Cannot get IRQ resource\n");
+		return irq;
+	}
+
+	ret = devm_request_threaded_irq(dev, irq, stm32_cryp_irq,
+					stm32_cryp_irq_thread, IRQF_ONESHOT,
+					dev_name(dev), cryp);
+	if (ret) {
+		dev_err(dev, "Cannot grab IRQ\n");
+		return ret;
+	}
+
+	cryp->clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(cryp->clk)) {
+		dev_err(dev, "Could not get clock\n");
+		return PTR_ERR(cryp->clk);
+	}
+
+	ret = clk_prepare_enable(cryp->clk);
+	if (ret) {
+		dev_err(cryp->dev, "Failed to enable clock\n");
+		return ret;
+	}
+
+	rst = devm_reset_control_get(dev, NULL);
+	if (!IS_ERR(rst)) {
+		reset_control_assert(rst);
+		udelay(2);
+		reset_control_deassert(rst);
+	}
+
+	platform_set_drvdata(pdev, cryp);
+
+	spin_lock(&cryp_list.lock);
+	list_add(&cryp->list, &cryp_list.dev_list);
+	spin_unlock(&cryp_list.lock);
+
+	/* Initialize crypto engine */
+	cryp->engine = crypto_engine_alloc_init(dev, 1);
+	if (!cryp->engine) {
+		dev_err(dev, "Could not init crypto engine\n");
+		ret = -ENOMEM;
+		goto err_engine1;
+	}
+
+	cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
+	cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
+	cryp->engine->prepare_aead_request = stm32_cryp_prepare_aead_req;
+	cryp->engine->aead_one_request = stm32_cryp_aead_one_req;
+
+	ret = crypto_engine_start(cryp->engine);
+	if (ret) {
+		dev_err(dev, "Could not start crypto engine\n");
+		goto err_engine2;
+	}
+
+	ret = crypto_register_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+	if (ret) {
+		dev_err(dev, "Could not register algs\n");
+		goto err_algs;
+	}
+
+	ret = crypto_register_aeads(aead_algs, ARRAY_SIZE(aead_algs));
+	if (ret)
+		goto err_aead_algs;
+
+	dev_info(dev, "Initialized\n");
+
+	return 0;
+
+err_aead_algs:
+	crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+err_algs:
+err_engine2:
+	crypto_engine_exit(cryp->engine);
+err_engine1:
+	spin_lock(&cryp_list.lock);
+	list_del(&cryp->list);
+	spin_unlock(&cryp_list.lock);
+
+	clk_disable_unprepare(cryp->clk);
+
+	return ret;
+}
+
+static int stm32_cryp_remove(struct platform_device *pdev)
+{
+	struct stm32_cryp *cryp = platform_get_drvdata(pdev);
+
+	if (!cryp)
+		return -ENODEV;
+
+	crypto_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs));
+	crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+
+	crypto_engine_exit(cryp->engine);
+
+	spin_lock(&cryp_list.lock);
+	list_del(&cryp->list);
+	spin_unlock(&cryp_list.lock);
+
+	clk_disable_unprepare(cryp->clk);
+
+	return 0;
+}
+
+static struct platform_driver stm32_cryp_driver = {
+	.probe  = stm32_cryp_probe,
+	.remove = stm32_cryp_remove,
+	.driver = {
+		.name           = DRIVER_NAME,
+		.of_match_table = stm32_dt_ids,
+	},
+};
+
+module_platform_driver(stm32_cryp_driver);
+
+MODULE_AUTHOR("Fabien Dessenne <fabien.dessenne@st.com>");
+MODULE_DESCRIPTION("STMicroelectronics STM32 CRYP hardware driver");
+MODULE_LICENSE("GPL");
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 3/3] crypto: stm32 - Support for STM32 CRYP crypto module
@ 2017-07-13  9:59   ` Fabien Dessenne
  0 siblings, 0 replies; 16+ messages in thread
From: Fabien Dessenne @ 2017-07-13  9:59 UTC (permalink / raw)
  To: Herbert Xu, David S . Miller, Rob Herring, Mark Rutland,
	Maxime Coquelin, Alexandre Torgue, linux-crypto, devicetree,
	linux-arm-kernel, linux-kernel
  Cc: Benjamin Gaignard, Lionel Debieve, Ludovic Barre

This module registers block and AEAD cipher algorithms that make use of
the STMicroelectronics STM32 crypto "CRYP1" hardware.
The following algorithms are supported:
- aes: ecb, cbc, ctr, gcm, ccm
- des: ecb, cbc
- tdes: ecb, cbc
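
For illustration only (not part of this patch): a minimal sketch of how an
in-kernel user could reach the "gcm(aes)" implementation registered here
through the generic AEAD API. It assumes a kernel that provides the
crypto_wait_req() helpers; the function name, buffer layout and hard-coded
tag size are made up for the example.

#include <crypto/aead.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

/* Encrypt 'len' bytes in place; 'buf' must leave room for the 16-byte tag */
static int demo_gcm_encrypt(const u8 *key, unsigned int keylen,
			    u8 *buf, unsigned int len, u8 iv[12])
{
	struct crypto_aead *tfm;
	struct aead_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_aead_setkey(tfm, key, keylen);
	if (!ret)
		ret = crypto_aead_setauthsize(tfm, 16);
	if (ret)
		goto free_tfm;

	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto free_tfm;
	}

	sg_init_one(&sg, buf, len + 16);
	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				  CRYPTO_TFM_REQ_MAY_SLEEP,
				  crypto_req_done, &wait);
	aead_request_set_ad(req, 0);	/* no associated data */
	aead_request_set_crypt(req, &sg, &sg, len, iv);

	/* Hand the request to the selected implementation and wait */
	ret = crypto_wait_req(crypto_aead_encrypt(req), &wait);

	aead_request_free(req);
free_tfm:
	crypto_free_aead(tfm);
	return ret;
}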

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 drivers/crypto/stm32/Kconfig      |    9 +
 drivers/crypto/stm32/Makefile     |    1 +
 drivers/crypto/stm32/stm32-cryp.c | 1962 +++++++++++++++++++++++++++++++++++++
 3 files changed, 1972 insertions(+)
 create mode 100644 drivers/crypto/stm32/stm32-cryp.c

diff --git a/drivers/crypto/stm32/Kconfig b/drivers/crypto/stm32/Kconfig
index 09b4ec8..c89d651 100644
--- a/drivers/crypto/stm32/Kconfig
+++ b/drivers/crypto/stm32/Kconfig
@@ -5,3 +5,12 @@ config CRYPTO_DEV_STM32
 	help
           This enables support for the CRC32 hw accelerator which can be found
 	  on STMicroelectronis STM32 SOC.
+
+config CRYP_DEV_STM32
+	tristate "Support for STM32 cryp accelerators"
+	depends on ARCH_STM32
+	select CRYPTO_HASH
+	select CRYPTO_ENGINE
+	help
+          This enables support for the CRYP (AES/DES/TDES) hw accelerator which
+	  can be found on STMicroelectronics STM32 SOC.
diff --git a/drivers/crypto/stm32/Makefile b/drivers/crypto/stm32/Makefile
index 73b4c6e..06b51c6 100644
--- a/drivers/crypto/stm32/Makefile
+++ b/drivers/crypto/stm32/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_CRYPTO_DEV_STM32) += stm32_cryp.o
 stm32_cryp-objs := stm32_crc32.o
+obj-$(CONFIG_CRYP_DEV_STM32) += stm32-cryp.o
diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
new file mode 100644
index 0000000..9a02d7c
--- /dev/null
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -0,0 +1,1962 @@
+/*
+ * Copyright (C) STMicroelectronics SA 2017
+ * Author: Fabien Dessenne <fabien.dessenne@st.com>
+ * License terms:  GNU General Public License (GPL), version 2
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+
+#include <crypto/aes.h>
+#include <crypto/des.h>
+#include <crypto/engine.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/aead.h>
+
+#define DRIVER_NAME             "stm32-cryp"
+
+/* Bit [0] encrypt / decrypt */
+#define FLG_ENCRYPT             BIT(0)
+/* Bit [8..1] algo & operation mode */
+#define FLG_AES                 BIT(1)
+#define FLG_DES                 BIT(2)
+#define FLG_TDES                BIT(3)
+#define FLG_ECB                 BIT(4)
+#define FLG_CBC                 BIT(5)
+#define FLG_CTR                 BIT(6)
+#define FLG_GCM                 BIT(7)
+#define FLG_CCM                 BIT(8)
+/* Mode mask = bits [15..0] */
+#define FLG_MODE_MASK           GENMASK(15, 0)
+/* Bit [31..16] status  */
+#define FLG_CCM_PADDED_WA       BIT(16)
+
+/* Registers */
+#define CRYP_CR                 0x00000000
+#define CRYP_SR                 0x00000004
+#define CRYP_DIN                0x00000008
+#define CRYP_DOUT               0x0000000C
+#define CRYP_DMACR              0x00000010
+#define CRYP_IMSCR              0x00000014
+#define CRYP_RISR               0x00000018
+#define CRYP_MISR               0x0000001C
+#define CRYP_K0LR               0x00000020
+#define CRYP_K0RR               0x00000024
+#define CRYP_K1LR               0x00000028
+#define CRYP_K1RR               0x0000002C
+#define CRYP_K2LR               0x00000030
+#define CRYP_K2RR               0x00000034
+#define CRYP_K3LR               0x00000038
+#define CRYP_K3RR               0x0000003C
+#define CRYP_IV0LR              0x00000040
+#define CRYP_IV0RR              0x00000044
+#define CRYP_IV1LR              0x00000048
+#define CRYP_IV1RR              0x0000004C
+#define CRYP_CSGCMCCM0R         0x00000050
+#define CRYP_CSGCM0R            0x00000070
+
+/* Registers values */
+#define CR_DEC_NOT_ENC          0x00000004
+#define CR_TDES_ECB             0x00000000
+#define CR_TDES_CBC             0x00000008
+#define CR_DES_ECB              0x00000010
+#define CR_DES_CBC              0x00000018
+#define CR_AES_ECB              0x00000020
+#define CR_AES_CBC              0x00000028
+#define CR_AES_CTR              0x00000030
+#define CR_AES_KP               0x00000038
+#define CR_AES_GCM              0x00080000
+#define CR_AES_CCM              0x00080008
+#define CR_AES_UNKNOWN          0xFFFFFFFF
+#define CR_ALGO_MASK            0x00080038
+#define CR_DATA32               0x00000000
+#define CR_DATA16               0x00000040
+#define CR_DATA8                0x00000080
+#define CR_DATA1                0x000000C0
+#define CR_KEY128               0x00000000
+#define CR_KEY192               0x00000100
+#define CR_KEY256               0x00000200
+#define CR_FFLUSH               0x00004000
+#define CR_CRYPEN               0x00008000
+#define CR_PH_INIT              0x00000000
+#define CR_PH_HEADER            0x00010000
+#define CR_PH_PAYLOAD           0x00020000
+#define CR_PH_FINAL             0x00030000
+#define CR_PH_MASK              0x00030000
+
+#define SR_BUSY                 0x00000010
+#define SR_OFNE                 0x00000004
+
+#define IMSCR_IN                BIT(0)
+#define IMSCR_OUT               BIT(1)
+
+#define MISR_IN                 BIT(0)
+#define MISR_OUT                BIT(1)
+
+/* Misc */
+#define AES_BLOCK_32            (AES_BLOCK_SIZE / sizeof(u32))
+#define GCM_CTR_INIT            2
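+/* Bytes already processed in the current in/out scatterlist entry */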
+#define _walked_in              (cryp->in_walk.offset - cryp->in_sg->offset)
+#define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
+
+struct stm32_cryp_caps {
+	bool                    swap_final;
+	bool                    padding_wa;
+};
+
+struct stm32_cryp_ctx {
+	struct stm32_cryp       *cryp;
+	int                     keylen;
+	u32                     key[AES_KEYSIZE_256 / sizeof(u32)];
+	unsigned long           flags;
+};
+
+struct stm32_cryp_reqctx {
+	unsigned long mode;
+};
+
+struct stm32_cryp {
+	struct list_head        list;
+	struct device           *dev;
+	void __iomem            *regs;
+	struct clk              *clk;
+	unsigned long           flags;
+	u32                     irq_status;
+	const struct stm32_cryp_caps *caps;
+	struct stm32_cryp_ctx   *ctx;
+
+	struct crypto_engine    *engine;
+
+	struct mutex            lock; /* protects req / areq */
+	struct ablkcipher_request *req;
+	struct aead_request     *areq;
+
+	size_t                  authsize;
+	size_t                  hw_blocksize;
+
+	size_t                  total_in;
+	size_t                  total_in_save;
+	size_t                  total_out;
+	size_t                  total_out_save;
+
+	struct scatterlist      *in_sg;
+	struct scatterlist      *out_sg;
+	struct scatterlist      *out_sg_save;
+
+	struct scatterlist      in_sgl;
+	struct scatterlist      out_sgl;
+	bool                    sgs_copied;
+
+	int                     in_sg_len;
+	int                     out_sg_len;
+
+	struct scatter_walk     in_walk;
+	struct scatter_walk     out_walk;
+
+	u32                     last_ctr[4];
+	u32                     gcm_ctr;
+};
+
+struct stm32_cryp_list {
+	struct list_head        dev_list;
+	spinlock_t              lock; /* protect dev_list */
+};
+
+static struct stm32_cryp_list cryp_list = {
+	.dev_list = LIST_HEAD_INIT(cryp_list.dev_list),
+	.lock     = __SPIN_LOCK_UNLOCKED(cryp_list.lock),
+};
+
+static inline bool is_aes(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_AES;
+}
+
+static inline bool is_des(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_DES;
+}
+
+static inline bool is_tdes(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_TDES;
+}
+
+static inline bool is_ecb(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_ECB;
+}
+
+static inline bool is_cbc(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CBC;
+}
+
+static inline bool is_ctr(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CTR;
+}
+
+static inline bool is_gcm(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_GCM;
+}
+
+static inline bool is_ccm(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CCM;
+}
+
+static inline bool is_encrypt(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_ENCRYPT;
+}
+
+static inline bool is_decrypt(struct stm32_cryp *cryp)
+{
+	return !is_encrypt(cryp);
+}
+
+static inline u32 stm32_cryp_read(struct stm32_cryp *cryp, u32 ofst)
+{
+	return readl_relaxed(cryp->regs + ofst);
+}
+
+static inline void stm32_cryp_write(struct stm32_cryp *cryp, u32 ofst, u32 val)
+{
+	writel_relaxed(val, cryp->regs + ofst);
+}
+
+static inline void stm32_cryp_wait_enable(struct stm32_cryp *cryp)
+{
+	while (stm32_cryp_read(cryp, CRYP_CR) & CR_CRYPEN)
+		cpu_relax();
+}
+
+static inline void stm32_cryp_wait_busy(struct stm32_cryp *cryp)
+{
+	while (stm32_cryp_read(cryp, CRYP_SR) & SR_BUSY)
+		cpu_relax();
+}
+
+static inline void stm32_cryp_wait_output(struct stm32_cryp *cryp)
+{
+	while (!(stm32_cryp_read(cryp, CRYP_SR) & SR_OFNE))
+		cpu_relax();
+}
+
+static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp);
+
+static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
+{
+	struct stm32_cryp *tmp, *cryp = NULL;
+
+	spin_lock_bh(&cryp_list.lock);
+	if (!ctx->cryp) {
+		list_for_each_entry(tmp, &cryp_list.dev_list, list) {
+			cryp = tmp;
+			break;
+		}
+		ctx->cryp = cryp;
+	} else {
+		cryp = ctx->cryp;
+	}
+
+	spin_unlock_bh(&cryp_list.lock);
+
+	return cryp;
+}
+
+static int stm32_cryp_check_aligned(struct scatterlist *sg, size_t total,
+				    size_t align)
+{
+	int len = 0;
+
+	if (!total)
+		return 0;
+
+	if (!IS_ALIGNED(total, align))
+		return -EINVAL;
+
+	while (sg) {
+		if (!IS_ALIGNED(sg->offset, sizeof(u32)))
+			return -1;
+
+		if (!IS_ALIGNED(sg->length, align))
+			return -1;
+
+		len += sg->length;
+		sg = sg_next(sg);
+	}
+
+	if (len != total)
+		return -1;
+
+	return 0;
+}
+
+static int stm32_cryp_check_io_aligned(struct stm32_cryp *cryp)
+{
+	int ret;
+
+	ret = stm32_cryp_check_aligned(cryp->in_sg, cryp->total_in,
+				       cryp->hw_blocksize);
+	if (ret)
+		return ret;
+
+	ret = stm32_cryp_check_aligned(cryp->out_sg, cryp->total_out,
+				       cryp->hw_blocksize);
+
+	return ret;
+}
+
+static void sg_copy_buf(void *buf, struct scatterlist *sg,
+			unsigned int start, unsigned int nbytes, int out)
+{
+	struct scatter_walk walk;
+
+	if (!nbytes)
+		return;
+
+	scatterwalk_start(&walk, sg);
+	scatterwalk_advance(&walk, start);
+	scatterwalk_copychunks(buf, &walk, nbytes, out);
+	scatterwalk_done(&walk, out, 0);
+}
+
+static int stm32_cryp_copy_sgs(struct stm32_cryp *cryp)
+{
+	void *buf_in, *buf_out;
+	int pages, total_in, total_out;
+
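+	/*
+	 * When the request scatterlists are not 32-bit / block aligned, bounce
+	 * the data through contiguous buffers so the FIFO can be fed with
+	 * full words; the result is copied back in stm32_cryp_finish_req().
+	 */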
+	if (!stm32_cryp_check_io_aligned(cryp)) {
+		cryp->sgs_copied = 0;
+		return 0;
+	}
+
+	total_in = ALIGN(cryp->total_in, cryp->hw_blocksize);
+	pages = total_in ? get_order(total_in) : 1;
+	buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages);
+
+	total_out = ALIGN(cryp->total_out, cryp->hw_blocksize);
+	pages = total_out ? get_order(total_out) : 1;
+	buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages);
+
+	if (!buf_in || !buf_out) {
+		pr_err("Couldn't allocate pages for unaligned cases.\n");
+		cryp->sgs_copied = 0;
+		return -1;
+	}
+
+	sg_copy_buf(buf_in, cryp->in_sg, 0, cryp->total_in, 0);
+
+	sg_init_one(&cryp->in_sgl, buf_in, total_in);
+	cryp->in_sg = &cryp->in_sgl;
+	cryp->in_sg_len = 1;
+
+	sg_init_one(&cryp->out_sgl, buf_out, total_out);
+	cryp->out_sg_save = cryp->out_sg;
+	cryp->out_sg = &cryp->out_sgl;
+	cryp->out_sg_len = 1;
+
+	cryp->sgs_copied = 1;
+
+	return 0;
+}
+
+static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, u32 *iv)
+{
+	if (!iv)
+		return;
+
+	stm32_cryp_write(cryp, CRYP_IV0LR, cpu_to_be32(*iv++));
+	stm32_cryp_write(cryp, CRYP_IV0RR, cpu_to_be32(*iv++));
+
+	if (is_aes(cryp)) {
+		stm32_cryp_write(cryp, CRYP_IV1LR, cpu_to_be32(*iv++));
+		stm32_cryp_write(cryp, CRYP_IV1RR, cpu_to_be32(*iv++));
+	}
+}
+
+static void stm32_cryp_hw_write_key(struct stm32_cryp *c)
+{
+	unsigned int i;
+	int r_id;
+
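+	/*
+	 * The key is right-aligned in the key registers: the last key word
+	 * goes into K3RR and the remaining words fill the preceding registers
+	 * (down to K0LR for a 256-bit AES key).
+	 */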
+	if (is_des(c)) {
+		stm32_cryp_write(c, CRYP_K1LR, cpu_to_be32(c->ctx->key[0]));
+		stm32_cryp_write(c, CRYP_K1RR, cpu_to_be32(c->ctx->key[1]));
+	} else {
+		r_id = CRYP_K3RR;
+		for (i = c->ctx->keylen / sizeof(u32); i > 0; i--, r_id -= 4)
+			stm32_cryp_write(c, r_id,
+					 cpu_to_be32(c->ctx->key[i - 1]));
+	}
+}
+
+static u32 stm32_cryp_get_hw_mode(struct stm32_cryp *cryp)
+{
+	if (is_aes(cryp) && is_ecb(cryp))
+		return CR_AES_ECB;
+
+	if (is_aes(cryp) && is_cbc(cryp))
+		return CR_AES_CBC;
+
+	if (is_aes(cryp) && is_ctr(cryp))
+		return CR_AES_CTR;
+
+	if (is_aes(cryp) && is_gcm(cryp))
+		return CR_AES_GCM;
+
+	if (is_aes(cryp) && is_ccm(cryp))
+		return CR_AES_CCM;
+
+	if (is_des(cryp) && is_ecb(cryp))
+		return CR_DES_ECB;
+
+	if (is_des(cryp) && is_cbc(cryp))
+		return CR_DES_CBC;
+
+	if (is_tdes(cryp) && is_ecb(cryp))
+		return CR_TDES_ECB;
+
+	if (is_tdes(cryp) && is_cbc(cryp))
+		return CR_TDES_CBC;
+
+	dev_err(cryp->dev, "Unknown mode\n");
+	return CR_AES_UNKNOWN;
+}
+
+static void stm32_cryp_gcm_init(struct stm32_cryp *cryp, u32 cfg)
+{
+	u32 iv[4];
+
+	/* Phase 1 : init */
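+	/*
+	 * The payload counter starts at 2: counter value 1 (J0) is reserved
+	 * for computing the tag.
+	 */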
+	memcpy(iv, cryp->areq->iv, 12);
+	iv[3] = cpu_to_be32(GCM_CTR_INIT);
+	cryp->gcm_ctr = GCM_CTR_INIT;
+	stm32_cryp_hw_write_iv(cryp, iv);
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg | CR_PH_INIT | CR_CRYPEN);
+
+	/* Wait for end of processing */
+	stm32_cryp_wait_enable(cryp);
+}
+
+static void stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
+{
+	u8 iv[AES_BLOCK_SIZE], b0[AES_BLOCK_SIZE];
+	u32 *d;
+	unsigned int i, textlen;
+
+	/* Phase 1 : init. Firstly set the CTR value to 1 (not 0) */
+	memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+	memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+	iv[AES_BLOCK_SIZE - 1] = 1;
+	stm32_cryp_hw_write_iv(cryp, (u32 *)iv);
+
+	/* Build B0 */
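+	/*
+	 * B0 reuses the flags/nonce from the IV; add the Adata flag, encode
+	 * the tag length as (authsize - 2) / 2 and the payload length in the
+	 * trailing bytes, as defined by the CCM specification.
+	 */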
+	memcpy(b0, iv, AES_BLOCK_SIZE);
+
+	b0[0] |= (8 * ((cryp->authsize - 2) / 2));
+
+	if (cryp->areq->assoclen)
+		b0[0] |= 0x40;
+
+	if (is_encrypt(cryp))
+		textlen = cryp->areq->cryptlen;
+	else
+		textlen = cryp->areq->cryptlen - cryp->authsize;
+
+	b0[AES_BLOCK_SIZE - 2] = textlen >> 8;
+	b0[AES_BLOCK_SIZE - 1] = textlen & 0xFF;
+
+	/* Enable HW */
+	stm32_cryp_write(cryp, CRYP_CR, cfg | CR_PH_INIT | CR_CRYPEN);
+
+	/* Write B0 */
+	d = (u32 *)b0;
+	for (i = 0; i < AES_BLOCK_32; i++)
+		stm32_cryp_write(cryp, CRYP_DIN, *d++);
+
+	/* Wait for end of processing */
+	stm32_cryp_wait_enable(cryp);
+}
+
+static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+{
+	u32 cfg, hw_mode;
+
+	/* Disable interrupt */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+
+	/* Set key */
+	stm32_cryp_hw_write_key(cryp);
+
+	/* Set configuration */
+	cfg = CR_DATA8 | CR_FFLUSH;
+
+	switch (cryp->ctx->keylen) {
+	case AES_KEYSIZE_128:
+		cfg |= CR_KEY128;
+		break;
+
+	case AES_KEYSIZE_192:
+		cfg |= CR_KEY192;
+		break;
+
+	default:
+	case AES_KEYSIZE_256:
+		cfg |= CR_KEY256;
+		break;
+	}
+
+	hw_mode = stm32_cryp_get_hw_mode(cryp);
+	if (hw_mode == CR_AES_UNKNOWN)
+		return -EINVAL;
+
+	/* AES ECB/CBC decrypt: run key preparation first */
+	if (is_decrypt(cryp) &&
+	    ((hw_mode == CR_AES_ECB) || (hw_mode == CR_AES_CBC))) {
+		stm32_cryp_write(cryp, CRYP_CR, cfg | CR_AES_KP | CR_CRYPEN);
+
+		/* Wait for end of processing */
+		stm32_cryp_wait_busy(cryp);
+	}
+
+	cfg |= hw_mode;
+
+	if (is_decrypt(cryp))
+		cfg |= CR_DEC_NOT_ENC;
+
+	/* Apply config and flush (valid when CRYPEN = 0) */
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	switch (hw_mode) {
+	case CR_AES_GCM:
+	case CR_AES_CCM:
+		/* Phase 1 : init */
+		if (hw_mode == CR_AES_CCM)
+			stm32_cryp_ccm_init(cryp, cfg);
+		else
+			stm32_cryp_gcm_init(cryp, cfg);
+
+		/* Phase 2 : header (authenticated data) */
+		if (cryp->areq->assoclen) {
+			cfg |= CR_PH_HEADER;
+		} else if (cryp->areq->cryptlen) {
+			/* Phase 3 : payload */
+			cfg |= CR_PH_PAYLOAD;
+			stm32_cryp_write(cryp, CRYP_CR, cfg);
+		} else {
+			cfg |= CR_PH_INIT;
+		}
+
+		break;
+
+	case CR_DES_CBC:
+	case CR_TDES_CBC:
+	case CR_AES_CBC:
+	case CR_AES_CTR:
+		stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->req->info);
+		break;
+
+	default:
+		break;
+	}
+
+	/* Enable now */
+	cfg |= CR_CRYPEN;
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	cryp->flags &= ~FLG_CCM_PADDED_WA;
+
+	return 0;
+}
+
+static void stm32_cryp_finish_req(struct stm32_cryp *cryp)
+{
+	int err = 0;
+
+	if (is_gcm(cryp) || is_ccm(cryp))
+		/* Phase 4 : output tag */
+		err = stm32_cryp_read_auth_tag(cryp);
+
+	if (cryp->sgs_copied) {
+		void *buf_in, *buf_out;
+		int pages, len;
+
+		buf_in = sg_virt(&cryp->in_sgl);
+		buf_out = sg_virt(&cryp->out_sgl);
+
+		sg_copy_buf(buf_out, cryp->out_sg_save, 0,
+			    cryp->total_out_save, 1);
+
+		len = ALIGN(cryp->total_in_save, cryp->hw_blocksize);
+		pages = len ? get_order(len) : 1;
+		free_pages((unsigned long)buf_in, pages);
+
+		len = ALIGN(cryp->total_out_save, cryp->hw_blocksize);
+		pages = len ? get_order(len) : 1;
+		free_pages((unsigned long)buf_out, pages);
+	}
+
+	if (is_gcm(cryp) || is_ccm(cryp)) {
+		crypto_finalize_aead_request(cryp->engine, cryp->areq, err);
+		cryp->areq = NULL;
+	} else {
+		crypto_finalize_cipher_request(cryp->engine, cryp->req, err);
+		cryp->req = NULL;
+	}
+
+	mutex_unlock(&cryp->lock);
+}
+
+static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
+{
+	if ((stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+	    !cryp->areq->assoclen && !cryp->areq->cryptlen)
+		/* No input data, get output tag (phase 4) and finish */
+		stm32_cryp_finish_req(cryp);
+	else
+		/* Enable interrupt and let the IRQ handler do everything */
+		stm32_cryp_write(cryp, CRYP_IMSCR, IMSCR_IN | IMSCR_OUT);
+
+	return 0;
+}
+
+static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
+{
+	tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
+
+	return 0;
+}
+
+static int stm32_cryp_aes_aead_init(struct crypto_aead *tfm)
+{
+	tfm->reqsize = sizeof(struct stm32_cryp_reqctx);
+
+	return 0;
+}
+
+static void stm32_cryp_cra_exit(struct crypto_tfm *tfm)
+{
+}
+
+static void stm32_cryp_aes_aead_exit(struct crypto_aead *tfm)
+{
+}
+
+static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
+			crypto_ablkcipher_reqtfm(req));
+	struct stm32_cryp_reqctx *rctx = ablkcipher_request_ctx(req);
+	struct stm32_cryp *cryp = stm32_cryp_find_dev(ctx);
+
+	if (!cryp)
+		return -ENODEV;
+
+	rctx->mode = mode;
+
+	return crypto_transfer_cipher_request_to_engine(cryp->engine, req);
+}
+
+static int stm32_cryp_aead_crypt(struct aead_request *req, unsigned long mode)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct stm32_cryp_reqctx *rctx = aead_request_ctx(req);
+	struct stm32_cryp *cryp = stm32_cryp_find_dev(ctx);
+
+	if (!cryp)
+		return -ENODEV;
+
+	rctx->mode = mode;
+
+	return crypto_transfer_aead_request_to_engine(cryp->engine, req);
+}
+
+static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+			     unsigned int keylen)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return 0;
+}
+
+static int stm32_cryp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				 unsigned int keylen)
+{
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
+	    keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_des_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				 unsigned int keylen)
+{
+	if (keylen != DES_KEY_SIZE)
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_tdes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				  unsigned int keylen)
+{
+	if (keylen != (3 * DES_KEY_SIZE))
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+				      unsigned int keylen)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
+	    keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return 0;
+}
+
+static int stm32_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+					  unsigned int authsize)
+{
+	return authsize == AES_BLOCK_SIZE ? 0 : -EINVAL;
+}
+
+static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
+					  unsigned int authsize)
+{
+	switch (authsize) {
+	case 4:
+	case 6:
+	case 8:
+	case 10:
+	case 12:
+	case 14:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int stm32_cryp_aes_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB);
+}
+
+static int stm32_cryp_aes_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC);
+}
+
+static int stm32_cryp_aes_ctr_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ctr_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR);
+}
+
+static int stm32_cryp_aes_gcm_encrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_gcm_decrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM);
+}
+
+static int stm32_cryp_aes_ccm_encrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM);
+}
+
+static int stm32_cryp_des_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_des_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB);
+}
+
+static int stm32_cryp_des_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_des_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC);
+}
+
+static int stm32_cryp_tdes_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_tdes_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB);
+}
+
+static int stm32_cryp_tdes_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_tdes_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC);
+}
+
+static int stm32_cryp_prepare_req(struct crypto_engine *engine,
+				  struct ablkcipher_request *req,
+				  struct aead_request *areq)
+{
+	struct stm32_cryp_ctx *ctx;
+	struct stm32_cryp *cryp;
+	struct stm32_cryp_reqctx *rctx;
+	int ret;
+
+	if (!req && !areq)
+		return -EINVAL;
+
+	ctx = req ? crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)) :
+		    crypto_aead_ctx(crypto_aead_reqtfm(areq));
+
+	cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	mutex_lock(&cryp->lock);
+
+	rctx = req ? ablkcipher_request_ctx(req) : aead_request_ctx(areq);
+	rctx->mode &= FLG_MODE_MASK;
+
+	ctx->cryp = cryp;
+
+	cryp->flags = (cryp->flags & ~FLG_MODE_MASK) | rctx->mode;
+	cryp->hw_blocksize = is_aes(cryp) ? AES_BLOCK_SIZE : DES_BLOCK_SIZE;
+	cryp->ctx = ctx;
+
+	if (req) {
+		cryp->req = req;
+		cryp->total_in = req->nbytes;
+		cryp->total_out = cryp->total_in;
+	} else {
+		/*
+		 * Length of input and output data:
+		 * Encryption case:
+		 *  INPUT  =   AssocData  ||   PlainText
+		 *          <- assoclen ->  <- cryptlen ->
+		 *          <------- total_in ----------->
+		 *
+		 *  OUTPUT =   AssocData  ||  CipherText  ||   AuthTag
+		 *          <- assoclen ->  <- cryptlen ->  <- authsize ->
+		 *          <---------------- total_out ----------------->
+		 *
+		 * Decryption case:
+		 *  INPUT  =   AssocData  ||  CipherText  ||  AuthTag
+		 *          <- assoclen ->  <--------- cryptlen --------->
+		 *                                          <- authsize ->
+		 *          <---------------- total_in ------------------>
+		 *
+		 *  OUTPUT =   AssocData  ||   PlainText
+		 *          <- assoclen ->  <- cryptlen - authsize ->
+		 *          <---------- total_out ----------------->
+		 */
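+		/*
+		 * Worked example (illustrative numbers): GCM encryption with
+		 * assoclen = 16, cryptlen = 64 and a 16-byte tag gives
+		 * total_in = 80 and total_out = 96.
+		 */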
+		cryp->areq = areq;
+		cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq));
+		cryp->total_in = areq->assoclen + areq->cryptlen;
+		if (is_encrypt(cryp))
+			/* Append auth tag to output */
+			cryp->total_out = cryp->total_in + cryp->authsize;
+		else
+			/* No auth tag in output */
+			cryp->total_out = cryp->total_in - cryp->authsize;
+	}
+
+	cryp->total_in_save = cryp->total_in;
+	cryp->total_out_save = cryp->total_out;
+
+	cryp->in_sg = req ? req->src : areq->src;
+	cryp->out_sg = req ? req->dst : areq->dst;
+	cryp->out_sg_save = cryp->out_sg;
+
+	cryp->in_sg_len = sg_nents_for_len(cryp->in_sg, cryp->total_in);
+	if (cryp->in_sg_len < 0) {
+		dev_err(cryp->dev, "Cannot get in_sg_len\n");
+		ret = cryp->in_sg_len;
+		goto out;
+	}
+
+	cryp->out_sg_len = sg_nents_for_len(cryp->out_sg, cryp->total_out);
+	if (cryp->out_sg_len < 0) {
+		dev_err(cryp->dev, "Cannot get out_sg_len\n");
+		ret = cryp->out_sg_len;
+		goto out;
+	}
+
+	stm32_cryp_copy_sgs(cryp);
+
+	scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+	scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+
+	if (is_gcm(cryp) || is_ccm(cryp)) {
+		/* In output, jump after assoc data */
+		scatterwalk_advance(&cryp->out_walk, cryp->areq->assoclen);
+		cryp->total_out -= cryp->areq->assoclen;
+	}
+
+	ret = stm32_cryp_hw_init(cryp);
+out:
+	if (ret)
+		mutex_unlock(&cryp->lock);
+
+	return ret;
+}
+
+static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
+					 struct ablkcipher_request *req)
+{
+	return stm32_cryp_prepare_req(engine, req, NULL);
+}
+
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
+				     struct ablkcipher_request *req)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
+			crypto_ablkcipher_reqtfm(req));
+	struct stm32_cryp *cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	return stm32_cryp_cpu_start(cryp);
+}
+
+static int stm32_cryp_prepare_aead_req(struct crypto_engine *engine,
+				       struct aead_request *areq)
+{
+	return stm32_cryp_prepare_req(engine, NULL, areq);
+}
+
+static int stm32_cryp_aead_one_req(struct crypto_engine *engine,
+				   struct aead_request *areq)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(areq));
+	struct stm32_cryp *cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	return stm32_cryp_cpu_start(cryp);
+}
+
+static u32 *stm32_cryp_next_out(struct stm32_cryp *cryp, u32 *dst,
+				unsigned int n)
+{
+	scatterwalk_advance(&cryp->out_walk, n);
+
+	if (unlikely(cryp->out_sg->length == _walked_out)) {
+		cryp->out_sg = sg_next(cryp->out_sg);
+		if (cryp->out_sg) {
+			scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+			return (sg_virt(cryp->out_sg) + _walked_out);
+		}
+	}
+
+	return (u32 *)((u8 *)dst + n);
+}
+
+static u32 *stm32_cryp_next_in(struct stm32_cryp *cryp, u32 *src,
+			       unsigned int n)
+{
+	scatterwalk_advance(&cryp->in_walk, n);
+
+	if (unlikely(cryp->in_sg->length == _walked_in)) {
+		cryp->in_sg = sg_next(cryp->in_sg);
+		if (cryp->in_sg) {
+			scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+			return (sg_virt(cryp->in_sg) + _walked_in);
+		}
+	}
+
+	return (u32 *)((u8 *)src + n);
+}
+
+static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+{
+	u32 cfg, size_bit, *dst, d32;
+	u8 *d8;
+	unsigned int i, j;
+	int ret = 0;
+
+	/* Update Config */
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_FINAL;
+	cfg &= ~CR_DEC_NOT_ENC;
+	cfg |= CR_CRYPEN;
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	if (is_gcm(cryp)) {
+		/* GCM: write aad and payload size (in bits) */
+		size_bit = cryp->areq->assoclen * 8;
+		if (cryp->caps->swap_final)
+			size_bit = cpu_to_be32(size_bit);
+
+		stm32_cryp_write(cryp, CRYP_DIN, 0);
+		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+
+		size_bit = is_encrypt(cryp) ? cryp->areq->cryptlen :
+				cryp->areq->cryptlen - AES_BLOCK_SIZE;
+		size_bit *= 8;
+		if (cryp->caps->swap_final)
+			size_bit = cpu_to_be32(size_bit);
+
+		stm32_cryp_write(cryp, CRYP_DIN, 0);
+		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+	} else {
+		/* CCM: write CTR0 */
+		u8 iv[AES_BLOCK_SIZE];
+		u32 *iv32 = (u32 *)iv;
+
+		memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+		memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+
+		for (i = 0; i < AES_BLOCK_32; i++)
+			stm32_cryp_write(cryp, CRYP_DIN, *iv32++);
+	}
+
+	/* Wait for output data */
+	stm32_cryp_wait_output(cryp);
+
+	if (is_encrypt(cryp)) {
+		/* Get and write tag */
+		dst = sg_virt(cryp->out_sg) + _walked_out;
+
+		for (i = 0; i < AES_BLOCK_32; i++) {
+			if (cryp->total_out >= sizeof(u32)) {
+				/* Read a full u32 */
+				*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+
+				dst = stm32_cryp_next_out(cryp, dst,
+							  sizeof(u32));
+				cryp->total_out -= sizeof(u32);
+			} else if (!cryp->total_out) {
+				/* Empty fifo out (data from input padding) */
+				stm32_cryp_read(cryp, CRYP_DOUT);
+			} else {
+				/* Read less than a u32 */
+				d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+				d8 = (u8 *)&d32;
+
+				for (j = 0; j < cryp->total_out; j++) {
+					*((u8 *)dst) = *(d8++);
+					dst = stm32_cryp_next_out(cryp, dst, 1);
+				}
+				cryp->total_out = 0;
+			}
+		}
+	} else if (!(cryp->flags & FLG_CCM_PADDED_WA)) {
+		/*
+		 *  FIXME: when CCM workaround has been run, the tag is wrongly
+		 *  computed. Hence it shall not be compared with the expected
+		 *  input tag.
+		 */
+		u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32];
+
+		scatterwalk_map_and_copy(in_tag, cryp->in_sg,
+					 cryp->total_in_save - cryp->authsize,
+					 cryp->authsize, 0);
+
+		for (i = 0; i < AES_BLOCK_32; i++)
+			out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+
+		if (crypto_memneq(in_tag, out_tag, cryp->authsize))
+			ret = -EBADMSG;
+	}
+
+	/* Disable cryp */
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	return ret;
+}
+
+static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp)
+{
+	u32 cr;
+
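+	/*
+	 * In CTR mode the peripheral only increments the low 32-bit counter
+	 * word (IV1RR): when it is about to wrap, propagate the carry to the
+	 * upper IV words in software and reload the full IV.
+	 */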
+	if (unlikely(cryp->last_ctr[3] == 0xFFFFFFFF)) {
+		cryp->last_ctr[3] = 0;
+		cryp->last_ctr[2]++;
+		if (!cryp->last_ctr[2]) {
+			cryp->last_ctr[1]++;
+			if (!cryp->last_ctr[1])
+				cryp->last_ctr[0]++;
+		}
+
+		cr = stm32_cryp_read(cryp, CRYP_CR);
+		stm32_cryp_write(cryp, CRYP_CR, cr & ~CR_CRYPEN);
+
+		stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->last_ctr);
+
+		stm32_cryp_write(cryp, CRYP_CR, cr);
+	}
+
+	cryp->last_ctr[0] = stm32_cryp_read(cryp, CRYP_IV0LR);
+	cryp->last_ctr[1] = stm32_cryp_read(cryp, CRYP_IV0RR);
+	cryp->last_ctr[2] = stm32_cryp_read(cryp, CRYP_IV1LR);
+	cryp->last_ctr[3] = stm32_cryp_read(cryp, CRYP_IV1RR);
+}
+
+static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 d32, *dst;
+	u8 *d8;
+	size_t tag_size;
+
+	/* Do not read the tag now (if any) */
+	if (is_encrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+		tag_size = cryp->authsize;
+	else
+		tag_size = 0;
+
+	dst = sg_virt(cryp->out_sg) + _walked_out;
+
+	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+		if (likely(cryp->total_out - tag_size >= sizeof(u32))) {
+			/* Read a full u32 */
+			*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+
+			dst = stm32_cryp_next_out(cryp, dst, sizeof(u32));
+			cryp->total_out -= sizeof(u32);
+		} else if (cryp->total_out == tag_size) {
+			/* Empty fifo out (data from input padding) */
+			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+		} else {
+			/* Read less than a u32 */
+			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+			d8 = (u8 *)&d32;
+
+			for (j = 0; j < cryp->total_out - tag_size; j++) {
+				*((u8 *)dst) = *(d8++);
+				dst = stm32_cryp_next_out(cryp, dst, 1);
+			}
+			cryp->total_out = tag_size;
+		}
+	}
+
+	return !(cryp->total_out - tag_size) || !cryp->total_in;
+}
+
+static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 *src;
+	u8 d8[4];
+	size_t tag_size;
+
+	/* Do not write the tag (if any) */
+	if (is_decrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+		tag_size = cryp->authsize;
+	else
+		tag_size = 0;
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+
+	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+		if (likely(cryp->total_in - tag_size >= sizeof(u32))) {
+			/* Write a full u32 */
+			stm32_cryp_write(cryp, CRYP_DIN, *src);
+
+			src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+			cryp->total_in -= sizeof(u32);
+		} else if (cryp->total_in == tag_size) {
+			/* Write padding data */
+			stm32_cryp_write(cryp, CRYP_DIN, 0);
+		} else {
+			/* Write less than a u32 */
+			memset(d8, 0, sizeof(u32));
+			for (j = 0; j < cryp->total_in - tag_size; j++) {
+				d8[j] = *((u8 *)src);
+				src = stm32_cryp_next_in(cryp, src, 1);
+			}
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			cryp->total_in = tag_size;
+		}
+	}
+}
+
+static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+{
+	u32 cfg, tmp[AES_BLOCK_32];
+	size_t total_in_ori = cryp->total_in;
+	struct scatterlist *out_sg_ori = cryp->out_sg;
+	unsigned int i;
+
+	/* 'Special workaround' procedure described in the datasheet */
+
+	/* a) disable ip */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) Update IV1R */
+	stm32_cryp_write(cryp, CRYP_IV1RR, cryp->gcm_ctr - 2);
+
+	/* c) change mode to CTR */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CTR;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* a) enable IP */
+	cfg |= CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) pad and write the last block */
+	stm32_cryp_irq_write_block(cryp);
+	cryp->total_in = total_in_ori;
+	stm32_cryp_wait_output(cryp);
+
+	/* c) get and store encrypted data */
+	stm32_cryp_irq_read_data(cryp);
+	scatterwalk_map_and_copy(tmp, out_sg_ori,
+				 cryp->total_in_save - total_in_ori,
+				 total_in_ori, 0);
+
+	/* d) change mode back to AES GCM */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_GCM;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* e) change phase to Final */
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_FINAL;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* f) write padded data */
+	for (i = 0; i < AES_BLOCK_32; i++) {
+		if (cryp->total_in)
+			stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+		else
+			stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+	}
+
+	/* g) Empty fifo out */
+	stm32_cryp_wait_output(cryp);
+
+	for (i = 0; i < AES_BLOCK_32; i++)
+		stm32_cryp_read(cryp, CRYP_DOUT);
+
+	/* h) run the normal Final phase */
+	stm32_cryp_finish_req(cryp);
+}
+
+static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+{
+	u32 cfg, iv1tmp;
+	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32];
+	size_t last_total_out, total_in_ori = cryp->total_in;
+	struct scatterlist *out_sg_ori = cryp->out_sg;
+	unsigned int i;
+
+	/* 'Special workaround' procedure described in the datasheet */
+	cryp->flags |= FLG_CCM_PADDED_WA;
+
+	/* a) disable ip */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) get IV1 from CRYP_CSGCMCCM7 */
+	iv1tmp = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + 7 * 4);
+
+	/* c) Load CRYP_CSGCMCCMxR */
+	for (i = 0; i < ARRAY_SIZE(cstmp1); i++)
+		cstmp1[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4);
+
+	/* d) Write IV1R */
+	stm32_cryp_write(cryp, CRYP_IV1RR, iv1tmp);
+
+	/* e) change mode to CTR */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CTR;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* a) enable IP */
+	cfg |= CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) pad and write the last block */
+	stm32_cryp_irq_write_block(cryp);
+	cryp->total_in = total_in_ori;
+	stm32_cryp_wait_output(cryp);
+
+	/* c) get and store decrypted data */
+	last_total_out = cryp->total_out;
+	stm32_cryp_irq_read_data(cryp);
+
+	memset(tmp, 0, sizeof(tmp));
+	scatterwalk_map_and_copy(tmp, out_sg_ori,
+				 cryp->total_out_save - last_total_out,
+				 last_total_out, 0);
+
+	/* d) Load again CRYP_CSGCMCCMxR */
+	for (i = 0; i < ARRAY_SIZE(cstmp2); i++)
+		cstmp2[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4);
+
+	/* e) change mode back to AES CCM */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CCM;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* f) change phase to header */
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_HEADER;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* g) XOR and write padded data */
+	for (i = 0; i < ARRAY_SIZE(tmp); i++) {
+		tmp[i] ^= cstmp1[i];
+		tmp[i] ^= cstmp2[i];
+		stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+	}
+
+	/* h) wait for completion */
+	stm32_cryp_wait_busy(cryp);
+
+	/* i) run the normal Final phase */
+	stm32_cryp_finish_req(cryp);
+}
+
+static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
+{
+	if (unlikely(!cryp->total_in)) {
+		dev_warn(cryp->dev, "No more data to process\n");
+		return;
+	}
+
+	if (unlikely(cryp->caps->padding_wa &&
+		     (cryp->total_in < AES_BLOCK_SIZE) &&
+		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+		     (is_encrypt(cryp))))
+		/* Special case 1: padding for AES GCM encryption */
+		return stm32_cryp_irq_write_gcm_padded_data(cryp);
+
+	if (unlikely(cryp->caps->padding_wa &&
+		     (cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) &&
+		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_CCM) &&
+		     (is_decrypt(cryp))))
+		/* Special case 2: padding for AES CCM decryption */
+		return stm32_cryp_irq_write_ccm_padded_data(cryp);
+
+	if (is_aes(cryp) && is_ctr(cryp))
+		stm32_cryp_check_ctr_counter(cryp);
+
+	stm32_cryp_irq_write_block(cryp);
+}
+
+static void stm32_cryp_irq_write_gcm_header(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 cfg, *src;
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+
+	for (i = 0; i < AES_BLOCK_32; i++) {
+		stm32_cryp_write(cryp, CRYP_DIN, *src);
+
+		src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+
+		/* Check if whole header written */
+		if ((cryp->total_in_save - cryp->total_in) ==
+				cryp->areq->assoclen) {
+			/* Write padding if needed */
+			for (j = i + 1; j < AES_BLOCK_32; j++)
+				stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+			/* Wait for completion */
+			stm32_cryp_wait_busy(cryp);
+
+			if (cryp->areq->cryptlen) {
+				/* Phase 3 : payload */
+				cfg = stm32_cryp_read(cryp, CRYP_CR);
+				cfg &= ~CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+				cfg &= ~CR_PH_MASK;
+				cfg |= CR_PH_PAYLOAD;
+				cfg |= CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+			} else {
+				/* Phase 4 : tag */
+				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+				stm32_cryp_finish_req(cryp);
+			}
+
+			break;
+		}
+
+		if (!cryp->total_in)
+			break;
+	}
+}
+
+static void stm32_cryp_irq_write_ccm_header(struct stm32_cryp *cryp)
+{
+	unsigned int i = 0, j, k;
+	u32 alen, cfg, *src;
+	u8 d8[4];
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+	alen = cryp->areq->assoclen;
+
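+	/*
+	 * B1 starts with the encoded associated data length: two bytes when
+	 * alen < 2^16 - 2^8, otherwise 0xFF 0xFE followed by the four-byte
+	 * length, as defined by the CCM specification.
+	 */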
+	if (!_walked_in) {
+		if (cryp->areq->assoclen < 65280) {
+			/* Write first u32 of B1 */
+			d8[0] = (alen >> 8) & 0xFF;
+			d8[1] = alen & 0xFF;
+			d8[2] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+			d8[3] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+		} else {
+			/* Build the first two u32 of B1 */
+			d8[0] = 0xFF;
+			d8[1] = 0xFE;
+			d8[2] = (alen & 0xFF000000) >> 24;
+			d8[3] = (alen & 0x00FF0000) >> 16;
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			d8[0] = (alen & 0x0000FF00) >> 8;
+			d8[1] = alen & 0x000000FF;
+			d8[2] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+			d8[3] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+		}
+	}
+
+	/* Write next u32 */
+	for (; i < AES_BLOCK_32; i++) {
+		/* Build a u32 */
+		memset(d8, 0, sizeof(u32));
+		for (k = 0; k < sizeof(u32); k++) {
+			d8[k] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			cryp->total_in -= min_t(size_t, 1, cryp->total_in);
+			if ((cryp->total_in_save - cryp->total_in) == alen)
+				break;
+		}
+
+		stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+
+		if ((cryp->total_in_save - cryp->total_in) == alen) {
+			/* Write padding if needed */
+			for (j = i + 1; j < AES_BLOCK_32; j++)
+				stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+			/* Wait for completion */
+			stm32_cryp_wait_busy(cryp);
+
+			if (cryp->areq->cryptlen) {
+				/* Phase 3 : payload */
+				cfg = stm32_cryp_read(cryp, CRYP_CR);
+				cfg &= ~CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+				cfg &= ~CR_PH_MASK;
+				cfg |= CR_PH_PAYLOAD;
+				cfg |= CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+			} else {
+				/* Phase 4 : tag */
+				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+				stm32_cryp_finish_req(cryp);
+			}
+
+			break;
+		}
+	}
+}
+
+static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
+{
+	struct stm32_cryp *cryp = arg;
+	u32 ph;
+
+	if (cryp->irq_status & MISR_OUT)
+		/* Output FIFO IRQ: read data */
+		if (unlikely(stm32_cryp_irq_read_data(cryp))) {
+			/* All bytes processed, finish */
+			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+			stm32_cryp_finish_req(cryp);
+			return IRQ_HANDLED;
+		}
+
+	if (cryp->irq_status & MISR_IN) {
+		if (is_gcm(cryp)) {
+			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+			if (unlikely(ph == CR_PH_HEADER))
+				/* Write Header */
+				stm32_cryp_irq_write_gcm_header(cryp);
+			else
+				/* Input FIFO IRQ: write data */
+				stm32_cryp_irq_write_data(cryp);
+			cryp->gcm_ctr++;
+		} else if (is_ccm(cryp)) {
+			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+			if (unlikely(ph == CR_PH_HEADER))
+				/* Write Header */
+				stm32_cryp_irq_write_ccm_header(cryp);
+			else
+				/* Input FIFO IRQ: write data */
+				stm32_cryp_irq_write_data(cryp);
+		} else {
+			/* Input FIFO IRQ: write data */
+			stm32_cryp_irq_write_data(cryp);
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t stm32_cryp_irq(int irq, void *arg)
+{
+	struct stm32_cryp *cryp = arg;
+
+	cryp->irq_status = stm32_cryp_read(cryp, CRYP_MISR);
+
+	return IRQ_WAKE_THREAD;
+}
+
+static struct crypto_alg crypto_algs[] = {
+{
+	.cra_name		= "ecb(aes)",
+	.cra_driver_name	= "stm32-ecb-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_ecb_encrypt,
+		.decrypt	= stm32_cryp_aes_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(aes)",
+	.cra_driver_name	= "stm32-cbc-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_cbc_encrypt,
+		.decrypt	= stm32_cryp_aes_cbc_decrypt,
+	}
+},
+{
+	.cra_name		= "ctr(aes)",
+	.cra_driver_name	= "stm32-ctr-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= 1,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_ctr_encrypt,
+		.decrypt	= stm32_cryp_aes_ctr_decrypt,
+	}
+},
+{
+	.cra_name		= "ecb(des)",
+	.cra_driver_name	= "stm32-ecb-des",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= DES_KEY_SIZE,
+		.max_keysize	= DES_KEY_SIZE,
+		.setkey		= stm32_cryp_des_setkey,
+		.encrypt	= stm32_cryp_des_ecb_encrypt,
+		.decrypt	= stm32_cryp_des_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(des)",
+	.cra_driver_name	= "stm32-cbc-des",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= DES_KEY_SIZE,
+		.max_keysize	= DES_KEY_SIZE,
+		.ivsize		= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_des_setkey,
+		.encrypt	= stm32_cryp_des_cbc_encrypt,
+		.decrypt	= stm32_cryp_des_cbc_decrypt,
+	}
+},
+{
+	.cra_name		= "ecb(des3_ede)",
+	.cra_driver_name	= "stm32-ecb-des3",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= 3 * DES_KEY_SIZE,
+		.max_keysize	= 3 * DES_KEY_SIZE,
+		.setkey		= stm32_cryp_tdes_setkey,
+		.encrypt	= stm32_cryp_tdes_ecb_encrypt,
+		.decrypt	= stm32_cryp_tdes_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(des3_ede)",
+	.cra_driver_name	= "stm32-cbc-des3",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= 3 * DES_KEY_SIZE,
+		.max_keysize	= 3 * DES_KEY_SIZE,
+		.ivsize		= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_tdes_setkey,
+		.encrypt	= stm32_cryp_tdes_cbc_encrypt,
+		.decrypt	= stm32_cryp_tdes_cbc_decrypt,
+	}
+},
+};
+
+static struct aead_alg aead_algs[] = {
+{
+	.setkey		= stm32_cryp_aes_aead_setkey,
+	.setauthsize	= stm32_cryp_aes_gcm_setauthsize,
+	.encrypt	= stm32_cryp_aes_gcm_encrypt,
+	.decrypt	= stm32_cryp_aes_gcm_decrypt,
+	.init		= stm32_cryp_aes_aead_init,
+	.exit		= stm32_cryp_aes_aead_exit,
+	.ivsize		= 12,
+	.maxauthsize	= AES_BLOCK_SIZE,
+
+	.base = {
+		.cra_name		= "gcm(aes)",
+		.cra_driver_name	= "stm32-gcm-aes",
+		.cra_priority		= 200,
+		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+		.cra_alignmask		= 0xf,
+		.cra_module		= THIS_MODULE,
+	},
+},
+{
+	.setkey		= stm32_cryp_aes_aead_setkey,
+	.setauthsize	= stm32_cryp_aes_ccm_setauthsize,
+	.encrypt	= stm32_cryp_aes_ccm_encrypt,
+	.decrypt	= stm32_cryp_aes_ccm_decrypt,
+	.init		= stm32_cryp_aes_aead_init,
+	.exit		= stm32_cryp_aes_aead_exit,
+	.ivsize		= AES_BLOCK_SIZE,
+	.maxauthsize	= AES_BLOCK_SIZE,
+
+	.base = {
+		.cra_name		= "ccm(aes)",
+		.cra_driver_name	= "stm32-ccm-aes",
+		.cra_priority		= 200,
+		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+		.cra_alignmask		= 0xf,
+		.cra_module		= THIS_MODULE,
+	},
+},
+};
+
+static const struct stm32_cryp_caps f7_data = {
+	.swap_final = true,
+	.padding_wa = true,
+};
+
+static const struct of_device_id stm32_dt_ids[] = {
+	{ .compatible = "st,stm32f756-cryp", .data = &f7_data},
+	{},
+};
+MODULE_DEVICE_TABLE(of, stm32_dt_ids);
+
+static int stm32_cryp_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct stm32_cryp *cryp;
+	struct resource *res;
+	struct reset_control *rst;
+	const struct of_device_id *match;
+	int irq, ret;
+
+	cryp = devm_kzalloc(dev, sizeof(*cryp), GFP_KERNEL);
+	if (!cryp)
+		return -ENOMEM;
+
+	match = of_match_device(stm32_dt_ids, dev);
+	if (!match)
+		return -ENODEV;
+
+	cryp->caps = match->data;
+	cryp->dev = dev;
+
+	mutex_init(&cryp->lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	cryp->regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(cryp->regs)) {
+		dev_err(dev, "Cannot map CRYP IO\n");
+		return PTR_ERR(cryp->regs);
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "Cannot get IRQ resource\n");
+		return irq;
+	}
+
+	ret = devm_request_threaded_irq(dev, irq, stm32_cryp_irq,
+					stm32_cryp_irq_thread, IRQF_ONESHOT,
+					dev_name(dev), cryp);
+	if (ret) {
+		dev_err(dev, "Cannot grab IRQ\n");
+		return ret;
+	}
+
+	cryp->clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(cryp->clk)) {
+		dev_err(dev, "Could not get clock\n");
+		return PTR_ERR(cryp->clk);
+	}
+
+	ret = clk_prepare_enable(cryp->clk);
+	if (ret) {
+		dev_err(cryp->dev, "Failed to enable clock\n");
+		return ret;
+	}
+
+	rst = devm_reset_control_get(dev, NULL);
+	if (!IS_ERR(rst)) {
+		reset_control_assert(rst);
+		udelay(2);
+		reset_control_deassert(rst);
+	}
+
+	platform_set_drvdata(pdev, cryp);
+
+	spin_lock(&cryp_list.lock);
+	list_add(&cryp->list, &cryp_list.dev_list);
+	spin_unlock(&cryp_list.lock);
+
+	/* Initialize crypto engine */
+	cryp->engine = crypto_engine_alloc_init(dev, 1);
+	if (!cryp->engine) {
+		dev_err(dev, "Could not init crypto engine\n");
+		ret = -ENOMEM;
+		goto err_engine1;
+	}
+
+	cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
+	cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
+	cryp->engine->prepare_aead_request = stm32_cryp_prepare_aead_req;
+	cryp->engine->aead_one_request = stm32_cryp_aead_one_req;
+
+	ret = crypto_engine_start(cryp->engine);
+	if (ret) {
+		dev_err(dev, "Could not start crypto engine\n");
+		goto err_engine2;
+	}
+
+	ret = crypto_register_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+	if (ret) {
+		dev_err(dev, "Could not register algs\n");
+		goto err_algs;
+	}
+
+	ret = crypto_register_aeads(aead_algs, ARRAY_SIZE(aead_algs));
+	if (ret)
+		goto err_aead_algs;
+
+	dev_info(dev, "Initialized\n");
+
+	return 0;
+
+err_aead_algs:
+	crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+err_algs:
+err_engine2:
+	crypto_engine_exit(cryp->engine);
+err_engine1:
+	spin_lock(&cryp_list.lock);
+	list_del(&cryp->list);
+	spin_unlock(&cryp_list.lock);
+
+	clk_disable_unprepare(cryp->clk);
+
+	return ret;
+}
+
+static int stm32_cryp_remove(struct platform_device *pdev)
+{
+	struct stm32_cryp *cryp = platform_get_drvdata(pdev);
+
+	if (!cryp)
+		return -ENODEV;
+
+	crypto_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs));
+	crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+
+	crypto_engine_exit(cryp->engine);
+
+	spin_lock(&cryp_list.lock);
+	list_del(&cryp->list);
+	spin_unlock(&cryp_list.lock);
+
+	clk_disable_unprepare(cryp->clk);
+
+	return 0;
+}
+
+static struct platform_driver stm32_cryp_driver = {
+	.probe  = stm32_cryp_probe,
+	.remove = stm32_cryp_remove,
+	.driver = {
+		.name           = DRIVER_NAME,
+		.of_match_table = stm32_dt_ids,
+	},
+};
+
+module_platform_driver(stm32_cryp_driver);
+
+MODULE_AUTHOR("Fabien Dessenne <fabien.dessenne@st.com>");
+MODULE_DESCRIPTION("STMicrolectronics STM32 CRYP hardware driver");
+MODULE_LICENSE("GPL");
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 3/3] crypto: stm32 - Support for STM32 CRYP crypto module
@ 2017-07-13  9:59   ` Fabien Dessenne
  0 siblings, 0 replies; 16+ messages in thread
From: Fabien Dessenne @ 2017-07-13  9:59 UTC (permalink / raw)
  To: linux-arm-kernel

This module registers block and AEAD cipher algorithms that make use of
the STMicroelectronics STM32 crypto "CRYP1" hardware.
The following algorithms are supported:
- aes: ecb, cbc, ctr, gcm, ccm
- des: ecb, cbc
- tdes: ecb, cbc
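
Not part of the patch itself, just to illustrate for reviewers: a minimal
sketch of how a kernel consumer could reach one of the registered transforms
("gcm(aes)" here) through the standard AEAD API. Function and variable names
below are illustrative only.

	#include <crypto/aead.h>
	#include <crypto/aes.h>

	static int stm32_cryp_gcm_example(void)
	{
		u8 key[AES_KEYSIZE_128] = { 0 };	/* demo key only */
		struct crypto_aead *tfm;
		int ret;

		/* Picks the HW implementation when its priority wins */
		tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		ret = crypto_aead_setkey(tfm, key, sizeof(key));
		if (!ret)
			ret = crypto_aead_setauthsize(tfm, AES_BLOCK_SIZE);

		/* ... set up an aead_request, then crypto_aead_encrypt() ... */

		crypto_free_aead(tfm);
		return ret;
	}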

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
---
 drivers/crypto/stm32/Kconfig      |    9 +
 drivers/crypto/stm32/Makefile     |    1 +
 drivers/crypto/stm32/stm32-cryp.c | 1962 +++++++++++++++++++++++++++++++++++++
 3 files changed, 1972 insertions(+)
 create mode 100644 drivers/crypto/stm32/stm32-cryp.c

diff --git a/drivers/crypto/stm32/Kconfig b/drivers/crypto/stm32/Kconfig
index 09b4ec8..c89d651 100644
--- a/drivers/crypto/stm32/Kconfig
+++ b/drivers/crypto/stm32/Kconfig
@@ -5,3 +5,12 @@ config CRYPTO_DEV_STM32
 	help
           This enables support for the CRC32 hw accelerator which can be found
 	  on STMicroelectronis STM32 SOC.
+
+config CRYP_DEV_STM32
+	tristate "Support for STM32 cryp accelerators"
+	depends on ARCH_STM32
+	select CRYPTO_HASH
+	select CRYPTO_ENGINE
+	help
+          This enables support for the CRYP (AES/DES/TDES) hw accelerator which
+	  can be found on STMicroelectronics STM32 SOC.
diff --git a/drivers/crypto/stm32/Makefile b/drivers/crypto/stm32/Makefile
index 73b4c6e..06b51c6 100644
--- a/drivers/crypto/stm32/Makefile
+++ b/drivers/crypto/stm32/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_CRYPTO_DEV_STM32) += stm32_cryp.o
 stm32_cryp-objs := stm32_crc32.o
+obj-$(CONFIG_CRYP_DEV_STM32) += stm32-cryp.o
diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
new file mode 100644
index 0000000..9a02d7c
--- /dev/null
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -0,0 +1,1962 @@
+/*
+ * Copyright (C) STMicroelectronics SA 2017
+ * Author: Fabien Dessenne <fabien.dessenne@st.com>
+ * License terms:  GNU General Public License (GPL), version 2
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+
+#include <crypto/aes.h>
+#include <crypto/des.h>
+#include <crypto/engine.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/aead.h>
+
+#define DRIVER_NAME             "stm32-cryp"
+
+/* Bit [0] encrypt / decrypt */
+#define FLG_ENCRYPT             BIT(0)
+/* Bit [8..1] algo & operation mode */
+#define FLG_AES                 BIT(1)
+#define FLG_DES                 BIT(2)
+#define FLG_TDES                BIT(3)
+#define FLG_ECB                 BIT(4)
+#define FLG_CBC                 BIT(5)
+#define FLG_CTR                 BIT(6)
+#define FLG_GCM                 BIT(7)
+#define FLG_CCM                 BIT(8)
+/* Mode mask = bits [15..0] */
+#define FLG_MODE_MASK           GENMASK(15, 0)
+/* Bit [31..16] status  */
+#define FLG_CCM_PADDED_WA       BIT(16)
+
+/* Registers */
+#define CRYP_CR                 0x00000000
+#define CRYP_SR                 0x00000004
+#define CRYP_DIN                0x00000008
+#define CRYP_DOUT               0x0000000C
+#define CRYP_DMACR              0x00000010
+#define CRYP_IMSCR              0x00000014
+#define CRYP_RISR               0x00000018
+#define CRYP_MISR               0x0000001C
+#define CRYP_K0LR               0x00000020
+#define CRYP_K0RR               0x00000024
+#define CRYP_K1LR               0x00000028
+#define CRYP_K1RR               0x0000002C
+#define CRYP_K2LR               0x00000030
+#define CRYP_K2RR               0x00000034
+#define CRYP_K3LR               0x00000038
+#define CRYP_K3RR               0x0000003C
+#define CRYP_IV0LR              0x00000040
+#define CRYP_IV0RR              0x00000044
+#define CRYP_IV1LR              0x00000048
+#define CRYP_IV1RR              0x0000004C
+#define CRYP_CSGCMCCM0R         0x00000050
+#define CRYP_CSGCM0R            0x00000070
+
+/* Registers values */
+#define CR_DEC_NOT_ENC          0x00000004
+#define CR_TDES_ECB             0x00000000
+#define CR_TDES_CBC             0x00000008
+#define CR_DES_ECB              0x00000010
+#define CR_DES_CBC              0x00000018
+#define CR_AES_ECB              0x00000020
+#define CR_AES_CBC              0x00000028
+#define CR_AES_CTR              0x00000030
+#define CR_AES_KP               0x00000038
+#define CR_AES_GCM              0x00080000
+#define CR_AES_CCM              0x00080008
+#define CR_AES_UNKNOWN          0xFFFFFFFF
+#define CR_ALGO_MASK            0x00080038
+#define CR_DATA32               0x00000000
+#define CR_DATA16               0x00000040
+#define CR_DATA8                0x00000080
+#define CR_DATA1                0x000000C0
+#define CR_KEY128               0x00000000
+#define CR_KEY192               0x00000100
+#define CR_KEY256               0x00000200
+#define CR_FFLUSH               0x00004000
+#define CR_CRYPEN               0x00008000
+#define CR_PH_INIT              0x00000000
+#define CR_PH_HEADER            0x00010000
+#define CR_PH_PAYLOAD           0x00020000
+#define CR_PH_FINAL             0x00030000
+#define CR_PH_MASK              0x00030000
+
+#define SR_BUSY                 0x00000010
+#define SR_OFNE                 0x00000004
+
+#define IMSCR_IN                BIT(0)
+#define IMSCR_OUT               BIT(1)
+
+#define MISR_IN                 BIT(0)
+#define MISR_OUT                BIT(1)
+
+/* Misc */
+#define AES_BLOCK_32            (AES_BLOCK_SIZE / sizeof(u32))
+#define GCM_CTR_INIT            2
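+/* Offset already consumed within the current in/out scatterlist entry */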
+#define _walked_in              (cryp->in_walk.offset - cryp->in_sg->offset)
+#define _walked_out             (cryp->out_walk.offset - cryp->out_sg->offset)
+
+struct stm32_cryp_caps {
+	bool                    swap_final;
+	bool                    padding_wa;
+};
+
+struct stm32_cryp_ctx {
+	struct stm32_cryp       *cryp;
+	int                     keylen;
+	u32                     key[AES_KEYSIZE_256 / sizeof(u32)];
+	unsigned long           flags;
+};
+
+struct stm32_cryp_reqctx {
+	unsigned long mode;
+};
+
+struct stm32_cryp {
+	struct list_head        list;
+	struct device           *dev;
+	void __iomem            *regs;
+	struct clk              *clk;
+	unsigned long           flags;
+	u32                     irq_status;
+	const struct stm32_cryp_caps *caps;
+	struct stm32_cryp_ctx   *ctx;
+
+	struct crypto_engine    *engine;
+
+	struct mutex            lock; /* protects req / areq */
+	struct ablkcipher_request *req;
+	struct aead_request     *areq;
+
+	size_t                  authsize;
+	size_t                  hw_blocksize;
+
+	size_t                  total_in;
+	size_t                  total_in_save;
+	size_t                  total_out;
+	size_t                  total_out_save;
+
+	struct scatterlist      *in_sg;
+	struct scatterlist      *out_sg;
+	struct scatterlist      *out_sg_save;
+
+	struct scatterlist      in_sgl;
+	struct scatterlist      out_sgl;
+	bool                    sgs_copied;
+
+	int                     in_sg_len;
+	int                     out_sg_len;
+
+	struct scatter_walk     in_walk;
+	struct scatter_walk     out_walk;
+
+	u32                     last_ctr[4];
+	u32                     gcm_ctr;
+};
+
+struct stm32_cryp_list {
+	struct list_head        dev_list;
+	spinlock_t              lock; /* protect dev_list */
+};
+
+static struct stm32_cryp_list cryp_list = {
+	.dev_list = LIST_HEAD_INIT(cryp_list.dev_list),
+	.lock     = __SPIN_LOCK_UNLOCKED(cryp_list.lock),
+};
+
+static inline bool is_aes(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_AES;
+}
+
+static inline bool is_des(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_DES;
+}
+
+static inline bool is_tdes(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_TDES;
+}
+
+static inline bool is_ecb(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_ECB;
+}
+
+static inline bool is_cbc(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CBC;
+}
+
+static inline bool is_ctr(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CTR;
+}
+
+static inline bool is_gcm(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_GCM;
+}
+
+static inline bool is_ccm(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_CCM;
+}
+
+static inline bool is_encrypt(struct stm32_cryp *cryp)
+{
+	return cryp->flags & FLG_ENCRYPT;
+}
+
+static inline bool is_decrypt(struct stm32_cryp *cryp)
+{
+	return !is_encrypt(cryp);
+}
+
+static inline u32 stm32_cryp_read(struct stm32_cryp *cryp, u32 ofst)
+{
+	return readl_relaxed(cryp->regs + ofst);
+}
+
+static inline void stm32_cryp_write(struct stm32_cryp *cryp, u32 ofst, u32 val)
+{
+	writel_relaxed(val, cryp->regs + ofst);
+}
+
+static inline void stm32_cryp_wait_enable(struct stm32_cryp *cryp)
+{
+	while (stm32_cryp_read(cryp, CRYP_CR) & CR_CRYPEN)
+		cpu_relax();
+}
+
+static inline void stm32_cryp_wait_busy(struct stm32_cryp *cryp)
+{
+	while (stm32_cryp_read(cryp, CRYP_SR) & SR_BUSY)
+		cpu_relax();
+}
+
+static inline void stm32_cryp_wait_output(struct stm32_cryp *cryp)
+{
+	while (!(stm32_cryp_read(cryp, CRYP_SR) & SR_OFNE))
+		cpu_relax();
+}
+
+static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp);
+
+static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
+{
+	struct stm32_cryp *tmp, *cryp = NULL;
+
+	spin_lock_bh(&cryp_list.lock);
+	if (!ctx->cryp) {
+		list_for_each_entry(tmp, &cryp_list.dev_list, list) {
+			cryp = tmp;
+			break;
+		}
+		ctx->cryp = cryp;
+	} else {
+		cryp = ctx->cryp;
+	}
+
+	spin_unlock_bh(&cryp_list.lock);
+
+	return cryp;
+}
+
+static int stm32_cryp_check_aligned(struct scatterlist *sg, size_t total,
+				    size_t align)
+{
+	int len = 0;
+
+	if (!total)
+		return 0;
+
+	if (!IS_ALIGNED(total, align))
+		return -EINVAL;
+
+	while (sg) {
+		if (!IS_ALIGNED(sg->offset, sizeof(u32)))
+			return -1;
+
+		if (!IS_ALIGNED(sg->length, align))
+			return -1;
+
+		len += sg->length;
+		sg = sg_next(sg);
+	}
+
+	if (len != total)
+		return -1;
+
+	return 0;
+}
+
+static int stm32_cryp_check_io_aligned(struct stm32_cryp *cryp)
+{
+	int ret;
+
+	ret = stm32_cryp_check_aligned(cryp->in_sg, cryp->total_in,
+				       cryp->hw_blocksize);
+	if (ret)
+		return ret;
+
+	ret = stm32_cryp_check_aligned(cryp->out_sg, cryp->total_out,
+				       cryp->hw_blocksize);
+
+	return ret;
+}
+
+static void sg_copy_buf(void *buf, struct scatterlist *sg,
+			unsigned int start, unsigned int nbytes, int out)
+{
+	struct scatter_walk walk;
+
+	if (!nbytes)
+		return;
+
+	scatterwalk_start(&walk, sg);
+	scatterwalk_advance(&walk, start);
+	scatterwalk_copychunks(buf, &walk, nbytes, out);
+	scatterwalk_done(&walk, out, 0);
+}
+
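+/*
+ * Data are fed to / read from the FIFO as full 32-bit words: when the request
+ * scatterlists are not properly aligned, bounce them through linear buffers,
+ * copied back and released in stm32_cryp_finish_req().
+ */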
+static int stm32_cryp_copy_sgs(struct stm32_cryp *cryp)
+{
+	void *buf_in, *buf_out;
+	int pages, total_in, total_out;
+
+	if (!stm32_cryp_check_io_aligned(cryp)) {
+		cryp->sgs_copied = 0;
+		return 0;
+	}
+
+	total_in = ALIGN(cryp->total_in, cryp->hw_blocksize);
+	pages = total_in ? get_order(total_in) : 1;
+	buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages);
+
+	total_out = ALIGN(cryp->total_out, cryp->hw_blocksize);
+	pages = total_out ? get_order(total_out) : 1;
+	buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages);
+
+	if (!buf_in || !buf_out) {
+		pr_err("Couldn't allocate pages for unaligned cases.\n");
+		cryp->sgs_copied = 0;
+		return -1;
+	}
+
+	sg_copy_buf(buf_in, cryp->in_sg, 0, cryp->total_in, 0);
+
+	sg_init_one(&cryp->in_sgl, buf_in, total_in);
+	cryp->in_sg = &cryp->in_sgl;
+	cryp->in_sg_len = 1;
+
+	sg_init_one(&cryp->out_sgl, buf_out, total_out);
+	cryp->out_sg_save = cryp->out_sg;
+	cryp->out_sg = &cryp->out_sgl;
+	cryp->out_sg_len = 1;
+
+	cryp->sgs_copied = 1;
+
+	return 0;
+}
+
+static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, u32 *iv)
+{
+	if (!iv)
+		return;
+
+	stm32_cryp_write(cryp, CRYP_IV0LR, cpu_to_be32(*iv++));
+	stm32_cryp_write(cryp, CRYP_IV0RR, cpu_to_be32(*iv++));
+
+	if (is_aes(cryp)) {
+		stm32_cryp_write(cryp, CRYP_IV1LR, cpu_to_be32(*iv++));
+		stm32_cryp_write(cryp, CRYP_IV1RR, cpu_to_be32(*iv++));
+	}
+}
+
+static void stm32_cryp_hw_write_key(struct stm32_cryp *c)
+{
+	unsigned int i;
+	int r_id;
+
+	if (is_des(c)) {
+		stm32_cryp_write(c, CRYP_K1LR, cpu_to_be32(c->ctx->key[0]));
+		stm32_cryp_write(c, CRYP_K1RR, cpu_to_be32(c->ctx->key[1]));
+	} else {
+		r_id = CRYP_K3RR;
+		for (i = c->ctx->keylen / sizeof(u32); i > 0; i--, r_id -= 4)
+			stm32_cryp_write(c, r_id,
+					 cpu_to_be32(c->ctx->key[i - 1]));
+	}
+}
+
+static u32 stm32_cryp_get_hw_mode(struct stm32_cryp *cryp)
+{
+	if (is_aes(cryp) && is_ecb(cryp))
+		return CR_AES_ECB;
+
+	if (is_aes(cryp) && is_cbc(cryp))
+		return CR_AES_CBC;
+
+	if (is_aes(cryp) && is_ctr(cryp))
+		return CR_AES_CTR;
+
+	if (is_aes(cryp) && is_gcm(cryp))
+		return CR_AES_GCM;
+
+	if (is_aes(cryp) && is_ccm(cryp))
+		return CR_AES_CCM;
+
+	if (is_des(cryp) && is_ecb(cryp))
+		return CR_DES_ECB;
+
+	if (is_des(cryp) && is_cbc(cryp))
+		return CR_DES_CBC;
+
+	if (is_tdes(cryp) && is_ecb(cryp))
+		return CR_TDES_ECB;
+
+	if (is_tdes(cryp) && is_cbc(cryp))
+		return CR_TDES_CBC;
+
+	dev_err(cryp->dev, "Unknown mode\n");
+	return CR_AES_UNKNOWN;
+}
+
+static void stm32_cryp_gcm_init(struct stm32_cryp *cryp, u32 cfg)
+{
+	u32 iv[4];
+
+	/* Phase 1 : init */
+	memcpy(iv, cryp->areq->iv, 12);
+	iv[3] = cpu_to_be32(GCM_CTR_INIT);
+	cryp->gcm_ctr = GCM_CTR_INIT;
+	stm32_cryp_hw_write_iv(cryp, iv);
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg | CR_PH_INIT | CR_CRYPEN);
+
+	/* Wait for end of processing */
+	stm32_cryp_wait_enable(cryp);
+}
+
+static void stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
+{
+	u8 iv[AES_BLOCK_SIZE], b0[AES_BLOCK_SIZE];
+	u32 *d;
+	unsigned int i, textlen;
+
+	/* Phase 1 : init. Firstly set the CTR value to 1 (not 0) */
+	memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+	memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+	iv[AES_BLOCK_SIZE - 1] = 1;
+	stm32_cryp_hw_write_iv(cryp, (u32 *)iv);
+
+	/* Build B0 */
+	memcpy(b0, iv, AES_BLOCK_SIZE);
+
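+	/* Flags byte: (t - 2) / 2 in bits [5:3], Adata flag in bit 6 */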
+	b0[0] |= (8 * ((cryp->authsize - 2) / 2));
+
+	if (cryp->areq->assoclen)
+		b0[0] |= 0x40;
+
+	if (is_encrypt(cryp))
+		textlen = cryp->areq->cryptlen;
+	else
+		textlen = cryp->areq->cryptlen - cryp->authsize;
+
+	b0[AES_BLOCK_SIZE - 2] = textlen >> 8;
+	b0[AES_BLOCK_SIZE - 1] = textlen & 0xFF;
+
+	/* Enable HW */
+	stm32_cryp_write(cryp, CRYP_CR, cfg | CR_PH_INIT | CR_CRYPEN);
+
+	/* Write B0 */
+	d = (u32 *)b0;
+	for (i = 0; i < AES_BLOCK_32; i++)
+		stm32_cryp_write(cryp, CRYP_DIN, *d++);
+
+	/* Wait for end of processing */
+	stm32_cryp_wait_enable(cryp);
+}
+
+static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
+{
+	u32 cfg, hw_mode;
+
+	/* Disable interrupt */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+
+	/* Set key */
+	stm32_cryp_hw_write_key(cryp);
+
+	/* Set configuration */
+	cfg = CR_DATA8 | CR_FFLUSH;
+
+	switch (cryp->ctx->keylen) {
+	case AES_KEYSIZE_128:
+		cfg |= CR_KEY128;
+		break;
+
+	case AES_KEYSIZE_192:
+		cfg |= CR_KEY192;
+		break;
+
+	default:
+	case AES_KEYSIZE_256:
+		cfg |= CR_KEY256;
+		break;
+	}
+
+	hw_mode = stm32_cryp_get_hw_mode(cryp);
+	if (hw_mode == CR_AES_UNKNOWN)
+		return -EINVAL;
+
+	/* AES ECB/CBC decrypt: run key preparation first */
+	if (is_decrypt(cryp) &&
+	    ((hw_mode == CR_AES_ECB) || (hw_mode == CR_AES_CBC))) {
+		stm32_cryp_write(cryp, CRYP_CR, cfg | CR_AES_KP | CR_CRYPEN);
+
+		/* Wait for end of processing */
+		stm32_cryp_wait_busy(cryp);
+	}
+
+	cfg |= hw_mode;
+
+	if (is_decrypt(cryp))
+		cfg |= CR_DEC_NOT_ENC;
+
+	/* Apply config and flush (valid when CRYPEN = 0) */
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	switch (hw_mode) {
+	case CR_AES_GCM:
+	case CR_AES_CCM:
+		/* Phase 1 : init */
+		if (hw_mode == CR_AES_CCM)
+			stm32_cryp_ccm_init(cryp, cfg);
+		else
+			stm32_cryp_gcm_init(cryp, cfg);
+
+		/* Phase 2 : header (authenticated data) */
+		if (cryp->areq->assoclen) {
+			cfg |= CR_PH_HEADER;
+		} else if (cryp->areq->cryptlen) {
+			/* Phase 3 : payload */
+			cfg |= CR_PH_PAYLOAD;
+			stm32_cryp_write(cryp, CRYP_CR, cfg);
+		} else {
+			cfg |= CR_PH_INIT;
+		}
+
+		break;
+
+	case CR_DES_CBC:
+	case CR_TDES_CBC:
+	case CR_AES_CBC:
+	case CR_AES_CTR:
+		stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->req->info);
+		break;
+
+	default:
+		break;
+	}
+
+	/* Enable now */
+	cfg |= CR_CRYPEN;
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	cryp->flags &= ~FLG_CCM_PADDED_WA;
+
+	return 0;
+}
+
+static void stm32_cryp_finish_req(struct stm32_cryp *cryp)
+{
+	int err = 0;
+
+	if (is_gcm(cryp) || is_ccm(cryp))
+		/* Phase 4 : output tag */
+		err = stm32_cryp_read_auth_tag(cryp);
+
+	if (cryp->sgs_copied) {
+		void *buf_in, *buf_out;
+		int pages, len;
+
+		buf_in = sg_virt(&cryp->in_sgl);
+		buf_out = sg_virt(&cryp->out_sgl);
+
+		sg_copy_buf(buf_out, cryp->out_sg_save, 0,
+			    cryp->total_out_save, 1);
+
+		len = ALIGN(cryp->total_in_save, cryp->hw_blocksize);
+		pages = len ? get_order(len) : 1;
+		free_pages((unsigned long)buf_in, pages);
+
+		len = ALIGN(cryp->total_out_save, cryp->hw_blocksize);
+		pages = len ? get_order(len) : 1;
+		free_pages((unsigned long)buf_out, pages);
+	}
+
+	if (is_gcm(cryp) || is_ccm(cryp)) {
+		crypto_finalize_aead_request(cryp->engine, cryp->areq, err);
+		cryp->areq = NULL;
+	} else {
+		crypto_finalize_cipher_request(cryp->engine, cryp->req, err);
+		cryp->req = NULL;
+	}
+
+	mutex_unlock(&cryp->lock);
+}
+
+static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
+{
+	if ((stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+	    !cryp->areq->assoclen && !cryp->areq->cryptlen)
+		/* No input data, get output tag (phase 4) and finish */
+		stm32_cryp_finish_req(cryp);
+	else
+		/* Enable interrupt and let the IRQ handler do everything */
+		stm32_cryp_write(cryp, CRYP_IMSCR, IMSCR_IN | IMSCR_OUT);
+
+	return 0;
+}
+
+static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
+{
+	tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
+
+	return 0;
+}
+
+static int stm32_cryp_aes_aead_init(struct crypto_aead *tfm)
+{
+	tfm->reqsize = sizeof(struct stm32_cryp_reqctx);
+
+	return 0;
+}
+
+static void stm32_cryp_cra_exit(struct crypto_tfm *tfm)
+{
+}
+
+static void stm32_cryp_aes_aead_exit(struct crypto_aead *tfm)
+{
+}
+
+static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
+			crypto_ablkcipher_reqtfm(req));
+	struct stm32_cryp_reqctx *rctx = ablkcipher_request_ctx(req);
+	struct stm32_cryp *cryp = stm32_cryp_find_dev(ctx);
+
+	if (!cryp)
+		return -ENODEV;
+
+	rctx->mode = mode;
+
+	return crypto_transfer_cipher_request_to_engine(cryp->engine, req);
+}
+
+static int stm32_cryp_aead_crypt(struct aead_request *req, unsigned long mode)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct stm32_cryp_reqctx *rctx = aead_request_ctx(req);
+	struct stm32_cryp *cryp = stm32_cryp_find_dev(ctx);
+
+	if (!cryp)
+		return -ENODEV;
+
+	rctx->mode = mode;
+
+	return crypto_transfer_aead_request_to_engine(cryp->engine, req);
+}
+
+static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+			     unsigned int keylen)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return 0;
+}
+
+static int stm32_cryp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				 unsigned int keylen)
+{
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
+	    keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_des_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				 unsigned int keylen)
+{
+	if (keylen != DES_KEY_SIZE)
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_tdes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+				  unsigned int keylen)
+{
+	if (keylen != (3 * DES_KEY_SIZE))
+		return -EINVAL;
+	else
+		return stm32_cryp_setkey(tfm, key, keylen);
+}
+
+static int stm32_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+				      unsigned int keylen)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+
+	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
+	    keylen != AES_KEYSIZE_256)
+		return -EINVAL;
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return 0;
+}
+
+static int stm32_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+					  unsigned int authsize)
+{
+	return authsize == AES_BLOCK_SIZE ? 0 : -EINVAL;
+}
+
+static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
+					  unsigned int authsize)
+{
+	switch (authsize) {
+	case 4:
+	case 6:
+	case 8:
+	case 10:
+	case 12:
+	case 14:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int stm32_cryp_aes_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB);
+}
+
+static int stm32_cryp_aes_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC);
+}
+
+static int stm32_cryp_aes_ctr_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ctr_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR);
+}
+
+static int stm32_cryp_aes_gcm_encrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_gcm_decrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM);
+}
+
+static int stm32_cryp_aes_ccm_encrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req)
+{
+	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM);
+}
+
+static int stm32_cryp_des_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_des_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB);
+}
+
+static int stm32_cryp_des_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_des_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC);
+}
+
+static int stm32_cryp_tdes_ecb_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_tdes_ecb_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB);
+}
+
+static int stm32_cryp_tdes_cbc_encrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT);
+}
+
+static int stm32_cryp_tdes_cbc_decrypt(struct ablkcipher_request *req)
+{
+	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC);
+}
+
+static int stm32_cryp_prepare_req(struct crypto_engine *engine,
+				  struct ablkcipher_request *req,
+				  struct aead_request *areq)
+{
+	struct stm32_cryp_ctx *ctx;
+	struct stm32_cryp *cryp;
+	struct stm32_cryp_reqctx *rctx;
+	int ret;
+
+	if (!req && !areq)
+		return -EINVAL;
+
+	ctx = req ? crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)) :
+		    crypto_aead_ctx(crypto_aead_reqtfm(areq));
+
+	cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	mutex_lock(&cryp->lock);
+
+	rctx = req ? ablkcipher_request_ctx(req) : aead_request_ctx(areq);
+	rctx->mode &= FLG_MODE_MASK;
+
+	ctx->cryp = cryp;
+
+	cryp->flags = (cryp->flags & ~FLG_MODE_MASK) | rctx->mode;
+	cryp->hw_blocksize = is_aes(cryp) ? AES_BLOCK_SIZE : DES_BLOCK_SIZE;
+	cryp->ctx = ctx;
+
+	if (req) {
+		cryp->req = req;
+		cryp->total_in = req->nbytes;
+		cryp->total_out = cryp->total_in;
+	} else {
+		/*
+		 * Length of input and output data:
+		 * Encryption case:
+		 *  INPUT  =   AssocData  ||   PlainText
+		 *          <- assoclen ->  <- cryptlen ->
+		 *          <------- total_in ----------->
+		 *
+		 *  OUTPUT =   AssocData  ||  CipherText  ||   AuthTag
+		 *          <- assoclen ->  <- cryptlen ->  <- authsize ->
+		 *          <---------------- total_out ----------------->
+		 *
+		 * Decryption case:
+		 *  INPUT  =   AssocData  ||  CipherText  ||  AuthTag
+		 *          <- assoclen ->  <--------- cryptlen --------->
+		 *                                          <- authsize ->
+		 *          <---------------- total_in ------------------>
+		 *
+		 *  OUTPUT =   AssocData  ||   PlainText
+		 *          <- assoclen ->  <- cryptlen - authsize ->
+		 *          <---------- total_out ----------------->
+		 */
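+		/*
+		 * Worked example (illustrative numbers): GCM encryption with
+		 * assoclen = 16, cryptlen = 64 and a 16-byte tag gives
+		 * total_in = 80 and total_out = 96.
+		 */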
+		cryp->areq = areq;
+		cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq));
+		cryp->total_in = areq->assoclen + areq->cryptlen;
+		if (is_encrypt(cryp))
+			/* Append auth tag to output */
+			cryp->total_out = cryp->total_in + cryp->authsize;
+		else
+			/* No auth tag in output */
+			cryp->total_out = cryp->total_in - cryp->authsize;
+	}
+
+	cryp->total_in_save = cryp->total_in;
+	cryp->total_out_save = cryp->total_out;
+
+	cryp->in_sg = req ? req->src : areq->src;
+	cryp->out_sg = req ? req->dst : areq->dst;
+	cryp->out_sg_save = cryp->out_sg;
+
+	cryp->in_sg_len = sg_nents_for_len(cryp->in_sg, cryp->total_in);
+	if (cryp->in_sg_len < 0) {
+		dev_err(cryp->dev, "Cannot get in_sg_len\n");
+		ret = cryp->in_sg_len;
+		goto out;
+	}
+
+	cryp->out_sg_len = sg_nents_for_len(cryp->out_sg, cryp->total_out);
+	if (cryp->out_sg_len < 0) {
+		dev_err(cryp->dev, "Cannot get out_sg_len\n");
+		ret = cryp->out_sg_len;
+		goto out;
+	}
+
+	stm32_cryp_copy_sgs(cryp);
+
+	scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+	scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+
+	if (is_gcm(cryp) || is_ccm(cryp)) {
+		/* In output, jump after assoc data */
+		scatterwalk_advance(&cryp->out_walk, cryp->areq->assoclen);
+		cryp->total_out -= cryp->areq->assoclen;
+	}
+
+	ret = stm32_cryp_hw_init(cryp);
+out:
+	if (ret)
+		mutex_unlock(&cryp->lock);
+
+	return ret;
+}
+
+static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
+					 struct ablkcipher_request *req)
+{
+	return stm32_cryp_prepare_req(engine, req, NULL);
+}
+
+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
+				     struct ablkcipher_request *req)
+{
+	struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
+			crypto_ablkcipher_reqtfm(req));
+	struct stm32_cryp *cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	return stm32_cryp_cpu_start(cryp);
+}
+
+static int stm32_cryp_prepare_aead_req(struct crypto_engine *engine,
+				       struct aead_request *areq)
+{
+	return stm32_cryp_prepare_req(engine, NULL, areq);
+}
+
+static int stm32_cryp_aead_one_req(struct crypto_engine *engine,
+				   struct aead_request *areq)
+{
+	struct stm32_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(areq));
+	struct stm32_cryp *cryp = ctx->cryp;
+
+	if (!cryp)
+		return -ENODEV;
+
+	return stm32_cryp_cpu_start(cryp);
+}
+
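+/*
+ * Scatter-walk helpers: advance the walk by 'n' bytes and return a pointer
+ * to the next 32-bit word to process, moving on to the next scatterlist
+ * entry once the current one is exhausted.
+ */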
+static u32 *stm32_cryp_next_out(struct stm32_cryp *cryp, u32 *dst,
+				unsigned int n)
+{
+	scatterwalk_advance(&cryp->out_walk, n);
+
+	if (unlikely(cryp->out_sg->length == _walked_out)) {
+		cryp->out_sg = sg_next(cryp->out_sg);
+		if (cryp->out_sg) {
+			scatterwalk_start(&cryp->out_walk, cryp->out_sg);
+			return (sg_virt(cryp->out_sg) + _walked_out);
+		}
+	}
+
+	return (u32 *)((u8 *)dst + n);
+}
+
+static u32 *stm32_cryp_next_in(struct stm32_cryp *cryp, u32 *src,
+			       unsigned int n)
+{
+	scatterwalk_advance(&cryp->in_walk, n);
+
+	if (unlikely(cryp->in_sg->length == _walked_in)) {
+		cryp->in_sg = sg_next(cryp->in_sg);
+		if (cryp->in_sg) {
+			scatterwalk_start(&cryp->in_walk, cryp->in_sg);
+			return (sg_virt(cryp->in_sg) + _walked_in);
+		}
+	}
+
+	return (u32 *)((u8 *)src + n);
+}
+
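+/*
+ * Final (tag) phase: for GCM, feed the bit lengths of the associated data
+ * and of the payload; for CCM, feed the CTR0 block rebuilt from the IV.
+ * On encryption the computed tag is appended to the output data, on
+ * decryption it is compared with the tag located at the end of the input.
+ */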
+static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
+{
+	u32 cfg, size_bit, *dst, d32;
+	u8 *d8;
+	unsigned int i, j;
+	int ret = 0;
+
+	/* Update Config */
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_FINAL;
+	cfg &= ~CR_DEC_NOT_ENC;
+	cfg |= CR_CRYPEN;
+
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	if (is_gcm(cryp)) {
+		/* GCM: write aad and payload size (in bits) */
+		size_bit = cryp->areq->assoclen * 8;
+		if (cryp->caps->swap_final)
+			size_bit = cpu_to_be32(size_bit);
+
+		stm32_cryp_write(cryp, CRYP_DIN, 0);
+		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+
+		size_bit = is_encrypt(cryp) ? cryp->areq->cryptlen :
+				cryp->areq->cryptlen - AES_BLOCK_SIZE;
+		size_bit *= 8;
+		if (cryp->caps->swap_final)
+			size_bit = cpu_to_be32(size_bit);
+
+		stm32_cryp_write(cryp, CRYP_DIN, 0);
+		stm32_cryp_write(cryp, CRYP_DIN, size_bit);
+	} else {
+		/* CCM: write CTR0 */
+		u8 iv[AES_BLOCK_SIZE];
+		u32 *iv32 = (u32 *)iv;
+
+		memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
+		memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
+
+		for (i = 0; i < AES_BLOCK_32; i++)
+			stm32_cryp_write(cryp, CRYP_DIN, *iv32++);
+	}
+
+	/* Wait for output data */
+	stm32_cryp_wait_output(cryp);
+
+	if (is_encrypt(cryp)) {
+		/* Get and write tag */
+		dst = sg_virt(cryp->out_sg) + _walked_out;
+
+		for (i = 0; i < AES_BLOCK_32; i++) {
+			if (cryp->total_out >= sizeof(u32)) {
+				/* Read a full u32 */
+				*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+
+				dst = stm32_cryp_next_out(cryp, dst,
+							  sizeof(u32));
+				cryp->total_out -= sizeof(u32);
+			} else if (!cryp->total_out) {
+				/* Empty fifo out (data from input padding) */
+				stm32_cryp_read(cryp, CRYP_DOUT);
+			} else {
+				/* Read less than an u32 */
+				d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+				d8 = (u8 *)&d32;
+
+				for (j = 0; j < cryp->total_out; j++) {
+					*((u8 *)dst) = *(d8++);
+					dst = stm32_cryp_next_out(cryp, dst, 1);
+				}
+				cryp->total_out = 0;
+			}
+		}
+	} else if (!(cryp->flags & FLG_CCM_PADDED_WA)) {
+		/*
+		 *  FIXME: when CCM workaround has been run, the tag is wrongly
+		 *  computed. Hence it shall not be compared with the expected
+		 *  input tag.
+		 */
+		u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32];
+
+		scatterwalk_map_and_copy(in_tag, cryp->in_sg,
+					 cryp->total_in_save - cryp->authsize,
+					 cryp->authsize, 0);
+
+		for (i = 0; i < AES_BLOCK_32; i++)
+			out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
+
+		if (crypto_memneq(in_tag, out_tag, cryp->authsize))
+			ret = -EBADMSG;
+	}
+
+	/* Disable cryp */
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	return ret;
+}
+
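+/*
+ * The hardware only increments the low 32-bit word of the CTR counter: when
+ * it is about to wrap, propagate the carry to the upper words by hand and
+ * reload the full IV with the peripheral temporarily disabled.
+ */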
+static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp)
+{
+	u32 cr;
+
+	if (unlikely(cryp->last_ctr[3] == 0xFFFFFFFF)) {
+		cryp->last_ctr[3] = 0;
+		cryp->last_ctr[2]++;
+		if (!cryp->last_ctr[2]) {
+			cryp->last_ctr[1]++;
+			if (!cryp->last_ctr[1])
+				cryp->last_ctr[0]++;
+		}
+
+		cr = stm32_cryp_read(cryp, CRYP_CR);
+		stm32_cryp_write(cryp, CRYP_CR, cr & ~CR_CRYPEN);
+
+		stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->last_ctr);
+
+		stm32_cryp_write(cryp, CRYP_CR, cr);
+	}
+
+	cryp->last_ctr[0] = stm32_cryp_read(cryp, CRYP_IV0LR);
+	cryp->last_ctr[1] = stm32_cryp_read(cryp, CRYP_IV0RR);
+	cryp->last_ctr[2] = stm32_cryp_read(cryp, CRYP_IV1LR);
+	cryp->last_ctr[3] = stm32_cryp_read(cryp, CRYP_IV1RR);
+}
+
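+/*
+ * Read one hardware block from the output FIFO into the output scatterlist,
+ * leaving the trailing tag bytes (if any) to stm32_cryp_read_auth_tag().
+ * Returns true when there is no more payload data to transfer.
+ */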
+static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 d32, *dst;
+	u8 *d8;
+	size_t tag_size;
+
+	/* Do not read the tag now (if any) */
+	if (is_encrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+		tag_size = cryp->authsize;
+	else
+		tag_size = 0;
+
+	dst = sg_virt(cryp->out_sg) + _walked_out;
+
+	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+		if (likely(cryp->total_out - tag_size >= sizeof(u32))) {
+			/* Read a full u32 */
+			*dst = stm32_cryp_read(cryp, CRYP_DOUT);
+
+			dst = stm32_cryp_next_out(cryp, dst, sizeof(u32));
+			cryp->total_out -= sizeof(u32);
+		} else if (cryp->total_out == tag_size) {
+			/* Empty fifo out (data from input padding) */
+			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+		} else {
+			/* Read less than an u32 */
+			d32 = stm32_cryp_read(cryp, CRYP_DOUT);
+			d8 = (u8 *)&d32;
+
+			for (j = 0; j < cryp->total_out - tag_size; j++) {
+				*((u8 *)dst) = *(d8++);
+				dst = stm32_cryp_next_out(cryp, dst, 1);
+			}
+			cryp->total_out = tag_size;
+		}
+	}
+
+	return !(cryp->total_out - tag_size) || !cryp->total_in;
+}
+
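+/*
+ * Feed one hardware block from the input scatterlist into the input FIFO,
+ * zero-padding the last partial block and skipping the trailing tag bytes
+ * (if any) on AEAD decryption.
+ */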
+static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 *src;
+	u8 d8[4];
+	size_t tag_size;
+
+	/* Do not write the tag (if any) */
+	if (is_decrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
+		tag_size = cryp->authsize;
+	else
+		tag_size = 0;
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+
+	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
+		if (likely(cryp->total_in - tag_size >= sizeof(u32))) {
+			/* Write a full u32 */
+			stm32_cryp_write(cryp, CRYP_DIN, *src);
+
+			src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+			cryp->total_in -= sizeof(u32);
+		} else if (cryp->total_in == tag_size) {
+			/* Write padding data */
+			stm32_cryp_write(cryp, CRYP_DIN, 0);
+		} else {
+			/* Write less than an u32 */
+			memset(d8, 0, sizeof(u32));
+			for (j = 0; j < cryp->total_in - tag_size; j++) {
+				d8[j] = *((u8 *)src);
+				src = stm32_cryp_next_in(cryp, src, 1);
+			}
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			cryp->total_in = tag_size;
+		}
+	}
+}
+
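+/*
+ * Workaround for AES-GCM encryption when the last payload block is shorter
+ * than AES_BLOCK_SIZE on hardware flagged with padding_wa: encrypt the
+ * zero-padded block in CTR mode, then switch back to GCM and re-inject the
+ * resulting ciphertext so that the authentication tag is computed correctly.
+ */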
+static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
+{
+	u32 cfg, tmp[AES_BLOCK_32];
+	size_t total_in_ori = cryp->total_in;
+	struct scatterlist *out_sg_ori = cryp->out_sg;
+	unsigned int i;
+
+	/* 'Special workaround' procedure described in the datasheet */
+
+	/* a) disable ip */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) Update IV1R */
+	stm32_cryp_write(cryp, CRYP_IV1RR, cryp->gcm_ctr - 2);
+
+	/* c) change mode to CTR */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CTR;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* a) enable IP */
+	cfg |= CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) pad and write the last block */
+	stm32_cryp_irq_write_block(cryp);
+	cryp->total_in = total_in_ori;
+	stm32_cryp_wait_output(cryp);
+
+	/* c) get and store encrypted data */
+	stm32_cryp_irq_read_data(cryp);
+	scatterwalk_map_and_copy(tmp, out_sg_ori,
+				 cryp->total_in_save - total_in_ori,
+				 total_in_ori, 0);
+
+	/* d) change mode back to AES GCM */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_GCM;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* e) change phase to Final */
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_FINAL;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* f) write padded data */
+	for (i = 0; i < AES_BLOCK_32; i++) {
+		if (cryp->total_in)
+			stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+		else
+			stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+	}
+
+	/* g) Empty fifo out */
+	stm32_cryp_wait_output(cryp);
+
+	for (i = 0; i < AES_BLOCK_32; i++)
+		stm32_cryp_read(cryp, CRYP_DOUT);
+
+	/* h) run the normal Final phase */
+	stm32_cryp_finish_req(cryp);
+}
+
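+/*
+ * Workaround for AES-CCM decryption when the last block is shorter than
+ * AES_BLOCK_SIZE on hardware flagged with padding_wa: decrypt the padded
+ * block in CTR mode, XOR the result with the CSGCMCCM context registers
+ * saved before and after that operation, and re-inject it in header phase.
+ * FLG_CCM_PADDED_WA is set since the tag computed by the hardware is not
+ * reliable afterwards (see the FIXME in stm32_cryp_read_auth_tag()).
+ */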
+static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
+{
+	u32 cfg, iv1tmp;
+	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32];
+	size_t last_total_out, total_in_ori = cryp->total_in;
+	struct scatterlist *out_sg_ori = cryp->out_sg;
+	unsigned int i;
+
+	/* 'Special workaround' procedure described in the datasheet */
+	cryp->flags |= FLG_CCM_PADDED_WA;
+
+	/* a) disable ip */
+	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+
+	cfg = stm32_cryp_read(cryp, CRYP_CR);
+	cfg &= ~CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) get IV1 from CRYP_CSGCMCCM7 */
+	iv1tmp = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + 7 * 4);
+
+	/* c) Load CRYP_CSGCMCCMxR */
+	for (i = 0; i < ARRAY_SIZE(cstmp1); i++)
+		cstmp1[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4);
+
+	/* d) Write IV1R */
+	stm32_cryp_write(cryp, CRYP_IV1RR, iv1tmp);
+
+	/* e) change mode to CTR */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CTR;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* a) enable IP */
+	cfg |= CR_CRYPEN;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* b) pad and write the last block */
+	stm32_cryp_irq_write_block(cryp);
+	cryp->total_in = total_in_ori;
+	stm32_cryp_wait_output(cryp);
+
+	/* c) get and store decrypted data */
+	last_total_out = cryp->total_out;
+	stm32_cryp_irq_read_data(cryp);
+
+	memset(tmp, 0, sizeof(tmp));
+	scatterwalk_map_and_copy(tmp, out_sg_ori,
+				 cryp->total_out_save - last_total_out,
+				 last_total_out, 0);
+
+	/* d) Load again CRYP_CSGCMCCMxR */
+	for (i = 0; i < ARRAY_SIZE(cstmp2); i++)
+		cstmp2[i] = stm32_cryp_read(cryp, CRYP_CSGCMCCM0R + i * 4);
+
+	/* e) change mode back to AES CCM */
+	cfg &= ~CR_ALGO_MASK;
+	cfg |= CR_AES_CCM;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* f) change phase to header */
+	cfg &= ~CR_PH_MASK;
+	cfg |= CR_PH_HEADER;
+	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+	/* g) XOR and write padded data */
+	for (i = 0; i < ARRAY_SIZE(tmp); i++) {
+		tmp[i] ^= cstmp1[i];
+		tmp[i] ^= cstmp2[i];
+		stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+	}
+
+	/* h) wait for completion */
+	stm32_cryp_wait_busy(cryp);
+
+	/* i) run the normal Final phase */
+	stm32_cryp_finish_req(cryp);
+}
+
+static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
+{
+	if (unlikely(!cryp->total_in)) {
+		dev_warn(cryp->dev, "No more data to process\n");
+		return;
+	}
+
+	if (unlikely(cryp->caps->padding_wa &&
+		     (cryp->total_in < AES_BLOCK_SIZE) &&
+		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
+		     (is_encrypt(cryp))))
+		/* Special case 1: padding for AES GCM encryption */
+		return stm32_cryp_irq_write_gcm_padded_data(cryp);
+
+	if (unlikely(cryp->caps->padding_wa &&
+		     (cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) &&
+		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_CCM) &&
+		     (is_decrypt(cryp))))
+		/* Special case 2: padding for AES CCM decryption */
+		return stm32_cryp_irq_write_ccm_padded_data(cryp);
+
+	if (is_aes(cryp) && is_ctr(cryp))
+		stm32_cryp_check_ctr_counter(cryp);
+
+	stm32_cryp_irq_write_block(cryp);
+}
+
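+/*
+ * GCM header phase: push the associated data into the input FIFO, zero-pad
+ * the last block, then switch to the payload phase (or directly to the
+ * final phase when there is no payload).
+ */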
+static void stm32_cryp_irq_write_gcm_header(struct stm32_cryp *cryp)
+{
+	unsigned int i, j;
+	u32 cfg, *src;
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+
+	for (i = 0; i < AES_BLOCK_32; i++) {
+		stm32_cryp_write(cryp, CRYP_DIN, *src);
+
+		src = stm32_cryp_next_in(cryp, src, sizeof(u32));
+		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
+
+		/* Check if whole header written */
+		if ((cryp->total_in_save - cryp->total_in) ==
+				cryp->areq->assoclen) {
+			/* Write padding if needed */
+			for (j = i + 1; j < AES_BLOCK_32; j++)
+				stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+			/* Wait for completion */
+			stm32_cryp_wait_busy(cryp);
+
+			if (cryp->areq->cryptlen) {
+				/* Phase 3 : payload */
+				cfg = stm32_cryp_read(cryp, CRYP_CR);
+				cfg &= ~CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+				cfg &= ~CR_PH_MASK;
+				cfg |= CR_PH_PAYLOAD;
+				cfg |= CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+			} else {
+				/* Phase 4 : tag */
+				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+				stm32_cryp_finish_req(cryp);
+			}
+
+			break;
+		}
+
+		if (!cryp->total_in)
+			break;
+	}
+}
+
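+/*
+ * CCM header phase: the B1 block starts with the encoded associated data
+ * length (two bytes when assoclen <= 65280, otherwise 0xFF 0xFE followed by
+ * a four-byte length), immediately followed by the associated data itself.
+ * Zero-pad the last block, then switch to the payload phase (or directly to
+ * the final phase when there is no payload).
+ */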
+static void stm32_cryp_irq_write_ccm_header(struct stm32_cryp *cryp)
+{
+	unsigned int i = 0, j, k;
+	u32 alen, cfg, *src;
+	u8 d8[4];
+
+	src = sg_virt(cryp->in_sg) + _walked_in;
+	alen = cryp->areq->assoclen;
+
+	if (!_walked_in) {
+		if (cryp->areq->assoclen <= 65280) {
+			/* Write first u32 of B1 */
+			d8[0] = (alen >> 8) & 0xFF;
+			d8[1] = alen & 0xFF;
+			d8[2] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+			d8[3] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+		} else {
+			/* Build the two first u32 of B1 */
+			d8[0] = 0xFF;
+			d8[1] = 0xFE;
+			d8[2] = (alen & 0xFF000000) >> 24;
+			d8[3] = (alen & 0x00FF0000) >> 16;
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			d8[0] = (alen & 0x0000FF00) >> 8;
+			d8[1] = alen & 0x000000FF;
+			d8[2] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+			d8[3] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+			i++;
+
+			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
+		}
+	}
+
+	/* Write next u32 */
+	for (; i < AES_BLOCK_32; i++) {
+		/* Build an u32 */
+		memset(d8, 0, sizeof(u32));
+		for (k = 0; k < sizeof(u32); k++) {
+			d8[k] = *((u8 *)src);
+			src = stm32_cryp_next_in(cryp, src, 1);
+
+			cryp->total_in -= min_t(size_t, 1, cryp->total_in);
+			if ((cryp->total_in_save - cryp->total_in) == alen)
+				break;
+		}
+
+		stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
+
+		if ((cryp->total_in_save - cryp->total_in) == alen) {
+			/* Write padding if needed */
+			for (j = i + 1; j < AES_BLOCK_32; j++)
+				stm32_cryp_write(cryp, CRYP_DIN, 0);
+
+			/* Wait for completion */
+			stm32_cryp_wait_busy(cryp);
+
+			if (cryp->areq->cryptlen) {
+				/* Phase 3 : payload */
+				cfg = stm32_cryp_read(cryp, CRYP_CR);
+				cfg &= ~CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+				cfg &= ~CR_PH_MASK;
+				cfg |= CR_PH_PAYLOAD;
+				cfg |= CR_CRYPEN;
+				stm32_cryp_write(cryp, CRYP_CR, cfg);
+			} else {
+				/* Phase 4 : tag */
+				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+				stm32_cryp_finish_req(cryp);
+			}
+
+			break;
+		}
+	}
+}
+
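+/*
+ * Threaded interrupt handler: drain the output FIFO when data is available,
+ * then feed the input FIFO, dispatching to the header or payload write path
+ * for GCM/CCM depending on the current phase.
+ */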
+static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
+{
+	struct stm32_cryp *cryp = arg;
+	u32 ph;
+
+	if (cryp->irq_status & MISR_OUT)
+		/* Output FIFO IRQ: read data */
+		if (unlikely(stm32_cryp_irq_read_data(cryp))) {
+			/* All bytes processed, finish */
+			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
+			stm32_cryp_finish_req(cryp);
+			return IRQ_HANDLED;
+		}
+
+	if (cryp->irq_status & MISR_IN) {
+		if (is_gcm(cryp)) {
+			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+			if (unlikely(ph == CR_PH_HEADER))
+				/* Write Header */
+				stm32_cryp_irq_write_gcm_header(cryp);
+			else
+				/* Input FIFO IRQ: write data */
+				stm32_cryp_irq_write_data(cryp);
+			cryp->gcm_ctr++;
+		} else if (is_ccm(cryp)) {
+			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
+			if (unlikely(ph == CR_PH_HEADER))
+				/* Write Header */
+				stm32_cryp_irq_write_ccm_header(cryp);
+			else
+				/* Input FIFO IRQ: write data */
+				stm32_cryp_irq_write_data(cryp);
+		} else {
+			/* Input FIFO IRQ: write data */
+			stm32_cryp_irq_write_data(cryp);
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t stm32_cryp_irq(int irq, void *arg)
+{
+	struct stm32_cryp *cryp = arg;
+
+	cryp->irq_status = stm32_cryp_read(cryp, CRYP_MISR);
+
+	return IRQ_WAKE_THREAD;
+}
+
+static struct crypto_alg crypto_algs[] = {
+{
+	.cra_name		= "ecb(aes)",
+	.cra_driver_name	= "stm32-ecb-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_ecb_encrypt,
+		.decrypt	= stm32_cryp_aes_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(aes)",
+	.cra_driver_name	= "stm32-cbc-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_cbc_encrypt,
+		.decrypt	= stm32_cryp_aes_cbc_decrypt,
+	}
+},
+{
+	.cra_name		= "ctr(aes)",
+	.cra_driver_name	= "stm32-ctr-aes",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= 1,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_aes_setkey,
+		.encrypt	= stm32_cryp_aes_ctr_encrypt,
+		.decrypt	= stm32_cryp_aes_ctr_decrypt,
+	}
+},
+{
+	.cra_name		= "ecb(des)",
+	.cra_driver_name	= "stm32-ecb-des",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= DES_BLOCK_SIZE,
+		.max_keysize	= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_des_setkey,
+		.encrypt	= stm32_cryp_des_ecb_encrypt,
+		.decrypt	= stm32_cryp_des_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(des)",
+	.cra_driver_name	= "stm32-cbc-des",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= DES_BLOCK_SIZE,
+		.max_keysize	= DES_BLOCK_SIZE,
+		.ivsize		= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_des_setkey,
+		.encrypt	= stm32_cryp_des_cbc_encrypt,
+		.decrypt	= stm32_cryp_des_cbc_decrypt,
+	}
+},
+{
+	.cra_name		= "ecb(des3_ede)",
+	.cra_driver_name	= "stm32-ecb-des3",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= 3 * DES_BLOCK_SIZE,
+		.max_keysize	= 3 * DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_tdes_setkey,
+		.encrypt	= stm32_cryp_tdes_ecb_encrypt,
+		.decrypt	= stm32_cryp_tdes_ecb_decrypt,
+	}
+},
+{
+	.cra_name		= "cbc(des3_ede)",
+	.cra_driver_name	= "stm32-cbc-des3",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
+				  CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= DES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+	.cra_alignmask		= 0xf,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= stm32_cryp_cra_init,
+	.cra_exit		= stm32_cryp_cra_exit,
+	.cra_ablkcipher = {
+		.min_keysize	= 3 * DES_BLOCK_SIZE,
+		.max_keysize	= 3 * DES_BLOCK_SIZE,
+		.ivsize		= DES_BLOCK_SIZE,
+		.setkey		= stm32_cryp_tdes_setkey,
+		.encrypt	= stm32_cryp_tdes_cbc_encrypt,
+		.decrypt	= stm32_cryp_tdes_cbc_decrypt,
+	}
+},
+};
+
+static struct aead_alg aead_algs[] = {
+{
+	.setkey		= stm32_cryp_aes_aead_setkey,
+	.setauthsize	= stm32_cryp_aes_gcm_setauthsize,
+	.encrypt	= stm32_cryp_aes_gcm_encrypt,
+	.decrypt	= stm32_cryp_aes_gcm_decrypt,
+	.init		= stm32_cryp_aes_aead_init,
+	.exit		= stm32_cryp_aes_aead_exit,
+	.ivsize		= 12,
+	.maxauthsize	= AES_BLOCK_SIZE,
+
+	.base = {
+		.cra_name		= "gcm(aes)",
+		.cra_driver_name	= "stm32-gcm-aes",
+		.cra_priority		= 200,
+		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+		.cra_alignmask		= 0xf,
+		.cra_module		= THIS_MODULE,
+	},
+},
+{
+	.setkey		= stm32_cryp_aes_aead_setkey,
+	.setauthsize	= stm32_cryp_aes_ccm_setauthsize,
+	.encrypt	= stm32_cryp_aes_ccm_encrypt,
+	.decrypt	= stm32_cryp_aes_ccm_decrypt,
+	.init		= stm32_cryp_aes_aead_init,
+	.exit		= stm32_cryp_aes_aead_exit,
+	.ivsize		= AES_BLOCK_SIZE,
+	.maxauthsize	= AES_BLOCK_SIZE,
+
+	.base = {
+		.cra_name		= "ccm(aes)",
+		.cra_driver_name	= "stm32-ccm-aes",
+		.cra_priority		= 200,
+		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct stm32_cryp_ctx),
+		.cra_alignmask		= 0xf,
+		.cra_module		= THIS_MODULE,
+	},
+},
+};
+
+static const struct stm32_cryp_caps f7_data = {
+	.swap_final = true,
+	.padding_wa = true,
+};
+
+static const struct of_device_id stm32_dt_ids[] = {
+	{ .compatible = "st,stm32f756-cryp", .data = &f7_data},
+	{},
+};
+MODULE_DEVICE_TABLE(of, stm32_dt_ids);
+
+static int stm32_cryp_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct stm32_cryp *cryp;
+	struct resource *res;
+	struct reset_control *rst;
+	const struct of_device_id *match;
+	int irq, ret;
+
+	cryp = devm_kzalloc(dev, sizeof(*cryp), GFP_KERNEL);
+	if (!cryp)
+		return -ENOMEM;
+
+	match = of_match_device(stm32_dt_ids, dev);
+	if (!match)
+		return -ENODEV;
+
+	cryp->caps = match->data;
+	cryp->dev = dev;
+
+	mutex_init(&cryp->lock);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	cryp->regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(cryp->regs)) {
+		dev_err(dev, "Cannot map CRYP IO\n");
+		return PTR_ERR(cryp->regs);
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "Cannot get IRQ resource\n");
+		return irq;
+	}
+
+	ret = devm_request_threaded_irq(dev, irq, stm32_cryp_irq,
+					stm32_cryp_irq_thread, IRQF_ONESHOT,
+					dev_name(dev), cryp);
+	if (ret) {
+		dev_err(dev, "Cannot grab IRQ\n");
+		return ret;
+	}
+
+	cryp->clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(cryp->clk)) {
+		dev_err(dev, "Could not get clock\n");
+		return PTR_ERR(cryp->clk);
+	}
+
+	ret = clk_prepare_enable(cryp->clk);
+	if (ret) {
+		dev_err(cryp->dev, "Failed to enable clock\n");
+		return ret;
+	}
+
+	rst = devm_reset_control_get(dev, NULL);
+	if (!IS_ERR(rst)) {
+		reset_control_assert(rst);
+		udelay(2);
+		reset_control_deassert(rst);
+	}
+
+	platform_set_drvdata(pdev, cryp);
+
+	spin_lock(&cryp_list.lock);
+	list_add(&cryp->list, &cryp_list.dev_list);
+	spin_unlock(&cryp_list.lock);
+
+	/* Initialize crypto engine */
+	cryp->engine = crypto_engine_alloc_init(dev, 1);
+	if (!cryp->engine) {
+		dev_err(dev, "Could not init crypto engine\n");
+		ret = -ENOMEM;
+		goto err_engine1;
+	}
+
+	cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
+	cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
+	cryp->engine->prepare_aead_request = stm32_cryp_prepare_aead_req;
+	cryp->engine->aead_one_request = stm32_cryp_aead_one_req;
+
+	ret = crypto_engine_start(cryp->engine);
+	if (ret) {
+		dev_err(dev, "Could not start crypto engine\n");
+		goto err_engine2;
+	}
+
+	ret = crypto_register_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+	if (ret) {
+		dev_err(dev, "Could not register algs\n");
+		goto err_algs;
+	}
+
+	ret = crypto_register_aeads(aead_algs, ARRAY_SIZE(aead_algs));
+	if (ret)
+		goto err_aead_algs;
+
+	dev_info(dev, "Initialized\n");
+
+	return 0;
+
+err_aead_algs:
+	crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+err_algs:
+err_engine2:
+	crypto_engine_exit(cryp->engine);
+err_engine1:
+	spin_lock(&cryp_list.lock);
+	list_del(&cryp->list);
+	spin_unlock(&cryp_list.lock);
+
+	clk_disable_unprepare(cryp->clk);
+
+	return ret;
+}
+
+static int stm32_cryp_remove(struct platform_device *pdev)
+{
+	struct stm32_cryp *cryp = platform_get_drvdata(pdev);
+
+	if (!cryp)
+		return -ENODEV;
+
+	crypto_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs));
+	crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs));
+
+	crypto_engine_exit(cryp->engine);
+
+	spin_lock(&cryp_list.lock);
+	list_del(&cryp->list);
+	spin_unlock(&cryp_list.lock);
+
+	clk_disable_unprepare(cryp->clk);
+
+	return 0;
+}
+
+static struct platform_driver stm32_cryp_driver = {
+	.probe  = stm32_cryp_probe,
+	.remove = stm32_cryp_remove,
+	.driver = {
+		.name           = DRIVER_NAME,
+		.of_match_table = stm32_dt_ids,
+	},
+};
+
+module_platform_driver(stm32_cryp_driver);
+
+MODULE_AUTHOR("Fabien Dessenne <fabien.dessenne@st.com>");
+MODULE_DESCRIPTION("STMicroelectronics STM32 CRYP hardware driver");
+MODULE_LICENSE("GPL");
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] dt-bindings: Document STM32 CRYP bindings
  2017-07-13  9:59     ` Fabien Dessenne
@ 2017-07-17 17:49       ` Rob Herring
  -1 siblings, 0 replies; 16+ messages in thread
From: Rob Herring @ 2017-07-17 17:49 UTC (permalink / raw)
  To: Fabien Dessenne
  Cc: Herbert Xu, David S . Miller, Mark Rutland, Maxime Coquelin,
	Alexandre Torgue, linux-crypto, devicetree, linux-arm-kernel,
	linux-kernel, Benjamin Gaignard, Lionel Debieve, Ludovic Barre

On Thu, Jul 13, 2017 at 11:59:38AM +0200, Fabien Dessenne wrote:
> Document device tree bindings for the STM32 CRYP.
> 
> Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
> ---
>  .../devicetree/bindings/crypto/st,stm32-cryp.txt     | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt
> 
> diff --git a/Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt b/Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt
> new file mode 100644
> index 0000000..f631c37
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt
> @@ -0,0 +1,20 @@
> +* STMicroelectronics STM32 CRYP
> +
> +Required properties:
> +- compatible: Should be "st,stm32f756-cryp".
> +- reg: The address and length of the peripheral register space
> +- clocks: The input clock of the CRYP instance
> +- interrupts: The CRYP interrupts

More than 1? How many?

> +
> +Optional properties:
> +- resets: The input reset of the CRYP instance
> +
> +Example:
> +cryp1: cryp@50060000 {
> +	compatible = "st,stm32f756-cryp";
> +	reg = <0x50060000 0x400>;
> +	interrupts = <79>;
> +	clocks = <&rcc 0 STM32F7_AHB2_CLOCK(CRYP)>;
> +	resets = <&rcc STM32F7_AHB2_RESET(CRYP)>;
> +	status = "disabled";
> +};
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 0/3] STM32 CRYP crypto driver
  2017-07-13  9:59 ` Fabien Dessenne
                   ` (4 preceding siblings ...)
  (?)
@ 2017-09-18  9:20 ` Fabien DESSENNE
  -1 siblings, 0 replies; 16+ messages in thread
From: Fabien DESSENNE @ 2017-09-18  9:20 UTC (permalink / raw)
  To: Herbert Xu, David S . Miller, Rob Herring, Mark Rutland,
	Maxime Coquelin, Alexandre TORGUE, linux-crypto, devicetree,
	linux-arm-kernel, linux-kernel
  Cc: Benjamin GAIGNARD, Lionel DEBIEVE, Ludovic BARRE


Just a gentle ping ... or have I missed out on a reply?

On 13/07/17 11:59, Fabien Dessenne wrote:

This set of patches adds a new crypto driver for STMicroelectronics stm32 HW.
This drivers uses the crypto API and provides with HW-enabled AEAD and block
cipher algorithms.
It makes use of the crypto engine which is upgraded in order to support AEAD
requests.

This driver was successfully tested with tcrypt / testmgr.

Note:
Since two other set of patches (update of STM32 CRC32 and addition of STM32
HASH) are being proposed, it may happen that there are some minor conflicts in
'Kconfig' and 'Makefile'. In that case, I will fix the issue in due course.

Fabien Dessenne (3):
  crypto: engine - permit to enqueue aead_request
  dt-bindings: Document STM32 CRYP bindings
  crypto: stm32 - Support for STM32 CRYP crypto module

 .../devicetree/bindings/crypto/st,stm32-cryp.txt   |   20 +
 crypto/crypto_engine.c                             |  101 +
 drivers/crypto/stm32/Kconfig                       |    9 +
 drivers/crypto/stm32/Makefile                      |    1 +
 drivers/crypto/stm32/stm32-cryp.c                  | 1962 ++++++++++++++++++++
 include/crypto/engine.h                            |   16 +
 6 files changed, 2109 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/st,stm32-cryp.txt
 create mode 100644 drivers/crypto/stm32/stm32-cryp.c





^ permalink raw reply	[flat|nested] 16+ messages in thread

