* [PATCH 0/2] crypto: add new driver for Marvell CESA
@ 2015-04-09 14:58 ` Boris Brezillon
  0 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-09 14:58 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, linux-crypto
  Cc: Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree, Tawfik Bayouk, Lior Amsalem, Nadav Haklai,
	Eran Ben-Avi, Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard, Boris Brezillon

Hello,

This is an attempt to replace the mv_cesa driver with a new one to address
some limitations of the existing driver.
From a performance and CPU load point of view the most important
limitation is the lack of DMA support, which prevents us from chaining
crypto operations.

I know we usually try to adapt existing drivers instead of replacing them
with new ones, but after trying to refactor the mv_cesa driver I realized it
would take longer than writing a new one from scratch.

Here are the main features brought by this new driver (a short usage
sketch follows the list):
- support for Armada SoCs (up to 38x) while keeping support for older ones
  (Orion and Kirkwood)
- a DMA mode to offload the CPU in case of intensive crypto usage
- new algorithms: SHA256, DES and 3DES
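
Nothing changes from a crypto API user's point of view: the algorithms are
still reached through the regular asynchronous crypto API. As a purely
illustrative sketch (it is not part of this series, and names such as
sha256_digest_example() are made up for the example), hashing a buffer with
whatever sha256 implementation the crypto core selects (the CESA one when
it has the highest priority) would look like this:

#include <crypto/hash.h>
#include <crypto/sha.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Completion-based wait, the same pattern used by tcrypt/testmgr. */
struct sha256_result {
	struct completion completion;
	int err;
};

static void sha256_done(struct crypto_async_request *req, int err)
{
	struct sha256_result *res = req->data;

	if (err == -EINPROGRESS)
		return;
	res->err = err;
	complete(&res->completion);
}

static int sha256_digest_example(const void *buf, unsigned int len,
				 u8 digest[SHA256_DIGEST_SIZE])
{
	struct sha256_result result;
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	int ret;

	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_free_tfm;
	}

	init_completion(&result.completion);
	sg_init_one(&sg, buf, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   sha256_done, &result);
	ahash_request_set_crypt(req, &sg, digest, len);

	/* -EINPROGRESS means the request was queued on the engine. */
	ret = crypto_ahash_digest(req);
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		wait_for_completion(&result.completion);
		ret = result.err;
	}

	ahash_request_free(req);
out_free_tfm:
	crypto_free_ahash(tfm);
	return ret;
}

The request is processed asynchronously: crypto_ahash_digest() typically
returns -EINPROGRESS and the completion callback fires from the driver's
interrupt handler once the operation (or the TDMA chain, in DMA mode) has
been processed.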

I'd like to thank Arnaud, who has carefully reviewed several iterations of
this driver, helped me improve my implementation, provided support for
several crypto algorithms, added support for Armada 370 and tested
the driver on different platforms, hence the SoB and the dual MODULE_AUTHOR
in the driver code.

Best Regards,

Boris

Boris Brezillon (2):
  crypto: add new driver for Marvell CESA
  crypto: marvell/CESA: update DT bindings documentation

 .../devicetree/bindings/crypto/mv_cesa.txt         |   50 +-
 drivers/crypto/Kconfig                             |    2 +
 drivers/crypto/Makefile                            |    2 +-
 drivers/crypto/marvell/Makefile                    |    1 +
 drivers/crypto/marvell/cesa.c                      |  539 ++++++++
 drivers/crypto/marvell/cesa.h                      |  802 ++++++++++++
 drivers/crypto/marvell/cipher.c                    |  761 +++++++++++
 drivers/crypto/marvell/hash.c                      | 1349 ++++++++++++++++++++
 drivers/crypto/marvell/tdma.c                      |  223 ++++
 drivers/crypto/mv_cesa.c                           | 1193 -----------------
 drivers/crypto/mv_cesa.h                           |  150 ---
 11 files changed, 3716 insertions(+), 1356 deletions(-)
 create mode 100644 drivers/crypto/marvell/Makefile
 create mode 100644 drivers/crypto/marvell/cesa.c
 create mode 100644 drivers/crypto/marvell/cesa.h
 create mode 100644 drivers/crypto/marvell/cipher.c
 create mode 100644 drivers/crypto/marvell/hash.c
 create mode 100644 drivers/crypto/marvell/tdma.c
 delete mode 100644 drivers/crypto/mv_cesa.c
 delete mode 100644 drivers/crypto/mv_cesa.h

-- 
1.9.1

* [PATCH 1/2] crypto: add new driver for Marvell CESA
  2015-04-09 14:58 ` Boris Brezillon
@ 2015-04-09 14:58 ` Boris Brezillon
  -1 siblings, 1 reply; 67+ messages in thread
From: Boris Brezillon @ 2015-04-09 14:58 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, linux-crypto
  Cc: Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree, Tawfik Bayouk, Lior Amsalem, Nadav Haklai,
	Eran Ben-Avi, Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard, Boris Brezillon

The existing mv_cesa driver supports some features of the CESA IP but is
quite limited, and reworking it to support new features (like involving the
TDMA engine to offload the CPU) is almost impossible.
This driver has been rewritten from scratch to take those new features into
account.

This new driver adds support for:
- the newer Armada SoCs (up to 38x) while keeping support for older ones
  (Orion and Kirkwood)
- a DMA mode to offload the CPU in case of intensive crypto usage
- new algorithms: SHA256, DES and 3DES (see the usage sketch below)
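
As a purely illustrative sketch (not part of the patch itself, every
identifier in it is made up for the example), a kernel user reaches those
cipher algorithms through the regular asynchronous blkcipher API, e.g. to
queue a single cbc(aes) encryption and wait for the engine to complete it:

#include <crypto/aes.h>
#include <linux/completion.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct cbc_aes_result {
	struct completion completion;
	int err;
};

static void cbc_aes_done(struct crypto_async_request *req, int err)
{
	struct cbc_aes_result *res = req->data;

	if (err == -EINPROGRESS)
		return;
	res->err = err;
	complete(&res->completion);
}

static int cbc_aes_encrypt_example(struct scatterlist *src,
				   struct scatterlist *dst,
				   unsigned int nbytes,
				   const u8 *key, u8 *iv)
{
	struct cbc_aes_result result;
	struct crypto_ablkcipher *tfm;
	struct ablkcipher_request *req;
	int ret;

	tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_ablkcipher_setkey(tfm, key, AES_KEYSIZE_128);
	if (ret)
		goto out_free_tfm;

	req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_free_tfm;
	}

	init_completion(&result.completion);
	ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					cbc_aes_done, &result);
	ablkcipher_request_set_crypt(req, src, dst, nbytes, iv);

	/* -EINPROGRESS means the request was queued on a CESA engine. */
	ret = crypto_ablkcipher_encrypt(req);
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		wait_for_completion(&result.completion);
		ret = result.err;
	}

	ablkcipher_request_free(req);
out_free_tfm:
	crypto_free_ablkcipher(tfm);
	return ret;
}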

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
---
 drivers/crypto/Kconfig          |    2 +
 drivers/crypto/Makefile         |    2 +-
 drivers/crypto/marvell/Makefile |    1 +
 drivers/crypto/marvell/cesa.c   |  539 ++++++++++++++++
 drivers/crypto/marvell/cesa.h   |  802 +++++++++++++++++++++++
 drivers/crypto/marvell/cipher.c |  761 ++++++++++++++++++++++
 drivers/crypto/marvell/hash.c   | 1349 +++++++++++++++++++++++++++++++++++++++
 drivers/crypto/marvell/tdma.c   |  223 +++++++
 drivers/crypto/mv_cesa.c        | 1193 ----------------------------------
 drivers/crypto/mv_cesa.h        |  150 -----
 10 files changed, 3678 insertions(+), 1344 deletions(-)
 create mode 100644 drivers/crypto/marvell/Makefile
 create mode 100644 drivers/crypto/marvell/cesa.c
 create mode 100644 drivers/crypto/marvell/cesa.h
 create mode 100644 drivers/crypto/marvell/cipher.c
 create mode 100644 drivers/crypto/marvell/hash.c
 create mode 100644 drivers/crypto/marvell/tdma.c
 delete mode 100644 drivers/crypto/mv_cesa.c
 delete mode 100644 drivers/crypto/mv_cesa.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 2fb0fdf..a3f61ab 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -164,8 +164,10 @@ config CRYPTO_DEV_MV_CESA
 	depends on PLAT_ORION
 	select CRYPTO_ALGAPI
 	select CRYPTO_AES
+	select CRYPTO_DES
 	select CRYPTO_BLKCIPHER2
 	select CRYPTO_HASH
+	select SRAM
 	help
 	  This driver allows you to utilize the Cryptographic Engines and
 	  Security Accelerator (CESA) which can be found on the Marvell Orion
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 3924f93..77a56aa 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam/
 obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
 obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
 obj-$(CONFIG_CRYPTO_DEV_IXP4XX) += ixp4xx_crypto.o
-obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += mv_cesa.o
+obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += marvell/
 obj-$(CONFIG_CRYPTO_DEV_MXS_DCP) += mxs-dcp.o
 obj-$(CONFIG_CRYPTO_DEV_NIAGARA2) += n2_crypto.o
 n2_crypto-y := n2_core.o n2_asm.o
diff --git a/drivers/crypto/marvell/Makefile b/drivers/crypto/marvell/Makefile
new file mode 100644
index 0000000..a241e94
--- /dev/null
+++ b/drivers/crypto/marvell/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += cesa.o cipher.o hash.o tdma.o
diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
new file mode 100644
index 0000000..0cefba8
--- /dev/null
+++ b/drivers/crypto/marvell/cesa.c
@@ -0,0 +1,539 @@
+/*
+ * Support for Marvell's Cryptographic Engine and Security Accelerator (CESA)
+ * that can be found on the following platforms: Orion, Kirkwood, Armada. This
+ * driver supports the TDMA engine on platforms on which it is available.
+ *
+ * Author: Boris Brezillon <boris.brezillon@free-electrons.com>
+ * Author: Arnaud Ebalard <arno@natisbad.org>
+ *
+ * This work is based on an initial version written by
+ * Sebastian Andrzej Siewior < sebastian at breakpoint dot cc >
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+
+#include <linux/delay.h>
+#include <linux/genalloc.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kthread.h>
+#include <linux/mbus.h>
+#include <linux/platform_device.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/clk.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/of_irq.h>
+
+#include "cesa.h"
+
+struct mv_cesa_dev *cesa_dev;
+
+static void mv_cesa_dequeue_req_unlocked(struct mv_cesa_engine *engine)
+{
+	struct crypto_async_request *req;
+	struct mv_cesa_ctx *ctx;
+
+	spin_lock_bh(&cesa_dev->lock);
+	req = crypto_dequeue_request(&cesa_dev->queue);
+	engine->req = req;
+	spin_unlock_bh(&cesa_dev->lock);
+
+	if (!req)
+		return;
+
+	ctx = crypto_tfm_ctx(req->tfm);
+	ctx->ops->prepare(req, engine);
+	ctx->ops->step(req);
+}
+
+static irqreturn_t mv_cesa_int(int irq, void *priv)
+{
+	struct mv_cesa_engine *engine = priv;
+	struct crypto_async_request *req;
+	struct mv_cesa_ctx *ctx;
+	u32 status, mask;
+	irqreturn_t ret = IRQ_NONE;
+
+	while (true) {
+		int res;
+
+		mask = mv_cesa_get_int_mask(engine);
+		status = readl(engine->regs + CESA_SA_INT_STATUS);
+
+		if (!(status & mask))
+			break;
+
+		/*
+		 * TODO: avoid clearing the FPGA_INT_STATUS if this is not
+		 * relevant on some platforms.
+		 */
+		writel(~status, engine->regs + CESA_SA_FPGA_INT_STATUS);
+		writel(~status, engine->regs + CESA_SA_INT_STATUS);
+
+		ret = IRQ_HANDLED;
+		spin_lock_bh(&engine->lock);
+		req = engine->req;
+		spin_unlock_bh(&engine->lock);
+		if (req) {
+			ctx = crypto_tfm_ctx(req->tfm);
+			res = ctx->ops->process(req, status & mask);
+			if (res != -EINPROGRESS) {
+				spin_lock_bh(&engine->lock);
+				engine->req = NULL;
+				mv_cesa_dequeue_req_unlocked(engine);
+				spin_unlock_bh(&engine->lock);
+				ctx->ops->cleanup(req);
+				local_bh_disable();
+				req->complete(req, res);
+				local_bh_enable();
+			} else {
+				ctx->ops->step(req);
+			}
+		}
+	}
+
+	return ret;
+}
+
+int mv_cesa_queue_req(struct crypto_async_request *req)
+{
+	int ret;
+	int i;
+
+	spin_lock_bh(&cesa_dev->lock);
+	ret = crypto_enqueue_request(&cesa_dev->queue, req);
+	spin_unlock_bh(&cesa_dev->lock);
+
+	if (ret != -EINPROGRESS)
+		return ret;
+
+	for (i = 0; i < cesa_dev->caps->nengines; i++) {
+		spin_lock_bh(&cesa_dev->engines[i].lock);
+		if (!cesa_dev->engines[i].req)
+			mv_cesa_dequeue_req_unlocked(&cesa_dev->engines[i]);
+		spin_unlock_bh(&cesa_dev->engines[i].lock);
+	}
+
+	return -EINPROGRESS;
+}
+
+static int mv_cesa_add_algs(struct mv_cesa_dev *cesa)
+{
+	int ret;
+	int i, j;
+
+	for (i = 0; i < cesa->caps->ncipher_algs; i++) {
+		ret = crypto_register_alg(cesa->caps->cipher_algs[i]);
+		if (ret)
+			goto err_unregister_crypto;
+	}
+
+	for (i = 0; i < cesa->caps->nahash_algs; i++) {
+		ret = crypto_register_ahash(cesa->caps->ahash_algs[i]);
+		if (ret)
+			goto err_unregister_ahash;
+	}
+
+	return 0;
+
+err_unregister_ahash:
+	for (j = 0; j < i; j++)
+		crypto_unregister_ahash(cesa->caps->ahash_algs[j]);
+	i = cesa->caps->ncipher_algs;
+
+err_unregister_crypto:
+	for (j = 0; j < i; j++)
+		crypto_unregister_alg(cesa->caps->cipher_algs[j]);
+
+	return ret;
+}
+
+static void mv_cesa_remove_algs(struct mv_cesa_dev *cesa)
+{
+	int i;
+
+	for (i = 0; i < cesa->caps->nahash_algs; i++)
+		crypto_unregister_ahash(cesa->caps->ahash_algs[i]);
+
+	for (i = 0; i < cesa->caps->ncipher_algs; i++)
+		crypto_unregister_alg(cesa->caps->cipher_algs[i]);
+}
+
+static struct crypto_alg *orion_cipher_algs[] = {
+	&mv_cesa_ecb_des_alg,
+	&mv_cesa_cbc_des_alg,
+	&mv_cesa_ecb_des3_ede_alg,
+	&mv_cesa_cbc_des3_ede_alg,
+	&mv_cesa_ecb_aes_alg,
+	&mv_cesa_cbc_aes_alg,
+};
+
+static struct ahash_alg *orion_ahash_algs[] = {
+	&mv_md5_alg,
+	&mv_sha1_alg,
+	&mv_ahmac_md5_alg,
+	&mv_ahmac_sha1_alg,
+};
+
+static struct crypto_alg *armada_370_cipher_algs[] = {
+	&mv_cesa_ecb_des_alg,
+	&mv_cesa_cbc_des_alg,
+	&mv_cesa_ecb_des3_ede_alg,
+	&mv_cesa_cbc_des3_ede_alg,
+	&mv_cesa_ecb_aes_alg,
+	&mv_cesa_cbc_aes_alg,
+};
+
+static struct ahash_alg *armada_370_ahash_algs[] = {
+	&mv_md5_alg,
+	&mv_sha1_alg,
+	&mv_sha256_alg,
+	&mv_ahmac_md5_alg,
+	&mv_ahmac_sha1_alg,
+	&mv_ahmac_sha256_alg,
+};
+
+static const struct mv_cesa_caps orion_caps = {
+	.nengines = 1,
+	.cipher_algs = orion_cipher_algs,
+	.ncipher_algs = ARRAY_SIZE(orion_cipher_algs),
+	.ahash_algs = orion_ahash_algs,
+	.nahash_algs = ARRAY_SIZE(orion_ahash_algs),
+	.has_tdma = false,
+};
+
+static const struct mv_cesa_caps kirkwood_caps = {
+	.nengines = 1,
+	.cipher_algs = orion_cipher_algs,
+	.ncipher_algs = ARRAY_SIZE(orion_cipher_algs),
+	.ahash_algs = orion_ahash_algs,
+	.nahash_algs = ARRAY_SIZE(orion_ahash_algs),
+	.has_tdma = true,
+};
+
+static const struct mv_cesa_caps armada_370_caps = {
+	.nengines = 1,
+	.cipher_algs = armada_370_cipher_algs,
+	.ncipher_algs = ARRAY_SIZE(armada_370_cipher_algs),
+	.ahash_algs = armada_370_ahash_algs,
+	.nahash_algs = ARRAY_SIZE(armada_370_ahash_algs),
+	.has_tdma = true,
+};
+
+static const struct mv_cesa_caps armada_xp_caps = {
+	.nengines = 2,
+	.cipher_algs = armada_370_cipher_algs,
+	.ncipher_algs = ARRAY_SIZE(armada_370_cipher_algs),
+	.ahash_algs = armada_370_ahash_algs,
+	.nahash_algs = ARRAY_SIZE(armada_370_ahash_algs),
+	.has_tdma = true,
+};
+
+static const struct of_device_id mv_cesa_of_match_table[] = {
+	{ .compatible = "marvell,orion-crypto", .data = &orion_caps },
+	{ .compatible = "marvell,kirkwood-crypto", .data = &kirkwood_caps },
+	{ .compatible = "marvell,armada-370-crypto", .data = &armada_370_caps },
+	{ .compatible = "marvell,armada-xp-crypto", .data = &armada_xp_caps },
+	{ .compatible = "marvell,armada-375-crypto", .data = &armada_xp_caps },
+	{ .compatible = "marvell,armada-38x-crypto", .data = &armada_xp_caps },
+	{}
+};
+MODULE_DEVICE_TABLE(of, mv_cesa_of_match_table);
+
+static void
+mv_cesa_conf_mbus_windows(struct mv_cesa_engine *engine,
+			  const struct mbus_dram_target_info *dram)
+{
+	void __iomem *iobase = engine->regs;
+	int i;
+
+	for (i = 0; i < 4; i++) {
+		writel(0, iobase + CESA_TDMA_WINDOW_CTRL(i));
+		writel(0, iobase + CESA_TDMA_WINDOW_BASE(i));
+	}
+
+	for (i = 0; i < dram->num_cs; i++) {
+		const struct mbus_dram_window *cs = dram->cs + i;
+
+		writel(((cs->size - 1) & 0xffff0000) |
+		       (cs->mbus_attr << 8) |
+		       (dram->mbus_dram_target_id << 4) | 1,
+		       iobase + CESA_TDMA_WINDOW_CTRL(i));
+		writel(cs->base, iobase + CESA_TDMA_WINDOW_BASE(i));
+	}
+}
+
+static int mv_cesa_dev_dma_init(struct mv_cesa_dev *cesa)
+{
+	struct device *dev = cesa->dev;
+	struct mv_cesa_dev_dma *dma;
+
+	if (!cesa->caps->has_tdma)
+		return 0;
+
+	dma = devm_kzalloc(dev, sizeof(*dma), GFP_KERNEL);
+	if (!dma)
+		return -ENOMEM;
+
+	dma->tdma_desc_pool = dmam_pool_create("tdma_desc", dev,
+					sizeof(struct mv_cesa_tdma_desc),
+					16, 0);
+	if (!dma->tdma_desc_pool)
+		return -ENOMEM;
+
+	dma->op_pool = dmam_pool_create("cesa_op", dev,
+					sizeof(struct mv_cesa_op_ctx), 16, 0);
+	if (!dma->op_pool)
+		return -ENOMEM;
+
+	dma->cache_pool = dmam_pool_create("cesa_cache", dev,
+					   CESA_MAX_HASH_BLOCK_SIZE, 1, 0);
+	if (!dma->cache_pool)
+		return -ENOMEM;
+
+	dma->padding_pool = dmam_pool_create("cesa_padding", dev, 72, 1, 0);
+	if (!dma->padding_pool)
+		return -ENOMEM;
+
+	cesa->dma = dma;
+
+	return 0;
+}
+
+static int mv_cesa_get_sram(struct platform_device *pdev, int idx)
+{
+	struct mv_cesa_dev *cesa = platform_get_drvdata(pdev);
+	struct mv_cesa_engine *engine = &cesa->engines[idx];
+	const char *res_name = "sram";
+	struct resource *res;
+
+	engine->pool = of_get_named_gen_pool(cesa->dev->of_node,
+					     "marvell,crypto-srams",
+					     idx);
+	if (engine->pool) {
+		engine->sram = gen_pool_dma_alloc(engine->pool,
+						  cesa->sram_size,
+						  &engine->sram_dma);
+		if (engine->sram)
+			return 0;
+
+		engine->pool = NULL;
+		return -ENOMEM;
+	}
+
+	if (cesa->caps->nengines > 1) {
+		if (!idx)
+			res_name = "sram0";
+		else
+			res_name = "sram1";
+	}
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+					   res_name);
+	if (!res || resource_size(res) < cesa->sram_size)
+		return -EINVAL;
+
+	engine->sram = devm_ioremap_resource(cesa->dev, res);
+	if (IS_ERR(engine->sram))
+		return PTR_ERR(engine->sram);
+
+	engine->sram_dma = phys_to_dma(cesa->dev,
+				       (phys_addr_t)res->start);
+
+	return 0;
+}
+
+static void mv_cesa_put_sram(struct platform_device *pdev, int idx)
+{
+	struct mv_cesa_dev *cesa = platform_get_drvdata(pdev);
+	struct mv_cesa_engine *engine = &cesa->engines[idx];
+
+	if (!engine->pool)
+		return;
+
+	gen_pool_free(engine->pool, (unsigned long)engine->sram,
+		      cesa->sram_size);
+}
+
+static int mv_cesa_probe(struct platform_device *pdev)
+{
+	const struct mv_cesa_caps *caps = &armada_xp_caps;
+	const struct mbus_dram_target_info *dram;
+	const struct of_device_id *match;
+	struct device *dev = &pdev->dev;
+	struct mv_cesa_dev *cesa;
+	struct mv_cesa_engine *engines;
+	struct resource *res;
+	int irq, ret, i;
+	u32 sram_size;
+
+	if (cesa_dev) {
+		dev_err(&pdev->dev, "Only one CESA device authorized\n");
+		return -EEXIST;
+	}
+
+	if (dev->of_node) {
+		match = of_match_node(mv_cesa_of_match_table, dev->of_node);
+		if (match && match->data)
+			caps = match->data;
+	}
+
+	cesa = devm_kzalloc(dev, sizeof(*cesa), GFP_KERNEL);
+	if (!cesa)
+		return -ENOMEM;
+
+	cesa->caps = caps;
+	cesa->dev = dev;
+
+	sram_size = CESA_SA_DEFAULT_SRAM_SIZE;
+	of_property_read_u32(cesa->dev->of_node, "marvell,crypto-sram-size",
+			     &sram_size);
+	if (sram_size < CESA_SA_MIN_SRAM_SIZE)
+		sram_size = CESA_SA_MIN_SRAM_SIZE;
+
+	cesa->sram_size = sram_size;
+	cesa->engines = devm_kzalloc(dev, caps->nengines * sizeof(*engines),
+				     GFP_KERNEL);
+	if (!cesa->engines)
+		return -ENOMEM;
+
+	spin_lock_init(&cesa->lock);
+	crypto_init_queue(&cesa->queue, 50);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
+	cesa->regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(cesa->regs))
+		return PTR_ERR(cesa->regs);
+
+	ret = mv_cesa_dev_dma_init(cesa);
+	if (ret)
+		return ret;
+
+	dram = mv_mbus_dram_info();
+
+	platform_set_drvdata(pdev, cesa);
+
+	for (i = 0; i < caps->nengines; i++) {
+		struct mv_cesa_engine *engine = &cesa->engines[i];
+		char res_name[7];
+
+		engine->id = i;
+		spin_lock_init(&engine->lock);
+
+		ret = mv_cesa_get_sram(pdev, i);
+		if (ret)
+			goto err_cleanup;
+
+		irq = platform_get_irq(pdev, i);
+		if (irq < 0) {
+			ret = irq;
+			goto err_cleanup;
+		}
+
+		/*
+		 * Not all platforms can gate the CESA clocks: do not complain
+		 * if the clock does not exist.
+		 */
+		snprintf(res_name, sizeof(res_name), "cesa%d", i);
+		engine->clk = devm_clk_get(dev, res_name);
+		if (IS_ERR(engine->clk)) {
+			engine->clk = devm_clk_get(dev, NULL);
+			if (IS_ERR(engine->clk))
+				engine->clk = NULL;
+		}
+
+		snprintf(res_name, sizeof(res_name), "cesaz%d", i);
+		engine->zclk = devm_clk_get(dev, res_name);
+		if (IS_ERR(engine->zclk))
+			engine->zclk = NULL;
+
+		ret = clk_prepare_enable(engine->clk);
+		if (ret)
+			goto err_cleanup;
+
+		ret = clk_prepare_enable(engine->zclk);
+		if (ret)
+			goto err_cleanup;
+
+		engine->regs = cesa->regs + CESA_ENGINE_OFF(i);
+
+		if (dram)
+			mv_cesa_conf_mbus_windows(&cesa->engines[i], dram);
+
+		writel(0, cesa->engines[i].regs + CESA_SA_INT_STATUS);
+		writel(CESA_SA_CFG_STOP_DIG_ERR,
+		       cesa->engines[i].regs + CESA_SA_CFG);
+		writel(engine->sram_dma & CESA_SA_SRAM_MSK,
+		       cesa->engines[i].regs + CESA_SA_DESC_P0);
+
+		ret = devm_request_threaded_irq(dev, irq, NULL, mv_cesa_int,
+						IRQF_ONESHOT,
+						dev_name(&pdev->dev),
+						&cesa->engines[i]);
+		if (ret)
+			goto err_cleanup;
+	}
+
+	cesa_dev = cesa;
+
+	ret = mv_cesa_add_algs(cesa);
+	if (ret) {
+		cesa_dev = NULL;
+		goto err_cleanup;
+	}
+
+	dev_info(dev, "CESA device successfully registered\n");
+
+	return 0;
+
+err_cleanup:
+	for (i = 0; i < caps->nengines; i++) {
+		clk_disable_unprepare(cesa->engines[i].zclk);
+		clk_disable_unprepare(cesa->engines[i].clk);
+		mv_cesa_put_sram(pdev, i);
+	}
+
+	return ret;
+}
+
+static int mv_cesa_remove(struct platform_device *pdev)
+{
+	struct mv_cesa_dev *cesa = platform_get_drvdata(pdev);
+	int i;
+
+	mv_cesa_remove_algs(cesa);
+
+	for (i = 0; i < cesa->caps->nengines; i++) {
+		clk_disable_unprepare(cesa->engines[i].zclk);
+		clk_disable_unprepare(cesa->engines[i].clk);
+		mv_cesa_put_sram(pdev, i);
+	}
+
+	return 0;
+}
+
+
+static struct platform_driver marvell_cesa = {
+	.probe		= mv_cesa_probe,
+	.remove		= mv_cesa_remove,
+	.driver		= {
+		.owner	= THIS_MODULE,
+		.name	= "mv_crypto",
+		.of_match_table = mv_cesa_of_match_table,
+	},
+};
+MODULE_ALIAS("platform:mv_crypto");
+
+module_platform_driver(marvell_cesa);
+
+MODULE_AUTHOR("Boris Brezillon <boris.brezillon@free-electrons.com>");
+MODULE_AUTHOR("Arnaud Ebalard <arno@natisbad.org>");
+MODULE_DESCRIPTION("Support for Marvell's cryptographic engine");
+MODULE_LICENSE("GPL");
diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
new file mode 100644
index 0000000..008c964
--- /dev/null
+++ b/drivers/crypto/marvell/cesa.h
@@ -0,0 +1,802 @@
+#ifndef __MARVELL_CESA_H__
+#define __MARVELL_CESA_H__
+
+#include <crypto/algapi.h>
+#include <crypto/hash.h>
+#include <crypto/internal/hash.h>
+
+#include <linux/crypto.h>
+#include <linux/dmapool.h>
+
+#define CESA_ENGINE_OFF(i)			(((i) * 0x2000))
+
+#define CESA_TDMA_BYTE_CNT			0x800
+#define CESA_TDMA_SRC_ADDR			0x810
+#define CESA_TDMA_DST_ADDR			0x820
+#define CESA_TDMA_NEXT_ADDR			0x830
+
+#define CESA_TDMA_CONTROL			0x840
+#define CESA_TDMA_DST_BURST			GENMASK(2, 0)
+#define CESA_TDMA_DST_BURST_32B			3
+#define CESA_TDMA_DST_BURST_128B		4
+#define CESA_TDMA_OUT_RD_EN			BIT(4)
+#define CESA_TDMA_SRC_BURST			GENMASK(8, 6)
+#define CESA_TDMA_SRC_BURST_32B			(3 << 6)
+#define CESA_TDMA_SRC_BURST_128B		(4 << 6)
+#define CESA_TDMA_CHAIN				BIT(9)
+#define CESA_TDMA_BYTE_SWAP			BIT(11)
+#define CESA_TDMA_NO_BYTE_SWAP			BIT(11)
+#define CESA_TDMA_EN				BIT(12)
+#define CESA_TDMA_FETCH_ND			BIT(13)
+#define CESA_TDMA_ACT				BIT(14)
+
+#define CESA_TDMA_CUR				0x870
+#define CESA_TDMA_ERROR_CAUSE			0x8c8
+#define CESA_TDMA_ERROR_MSK			0x8cc
+
+#define CESA_TDMA_WINDOW_BASE(x)		(((x) * 0x8) + 0xa00)
+#define CESA_TDMA_WINDOW_CTRL(x)		(((x) * 0x8) + 0xa04)
+
+#define CESA_IVDIG(x)				(0xdd00 + ((x) * 4) +	\
+						 (((x) < 5) ? 0 : 0x14))
+
+#define CESA_SA_CMD				0xde00
+#define CESA_SA_CMD_EN_CESA_SA_ACCL0		BIT(0)
+#define CESA_SA_CMD_EN_CESA_SA_ACCL1		BIT(1)
+#define CESA_SA_CMD_DISABLE_SEC			BIT(2)
+
+#define CESA_SA_DESC_P0				0xde04
+
+#define CESA_SA_DESC_P1				0xde14
+
+#define CESA_SA_CFG				0xde08
+#define CESA_SA_CFG_STOP_DIG_ERR		GENMASK(1, 0)
+#define CESA_SA_CFG_DIG_ERR_CONT		0
+#define CESA_SA_CFG_DIG_ERR_SKIP		1
+#define CESA_SA_CFG_DIG_ERR_STOP		3
+#define CESA_SA_CFG_CH0_W_IDMA			BIT(7)
+#define CESA_SA_CFG_CH1_W_IDMA			BIT(8)
+#define CESA_SA_CFG_ACT_CH0_IDMA		BIT(9)
+#define CESA_SA_CFG_ACT_CH1_IDMA		BIT(10)
+#define CESA_SA_CFG_MULTI_PKT			BIT(11)
+#define CESA_SA_CFG_PARA_DIS			BIT(13)
+
+#define CESA_SA_ACCEL_STATUS			0xde0c
+#define CESA_SA_ST_ACT_0			(1 << 0)
+#define CESA_SA_ST_ACT_1			(1 << 1)
+
+/*
+ * CESA_SA_FPGA_INT_STATUS looks like an FPGA leftover and is documented only
+ * in Errata 4.12. It looks like it was part of an IRQ controller in the FPGA
+ * and someone forgot to remove it while switching to the core and moving to
+ * CESA_SA_INT_STATUS.
+ */
+#define CESA_SA_FPGA_INT_STATUS			0xdd68
+#define CESA_SA_INT_STATUS			0xde20
+#define CESA_SA_INT_AUTH_DONE			BIT(0)
+#define CESA_SA_INT_DES_E_DONE			BIT(1)
+#define CESA_SA_INT_AES_E_DONE			BIT(2)
+#define CESA_SA_INT_AES_D_DONE			BIT(3)
+#define CESA_SA_INT_ENC_DONE			BIT(4)
+#define CESA_SA_INT_ACCEL0_DONE			BIT(5)
+#define CESA_SA_INT_ACCEL1_DONE			BIT(6)
+#define CESA_SA_INT_ACC0_IDMA_DONE		BIT(7)
+#define CESA_SA_INT_ACC1_IDMA_DONE		BIT(8)
+#define CESA_SA_INT_IDMA_DONE			BIT(9)
+#define CESA_SA_INT_IDMA_OWN_ERR		BIT(10)
+
+#define CESA_SA_INT_MSK				0xde24
+
+#define CESA_SA_DESC_CFG_OP_MAC_ONLY		0
+#define CESA_SA_DESC_CFG_OP_CRYPT_ONLY		1
+#define CESA_SA_DESC_CFG_OP_MAC_CRYPT		2
+#define CESA_SA_DESC_CFG_OP_CRYPT_MAC		3
+#define CESA_SA_DESC_CFG_OP_MSK			GENMASK(1, 0)
+#define CESA_SA_DESC_CFG_MACM_SHA256		(1 << 4)
+#define CESA_SA_DESC_CFG_MACM_HMAC_SHA256	(3 << 4)
+#define CESA_SA_DESC_CFG_MACM_MD5		(4 << 4)
+#define CESA_SA_DESC_CFG_MACM_SHA1		(5 << 4)
+#define CESA_SA_DESC_CFG_MACM_HMAC_MD5		(6 << 4)
+#define CESA_SA_DESC_CFG_MACM_HMAC_SHA1		(7 << 4)
+#define CESA_SA_DESC_CFG_MACM_MSK		GENMASK(6, 4)
+#define CESA_SA_DESC_CFG_CRYPTM_DES		(1 << 8)
+#define CESA_SA_DESC_CFG_CRYPTM_3DES		(2 << 8)
+#define CESA_SA_DESC_CFG_CRYPTM_AES		(3 << 8)
+#define CESA_SA_DESC_CFG_CRYPTM_MSK		GENMASK(9, 8)
+#define CESA_SA_DESC_CFG_DIR_ENC		(0 << 12)
+#define CESA_SA_DESC_CFG_DIR_DEC		(1 << 12)
+#define CESA_SA_DESC_CFG_CRYPTCM_ECB		(0 << 16)
+#define CESA_SA_DESC_CFG_CRYPTCM_CBC		(1 << 16)
+#define CESA_SA_DESC_CFG_CRYPTCM_MSK		BIT(16)
+#define CESA_SA_DESC_CFG_3DES_EEE		(0 << 20)
+#define CESA_SA_DESC_CFG_3DES_EDE		(1 << 20)
+#define CESA_SA_DESC_CFG_AES_LEN_128		(0 << 24)
+#define CESA_SA_DESC_CFG_AES_LEN_192		(1 << 24)
+#define CESA_SA_DESC_CFG_AES_LEN_256		(2 << 24)
+#define CESA_SA_DESC_CFG_AES_LEN_MSK		GENMASK(25, 24)
+#define CESA_SA_DESC_CFG_NOT_FRAG		(0 << 30)
+#define CESA_SA_DESC_CFG_FIRST_FRAG		(1 << 30)
+#define CESA_SA_DESC_CFG_LAST_FRAG		(2 << 30)
+#define CESA_SA_DESC_CFG_MID_FRAG		(3 << 30)
+#define CESA_SA_DESC_CFG_FRAG_MSK		GENMASK(31, 30)
+
+/*
+ * /-----------\ 0
+ * | ACCEL CFG |	4 * 8
+ * |-----------| 0x20
+ * | CRYPT KEY |	8 * 4
+ * |-----------| 0x40
+ * |  IV   IN  |	4 * 4
+ * |-----------| 0x40 (inplace)
+ * |  IV BUF   |	4 * 4
+ * |-----------| 0x80
+ * |  DATA IN  |	16 * x (max ->max_req_size)
+ * |-----------| 0x80 (inplace operation)
+ * |  DATA OUT |	16 * x (max ->max_req_size)
+ * \-----------/ SRAM size
+ */
+
+/*
+ * Hashing memory map:
+ * /-----------\ 0
+ * | ACCEL CFG |        4 * 8
+ * |-----------| 0x20
+ * | Inner IV  |        8 * 4
+ * |-----------| 0x40
+ * | Outer IV  |        8 * 4
+ * |-----------| 0x60
+ * | Output BUF|        8 * 4
+ * |-----------| 0x80
+ * |  DATA IN  |        64 * x (max ->max_req_size)
+ * \-----------/ SRAM size
+ */
+
+#define CESA_SA_CFG_SRAM_OFFSET			0x00
+#define CESA_SA_DATA_SRAM_OFFSET		0x80
+
+#define CESA_SA_CRYPT_KEY_SRAM_OFFSET		0x20
+#define CESA_SA_CRYPT_IV_SRAM_OFFSET		0x40
+
+#define CESA_SA_MAC_IIV_SRAM_OFFSET		0x20
+#define CESA_SA_MAC_OIV_SRAM_OFFSET		0x40
+#define CESA_SA_MAC_DIG_SRAM_OFFSET		0x60
+
+#define CESA_SA_DESC_CRYPT_DATA(offset)					\
+	cpu_to_le32((CESA_SA_DATA_SRAM_OFFSET + (offset)) |		\
+		    ((CESA_SA_DATA_SRAM_OFFSET + (offset)) << 16))
+
+#define CESA_SA_DESC_CRYPT_IV(offset)					\
+	cpu_to_le32((CESA_SA_CRYPT_IV_SRAM_OFFSET + (offset)) |	\
+		    ((CESA_SA_CRYPT_IV_SRAM_OFFSET + (offset)) << 16))
+
+#define CESA_SA_DESC_CRYPT_KEY(offset)					\
+	cpu_to_le32(CESA_SA_CRYPT_KEY_SRAM_OFFSET + (offset))
+
+#define CESA_SA_DESC_MAC_DATA(offset)					\
+	cpu_to_le32(CESA_SA_DATA_SRAM_OFFSET + (offset))
+#define CESA_SA_DESC_MAC_DATA_MSK		GENMASK(15, 0)
+
+#define CESA_SA_DESC_MAC_TOTAL_LEN(total_len)	cpu_to_le32((total_len) << 16)
+#define CESA_SA_DESC_MAC_TOTAL_LEN_MSK		GENMASK(31, 16)
+
+#define CESA_SA_DESC_MAC_SRC_TOTAL_LEN_MAX	0xffff
+
+#define CESA_SA_DESC_MAC_DIGEST(offset)					\
+	cpu_to_le32(CESA_SA_MAC_DIG_SRAM_OFFSET + (offset))
+#define CESA_SA_DESC_MAC_DIGEST_MSK		GENMASK(15, 0)
+
+#define CESA_SA_DESC_MAC_FRAG_LEN(frag_len)	cpu_to_le32((frag_len) << 16)
+#define CESA_SA_DESC_MAC_FRAG_LEN_MSK		GENMASK(31, 16)
+
+#define CESA_SA_DESC_MAC_IV(offset)					\
+	cpu_to_le32((CESA_SA_MAC_IIV_SRAM_OFFSET + (offset)) |		\
+		    ((CESA_SA_MAC_OIV_SRAM_OFFSET + (offset)) << 16))
+
+#define CESA_SA_SRAM_SIZE			2048
+#define CESA_SA_SRAM_PAYLOAD_SIZE		(cesa_dev->sram_size - \
+						 CESA_SA_DATA_SRAM_OFFSET)
+
+#define CESA_SA_DEFAULT_SRAM_SIZE		2048
+#define CESA_SA_MIN_SRAM_SIZE			1024
+
+#define CESA_SA_SRAM_MSK			(2048 - 1)
+
+#define CESA_MAX_HASH_BLOCK_SIZE		64
+#define CESA_HASH_BLOCK_SIZE_MSK		(CESA_MAX_HASH_BLOCK_SIZE - 1)
+
+/**
+ * struct mv_cesa_sec_accel_desc - security accelerator descriptor
+ * @config:	engine config
+ * @enc_p:	input and output data pointers for a cipher operation
+ * @enc_len:	cipher operation length
+ * @enc_key_p:	cipher key pointer
+ * @enc_iv:	cipher IV pointers
+ * @mac_src_p:	input pointer and total hash length
+ * @mac_digest:	digest pointer and hash operation length
+ * @mac_iv:	hmac IV pointers
+ *
+ * Structure passed to the CESA engine to describe the crypto operation
+ * to be executed.
+ */
+struct mv_cesa_sec_accel_desc {
+	u32 config;
+	u32 enc_p;
+	u32 enc_len;
+	u32 enc_key_p;
+	u32 enc_iv;
+	u32 mac_src_p;
+	u32 mac_digest;
+	u32 mac_iv;
+} __packed;
+
+/**
+ * struct mv_cesa_blkcipher_op_ctx - cipher operation context
+ * @key:	cipher key
+ * @iv:		cipher IV
+ *
+ * Context associated to a cipher operation.
+ */
+struct mv_cesa_blkcipher_op_ctx {
+	u32 key[8];
+	u32 iv[4];
+};
+
+/**
+ * struct mv_cesa_hash_op_ctx - hash or hmac operation context
+ * @iv:		hash or hmac initialization vectors
+ * @hash:	intermediate hash state
+ *
+ * Context associated to a hash or hmac operation.
+ */
+struct mv_cesa_hash_op_ctx {
+	u32 iv[16];
+	u32 hash[8];
+};
+
+/**
+ * struct mv_cesa_op_ctx - crypto operation context
+ * @desc:	CESA descriptor
+ * @ctx:	context associated to the crypto operation
+ *
+ * Context associated to a crypto operation.
+ */
+struct mv_cesa_op_ctx {
+	struct mv_cesa_sec_accel_desc desc;
+	union {
+		struct mv_cesa_blkcipher_op_ctx blkcipher;
+		struct mv_cesa_hash_op_ctx hash;
+	} ctx;
+};
+
+/* TDMA descriptor flags */
+#define CESA_TDMA_DST_IN_SRAM			BIT(31)
+#define CESA_TDMA_SRC_IN_SRAM			BIT(30)
+#define CESA_TDMA_TYPE_MSK			GENMASK(29, 0)
+#define CESA_TDMA_DUMMY				0
+#define CESA_TDMA_DATA				1
+#define CESA_TDMA_OP				2
+
+/**
+ * struct mv_cesa_tdma_desc - TDMA descriptor
+ * @byte_cnt:	number of bytes to transfer
+ * @src:	DMA address of the source
+ * @dst:	DMA address of the destination
+ * @next_dma:	DMA address of the next TDMA descriptor
+ * @cur_dma:	DMA address of this TDMA descriptor
+ * @next:	pointer to the next TDMA descriptor
+ * @op:		CESA operation attached to this TDMA descriptor
+ * @data:	raw data attached to this TDMA descriptor
+ * @flags:	flags describing the TDMA transfer. See the
+ *		"TDMA descriptor flags" section above
+ *
+ * TDMA descriptor used to create a transfer chain describing a crypto
+ * operation.
+ */
+struct mv_cesa_tdma_desc {
+	u32 byte_cnt;
+	u32 src;
+	u32 dst;
+	u32 next_dma;
+	u32 cur_dma;
+	struct mv_cesa_tdma_desc *next;
+	union {
+		struct mv_cesa_op_ctx *op;
+		void *data;
+	};
+	u32 flags;
+};
+
+/**
+ * struct mv_cesa_sg_dma_iter - scatter-gather iterator
+ * @dir:	transfer direction
+ * @sg:		scatter list
+ * @offset:	current position in the scatter list
+ * @op_offset:	current position in the crypto operation
+ *
+ * Iterator used to iterate over a scatterlist while creating a TDMA chain for
+ * a crypto operation.
+ */
+struct mv_cesa_sg_dma_iter {
+	enum dma_data_direction dir;
+	struct scatterlist *sg;
+	unsigned int offset;
+	unsigned int op_offset;
+};
+
+/**
+ * struct mv_cesa_dma_iter - crypto operation iterator
+ * @len:	the crypto operation length
+ * @offset:	current position in the crypto operation
+ * @op_len:	sub-operation length (the crypto engine can only act on 2kb
+ *		chunks)
+ *
+ * Iterator used to create a TDMA chain for a given crypto operation.
+ */
+struct mv_cesa_dma_iter {
+	unsigned int len;
+	unsigned int offset;
+	unsigned int op_len;
+};
+
+/**
+ * struct mv_cesa_tdma_chain - TDMA chain
+ * @first:	first entry in the TDMA chain
+ * @last:	last entry in the TDMA chain
+ *
+ * Stores a TDMA chain for a specific crypto operation.
+ */
+struct mv_cesa_tdma_chain {
+	struct mv_cesa_tdma_desc *first;
+	struct mv_cesa_tdma_desc *last;
+};
+
+struct mv_cesa_engine;
+
+/**
+ * struct mv_cesa_caps - CESA device capabilities
+ * @engines:		number of engines
+ * @has_tdma:		whether this device has a TDMA block
+ * @cipher_algs:	supported cipher algorithms
+ * @ncipher_algs:	number of supported cipher algorithms
+ * @ahash_algs:		supported hash algorithms
+ * @nahash_algs:	number of supported hash algorithms
+ *
+ * Structure used to describe CESA device capabilities.
+ */
+struct mv_cesa_caps {
+	int nengines;
+	bool has_tdma;
+	struct crypto_alg **cipher_algs;
+	int ncipher_algs;
+	struct ahash_alg **ahash_algs;
+	int nahash_algs;
+};
+
+/**
+ * struct mv_cesa_dev_dma - DMA pools
+ * @tdma_desc_pool:	TDMA desc pool
+ * @op_pool:		crypto operation pool
+ * @cache_pool:		data cache pool (used by hash implementation when the
+ *			hash request is smaller than the hash block size)
+ * @padding_pool:	padding pool (used by hash implementation when hardware
+ *			padding cannot be used)
+ *
+ * Structure containing the different DMA pools used by this driver.
+ */
+struct mv_cesa_dev_dma {
+	struct dma_pool *tdma_desc_pool;
+	struct dma_pool *op_pool;
+	struct dma_pool *cache_pool;
+	struct dma_pool *padding_pool;
+};
+
+/**
+ * struct mv_cesa_dev - CESA device
+ * @caps:	device capabilities
+ * @regs:	device registers
+ * @sram_size:	usable SRAM size
+ * @lock:	device lock
+ * @queue:	crypto request queue
+ * @engines:	array of engines
+ * @dma:	dma pools
+ *
+ * Structure storing CESA device information.
+ */
+struct mv_cesa_dev {
+	const struct mv_cesa_caps *caps;
+	void __iomem *regs;
+	struct device *dev;
+	unsigned int sram_size;
+	spinlock_t lock;
+	struct crypto_queue queue;
+	struct mv_cesa_engine *engines;
+	struct mv_cesa_dev_dma *dma;
+};
+
+/**
+ * struct mv_cesa_engine - CESA engine
+ * @id:			engine id
+ * @regs:		engine registers
+ * @sram:		SRAM memory region
+ * @sram_dma:		DMA address of the SRAM memory region
+ * @lock:		engine lock
+ * @req:		current crypto request
+ * @clk:		engine clk
+ * @zclk:		engine zclk
+ * @max_req_len:	maximum chunk length (useful to create the TDMA chain)
+ * @int_mask:		interrupt mask cache
+ * @pool:		memory pool pointing to the memory region reserved in
+ *			SRAM
+ *
+ * Structure storing CESA engine information.
+ */
+struct mv_cesa_engine {
+	int id;
+	void __iomem *regs;
+	void __iomem *sram;
+	dma_addr_t sram_dma;
+	spinlock_t lock;
+	struct crypto_async_request *req;
+	struct clk *clk;
+	struct clk *zclk;
+	size_t max_req_len;
+	u32 int_mask;
+	struct gen_pool *pool;
+};
+
+/**
+ * struct mv_cesa_req_ops - CESA request operations
+ * @prepare:	prepare a request to be executed on the specified engine
+ * @process:	process a request chunk result (should return 0 if the
+ *		operation is complete, -EINPROGRESS if it needs more steps,
+ *		or an error code)
+ * @step:	launch the crypto operation on the next chunk
+ * @cleanup:	cleanup the crypto request (release associated data)
+ */
+struct mv_cesa_req_ops {
+	void (*prepare)(struct crypto_async_request *req,
+			struct mv_cesa_engine *engine);
+	int (*process)(struct crypto_async_request *req, u32 status);
+	void (*step)(struct crypto_async_request *req);
+	void (*cleanup)(struct crypto_async_request *req);
+};
+
+/**
+ * struct mv_cesa_ctx - CESA operation context
+ * @ops:	crypto operations
+ *
+ * Base context structure inherited by operation specific ones.
+ */
+struct mv_cesa_ctx {
+	const struct mv_cesa_req_ops *ops;
+};
+
+/**
+ * struct mv_cesa_hash_ctx - CESA hash operation context
+ * @base:	base context structure
+ *
+ * Hash context structure.
+ */
+struct mv_cesa_hash_ctx {
+	struct mv_cesa_ctx base;
+};
+
+/**
+ * struct mv_cesa_hmac_ctx - CESA hmac operation context
+ * @base:	base context structure
+ * @iv:		initialization vectors
+ *
+ * HMAC context structure.
+ */
+struct mv_cesa_hmac_ctx {
+	struct mv_cesa_ctx base;
+	u32 iv[16];
+};
+
+/**
+ * enum mv_cesa_req_type - request type definitions
+ * @CESA_STD_REQ:	standard request
+ * @CESA_DMA_REQ:	DMA request
+ */
+enum mv_cesa_req_type {
+	CESA_STD_REQ,
+	CESA_DMA_REQ,
+};
+
+/**
+ * struct mv_cesa_req - CESA request
+ * @type:	request type
+ * @engine:	engine associated with this request
+ */
+struct mv_cesa_req {
+	enum mv_cesa_req_type type;
+	struct mv_cesa_engine *engine;
+};
+
+/**
+ * struct mv_cesa_tdma_req - CESA TDMA request
+ * @base:	base information
+ * @chain:	TDMA chain
+ */
+struct mv_cesa_tdma_req {
+	struct mv_cesa_req base;
+	struct mv_cesa_tdma_chain chain;
+};
+
+/**
+ * struct mv_cesa_sg_std_iter - CESA scatter-gather iterator for standard
+ *				requests
+ * @iter:	sg mapping iterator
+ * @offset:	current offset in the SG entry mapped in memory
+ */
+struct mv_cesa_sg_std_iter {
+	struct sg_mapping_iter iter;
+	unsigned int offset;
+};
+
+/**
+ * struct mv_cesa_ablkcipher_std_req - cipher standard request
+ * @base:	base information
+ * @op:		operation context
+ * @offset:	current operation offset
+ * @size:	size of the crypto operation
+ */
+struct mv_cesa_ablkcipher_std_req {
+	struct mv_cesa_req base;
+	struct mv_cesa_op_ctx op;
+	unsigned int offset;
+	unsigned int size;
+};
+
+/**
+ * struct mv_cesa_ablkcipher_req - cipher request
+ * @req:	type specific request information
+ * @src_nents:	number of entries in the src sg list
+ * @dst_nents:	number of entries in the dest sg list
+ */
+struct mv_cesa_ablkcipher_req {
+	union {
+		struct mv_cesa_req base;
+		struct mv_cesa_tdma_req dma;
+		struct mv_cesa_ablkcipher_std_req std;
+	} req;
+	int src_nents;
+	int dst_nents;
+};
+
+/**
+ * struct mv_cesa_ahash_std_req - standard hash request
+ * @base:	base information
+ * @offset:	current operation offset
+ */
+struct mv_cesa_ahash_std_req {
+	struct mv_cesa_req base;
+	unsigned int offset;
+};
+
+/**
+ * struct mv_cesa_ahash_dma_req - DMA hash request
+ * @base:		base information
+ * @padding:		padding buffer
+ * @padding_dma:	DMA address of the padding buffer
+ * @cache_dma:		DMA address of the cache buffer
+ */
+struct mv_cesa_ahash_dma_req {
+	struct mv_cesa_tdma_req base;
+	u8 *padding;
+	dma_addr_t padding_dma;
+	dma_addr_t cache_dma;
+};
+
+/**
+ * struct mv_cesa_ahash_req - hash request
+ * @req:		type specific request information
+ * @cache:		cache buffer
+ * @cache_ptr:		write pointer in the cache buffer
+ * @len:		hash total length
+ * @src_nents:		number of entries in the scatterlist
+ * @last_req:		define whether the current operation is the last one
+ *			or not
+ * @state:		hash state
+ */
+struct mv_cesa_ahash_req {
+	union {
+		struct mv_cesa_req base;
+		struct mv_cesa_ahash_dma_req dma;
+		struct mv_cesa_ahash_std_req std;
+	} req;
+	struct mv_cesa_op_ctx op_tmpl;
+	u8 *cache;
+	unsigned int cache_ptr;
+	u64 len;
+	int src_nents;
+	bool last_req;
+	__be32 state[8];
+};
+
+/* CESA functions */
+
+extern struct mv_cesa_dev *cesa_dev;
+
+static inline void mv_cesa_update_op_cfg(struct mv_cesa_op_ctx *op,
+					 u32 cfg, u32 mask)
+{
+	op->desc.config &= cpu_to_le32(~mask);
+	op->desc.config |= cpu_to_le32(cfg);
+}
+
+static inline u32 mv_cesa_get_op_cfg(struct mv_cesa_op_ctx *op)
+{
+	return le32_to_cpu(op->desc.config);
+}
+
+static inline void mv_cesa_set_op_cfg(struct mv_cesa_op_ctx *op, u32 cfg)
+{
+	op->desc.config = cpu_to_le32(cfg);
+}
+
+static inline void mv_cesa_adjust_op(struct mv_cesa_engine *engine,
+				     struct mv_cesa_op_ctx *op)
+{
+	u32 offset = engine->sram_dma & CESA_SA_SRAM_MSK;
+
+	op->desc.enc_p = CESA_SA_DESC_CRYPT_DATA(offset);
+	op->desc.enc_key_p = CESA_SA_DESC_CRYPT_KEY(offset);
+	op->desc.enc_iv = CESA_SA_DESC_CRYPT_IV(offset);
+	op->desc.mac_src_p &= ~CESA_SA_DESC_MAC_DATA_MSK;
+	op->desc.mac_src_p |= CESA_SA_DESC_MAC_DATA(offset);
+	op->desc.mac_digest &= ~CESA_SA_DESC_MAC_DIGEST_MSK;
+	op->desc.mac_digest |= CESA_SA_DESC_MAC_DIGEST(offset);
+	op->desc.mac_iv = CESA_SA_DESC_MAC_IV(offset);
+}
+
+static inline void mv_cesa_set_crypt_op_len(struct mv_cesa_op_ctx *op, int len)
+{
+	op->desc.enc_len = cpu_to_le32(len);
+}
+
+static inline void mv_cesa_set_mac_op_total_len(struct mv_cesa_op_ctx *op,
+						int len)
+{
+	op->desc.mac_src_p &= ~CESA_SA_DESC_MAC_TOTAL_LEN_MSK;
+	op->desc.mac_src_p |= CESA_SA_DESC_MAC_TOTAL_LEN(len);
+}
+
+static inline void mv_cesa_set_mac_op_frag_len(struct mv_cesa_op_ctx *op,
+					       int len)
+{
+	op->desc.mac_digest &= ~CESA_SA_DESC_MAC_FRAG_LEN_MSK;
+	op->desc.mac_digest |= CESA_SA_DESC_MAC_FRAG_LEN(len);
+}
+
+static inline void mv_cesa_set_int_mask(struct mv_cesa_engine *engine,
+					u32 int_mask)
+{
+	if (int_mask == engine->int_mask)
+		return;
+
+	writel(int_mask, engine->regs + CESA_SA_INT_MSK);
+	engine->int_mask = int_mask;
+}
+
+static inline u32 mv_cesa_get_int_mask(struct mv_cesa_engine *engine)
+{
+	return engine->int_mask;
+}
+
+int mv_cesa_queue_req(struct crypto_async_request *req);
+
+static inline int mv_cesa_sg_count(struct scatterlist *sg, int nbytes)
+{
+	int nents = 0;
+
+	while (nbytes > 0) {
+		nents++;
+		nbytes -= sg->length;
+		sg = sg_next(sg);
+	}
+
+	return nents;
+}
+
+/* TDMA functions */
+
+static inline void mv_cesa_req_dma_iter_init(struct mv_cesa_dma_iter *iter,
+					     unsigned int len)
+{
+	iter->len = len;
+	iter->op_len = min(len, CESA_SA_SRAM_PAYLOAD_SIZE);
+	iter->offset = 0;
+}
+
+static inline void mv_cesa_sg_dma_iter_init(struct mv_cesa_sg_dma_iter *iter,
+					    struct scatterlist *sg,
+					    enum dma_data_direction dir)
+{
+	iter->op_offset = 0;
+	iter->offset = 0;
+	iter->sg = sg;
+	iter->dir = dir;
+}
+
+static inline unsigned int
+mv_cesa_req_dma_iter_transfer_len(struct mv_cesa_dma_iter *iter,
+				  struct mv_cesa_sg_dma_iter *sgiter)
+{
+	return min(iter->op_len - sgiter->op_offset,
+		   sgiter->sg->length - sgiter->offset);
+}
+
+bool mv_cesa_req_dma_iter_next_transfer(struct mv_cesa_dma_iter *chain,
+				    struct mv_cesa_sg_dma_iter *sgiter,
+				    unsigned int len);
+
+static inline bool mv_cesa_req_dma_iter_next_op(struct mv_cesa_dma_iter *iter)
+{
+	iter->offset += iter->op_len;
+	iter->op_len = min(iter->len - iter->offset,
+			   CESA_SA_SRAM_PAYLOAD_SIZE);
+
+	return iter->op_len;
+}
+
+void mv_cesa_dma_step(struct mv_cesa_tdma_req *dreq);
+
+static inline int mv_cesa_dma_process(struct mv_cesa_tdma_req *dreq,
+				      u32 status)
+{
+	if (!(status & CESA_SA_INT_ACC0_IDMA_DONE))
+		return -EINPROGRESS;
+
+	if (status & CESA_SA_INT_IDMA_OWN_ERR)
+		return -EINVAL;
+
+	return 0;
+}
+
+void mv_cesa_dma_prepare(struct mv_cesa_tdma_req *dreq,
+			 struct mv_cesa_engine *engine);
+
+void mv_cesa_dma_cleanup(struct mv_cesa_tdma_req *dreq);
+
+static inline void
+mv_cesa_tdma_desc_iter_init(struct mv_cesa_tdma_chain *chain)
+{
+	memset(chain, 0, sizeof(*chain));
+}
+
+struct mv_cesa_op_ctx *mv_cesa_dma_add_op(struct mv_cesa_tdma_chain *chain,
+					const struct mv_cesa_op_ctx *op_templ,
+					gfp_t flags);
+
+int mv_cesa_dma_add_data_transfer(struct mv_cesa_tdma_chain *chain,
+				  dma_addr_t dst, dma_addr_t src, u32 size,
+				  u32 flags, gfp_t gfp_flags);
+
+int mv_cesa_dma_add_dummy_launch(struct mv_cesa_tdma_chain *chain,
+				 u32 flags);
+
+int mv_cesa_dma_add_dummy_end(struct mv_cesa_tdma_chain *chain, u32 flags);
+
+int mv_cesa_dma_add_op_transfers(struct mv_cesa_tdma_chain *chain,
+				 struct mv_cesa_dma_iter *dma_iter,
+				 struct mv_cesa_sg_dma_iter *sgiter,
+				 gfp_t gfp_flags);
+
+/* Algorithm definitions */
+
+extern struct ahash_alg mv_md5_alg;
+extern struct ahash_alg mv_sha1_alg;
+extern struct ahash_alg mv_sha256_alg;
+extern struct ahash_alg mv_ahmac_md5_alg;
+extern struct ahash_alg mv_ahmac_sha1_alg;
+extern struct ahash_alg mv_ahmac_sha256_alg;
+
+extern struct crypto_alg mv_cesa_ecb_des_alg;
+extern struct crypto_alg mv_cesa_cbc_des_alg;
+extern struct crypto_alg mv_cesa_ecb_des3_ede_alg;
+extern struct crypto_alg mv_cesa_cbc_des3_ede_alg;
+extern struct crypto_alg mv_cesa_ecb_aes_alg;
+extern struct crypto_alg mv_cesa_cbc_aes_alg;
+
+#endif /* __MARVELL_CESA_H__ */
diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
new file mode 100644
index 0000000..ddb8260
--- /dev/null
+++ b/drivers/crypto/marvell/cipher.c
@@ -0,0 +1,761 @@
+/*
+ * Cipher algorithms supported by the CESA: DES, 3DES and AES.
+ *
+ * Author: Boris Brezillon <boris.brezillon@free-electrons.com>
+ * Author: Arnaud Ebalard <arno@natisbad.org>
+ *
+ * This work is based on an initial version written by
+ * Sebastian Andrzej Siewior < sebastian at breakpoint dot cc >
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+#include <crypto/aes.h>
+#include <crypto/des.h>
+
+#include "cesa.h"
+
+struct mv_cesa_des_ctx {
+	struct mv_cesa_ctx base;
+	u8 key[DES_KEY_SIZE];
+};
+
+struct mv_cesa_des3_ctx {
+	struct mv_cesa_ctx base;
+	u8 key[DES3_EDE_KEY_SIZE];
+};
+
+struct mv_cesa_aes_ctx {
+	struct mv_cesa_ctx base;
+	struct crypto_aes_ctx aes;
+};
+
+struct mv_cesa_ablkcipher_dma_iter {
+	struct mv_cesa_dma_iter base;
+	struct mv_cesa_sg_dma_iter src;
+	struct mv_cesa_sg_dma_iter dst;
+};
+
+static inline void
+mv_cesa_ablkcipher_req_iter_init(struct mv_cesa_ablkcipher_dma_iter *iter,
+				 struct ablkcipher_request *req)
+{
+	mv_cesa_req_dma_iter_init(&iter->base, req->nbytes);
+	mv_cesa_sg_dma_iter_init(&iter->src, req->src, DMA_TO_DEVICE);
+	mv_cesa_sg_dma_iter_init(&iter->dst, req->dst, DMA_FROM_DEVICE);
+}
+
+static inline bool
+mv_cesa_ablkcipher_req_iter_next_op(struct mv_cesa_ablkcipher_dma_iter *iter)
+{
+	iter->src.op_offset = 0;
+	iter->dst.op_offset = 0;
+
+	return mv_cesa_req_dma_iter_next_op(&iter->base);
+}
+
+static inline void
+mv_cesa_ablkcipher_dma_cleanup(struct ablkcipher_request *req)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+
+	dma_unmap_sg(cesa_dev->dev, req->dst, creq->dst_nents, DMA_FROM_DEVICE);
+	dma_unmap_sg(cesa_dev->dev, req->src, creq->src_nents, DMA_TO_DEVICE);
+	mv_cesa_dma_cleanup(&creq->req.dma);
+}
+
+static inline void mv_cesa_ablkcipher_cleanup(struct ablkcipher_request *req)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_ablkcipher_dma_cleanup(req);
+}
+
+static void mv_cesa_ablkcipher_std_step(struct ablkcipher_request *req)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+	struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
+	struct mv_cesa_engine *engine = sreq->base.engine;
+	size_t  len = min_t(size_t, req->nbytes - sreq->offset,
+			    CESA_SA_SRAM_PAYLOAD_SIZE);
+
+	len = sg_pcopy_to_buffer(req->src, creq->src_nents,
+				 engine->sram + CESA_SA_DATA_SRAM_OFFSET,
+				 len, sreq->offset);
+
+	sreq->size = len;
+	mv_cesa_set_crypt_op_len(&sreq->op, len);
+
+	/* FIXME: only update enc_len field */
+	memcpy(engine->sram, &sreq->op, sizeof(sreq->op));
+
+	mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
+	writel(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
+	writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
+}
+
+static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
+					  u32 status)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+	struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
+	struct mv_cesa_engine *engine = sreq->base.engine;
+	size_t len;
+
+	len = sg_pcopy_from_buffer(req->dst, creq->dst_nents,
+				   engine->sram + CESA_SA_DATA_SRAM_OFFSET,
+				   sreq->size, sreq->offset);
+
+	sreq->offset += len;
+	if (sreq->offset < req->nbytes)
+		return -EINPROGRESS;
+
+	return 0;
+}
+
+static int mv_cesa_ablkcipher_process(struct crypto_async_request *req,
+				      u32 status)
+{
+	struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
+	int ret;
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		ret = mv_cesa_dma_process(&creq->req.dma, status);
+	else
+		ret = mv_cesa_ablkcipher_std_process(ablkreq, status);
+
+	return ret;
+}
+
+static void mv_cesa_ablkcipher_step(struct crypto_async_request *req)
+{
+	struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_dma_step(&creq->req.dma);
+	else
+		mv_cesa_ablkcipher_std_step(ablkreq);
+}
+
+static inline void
+mv_cesa_ablkcipher_dma_prepare(struct ablkcipher_request *req)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+	struct mv_cesa_tdma_req *dreq = &creq->req.dma;
+
+	mv_cesa_dma_prepare(dreq, dreq->base.engine);
+}
+
+static inline void
+mv_cesa_ablkcipher_std_prepare(struct ablkcipher_request *req)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+	struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
+	struct mv_cesa_engine *engine = sreq->base.engine;
+
+	sreq->size = 0;
+	sreq->offset = 0;
+	mv_cesa_adjust_op(engine, &sreq->op);
+	memcpy(engine->sram, &sreq->op, sizeof(sreq->op));
+}
+
+static inline void mv_cesa_ablkcipher_prepare(struct crypto_async_request *req,
+					      struct mv_cesa_engine *engine)
+{
+	struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
+
+	creq->req.base.engine = engine;
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_ablkcipher_dma_prepare(ablkreq);
+	else
+		mv_cesa_ablkcipher_std_prepare(ablkreq);
+}
+
+static inline void
+mv_cesa_ablkcipher_req_cleanup(struct crypto_async_request *req)
+{
+	struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+
+	mv_cesa_ablkcipher_cleanup(ablkreq);
+}
+
+static const struct mv_cesa_req_ops mv_cesa_ablkcipher_req_ops = {
+	.step = mv_cesa_ablkcipher_step,
+	.process = mv_cesa_ablkcipher_process,
+	.prepare = mv_cesa_ablkcipher_prepare,
+	.cleanup = mv_cesa_ablkcipher_req_cleanup,
+};
+
+static int mv_cesa_ablkcipher_cra_init(struct crypto_tfm *tfm)
+{
+	struct mv_cesa_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	ctx->base.ops = &mv_cesa_ablkcipher_req_ops;
+
+	tfm->crt_ablkcipher.reqsize = sizeof(struct mv_cesa_ablkcipher_req);
+
+	return 0;
+}
+
+static int mv_cesa_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key,
+			      unsigned int len)
+{
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher);
+	struct mv_cesa_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	int remaining;
+	int offset;
+	int ret;
+	int i;
+
+	ret = crypto_aes_expand_key(&ctx->aes, key, len);
+	if (ret) {
+		crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return ret;
+	}
+
+	remaining = (ctx->aes.key_length - 16) / 4;
+	offset = ctx->aes.key_length + 24 - remaining;
+	for (i = 0; i < remaining; i++)
+		ctx->aes.key_dec[4 + i] =
+			cpu_to_le32(ctx->aes.key_enc[offset + i]);
+
+	return 0;
+}
+
+static int mv_cesa_des_setkey(struct crypto_ablkcipher *cipher, const u8 *key,
+			      unsigned int len)
+{
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher);
+	struct mv_cesa_des_ctx *ctx = crypto_tfm_ctx(tfm);
+	u32 tmp[DES_EXPKEY_WORDS];
+	int ret;
+
+	if (len != DES_KEY_SIZE) {
+		crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return -EINVAL;
+	}
+
+	ret = des_ekey(tmp, key);
+	if (!ret && (tfm->crt_flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
+		tfm->crt_flags |= CRYPTO_TFM_RES_WEAK_KEY;
+		return -EINVAL;
+	}
+
+	memcpy(ctx->key, key, DES_KEY_SIZE);
+
+	return 0;
+}
+
+static int mv_cesa_des3_ede_setkey(struct crypto_ablkcipher *cipher,
+				   const u8 *key, unsigned int len)
+{
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher);
+	struct mv_cesa_des_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	if (len != DES3_EDE_KEY_SIZE) {
+		crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return -EINVAL;
+	}
+
+	memcpy(ctx->key, key, DES3_EDE_KEY_SIZE);
+
+	return 0;
+}
+
+static int mv_cesa_ablkcipher_dma_req_init(struct ablkcipher_request *req,
+				const struct mv_cesa_op_ctx *op_templ)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+		      GFP_KERNEL : GFP_ATOMIC;
+	struct mv_cesa_tdma_req *dreq = &creq->req.dma;
+	struct mv_cesa_ablkcipher_dma_iter iter;
+	struct mv_cesa_tdma_chain chain;
+	int ret;
+
+	dreq->base.type = CESA_DMA_REQ;
+	dreq->chain.first = NULL;
+	dreq->chain.last = NULL;
+
+	ret = dma_map_sg(cesa_dev->dev, req->src, creq->src_nents,
+			 DMA_TO_DEVICE);
+	if (ret < 0)
+		return ret;
+
+	creq->src_nents = ret;
+
+	ret = dma_map_sg(cesa_dev->dev, req->dst, creq->dst_nents,
+			 DMA_FROM_DEVICE);
+	if (ret < 0)
+		goto err_unmap_src;
+
+	creq->dst_nents = ret;
+
+	mv_cesa_tdma_desc_iter_init(&chain);
+	mv_cesa_ablkcipher_req_iter_init(&iter, req);
+
+	do {
+		struct mv_cesa_op_ctx *op;
+
+		op = mv_cesa_dma_add_op(&chain, op_templ, flags);
+		if (IS_ERR(op)) {
+			ret = PTR_ERR(op);
+			goto err_free_tdma;
+		}
+
+		mv_cesa_set_crypt_op_len(op, iter.base.op_len);
+
+		/* Add input transfers */
+		ret = mv_cesa_dma_add_op_transfers(&chain, &iter.base,
+						   &iter.src, flags);
+		if (ret)
+			goto err_free_tdma;
+
+		/* Add dummy desc to launch the crypto operation */
+		ret = mv_cesa_dma_add_dummy_launch(&chain, flags);
+		if (ret)
+			goto err_free_tdma;
+
+		/* Add output transfers */
+		ret = mv_cesa_dma_add_op_transfers(&chain, &iter.base,
+						   &iter.dst, flags);
+		if (ret)
+			goto err_free_tdma;
+
+	} while (mv_cesa_ablkcipher_req_iter_next_op(&iter));
+
+	dreq->chain = chain;
+
+	return 0;
+
+err_free_tdma:
+	mv_cesa_dma_cleanup(dreq);
+	dma_unmap_sg(cesa_dev->dev, req->dst, creq->dst_nents, DMA_FROM_DEVICE);
+
+err_unmap_src:
+	dma_unmap_sg(cesa_dev->dev, req->src, creq->src_nents, DMA_TO_DEVICE);
+
+	return ret;
+}
+
+static inline int
+mv_cesa_ablkcipher_std_req_init(struct ablkcipher_request *req,
+				const struct mv_cesa_op_ctx *op_templ)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+	struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
+
+	sreq->base.type = CESA_STD_REQ;
+	sreq->op = *op_templ;
+
+	return 0;
+}
+
+static int mv_cesa_ablkcipher_req_init(struct ablkcipher_request *req,
+				       struct mv_cesa_op_ctx *tmpl)
+{
+	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
+	int ret;
+
+	creq->src_nents = mv_cesa_sg_count(req->src, req->nbytes);
+	creq->dst_nents = mv_cesa_sg_count(req->dst, req->nbytes);
+
+	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_OP_CRYPT_ONLY,
+			      CESA_SA_DESC_CFG_OP_MSK);
+
+	/* TODO: add a threshold for DMA usage */
+	if (cesa_dev->caps->has_tdma)
+		ret = mv_cesa_ablkcipher_dma_req_init(req, tmpl);
+	else
+		ret = mv_cesa_ablkcipher_std_req_init(req, tmpl);
+
+	return ret;
+}
+
+static int mv_cesa_des_op(struct ablkcipher_request *req,
+			  struct mv_cesa_op_ctx *tmpl)
+{
+	struct mv_cesa_des_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	int ret;
+
+	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTM_DES,
+			      CESA_SA_DESC_CFG_CRYPTM_MSK);
+
+	memcpy(tmpl->ctx.blkcipher.key, ctx->key, DES_KEY_SIZE);
+
+	ret = mv_cesa_ablkcipher_req_init(req, tmpl);
+	if (ret)
+		return ret;
+
+	ret = mv_cesa_queue_req(&req->base);
+	if (ret && ret != -EINPROGRESS)
+		mv_cesa_ablkcipher_cleanup(req);
+
+	return ret;
+}
+
+static int mv_cesa_ecb_des_encrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_ECB |
+			   CESA_SA_DESC_CFG_DIR_ENC);
+
+	return mv_cesa_des_op(req, &tmpl);
+}
+
+static int mv_cesa_ecb_des_decrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_ECB |
+			   CESA_SA_DESC_CFG_DIR_DEC);
+
+	return mv_cesa_des_op(req, &tmpl);
+}
+
+struct crypto_alg mv_cesa_ecb_des_alg = {
+	.cra_name = "ecb(des)",
+	.cra_driver_name = "mv-ecb-des",
+	.cra_priority = 300,
+	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
+		     CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
+	.cra_blocksize = DES_BLOCK_SIZE,
+	.cra_ctxsize = sizeof(struct mv_cesa_des_ctx),
+	.cra_alignmask = 0,
+	.cra_type = &crypto_ablkcipher_type,
+	.cra_module = THIS_MODULE,
+	.cra_init = mv_cesa_ablkcipher_cra_init,
+	.cra_u = {
+		.ablkcipher = {
+			.min_keysize = DES_KEY_SIZE,
+			.max_keysize = DES_KEY_SIZE,
+			.setkey = mv_cesa_des_setkey,
+			.encrypt = mv_cesa_ecb_des_encrypt,
+			.decrypt = mv_cesa_ecb_des_decrypt,
+		},
+	},
+};
+
+static int mv_cesa_cbc_des_op(struct ablkcipher_request *req,
+			      struct mv_cesa_op_ctx *tmpl)
+{
+	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTCM_CBC,
+			      CESA_SA_DESC_CFG_CRYPTCM_MSK);
+
+	memcpy(tmpl->ctx.blkcipher.iv, req->info, DES_BLOCK_SIZE);
+
+	return mv_cesa_des_op(req, tmpl);
+}
+
+static int mv_cesa_cbc_des_encrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_DIR_ENC);
+
+	return mv_cesa_cbc_des_op(req, &tmpl);
+}
+
+static int mv_cesa_cbc_des_decrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_DIR_DEC);
+
+	return mv_cesa_cbc_des_op(req, &tmpl);
+}
+
+struct crypto_alg mv_cesa_cbc_des_alg = {
+	.cra_name = "cbc(des)",
+	.cra_driver_name = "mv-cbc-des",
+	.cra_priority = 300,
+	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
+		     CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
+	.cra_blocksize = DES_BLOCK_SIZE,
+	.cra_ctxsize = sizeof(struct mv_cesa_des_ctx),
+	.cra_alignmask = 0,
+	.cra_type = &crypto_ablkcipher_type,
+	.cra_module = THIS_MODULE,
+	.cra_init = mv_cesa_ablkcipher_cra_init,
+	.cra_u = {
+		.ablkcipher = {
+			.min_keysize = DES_KEY_SIZE,
+			.max_keysize = DES_KEY_SIZE,
+			.ivsize	     = DES_BLOCK_SIZE,
+			.setkey = mv_cesa_des_setkey,
+			.encrypt = mv_cesa_cbc_des_encrypt,
+			.decrypt = mv_cesa_cbc_des_decrypt,
+		},
+	},
+};
+
+static int mv_cesa_des3_op(struct ablkcipher_request *req,
+			   struct mv_cesa_op_ctx *tmpl)
+{
+	struct mv_cesa_des3_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	int ret;
+
+	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTM_3DES,
+			      CESA_SA_DESC_CFG_CRYPTM_MSK);
+
+	memcpy(tmpl->ctx.blkcipher.key, ctx->key, DES3_EDE_KEY_SIZE);
+
+	ret = mv_cesa_ablkcipher_req_init(req, tmpl);
+	if (ret)
+		return ret;
+
+	ret = mv_cesa_queue_req(&req->base);
+	if (ret && ret != -EINPROGRESS)
+		mv_cesa_ablkcipher_cleanup(req);
+
+	return ret;
+}
+
+static int mv_cesa_ecb_des3_ede_encrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_ECB |
+			   CESA_SA_DESC_CFG_3DES_EDE |
+			   CESA_SA_DESC_CFG_DIR_ENC);
+
+	return mv_cesa_des3_op(req, &tmpl);
+}
+
+static int mv_cesa_ecb_des3_ede_decrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_ECB |
+			   CESA_SA_DESC_CFG_3DES_EDE |
+			   CESA_SA_DESC_CFG_DIR_DEC);
+
+	return mv_cesa_des3_op(req, &tmpl);
+}
+
+struct crypto_alg mv_cesa_ecb_des3_ede_alg = {
+	.cra_name = "ecb(des3_ede)",
+	.cra_driver_name = "mv-ecb-des3-ede",
+	.cra_priority = 300,
+	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
+		     CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
+	.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+	.cra_ctxsize = sizeof(struct mv_cesa_des3_ctx),
+	.cra_alignmask = 0,
+	.cra_type = &crypto_ablkcipher_type,
+	.cra_module = THIS_MODULE,
+	.cra_init = mv_cesa_ablkcipher_cra_init,
+	.cra_u = {
+		.ablkcipher = {
+			.min_keysize = DES3_EDE_KEY_SIZE,
+			.max_keysize = DES3_EDE_KEY_SIZE,
+			.setkey = mv_cesa_des3_ede_setkey,
+			.encrypt = mv_cesa_ecb_des3_ede_encrypt,
+			.decrypt = mv_cesa_ecb_des3_ede_decrypt,
+		},
+	},
+};
+
+static int mv_cesa_cbc_des3_op(struct ablkcipher_request *req,
+			      struct mv_cesa_op_ctx *tmpl)
+{
+	memcpy(tmpl->ctx.blkcipher.iv, req->info, DES3_EDE_BLOCK_SIZE);
+
+	return mv_cesa_des3_op(req, tmpl);
+}
+
+static int mv_cesa_cbc_des3_ede_encrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_CBC |
+			   CESA_SA_DESC_CFG_3DES_EDE |
+			   CESA_SA_DESC_CFG_DIR_ENC);
+
+	return mv_cesa_cbc_des3_op(req, &tmpl);
+}
+
+static int mv_cesa_cbc_des3_ede_decrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_CBC |
+			   CESA_SA_DESC_CFG_3DES_EDE |
+			   CESA_SA_DESC_CFG_DIR_DEC);
+
+	return mv_cesa_cbc_des3_op(req, &tmpl);
+}
+
+struct crypto_alg mv_cesa_cbc_des3_ede_alg = {
+	.cra_name = "cbc(des3_ede)",
+	.cra_driver_name = "mv-cbc-des3-ede",
+	.cra_priority = 300,
+	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
+		     CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
+	.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+	.cra_ctxsize = sizeof(struct mv_cesa_des3_ctx),
+	.cra_alignmask = 0,
+	.cra_type = &crypto_ablkcipher_type,
+	.cra_module = THIS_MODULE,
+	.cra_init = mv_cesa_ablkcipher_cra_init,
+	.cra_u = {
+		.ablkcipher = {
+			.min_keysize = DES3_EDE_KEY_SIZE,
+			.max_keysize = DES3_EDE_KEY_SIZE,
+			.ivsize	     = DES3_EDE_BLOCK_SIZE,
+			.setkey = mv_cesa_des3_ede_setkey,
+			.encrypt = mv_cesa_cbc_des3_ede_encrypt,
+			.decrypt = mv_cesa_cbc_des3_ede_decrypt,
+		},
+	},
+};
+
+static int mv_cesa_aes_op(struct ablkcipher_request *req,
+			  struct mv_cesa_op_ctx *tmpl)
+{
+	struct mv_cesa_aes_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	int ret, i;
+	u32 *key;
+	u32 cfg;
+
+	cfg = CESA_SA_DESC_CFG_CRYPTM_AES;
+
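+	/* The engine expects the decryption key schedule when decrypting. */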
+	if (mv_cesa_get_op_cfg(tmpl) & CESA_SA_DESC_CFG_DIR_DEC)
+		key = ctx->aes.key_dec;
+	else
+		key = ctx->aes.key_enc;
+
+	for (i = 0; i < ctx->aes.key_length / sizeof(u32); i++)
+		tmpl->ctx.blkcipher.key[i] = cpu_to_le32(key[i]);
+
+	if (ctx->aes.key_length == AES_KEYSIZE_192)
+		cfg |= CESA_SA_DESC_CFG_AES_LEN_192;
+	else if (ctx->aes.key_length == AES_KEYSIZE_256)
+		cfg |= CESA_SA_DESC_CFG_AES_LEN_256;
+
+	mv_cesa_update_op_cfg(tmpl, cfg,
+			      CESA_SA_DESC_CFG_CRYPTM_MSK |
+			      CESA_SA_DESC_CFG_AES_LEN_MSK);
+
+	ret = mv_cesa_ablkcipher_req_init(req, tmpl);
+	if (ret)
+		return ret;
+
+	ret = mv_cesa_queue_req(&req->base);
+	if (ret && ret != -EINPROGRESS)
+		mv_cesa_ablkcipher_cleanup(req);
+
+	return ret;
+}
+
+static int mv_cesa_ecb_aes_encrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_ECB |
+			   CESA_SA_DESC_CFG_DIR_ENC);
+
+	return mv_cesa_aes_op(req, &tmpl);
+}
+
+static int mv_cesa_ecb_aes_decrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl,
+			   CESA_SA_DESC_CFG_CRYPTCM_ECB |
+			   CESA_SA_DESC_CFG_DIR_DEC);
+
+	return mv_cesa_aes_op(req, &tmpl);
+}
+
+struct crypto_alg mv_cesa_ecb_aes_alg = {
+	.cra_name = "ecb(aes)",
+	.cra_driver_name = "mv-ecb-aes",
+	.cra_priority = 300,
+	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
+		     CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
+	.cra_blocksize = AES_BLOCK_SIZE,
+	.cra_ctxsize = sizeof(struct mv_cesa_aes_ctx),
+	.cra_alignmask = 0,
+	.cra_type = &crypto_ablkcipher_type,
+	.cra_module = THIS_MODULE,
+	.cra_init = mv_cesa_ablkcipher_cra_init,
+	.cra_u = {
+		.ablkcipher = {
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.setkey = mv_cesa_aes_setkey,
+			.encrypt = mv_cesa_ecb_aes_encrypt,
+			.decrypt = mv_cesa_ecb_aes_decrypt,
+		},
+	},
+};
+
+static int mv_cesa_cbc_aes_op(struct ablkcipher_request *req,
+			      struct mv_cesa_op_ctx *tmpl)
+{
+	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTCM_CBC,
+			      CESA_SA_DESC_CFG_CRYPTCM_MSK);
+	memcpy(tmpl->ctx.blkcipher.iv, req->info, AES_BLOCK_SIZE);
+
+	return mv_cesa_aes_op(req, tmpl);
+}
+
+static int mv_cesa_cbc_aes_encrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_DIR_ENC);
+
+	return mv_cesa_cbc_aes_op(req, &tmpl);
+}
+
+static int mv_cesa_cbc_aes_decrypt(struct ablkcipher_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_DIR_DEC);
+
+	return mv_cesa_cbc_aes_op(req, &tmpl);
+}
+
+struct crypto_alg mv_cesa_cbc_aes_alg = {
+	.cra_name = "cbc(aes)",
+	.cra_driver_name = "mv-cbc-aes",
+	.cra_priority = 300,
+	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
+		     CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
+	.cra_blocksize = AES_BLOCK_SIZE,
+	.cra_ctxsize = sizeof(struct mv_cesa_aes_ctx),
+	.cra_alignmask = 0,
+	.cra_type = &crypto_ablkcipher_type,
+	.cra_module = THIS_MODULE,
+	.cra_init = mv_cesa_ablkcipher_cra_init,
+	.cra_u = {
+		.ablkcipher = {
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+			.setkey = mv_cesa_aes_setkey,
+			.encrypt = mv_cesa_cbc_aes_encrypt,
+			.decrypt = mv_cesa_cbc_aes_decrypt,
+		},
+	},
+};
diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
new file mode 100644
index 0000000..ec8c1ed
--- /dev/null
+++ b/drivers/crypto/marvell/hash.c
@@ -0,0 +1,1349 @@
+/*
+ * Hash algorithms supported by the CESA: MD5, SHA1 and SHA256.
+ *
+ * Author: Boris Brezillon <boris.brezillon@free-electrons.com>
+ * Author: Arnaud Ebalard <arno@natisbad.org>
+ *
+ * This work is based on an initial version written by
+ * Sebastian Andrzej Siewior < sebastian at breakpoint dot cc >
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+#include <crypto/md5.h>
+#include <crypto/sha.h>
+
+#include "cesa.h"
+
+struct mv_cesa_ahash_dma_iter {
+	struct mv_cesa_dma_iter base;
+	struct mv_cesa_sg_dma_iter src;
+};
+
+static inline void
+mv_cesa_ahash_req_iter_init(struct mv_cesa_ahash_dma_iter *iter,
+			    struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	unsigned int len = req->nbytes;
+
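+	/*
+	 * Intermediate requests only process complete blocks: the remaining
+	 * bytes are kept in the cache for the next update.
+	 */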
+	if (!creq->last_req)
+		len = (len + creq->cache_ptr) & ~CESA_HASH_BLOCK_SIZE_MSK;
+
+	mv_cesa_req_dma_iter_init(&iter->base, len);
+	mv_cesa_sg_dma_iter_init(&iter->src, req->src, DMA_TO_DEVICE);
+	iter->src.op_offset = creq->cache_ptr;
+}
+
+static inline bool
+mv_cesa_ahash_req_iter_next_op(struct mv_cesa_ahash_dma_iter *iter)
+{
+	iter->src.op_offset = 0;
+
+	return mv_cesa_req_dma_iter_next_op(&iter->base);
+}
+
+static inline int mv_cesa_ahash_dma_alloc_cache(struct mv_cesa_ahash_req *creq,
+						gfp_t flags)
+{
+	struct mv_cesa_ahash_dma_req *dreq = &creq->req.dma;
+
+	creq->cache = dma_pool_alloc(cesa_dev->dma->cache_pool, flags,
+				     &dreq->cache_dma);
+	if (!creq->cache)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static inline int mv_cesa_ahash_std_alloc_cache(struct mv_cesa_ahash_req *creq,
+						gfp_t flags)
+{
+	creq->cache = kzalloc(CESA_MAX_HASH_BLOCK_SIZE, flags);
+	if (!creq->cache)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int mv_cesa_ahash_alloc_cache(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+		      GFP_KERNEL : GFP_ATOMIC;
+	int ret;
+
+	if (creq->cache)
+		return 0;
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		ret = mv_cesa_ahash_dma_alloc_cache(creq, flags);
+	else
+		ret = mv_cesa_ahash_std_alloc_cache(creq, flags);
+
+	return ret;
+}
+
+static inline void mv_cesa_ahash_dma_free_cache(struct mv_cesa_ahash_req *creq)
+{
+	dma_pool_free(cesa_dev->dma->cache_pool, creq->cache,
+		      creq->req.dma.cache_dma);
+}
+
+static inline void mv_cesa_ahash_std_free_cache(struct mv_cesa_ahash_req *creq)
+{
+	kfree(creq->cache);
+}
+
+static void mv_cesa_ahash_free_cache(struct mv_cesa_ahash_req *creq)
+{
+	if (!creq->cache)
+		return;
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_ahash_dma_free_cache(creq);
+	else
+		mv_cesa_ahash_std_free_cache(creq);
+
+	creq->cache = NULL;
+}
+
+static int mv_cesa_ahash_dma_alloc_padding(struct mv_cesa_ahash_dma_req *req,
+					   gfp_t flags)
+{
+	if (req->padding)
+		return 0;
+
+	req->padding = dma_pool_alloc(cesa_dev->dma->padding_pool, flags,
+				      &req->padding_dma);
+	if (!req->padding)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void mv_cesa_ahash_dma_free_padding(struct mv_cesa_ahash_dma_req *req)
+{
+	if (!req->padding)
+		return;
+
+	dma_pool_free(cesa_dev->dma->padding_pool, req->padding,
+		      req->padding_dma);
+	req->padding = NULL;
+}
+
+static inline void mv_cesa_ahash_dma_last_cleanup(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+
+	mv_cesa_ahash_dma_free_padding(&creq->req.dma);
+}
+
+static inline void mv_cesa_ahash_dma_cleanup(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+
+	dma_unmap_sg(cesa_dev->dev, req->src, creq->src_nents, DMA_TO_DEVICE);
+	mv_cesa_dma_cleanup(&creq->req.dma.base);
+}
+
+static inline void mv_cesa_ahash_cleanup(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_ahash_dma_cleanup(req);
+}
+
+static void mv_cesa_ahash_last_cleanup(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+
+	mv_cesa_ahash_free_cache(creq);
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_ahash_dma_last_cleanup(req);
+}
+
+static int mv_cesa_ahash_pad_len(struct mv_cesa_ahash_req *creq)
+{
+	unsigned int index, padlen;
+
+	index = creq->len & CESA_HASH_BLOCK_SIZE_MSK;
+	padlen = (index < 56) ? (56 - index) : (64 + 56 - index);
+
+	return padlen;
+}
+
+static int mv_cesa_ahash_pad_req(struct mv_cesa_ahash_req *creq, u8 *buf)
+{
+	__be64 bits = cpu_to_be64(creq->len << 3);
+	unsigned int padlen;
+
+	buf[0] = 0x80;
+	/* Pad out to 56 mod 64 */
+	padlen = mv_cesa_ahash_pad_len(creq);
+	memset(buf + 1, 0, padlen - 1);
+	memcpy(buf + padlen, &bits, sizeof(bits));
+
+	return padlen + 8;
+}
+
+static void mv_cesa_ahash_std_step(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	struct mv_cesa_ahash_std_req *sreq = &creq->req.std;
+	struct mv_cesa_engine *engine = sreq->base.engine;
+	struct mv_cesa_op_ctx *op;
+	unsigned int new_cache_ptr = 0;
+	u32 frag_mode;
+	size_t  len;
+
+	if (creq->cache_ptr)
+		memcpy(engine->sram + CESA_SA_DATA_SRAM_OFFSET, creq->cache,
+		       creq->cache_ptr);
+
+	len = min_t(size_t, req->nbytes + creq->cache_ptr - sreq->offset,
+		    CESA_SA_SRAM_PAYLOAD_SIZE);
+
+	if (!creq->last_req) {
+		new_cache_ptr = len & CESA_HASH_BLOCK_SIZE_MSK;
+		len &= ~CESA_HASH_BLOCK_SIZE_MSK;
+	}
+
+	if (len - creq->cache_ptr)
+		sreq->offset += sg_pcopy_to_buffer(req->src, creq->src_nents,
+						   engine->sram +
+						   CESA_SA_DATA_SRAM_OFFSET +
+						   creq->cache_ptr,
+						   len - creq->cache_ptr,
+						   sreq->offset);
+
+	op = &creq->op_tmpl;
+
+	frag_mode = mv_cesa_get_op_cfg(op) & CESA_SA_DESC_CFG_FRAG_MSK;
+
+	if (creq->last_req && sreq->offset == req->nbytes &&
+	    creq->len <= CESA_SA_DESC_MAC_SRC_TOTAL_LEN_MAX) {
+		if (frag_mode == CESA_SA_DESC_CFG_FIRST_FRAG)
+			frag_mode = CESA_SA_DESC_CFG_NOT_FRAG;
+		else if (frag_mode == CESA_SA_DESC_CFG_MID_FRAG)
+			frag_mode = CESA_SA_DESC_CFG_LAST_FRAG;
+	}
+
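+	/*
+	 * Last chunk: either let the engine generate the padding (the total
+	 * length fits in the descriptor) or append it manually, caching the
+	 * trailing bytes when data and padding do not fit together in the
+	 * SRAM payload area.
+	 */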
+	if (frag_mode == CESA_SA_DESC_CFG_NOT_FRAG ||
+	    frag_mode == CESA_SA_DESC_CFG_LAST_FRAG) {
+		if (len &&
+		    creq->len <= CESA_SA_DESC_MAC_SRC_TOTAL_LEN_MAX) {
+			mv_cesa_set_mac_op_total_len(op, creq->len);
+		} else {
+			int trailerlen = mv_cesa_ahash_pad_len(creq) + 8;
+
+			if (len + trailerlen > CESA_SA_SRAM_PAYLOAD_SIZE) {
+				len &= CESA_HASH_BLOCK_SIZE_MSK;
+				new_cache_ptr = 64 - trailerlen;
+				memcpy(creq->cache,
+				       engine->sram +
+				       CESA_SA_DATA_SRAM_OFFSET + len,
+				       new_cache_ptr);
+			} else {
+				len += mv_cesa_ahash_pad_req(creq,
+						engine->sram + len +
+						CESA_SA_DATA_SRAM_OFFSET);
+			}
+
+			if (frag_mode == CESA_SA_DESC_CFG_LAST_FRAG)
+				frag_mode = CESA_SA_DESC_CFG_MID_FRAG;
+			else
+				frag_mode = CESA_SA_DESC_CFG_FIRST_FRAG;
+		}
+	}
+
+	mv_cesa_set_mac_op_frag_len(op, len);
+	mv_cesa_update_op_cfg(op, frag_mode, CESA_SA_DESC_CFG_FRAG_MSK);
+
+	/* FIXME: only update enc_len field */
+	memcpy(engine->sram, op, sizeof(*op));
+
+	if (frag_mode == CESA_SA_DESC_CFG_FIRST_FRAG)
+		mv_cesa_update_op_cfg(op, CESA_SA_DESC_CFG_MID_FRAG,
+				      CESA_SA_DESC_CFG_FRAG_MSK);
+
+	creq->cache_ptr = new_cache_ptr;
+
+	mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
+	writel(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
+	writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
+}
+
+static int mv_cesa_ahash_std_process(struct ahash_request *req, u32 status)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	struct mv_cesa_ahash_std_req *sreq = &creq->req.std;
+
+	if (sreq->offset < (req->nbytes - creq->cache_ptr))
+		return -EINPROGRESS;
+
+	return 0;
+}
+
+static inline void mv_cesa_ahash_dma_prepare(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	struct mv_cesa_tdma_req *dreq = &creq->req.dma.base;
+
+	mv_cesa_dma_prepare(dreq, dreq->base.engine);
+}
+
+static void mv_cesa_ahash_std_prepare(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	struct mv_cesa_ahash_std_req *sreq = &creq->req.std;
+	struct mv_cesa_engine *engine = sreq->base.engine;
+
+	sreq->offset = 0;
+	mv_cesa_adjust_op(engine, &creq->op_tmpl);
+	memcpy(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl));
+}
+
+static void mv_cesa_ahash_step(struct crypto_async_request *req)
+{
+	struct ahash_request *ahashreq = ahash_request_cast(req);
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(ahashreq);
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_dma_step(&creq->req.dma.base);
+	else
+		mv_cesa_ahash_std_step(ahashreq);
+}
+
+static int mv_cesa_ahash_process(struct crypto_async_request *req, u32 status)
+{
+	struct ahash_request *ahashreq = ahash_request_cast(req);
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(ahashreq);
+	struct mv_cesa_engine *engine = creq->req.base.engine;
+	unsigned int digsize;
+	int ret, i;
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		ret = mv_cesa_dma_process(&creq->req.dma.base, status);
+	else
+		ret = mv_cesa_ahash_std_process(ahashreq, status);
+
+	if (ret == -EINPROGRESS)
+		return ret;
+
+	digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(ahashreq));
+	for (i = 0; i < digsize / 4; i++)
+		creq->state[i] = readl(engine->regs + CESA_IVDIG(i));
+
+	if (creq->cache_ptr)
+		sg_pcopy_to_buffer(ahashreq->src, creq->src_nents,
+				   creq->cache,
+				   creq->cache_ptr,
+				   ahashreq->nbytes - creq->cache_ptr);
+
+	if (creq->last_req) {
+		for (i = 0; i < digsize / 4; i++) {
+			/*
+			 * The hardware provides the MD5 digest in a different
+			 * endianness than the SHA-1 and SHA-256 ones.
+			 */
+			if (digsize == MD5_DIGEST_SIZE)
+				creq->state[i] = cpu_to_le32(creq->state[i]);
+			else
+				creq->state[i] = cpu_to_be32(creq->state[i]);
+		}
+
+		memcpy(ahashreq->result, creq->state, digsize);
+	}
+
+	return ret;
+}
+
+static void mv_cesa_ahash_prepare(struct crypto_async_request *req,
+				  struct mv_cesa_engine *engine)
+{
+	struct ahash_request *ahashreq = ahash_request_cast(req);
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(ahashreq);
+	unsigned int digsize;
+	int i;
+
+	creq->req.base.engine = engine;
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		mv_cesa_ahash_dma_prepare(ahashreq);
+	else
+		mv_cesa_ahash_std_prepare(ahashreq);
+
+	digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(ahashreq));
+	for (i = 0; i < digsize / 4; i++)
+		writel(creq->state[i],
+		       engine->regs + CESA_IVDIG(i));
+}
+
+static void mv_cesa_ahash_req_cleanup(struct crypto_async_request *req)
+{
+	struct ahash_request *ahashreq = ahash_request_cast(req);
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(ahashreq);
+
+	if (creq->last_req)
+		mv_cesa_ahash_last_cleanup(ahashreq);
+
+	mv_cesa_ahash_cleanup(ahashreq);
+}
+
+static const struct mv_cesa_req_ops mv_cesa_ahash_req_ops = {
+	.step = mv_cesa_ahash_step,
+	.process = mv_cesa_ahash_process,
+	.prepare = mv_cesa_ahash_prepare,
+	.cleanup = mv_cesa_ahash_req_cleanup,
+};
+
+static int mv_cesa_ahash_init(struct ahash_request *req,
+			      struct mv_cesa_op_ctx *tmpl)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+
+	memset(creq, 0, sizeof(*creq));
+	mv_cesa_update_op_cfg(tmpl,
+			      CESA_SA_DESC_CFG_OP_MAC_ONLY |
+			      CESA_SA_DESC_CFG_FIRST_FRAG,
+			      CESA_SA_DESC_CFG_OP_MSK |
+			      CESA_SA_DESC_CFG_FRAG_MSK);
+	mv_cesa_set_mac_op_total_len(tmpl, 0);
+	mv_cesa_set_mac_op_frag_len(tmpl, 0);
+	creq->op_tmpl = *tmpl;
+	creq->len = 0;
+
+	return 0;
+}
+
+static inline int mv_cesa_ahash_cra_init(struct crypto_tfm *tfm)
+{
+	struct mv_cesa_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	ctx->base.ops = &mv_cesa_ahash_req_ops;
+
+	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+				 sizeof(struct mv_cesa_ahash_req));
+	return 0;
+}
+
+static int mv_cesa_ahash_cache_req(struct ahash_request *req, bool *cached)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	int ret;
+
+	if (((creq->cache_ptr + req->nbytes) & CESA_HASH_BLOCK_SIZE_MSK) &&
+	    !creq->last_req) {
+		ret = mv_cesa_ahash_alloc_cache(req);
+		if (ret)
+			return ret;
+	}
+
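+	/*
+	 * If the cached data plus the new data still fit in less than a
+	 * block and this is not the last request, just accumulate the bytes
+	 * in the cache.
+	 */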
+	if (creq->cache_ptr + req->nbytes < 64 && !creq->last_req) {
+		*cached = true;
+
+		if (!req->nbytes)
+			return 0;
+
+		sg_pcopy_to_buffer(req->src, creq->src_nents,
+				   creq->cache + creq->cache_ptr,
+				   req->nbytes, 0);
+
+		creq->cache_ptr += req->nbytes;
+	}
+
+	return 0;
+}
+
+static struct mv_cesa_op_ctx *
+mv_cesa_ahash_dma_add_cache(struct mv_cesa_tdma_chain *chain,
+			    struct mv_cesa_ahash_dma_iter *dma_iter,
+			    struct mv_cesa_ahash_req *creq,
+			    gfp_t flags)
+{
+	struct mv_cesa_ahash_dma_req *ahashdreq = &creq->req.dma;
+	struct mv_cesa_op_ctx *op = NULL;
+	int ret;
+
+	if (!creq->cache_ptr)
+		return NULL;
+
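+	/*
+	 * Transfer the bytes kept in the cache to the beginning of the SRAM
+	 * data area so they are processed before the new input data.
+	 */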
+	ret = mv_cesa_dma_add_data_transfer(chain,
+					    CESA_SA_DATA_SRAM_OFFSET,
+					    ahashdreq->cache_dma,
+					    creq->cache_ptr,
+					    CESA_TDMA_DST_IN_SRAM,
+					    flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+	if (!dma_iter->base.op_len) {
+		op = mv_cesa_dma_add_op(chain, &creq->op_tmpl, flags);
+		if (IS_ERR(op))
+			return op;
+
+		mv_cesa_set_mac_op_frag_len(op, creq->cache_ptr);
+
+		/* Add dummy desc to launch crypto operation */
+		ret = mv_cesa_dma_add_dummy_launch(chain, flags);
+		if (ret)
+			return ERR_PTR(ret);
+	}
+
+	return op;
+}
+
+static struct mv_cesa_op_ctx *
+mv_cesa_ahash_dma_add_data(struct mv_cesa_tdma_chain *chain,
+			   struct mv_cesa_ahash_dma_iter *dma_iter,
+			   struct mv_cesa_ahash_req *creq,
+			   gfp_t flags)
+{
+	struct mv_cesa_op_ctx *op;
+	int ret;
+
+	op = mv_cesa_dma_add_op(chain, &creq->op_tmpl, flags);
+	if (IS_ERR(op))
+		return op;
+
+	mv_cesa_set_mac_op_frag_len(op, dma_iter->base.op_len);
+
+	if ((mv_cesa_get_op_cfg(&creq->op_tmpl) & CESA_SA_DESC_CFG_FRAG_MSK) ==
+	    CESA_SA_DESC_CFG_FIRST_FRAG)
+		mv_cesa_update_op_cfg(&creq->op_tmpl,
+				      CESA_SA_DESC_CFG_MID_FRAG,
+				      CESA_SA_DESC_CFG_FRAG_MSK);
+
+	/* Add input transfers */
+	ret = mv_cesa_dma_add_op_transfers(chain, &dma_iter->base,
+					   &dma_iter->src, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+	/* Add dummy desc to launch crypto operation */
+	ret = mv_cesa_dma_add_dummy_launch(chain, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return op;
+}
+
+static struct mv_cesa_op_ctx *
+mv_cesa_ahash_dma_last_req(struct mv_cesa_tdma_chain *chain,
+			   struct mv_cesa_ahash_dma_iter *dma_iter,
+			   struct mv_cesa_ahash_req *creq,
+			   struct mv_cesa_op_ctx *op,
+			   gfp_t flags)
+{
+	struct mv_cesa_ahash_dma_req *ahashdreq = &creq->req.dma;
+	unsigned int len, trailerlen, padoff = 0;
+	int ret;
+
+	if (!creq->last_req)
+		return op;
+
+	if (op && creq->len <= CESA_SA_DESC_MAC_SRC_TOTAL_LEN_MAX) {
+		u32 frag = CESA_SA_DESC_CFG_NOT_FRAG;
+
+		if ((mv_cesa_get_op_cfg(op) & CESA_SA_DESC_CFG_FRAG_MSK) !=
+		    CESA_SA_DESC_CFG_FIRST_FRAG)
+			frag = CESA_SA_DESC_CFG_LAST_FRAG;
+
+		mv_cesa_update_op_cfg(op, frag, CESA_SA_DESC_CFG_FRAG_MSK);
+
+		return op;
+	}
+
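+	/*
+	 * Generate the padding on the CPU and transfer it to SRAM, adding an
+	 * extra operation when it does not fit in the space left after the
+	 * last data chunk.
+	 */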
+	ret = mv_cesa_ahash_dma_alloc_padding(ahashdreq, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+	trailerlen = mv_cesa_ahash_pad_req(creq, ahashdreq->padding);
+
+	if (op) {
+		len = min(CESA_SA_SRAM_PAYLOAD_SIZE - dma_iter->base.op_len,
+			  trailerlen);
+		if (len) {
+			ret = mv_cesa_dma_add_data_transfer(chain,
+						CESA_SA_DATA_SRAM_OFFSET +
+						dma_iter->base.op_len,
+						ahashdreq->padding_dma,
+						len, CESA_TDMA_DST_IN_SRAM,
+						flags);
+			if (ret)
+				return ERR_PTR(ret);
+
+			mv_cesa_update_op_cfg(op, CESA_SA_DESC_CFG_MID_FRAG,
+					      CESA_SA_DESC_CFG_FRAG_MSK);
+			mv_cesa_set_mac_op_frag_len(op,
+					dma_iter->base.op_len + len);
+			padoff += len;
+		}
+	}
+
+	if (padoff >= trailerlen)
+		return op;
+
+	if ((mv_cesa_get_op_cfg(&creq->op_tmpl) & CESA_SA_DESC_CFG_FRAG_MSK) !=
+	    CESA_SA_DESC_CFG_FIRST_FRAG)
+		mv_cesa_update_op_cfg(&creq->op_tmpl,
+				      CESA_SA_DESC_CFG_MID_FRAG,
+				      CESA_SA_DESC_CFG_FRAG_MSK);
+
+	op = mv_cesa_dma_add_op(chain, &creq->op_tmpl, flags);
+	if (IS_ERR(op))
+		return op;
+
+	mv_cesa_set_mac_op_frag_len(op, trailerlen - padoff);
+
+	ret = mv_cesa_dma_add_data_transfer(chain,
+					CESA_SA_DATA_SRAM_OFFSET,
+					ahashdreq->padding_dma +
+					padoff,
+					trailerlen - padoff,
+					CESA_TDMA_DST_IN_SRAM,
+					flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+	/* Add dummy desc to launch crypto operation */
+	ret = mv_cesa_dma_add_dummy_launch(chain, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return op;
+}
+
+static int mv_cesa_ahash_dma_req_init(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+		      GFP_KERNEL : GFP_ATOMIC;
+	struct mv_cesa_ahash_dma_req *ahashdreq = &creq->req.dma;
+	struct mv_cesa_tdma_req *dreq = &ahashdreq->base;
+	struct mv_cesa_tdma_chain chain;
+	struct mv_cesa_ahash_dma_iter iter;
+	struct mv_cesa_op_ctx *op = NULL;
+	int ret;
+
+	dreq->chain.first = NULL;
+	dreq->chain.last = NULL;
+
+	ret = dma_map_sg(cesa_dev->dev, req->src, creq->src_nents,
+			 DMA_TO_DEVICE);
+	if (ret < 0)
+		goto err;
+
+	creq->src_nents = ret;
+
+	mv_cesa_tdma_desc_iter_init(&chain);
+	mv_cesa_ahash_req_iter_init(&iter, req);
+
+	op = mv_cesa_ahash_dma_add_cache(&chain, &iter,
+					 creq, flags);
+	if (IS_ERR(op)) {
+		ret = PTR_ERR(op);
+		goto err_free_tdma;
+	}
+
+	do {
+		if (!iter.base.op_len)
+			break;
+
+		op = mv_cesa_ahash_dma_add_data(&chain, &iter,
+						creq, flags);
+		if (IS_ERR(op)) {
+			ret = PTR_ERR(op);
+			goto err_free_tdma;
+		}
+	} while (mv_cesa_ahash_req_iter_next_op(&iter));
+
+	op = mv_cesa_ahash_dma_last_req(&chain, &iter, creq, op, flags);
+	if (IS_ERR(op)) {
+		ret = PTR_ERR(op);
+		goto err_free_tdma;
+	}
+
+	if (op) {
+		/* Add dummy desc to wait for crypto operation end */
+		ret = mv_cesa_dma_add_dummy_end(&chain, flags);
+		if (ret)
+			goto err_free_tdma;
+	}
+
+	if (!creq->last_req)
+		creq->cache_ptr = req->nbytes + creq->cache_ptr -
+				  iter.base.len;
+	else
+		creq->cache_ptr = 0;
+
+	dreq->chain = chain;
+
+	return 0;
+
+err_free_tdma:
+	mv_cesa_dma_cleanup(dreq);
+	dma_unmap_sg(cesa_dev->dev, req->src, creq->src_nents, DMA_TO_DEVICE);
+
+err:
+	mv_cesa_ahash_last_cleanup(req);
+
+	return ret;
+}
+
+static int mv_cesa_ahash_req_init(struct ahash_request *req, bool *cached)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	int ret;
+
+	if (cesa_dev->caps->has_tdma)
+		creq->req.base.type = CESA_DMA_REQ;
+	else
+		creq->req.base.type = CESA_STD_REQ;
+
+	creq->src_nents = mv_cesa_sg_count(req->src, req->nbytes);
+
+	ret = mv_cesa_ahash_cache_req(req, cached);
+	if (ret)
+		return ret;
+
+	if (*cached)
+		return 0;
+
+	if (creq->req.base.type == CESA_DMA_REQ)
+		ret = mv_cesa_ahash_dma_req_init(req);
+
+	return ret;
+}
+
+static int mv_cesa_ahash_update(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	bool cached = false;
+	int ret;
+
+	creq->len += req->nbytes;
+	ret = mv_cesa_ahash_req_init(req, &cached);
+	if (ret)
+		return ret;
+
+	if (cached)
+		return 0;
+
+	ret = mv_cesa_queue_req(&req->base);
+	if (ret && ret != -EINPROGRESS) {
+		mv_cesa_ahash_cleanup(req);
+		return ret;
+	}
+
+	return ret;
+}
+
+static int mv_cesa_ahash_final(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	struct mv_cesa_op_ctx *tmpl = &creq->op_tmpl;
+	bool cached = false;
+	int ret;
+
+	mv_cesa_set_mac_op_total_len(tmpl, creq->len);
+	creq->last_req = true;
+	req->nbytes = 0;
+
+	ret = mv_cesa_ahash_req_init(req, &cached);
+	if (ret)
+		return ret;
+
+	if (cached)
+		return 0;
+
+	ret = mv_cesa_queue_req(&req->base);
+	if (ret && ret != -EINPROGRESS)
+		mv_cesa_ahash_cleanup(req);
+
+	return ret;
+}
+
+static int mv_cesa_ahash_finup(struct ahash_request *req)
+{
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	struct mv_cesa_op_ctx *tmpl = &creq->op_tmpl;
+	bool cached = false;
+	int ret;
+
+	creq->len += req->nbytes;
+	mv_cesa_set_mac_op_total_len(tmpl, creq->len);
+	creq->last_req = true;
+
+	ret = mv_cesa_ahash_req_init(req, &cached);
+	if (ret)
+		return ret;
+
+	if (cached)
+		return 0;
+
+	ret = mv_cesa_queue_req(&req->base);
+	if (ret && ret != -EINPROGRESS)
+		mv_cesa_ahash_cleanup(req);
+
+	return ret;
+}
+
+static int mv_cesa_md5_init(struct ahash_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_MD5);
+
+	mv_cesa_ahash_init(req, &tmpl);
+
+	return 0;
+}
+
+static int mv_cesa_md5_export(struct ahash_request *req, void *out)
+{
+	struct md5_state *out_state = out;
+	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	unsigned int digsize = crypto_ahash_digestsize(ahash);
+
+	out_state->byte_count = creq->len;
+	memcpy(out_state->hash, creq->state, digsize);
+	memset(out_state->block, 0, sizeof(out_state->block));
+	if (creq->cache)
+		memcpy(out_state->block, creq->cache, creq->cache_ptr);
+
+	return 0;
+}
+
+static int mv_cesa_md5_digest(struct ahash_request *req)
+{
+	int ret;
+
+	ret = mv_cesa_md5_init(req);
+	if (ret)
+		return ret;
+
+	return mv_cesa_ahash_finup(req);
+}
+
+struct ahash_alg mv_md5_alg = {
+	.init = mv_cesa_md5_init,
+	.update = mv_cesa_ahash_update,
+	.final = mv_cesa_ahash_final,
+	.finup = mv_cesa_ahash_finup,
+	.digest = mv_cesa_md5_digest,
+	.export = mv_cesa_md5_export,
+	.halg = {
+		.digestsize = MD5_DIGEST_SIZE,
+		.base = {
+			.cra_name = "md5",
+			.cra_driver_name = "mv-md5",
+			.cra_priority = 300,
+			.cra_flags = CRYPTO_ALG_ASYNC |
+				     CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct mv_cesa_hash_ctx),
+			.cra_init = mv_cesa_ahash_cra_init,
+			.cra_module = THIS_MODULE,
+		 }
+	}
+};
+
+static int mv_cesa_sha1_init(struct ahash_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_SHA1);
+
+	mv_cesa_ahash_init(req, &tmpl);
+
+	return 0;
+}
+
+static int mv_cesa_sha1_export(struct ahash_request *req, void *out)
+{
+	struct sha1_state *out_state = out;
+	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	unsigned int digsize = crypto_ahash_digestsize(ahash);
+
+	out_state->count = creq->len;
+	memcpy(out_state->state, creq->state, digsize);
+	memset(out_state->buffer, 0, sizeof(out_state->buffer));
+	if (creq->cache)
+		memcpy(out_state->buffer, creq->cache, creq->cache_ptr);
+
+	return 0;
+}
+
+static int mv_cesa_sha1_digest(struct ahash_request *req)
+{
+	int ret;
+
+	ret = mv_cesa_sha1_init(req);
+	if (ret)
+		return ret;
+
+	return mv_cesa_ahash_finup(req);
+}
+
+struct ahash_alg mv_sha1_alg = {
+	.init = mv_cesa_sha1_init,
+	.update = mv_cesa_ahash_update,
+	.final = mv_cesa_ahash_final,
+	.finup = mv_cesa_ahash_finup,
+	.digest = mv_cesa_sha1_digest,
+	.export = mv_cesa_sha1_export,
+	.halg = {
+		.digestsize = SHA1_DIGEST_SIZE,
+		.base = {
+			.cra_name = "sha1",
+			.cra_driver_name = "mv-sha1",
+			.cra_priority = 300,
+			.cra_flags = CRYPTO_ALG_ASYNC |
+				     CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_blocksize = SHA1_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct mv_cesa_hash_ctx),
+			.cra_init = mv_cesa_ahash_cra_init,
+			.cra_module = THIS_MODULE,
+		 }
+	}
+};
+
+static int mv_cesa_sha256_init(struct ahash_request *req)
+{
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_SHA256);
+
+	mv_cesa_ahash_init(req, &tmpl);
+
+	return 0;
+}
+
+static int mv_cesa_sha256_digest(struct ahash_request *req)
+{
+	int ret;
+
+	ret = mv_cesa_sha256_init(req);
+	if (ret)
+		return ret;
+
+	return mv_cesa_ahash_finup(req);
+}
+
+static int mv_cesa_sha256_export(struct ahash_request *req, void *out)
+{
+	struct sha256_state *out_state = out;
+	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
+	unsigned int ds = crypto_ahash_digestsize(ahash);
+
+	out_state->count = creq->len;
+	memcpy(out_state->state, creq->state, ds);
+	memset(out_state->buf, 0, sizeof(out_state->buf));
+	if (creq->cache)
+		memcpy(out_state->buf, creq->cache, creq->cache_ptr);
+
+	return 0;
+}
+
+struct ahash_alg mv_sha256_alg = {
+	.init = mv_cesa_sha256_init,
+	.update = mv_cesa_ahash_update,
+	.final = mv_cesa_ahash_final,
+	.finup = mv_cesa_ahash_finup,
+	.digest = mv_cesa_sha256_digest,
+	.export = mv_cesa_sha256_export,
+	.halg = {
+		.digestsize = SHA256_DIGEST_SIZE,
+		.base = {
+			.cra_name = "sha256",
+			.cra_driver_name = "mv-sha256",
+			.cra_priority = 300,
+			.cra_flags = CRYPTO_ALG_ASYNC |
+				     CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_blocksize = SHA256_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct mv_cesa_hash_ctx),
+			.cra_init = mv_cesa_ahash_cra_init,
+			.cra_module = THIS_MODULE,
+		 }
+	}
+};
+
+struct mv_cesa_ahash_result {
+	struct completion completion;
+	int error;
+};
+
+static void mv_cesa_hmac_ahash_complete(struct crypto_async_request *req,
+					int error)
+{
+	struct mv_cesa_ahash_result *result = req->data;
+
+	if (error == -EINPROGRESS)
+		return;
+
+	result->error = error;
+	complete(&result->completion);
+}
+
+static int mv_cesa_ahmac_iv_state_init(struct ahash_request *req, u8 *pad,
+				       void *state, unsigned int blocksize)
+{
+	struct mv_cesa_ahash_result result;
+	struct scatterlist sg;
+	int ret;
+
+	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+				   mv_cesa_hmac_ahash_complete, &result);
+	sg_init_one(&sg, pad, blocksize);
+	ahash_request_set_crypt(req, &sg, pad, blocksize);
+	init_completion(&result.completion);
+
+	ret = crypto_ahash_init(req);
+	if (ret)
+		return ret;
+
+	ret = crypto_ahash_update(req);
+	if (ret && ret != -EINPROGRESS)
+		return ret;
+
+	wait_for_completion_interruptible(&result.completion);
+	if (result.error)
+		return result.error;
+
+	ret = crypto_ahash_export(req, state);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int mv_cesa_ahmac_pad_init(struct ahash_request *req,
+				  const u8 *key, unsigned int keylen,
+				  u8 *ipad, u8 *opad,
+				  unsigned int blocksize)
+{
+	struct mv_cesa_ahash_result result;
+	struct scatterlist sg;
+	int ret;
+	int i;
+
+	if (keylen <= blocksize) {
+		memcpy(ipad, key, keylen);
+	} else {
+		u8 *keydup = kmemdup(key, keylen, GFP_KERNEL);
+
+		if (!keydup)
+			return -ENOMEM;
+
+		ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+					   mv_cesa_hmac_ahash_complete,
+					   &result);
+		sg_init_one(&sg, keydup, keylen);
+		ahash_request_set_crypt(req, &sg, ipad, keylen);
+		init_completion(&result.completion);
+
+		ret = crypto_ahash_digest(req);
+		if (ret == -EINPROGRESS) {
+			wait_for_completion_interruptible(&result.completion);
+			ret = result.error;
+		}
+
+		/* Set the memory region to 0 to avoid any leak. */
+		memset(keydup, 0, keylen);
+		kfree(keydup);
+
+		if (ret)
+			return ret;
+
+		keylen = crypto_ahash_digestsize(crypto_ahash_reqtfm(req));
+	}
+
+	memset(ipad + keylen, 0, blocksize - keylen);
+	memcpy(opad, ipad, blocksize);
+
+	for (i = 0; i < blocksize; i++) {
+		ipad[i] ^= 0x36;
+		opad[i] ^= 0x5c;
+	}
+
+	return 0;
+}
+
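+/*
+ * Compute the intermediate hash states obtained after hashing the HMAC ipad
+ * and opad blocks: they are used as inner/outer IVs by the engine.
+ */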
+static int mv_cesa_ahmac_setkey(const char *hash_alg_name,
+				const u8 *key, unsigned int keylen,
+				void *istate, void *ostate)
+{
+	struct ahash_request *req;
+	struct crypto_ahash *tfm;
+	unsigned int blocksize;
+	u8 *ipad = NULL;
+	u8 *opad;
+	int ret;
+
+	tfm = crypto_alloc_ahash(hash_alg_name, CRYPTO_ALG_TYPE_AHASH,
+				 CRYPTO_ALG_TYPE_AHASH_MASK);
+	if (IS_ERR(tfm))
+		return PTR_ERR(tfm);
+
+	req = ahash_request_alloc(tfm, GFP_KERNEL);
+	if (!req) {
+		ret = -ENOMEM;
+		goto free_ahash;
+	}
+
+	crypto_ahash_clear_flags(tfm, ~0);
+
+	blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
+
+	ipad = kzalloc(2 * blocksize, GFP_KERNEL);
+	if (!ipad) {
+		ret = -ENOMEM;
+		goto free_req;
+	}
+
+	opad = ipad + blocksize;
+
+	ret = mv_cesa_ahmac_pad_init(req, key, keylen, ipad, opad, blocksize);
+	if (ret)
+		goto free_ipad;
+
+	ret = mv_cesa_ahmac_iv_state_init(req, ipad, istate, blocksize);
+	if (ret)
+		goto free_ipad;
+
+	ret = mv_cesa_ahmac_iv_state_init(req, opad, ostate, blocksize);
+
+free_ipad:
+	kfree(ipad);
+free_req:
+	ahash_request_free(req);
+free_ahash:
+	crypto_free_ahash(tfm);
+
+	return ret;
+}
+
+static int mv_cesa_ahmac_cra_init(struct crypto_tfm *tfm)
+{
+	struct mv_cesa_hmac_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	ctx->base.ops = &mv_cesa_ahash_req_ops;
+
+	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+				 sizeof(struct mv_cesa_ahash_req));
+	return 0;
+}
+
+static int mv_cesa_ahmac_md5_init(struct ahash_request *req)
+{
+	struct mv_cesa_hmac_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_HMAC_MD5);
+	memcpy(tmpl.ctx.hash.iv, ctx->iv, sizeof(ctx->iv));
+
+	mv_cesa_ahash_init(req, &tmpl);
+
+	return 0;
+}
+
+static int mv_cesa_ahmac_md5_setkey(struct crypto_ahash *tfm, const u8 *key,
+				    unsigned int keylen)
+{
+	struct mv_cesa_hmac_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
+	struct md5_state istate, ostate;
+	int ret, i;
+
+	ret = mv_cesa_ahmac_setkey("mv-md5", key, keylen, &istate, &ostate);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ARRAY_SIZE(istate.hash); i++)
+		ctx->iv[i] = be32_to_cpu(istate.hash[i]);
+
+	for (i = 0; i < ARRAY_SIZE(ostate.hash); i++)
+		ctx->iv[i + 8] = be32_to_cpu(ostate.hash[i]);
+
+	return 0;
+}
+
+static int mv_cesa_ahmac_md5_digest(struct ahash_request *req)
+{
+	int ret;
+
+	ret = mv_cesa_ahmac_md5_init(req);
+	if (ret)
+		return ret;
+
+	return mv_cesa_ahash_finup(req);
+}
+
+struct ahash_alg mv_ahmac_md5_alg = {
+	.init = mv_cesa_ahmac_md5_init,
+	.update = mv_cesa_ahash_update,
+	.final = mv_cesa_ahash_final,
+	.finup = mv_cesa_ahash_finup,
+	.digest = mv_cesa_ahmac_md5_digest,
+	.setkey = mv_cesa_ahmac_md5_setkey,
+	.halg = {
+		.digestsize = MD5_DIGEST_SIZE,
+		.statesize = sizeof(struct md5_state),
+		.base = {
+			.cra_name = "hmac(md5)",
+			.cra_driver_name = "mv-hmac-md5",
+			.cra_priority = 300,
+			.cra_flags = CRYPTO_ALG_ASYNC |
+				     CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct mv_cesa_hmac_ctx),
+			.cra_init = mv_cesa_ahmac_cra_init,
+			.cra_module = THIS_MODULE,
+		 }
+	}
+};
+
+static int mv_cesa_ahmac_sha1_init(struct ahash_request *req)
+{
+	struct mv_cesa_hmac_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_HMAC_SHA1);
+	memcpy(tmpl.ctx.hash.iv, ctx->iv, sizeof(ctx->iv));
+
+	mv_cesa_ahash_init(req, &tmpl);
+
+	return 0;
+}
+
+static int mv_cesa_ahmac_sha1_setkey(struct crypto_ahash *tfm, const u8 *key,
+				     unsigned int keylen)
+{
+	struct mv_cesa_hmac_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
+	struct sha1_state istate, ostate;
+	int ret, i;
+
+	ret = mv_cesa_ahmac_setkey("mv-sha1", key, keylen, &istate, &ostate);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ARRAY_SIZE(istate.state); i++)
+		ctx->iv[i] = be32_to_cpu(istate.state[i]);
+
+	for (i = 0; i < ARRAY_SIZE(ostate.state); i++)
+		ctx->iv[i + 8] = be32_to_cpu(ostate.state[i]);
+
+	return 0;
+}
+
+static int mv_cesa_ahmac_sha1_digest(struct ahash_request *req)
+{
+	int ret;
+
+	ret = mv_cesa_ahmac_sha1_init(req);
+	if (ret)
+		return ret;
+
+	return mv_cesa_ahash_finup(req);
+}
+
+struct ahash_alg mv_ahmac_sha1_alg = {
+	.init = mv_cesa_ahmac_sha1_init,
+	.update = mv_cesa_ahash_update,
+	.final = mv_cesa_ahash_final,
+	.finup = mv_cesa_ahash_finup,
+	.digest = mv_cesa_ahmac_sha1_digest,
+	.setkey = mv_cesa_ahmac_sha1_setkey,
+	.halg = {
+		.digestsize = SHA1_DIGEST_SIZE,
+		.statesize = sizeof(struct sha1_state),
+		.base = {
+			.cra_name = "hmac(sha1)",
+			.cra_driver_name = "mv-hmac-sha1",
+			.cra_priority = 300,
+			.cra_flags = CRYPTO_ALG_ASYNC |
+				     CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_blocksize = SHA1_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct mv_cesa_hmac_ctx),
+			.cra_init = mv_cesa_ahmac_cra_init,
+			.cra_module = THIS_MODULE,
+		 }
+	}
+};
+
+static int mv_cesa_ahmac_sha256_setkey(struct crypto_ahash *tfm, const u8 *key,
+				       unsigned int keylen)
+{
+	struct mv_cesa_hmac_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
+	struct sha256_state istate, ostate;
+	int ret, i;
+
+	ret = mv_cesa_ahmac_setkey("mv-sha256", key, keylen, &istate, &ostate);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ARRAY_SIZE(istate.state); i++)
+		ctx->iv[i] = be32_to_cpu(istate.state[i]);
+
+	for (i = 0; i < ARRAY_SIZE(ostate.state); i++)
+		ctx->iv[i + 8] = be32_to_cpu(ostate.state[i]);
+
+	return 0;
+}
+
+static int mv_cesa_ahmac_sha256_init(struct ahash_request *req)
+{
+	struct mv_cesa_hmac_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	struct mv_cesa_op_ctx tmpl;
+
+	mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_HMAC_SHA256);
+	memcpy(tmpl.ctx.hash.iv, ctx->iv, sizeof(ctx->iv));
+
+	mv_cesa_ahash_init(req, &tmpl);
+
+	return 0;
+}
+
+static int mv_cesa_ahmac_sha256_digest(struct ahash_request *req)
+{
+	int ret;
+
+	ret = mv_cesa_ahmac_sha256_init(req);
+	if (ret)
+		return ret;
+
+	return mv_cesa_ahash_finup(req);
+}
+
+struct ahash_alg mv_ahmac_sha256_alg = {
+	.init = mv_cesa_ahmac_sha256_init,
+	.update = mv_cesa_ahash_update,
+	.final = mv_cesa_ahash_final,
+	.finup = mv_cesa_ahash_finup,
+	.digest = mv_cesa_ahmac_sha256_digest,
+	.setkey = mv_cesa_ahmac_sha256_setkey,
+	.halg = {
+		.digestsize = SHA256_DIGEST_SIZE,
+		.statesize = sizeof(struct sha256_state),
+		.base = {
+			.cra_name = "hmac(sha256)",
+			.cra_driver_name = "mv-hmac-sha256",
+			.cra_priority = 300,
+			.cra_flags = CRYPTO_ALG_ASYNC |
+				     CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_blocksize = SHA256_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct mv_cesa_hmac_ctx),
+			.cra_init = mv_cesa_ahmac_cra_init,
+			.cra_module = THIS_MODULE,
+		 }
+	}
+};
diff --git a/drivers/crypto/marvell/tdma.c b/drivers/crypto/marvell/tdma.c
new file mode 100644
index 0000000..1084c5a
--- /dev/null
+++ b/drivers/crypto/marvell/tdma.c
@@ -0,0 +1,223 @@
+/*
+ * Provide TDMA helper functions used by cipher and hash algorithm
+ * implementations.
+ *
+ * Author: Boris Brezillon <boris.brezillon@free-electrons.com>
+ * Author: Arnaud Ebalard <arno@natisbad.org>
+ *
+ * This work is based on an initial version written by
+ * Sebastian Andrzej Siewior < sebastian at breakpoint dot cc >
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+#include "cesa.h"
+
+bool mv_cesa_req_dma_iter_next_transfer(struct mv_cesa_dma_iter *iter,
+					struct mv_cesa_sg_dma_iter *sgiter,
+					unsigned int len)
+{
+	if (!sgiter->sg)
+		return false;
+
+	sgiter->op_offset += len;
+	sgiter->offset += len;
+	if (sgiter->offset == sgiter->sg->length) {
+		if (sg_is_last(sgiter->sg))
+			return false;
+		sgiter->offset = 0;
+		sgiter->sg = sg_next(sgiter->sg);
+	}
+
+	if (sgiter->op_offset == iter->op_len)
+		return false;
+
+	return true;
+}
+
+void mv_cesa_dma_step(struct mv_cesa_tdma_req *dreq)
+{
+	struct mv_cesa_engine *engine = dreq->base.engine;
+
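+	/*
+	 * Configure the engine for TDMA-driven operation, point the TDMA
+	 * engine at the first descriptor of the chain and start the
+	 * accelerator.
+	 */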
+	writel(0, engine->regs + CESA_SA_CFG);
+
+	mv_cesa_set_int_mask(engine, CESA_SA_INT_ACC0_IDMA_DONE);
+	writel(CESA_TDMA_DST_BURST_128B | CESA_TDMA_SRC_BURST_128B |
+	       CESA_TDMA_NO_BYTE_SWAP | CESA_TDMA_EN,
+	       engine->regs + CESA_TDMA_CONTROL);
+
+	writel(CESA_SA_CFG_ACT_CH0_IDMA | CESA_SA_CFG_MULTI_PKT |
+	       CESA_SA_CFG_CH0_W_IDMA | CESA_SA_CFG_PARA_DIS,
+	       engine->regs + CESA_SA_CFG);
+	writel(dreq->chain.first->cur_dma,
+	       engine->regs + CESA_TDMA_NEXT_ADDR);
+	writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
+}
+
+void mv_cesa_dma_cleanup(struct mv_cesa_tdma_req *dreq)
+{
+	struct mv_cesa_tdma_desc *tdma;
+
+	for (tdma = dreq->chain.first; tdma;) {
+		struct mv_cesa_tdma_desc *old_tdma = tdma;
+
+		if (tdma->op)
+			dma_pool_free(cesa_dev->dma->op_pool, tdma->op,
+				      le32_to_cpu(tdma->src));
+
+		tdma = tdma->next;
+		dma_pool_free(cesa_dev->dma->tdma_desc_pool, old_tdma,
+			      le32_to_cpu(old_tdma->cur_dma));
+	}
+
+	dreq->chain.first = NULL;
+	dreq->chain.last = NULL;
+}
+
+void mv_cesa_dma_prepare(struct mv_cesa_tdma_req *dreq,
+			 struct mv_cesa_engine *engine)
+{
+	struct mv_cesa_tdma_desc *tdma;
+
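+	/*
+	 * Descriptors are built with SRAM-relative offsets: now that the
+	 * target engine (and thus its SRAM base address) is known, turn them
+	 * into DMA addresses and fix up the operation contexts accordingly.
+	 */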
+	for (tdma = dreq->chain.first; tdma; tdma = tdma->next) {
+		if (tdma->flags & CESA_TDMA_DST_IN_SRAM)
+			tdma->dst = cpu_to_le32(tdma->dst + engine->sram_dma);
+
+		if (tdma->flags & CESA_TDMA_SRC_IN_SRAM)
+			tdma->src = cpu_to_le32(tdma->src + engine->sram_dma);
+
+		if (tdma->op)
+			mv_cesa_adjust_op(engine, tdma->op);
+	}
+}
+
+static struct mv_cesa_tdma_desc *
+mv_cesa_dma_add_desc(struct mv_cesa_tdma_chain *chain, gfp_t flags)
+{
+	struct mv_cesa_tdma_desc *new_tdma = NULL;
+	dma_addr_t dma_handle;
+
+	new_tdma = dma_pool_alloc(cesa_dev->dma->tdma_desc_pool, flags,
+				  &dma_handle);
+	if (!new_tdma)
+		return ERR_PTR(-ENOMEM);
+
+	memset(new_tdma, 0, sizeof(*new_tdma));
+	new_tdma->cur_dma = cpu_to_le32(dma_handle);
+	if (chain->last) {
+		chain->last->next_dma = new_tdma->cur_dma;
+		chain->last->next = new_tdma;
+	} else {
+		chain->first = new_tdma;
+	}
+
+	chain->last = new_tdma;
+
+	return new_tdma;
+}
+
+struct mv_cesa_op_ctx *mv_cesa_dma_add_op(struct mv_cesa_tdma_chain *chain,
+					const struct mv_cesa_op_ctx *op_templ,
+					gfp_t flags)
+{
+	struct mv_cesa_tdma_desc *tdma;
+	struct mv_cesa_op_ctx *op;
+	dma_addr_t dma_handle;
+
+	tdma = mv_cesa_dma_add_desc(chain, flags);
+	if (IS_ERR(tdma))
+		return ERR_CAST(tdma);
+
+	op = dma_pool_alloc(cesa_dev->dma->op_pool, flags, &dma_handle);
+	if (!op)
+		return ERR_PTR(-ENOMEM);
+
+	*op = *op_templ;
+
+	tdma = chain->last;
+	tdma->op = op;
+	tdma->byte_cnt = sizeof(*op) | BIT(31);
+	tdma->src = cpu_to_le32(dma_handle);
+	tdma->flags = CESA_TDMA_DST_IN_SRAM | CESA_TDMA_OP;
+
+	return op;
+}
+
+int mv_cesa_dma_add_data_transfer(struct mv_cesa_tdma_chain *chain,
+				  dma_addr_t dst, dma_addr_t src, u32 size,
+				  u32 flags, gfp_t gfp_flags)
+{
+	struct mv_cesa_tdma_desc *tdma;
+
+	tdma = mv_cesa_dma_add_desc(chain, gfp_flags);
+	if (IS_ERR(tdma))
+		return PTR_ERR(tdma);
+
+	tdma->byte_cnt = size | BIT(31);
+	tdma->src = src;
+	tdma->dst = dst;
+
+	flags &= (CESA_TDMA_DST_IN_SRAM | CESA_TDMA_SRC_IN_SRAM);
+	tdma->flags = flags | CESA_TDMA_DATA;
+
+	return 0;
+}
+
+int mv_cesa_dma_add_dummy_launch(struct mv_cesa_tdma_chain *chain,
+				 u32 flags)
+{
+	struct mv_cesa_tdma_desc *tdma;
+
+	tdma = mv_cesa_dma_add_desc(chain, flags);
+	if (IS_ERR(tdma))
+		return PTR_ERR(tdma);
+
+	return 0;
+}
+
+int mv_cesa_dma_add_dummy_end(struct mv_cesa_tdma_chain *chain, u32 flags)
+{
+	struct mv_cesa_tdma_desc *tdma;
+
+	tdma = mv_cesa_dma_add_desc(chain, flags);
+	if (IS_ERR(tdma))
+		return PTR_ERR(tdma);
+
+	tdma->byte_cnt = BIT(31);
+
+	return 0;
+}
+
+int mv_cesa_dma_add_op_transfers(struct mv_cesa_tdma_chain *chain,
+				 struct mv_cesa_dma_iter *dma_iter,
+				 struct mv_cesa_sg_dma_iter *sgiter,
+				 gfp_t gfp_flags)
+{
+	u32 flags = sgiter->dir == DMA_TO_DEVICE ?
+		    CESA_TDMA_DST_IN_SRAM : CESA_TDMA_SRC_IN_SRAM;
+	unsigned int len;
+
+	do {
+		dma_addr_t dst, src;
+		int ret;
+
+		len = mv_cesa_req_dma_iter_transfer_len(dma_iter, sgiter);
+		if (sgiter->dir == DMA_TO_DEVICE) {
+			dst = CESA_SA_DATA_SRAM_OFFSET + sgiter->op_offset;
+			src = sgiter->sg->dma_address + sgiter->offset;
+		} else {
+			dst = sgiter->sg->dma_address + sgiter->offset;
+			src = CESA_SA_DATA_SRAM_OFFSET + sgiter->op_offset;
+		}
+
+		ret = mv_cesa_dma_add_data_transfer(chain, dst, src, len,
+						    flags, gfp_flags);
+		if (ret)
+			return ret;
+
+	} while (mv_cesa_req_dma_iter_next_transfer(dma_iter, sgiter, len));
+
+	return 0;
+}
diff --git a/drivers/crypto/mv_cesa.c b/drivers/crypto/mv_cesa.c
deleted file mode 100644
index f91f15d..0000000
--- a/drivers/crypto/mv_cesa.c
+++ /dev/null
@@ -1,1193 +0,0 @@
-/*
- * Support for Marvell's crypto engine which can be found on some Orion5X
- * boards.
- *
- * Author: Sebastian Andrzej Siewior < sebastian at breakpoint dot cc >
- * License: GPLv2
- *
- */
-#include <crypto/aes.h>
-#include <crypto/algapi.h>
-#include <linux/crypto.h>
-#include <linux/interrupt.h>
-#include <linux/io.h>
-#include <linux/kthread.h>
-#include <linux/platform_device.h>
-#include <linux/scatterlist.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/clk.h>
-#include <crypto/internal/hash.h>
-#include <crypto/sha.h>
-#include <linux/of.h>
-#include <linux/of_platform.h>
-#include <linux/of_irq.h>
-
-#include "mv_cesa.h"
-
-#define MV_CESA	"MV-CESA:"
-#define MAX_HW_HASH_SIZE	0xFFFF
-#define MV_CESA_EXPIRE		500 /* msec */
-
-/*
- * STM:
- *   /---------------------------------------\
- *   |					     | request complete
- *  \./					     |
- * IDLE -> new request -> BUSY -> done -> DEQUEUE
- *                         /°\               |
- *			    |		     | more scatter entries
- *			    \________________/
- */
-enum engine_status {
-	ENGINE_IDLE,
-	ENGINE_BUSY,
-	ENGINE_W_DEQUEUE,
-};
-
-/**
- * struct req_progress - used for every crypt request
- * @src_sg_it:		sg iterator for src
- * @dst_sg_it:		sg iterator for dst
- * @sg_src_left:	bytes left in src to process (scatter list)
- * @src_start:		offset to add to src start position (scatter list)
- * @crypt_len:		length of current hw crypt/hash process
- * @hw_nbytes:		total bytes to process in hw for this request
- * @copy_back:		whether to copy data back (crypt) or not (hash)
- * @sg_dst_left:	bytes left dst to process in this scatter list
- * @dst_start:		offset to add to dst start position (scatter list)
- * @hw_processed_bytes:	number of bytes processed by hw (request).
- *
- * sg helper are used to iterate over the scatterlist. Since the size of the
- * SRAM may be less than the scatter size, this struct struct is used to keep
- * track of progress within current scatterlist.
- */
-struct req_progress {
-	struct sg_mapping_iter src_sg_it;
-	struct sg_mapping_iter dst_sg_it;
-	void (*complete) (void);
-	void (*process) (int is_first);
-
-	/* src mostly */
-	int sg_src_left;
-	int src_start;
-	int crypt_len;
-	int hw_nbytes;
-	/* dst mostly */
-	int copy_back;
-	int sg_dst_left;
-	int dst_start;
-	int hw_processed_bytes;
-};
-
-struct crypto_priv {
-	void __iomem *reg;
-	void __iomem *sram;
-	int irq;
-	struct clk *clk;
-	struct task_struct *queue_th;
-
-	/* the lock protects queue and eng_st */
-	spinlock_t lock;
-	struct crypto_queue queue;
-	enum engine_status eng_st;
-	struct timer_list completion_timer;
-	struct crypto_async_request *cur_req;
-	struct req_progress p;
-	int max_req_size;
-	int sram_size;
-	int has_sha1;
-	int has_hmac_sha1;
-};
-
-static struct crypto_priv *cpg;
-
-struct mv_ctx {
-	u8 aes_enc_key[AES_KEY_LEN];
-	u32 aes_dec_key[8];
-	int key_len;
-	u32 need_calc_aes_dkey;
-};
-
-enum crypto_op {
-	COP_AES_ECB,
-	COP_AES_CBC,
-};
-
-struct mv_req_ctx {
-	enum crypto_op op;
-	int decrypt;
-};
-
-enum hash_op {
-	COP_SHA1,
-	COP_HMAC_SHA1
-};
-
-struct mv_tfm_hash_ctx {
-	struct crypto_shash *fallback;
-	struct crypto_shash *base_hash;
-	u32 ivs[2 * SHA1_DIGEST_SIZE / 4];
-	int count_add;
-	enum hash_op op;
-};
-
-struct mv_req_hash_ctx {
-	u64 count;
-	u32 state[SHA1_DIGEST_SIZE / 4];
-	u8 buffer[SHA1_BLOCK_SIZE];
-	int first_hash;		/* marks that we don't have previous state */
-	int last_chunk;		/* marks that this is the 'final' request */
-	int extra_bytes;	/* unprocessed bytes in buffer */
-	enum hash_op op;
-	int count_add;
-};
-
-static void mv_completion_timer_callback(unsigned long unused)
-{
-	int active = readl(cpg->reg + SEC_ACCEL_CMD) & SEC_CMD_EN_SEC_ACCL0;
-
-	printk(KERN_ERR MV_CESA
-	       "completion timer expired (CESA %sactive), cleaning up.\n",
-	       active ? "" : "in");
-
-	del_timer(&cpg->completion_timer);
-	writel(SEC_CMD_DISABLE_SEC, cpg->reg + SEC_ACCEL_CMD);
-	while(readl(cpg->reg + SEC_ACCEL_CMD) & SEC_CMD_DISABLE_SEC)
-		printk(KERN_INFO MV_CESA "%s: waiting for engine finishing\n", __func__);
-	cpg->eng_st = ENGINE_W_DEQUEUE;
-	wake_up_process(cpg->queue_th);
-}
-
-static void mv_setup_timer(void)
-{
-	setup_timer(&cpg->completion_timer, &mv_completion_timer_callback, 0);
-	mod_timer(&cpg->completion_timer,
-			jiffies + msecs_to_jiffies(MV_CESA_EXPIRE));
-}
-
-static void compute_aes_dec_key(struct mv_ctx *ctx)
-{
-	struct crypto_aes_ctx gen_aes_key;
-	int key_pos;
-
-	if (!ctx->need_calc_aes_dkey)
-		return;
-
-	crypto_aes_expand_key(&gen_aes_key, ctx->aes_enc_key, ctx->key_len);
-
-	key_pos = ctx->key_len + 24;
-	memcpy(ctx->aes_dec_key, &gen_aes_key.key_enc[key_pos], 4 * 4);
-	switch (ctx->key_len) {
-	case AES_KEYSIZE_256:
-		key_pos -= 2;
-		/* fall */
-	case AES_KEYSIZE_192:
-		key_pos -= 2;
-		memcpy(&ctx->aes_dec_key[4], &gen_aes_key.key_enc[key_pos],
-				4 * 4);
-		break;
-	}
-	ctx->need_calc_aes_dkey = 0;
-}
-
-static int mv_setkey_aes(struct crypto_ablkcipher *cipher, const u8 *key,
-		unsigned int len)
-{
-	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher);
-	struct mv_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	switch (len) {
-	case AES_KEYSIZE_128:
-	case AES_KEYSIZE_192:
-	case AES_KEYSIZE_256:
-		break;
-	default:
-		crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
-		return -EINVAL;
-	}
-	ctx->key_len = len;
-	ctx->need_calc_aes_dkey = 1;
-
-	memcpy(ctx->aes_enc_key, key, AES_KEY_LEN);
-	return 0;
-}
-
-static void copy_src_to_buf(struct req_progress *p, char *dbuf, int len)
-{
-	int ret;
-	void *sbuf;
-	int copy_len;
-
-	while (len) {
-		if (!p->sg_src_left) {
-			ret = sg_miter_next(&p->src_sg_it);
-			BUG_ON(!ret);
-			p->sg_src_left = p->src_sg_it.length;
-			p->src_start = 0;
-		}
-
-		sbuf = p->src_sg_it.addr + p->src_start;
-
-		copy_len = min(p->sg_src_left, len);
-		memcpy(dbuf, sbuf, copy_len);
-
-		p->src_start += copy_len;
-		p->sg_src_left -= copy_len;
-
-		len -= copy_len;
-		dbuf += copy_len;
-	}
-}
-
-static void setup_data_in(void)
-{
-	struct req_progress *p = &cpg->p;
-	int data_in_sram =
-	    min(p->hw_nbytes - p->hw_processed_bytes, cpg->max_req_size);
-	copy_src_to_buf(p, cpg->sram + SRAM_DATA_IN_START + p->crypt_len,
-			data_in_sram - p->crypt_len);
-	p->crypt_len = data_in_sram;
-}
-
-static void mv_process_current_q(int first_block)
-{
-	struct ablkcipher_request *req = ablkcipher_request_cast(cpg->cur_req);
-	struct mv_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
-	struct sec_accel_config op;
-
-	switch (req_ctx->op) {
-	case COP_AES_ECB:
-		op.config = CFG_OP_CRYPT_ONLY | CFG_ENCM_AES | CFG_ENC_MODE_ECB;
-		break;
-	case COP_AES_CBC:
-	default:
-		op.config = CFG_OP_CRYPT_ONLY | CFG_ENCM_AES | CFG_ENC_MODE_CBC;
-		op.enc_iv = ENC_IV_POINT(SRAM_DATA_IV) |
-			ENC_IV_BUF_POINT(SRAM_DATA_IV_BUF);
-		if (first_block)
-			memcpy(cpg->sram + SRAM_DATA_IV, req->info, 16);
-		break;
-	}
-	if (req_ctx->decrypt) {
-		op.config |= CFG_DIR_DEC;
-		memcpy(cpg->sram + SRAM_DATA_KEY_P, ctx->aes_dec_key,
-				AES_KEY_LEN);
-	} else {
-		op.config |= CFG_DIR_ENC;
-		memcpy(cpg->sram + SRAM_DATA_KEY_P, ctx->aes_enc_key,
-				AES_KEY_LEN);
-	}
-
-	switch (ctx->key_len) {
-	case AES_KEYSIZE_128:
-		op.config |= CFG_AES_LEN_128;
-		break;
-	case AES_KEYSIZE_192:
-		op.config |= CFG_AES_LEN_192;
-		break;
-	case AES_KEYSIZE_256:
-		op.config |= CFG_AES_LEN_256;
-		break;
-	}
-	op.enc_p = ENC_P_SRC(SRAM_DATA_IN_START) |
-		ENC_P_DST(SRAM_DATA_OUT_START);
-	op.enc_key_p = SRAM_DATA_KEY_P;
-
-	setup_data_in();
-	op.enc_len = cpg->p.crypt_len;
-	memcpy(cpg->sram + SRAM_CONFIG, &op,
-			sizeof(struct sec_accel_config));
-
-	/* GO */
-	mv_setup_timer();
-	writel(SEC_CMD_EN_SEC_ACCL0, cpg->reg + SEC_ACCEL_CMD);
-}
-
-static void mv_crypto_algo_completion(void)
-{
-	struct ablkcipher_request *req = ablkcipher_request_cast(cpg->cur_req);
-	struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
-
-	sg_miter_stop(&cpg->p.src_sg_it);
-	sg_miter_stop(&cpg->p.dst_sg_it);
-
-	if (req_ctx->op != COP_AES_CBC)
-		return ;
-
-	memcpy(req->info, cpg->sram + SRAM_DATA_IV_BUF, 16);
-}
-
-static void mv_process_hash_current(int first_block)
-{
-	struct ahash_request *req = ahash_request_cast(cpg->cur_req);
-	const struct mv_tfm_hash_ctx *tfm_ctx = crypto_tfm_ctx(req->base.tfm);
-	struct mv_req_hash_ctx *req_ctx = ahash_request_ctx(req);
-	struct req_progress *p = &cpg->p;
-	struct sec_accel_config op = { 0 };
-	int is_last;
-
-	switch (req_ctx->op) {
-	case COP_SHA1:
-	default:
-		op.config = CFG_OP_MAC_ONLY | CFG_MACM_SHA1;
-		break;
-	case COP_HMAC_SHA1:
-		op.config = CFG_OP_MAC_ONLY | CFG_MACM_HMAC_SHA1;
-		memcpy(cpg->sram + SRAM_HMAC_IV_IN,
-				tfm_ctx->ivs, sizeof(tfm_ctx->ivs));
-		break;
-	}
-
-	op.mac_src_p =
-		MAC_SRC_DATA_P(SRAM_DATA_IN_START) | MAC_SRC_TOTAL_LEN((u32)
-		req_ctx->
-		count);
-
-	setup_data_in();
-
-	op.mac_digest =
-		MAC_DIGEST_P(SRAM_DIGEST_BUF) | MAC_FRAG_LEN(p->crypt_len);
-	op.mac_iv =
-		MAC_INNER_IV_P(SRAM_HMAC_IV_IN) |
-		MAC_OUTER_IV_P(SRAM_HMAC_IV_OUT);
-
-	is_last = req_ctx->last_chunk
-		&& (p->hw_processed_bytes + p->crypt_len >= p->hw_nbytes)
-		&& (req_ctx->count <= MAX_HW_HASH_SIZE);
-	if (req_ctx->first_hash) {
-		if (is_last)
-			op.config |= CFG_NOT_FRAG;
-		else
-			op.config |= CFG_FIRST_FRAG;
-
-		req_ctx->first_hash = 0;
-	} else {
-		if (is_last)
-			op.config |= CFG_LAST_FRAG;
-		else
-			op.config |= CFG_MID_FRAG;
-
-		if (first_block) {
-			writel(req_ctx->state[0], cpg->reg + DIGEST_INITIAL_VAL_A);
-			writel(req_ctx->state[1], cpg->reg + DIGEST_INITIAL_VAL_B);
-			writel(req_ctx->state[2], cpg->reg + DIGEST_INITIAL_VAL_C);
-			writel(req_ctx->state[3], cpg->reg + DIGEST_INITIAL_VAL_D);
-			writel(req_ctx->state[4], cpg->reg + DIGEST_INITIAL_VAL_E);
-		}
-	}
-
-	memcpy(cpg->sram + SRAM_CONFIG, &op, sizeof(struct sec_accel_config));
-
-	/* GO */
-	mv_setup_timer();
-	writel(SEC_CMD_EN_SEC_ACCL0, cpg->reg + SEC_ACCEL_CMD);
-}
-
-static inline int mv_hash_import_sha1_ctx(const struct mv_req_hash_ctx *ctx,
-					  struct shash_desc *desc)
-{
-	int i;
-	struct sha1_state shash_state;
-
-	shash_state.count = ctx->count + ctx->count_add;
-	for (i = 0; i < 5; i++)
-		shash_state.state[i] = ctx->state[i];
-	memcpy(shash_state.buffer, ctx->buffer, sizeof(shash_state.buffer));
-	return crypto_shash_import(desc, &shash_state);
-}
-
-static int mv_hash_final_fallback(struct ahash_request *req)
-{
-	const struct mv_tfm_hash_ctx *tfm_ctx = crypto_tfm_ctx(req->base.tfm);
-	struct mv_req_hash_ctx *req_ctx = ahash_request_ctx(req);
-	SHASH_DESC_ON_STACK(shash, tfm_ctx->fallback);
-	int rc;
-
-	shash->tfm = tfm_ctx->fallback;
-	shash->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
-	if (unlikely(req_ctx->first_hash)) {
-		crypto_shash_init(shash);
-		crypto_shash_update(shash, req_ctx->buffer,
-				    req_ctx->extra_bytes);
-	} else {
-		/* only SHA1 for now....
-		 */
-		rc = mv_hash_import_sha1_ctx(req_ctx, shash);
-		if (rc)
-			goto out;
-	}
-	rc = crypto_shash_final(shash, req->result);
-out:
-	return rc;
-}
-
-static void mv_save_digest_state(struct mv_req_hash_ctx *ctx)
-{
-	ctx->state[0] = readl(cpg->reg + DIGEST_INITIAL_VAL_A);
-	ctx->state[1] = readl(cpg->reg + DIGEST_INITIAL_VAL_B);
-	ctx->state[2] = readl(cpg->reg + DIGEST_INITIAL_VAL_C);
-	ctx->state[3] = readl(cpg->reg + DIGEST_INITIAL_VAL_D);
-	ctx->state[4] = readl(cpg->reg + DIGEST_INITIAL_VAL_E);
-}
-
-static void mv_hash_algo_completion(void)
-{
-	struct ahash_request *req = ahash_request_cast(cpg->cur_req);
-	struct mv_req_hash_ctx *ctx = ahash_request_ctx(req);
-
-	if (ctx->extra_bytes)
-		copy_src_to_buf(&cpg->p, ctx->buffer, ctx->extra_bytes);
-	sg_miter_stop(&cpg->p.src_sg_it);
-
-	if (likely(ctx->last_chunk)) {
-		if (likely(ctx->count <= MAX_HW_HASH_SIZE)) {
-			memcpy(req->result, cpg->sram + SRAM_DIGEST_BUF,
-			       crypto_ahash_digestsize(crypto_ahash_reqtfm
-						       (req)));
-		} else {
-			mv_save_digest_state(ctx);
-			mv_hash_final_fallback(req);
-		}
-	} else {
-		mv_save_digest_state(ctx);
-	}
-}
-
-static void dequeue_complete_req(void)
-{
-	struct crypto_async_request *req = cpg->cur_req;
-	void *buf;
-	int ret;
-	cpg->p.hw_processed_bytes += cpg->p.crypt_len;
-	if (cpg->p.copy_back) {
-		int need_copy_len = cpg->p.crypt_len;
-		int sram_offset = 0;
-		do {
-			int dst_copy;
-
-			if (!cpg->p.sg_dst_left) {
-				ret = sg_miter_next(&cpg->p.dst_sg_it);
-				BUG_ON(!ret);
-				cpg->p.sg_dst_left = cpg->p.dst_sg_it.length;
-				cpg->p.dst_start = 0;
-			}
-
-			buf = cpg->p.dst_sg_it.addr;
-			buf += cpg->p.dst_start;
-
-			dst_copy = min(need_copy_len, cpg->p.sg_dst_left);
-
-			memcpy(buf,
-			       cpg->sram + SRAM_DATA_OUT_START + sram_offset,
-			       dst_copy);
-			sram_offset += dst_copy;
-			cpg->p.sg_dst_left -= dst_copy;
-			need_copy_len -= dst_copy;
-			cpg->p.dst_start += dst_copy;
-		} while (need_copy_len > 0);
-	}
-
-	cpg->p.crypt_len = 0;
-
-	BUG_ON(cpg->eng_st != ENGINE_W_DEQUEUE);
-	if (cpg->p.hw_processed_bytes < cpg->p.hw_nbytes) {
-		/* process next scatter list entry */
-		cpg->eng_st = ENGINE_BUSY;
-		cpg->p.process(0);
-	} else {
-		cpg->p.complete();
-		cpg->eng_st = ENGINE_IDLE;
-		local_bh_disable();
-		req->complete(req, 0);
-		local_bh_enable();
-	}
-}
-
-static int count_sgs(struct scatterlist *sl, unsigned int total_bytes)
-{
-	int i = 0;
-	size_t cur_len;
-
-	while (sl) {
-		cur_len = sl[i].length;
-		++i;
-		if (total_bytes > cur_len)
-			total_bytes -= cur_len;
-		else
-			break;
-	}
-
-	return i;
-}
-
-static void mv_start_new_crypt_req(struct ablkcipher_request *req)
-{
-	struct req_progress *p = &cpg->p;
-	int num_sgs;
-
-	cpg->cur_req = &req->base;
-	memset(p, 0, sizeof(struct req_progress));
-	p->hw_nbytes = req->nbytes;
-	p->complete = mv_crypto_algo_completion;
-	p->process = mv_process_current_q;
-	p->copy_back = 1;
-
-	num_sgs = count_sgs(req->src, req->nbytes);
-	sg_miter_start(&p->src_sg_it, req->src, num_sgs, SG_MITER_FROM_SG);
-
-	num_sgs = count_sgs(req->dst, req->nbytes);
-	sg_miter_start(&p->dst_sg_it, req->dst, num_sgs, SG_MITER_TO_SG);
-
-	mv_process_current_q(1);
-}
-
-static void mv_start_new_hash_req(struct ahash_request *req)
-{
-	struct req_progress *p = &cpg->p;
-	struct mv_req_hash_ctx *ctx = ahash_request_ctx(req);
-	int num_sgs, hw_bytes, old_extra_bytes, rc;
-	cpg->cur_req = &req->base;
-	memset(p, 0, sizeof(struct req_progress));
-	hw_bytes = req->nbytes + ctx->extra_bytes;
-	old_extra_bytes = ctx->extra_bytes;
-
-	ctx->extra_bytes = hw_bytes % SHA1_BLOCK_SIZE;
-	if (ctx->extra_bytes != 0
-	    && (!ctx->last_chunk || ctx->count > MAX_HW_HASH_SIZE))
-		hw_bytes -= ctx->extra_bytes;
-	else
-		ctx->extra_bytes = 0;
-
-	num_sgs = count_sgs(req->src, req->nbytes);
-	sg_miter_start(&p->src_sg_it, req->src, num_sgs, SG_MITER_FROM_SG);
-
-	if (hw_bytes) {
-		p->hw_nbytes = hw_bytes;
-		p->complete = mv_hash_algo_completion;
-		p->process = mv_process_hash_current;
-
-		if (unlikely(old_extra_bytes)) {
-			memcpy(cpg->sram + SRAM_DATA_IN_START, ctx->buffer,
-			       old_extra_bytes);
-			p->crypt_len = old_extra_bytes;
-		}
-
-		mv_process_hash_current(1);
-	} else {
-		copy_src_to_buf(p, ctx->buffer + old_extra_bytes,
-				ctx->extra_bytes - old_extra_bytes);
-		sg_miter_stop(&p->src_sg_it);
-		if (ctx->last_chunk)
-			rc = mv_hash_final_fallback(req);
-		else
-			rc = 0;
-		cpg->eng_st = ENGINE_IDLE;
-		local_bh_disable();
-		req->base.complete(&req->base, rc);
-		local_bh_enable();
-	}
-}
-
-static int queue_manag(void *data)
-{
-	cpg->eng_st = ENGINE_IDLE;
-	do {
-		struct crypto_async_request *async_req = NULL;
-		struct crypto_async_request *backlog;
-
-		__set_current_state(TASK_INTERRUPTIBLE);
-
-		if (cpg->eng_st == ENGINE_W_DEQUEUE)
-			dequeue_complete_req();
-
-		spin_lock_irq(&cpg->lock);
-		if (cpg->eng_st == ENGINE_IDLE) {
-			backlog = crypto_get_backlog(&cpg->queue);
-			async_req = crypto_dequeue_request(&cpg->queue);
-			if (async_req) {
-				BUG_ON(cpg->eng_st != ENGINE_IDLE);
-				cpg->eng_st = ENGINE_BUSY;
-			}
-		}
-		spin_unlock_irq(&cpg->lock);
-
-		if (backlog) {
-			backlog->complete(backlog, -EINPROGRESS);
-			backlog = NULL;
-		}
-
-		if (async_req) {
-			if (crypto_tfm_alg_type(async_req->tfm) !=
-			    CRYPTO_ALG_TYPE_AHASH) {
-				struct ablkcipher_request *req =
-				    ablkcipher_request_cast(async_req);
-				mv_start_new_crypt_req(req);
-			} else {
-				struct ahash_request *req =
-				    ahash_request_cast(async_req);
-				mv_start_new_hash_req(req);
-			}
-			async_req = NULL;
-		}
-
-		schedule();
-
-	} while (!kthread_should_stop());
-	return 0;
-}
-
-static int mv_handle_req(struct crypto_async_request *req)
-{
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&cpg->lock, flags);
-	ret = crypto_enqueue_request(&cpg->queue, req);
-	spin_unlock_irqrestore(&cpg->lock, flags);
-	wake_up_process(cpg->queue_th);
-	return ret;
-}
-
-static int mv_enc_aes_ecb(struct ablkcipher_request *req)
-{
-	struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
-
-	req_ctx->op = COP_AES_ECB;
-	req_ctx->decrypt = 0;
-
-	return mv_handle_req(&req->base);
-}
-
-static int mv_dec_aes_ecb(struct ablkcipher_request *req)
-{
-	struct mv_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
-
-	req_ctx->op = COP_AES_ECB;
-	req_ctx->decrypt = 1;
-
-	compute_aes_dec_key(ctx);
-	return mv_handle_req(&req->base);
-}
-
-static int mv_enc_aes_cbc(struct ablkcipher_request *req)
-{
-	struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
-
-	req_ctx->op = COP_AES_CBC;
-	req_ctx->decrypt = 0;
-
-	return mv_handle_req(&req->base);
-}
-
-static int mv_dec_aes_cbc(struct ablkcipher_request *req)
-{
-	struct mv_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
-
-	req_ctx->op = COP_AES_CBC;
-	req_ctx->decrypt = 1;
-
-	compute_aes_dec_key(ctx);
-	return mv_handle_req(&req->base);
-}
-
-static int mv_cra_init(struct crypto_tfm *tfm)
-{
-	tfm->crt_ablkcipher.reqsize = sizeof(struct mv_req_ctx);
-	return 0;
-}
-
-static void mv_init_hash_req_ctx(struct mv_req_hash_ctx *ctx, int op,
-				 int is_last, unsigned int req_len,
-				 int count_add)
-{
-	memset(ctx, 0, sizeof(*ctx));
-	ctx->op = op;
-	ctx->count = req_len;
-	ctx->first_hash = 1;
-	ctx->last_chunk = is_last;
-	ctx->count_add = count_add;
-}
-
-static void mv_update_hash_req_ctx(struct mv_req_hash_ctx *ctx, int is_last,
-				   unsigned req_len)
-{
-	ctx->last_chunk = is_last;
-	ctx->count += req_len;
-}
-
-static int mv_hash_init(struct ahash_request *req)
-{
-	const struct mv_tfm_hash_ctx *tfm_ctx = crypto_tfm_ctx(req->base.tfm);
-	mv_init_hash_req_ctx(ahash_request_ctx(req), tfm_ctx->op, 0, 0,
-			     tfm_ctx->count_add);
-	return 0;
-}
-
-static int mv_hash_update(struct ahash_request *req)
-{
-	if (!req->nbytes)
-		return 0;
-
-	mv_update_hash_req_ctx(ahash_request_ctx(req), 0, req->nbytes);
-	return mv_handle_req(&req->base);
-}
-
-static int mv_hash_final(struct ahash_request *req)
-{
-	struct mv_req_hash_ctx *ctx = ahash_request_ctx(req);
-
-	ahash_request_set_crypt(req, NULL, req->result, 0);
-	mv_update_hash_req_ctx(ctx, 1, 0);
-	return mv_handle_req(&req->base);
-}
-
-static int mv_hash_finup(struct ahash_request *req)
-{
-	mv_update_hash_req_ctx(ahash_request_ctx(req), 1, req->nbytes);
-	return mv_handle_req(&req->base);
-}
-
-static int mv_hash_digest(struct ahash_request *req)
-{
-	const struct mv_tfm_hash_ctx *tfm_ctx = crypto_tfm_ctx(req->base.tfm);
-	mv_init_hash_req_ctx(ahash_request_ctx(req), tfm_ctx->op, 1,
-			     req->nbytes, tfm_ctx->count_add);
-	return mv_handle_req(&req->base);
-}
-
-static void mv_hash_init_ivs(struct mv_tfm_hash_ctx *ctx, const void *istate,
-			     const void *ostate)
-{
-	const struct sha1_state *isha1_state = istate, *osha1_state = ostate;
-	int i;
-	for (i = 0; i < 5; i++) {
-		ctx->ivs[i] = cpu_to_be32(isha1_state->state[i]);
-		ctx->ivs[i + 5] = cpu_to_be32(osha1_state->state[i]);
-	}
-}
-
-static int mv_hash_setkey(struct crypto_ahash *tfm, const u8 * key,
-			  unsigned int keylen)
-{
-	int rc;
-	struct mv_tfm_hash_ctx *ctx = crypto_tfm_ctx(&tfm->base);
-	int bs, ds, ss;
-
-	if (!ctx->base_hash)
-		return 0;
-
-	rc = crypto_shash_setkey(ctx->fallback, key, keylen);
-	if (rc)
-		return rc;
-
-	/* Can't see a way to extract the ipad/opad from the fallback tfm
-	   so I'm basically copying code from the hmac module */
-	bs = crypto_shash_blocksize(ctx->base_hash);
-	ds = crypto_shash_digestsize(ctx->base_hash);
-	ss = crypto_shash_statesize(ctx->base_hash);
-
-	{
-		SHASH_DESC_ON_STACK(shash, ctx->base_hash);
-
-		unsigned int i;
-		char ipad[ss];
-		char opad[ss];
-
-		shash->tfm = ctx->base_hash;
-		shash->flags = crypto_shash_get_flags(ctx->base_hash) &
-		    CRYPTO_TFM_REQ_MAY_SLEEP;
-
-		if (keylen > bs) {
-			int err;
-
-			err =
-			    crypto_shash_digest(shash, key, keylen, ipad);
-			if (err)
-				return err;
-
-			keylen = ds;
-		} else
-			memcpy(ipad, key, keylen);
-
-		memset(ipad + keylen, 0, bs - keylen);
-		memcpy(opad, ipad, bs);
-
-		for (i = 0; i < bs; i++) {
-			ipad[i] ^= 0x36;
-			opad[i] ^= 0x5c;
-		}
-
-		rc = crypto_shash_init(shash) ? :
-		    crypto_shash_update(shash, ipad, bs) ? :
-		    crypto_shash_export(shash, ipad) ? :
-		    crypto_shash_init(shash) ? :
-		    crypto_shash_update(shash, opad, bs) ? :
-		    crypto_shash_export(shash, opad);
-
-		if (rc == 0)
-			mv_hash_init_ivs(ctx, ipad, opad);
-
-		return rc;
-	}
-}
-
-static int mv_cra_hash_init(struct crypto_tfm *tfm, const char *base_hash_name,
-			    enum hash_op op, int count_add)
-{
-	const char *fallback_driver_name = crypto_tfm_alg_name(tfm);
-	struct mv_tfm_hash_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct crypto_shash *fallback_tfm = NULL;
-	struct crypto_shash *base_hash = NULL;
-	int err = -ENOMEM;
-
-	ctx->op = op;
-	ctx->count_add = count_add;
-
-	/* Allocate a fallback and abort if it failed. */
-	fallback_tfm = crypto_alloc_shash(fallback_driver_name, 0,
-					  CRYPTO_ALG_NEED_FALLBACK);
-	if (IS_ERR(fallback_tfm)) {
-		printk(KERN_WARNING MV_CESA
-		       "Fallback driver '%s' could not be loaded!\n",
-		       fallback_driver_name);
-		err = PTR_ERR(fallback_tfm);
-		goto out;
-	}
-	ctx->fallback = fallback_tfm;
-
-	if (base_hash_name) {
-		/* Allocate a hash to compute the ipad/opad of hmac. */
-		base_hash = crypto_alloc_shash(base_hash_name, 0,
-					       CRYPTO_ALG_NEED_FALLBACK);
-		if (IS_ERR(base_hash)) {
-			printk(KERN_WARNING MV_CESA
-			       "Base driver '%s' could not be loaded!\n",
-			       base_hash_name);
-			err = PTR_ERR(base_hash);
-			goto err_bad_base;
-		}
-	}
-	ctx->base_hash = base_hash;
-
-	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
-				 sizeof(struct mv_req_hash_ctx) +
-				 crypto_shash_descsize(ctx->fallback));
-	return 0;
-err_bad_base:
-	crypto_free_shash(fallback_tfm);
-out:
-	return err;
-}
-
-static void mv_cra_hash_exit(struct crypto_tfm *tfm)
-{
-	struct mv_tfm_hash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	crypto_free_shash(ctx->fallback);
-	if (ctx->base_hash)
-		crypto_free_shash(ctx->base_hash);
-}
-
-static int mv_cra_hash_sha1_init(struct crypto_tfm *tfm)
-{
-	return mv_cra_hash_init(tfm, NULL, COP_SHA1, 0);
-}
-
-static int mv_cra_hash_hmac_sha1_init(struct crypto_tfm *tfm)
-{
-	return mv_cra_hash_init(tfm, "sha1", COP_HMAC_SHA1, SHA1_BLOCK_SIZE);
-}
-
-static irqreturn_t crypto_int(int irq, void *priv)
-{
-	u32 val;
-
-	val = readl(cpg->reg + SEC_ACCEL_INT_STATUS);
-	if (!(val & SEC_INT_ACCEL0_DONE))
-		return IRQ_NONE;
-
-	if (!del_timer(&cpg->completion_timer)) {
-		printk(KERN_WARNING MV_CESA
-		       "got an interrupt but no pending timer?\n");
-	}
-	val &= ~SEC_INT_ACCEL0_DONE;
-	writel(val, cpg->reg + FPGA_INT_STATUS);
-	writel(val, cpg->reg + SEC_ACCEL_INT_STATUS);
-	BUG_ON(cpg->eng_st != ENGINE_BUSY);
-	cpg->eng_st = ENGINE_W_DEQUEUE;
-	wake_up_process(cpg->queue_th);
-	return IRQ_HANDLED;
-}
-
-static struct crypto_alg mv_aes_alg_ecb = {
-	.cra_name		= "ecb(aes)",
-	.cra_driver_name	= "mv-ecb-aes",
-	.cra_priority	= 300,
-	.cra_flags	= CRYPTO_ALG_TYPE_ABLKCIPHER |
-			  CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
-	.cra_blocksize	= 16,
-	.cra_ctxsize	= sizeof(struct mv_ctx),
-	.cra_alignmask	= 0,
-	.cra_type	= &crypto_ablkcipher_type,
-	.cra_module	= THIS_MODULE,
-	.cra_init	= mv_cra_init,
-	.cra_u		= {
-		.ablkcipher = {
-			.min_keysize	=	AES_MIN_KEY_SIZE,
-			.max_keysize	=	AES_MAX_KEY_SIZE,
-			.setkey		=	mv_setkey_aes,
-			.encrypt	=	mv_enc_aes_ecb,
-			.decrypt	=	mv_dec_aes_ecb,
-		},
-	},
-};
-
-static struct crypto_alg mv_aes_alg_cbc = {
-	.cra_name		= "cbc(aes)",
-	.cra_driver_name	= "mv-cbc-aes",
-	.cra_priority	= 300,
-	.cra_flags	= CRYPTO_ALG_TYPE_ABLKCIPHER |
-			  CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC,
-	.cra_blocksize	= AES_BLOCK_SIZE,
-	.cra_ctxsize	= sizeof(struct mv_ctx),
-	.cra_alignmask	= 0,
-	.cra_type	= &crypto_ablkcipher_type,
-	.cra_module	= THIS_MODULE,
-	.cra_init	= mv_cra_init,
-	.cra_u		= {
-		.ablkcipher = {
-			.ivsize		=	AES_BLOCK_SIZE,
-			.min_keysize	=	AES_MIN_KEY_SIZE,
-			.max_keysize	=	AES_MAX_KEY_SIZE,
-			.setkey		=	mv_setkey_aes,
-			.encrypt	=	mv_enc_aes_cbc,
-			.decrypt	=	mv_dec_aes_cbc,
-		},
-	},
-};
-
-static struct ahash_alg mv_sha1_alg = {
-	.init = mv_hash_init,
-	.update = mv_hash_update,
-	.final = mv_hash_final,
-	.finup = mv_hash_finup,
-	.digest = mv_hash_digest,
-	.halg = {
-		 .digestsize = SHA1_DIGEST_SIZE,
-		 .base = {
-			  .cra_name = "sha1",
-			  .cra_driver_name = "mv-sha1",
-			  .cra_priority = 300,
-			  .cra_flags =
-			  CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
-			  CRYPTO_ALG_NEED_FALLBACK,
-			  .cra_blocksize = SHA1_BLOCK_SIZE,
-			  .cra_ctxsize = sizeof(struct mv_tfm_hash_ctx),
-			  .cra_init = mv_cra_hash_sha1_init,
-			  .cra_exit = mv_cra_hash_exit,
-			  .cra_module = THIS_MODULE,
-			  }
-		 }
-};
-
-static struct ahash_alg mv_hmac_sha1_alg = {
-	.init = mv_hash_init,
-	.update = mv_hash_update,
-	.final = mv_hash_final,
-	.finup = mv_hash_finup,
-	.digest = mv_hash_digest,
-	.setkey = mv_hash_setkey,
-	.halg = {
-		 .digestsize = SHA1_DIGEST_SIZE,
-		 .base = {
-			  .cra_name = "hmac(sha1)",
-			  .cra_driver_name = "mv-hmac-sha1",
-			  .cra_priority = 300,
-			  .cra_flags =
-			  CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
-			  CRYPTO_ALG_NEED_FALLBACK,
-			  .cra_blocksize = SHA1_BLOCK_SIZE,
-			  .cra_ctxsize = sizeof(struct mv_tfm_hash_ctx),
-			  .cra_init = mv_cra_hash_hmac_sha1_init,
-			  .cra_exit = mv_cra_hash_exit,
-			  .cra_module = THIS_MODULE,
-			  }
-		 }
-};
-
-static int mv_probe(struct platform_device *pdev)
-{
-	struct crypto_priv *cp;
-	struct resource *res;
-	int irq;
-	int ret;
-
-	if (cpg) {
-		printk(KERN_ERR MV_CESA "Second crypto dev?\n");
-		return -EEXIST;
-	}
-
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
-	if (!res)
-		return -ENXIO;
-
-	cp = kzalloc(sizeof(*cp), GFP_KERNEL);
-	if (!cp)
-		return -ENOMEM;
-
-	spin_lock_init(&cp->lock);
-	crypto_init_queue(&cp->queue, 50);
-	cp->reg = ioremap(res->start, resource_size(res));
-	if (!cp->reg) {
-		ret = -ENOMEM;
-		goto err;
-	}
-
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "sram");
-	if (!res) {
-		ret = -ENXIO;
-		goto err_unmap_reg;
-	}
-	cp->sram_size = resource_size(res);
-	cp->max_req_size = cp->sram_size - SRAM_CFG_SPACE;
-	cp->sram = ioremap(res->start, cp->sram_size);
-	if (!cp->sram) {
-		ret = -ENOMEM;
-		goto err_unmap_reg;
-	}
-
-	if (pdev->dev.of_node)
-		irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
-	else
-		irq = platform_get_irq(pdev, 0);
-	if (irq < 0 || irq == NO_IRQ) {
-		ret = irq;
-		goto err_unmap_sram;
-	}
-	cp->irq = irq;
-
-	platform_set_drvdata(pdev, cp);
-	cpg = cp;
-
-	cp->queue_th = kthread_run(queue_manag, cp, "mv_crypto");
-	if (IS_ERR(cp->queue_th)) {
-		ret = PTR_ERR(cp->queue_th);
-		goto err_unmap_sram;
-	}
-
-	ret = request_irq(irq, crypto_int, 0, dev_name(&pdev->dev),
-			cp);
-	if (ret)
-		goto err_thread;
-
-	/* Not all platforms can gate the clock, so it is not
-	   an error if the clock does not exists. */
-	cp->clk = clk_get(&pdev->dev, NULL);
-	if (!IS_ERR(cp->clk))
-		clk_prepare_enable(cp->clk);
-
-	writel(0, cpg->reg + SEC_ACCEL_INT_STATUS);
-	writel(SEC_INT_ACCEL0_DONE, cpg->reg + SEC_ACCEL_INT_MASK);
-	writel(SEC_CFG_STOP_DIG_ERR, cpg->reg + SEC_ACCEL_CFG);
-	writel(SRAM_CONFIG, cpg->reg + SEC_ACCEL_DESC_P0);
-
-	ret = crypto_register_alg(&mv_aes_alg_ecb);
-	if (ret) {
-		printk(KERN_WARNING MV_CESA
-		       "Could not register aes-ecb driver\n");
-		goto err_irq;
-	}
-
-	ret = crypto_register_alg(&mv_aes_alg_cbc);
-	if (ret) {
-		printk(KERN_WARNING MV_CESA
-		       "Could not register aes-cbc driver\n");
-		goto err_unreg_ecb;
-	}
-
-	ret = crypto_register_ahash(&mv_sha1_alg);
-	if (ret == 0)
-		cpg->has_sha1 = 1;
-	else
-		printk(KERN_WARNING MV_CESA "Could not register sha1 driver\n");
-
-	ret = crypto_register_ahash(&mv_hmac_sha1_alg);
-	if (ret == 0) {
-		cpg->has_hmac_sha1 = 1;
-	} else {
-		printk(KERN_WARNING MV_CESA
-		       "Could not register hmac-sha1 driver\n");
-	}
-
-	return 0;
-err_unreg_ecb:
-	crypto_unregister_alg(&mv_aes_alg_ecb);
-err_irq:
-	free_irq(irq, cp);
-	if (!IS_ERR(cp->clk)) {
-		clk_disable_unprepare(cp->clk);
-		clk_put(cp->clk);
-	}
-err_thread:
-	kthread_stop(cp->queue_th);
-err_unmap_sram:
-	iounmap(cp->sram);
-err_unmap_reg:
-	iounmap(cp->reg);
-err:
-	kfree(cp);
-	cpg = NULL;
-	return ret;
-}
-
-static int mv_remove(struct platform_device *pdev)
-{
-	struct crypto_priv *cp = platform_get_drvdata(pdev);
-
-	crypto_unregister_alg(&mv_aes_alg_ecb);
-	crypto_unregister_alg(&mv_aes_alg_cbc);
-	if (cp->has_sha1)
-		crypto_unregister_ahash(&mv_sha1_alg);
-	if (cp->has_hmac_sha1)
-		crypto_unregister_ahash(&mv_hmac_sha1_alg);
-	kthread_stop(cp->queue_th);
-	free_irq(cp->irq, cp);
-	memset(cp->sram, 0, cp->sram_size);
-	iounmap(cp->sram);
-	iounmap(cp->reg);
-
-	if (!IS_ERR(cp->clk)) {
-		clk_disable_unprepare(cp->clk);
-		clk_put(cp->clk);
-	}
-
-	kfree(cp);
-	cpg = NULL;
-	return 0;
-}
-
-static const struct of_device_id mv_cesa_of_match_table[] = {
-	{ .compatible = "marvell,orion-crypto", },
-	{}
-};
-MODULE_DEVICE_TABLE(of, mv_cesa_of_match_table);
-
-static struct platform_driver marvell_crypto = {
-	.probe		= mv_probe,
-	.remove		= mv_remove,
-	.driver		= {
-		.name	= "mv_crypto",
-		.of_match_table = mv_cesa_of_match_table,
-	},
-};
-MODULE_ALIAS("platform:mv_crypto");
-
-module_platform_driver(marvell_crypto);
-
-MODULE_AUTHOR("Sebastian Andrzej Siewior <sebastian@breakpoint.cc>");
-MODULE_DESCRIPTION("Support for Marvell's cryptographic engine");
-MODULE_LICENSE("GPL");
diff --git a/drivers/crypto/mv_cesa.h b/drivers/crypto/mv_cesa.h
deleted file mode 100644
index 9249d3e..0000000
--- a/drivers/crypto/mv_cesa.h
+++ /dev/null
@@ -1,150 +0,0 @@
-#ifndef __MV_CRYPTO_H__
-#define __MV_CRYPTO_H__
-
-#define DIGEST_INITIAL_VAL_A	0xdd00
-#define DIGEST_INITIAL_VAL_B	0xdd04
-#define DIGEST_INITIAL_VAL_C	0xdd08
-#define DIGEST_INITIAL_VAL_D	0xdd0c
-#define DIGEST_INITIAL_VAL_E	0xdd10
-#define DES_CMD_REG		0xdd58
-
-#define SEC_ACCEL_CMD		0xde00
-#define SEC_CMD_EN_SEC_ACCL0	(1 << 0)
-#define SEC_CMD_EN_SEC_ACCL1	(1 << 1)
-#define SEC_CMD_DISABLE_SEC	(1 << 2)
-
-#define SEC_ACCEL_DESC_P0	0xde04
-#define SEC_DESC_P0_PTR(x)	(x)
-
-#define SEC_ACCEL_DESC_P1	0xde14
-#define SEC_DESC_P1_PTR(x)	(x)
-
-#define SEC_ACCEL_CFG		0xde08
-#define SEC_CFG_STOP_DIG_ERR	(1 << 0)
-#define SEC_CFG_CH0_W_IDMA	(1 << 7)
-#define SEC_CFG_CH1_W_IDMA	(1 << 8)
-#define SEC_CFG_ACT_CH0_IDMA	(1 << 9)
-#define SEC_CFG_ACT_CH1_IDMA	(1 << 10)
-
-#define SEC_ACCEL_STATUS	0xde0c
-#define SEC_ST_ACT_0		(1 << 0)
-#define SEC_ST_ACT_1		(1 << 1)
-
-/*
- * FPGA_INT_STATUS looks like a FPGA leftover and is documented only in Errata
- * 4.12. It looks like that it was part of an IRQ-controller in FPGA and
- * someone forgot to remove  it while switching to the core and moving to
- * SEC_ACCEL_INT_STATUS.
- */
-#define FPGA_INT_STATUS		0xdd68
-#define SEC_ACCEL_INT_STATUS	0xde20
-#define SEC_INT_AUTH_DONE	(1 << 0)
-#define SEC_INT_DES_E_DONE	(1 << 1)
-#define SEC_INT_AES_E_DONE	(1 << 2)
-#define SEC_INT_AES_D_DONE	(1 << 3)
-#define SEC_INT_ENC_DONE	(1 << 4)
-#define SEC_INT_ACCEL0_DONE	(1 << 5)
-#define SEC_INT_ACCEL1_DONE	(1 << 6)
-#define SEC_INT_ACC0_IDMA_DONE	(1 << 7)
-#define SEC_INT_ACC1_IDMA_DONE	(1 << 8)
-
-#define SEC_ACCEL_INT_MASK	0xde24
-
-#define AES_KEY_LEN	(8 * 4)
-
-struct sec_accel_config {
-
-	u32 config;
-#define CFG_OP_MAC_ONLY		0
-#define CFG_OP_CRYPT_ONLY	1
-#define CFG_OP_MAC_CRYPT	2
-#define CFG_OP_CRYPT_MAC	3
-#define CFG_MACM_MD5		(4 << 4)
-#define CFG_MACM_SHA1		(5 << 4)
-#define CFG_MACM_HMAC_MD5	(6 << 4)
-#define CFG_MACM_HMAC_SHA1	(7 << 4)
-#define CFG_ENCM_DES		(1 << 8)
-#define CFG_ENCM_3DES		(2 << 8)
-#define CFG_ENCM_AES		(3 << 8)
-#define CFG_DIR_ENC		(0 << 12)
-#define CFG_DIR_DEC		(1 << 12)
-#define CFG_ENC_MODE_ECB	(0 << 16)
-#define CFG_ENC_MODE_CBC	(1 << 16)
-#define CFG_3DES_EEE		(0 << 20)
-#define CFG_3DES_EDE		(1 << 20)
-#define CFG_AES_LEN_128		(0 << 24)
-#define CFG_AES_LEN_192		(1 << 24)
-#define CFG_AES_LEN_256		(2 << 24)
-#define CFG_NOT_FRAG		(0 << 30)
-#define CFG_FIRST_FRAG		(1 << 30)
-#define CFG_LAST_FRAG		(2 << 30)
-#define CFG_MID_FRAG		(3 << 30)
-
-	u32 enc_p;
-#define ENC_P_SRC(x)		(x)
-#define ENC_P_DST(x)		((x) << 16)
-
-	u32 enc_len;
-#define ENC_LEN(x)		(x)
-
-	u32 enc_key_p;
-#define ENC_KEY_P(x)		(x)
-
-	u32 enc_iv;
-#define ENC_IV_POINT(x)		((x) << 0)
-#define ENC_IV_BUF_POINT(x)	((x) << 16)
-
-	u32 mac_src_p;
-#define MAC_SRC_DATA_P(x)	(x)
-#define MAC_SRC_TOTAL_LEN(x)	((x) << 16)
-
-	u32 mac_digest;
-#define MAC_DIGEST_P(x)	(x)
-#define MAC_FRAG_LEN(x)	((x) << 16)
-	u32 mac_iv;
-#define MAC_INNER_IV_P(x)	(x)
-#define MAC_OUTER_IV_P(x)	((x) << 16)
-}__attribute__ ((packed));
-	/*
-	 * /-----------\ 0
-	 * | ACCEL CFG |	4 * 8
-	 * |-----------| 0x20
-	 * | CRYPT KEY |	8 * 4
-	 * |-----------| 0x40
-	 * |  IV   IN  |	4 * 4
-	 * |-----------| 0x40 (inplace)
-	 * |  IV BUF   |	4 * 4
-	 * |-----------| 0x80
-	 * |  DATA IN  |	16 * x (max ->max_req_size)
-	 * |-----------| 0x80 (inplace operation)
-	 * |  DATA OUT |	16 * x (max ->max_req_size)
-	 * \-----------/ SRAM size
-	 */
-
-	/* Hashing memory map:
-	 * /-----------\ 0
-	 * | ACCEL CFG |        4 * 8
-	 * |-----------| 0x20
-	 * | Inner IV  |        5 * 4
-	 * |-----------| 0x34
-	 * | Outer IV  |        5 * 4
-	 * |-----------| 0x48
-	 * | Output BUF|        5 * 4
-	 * |-----------| 0x80
-	 * |  DATA IN  |        64 * x (max ->max_req_size)
-	 * \-----------/ SRAM size
-	 */
-#define SRAM_CONFIG		0x00
-#define SRAM_DATA_KEY_P		0x20
-#define SRAM_DATA_IV		0x40
-#define SRAM_DATA_IV_BUF	0x40
-#define SRAM_DATA_IN_START	0x80
-#define SRAM_DATA_OUT_START	0x80
-
-#define SRAM_HMAC_IV_IN		0x20
-#define SRAM_HMAC_IV_OUT	0x34
-#define SRAM_DIGEST_BUF		0x48
-
-#define SRAM_CFG_SPACE		0x80
-
-#endif
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH 2/2] crypto: marvell/CESA: update DT bindings documentation
  2015-04-09 14:58 ` Boris Brezillon
@ 2015-04-09 14:58   ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-09 14:58 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, linux-crypto
  Cc: Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree, Tawfik Bayouk, Lior Amsalem, Nadav Haklai,
	Eran Ben-Avi, Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard, Boris Brezillon

Document new compatible strings, document the new method to reference
the crypto SRAM and deprecate the old one, and document the 'clocks' and
'clock-names' properties.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 .../devicetree/bindings/crypto/mv_cesa.txt         | 50 ++++++++++++++++------
 1 file changed, 38 insertions(+), 12 deletions(-)

diff --git a/Documentation/devicetree/bindings/crypto/mv_cesa.txt b/Documentation/devicetree/bindings/crypto/mv_cesa.txt
index 47229b1..4ce9bc5 100644
--- a/Documentation/devicetree/bindings/crypto/mv_cesa.txt
+++ b/Documentation/devicetree/bindings/crypto/mv_cesa.txt
@@ -1,20 +1,46 @@
 Marvell Cryptographic Engines And Security Accelerator
 
 Required properties:
-- compatible : should be "marvell,orion-crypto"
-- reg : base physical address of the engine and length of memory mapped
-        region, followed by base physical address of sram and its memory
-        length
-- reg-names : "regs" , "sram";
-- interrupts : interrupt number
+- compatible: should be one of the following strings
+	      "marvell,orion-crypto"
+	      "marvell,kirkwood-crypto"
+	      "marvell,armada-370-crypto"
+	      "marvell,armada-xp-crypto"
+	      "marvell,armada-375-crypto"
+	      "marvell,armada-38x-crypto"
+- reg: base physical address of the engine and length of memory mapped
+       region
+- reg-names: "regs"
+- interrupts: interrupt number
+- clocks: reference to the crypto engines' clocks. This property is not
+	  required for orion and kirkwood platforms
+- clock-names: "cesaX" and "cesazX", X should be replaced by the crypto engine
+	       id.
+	       This property is not required for the orion and kirkwood
+	       platforms.
+	       "cesazX" clocks are not required on armada-370 platforms
+- marvell,crypto-srams: phandle to crypto SRAM definitions
+
+Optional properties:
+- marvell,crypto-sram-size: SRAM size reserved for crypto operations, if not
+			    specified the whole SRAM is used (2KB)
+
+Deprecated properties:
+- reg: base physical address of the engine and length of memory mapped
+       region, followed by base physical address of sram and its memory
+       length
+- reg-names: "regs" , "sram"
 
 Examples:
 
-	crypto@30000 {
-		compatible = "marvell,orion-crypto";
-		reg = <0x30000 0x10000>,
-		      <0x4000000 0x800>;
-		reg-names = "regs" , "sram";
-		interrupts = <22>;
+	crypto@90000 {
+		compatible = "marvell,armada-xp-crypto";
+		reg = <0x90000 0x10000>;
+		reg-names = "regs";
+		interrupts = <48>, <49>;
+		clocks = <&gateclk 23>, <&gateclk 23>;
+		clock-names = "cesa0", "cesa1";
+		marvell,crypto-srams = <&crypto_sram0>, <&crypto_sram1>;
+		marvell,crypto-sram-size = <0x600>;
 		status = "okay";
 	};
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 67+ messages in thread
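
The marvell,crypto-srams property points at ordinary SRAM provider nodes.
A minimal sketch of what such nodes could look like, assuming the generic
"mmio-sram" binding, is shown below; the labels, node names and addresses
are illustrative placeholders, not values taken from this series:

	crypto_sram0: sa-sram0 {
		compatible = "mmio-sram";
		reg = <0xf1100000 0x800>;	/* placeholder base address */
		#address-cells = <1>;
		#size-cells = <1>;
		ranges = <0 0xf1100000 0x800>;
	};

	crypto_sram1: sa-sram1 {
		compatible = "mmio-sram";
		reg = <0xf1110000 0x800>;	/* placeholder base address */
		#address-cells = <1>;
		#size-cells = <1>;
		ranges = <0 0xf1110000 0x800>;
	};

The crypto node then references them with
marvell,crypto-srams = <&crypto_sram0>, <&crypto_sram1>; as in the example
above, and the optional marvell,crypto-sram-size property limits how much
of each SRAM is reserved for crypto operations.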

* [PATCH 2/2] crypto: marvell/CESA: update DT bindings documentation
@ 2015-04-09 14:58   ` Boris Brezillon
  0 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-09 14:58 UTC (permalink / raw)
  To: linux-arm-kernel

Document new compatible strings, document the new method to reference
the crypto SRAM and deprecate the old one, and document the 'clocks' and
'clock-names' properties.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 .../devicetree/bindings/crypto/mv_cesa.txt         | 50 ++++++++++++++++------
 1 file changed, 38 insertions(+), 12 deletions(-)

diff --git a/Documentation/devicetree/bindings/crypto/mv_cesa.txt b/Documentation/devicetree/bindings/crypto/mv_cesa.txt
index 47229b1..4ce9bc5 100644
--- a/Documentation/devicetree/bindings/crypto/mv_cesa.txt
+++ b/Documentation/devicetree/bindings/crypto/mv_cesa.txt
@@ -1,20 +1,46 @@
 Marvell Cryptographic Engines And Security Accelerator
 
 Required properties:
-- compatible : should be "marvell,orion-crypto"
-- reg : base physical address of the engine and length of memory mapped
-        region, followed by base physical address of sram and its memory
-        length
-- reg-names : "regs" , "sram";
-- interrupts : interrupt number
+- compatible: should be one of the following strings
+	      "marvell,orion-crypto"
+	      "marvell,kirkwood-crypto"
+	      "marvell,armada-370-crypto"
+	      "marvell,armada-xp-crypto"
+	      "marvell,armada-375-crypto"
+	      "marvell,armada-38x-crypto"
+- reg: base physical address of the engine and length of memory mapped
+       region
+- reg-names: "regs"
+- interrupts: interrupt number
+- clocks: reference to the crypto engines' clocks. This property is not
+	  required for orion and kirkwood platforms
+- clock-names: "cesaX" and "cesazX", X should be replaced by the crypto engine
+	       id.
+	       This property is not required for the orion and kirkwood
+	       platforms.
+	       "cesazX" clocks are not required on armada-370 platforms
+- marvell,crypto-srams: phandle to crypto SRAM definitions
+
+Optional properties:
+- marvell,crypto-sram-size: SRAM size reserved for crypto operations, if not
+			    specified the whole SRAM is used (2KB)
+
+Deprecated properties:
+- reg: base physical address of the engine and length of memory mapped
+       region, followed by base physical address of sram and its memory
+       length
+- reg-names: "regs" , "sram"
 
 Examples:
 
-	crypto@30000 {
-		compatible = "marvell,orion-crypto";
-		reg = <0x30000 0x10000>,
-		      <0x4000000 0x800>;
-		reg-names = "regs" , "sram";
-		interrupts = <22>;
+	crypto@90000 {
+		compatible = "marvell,armada-xp-crypto";
+		reg = <0x90000 0x10000>;
+		reg-names = "regs";
+		interrupts = <48>, <49>;
+		clocks = <&gateclk 23>, <&gateclk 23>;
+		clock-names = "cesa0", "cesa1";
+		marvell,crypto-srams = <&crypto_sram0>, <&crypto_sram1>;
+		marvell,crypto-sram-size = <0x600>;
 		status = "okay";
 	};
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-09 14:58 ` Boris Brezillon
@ 2015-04-09 15:18   ` Andrew Lunn
  -1 siblings, 0 replies; 67+ messages in thread
From: Andrew Lunn @ 2015-04-09 15:18 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, linux-arm-kernel, linux-kernel,
	Arnaud Ebalard

On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> Hello,
> 
> This is an attempt to replace the mv_cesa driver by a new one to address
> some limitations of the existing driver.
> >From a performance and CPU load point of view the most important
> limitation is the lack of DMA support, thus preventing us from chaining
> crypto operations.
> 
> I know we usually try to adapt existing drivers instead of replacing them
> by new ones, but after trying to refactor the mv_cesa driver I realized it
> would take longer than writing an new one from scratch.

Hi Boris

What is the situation with backwards compatibility? I see you have
kept the old compatibility string, and added lots of new ones, and
deprecated some properties. Will an old DT blob still work?
 
 Thanks
	Andrew

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH 0/2] crypto: add new driver for Marvell CESA
@ 2015-04-09 15:18   ` Andrew Lunn
  0 siblings, 0 replies; 67+ messages in thread
From: Andrew Lunn @ 2015-04-09 15:18 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> Hello,
> 
> This is an attempt to replace the mv_cesa driver by a new one to address
> some limitations of the existing driver.
> >From a performance and CPU load point of view the most important
> limitation is the lack of DMA support, thus preventing us from chaining
> crypto operations.
> 
> I know we usually try to adapt existing drivers instead of replacing them
> by new ones, but after trying to refactor the mv_cesa driver I realized it
> would take longer than writing an new one from scratch.

Hi Boris

What is the situation with backwards compatibility? I see you have
kept the old compatibility string, and added lots of new ones, and
deprecated some properties. Will an old DT blob still work?
 
 Thanks
	Andrew

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-09 14:58 ` Boris Brezillon
@ 2015-04-09 15:34   ` Sebastian Hesselbarth
  -1 siblings, 0 replies; 67+ messages in thread
From: Sebastian Hesselbarth @ 2015-04-09 15:34 UTC (permalink / raw)
  To: Boris Brezillon, Herbert Xu, David S. Miller, linux-crypto
  Cc: Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree, Tawfik Bayouk, Lior Amsalem, Nadav Haklai,
	Eran Ben-Avi, Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Andrew Lunn, linux-arm-kernel, linux-kernel, Arnaud Ebalard

On 09.04.2015 16:58, Boris Brezillon wrote:
> This is an attempt to replace the mv_cesa driver by a new one to address
> some limitations of the existing driver.
>  From a performance and CPU load point of view the most important
> limitation is the lack of DMA support, thus preventing us from chaining
> crypto operations.
>
> I know we usually try to adapt existing drivers instead of replacing them
> by new ones, but after trying to refactor the mv_cesa driver I realized it
> would take longer than writing an new one from scratch.

Boris,

if you include a bunch of performance measurements, I guess it will help
you to get an agreement of replacing the driver instead of reworking it.

> Here are the main features brought by this new driver:
> - support for armada SoCs (up to 38x) while keeping support for older ones
>    (Orion and Kirkwood)

Unfortunately, the list above is missing Dove SoCs which also have a
CESA engine with TDMA support. I checked the registers _very_ quickly
but it seems that they are compatible with Kirkwood's CESA.

> - DMA mode to offload the CPU in case of intensive crypto usage
> - new algorithms: SHA256, DES and 3DES
>
[...]
> Boris Brezillon (2):
>    crypto: add new driver for Marvell CESA
>    crypto: marvell/CESA: update DT bindings documentation

IMHO, the patch set should be split up in:
- new core driver
- add support for TDMA on platforms that support it
- new cipher algorithms
- removal of old mv_cesa

I'd love to test on Dove, but time still is very limited. I guess the
patches will receive another round anyway, maybe I find some until the
final version.

Sebastian

>   .../devicetree/bindings/crypto/mv_cesa.txt         |   50 +-
>   drivers/crypto/Kconfig                             |    2 +
>   drivers/crypto/Makefile                            |    2 +-
>   drivers/crypto/marvell/Makefile                    |    1 +
>   drivers/crypto/marvell/cesa.c                      |  539 ++++++++
>   drivers/crypto/marvell/cesa.h                      |  802 ++++++++++++
>   drivers/crypto/marvell/cipher.c                    |  761 +++++++++++
>   drivers/crypto/marvell/hash.c                      | 1349 ++++++++++++++++++++
>   drivers/crypto/marvell/tdma.c                      |  223 ++++
>   drivers/crypto/mv_cesa.c                           | 1193 -----------------
>   drivers/crypto/mv_cesa.h                           |  150 ---
>   11 files changed, 3716 insertions(+), 1356 deletions(-)
>   create mode 100644 drivers/crypto/marvell/Makefile
>   create mode 100644 drivers/crypto/marvell/cesa.c
>   create mode 100644 drivers/crypto/marvell/cesa.h
>   create mode 100644 drivers/crypto/marvell/cipher.c
>   create mode 100644 drivers/crypto/marvell/hash.c
>   create mode 100644 drivers/crypto/marvell/tdma.c
>   delete mode 100644 drivers/crypto/mv_cesa.c
>   delete mode 100644 drivers/crypto/mv_cesa.h
>

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH 0/2] crypto: add new driver for Marvell CESA
@ 2015-04-09 15:34   ` Sebastian Hesselbarth
  0 siblings, 0 replies; 67+ messages in thread
From: Sebastian Hesselbarth @ 2015-04-09 15:34 UTC (permalink / raw)
  To: linux-arm-kernel

On 09.04.2015 16:58, Boris Brezillon wrote:
> This is an attempt to replace the mv_cesa driver by a new one to address
> some limitations of the existing driver.
>  From a performance and CPU load point of view the most important
> limitation is the lack of DMA support, thus preventing us from chaining
> crypto operations.
>
> I know we usually try to adapt existing drivers instead of replacing them
> by new ones, but after trying to refactor the mv_cesa driver I realized it
> would take longer than writing an new one from scratch.

Boris,

if you include a bunch of performance measurements, I guess it will help
you to get an agreement of replacing the driver instead of reworking it.

> Here are the main features brought by this new driver:
> - support for armada SoCs (up to 38x) while keeping support for older ones
>    (Orion and Kirkwood)

Unfortunately, the list above is missing Dove SoCs which also have a
CESA engine with TDMA support. I checked the registers _very_ quickly
but it seems that they are compatible with Kirkwood's CESA.

> - DMA mode to offload the CPU in case of intensive crypto usage
> - new algorithms: SHA256, DES and 3DES
>
[...]
> Boris Brezillon (2):
>    crypto: add new driver for Marvell CESA
>    crypto: marvell/CESA: update DT bindings documentation

IMHO, the patch set should be split up in:
- new core driver
- add support for TDMA on platforms that support it
- new cipher algorithms
- removal of old mv_cesa

I'd love to test on Dove, but time still is very limited. I guess the
patches will receive another round anyway, maybe I find some until the
final version.

Sebastian

>   .../devicetree/bindings/crypto/mv_cesa.txt         |   50 +-
>   drivers/crypto/Kconfig                             |    2 +
>   drivers/crypto/Makefile                            |    2 +-
>   drivers/crypto/marvell/Makefile                    |    1 +
>   drivers/crypto/marvell/cesa.c                      |  539 ++++++++
>   drivers/crypto/marvell/cesa.h                      |  802 ++++++++++++
>   drivers/crypto/marvell/cipher.c                    |  761 +++++++++++
>   drivers/crypto/marvell/hash.c                      | 1349 ++++++++++++++++++++
>   drivers/crypto/marvell/tdma.c                      |  223 ++++
>   drivers/crypto/mv_cesa.c                           | 1193 -----------------
>   drivers/crypto/mv_cesa.h                           |  150 ---
>   11 files changed, 3716 insertions(+), 1356 deletions(-)
>   create mode 100644 drivers/crypto/marvell/Makefile
>   create mode 100644 drivers/crypto/marvell/cesa.c
>   create mode 100644 drivers/crypto/marvell/cesa.h
>   create mode 100644 drivers/crypto/marvell/cipher.c
>   create mode 100644 drivers/crypto/marvell/hash.c
>   create mode 100644 drivers/crypto/marvell/tdma.c
>   delete mode 100644 drivers/crypto/mv_cesa.c
>   delete mode 100644 drivers/crypto/mv_cesa.h
>

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
       [not found]   ` <20150409172826.18916274@bbrezillon>
@ 2015-04-09 15:37     ` Andrew Lunn
  2015-04-09 15:37       ` Andrew Lunn
  1 sibling, 0 replies; 67+ messages in thread
From: Andrew Lunn @ 2015-04-09 15:37 UTC (permalink / raw)
  To: Boris Brezillon

On Thu, Apr 09, 2015 at 05:28:26PM +0200, Boris Brezillon wrote:
> Hi Andrew,
> 
> On Thu, 9 Apr 2015 17:18:46 +0200
> Andrew Lunn <andrew@lunn.ch> wrote:
> 
> > On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> > > Hello,
> > > 
> > > This is an attempt to replace the mv_cesa driver by a new one to address
> > > some limitations of the existing driver.
> > > >From a performance and CPU load point of view the most important
> > > limitation is the lack of DMA support, thus preventing us from chaining
> > > crypto operations.
> > > 
> > > I know we usually try to adapt existing drivers instead of replacing them
> > > by new ones, but after trying to refactor the mv_cesa driver I realized it
> > > would take longer than writing an new one from scratch.
> > 
> > Hi Boris
> > 
> > What is the situation with backwards compatibility? I see you have
> > kept the old compatibility string, and added lots of new ones, and
> > deprecated some properties. Will an old DT blob still work?
> 
> Yep, I tried on an Orion platform, and Arnaud tried on a Kirkwood one
> without any changes to the existing DT and it works.

Great. It would be nice to state this in the ChangeLog or cover note.

> Anyway, IMHO even those DT should be updated to use the new bindings
> (sram nodes, new compatible if available, addition of clock-names
> properties, ...).

Agreed. Maybe you can extend the patchset to modify the relevant .dtsi
files?

Thanks
	Andrew

^ permalink raw reply	[flat|nested] 67+ messages in thread
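
To make the backwards compatibility question concrete, a legacy node using
the deprecated properties keeps the shape of the example removed from
mv_cesa.txt and, per the reply above, is expected to keep working without
any DT change:

	crypto@30000 {
		compatible = "marvell,orion-crypto";
		reg = <0x30000 0x10000>,
		      <0x4000000 0x800>;
		reg-names = "regs", "sram";
		interrupts = <22>;
		status = "okay";
	};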

* [PATCH 0/2] crypto: add new driver for Marvell CESA
@ 2015-04-09 15:37       ` Andrew Lunn
  0 siblings, 0 replies; 67+ messages in thread
From: Andrew Lunn @ 2015-04-09 15:37 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Apr 09, 2015 at 05:28:26PM +0200, Boris Brezillon wrote:
> Hi Andrew,
> 
> On Thu, 9 Apr 2015 17:18:46 +0200
> Andrew Lunn <andrew@lunn.ch> wrote:
> 
> > On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> > > Hello,
> > > 
> > > This is an attempt to replace the mv_cesa driver by a new one to address
> > > some limitations of the existing driver.
> > > >From a performance and CPU load point of view the most important
> > > limitation is the lack of DMA support, thus preventing us from chaining
> > > crypto operations.
> > > 
> > > I know we usually try to adapt existing drivers instead of replacing them
> > > by new ones, but after trying to refactor the mv_cesa driver I realized it
> > > would take longer than writing an new one from scratch.
> > 
> > Hi Boris
> > 
> > What is the situation with backwards compatibility? I see you have
> > kept the old compatibility string, and added lots of new ones, and
> > deprecated some properties. Will an old DT blob still work?
> 
> Yep, I tried on an Orion platform, and Arnaud tried on a Kirkwood one
> without any changes to the existing DT and it works.

Great. It would be nice to state this in the ChangeLog or cover note.

> Anyway, IMHO even those DT should be updated to use the new bindings
> (sram nodes, new compatible if available, addition of clock-names
> properties, ...).

Agreed. Maybe you can extend the patchset to modify the relevant .dtsi
files?

Thanks
	Andrew

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-09 14:58 ` Boris Brezillon
@ 2015-04-09 15:52   ` Stephan Mueller
  -1 siblings, 0 replies; 67+ messages in thread
From: Stephan Mueller @ 2015-04-09 15:52 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard

Am Donnerstag, 9. April 2015, 16:58:41 schrieb Boris Brezillon:

Hi Boris,

>Hello,
>
>This is an attempt to replace the mv_cesa driver by a new one to address
>some limitations of the existing driver.
>From a performance and CPU load point of view the most important
>limitation is the lack of DMA support, thus preventing us from chaining
>crypto operations.
>
>I know we usually try to adapt existing drivers instead of replacing them
>by new ones, but after trying to refactor the mv_cesa driver I realized it
>would take longer than writing an new one from scratch.
>
>Here are the main features brought by this new driver:
>- support for armada SoCs (up to 38x) while keeping support for older ones
>  (Orion and Kirkwood)
>- DMA mode to offload the CPU in case of intensive crypto usage
>- new algorithms: SHA256, DES and 3DES
>
>I'd like to thank Arnaud, who has carefully reviewed several iterations of
>this driver, helped me improved my implementation, provided support for
>several crypto algorithms, provided support for armada-370 and tested
>the driver on different platforms, hence the SoB and dual MODULE_AUTHOR
>in the driver code.

Your patch 1/2 did not make it to the crypto list. Too big? It is on the lkml
list though.
>
>Best Regards,
>
>Boris
>
>Boris Brezillon (2):
>  crypto: add new driver for Marvell CESA
>  crypto: marvell/CESA: update DT bindings documentation
>
> .../devicetree/bindings/crypto/mv_cesa.txt         |   50 +-
> drivers/crypto/Kconfig                             |    2 +
> drivers/crypto/Makefile                            |    2 +-
> drivers/crypto/marvell/Makefile                    |    1 +
> drivers/crypto/marvell/cesa.c                      |  539 ++++++++
> drivers/crypto/marvell/cesa.h                      |  802 ++++++++++++
> drivers/crypto/marvell/cipher.c                    |  761 +++++++++++
> drivers/crypto/marvell/hash.c                      | 1349
>++++++++++++++++++++ drivers/crypto/marvell/tdma.c                      | 
>223 ++++
> drivers/crypto/mv_cesa.c                           | 1193 -----------------
> drivers/crypto/mv_cesa.h                           |  150 ---
> 11 files changed, 3716 insertions(+), 1356 deletions(-)
> create mode 100644 drivers/crypto/marvell/Makefile
> create mode 100644 drivers/crypto/marvell/cesa.c
> create mode 100644 drivers/crypto/marvell/cesa.h
> create mode 100644 drivers/crypto/marvell/cipher.c
> create mode 100644 drivers/crypto/marvell/hash.c
> create mode 100644 drivers/crypto/marvell/tdma.c
> delete mode 100644 drivers/crypto/mv_cesa.c
> delete mode 100644 drivers/crypto/mv_cesa.h


Ciao
Stephan

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH 0/2] crypto: add new driver for Marvell CESA
@ 2015-04-09 15:52   ` Stephan Mueller
  0 siblings, 0 replies; 67+ messages in thread
From: Stephan Mueller @ 2015-04-09 15:52 UTC (permalink / raw)
  To: linux-arm-kernel

Am Donnerstag, 9. April 2015, 16:58:41 schrieb Boris Brezillon:

Hi Boris,

>Hello,
>
>This is an attempt to replace the mv_cesa driver by a new one to address
>some limitations of the existing driver.
>From a performance and CPU load point of view the most important
>limitation is the lack of DMA support, thus preventing us from chaining
>crypto operations.
>
>I know we usually try to adapt existing drivers instead of replacing them
>by new ones, but after trying to refactor the mv_cesa driver I realized it
>would take longer than writing an new one from scratch.
>
>Here are the main features brought by this new driver:
>- support for armada SoCs (up to 38x) while keeping support for older ones
>  (Orion and Kirkwood)
>- DMA mode to offload the CPU in case of intensive crypto usage
>- new algorithms: SHA256, DES and 3DES
>
>I'd like to thank Arnaud, who has carefully reviewed several iterations of
>this driver, helped me improved my implementation, provided support for
>several crypto algorithms, provided support for armada-370 and tested
>the driver on different platforms, hence the SoB and dual MODULE_AUTHOR
>in the driver code.

Your patch 1/2 did not make it to the crypto list. Too big? It is on the lkml
list though.
>
>Best Regards,
>
>Boris
>
>Boris Brezillon (2):
>  crypto: add new driver for Marvell CESA
>  crypto: marvell/CESA: update DT bindings documentation
>
> .../devicetree/bindings/crypto/mv_cesa.txt         |   50 +-
> drivers/crypto/Kconfig                             |    2 +
> drivers/crypto/Makefile                            |    2 +-
> drivers/crypto/marvell/Makefile                    |    1 +
> drivers/crypto/marvell/cesa.c                      |  539 ++++++++
> drivers/crypto/marvell/cesa.h                      |  802 ++++++++++++
> drivers/crypto/marvell/cipher.c                    |  761 +++++++++++
> drivers/crypto/marvell/hash.c                      | 1349
>++++++++++++++++++++ drivers/crypto/marvell/tdma.c                      | 
>223 ++++
> drivers/crypto/mv_cesa.c                           | 1193 -----------------
> drivers/crypto/mv_cesa.h                           |  150 ---
> 11 files changed, 3716 insertions(+), 1356 deletions(-)
> create mode 100644 drivers/crypto/marvell/Makefile
> create mode 100644 drivers/crypto/marvell/cesa.c
> create mode 100644 drivers/crypto/marvell/cesa.h
> create mode 100644 drivers/crypto/marvell/cipher.c
> create mode 100644 drivers/crypto/marvell/hash.c
> create mode 100644 drivers/crypto/marvell/tdma.c
> delete mode 100644 drivers/crypto/mv_cesa.c
> delete mode 100644 drivers/crypto/mv_cesa.h


Ciao
Stephan

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-09 15:34   ` Sebastian Hesselbarth
@ 2015-04-09 15:57     ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-09 15:57 UTC (permalink / raw)
  To: Sebastian Hesselbarth
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Jason Cooper, Andrew Lunn,
	linux-arm-kernel, linux-kernel, Arnaud Ebalard

Hi Sebastian,

On Thu, 09 Apr 2015 17:34:29 +0200
Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> wrote:

> 
> if you include a bunch of performance measurements, I guess it will help
> you to get an agreement of replacing the driver instead of reworking it.

Yep, I made some measurements using tcrypt a while ago; I'll provide
them in the next round.

> 
> > Here are the main features brought by this new driver:
> > - support for armada SoCs (up to 38x) while keeping support for older ones
> >    (Orion and Kirkwood)
> 
> Unfortunately, the list above is missing Dove SoCs which also have a
> CESA engine with TDMA support. I checked the registers _very_ quickly
> but it seems that they are compatible with Kirkwood's CESA.

I checked it too: it should work, but I don't have any board to test
it :-/.

> 
> > - DMA mode to offload the CPU in case of intensive crypto usage
> > - new algorithms: SHA256, DES and 3DES
> >
> [...]
> > Boris Brezillon (2):
> >    crypto: add new driver for Marvell CESA
> >    crypto: marvell/CESA: update DT bindings documentation
> 
> IMHO, the patch set should be split up in:
> - new core driver
> - add support for TDMA on platforms that support it
> - new cipher algorithms
> - removal of old mv_cesa

I agree with a two-step approach:
- add the new driver code without compiling it
- remove the old code, update Kconfig (add the new dependencies) and the
  Makefile entries (compile the code in marvell/ instead of mv_cesa.c)

Is there a good reason for splitting the core, TDMA and algorithm
support into separate patches (to keep Arnaud's authorship?)?
Anyway, this should be feasible, but I thought the policy was to
minimize the number of patches when submitting new drivers.

> 
> I'd love to test on Dove, but time still is very limited. I guess the
> patches will receive another round anyway, maybe I find some until the
> final version.

No problem (thanks for the offer).

Best Regards,

Boris

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-09 15:57     ` Boris Brezillon
  (?)
@ 2015-04-09 23:21       ` Arnaud Ebalard
  -1 siblings, 0 replies; 67+ messages in thread
From: Arnaud Ebalard @ 2015-04-09 23:21 UTC (permalink / raw)
  To: Sebastian Hesselbarth
  Cc: Mark Rutland, Boris Brezillon, Thomas Petazzoni, Herbert Xu,
	Pawel Moll, Ian Campbell, linux-arm-kernel, linux-kernel,
	Eran Ben-Avi, Nadav Haklai, devicetree, Rob Herring, Andrew Lunn,
	linux-crypto, Kumar Gala, Gregory CLEMENT, Tawfik Bayouk,
	David S. Miller, Lior Amsalem, Jason Cooper

Hi Sebastian,

Boris Brezillon <boris.brezillon@free-electrons.com> writes:

>> if you include a bunch of performance measurements, I guess it will help
>> you to get an agreement of replacing the driver instead of reworking it.
>
> Yep, I made some measurements using tcrypt a while ago, I'll provide
> them in the next round.

Here are some test results on two Marvell SoCs (I do not have Dove platforms
at hand and did not collect the results for the Armada 370):

  - Kirkwood 88F6282 (Feroceon 88FR131 rev 1) at 1.6GHz
  - Armada XP (mv78230, i.e. 2 cores @ 1.2GHz)

The targets are AES ECB and CBC encryption (decryption is similar
performance-wise), done with tcrypt (mode=500 passed to the tcrypt module).

For each SoC, the various tests done by tcrypt are the following:

AES ECB/CBC encryption:

t  0 (128 bit key, 16 byte blocks)
t  1 (128 bit key, 64 byte blocks)
t  2 (128 bit key, 256 byte blocks)
t  3 (128 bit key, 1024 byte blocks)
t  4 (128 bit key, 8192 byte blocks)
t  5 (192 bit key, 16 byte blocks)
t  6 (192 bit key, 64 byte blocks)
t  7 (192 bit key, 256 byte blocks)
t  8 (192 bit key, 1024 byte blocks)
t  9 (192 bit key, 8192 byte blocks)
t 10 (256 bit key, 16 byte blocks)
t 11 (256 bit key, 64 byte blocks)
t 12 (256 bit key, 256 byte blocks)
t 13 (256 bit key, 1024 byte blocks)
t 14 (256 bit key, 8192 byte blocks)

The three columns give the throughput of the software implementation
(aes-asm), the current driver (if available for that SoC), and the
submitted v0 driver. The percentage is the change relative to the
software implementation.

         soft         current driver          submitted v0     
                      (if available)


KW:

ECB
   t 0:  5.23 MB/s   1.01 MB/s (-80.58%)    1.11 MB/s (-78.75%) 
   t 1: 12.40 MB/s   3.70 MB/s (-70.16%)    4.14 MB/s (-66.59%) 
   t 2: 18.94 MB/s  10.81 MB/s (-42.94%)   13.86 MB/s (-26.78%) 
   t 3: 21.79 MB/s  20.69 MB/s  (-5.05%)   33.80 MB/s  (55.12%)  
   t 4: 22.54 MB/s  25.97 MB/s  (15.23%)   50.27 MB/s (123.05%) 

   t 5:  5.00 MB/s   1.01 MB/s (-79.75%)    1.10 MB/s (-78.02%) 
   t 6: 11.35 MB/s   3.70 MB/s (-67.41%)    3.84 MB/s (-66.17%) 
   t 7: 16.60 MB/s  10.66 MB/s (-35.81%)   13.59 MB/s (-18.14%) 
   t 8: 18.76 MB/s  20.13 MB/s   (7.29%)   32.30 MB/s  (72.15%)  
   t 9: 19.20 MB/s  25.10 MB/s  (30.74%)   47.11 MB/s (145.37%) 

   t10:  4.85 MB/s   1.02 MB/s (-79.02%)    1.10 MB/s (-77.25%) 
   t11: 10.50 MB/s   3.74 MB/s (-64.35%)    4.10 MB/s (-60.89%) 
   t12: 14.80 MB/s   4.65 MB/s (-68.55%)   13.40 MB/s (-9.43%)  
   t13: 16.47 MB/s  19.22 MB/s  (16.69%)   31.14 MB/s  (89.02%)  
   t14: 16.89 MB/s  24.36 MB/s  (44.18%)   44.33 MB/s (162.40%) 

CBC
   t 0:  4.78 MB/s   0.98 MB/s (-79.50%)    1.09 MB/s (-77.12%) 
   t 1: 11.44 MB/s   3.59 MB/s (-68.62%)    4.07 MB/s (-64.41%) 
   t 2: 17.66 MB/s  10.53 MB/s (-40.38%)   13.67 MB/s (-22.58%) 
   t 3: 20.41 MB/s  20.42 MB/s   (0.00%)   33.50 MB/s  (64.10%)  
   t 4: 21.14 MB/s  25.86 MB/s  (22.36%)   50.02 MB/s (136.63%) 

   t 5:  4.58 MB/s   0.98 MB/s (-78.64%)    1.08 MB/s (-76.31%) 
   t 6: 10.54 MB/s   3.58 MB/s (-66.00%)    4.04 MB/s (-61.68%) 
   t 7: 15.61 MB/s  10.39 MB/s (-33.49%)   13.40 MB/s (-14.16%) 
   t 8: 17.73 MB/s  19.88 MB/s  (12.10%)   32.04 MB/s  (80.69%)  
   t 9: 18.18 MB/s  25.02 MB/s  (37.60%)   46.90 MB/s (157.97%) 

   t10:  4.45 MB/s   0.98 MB/s (-77.96%)    1.09 MB/s (-75.62%) 
   t11:  9.80 MB/s   3.60 MB/s (-63.28%)    4.03 MB/s (-58.83%) 
   t12: 14.01 MB/s   4.34 MB/s (-69.01%)   13.24 MB/s  (-5.48%)  
   t13: 15.67 MB/s  19.44 MB/s  (24.01%)   30.90 MB/s  (97.17%)  
   t14: 16.09 MB/s  24.28 MB/s  (50.85%)   44.15 MB/s (174.34%) 

XP:

ECB
   t 0:  8.85 MB/s                          0.77 MB/s (-91.25%) 
   t 1: 21.73 MB/s                          3.09 MB/s (-85.79%) 
   t 2: 34.81 MB/s                         12.35 MB/s (-64.52%) 
   t 3: 40.81 MB/s                         38.68 MB/s  (-5.22%) 
   t 4: 42.69 MB/s                         84.52 MB/s  (98.00%) 

   t 5:  8.55 MB/s                          0.78 MB/s (-90.92%) 
   t 6: 20.63 MB/s                          3.11 MB/s (-84.92%) 
   t 7: 31.47 MB/s                         12.43 MB/s (-60.52%) 
   t 8: 36.07 MB/s                         38.08 MB/s   (5.58%) 
   t 9: 37.09 MB/s                         80.43 MB/s (116.85%) 

   t10:  8.25 MB/s                          0.78 MB/s (-90.56%) 
   t11: 19.19 MB/s                          3.11 MB/s (-83.80%) 
   t12: 28.61 MB/s                         12.42 MB/s (-56.59%) 
   t13: 32.49 MB/s                         37.28 MB/s  (14.74%) 
   t14: 33.56 MB/s                         77.11 MB/s (129.79%) 

CBC   
   t 0:  8.20 MB/s                          0.78 MB/s (-90.53%) 
   t 1: 19.85 MB/s                          3.10 MB/s (-84.36%) 
   t 2: 31.60 MB/s                         12.42 MB/s (-60.69%) 
   t 3: 37.03 MB/s                         38.70 MB/s   (4.51%) 
   t 4: 38.76 MB/s                         84.05 MB/s (116.87%) 

   t 5:  7.69 MB/s                          0.78 MB/s (-89.90%) 
   t 6: 18.62 MB/s                          3.10 MB/s (-83.32%) 
   t 7: 28.47 MB/s                         12.40 MB/s (-56.44%) 
   t 8: 32.73 MB/s                         37.97 MB/s  (16.02%) 
   t 9: 33.73 MB/s                         79.96 MB/s (137.07%) 

   t10:  7.58 MB/s                          0.77 MB/s (-89.88%) 
   t11: 17.59 MB/s                          3.07 MB/s (-82.56%) 
   t12: 26.26 MB/s                         12.28 MB/s (-53.23%) 
   t13: 29.89 MB/s                         37.02 MB/s  (23.87%) 
   t14: 30.87 MB/s                         76.70 MB/s (148.45%)


Cheers,

a+

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 1/2] crypto: add new driver for Marvell CESA
@ 2015-04-10 10:38       ` Paul Bolle
  0 siblings, 0 replies; 67+ messages in thread
From: Paul Bolle @ 2015-04-10 10:38 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard

(This patch needed a trivial context change in drivers/crypto/Makefile
to get it applied on top of next-20150409.)

On Thu, 2015-04-09 at 16:58 +0200, Boris Brezillon wrote:
> --- a/drivers/crypto/Kconfig
> +++ b/drivers/crypto/Kconfig
> @@ -164,8 +164,10 @@ config CRYPTO_DEV_MV_CESA
>  	depends on PLAT_ORION
>  	select CRYPTO_ALGAPI
>  	select CRYPTO_AES
> +	select CRYPTO_DES
>  	select CRYPTO_BLKCIPHER2
>  	select CRYPTO_HASH
> +	select SRAM
>  	help
>  	  This driver allows you to utilize the Cryptographic Engines and
>  	  Security Accelerator (CESA) which can be found on the Marvell Orion

> --- a/drivers/crypto/Makefile
> +++ b/drivers/crypto/Makefile

> -obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += mv_cesa.o
> +obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += marvell/

> --- /dev/null
> +++ b/drivers/crypto/marvell/Makefile
> @@ -0,0 +1 @@
> +obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += cesa.o cipher.o hash.o tdma.o

For a modular build (which is all I tried) this doesn't do what you
probably want, as this will generate four modules. Assuming you want to
keep the mv_cesa name for the module, you could try something like:
    obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += mv_cesa.o
    mv_cesa-objs := cesa.o cipher.o hash.o tdma.o

Does that do what you want?

> --- /dev/null
> +++ b/drivers/crypto/marvell/cesa.c

> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published
> + * by the Free Software Foundation.

This states the license is GPL v2.

> +static struct platform_driver marvell_cesa = {
> +	.probe		= mv_cesa_probe,
> +	.remove		= mv_cesa_remove,
> +	.driver		= {
> +		.owner	= THIS_MODULE,
> +		.name	= "mv_crypto",
> +		.of_match_table = mv_cesa_of_match_table,
> +	},
> +};
> +MODULE_ALIAS("platform:mv_crypto");

(It's nicer to make that macro part of the block with the other
MODULE_ macros.)

> +module_platform_driver(marvell_cesa);

(And it's nicer to have this directly follow the definition of
marvell_cesa.)

> +MODULE_AUTHOR("Boris Brezillon <boris.brezillon@free-electrons.com>");
> +MODULE_AUTHOR("Arnaud Ebalard <arno@natisbad.org>");
> +MODULE_DESCRIPTION("Support for Marvell's cryptographic engine");
> +MODULE_LICENSE("GPL");

And this states the license is GPL v2 or later. So either the comment at
the top of this file or this macro needs to be changed.
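
For illustration, here is a minimal sketch of how the tail of cesa.c could
look with all three remarks addressed. This is a hypothetical arrangement,
not the submitted code; it reuses the mv_cesa_probe/mv_cesa_remove/
mv_cesa_of_match_table symbols quoted from the patch and only shows the
macro grouping and the "GPL v2" license string.

    /* Hypothetical layout only -- not taken from the submitted cesa.c. */
    static struct platform_driver marvell_cesa = {
            .probe          = mv_cesa_probe,
            .remove         = mv_cesa_remove,
            .driver         = {
                    .owner  = THIS_MODULE,
                    .name   = "mv_crypto",
                    .of_match_table = mv_cesa_of_match_table,
            },
    };
    /* Registration kept right next to the driver it registers. */
    module_platform_driver(marvell_cesa);

    /* All MODULE_ macros grouped in one block. */
    MODULE_ALIAS("platform:mv_crypto");
    MODULE_AUTHOR("Boris Brezillon <boris.brezillon@free-electrons.com>");
    MODULE_AUTHOR("Arnaud Ebalard <arno@natisbad.org>");
    MODULE_DESCRIPTION("Support for Marvell's cryptographic engine");
    /* "GPL v2" matches the "version 2 as published ..." header comment. */
    MODULE_LICENSE("GPL v2");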


Paul Bolle


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 1/2] crypto: add new driver for Marvell CESA
  2015-04-10 10:38       ` Paul Bolle
@ 2015-04-10 11:17         ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-10 11:17 UTC (permalink / raw)
  To: Paul Bolle
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard

Hi Paul,

On Fri, 10 Apr 2015 12:38:59 +0200
Paul Bolle <pebolle@tiscali.nl> wrote:

> (This patch needed a trivial context change in drivers/crypto/Makefile
> to get it applied on top of next-20150409.)

I'll rebase my work on linux-next.

> 
> On Thu, 2015-04-09 at 16:58 +0200, Boris Brezillon wrote:
> > --- a/drivers/crypto/Kconfig
> > +++ b/drivers/crypto/Kconfig
> > @@ -164,8 +164,10 @@ config CRYPTO_DEV_MV_CESA
> >  	depends on PLAT_ORION
> >  	select CRYPTO_ALGAPI
> >  	select CRYPTO_AES
> > +	select CRYPTO_DES
> >  	select CRYPTO_BLKCIPHER2
> >  	select CRYPTO_HASH
> > +	select SRAM
> >  	help
> >  	  This driver allows you to utilize the Cryptographic Engines and
> >  	  Security Accelerator (CESA) which can be found on the Marvell Orion
> 
> > --- a/drivers/crypto/Makefile
> > +++ b/drivers/crypto/Makefile
> 
> > -obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += mv_cesa.o
> > +obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += marvell/
> 
> > --- /dev/null
> > +++ b/drivers/crypto/marvell/Makefile
> > @@ -0,0 +1 @@
> > +obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += cesa.o cipher.o hash.o tdma.o
> 
> For a modular build (which is all I tried) this doesn't do what you
> probably want, as this will generate four modules. Assuming you want to
> keep the mv_cesa name for the module, you could try something like:
>     obj-$(CONFIG_CRYPTO_DEV_MV_CESA) += mv_cesa.o
>     mv_cesa-objs := cesa.o cipher.o hash.o tdma.o
> 
> Does that do what you want?

Yes, I'll fix that in v2.

> 
> > --- /dev/null
> > +++ b/drivers/crypto/marvell/cesa.c
> 
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms of the GNU General Public License version 2 as published
> > + * by the Free Software Foundation.
> 
> This states the license is GPL v2.
> 
> > +static struct platform_driver marvell_cesa = {
> > +	.probe		= mv_cesa_probe,
> > +	.remove		= mv_cesa_remove,
> > +	.driver		= {
> > +		.owner	= THIS_MODULE,
> > +		.name	= "mv_crypto",
> > +		.of_match_table = mv_cesa_of_match_table,
> > +	},
> > +};
> > +MODULE_ALIAS("platform:mv_crypto");
> 
> (It's nicer to make that macro be a part of the block of the other
> MODULE_ macros.)
> 
> > +module_platform_driver(marvell_cesa);
> 
> (And it's nicer to have this directly follow the definition of
> marvell_cesa.)

Absolutely.

> 
> > +MODULE_AUTHOR("Boris Brezillon <boris.brezillon@free-electrons.com>");
> > +MODULE_AUTHOR("Arnaud Ebalard <arno@natisbad.org>");
> > +MODULE_DESCRIPTION("Support for Marvell's cryptographic engine");
> > +MODULE_LICENSE("GPL");
> 
> And this states the license is GPL v2 or later. So either the comment at
> the top of this file or this macro need to be changed.

I'll change the MODULE_LICENSE definition.

Thanks,

Boris

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-09 14:58 ` Boris Brezillon
@ 2015-04-10 13:50   ` Jason Cooper
  -1 siblings, 0 replies; 67+ messages in thread
From: Jason Cooper @ 2015-04-10 13:50 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Sebastian Hesselbarth,
	Andrew Lunn, linux-arm-kernel, linux-kernel, Arnaud Ebalard

Hey Boris,

On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> I know we usually try to adapt existing drivers instead of replacing them
> by new ones, but after trying to refactor the mv_cesa driver I realized it
> would take longer than writing an new one from scratch.

I'm sorry, but this makes me *very* uncomfortable.  Any organization
worth its salt will do a very careful audit of code touching
cryptographic material and sensitive (decrypted) data.  From that point
on, small audits of the changes to the code allow the organization to
build a comfort level with kernel updates.  iow, following the git
history of the driver.

By applying this series, we are basically forcing those organizations to
either a) stop updating, or b) expend significant resources to do
another full audit.

In short, this series breaks the audit chain for the mv_cesa driver.

Maybe I'm the only person with this level of paranoia.  If so, I'm sure
others will override me.

From my POV, it looks like the *only* reason we've chosen this route is
developer convenience.  I don't think that's sufficient reason to break
the change history of a driver handling sensitive data.

For an example of how I use the git history and binary differences to
audit a series of changes to cryptographic code, please take a look at
objdiff [1]. You can even duplicate my audit of my submission for the
skein/threefish driver currently in the staging tree, starting at [2]
and going up to [3].

thx,

Jason.

[1] scripts/objdiff [4]
[2] 449bb8125e3f "staging: crypto: skein: import code from Skein3Fish.git"
[3] 0264b7b7fb44 "staging: crypto: skein: rename macros"
[4] There are better tools out there for auditing actual code changes,
    radare (http://radare.org/r/) is one of them.  objdiff is good only
    at validating object code *hasn't* been changed by style commits.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-10 13:50   ` Jason Cooper
@ 2015-04-10 15:11     ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-10 15:11 UTC (permalink / raw)
  To: Jason Cooper
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Sebastian Hesselbarth,
	Andrew Lunn, linux-arm-kernel, linux-kernel, Arnaud Ebalard

Hi Jason,

On Fri, 10 Apr 2015 13:50:56 +0000
Jason Cooper <jason@lakedaemon.net> wrote:

> Hey Boris,
> 
> On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> > I know we usually try to adapt existing drivers instead of replacing them
> > by new ones, but after trying to refactor the mv_cesa driver I realized it
> > would take longer than writing an new one from scratch.
> 
> I'm sorry, but this makes me *very* uncomfortable.  Any organization
> worth it's salt will do a very careful audit of code touching
> cryptographic material and sensitive (decrypted) data.  From that point
> on, small audits of the changes to the code allow the organization to
> build a comfort level with kernel updates.  iow, following the git
> history of the driver.
> 
> By apply this series, we are basically forcing those organizations to
> either a) stop updating, or b) expend significant resources to do
> another full audit.
> 
> In short, this series breaks the audit chain for the mv_cesa driver.
> 
> Maybe I'm the only person with this level of paranoia.  If so, I'm sure
> others will override me.
> 
> From my POV, it looks like the *only* reason we've chosen this route is
> developer convenience.  I don't think that's sufficient reason to break
> the change history of a driver handling sensitive data.

Well, I understand your concern, but if you carefully read the old and
new drivers, you'll notice that they are completely different (the only
things I kept are the macro definitions).
I really tried to adapt the existing driver to add the missing
features (especially the support for TDMA), but all my attempts
ended up introducing hackish code (not to mention the
performance penalty of this approach). Is that really what we want?
How would you make such big changes to the existing driver (I mean, the
core infrastructure dealing with crypto requests is completely
different)?

I have another solution though: keep the existing driver for the old
Marvell SoCs (Orion, Kirkwood and Dove), and add a new one for the modern
SoCs (Armada 370, XP, 375 and 38x), so that users of the mv_cesa driver
won't have to audit the new code.
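
To illustrate what such a split could look like at the device-tree matching
level, here is a rough sketch. Nothing below is taken from the posted
patches: "marvell,orion-crypto" is the compatible already used by the
existing binding, while the "marvell,armada-*" strings and the table names
are only placeholders made up for the example.

    /* Illustrative only -- neither table is taken from a real driver. */
    static const struct of_device_id old_cesa_of_match_table[] = {
            { .compatible = "marvell,orion-crypto" }, /* Orion, Kirkwood, Dove */
            { /* sentinel */ }
    };

    static const struct of_device_id new_cesa_of_match_table[] = {
            { .compatible = "marvell,armada-370-crypto" }, /* placeholder names */
            { .compatible = "marvell,armada-xp-crypto" },
            { .compatible = "marvell,armada-375-crypto" },
            { .compatible = "marvell,armada-38x-crypto" },
            { /* sentinel */ }
    };

Each driver would then only ever bind to its own SoC generation, so the
audited legacy code path would stay untouched on the older platforms.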

> 
> For an example of how I use the git history and binary differences to
> audit a series of changes to cryptographic code, please take a look at
> objdiff [1]. You can even duplicate my audit of my submission for the
> skein/threefish driver currently in the staging tree, starting at [2]
> and going up to [3].

Thanks for the pointers.

Best Regards,

Boris

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-10 15:11     ` Boris Brezillon
@ 2015-04-10 22:30       ` Jason Cooper
  -1 siblings, 0 replies; 67+ messages in thread
From: Jason Cooper @ 2015-04-10 22:30 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Sebastian Hesselbarth,
	Andrew Lunn, linux-arm-kernel, linux-kernel, Arnaud Ebalard

Hey Boris,

On Fri, Apr 10, 2015 at 05:11:48PM +0200, Boris Brezillon wrote:
> On Fri, 10 Apr 2015 13:50:56 +0000 Jason Cooper <jason@lakedaemon.net> wrote:
> > On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> > > I know we usually try to adapt existing drivers instead of replacing them
> > > by new ones, but after trying to refactor the mv_cesa driver I realized it
> > > would take longer than writing an new one from scratch.
> > 
> > I'm sorry, but this makes me *very* uncomfortable.  Any organization
> > worth it's salt will do a very careful audit of code touching
> > cryptographic material and sensitive (decrypted) data.  From that point
> > on, small audits of the changes to the code allow the organization to
> > build a comfort level with kernel updates.  iow, following the git
> > history of the driver.
> > 
> > By apply this series, we are basically forcing those organizations to
> > either a) stop updating, or b) expend significant resources to do
> > another full audit.
> > 
> > In short, this series breaks the audit chain for the mv_cesa driver.
> > 
> > Maybe I'm the only person with this level of paranoia.  If so, I'm sure
> > others will override me.
> > 
> > From my POV, it looks like the *only* reason we've chosen this route is
> > developer convenience.  I don't think that's sufficient reason to break
> > the change history of a driver handling sensitive data.
> 
> Well, I understand you concern, but if you read carefully the old and
> new drivers, you'll notice that they are completely different (the only
> thing I kept are the macro definitions).

Yes, that's the worrying part for me. ;-)

> I really tried to adapt the existing driver to add the missing
> features (especially the support for TDMA), but all my attempts
> ended up introducing hackish code (not even talking about the
> performance penalty of this approach).

Ok, fair enough.  It would be helpful if this account of attempting to
reconcile the old driver made it into the commit message.  This puts us
in "perfect is the enemy of getting it done" territory.

> I have another solution though: keep the existing driver for old
> marvell SoCs (orion, kirkwood and dove), and add a new one for modern
> SoCs (armada 370, XP, 375 and 38x), so that users of the mv_cesa driver
> won't have to audit the new code.

A fair proposal, but I'll freely admit the number of people actually auditing
their code paths is orders of magnitude smaller than the number of users
of the driver.

There's such a large population of compatible legacy SoCs in the wild,
adding an artificial boundary doesn't make sense.  Especially since
we're talking about features everyone would want to use.

Perhaps we should keep both around, and deprecate the legacy driver over
3 to 4 cycles?

thx,

Jason.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-10 22:30       ` Jason Cooper
@ 2015-04-13  9:39         ` Gregory CLEMENT
  -1 siblings, 0 replies; 67+ messages in thread
From: Gregory CLEMENT @ 2015-04-13  9:39 UTC (permalink / raw)
  To: Jason Cooper, Boris Brezillon
  Cc: Herbert Xu, David S. Miller, linux-crypto, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Sebastian Hesselbarth, Andrew Lunn,
	linux-arm-kernel, linux-kernel, Arnaud Ebalard

Hi Jason, Boris,

On 11/04/2015 00:30, Jason Cooper wrote:
> Hey Boris,
> 
> On Fri, Apr 10, 2015 at 05:11:48PM +0200, Boris Brezillon wrote:
>> On Fri, 10 Apr 2015 13:50:56 +0000 Jason Cooper <jason@lakedaemon.net> wrote:
>>> On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
>>>> I know we usually try to adapt existing drivers instead of replacing them
>>>> by new ones, but after trying to refactor the mv_cesa driver I realized it
>>>> would take longer than writing an new one from scratch.
>>>
>>> I'm sorry, but this makes me *very* uncomfortable.  Any organization
>>> worth it's salt will do a very careful audit of code touching
>>> cryptographic material and sensitive (decrypted) data.  From that point
>>> on, small audits of the changes to the code allow the organization to
>>> build a comfort level with kernel updates.  iow, following the git
>>> history of the driver.
>>>
>>> By apply this series, we are basically forcing those organizations to
>>> either a) stop updating, or b) expend significant resources to do
>>> another full audit.
>>>
>>> In short, this series breaks the audit chain for the mv_cesa driver.
>>>
>>> Maybe I'm the only person with this level of paranoia.  If so, I'm sure
>>> others will override me.
>>>
>>> From my POV, it looks like the *only* reason we've chosen this route is
>>> developer convenience.  I don't think that's sufficient reason to break
>>> the change history of a driver handling sensitive data.
>>
>> Well, I understand you concern, but if you read carefully the old and
>> new drivers, you'll notice that they are completely different (the only
>> thing I kept are the macro definitions).
> 
> Yes, that's the worrying part for me. ;-)

I understand the logic behind your concern, but I wonder if it is really
an issue. My knowledge and background in crypto are very limited, so I
would really like the opinion of other people on the subject.

> 
>> I really tried to adapt the existing driver to add the missing
>> features (especially the support for TDMA), but all my attempts
>> ended up introducing hackish code (not even talking about the
>> performance penalty of this approach).
> 
> Ok, fair enough.  It would be helpful if this account of attempting to
> reconcile the old driver made it into the commit message.  This puts us
> in "perfect is the enemy of getting it done" territory.
> 
>> I have another solution though: keep the existing driver for old
>> marvell SoCs (orion, kirkwood and dove), and add a new one for modern
>> SoCs (armada 370, XP, 375 and 38x), so that users of the mv_cesa driver
>> won't have to audit the new code.
> 
> A fair proposal, but I'll freely admit the number of people actually auditing
> their code paths is orders of magnitude smaller than the number of users
> of the driver.
> 
> There's such a large population of compatible legacy SoCs in the wild,
> adding an artificial boundary doesn't make sense.  Especially since
> we're talking about features everyone would want to use.
> 
> Perhaps we should keep both around, and deprecate the legacy driver over
> 3 to 4 cycles?

But I guess that some users will want to use the new driver on the "old" marvell
SoCs (especially kirkwood and dove). If we go down this path, then the best
solution would be to still update all the dts files and modify the old driver so
that it can use the new binding: from my point of view, the only adaptation
should be related to the SRAM. It will also be necessary to find a way to load
only one driver at a time: either the old or the new, but not both.

However, I still wonder if it is worth the effort.


Thanks,

Gregory




> 
> thx,
> 
> Jason.
> 


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-13  9:39         ` Gregory CLEMENT
@ 2015-04-13 12:47           ` Jason Cooper
  -1 siblings, 0 replies; 67+ messages in thread
From: Jason Cooper @ 2015-04-13 12:47 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: Boris Brezillon, Herbert Xu, David S. Miller, linux-crypto,
	Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree, Tawfik Bayouk, Lior Amsalem, Nadav Haklai,
	Eran Ben-Avi, Thomas Petazzoni, Sebastian Hesselbarth,
	Andrew Lunn, linux-arm-kernel, linux-kernel, Arnaud Ebalard

Hey Gregory,

On Mon, Apr 13, 2015 at 11:39:16AM +0200, Gregory CLEMENT wrote:
> Hi Jason, Boris,
> 
> On 11/04/2015 00:30, Jason Cooper wrote:
> > Hey Boris,
> > 
> > On Fri, Apr 10, 2015 at 05:11:48PM +0200, Boris Brezillon wrote:
> >> On Fri, 10 Apr 2015 13:50:56 +0000 Jason Cooper <jason@lakedaemon.net> wrote:
> >>> On Thu, Apr 09, 2015 at 04:58:41PM +0200, Boris Brezillon wrote:
> >>>> I know we usually try to adapt existing drivers instead of replacing them
> >>>> by new ones, but after trying to refactor the mv_cesa driver I realized it
> >>>> would take longer than writing an new one from scratch.
> >>>
> >>> I'm sorry, but this makes me *very* uncomfortable.  Any organization
> >>> worth it's salt will do a very careful audit of code touching
> >>> cryptographic material and sensitive (decrypted) data.  From that point
> >>> on, small audits of the changes to the code allow the organization to
> >>> build a comfort level with kernel updates.  iow, following the git
> >>> history of the driver.
> >>>
> >>> By apply this series, we are basically forcing those organizations to
> >>> either a) stop updating, or b) expend significant resources to do
> >>> another full audit.
> >>>
> >>> In short, this series breaks the audit chain for the mv_cesa driver.
> >>>
> >>> Maybe I'm the only person with this level of paranoia.  If so, I'm sure
> >>> others will override me.
> >>>
> >>> From my POV, it looks like the *only* reason we've chosen this route is
> >>> developer convenience.  I don't think that's sufficient reason to break
> >>> the change history of a driver handling sensitive data.
> >>
> >> Well, I understand you concern, but if you read carefully the old and
> >> new drivers, you'll notice that they are completely different (the only
> >> thing I kept are the macro definitions).
> > 
> > Yes, that's the worrying part for me. ;-)
> 
> I understand the logic behind your concern, but I wonder if it is really
> an issue. My knowledge ans my background around crypto is very limited,
> so I really would like the opinion of other people on the subject.

It's not about the crypto, it's about trust.  imho, one of the most
important security advances in the past 20 years is the default use of
git (or other SCMs) by open source projects.  Now, no one is forced to
trust the authors' and maintainers' tarball dumps.  Regular code audits
and security updates are *much* more feasible because you can audit
small changes.  It can even be automated to a large extent.

All this means the user has a choice: they can trust the authors and
maintainers, or they can trust their own audits.  Since updates are an
essential part of a security posture, small commits facilitate
maintaining the 'trust in audits'.

It's not about "Should you trust free-electrons?"  Or, "Should you trust
Jason / Herbert / Linus?"  It's about "Should you have to trust any of
them?"

> >> I really tried to adapt the existing driver to add the missing
> >> features (especially the support for TDMA), but all my attempts
> >> ended up introducing hackish code (not even talking about the
> >> performance penalty of this approach).
> > 
> > Ok, fair enough.  It would be helpful if this account of attempting to
> > reconcile the old driver made it into the commit message.  This puts us
> > in "perfect is the enemy of getting it done" territory.
> > 
> >> I have another solution though: keep the existing driver for old
> >> marvell SoCs (orion, kirkwood and dove), and add a new one for modern
> >> SoCs (armada 370, XP, 375 and 38x), so that users of the mv_cesa driver
> >> won't have to audit the new code.
> > 
> > A fair proposal, but I'll freely admit the number of people actually auditing
> > their code paths is orders of magnitude smaller than the number of users
> > of the driver.
> > 
> > There's such a large population of compatible legacy SoCs in the wild,
> > adding an artificial boundary doesn't make sense.  Especially since
> > we're talking about features everyone would want to use.
> > 
> > Perhaps we should keep both around, and deprecate the legacy driver over
> > 3 to 4 cycles?
> 
> But I guess that some users will want to use the new driver on the "old" marvell
> SoCs (especially kirkwood and dove).

Yes, despite my arguments, I'm one of those people.  :-P

> If we go to this path, then the best solution would be to still update
> all the the dts, and modifying the old driver to be able to use the
> new binding: for my point of view the only adaptation should be
> related to the SRAM. It will be also needed to find a way to be able
> to load only one driver at a time: either the old or the new, but not
> both.

We can look at how the wireless drivers handle this.  They often have to
choose between multiple drivers (foss, proprietary, ndis-something, etc)
for a given card.  Not much different here.

> However I still wonder if it worth the effort.

I'd appreciate if we'd look into it.  I understand from on-list and
off-list discussion that the rewrite was unavoidable.  So I'm willing to
concede that.  Giving people time to migrate from old to new while still
being able to update for other security fixes seems reasonable.

thx,

Jason.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-13 12:47           ` Jason Cooper
@ 2015-04-13 16:06             ` Arnaud Ebalard
  -1 siblings, 0 replies; 67+ messages in thread
From: Arnaud Ebalard @ 2015-04-13 16:06 UTC (permalink / raw)
  To: Jason Cooper
  Cc: Gregory CLEMENT, Mark Rutland, Boris Brezillon, Thomas Petazzoni,
	Herbert Xu, Pawel Moll, Ian Campbell, linux-arm-kernel,
	linux-kernel, Eran Ben-Avi, Nadav Haklai, devicetree,
	Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

Hi Jason,

Jason Cooper <jason@lakedaemon.net> writes:

> It's not about the crypto, it's about trust.  imho, one of the most
> important security advances in the past 20 years is the default use of
> git (or other SCMs) by open source projects.  Now, no one is forced to
> trust the authors and maintainers tarball dumps.  Regular code audits
> and security updates are *much* more feasible because you can audit
> small changes.  It can even be automated to a large extent.
>
> All this means the user has a choice: they can trust the authors and
> maintainers, or they can trust their own audits.  Since updates are an
> essential part of a security posture, small commits facilitate
> maintaining the 'trust in audits'.
>
> It's not about "Should you trust free-electrons?"  Or, "Should you trust
> Jason / Herbert / Linus?"  It's about "Should you have to trust any of
> them?"

It's ok, you can call our driver fat. It is ;-) More seriously, I tend
to agree w/ what you write above.


>> >> I really tried to adapt the existing driver to add the missing
>> >> features (especially the support for TDMA), but all my attempts
>> >> ended up introducing hackish code (not even talking about the
>> >> performance penalty of this approach).
>> > 
>> > Ok, fair enough.  It would be helpful if this account of attempting to
>> > reconcile the old driver made it into the commit message.  This puts us
>> > in "perfect is the enemy of getting it done" territory.
>> > 
>> >> I have another solution though: keep the existing driver for old
>> >> marvell SoCs (orion, kirkwood and dove), and add a new one for modern
>> >> SoCs (armada 370, XP, 375 and 38x), so that users of the mv_cesa driver
>> >> won't have to audit the new code.
>> > 
>> > A fair proposal, but I'll freely admit the number of people actually auditing
>> > their code paths is orders of magnitude smaller than the number of users
>> > of the driver.
>> > 
>> > There's such a large population of compatible legacy SoCs in the wild,
>> > adding an artificial boundary doesn't make sense.  Especially since
>> > we're talking about features everyone would want to use.
>> > 
>> > Perhaps we should keep both around, and deprecate the legacy driver over
>> > 3 to 4 cycles?
>> 
>> But I guess that some users will want to use the new driver on the "old" marvell
>> SoCs (especially kirkwood and dove).
>
> Yes, despite my arguments, I'm one of those people.  :-P
>
>> If we go to this path, then the best solution would be to still update
>> all the the dts, and modifying the old driver to be able to use the
>> new binding: for my point of view the only adaptation should be
>> related to the SRAM. It will be also needed to find a way to be able
>> to load only one driver at a time: either the old or the new, but not
>> both.

The approach Boris proposed above seems to make everyone happy:

 1) Keep the old driver for old marvell SoCs (kirkwood, dove and orion)
 2) Introduce the new driver for those that are not supported by the old
    driver, i.e. armada (370, XP, 375, 38x)

AFAICT, this can easily be done (based on compatible strings) and it
will give everyone time to audit the new driver. Current users will
not be taken by surprise. At some point, when everyone is confident w/
the new driver, we can then switch to that one for all SoCs so that
old platforms get more performance.
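
As a rough sketch of that idea (the compatible strings below are only
illustrative assumptions, not the ones from the posted bindings), each
driver would carry an of_device_id table covering only its own SoC
family, so a given DT node can never be claimed by both:

#include <linux/mod_devicetable.h>

/* Hypothetical table for the old mv_cesa driver: legacy SoCs only. */
static const struct of_device_id old_cesa_of_match[] = {
	{ .compatible = "marvell,orion-crypto" },
	{ .compatible = "marvell,kirkwood-crypto" },
	{ .compatible = "marvell,dove-crypto" },
	{ /* sentinel */ }
};

/* Hypothetical table for the new marvell/cesa driver: Armada SoCs only. */
static const struct of_device_id new_cesa_of_match[] = {
	{ .compatible = "marvell,armada-370-crypto" },
	{ .compatible = "marvell,armada-xp-crypto" },
	{ .compatible = "marvell,armada-375-crypto" },
	{ .compatible = "marvell,armada-38x-crypto" },
	{ /* sentinel */ }
};

/*
 * Each table is then wired into its own platform_driver via
 * .driver.of_match_table, so the compatible string in the DT alone
 * decides which driver probes a given node.
 */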

Additionally, for those who want to get the features of the new driver
on their old SoCs right now, we *could* add a simple kernel config option
that makes the new driver handle the old SoCs too (and disables the old
driver at the same time).


> I'd appreciate if we'd look into it.  I understand from on-list and
> off-list discussion that the rewrite was unavoidable.  So I'm willing to
> concede that.  Giving people time to migrate from old to new while still
> being able to update for other security fixes seems reasonable.

Jason, what do you think of the approach above? 

Cheers,

a+

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-13 16:06             ` Arnaud Ebalard
@ 2015-04-13 20:11               ` Jason Cooper
  -1 siblings, 0 replies; 67+ messages in thread
From: Jason Cooper @ 2015-04-13 20:11 UTC (permalink / raw)
  To: Arnaud Ebalard
  Cc: Gregory CLEMENT, Mark Rutland, Boris Brezillon, Thomas Petazzoni,
	Herbert Xu, Pawel Moll, Ian Campbell, linux-arm-kernel,
	linux-kernel, Eran Ben-Avi, Nadav Haklai, devicetree,
	Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

Hey Arnaud,

On Mon, Apr 13, 2015 at 06:06:49PM +0200, Arnaud Ebalard wrote:
> Jason Cooper <jason@lakedaemon.net> writes:
...
> >> >> I really tried to adapt the existing driver to add the missing
> >> >> features (especially the support for TDMA), but all my attempts
> >> >> ended up introducing hackish code (not even talking about the
> >> >> performance penalty of this approach).
> >> > 
> >> > Ok, fair enough.  It would be helpful if this account of attempting to
> >> > reconcile the old driver made it into the commit message.  This puts us
> >> > in "perfect is the enemy of getting it done" territory.
> >> > 
> >> >> I have another solution though: keep the existing driver for old
> >> >> marvell SoCs (orion, kirkwood and dove), and add a new one for modern
> >> >> SoCs (armada 370, XP, 375 and 38x), so that users of the mv_cesa driver
> >> >> won't have to audit the new code.
> >> > 
> >> > A fair proposal, but I'll freely admit the number of people actually auditing
> >> > their code paths is orders of magnitude smaller than the number of users
> >> > of the driver.
> >> > 
> >> > There's such a large population of compatible legacy SoCs in the wild,
> >> > adding an artificial boundary doesn't make sense.  Especially since
> >> > we're talking about features everyone would want to use.
> >> > 
> >> > Perhaps we should keep both around, and deprecate the legacy driver over
> >> > 3 to 4 cycles?
> >> 
> >> But I guess that some users will want to use the new driver on the "old" marvell
> >> SoCs (especially kirkwood and dove).
> >
> > Yes, despite my arguments, I'm one of those people.  :-P
> >
> >> If we go to this path, then the best solution would be to still update
> >> all the the dts, and modifying the old driver to be able to use the
> >> new binding: for my point of view the only adaptation should be
> >> related to the SRAM. It will be also needed to find a way to be able
> >> to load only one driver at a time: either the old or the new, but not
> >> both.
> 
> The approach Boris proposed above seems to make everyone happy:
> 
>  1) Keep the old driver for old marvells SoCs (kirkwood, dove and orion)
>  2) Introduce the new driver for those that are not supported by the old
>     driver, i.e. armada (370, XP, 375, 38x)
> 
> AFAICT, this can easily be done (based on compatible strings) and it
> will let everyone the time to audit the new driver. Current users will
> not be taken by surprise. At some point, when everyone is confident w/
> the new driver, we can then switch to that one for all SoCs so that
> old platform get more performance.
> 
> Additionnally, for those who want to get the feature of the new driver
> for their old SoC right now, we *could* add a simple kernel config option
> for the new driver to use it for the old SoC too (that one disabling the
> old one).
> 
> 
> > I'd appreciate if we'd look into it.  I understand from on-list and
> > off-list discussion that the rewrite was unavoidable.  So I'm willing to
> > concede that.  Giving people time to migrate from old to new while still
> > being able to update for other security fixes seems reasonable.
> 
> Jason, what do you think of the approach above? 

I say keep it simple.  We shouldn't use the DT changes to trigger one
vice the other.  We need to be able to build both, but only load one at
a time.  If that's anything other than simple to do, then we make it a
Kconfig binary choice and move on.

thx,

Jason.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-13 20:11               ` Jason Cooper
@ 2015-04-17  8:33                 ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-17  8:33 UTC (permalink / raw)
  To: Jason Cooper
  Cc: Arnaud Ebalard, Gregory CLEMENT, Mark Rutland, Thomas Petazzoni,
	Herbert Xu, Pawel Moll, Ian Campbell, linux-arm-kernel,
	linux-kernel, Eran Ben-Avi, Nadav Haklai, devicetree,
	Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

Hi Jason,

On Mon, 13 Apr 2015 20:11:46 +0000
Jason Cooper <jason@lakedaemon.net> wrote:

> > 
> > > I'd appreciate if we'd look into it.  I understand from on-list and
> > > off-list discussion that the rewrite was unavoidable.  So I'm willing to
> > > concede that.  Giving people time to migrate from old to new while still
> > > being able to update for other security fixes seems reasonable.
> > 
> > Jason, what do you think of the approach above? 
> 
> I say keep it simple.  We shouldn't use the DT changes to trigger one
> vice the other.  We need to be able to build both, but only load one at
> a time.  If that's anything other than simple to do, then we make it a
> Kconfig binary choice and move on.

Actually I was planning to handle it with a Kconfig dependency rule
(NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
on !NEW_DRIVER).
I don't know how to make it a runtime check without adding new
compatible strings for the kirkwood, dove and orion platforms, and I'm
sure sure this is a good idea.
Do you have any ideas ?

Regards,

Boris


-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17  8:33                 ` Boris Brezillon
@ 2015-04-17  8:39                   ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-17  8:39 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Jason Cooper, Arnaud Ebalard, Gregory CLEMENT, Mark Rutland,
	Thomas Petazzoni, Herbert Xu, Pawel Moll, Ian Campbell,
	linux-arm-kernel, linux-kernel, Eran Ben-Avi, Nadav Haklai,
	devicetree, Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

On Fri, 17 Apr 2015 10:33:56 +0200
Boris Brezillon <boris.brezillon@free-electrons.com> wrote:

> Hi Jason,
> 
> On Mon, 13 Apr 2015 20:11:46 +0000
> Jason Cooper <jason@lakedaemon.net> wrote:
> 
> > > 
> > > > I'd appreciate if we'd look into it.  I understand from on-list and
> > > > off-list discussion that the rewrite was unavoidable.  So I'm willing to
> > > > concede that.  Giving people time to migrate from old to new while still
> > > > being able to update for other security fixes seems reasonable.
> > > 
> > > Jason, what do you think of the approach above? 
> > 
> > I say keep it simple.  We shouldn't use the DT changes to trigger one
> > vice the other.  We need to be able to build both, but only load one at
> > a time.  If that's anything other than simple to do, then we make it a
> > Kconfig binary choice and move on.
> 
> Actually I was planning to handle it with a Kconfig dependency rule
> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
> on !NEW_DRIVER).
> I don't know how to make it a runtime check without adding new
> compatible strings for the kirkwood, dove and orion platforms, and I'm
> sure sure this is a good idea.
  ^ not

> Do you have any ideas ?
> 
> Regards,
> 
> Boris
> 
> 



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17  8:39                   ` Boris Brezillon
@ 2015-04-17 10:59                     ` Jason Cooper
  -1 siblings, 0 replies; 67+ messages in thread
From: Jason Cooper @ 2015-04-17 10:59 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Arnaud Ebalard, Gregory CLEMENT, Mark Rutland, Thomas Petazzoni,
	Herbert Xu, Pawel Moll, Ian Campbell, linux-arm-kernel,
	linux-kernel, Eran Ben-Avi, Nadav Haklai, devicetree,
	Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

Hey Boris,

On Fri, Apr 17, 2015 at 10:39:46AM +0200, Boris Brezillon wrote:
> On Fri, 17 Apr 2015 10:33:56 +0200 Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
> > On Mon, 13 Apr 2015 20:11:46 +0000 Jason Cooper <jason@lakedaemon.net> wrote:
> > > > 
> > > > > I'd appreciate if we'd look into it.  I understand from on-list and
> > > > > off-list discussion that the rewrite was unavoidable.  So I'm willing to
> > > > > concede that.  Giving people time to migrate from old to new while still
> > > > > being able to update for other security fixes seems reasonable.
> > > > 
> > > > Jason, what do you think of the approach above? 
> > > 
> > > I say keep it simple.  We shouldn't use the DT changes to trigger one
> > > vice the other.  We need to be able to build both, but only load one at
> > > a time.  If that's anything other than simple to do, then we make it a
> > > Kconfig binary choice and move on.
> > 
> > Actually I was planning to handle it with a Kconfig dependency rule
> > (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
> > on !NEW_DRIVER).
> > I don't know how to make it a runtime check without adding new
> > compatible strings for the kirkwood, dove and orion platforms, and I'm
> > sure sure this is a good idea.
>   ^ not
> 
> > Do you have any ideas ?

I'm kinda wrapped up with dayjob stuff atm...  But I'd look at the wireless
drivers.  eg b43, b43legacy, brcm80211.  There are devices they overlap for.
So, they need to deconflict in some way.

thx,

Jason.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17  8:39                   ` Boris Brezillon
@ 2015-04-17 13:01                     ` Gregory CLEMENT
  -1 siblings, 0 replies; 67+ messages in thread
From: Gregory CLEMENT @ 2015-04-17 13:01 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Jason Cooper, Arnaud Ebalard, Mark Rutland, Thomas Petazzoni,
	Herbert Xu, Pawel Moll, Ian Campbell, linux-arm-kernel,
	linux-kernel, Eran Ben-Avi, Nadav Haklai, devicetree,
	Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

Hi Boris,

On 17/04/2015 10:39, Boris Brezillon wrote:
> On Fri, 17 Apr 2015 10:33:56 +0200
> Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
> 
>> Hi Jason,
>>
>> On Mon, 13 Apr 2015 20:11:46 +0000
>> Jason Cooper <jason@lakedaemon.net> wrote:
>>
>>>>
>>>>> I'd appreciate if we'd look into it.  I understand from on-list and
>>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
>>>>> concede that.  Giving people time to migrate from old to new while still
>>>>> being able to update for other security fixes seems reasonable.
>>>>
>>>> Jason, what do you think of the approach above? 
>>>
>>> I say keep it simple.  We shouldn't use the DT changes to trigger one
>>> vice the other.  We need to be able to build both, but only load one at
>>> a time.  If that's anything other than simple to do, then we make it a
>>> Kconfig binary choice and move on.
>>
>> Actually I was planning to handle it with a Kconfig dependency rule
>> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
>> on !NEW_DRIVER).
>> I don't know how to make it a runtime check without adding new
>> compatible strings for the kirkwood, dove and orion platforms, and I'm
>> sure sure this is a good idea.
>   ^ not
> 
>> Do you have any ideas ?

You use devm_ioremap_resource() in the new driver, so if the old one
is already loaded, the memory region will already be held and the new
driver will simply fail during probe. So this part is OK.

However, the old driver doesn't try to reserve the region; it directly
uses ioremap(). So if the new driver is loaded first, the old one will
still manage to load as well. I think that just adding a
request_region()/release_region() pair (or converting the ioremap() to a
devm_ioremap_resource()) in the old driver would be enough.
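
To illustrate (a minimal sketch only; the probe function and names below
are placeholders, not the actual mv_cesa code), the old driver's probe
could do something like this, so that whichever driver gets the region
second simply fails with an error:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int mv_cesa_old_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *regs;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);

	/*
	 * Instead of a bare ioremap(res->start, resource_size(res)),
	 * devm_ioremap_resource() requests the memory region first: if
	 * the other CESA driver already owns it, this returns an error
	 * pointer and the probe bails out.
	 */
	regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(regs))
		return PTR_ERR(regs);

	/* ... rest of the original probe, using 'regs' ... */
	return 0;
}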


Gregory



-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17 13:01                     ` Gregory CLEMENT
@ 2015-04-17 14:19                       ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-17 14:19 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: Jason Cooper, Arnaud Ebalard, Mark Rutland, Thomas Petazzoni,
	Herbert Xu, Pawel Moll, Ian Campbell, linux-arm-kernel,
	linux-kernel, Eran Ben-Avi, Nadav Haklai, devicetree,
	Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

Hi Gregory,

On Fri, 17 Apr 2015 15:01:01 +0200
Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:

> Hi Boris,
> 
> On 17/04/2015 10:39, Boris Brezillon wrote:
> > On Fri, 17 Apr 2015 10:33:56 +0200
> > Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
> > 
> >> Hi Jason,
> >>
> >> On Mon, 13 Apr 2015 20:11:46 +0000
> >> Jason Cooper <jason@lakedaemon.net> wrote:
> >>
> >>>>
> >>>>> I'd appreciate if we'd look into it.  I understand from on-list and
> >>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
> >>>>> concede that.  Giving people time to migrate from old to new while still
> >>>>> being able to update for other security fixes seems reasonable.
> >>>>
> >>>> Jason, what do you think of the approach above? 
> >>>
> >>> I say keep it simple.  We shouldn't use the DT changes to trigger one
> >>> vice the other.  We need to be able to build both, but only load one at
> >>> a time.  If that's anything other than simple to do, then we make it a
> >>> Kconfig binary choice and move on.
> >>
> >> Actually I was planning to handle it with a Kconfig dependency rule
> >> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
> >> on !NEW_DRIVER).
> >> I don't know how to make it a runtime check without adding new
> >> compatible strings for the kirkwood, dove and orion platforms, and I'm
> >> sure sure this is a good idea.
> >   ^ not
> > 
> >> Do you have any ideas ?
> 
> You use devm_ioremap_resource() in the new driver, so if the old one
> is already loaded the memory region will be already hold and the new
> driver will simply fail during the probe. So for this part it is OK.

I like the idea :-).

> 
> However, the old driver doesn't try to reserve the region, it directly
> uses an ioremap(). So if the new driver is loaded first, then the old
> one will manage to be loaded too. I think that just adding a
> request_region()/release_region() (or converting the ioremap in a
> devm_ioremap_resource() in the old driver would be enough.

Absolutely. Unless someone is opposed to it, I think I'll go with this
solution.

Thanks,

Boris


-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17 14:19                       ` Boris Brezillon
@ 2015-04-17 14:32                         ` Maxime Ripard
  -1 siblings, 0 replies; 67+ messages in thread
From: Maxime Ripard @ 2015-04-17 14:32 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Gregory CLEMENT, Jason Cooper, Arnaud Ebalard, Mark Rutland,
	Thomas Petazzoni, Herbert Xu, Pawel Moll, Ian Campbell,
	linux-arm-kernel, linux-kernel, Eran Ben-Avi, Nadav Haklai,
	devicetree, Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

On Fri, Apr 17, 2015 at 04:19:22PM +0200, Boris Brezillon wrote:
> Hi Gregory,
> 
> On Fri, 17 Apr 2015 15:01:01 +0200
> Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:
> 
> > Hi Boris,
> > 
> > On 17/04/2015 10:39, Boris Brezillon wrote:
> > > On Fri, 17 Apr 2015 10:33:56 +0200
> > > Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
> > > 
> > >> Hi Jason,
> > >>
> > >> On Mon, 13 Apr 2015 20:11:46 +0000
> > >> Jason Cooper <jason@lakedaemon.net> wrote:
> > >>
> > >>>>
> > >>>>> I'd appreciate if we'd look into it.  I understand from on-list and
> > >>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
> > >>>>> concede that.  Giving people time to migrate from old to new while still
> > >>>>> being able to update for other security fixes seems reasonable.
> > >>>>
> > >>>> Jason, what do you think of the approach above? 
> > >>>
> > >>> I say keep it simple.  We shouldn't use the DT changes to trigger one
> > >>> vice the other.  We need to be able to build both, but only load one at
> > >>> a time.  If that's anything other than simple to do, then we make it a
> > >>> Kconfig binary choice and move on.
> > >>
> > >> Actually I was planning to handle it with a Kconfig dependency rule
> > >> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
> > >> on !NEW_DRIVER).
> > >> I don't know how to make it a runtime check without adding new
> > >> compatible strings for the kirkwood, dove and orion platforms, and I'm
> > >> sure sure this is a good idea.
> > >   ^ not
> > > 
> > >> Do you have any ideas ?
> > 
> > You use devm_ioremap_resource() in the new driver, so if the old one
> > is already loaded the memory region will be already hold and the new
> > driver will simply fail during the probe. So for this part it is OK.
> 
> I like the idea :-).

Not really, how do you know which device is going to be probed? For
that matter, it's pretty much random, and you have no control over it.

Why not just have a choice option, and select which one you want to
enable?

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17 14:32                         ` Maxime Ripard
@ 2015-04-17 14:40                           ` Gregory CLEMENT
  -1 siblings, 0 replies; 67+ messages in thread
From: Gregory CLEMENT @ 2015-04-17 14:40 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Boris Brezillon, Jason Cooper, Arnaud Ebalard, Mark Rutland,
	Thomas Petazzoni, Herbert Xu, Pawel Moll, Ian Campbell,
	linux-arm-kernel, linux-kernel, Eran Ben-Avi, Nadav Haklai,
	devicetree, Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

Hi Maxime,

On 17/04/2015 16:32, Maxime Ripard wrote:
> On Fri, Apr 17, 2015 at 04:19:22PM +0200, Boris Brezillon wrote:
>> Hi Gregory,
>>
>> On Fri, 17 Apr 2015 15:01:01 +0200
>> Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:
>>
>>> Hi Boris,
>>>
>>> On 17/04/2015 10:39, Boris Brezillon wrote:
>>>> On Fri, 17 Apr 2015 10:33:56 +0200
>>>> Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
>>>>
>>>>> Hi Jason,
>>>>>
>>>>> On Mon, 13 Apr 2015 20:11:46 +0000
>>>>> Jason Cooper <jason@lakedaemon.net> wrote:
>>>>>
>>>>>>>
>>>>>>>> I'd appreciate if we'd look into it.  I understand from on-list and
>>>>>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
>>>>>>>> concede that.  Giving people time to migrate from old to new while still
>>>>>>>> being able to update for other security fixes seems reasonable.
>>>>>>>
>>>>>>> Jason, what do you think of the approach above? 
>>>>>>
>>>>>> I say keep it simple.  We shouldn't use the DT changes to trigger one
>>>>>> vice the other.  We need to be able to build both, but only load one at
>>>>>> a time.  If that's anything other than simple to do, then we make it a
>>>>>> Kconfig binary choice and move on.
>>>>>
>>>>> Actually I was planning to handle it with a Kconfig dependency rule
>>>>> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
>>>>> on !NEW_DRIVER).
>>>>> I don't know how to make it a runtime check without adding new
>>>>> compatible strings for the kirkwood, dove and orion platforms, and I'm
>>>>> sure sure this is a good idea.
>>>>   ^ not
>>>>
>>>>> Do you have any ideas ?
>>>
>>> You use devm_ioremap_resource() in the new driver, so if the old one
>>> is already loaded the memory region will be already hold and the new
>>> driver will simply fail during the probe. So for this part it is OK.
>>
>> I like the idea :-).
> 
> Not really, how do you know which device is going to be probed? For
> that matter, it's pretty much random, and you have no control over it.
> 
> Why not just have a choice option, and select which one you want to
> enable?

Because you can't prevent a user from building one module, then modifying the
configuration and building the other module. So even if there is a choice at
build time, and I think that is something expected for the v2, we still need
to prevent both drivers from trying to access the same hardware at the
same time.


Thanks,

Gregory



> 
> Maxime
> 


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17 14:40                           ` Gregory CLEMENT
@ 2015-04-17 14:50                             ` Maxime Ripard
  -1 siblings, 0 replies; 67+ messages in thread
From: Maxime Ripard @ 2015-04-17 14:50 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: Boris Brezillon, Jason Cooper, Arnaud Ebalard, Mark Rutland,
	Thomas Petazzoni, Herbert Xu, Pawel Moll, Ian Campbell,
	linux-arm-kernel, linux-kernel, Eran Ben-Avi, Nadav Haklai,
	devicetree, Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

On Fri, Apr 17, 2015 at 04:40:43PM +0200, Gregory CLEMENT wrote:
> Hi Maxime,
> 
> On 17/04/2015 16:32, Maxime Ripard wrote:
> > On Fri, Apr 17, 2015 at 04:19:22PM +0200, Boris Brezillon wrote:
> >> Hi Gregory,
> >>
> >> On Fri, 17 Apr 2015 15:01:01 +0200
> >> Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:
> >>
> >>> Hi Boris,
> >>>
> >>> On 17/04/2015 10:39, Boris Brezillon wrote:
> >>>> On Fri, 17 Apr 2015 10:33:56 +0200
> >>>> Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
> >>>>
> >>>>> Hi Jason,
> >>>>>
> >>>>> On Mon, 13 Apr 2015 20:11:46 +0000
> >>>>> Jason Cooper <jason@lakedaemon.net> wrote:
> >>>>>
> >>>>>>>
> >>>>>>>> I'd appreciate if we'd look into it.  I understand from on-list and
> >>>>>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
> >>>>>>>> concede that.  Giving people time to migrate from old to new while still
> >>>>>>>> being able to update for other security fixes seems reasonable.
> >>>>>>>
> >>>>>>> Jason, what do you think of the approach above? 
> >>>>>>
> >>>>>> I say keep it simple.  We shouldn't use the DT changes to trigger one
> >>>>>> vice the other.  We need to be able to build both, but only load one at
> >>>>>> a time.  If that's anything other than simple to do, then we make it a
> >>>>>> Kconfig binary choice and move on.
> >>>>>
> >>>>> Actually I was planning to handle it with a Kconfig dependency rule
> >>>>> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
> >>>>> on !NEW_DRIVER).
> >>>>> I don't know how to make it a runtime check without adding new
> >>>>> compatible strings for the kirkwood, dove and orion platforms, and I'm
> >>>>> sure sure this is a good idea.
> >>>>   ^ not
> >>>>
> >>>>> Do you have any ideas ?
> >>>
> >>> You use devm_ioremap_resource() in the new driver, so if the old one
> >>> is already loaded the memory region will be already hold and the new
> >>> driver will simply fail during the probe. So for this part it is OK.
> >>
> >> I like the idea :-).
> > 
> > Not really, how do you know which device is going to be probed? For
> > that matter, it's pretty much random, and you have no control over it.
> > 
> > Why not just have a choice option, and select which one you want to
> > enable?
> 
> Because you can't prevent an user to build a module, then modifying the
> configuration and building the other module.

Well, actually, you don't even know if it's going to be a module. You
might very well have both drivers compiled statically in the kernel
image, and this is where the trouble begins.

> So even if there is a choice at build time, and I think that it is
> something expected for the v2, we still need preventing having the
> both drivers trying accessing the same hardware in the same time.

Of course, but this is already there, and doesn't really address the
same issue.

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17 14:50                             ` Maxime Ripard
@ 2015-04-17 15:01                               ` Gregory CLEMENT
  0 siblings, 0 replies; 67+ messages in thread
From: Gregory CLEMENT @ 2015-04-17 15:01 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Boris Brezillon, Jason Cooper, Arnaud Ebalard, Mark Rutland,
	Thomas Petazzoni, Herbert Xu, Pawel Moll, Ian Campbell,
	linux-arm-kernel, linux-kernel, Eran Ben-Avi, Nadav Haklai,
	devicetree, Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

On 17/04/2015 16:50, Maxime Ripard wrote:
> On Fri, Apr 17, 2015 at 04:40:43PM +0200, Gregory CLEMENT wrote:
>> Hi Maxime,
>>
>> On 17/04/2015 16:32, Maxime Ripard wrote:
>>> On Fri, Apr 17, 2015 at 04:19:22PM +0200, Boris Brezillon wrote:
>>>> Hi Gregory,
>>>>
>>>> On Fri, 17 Apr 2015 15:01:01 +0200
>>>> Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:
>>>>
>>>>> Hi Boris,
>>>>>
>>>>> On 17/04/2015 10:39, Boris Brezillon wrote:
>>>>>> On Fri, 17 Apr 2015 10:33:56 +0200
>>>>>> Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
>>>>>>
>>>>>>> Hi Jason,
>>>>>>>
>>>>>>> On Mon, 13 Apr 2015 20:11:46 +0000
>>>>>>> Jason Cooper <jason@lakedaemon.net> wrote:
>>>>>>>
>>>>>>>>>
>>>>>>>>>> I'd appreciate if we'd look into it.  I understand from on-list and
>>>>>>>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
>>>>>>>>>> concede that.  Giving people time to migrate from old to new while still
>>>>>>>>>> being able to update for other security fixes seems reasonable.
>>>>>>>>>
>>>>>>>>> Jason, what do you think of the approach above? 
>>>>>>>>
>>>>>>>> I say keep it simple.  We shouldn't use the DT changes to trigger one
>>>>>>>> vice the other.  We need to be able to build both, but only load one at
>>>>>>>> a time.  If that's anything other than simple to do, then we make it a
>>>>>>>> Kconfig binary choice and move on.
>>>>>>>
>>>>>>> Actually I was planning to handle it with a Kconfig dependency rule
>>>>>>> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
>>>>>>> on !NEW_DRIVER).
>>>>>>> I don't know how to make it a runtime check without adding new
>>>>>>> compatible strings for the kirkwood, dove and orion platforms, and I'm
>>>>>>> sure sure this is a good idea.
>>>>>>   ^ not
>>>>>>
>>>>>>> Do you have any ideas ?
>>>>>
>>>>> You use devm_ioremap_resource() in the new driver, so if the old one
>>>>> is already loaded the memory region will be already hold and the new
>>>>> driver will simply fail during the probe. So for this part it is OK.
>>>>
>>>> I like the idea :-).
>>>
>>> Not really, how do you know which device is going to be probed? For
>>> that matter, it's pretty much random, and you have no control over it.
>>>
>>> Why not just have a choice option, and select which one you want to
>>> enable?
>>
>> Because you can't prevent an user to build a module, then modifying the
>> configuration and building the other module.
> 
> Well, actually, you don't even know if it's going to be a module. You
> might very well have both drivers compiled statically in the kernel
> image, and this is where the trouble begins.

No, it won't be possible; Boris already spoke about this issue (see below):
"Actually I was planning to handle it with a Kconfig dependency rule
(NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
on !NEW_DRIVER)."

> 
>> So even if there is a choice at build time, and I think that it is
>> something expected for the v2, we still need preventing having the
>> both drivers trying accessing the same hardware in the same time.
> 
> Of course, but this is already there, and doesn't really address the
> same issue.

This was the only remaining issue (see below again):
"I don't know how to make it a runtime check". And my last emails
were about it.

Gregory


> 
> Maxime
> 


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17 15:01                               ` Gregory CLEMENT
@ 2015-04-17 15:49                                 ` Maxime Ripard
  -1 siblings, 0 replies; 67+ messages in thread
From: Maxime Ripard @ 2015-04-17 15:49 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: Boris Brezillon, Jason Cooper, Arnaud Ebalard, Mark Rutland,
	Thomas Petazzoni, Herbert Xu, Pawel Moll, Ian Campbell,
	linux-arm-kernel, linux-kernel, Eran Ben-Avi, Nadav Haklai,
	devicetree, Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

On Fri, Apr 17, 2015 at 05:01:55PM +0200, Gregory CLEMENT wrote:
> On 17/04/2015 16:50, Maxime Ripard wrote:
> > On Fri, Apr 17, 2015 at 04:40:43PM +0200, Gregory CLEMENT wrote:
> >> Hi Maxime,
> >>
> >> On 17/04/2015 16:32, Maxime Ripard wrote:
> >>> On Fri, Apr 17, 2015 at 04:19:22PM +0200, Boris Brezillon wrote:
> >>>> Hi Gregory,
> >>>>
> >>>> On Fri, 17 Apr 2015 15:01:01 +0200
> >>>> Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:
> >>>>
> >>>>> Hi Boris,
> >>>>>
> >>>>> On 17/04/2015 10:39, Boris Brezillon wrote:
> >>>>>> On Fri, 17 Apr 2015 10:33:56 +0200
> >>>>>> Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
> >>>>>>
> >>>>>>> Hi Jason,
> >>>>>>>
> >>>>>>> On Mon, 13 Apr 2015 20:11:46 +0000
> >>>>>>> Jason Cooper <jason@lakedaemon.net> wrote:
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>>> I'd appreciate if we'd look into it.  I understand from on-list and
> >>>>>>>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
> >>>>>>>>>> concede that.  Giving people time to migrate from old to new while still
> >>>>>>>>>> being able to update for other security fixes seems reasonable.
> >>>>>>>>>
> >>>>>>>>> Jason, what do you think of the approach above? 
> >>>>>>>>
> >>>>>>>> I say keep it simple.  We shouldn't use the DT changes to trigger one
> >>>>>>>> vice the other.  We need to be able to build both, but only load one at
> >>>>>>>> a time.  If that's anything other than simple to do, then we make it a
> >>>>>>>> Kconfig binary choice and move on.
> >>>>>>>
> >>>>>>> Actually I was planning to handle it with a Kconfig dependency rule
> >>>>>>> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
> >>>>>>> on !NEW_DRIVER).
> >>>>>>> I don't know how to make it a runtime check without adding new
> >>>>>>> compatible strings for the kirkwood, dove and orion platforms, and I'm
> >>>>>>> sure sure this is a good idea.
> >>>>>>   ^ not
> >>>>>>
> >>>>>>> Do you have any ideas ?
> >>>>>
> >>>>> You use devm_ioremap_resource() in the new driver, so if the old one
> >>>>> is already loaded the memory region will be already hold and the new
> >>>>> driver will simply fail during the probe. So for this part it is OK.
> >>>>
> >>>> I like the idea :-).
> >>>
> >>> Not really, how do you know which device is going to be probed? For
> >>> that matter, it's pretty much random, and you have no control over it.
> >>>
> >>> Why not just have a choice option, and select which one you want to
> >>> enable?
> >>
> >> Because you can't prevent an user to build a module, then modifying the
> >> configuration and building the other module.
> > 
> > Well, actually, you don't even know if it's going to be a module. You
> > might very well have both drivers compiled statically in the kernel
> > image, and this is where the trouble begins.
> 
> No it won't be possible, Boris already speak about this issue (see below):
> "Actually I was planning to handle it with a Kconfig dependency rule
> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
> on !NEW_DRIVER)."

Which is a circular dependency and won't work.

> >> So even if there is a choice at build time, and I think that it is
> >> something expected for the v2, we still need preventing having the
> >> both drivers trying accessing the same hardware in the same time.
> > 
> > Of course, but this is already there, and doesn't really address the
> > same issue.
> 
> This was the only issue remaining, (see below again):
> "I don't know how to make it a runtime check ". And my last emails
> was bout it.

Ok, my bad then :)

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-17 15:49                                 ` Maxime Ripard
@ 2015-04-17 16:04                                   ` Gregory CLEMENT
  0 siblings, 0 replies; 67+ messages in thread
From: Gregory CLEMENT @ 2015-04-17 16:04 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Boris Brezillon, Jason Cooper, Arnaud Ebalard, Mark Rutland,
	Thomas Petazzoni, Herbert Xu, Pawel Moll, Ian Campbell,
	linux-arm-kernel, linux-kernel, Eran Ben-Avi, Nadav Haklai,
	devicetree, Rob Herring, Andrew Lunn, linux-crypto, Kumar Gala,
	Tawfik Bayouk, David S. Miller, Lior Amsalem,
	Sebastian Hesselbarth

On 17/04/2015 17:49, Maxime Ripard wrote:
> On Fri, Apr 17, 2015 at 05:01:55PM +0200, Gregory CLEMENT wrote:
>> On 17/04/2015 16:50, Maxime Ripard wrote:
>>> On Fri, Apr 17, 2015 at 04:40:43PM +0200, Gregory CLEMENT wrote:
>>>> Hi Maxime,
>>>>
>>>> On 17/04/2015 16:32, Maxime Ripard wrote:
>>>>> On Fri, Apr 17, 2015 at 04:19:22PM +0200, Boris Brezillon wrote:
>>>>>> Hi Gregory,
>>>>>>
>>>>>> On Fri, 17 Apr 2015 15:01:01 +0200
>>>>>> Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:
>>>>>>
>>>>>>> Hi Boris,
>>>>>>>
>>>>>>> On 17/04/2015 10:39, Boris Brezillon wrote:
>>>>>>>> On Fri, 17 Apr 2015 10:33:56 +0200
>>>>>>>> Boris Brezillon <boris.brezillon@free-electrons.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Jason,
>>>>>>>>>
>>>>>>>>> On Mon, 13 Apr 2015 20:11:46 +0000
>>>>>>>>> Jason Cooper <jason@lakedaemon.net> wrote:
>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> I'd appreciate if we'd look into it.  I understand from on-list and
>>>>>>>>>>>> off-list discussion that the rewrite was unavoidable.  So I'm willing to
>>>>>>>>>>>> concede that.  Giving people time to migrate from old to new while still
>>>>>>>>>>>> being able to update for other security fixes seems reasonable.
>>>>>>>>>>>
>>>>>>>>>>> Jason, what do you think of the approach above? 
>>>>>>>>>>
>>>>>>>>>> I say keep it simple.  We shouldn't use the DT changes to trigger one
>>>>>>>>>> vice the other.  We need to be able to build both, but only load one at
>>>>>>>>>> a time.  If that's anything other than simple to do, then we make it a
>>>>>>>>>> Kconfig binary choice and move on.
>>>>>>>>>
>>>>>>>>> Actually I was planning to handle it with a Kconfig dependency rule
>>>>>>>>> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
>>>>>>>>> on !NEW_DRIVER).
>>>>>>>>> I don't know how to make it a runtime check without adding new
>>>>>>>>> compatible strings for the kirkwood, dove and orion platforms, and I'm
>>>>>>>>> sure sure this is a good idea.
>>>>>>>>   ^ not
>>>>>>>>
>>>>>>>>> Do you have any ideas ?
>>>>>>>
>>>>>>> You use devm_ioremap_resource() in the new driver, so if the old one
>>>>>>> is already loaded the memory region will be already hold and the new
>>>>>>> driver will simply fail during the probe. So for this part it is OK.
>>>>>>
>>>>>> I like the idea :-).
>>>>>
>>>>> Not really, how do you know which device is going to be probed? For
>>>>> that matter, it's pretty much random, and you have no control over it.
>>>>>
>>>>> Why not just have a choice option, and select which one you want to
>>>>> enable?
>>>>
>>>> Because you can't prevent an user to build a module, then modifying the
>>>> configuration and building the other module.
>>>
>>> Well, actually, you don't even know if it's going to be a module. You
>>> might very well have both drivers compiled statically in the kernel
>>> image, and this is where the trouble begins.
>>
>> No it won't be possible, Boris already speak about this issue (see below):
>> "Actually I was planning to handle it with a Kconfig dependency rule
>> (NEW_DRIVER depends on !OLD_DRIVER and OLD_DRIVER depends
>> on !NEW_DRIVER)."
> 
> Which is a circular dependency and won't work.

Indeed.

Boris, what about using choice/endchoice?
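
Something along these lines, I mean (a rough sketch only; the symbol
names are made up, not the real Kconfig entries):

choice
	prompt "Marvell CESA driver implementation"
	# plus whatever platform dependency both entries need
	default CRYPTO_MV_CESA_OLD_SKETCH

config CRYPTO_MV_CESA_OLD_SKETCH
	bool "Legacy mv_cesa driver"

config CRYPTO_MV_CESA_NEW_SKETCH
	bool "New marvell/cesa driver"

endchoice

With bool entries only one of the two can ever be enabled. IIRC, if the
entries are made tristate instead, both can still end up built as
modules when the choice itself is 'm', so the runtime exclusion we
discussed above would still be needed in that case.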

Thanks,

Gregory



> 
>>>> So even if there is a choice at build time, and I think that it is
>>>> something expected for the v2, we still need preventing having the
>>>> both drivers trying accessing the same hardware in the same time.
>>>
>>> Of course, but this is already there, and doesn't really address the
>>> same issue.
>>
>> This was the only issue remaining, (see below again):
>> "I don't know how to make it a runtime check ". And my last emails
>> was bout it.
> 
> Ok, my bad then :)
> 
> Maxime
> 


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-09 14:58 ` Boris Brezillon
@ 2015-04-28 19:52   ` Boris Brezillon
  -1 siblings, 0 replies; 67+ messages in thread
From: Boris Brezillon @ 2015-04-28 19:52 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller
  Cc: Boris Brezillon, linux-crypto, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard

Herbert, David,

Any comments on the crypto driver implementation?

I've had several reviews focused on:
1/ splitting the patch series into smaller subsets
2/ allowing for a smoother transition from the old driver to the new one

I'll address (or have addressed) all of these comments, but I'd like to
have your opinion on the crypto driver itself.

In particular, I'd like to discuss the threaded-irq approach taken in
this driver (other drivers are using tasklets).
The main reason behind this choice is that the crypto engines are quite
fast, and I'm worried about the CPU contention that might happen when
the engines are fully loaded (the CPU might spend most of its time in
interrupt context until the crypto queue is emptied).
Using a threaded irq allows the crypto completion work to be preempted,
so that another, higher-priority thread can be scheduled. OTOH, the
tasklet approach provides slightly better performance (I don't recall
the exact numbers, but Arnaud did some tests).
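
To make the comparison concrete, here is a stripped-down sketch of the two
models (only the general shape, with made-up names, not the actual driver
code):

#include <linux/device.h>
#include <linux/interrupt.h>

/*
 * Threaded-irq flavour: the hard handler is left NULL and the completion
 * work runs in a dedicated, preemptible irq thread, so a higher-priority
 * task can still be scheduled while the crypto queue is being drained.
 */
static irqreturn_t cesa_complete_thread(int irq, void *priv)
{
	/* complete the finished request and kick the next one */
	return IRQ_HANDLED;
}

static int cesa_setup_threaded_irq(struct device *dev, int irq, void *engine)
{
	return devm_request_threaded_irq(dev, irq, NULL, cesa_complete_thread,
					 IRQF_ONESHOT, dev_name(dev), engine);
}

/*
 * Tasklet flavour: a bit less overhead, but the completion work runs in
 * softirq context and cannot be preempted by other threads.
 */
static void cesa_complete_tasklet(unsigned long data)
{
	/* same completion work, but in softirq context */
}
static DECLARE_TASKLET(cesa_tasklet, cesa_complete_tasklet, 0);

static irqreturn_t cesa_hard_irq(int irq, void *priv)
{
	/* ack the engine, then defer the completion work */
	tasklet_schedule(&cesa_tasklet);
	return IRQ_HANDLED;
}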

On Thu,  9 Apr 2015 16:58:41 +0200
Boris Brezillon <boris.brezillon@free-electrons.com> wrote:

> Hello,
> 
> This is an attempt to replace the mv_cesa driver by a new one to address
> some limitations of the existing driver.
> From a performance and CPU load point of view the most important
> limitation is the lack of DMA support, thus preventing us from chaining
> crypto operations.
> 
> I know we usually try to adapt existing drivers instead of replacing them
> by new ones, but after trying to refactor the mv_cesa driver I realized it
> would take longer than writing an new one from scratch.
> 
> Here are the main features brought by this new driver:
> - support for armada SoCs (up to 38x) while keeping support for older ones
>   (Orion and Kirkwood)
> - DMA mode to offload the CPU in case of intensive crypto usage
> - new algorithms: SHA256, DES and 3DES
> 
> I'd like to thank Arnaud, who has carefully reviewed several iterations of
> this driver, helped me improved my implementation, provided support for
> several crypto algorithms, provided support for armada-370 and tested
> the driver on different platforms, hence the SoB and dual MODULE_AUTHOR
> in the driver code.



> 
> Best Regards,
> 
> Boris
> 
> Boris Brezillon (2):
>   crypto: add new driver for Marvell CESA
>   crypto: marvell/CESA: update DT bindings documentation
> 
>  .../devicetree/bindings/crypto/mv_cesa.txt         |   50 +-
>  drivers/crypto/Kconfig                             |    2 +
>  drivers/crypto/Makefile                            |    2 +-
>  drivers/crypto/marvell/Makefile                    |    1 +
>  drivers/crypto/marvell/cesa.c                      |  539 ++++++++
>  drivers/crypto/marvell/cesa.h                      |  802 ++++++++++++
>  drivers/crypto/marvell/cipher.c                    |  761 +++++++++++
>  drivers/crypto/marvell/hash.c                      | 1349 ++++++++++++++++++++
>  drivers/crypto/marvell/tdma.c                      |  223 ++++
>  drivers/crypto/mv_cesa.c                           | 1193 -----------------
>  drivers/crypto/mv_cesa.h                           |  150 ---
>  11 files changed, 3716 insertions(+), 1356 deletions(-)
>  create mode 100644 drivers/crypto/marvell/Makefile
>  create mode 100644 drivers/crypto/marvell/cesa.c
>  create mode 100644 drivers/crypto/marvell/cesa.h
>  create mode 100644 drivers/crypto/marvell/cipher.c
>  create mode 100644 drivers/crypto/marvell/hash.c
>  create mode 100644 drivers/crypto/marvell/tdma.c
>  delete mode 100644 drivers/crypto/mv_cesa.c
>  delete mode 100644 drivers/crypto/mv_cesa.h
> 



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/2] crypto: add new driver for Marvell CESA
  2015-04-28 19:52   ` Boris Brezillon
@ 2015-04-29  9:49     ` Herbert Xu
  -1 siblings, 0 replies; 67+ messages in thread
From: Herbert Xu @ 2015-04-29  9:49 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: David S. Miller, linux-crypto, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Tawfik Bayouk, Lior Amsalem, Nadav Haklai, Eran Ben-Avi,
	Thomas Petazzoni, Gregory CLEMENT, Jason Cooper,
	Sebastian Hesselbarth, Andrew Lunn, linux-arm-kernel,
	linux-kernel, Arnaud Ebalard

On Tue, Apr 28, 2015 at 09:52:32PM +0200, Boris Brezillon wrote:
> 
> In particular, I'd like to discuss the threaded-irq approach taken in
> this driver (other drivers are using tasklets).
> The main reason behind this choice is that the crypto engines are quite
> fast, and I'm worried about the CPU contention that might happen when
> the engines are fully loaded (the CPU might spend most of its time in
> interrupt context until the crypto queue is emptied).
> Using a threaded irq allows the crypto completion work to be preempted,
> so that another, higher-priority thread can be scheduled. OTOH, the
> tasklet approach provides slightly better performance (I don't recall
> the exact numbers, but Arnaud did some tests).

Either approach is fine with me.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 67+ messages in thread

end of thread, other threads:[~2015-04-29  9:49 UTC | newest]

Thread overview: 67+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-09 14:58 [PATCH 0/2] crypto: add new driver for Marvell CESA Boris Brezillon
2015-04-09 14:58 ` Boris Brezillon
2015-04-09 14:58 ` [PATCH 1/2] " Boris Brezillon
     [not found]   ` <1428591523-1780-2-git-send-email-boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
2015-04-10 10:38     ` Paul Bolle
2015-04-10 10:38       ` Paul Bolle
2015-04-10 10:38       ` Paul Bolle
2015-04-10 11:17       ` Boris Brezillon
2015-04-10 11:17         ` Boris Brezillon
2015-04-09 14:58 ` [PATCH 2/2] crypto: marvell/CESA: update DT bindings documentation Boris Brezillon
2015-04-09 14:58   ` Boris Brezillon
2015-04-09 15:18 ` [PATCH 0/2] crypto: add new driver for Marvell CESA Andrew Lunn
2015-04-09 15:18   ` Andrew Lunn
     [not found]   ` <20150409172826.18916274@bbrezillon>
2015-04-09 15:37     ` Andrew Lunn
2015-04-09 15:37     ` Andrew Lunn
2015-04-09 15:37       ` Andrew Lunn
2015-04-09 15:37       ` Andrew Lunn
2015-04-09 15:34 ` Sebastian Hesselbarth
2015-04-09 15:34   ` Sebastian Hesselbarth
2015-04-09 15:57   ` Boris Brezillon
2015-04-09 15:57     ` Boris Brezillon
2015-04-09 23:21     ` Arnaud Ebalard
2015-04-09 23:21       ` Arnaud Ebalard
2015-04-09 23:21       ` Arnaud Ebalard
2015-04-09 15:52 ` Stephan Mueller
2015-04-09 15:52   ` Stephan Mueller
2015-04-10 13:50 ` Jason Cooper
2015-04-10 13:50   ` Jason Cooper
2015-04-10 15:11   ` Boris Brezillon
2015-04-10 15:11     ` Boris Brezillon
2015-04-10 22:30     ` Jason Cooper
2015-04-10 22:30       ` Jason Cooper
2015-04-13  9:39       ` Gregory CLEMENT
2015-04-13  9:39         ` Gregory CLEMENT
2015-04-13 12:47         ` Jason Cooper
2015-04-13 12:47           ` Jason Cooper
2015-04-13 16:06           ` Arnaud Ebalard
2015-04-13 16:06             ` Arnaud Ebalard
2015-04-13 20:11             ` Jason Cooper
2015-04-13 20:11               ` Jason Cooper
2015-04-17  8:33               ` Boris Brezillon
2015-04-17  8:33                 ` Boris Brezillon
2015-04-17  8:39                 ` Boris Brezillon
2015-04-17  8:39                   ` Boris Brezillon
2015-04-17 10:59                   ` Jason Cooper
2015-04-17 10:59                     ` Jason Cooper
2015-04-17 13:01                   ` Gregory CLEMENT
2015-04-17 13:01                     ` Gregory CLEMENT
2015-04-17 14:19                     ` Boris Brezillon
2015-04-17 14:19                       ` Boris Brezillon
2015-04-17 14:32                       ` Maxime Ripard
2015-04-17 14:32                         ` Maxime Ripard
2015-04-17 14:40                         ` Gregory CLEMENT
2015-04-17 14:40                           ` Gregory CLEMENT
2015-04-17 14:50                           ` Maxime Ripard
2015-04-17 14:50                             ` Maxime Ripard
2015-04-17 15:01                             ` Gregory CLEMENT
2015-04-17 15:01                               ` Gregory CLEMENT
2015-04-17 15:01                               ` Gregory CLEMENT
2015-04-17 15:49                               ` Maxime Ripard
2015-04-17 15:49                                 ` Maxime Ripard
2015-04-17 16:04                                 ` Gregory CLEMENT
2015-04-17 16:04                                   ` Gregory CLEMENT
2015-04-17 16:04                                   ` Gregory CLEMENT
2015-04-28 19:52 ` Boris Brezillon
2015-04-28 19:52   ` Boris Brezillon
2015-04-29  9:49   ` Herbert Xu
2015-04-29  9:49     ` Herbert Xu
