linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] crypto: add IV generation templates
@ 2018-07-18  7:30 Xiongfeng Wang
  2018-07-18  7:30 ` [PATCH 1/5] crypto: api - introduce API to (un)register an array of templates Xiongfeng Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-18  7:30 UTC (permalink / raw)
  To: agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, wangxiongfeng2, broonie, arnd, jonathan.cameron

Currently, the IV generation algorithms are implemented in dm-crypt.c.
This patchset moves these algorithms from the dm layer into the kernel
crypto layer by implementing them as template ciphers, so that they can
also be implemented in hardware for better performance. We modify the dm
layer to send a whole 'bio' rather than one sector at a time, so the dm
layer needs to call into the crypto layer fewer times. Each bio contains
an in-memory representation of physically contiguous disk blocks. The dm
layer sets up a chained scatterlist of these blocks, split into
physically contiguous segments in memory, so that DMA can be performed.
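
As a rough illustration of that last point, a caller can build one
scatterlist covering every segment of a bio and hand it to the crypto
layer in a single request, instead of issuing one request per 512-byte
sector. This is only a sketch of the idea, not code from the patches;
bio_to_sg() is a hypothetical helper and error handling is minimal.

	#include <linux/bio.h>
	#include <linux/scatterlist.h>
	#include <linux/slab.h>

	static struct scatterlist *bio_to_sg(struct bio *bio, gfp_t gfp)
	{
		struct bio_vec bvec;
		struct bvec_iter iter;
		struct scatterlist *sg;
		unsigned int i = 0;

		sg = kmalloc_array(bio_segments(bio), sizeof(*sg), gfp);
		if (!sg)
			return NULL;

		/* one scatterlist entry per physically contiguous segment */
		sg_init_table(sg, bio_segments(bio));
		bio_for_each_segment(bvec, bio, iter)
			sg_set_page(&sg[i++], bvec.bv_page, bvec.bv_len,
				    bvec.bv_offset);
		return sg;
	}

The resulting list is then passed once to skcipher_request_set_crypt()
(or aead_request_set_crypt()) with the total bio size, and the IV
generation template splits it into 512-byte sectors internally.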

This patchset is based on the patchset originally started by
Binoy Jayan <binoy.jayan@linaro.org>
( crypto: Add IV generation algorithms
https://patchwork.kernel.org/patch/9803469/ )

I tested the performance of the software-implemented ciphers before and
after applying this patchset. The performance didn't change much, except
for a slight regression when writing. The detailed results are as follows.

The commands I used:
cryptsetup -y -c aes-xts-plain -s 256 --hash sha256 luksFormat /dev/sdd1
cryptsetup -y -c aes-cbc-essiv:sha256 -s 256 --hash sha256 luksFormat /dev/sdd1
cryptsetup -y -c aes-cbc-benbi -s 256 --hash sha256 luksFormat /dev/sdd1

cryptsetup luksOpen /dev/sdd1 crypt_fun
time dd if=/dev/mapper/crypt_fun of=/dev/null bs=1M count=500 iflag=direct
time dd if=/dev/zero of=/dev/mapper/crypt_fun bs=1M count=500 oflag=direct

Performance comparison:
------------------------------------------------------------
algorithm      |  before applying    |  after applying
               |  read    |  write   |  read    |  write
------------------------------------------------------------
aes-xts-plain  |  145.34  |  145.09  |  145.89  |  144.2
aes-cbc-essiv  |  146.87  |  144.62  |  146.74  |  143.41
aes-cbc-benbi  |  146.03  |  144.74  |  146.77  |  144.46
------------------------------------------------------------

Xiongfeng Wang (5):
  crypto: api - introduce API to (un)register an array of templates
  crypto: ccm - use template array registering API to simplify the code
  crypto: gcm - use template array registering API to simplify the code
  crypto: Add IV generation templates
  dm-crypt: modify dm-crypt to rely on IV generation templates

 crypto/Kconfig          |    7 +
 crypto/Makefile         |    1 +
 crypto/algapi.c         |   27 +
 crypto/ccm.c            |   82 +-
 crypto/gcm.c            |   76 +-
 crypto/geniv.c          | 2240 +++++++++++++++++++++++++++++++++++++++++++++++
 drivers/md/Kconfig      |    1 +
 drivers/md/dm-crypt.c   | 1697 ++++++++---------------------------
 include/crypto/algapi.h |    2 +
 include/crypto/geniv.h  |   47 +
 10 files changed, 2722 insertions(+), 1458 deletions(-)
 create mode 100644 crypto/geniv.c
 create mode 100644 include/crypto/geniv.h

-- 
1.7.12.4



* [PATCH 1/5] crypto: api - introduce API to (un)register an array of templates
  2018-07-18  7:30 [PATCH 0/5] crypto: add IV generation templates Xiongfeng Wang
@ 2018-07-18  7:30 ` Xiongfeng Wang
  2018-07-18  7:30 ` [PATCH 2/5] crypto: ccm - use template array registering API to simplify the code Xiongfeng Wang
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-18  7:30 UTC (permalink / raw)
  To: agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, wangxiongfeng2, broonie, arnd, jonathan.cameron

A later patch in this series introduces several crypto templates. To
simplify the code, this patch adds two APIs to (un)register an array of
templates.
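
A minimal usage sketch follows; "foo"/"bar" and their create functions
are hypothetical, the real users are the ccm and gcm conversions in the
next two patches:

	#include <crypto/algapi.h>
	#include <linux/module.h>

	static struct crypto_template foo_tmpls[] = {
		{
			.name = "foo",
			.create = foo_create,
			.module = THIS_MODULE,
		},
		{
			.name = "bar",
			.create = bar_create,
			.module = THIS_MODULE,
		},
	};

	static int __init foo_module_init(void)
	{
		return crypto_register_template_array(foo_tmpls,
						      ARRAY_SIZE(foo_tmpls));
	}

	static void __exit foo_module_exit(void)
	{
		crypto_unregister_template_array(foo_tmpls,
						 ARRAY_SIZE(foo_tmpls));
	}

On a partial failure, crypto_register_template_array() unregisters the
templates it had already registered before returning the error.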

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
---
 crypto/algapi.c         | 27 +++++++++++++++++++++++++++
 include/crypto/algapi.h |  2 ++
 2 files changed, 29 insertions(+)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index c0755cf..08c29e6 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -485,6 +485,24 @@ int crypto_register_template(struct crypto_template *tmpl)
 }
 EXPORT_SYMBOL_GPL(crypto_register_template);
 
+int crypto_register_template_array(struct crypto_template *tmpl, int num)
+{
+	int i, err;
+
+	for (i = 0; i < num; i++) {
+		err = crypto_register_template(&tmpl[i]);
+		if (err)
+			goto out;
+	}
+	return 0;
+
+out:
+	for (i -= 1; i >= 0; i--)
+		crypto_unregister_template(&tmpl[i]);
+	return err;
+}
+EXPORT_SYMBOL_GPL(crypto_register_template_array);
+
 void crypto_unregister_template(struct crypto_template *tmpl)
 {
 	struct crypto_instance *inst;
@@ -514,6 +532,15 @@ void crypto_unregister_template(struct crypto_template *tmpl)
 }
 EXPORT_SYMBOL_GPL(crypto_unregister_template);
 
+void crypto_unregister_template_array(struct crypto_template *tmpl, int num)
+{
+	int i;
+
+	for (i = 0; i < num; i++)
+		crypto_unregister_template(&tmpl[i]);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_template_array);
+
 static struct crypto_template *__crypto_lookup_template(const char *name)
 {
 	struct crypto_template *q, *tmpl = NULL;
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index bd5e8cc..14bfbe31 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -141,7 +141,9 @@ struct ablkcipher_walk {
 void crypto_mod_put(struct crypto_alg *alg);
 
 int crypto_register_template(struct crypto_template *tmpl);
+int crypto_register_template_array(struct crypto_template *tmpl, int num);
 void crypto_unregister_template(struct crypto_template *tmpl);
+void crypto_unregister_template_array(struct crypto_template *tmpl, int num);
 struct crypto_template *crypto_lookup_template(const char *name);
 
 int crypto_register_instance(struct crypto_template *tmpl,
-- 
1.7.12.4



* [PATCH 2/5] crypto: ccm - use template array registering API to simplify the code
  2018-07-18  7:30 [PATCH 0/5] crypto: add IV generation templates Xiongfeng Wang
  2018-07-18  7:30 ` [PATCH 1/5] crypto: api - introduce API to (un)register an array of templates Xiongfeng Wang
@ 2018-07-18  7:30 ` Xiongfeng Wang
  2018-07-18  7:30 ` [PATCH 3/5] crypto: gcm " Xiongfeng Wang
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-18  7:30 UTC (permalink / raw)
  To: agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, wangxiongfeng2, broonie, arnd, jonathan.cameron

Use the crypto template array registration API to simplify the code.

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
---
 crypto/ccm.c | 82 ++++++++++++++++++++----------------------------------------
 1 file changed, 27 insertions(+), 55 deletions(-)

diff --git a/crypto/ccm.c b/crypto/ccm.c
index 0a08334..1742d41 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -586,12 +586,6 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
 					mac_name);
 }
 
-static struct crypto_template crypto_ccm_tmpl = {
-	.name = "ccm",
-	.create = crypto_ccm_create,
-	.module = THIS_MODULE,
-};
-
 static int crypto_ccm_base_create(struct crypto_template *tmpl,
 				  struct rtattr **tb)
 {
@@ -615,12 +609,6 @@ static int crypto_ccm_base_create(struct crypto_template *tmpl,
 					cipher_name);
 }
 
-static struct crypto_template crypto_ccm_base_tmpl = {
-	.name = "ccm_base",
-	.create = crypto_ccm_base_create,
-	.module = THIS_MODULE,
-};
-
 static int crypto_rfc4309_setkey(struct crypto_aead *parent, const u8 *key,
 				 unsigned int keylen)
 {
@@ -851,12 +839,6 @@ static int crypto_rfc4309_create(struct crypto_template *tmpl,
 	goto out;
 }
 
-static struct crypto_template crypto_rfc4309_tmpl = {
-	.name = "rfc4309",
-	.create = crypto_rfc4309_create,
-	.module = THIS_MODULE,
-};
-
 static int crypto_cbcmac_digest_setkey(struct crypto_shash *parent,
 				     const u8 *inkey, unsigned int keylen)
 {
@@ -996,51 +978,41 @@ static int cbcmac_create(struct crypto_template *tmpl, struct rtattr **tb)
 	return err;
 }
 
-static struct crypto_template crypto_cbcmac_tmpl = {
-	.name = "cbcmac",
-	.create = cbcmac_create,
-	.free = shash_free_instance,
-	.module = THIS_MODULE,
+#define CCM_TEMPLATE_NUM 4
+
+static struct crypto_template crypto_ccm_tmpl[CCM_TEMPLATE_NUM] = {
+	{
+		.name = "cbcmac",
+		.create = cbcmac_create,
+		.free = shash_free_instance,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "ccm_base",
+		.create = crypto_ccm_base_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "ccm",
+		.create = crypto_ccm_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "rfc4309",
+		.create = crypto_rfc4309_create,
+		.module = THIS_MODULE,
+	},
 };
 
 static int __init crypto_ccm_module_init(void)
 {
-	int err;
-
-	err = crypto_register_template(&crypto_cbcmac_tmpl);
-	if (err)
-		goto out;
-
-	err = crypto_register_template(&crypto_ccm_base_tmpl);
-	if (err)
-		goto out_undo_cbcmac;
-
-	err = crypto_register_template(&crypto_ccm_tmpl);
-	if (err)
-		goto out_undo_base;
-
-	err = crypto_register_template(&crypto_rfc4309_tmpl);
-	if (err)
-		goto out_undo_ccm;
-
-out:
-	return err;
-
-out_undo_ccm:
-	crypto_unregister_template(&crypto_ccm_tmpl);
-out_undo_base:
-	crypto_unregister_template(&crypto_ccm_base_tmpl);
-out_undo_cbcmac:
-	crypto_register_template(&crypto_cbcmac_tmpl);
-	goto out;
+	return crypto_register_template_array(crypto_ccm_tmpl,
+			CCM_TEMPLATE_NUM);
 }
 
 static void __exit crypto_ccm_module_exit(void)
 {
-	crypto_unregister_template(&crypto_rfc4309_tmpl);
-	crypto_unregister_template(&crypto_ccm_tmpl);
-	crypto_unregister_template(&crypto_ccm_base_tmpl);
-	crypto_unregister_template(&crypto_cbcmac_tmpl);
+	crypto_unregister_template_array(crypto_ccm_tmpl, CCM_TEMPLATE_NUM);
 }
 
 module_init(crypto_ccm_module_init);
-- 
1.7.12.4



* [PATCH 3/5] crypto: gcm - use template array registering API to simplify the code
  2018-07-18  7:30 [PATCH 0/5] crypto: add IV generation templates Xiongfeng Wang
  2018-07-18  7:30 ` [PATCH 1/5] crypto: api - introduce API to (un)register an array of templates Xiongfeng Wang
  2018-07-18  7:30 ` [PATCH 2/5] crypto: ccm - use template array registering API to simplify the code Xiongfeng Wang
@ 2018-07-18  7:30 ` Xiongfeng Wang
  2018-07-18  7:30 ` [PATCH 4/5] crypto: Add IV generation templates Xiongfeng Wang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-18  7:30 UTC (permalink / raw)
  To: agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, wangxiongfeng2, broonie, arnd, jonathan.cameron

Use the crypto template array registration API to simplify the code.

Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
---
 crypto/gcm.c | 76 +++++++++++++++++++++---------------------------------------
 1 file changed, 26 insertions(+), 50 deletions(-)

diff --git a/crypto/gcm.c b/crypto/gcm.c
index 0ad879e..b180536 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -727,12 +727,6 @@ static int crypto_gcm_create(struct crypto_template *tmpl, struct rtattr **tb)
 					ctr_name, "ghash");
 }
 
-static struct crypto_template crypto_gcm_tmpl = {
-	.name = "gcm",
-	.create = crypto_gcm_create,
-	.module = THIS_MODULE,
-};
-
 static int crypto_gcm_base_create(struct crypto_template *tmpl,
 				  struct rtattr **tb)
 {
@@ -756,12 +750,6 @@ static int crypto_gcm_base_create(struct crypto_template *tmpl,
 					ctr_name, ghash_name);
 }
 
-static struct crypto_template crypto_gcm_base_tmpl = {
-	.name = "gcm_base",
-	.create = crypto_gcm_base_create,
-	.module = THIS_MODULE,
-};
-
 static int crypto_rfc4106_setkey(struct crypto_aead *parent, const u8 *key,
 				 unsigned int keylen)
 {
@@ -989,12 +977,6 @@ static int crypto_rfc4106_create(struct crypto_template *tmpl,
 	goto out;
 }
 
-static struct crypto_template crypto_rfc4106_tmpl = {
-	.name = "rfc4106",
-	.create = crypto_rfc4106_create,
-	.module = THIS_MODULE,
-};
-
 static int crypto_rfc4543_setkey(struct crypto_aead *parent, const u8 *key,
 				 unsigned int keylen)
 {
@@ -1231,10 +1213,29 @@ static int crypto_rfc4543_create(struct crypto_template *tmpl,
 	goto out;
 }
 
-static struct crypto_template crypto_rfc4543_tmpl = {
-	.name = "rfc4543",
-	.create = crypto_rfc4543_create,
-	.module = THIS_MODULE,
+#define GCM_TEMPLATE_NUM 4
+
+static struct crypto_template crypto_gcm_tmpl[GCM_TEMPLATE_NUM] = {
+	{
+		.name = "gcm_base",
+		.create = crypto_gcm_base_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "gcm",
+		.create = crypto_gcm_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "rfc4106",
+		.create = crypto_rfc4106_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "rfc4543",
+		.create = crypto_rfc4543_create,
+		.module = THIS_MODULE,
+	},
 };
 
 static int __init crypto_gcm_module_init(void)
@@ -1247,42 +1248,17 @@ static int __init crypto_gcm_module_init(void)
 
 	sg_init_one(&gcm_zeroes->sg, gcm_zeroes->buf, sizeof(gcm_zeroes->buf));
 
-	err = crypto_register_template(&crypto_gcm_base_tmpl);
-	if (err)
-		goto out;
-
-	err = crypto_register_template(&crypto_gcm_tmpl);
+	err = crypto_register_template_array(crypto_gcm_tmpl, GCM_TEMPLATE_NUM);
 	if (err)
-		goto out_undo_base;
+		kfree(gcm_zeroes);
 
-	err = crypto_register_template(&crypto_rfc4106_tmpl);
-	if (err)
-		goto out_undo_gcm;
-
-	err = crypto_register_template(&crypto_rfc4543_tmpl);
-	if (err)
-		goto out_undo_rfc4106;
-
-	return 0;
-
-out_undo_rfc4106:
-	crypto_unregister_template(&crypto_rfc4106_tmpl);
-out_undo_gcm:
-	crypto_unregister_template(&crypto_gcm_tmpl);
-out_undo_base:
-	crypto_unregister_template(&crypto_gcm_base_tmpl);
-out:
-	kfree(gcm_zeroes);
 	return err;
 }
 
 static void __exit crypto_gcm_module_exit(void)
 {
 	kfree(gcm_zeroes);
-	crypto_unregister_template(&crypto_rfc4543_tmpl);
-	crypto_unregister_template(&crypto_rfc4106_tmpl);
-	crypto_unregister_template(&crypto_gcm_tmpl);
-	crypto_unregister_template(&crypto_gcm_base_tmpl);
+	crypto_unregister_template_array(crypto_gcm_tmpl, GCM_TEMPLATE_NUM);
 }
 
 module_init(crypto_gcm_module_init);
-- 
1.7.12.4



* [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18  7:30 [PATCH 0/5] crypto: add IV generation templates Xiongfeng Wang
                   ` (2 preceding siblings ...)
  2018-07-18  7:30 ` [PATCH 3/5] crypto: gcm " Xiongfeng Wang
@ 2018-07-18  7:30 ` Xiongfeng Wang
  2018-07-18  8:16   ` Milan Broz
  2018-07-19 18:14   ` kbuild test robot
  2018-07-18  7:30 ` [PATCH 5/5] dm-crypt: modify dm-crypt to rely on " Xiongfeng Wang
  2018-07-18 10:59 ` [PATCH 0/5] crypto: add " Arnd Bergmann
  5 siblings, 2 replies; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-18  7:30 UTC (permalink / raw)
  To: agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, wangxiongfeng2, broonie, arnd, jonathan.cameron

Currently, the IV generation algorithms are implemented in dm-crypt.c.
This patch implements these algorithms as template ciphers, so that the
dm-crypt layer can be simplified and the algorithms can also be
implemented in hardware for better performance.

Synchronous crypto requests to encrypt/decrypt a sector are processed
sequentially. Asynchronous requests, if processed in parallel, are freed
in the async callback.

Interface to the crypto layer - include/crypto/geniv.h
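
A hedged sketch of the caller side: the key material is wrapped in a
struct geniv_key_info (fields as consumed by geniv_setkey_init() below)
and passed through the normal setkey hook. The instance name
"plain64(xts(aes))" is an assumption about how the dm-crypt patch (5/5)
builds the cipher string, geniv_example_alloc() is hypothetical, and
key/key_size stand for the caller's volume key:

	#include <crypto/geniv.h>
	#include <crypto/skcipher.h>

	static struct crypto_skcipher *geniv_example_alloc(u8 *key,
						unsigned int key_size)
	{
		struct geniv_key_info info = {
			.keyop		= SETKEY_OP_INIT,
			.tfms_count	= 1,
			.key		= key,
			.key_size	= key_size,
			.key_parts	= 1,
			.iv_offset	= 0,
			.sector_size	= 512,
		};
		struct crypto_skcipher *tfm;
		int err;

		tfm = crypto_alloc_skcipher("plain64(xts(aes))", 0, 0);
		if (IS_ERR(tfm))
			return tfm;

		/* the whole geniv_key_info structure travels through setkey */
		err = crypto_skcipher_setkey(tfm, (const u8 *)&info,
					     sizeof(info));
		if (err) {
			crypto_free_skcipher(tfm);
			return ERR_PTR(err);
		}
		return tfm;
	}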

This patch is based on the patchset originally started by
Binoy Jayan <binoy.jayan@linaro.org>
( crypto: Add IV generation algorithms
https://patchwork.kernel.org/patch/9803469/ )

Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@linaro.org>
---
 crypto/Kconfig         |    7 +
 crypto/Makefile        |    1 +
 crypto/geniv.c         | 2240 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/crypto/geniv.h |   47 +
 4 files changed, 2295 insertions(+)
 create mode 100644 crypto/geniv.c
 create mode 100644 include/crypto/geniv.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index f3e40ac..98f025a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -257,6 +257,13 @@ config CRYPTO_GLUE_HELPER_X86
 config CRYPTO_ENGINE
 	tristate
 
+config CRYPTO_GENIV
+	tristate "IV Generator Template"
+	select CRYPTO_AEAD
+	select CRYPTO_BLKCIPHER
+	help
+	  Support for IV generator template, so that dm-crypt can rely on it.
+
 comment "Authenticated Encryption with Associated Data"
 
 config CRYPTO_CCM
diff --git a/crypto/Makefile b/crypto/Makefile
index 6d1d40e..1077d2f 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -23,6 +23,7 @@ crypto_blkcipher-y += skcipher.o
 obj-$(CONFIG_CRYPTO_BLKCIPHER2) += crypto_blkcipher.o
 obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
 obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
+obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
 
 crypto_hash-y += ahash.o
 crypto_hash-y += shash.o
diff --git a/crypto/geniv.c b/crypto/geniv.c
new file mode 100644
index 0000000..55d1212
--- /dev/null
+++ b/crypto/geniv.c
@@ -0,0 +1,2240 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * geniv.c - crypto template for generating IV
+ *
+ * Copyright (C) 2018, Linaro
+ *
+ * This file adds a crypto template to generate IVs, so that dm-crypt can
+ * rely on it and drop its own IV generation code.
+ */
+
+#include <linux/completion.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/key.h>
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/mempool.h>
+#include <linux/slab.h>
+#include <linux/crypto.h>
+#include <linux/atomic.h>
+#include <linux/scatterlist.h>
+#include <linux/ctype.h>
+#include <asm/page.h>
+#include <asm/unaligned.h>
+#include <crypto/hash.h>
+#include <crypto/md5.h>
+#include <crypto/algapi.h>
+#include <crypto/skcipher.h>
+#include <crypto/aead.h>
+#include <crypto/authenc.h>
+#include <crypto/geniv.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/skcipher.h>
+#include <linux/rtnetlink.h> /* for struct rtattr and RTA macros only */
+#include <keys/user-type.h>
+#include <linux/backing-dev.h>
+#include <linux/device-mapper.h>
+#include <linux/log2.h>
+
+#define DM_MSG_PREFIX		"crypt"
+#define MIN_IOS		64
+#define IV_TYPE_NUM 8
+#define SECTOR_MASK ((1 << SECTOR_SHIFT) - 1)
+
+struct geniv_ctx;
+struct geniv_req_ctx;
+
+/* Sub request for each of the skcipher_request's for a segment */
+struct geniv_subreq {
+	struct scatterlist sg_in[4];
+	struct scatterlist sg_out[4];
+	sector_t iv_sector;
+	struct geniv_req_ctx *rctx;
+	union {
+		struct skcipher_request req;
+		struct aead_request req_aead;
+	} r CRYPTO_MINALIGN_ATTR;
+};
+
+/* used to iterate over the src scatterlist of the input parent request */
+struct scatterlist_iter {
+	/* current segment to be processed */
+	unsigned int seg_no;
+	/* bytes had been processed in current segment */
+	unsigned int done;
+	/* bytes to be processed in the next request */
+	unsigned int len;
+};
+
+/* context of the input parent request */
+struct geniv_req_ctx {
+	struct geniv_subreq *subreq;
+	bool is_write;
+	bool is_aead_request;
+	sector_t cc_sector;
+	/* array size of src scatterlist of parent request */
+	unsigned int nents;
+	struct scatterlist_iter iter;
+	struct completion restart;
+	atomic_t req_pending;
+	u8 *integrity_metadata;
+	/* point to the input parent request */
+	union {
+		struct skcipher_request *req;
+		struct aead_request *req_aead;
+	} r;
+};
+
+struct crypt_iv_operations {
+	int (*ctr)(struct geniv_ctx *ctx);
+	void (*dtr)(struct geniv_ctx *ctx);
+	int (*init)(struct geniv_ctx *ctx);
+	int (*wipe)(struct geniv_ctx *ctx);
+	int (*generator)(struct geniv_ctx *ctx,
+			struct geniv_req_ctx *rctx,
+			struct geniv_subreq *subreq, u8 *iv);
+	int (*post)(struct geniv_ctx *ctx,
+			struct geniv_req_ctx *rctx,
+			struct geniv_subreq *subreq, u8 *iv);
+};
+
+struct geniv_essiv_private {
+	struct crypto_ahash *hash_tfm;
+	u8 *salt;
+};
+
+struct geniv_benbi_private {
+	int shift;
+};
+
+#define LMK_SEED_SIZE 64 /* hash + 0 */
+struct geniv_lmk_private {
+	struct crypto_shash *hash_tfm;
+	u8 *seed;
+};
+
+#define TCW_WHITENING_SIZE 16
+struct geniv_tcw_private {
+	struct crypto_shash *crc32_tfm;
+	u8 *iv_seed;
+	u8 *whitening;
+};
+
+/* context of geniv tfm */
+struct geniv_ctx {
+	unsigned int tfms_count;
+	union {
+		struct crypto_skcipher *tfm;
+		struct crypto_aead *tfm_aead;
+	} tfm_child;
+	union {
+		struct crypto_skcipher **tfms;
+		struct crypto_aead **tfms_aead;
+	} tfms;
+
+	char *ivmode;
+	unsigned int iv_size;
+	unsigned int iv_start;
+	unsigned int rctx_start;
+	sector_t iv_offset;
+	unsigned short int sector_size;
+	unsigned char sector_shift;
+	char *algname;
+	char *ivopts;
+	char *cipher;
+	char *ciphermode;
+	unsigned long cipher_flags;
+
+	const struct crypt_iv_operations *iv_gen_ops;
+	union {
+		struct geniv_essiv_private essiv;
+		struct geniv_benbi_private benbi;
+		struct geniv_lmk_private lmk;
+		struct geniv_tcw_private tcw;
+	} iv_gen_private;
+	void *iv_private;
+
+	mempool_t *subreq_pool;
+	unsigned int key_size;
+	unsigned int key_parts;      /* independent parts in key buffer */
+	unsigned int key_extra_size; /* additional keys length */
+	unsigned int key_mac_size;
+
+	unsigned int integrity_tag_size;
+	unsigned int integrity_iv_size;
+	unsigned int on_disk_tag_size;
+
+	char *msg;
+	u8 *authenc_key; /* space for keys in authenc() format (if used) */
+	u8 *key;
+};
+
+static struct scatterlist *crypt_get_sg_data(struct geniv_ctx *ctx,
+					     struct scatterlist *sg);
+
+static bool geniv_integrity_aead(struct geniv_ctx *ctx)
+{
+	return test_bit(CRYPT_MODE_INTEGRITY_AEAD, &ctx->cipher_flags);
+}
+
+static bool geniv_integrity_hmac(struct geniv_ctx *ctx)
+{
+	return geniv_integrity_aead(ctx) && ctx->key_mac_size;
+}
+
+static struct geniv_req_ctx *geniv_skcipher_req_ctx(struct skcipher_request *req)
+{
+	return (void *)PTR_ALIGN((u8 *)skcipher_request_ctx(req),  __alignof__(struct geniv_req_ctx));
+}
+
+static struct geniv_req_ctx *geniv_aead_req_ctx(struct aead_request *req)
+{
+	return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), __alignof__(struct geniv_req_ctx));
+}
+
+static u8 *iv_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
+{
+	if (geniv_integrity_aead(ctx))
+		return (u8 *)ALIGN((unsigned long)((char *)subreq + ctx->iv_start),
+			crypto_aead_alignmask(crypto_aead_reqtfm(subreq->rctx->r.req_aead)) + 1);
+	else
+		return (u8 *)ALIGN((unsigned long)((char *)subreq + ctx->iv_start),
+			crypto_skcipher_alignmask(crypto_skcipher_reqtfm(subreq->rctx->r.req)) + 1);
+}
+
+/* Get sg containing data */
+static struct scatterlist *crypt_get_sg_data(struct geniv_ctx *ctx,
+					     struct scatterlist *sg)
+{
+	if (unlikely(geniv_integrity_aead(ctx)))
+		return &sg[2];
+
+	return sg;
+}
+
+/*
+ * Different IV generation algorithms:
+ *
+ * plain: the initial vector is the 32-bit little-endian version of the sector
+ *        number, padded with zeros if necessary.
+ *
+ * plain64: the initial vector is the 64-bit little-endian version of the sector
+ *        number, padded with zeros if necessary.
+ *
+ * plain64be: the initial vector is the 64-bit big-endian version of the sector
+ *        number, padded with zeros if necessary.
+ *
+ * essiv: "encrypted sector|salt initial vector", the sector number is
+ *        encrypted with the bulk cipher using a salt as key. The salt
+ *        should be derived from the bulk cipher's key via hashing.
+ *
+ * benbi: the 64-bit "big-endian 'narrow block'-count", starting at 1
+ *        (needed for LRW-32-AES and possible other narrow block modes)
+ *
+ * null: the initial vector is always zero.  Provides compatibility with
+ *       obsolete loop_fish2 devices.  Do not use for new devices.
+ *
+ * lmk:  Compatible implementation of the block chaining mode used
+ *       by the Loop-AES block device encryption system
+ *       designed by Jari Ruusu. See http://loop-aes.sourceforge.net/
+ *       It operates on full 512 byte sectors and uses CBC
+ *       with an IV derived from the sector number, the data and
+ *       optionally extra IV seed.
+ *       This means that after decryption the first block
+ *       of sector must be tweaked according to decrypted data.
+ *       Loop-AES can use three encryption schemes:
+ *         version 1: is plain aes-cbc mode
+ *         version 2: uses 64 multikey scheme with lmk IV generator
+ *         version 3: the same as version 2 with additional IV seed
+ *                   (it uses 65 keys, last key is used as IV seed)
+ *
+ * tcw:  Compatible implementation of the block chaining mode used
+ *       by the TrueCrypt device encryption system (prior to version 4.1).
+ *       For more info see: https://gitlab.com/cryptsetup/cryptsetup/wikis/TrueCryptOnDiskFormat
+ *       It operates on full 512 byte sectors and uses CBC
+ *       with an IV derived from initial key and the sector number.
+ *       In addition, whitening value is applied on every sector, whitening
+ *       is calculated from initial key, sector number and mixed using CRC32.
+ *       Note that this encryption scheme is vulnerable to watermarking attacks
+ *       and should be used for old compatible containers access only.
+ *
+ * plumb: unimplemented, see:
+ * http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
+ */
+
+static int crypt_iv_plain_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	memset(iv, 0, ctx->iv_size);
+	*(__le32 *)iv = cpu_to_le32(subreq->iv_sector & 0xffffffff);
+
+	return 0;
+}
+
+static int crypt_iv_plain64_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	memset(iv, 0, ctx->iv_size);
+	*(__le64 *)iv = cpu_to_le64(subreq->iv_sector);
+
+	return 0;
+}
+
+static int crypt_iv_plain64be_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	memset(iv, 0, ctx->iv_size);
+	/* iv_size is at least of size u64; usually it is 16 bytes */
+	*(__be64 *)&iv[ctx->iv_size - sizeof(u64)] = cpu_to_be64(subreq->iv_sector);
+
+	return 0;
+}
+
+/* Initialise ESSIV - compute salt but no local memory allocations */
+static int crypt_iv_essiv_init(struct geniv_ctx *ctx)
+{
+	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
+	AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
+	struct scatterlist sg;
+	struct crypto_cipher *essiv_tfm;
+	int err;
+
+	sg_init_one(&sg, ctx->key, ctx->key_size);
+	ahash_request_set_tfm(req, essiv->hash_tfm);
+	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
+	ahash_request_set_crypt(req, &sg, essiv->salt, ctx->key_size);
+
+	err = crypto_ahash_digest(req);
+	ahash_request_zero(req);
+	if (err)
+		return err;
+
+	essiv_tfm = ctx->iv_private;
+
+	return crypto_cipher_setkey(essiv_tfm, essiv->salt,
+			    crypto_ahash_digestsize(essiv->hash_tfm));
+}
+
+/* Wipe salt and reset key derived from volume key */
+static int crypt_iv_essiv_wipe(struct geniv_ctx *ctx)
+{
+	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
+	unsigned int salt_size = crypto_ahash_digestsize(essiv->hash_tfm);
+	struct crypto_cipher *essiv_tfm;
+
+	memset(essiv->salt, 0, salt_size);
+
+	essiv_tfm = ctx->iv_private;
+	return crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
+}
+
+/* Allocate the cipher for ESSIV */
+static struct crypto_cipher *alloc_essiv_cipher(struct geniv_ctx *ctx,
+					u8 *salt, unsigned int saltsize)
+{
+	struct crypto_cipher *essiv_tfm;
+	int err;
+
+	/* Setup the essiv_tfm with the given salt */
+	essiv_tfm = crypto_alloc_cipher(ctx->cipher, 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(essiv_tfm)) {
+		DMERR("Error allocating crypto tfm for ESSIV\n");
+		return essiv_tfm;
+	}
+
+	if (crypto_cipher_blocksize(essiv_tfm) != ctx->iv_size) {
+		DMERR("Block size of ESSIV cipher does "
+			    "not match IV size of block cipher\n");
+		crypto_free_cipher(essiv_tfm);
+		return ERR_PTR(-EINVAL);
+	}
+
+	err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
+	if (err) {
+		DMERR("Failed to set key for ESSIV cipher\n");
+		crypto_free_cipher(essiv_tfm);
+		return ERR_PTR(err);
+	}
+
+	return essiv_tfm;
+}
+
+static void crypt_iv_essiv_dtr(struct geniv_ctx *ctx)
+{
+	struct crypto_cipher *essiv_tfm;
+	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
+
+	crypto_free_ahash(essiv->hash_tfm);
+	essiv->hash_tfm = NULL;
+
+	kzfree(essiv->salt);
+	essiv->salt = NULL;
+
+	essiv_tfm = ctx->iv_private;
+
+	if (essiv_tfm)
+		crypto_free_cipher(essiv_tfm);
+
+	ctx->iv_private = NULL;
+}
+
+static int crypt_iv_essiv_ctr(struct geniv_ctx *ctx)
+{
+	struct crypto_cipher *essiv_tfm = NULL;
+	struct crypto_ahash *hash_tfm = NULL;
+	u8 *salt = NULL;
+	int err;
+
+	if (!ctx->ivopts) {
+		DMERR("Digest algorithm missing for ESSIV mode\n");
+		return -EINVAL;
+	}
+
+	/* Allocate hash algorithm */
+	hash_tfm = crypto_alloc_ahash(ctx->ivopts, 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(hash_tfm)) {
+		DMERR("Error initializing ESSIV hash\n");
+		err = PTR_ERR(hash_tfm);
+		goto bad;
+	}
+
+	salt = kzalloc(crypto_ahash_digestsize(hash_tfm), GFP_KERNEL);
+	if (!salt) {
+		DMERR("Error kmallocing salt storage in ESSIV\n");
+		err = -ENOMEM;
+		goto bad;
+	}
+
+	ctx->iv_gen_private.essiv.salt = salt;
+	ctx->iv_gen_private.essiv.hash_tfm = hash_tfm;
+
+	essiv_tfm = alloc_essiv_cipher(ctx, salt,
+				       crypto_ahash_digestsize(hash_tfm));
+	if (IS_ERR(essiv_tfm)) {
+		crypt_iv_essiv_dtr(ctx);
+		return PTR_ERR(essiv_tfm);
+	}
+	ctx->iv_private = essiv_tfm;
+
+	return 0;
+
+bad:
+	if (hash_tfm && !IS_ERR(hash_tfm))
+		crypto_free_ahash(hash_tfm);
+	kfree(salt);
+	return err;
+}
+
+static int crypt_iv_essiv_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	struct crypto_cipher *essiv_tfm = ctx->iv_private;
+
+	memset(iv, 0, ctx->iv_size);
+	*(__le64 *)iv = cpu_to_le64(subreq->iv_sector);
+	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
+
+	return 0;
+}
+
+static int crypt_iv_benbi_ctr(struct geniv_ctx *ctx)
+{
+	unsigned int bs = crypto_skcipher_blocksize(ctx->tfms.tfms[0]);
+	int log = ilog2(bs);
+
+	/* we need to calculate how far we must shift the sector count
+	 * to get the cipher block count, we use this shift in _gen */
+
+	if (1 << log != bs) {
+		DMERR("cypher blocksize is not a power of 2\n");
+		return -EINVAL;
+	}
+
+	if (log > 9) {
+		DMERR("cypher blocksize is > 512\n");
+		return -EINVAL;
+	}
+
+	ctx->iv_gen_private.benbi.shift = 9 - log;
+
+	return 0;
+}
+
+static void crypt_iv_benbi_dtr(struct geniv_ctx *ctx)
+{
+}
+
+static int crypt_iv_benbi_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	__be64 val;
+
+	memset(iv, 0, ctx->iv_size - sizeof(u64)); /* rest is cleared below */
+
+	val = cpu_to_be64(((u64)subreq->iv_sector << ctx->iv_gen_private.benbi.shift) + 1);
+	put_unaligned(val, (__be64 *)(iv + ctx->iv_size - sizeof(u64)));
+
+	return 0;
+}
+
+static int crypt_iv_null_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	memset(iv, 0, ctx->iv_size);
+
+	return 0;
+}
+
+static void crypt_iv_lmk_dtr(struct geniv_ctx *ctx)
+{
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
+
+	if (lmk->hash_tfm && !IS_ERR(lmk->hash_tfm))
+		crypto_free_shash(lmk->hash_tfm);
+	lmk->hash_tfm = NULL;
+
+	kzfree(lmk->seed);
+	lmk->seed = NULL;
+}
+
+static int crypt_iv_lmk_ctr(struct geniv_ctx *ctx)
+{
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
+
+	if (ctx->sector_size != (1 << SECTOR_SHIFT)) {
+		DMERR("Unsupported sector size for LMK\n");
+		return -EINVAL;
+	}
+
+	lmk->hash_tfm = crypto_alloc_shash("md5", 0, 0);
+	if (IS_ERR(lmk->hash_tfm)) {
+		DMERR("Error initializing LMK hash, err=%ld\n",
+			PTR_ERR(lmk->hash_tfm));
+		return PTR_ERR(lmk->hash_tfm);
+	}
+
+	/* No seed in LMK version 2 */
+	if (ctx->key_parts == ctx->tfms_count) {
+		lmk->seed = NULL;
+		return 0;
+	}
+
+	lmk->seed = kzalloc(LMK_SEED_SIZE, GFP_KERNEL);
+	if (!lmk->seed) {
+		crypt_iv_lmk_dtr(ctx);
+		DMERR("Error kmallocing seed storage in LMK\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int crypt_iv_lmk_init(struct geniv_ctx *ctx)
+{
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
+	int subkey_size = ctx->key_size / ctx->key_parts;
+
+	/* LMK seed is on the position of LMK_KEYS + 1 key */
+	if (lmk->seed)
+		memcpy(lmk->seed, ctx->key + (ctx->tfms_count * subkey_size),
+		       crypto_shash_digestsize(lmk->hash_tfm));
+
+	return 0;
+}
+
+static int crypt_iv_lmk_wipe(struct geniv_ctx *ctx)
+{
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
+
+	if (lmk->seed)
+		memset(lmk->seed, 0, LMK_SEED_SIZE);
+
+	return 0;
+}
+
+static int crypt_iv_lmk_one(struct geniv_ctx *ctx, u8 *iv,
+				struct geniv_subreq *subreq, u8 *data)
+{
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
+	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
+	struct md5_state md5state;
+	__le32 buf[4];
+	int i, r;
+
+	desc->tfm = lmk->hash_tfm;
+	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	r = crypto_shash_init(desc);
+	if (r)
+		return r;
+
+	if (lmk->seed) {
+		r = crypto_shash_update(desc, lmk->seed, LMK_SEED_SIZE);
+		if (r)
+			return r;
+	}
+
+	/* Sector is always 512B, block size 16, add data of blocks 1-31 */
+	r = crypto_shash_update(desc, data + 16, 16 * 31);
+	if (r)
+		return r;
+
+	/* Sector is cropped to 56 bits here */
+	buf[0] = cpu_to_le32(subreq->iv_sector & 0xFFFFFFFF);
+	buf[1] = cpu_to_le32((((u64)subreq->iv_sector >> 32) & 0x00FFFFFF) | 0x80000000);
+	buf[2] = cpu_to_le32(4024);
+	buf[3] = 0;
+	r = crypto_shash_update(desc, (u8 *)buf, sizeof(buf));
+	if (r)
+		return r;
+
+	/* No MD5 padding here */
+	r = crypto_shash_export(desc, &md5state);
+	if (r)
+		return r;
+
+	for (i = 0; i < MD5_HASH_WORDS; i++)
+		__cpu_to_le32s(&md5state.hash[i]);
+	memcpy(iv, &md5state.hash, ctx->iv_size);
+
+	return 0;
+}
+
+static int crypt_iv_lmk_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	struct scatterlist *sg;
+	u8 *src;
+	int r = 0;
+
+	if (rctx->is_write) {
+		sg = crypt_get_sg_data(ctx, subreq->sg_in);
+		src = kmap_atomic(sg_page(sg));
+		r = crypt_iv_lmk_one(ctx, iv, subreq, src + sg->offset);
+		kunmap_atomic(src);
+	} else
+		memset(iv, 0, ctx->iv_size);
+
+	return r;
+}
+
+static int crypt_iv_lmk_post(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	struct scatterlist *sg;
+	u8 *dst;
+	int r;
+
+	if (rctx->is_write)
+		return 0;
+
+	sg = crypt_get_sg_data(ctx, subreq->sg_out);
+	dst = kmap_atomic(sg_page(sg));
+	r = crypt_iv_lmk_one(ctx, iv, subreq, dst + sg->offset);
+
+	/* Tweak the first block of plaintext sector */
+	if (!r)
+		crypto_xor(dst + sg->offset, iv, ctx->iv_size);
+
+	kunmap_atomic(dst);
+	return r;
+}
+
+static void crypt_iv_tcw_dtr(struct geniv_ctx *ctx)
+{
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+
+	kzfree(tcw->iv_seed);
+	tcw->iv_seed = NULL;
+	kzfree(tcw->whitening);
+	tcw->whitening = NULL;
+
+	if (tcw->crc32_tfm && !IS_ERR(tcw->crc32_tfm))
+		crypto_free_shash(tcw->crc32_tfm);
+	tcw->crc32_tfm = NULL;
+}
+
+static int crypt_iv_tcw_ctr(struct geniv_ctx *ctx)
+{
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+
+	if (ctx->sector_size != (1 << SECTOR_SHIFT)) {
+		DMERR("Unsupported sector size for TCW\n");
+		return -EINVAL;
+	}
+
+	if (ctx->key_size <= (ctx->iv_size + TCW_WHITENING_SIZE)) {
+		DMERR("Wrong key size (%d) for TCW. Choose a value > %d bytes\n",
+			ctx->key_size, ctx->iv_size + TCW_WHITENING_SIZE);
+		return -EINVAL;
+	}
+
+	tcw->crc32_tfm = crypto_alloc_shash("crc32", 0, 0);
+	if (IS_ERR(tcw->crc32_tfm)) {
+		DMERR("Error initializing CRC32 in TCW; err=%ld\n",
+			PTR_ERR(tcw->crc32_tfm));
+		return PTR_ERR(tcw->crc32_tfm);
+	}
+
+	tcw->iv_seed = kzalloc(ctx->iv_size, GFP_KERNEL);
+	tcw->whitening = kzalloc(TCW_WHITENING_SIZE, GFP_KERNEL);
+	if (!tcw->iv_seed || !tcw->whitening) {
+		crypt_iv_tcw_dtr(ctx);
+		DMERR("Error allocating seed storage in TCW\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int crypt_iv_tcw_init(struct geniv_ctx *ctx)
+{
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+	int key_offset = ctx->key_size - ctx->iv_size - TCW_WHITENING_SIZE;
+
+	memcpy(tcw->iv_seed, &ctx->key[key_offset], ctx->iv_size);
+	memcpy(tcw->whitening, &ctx->key[key_offset + ctx->iv_size],
+	       TCW_WHITENING_SIZE);
+
+	return 0;
+}
+
+static int crypt_iv_tcw_wipe(struct geniv_ctx *ctx)
+{
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+
+	memset(tcw->iv_seed, 0, ctx->iv_size);
+	memset(tcw->whitening, 0, TCW_WHITENING_SIZE);
+
+	return 0;
+}
+
+static int crypt_iv_tcw_whitening(struct geniv_ctx *ctx,
+				struct geniv_subreq *subreq, u8 *data)
+{
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+	__le64 sector = cpu_to_le64(subreq->iv_sector);
+	u8 buf[TCW_WHITENING_SIZE];
+	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
+	int i, r;
+
+	/* xor whitening with sector number */
+	crypto_xor_cpy(buf, tcw->whitening, (u8 *)&sector, 8);
+	crypto_xor_cpy(&buf[8], tcw->whitening + 8, (u8 *)&sector, 8);
+
+	/* calculate crc32 for every 32bit part and xor it */
+	desc->tfm = tcw->crc32_tfm;
+	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+	for (i = 0; i < 4; i++) {
+		r = crypto_shash_init(desc);
+		if (r)
+			goto out;
+		r = crypto_shash_update(desc, &buf[i * 4], 4);
+		if (r)
+			goto out;
+		r = crypto_shash_final(desc, &buf[i * 4]);
+		if (r)
+			goto out;
+	}
+	crypto_xor(&buf[0], &buf[12], 4);
+	crypto_xor(&buf[4], &buf[8], 4);
+
+	/* apply whitening (8 bytes) to whole sector */
+	for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++)
+		crypto_xor(data + i * 8, buf, 8);
+out:
+	memzero_explicit(buf, sizeof(buf));
+	return r;
+}
+
+static int crypt_iv_tcw_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	struct scatterlist *sg;
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+	__le64 sector = cpu_to_le64(subreq->iv_sector);
+	u8 *src;
+	int r = 0;
+
+	/* Remove whitening from ciphertext */
+	if (!rctx->is_write) {
+		sg = crypt_get_sg_data(ctx, subreq->sg_in);
+		src = kmap_atomic(sg_page(sg));
+		r = crypt_iv_tcw_whitening(ctx, subreq, src + sg->offset);
+		kunmap_atomic(src);
+	}
+
+	/* Calculate IV */
+	crypto_xor_cpy(iv, tcw->iv_seed, (u8 *)&sector, 8);
+	if (ctx->iv_size > 8)
+		crypto_xor_cpy(&iv[8], tcw->iv_seed + 8, (u8 *)&sector,
+			       ctx->iv_size - 8);
+
+	return r;
+}
+
+static int crypt_iv_tcw_post(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	struct scatterlist *sg;
+	u8 *dst;
+	int r;
+
+	if (!rctx->is_write)
+		return 0;
+
+	/* Apply whitening on ciphertext */
+	sg = crypt_get_sg_data(ctx, subreq->sg_out);
+	dst = kmap_atomic(sg_page(sg));
+	r = crypt_iv_tcw_whitening(ctx, subreq, dst + sg->offset);
+	kunmap_atomic(dst);
+
+	return r;
+}
+
+static int crypt_iv_random_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq, u8 *iv)
+{
+	/* Used only for writes, there must be an additional space to store IV */
+	get_random_bytes(iv, ctx->iv_size);
+	return 0;
+}
+
+static const struct crypt_iv_operations crypt_iv_plain_ops = {
+	.generator = crypt_iv_plain_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_plain64_ops = {
+	.generator = crypt_iv_plain64_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
+	.generator = crypt_iv_plain64be_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_essiv_ops = {
+	.ctr       = crypt_iv_essiv_ctr,
+	.dtr       = crypt_iv_essiv_dtr,
+	.init      = crypt_iv_essiv_init,
+	.wipe      = crypt_iv_essiv_wipe,
+	.generator = crypt_iv_essiv_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_benbi_ops = {
+	.ctr	   = crypt_iv_benbi_ctr,
+	.dtr	   = crypt_iv_benbi_dtr,
+	.generator = crypt_iv_benbi_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_null_ops = {
+	.generator = crypt_iv_null_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_lmk_ops = {
+	.ctr	   = crypt_iv_lmk_ctr,
+	.dtr	   = crypt_iv_lmk_dtr,
+	.init	   = crypt_iv_lmk_init,
+	.wipe	   = crypt_iv_lmk_wipe,
+	.generator = crypt_iv_lmk_gen,
+	.post	   = crypt_iv_lmk_post
+};
+
+static const struct crypt_iv_operations crypt_iv_tcw_ops = {
+	.ctr	   = crypt_iv_tcw_ctr,
+	.dtr	   = crypt_iv_tcw_dtr,
+	.init	   = crypt_iv_tcw_init,
+	.wipe	   = crypt_iv_tcw_wipe,
+	.generator = crypt_iv_tcw_gen,
+	.post	   = crypt_iv_tcw_post
+};
+
+static struct crypt_iv_operations crypt_iv_random_ops = {
+	.generator = crypt_iv_random_gen
+};
+
+static int geniv_init_iv(struct geniv_ctx *ctx)
+{
+	int ret;
+
+	DMDEBUG("IV Generation algorithm : %s\n", ctx->ivmode);
+
+	if (ctx->ivmode == NULL)
+		ctx->iv_gen_ops = NULL;
+	else if (strcmp(ctx->ivmode, "plain") == 0)
+		ctx->iv_gen_ops = &crypt_iv_plain_ops;
+	else if (strcmp(ctx->ivmode, "plain64") == 0)
+		ctx->iv_gen_ops = &crypt_iv_plain64_ops;
+	else if (strcmp(ctx->ivmode, "essiv") == 0)
+		ctx->iv_gen_ops = &crypt_iv_essiv_ops;
+	else if (strcmp(ctx->ivmode, "benbi") == 0)
+		ctx->iv_gen_ops = &crypt_iv_benbi_ops;
+	else if (strcmp(ctx->ivmode, "null") == 0)
+		ctx->iv_gen_ops = &crypt_iv_null_ops;
+	else if (strcmp(ctx->ivmode, "lmk") == 0) {
+		ctx->iv_gen_ops = &crypt_iv_lmk_ops;
+		/*
+		 * Version 2 and 3 is recognised according
+		 * to length of provided multi-key string.
+		 * If present (version 3), last key is used as IV seed.
+		 * All keys (including IV seed) are always the same size.
+		 */
+		if (ctx->key_size % ctx->key_parts) {
+			ctx->key_parts++;
+			ctx->key_extra_size = ctx->key_size / ctx->key_parts;
+		}
+	} else if (strcmp(ctx->ivmode, "tcw") == 0) {
+		ctx->iv_gen_ops = &crypt_iv_tcw_ops;
+		ctx->key_parts += 2; /* IV + whitening */
+		ctx->key_extra_size = ctx->iv_size + TCW_WHITENING_SIZE;
+	} else if (strcmp(ctx->ivmode, "random") == 0) {
+		ctx->iv_gen_ops = &crypt_iv_random_ops;
+		/* Need storage space in integrity fields. */
+		ctx->integrity_iv_size = ctx->iv_size;
+	} else {
+		DMERR("Invalid IV mode %s\n", ctx->ivmode);
+		return -EINVAL;
+	}
+
+	/* Allocate IV */
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->ctr) {
+		ret = ctx->iv_gen_ops->ctr(ctx);
+		if (ret < 0) {
+			DMERR("Error creating IV for %s\n", ctx->ivmode);
+			return ret;
+		}
+	}
+
+	/* Initialize IV (set keys for ESSIV etc) */
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init) {
+		ret = ctx->iv_gen_ops->init(ctx);
+		if (ret < 0) {
+			DMERR("Error creating IV for %s\n", ctx->ivmode);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void geniv_free_tfms_aead(struct geniv_ctx *ctx)
+{
+	if (!ctx->tfms.tfms_aead)
+		return;
+
+	if (ctx->tfms.tfms_aead[0] && !IS_ERR(ctx->tfms.tfms_aead[0])) {
+		crypto_free_aead(ctx->tfms.tfms_aead[0]);
+		ctx->tfms.tfms_aead[0] = NULL;
+	}
+
+	kfree(ctx->tfms.tfms_aead);
+	ctx->tfms.tfms_aead = NULL;
+}
+
+static void geniv_free_tfms_skcipher(struct geniv_ctx *ctx)
+{
+	unsigned int i;
+
+	if (!ctx->tfms.tfms)
+		return;
+
+	for (i = 0; i < ctx->tfms_count; i++)
+		if (ctx->tfms.tfms[i] && !IS_ERR(ctx->tfms.tfms[i])) {
+			crypto_free_skcipher(ctx->tfms.tfms[i]);
+			ctx->tfms.tfms[i] = NULL;
+		}
+
+	kfree(ctx->tfms.tfms);
+	ctx->tfms.tfms = NULL;
+}
+
+static void geniv_free_tfms(struct geniv_ctx *ctx)
+{
+	if (geniv_integrity_aead(ctx))
+		geniv_free_tfms_aead(ctx);
+	else
+		geniv_free_tfms_skcipher(ctx);
+}
+
+static int geniv_alloc_tfms_aead(struct crypto_aead *parent,
+			    struct geniv_ctx *ctx)
+{
+	unsigned int reqsize, align;
+
+	ctx->tfms.tfms_aead = kcalloc(1, sizeof(struct crypto_aead *),
+			   GFP_KERNEL);
+	if (!ctx->tfms.tfms_aead)
+		return -ENOMEM;
+
+	/* First instance is already allocated in geniv_init_tfm */
+	ctx->tfms.tfms_aead[0] = ctx->tfm_child.tfm_aead;
+
+	/* Setup the current cipher's request structure */
+	align = crypto_aead_alignmask(parent);
+	align &= ~(crypto_tfm_ctx_alignment() - 1);
+	reqsize = align + sizeof(struct geniv_req_ctx) +
+		  crypto_aead_reqsize(ctx->tfms.tfms_aead[0]);
+
+	crypto_aead_set_reqsize(parent, reqsize);
+
+	return 0;
+}
+
+/* Allocate memory for the underlying cipher algorithm. Ex: cbc(aes)
+ */
+static int geniv_alloc_tfms_skcipher(struct crypto_skcipher *parent,
+			    struct geniv_ctx *ctx)
+{
+	unsigned int i, reqsize, align, err;
+
+	ctx->tfms.tfms = kcalloc(ctx->tfms_count, sizeof(struct crypto_skcipher *),
+			   GFP_KERNEL);
+	if (!ctx->tfms.tfms)
+		return -ENOMEM;
+
+	/* First instance is already allocated in geniv_init_tfm */
+	ctx->tfms.tfms[0] = ctx->tfm_child.tfm;
+	for (i = 1; i < ctx->tfms_count; i++) {
+		ctx->tfms.tfms[i] = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
+		if (IS_ERR(ctx->tfms.tfms[i])) {
+			err = PTR_ERR(ctx->tfms.tfms[i]);
+			geniv_free_tfms(ctx);
+			return err;
+		}
+
+		/* Setup the current cipher's request structure */
+		align = crypto_skcipher_alignmask(parent);
+		align &= ~(crypto_tfm_ctx_alignment() - 1);
+		reqsize = align + sizeof(struct geniv_req_ctx) +
+			  crypto_skcipher_reqsize(ctx->tfms.tfms[i]);
+
+		crypto_skcipher_set_reqsize(parent, reqsize);
+	}
+
+	return 0;
+}
+
+static unsigned int geniv_authenckey_size(struct geniv_ctx *ctx)
+{
+	return ctx->key_size - ctx->key_extra_size +
+		RTA_SPACE(sizeof(struct crypto_authenc_key_param));
+}
+
+/* Initialize the cipher's context with the key, ivmode and other parameters.
+ * Also allocate IV generation template ciphers and initialize them.
+ */
+static int geniv_setkey_init(void *parent, struct geniv_key_info *info)
+{
+	struct geniv_ctx *ctx;
+	int ret;
+
+	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
+		ctx = crypto_aead_ctx((struct crypto_aead *)parent);
+	else
+		ctx = crypto_skcipher_ctx((struct crypto_skcipher *)parent);
+
+	ctx->tfms_count = info->tfms_count;
+	ctx->key = info->key;
+	ctx->cipher_flags = info->cipher_flags;
+	ctx->ivopts = info->ivopts;
+	ctx->iv_offset = info->iv_offset;
+	ctx->sector_size = info->sector_size;
+	ctx->sector_shift = __ffs(ctx->sector_size) - SECTOR_SHIFT;
+
+	ctx->key_size = info->key_size;
+	ctx->key_parts = info->key_parts;
+	ctx->key_mac_size = info->key_mac_size;
+	ctx->on_disk_tag_size = info->on_disk_tag_size;
+
+	if (geniv_integrity_hmac(ctx)) {
+		ctx->authenc_key = kmalloc(geniv_authenckey_size(ctx), GFP_KERNEL);
+		if (!ctx->authenc_key)
+			return -ENOMEM;
+	}
+
+	if (geniv_integrity_aead(ctx))
+		ret = geniv_alloc_tfms_aead((struct crypto_aead *)parent, ctx);
+	else
+		ret = geniv_alloc_tfms_skcipher((struct crypto_skcipher *)parent, ctx);
+	if (ret)
+		return ret;
+
+	ret = geniv_init_iv(ctx);
+
+	if (geniv_integrity_aead(ctx))
+		ctx->integrity_tag_size = ctx->on_disk_tag_size - ctx->integrity_iv_size;
+
+	return ret;
+}
+
+/*
+ * If AEAD is composed like authenc(hmac(sha256),xts(aes)),
+ * the key must be for some reason in special format.
+ * This function converts cc->key to this special format.
+ */
+static void crypt_copy_authenckey(char *p, const void *key,
+			unsigned int enckeylen, unsigned int authkeylen)
+{
+	struct crypto_authenc_key_param *param;
+	struct rtattr *rta;
+
+	rta = (struct rtattr *)p;
+	param = RTA_DATA(rta);
+	param->enckeylen = cpu_to_be32(enckeylen);
+	rta->rta_len = RTA_LENGTH(sizeof(*param));
+	rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
+	p += RTA_SPACE(sizeof(*param));
+	memcpy(p, key + enckeylen, authkeylen);
+	p += authkeylen;
+	memcpy(p, key, enckeylen);
+}
+
+static int geniv_setkey_tfms_aead(struct crypto_aead *parent, struct geniv_ctx *ctx,
+			     struct geniv_key_info *info)
+{
+	unsigned int key_size;
+	unsigned int authenc_key_size;
+	struct crypto_aead *child_aead;
+	int ret = 0;
+
+	/* Ignore extra keys (which are used for IV etc) */
+	key_size = ctx->key_size - ctx->key_extra_size;
+	authenc_key_size = key_size + RTA_SPACE(sizeof(struct crypto_authenc_key_param));
+
+	child_aead = ctx->tfms.tfms_aead[0];
+	crypto_aead_clear_flags(child_aead, CRYPTO_TFM_REQ_MASK);
+	crypto_aead_set_flags(child_aead, crypto_aead_get_flags(parent) & CRYPTO_TFM_REQ_MASK);
+
+	if (geniv_integrity_hmac(ctx)) {
+		if (key_size < ctx->key_mac_size)
+			return -EINVAL;
+
+		crypt_copy_authenckey(ctx->authenc_key, ctx->key, key_size - ctx->key_mac_size,
+				      ctx->key_mac_size);
+	}
+
+	if (geniv_integrity_hmac(ctx))
+		ret = crypto_aead_setkey(child_aead, ctx->authenc_key, authenc_key_size);
+	else
+		ret = crypto_aead_setkey(child_aead, ctx->key, key_size);
+	if (ret) {
+		DMERR("Error setting key for tfms[0]\n");
+		goto out;
+	}
+
+	crypto_aead_set_flags(parent, crypto_aead_get_flags(child_aead) & CRYPTO_TFM_RES_MASK);
+
+out:
+	if (geniv_integrity_hmac(ctx))
+		memzero_explicit(ctx->authenc_key, authenc_key_size);
+
+	return ret;
+}
+
+static int geniv_setkey_tfms_skcipher(struct crypto_skcipher *parent, struct geniv_ctx *ctx,
+			     struct geniv_key_info *info)
+{
+	unsigned int subkey_size;
+	char *subkey;
+	struct crypto_skcipher *child;
+	int ret, i;
+
+	/* Ignore extra keys (which are used for IV etc) */
+	subkey_size = (ctx->key_size - ctx->key_extra_size)
+		      >> ilog2(ctx->tfms_count);
+
+	for (i = 0; i < ctx->tfms_count; i++) {
+		child = ctx->tfms.tfms[i];
+		crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+		crypto_skcipher_set_flags(child,
+			crypto_skcipher_get_flags(parent) & CRYPTO_TFM_REQ_MASK);
+
+		subkey = ctx->key + (subkey_size) * i;
+
+		ret = crypto_skcipher_setkey(child, subkey, subkey_size);
+		if (ret) {
+			DMERR("Error setting key for tfms[%d]\n", i);
+			return ret;
+		}
+
+		crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) &
+					  CRYPTO_TFM_RES_MASK);
+	}
+
+	return 0;
+}
+
+static int geniv_setkey_set(struct geniv_ctx *ctx)
+{
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init)
+		return ctx->iv_gen_ops->init(ctx);
+	else
+		return 0;
+}
+
+static int geniv_setkey_wipe(struct geniv_ctx *ctx)
+{
+	int ret;
+
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->wipe) {
+		ret = ctx->iv_gen_ops->wipe(ctx);
+		if (ret)
+			return ret;
+	}
+
+	if (geniv_integrity_hmac(ctx))
+		kzfree(ctx->authenc_key);
+
+	return 0;
+}
+
+static int geniv_setkey(void *parent, const u8 *key, unsigned int keylen)
+{
+	int err = 0;
+	struct geniv_ctx *ctx;
+	struct geniv_key_info *info = (struct geniv_key_info *) key;
+
+	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
+		ctx = crypto_aead_ctx((struct crypto_aead *)parent);
+	else
+		ctx = crypto_skcipher_ctx((struct crypto_skcipher *)parent);
+
+	DMDEBUG("SETKEY Operation : %d\n", info->keyop);
+
+	switch (info->keyop) {
+	case SETKEY_OP_INIT:
+		err = geniv_setkey_init(parent, info);
+		break;
+	case SETKEY_OP_SET:
+		err = geniv_setkey_set(ctx);
+		break;
+	case SETKEY_OP_WIPE:
+		err = geniv_setkey_wipe(ctx);
+		break;
+	}
+
+	if (err)
+		return err;
+
+	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
+		return geniv_setkey_tfms_aead((struct crypto_aead *)parent, ctx, info);
+	else
+		return geniv_setkey_tfms_skcipher((struct crypto_skcipher *)parent, ctx, info);
+}
+
+static int geniv_aead_setkey(struct crypto_aead *parent,
+				const u8 *key, unsigned int keylen)
+{
+	return geniv_setkey(parent, key, keylen);
+}
+
+static int geniv_skcipher_setkey(struct crypto_skcipher *parent,
+				const u8 *key, unsigned int keylen)
+{
+	return geniv_setkey(parent, key, keylen);
+}
+
+static void geniv_async_done(struct crypto_async_request *async_req, int error);
+
+static int geniv_alloc_subreq_aead(struct geniv_ctx *ctx,
+					struct geniv_req_ctx *rctx,
+					u32 req_flags)
+{
+	struct aead_request *req;
+
+	if (!rctx->subreq) {
+		rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);
+		if (!rctx->subreq)
+			return -ENOMEM;
+	}
+
+	req = &rctx->subreq->r.req_aead;
+	rctx->subreq->rctx = rctx;
+
+	aead_request_set_tfm(req, ctx->tfms.tfms_aead[0]);
+	aead_request_set_callback(req, req_flags,
+					geniv_async_done, rctx->subreq);
+
+	return 0;
+}
+
+/* req_flags: flags from parent request */
+static int geniv_alloc_subreq_skcipher(struct geniv_ctx *ctx,
+					struct geniv_req_ctx *rctx,
+					u32 req_flags)
+{
+	int key_index;
+	struct skcipher_request *req;
+
+	if (!rctx->subreq) {
+		rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);
+		if (!rctx->subreq)
+			return -ENOMEM;
+	}
+
+	req = &rctx->subreq->r.req;
+	rctx->subreq->rctx = rctx;
+
+	key_index = rctx->cc_sector & (ctx->tfms_count - 1);
+
+	skcipher_request_set_tfm(req, ctx->tfms.tfms[key_index]);
+	skcipher_request_set_callback(req, req_flags,
+					geniv_async_done, rctx->subreq);
+
+	return 0;
+}
+
+/* Asynchronous IO completion callback for each sector in a segment. When all
+ * pending i/o are completed the parent cipher's async function is called.
+ */
+static void geniv_async_done(struct crypto_async_request *async_req, int error)
+{
+	struct geniv_subreq *subreq =
+		(struct geniv_subreq *) async_req->data;
+	struct geniv_req_ctx *rctx = subreq->rctx;
+	struct skcipher_request *req = NULL;
+	struct aead_request *req_aead = NULL;
+	struct geniv_ctx *ctx;
+	u8 *iv;
+
+	if (!rctx->is_aead_request) {
+		req = rctx->r.req;
+		ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	} else {
+		req_aead = rctx->r.req_aead;
+		ctx = crypto_aead_ctx(crypto_aead_reqtfm(req_aead));
+	}
+
+	/*
+	 * A request from crypto driver backlog is going to be processed now,
+	 * finish the completion and continue in crypt_convert().
+	 * (Callback will be called for the second time for this request.)
+	 */
+	if (error == -EINPROGRESS) {
+		complete(&rctx->restart);
+		return;
+	}
+
+	iv = iv_of_subreq(ctx, subreq);
+	if (!error && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
+		error = ctx->iv_gen_ops->post(ctx, rctx, subreq, iv);
+
+	mempool_free(subreq, ctx->subreq_pool);
+
+	/* req_pending needs to be checked before req->base.complete is called
+	 * as we need 'req_pending' to be equal to 1 to ensure all subrequests
+	 * are processed.
+	 */
+	if (atomic_dec_and_test(&rctx->req_pending)) {
+		/* Call the parent cipher's completion function */
+		if (!rctx->is_aead_request)
+			skcipher_request_complete(req, error);
+		else
+			aead_request_complete(req_aead, error);
+
+	}
+}
+
+static unsigned int geniv_get_sectors(struct scatterlist *sg1,
+				      struct scatterlist *sg2,
+				      unsigned int segments)
+{
+	unsigned int i, n1, n2;
+
+	n1 = n2 = 0;
+	for (i = 0; i < segments ; i++) {
+		n1 += sg1[i].length >> SECTOR_SHIFT;
+		n1 += (sg1[i].length & SECTOR_MASK) ? 1 : 0;
+	}
+
+	for (i = 0; i < segments ; i++) {
+		n2 += sg2[i].length >> SECTOR_SHIFT;
+		n2 += (sg2[i].length & SECTOR_MASK) ? 1 : 0;
+	}
+
+	return n1 > n2 ? n1 : n2;
+}
+
+/* Iterate scatterlist of segments to retrieve the 512-byte sectors so that
+ * unique IVs could be generated for each 512-byte sector. This split may not
+ * be necessary e.g. when these ciphers are modelled in hardware, where it can
+ * make use of the hardware's IV generation capabilities.
+ */
+static int geniv_iter_block(void *req_in,
+			struct geniv_ctx *ctx, struct geniv_req_ctx *rctx)
+
+{
+	unsigned int rem;
+	struct scatterlist *src_org, *dst_org;
+	struct scatterlist *src1, *dst1;
+	struct scatterlist_iter *iter = &rctx->iter;
+	struct skcipher_request *req;
+	struct aead_request *req_aead;
+
+	if (unlikely(iter->seg_no >= rctx->nents))
+		return 0;
+
+	if (geniv_integrity_aead(ctx)) {
+		req_aead = (struct aead_request *)req_in;
+		src_org = &req_aead->src[0];
+		dst_org = &req_aead->dst[0];
+	} else {
+		req = (struct skcipher_request *)req_in;
+		src_org = &req->src[0];
+		dst_org = &req->dst[0];
+	}
+
+	src1 = &src_org[iter->seg_no];
+	dst1 = &dst_org[iter->seg_no];
+	iter->done += iter->len;
+
+	if (iter->done >= src1->length) {
+		iter->seg_no++;
+
+		if (iter->seg_no >= rctx->nents)
+			return 0;
+
+		src1 = &src_org[iter->seg_no];
+		dst1 = &dst_org[iter->seg_no];
+		iter->done = 0;
+	}
+
+	rem = src1->length - iter->done;
+
+	iter->len = rem > ctx->sector_size ? ctx->sector_size : rem;
+
+	DMDEBUG("segment:(%d/%u),  done:%d, rem:%d\n",
+		iter->seg_no, rctx->nents, iter->done, rem);
+
+	return iter->len;
+}
+
+static u8 *org_iv_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
+{
+	return iv_of_subreq(ctx, subreq) + ctx->iv_size;
+}
+
+static uint64_t *org_sector_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
+{
+	u8 *ptr = iv_of_subreq(ctx, subreq) + ctx->iv_size + ctx->iv_size;
+
+	return (uint64_t *) ptr;
+}
+
+static unsigned int *org_tag_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
+{
+	u8 *ptr = iv_of_subreq(ctx, subreq) + ctx->iv_size +
+		  ctx->iv_size + sizeof(uint64_t);
+
+	return (unsigned int *)ptr;
+}
+
+static void *tag_from_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
+{
+	return &subreq->rctx->integrity_metadata[*org_tag_of_subreq(ctx, subreq) *
+		ctx->on_disk_tag_size];
+}
+
+static void *iv_tag_from_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
+{
+	return tag_from_subreq(ctx, subreq) + ctx->integrity_tag_size;
+}
+
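+/* Build the AAD/data/tag scatterlists for a single sector, generate its IV
+ * and run the AEAD operation on the sub-request.
+ */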
+static int geniv_convert_block_aead(struct geniv_ctx *ctx,
+				     struct geniv_req_ctx *rctx,
+				     struct geniv_subreq *subreq,
+				     unsigned int tag_offset)
+{
+	struct scatterlist *sg_in, *sg_out;
+	u8 *iv, *org_iv, *tag_iv, *tag;
+	uint64_t *sector;
+	int r = 0;
+	struct scatterlist_iter *iter = &rctx->iter;
+	struct aead_request *req_aead;
+	struct aead_request *parent_req = rctx->r.req_aead;
+
+	BUG_ON(ctx->integrity_iv_size && ctx->integrity_iv_size != ctx->iv_size);
+
+	/* Reject unexpected unaligned bio. */
+	if (unlikely(iter->len & (ctx->sector_size - 1)))
+		return -EIO;
+
+	subreq->iv_sector = rctx->cc_sector;
+	if (test_bit(CRYPT_IV_LARGE_SECTORS, &ctx->cipher_flags))
+		subreq->iv_sector >>= ctx->sector_shift;
+
+	*org_tag_of_subreq(ctx, subreq) = tag_offset;
+
+	sector = org_sector_of_subreq(ctx, subreq);
+	*sector = cpu_to_le64(rctx->cc_sector - ctx->iv_offset);
+
+	iv = iv_of_subreq(ctx, subreq);
+	org_iv = org_iv_of_subreq(ctx, subreq);
+	tag = tag_from_subreq(ctx, subreq);
+	tag_iv = iv_tag_from_subreq(ctx, subreq);
+
+	sg_in = subreq->sg_in;
+	sg_out = subreq->sg_out;
+
+	/* AEAD request:
+	 *  |----- AAD -------|------ DATA -------|-- AUTH TAG --|
+	 *  | (authenticated) | (auth+encryption) |              |
+	 *  | sector_LE |  IV |  sector in/out    |  tag in/out  |
+	 */
+	sg_init_table(sg_in, 4);
+	sg_set_buf(&sg_in[0], sector, sizeof(uint64_t));
+	sg_set_buf(&sg_in[1], org_iv, ctx->iv_size);
+	sg_set_page(&sg_in[2], sg_page(&parent_req->src[iter->seg_no]),
+			iter->len, parent_req->src[iter->seg_no].offset + iter->done);
+	sg_set_buf(&sg_in[3], tag, ctx->integrity_tag_size);
+
+	sg_init_table(sg_out, 4);
+	sg_set_buf(&sg_out[0], sector, sizeof(uint64_t));
+	sg_set_buf(&sg_out[1], org_iv, ctx->iv_size);
+	sg_set_page(&sg_out[2], sg_page(&parent_req->dst[iter->seg_no]),
+			iter->len, parent_req->dst[iter->seg_no].offset + iter->done);
+	sg_set_buf(&sg_out[3], tag, ctx->integrity_tag_size);
+
+	if (ctx->iv_gen_ops) {
+		/* For READs use IV stored in integrity metadata */
+		if (ctx->integrity_iv_size && !rctx->is_write) {
+			memcpy(org_iv, tag_iv, ctx->iv_size);
+		} else {
+			r = ctx->iv_gen_ops->generator(ctx, rctx, subreq, org_iv);
+			if (r < 0)
+				return r;
+			/* Store generated IV in integrity metadata */
+			if (ctx->integrity_iv_size)
+				memcpy(tag_iv, org_iv, ctx->iv_size);
+		}
+		/* Working copy of IV, to be modified in crypto API */
+		memcpy(iv, org_iv, ctx->iv_size);
+	}
+
+	req_aead = &subreq->r.req_aead;
+	aead_request_set_ad(req_aead, sizeof(uint64_t) + ctx->iv_size);
+	if (rctx->is_write) {
+		aead_request_set_crypt(req_aead, subreq->sg_in, subreq->sg_out,
+				       ctx->sector_size, iv);
+		r = crypto_aead_encrypt(req_aead);
+		if (ctx->integrity_tag_size + ctx->integrity_iv_size != ctx->on_disk_tag_size)
+			memset(tag + ctx->integrity_tag_size + ctx->integrity_iv_size, 0,
+			       ctx->on_disk_tag_size - (ctx->integrity_tag_size + ctx->integrity_iv_size));
+	} else {
+		aead_request_set_crypt(req_aead, subreq->sg_in, subreq->sg_out,
+				       ctx->sector_size + ctx->integrity_tag_size, iv);
+		r = crypto_aead_decrypt(req_aead);
+	}
+
+	if (r == -EBADMSG)
+		DMERR_LIMIT("INTEGRITY AEAD ERROR, sector %llu",
+			    (unsigned long long)le64_to_cpu(*sector));
+
+	if (!r && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
+		r = ctx->iv_gen_ops->post(ctx, rctx, subreq, org_iv);
+
+	return r;
+}
+
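+/* Generate the IV for a single sector and run the skcipher operation on the
+ * sub-request.
+ */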
+static int geniv_convert_block_skcipher(struct geniv_ctx *ctx,
+					struct geniv_req_ctx *rctx,
+					struct geniv_subreq *subreq,
+					unsigned int tag_offset)
+{
+	struct scatterlist *sg_in, *sg_out;
+	u8 *iv, *org_iv, *tag_iv;
+	uint64_t *sector;
+	int r = 0;
+	struct scatterlist_iter *iter = &rctx->iter;
+	struct skcipher_request *req;
+	struct skcipher_request *parent_req = rctx->r.req;
+
+	/* Reject unexpected unaligned bio. */
+	if (unlikely(iter->len & (ctx->sector_size - 1)))
+		return -EIO;
+
+	subreq->iv_sector = rctx->cc_sector;
+	if (test_bit(CRYPT_IV_LARGE_SECTORS, &ctx->cipher_flags))
+		subreq->iv_sector >>= ctx->sector_shift;
+
+	*org_tag_of_subreq(ctx, subreq) = tag_offset;
+
+	iv = iv_of_subreq(ctx, subreq);
+	org_iv = org_iv_of_subreq(ctx, subreq);
+	tag_iv = iv_tag_from_subreq(ctx, subreq);
+
+	sector = org_sector_of_subreq(ctx, subreq);
+	*sector = cpu_to_le64(rctx->cc_sector - ctx->iv_offset);
+
+	/* For skcipher we use only the first sg item */
+	sg_in = subreq->sg_in;
+	sg_out = subreq->sg_out;
+
+	sg_init_table(sg_in, 1);
+	sg_set_page(sg_in, sg_page(&parent_req->src[iter->seg_no]),
+			iter->len, parent_req->src[iter->seg_no].offset + iter->done);
+
+	sg_init_table(sg_out, 1);
+	sg_set_page(sg_out, sg_page(&parent_req->dst[iter->seg_no]),
+			iter->len, parent_req->dst[iter->seg_no].offset + iter->done);
+
+	if (ctx->iv_gen_ops) {
+		/* For READs use IV stored in integrity metadata */
+		if (ctx->integrity_iv_size && !rctx->is_write) {
+			memcpy(org_iv, tag_iv, ctx->integrity_iv_size);
+		} else {
+			r = ctx->iv_gen_ops->generator(ctx, rctx, subreq, org_iv);
+			if (r < 0)
+				return r;
+			/* Store generated IV in integrity metadata */
+			if (ctx->integrity_iv_size)
+				memcpy(tag_iv, org_iv, ctx->integrity_iv_size);
+		}
+		/* Working copy of IV, to be modified in crypto API */
+		memcpy(iv, org_iv, ctx->iv_size);
+	}
+
+	req = &subreq->r.req;
+	skcipher_request_set_crypt(req, sg_in, sg_out, ctx->sector_size, iv);
+
+	if (rctx->is_write)
+		r = crypto_skcipher_encrypt(req);
+	else
+		r = crypto_skcipher_decrypt(req);
+
+	if (!r && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
+		r = ctx->iv_gen_ops->post(ctx, rctx, subreq, org_iv);
+
+	return r;
+}
+
+/* Common encrypt/decrypt function for the geniv template ciphers. Before the
+ * crypto operation, it splits the memory segments (in the scatterlist) into
+ * 512-byte sectors. The initialization vector (IV) used is based on a unique
+ * sector number, which is generated here.
+ */
+static int geniv_crypt(struct geniv_ctx *ctx, void *parent_req, bool is_encrypt)
+{
+	struct skcipher_request *req = NULL;
+	struct aead_request *req_aead = NULL;
+	struct geniv_req_ctx *rctx;
+	struct geniv_req_info *rinfo;
+	int i, bytes, cryptlen, ret = 0;
+	unsigned int sectors;
+	unsigned int tag_offset = 0;
+	unsigned int sector_step = ctx->sector_size >> SECTOR_SHIFT;
+	char *str __maybe_unused = is_encrypt ? "encrypt" : "decrypt";
+
+	if (geniv_integrity_aead(ctx)) {
+		req_aead = (struct aead_request *)parent_req;
+		rctx = geniv_aead_req_ctx(req_aead);
+		rctx->r.req_aead = req_aead;
+		rinfo = (struct geniv_req_info *)req_aead->iv;
+	} else {
+		req = (struct skcipher_request *)parent_req;
+		rctx = geniv_skcipher_req_ctx(req);
+		rctx->r.req = req;
+		rinfo = (struct geniv_req_info *)req->iv;
+	}
+
+	/* Instance of 'struct geniv_req_info' is stored in IV ptr */
+	rctx->is_write = is_encrypt;
+	rctx->is_aead_request = geniv_integrity_aead(ctx);
+	rctx->cc_sector = rinfo->cc_sector;
+	rctx->nents = rinfo->nents;
+	rctx->integrity_metadata = rinfo->integrity_metadata;
+	rctx->subreq = NULL;
+	cryptlen = geniv_integrity_aead(ctx) ?
+		   req_aead->cryptlen : req->cryptlen;
+
+	rctx->iter.seg_no = 0;
+	rctx->iter.done = 0;
+	rctx->iter.len = 0;
+
+	DMDEBUG("geniv:%s: starting sector=%d, #segments=%u\n", str,
+		(unsigned int)rctx->cc_sector, rctx->nents);
+
+	if (geniv_integrity_aead(ctx))
+		sectors = geniv_get_sectors(req_aead->src, req_aead->dst, rctx->nents);
+	else
+		sectors = geniv_get_sectors(req->src, req->dst, rctx->nents);
+
+	init_completion(&rctx->restart);
+	atomic_set(&rctx->req_pending, 1);
+
+	for (i = 0; i < sectors; i++) {
+		struct geniv_subreq *subreq;
+
+		if (geniv_integrity_aead(ctx))
+			ret = geniv_alloc_subreq_aead(ctx, rctx, req_aead->base.flags);
+		else
+			ret = geniv_alloc_subreq_skcipher(ctx, rctx, req->base.flags);
+		if (ret)
+			return -ENOMEM;
+
+		subreq = rctx->subreq;
+
+		atomic_inc(&rctx->req_pending);
+
+		if (geniv_integrity_aead(ctx))
+			bytes = geniv_iter_block(req_aead, ctx, rctx);
+		else
+			bytes = geniv_iter_block(req, ctx, rctx);
+
+		if (bytes == 0) {
+			atomic_dec(&rctx->req_pending);
+			break;
+		}
+
+		cryptlen -= bytes;
+
+		if (geniv_integrity_aead(ctx))
+			ret = geniv_convert_block_aead(ctx, rctx, subreq, tag_offset);
+		else
+			ret = geniv_convert_block_skcipher(ctx, rctx, subreq, tag_offset);
+
+		switch (ret) {
+		/*
+		 * The request was queued by a crypto driver
+		 * but the driver request queue is full, let's wait.
+		 */
+		case -EBUSY:
+			wait_for_completion(&rctx->restart);
+			reinit_completion(&rctx->restart);
+			/* fall through */
+		/*
+		 * The request is queued and processed asynchronously,
+		 * completion function geniv_async_done() is called.
+		 */
+		case -EINPROGRESS:
+			/* Setting this to NULL makes the next iteration
+			 * allocate a fresh sub-request.
+			 */
+			rctx->subreq = NULL;
+			rctx->cc_sector += sector_step;
+			tag_offset++;
+			cond_resched();
+			break;
+		/*
+		 * The request was already processed (synchronously).
+		 */
+		case 0:
+			atomic_dec(&rctx->req_pending);
+			rctx->cc_sector += sector_step;
+			tag_offset++;
+			cond_resched();
+			continue;
+
+		/* There was an error while processing the request. */
+		default:
+			atomic_dec(&rctx->req_pending);
+			mempool_free(rctx->subreq, ctx->subreq_pool);
+			atomic_dec(&rctx->req_pending);
+			return ret;
+		}
+	}
+
+	if (rctx->subreq)
+		mempool_free(rctx->subreq, ctx->subreq_pool);
+
+	if (atomic_dec_and_test(&rctx->req_pending))
+		return 0;
+	else
+		return -EINPROGRESS;
+}
+
+static int geniv_skcipher_encrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	return geniv_crypt(ctx, req, true);
+}
+
+static int geniv_skcipher_decrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	return geniv_crypt(ctx, req, false);
+}
+
+static int geniv_aead_encrypt(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
+
+	return geniv_crypt(ctx, req, true);
+}
+
+static int geniv_aead_decrypt(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
+
+	return geniv_crypt(ctx, req, false);
+}
+
+/*
+ * Workaround to parse the cipher algorithm name out of the crypto API spec,
+ * e.g. "cbc(aes)" or "authenc(hmac(sha256),cbc(aes))" both yield "aes".
+ * ctx->cipher is currently used only by ESSIV.
+ * This should probably be done via crypto API calls (once available...).
+ */
+static int geniv_blkdev_cipher(struct geniv_ctx *ctx, bool is_crypto_aead)
+{
+	const char *alg_name = NULL;
+	char *start, *end;
+
+	alg_name = ctx->ciphermode;
+	if (!alg_name)
+		return -EINVAL;
+
+	if (is_crypto_aead) {
+		alg_name = strchr(alg_name, ',');
+		if (!alg_name)
+			alg_name = ctx->ciphermode;
+		alg_name++;
+	}
+
+	start = strchr(alg_name, '(');
+	end = strchr(alg_name, ')');
+
+	if (!start && !end) {
+		ctx->cipher = kstrdup(alg_name, GFP_KERNEL);
+		return ctx->cipher ? 0 : -ENOMEM;
+	}
+
+	if (!start || !end || ++start >= end)
+		return -EINVAL;
+
+	ctx->cipher = kzalloc(end - start + 1, GFP_KERNEL);
+	if (!ctx->cipher)
+		return -ENOMEM;
+
+	strncpy(ctx->cipher, start, end - start);
+
+	return 0;
+}
+
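+/* Common tfm initialization: parse the 'ivmode(ciphermode)' algorithm name,
+ * allocate one instance of the underlying cipher, set the request size and
+ * create the mempool for sub-requests.
+ */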
+static int geniv_init_tfm(void *tfm_tmp, bool is_crypto_aead)
+{
+	struct geniv_ctx *ctx;
+	struct crypto_skcipher *tfm;
+	struct crypto_aead *tfm_aead;
+	unsigned int reqsize;
+	size_t iv_size_padding;
+	char *algname;
+	int psize, ret;
+
+	if (is_crypto_aead) {
+		tfm_aead = (struct crypto_aead *)tfm_tmp;
+		ctx = crypto_aead_ctx(tfm_aead);
+		algname = (char *) crypto_tfm_alg_name(crypto_aead_tfm(tfm_aead));
+	} else {
+		tfm = (struct crypto_skcipher *)tfm_tmp;
+		ctx = crypto_skcipher_ctx(tfm);
+		algname = (char *) crypto_tfm_alg_name(crypto_skcipher_tfm(tfm));
+	}
+
+	ctx->ciphermode = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
+	if (!ctx->ciphermode)
+		return -ENOMEM;
+
+	ctx->algname = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
+	if (!ctx->algname) {
+		ret = -ENOMEM;
+		goto free_ciphermode;
+	}
+
+	strlcpy(ctx->algname, algname, CRYPTO_MAX_ALG_NAME);
+	algname = ctx->algname;
+
+	/* Parse the algorithm name 'ivmode(ciphermode)' */
+	ctx->ivmode = strsep(&algname, "(");
+	strlcpy(ctx->ciphermode, algname, CRYPTO_MAX_ALG_NAME);
+	ctx->ciphermode[strlen(algname) - 1] = '\0';
+
+	DMDEBUG("ciphermode=%s, ivmode=%s\n", ctx->ciphermode, ctx->ivmode);
+
+	/*
+	 * Usually the underlying cipher instances would be spawned here, but
+	 * since the value of tfms_count (which equals the key_count) is not
+	 * known yet, create only one instance and delay the creation of the
+	 * remaining instances of the underlying cipher (e.g. 'cbc(aes)')
+	 * until the setkey operation is invoked.
+	 * The first instance created, ctx->tfm_child, will later be assigned
+	 * as the first element of the array ctx->tfms. Creating at least one
+	 * instance here uncovers any errors earlier than the setkey operation,
+	 * where the remaining instances are created.
+	 */
+	if (is_crypto_aead)
+		ctx->tfm_child.tfm_aead = crypto_alloc_aead(ctx->ciphermode, 0, 0);
+	else
+		ctx->tfm_child.tfm = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
+	if (IS_ERR(ctx->tfm_child.tfm)) {
+		ret = PTR_ERR(ctx->tfm_child.tfm);
+		DMERR("Failed to create cipher %s. err %d\n",
+		      ctx->ciphermode, ret);
+		goto free_algname;
+	}
+
+	/* Setup the current cipher's request structure */
+	if (is_crypto_aead) {
+		reqsize = sizeof(struct geniv_req_ctx) + __alignof__(struct geniv_req_ctx);
+		crypto_aead_set_reqsize(tfm_aead, reqsize);
+
+		ctx->iv_start = sizeof(struct geniv_subreq);
+		ctx->iv_start += crypto_aead_reqsize(ctx->tfm_child.tfm_aead);
+
+		ctx->iv_size = crypto_aead_ivsize(tfm_aead);
+	} else {
+		reqsize = sizeof(struct geniv_req_ctx) + __alignof__(struct geniv_req_ctx);
+		crypto_skcipher_set_reqsize(tfm, reqsize);
+
+		ctx->iv_start = sizeof(struct geniv_subreq);
+		ctx->iv_start += crypto_skcipher_reqsize(ctx->tfm_child.tfm);
+
+		ctx->iv_size = crypto_skcipher_ivsize(tfm);
+	}
+	/* at least a 64 bit sector number should fit in our buffer */
+	if (ctx->iv_size)
+		ctx->iv_size = max(ctx->iv_size,
+				  (unsigned int)(sizeof(u64) / sizeof(u8)));
+
+	if (is_crypto_aead) {
+		if (crypto_aead_alignmask(tfm_aead) < CRYPTO_MINALIGN) {
+			/* Allocate the padding exactly */
+			iv_size_padding = -ctx->iv_start
+					& crypto_aead_alignmask(ctx->tfm_child.tfm_aead);
+		} else {
+			/*
+			 * If the cipher requires greater alignment than kmalloc
+			 * alignment, we don't know the exact position of the
+			 * initialization vector. We must assume worst case.
+			 */
+			iv_size_padding = crypto_aead_alignmask(ctx->tfm_child.tfm_aead);
+		}
+	} else {
+		if (crypto_skcipher_alignmask(tfm) < CRYPTO_MINALIGN) {
+			iv_size_padding = -ctx->iv_start
+					& crypto_skcipher_alignmask(ctx->tfm_child.tfm);
+		} else {
+			iv_size_padding = crypto_skcipher_alignmask(ctx->tfm_child.tfm);
+		}
+	}
+
+	/* create memory pool for sub-request structure
+	 *  ...| IV + padding | original IV | original sec. number | bio tag offset |
+	 */
+	psize = ctx->iv_start + iv_size_padding + ctx->iv_size + ctx->iv_size +
+		sizeof(uint64_t) + sizeof(unsigned int);
+
+	ctx->subreq_pool = mempool_create_kmalloc_pool(MIN_IOS, psize);
+	if (!ctx->subreq_pool) {
+		ret = -ENOMEM;
+		DMERR("Could not allocate crypt sub-request mempool\n");
+		goto free_tfm;
+	}
+
+	ret = geniv_blkdev_cipher(ctx, is_crypto_aead);
+	if (ret < 0) {
+		DMERR("Cannot allocate cipher string\n");
+		goto free_pool;
+	}
+
+	return 0;
+
+free_pool:
+	mempool_destroy(ctx->subreq_pool);
+free_tfm:
+	if (is_crypto_aead)
+		crypto_free_aead(ctx->tfm_child.tfm_aead);
+	else
+		crypto_free_skcipher(ctx->tfm_child.tfm);
+free_algname:
+	kfree(ctx->algname);
+free_ciphermode:
+	kfree(ctx->ciphermode);
+	return ret;
+}
+
+static int geniv_skcipher_init_tfm(struct crypto_skcipher *tfm)
+{
+	return geniv_init_tfm(tfm, 0);
+}
+
+static int geniv_aead_init_tfm(struct crypto_aead *tfm)
+{
+	return geniv_init_tfm(tfm, 1);
+}
+
+static void geniv_exit_tfm(struct geniv_ctx *ctx)
+{
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->dtr)
+		ctx->iv_gen_ops->dtr(ctx);
+
+	mempool_destroy(ctx->subreq_pool);
+	geniv_free_tfms(ctx);
+	kzfree(ctx->ciphermode);
+	kzfree(ctx->algname);
+	kzfree(ctx->cipher);
+}
+
+static void geniv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	geniv_exit_tfm(ctx);
+}
+
+static void geniv_aead_exit_tfm(struct crypto_aead *tfm)
+{
+	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
+
+	geniv_exit_tfm(ctx);
+}
+
+static void geniv_skcipher_free(struct skcipher_instance *inst)
+{
+	struct crypto_skcipher_spawn *spawn = skcipher_instance_ctx(inst);
+
+	crypto_drop_skcipher(spawn);
+	kfree(inst);
+}
+
+static void geniv_aead_free(struct aead_instance *inst)
+{
+	struct crypto_aead_spawn *spawn = aead_instance_ctx(inst);
+
+	crypto_drop_aead(spawn);
+	kfree(inst);
+}
+
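+/* Create an skcipher instance named '<ivmode>(<ciphermode>)' wrapping the
+ * underlying ciphermode, e.g. "essiv(cbc(aes))".
+ */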
+static int geniv_skcipher_create(struct crypto_template *tmpl,
+			struct rtattr **tb, char *algname)
+{
+	struct crypto_attr_type *algt;
+	struct skcipher_instance *inst;
+	struct skcipher_alg *alg;
+	struct crypto_skcipher_spawn *spawn;
+	const char *cipher_name;
+	int err;
+
+	algt = crypto_get_attr_type(tb);
+	if (IS_ERR(algt))
+		return PTR_ERR(algt);
+
+	cipher_name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(cipher_name))
+		return PTR_ERR(cipher_name);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+
+	spawn = skcipher_instance_ctx(inst);
+
+	crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
+	err = crypto_grab_skcipher(spawn, cipher_name, 0,
+				    crypto_requires_sync(algt->type,
+							 algt->mask));
+
+	if (err)
+		goto err_free_inst;
+
+	alg = crypto_spawn_skcipher_alg(spawn);
+
+	err = -EINVAL;
+
+	/* Only support blocks of size which is of a power of 2 */
+	if (!is_power_of_2(alg->base.cra_blocksize))
+		goto err_drop_spawn;
+
+	/* algname: essiv, base.cra_name: cbc(aes) */
+	err = -ENAMETOOLONG;
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
+		     algname, alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
+		goto err_drop_spawn;
+	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		     "%s(%s)", algname, alg->base.cra_driver_name) >=
+	    CRYPTO_MAX_ALG_NAME)
+		goto err_drop_spawn;
+
+	inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
+	inst->alg.base.cra_priority = alg->base.cra_priority;
+	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
+	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
+	inst->alg.ivsize = alg->base.cra_blocksize;
+	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
+	inst->alg.min_keysize = sizeof(struct geniv_key_info);
+	inst->alg.max_keysize = sizeof(struct geniv_key_info);
+
+	inst->alg.setkey = geniv_skcipher_setkey;
+	inst->alg.encrypt = geniv_skcipher_encrypt;
+	inst->alg.decrypt = geniv_skcipher_decrypt;
+
+	inst->alg.base.cra_ctxsize = sizeof(struct geniv_ctx);
+
+	inst->alg.init = geniv_skcipher_init_tfm;
+	inst->alg.exit = geniv_skcipher_exit_tfm;
+
+	inst->free = geniv_skcipher_free;
+
+	err = skcipher_register_instance(tmpl, inst);
+	if (err)
+		goto err_drop_spawn;
+
+out:
+	return err;
+
+err_drop_spawn:
+	crypto_drop_skcipher(spawn);
+err_free_inst:
+	kfree(inst);
+	goto out;
+}
+
+static int geniv_aead_create(struct crypto_template *tmpl,
+			struct rtattr **tb, char *algname)
+{
+	struct crypto_attr_type *algt;
+	struct aead_instance *inst;
+	struct aead_alg *alg;
+	struct crypto_aead_spawn *spawn;
+	const char *cipher_name;
+	int err;
+
+	algt = crypto_get_attr_type(tb);
+	if (IS_ERR(algt))
+		return PTR_ERR(algt);
+
+	cipher_name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(cipher_name))
+		return PTR_ERR(cipher_name);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+
+	spawn = aead_instance_ctx(inst);
+
+	crypto_set_aead_spawn(spawn, aead_crypto_instance(inst));
+	err = crypto_grab_aead(spawn, cipher_name, 0,
+				    crypto_requires_sync(algt->type,
+							 algt->mask));
+	if (err)
+		goto err_free_inst;
+
+	alg = crypto_spawn_aead_alg(spawn);
+
+	/* Only support blocks of size which is of a power of 2 */
+	if (!is_power_of_2(alg->base.cra_blocksize)) {
+		err = -EINVAL;
+		goto err_drop_spawn;
+	}
+
+	/* algname: essiv, base.cra_name: cbc(aes) */
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
+		     algname, alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME) {
+		err = -ENAMETOOLONG;
+		goto err_drop_spawn;
+	}
+
+	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		     "%s(%s)", algname, alg->base.cra_driver_name) >=
+	    CRYPTO_MAX_ALG_NAME) {
+		err = -ENAMETOOLONG;
+		goto err_drop_spawn;
+	}
+
+	inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
+	inst->alg.base.cra_priority = alg->base.cra_priority;
+	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
+	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
+	inst->alg.ivsize = crypto_aead_alg_ivsize(alg);
+	inst->alg.chunksize = crypto_aead_alg_chunksize(alg);
+	inst->alg.maxauthsize = crypto_aead_alg_maxauthsize(alg);
+
+	inst->alg.setkey = geniv_aead_setkey;
+	inst->alg.encrypt = geniv_aead_encrypt;
+	inst->alg.decrypt = geniv_aead_decrypt;
+
+	inst->alg.base.cra_ctxsize = sizeof(struct geniv_ctx);
+
+	inst->alg.init = geniv_aead_init_tfm;
+	inst->alg.exit = geniv_aead_exit_tfm;
+
+	inst->free = geniv_aead_free;
+
+	err = aead_register_instance(tmpl, inst);
+	if (err)
+		goto err_drop_spawn;
+
+	return 0;
+
+err_drop_spawn:
+	crypto_drop_aead(spawn);
+err_free_inst:
+	kfree(inst);
+	return err;
+}
+
+static int geniv_create(struct crypto_template *tmpl,
+			struct rtattr **tb, char *algname)
+{
+	if (!crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER))
+		return geniv_skcipher_create(tmpl, tb, algname);
+	else if (!crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD))
+		return geniv_aead_create(tmpl, tb, algname);
+	else
+		return -EINVAL;
+}
+
+static int geniv_template_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, tmpl->name);
+}
+
+#define DEFINE_CRYPTO_TEMPLATE(type) \
+	{ .name = type, \
+	.create = geniv_template_create, \
+	.module = THIS_MODULE, },
+
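+/* One template is registered per supported IV mode; the template name becomes
+ * the 'ivmode' prefix parsed in geniv_init_tfm().
+ */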
+static struct crypto_template geniv_tmpl[IV_TYPE_NUM] = {
+	DEFINE_CRYPTO_TEMPLATE("plain")
+	DEFINE_CRYPTO_TEMPLATE("plain64")
+	DEFINE_CRYPTO_TEMPLATE("essiv")
+	DEFINE_CRYPTO_TEMPLATE("benbi")
+	DEFINE_CRYPTO_TEMPLATE("null")
+	DEFINE_CRYPTO_TEMPLATE("lmk")
+	DEFINE_CRYPTO_TEMPLATE("tcw")
+	DEFINE_CRYPTO_TEMPLATE("random")
+};
+
+static int __init geniv_init(void)
+{
+	return crypto_register_template_array(geniv_tmpl, IV_TYPE_NUM);
+}
+
+static void __exit geniv_exit(void)
+{
+	crypto_unregister_template_array(geniv_tmpl, IV_TYPE_NUM);
+}
+
+module_init(geniv_init);
+module_exit(geniv_exit);
+
+MODULE_AUTHOR("Xiongfeng Wang <xiongfeng.wang@linaro.org>");
+MODULE_DESCRIPTION(DM_NAME " IV Generation Templates");
+MODULE_LICENSE("GPL");
diff --git a/include/crypto/geniv.h b/include/crypto/geniv.h
new file mode 100644
index 0000000..d8084fc
--- /dev/null
+++ b/include/crypto/geniv.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * geniv.h: common interface for IV generation algorithms
+ *
+ * Copyright (C) 2018, Linaro
+ *
+ * This file defines the data structures the user should pass to the templates.
+ */
+
+#ifndef _CRYPTO_GENIV_H
+#define _CRYPTO_GENIV_H
+
+#include <linux/types.h>
+
+enum cipher_flags {
+	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cipher */
+	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
+};
+
+enum setkey_op {
+	SETKEY_OP_INIT,
+	SETKEY_OP_SET,
+	SETKEY_OP_WIPE,
+};
+
+struct geniv_key_info {
+	enum setkey_op keyop;
+	unsigned int tfms_count;
+	u8 *key;
+	char *ivopts;
+	sector_t iv_offset;
+	unsigned long cipher_flags;
+
+	unsigned short int sector_size;
+	unsigned int key_size;
+	unsigned int key_parts;
+	unsigned int key_mac_size;
+	unsigned int on_disk_tag_size;
+};
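+
+/*
+ * Illustrative example (hypothetical caller code, not enforced by this
+ * header): the key material and its layout are passed to setkey wrapped
+ * in a struct geniv_key_info, e.g.
+ *
+ *	struct geniv_key_info kinfo = {
+ *		.keyop = SETKEY_OP_INIT,
+ *		.tfms_count = 1,
+ *		.key = key_buf,
+ *		.key_size = key_len,
+ *	};
+ *	crypto_skcipher_setkey(tfm, (const u8 *)&kinfo, sizeof(kinfo));
+ */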
+
+struct geniv_req_info {
+	sector_t cc_sector;
+	unsigned int nents;
+	u8 *integrity_metadata;
+};
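+
+/*
+ * Illustrative example (hypothetical caller code): the per-request
+ * parameters are passed by pointing the request IV at a struct
+ * geniv_req_info, e.g.
+ *
+ *	struct geniv_req_info rinfo = {
+ *		.cc_sector = sector,
+ *		.nents = nents,
+ *		.integrity_metadata = tag_buf,
+ *	};
+ *	skcipher_request_set_crypt(req, sg_in, sg_out, len, &rinfo);
+ */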
+
+#endif
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 5/5] dm-crypt: modify dm-crypt to rely on IV generation templates
  2018-07-18  7:30 [PATCH 0/5] crypto: add IV generation templates Xiongfeng Wang
                   ` (3 preceding siblings ...)
  2018-07-18  7:30 ` [PATCH 4/5] crypto: Add IV generation templates Xiongfeng Wang
@ 2018-07-18  7:30 ` Xiongfeng Wang
  2018-07-18 10:59 ` [PATCH 0/5] crypto: add " Arnd Bergmann
  5 siblings, 0 replies; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-18  7:30 UTC (permalink / raw)
  To: agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, wangxiongfeng2, broonie, arnd, jonathan.cameron

This patch removes the IV generation algorithms from dm-crypt.c and
relies on the IV generation templates for generating IVs. We modify the
dm layer to send a whole 'bio' (as defined in the block layer) at a
time. Each bio contains an in-memory representation of physically
contiguous disk blocks. The dm layer sets up a chained scatterlist of
these blocks, split into physically contiguous segments in memory, so
that DMA can be performed.
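
For illustration only, the mapping from a bio to the scatterlist that is
handed to the geniv template could look roughly like the sketch below.
This is not part of the patch; the helper name and the fixed-size
scatterlist are assumptions made for the example.

	#include <linux/bio.h>
	#include <linux/scatterlist.h>

	/* Map each bio segment to one scatterlist entry; returns 'nents'. */
	static unsigned int example_bio_to_sg(struct bio *bio,
					      struct scatterlist *sg,
					      unsigned int max_ents)
	{
		struct bio_vec bvec;
		struct bvec_iter iter;
		unsigned int i = 0;

		sg_init_table(sg, max_ents);
		bio_for_each_segment(bvec, bio, iter) {
			if (i >= max_ents)
				return 0;	/* too many segments for this sketch */
			sg_set_page(&sg[i++], bvec.bv_page,
				    bvec.bv_len, bvec.bv_offset);
		}
		if (i)
			sg_mark_end(&sg[i - 1]);
		return i;
	}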

This patch is based on the patchset originally started by
Binoy Jayan <binoy.jayan@linaro.org>
( crypto: Add IV generation algorithms
https://patchwork.kernel.org/patch/9803469/ )

Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@linaro.org>
---
 drivers/md/Kconfig    |    1 +
 drivers/md/dm-crypt.c | 1697 ++++++++++---------------------------------------
 2 files changed, 345 insertions(+), 1353 deletions(-)

diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 8b8c123..51a3451 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -281,6 +281,7 @@ config DM_CRYPT
 	depends on BLK_DEV_DM
 	select CRYPTO
 	select CRYPTO_CBC
+	select CRYPTO_GENIV
 	---help---
 	  This device-mapper target allows you to create a device that
 	  transparently encrypts the data on it. You'll need to activate
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index b61b069..3761a43 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -33,13 +33,20 @@
 #include <crypto/skcipher.h>
 #include <crypto/aead.h>
 #include <crypto/authenc.h>
+#include <crypto/geniv.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/skcipher.h>
 #include <linux/rtnetlink.h> /* for struct rtattr and RTA macros only */
 #include <keys/user-type.h>
-
 #include <linux/device-mapper.h>
+#include <linux/backing-dev.h>
+#include <linux/log2.h>
 
 #define DM_MSG_PREFIX "crypt"
 
+struct geniv_ctx;
+struct geniv_req_ctx;
+
 /*
  * context holding the current state of a multi-part conversion
  */
@@ -55,7 +62,6 @@ struct convert_context {
 		struct skcipher_request *req;
 		struct aead_request *req_aead;
 	} r;
-
 };
 
 /*
@@ -79,47 +85,13 @@ struct dm_crypt_io {
 
 struct dm_crypt_request {
 	struct convert_context *ctx;
-	struct scatterlist sg_in[4];
-	struct scatterlist sg_out[4];
+	struct scatterlist *sg_in;
+	struct scatterlist *sg_out;
 	sector_t iv_sector;
 };
 
 struct crypt_config;
 
-struct crypt_iv_operations {
-	int (*ctr)(struct crypt_config *cc, struct dm_target *ti,
-		   const char *opts);
-	void (*dtr)(struct crypt_config *cc);
-	int (*init)(struct crypt_config *cc);
-	int (*wipe)(struct crypt_config *cc);
-	int (*generator)(struct crypt_config *cc, u8 *iv,
-			 struct dm_crypt_request *dmreq);
-	int (*post)(struct crypt_config *cc, u8 *iv,
-		    struct dm_crypt_request *dmreq);
-};
-
-struct iv_essiv_private {
-	struct crypto_ahash *hash_tfm;
-	u8 *salt;
-};
-
-struct iv_benbi_private {
-	int shift;
-};
-
-#define LMK_SEED_SIZE 64 /* hash + 0 */
-struct iv_lmk_private {
-	struct crypto_shash *hash_tfm;
-	u8 *seed;
-};
-
-#define TCW_WHITENING_SIZE 16
-struct iv_tcw_private {
-	struct crypto_shash *crc32_tfm;
-	u8 *iv_seed;
-	u8 *whitening;
-};
-
 /*
  * Crypt: maps a linear range of a block device
  * and encrypts / decrypts at the same time.
@@ -127,11 +99,6 @@ struct iv_tcw_private {
 enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
 	     DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
 
-enum cipher_flags {
-	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cihper */
-	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
-};
-
 /*
  * The fields in here must be read only after initialization.
  */
@@ -148,18 +115,10 @@ struct crypt_config {
 	struct task_struct *write_thread;
 	struct rb_root write_tree;
 
-	char *cipher;
 	char *cipher_string;
 	char *cipher_auth;
 	char *key_string;
 
-	const struct crypt_iv_operations *iv_gen_ops;
-	union {
-		struct iv_essiv_private essiv;
-		struct iv_benbi_private benbi;
-		struct iv_lmk_private lmk;
-		struct iv_tcw_private tcw;
-	} iv_gen_private;
 	sector_t iv_offset;
 	unsigned int iv_size;
 	unsigned short int sector_size;
@@ -168,10 +127,10 @@ struct crypt_config {
 	/* ESSIV: struct crypto_cipher *essiv_tfm */
 	void *iv_private;
 	union {
-		struct crypto_skcipher **tfms;
-		struct crypto_aead **tfms_aead;
+		struct crypto_skcipher *tfm;
+		struct crypto_aead *tfm_aead;
 	} cipher_tfm;
-	unsigned tfms_count;
+	unsigned int tfms_count;
 	unsigned long cipher_flags;
 
 	/*
@@ -213,13 +172,16 @@ struct crypt_config {
 	struct bio_set bs;
 	struct mutex bio_alloc_lock;
 
-	u8 *authenc_key; /* space for keys in authenc() format (if used) */
 	u8 key[0];
 };
 
-#define MIN_IOS		64
-#define MAX_TAG_SIZE	480
-#define POOL_ENTRY_SIZE	512
+#define MAX_SG_LIST     (BIO_MAX_PAGES * 8)
+#define MIN_IOS         64
+#define MAX_TAG_SIZE    480
+#define POOL_ENTRY_SIZE 512
+
+static void clone_init(struct dm_crypt_io *, struct bio *);
+static void kcryptd_queue_crypt(struct dm_crypt_io *io);
 
 static DEFINE_SPINLOCK(dm_crypt_clients_lock);
 static unsigned dm_crypt_clients_n = 0;
@@ -229,677 +191,21 @@ struct crypt_config {
 
 static void clone_init(struct dm_crypt_io *, struct bio *);
 static void kcryptd_queue_crypt(struct dm_crypt_io *io);
-static struct scatterlist *crypt_get_sg_data(struct crypt_config *cc,
-					     struct scatterlist *sg);
 
 /*
  * Use this to access cipher attributes that are independent of the key.
  */
 static struct crypto_skcipher *any_tfm(struct crypt_config *cc)
 {
-	return cc->cipher_tfm.tfms[0];
+	return cc->cipher_tfm.tfm;
 }
 
 static struct crypto_aead *any_tfm_aead(struct crypt_config *cc)
 {
-	return cc->cipher_tfm.tfms_aead[0];
+	return cc->cipher_tfm.tfm_aead;
 }
 
 /*
- * Different IV generation algorithms:
- *
- * plain: the initial vector is the 32-bit little-endian version of the sector
- *        number, padded with zeros if necessary.
- *
- * plain64: the initial vector is the 64-bit little-endian version of the sector
- *        number, padded with zeros if necessary.
- *
- * plain64be: the initial vector is the 64-bit big-endian version of the sector
- *        number, padded with zeros if necessary.
- *
- * essiv: "encrypted sector|salt initial vector", the sector number is
- *        encrypted with the bulk cipher using a salt as key. The salt
- *        should be derived from the bulk cipher's key via hashing.
- *
- * benbi: the 64-bit "big-endian 'narrow block'-count", starting at 1
- *        (needed for LRW-32-AES and possible other narrow block modes)
- *
- * null: the initial vector is always zero.  Provides compatibility with
- *       obsolete loop_fish2 devices.  Do not use for new devices.
- *
- * lmk:  Compatible implementation of the block chaining mode used
- *       by the Loop-AES block device encryption system
- *       designed by Jari Ruusu. See http://loop-aes.sourceforge.net/
- *       It operates on full 512 byte sectors and uses CBC
- *       with an IV derived from the sector number, the data and
- *       optionally extra IV seed.
- *       This means that after decryption the first block
- *       of sector must be tweaked according to decrypted data.
- *       Loop-AES can use three encryption schemes:
- *         version 1: is plain aes-cbc mode
- *         version 2: uses 64 multikey scheme with lmk IV generator
- *         version 3: the same as version 2 with additional IV seed
- *                   (it uses 65 keys, last key is used as IV seed)
- *
- * tcw:  Compatible implementation of the block chaining mode used
- *       by the TrueCrypt device encryption system (prior to version 4.1).
- *       For more info see: https://gitlab.com/cryptsetup/cryptsetup/wikis/TrueCryptOnDiskFormat
- *       It operates on full 512 byte sectors and uses CBC
- *       with an IV derived from initial key and the sector number.
- *       In addition, whitening value is applied on every sector, whitening
- *       is calculated from initial key, sector number and mixed using CRC32.
- *       Note that this encryption scheme is vulnerable to watermarking attacks
- *       and should be used for old compatible containers access only.
- *
- * plumb: unimplemented, see:
- * http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
- */
-
-static int crypt_iv_plain_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
-{
-	memset(iv, 0, cc->iv_size);
-	*(__le32 *)iv = cpu_to_le32(dmreq->iv_sector & 0xffffffff);
-
-	return 0;
-}
-
-static int crypt_iv_plain64_gen(struct crypt_config *cc, u8 *iv,
-				struct dm_crypt_request *dmreq)
-{
-	memset(iv, 0, cc->iv_size);
-	*(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
-
-	return 0;
-}
-
-static int crypt_iv_plain64be_gen(struct crypt_config *cc, u8 *iv,
-				  struct dm_crypt_request *dmreq)
-{
-	memset(iv, 0, cc->iv_size);
-	/* iv_size is at least of size u64; usually it is 16 bytes */
-	*(__be64 *)&iv[cc->iv_size - sizeof(u64)] = cpu_to_be64(dmreq->iv_sector);
-
-	return 0;
-}
-
-/* Initialise ESSIV - compute salt but no local memory allocations */
-static int crypt_iv_essiv_init(struct crypt_config *cc)
-{
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
-	struct scatterlist sg;
-	struct crypto_cipher *essiv_tfm;
-	int err;
-
-	sg_init_one(&sg, cc->key, cc->key_size);
-	ahash_request_set_tfm(req, essiv->hash_tfm);
-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-	ahash_request_set_crypt(req, &sg, essiv->salt, cc->key_size);
-
-	err = crypto_ahash_digest(req);
-	ahash_request_zero(req);
-	if (err)
-		return err;
-
-	essiv_tfm = cc->iv_private;
-
-	err = crypto_cipher_setkey(essiv_tfm, essiv->salt,
-			    crypto_ahash_digestsize(essiv->hash_tfm));
-	if (err)
-		return err;
-
-	return 0;
-}
-
-/* Wipe salt and reset key derived from volume key */
-static int crypt_iv_essiv_wipe(struct crypt_config *cc)
-{
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	unsigned salt_size = crypto_ahash_digestsize(essiv->hash_tfm);
-	struct crypto_cipher *essiv_tfm;
-	int r, err = 0;
-
-	memset(essiv->salt, 0, salt_size);
-
-	essiv_tfm = cc->iv_private;
-	r = crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
-	if (r)
-		err = r;
-
-	return err;
-}
-
-/* Allocate the cipher for ESSIV */
-static struct crypto_cipher *alloc_essiv_cipher(struct crypt_config *cc,
-						struct dm_target *ti,
-						const u8 *salt,
-						unsigned int saltsize)
-{
-	struct crypto_cipher *essiv_tfm;
-	int err;
-
-	/* Setup the essiv_tfm with the given salt */
-	essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(essiv_tfm)) {
-		ti->error = "Error allocating crypto tfm for ESSIV";
-		return essiv_tfm;
-	}
-
-	if (crypto_cipher_blocksize(essiv_tfm) != cc->iv_size) {
-		ti->error = "Block size of ESSIV cipher does "
-			    "not match IV size of block cipher";
-		crypto_free_cipher(essiv_tfm);
-		return ERR_PTR(-EINVAL);
-	}
-
-	err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
-	if (err) {
-		ti->error = "Failed to set key for ESSIV cipher";
-		crypto_free_cipher(essiv_tfm);
-		return ERR_PTR(err);
-	}
-
-	return essiv_tfm;
-}
-
-static void crypt_iv_essiv_dtr(struct crypt_config *cc)
-{
-	struct crypto_cipher *essiv_tfm;
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-
-	crypto_free_ahash(essiv->hash_tfm);
-	essiv->hash_tfm = NULL;
-
-	kzfree(essiv->salt);
-	essiv->salt = NULL;
-
-	essiv_tfm = cc->iv_private;
-
-	if (essiv_tfm)
-		crypto_free_cipher(essiv_tfm);
-
-	cc->iv_private = NULL;
-}
-
-static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
-			      const char *opts)
-{
-	struct crypto_cipher *essiv_tfm = NULL;
-	struct crypto_ahash *hash_tfm = NULL;
-	u8 *salt = NULL;
-	int err;
-
-	if (!opts) {
-		ti->error = "Digest algorithm missing for ESSIV mode";
-		return -EINVAL;
-	}
-
-	/* Allocate hash algorithm */
-	hash_tfm = crypto_alloc_ahash(opts, 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(hash_tfm)) {
-		ti->error = "Error initializing ESSIV hash";
-		err = PTR_ERR(hash_tfm);
-		goto bad;
-	}
-
-	salt = kzalloc(crypto_ahash_digestsize(hash_tfm), GFP_KERNEL);
-	if (!salt) {
-		ti->error = "Error kmallocing salt storage in ESSIV";
-		err = -ENOMEM;
-		goto bad;
-	}
-
-	cc->iv_gen_private.essiv.salt = salt;
-	cc->iv_gen_private.essiv.hash_tfm = hash_tfm;
-
-	essiv_tfm = alloc_essiv_cipher(cc, ti, salt,
-				       crypto_ahash_digestsize(hash_tfm));
-	if (IS_ERR(essiv_tfm)) {
-		crypt_iv_essiv_dtr(cc);
-		return PTR_ERR(essiv_tfm);
-	}
-	cc->iv_private = essiv_tfm;
-
-	return 0;
-
-bad:
-	if (hash_tfm && !IS_ERR(hash_tfm))
-		crypto_free_ahash(hash_tfm);
-	kfree(salt);
-	return err;
-}
-
-static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
-{
-	struct crypto_cipher *essiv_tfm = cc->iv_private;
-
-	memset(iv, 0, cc->iv_size);
-	*(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
-	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
-
-	return 0;
-}
-
-static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti,
-			      const char *opts)
-{
-	unsigned bs = crypto_skcipher_blocksize(any_tfm(cc));
-	int log = ilog2(bs);
-
-	/* we need to calculate how far we must shift the sector count
-	 * to get the cipher block count, we use this shift in _gen */
-
-	if (1 << log != bs) {
-		ti->error = "cypher blocksize is not a power of 2";
-		return -EINVAL;
-	}
-
-	if (log > 9) {
-		ti->error = "cypher blocksize is > 512";
-		return -EINVAL;
-	}
-
-	cc->iv_gen_private.benbi.shift = 9 - log;
-
-	return 0;
-}
-
-static void crypt_iv_benbi_dtr(struct crypt_config *cc)
-{
-}
-
-static int crypt_iv_benbi_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
-{
-	__be64 val;
-
-	memset(iv, 0, cc->iv_size - sizeof(u64)); /* rest is cleared below */
-
-	val = cpu_to_be64(((u64)dmreq->iv_sector << cc->iv_gen_private.benbi.shift) + 1);
-	put_unaligned(val, (__be64 *)(iv + cc->iv_size - sizeof(u64)));
-
-	return 0;
-}
-
-static int crypt_iv_null_gen(struct crypt_config *cc, u8 *iv,
-			     struct dm_crypt_request *dmreq)
-{
-	memset(iv, 0, cc->iv_size);
-
-	return 0;
-}
-
-static void crypt_iv_lmk_dtr(struct crypt_config *cc)
-{
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-
-	if (lmk->hash_tfm && !IS_ERR(lmk->hash_tfm))
-		crypto_free_shash(lmk->hash_tfm);
-	lmk->hash_tfm = NULL;
-
-	kzfree(lmk->seed);
-	lmk->seed = NULL;
-}
-
-static int crypt_iv_lmk_ctr(struct crypt_config *cc, struct dm_target *ti,
-			    const char *opts)
-{
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-
-	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
-		ti->error = "Unsupported sector size for LMK";
-		return -EINVAL;
-	}
-
-	lmk->hash_tfm = crypto_alloc_shash("md5", 0, 0);
-	if (IS_ERR(lmk->hash_tfm)) {
-		ti->error = "Error initializing LMK hash";
-		return PTR_ERR(lmk->hash_tfm);
-	}
-
-	/* No seed in LMK version 2 */
-	if (cc->key_parts == cc->tfms_count) {
-		lmk->seed = NULL;
-		return 0;
-	}
-
-	lmk->seed = kzalloc(LMK_SEED_SIZE, GFP_KERNEL);
-	if (!lmk->seed) {
-		crypt_iv_lmk_dtr(cc);
-		ti->error = "Error kmallocing seed storage in LMK";
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-static int crypt_iv_lmk_init(struct crypt_config *cc)
-{
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-	int subkey_size = cc->key_size / cc->key_parts;
-
-	/* LMK seed is on the position of LMK_KEYS + 1 key */
-	if (lmk->seed)
-		memcpy(lmk->seed, cc->key + (cc->tfms_count * subkey_size),
-		       crypto_shash_digestsize(lmk->hash_tfm));
-
-	return 0;
-}
-
-static int crypt_iv_lmk_wipe(struct crypt_config *cc)
-{
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-
-	if (lmk->seed)
-		memset(lmk->seed, 0, LMK_SEED_SIZE);
-
-	return 0;
-}
-
-static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
-			    struct dm_crypt_request *dmreq,
-			    u8 *data)
-{
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
-	struct md5_state md5state;
-	__le32 buf[4];
-	int i, r;
-
-	desc->tfm = lmk->hash_tfm;
-	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
-
-	r = crypto_shash_init(desc);
-	if (r)
-		return r;
-
-	if (lmk->seed) {
-		r = crypto_shash_update(desc, lmk->seed, LMK_SEED_SIZE);
-		if (r)
-			return r;
-	}
-
-	/* Sector is always 512B, block size 16, add data of blocks 1-31 */
-	r = crypto_shash_update(desc, data + 16, 16 * 31);
-	if (r)
-		return r;
-
-	/* Sector is cropped to 56 bits here */
-	buf[0] = cpu_to_le32(dmreq->iv_sector & 0xFFFFFFFF);
-	buf[1] = cpu_to_le32((((u64)dmreq->iv_sector >> 32) & 0x00FFFFFF) | 0x80000000);
-	buf[2] = cpu_to_le32(4024);
-	buf[3] = 0;
-	r = crypto_shash_update(desc, (u8 *)buf, sizeof(buf));
-	if (r)
-		return r;
-
-	/* No MD5 padding here */
-	r = crypto_shash_export(desc, &md5state);
-	if (r)
-		return r;
-
-	for (i = 0; i < MD5_HASH_WORDS; i++)
-		__cpu_to_le32s(&md5state.hash[i]);
-	memcpy(iv, &md5state.hash, cc->iv_size);
-
-	return 0;
-}
-
-static int crypt_iv_lmk_gen(struct crypt_config *cc, u8 *iv,
-			    struct dm_crypt_request *dmreq)
-{
-	struct scatterlist *sg;
-	u8 *src;
-	int r = 0;
-
-	if (bio_data_dir(dmreq->ctx->bio_in) == WRITE) {
-		sg = crypt_get_sg_data(cc, dmreq->sg_in);
-		src = kmap_atomic(sg_page(sg));
-		r = crypt_iv_lmk_one(cc, iv, dmreq, src + sg->offset);
-		kunmap_atomic(src);
-	} else
-		memset(iv, 0, cc->iv_size);
-
-	return r;
-}
-
-static int crypt_iv_lmk_post(struct crypt_config *cc, u8 *iv,
-			     struct dm_crypt_request *dmreq)
-{
-	struct scatterlist *sg;
-	u8 *dst;
-	int r;
-
-	if (bio_data_dir(dmreq->ctx->bio_in) == WRITE)
-		return 0;
-
-	sg = crypt_get_sg_data(cc, dmreq->sg_out);
-	dst = kmap_atomic(sg_page(sg));
-	r = crypt_iv_lmk_one(cc, iv, dmreq, dst + sg->offset);
-
-	/* Tweak the first block of plaintext sector */
-	if (!r)
-		crypto_xor(dst + sg->offset, iv, cc->iv_size);
-
-	kunmap_atomic(dst);
-	return r;
-}
-
-static void crypt_iv_tcw_dtr(struct crypt_config *cc)
-{
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-
-	kzfree(tcw->iv_seed);
-	tcw->iv_seed = NULL;
-	kzfree(tcw->whitening);
-	tcw->whitening = NULL;
-
-	if (tcw->crc32_tfm && !IS_ERR(tcw->crc32_tfm))
-		crypto_free_shash(tcw->crc32_tfm);
-	tcw->crc32_tfm = NULL;
-}
-
-static int crypt_iv_tcw_ctr(struct crypt_config *cc, struct dm_target *ti,
-			    const char *opts)
-{
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-
-	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
-		ti->error = "Unsupported sector size for TCW";
-		return -EINVAL;
-	}
-
-	if (cc->key_size <= (cc->iv_size + TCW_WHITENING_SIZE)) {
-		ti->error = "Wrong key size for TCW";
-		return -EINVAL;
-	}
-
-	tcw->crc32_tfm = crypto_alloc_shash("crc32", 0, 0);
-	if (IS_ERR(tcw->crc32_tfm)) {
-		ti->error = "Error initializing CRC32 in TCW";
-		return PTR_ERR(tcw->crc32_tfm);
-	}
-
-	tcw->iv_seed = kzalloc(cc->iv_size, GFP_KERNEL);
-	tcw->whitening = kzalloc(TCW_WHITENING_SIZE, GFP_KERNEL);
-	if (!tcw->iv_seed || !tcw->whitening) {
-		crypt_iv_tcw_dtr(cc);
-		ti->error = "Error allocating seed storage in TCW";
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-static int crypt_iv_tcw_init(struct crypt_config *cc)
-{
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-	int key_offset = cc->key_size - cc->iv_size - TCW_WHITENING_SIZE;
-
-	memcpy(tcw->iv_seed, &cc->key[key_offset], cc->iv_size);
-	memcpy(tcw->whitening, &cc->key[key_offset + cc->iv_size],
-	       TCW_WHITENING_SIZE);
-
-	return 0;
-}
-
-static int crypt_iv_tcw_wipe(struct crypt_config *cc)
-{
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-
-	memset(tcw->iv_seed, 0, cc->iv_size);
-	memset(tcw->whitening, 0, TCW_WHITENING_SIZE);
-
-	return 0;
-}
-
-static int crypt_iv_tcw_whitening(struct crypt_config *cc,
-				  struct dm_crypt_request *dmreq,
-				  u8 *data)
-{
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-	__le64 sector = cpu_to_le64(dmreq->iv_sector);
-	u8 buf[TCW_WHITENING_SIZE];
-	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
-	int i, r;
-
-	/* xor whitening with sector number */
-	crypto_xor_cpy(buf, tcw->whitening, (u8 *)&sector, 8);
-	crypto_xor_cpy(&buf[8], tcw->whitening + 8, (u8 *)&sector, 8);
-
-	/* calculate crc32 for every 32bit part and xor it */
-	desc->tfm = tcw->crc32_tfm;
-	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
-	for (i = 0; i < 4; i++) {
-		r = crypto_shash_init(desc);
-		if (r)
-			goto out;
-		r = crypto_shash_update(desc, &buf[i * 4], 4);
-		if (r)
-			goto out;
-		r = crypto_shash_final(desc, &buf[i * 4]);
-		if (r)
-			goto out;
-	}
-	crypto_xor(&buf[0], &buf[12], 4);
-	crypto_xor(&buf[4], &buf[8], 4);
-
-	/* apply whitening (8 bytes) to whole sector */
-	for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++)
-		crypto_xor(data + i * 8, buf, 8);
-out:
-	memzero_explicit(buf, sizeof(buf));
-	return r;
-}
-
-static int crypt_iv_tcw_gen(struct crypt_config *cc, u8 *iv,
-			    struct dm_crypt_request *dmreq)
-{
-	struct scatterlist *sg;
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-	__le64 sector = cpu_to_le64(dmreq->iv_sector);
-	u8 *src;
-	int r = 0;
-
-	/* Remove whitening from ciphertext */
-	if (bio_data_dir(dmreq->ctx->bio_in) != WRITE) {
-		sg = crypt_get_sg_data(cc, dmreq->sg_in);
-		src = kmap_atomic(sg_page(sg));
-		r = crypt_iv_tcw_whitening(cc, dmreq, src + sg->offset);
-		kunmap_atomic(src);
-	}
-
-	/* Calculate IV */
-	crypto_xor_cpy(iv, tcw->iv_seed, (u8 *)&sector, 8);
-	if (cc->iv_size > 8)
-		crypto_xor_cpy(&iv[8], tcw->iv_seed + 8, (u8 *)&sector,
-			       cc->iv_size - 8);
-
-	return r;
-}
-
-static int crypt_iv_tcw_post(struct crypt_config *cc, u8 *iv,
-			     struct dm_crypt_request *dmreq)
-{
-	struct scatterlist *sg;
-	u8 *dst;
-	int r;
-
-	if (bio_data_dir(dmreq->ctx->bio_in) != WRITE)
-		return 0;
-
-	/* Apply whitening on ciphertext */
-	sg = crypt_get_sg_data(cc, dmreq->sg_out);
-	dst = kmap_atomic(sg_page(sg));
-	r = crypt_iv_tcw_whitening(cc, dmreq, dst + sg->offset);
-	kunmap_atomic(dst);
-
-	return r;
-}
-
-static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv,
-				struct dm_crypt_request *dmreq)
-{
-	/* Used only for writes, there must be an additional space to store IV */
-	get_random_bytes(iv, cc->iv_size);
-	return 0;
-}
-
-static const struct crypt_iv_operations crypt_iv_plain_ops = {
-	.generator = crypt_iv_plain_gen
-};
-
-static const struct crypt_iv_operations crypt_iv_plain64_ops = {
-	.generator = crypt_iv_plain64_gen
-};
-
-static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
-	.generator = crypt_iv_plain64be_gen
-};
-
-static const struct crypt_iv_operations crypt_iv_essiv_ops = {
-	.ctr       = crypt_iv_essiv_ctr,
-	.dtr       = crypt_iv_essiv_dtr,
-	.init      = crypt_iv_essiv_init,
-	.wipe      = crypt_iv_essiv_wipe,
-	.generator = crypt_iv_essiv_gen
-};
-
-static const struct crypt_iv_operations crypt_iv_benbi_ops = {
-	.ctr	   = crypt_iv_benbi_ctr,
-	.dtr	   = crypt_iv_benbi_dtr,
-	.generator = crypt_iv_benbi_gen
-};
-
-static const struct crypt_iv_operations crypt_iv_null_ops = {
-	.generator = crypt_iv_null_gen
-};
-
-static const struct crypt_iv_operations crypt_iv_lmk_ops = {
-	.ctr	   = crypt_iv_lmk_ctr,
-	.dtr	   = crypt_iv_lmk_dtr,
-	.init	   = crypt_iv_lmk_init,
-	.wipe	   = crypt_iv_lmk_wipe,
-	.generator = crypt_iv_lmk_gen,
-	.post	   = crypt_iv_lmk_post
-};
-
-static const struct crypt_iv_operations crypt_iv_tcw_ops = {
-	.ctr	   = crypt_iv_tcw_ctr,
-	.dtr	   = crypt_iv_tcw_dtr,
-	.init	   = crypt_iv_tcw_init,
-	.wipe	   = crypt_iv_tcw_wipe,
-	.generator = crypt_iv_tcw_gen,
-	.post	   = crypt_iv_tcw_post
-};
-
-static struct crypt_iv_operations crypt_iv_random_ops = {
-	.generator = crypt_iv_random_gen
-};
-
-/*
  * Integrity extensions
  */
 static bool crypt_integrity_aead(struct crypt_config *cc)
@@ -907,21 +213,6 @@ static bool crypt_integrity_aead(struct crypt_config *cc)
 	return test_bit(CRYPT_MODE_INTEGRITY_AEAD, &cc->cipher_flags);
 }
 
-static bool crypt_integrity_hmac(struct crypt_config *cc)
-{
-	return crypt_integrity_aead(cc) && cc->key_mac_size;
-}
-
-/* Get sg containing data */
-static struct scatterlist *crypt_get_sg_data(struct crypt_config *cc,
-					     struct scatterlist *sg)
-{
-	if (unlikely(crypt_integrity_aead(cc)))
-		return &sg[2];
-
-	return sg;
-}
-
 static int dm_crypt_integrity_io_alloc(struct dm_crypt_io *io, struct bio *bio)
 {
 	struct bio_integrity_payload *bip;
@@ -971,283 +262,66 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
 
 	if (crypt_integrity_aead(cc)) {
 		cc->integrity_tag_size = cc->on_disk_tag_size - cc->integrity_iv_size;
-		DMINFO("Integrity AEAD, tag size %u, IV size %u.",
-		       cc->integrity_tag_size, cc->integrity_iv_size);
-
-		if (crypto_aead_setauthsize(any_tfm_aead(cc), cc->integrity_tag_size)) {
-			ti->error = "Integrity AEAD auth tag size is not supported.";
-			return -EINVAL;
-		}
-	} else if (cc->integrity_iv_size)
-		DMINFO("Additional per-sector space %u bytes for IV.",
-		       cc->integrity_iv_size);
-
-	if ((cc->integrity_tag_size + cc->integrity_iv_size) != bi->tag_size) {
-		ti->error = "Not enough space for integrity tag in the profile.";
-		return -EINVAL;
-	}
-
-	return 0;
-#else
-	ti->error = "Integrity profile not supported.";
-	return -EINVAL;
-#endif
-}
-
-static void crypt_convert_init(struct crypt_config *cc,
-			       struct convert_context *ctx,
-			       struct bio *bio_out, struct bio *bio_in,
-			       sector_t sector)
-{
-	ctx->bio_in = bio_in;
-	ctx->bio_out = bio_out;
-	if (bio_in)
-		ctx->iter_in = bio_in->bi_iter;
-	if (bio_out)
-		ctx->iter_out = bio_out->bi_iter;
-	ctx->cc_sector = sector + cc->iv_offset;
-	init_completion(&ctx->restart);
-}
-
-static struct dm_crypt_request *dmreq_of_req(struct crypt_config *cc,
-					     void *req)
-{
-	return (struct dm_crypt_request *)((char *)req + cc->dmreq_start);
-}
-
-static void *req_of_dmreq(struct crypt_config *cc, struct dm_crypt_request *dmreq)
-{
-	return (void *)((char *)dmreq - cc->dmreq_start);
-}
-
-static u8 *iv_of_dmreq(struct crypt_config *cc,
-		       struct dm_crypt_request *dmreq)
-{
-	if (crypt_integrity_aead(cc))
-		return (u8 *)ALIGN((unsigned long)(dmreq + 1),
-			crypto_aead_alignmask(any_tfm_aead(cc)) + 1);
-	else
-		return (u8 *)ALIGN((unsigned long)(dmreq + 1),
-			crypto_skcipher_alignmask(any_tfm(cc)) + 1);
-}
-
-static u8 *org_iv_of_dmreq(struct crypt_config *cc,
-		       struct dm_crypt_request *dmreq)
-{
-	return iv_of_dmreq(cc, dmreq) + cc->iv_size;
-}
-
-static uint64_t *org_sector_of_dmreq(struct crypt_config *cc,
-		       struct dm_crypt_request *dmreq)
-{
-	u8 *ptr = iv_of_dmreq(cc, dmreq) + cc->iv_size + cc->iv_size;
-	return (uint64_t*) ptr;
-}
-
-static unsigned int *org_tag_of_dmreq(struct crypt_config *cc,
-		       struct dm_crypt_request *dmreq)
-{
-	u8 *ptr = iv_of_dmreq(cc, dmreq) + cc->iv_size +
-		  cc->iv_size + sizeof(uint64_t);
-	return (unsigned int*)ptr;
-}
-
-static void *tag_from_dmreq(struct crypt_config *cc,
-				struct dm_crypt_request *dmreq)
-{
-	struct convert_context *ctx = dmreq->ctx;
-	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
-
-	return &io->integrity_metadata[*org_tag_of_dmreq(cc, dmreq) *
-		cc->on_disk_tag_size];
-}
-
-static void *iv_tag_from_dmreq(struct crypt_config *cc,
-			       struct dm_crypt_request *dmreq)
-{
-	return tag_from_dmreq(cc, dmreq) + cc->integrity_tag_size;
-}
-
-static int crypt_convert_block_aead(struct crypt_config *cc,
-				     struct convert_context *ctx,
-				     struct aead_request *req,
-				     unsigned int tag_offset)
-{
-	struct bio_vec bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in);
-	struct bio_vec bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
-	struct dm_crypt_request *dmreq;
-	u8 *iv, *org_iv, *tag_iv, *tag;
-	uint64_t *sector;
-	int r = 0;
-
-	BUG_ON(cc->integrity_iv_size && cc->integrity_iv_size != cc->iv_size);
-
-	/* Reject unexpected unaligned bio. */
-	if (unlikely(bv_in.bv_len & (cc->sector_size - 1)))
-		return -EIO;
-
-	dmreq = dmreq_of_req(cc, req);
-	dmreq->iv_sector = ctx->cc_sector;
-	if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
-		dmreq->iv_sector >>= cc->sector_shift;
-	dmreq->ctx = ctx;
-
-	*org_tag_of_dmreq(cc, dmreq) = tag_offset;
-
-	sector = org_sector_of_dmreq(cc, dmreq);
-	*sector = cpu_to_le64(ctx->cc_sector - cc->iv_offset);
-
-	iv = iv_of_dmreq(cc, dmreq);
-	org_iv = org_iv_of_dmreq(cc, dmreq);
-	tag = tag_from_dmreq(cc, dmreq);
-	tag_iv = iv_tag_from_dmreq(cc, dmreq);
-
-	/* AEAD request:
-	 *  |----- AAD -------|------ DATA -------|-- AUTH TAG --|
-	 *  | (authenticated) | (auth+encryption) |              |
-	 *  | sector_LE |  IV |  sector in/out    |  tag in/out  |
-	 */
-	sg_init_table(dmreq->sg_in, 4);
-	sg_set_buf(&dmreq->sg_in[0], sector, sizeof(uint64_t));
-	sg_set_buf(&dmreq->sg_in[1], org_iv, cc->iv_size);
-	sg_set_page(&dmreq->sg_in[2], bv_in.bv_page, cc->sector_size, bv_in.bv_offset);
-	sg_set_buf(&dmreq->sg_in[3], tag, cc->integrity_tag_size);
-
-	sg_init_table(dmreq->sg_out, 4);
-	sg_set_buf(&dmreq->sg_out[0], sector, sizeof(uint64_t));
-	sg_set_buf(&dmreq->sg_out[1], org_iv, cc->iv_size);
-	sg_set_page(&dmreq->sg_out[2], bv_out.bv_page, cc->sector_size, bv_out.bv_offset);
-	sg_set_buf(&dmreq->sg_out[3], tag, cc->integrity_tag_size);
-
-	if (cc->iv_gen_ops) {
-		/* For READs use IV stored in integrity metadata */
-		if (cc->integrity_iv_size && bio_data_dir(ctx->bio_in) != WRITE) {
-			memcpy(org_iv, tag_iv, cc->iv_size);
-		} else {
-			r = cc->iv_gen_ops->generator(cc, org_iv, dmreq);
-			if (r < 0)
-				return r;
-			/* Store generated IV in integrity metadata */
-			if (cc->integrity_iv_size)
-				memcpy(tag_iv, org_iv, cc->iv_size);
-		}
-		/* Working copy of IV, to be modified in crypto API */
-		memcpy(iv, org_iv, cc->iv_size);
-	}
-
-	aead_request_set_ad(req, sizeof(uint64_t) + cc->iv_size);
-	if (bio_data_dir(ctx->bio_in) == WRITE) {
-		aead_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
-				       cc->sector_size, iv);
-		r = crypto_aead_encrypt(req);
-		if (cc->integrity_tag_size + cc->integrity_iv_size != cc->on_disk_tag_size)
-			memset(tag + cc->integrity_tag_size + cc->integrity_iv_size, 0,
-			       cc->on_disk_tag_size - (cc->integrity_tag_size + cc->integrity_iv_size));
-	} else {
-		aead_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
-				       cc->sector_size + cc->integrity_tag_size, iv);
-		r = crypto_aead_decrypt(req);
-	}
-
-	if (r == -EBADMSG)
-		DMERR_LIMIT("INTEGRITY AEAD ERROR, sector %llu",
-			    (unsigned long long)le64_to_cpu(*sector));
-
-	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
-		r = cc->iv_gen_ops->post(cc, org_iv, dmreq);
-
-	bio_advance_iter(ctx->bio_in, &ctx->iter_in, cc->sector_size);
-	bio_advance_iter(ctx->bio_out, &ctx->iter_out, cc->sector_size);
-
-	return r;
-}
-
-static int crypt_convert_block_skcipher(struct crypt_config *cc,
-					struct convert_context *ctx,
-					struct skcipher_request *req,
-					unsigned int tag_offset)
-{
-	struct bio_vec bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in);
-	struct bio_vec bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
-	struct scatterlist *sg_in, *sg_out;
-	struct dm_crypt_request *dmreq;
-	u8 *iv, *org_iv, *tag_iv;
-	uint64_t *sector;
-	int r = 0;
-
-	/* Reject unexpected unaligned bio. */
-	if (unlikely(bv_in.bv_len & (cc->sector_size - 1)))
-		return -EIO;
-
-	dmreq = dmreq_of_req(cc, req);
-	dmreq->iv_sector = ctx->cc_sector;
-	if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
-		dmreq->iv_sector >>= cc->sector_shift;
-	dmreq->ctx = ctx;
-
-	*org_tag_of_dmreq(cc, dmreq) = tag_offset;
-
-	iv = iv_of_dmreq(cc, dmreq);
-	org_iv = org_iv_of_dmreq(cc, dmreq);
-	tag_iv = iv_tag_from_dmreq(cc, dmreq);
-
-	sector = org_sector_of_dmreq(cc, dmreq);
-	*sector = cpu_to_le64(ctx->cc_sector - cc->iv_offset);
-
-	/* For skcipher we use only the first sg item */
-	sg_in  = &dmreq->sg_in[0];
-	sg_out = &dmreq->sg_out[0];
-
-	sg_init_table(sg_in, 1);
-	sg_set_page(sg_in, bv_in.bv_page, cc->sector_size, bv_in.bv_offset);
-
-	sg_init_table(sg_out, 1);
-	sg_set_page(sg_out, bv_out.bv_page, cc->sector_size, bv_out.bv_offset);
+		DMINFO("Integrity AEAD, tag size %u, IV size %u.",
+		       cc->integrity_tag_size, cc->integrity_iv_size);
 
-	if (cc->iv_gen_ops) {
-		/* For READs use IV stored in integrity metadata */
-		if (cc->integrity_iv_size && bio_data_dir(ctx->bio_in) != WRITE) {
-			memcpy(org_iv, tag_iv, cc->integrity_iv_size);
-		} else {
-			r = cc->iv_gen_ops->generator(cc, org_iv, dmreq);
-			if (r < 0)
-				return r;
-			/* Store generated IV in integrity metadata */
-			if (cc->integrity_iv_size)
-				memcpy(tag_iv, org_iv, cc->integrity_iv_size);
+		if (crypto_aead_setauthsize(any_tfm_aead(cc), cc->integrity_tag_size)) {
+			ti->error = "Integrity AEAD auth tag size is not supported.";
+			return -EINVAL;
 		}
-		/* Working copy of IV, to be modified in crypto API */
-		memcpy(iv, org_iv, cc->iv_size);
-	}
+	} else if (cc->integrity_iv_size)
+		DMINFO("Additional per-sector space %u bytes for IV.",
+		       cc->integrity_iv_size);
 
-	skcipher_request_set_crypt(req, sg_in, sg_out, cc->sector_size, iv);
+	if ((cc->integrity_tag_size + cc->integrity_iv_size) != bi->tag_size) {
+		ti->error = "Not enough space for integrity tag in the profile.";
+		return -EINVAL;
+	}
 
-	if (bio_data_dir(ctx->bio_in) == WRITE)
-		r = crypto_skcipher_encrypt(req);
-	else
-		r = crypto_skcipher_decrypt(req);
+	return 0;
+#else
+	ti->error = "Integrity profile not supported.";
+	return -EINVAL;
+#endif
+}
 
-	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
-		r = cc->iv_gen_ops->post(cc, org_iv, dmreq);
+static void crypt_convert_init(struct crypt_config *cc,
+			       struct convert_context *ctx,
+			       struct bio *bio_out, struct bio *bio_in,
+			       sector_t sector)
+{
+	ctx->bio_in = bio_in;
+	ctx->bio_out = bio_out;
+	if (bio_in)
+		ctx->iter_in = bio_in->bi_iter;
+	if (bio_out)
+		ctx->iter_out = bio_out->bi_iter;
+	ctx->cc_sector = sector + cc->iv_offset;
+	init_completion(&ctx->restart);
+}
 
-	bio_advance_iter(ctx->bio_in, &ctx->iter_in, cc->sector_size);
-	bio_advance_iter(ctx->bio_out, &ctx->iter_out, cc->sector_size);
+static struct dm_crypt_request *dmreq_of_req(struct crypt_config *cc,
+					     void *req)
+{
+	return (struct dm_crypt_request *)((char *)req + cc->dmreq_start);
+}
 
-	return r;
+static void *req_of_dmreq(struct crypt_config *cc, struct dm_crypt_request *dmreq)
+{
+	return (void *)((char *)dmreq - cc->dmreq_start);
 }
 
+
 static void kcryptd_async_done(struct crypto_async_request *async_req,
 			       int error);
 
 static void crypt_alloc_req_skcipher(struct crypt_config *cc,
 				     struct convert_context *ctx)
 {
-	unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1);
-
 	if (!ctx->r.req)
 		ctx->r.req = mempool_alloc(&cc->req_pool, GFP_NOIO);
 
-	skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfms[key_index]);
+	skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfm);
 
 	/*
 	 * Use REQ_MAY_BACKLOG so a cipher driver internally backlogs
@@ -1264,7 +338,7 @@ static void crypt_alloc_req_aead(struct crypt_config *cc,
 	if (!ctx->r.req_aead)
 		ctx->r.req_aead = mempool_alloc(&cc->req_pool, GFP_NOIO);
 
-	aead_request_set_tfm(ctx->r.req_aead, cc->cipher_tfm.tfms_aead[0]);
+	aead_request_set_tfm(ctx->r.req_aead, cc->cipher_tfm.tfm_aead);
 
 	/*
 	 * Use REQ_MAY_BACKLOG so a cipher driver internally backlogs
@@ -1313,67 +387,124 @@ static void crypt_free_req(struct crypt_config *cc, void *req, struct bio *base_
 /*
  * Encrypt / decrypt data from one bio to another one (can be the same one)
  */
-static blk_status_t crypt_convert(struct crypt_config *cc,
-			 struct convert_context *ctx)
+static blk_status_t crypt_convert_bio(struct crypt_config *cc,
+					struct convert_context *ctx)
 {
-	unsigned int tag_offset = 0;
-	unsigned int sector_step = cc->sector_size >> SECTOR_SHIFT;
+	unsigned int cryptlen, n1, n2, nents, i = 0, bytes = 0;
+	struct skcipher_request *req = NULL;
+	struct aead_request *req_aead = NULL;
+	struct dm_crypt_request *dmreq;
+	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
+	struct geniv_req_info rinfo;
+	struct bio_vec bv_in, bv_out;
 	int r;
 
 	atomic_set(&ctx->cc_pending, 1);
+	crypt_alloc_req(cc, ctx);
 
-	while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
+	if (crypt_integrity_aead(cc)) {
+		req_aead = ctx->r.req_aead;
+		dmreq = dmreq_of_req(cc, req_aead);
+	} else {
+		req = ctx->r.req;
+		dmreq = dmreq_of_req(cc, req);
+	}
 
-		crypt_alloc_req(cc, ctx);
-		atomic_inc(&ctx->cc_pending);
+	n1 = bio_segments(ctx->bio_in);
+	n2 = bio_segments(ctx->bio_out);
+	nents = n1 > n2 ? n1 : n2;
+	nents = nents > MAX_SG_LIST ? MAX_SG_LIST : nents;
+	cryptlen = ctx->iter_in.bi_size;
 
-		if (crypt_integrity_aead(cc))
-			r = crypt_convert_block_aead(cc, ctx, ctx->r.req_aead, tag_offset);
-		else
-			r = crypt_convert_block_skcipher(cc, ctx, ctx->r.req, tag_offset);
+	DMDEBUG("dm-crypt:%s: segments:[in=%u, out=%u] bi_size=%u\n",
+		bio_data_dir(ctx->bio_in) == WRITE ? "write" : "read",
+		n1, n2, cryptlen);
 
-		switch (r) {
-		/*
-		 * The request was queued by a crypto driver
-		 * but the driver request queue is full, let's wait.
-		 */
-		case -EBUSY:
-			wait_for_completion(&ctx->restart);
-			reinit_completion(&ctx->restart);
-			/* fall through */
-		/*
-		 * The request is queued and processed asynchronously,
-		 * completion function kcryptd_async_done() will be called.
-		 */
-		case -EINPROGRESS:
-			ctx->r.req = NULL;
-			ctx->cc_sector += sector_step;
-			tag_offset++;
-			continue;
-		/*
-		 * The request was already processed (synchronously).
-		 */
-		case 0:
-			atomic_dec(&ctx->cc_pending);
-			ctx->cc_sector += sector_step;
-			tag_offset++;
-			cond_resched();
-			continue;
-		/*
-		 * There was a data integrity error.
-		 */
-		case -EBADMSG:
-			atomic_dec(&ctx->cc_pending);
-			return BLK_STS_PROTECTION;
-		/*
-		 * There was an error while processing the request.
-		 */
-		default:
-			atomic_dec(&ctx->cc_pending);
-			return BLK_STS_IOERR;
-		}
+	dmreq->sg_in = kcalloc(nents, sizeof(struct scatterlist), GFP_KERNEL);
+	dmreq->sg_out = kcalloc(nents, sizeof(struct scatterlist), GFP_KERNEL);
+	if (!dmreq->sg_in || !dmreq->sg_out) {
+		DMERR("dm-crypt: Failed to allocate scatterlist\n");
+		r = -ENOMEM;
+		goto end;
+	}
+	dmreq->ctx = ctx;
+
+	sg_init_table(dmreq->sg_in, nents);
+	sg_init_table(dmreq->sg_out, nents);
+
+	while (ctx->iter_in.bi_size && ctx->iter_out.bi_size && i < nents) {
+		bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in);
+		bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
+
+		sg_set_page(&dmreq->sg_in[i], bv_in.bv_page, bv_in.bv_len,
+				bv_in.bv_offset);
+		sg_set_page(&dmreq->sg_out[i], bv_out.bv_page, bv_out.bv_len,
+				bv_out.bv_offset);
+
+		bio_advance_iter(ctx->bio_in, &ctx->iter_in, bv_in.bv_len);
+		bio_advance_iter(ctx->bio_out, &ctx->iter_out, bv_out.bv_len);
+
+		bytes += bv_in.bv_len;
+		i++;
+	}
+
+	DMDEBUG("dm-crypt: Processed %u of %u bytes\n", bytes, cryptlen);
+
+	rinfo.cc_sector = ctx->cc_sector;
+	rinfo.nents = nents;
+	rinfo.integrity_metadata = io->integrity_metadata;
+
+	atomic_inc(&ctx->cc_pending);
+	if (crypt_integrity_aead(cc)) {
+		aead_request_set_crypt(req_aead, dmreq->sg_in, dmreq->sg_out,
+					bytes, (u8 *)&rinfo);
+		if (bio_data_dir(ctx->bio_in) == WRITE)
+			r = crypto_aead_encrypt(req_aead);
+		else
+			r = crypto_aead_decrypt(req_aead);
+	} else {
+		skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
+					bytes, (u8 *)&rinfo);
+		if (bio_data_dir(ctx->bio_in) == WRITE)
+			r = crypto_skcipher_encrypt(req);
+		else
+			r = crypto_skcipher_decrypt(req);
 	}
 
+	switch (r) {
+	/* The request was queued so wait. */
+	case -EBUSY:
+		wait_for_completion(&ctx->restart);
+		reinit_completion(&ctx->restart);
+		/* fall through */
+	/*
+	 * The request is queued and processed asynchronously,
+	 * completion function kcryptd_async_done() is called.
+	 */
+	case -EINPROGRESS:
+		ctx->r.req = NULL;
+		cond_resched();
+		break;
+	/*
+	 * The request was already processed (synchronously).
+	 */
+	case 0:
+		atomic_dec(&ctx->cc_pending);
+		break;
+	/*
+	 * There was a data integrity error.
+	 */
+	case -EBADMSG:
+		atomic_dec(&ctx->cc_pending);
+		return BLK_STS_PROTECTION;
+	/*
+	 * There was an error while processing the request.
+	 */
+	default:
+		atomic_dec(&ctx->cc_pending);
+		return BLK_STS_IOERR;
+	}
+end:
+	if (r == -ENOMEM) {
+		/* scatterlist allocation failed above */
+		kfree(dmreq->sg_in);
+		kfree(dmreq->sg_out);
+		return BLK_STS_RESOURCE;
+	}
 	return 0;
 }
 
@@ -1483,14 +614,24 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 {
 	struct crypt_config *cc = io->cc;
 	struct bio *base_bio = io->base_bio;
+	struct dm_crypt_request *dmreq;
 	blk_status_t error = io->error;
 
 	if (!atomic_dec_and_test(&io->io_pending))
 		return;
 
-	if (io->ctx.r.req)
+	if (io->ctx.r.req) {
 		crypt_free_req(cc, io->ctx.r.req, base_bio);
 
+		if (crypt_integrity_aead(cc))
+			dmreq = dmreq_of_req(cc, io->ctx.r.req_aead);
+		else
+			dmreq = dmreq_of_req(cc, io->ctx.r.req);
+		DMDEBUG("dm-crypt: Freeing scatterlists [sync]\n");
+		kfree(dmreq->sg_in);
+		kfree(dmreq->sg_out);
+	}
+
 	if (unlikely(io->integrity_metadata_from_pool))
 		mempool_free(io->integrity_metadata, &io->cc->tag_pool);
 	else
@@ -1737,7 +878,7 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
 	sector += bio_sectors(clone);
 
 	crypt_inc_pending(io);
-	r = crypt_convert(cc, &io->ctx);
+	r = crypt_convert_bio(cc, &io->ctx);
 	if (r)
 		io->error = r;
 	crypt_finished = atomic_dec_and_test(&io->ctx.cc_pending);
@@ -1767,7 +908,7 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io)
 	crypt_convert_init(cc, &io->ctx, io->base_bio, io->base_bio,
 			   io->sector);
 
-	r = crypt_convert(cc, &io->ctx);
+	r = crypt_convert_bio(cc, &io->ctx);
 	if (r)
 		io->error = r;
 
@@ -1795,16 +936,16 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
 		return;
 	}
 
-	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
-		error = cc->iv_gen_ops->post(cc, org_iv_of_dmreq(cc, dmreq), dmreq);
-
 	if (error == -EBADMSG) {
-		DMERR_LIMIT("INTEGRITY AEAD ERROR, sector %llu",
-			    (unsigned long long)le64_to_cpu(*org_sector_of_dmreq(cc, dmreq)));
+		DMERR("INTEGRITY AEAD ERROR\n");
 		io->error = BLK_STS_PROTECTION;
 	} else if (error < 0)
 		io->error = BLK_STS_IOERR;
 
+	DMDEBUG("dm-crypt: Freeing scatterlists and request struct [async]\n");
+	kfree(dmreq->sg_in);
+	kfree(dmreq->sg_out);
+
 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
 
 	if (!atomic_dec_and_test(&ctx->cc_pending))
@@ -1834,163 +975,78 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
 	queue_work(cc->crypt_queue, &io->work);
 }
 
-static void crypt_free_tfms_aead(struct crypt_config *cc)
+static void crypt_free_tfm(struct crypt_config *cc)
 {
-	if (!cc->cipher_tfm.tfms_aead)
-		return;
-
-	if (cc->cipher_tfm.tfms_aead[0] && !IS_ERR(cc->cipher_tfm.tfms_aead[0])) {
-		crypto_free_aead(cc->cipher_tfm.tfms_aead[0]);
-		cc->cipher_tfm.tfms_aead[0] = NULL;
-	}
-
-	kfree(cc->cipher_tfm.tfms_aead);
-	cc->cipher_tfm.tfms_aead = NULL;
-}
-
-static void crypt_free_tfms_skcipher(struct crypt_config *cc)
-{
-	unsigned i;
-
-	if (!cc->cipher_tfm.tfms)
-		return;
-
-	for (i = 0; i < cc->tfms_count; i++)
-		if (cc->cipher_tfm.tfms[i] && !IS_ERR(cc->cipher_tfm.tfms[i])) {
-			crypto_free_skcipher(cc->cipher_tfm.tfms[i]);
-			cc->cipher_tfm.tfms[i] = NULL;
+	if (crypt_integrity_aead(cc)) {
+		if (!cc->cipher_tfm.tfm_aead)
+			return;
+		if (cc->cipher_tfm.tfm_aead && !IS_ERR(cc->cipher_tfm.tfm_aead)) {
+			crypto_free_aead(cc->cipher_tfm.tfm_aead);
+			cc->cipher_tfm.tfm_aead = NULL;
 		}
-
-	kfree(cc->cipher_tfm.tfms);
-	cc->cipher_tfm.tfms = NULL;
-}
-
-static void crypt_free_tfms(struct crypt_config *cc)
-{
-	if (crypt_integrity_aead(cc))
-		crypt_free_tfms_aead(cc);
-	else
-		crypt_free_tfms_skcipher(cc);
-}
-
-static int crypt_alloc_tfms_skcipher(struct crypt_config *cc, char *ciphermode)
-{
-	unsigned i;
-	int err;
-
-	cc->cipher_tfm.tfms = kcalloc(cc->tfms_count,
-				      sizeof(struct crypto_skcipher *),
-				      GFP_KERNEL);
-	if (!cc->cipher_tfm.tfms)
-		return -ENOMEM;
-
-	for (i = 0; i < cc->tfms_count; i++) {
-		cc->cipher_tfm.tfms[i] = crypto_alloc_skcipher(ciphermode, 0, 0);
-		if (IS_ERR(cc->cipher_tfm.tfms[i])) {
-			err = PTR_ERR(cc->cipher_tfm.tfms[i]);
-			crypt_free_tfms(cc);
-			return err;
+	} else {
+		if (!cc->cipher_tfm.tfm)
+			return;
+		if (cc->cipher_tfm.tfm && !IS_ERR(cc->cipher_tfm.tfm)) {
+			crypto_free_skcipher(cc->cipher_tfm.tfm);
+			cc->cipher_tfm.tfm = NULL;
 		}
 	}
-
-	return 0;
 }
 
-static int crypt_alloc_tfms_aead(struct crypt_config *cc, char *ciphermode)
+static int crypt_alloc_tfm(struct crypt_config *cc, char *ciphermode)
 {
 	int err;
 
-	cc->cipher_tfm.tfms = kmalloc(sizeof(struct crypto_aead *), GFP_KERNEL);
-	if (!cc->cipher_tfm.tfms)
-		return -ENOMEM;
-
-	cc->cipher_tfm.tfms_aead[0] = crypto_alloc_aead(ciphermode, 0, 0);
-	if (IS_ERR(cc->cipher_tfm.tfms_aead[0])) {
-		err = PTR_ERR(cc->cipher_tfm.tfms_aead[0]);
-		crypt_free_tfms(cc);
-		return err;
+	if (crypt_integrity_aead(cc)) {
+		cc->cipher_tfm.tfm_aead = crypto_alloc_aead(ciphermode, 0, 0);
+		if (IS_ERR(cc->cipher_tfm.tfm_aead)) {
+			err = PTR_ERR(cc->cipher_tfm.tfm_aead);
+			crypt_free_tfm(cc);
+			return err;
+		}
+	} else {
+		cc->cipher_tfm.tfm = crypto_alloc_skcipher(ciphermode, 0, 0);
+		if (IS_ERR(cc->cipher_tfm.tfm)) {
+			err = PTR_ERR(cc->cipher_tfm.tfm);
+			crypt_free_tfm(cc);
+			return err;
+		}
 	}
 
 	return 0;
 }
 
-static int crypt_alloc_tfms(struct crypt_config *cc, char *ciphermode)
-{
-	if (crypt_integrity_aead(cc))
-		return crypt_alloc_tfms_aead(cc, ciphermode);
-	else
-		return crypt_alloc_tfms_skcipher(cc, ciphermode);
-}
-
-static unsigned crypt_subkey_size(struct crypt_config *cc)
-{
-	return (cc->key_size - cc->key_extra_size) >> ilog2(cc->tfms_count);
-}
-
-static unsigned crypt_authenckey_size(struct crypt_config *cc)
-{
-	return crypt_subkey_size(cc) + RTA_SPACE(sizeof(struct crypto_authenc_key_param));
-}
-
-/*
- * If AEAD is composed like authenc(hmac(sha256),xts(aes)),
- * the key must be for some reason in special format.
- * This funcion converts cc->key to this special format.
- */
-static void crypt_copy_authenckey(char *p, const void *key,
-				  unsigned enckeylen, unsigned authkeylen)
+static void init_key_info(struct crypt_config *cc, enum setkey_op keyop,
+			char *ivopts, struct geniv_key_info *kinfo)
 {
-	struct crypto_authenc_key_param *param;
-	struct rtattr *rta;
-
-	rta = (struct rtattr *)p;
-	param = RTA_DATA(rta);
-	param->enckeylen = cpu_to_be32(enckeylen);
-	rta->rta_len = RTA_LENGTH(sizeof(*param));
-	rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
-	p += RTA_SPACE(sizeof(*param));
-	memcpy(p, key + enckeylen, authkeylen);
-	p += authkeylen;
-	memcpy(p, key, enckeylen);
+	kinfo->keyop = keyop;
+	kinfo->tfms_count = cc->tfms_count;
+	kinfo->key = cc->key;
+	kinfo->cipher_flags = cc->cipher_flags;
+	kinfo->ivopts = ivopts;
+	kinfo->iv_offset = cc->iv_offset;
+	kinfo->sector_size = cc->sector_size;
+	kinfo->key_size = cc->key_size;
+	kinfo->key_parts = cc->key_parts;
+	kinfo->key_mac_size = cc->key_mac_size;
+	kinfo->on_disk_tag_size = cc->on_disk_tag_size;
 }
 
-static int crypt_setkey(struct crypt_config *cc)
+static int crypt_setkey(struct crypt_config *cc, enum setkey_op keyop,
+			char *ivopts)
 {
-	unsigned subkey_size;
-	int err = 0, i, r;
-
-	/* Ignore extra keys (which are used for IV etc) */
-	subkey_size = crypt_subkey_size(cc);
-
-	if (crypt_integrity_hmac(cc)) {
-		if (subkey_size < cc->key_mac_size)
-			return -EINVAL;
-
-		crypt_copy_authenckey(cc->authenc_key, cc->key,
-				      subkey_size - cc->key_mac_size,
-				      cc->key_mac_size);
-	}
+	int r = 0;
+	struct geniv_key_info kinfo;
 
-	for (i = 0; i < cc->tfms_count; i++) {
-		if (crypt_integrity_hmac(cc))
-			r = crypto_aead_setkey(cc->cipher_tfm.tfms_aead[i],
-				cc->authenc_key, crypt_authenckey_size(cc));
-		else if (crypt_integrity_aead(cc))
-			r = crypto_aead_setkey(cc->cipher_tfm.tfms_aead[i],
-					       cc->key + (i * subkey_size),
-					       subkey_size);
-		else
-			r = crypto_skcipher_setkey(cc->cipher_tfm.tfms[i],
-						   cc->key + (i * subkey_size),
-						   subkey_size);
-		if (r)
-			err = r;
-	}
+	init_key_info(cc, keyop, ivopts, &kinfo);
 
-	if (crypt_integrity_hmac(cc))
-		memzero_explicit(cc->authenc_key, crypt_authenckey_size(cc));
+	if (crypt_integrity_aead(cc))
+		r = crypto_aead_setkey(cc->cipher_tfm.tfm_aead, (u8 *)&kinfo, sizeof(kinfo));
+	else
+		r = crypto_skcipher_setkey(cc->cipher_tfm.tfm, (u8 *)&kinfo, sizeof(kinfo));
 
-	return err;
+	return r;
 }
 
 #ifdef CONFIG_KEYS
@@ -2003,7 +1059,9 @@ static bool contains_whitespace(const char *str)
 	return false;
 }
 
-static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string)
+static int crypt_set_keyring_key(struct crypt_config *cc,
+				const char *key_string,
+				enum setkey_op keyop, char *ivopts)
 {
 	char *new_key_string, *key_desc;
 	int ret;
@@ -2064,7 +1122,7 @@ static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string
 	/* clear the flag since following operations may invalidate previously valid key */
 	clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);
 
-	ret = crypt_setkey(cc);
+	ret = crypt_setkey(cc, keyop, ivopts);
 
 	if (!ret) {
 		set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
@@ -2101,7 +1159,9 @@ static int get_key_size(char **key_string)
 
 #else
 
-static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string)
+static int crypt_set_keyring_key(struct crypt_config *cc,
+				const char *key_string,
+				enum setkey_op keyop, char *ivopts)
 {
 	return -EINVAL;
 }
@@ -2113,7 +1173,8 @@ static int get_key_size(char **key_string)
 
 #endif
 
-static int crypt_set_key(struct crypt_config *cc, char *key)
+static int crypt_set_key(struct crypt_config *cc, enum setkey_op keyop,
+			char *key, char *ivopts)
 {
 	int r = -EINVAL;
 	int key_string_len = strlen(key);
@@ -2124,7 +1185,7 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
 
 	/* ':' means the key is in kernel keyring, short-circuit normal key processing */
 	if (key[0] == ':') {
-		r = crypt_set_keyring_key(cc, key + 1);
+		r = crypt_set_keyring_key(cc, key + 1, keyop, ivopts);
 		goto out;
 	}
 
@@ -2139,7 +1200,7 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
 	if (cc->key_size && hex2bin(cc->key, key, cc->key_size) < 0)
 		goto out;
 
-	r = crypt_setkey(cc);
+	r = crypt_setkey(cc, keyop, ivopts);
 	if (!r)
 		set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
 
@@ -2150,6 +1211,17 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
 	return r;
 }
 
+static int crypt_init_key(struct dm_target *ti, char *key, char *ivopts)
+{
+	struct crypt_config *cc = ti->private;
+	int ret;
+
+	ret = crypt_set_key(cc, SETKEY_OP_INIT, key, ivopts);
+	if (ret < 0)
+		ti->error = "Error decoding and setting key";
+	return ret;
+}
+
 static int crypt_wipe_key(struct crypt_config *cc)
 {
 	int r;
@@ -2158,7 +1230,7 @@ static int crypt_wipe_key(struct crypt_config *cc)
 	get_random_bytes(&cc->key, cc->key_size);
 	kzfree(cc->key_string);
 	cc->key_string = NULL;
-	r = crypt_setkey(cc);
+	r = crypt_setkey(cc, SETKEY_OP_WIPE, NULL);
 	memset(&cc->key, 0, cc->key_size * sizeof(u8));
 
 	return r;
@@ -2218,7 +1290,7 @@ static void crypt_dtr(struct dm_target *ti)
 	if (cc->crypt_queue)
 		destroy_workqueue(cc->crypt_queue);
 
-	crypt_free_tfms(cc);
+	crypt_free_tfm(cc);
 
 	bioset_exit(&cc->bs);
 
@@ -2229,17 +1301,12 @@ static void crypt_dtr(struct dm_target *ti)
 	WARN_ON(percpu_counter_sum(&cc->n_allocated_pages) != 0);
 	percpu_counter_destroy(&cc->n_allocated_pages);
 
-	if (cc->iv_gen_ops && cc->iv_gen_ops->dtr)
-		cc->iv_gen_ops->dtr(cc);
-
 	if (cc->dev)
 		dm_put_device(ti, cc->dev);
 
-	kzfree(cc->cipher);
 	kzfree(cc->cipher_string);
 	kzfree(cc->key_string);
 	kzfree(cc->cipher_auth);
-	kzfree(cc->authenc_key);
 
 	mutex_destroy(&cc->bio_alloc_lock);
 
@@ -2253,6 +1320,32 @@ static void crypt_dtr(struct dm_target *ti)
 	spin_unlock(&dm_crypt_clients_lock);
 }
 
+static int get_iv_size_by_name(struct crypt_config *cc, char *alg_name)
+{
+	unsigned int iv_size;
+	struct crypto_aead *tfm_aead;
+	struct crypto_skcipher *tfm;
+
+	if (crypt_integrity_aead(cc)) {
+		tfm_aead = crypto_alloc_aead(alg_name, 0, 0);
+		if (IS_ERR(tfm_aead))
+			return -ENOMEM;
+
+		iv_size = crypto_aead_ivsize(tfm_aead);
+		crypto_free_aead(tfm_aead);
+	} else {
+		tfm = crypto_alloc_skcipher(alg_name, 0, 0);
+		if (IS_ERR(tfm))
+			return -ENOMEM;
+
+		iv_size = crypto_skcipher_ivsize(tfm);
+		crypto_free_skcipher(tfm);
+	}
+
+	return iv_size;
+
+}
+
 static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 {
 	struct crypt_config *cc = ti->private;
@@ -2266,97 +1359,12 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 		/* at least a 64 bit sector number should fit in our buffer */
 		cc->iv_size = max(cc->iv_size,
 				  (unsigned int)(sizeof(u64) / sizeof(u8)));
-	else if (ivmode) {
-		DMWARN("Selected cipher does not support IVs");
-		ivmode = NULL;
-	}
-
-	/* Choose ivmode, see comments at iv code. */
-	if (ivmode == NULL)
-		cc->iv_gen_ops = NULL;
-	else if (strcmp(ivmode, "plain") == 0)
-		cc->iv_gen_ops = &crypt_iv_plain_ops;
-	else if (strcmp(ivmode, "plain64") == 0)
-		cc->iv_gen_ops = &crypt_iv_plain64_ops;
-	else if (strcmp(ivmode, "plain64be") == 0)
-		cc->iv_gen_ops = &crypt_iv_plain64be_ops;
-	else if (strcmp(ivmode, "essiv") == 0)
-		cc->iv_gen_ops = &crypt_iv_essiv_ops;
-	else if (strcmp(ivmode, "benbi") == 0)
-		cc->iv_gen_ops = &crypt_iv_benbi_ops;
-	else if (strcmp(ivmode, "null") == 0)
-		cc->iv_gen_ops = &crypt_iv_null_ops;
-	else if (strcmp(ivmode, "lmk") == 0) {
-		cc->iv_gen_ops = &crypt_iv_lmk_ops;
-		/*
-		 * Version 2 and 3 is recognised according
-		 * to length of provided multi-key string.
-		 * If present (version 3), last key is used as IV seed.
-		 * All keys (including IV seed) are always the same size.
-		 */
-		if (cc->key_size % cc->key_parts) {
-			cc->key_parts++;
-			cc->key_extra_size = cc->key_size / cc->key_parts;
-		}
-	} else if (strcmp(ivmode, "tcw") == 0) {
-		cc->iv_gen_ops = &crypt_iv_tcw_ops;
-		cc->key_parts += 2; /* IV + whitening */
-		cc->key_extra_size = cc->iv_size + TCW_WHITENING_SIZE;
-	} else if (strcmp(ivmode, "random") == 0) {
-		cc->iv_gen_ops = &crypt_iv_random_ops;
+
+	if (strcmp(ivmode, "random") == 0) {
 		/* Need storage space in integrity fields. */
 		cc->integrity_iv_size = cc->iv_size;
-	} else {
-		ti->error = "Invalid IV mode";
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-/*
- * Workaround to parse cipher algorithm from crypto API spec.
- * The cc->cipher is currently used only in ESSIV.
- * This should be probably done by crypto-api calls (once available...)
- */
-static int crypt_ctr_blkdev_cipher(struct crypt_config *cc)
-{
-	const char *alg_name = NULL;
-	char *start, *end;
-
-	if (crypt_integrity_aead(cc)) {
-		alg_name = crypto_tfm_alg_name(crypto_aead_tfm(any_tfm_aead(cc)));
-		if (!alg_name)
-			return -EINVAL;
-		if (crypt_integrity_hmac(cc)) {
-			alg_name = strchr(alg_name, ',');
-			if (!alg_name)
-				return -EINVAL;
-		}
-		alg_name++;
-	} else {
-		alg_name = crypto_tfm_alg_name(crypto_skcipher_tfm(any_tfm(cc)));
-		if (!alg_name)
-			return -EINVAL;
 	}
 
-	start = strchr(alg_name, '(');
-	end = strchr(alg_name, ')');
-
-	if (!start && !end) {
-		cc->cipher = kstrdup(alg_name, GFP_KERNEL);
-		return cc->cipher ? 0 : -ENOMEM;
-	}
-
-	if (!start || !end || ++start >= end)
-		return -EINVAL;
-
-	cc->cipher = kzalloc(end - start + 1, GFP_KERNEL);
-	if (!cc->cipher)
-		return -ENOMEM;
-
-	strncpy(cc->cipher, start, end - start);
-
 	return 0;
 }
 
@@ -2392,10 +1400,6 @@ static int crypt_ctr_auth_cipher(struct crypt_config *cc, char *cipher_api)
 	cc->key_mac_size = crypto_ahash_digestsize(mac);
 	crypto_free_ahash(mac);
 
-	cc->authenc_key = kmalloc(crypt_authenckey_size(cc), GFP_KERNEL);
-	if (!cc->authenc_key)
-		return -ENOMEM;
-
 	return 0;
 }
 
@@ -2404,6 +1408,7 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 {
 	struct crypt_config *cc = ti->private;
 	char *tmp, *cipher_api;
+	char cipher_name[CRYPTO_MAX_ALG_NAME];
 	int ret = -EINVAL;
 
 	cc->tfms_count = 1;
@@ -2422,8 +1427,29 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 
 	cc->key_parts = cc->tfms_count;
 
+	if (!*ivmode)
+		*ivmode = "null";
+
+	/*
+	 * For ciphers that do not support IVs, fall back to the "null" ivmode
+	 * even if a different ivmode was requested.
+	 */
+	ret = get_iv_size_by_name(cc, cipher_api);
+	if (ret < 0)
+		return ret;
+	cc->iv_size = ret;
+	if (!cc->iv_size && ivmode) {
+		DMWARN("Selected cipher does not support IVs");
+		*ivmode = "null";
+	}
+
 	/* Allocate cipher */
-	ret = crypt_alloc_tfms(cc, cipher_api);
+	ret = snprintf(cipher_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
+			*ivmode, cipher_api);
+	if (ret < 0) {
+		ti->error = "Cannot allocate cipher strings";
+		return -ENOMEM;
+	}
+	ret = crypt_alloc_tfm(cc, cipher_name);
 	if (ret < 0) {
 		ti->error = "Error allocating crypto tfm";
 		return ret;
@@ -2440,12 +1466,6 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 	} else
 		cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));
 
-	ret = crypt_ctr_blkdev_cipher(cc);
-	if (ret < 0) {
-		ti->error = "Cannot allocate cipher string";
-		return -ENOMEM;
-	}
-
 	return 0;
 }
 
@@ -2480,10 +1500,6 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
 	}
 	cc->key_parts = cc->tfms_count;
 
-	cc->cipher = kstrdup(cipher, GFP_KERNEL);
-	if (!cc->cipher)
-		goto bad_mem;
-
 	chainmode = strsep(&tmp, "-");
 	*ivopts = strsep(&tmp, "-");
 	*ivmode = strsep(&*ivopts, ":");
@@ -2509,15 +1525,35 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
 	if (!cipher_api)
 		goto bad_mem;
 
+	/* If no ivmode was specified, use the 'null' IV generation template. */
+	if (!*ivmode)
+		*ivmode = "null";
+
+	/*
+	 * For ciphers that do not support IVs, fall back to the "null" ivmode
+	 * even if a different ivmode was requested.
+	 */
 	ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
 		       "%s(%s)", chainmode, cipher);
+	ret = get_iv_size_by_name(cc, cipher_api);
+	if (ret < 0) {
+		kfree(cipher_api);
+		return ret;
+	}
+	cc->iv_size = ret;
+	if (!cc->iv_size && ivmode) {
+		DMWARN("Selected cipher does not support IVs");
+		*ivmode = "null";
+	}
+
+	ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+		       "%s(%s(%s))", *ivmode, chainmode, cipher);
 	if (ret < 0) {
 		kfree(cipher_api);
 		goto bad_mem;
 	}
 
 	/* Allocate cipher */
-	ret = crypt_alloc_tfms(cc, cipher_api);
+	ret = crypt_alloc_tfm(cc, cipher_api);
 	if (ret < 0) {
 		ti->error = "Error allocating crypto tfm";
 		kfree(cipher_api);
@@ -2556,30 +1592,12 @@ static int crypt_ctr_cipher(struct dm_target *ti, char *cipher_in, char *key)
 		return ret;
 
 	/* Initialize and set key */
-	ret = crypt_set_key(cc, key);
+	ret = crypt_init_key(ti, key, ivopts);
 	if (ret < 0) {
 		ti->error = "Error decoding and setting key";
 		return ret;
 	}
 
-	/* Allocate IV */
-	if (cc->iv_gen_ops && cc->iv_gen_ops->ctr) {
-		ret = cc->iv_gen_ops->ctr(cc, ti, ivopts);
-		if (ret < 0) {
-			ti->error = "Error creating IV";
-			return ret;
-		}
-	}
-
-	/* Initialize IV (set keys for ESSIV etc) */
-	if (cc->iv_gen_ops && cc->iv_gen_ops->init) {
-		ret = cc->iv_gen_ops->init(cc);
-		if (ret < 0) {
-			ti->error = "Error initialising IV";
-			return ret;
-		}
-	}
-
 	/* wipe the kernel key payload copy */
 	if (cc->key_string)
 		memset(cc->key, 0, cc->key_size * sizeof(u8));
@@ -2673,7 +1691,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	unsigned int align_mask;
 	unsigned long long tmpll;
 	int ret;
-	size_t iv_size_padding, additional_req_size;
+	size_t additional_req_size;
 	char dummy;
 
 	if (argc < 5) {
@@ -2729,25 +1747,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	}
 	cc->dmreq_start = ALIGN(cc->dmreq_start, __alignof__(struct dm_crypt_request));
 
-	if (align_mask < CRYPTO_MINALIGN) {
-		/* Allocate the padding exactly */
-		iv_size_padding = -(cc->dmreq_start + sizeof(struct dm_crypt_request))
-				& align_mask;
-	} else {
-		/*
-		 * If the cipher requires greater alignment than kmalloc
-		 * alignment, we don't know the exact position of the
-		 * initialization vector. We must assume worst case.
-		 */
-		iv_size_padding = align_mask;
-	}
-
-	/*  ...| IV + padding | original IV | original sec. number | bio tag offset | */
-	additional_req_size = sizeof(struct dm_crypt_request) +
-		iv_size_padding + cc->iv_size +
-		cc->iv_size +
-		sizeof(uint64_t) +
-		sizeof(unsigned int);
+	additional_req_size = sizeof(struct dm_crypt_request);
 
 	ret = mempool_init_kmalloc_pool(&cc->req_pool, MIN_IOS, cc->dmreq_start + additional_req_size);
 	if (ret) {
@@ -3024,22 +2024,13 @@ static int crypt_message(struct dm_target *ti, unsigned argc, char **argv,
 				return -EINVAL;
 			}
 
-			ret = crypt_set_key(cc, argv[2]);
-			if (ret)
-				return ret;
-			if (cc->iv_gen_ops && cc->iv_gen_ops->init)
-				ret = cc->iv_gen_ops->init(cc);
+			ret = crypt_set_key(cc, SETKEY_OP_SET, argv[2], NULL);
 			/* wipe the kernel key payload copy */
 			if (cc->key_string)
 				memset(cc->key, 0, cc->key_size * sizeof(u8));
 			return ret;
 		}
 		if (argc == 2 && !strcasecmp(argv[1], "wipe")) {
-			if (cc->iv_gen_ops && cc->iv_gen_ops->wipe) {
-				ret = cc->iv_gen_ops->wipe(cc);
-				if (ret)
-					return ret;
-			}
 			return crypt_wipe_key(cc);
 		}
 	}
@@ -3078,7 +2069,7 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 
 static struct target_type crypt_target = {
 	.name   = "crypt",
-	.version = {1, 18, 1},
+	.version = {1, 19, 1},
 	.module = THIS_MODULE,
 	.ctr    = crypt_ctr,
 	.dtr    = crypt_dtr,
@@ -3103,7 +2094,7 @@ static int __init dm_crypt_init(void)
 	return r;
 }
 
-static void __exit dm_crypt_exit(void)
+void __exit dm_crypt_exit(void)
 {
 	dm_unregister_target(&crypt_target);
 }
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18  7:30 ` [PATCH 4/5] crypto: Add IV generation templates Xiongfeng Wang
@ 2018-07-18  8:16   ` Milan Broz
  2018-07-18  8:48     ` Xiongfeng Wang
                       ` (2 more replies)
  2018-07-19 18:14   ` kbuild test robot
  1 sibling, 3 replies; 28+ messages in thread
From: Milan Broz @ 2018-07-18  8:16 UTC (permalink / raw)
  To: Xiongfeng Wang, agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, broonie, arnd, jonathan.cameron

On 18/07/18 09:30, Xiongfeng Wang wrote:
> Currently, the IV generation algorithms are implemented in dm-crypt.c.
> This patch implements these algorithms as template ciphers, so that the
> dm-crypt layer can be simplified, and also these algorithms can be
> implemented in hardware for performance.
> 
> Synchronous crypto requests to encrypt/decrypt a sector are processed
> sequentially. Asynchronous requests, if processed in parallel, are freed
> in the async callback.
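
To make the idea concrete -- a minimal sketch, not code from this patch, with a
hypothetical helper name: the dm-crypt side composes the algorithm name with
snprintf("%s(%s)", ivmode, cipher_api), so once the templates are registered a
caller allocates e.g. "plain64(xts(aes))" and drives it through the normal
skcipher API, handing per-bio sector information to the template in place of a
raw IV.

#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <crypto/skcipher.h>

/* Illustrative only: one synchronous request through an IV-generation template. */
static int geniv_usage_sketch(struct scatterlist *sg_in,
			      struct scatterlist *sg_out,
			      unsigned int nbytes,
			      const u8 *key, unsigned int keylen,
			      u8 *req_info)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	int err;

	/* the IV mode wraps the cipher mode in the algorithm name */
	tfm = crypto_alloc_skcipher("plain64(xts(aes))", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* in this patchset the key blob is actually a struct geniv_key_info */
	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		goto out_tfm;

	req = skcipher_request_alloc(tfm, GFP_NOIO);
	if (!req) {
		err = -ENOMEM;
		goto out_tfm;
	}

	/* dm-crypt passes a struct geniv_req_info here instead of a raw IV */
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
	skcipher_request_set_crypt(req, sg_in, sg_out, nbytes, req_info);
	err = crypto_skcipher_encrypt(req);	/* assumes synchronous completion */

	skcipher_request_free(req);
out_tfm:
	crypto_free_skcipher(tfm);
	return err;
}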

So we are here again, moving INTERNAL dm-crypt functionality into
the crypto API.

The TCW and LMK IV generators make sense only for dm-crypt,
for compatibility with old disk encryption mappings.

I strongly disagree to move this outside of dm-crypt.

Sorry, the conclusion of the last discussion was that this code remains inside
dm-crypt and is only registered through the crypto API.
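
(To make that alternative concrete -- a minimal sketch with hypothetical
names, not code from this thread: the IV generator implementation and its
template "create" callback stay in drivers/md/dm-crypt.c, and only the
registration goes through the crypto API.)

/* Sketch of a fragment assumed to live in drivers/md/dm-crypt.c. */
#include <linux/module.h>
#include <crypto/algapi.h>

/* hypothetical constructor, implemented alongside the existing IV code */
static int dm_crypt_geniv_create(struct crypto_template *tmpl,
				 struct rtattr **tb);

static struct crypto_template dm_crypt_plain64_tmpl = {
	.name   = "plain64",
	.create = dm_crypt_geniv_create,
	.module = THIS_MODULE,
};

static int __init dm_crypt_init(void)
{
	int r;

	r = crypto_register_template(&dm_crypt_plain64_tmpl);
	if (r)
		return r;

	r = dm_register_target(&crypt_target);
	if (r)
		crypto_unregister_template(&dm_crypt_plain64_tmpl);

	return r;
}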

And this applies to all the files:

> + * Copyright (C) 2018, Linaro

It is NOT YOUR code! Please keep copyright and authors as in dm-crypt.

Milan

> 
> Interface to the crypto layer - include/crypto/geniv.h
> 
> This patch is based on the patchset originally started by
> Binoy Jayan <binoy.jayan@linaro.org>
> ( crypto: Add IV generation algorithms
> https://patchwork.kernel.org/patch/9803469/ )
> 
> Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org>
> Signed-off-by: Xiongfeng Wang <wangxiongfeng2@linaro.org>
> ---
>  crypto/Kconfig         |    7 +
>  crypto/Makefile        |    1 +
>  crypto/geniv.c         | 2240 ++++++++++++++++++++++++++++++++++++++++++++++++
>  include/crypto/geniv.h |   47 +
>  4 files changed, 2295 insertions(+)
>  create mode 100644 crypto/geniv.c
>  create mode 100644 include/crypto/geniv.h
> 
> diff --git a/crypto/Kconfig b/crypto/Kconfig
> index f3e40ac..98f025a 100644
> --- a/crypto/Kconfig
> +++ b/crypto/Kconfig
> @@ -257,6 +257,13 @@ config CRYPTO_GLUE_HELPER_X86
>  config CRYPTO_ENGINE
>  	tristate
>  
> +config CRYPTO_GENIV
> +	tristate "IV Generator Template"
> +	select CRYPTO_AEAD
> +	select CRYPTO_BLKCIPHER
> +	help
> +	  Support for IV generation templates, so that dm-crypt can rely on them.
> +
>  comment "Authenticated Encryption with Associated Data"
>  
>  config CRYPTO_CCM
> diff --git a/crypto/Makefile b/crypto/Makefile
> index 6d1d40e..1077d2f 100644
> --- a/crypto/Makefile
> +++ b/crypto/Makefile
> @@ -23,6 +23,7 @@ crypto_blkcipher-y += skcipher.o
>  obj-$(CONFIG_CRYPTO_BLKCIPHER2) += crypto_blkcipher.o
>  obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
>  obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
> +obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
>  
>  crypto_hash-y += ahash.o
>  crypto_hash-y += shash.o
> diff --git a/crypto/geniv.c b/crypto/geniv.c
> new file mode 100644
> index 0000000..55d1212
> --- /dev/null
> +++ b/crypto/geniv.c
> @@ -0,0 +1,2240 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * geniv.c - crypto template for generating IV
> + *
> + * Copyright (C) 2018, Linaro
> + *
> + * This file adds crypto templates for IV generation, so that dm-crypt can
> + * rely on them and drop its own IV generation code.
> + */
> +
> +#include <linux/completion.h>
> +#include <linux/err.h>
> +#include <linux/module.h>
> +#include <linux/init.h>
> +#include <linux/kernel.h>
> +#include <linux/key.h>
> +#include <linux/bio.h>
> +#include <linux/blkdev.h>
> +#include <linux/mempool.h>
> +#include <linux/slab.h>
> +#include <linux/crypto.h>
> +#include <linux/atomic.h>
> +#include <linux/scatterlist.h>
> +#include <linux/ctype.h>
> +#include <asm/page.h>
> +#include <asm/unaligned.h>
> +#include <crypto/hash.h>
> +#include <crypto/md5.h>
> +#include <crypto/algapi.h>
> +#include <crypto/skcipher.h>
> +#include <crypto/aead.h>
> +#include <crypto/authenc.h>
> +#include <crypto/geniv.h>
> +#include <crypto/internal/aead.h>
> +#include <crypto/internal/skcipher.h>
> +#include <linux/rtnetlink.h> /* for struct rtattr and RTA macros only */
> +#include <keys/user-type.h>
> +#include <linux/backing-dev.h>
> +#include <linux/device-mapper.h>
> +#include <linux/log2.h>
> +
> +#define DM_MSG_PREFIX		"crypt"
> +#define MIN_IOS		64
> +#define IV_TYPE_NUM 8
> +#define SECTOR_MASK ((1 << SECTOR_SHIFT) - 1)
> +
> +struct geniv_ctx;
> +struct geniv_req_ctx;
> +
> +/* Sub request for each of the skcipher_request's for a segment */
> +struct geniv_subreq {
> +	struct scatterlist sg_in[4];
> +	struct scatterlist sg_out[4];
> +	sector_t iv_sector;
> +	struct geniv_req_ctx *rctx;
> +	union {
> +		struct skcipher_request req;
> +		struct aead_request req_aead;
> +	} r CRYPTO_MINALIGN_ATTR;
> +};
> +
> +/* used to iterate over the src scatterlist of the input parent request */
> +struct scatterlist_iter {
> +	/* current segment to be processed */
> +	unsigned int seg_no;
> +	/* bytes had been processed in current segment */
> +	unsigned int done;
> +	/* bytes to be processed in the next request */
> +	unsigned int len;
> +};
> +
> +/* context of the input parent request */
> +struct geniv_req_ctx {
> +	struct geniv_subreq *subreq;
> +	bool is_write;
> +	bool is_aead_request;
> +	sector_t cc_sector;
> +	/* array size of src scatterlist of parent request */
> +	unsigned int nents;
> +	struct scatterlist_iter iter;
> +	struct completion restart;
> +	atomic_t req_pending;
> +	u8 *integrity_metadata;
> +	/* point to the input parent request */
> +	union {
> +		struct skcipher_request *req;
> +		struct aead_request *req_aead;
> +	} r;
> +};
> +
> +struct crypt_iv_operations {
> +	int (*ctr)(struct geniv_ctx *ctx);
> +	void (*dtr)(struct geniv_ctx *ctx);
> +	int (*init)(struct geniv_ctx *ctx);
> +	int (*wipe)(struct geniv_ctx *ctx);
> +	int (*generator)(struct geniv_ctx *ctx,
> +			struct geniv_req_ctx *rctx,
> +			struct geniv_subreq *subreq, u8 *iv);
> +	int (*post)(struct geniv_ctx *ctx,
> +			struct geniv_req_ctx *rctx,
> +			struct geniv_subreq *subreq, u8 *iv);
> +};
> +
> +struct geniv_essiv_private {
> +	struct crypto_ahash *hash_tfm;
> +	u8 *salt;
> +};
> +
> +struct geniv_benbi_private {
> +	int shift;
> +};
> +
> +#define LMK_SEED_SIZE 64 /* hash + 0 */
> +struct geniv_lmk_private {
> +	struct crypto_shash *hash_tfm;
> +	u8 *seed;
> +};
> +
> +#define TCW_WHITENING_SIZE 16
> +struct geniv_tcw_private {
> +	struct crypto_shash *crc32_tfm;
> +	u8 *iv_seed;
> +	u8 *whitening;
> +};
> +
> +/* context of geniv tfm */
> +struct geniv_ctx {
> +	unsigned int tfms_count;
> +	union {
> +		struct crypto_skcipher *tfm;
> +		struct crypto_aead *tfm_aead;
> +	} tfm_child;
> +	union {
> +		struct crypto_skcipher **tfms;
> +		struct crypto_aead **tfms_aead;
> +	} tfms;
> +
> +	char *ivmode;
> +	unsigned int iv_size;
> +	unsigned int iv_start;
> +	unsigned int rctx_start;
> +	sector_t iv_offset;
> +	unsigned short int sector_size;
> +	unsigned char sector_shift;
> +	char *algname;
> +	char *ivopts;
> +	char *cipher;
> +	char *ciphermode;
> +	unsigned long cipher_flags;
> +
> +	const struct crypt_iv_operations *iv_gen_ops;
> +	union {
> +		struct geniv_essiv_private essiv;
> +		struct geniv_benbi_private benbi;
> +		struct geniv_lmk_private lmk;
> +		struct geniv_tcw_private tcw;
> +	} iv_gen_private;
> +	void *iv_private;
> +
> +	mempool_t *subreq_pool;
> +	unsigned int key_size;
> +	unsigned int key_parts;      /* independent parts in key buffer */
> +	unsigned int key_extra_size; /* additional keys length */
> +	unsigned int key_mac_size;
> +
> +	unsigned int integrity_tag_size;
> +	unsigned int integrity_iv_size;
> +	unsigned int on_disk_tag_size;
> +
> +	char *msg;
> +	u8 *authenc_key; /* space for keys in authenc() format (if used) */
> +	u8 *key;
> +};
> +
> +static struct scatterlist *crypt_get_sg_data(struct geniv_ctx *ctx,
> +					     struct scatterlist *sg);
> +
> +static bool geniv_integrity_aead(struct geniv_ctx *ctx)
> +{
> +	return test_bit(CRYPT_MODE_INTEGRITY_AEAD, &ctx->cipher_flags);
> +}
> +
> +static bool geniv_integrity_hmac(struct geniv_ctx *ctx)
> +{
> +	return geniv_integrity_aead(ctx) && ctx->key_mac_size;
> +}
> +
> +static struct geniv_req_ctx *geniv_skcipher_req_ctx(struct skcipher_request *req)
> +{
> +	return (void *)PTR_ALIGN((u8 *)skcipher_request_ctx(req),  __alignof__(struct geniv_req_ctx));
> +}
> +
> +static struct geniv_req_ctx *geniv_aead_req_ctx(struct aead_request *req)
> +{
> +	return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), __alignof__(struct geniv_req_ctx));
> +}
> +
> +static u8 *iv_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
> +{
> +	if (geniv_integrity_aead(ctx))
> +		return (u8 *)ALIGN((unsigned long)((char *)subreq + ctx->iv_start),
> +			crypto_aead_alignmask(crypto_aead_reqtfm(subreq->rctx->r.req_aead)) + 1);
> +	else
> +		return (u8 *)ALIGN((unsigned long)((char *)subreq + ctx->iv_start),
> +			crypto_skcipher_alignmask(crypto_skcipher_reqtfm(subreq->rctx->r.req)) + 1);
> +}
> +
> +/* Get sg containing data */
> +static struct scatterlist *crypt_get_sg_data(struct geniv_ctx *ctx,
> +					     struct scatterlist *sg)
> +{
> +	if (unlikely(geniv_integrity_aead(ctx)))
> +		return &sg[2];
> +
> +	return sg;
> +}
> +
> +/*
> + * Different IV generation algorithms:
> + *
> + * plain: the initial vector is the 32-bit little-endian version of the sector
> + *        number, padded with zeros if necessary.
> + *
> + * plain64: the initial vector is the 64-bit little-endian version of the sector
> + *        number, padded with zeros if necessary.
> + *
> + * plain64be: the initial vector is the 64-bit big-endian version of the sector
> + *        number, padded with zeros if necessary.
> + *
> + * essiv: "encrypted sector|salt initial vector", the sector number is
> + *        encrypted with the bulk cipher using a salt as key. The salt
> + *        should be derived from the bulk cipher's key via hashing.
> + *
> + * benbi: the 64-bit "big-endian 'narrow block'-count", starting at 1
> + *        (needed for LRW-32-AES and possible other narrow block modes)
> + *
> + * null: the initial vector is always zero.  Provides compatibility with
> + *       obsolete loop_fish2 devices.  Do not use for new devices.
> + *
> + * lmk:  Compatible implementation of the block chaining mode used
> + *       by the Loop-AES block device encryption system
> + *       designed by Jari Ruusu. See http://loop-aes.sourceforge.net/
> + *       It operates on full 512 byte sectors and uses CBC
> + *       with an IV derived from the sector number, the data and
> + *       optionally extra IV seed.
> + *       This means that after decryption the first block
> + *       of sector must be tweaked according to decrypted data.
> + *       Loop-AES can use three encryption schemes:
> + *         version 1: is plain aes-cbc mode
> + *         version 2: uses 64 multikey scheme with lmk IV generator
> + *         version 3: the same as version 2 with additional IV seed
> + *                   (it uses 65 keys, last key is used as IV seed)
> + *
> + * tcw:  Compatible implementation of the block chaining mode used
> + *       by the TrueCrypt device encryption system (prior to version 4.1).
> + *       For more info see: https://gitlab.com/cryptsetup/cryptsetup/wikis/TrueCryptOnDiskFormat
> + *       It operates on full 512 byte sectors and uses CBC
> + *       with an IV derived from initial key and the sector number.
> + *       In addition, whitening value is applied on every sector, whitening
> + *       is calculated from initial key, sector number and mixed using CRC32.
> + *       Note that this encryption scheme is vulnerable to watermarking attacks
> + *       and should be used for old compatible containers access only.
> + *
> + * plumb: unimplemented, see:
> + * http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
> + */
> +
> +static int crypt_iv_plain_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	memset(iv, 0, ctx->iv_size);
> +	*(__le32 *)iv = cpu_to_le32(subreq->iv_sector & 0xffffffff);
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_plain64_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	memset(iv, 0, ctx->iv_size);
> +	*(__le64 *)iv = cpu_to_le64(subreq->iv_sector);
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_plain64be_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	memset(iv, 0, ctx->iv_size);
> +	/* iv_size is at least of size u64; usually it is 16 bytes */
> +	*(__be64 *)&iv[ctx->iv_size - sizeof(u64)] = cpu_to_be64(subreq->iv_sector);
> +
> +	return 0;
> +}
> +
> +/* Initialise ESSIV - compute salt but no local memory allocations */
> +static int crypt_iv_essiv_init(struct geniv_ctx *ctx)
> +{
> +	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
> +	AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
> +	struct scatterlist sg;
> +	struct crypto_cipher *essiv_tfm;
> +	int err;
> +
> +	sg_init_one(&sg, ctx->key, ctx->key_size);
> +	ahash_request_set_tfm(req, essiv->hash_tfm);
> +	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
> +	ahash_request_set_crypt(req, &sg, essiv->salt, ctx->key_size);
> +
> +	err = crypto_ahash_digest(req);
> +	ahash_request_zero(req);
> +	if (err)
> +		return err;
> +
> +	essiv_tfm = ctx->iv_private;
> +
> +	return crypto_cipher_setkey(essiv_tfm, essiv->salt,
> +			    crypto_ahash_digestsize(essiv->hash_tfm));
> +}
> +
> +/* Wipe salt and reset key derived from volume key */
> +static int crypt_iv_essiv_wipe(struct geniv_ctx *ctx)
> +{
> +	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
> +	unsigned int salt_size = crypto_ahash_digestsize(essiv->hash_tfm);
> +	struct crypto_cipher *essiv_tfm;
> +
> +	memset(essiv->salt, 0, salt_size);
> +
> +	essiv_tfm = ctx->iv_private;
> +	return crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
> +}
> +
> +/* Allocate the cipher for ESSIV */
> +static struct crypto_cipher *alloc_essiv_cipher(struct geniv_ctx *ctx,
> +					u8 *salt, unsigned int saltsize)
> +{
> +	struct crypto_cipher *essiv_tfm;
> +	int err;
> +
> +	/* Setup the essiv_tfm with the given salt */
> +	essiv_tfm = crypto_alloc_cipher(ctx->cipher, 0, CRYPTO_ALG_ASYNC);
> +	if (IS_ERR(essiv_tfm)) {
> +		DMERR("Error allocating crypto tfm for ESSIV\n");
> +		return essiv_tfm;
> +	}
> +
> +	if (crypto_cipher_blocksize(essiv_tfm) != ctx->iv_size) {
> +		DMERR("Block size of ESSIV cipher does "
> +			    "not match IV size of block cipher\n");
> +		crypto_free_cipher(essiv_tfm);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
> +	if (err) {
> +		DMERR("Failed to set key for ESSIV cipher\n");
> +		crypto_free_cipher(essiv_tfm);
> +		return ERR_PTR(err);
> +	}
> +
> +	return essiv_tfm;
> +}
> +
> +static void crypt_iv_essiv_dtr(struct geniv_ctx *ctx)
> +{
> +	struct crypto_cipher *essiv_tfm;
> +	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
> +
> +	crypto_free_ahash(essiv->hash_tfm);
> +	essiv->hash_tfm = NULL;
> +
> +	kzfree(essiv->salt);
> +	essiv->salt = NULL;
> +
> +	essiv_tfm = ctx->iv_private;
> +
> +	if (essiv_tfm)
> +		crypto_free_cipher(essiv_tfm);
> +
> +	ctx->iv_private = NULL;
> +}
> +
> +static int crypt_iv_essiv_ctr(struct geniv_ctx *ctx)
> +{
> +	struct crypto_cipher *essiv_tfm = NULL;
> +	struct crypto_ahash *hash_tfm = NULL;
> +	u8 *salt = NULL;
> +	int err;
> +
> +	if (!ctx->ivopts) {
> +		DMERR("Digest algorithm missing for ESSIV mode\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Allocate hash algorithm */
> +	hash_tfm = crypto_alloc_ahash(ctx->ivopts, 0, CRYPTO_ALG_ASYNC);
> +	if (IS_ERR(hash_tfm)) {
> +		DMERR("Error initializing ESSIV hash\n");
> +		err = PTR_ERR(hash_tfm);
> +		goto bad;
> +	}
> +
> +	salt = kzalloc(crypto_ahash_digestsize(hash_tfm), GFP_KERNEL);
> +	if (!salt) {
> +		DMERR("Error kmallocing salt storage in ESSIV\n");
> +		err = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	ctx->iv_gen_private.essiv.salt = salt;
> +	ctx->iv_gen_private.essiv.hash_tfm = hash_tfm;
> +
> +	essiv_tfm = alloc_essiv_cipher(ctx, salt,
> +				       crypto_ahash_digestsize(hash_tfm));
> +	if (IS_ERR(essiv_tfm)) {
> +		crypt_iv_essiv_dtr(ctx);
> +		return PTR_ERR(essiv_tfm);
> +	}
> +	ctx->iv_private = essiv_tfm;
> +
> +	return 0;
> +
> +bad:
> +	if (hash_tfm && !IS_ERR(hash_tfm))
> +		crypto_free_ahash(hash_tfm);
> +	kfree(salt);
> +	return err;
> +}
> +
> +static int crypt_iv_essiv_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	struct crypto_cipher *essiv_tfm = ctx->iv_private;
> +
> +	memset(iv, 0, ctx->iv_size);
> +	*(__le64 *)iv = cpu_to_le64(subreq->iv_sector);
> +	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_benbi_ctr(struct geniv_ctx *ctx)
> +{
> +	unsigned int bs = crypto_skcipher_blocksize(ctx->tfms.tfms[0]);
> +	int log = ilog2(bs);
> +
> +	/* we need to calculate how far we must shift the sector count
> +	 * to get the cipher block count, we use this shift in _gen */
> +
> +	if (1 << log != bs) {
> +		DMERR("cypher blocksize is not a power of 2\n");
> +		return -EINVAL;
> +	}
> +
> +	if (log > 9) {
> +		DMERR("cypher blocksize is > 512\n");
> +		return -EINVAL;
> +	}
> +
> +	ctx->iv_gen_private.benbi.shift = 9 - log;
> +
> +	return 0;
> +}
> +
> +static void crypt_iv_benbi_dtr(struct geniv_ctx *ctx)
> +{
> +}
> +
> +static int crypt_iv_benbi_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	__be64 val;
> +
> +	memset(iv, 0, ctx->iv_size - sizeof(u64)); /* rest is cleared below */
> +
> +	val = cpu_to_be64(((u64)subreq->iv_sector << ctx->iv_gen_private.benbi.shift) + 1);
> +	put_unaligned(val, (__be64 *)(iv + ctx->iv_size - sizeof(u64)));
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_null_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	memset(iv, 0, ctx->iv_size);
> +
> +	return 0;
> +}
> +
> +static void crypt_iv_lmk_dtr(struct geniv_ctx *ctx)
> +{
> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
> +
> +	if (lmk->hash_tfm && !IS_ERR(lmk->hash_tfm))
> +		crypto_free_shash(lmk->hash_tfm);
> +	lmk->hash_tfm = NULL;
> +
> +	kzfree(lmk->seed);
> +	lmk->seed = NULL;
> +}
> +
> +static int crypt_iv_lmk_ctr(struct geniv_ctx *ctx)
> +{
> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
> +
> +	if (ctx->sector_size != (1 << SECTOR_SHIFT)) {
> +		DMERR("Unsupported sector size for LMK\n");
> +		return -EINVAL;
> +	}
> +
> +	lmk->hash_tfm = crypto_alloc_shash("md5", 0, 0);
> +	if (IS_ERR(lmk->hash_tfm)) {
> +		DMERR("Error initializing LMK hash, err=%ld\n",
> +			PTR_ERR(lmk->hash_tfm));
> +		return PTR_ERR(lmk->hash_tfm);
> +	}
> +
> +	/* No seed in LMK version 2 */
> +	if (ctx->key_parts == ctx->tfms_count) {
> +		lmk->seed = NULL;
> +		return 0;
> +	}
> +
> +	lmk->seed = kzalloc(LMK_SEED_SIZE, GFP_KERNEL);
> +	if (!lmk->seed) {
> +		crypt_iv_lmk_dtr(ctx);
> +		DMERR("Error kmallocing seed storage in LMK\n");
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_lmk_init(struct geniv_ctx *ctx)
> +{
> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
> +	int subkey_size = ctx->key_size / ctx->key_parts;
> +
> +	/* LMK seed is on the position of LMK_KEYS + 1 key */
> +	if (lmk->seed)
> +		memcpy(lmk->seed, ctx->key + (ctx->tfms_count * subkey_size),
> +		       crypto_shash_digestsize(lmk->hash_tfm));
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_lmk_wipe(struct geniv_ctx *ctx)
> +{
> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
> +
> +	if (lmk->seed)
> +		memset(lmk->seed, 0, LMK_SEED_SIZE);
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_lmk_one(struct geniv_ctx *ctx, u8 *iv,
> +				struct geniv_subreq *subreq, u8 *data)
> +{
> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
> +	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
> +	struct md5_state md5state;
> +	__le32 buf[4];
> +	int i, r;
> +
> +	desc->tfm = lmk->hash_tfm;
> +	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +
> +	r = crypto_shash_init(desc);
> +	if (r)
> +		return r;
> +
> +	if (lmk->seed) {
> +		r = crypto_shash_update(desc, lmk->seed, LMK_SEED_SIZE);
> +		if (r)
> +			return r;
> +	}
> +
> +	/* Sector is always 512B, block size 16, add data of blocks 1-31 */
> +	r = crypto_shash_update(desc, data + 16, 16 * 31);
> +	if (r)
> +		return r;
> +
> +	/* Sector is cropped to 56 bits here */
> +	buf[0] = cpu_to_le32(subreq->iv_sector & 0xFFFFFFFF);
> +	buf[1] = cpu_to_le32((((u64)subreq->iv_sector >> 32) & 0x00FFFFFF) | 0x80000000);
> +	buf[2] = cpu_to_le32(4024);
> +	buf[3] = 0;
> +	r = crypto_shash_update(desc, (u8 *)buf, sizeof(buf));
> +	if (r)
> +		return r;
> +
> +	/* No MD5 padding here */
> +	r = crypto_shash_export(desc, &md5state);
> +	if (r)
> +		return r;
> +
> +	for (i = 0; i < MD5_HASH_WORDS; i++)
> +		__cpu_to_le32s(&md5state.hash[i]);
> +	memcpy(iv, &md5state.hash, ctx->iv_size);
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_lmk_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	struct scatterlist *sg;
> +	u8 *src;
> +	int r = 0;
> +
> +	if (rctx->is_write) {
> +		sg = crypt_get_sg_data(ctx, subreq->sg_in);
> +		src = kmap_atomic(sg_page(sg));
> +		r = crypt_iv_lmk_one(ctx, iv, subreq, src + sg->offset);
> +		kunmap_atomic(src);
> +	} else
> +		memset(iv, 0, ctx->iv_size);
> +
> +	return r;
> +}
> +
> +static int crypt_iv_lmk_post(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	struct scatterlist *sg;
> +	u8 *dst;
> +	int r;
> +
> +	if (rctx->is_write)
> +		return 0;
> +
> +	sg = crypt_get_sg_data(ctx, subreq->sg_out);
> +	dst = kmap_atomic(sg_page(sg));
> +	r = crypt_iv_lmk_one(ctx, iv, subreq, dst + sg->offset);
> +
> +	/* Tweak the first block of plaintext sector */
> +	if (!r)
> +		crypto_xor(dst + sg->offset, iv, ctx->iv_size);
> +
> +	kunmap_atomic(dst);
> +	return r;
> +}
> +
> +static void crypt_iv_tcw_dtr(struct geniv_ctx *ctx)
> +{
> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
> +
> +	kzfree(tcw->iv_seed);
> +	tcw->iv_seed = NULL;
> +	kzfree(tcw->whitening);
> +	tcw->whitening = NULL;
> +
> +	if (tcw->crc32_tfm && !IS_ERR(tcw->crc32_tfm))
> +		crypto_free_shash(tcw->crc32_tfm);
> +	tcw->crc32_tfm = NULL;
> +}
> +
> +static int crypt_iv_tcw_ctr(struct geniv_ctx *ctx)
> +{
> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
> +
> +	if (ctx->sector_size != (1 << SECTOR_SHIFT)) {
> +		DMERR("Unsupported sector size for TCW\n");
> +		return -EINVAL;
> +	}
> +
> +	if (ctx->key_size <= (ctx->iv_size + TCW_WHITENING_SIZE)) {
> +		DMERR("Wrong key size (%d) for TCW. Choose a value > %d bytes\n",
> +			ctx->key_size, ctx->iv_size + TCW_WHITENING_SIZE);
> +		return -EINVAL;
> +	}
> +
> +	tcw->crc32_tfm = crypto_alloc_shash("crc32", 0, 0);
> +	if (IS_ERR(tcw->crc32_tfm)) {
> +		DMERR("Error initializing CRC32 in TCW; err=%ld\n",
> +			PTR_ERR(tcw->crc32_tfm));
> +		return PTR_ERR(tcw->crc32_tfm);
> +	}
> +
> +	tcw->iv_seed = kzalloc(ctx->iv_size, GFP_KERNEL);
> +	tcw->whitening = kzalloc(TCW_WHITENING_SIZE, GFP_KERNEL);
> +	if (!tcw->iv_seed || !tcw->whitening) {
> +		crypt_iv_tcw_dtr(ctx);
> +		DMERR("Error allocating seed storage in TCW\n");
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_tcw_init(struct geniv_ctx *ctx)
> +{
> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
> +	int key_offset = ctx->key_size - ctx->iv_size - TCW_WHITENING_SIZE;
> +
> +	memcpy(tcw->iv_seed, &ctx->key[key_offset], ctx->iv_size);
> +	memcpy(tcw->whitening, &ctx->key[key_offset + ctx->iv_size],
> +	       TCW_WHITENING_SIZE);
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_tcw_wipe(struct geniv_ctx *ctx)
> +{
> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
> +
> +	memset(tcw->iv_seed, 0, ctx->iv_size);
> +	memset(tcw->whitening, 0, TCW_WHITENING_SIZE);
> +
> +	return 0;
> +}
> +
> +static int crypt_iv_tcw_whitening(struct geniv_ctx *ctx,
> +				struct geniv_subreq *subreq, u8 *data)
> +{
> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
> +	__le64 sector = cpu_to_le64(subreq->iv_sector);
> +	u8 buf[TCW_WHITENING_SIZE];
> +	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
> +	int i, r;
> +
> +	/* xor whitening with sector number */
> +	crypto_xor_cpy(buf, tcw->whitening, (u8 *)&sector, 8);
> +	crypto_xor_cpy(&buf[8], tcw->whitening + 8, (u8 *)&sector, 8);
> +
> +	/* calculate crc32 for every 32bit part and xor it */
> +	desc->tfm = tcw->crc32_tfm;
> +	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +	for (i = 0; i < 4; i++) {
> +		r = crypto_shash_init(desc);
> +		if (r)
> +			goto out;
> +		r = crypto_shash_update(desc, &buf[i * 4], 4);
> +		if (r)
> +			goto out;
> +		r = crypto_shash_final(desc, &buf[i * 4]);
> +		if (r)
> +			goto out;
> +	}
> +	crypto_xor(&buf[0], &buf[12], 4);
> +	crypto_xor(&buf[4], &buf[8], 4);
> +
> +	/* apply whitening (8 bytes) to whole sector */
> +	for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++)
> +		crypto_xor(data + i * 8, buf, 8);
> +out:
> +	memzero_explicit(buf, sizeof(buf));
> +	return r;
> +}
> +
> +static int crypt_iv_tcw_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	struct scatterlist *sg;
> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
> +	__le64 sector = cpu_to_le64(subreq->iv_sector);
> +	u8 *src;
> +	int r = 0;
> +
> +	/* Remove whitening from ciphertext */
> +	if (!rctx->is_write) {
> +		sg = crypt_get_sg_data(ctx, subreq->sg_in);
> +		src = kmap_atomic(sg_page(sg));
> +		r = crypt_iv_tcw_whitening(ctx, subreq, src + sg->offset);
> +		kunmap_atomic(src);
> +	}
> +
> +	/* Calculate IV */
> +	crypto_xor_cpy(iv, tcw->iv_seed, (u8 *)&sector, 8);
> +	if (ctx->iv_size > 8)
> +		crypto_xor_cpy(&iv[8], tcw->iv_seed + 8, (u8 *)&sector,
> +			       ctx->iv_size - 8);
> +
> +	return r;
> +}
> +
> +static int crypt_iv_tcw_post(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	struct scatterlist *sg;
> +	u8 *dst;
> +	int r;
> +
> +	if (!rctx->is_write)
> +		return 0;
> +
> +	/* Apply whitening on ciphertext */
> +	sg = crypt_get_sg_data(ctx, subreq->sg_out);
> +	dst = kmap_atomic(sg_page(sg));
> +	r = crypt_iv_tcw_whitening(ctx, subreq, dst + sg->offset);
> +	kunmap_atomic(dst);
> +
> +	return r;
> +}
> +
> +static int crypt_iv_random_gen(struct geniv_ctx *ctx,
> +				struct geniv_req_ctx *rctx,
> +				struct geniv_subreq *subreq, u8 *iv)
> +{
> +	/* Used only for writes, there must be an additional space to store IV */
> +	get_random_bytes(iv, ctx->iv_size);
> +	return 0;
> +}
> +
> +static const struct crypt_iv_operations crypt_iv_plain_ops = {
> +	.generator = crypt_iv_plain_gen
> +};
> +
> +static const struct crypt_iv_operations crypt_iv_plain64_ops = {
> +	.generator = crypt_iv_plain64_gen
> +};
> +
> +static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
> +	.generator = crypt_iv_plain64be_gen
> +};
> +
> +static const struct crypt_iv_operations crypt_iv_essiv_ops = {
> +	.ctr       = crypt_iv_essiv_ctr,
> +	.dtr       = crypt_iv_essiv_dtr,
> +	.init      = crypt_iv_essiv_init,
> +	.wipe      = crypt_iv_essiv_wipe,
> +	.generator = crypt_iv_essiv_gen
> +};
> +
> +static const struct crypt_iv_operations crypt_iv_benbi_ops = {
> +	.ctr	   = crypt_iv_benbi_ctr,
> +	.dtr	   = crypt_iv_benbi_dtr,
> +	.generator = crypt_iv_benbi_gen
> +};
> +
> +static const struct crypt_iv_operations crypt_iv_null_ops = {
> +	.generator = crypt_iv_null_gen
> +};
> +
> +static const struct crypt_iv_operations crypt_iv_lmk_ops = {
> +	.ctr	   = crypt_iv_lmk_ctr,
> +	.dtr	   = crypt_iv_lmk_dtr,
> +	.init	   = crypt_iv_lmk_init,
> +	.wipe	   = crypt_iv_lmk_wipe,
> +	.generator = crypt_iv_lmk_gen,
> +	.post	   = crypt_iv_lmk_post
> +};
> +
> +static const struct crypt_iv_operations crypt_iv_tcw_ops = {
> +	.ctr	   = crypt_iv_tcw_ctr,
> +	.dtr	   = crypt_iv_tcw_dtr,
> +	.init	   = crypt_iv_tcw_init,
> +	.wipe	   = crypt_iv_tcw_wipe,
> +	.generator = crypt_iv_tcw_gen,
> +	.post	   = crypt_iv_tcw_post
> +};
> +
> +static struct crypt_iv_operations crypt_iv_random_ops = {
> +	.generator = crypt_iv_random_gen
> +};
> +
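> +/*
> + * Map the parsed ivmode string to the corresponding IV operations and adjust
> + * key_parts/key_extra_size for modes that consume extra key material (lmk IV
> + * seed, tcw IV seed + whitening), then run the optional ctr() and init()
> + * callbacks of the selected IV generator.
> + */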
> +static int geniv_init_iv(struct geniv_ctx *ctx)
> +{
> +	int ret;
> +
> +	DMDEBUG("IV Generation algorithm : %s\n", ctx->ivmode);
> +
> +	if (ctx->ivmode == NULL)
> +		ctx->iv_gen_ops = NULL;
> +	else if (strcmp(ctx->ivmode, "plain") == 0)
> +		ctx->iv_gen_ops = &crypt_iv_plain_ops;
> +	else if (strcmp(ctx->ivmode, "plain64") == 0)
> +		ctx->iv_gen_ops = &crypt_iv_plain64_ops;
> +	else if (strcmp(ctx->ivmode, "essiv") == 0)
> +		ctx->iv_gen_ops = &crypt_iv_essiv_ops;
> +	else if (strcmp(ctx->ivmode, "benbi") == 0)
> +		ctx->iv_gen_ops = &crypt_iv_benbi_ops;
> +	else if (strcmp(ctx->ivmode, "null") == 0)
> +		ctx->iv_gen_ops = &crypt_iv_null_ops;
> +	else if (strcmp(ctx->ivmode, "lmk") == 0) {
> +		ctx->iv_gen_ops = &crypt_iv_lmk_ops;
> +		/*
> +		 * Versions 2 and 3 are recognised according
> +		 * to the length of the provided multi-key string.
> +		 * If present (version 3), the last key is used as the IV seed.
> +		 * All keys (including the IV seed) are always the same size.
> +		 */
> +		if (ctx->key_size % ctx->key_parts) {
> +			ctx->key_parts++;
> +			ctx->key_extra_size = ctx->key_size / ctx->key_parts;
> +		}
> +	} else if (strcmp(ctx->ivmode, "tcw") == 0) {
> +		ctx->iv_gen_ops = &crypt_iv_tcw_ops;
> +		ctx->key_parts += 2; /* IV + whitening */
> +		ctx->key_extra_size = ctx->iv_size + TCW_WHITENING_SIZE;
> +	} else if (strcmp(ctx->ivmode, "random") == 0) {
> +		ctx->iv_gen_ops = &crypt_iv_random_ops;
> +		/* Need storage space in integrity fields. */
> +		ctx->integrity_iv_size = ctx->iv_size;
> +	} else {
> +		DMERR("Invalid IV mode %s\n", ctx->ivmode);
> +		return -EINVAL;
> +	}
> +
> +	/* Allocate IV */
> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->ctr) {
> +		ret = ctx->iv_gen_ops->ctr(ctx);
> +		if (ret < 0) {
> +			DMERR("Error creating IV for %s\n", ctx->ivmode);
> +			return ret;
> +		}
> +	}
> +
> +	/* Initialize IV (set keys for ESSIV etc) */
> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init) {
> +		ret = ctx->iv_gen_ops->init(ctx);
> +		if (ret < 0) {
> +			DMERR("Error creating IV for %s\n", ctx->ivmode);
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static void geniv_free_tfms_aead(struct geniv_ctx *ctx)
> +{
> +	if (!ctx->tfms.tfms_aead)
> +		return;
> +
> +	if (ctx->tfms.tfms_aead[0] && !IS_ERR(ctx->tfms.tfms_aead[0])) {
> +		crypto_free_aead(ctx->tfms.tfms_aead[0]);
> +		ctx->tfms.tfms_aead[0] = NULL;
> +	}
> +
> +	kfree(ctx->tfms.tfms_aead);
> +	ctx->tfms.tfms_aead = NULL;
> +}
> +
> +static void geniv_free_tfms_skcipher(struct geniv_ctx *ctx)
> +{
> +	unsigned int i;
> +
> +	if (!ctx->tfms.tfms)
> +		return;
> +
> +	for (i = 0; i < ctx->tfms_count; i++)
> +		if (ctx->tfms.tfms[i] && !IS_ERR(ctx->tfms.tfms[i])) {
> +			crypto_free_skcipher(ctx->tfms.tfms[i]);
> +			ctx->tfms.tfms[i] = NULL;
> +		}
> +
> +	kfree(ctx->tfms.tfms);
> +	ctx->tfms.tfms = NULL;
> +}
> +
> +static void geniv_free_tfms(struct geniv_ctx *ctx)
> +{
> +	if (geniv_integrity_aead(ctx))
> +		geniv_free_tfms_aead(ctx);
> +	else
> +		geniv_free_tfms_skcipher(ctx);
> +}
> +
> +static int geniv_alloc_tfms_aead(struct crypto_aead *parent,
> +			    struct geniv_ctx *ctx)
> +{
> +	unsigned int reqsize, align;
> +
> +	ctx->tfms.tfms_aead = kcalloc(1, sizeof(struct crypto_aead *),
> +			   GFP_KERNEL);
> +	if (!ctx->tfms.tfms_aead)
> +		return -ENOMEM;
> +
> +	/* First instance is already allocated in geniv_init_tfm */
> +	ctx->tfms.tfms_aead[0] = ctx->tfm_child.tfm_aead;
> +
> +	/* Setup the current cipher's request structure */
> +	align = crypto_aead_alignmask(parent);
> +	align &= ~(crypto_tfm_ctx_alignment() - 1);
> +	reqsize = align + sizeof(struct geniv_req_ctx) +
> +		  crypto_aead_reqsize(ctx->tfms.tfms_aead[0]);
> +
> +	crypto_aead_set_reqsize(parent, reqsize);
> +
> +	return 0;
> +}
> +
> +/* Allocate memory for the underlying cipher algorithm, e.g. cbc(aes). */
> +static int geniv_alloc_tfms_skcipher(struct crypto_skcipher *parent,
> +			    struct geniv_ctx *ctx)
> +{
> +	unsigned int i, reqsize, align, err;
> +
> +	ctx->tfms.tfms = kcalloc(ctx->tfms_count, sizeof(struct crypto_skcipher *),
> +			   GFP_KERNEL);
> +	if (!ctx->tfms.tfms)
> +		return -ENOMEM;
> +
> +	/* First instance is already allocated in geniv_init_tfm */
> +	ctx->tfms.tfms[0] = ctx->tfm_child.tfm;
> +	for (i = 1; i < ctx->tfms_count; i++) {
> +		ctx->tfms.tfms[i] = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
> +		if (IS_ERR(ctx->tfms.tfms[i])) {
> +			err = PTR_ERR(ctx->tfms.tfms[i]);
> +			geniv_free_tfms(ctx);
> +			return err;
> +		}
> +
> +		/* Setup the current cipher's request structure */
> +		align = crypto_skcipher_alignmask(parent);
> +		align &= ~(crypto_tfm_ctx_alignment() - 1);
> +		reqsize = align + sizeof(struct geniv_req_ctx) +
> +			  crypto_skcipher_reqsize(ctx->tfms.tfms[i]);
> +
> +		crypto_skcipher_set_reqsize(parent, reqsize);
> +	}
> +
> +	return 0;
> +}
> +
> +static unsigned int geniv_authenckey_size(struct geniv_ctx *ctx)
> +{
> +	return ctx->key_size - ctx->key_extra_size +
> +		RTA_SPACE(sizeof(struct crypto_authenc_key_param));
> +}
> +
> +/* Initialize the cipher's context with the key, ivmode and other parameters.
> + * Also allocate IV generation template ciphers and initialize them.
> + */
> +static int geniv_setkey_init(void *parent, struct geniv_key_info *info)
> +{
> +	struct geniv_ctx *ctx;
> +	int ret;
> +
> +	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
> +		ctx = crypto_aead_ctx((struct crypto_aead *)parent);
> +	else
> +		ctx = crypto_skcipher_ctx((struct crypto_skcipher *)parent);
> +
> +	ctx->tfms_count = info->tfms_count;
> +	ctx->key = info->key;
> +	ctx->cipher_flags = info->cipher_flags;
> +	ctx->ivopts = info->ivopts;
> +	ctx->iv_offset = info->iv_offset;
> +	ctx->sector_size = info->sector_size;
> +	ctx->sector_shift = __ffs(ctx->sector_size) - SECTOR_SHIFT;
> +
> +	ctx->key_size = info->key_size;
> +	ctx->key_parts = info->key_parts;
> +	ctx->key_mac_size = info->key_mac_size;
> +	ctx->on_disk_tag_size = info->on_disk_tag_size;
> +
> +	if (geniv_integrity_hmac(ctx)) {
> +		ctx->authenc_key = kmalloc(geniv_authenckey_size(ctx), GFP_KERNEL);
> +		if (!ctx->authenc_key)
> +			return -ENOMEM;
> +	}
> +
> +	if (geniv_integrity_aead(ctx))
> +		ret = geniv_alloc_tfms_aead((struct crypto_aead *)parent, ctx);
> +	else
> +		ret = geniv_alloc_tfms_skcipher((struct crypto_skcipher *)parent, ctx);
> +	if (ret)
> +		return ret;
> +
> +	ret = geniv_init_iv(ctx);
> +
> +	if (geniv_integrity_aead(ctx))
> +		ctx->integrity_tag_size = ctx->on_disk_tag_size - ctx->integrity_iv_size;
> +
> +	return ret;
> +}
> +
> +/*
> + * If the AEAD is composed like authenc(hmac(sha256),xts(aes)),
> + * the key must be provided in a special format.
> + * This function converts ctx->key to this format.
> + */
> +static void crypt_copy_authenckey(char *p, const void *key,
> +			unsigned int enckeylen, unsigned int authkeylen)
> +{
> +	struct crypto_authenc_key_param *param;
> +	struct rtattr *rta;
> +
> +	rta = (struct rtattr *)p;
> +	param = RTA_DATA(rta);
> +	param->enckeylen = cpu_to_be32(enckeylen);
> +	rta->rta_len = RTA_LENGTH(sizeof(*param));
> +	rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
> +	p += RTA_SPACE(sizeof(*param));
> +	memcpy(p, key + enckeylen, authkeylen);
> +	p += authkeylen;
> +	memcpy(p, key, enckeylen);
> +}
> +
> +static int geniv_setkey_tfms_aead(struct crypto_aead *parent, struct geniv_ctx *ctx,
> +			     struct geniv_key_info *info)
> +{
> +	unsigned int key_size;
> +	unsigned int authenc_key_size;
> +	struct crypto_aead *child_aead;
> +	int ret = 0;
> +
> +	/* Ignore extra keys (which are used for IV etc) */
> +	key_size = ctx->key_size - ctx->key_extra_size;
> +	authenc_key_size = key_size + RTA_SPACE(sizeof(struct crypto_authenc_key_param));
> +
> +	child_aead = ctx->tfms.tfms_aead[0];
> +	crypto_aead_clear_flags(child_aead, CRYPTO_TFM_REQ_MASK);
> +	crypto_aead_set_flags(child_aead, crypto_aead_get_flags(parent) & CRYPTO_TFM_REQ_MASK);
> +
> +	if (geniv_integrity_hmac(ctx)) {
> +		if (key_size < ctx->key_mac_size)
> +			return -EINVAL;
> +
> +		crypt_copy_authenckey(ctx->authenc_key, ctx->key, key_size - ctx->key_mac_size,
> +				      ctx->key_mac_size);
> +	}
> +
> +	if (geniv_integrity_hmac(ctx))
> +		ret = crypto_aead_setkey(child_aead, ctx->authenc_key, authenc_key_size);
> +	else
> +		ret = crypto_aead_setkey(child_aead, ctx->key, key_size);
> +	if (ret) {
> +		DMERR("Error setting key for tfms[0]\n");
> +		goto out;
> +	}
> +
> +	crypto_aead_set_flags(parent, crypto_aead_get_flags(child_aead) & CRYPTO_TFM_RES_MASK);
> +
> +out:
> +	if (geniv_integrity_hmac(ctx))
> +		memzero_explicit(ctx->authenc_key, authenc_key_size);
> +
> +	return ret;
> +}
> +
> +static int geniv_setkey_tfms_skcipher(struct crypto_skcipher *parent, struct geniv_ctx *ctx,
> +			     struct geniv_key_info *info)
> +{
> +	unsigned int subkey_size;
> +	char *subkey;
> +	struct crypto_skcipher *child;
> +	int ret, i;
> +
> +	/* Ignore extra keys (which are used for IV etc) */
> +	subkey_size = (ctx->key_size - ctx->key_extra_size)
> +		      >> ilog2(ctx->tfms_count);
> +
> +	for (i = 0; i < ctx->tfms_count; i++) {
> +		child = ctx->tfms.tfms[i];
> +		crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
> +		crypto_skcipher_set_flags(child,
> +			crypto_skcipher_get_flags(parent) & CRYPTO_TFM_REQ_MASK);
> +
> +		subkey = ctx->key + (subkey_size) * i;
> +
> +		ret = crypto_skcipher_setkey(child, subkey, subkey_size);
> +		if (ret) {
> +			DMERR("Error setting key for tfms[%d]\n", i);
> +			return ret;
> +		}
> +
> +		crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) &
> +					  CRYPTO_TFM_RES_MASK);
> +	}
> +
> +	return 0;
> +}
> +
> +static int geniv_setkey_set(struct geniv_ctx *ctx)
> +{
> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init)
> +		return ctx->iv_gen_ops->init(ctx);
> +	else
> +		return 0;
> +}
> +
> +static int geniv_setkey_wipe(struct geniv_ctx *ctx)
> +{
> +	int ret;
> +
> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->wipe) {
> +		ret = ctx->iv_gen_ops->wipe(ctx);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (geniv_integrity_hmac(ctx))
> +		kzfree(ctx->authenc_key);
> +
> +	return 0;
> +}
> +
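> +/*
> + * The 'key' passed by dm-crypt is actually a struct geniv_key_info. The
> + * keyop field selects the operation: INIT allocates the child tfms and sets
> + * up the IV generator, SET re-runs the generator's init(), and WIPE clears
> + * the IV material. In all cases the raw sub-keys are then forwarded to the
> + * child tfms.
> + */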
> +static int geniv_setkey(void *parent, const u8 *key, unsigned int keylen)
> +{
> +	int err = 0;
> +	struct geniv_ctx *ctx;
> +	struct geniv_key_info *info = (struct geniv_key_info *) key;
> +
> +	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
> +		ctx = crypto_aead_ctx((struct crypto_aead *)parent);
> +	else
> +		ctx = crypto_skcipher_ctx((struct crypto_skcipher *)parent);
> +
> +	DMDEBUG("SETKEY Operation : %d\n", info->keyop);
> +
> +	switch (info->keyop) {
> +	case SETKEY_OP_INIT:
> +		err = geniv_setkey_init(parent, info);
> +		break;
> +	case SETKEY_OP_SET:
> +		err = geniv_setkey_set(ctx);
> +		break;
> +	case SETKEY_OP_WIPE:
> +		err = geniv_setkey_wipe(ctx);
> +		break;
> +	}
> +
> +	if (err)
> +		return err;
> +
> +	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
> +		return geniv_setkey_tfms_aead((struct crypto_aead *)parent, ctx, info);
> +	else
> +		return geniv_setkey_tfms_skcipher((struct crypto_skcipher *)parent, ctx, info);
> +}
> +
> +static int geniv_aead_setkey(struct crypto_aead *parent,
> +				const u8 *key, unsigned int keylen)
> +{
> +	return geniv_setkey(parent, key, keylen);
> +}
> +
> +static int geniv_skcipher_setkey(struct crypto_skcipher *parent,
> +				const u8 *key, unsigned int keylen)
> +{
> +	return geniv_setkey(parent, key, keylen);
> +}
> +
> +static void geniv_async_done(struct crypto_async_request *async_req, int error);
> +
> +static int geniv_alloc_subreq_aead(struct geniv_ctx *ctx,
> +					struct geniv_req_ctx *rctx,
> +					u32 req_flags)
> +{
> +	struct aead_request *req;
> +
> +	if (!rctx->subreq) {
> +		rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);
> +		if (!rctx->subreq)
> +			return -ENOMEM;
> +	}
> +
> +	req = &rctx->subreq->r.req_aead;
> +	rctx->subreq->rctx = rctx;
> +
> +	aead_request_set_tfm(req, ctx->tfms.tfms_aead[0]);
> +	aead_request_set_callback(req, req_flags,
> +					geniv_async_done, rctx->subreq);
> +
> +	return 0;
> +}
> +
> +/* req_flags: flags from parent request */
> +static int geniv_alloc_subreq_skcipher(struct geniv_ctx *ctx,
> +					struct geniv_req_ctx *rctx,
> +					u32 req_flags)
> +{
> +	int key_index;
> +	struct skcipher_request *req;
> +
> +	if (!rctx->subreq) {
> +		rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);
> +		if (!rctx->subreq)
> +			return -ENOMEM;
> +	}
> +
> +	req = &rctx->subreq->r.req;
> +	rctx->subreq->rctx = rctx;
> +
> +	key_index = rctx->cc_sector & (ctx->tfms_count - 1);
> +
> +	skcipher_request_set_tfm(req, ctx->tfms.tfms[key_index]);
> +	skcipher_request_set_callback(req, req_flags,
> +					geniv_async_done, rctx->subreq);
> +
> +	return 0;
> +}
> +
> +/* Asynchronous I/O completion callback for each sector in a segment. When
> + * all pending sub-requests have completed, the parent request's completion
> + * callback is called.
> + */
> +static void geniv_async_done(struct crypto_async_request *async_req, int error)
> +{
> +	struct geniv_subreq *subreq =
> +		(struct geniv_subreq *) async_req->data;
> +	struct geniv_req_ctx *rctx = subreq->rctx;
> +	struct skcipher_request *req = NULL;
> +	struct aead_request *req_aead = NULL;
> +	struct geniv_ctx *ctx;
> +	u8 *iv;
> +
> +	if (!rctx->is_aead_request) {
> +		req = rctx->r.req;
> +		ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
> +	} else {
> +		req_aead = rctx->r.req_aead;
> +		ctx = crypto_aead_ctx(crypto_aead_reqtfm(req_aead));
> +	}
> +
> +	/*
> +	 * A request from crypto driver backlog is going to be processed now,
> +	 * finish the completion and continue in crypt_convert().
> +	 * (Callback will be called for the second time for this request.)
> +	 */
> +	if (error == -EINPROGRESS) {
> +		complete(&rctx->restart);
> +		return;
> +	}
> +
> +	iv = iv_of_subreq(ctx, subreq);
> +	if (!error && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
> +		error = ctx->iv_gen_ops->post(ctx, rctx, subreq, iv);
> +
> +	mempool_free(subreq, ctx->subreq_pool);
> +
> +	/* 'req_pending' must be checked before calling the parent request's
> +	 * completion callback: only when it drops to zero have all
> +	 * sub-requests been processed.
> +	 */
> +	if (atomic_dec_and_test(&rctx->req_pending)) {
> +		/* Call the parent cipher's completion function */
> +		if (!rctx->is_aead_request)
> +			skcipher_request_complete(req, error);
> +		else
> +			aead_request_complete(req_aead, error);
> +
> +	}
> +}
> +
> +static unsigned int geniv_get_sectors(struct scatterlist *sg1,
> +				      struct scatterlist *sg2,
> +				      unsigned int segments)
> +{
> +	unsigned int i, n1, n2;
> +
> +	n1 = n2 = 0;
> +	for (i = 0; i < segments ; i++) {
> +		n1 += sg1[i].length >> SECTOR_SHIFT;
> +		n1 += (sg1[i].length & SECTOR_MASK) ? 1 : 0;
> +	}
> +
> +	for (i = 0; i < segments ; i++) {
> +		n2 += sg2[i].length >> SECTOR_SHIFT;
> +		n2 += (sg2[i].length & SECTOR_MASK) ? 1 : 0;
> +	}
> +
> +	return n1 > n2 ? n1 : n2;
> +}
> +
> +/* Iterate over the scatterlist segments and split them into sector-sized
> + * chunks so that a unique IV can be generated for each sector. This split
> + * may not be necessary, e.g. when these ciphers are implemented in hardware
> + * that can use its own IV generation capabilities.
> + */
> +static int geniv_iter_block(void *req_in,
> +			struct geniv_ctx *ctx, struct geniv_req_ctx *rctx)
> +{
> +	unsigned int rem;
> +	struct scatterlist *src_org, *dst_org;
> +	struct scatterlist *src1, *dst1;
> +	struct scatterlist_iter *iter = &rctx->iter;
> +	struct skcipher_request *req;
> +	struct aead_request *req_aead;
> +
> +	if (unlikely(iter->seg_no >= rctx->nents))
> +		return 0;
> +
> +	if (geniv_integrity_aead(ctx)) {
> +		req_aead = (struct aead_request *)req_in;
> +		src_org = &req_aead->src[0];
> +		dst_org = &req_aead->dst[0];
> +	} else {
> +		req = (struct skcipher_request *)req_in;
> +		src_org = &req->src[0];
> +		dst_org = &req->dst[0];
> +	}
> +
> +	src1 = &src_org[iter->seg_no];
> +	dst1 = &dst_org[iter->seg_no];
> +	iter->done += iter->len;
> +
> +	if (iter->done >= src1->length) {
> +		iter->seg_no++;
> +
> +		if (iter->seg_no >= rctx->nents)
> +			return 0;
> +
> +		src1 = &src_org[iter->seg_no];
> +		dst1 = &dst_org[iter->seg_no];
> +		iter->done = 0;
> +	}
> +
> +	rem = src1->length - iter->done;
> +
> +	iter->len = rem > ctx->sector_size ? ctx->sector_size : rem;
> +
> +	DMDEBUG("segment:(%d/%u),  done:%d, rem:%d\n",
> +		iter->seg_no, rctx->nents, iter->done, rem);
> +
> +	return iter->len;
> +}
> +
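> +/*
> + * Each mempool element is laid out as:
> + *   struct geniv_subreq | child request ctx | padding | IV | original IV |
> + *   original sector number (u64) | tag offset (unsigned int)
> + * The helpers below (and iv_of_subreq() above) return pointers into this
> + * trailing area.
> + */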
> +static u8 *org_iv_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
> +{
> +	return iv_of_subreq(ctx, subreq) + ctx->iv_size;
> +}
> +
> +static uint64_t *org_sector_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
> +{
> +	u8 *ptr = iv_of_subreq(ctx, subreq) + ctx->iv_size + ctx->iv_size;
> +
> +	return (uint64_t *) ptr;
> +}
> +
> +static unsigned int *org_tag_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
> +{
> +	u8 *ptr = iv_of_subreq(ctx, subreq) + ctx->iv_size +
> +		  ctx->iv_size + sizeof(uint64_t);
> +
> +	return (unsigned int *)ptr;
> +}
> +
> +static void *tag_from_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
> +{
> +	return &subreq->rctx->integrity_metadata[*org_tag_of_subreq(ctx, subreq) *
> +		ctx->on_disk_tag_size];
> +}
> +
> +static void *iv_tag_from_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
> +{
> +	return tag_from_subreq(ctx, subreq) + ctx->integrity_tag_size;
> +}
> +
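> +/*
> + * tag_offset indexes into rctx->integrity_metadata: each sector consumes
> + * on_disk_tag_size bytes, holding the authentication tag followed by the
> + * stored IV (when integrity_iv_size is non-zero).
> + */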
> +static int geniv_convert_block_aead(struct geniv_ctx *ctx,
> +				     struct geniv_req_ctx *rctx,
> +				     struct geniv_subreq *subreq,
> +				     unsigned int tag_offset)
> +{
> +	struct scatterlist *sg_in, *sg_out;
> +	u8 *iv, *org_iv, *tag_iv, *tag;
> +	uint64_t *sector;
> +	int r = 0;
> +	struct scatterlist_iter *iter = &rctx->iter;
> +	struct aead_request *req_aead;
> +	struct aead_request *parent_req = rctx->r.req_aead;
> +
> +	BUG_ON(ctx->integrity_iv_size && ctx->integrity_iv_size != ctx->iv_size);
> +
> +	/* Reject unexpected unaligned bio. */
> +	if (unlikely(iter->len & (ctx->sector_size - 1)))
> +		return -EIO;
> +
> +	subreq->iv_sector = rctx->cc_sector;
> +	if (test_bit(CRYPT_IV_LARGE_SECTORS, &ctx->cipher_flags))
> +		subreq->iv_sector >>= ctx->sector_shift;
> +
> +	*org_tag_of_subreq(ctx, subreq) = tag_offset;
> +
> +	sector = org_sector_of_subreq(ctx, subreq);
> +	*sector = cpu_to_le64(rctx->cc_sector - ctx->iv_offset);
> +
> +	iv = iv_of_subreq(ctx, subreq);
> +	org_iv = org_iv_of_subreq(ctx, subreq);
> +	tag = tag_from_subreq(ctx, subreq);
> +	tag_iv = iv_tag_from_subreq(ctx, subreq);
> +
> +	sg_in = subreq->sg_in;
> +	sg_out = subreq->sg_out;
> +
> +	/* AEAD request:
> +	 *  |----- AAD -------|------ DATA -------|-- AUTH TAG --|
> +	 *  | (authenticated) | (auth+encryption) |              |
> +	 *  | sector_LE |  IV |  sector in/out    |  tag in/out  |
> +	 */
> +	sg_init_table(sg_in, 4);
> +	sg_set_buf(&sg_in[0], sector, sizeof(uint64_t));
> +	sg_set_buf(&sg_in[1], org_iv, ctx->iv_size);
> +	sg_set_page(&sg_in[2], sg_page(&parent_req->src[iter->seg_no]),
> +			iter->len, parent_req->src[iter->seg_no].offset + iter->done);
> +	sg_set_buf(&sg_in[3], tag, ctx->integrity_tag_size);
> +
> +	sg_init_table(sg_out, 4);
> +	sg_set_buf(&sg_out[0], sector, sizeof(uint64_t));
> +	sg_set_buf(&sg_out[1], org_iv, ctx->iv_size);
> +	sg_set_page(&sg_out[2], sg_page(&parent_req->dst[iter->seg_no]),
> +			iter->len, parent_req->dst[iter->seg_no].offset + iter->done);
> +	sg_set_buf(&sg_out[3], tag, ctx->integrity_tag_size);
> +
> +	if (ctx->iv_gen_ops) {
> +		/* For READs use IV stored in integrity metadata */
> +		if (ctx->integrity_iv_size && !rctx->is_write) {
> +			memcpy(org_iv, tag_iv, ctx->iv_size);
> +		} else {
> +			r = ctx->iv_gen_ops->generator(ctx, rctx, subreq, org_iv);
> +			if (r < 0)
> +				return r;
> +			/* Store generated IV in integrity metadata */
> +			if (ctx->integrity_iv_size)
> +				memcpy(tag_iv, org_iv, ctx->iv_size);
> +		}
> +		/* Working copy of IV, to be modified in crypto API */
> +		memcpy(iv, org_iv, ctx->iv_size);
> +	}
> +
> +	req_aead = &subreq->r.req_aead;
> +	aead_request_set_ad(req_aead, sizeof(uint64_t) + ctx->iv_size);
> +	if (rctx->is_write) {
> +		aead_request_set_crypt(req_aead, subreq->sg_in, subreq->sg_out,
> +				       ctx->sector_size, iv);
> +		r = crypto_aead_encrypt(req_aead);
> +		if (ctx->integrity_tag_size + ctx->integrity_iv_size != ctx->on_disk_tag_size)
> +			memset(tag + ctx->integrity_tag_size + ctx->integrity_iv_size, 0,
> +			       ctx->on_disk_tag_size - (ctx->integrity_tag_size + ctx->integrity_iv_size));
> +	} else {
> +		aead_request_set_crypt(req_aead, subreq->sg_in, subreq->sg_out,
> +				       ctx->sector_size + ctx->integrity_tag_size, iv);
> +		r = crypto_aead_decrypt(req_aead);
> +	}
> +
> +	if (r == -EBADMSG)
> +		DMERR_LIMIT("INTEGRITY AEAD ERROR, sector %llu",
> +			    (unsigned long long)le64_to_cpu(*sector));
> +
> +	if (!r && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
> +		r = ctx->iv_gen_ops->post(ctx, rctx, subreq, org_iv);
> +
> +	return r;
> +}
> +
> +static int geniv_convert_block_skcipher(struct geniv_ctx *ctx,
> +					struct geniv_req_ctx *rctx,
> +					struct geniv_subreq *subreq,
> +					unsigned int tag_offset)
> +{
> +	struct scatterlist *sg_in, *sg_out;
> +	u8 *iv, *org_iv, *tag_iv;
> +	uint64_t *sector;
> +	int r = 0;
> +	struct scatterlist_iter *iter = &rctx->iter;
> +	struct skcipher_request *req;
> +	struct skcipher_request *parent_req = rctx->r.req;
> +
> +	/* Reject unexpected unaligned bio. */
> +	if (unlikely(iter->len & (ctx->sector_size - 1)))
> +		return -EIO;
> +
> +	subreq->iv_sector = rctx->cc_sector;
> +	if (test_bit(CRYPT_IV_LARGE_SECTORS, &ctx->cipher_flags))
> +		subreq->iv_sector >>= ctx->sector_shift;
> +
> +	*org_tag_of_subreq(ctx, subreq) = tag_offset;
> +
> +	iv = iv_of_subreq(ctx, subreq);
> +	org_iv = org_iv_of_subreq(ctx, subreq);
> +	tag_iv = iv_tag_from_subreq(ctx, subreq);
> +
> +	sector = org_sector_of_subreq(ctx, subreq);
> +	*sector = cpu_to_le64(rctx->cc_sector - ctx->iv_offset);
> +
> +	/* For skcipher we use only the first sg item */
> +	sg_in = subreq->sg_in;
> +	sg_out = subreq->sg_out;
> +
> +	sg_init_table(sg_in, 1);
> +	sg_set_page(sg_in, sg_page(&parent_req->src[iter->seg_no]),
> +			iter->len, parent_req->src[iter->seg_no].offset + iter->done);
> +
> +	sg_init_table(sg_out, 1);
> +	sg_set_page(sg_out, sg_page(&parent_req->dst[iter->seg_no]),
> +			iter->len, parent_req->dst[iter->seg_no].offset + iter->done);
> +
> +	if (ctx->iv_gen_ops) {
> +		/* For READs use IV stored in integrity metadata */
> +		if (ctx->integrity_iv_size && !rctx->is_write) {
> +			memcpy(org_iv, tag_iv, ctx->integrity_iv_size);
> +		} else {
> +			r = ctx->iv_gen_ops->generator(ctx, rctx, subreq, org_iv);
> +			if (r < 0)
> +				return r;
> +			/* Store generated IV in integrity metadata */
> +			if (ctx->integrity_iv_size)
> +				memcpy(tag_iv, org_iv, ctx->integrity_iv_size);
> +		}
> +		/* Working copy of IV, to be modified in crypto API */
> +		memcpy(iv, org_iv, ctx->iv_size);
> +	}
> +
> +	req = &subreq->r.req;
> +	skcipher_request_set_crypt(req, sg_in, sg_out, ctx->sector_size, iv);
> +
> +	if (rctx->is_write)
> +		r = crypto_skcipher_encrypt(req);
> +	else
> +		r = crypto_skcipher_decrypt(req);
> +
> +	if (!r && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
> +		r = ctx->iv_gen_ops->post(ctx, rctx, subreq, org_iv);
> +
> +	return r;
> +}
> +
> +/* Common encrypt/decrypt function for the geniv template cipher. Before the
> + * crypto operation, it splits the memory segments (in the scatterlist) into
> + * sector-sized chunks (512 bytes by default). The initialization vector (IV)
> + * used is based on a unique sector number, which is generated here.
> + */
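> +/*
> + * 'req_pending' starts at 1 and is incremented for every sub-request issued.
> + * geniv_crypt() returns 0 if everything completed synchronously, or
> + * -EINPROGRESS if sub-requests are still in flight, in which case
> + * geniv_async_done() completes the parent request.
> + */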
> +static int geniv_crypt(struct geniv_ctx *ctx, void *parent_req, bool is_encrypt)
> +{
> +	struct skcipher_request *req = NULL;
> +	struct aead_request *req_aead = NULL;
> +	struct geniv_req_ctx *rctx;
> +	struct geniv_req_info *rinfo;
> +	int i, bytes, cryptlen, ret = 0;
> +	unsigned int sectors;
> +	unsigned int tag_offset = 0;
> +	unsigned int sector_step = ctx->sector_size >> SECTOR_SHIFT;
> +	char *str __maybe_unused = is_encrypt ? "encrypt" : "decrypt";
> +
> +	if (geniv_integrity_aead(ctx)) {
> +		req_aead = (struct aead_request *)parent_req;
> +		rctx = geniv_aead_req_ctx(req_aead);
> +		rctx->r.req_aead = req_aead;
> +		rinfo = (struct geniv_req_info *)req_aead->iv;
> +	} else {
> +		req = (struct skcipher_request *)parent_req;
> +		rctx = geniv_skcipher_req_ctx(req);
> +		rctx->r.req = req;
> +		rinfo = (struct geniv_req_info *)req->iv;
> +	}
> +
> +	/* Instance of 'struct geniv_req_info' is stored in IV ptr */
> +	rctx->is_write = is_encrypt;
> +	rctx->is_aead_request = geniv_integrity_aead(ctx);
> +	rctx->cc_sector = rinfo->cc_sector;
> +	rctx->nents = rinfo->nents;
> +	rctx->integrity_metadata = rinfo->integrity_metadata;
> +	rctx->subreq = NULL;
> +	cryptlen = req->cryptlen;
> +
> +	rctx->iter.seg_no = 0;
> +	rctx->iter.done = 0;
> +	rctx->iter.len = 0;
> +
> +	DMDEBUG("geniv:%s: starting sector=%d, #segments=%u\n", str,
> +		(unsigned int)rctx->cc_sector, rctx->nents);
> +
> +	if (geniv_integrity_aead(ctx))
> +		sectors = geniv_get_sectors(req_aead->src, req_aead->dst, rctx->nents);
> +	else
> +		sectors = geniv_get_sectors(req->src, req->dst, rctx->nents);
> +
> +	init_completion(&rctx->restart);
> +	atomic_set(&rctx->req_pending, 1);
> +
> +	for (i = 0; i < sectors; i++) {
> +		struct geniv_subreq *subreq;
> +
> +		if (geniv_integrity_aead(ctx))
> +			ret = geniv_alloc_subreq_aead(ctx, rctx, req_aead->base.flags);
> +		else
> +			ret = geniv_alloc_subreq_skcipher(ctx, rctx, req->base.flags);
> +		if (ret)
> +			return -ENOMEM;
> +
> +		subreq = rctx->subreq;
> +
> +		atomic_inc(&rctx->req_pending);
> +
> +		if (geniv_integrity_aead(ctx))
> +			bytes = geniv_iter_block(req_aead, ctx, rctx);
> +		else
> +			bytes = geniv_iter_block(req, ctx, rctx);
> +
> +		if (bytes == 0)
> +			break;
> +
> +		cryptlen -= bytes;
> +
> +		if (geniv_integrity_aead(ctx))
> +			ret = geniv_convert_block_aead(ctx, rctx, subreq, tag_offset);
> +		else
> +			ret = geniv_convert_block_skcipher(ctx, rctx, subreq, tag_offset);
> +
> +		switch (ret) {
> +		/*
> +		 * The request was queued by a crypto driver
> +		 * but the driver request queue is full, let's wait.
> +		 */
> +		case -EBUSY:
> +			wait_for_completion(&rctx->restart);
> +			reinit_completion(&rctx->restart);
> +			/* fall through */
> +		/*
> +		 * The request is queued and processed asynchronously,
> +		 * completion function geniv_async_done() is called.
> +		 */
> +		case -EINPROGRESS:
> +			/* Setting this to NULL makes the next call to
> +			 * 'geniv_alloc_subreq' allocate a new sub-request.
> +			 */
> +			rctx->subreq = NULL;
> +			rctx->cc_sector += sector_step;
> +			tag_offset++;
> +			cond_resched();
> +			break;
> +		/*
> +		 * The request was already processed (synchronously).
> +		 */
> +		case 0:
> +			atomic_dec(&rctx->req_pending);
> +			rctx->cc_sector += sector_step;
> +			tag_offset++;
> +			cond_resched();
> +			continue;
> +
> +		/* There was an error while processing the request. */
> +		default:
> +			atomic_dec(&rctx->req_pending);
> +			mempool_free(rctx->subreq, ctx->subreq_pool);
> +			atomic_dec(&rctx->req_pending);
> +			return ret;
> +		}
> +	}
> +
> +	if (rctx->subreq)
> +		mempool_free(rctx->subreq, ctx->subreq_pool);
> +
> +	if (atomic_dec_and_test(&rctx->req_pending))
> +		return 0;
> +	else
> +		return -EINPROGRESS;
> +}
> +
> +static int geniv_skcipher_encrypt(struct skcipher_request *req)
> +{
> +	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
> +
> +	return geniv_crypt(ctx, req, true);
> +}
> +
> +static int geniv_skcipher_decrypt(struct skcipher_request *req)
> +{
> +	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
> +
> +	return geniv_crypt(ctx, req, false);
> +}
> +
> +static int geniv_aead_encrypt(struct aead_request *req)
> +{
> +	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> +	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
> +
> +	return geniv_crypt(ctx, req, true);
> +}
> +
> +static int geniv_aead_decrypt(struct aead_request *req)
> +{
> +	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> +	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
> +
> +	return geniv_crypt(ctx, req, false);
> +}
> +
> +/*
> + * Workaround to parse the cipher algorithm name from the crypto API spec.
> + * ctx->cipher is currently used only by ESSIV.
> + * This should probably be done by crypto API calls (once available...)
> + */
> +static int geniv_blkdev_cipher(struct geniv_ctx *ctx, bool is_crypto_aead)
> +{
> +	const char *alg_name = NULL;
> +	char *start, *end;
> +
> +	alg_name = ctx->ciphermode;
> +	if (!alg_name)
> +		return -EINVAL;
> +
> +	if (is_crypto_aead) {
> +		alg_name = strchr(alg_name, ',');
> +		if (!alg_name)
> +			alg_name = ctx->ciphermode;
> +		alg_name++;
> +	}
> +
> +	start = strchr(alg_name, '(');
> +	end = strchr(alg_name, ')');
> +
> +	if (!start && !end) {
> +		ctx->cipher = kstrdup(alg_name, GFP_KERNEL);
> +		return ctx->cipher ? 0 : -ENOMEM;
> +	}
> +
> +	if (!start || !end || ++start >= end)
> +		return -EINVAL;
> +
> +	ctx->cipher = kzalloc(end - start + 1, GFP_KERNEL);
> +	if (!ctx->cipher)
> +		return -ENOMEM;
> +
> +	strncpy(ctx->cipher, start, end - start);
> +
> +	return 0;
> +}
> +
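> +/*
> + * Per-tfm initialisation: parse the instance name 'ivmode(ciphermode)',
> + * allocate a single child tfm (to report allocation errors early), compute
> + * where the IV and bookkeeping fields live relative to each sub-request, and
> + * create the sub-request mempool.
> + */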
> +static int geniv_init_tfm(void *tfm_tmp, bool is_crypto_aead)
> +{
> +	struct geniv_ctx *ctx;
> +	struct crypto_skcipher *tfm;
> +	struct crypto_aead *tfm_aead;
> +	unsigned int reqsize;
> +	size_t iv_size_padding;
> +	char *algname;
> +	int psize, ret;
> +
> +	if (is_crypto_aead) {
> +		tfm_aead = (struct crypto_aead *)tfm_tmp;
> +		ctx = crypto_aead_ctx(tfm_aead);
> +		algname = (char *) crypto_tfm_alg_name(crypto_aead_tfm(tfm_aead));
> +	} else {
> +		tfm = (struct crypto_skcipher *)tfm_tmp;
> +		ctx = crypto_skcipher_ctx(tfm);
> +		algname = (char *) crypto_tfm_alg_name(crypto_skcipher_tfm(tfm));
> +	}
> +
> +	ctx->ciphermode = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
> +	if (!ctx->ciphermode)
> +		return -ENOMEM;
> +
> +	ctx->algname = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
> +	if (!ctx->algname) {
> +		ret = -ENOMEM;
> +		goto free_ciphermode;
> +	}
> +
> +	strlcpy(ctx->algname, algname, CRYPTO_MAX_ALG_NAME);
> +	algname = ctx->algname;
> +
> +	/* Parse the algorithm name 'ivmode(ciphermode)' */
> +	ctx->ivmode = strsep(&algname, "(");
> +	strlcpy(ctx->ciphermode, algname, CRYPTO_MAX_ALG_NAME);
> +	ctx->ciphermode[strlen(algname) - 1] = '\0';
> +
> +	DMDEBUG("ciphermode=%s, ivmode=%s\n", ctx->ciphermode, ctx->ivmode);
> +
> +	/*
> +	 * Usually the underlying cipher instances are spawned here, but since
> +	 * the value of tfms_count (which is equal to the key_count) is not
> +	 * known yet, create only one instance and delay the creation of the
> +	 * remaining instances of the underlying cipher (e.g. 'cbc(aes)')
> +	 * until the setkey operation is invoked.
> +	 * The first instance created here, i.e. ctx->tfm_child, will later be
> +	 * assigned as the first element of the array ctx->tfms. Creating at
> +	 * least one instance here uncovers any errors earlier than the setkey
> +	 * operation, where the remaining instances are created.
> +	 */
> +	if (is_crypto_aead)
> +		ctx->tfm_child.tfm_aead = crypto_alloc_aead(ctx->ciphermode, 0, 0);
> +	else
> +		ctx->tfm_child.tfm = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
> +	if (IS_ERR(ctx->tfm_child.tfm)) {
> +		ret = PTR_ERR(ctx->tfm_child.tfm);
> +		DMERR("Failed to create cipher %s. err %d\n",
> +		      ctx->ciphermode, ret);
> +		goto free_algname;
> +	}
> +
> +	/* Setup the current cipher's request structure */
> +	if (is_crypto_aead) {
> +		reqsize = sizeof(struct geniv_req_ctx) + __alignof__(struct geniv_req_ctx);
> +		crypto_aead_set_reqsize(tfm_aead, reqsize);
> +
> +		ctx->iv_start = sizeof(struct geniv_subreq);
> +		ctx->iv_start += crypto_aead_reqsize(ctx->tfm_child.tfm_aead);
> +
> +		ctx->iv_size = crypto_aead_ivsize(tfm_aead);
> +	} else {
> +		reqsize = sizeof(struct geniv_req_ctx) + __alignof__(struct geniv_req_ctx);
> +		crypto_skcipher_set_reqsize(tfm, reqsize);
> +
> +		ctx->iv_start = sizeof(struct geniv_subreq);
> +		ctx->iv_start += crypto_skcipher_reqsize(ctx->tfm_child.tfm);
> +
> +		ctx->iv_size = crypto_skcipher_ivsize(tfm);
> +	}
> +	/* at least a 64 bit sector number should fit in our buffer */
> +	if (ctx->iv_size)
> +		ctx->iv_size = max(ctx->iv_size,
> +				  (unsigned int)(sizeof(u64) / sizeof(u8)));
> +
> +	if (is_crypto_aead) {
> +		if (crypto_aead_alignmask(tfm_aead) < CRYPTO_MINALIGN) {
> +			/* Allocate the padding exactly */
> +			iv_size_padding = -ctx->iv_start
> +					& crypto_aead_alignmask(ctx->tfm_child.tfm_aead);
> +		} else {
> +			/*
> +			 * If the cipher requires greater alignment than kmalloc
> +			 * alignment, we don't know the exact position of the
> +			 * initialization vector. We must assume worst case.
> +			 */
> +			iv_size_padding = crypto_aead_alignmask(ctx->tfm_child.tfm_aead);
> +		}
> +	} else {
> +		if (crypto_skcipher_alignmask(tfm) < CRYPTO_MINALIGN) {
> +			iv_size_padding = -ctx->iv_start
> +					& crypto_skcipher_alignmask(ctx->tfm_child.tfm);
> +		} else {
> +			iv_size_padding = crypto_skcipher_alignmask(ctx->tfm_child.tfm);
> +		}
> +	}
> +
> +	/* create memory pool for sub-request structure
> +	 *  ...| IV + padding | original IV | original sec. number | bio tag offset |
> +	 */
> +	psize = ctx->iv_start + iv_size_padding + ctx->iv_size + ctx->iv_size +
> +		sizeof(uint64_t) + sizeof(unsigned int);
> +
> +	ctx->subreq_pool = mempool_create_kmalloc_pool(MIN_IOS, psize);
> +	if (!ctx->subreq_pool) {
> +		ret = -ENOMEM;
> +		DMERR("Could not allocate crypt sub-request mempool\n");
> +		goto free_tfm;
> +	}
> +
> +	ret = geniv_blkdev_cipher(ctx, is_crypto_aead);
> +	if (ret < 0) {
> +		ret = -ENOMEM;
> +		DMERR("Cannot allocate cipher string\n");
> +		goto free_tfm;
> +	}
> +
> +	return 0;
> +
> +free_tfm:
> +	if (is_crypto_aead)
> +		crypto_free_aead(ctx->tfm_child.tfm_aead);
> +	else
> +		crypto_free_skcipher(ctx->tfm_child.tfm);
> +free_algname:
> +	kfree(ctx->algname);
> +free_ciphermode:
> +	kfree(ctx->ciphermode);
> +	return ret;
> +}
> +
> +static int geniv_skcipher_init_tfm(struct crypto_skcipher *tfm)
> +{
> +	return geniv_init_tfm(tfm, 0);
> +}
> +
> +static int geniv_aead_init_tfm(struct crypto_aead *tfm)
> +{
> +	return geniv_init_tfm(tfm, 1);
> +}
> +
> +static void geniv_exit_tfm(struct geniv_ctx *ctx)
> +{
> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->dtr)
> +		ctx->iv_gen_ops->dtr(ctx);
> +
> +	mempool_destroy(ctx->subreq_pool);
> +	geniv_free_tfms(ctx);
> +	kzfree(ctx->ciphermode);
> +	kzfree(ctx->algname);
> +	kzfree(ctx->cipher);
> +}
> +
> +static void geniv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
> +{
> +	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
> +
> +	geniv_exit_tfm(ctx);
> +}
> +
> +static void geniv_aead_exit_tfm(struct crypto_aead *tfm)
> +{
> +	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
> +
> +	geniv_exit_tfm(ctx);
> +}
> +
> +static void geniv_skcipher_free(struct skcipher_instance *inst)
> +{
> +	struct crypto_skcipher_spawn *spawn = skcipher_instance_ctx(inst);
> +
> +	crypto_drop_skcipher(spawn);
> +	kfree(inst);
> +}
> +
> +static void geniv_aead_free(struct aead_instance *inst)
> +{
> +	struct crypto_aead_spawn *spawn = aead_instance_ctx(inst);
> +
> +	crypto_drop_aead(spawn);
> +	kfree(inst);
> +}
> +
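> +/*
> + * Create an skcipher instance named '<ivmode>(<ciphermode>)' that wraps the
> + * underlying skcipher. min/max keysize are set to sizeof(struct
> + * geniv_key_info) because dm-crypt passes that structure through setkey()
> + * rather than a raw key.
> + */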
> +static int geniv_skcipher_create(struct crypto_template *tmpl,
> +			struct rtattr **tb, char *algname)
> +{
> +	struct crypto_attr_type *algt;
> +	struct skcipher_instance *inst;
> +	struct skcipher_alg *alg;
> +	struct crypto_skcipher_spawn *spawn;
> +	const char *cipher_name;
> +	int err;
> +
> +	algt = crypto_get_attr_type(tb);
> +
> +	cipher_name = crypto_attr_alg_name(tb[1]);
> +
> +	if (IS_ERR(cipher_name))
> +		return PTR_ERR(cipher_name);
> +
> +	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
> +	if (!inst)
> +		return -ENOMEM;
> +
> +	spawn = skcipher_instance_ctx(inst);
> +
> +	crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
> +	err = crypto_grab_skcipher(spawn, cipher_name, 0,
> +				    crypto_requires_sync(algt->type,
> +							 algt->mask));
> +
> +	if (err)
> +		goto err_free_inst;
> +
> +	alg = crypto_spawn_skcipher_alg(spawn);
> +
> +	err = -EINVAL;
> +
> +	/* Only support blocks of size which is of a power of 2 */
> +	if (!is_power_of_2(alg->base.cra_blocksize))
> +		goto err_drop_spawn;
> +
> +	/* algname: essiv, base.cra_name: cbc(aes) */
> +	err = -ENAMETOOLONG;
> +	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
> +		     algname, alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
> +		goto err_drop_spawn;
> +	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
> +		     "%s(%s)", algname, alg->base.cra_driver_name) >=
> +	    CRYPTO_MAX_ALG_NAME)
> +		goto err_drop_spawn;
> +
> +	inst->alg.base.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
> +	inst->alg.base.cra_priority = alg->base.cra_priority;
> +	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
> +	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
> +	inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
> +	inst->alg.ivsize = alg->base.cra_blocksize;
> +	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
> +	inst->alg.min_keysize = sizeof(struct geniv_key_info);
> +	inst->alg.max_keysize = sizeof(struct geniv_key_info);
> +
> +	inst->alg.setkey = geniv_skcipher_setkey;
> +	inst->alg.encrypt = geniv_skcipher_encrypt;
> +	inst->alg.decrypt = geniv_skcipher_decrypt;
> +
> +	inst->alg.base.cra_ctxsize = sizeof(struct geniv_ctx);
> +
> +	inst->alg.init = geniv_skcipher_init_tfm;
> +	inst->alg.exit = geniv_skcipher_exit_tfm;
> +
> +	inst->free = geniv_skcipher_free;
> +
> +	err = skcipher_register_instance(tmpl, inst);
> +	if (err)
> +		goto err_drop_spawn;
> +
> +out:
> +	return err;
> +
> +err_drop_spawn:
> +	crypto_drop_skcipher(spawn);
> +err_free_inst:
> +	kfree(inst);
> +	goto out;
> +}
> +
> +static int geniv_aead_create(struct crypto_template *tmpl,
> +			struct rtattr **tb, char *algname)
> +{
> +	struct crypto_attr_type *algt;
> +	struct aead_instance *inst;
> +	struct aead_alg *alg;
> +	struct crypto_aead_spawn *spawn;
> +	const char *cipher_name;
> +	int err;
> +
> +	algt = crypto_get_attr_type(tb);
> +
> +	cipher_name = crypto_attr_alg_name(tb[1]);
> +	if (IS_ERR(cipher_name))
> +		return PTR_ERR(cipher_name);
> +
> +	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
> +	if (!inst)
> +		return -ENOMEM;
> +
> +	spawn = aead_instance_ctx(inst);
> +
> +	crypto_set_aead_spawn(spawn, aead_crypto_instance(inst));
> +	err = crypto_grab_aead(spawn, cipher_name, 0,
> +				    crypto_requires_sync(algt->type,
> +							 algt->mask));
> +	if (err)
> +		goto err_free_inst;
> +
> +	alg = crypto_spawn_aead_alg(spawn);
> +
> +	/* Only support blocks of size which is of a power of 2 */
> +	if (!is_power_of_2(alg->base.cra_blocksize)) {
> +		err = -EINVAL;
> +		goto err_drop_spawn;
> +	}
> +
> +	/* algname: essiv, base.cra_name: cbc(aes) */
> +	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
> +		     algname, alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME) {
> +		err = -ENAMETOOLONG;
> +		goto err_drop_spawn;
> +	}
> +
> +	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
> +		     "%s(%s)", algname, alg->base.cra_driver_name) >=
> +	    CRYPTO_MAX_ALG_NAME) {
> +		err = -ENAMETOOLONG;
> +		goto err_drop_spawn;
> +	}
> +
> +	inst->alg.base.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
> +	inst->alg.base.cra_priority = alg->base.cra_priority;
> +	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
> +	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
> +	inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
> +	inst->alg.ivsize = crypto_aead_alg_ivsize(alg);
> +	inst->alg.chunksize = crypto_aead_alg_chunksize(alg);
> +	inst->alg.maxauthsize = crypto_aead_alg_maxauthsize(alg);
> +
> +	inst->alg.setkey = geniv_aead_setkey;
> +	inst->alg.encrypt = geniv_aead_encrypt;
> +	inst->alg.decrypt = geniv_aead_decrypt;
> +
> +	inst->alg.base.cra_ctxsize = sizeof(struct geniv_ctx);
> +
> +	inst->alg.init = geniv_aead_init_tfm;
> +	inst->alg.exit = geniv_aead_exit_tfm;
> +
> +	inst->free = geniv_aead_free;
> +
> +	err = aead_register_instance(tmpl, inst);
> +	if (err)
> +		goto err_drop_spawn;
> +
> +	return 0;
> +
> +err_drop_spawn:
> +	crypto_drop_aead(spawn);
> +err_free_inst:
> +	kfree(inst);
> +	return err;
> +}
> +
> +static int geniv_create(struct crypto_template *tmpl,
> +			struct rtattr **tb, char *algname)
> +{
> +	if (!crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER))
> +		return geniv_skcipher_create(tmpl, tb, algname);
> +	else if (!crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD))
> +		return geniv_aead_create(tmpl, tb, algname);
> +	else
> +		return -EINVAL;
> +}
> +
> +static int geniv_template_create(struct crypto_template *tmpl,
> +			       struct rtattr **tb)
> +{
> +	return geniv_create(tmpl, tb, tmpl->name);
> +}
> +
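> +/*
> + * One template is registered per dm-crypt IV mode; the template name becomes
> + * the ivmode parsed in geniv_init_tfm().
> + */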
> +#define DEFINE_CRYPTO_TEMPLATE(type) \
> +	{ .name = type, \
> +	.create = geniv_template_create, \
> +	.module = THIS_MODULE, },
> +
> +static struct crypto_template geniv_tmpl[IV_TYPE_NUM] = {
> +	DEFINE_CRYPTO_TEMPLATE("plain")
> +	DEFINE_CRYPTO_TEMPLATE("plain64")
> +	DEFINE_CRYPTO_TEMPLATE("essiv")
> +	DEFINE_CRYPTO_TEMPLATE("benbi")
> +	DEFINE_CRYPTO_TEMPLATE("null")
> +	DEFINE_CRYPTO_TEMPLATE("lmk")
> +	DEFINE_CRYPTO_TEMPLATE("tcw")
> +	DEFINE_CRYPTO_TEMPLATE("random")
> +};
> +
> +static int __init geniv_init(void)
> +{
> +	return crypto_register_template_array(geniv_tmpl, IV_TYPE_NUM);
> +}
> +
> +static void __exit geniv_exit(void)
> +{
> +	crypto_unregister_template_array(geniv_tmpl, IV_TYPE_NUM);
> +}
> +
> +module_init(geniv_init);
> +module_exit(geniv_exit);
> +
> +MODULE_AUTHOR("Xiongfeng Wang <xiongfeng.wang@linaro.org>");
> +MODULE_DESCRIPTION(DM_NAME " IV Generation Template");
> +MODULE_LICENSE("GPL");
> diff --git a/include/crypto/geniv.h b/include/crypto/geniv.h
> new file mode 100644
> index 0000000..d8084fc
> --- /dev/null
> +++ b/include/crypto/geniv.h
> @@ -0,0 +1,47 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * geniv.h: common interface for IV generation algorithms
> + *
> + * Copyright (C) 2018, Linaro
> + *
> + * This file defines the data structures the user should pass to the template.
> + */
> +
> +#ifndef _CRYPTO_GENIV_H
> +#define _CRYPTO_GENIV_H
> +
> +#include <linux/types.h>
> +
> +enum cipher_flags {
> +	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cipher */
> +	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
> +};
> +
> +enum setkey_op {
> +	SETKEY_OP_INIT,
> +	SETKEY_OP_SET,
> +	SETKEY_OP_WIPE,
> +};
> +
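> +/*
> + * Passed (cast to u8 *) as the 'key' argument of the template's setkey();
> + * keyop selects which of the init/set/wipe operations to perform.
> + */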
> +struct geniv_key_info {
> +	enum setkey_op keyop;
> +	unsigned int tfms_count;
> +	u8 *key;
> +	char *ivopts;
> +	sector_t iv_offset;
> +	unsigned long cipher_flags;
> +
> +	unsigned short int sector_size;
> +	unsigned int key_size;
> +	unsigned int key_parts;
> +	unsigned int key_mac_size;
> +	unsigned int on_disk_tag_size;
> +};
> +
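> +/*
> + * Per-request information; dm-crypt passes a pointer to this structure via
> + * the IV field of the skcipher/aead request.
> + */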
> +struct geniv_req_info {
> +	sector_t cc_sector;
> +	unsigned int nents;
> +	u8 *integrity_metadata;
> +};
> +
> +#endif
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18  8:16   ` Milan Broz
@ 2018-07-18  8:48     ` Xiongfeng Wang
  2018-07-18 13:11     ` Mike Snitzer
  2018-07-18 16:46     ` Mark Brown
  2 siblings, 0 replies; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-18  8:48 UTC (permalink / raw)
  To: Milan Broz, agk, snitzer, herbert
  Cc: dm-devel, linux-kernel, broonie, arnd, jonathan.cameron



On 2018/7/18 16:16, Milan Broz wrote:
> On 18/07/18 09:30, Xiongfeng Wang wrote:
>> Currently, the IV generation algorithms are implemented in dm-crypt.c.
>> This patch implements these algorithms as template ciphers, so that the
>> dm-crypt layer can be simplified, and these algorithms can also be
>> implemented in hardware for performance.
>>
>> Synchronous crypto requests to encrypt/decrypt a sector are processed
>> sequentially. Asynchronous requests, if processed in parallel, are freed
>> in the async callback.
> 
> So we are here again and moving INTERNAL dm-crypt functionality into
> cryptoapi.
> 
> The TCW and LMK IV generators make sense only for dm-crypt,
> for compatibility with old disk encryption mappings.
> 
> I strongly disagree to move this outside of dm-crypt.
> 
> Sorry, the last discussion was that it remains inside dm-crypt
> and it will only be registered through the crypto API.
> 
> And this for all files:
> 
>> + * Copyright (C) 2018, Linaro
> 
> It is NOT YOUR code! Please keep copyright and authors as in dm-crypt.
> 
> Milan

Sorry, I will add it in my next version.

Thanks,
Xiongfeng
> 
>>
>> Interface to the crypto layer - include/crypto/geniv.h
>>
>> This patch is based on the patchset originally started by
>> Binoy Jayan <binoy.jayan@linaro.org>
>> ( crypto: Add IV generation algorithms
>> https://patchwork.kernel.org/patch/9803469/ )
>>
>> Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org>
>> Signed-off-by: Xiongfeng Wang <wangxiongfeng2@linaro.org>
>> ---
>>  crypto/Kconfig         |    7 +
>>  crypto/Makefile        |    1 +
>>  crypto/geniv.c         | 2240 ++++++++++++++++++++++++++++++++++++++++++++++++
>>  include/crypto/geniv.h |   47 +
>>  4 files changed, 2295 insertions(+)
>>  create mode 100644 crypto/geniv.c
>>  create mode 100644 include/crypto/geniv.h
>>
>> diff --git a/crypto/Kconfig b/crypto/Kconfig
>> index f3e40ac..98f025a 100644
>> --- a/crypto/Kconfig
>> +++ b/crypto/Kconfig
>> @@ -257,6 +257,13 @@ config CRYPTO_GLUE_HELPER_X86
>>  config CRYPTO_ENGINE
>>  	tristate
>>  
>> +config CRYPTO_GENIV
>> +	tristate "IV Generator Template"
>> +	select CRYPTO_AEAD
>> +	select CRYPTO_BLKCIPHER
>> +	help
>> +	  Support for IV generator template, so that dm-crypt can rely on it.
>> +
>>  comment "Authenticated Encryption with Associated Data"
>>  
>>  config CRYPTO_CCM
>> diff --git a/crypto/Makefile b/crypto/Makefile
>> index 6d1d40e..1077d2f 100644
>> --- a/crypto/Makefile
>> +++ b/crypto/Makefile
>> @@ -23,6 +23,7 @@ crypto_blkcipher-y += skcipher.o
>>  obj-$(CONFIG_CRYPTO_BLKCIPHER2) += crypto_blkcipher.o
>>  obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
>>  obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
>> +obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
>>  
>>  crypto_hash-y += ahash.o
>>  crypto_hash-y += shash.o
>> diff --git a/crypto/geniv.c b/crypto/geniv.c
>> new file mode 100644
>> index 0000000..55d1212
>> --- /dev/null
>> +++ b/crypto/geniv.c
>> @@ -0,0 +1,2240 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * geniv.c - crypto template for generating IV
>> + *
>> + * Copyright (C) 2018, Linaro
>> + *
>> + * This file adds a crypto template to generate IVs, so that dm-crypt can
>> + * rely on it and remove its existing IV generation code.
>> + */
>> +
>> +#include <linux/completion.h>
>> +#include <linux/err.h>
>> +#include <linux/module.h>
>> +#include <linux/init.h>
>> +#include <linux/kernel.h>
>> +#include <linux/key.h>
>> +#include <linux/bio.h>
>> +#include <linux/blkdev.h>
>> +#include <linux/mempool.h>
>> +#include <linux/slab.h>
>> +#include <linux/crypto.h>
>> +#include <linux/atomic.h>
>> +#include <linux/scatterlist.h>
>> +#include <linux/ctype.h>
>> +#include <asm/page.h>
>> +#include <asm/unaligned.h>
>> +#include <crypto/hash.h>
>> +#include <crypto/md5.h>
>> +#include <crypto/algapi.h>
>> +#include <crypto/skcipher.h>
>> +#include <crypto/aead.h>
>> +#include <crypto/authenc.h>
>> +#include <crypto/geniv.h>
>> +#include <crypto/internal/aead.h>
>> +#include <crypto/internal/skcipher.h>
>> +#include <linux/rtnetlink.h> /* for struct rtattr and RTA macros only */
>> +#include <keys/user-type.h>
>> +#include <linux/backing-dev.h>
>> +#include <linux/device-mapper.h>
>> +#include <linux/log2.h>
>> +
>> +#define DM_MSG_PREFIX		"crypt"
>> +#define MIN_IOS		64
>> +#define IV_TYPE_NUM 8
>> +#define SECTOR_MASK ((1 << SECTOR_SHIFT) - 1)
>> +
>> +struct geniv_ctx;
>> +struct geniv_req_ctx;
>> +
>> +/* Sub request for each of the skcipher_request's for a segment */
>> +struct geniv_subreq {
>> +	struct scatterlist sg_in[4];
>> +	struct scatterlist sg_out[4];
>> +	sector_t iv_sector;
>> +	struct geniv_req_ctx *rctx;
>> +	union {
>> +		struct skcipher_request req;
>> +		struct aead_request req_aead;
>> +	} r CRYPTO_MINALIGN_ATTR;
>> +};
>> +
>> +/* used to iterate over the src scatterlist of the input parent request */
>> +struct scatterlist_iter {
>> +	/* current segment to be processed */
>> +	unsigned int seg_no;
>> +	/* bytes already processed in the current segment */
>> +	unsigned int done;
>> +	/* bytes to be processed in the next request */
>> +	unsigned int len;
>> +};
>> +
>> +/* context of the input parent request */
>> +struct geniv_req_ctx {
>> +	struct geniv_subreq *subreq;
>> +	bool is_write;
>> +	bool is_aead_request;
>> +	sector_t cc_sector;
>> +	/* array size of src scatterlist of parent request */
>> +	unsigned int nents;
>> +	struct scatterlist_iter iter;
>> +	struct completion restart;
>> +	atomic_t req_pending;
>> +	u8 *integrity_metadata;
>> +	/* point to the input parent request */
>> +	union {
>> +		struct skcipher_request *req;
>> +		struct aead_request *req_aead;
>> +	} r;
>> +};
>> +
>> +struct crypt_iv_operations {
>> +	int (*ctr)(struct geniv_ctx *ctx);
>> +	void (*dtr)(struct geniv_ctx *ctx);
>> +	int (*init)(struct geniv_ctx *ctx);
>> +	int (*wipe)(struct geniv_ctx *ctx);
>> +	int (*generator)(struct geniv_ctx *ctx,
>> +			struct geniv_req_ctx *rctx,
>> +			struct geniv_subreq *subreq, u8 *iv);
>> +	int (*post)(struct geniv_ctx *ctx,
>> +			struct geniv_req_ctx *rctx,
>> +			struct geniv_subreq *subreq, u8 *iv);
>> +};
>> +
>> +struct geniv_essiv_private {
>> +	struct crypto_ahash *hash_tfm;
>> +	u8 *salt;
>> +};
>> +
>> +struct geniv_benbi_private {
>> +	int shift;
>> +};
>> +
>> +#define LMK_SEED_SIZE 64 /* hash + 0 */
>> +struct geniv_lmk_private {
>> +	struct crypto_shash *hash_tfm;
>> +	u8 *seed;
>> +};
>> +
>> +#define TCW_WHITENING_SIZE 16
>> +struct geniv_tcw_private {
>> +	struct crypto_shash *crc32_tfm;
>> +	u8 *iv_seed;
>> +	u8 *whitening;
>> +};
>> +
>> +/* context of geniv tfm */
>> +struct geniv_ctx {
>> +	unsigned int tfms_count;
>> +	union {
>> +		struct crypto_skcipher *tfm;
>> +		struct crypto_aead *tfm_aead;
>> +	} tfm_child;
>> +	union {
>> +		struct crypto_skcipher **tfms;
>> +		struct crypto_aead **tfms_aead;
>> +	} tfms;
>> +
>> +	char *ivmode;
>> +	unsigned int iv_size;
>> +	unsigned int iv_start;
>> +	unsigned int rctx_start;
>> +	sector_t iv_offset;
>> +	unsigned short int sector_size;
>> +	unsigned char sector_shift;
>> +	char *algname;
>> +	char *ivopts;
>> +	char *cipher;
>> +	char *ciphermode;
>> +	unsigned long cipher_flags;
>> +
>> +	const struct crypt_iv_operations *iv_gen_ops;
>> +	union {
>> +		struct geniv_essiv_private essiv;
>> +		struct geniv_benbi_private benbi;
>> +		struct geniv_lmk_private lmk;
>> +		struct geniv_tcw_private tcw;
>> +	} iv_gen_private;
>> +	void *iv_private;
>> +
>> +	mempool_t *subreq_pool;
>> +	unsigned int key_size;
>> +	unsigned int key_parts;      /* independent parts in key buffer */
>> +	unsigned int key_extra_size; /* additional keys length */
>> +	unsigned int key_mac_size;
>> +
>> +	unsigned int integrity_tag_size;
>> +	unsigned int integrity_iv_size;
>> +	unsigned int on_disk_tag_size;
>> +
>> +	char *msg;
>> +	u8 *authenc_key; /* space for keys in authenc() format (if used) */
>> +	u8 *key;
>> +};
>> +
>> +static struct scatterlist *crypt_get_sg_data(struct geniv_ctx *ctx,
>> +					     struct scatterlist *sg);
>> +
>> +static bool geniv_integrity_aead(struct geniv_ctx *ctx)
>> +{
>> +	return test_bit(CRYPT_MODE_INTEGRITY_AEAD, &ctx->cipher_flags);
>> +}
>> +
>> +static bool geniv_integrity_hmac(struct geniv_ctx *ctx)
>> +{
>> +	return geniv_integrity_aead(ctx) && ctx->key_mac_size;
>> +}
>> +
>> +static struct geniv_req_ctx *geniv_skcipher_req_ctx(struct skcipher_request *req)
>> +{
>> +	return (void *)PTR_ALIGN((u8 *)skcipher_request_ctx(req),  __alignof__(struct geniv_req_ctx));
>> +}
>> +
>> +static struct geniv_req_ctx *geniv_aead_req_ctx(struct aead_request *req)
>> +{
>> +	return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), __alignof__(struct geniv_req_ctx));
>> +}
>> +
>> +static u8 *iv_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
>> +{
>> +	if (geniv_integrity_aead(ctx))
>> +		return (u8 *)ALIGN((unsigned long)((char *)subreq + ctx->iv_start),
>> +			crypto_aead_alignmask(crypto_aead_reqtfm(subreq->rctx->r.req_aead)) + 1);
>> +	else
>> +		return (u8 *)ALIGN((unsigned long)((char *)subreq + ctx->iv_start),
>> +			crypto_skcipher_alignmask(crypto_skcipher_reqtfm(subreq->rctx->r.req)) + 1);
>> +}
>> +
>> +/* Get sg containing data */
>> +static struct scatterlist *crypt_get_sg_data(struct geniv_ctx *ctx,
>> +					     struct scatterlist *sg)
>> +{
>> +	if (unlikely(geniv_integrity_aead(ctx)))
>> +		return &sg[2];
>> +
>> +	return sg;
>> +}
>> +
>> +/*
>> + * Different IV generation algorithms:
>> + *
>> + * plain: the initial vector is the 32-bit little-endian version of the sector
>> + *        number, padded with zeros if necessary.
>> + *
>> + * plain64: the initial vector is the 64-bit little-endian version of the sector
>> + *        number, padded with zeros if necessary.
>> + *
>> + * plain64be: the initial vector is the 64-bit big-endian version of the sector
>> + *        number, padded with zeros if necessary.
>> + *
>> + * essiv: "encrypted sector|salt initial vector", the sector number is
>> + *        encrypted with the bulk cipher using a salt as key. The salt
>> + *        should be derived from the bulk cipher's key via hashing.
>> + *
>> + * benbi: the 64-bit "big-endian 'narrow block'-count", starting at 1
>> + *        (needed for LRW-32-AES and possibly other narrow block modes)
>> + *
>> + * null: the initial vector is always zero.  Provides compatibility with
>> + *       obsolete loop_fish2 devices.  Do not use for new devices.
>> + *
>> + * lmk:  Compatible implementation of the block chaining mode used
>> + *       by the Loop-AES block device encryption system
>> + *       designed by Jari Ruusu. See http://loop-aes.sourceforge.net/
>> + *       It operates on full 512 byte sectors and uses CBC
>> + *       with an IV derived from the sector number, the data and
>> + *       optionally extra IV seed.
>> + *       This means that after decryption the first block
>> + *       of sector must be tweaked according to decrypted data.
>> + *       Loop-AES can use three encryption schemes:
>> + *         version 1: is plain aes-cbc mode
>> + *         version 2: uses 64 multikey scheme with lmk IV generator
>> + *         version 3: the same as version 2 with additional IV seed
>> + *                   (it uses 65 keys, last key is used as IV seed)
>> + *
>> + * tcw:  Compatible implementation of the block chaining mode used
>> + *       by the TrueCrypt device encryption system (prior to version 4.1).
>> + *       For more info see: https://gitlab.com/cryptsetup/cryptsetup/wikis/TrueCryptOnDiskFormat
>> + *       It operates on full 512 byte sectors and uses CBC
>> + *       with an IV derived from initial key and the sector number.
>> + *       In addition, a whitening value is applied to every sector; the whitening
>> + *       is derived from the initial key and the sector number and mixed using CRC32.
>> + *       Note that this encryption scheme is vulnerable to watermarking attacks
>> + *       and should be used for old compatible containers access only.
>> + *
>> + * plumb: unimplemented, see:
>> + * http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
>> + */
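
As an aside for readers of the thread (illustration only, not code from the
patch): the plain64 rule implemented by crypt_iv_plain64_gen() below, written
as a stand-alone user-space sketch; essiv then feeds the same value through
the bulk cipher keyed with a hash of the volume key.

    #include <stdint.h>
    #include <string.h>

    /* plain64: little-endian sector number, zero-padded to the IV size.
     * Sector 5 with a 16-byte IV gives
     * 05 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00.
     */
    static void plain64_iv(uint8_t *iv, size_t iv_size, uint64_t sector)
    {
            memset(iv, 0, iv_size);
            for (size_t i = 0; i < 8 && i < iv_size; i++)
                    iv[i] = (uint8_t)(sector >> (8 * i));
    }
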
>> +
>> +static int crypt_iv_plain_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	memset(iv, 0, ctx->iv_size);
>> +	*(__le32 *)iv = cpu_to_le32(subreq->iv_sector & 0xffffffff);
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_plain64_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	memset(iv, 0, ctx->iv_size);
>> +	*(__le64 *)iv = cpu_to_le64(subreq->iv_sector);
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_plain64be_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	memset(iv, 0, ctx->iv_size);
>> +	/* iv_size is at least of size u64; usually it is 16 bytes */
>> +	*(__be64 *)&iv[ctx->iv_size - sizeof(u64)] = cpu_to_be64(subreq->iv_sector);
>> +
>> +	return 0;
>> +}
>> +
>> +/* Initialise ESSIV - compute salt but no local memory allocations */
>> +static int crypt_iv_essiv_init(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
>> +	AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
>> +	struct scatterlist sg;
>> +	struct crypto_cipher *essiv_tfm;
>> +	int err;
>> +
>> +	sg_init_one(&sg, ctx->key, ctx->key_size);
>> +	ahash_request_set_tfm(req, essiv->hash_tfm);
>> +	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
>> +	ahash_request_set_crypt(req, &sg, essiv->salt, ctx->key_size);
>> +
>> +	err = crypto_ahash_digest(req);
>> +	ahash_request_zero(req);
>> +	if (err)
>> +		return err;
>> +
>> +	essiv_tfm = ctx->iv_private;
>> +
>> +	return crypto_cipher_setkey(essiv_tfm, essiv->salt,
>> +			    crypto_ahash_digestsize(essiv->hash_tfm));
>> +}
>> +
>> +/* Wipe salt and reset key derived from volume key */
>> +static int crypt_iv_essiv_wipe(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
>> +	unsigned int salt_size = crypto_ahash_digestsize(essiv->hash_tfm);
>> +	struct crypto_cipher *essiv_tfm;
>> +
>> +	memset(essiv->salt, 0, salt_size);
>> +
>> +	essiv_tfm = ctx->iv_private;
>> +	return crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
>> +}
>> +
>> +/* Allocate the cipher for ESSIV */
>> +static struct crypto_cipher *alloc_essiv_cipher(struct geniv_ctx *ctx,
>> +					u8 *salt, unsigned int saltsize)
>> +{
>> +	struct crypto_cipher *essiv_tfm;
>> +	int err;
>> +
>> +	/* Setup the essiv_tfm with the given salt */
>> +	essiv_tfm = crypto_alloc_cipher(ctx->cipher, 0, CRYPTO_ALG_ASYNC);
>> +	if (IS_ERR(essiv_tfm)) {
>> +		DMERR("Error allocating crypto tfm for ESSIV\n");
>> +		return essiv_tfm;
>> +	}
>> +
>> +	if (crypto_cipher_blocksize(essiv_tfm) != ctx->iv_size) {
>> +		DMERR("Block size of ESSIV cipher does "
>> +			    "not match IV size of block cipher\n");
>> +		crypto_free_cipher(essiv_tfm);
>> +		return ERR_PTR(-EINVAL);
>> +	}
>> +
>> +	err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
>> +	if (err) {
>> +		DMERR("Failed to set key for ESSIV cipher\n");
>> +		crypto_free_cipher(essiv_tfm);
>> +		return ERR_PTR(err);
>> +	}
>> +
>> +	return essiv_tfm;
>> +}
>> +
>> +static void crypt_iv_essiv_dtr(struct geniv_ctx *ctx)
>> +{
>> +	struct crypto_cipher *essiv_tfm;
>> +	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
>> +
>> +	crypto_free_ahash(essiv->hash_tfm);
>> +	essiv->hash_tfm = NULL;
>> +
>> +	kzfree(essiv->salt);
>> +	essiv->salt = NULL;
>> +
>> +	essiv_tfm = ctx->iv_private;
>> +
>> +	if (essiv_tfm)
>> +		crypto_free_cipher(essiv_tfm);
>> +
>> +	ctx->iv_private = NULL;
>> +}
>> +
>> +static int crypt_iv_essiv_ctr(struct geniv_ctx *ctx)
>> +{
>> +	struct crypto_cipher *essiv_tfm = NULL;
>> +	struct crypto_ahash *hash_tfm = NULL;
>> +	u8 *salt = NULL;
>> +	int err;
>> +
>> +	if (!ctx->ivopts) {
>> +		DMERR("Digest algorithm missing for ESSIV mode\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Allocate hash algorithm */
>> +	hash_tfm = crypto_alloc_ahash(ctx->ivopts, 0, CRYPTO_ALG_ASYNC);
>> +	if (IS_ERR(hash_tfm)) {
>> +		DMERR("Error initializing ESSIV hash\n");
>> +		err = PTR_ERR(hash_tfm);
>> +		goto bad;
>> +	}
>> +
>> +	salt = kzalloc(crypto_ahash_digestsize(hash_tfm), GFP_KERNEL);
>> +	if (!salt) {
>> +		DMERR("Error kmallocing salt storage in ESSIV\n");
>> +		err = -ENOMEM;
>> +		goto bad;
>> +	}
>> +
>> +	ctx->iv_gen_private.essiv.salt = salt;
>> +	ctx->iv_gen_private.essiv.hash_tfm = hash_tfm;
>> +
>> +	essiv_tfm = alloc_essiv_cipher(ctx, salt,
>> +				       crypto_ahash_digestsize(hash_tfm));
>> +	if (IS_ERR(essiv_tfm)) {
>> +		crypt_iv_essiv_dtr(ctx);
>> +		return PTR_ERR(essiv_tfm);
>> +	}
>> +	ctx->iv_private = essiv_tfm;
>> +
>> +	return 0;
>> +
>> +bad:
>> +	if (hash_tfm && !IS_ERR(hash_tfm))
>> +		crypto_free_ahash(hash_tfm);
>> +	kfree(salt);
>> +	return err;
>> +}
>> +
>> +static int crypt_iv_essiv_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	struct crypto_cipher *essiv_tfm = ctx->iv_private;
>> +
>> +	memset(iv, 0, ctx->iv_size);
>> +	*(__le64 *)iv = cpu_to_le64(subreq->iv_sector);
>> +	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_benbi_ctr(struct geniv_ctx *ctx)
>> +{
>> +	unsigned int bs = crypto_skcipher_blocksize(ctx->tfms.tfms[0]);
>> +	int log = ilog2(bs);
>> +
>> +	/* we need to calculate how far we must shift the sector count
>> +	 * to get the cipher block count, we use this shift in _gen */
>> +
>> +	if (1 << log != bs) {
>> +		DMERR("cypher blocksize is not a power of 2\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (log > 9) {
>> +		DMERR("cypher blocksize is > 512\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	ctx->iv_gen_private.benbi.shift = 9 - log;
>> +
>> +	return 0;
>> +}
>> +
>> +static void crypt_iv_benbi_dtr(struct geniv_ctx *ctx)
>> +{
>> +}
>> +
>> +static int crypt_iv_benbi_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	__be64 val;
>> +
>> +	memset(iv, 0, ctx->iv_size - sizeof(u64)); /* rest is cleared below */
>> +
>> +	val = cpu_to_be64(((u64)subreq->iv_sector << ctx->iv_gen_private.benbi.shift) + 1);
>> +	put_unaligned(val, (__be64 *)(iv + ctx->iv_size - sizeof(u64)));
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_null_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	memset(iv, 0, ctx->iv_size);
>> +
>> +	return 0;
>> +}
>> +
>> +static void crypt_iv_lmk_dtr(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
>> +
>> +	if (lmk->hash_tfm && !IS_ERR(lmk->hash_tfm))
>> +		crypto_free_shash(lmk->hash_tfm);
>> +	lmk->hash_tfm = NULL;
>> +
>> +	kzfree(lmk->seed);
>> +	lmk->seed = NULL;
>> +}
>> +
>> +static int crypt_iv_lmk_ctr(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
>> +
>> +	if (ctx->sector_size != (1 << SECTOR_SHIFT)) {
>> +		DMERR("Unsupported sector size for LMK\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	lmk->hash_tfm = crypto_alloc_shash("md5", 0, 0);
>> +	if (IS_ERR(lmk->hash_tfm)) {
>> +		DMERR("Error initializing LMK hash, err=%ld\n",
>> +			PTR_ERR(lmk->hash_tfm));
>> +		return PTR_ERR(lmk->hash_tfm);
>> +	}
>> +
>> +	/* No seed in LMK version 2 */
>> +	if (ctx->key_parts == ctx->tfms_count) {
>> +		lmk->seed = NULL;
>> +		return 0;
>> +	}
>> +
>> +	lmk->seed = kzalloc(LMK_SEED_SIZE, GFP_KERNEL);
>> +	if (!lmk->seed) {
>> +		crypt_iv_lmk_dtr(ctx);
>> +		DMERR("Error kmallocing seed storage in LMK\n");
>> +		return -ENOMEM;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_lmk_init(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
>> +	int subkey_size = ctx->key_size / ctx->key_parts;
>> +
>> +	/* LMK seed is on the position of LMK_KEYS + 1 key */
>> +	if (lmk->seed)
>> +		memcpy(lmk->seed, ctx->key + (ctx->tfms_count * subkey_size),
>> +		       crypto_shash_digestsize(lmk->hash_tfm));
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_lmk_wipe(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
>> +
>> +	if (lmk->seed)
>> +		memset(lmk->seed, 0, LMK_SEED_SIZE);
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_lmk_one(struct geniv_ctx *ctx, u8 *iv,
>> +				struct geniv_subreq *subreq, u8 *data)
>> +{
>> +	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
>> +	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
>> +	struct md5_state md5state;
>> +	__le32 buf[4];
>> +	int i, r;
>> +
>> +	desc->tfm = lmk->hash_tfm;
>> +	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
>> +
>> +	r = crypto_shash_init(desc);
>> +	if (r)
>> +		return r;
>> +
>> +	if (lmk->seed) {
>> +		r = crypto_shash_update(desc, lmk->seed, LMK_SEED_SIZE);
>> +		if (r)
>> +			return r;
>> +	}
>> +
>> +	/* Sector is always 512B, block size 16, add data of blocks 1-31 */
>> +	r = crypto_shash_update(desc, data + 16, 16 * 31);
>> +	if (r)
>> +		return r;
>> +
>> +	/* Sector is cropped to 56 bits here */
>> +	buf[0] = cpu_to_le32(subreq->iv_sector & 0xFFFFFFFF);
>> +	buf[1] = cpu_to_le32((((u64)subreq->iv_sector >> 32) & 0x00FFFFFF) | 0x80000000);
>> +	buf[2] = cpu_to_le32(4024);
>> +	buf[3] = 0;
>> +	r = crypto_shash_update(desc, (u8 *)buf, sizeof(buf));
>> +	if (r)
>> +		return r;
>> +
>> +	/* No MD5 padding here */
>> +	r = crypto_shash_export(desc, &md5state);
>> +	if (r)
>> +		return r;
>> +
>> +	for (i = 0; i < MD5_HASH_WORDS; i++)
>> +		__cpu_to_le32s(&md5state.hash[i]);
>> +	memcpy(iv, &md5state.hash, ctx->iv_size);
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_lmk_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	struct scatterlist *sg;
>> +	u8 *src;
>> +	int r = 0;
>> +
>> +	if (rctx->is_write) {
>> +		sg = crypt_get_sg_data(ctx, subreq->sg_in);
>> +		src = kmap_atomic(sg_page(sg));
>> +		r = crypt_iv_lmk_one(ctx, iv, subreq, src + sg->offset);
>> +		kunmap_atomic(src);
>> +	} else
>> +		memset(iv, 0, ctx->iv_size);
>> +
>> +	return r;
>> +}
>> +
>> +static int crypt_iv_lmk_post(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	struct scatterlist *sg;
>> +	u8 *dst;
>> +	int r;
>> +
>> +	if (rctx->is_write)
>> +		return 0;
>> +
>> +	sg = crypt_get_sg_data(ctx, subreq->sg_out);
>> +	dst = kmap_atomic(sg_page(sg));
>> +	r = crypt_iv_lmk_one(ctx, iv, subreq, dst + sg->offset);
>> +
>> +	/* Tweak the first block of plaintext sector */
>> +	if (!r)
>> +		crypto_xor(dst + sg->offset, iv, ctx->iv_size);
>> +
>> +	kunmap_atomic(dst);
>> +	return r;
>> +}
>> +
>> +static void crypt_iv_tcw_dtr(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
>> +
>> +	kzfree(tcw->iv_seed);
>> +	tcw->iv_seed = NULL;
>> +	kzfree(tcw->whitening);
>> +	tcw->whitening = NULL;
>> +
>> +	if (tcw->crc32_tfm && !IS_ERR(tcw->crc32_tfm))
>> +		crypto_free_shash(tcw->crc32_tfm);
>> +	tcw->crc32_tfm = NULL;
>> +}
>> +
>> +static int crypt_iv_tcw_ctr(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
>> +
>> +	if (ctx->sector_size != (1 << SECTOR_SHIFT)) {
>> +		DMERR("Unsupported sector size for TCW\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (ctx->key_size <= (ctx->iv_size + TCW_WHITENING_SIZE)) {
>> +		DMERR("Wrong key size (%d) for TCW. Choose a value > %d bytes\n",
>> +			ctx->key_size, ctx->iv_size + TCW_WHITENING_SIZE);
>> +		return -EINVAL;
>> +	}
>> +
>> +	tcw->crc32_tfm = crypto_alloc_shash("crc32", 0, 0);
>> +	if (IS_ERR(tcw->crc32_tfm)) {
>> +		DMERR("Error initializing CRC32 in TCW; err=%ld\n",
>> +			PTR_ERR(tcw->crc32_tfm));
>> +		return PTR_ERR(tcw->crc32_tfm);
>> +	}
>> +
>> +	tcw->iv_seed = kzalloc(ctx->iv_size, GFP_KERNEL);
>> +	tcw->whitening = kzalloc(TCW_WHITENING_SIZE, GFP_KERNEL);
>> +	if (!tcw->iv_seed || !tcw->whitening) {
>> +		crypt_iv_tcw_dtr(ctx);
>> +		DMERR("Error allocating seed storage in TCW\n");
>> +		return -ENOMEM;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_tcw_init(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
>> +	int key_offset = ctx->key_size - ctx->iv_size - TCW_WHITENING_SIZE;
>> +
>> +	memcpy(tcw->iv_seed, &ctx->key[key_offset], ctx->iv_size);
>> +	memcpy(tcw->whitening, &ctx->key[key_offset + ctx->iv_size],
>> +	       TCW_WHITENING_SIZE);
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_tcw_wipe(struct geniv_ctx *ctx)
>> +{
>> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
>> +
>> +	memset(tcw->iv_seed, 0, ctx->iv_size);
>> +	memset(tcw->whitening, 0, TCW_WHITENING_SIZE);
>> +
>> +	return 0;
>> +}
>> +
>> +static int crypt_iv_tcw_whitening(struct geniv_ctx *ctx,
>> +				struct geniv_subreq *subreq, u8 *data)
>> +{
>> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
>> +	__le64 sector = cpu_to_le64(subreq->iv_sector);
>> +	u8 buf[TCW_WHITENING_SIZE];
>> +	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
>> +	int i, r;
>> +
>> +	/* xor whitening with sector number */
>> +	crypto_xor_cpy(buf, tcw->whitening, (u8 *)&sector, 8);
>> +	crypto_xor_cpy(&buf[8], tcw->whitening + 8, (u8 *)&sector, 8);
>> +
>> +	/* calculate crc32 for every 32bit part and xor it */
>> +	desc->tfm = tcw->crc32_tfm;
>> +	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
>> +	for (i = 0; i < 4; i++) {
>> +		r = crypto_shash_init(desc);
>> +		if (r)
>> +			goto out;
>> +		r = crypto_shash_update(desc, &buf[i * 4], 4);
>> +		if (r)
>> +			goto out;
>> +		r = crypto_shash_final(desc, &buf[i * 4]);
>> +		if (r)
>> +			goto out;
>> +	}
>> +	crypto_xor(&buf[0], &buf[12], 4);
>> +	crypto_xor(&buf[4], &buf[8], 4);
>> +
>> +	/* apply whitening (8 bytes) to whole sector */
>> +	for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++)
>> +		crypto_xor(data + i * 8, buf, 8);
>> +out:
>> +	memzero_explicit(buf, sizeof(buf));
>> +	return r;
>> +}
>> +
>> +static int crypt_iv_tcw_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	struct scatterlist *sg;
>> +	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
>> +	__le64 sector = cpu_to_le64(subreq->iv_sector);
>> +	u8 *src;
>> +	int r = 0;
>> +
>> +	/* Remove whitening from ciphertext */
>> +	if (!rctx->is_write) {
>> +		sg = crypt_get_sg_data(ctx, subreq->sg_in);
>> +		src = kmap_atomic(sg_page(sg));
>> +		r = crypt_iv_tcw_whitening(ctx, subreq, src + sg->offset);
>> +		kunmap_atomic(src);
>> +	}
>> +
>> +	/* Calculate IV */
>> +	crypto_xor_cpy(iv, tcw->iv_seed, (u8 *)&sector, 8);
>> +	if (ctx->iv_size > 8)
>> +		crypto_xor_cpy(&iv[8], tcw->iv_seed + 8, (u8 *)&sector,
>> +			       ctx->iv_size - 8);
>> +
>> +	return r;
>> +}
>> +
>> +static int crypt_iv_tcw_post(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	struct scatterlist *sg;
>> +	u8 *dst;
>> +	int r;
>> +
>> +	if (!rctx->is_write)
>> +		return 0;
>> +
>> +	/* Apply whitening on ciphertext */
>> +	sg = crypt_get_sg_data(ctx, subreq->sg_out);
>> +	dst = kmap_atomic(sg_page(sg));
>> +	r = crypt_iv_tcw_whitening(ctx, subreq, dst + sg->offset);
>> +	kunmap_atomic(dst);
>> +
>> +	return r;
>> +}
>> +
>> +static int crypt_iv_random_gen(struct geniv_ctx *ctx,
>> +				struct geniv_req_ctx *rctx,
>> +				struct geniv_subreq *subreq, u8 *iv)
>> +{
>> +	/* Used only for writes; there must be additional space to store the IV */
>> +	get_random_bytes(iv, ctx->iv_size);
>> +	return 0;
>> +}
>> +
>> +static const struct crypt_iv_operations crypt_iv_plain_ops = {
>> +	.generator = crypt_iv_plain_gen
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_plain64_ops = {
>> +	.generator = crypt_iv_plain64_gen
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
>> +	.generator = crypt_iv_plain64be_gen
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_essiv_ops = {
>> +	.ctr       = crypt_iv_essiv_ctr,
>> +	.dtr       = crypt_iv_essiv_dtr,
>> +	.init      = crypt_iv_essiv_init,
>> +	.wipe      = crypt_iv_essiv_wipe,
>> +	.generator = crypt_iv_essiv_gen
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_benbi_ops = {
>> +	.ctr	   = crypt_iv_benbi_ctr,
>> +	.dtr	   = crypt_iv_benbi_dtr,
>> +	.generator = crypt_iv_benbi_gen
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_null_ops = {
>> +	.generator = crypt_iv_null_gen
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_lmk_ops = {
>> +	.ctr	   = crypt_iv_lmk_ctr,
>> +	.dtr	   = crypt_iv_lmk_dtr,
>> +	.init	   = crypt_iv_lmk_init,
>> +	.wipe	   = crypt_iv_lmk_wipe,
>> +	.generator = crypt_iv_lmk_gen,
>> +	.post	   = crypt_iv_lmk_post
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_tcw_ops = {
>> +	.ctr	   = crypt_iv_tcw_ctr,
>> +	.dtr	   = crypt_iv_tcw_dtr,
>> +	.init	   = crypt_iv_tcw_init,
>> +	.wipe	   = crypt_iv_tcw_wipe,
>> +	.generator = crypt_iv_tcw_gen,
>> +	.post	   = crypt_iv_tcw_post
>> +};
>> +
>> +static const struct crypt_iv_operations crypt_iv_random_ops = {
>> +	.generator = crypt_iv_random_gen
>> +};
>> +
>> +static int geniv_init_iv(struct geniv_ctx *ctx)
>> +{
>> +	int ret;
>> +
>> +	DMDEBUG("IV Generation algorithm : %s\n", ctx->ivmode);
>> +
>> +	if (ctx->ivmode == NULL)
>> +		ctx->iv_gen_ops = NULL;
>> +	else if (strcmp(ctx->ivmode, "plain") == 0)
>> +		ctx->iv_gen_ops = &crypt_iv_plain_ops;
>> +	else if (strcmp(ctx->ivmode, "plain64") == 0)
>> +		ctx->iv_gen_ops = &crypt_iv_plain64_ops;
>> +	else if (strcmp(ctx->ivmode, "essiv") == 0)
>> +		ctx->iv_gen_ops = &crypt_iv_essiv_ops;
>> +	else if (strcmp(ctx->ivmode, "benbi") == 0)
>> +		ctx->iv_gen_ops = &crypt_iv_benbi_ops;
>> +	else if (strcmp(ctx->ivmode, "null") == 0)
>> +		ctx->iv_gen_ops = &crypt_iv_null_ops;
>> +	else if (strcmp(ctx->ivmode, "lmk") == 0) {
>> +		ctx->iv_gen_ops = &crypt_iv_lmk_ops;
>> +		/*
>> +		 * Versions 2 and 3 are recognised according to the
>> +		 * length of the provided multi-key string.
>> +		 * If present (version 3), last key is used as IV seed.
>> +		 * All keys (including IV seed) are always the same size.
>> +		 */
>> +		if (ctx->key_size % ctx->key_parts) {
>> +			ctx->key_parts++;
>> +			ctx->key_extra_size = ctx->key_size / ctx->key_parts;
>> +		}
>> +	} else if (strcmp(ctx->ivmode, "tcw") == 0) {
>> +		ctx->iv_gen_ops = &crypt_iv_tcw_ops;
>> +		ctx->key_parts += 2; /* IV + whitening */
>> +		ctx->key_extra_size = ctx->iv_size + TCW_WHITENING_SIZE;
>> +	} else if (strcmp(ctx->ivmode, "random") == 0) {
>> +		ctx->iv_gen_ops = &crypt_iv_random_ops;
>> +		/* Need storage space in integrity fields. */
>> +		ctx->integrity_iv_size = ctx->iv_size;
>> +	} else {
>> +		DMERR("Invalid IV mode %s\n", ctx->ivmode);
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Allocate IV */
>> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->ctr) {
>> +		ret = ctx->iv_gen_ops->ctr(ctx);
>> +		if (ret < 0) {
>> +			DMERR("Error creating IV for %s\n", ctx->ivmode);
>> +			return ret;
>> +		}
>> +	}
>> +
>> +	/* Initialize IV (set keys for ESSIV etc) */
>> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init) {
>> +		ret = ctx->iv_gen_ops->init(ctx);
>> +		if (ret < 0) {
>> +			DMERR("Error creating IV for %s\n", ctx->ivmode);
>> +			return ret;
>> +		}
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static void geniv_free_tfms_aead(struct geniv_ctx *ctx)
>> +{
>> +	if (!ctx->tfms.tfms_aead)
>> +		return;
>> +
>> +	if (ctx->tfms.tfms_aead[0] && !IS_ERR(ctx->tfms.tfms_aead[0])) {
>> +		crypto_free_aead(ctx->tfms.tfms_aead[0]);
>> +		ctx->tfms.tfms_aead[0] = NULL;
>> +	}
>> +
>> +	kfree(ctx->tfms.tfms_aead);
>> +	ctx->tfms.tfms_aead = NULL;
>> +}
>> +
>> +static void geniv_free_tfms_skcipher(struct geniv_ctx *ctx)
>> +{
>> +	unsigned int i;
>> +
>> +	if (!ctx->tfms.tfms)
>> +		return;
>> +
>> +	for (i = 0; i < ctx->tfms_count; i++)
>> +		if (ctx->tfms.tfms[i] && !IS_ERR(ctx->tfms.tfms[i])) {
>> +			crypto_free_skcipher(ctx->tfms.tfms[i]);
>> +			ctx->tfms.tfms[i] = NULL;
>> +		}
>> +
>> +	kfree(ctx->tfms.tfms);
>> +	ctx->tfms.tfms = NULL;
>> +}
>> +
>> +static void geniv_free_tfms(struct geniv_ctx *ctx)
>> +{
>> +	if (geniv_integrity_aead(ctx))
>> +		geniv_free_tfms_aead(ctx);
>> +	else
>> +		geniv_free_tfms_skcipher(ctx);
>> +}
>> +
>> +static int geniv_alloc_tfms_aead(struct crypto_aead *parent,
>> +			    struct geniv_ctx *ctx)
>> +{
>> +	unsigned int reqsize, align;
>> +
>> +	ctx->tfms.tfms_aead = kcalloc(1, sizeof(struct crypto_aead *),
>> +			   GFP_KERNEL);
>> +	if (!ctx->tfms.tfms_aead)
>> +		return -ENOMEM;
>> +
>> +	/* First instance is already allocated in geniv_init_tfm */
>> +	ctx->tfms.tfms_aead[0] = ctx->tfm_child.tfm_aead;
>> +
>> +	/* Setup the current cipher's request structure */
>> +	align = crypto_aead_alignmask(parent);
>> +	align &= ~(crypto_tfm_ctx_alignment() - 1);
>> +	reqsize = align + sizeof(struct geniv_req_ctx) +
>> +		  crypto_aead_reqsize(ctx->tfms.tfms_aead[0]);
>> +
>> +	crypto_aead_set_reqsize(parent, reqsize);
>> +
>> +	return 0;
>> +}
>> +
>> +/* Allocate memory for the underlying cipher algorithm, e.g. cbc(aes). */
>> +static int geniv_alloc_tfms_skcipher(struct crypto_skcipher *parent,
>> +			    struct geniv_ctx *ctx)
>> +{
>> +	unsigned int i, reqsize, align;
>> +	int err;
>> +
>> +	ctx->tfms.tfms = kcalloc(ctx->tfms_count, sizeof(struct crypto_skcipher *),
>> +			   GFP_KERNEL);
>> +	if (!ctx->tfms.tfms)
>> +		return -ENOMEM;
>> +
>> +	/* First instance is already allocated in geniv_init_tfm */
>> +	ctx->tfms.tfms[0] = ctx->tfm_child.tfm;
>> +	for (i = 1; i < ctx->tfms_count; i++) {
>> +		ctx->tfms.tfms[i] = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
>> +		if (IS_ERR(ctx->tfms.tfms[i])) {
>> +			err = PTR_ERR(ctx->tfms.tfms[i]);
>> +			geniv_free_tfms(ctx);
>> +			return err;
>> +		}
>> +
>> +		/* Setup the current cipher's request structure */
>> +		align = crypto_skcipher_alignmask(parent);
>> +		align &= ~(crypto_tfm_ctx_alignment() - 1);
>> +		reqsize = align + sizeof(struct geniv_req_ctx) +
>> +			  crypto_skcipher_reqsize(ctx->tfms.tfms[i]);
>> +
>> +		crypto_skcipher_set_reqsize(parent, reqsize);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static unsigned int geniv_authenckey_size(struct geniv_ctx *ctx)
>> +{
>> +	return ctx->key_size - ctx->key_extra_size +
>> +		RTA_SPACE(sizeof(struct crypto_authenc_key_param));
>> +}
>> +
>> +/* Initialize the cipher's context with the key, ivmode and other parameters.
>> + * Also allocate IV generation template ciphers and initialize them.
>> + */
>> +static int geniv_setkey_init(void *parent, struct geniv_key_info *info)
>> +{
>> +	struct geniv_ctx *ctx;
>> +	int ret;
>> +
>> +	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
>> +		ctx = crypto_aead_ctx((struct crypto_aead *)parent);
>> +	else
>> +		ctx = crypto_skcipher_ctx((struct crypto_skcipher *)parent);
>> +
>> +	ctx->tfms_count = info->tfms_count;
>> +	ctx->key = info->key;
>> +	ctx->cipher_flags = info->cipher_flags;
>> +	ctx->ivopts = info->ivopts;
>> +	ctx->iv_offset = info->iv_offset;
>> +	ctx->sector_size = info->sector_size;
>> +	ctx->sector_shift = __ffs(ctx->sector_size) - SECTOR_SHIFT;
>> +
>> +	ctx->key_size = info->key_size;
>> +	ctx->key_parts = info->key_parts;
>> +	ctx->key_mac_size = info->key_mac_size;
>> +	ctx->on_disk_tag_size = info->on_disk_tag_size;
>> +
>> +	if (geniv_integrity_hmac(ctx)) {
>> +		ctx->authenc_key = kmalloc(geniv_authenckey_size(ctx), GFP_KERNEL);
>> +		if (!ctx->authenc_key)
>> +			return -ENOMEM;
>> +	}
>> +
>> +	if (geniv_integrity_aead(ctx))
>> +		ret = geniv_alloc_tfms_aead((struct crypto_aead *)parent, ctx);
>> +	else
>> +		ret = geniv_alloc_tfms_skcipher((struct crypto_skcipher *)parent, ctx);
>> +	if (ret)
>> +		return ret;
>> +
>> +	ret = geniv_init_iv(ctx);
>> +
>> +	if (geniv_integrity_aead(ctx))
>> +		ctx->integrity_tag_size = ctx->on_disk_tag_size - ctx->integrity_iv_size;
>> +
>> +	return ret;
>> +}
>> +
>> +/*
>> + * If the AEAD is composed like authenc(hmac(sha256),xts(aes)),
>> + * the key must be passed in the special format that the authenc
>> + * template expects. This function converts ctx->key into that format.
>> + */
>> +static void crypt_copy_authenckey(char *p, const void *key,
>> +			unsigned int enckeylen, unsigned int authkeylen)
>> +{
>> +	struct crypto_authenc_key_param *param;
>> +	struct rtattr *rta;
>> +
>> +	rta = (struct rtattr *)p;
>> +	param = RTA_DATA(rta);
>> +	param->enckeylen = cpu_to_be32(enckeylen);
>> +	rta->rta_len = RTA_LENGTH(sizeof(*param));
>> +	rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
>> +	p += RTA_SPACE(sizeof(*param));
>> +	memcpy(p, key + enckeylen, authkeylen);
>> +	p += authkeylen;
>> +	memcpy(p, key, enckeylen);
>> +}
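
For readers unfamiliar with the authenc() key format (annotation only,
derived from the code above): the buffer ends up laid out as an rtattr header
carrying the big-endian encryption-key length, followed by the authentication
key and then the encryption key.

    /*
     * ctx->authenc_key after crypt_copy_authenckey(p, key, enckeylen, authkeylen):
     *
     *   | rtattr + crypto_authenc_key_param | auth key           | enc key         |
     *   | rta_len, rta_type, be32 enckeylen | authkeylen bytes   | enckeylen bytes |
     *   |                                   | from key+enckeylen | from key        |
     *
     * i.e. RTA_SPACE(sizeof(*param)) header bytes, then the last authkeylen
     * bytes of the source key, then its first enckeylen bytes.
     */
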
>> +
>> +static int geniv_setkey_tfms_aead(struct crypto_aead *parent, struct geniv_ctx *ctx,
>> +			     struct geniv_key_info *info)
>> +{
>> +	unsigned int key_size;
>> +	unsigned int authenc_key_size;
>> +	struct crypto_aead *child_aead;
>> +	int ret = 0;
>> +
>> +	/* Ignore extra keys (which are used for IV etc) */
>> +	key_size = ctx->key_size - ctx->key_extra_size;
>> +	authenc_key_size = key_size + RTA_SPACE(sizeof(struct crypto_authenc_key_param));
>> +
>> +	child_aead = ctx->tfms.tfms_aead[0];
>> +	crypto_aead_clear_flags(child_aead, CRYPTO_TFM_REQ_MASK);
>> +	crypto_aead_set_flags(child_aead, crypto_aead_get_flags(parent) & CRYPTO_TFM_REQ_MASK);
>> +
>> +	if (geniv_integrity_hmac(ctx)) {
>> +		if (key_size < ctx->key_mac_size)
>> +			return -EINVAL;
>> +
>> +		crypt_copy_authenckey(ctx->authenc_key, ctx->key, key_size - ctx->key_mac_size,
>> +				      ctx->key_mac_size);
>> +	}
>> +
>> +	if (geniv_integrity_hmac(ctx))
>> +		ret = crypto_aead_setkey(child_aead, ctx->authenc_key, authenc_key_size);
>> +	else
>> +		ret = crypto_aead_setkey(child_aead, ctx->key, key_size);
>> +	if (ret) {
>> +		DMERR("Error setting key for tfms[0]\n");
>> +		goto out;
>> +	}
>> +
>> +	crypto_aead_set_flags(parent, crypto_aead_get_flags(child_aead) & CRYPTO_TFM_RES_MASK);
>> +
>> +out:
>> +	if (geniv_integrity_hmac(ctx))
>> +		memzero_explicit(ctx->authenc_key, authenc_key_size);
>> +
>> +	return ret;
>> +}
>> +
>> +static int geniv_setkey_tfms_skcipher(struct crypto_skcipher *parent, struct geniv_ctx *ctx,
>> +			     struct geniv_key_info *info)
>> +{
>> +	unsigned int subkey_size;
>> +	char *subkey;
>> +	struct crypto_skcipher *child;
>> +	int ret, i;
>> +
>> +	/* Ignore extra keys (which are used for IV etc) */
>> +	subkey_size = (ctx->key_size - ctx->key_extra_size)
>> +		      >> ilog2(ctx->tfms_count);
>> +
>> +	for (i = 0; i < ctx->tfms_count; i++) {
>> +		child = ctx->tfms.tfms[i];
>> +		crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
>> +		crypto_skcipher_set_flags(child,
>> +			crypto_skcipher_get_flags(parent) & CRYPTO_TFM_REQ_MASK);
>> +
>> +		subkey = ctx->key + (subkey_size) * i;
>> +
>> +		ret = crypto_skcipher_setkey(child, subkey, subkey_size);
>> +		if (ret) {
>> +			DMERR("Error setting key for tfms[%d]\n", i);
>> +			return ret;
>> +		}
>> +
>> +		crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) &
>> +					  CRYPTO_TFM_RES_MASK);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int geniv_setkey_set(struct geniv_ctx *ctx)
>> +{
>> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init)
>> +		return ctx->iv_gen_ops->init(ctx);
>> +	else
>> +		return 0;
>> +}
>> +
>> +static int geniv_setkey_wipe(struct geniv_ctx *ctx)
>> +{
>> +	int ret;
>> +
>> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->wipe) {
>> +		ret = ctx->iv_gen_ops->wipe(ctx);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	if (geniv_integrity_hmac(ctx))
>> +		kzfree(ctx->authenc_key);
>> +
>> +	return 0;
>> +}
>> +
>> +static int geniv_setkey(void *parent, const u8 *key, unsigned int keylen)
>> +{
>> +	int err = 0;
>> +	struct geniv_ctx *ctx;
>> +	struct geniv_key_info *info = (struct geniv_key_info *) key;
>> +
>> +	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
>> +		ctx = crypto_aead_ctx((struct crypto_aead *)parent);
>> +	else
>> +		ctx = crypto_skcipher_ctx((struct crypto_skcipher *)parent);
>> +
>> +	DMDEBUG("SETKEY Operation : %d\n", info->keyop);
>> +
>> +	switch (info->keyop) {
>> +	case SETKEY_OP_INIT:
>> +		err = geniv_setkey_init(parent, info);
>> +		break;
>> +	case SETKEY_OP_SET:
>> +		err = geniv_setkey_set(ctx);
>> +		break;
>> +	case SETKEY_OP_WIPE:
>> +		err = geniv_setkey_wipe(ctx);
>> +		break;
>> +	}
>> +
>> +	if (err)
>> +		return err;
>> +
>> +	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &info->cipher_flags))
>> +		return geniv_setkey_tfms_aead((struct crypto_aead *)parent, ctx, info);
>> +	else
>> +		return geniv_setkey_tfms_skcipher((struct crypto_skcipher *)parent, ctx, info);
>> +}
>> +
>> +static int geniv_aead_setkey(struct crypto_aead *parent,
>> +				const u8 *key, unsigned int keylen)
>> +{
>> +	return geniv_setkey(parent, key, keylen);
>> +}
>> +
>> +static int geniv_skcipher_setkey(struct crypto_skcipher *parent,
>> +				const u8 *key, unsigned int keylen)
>> +{
>> +	return geniv_setkey(parent, key, keylen);
>> +}
>> +
>> +static void geniv_async_done(struct crypto_async_request *async_req, int error);
>> +
>> +static int geniv_alloc_subreq_aead(struct geniv_ctx *ctx,
>> +					struct geniv_req_ctx *rctx,
>> +					u32 req_flags)
>> +{
>> +	struct aead_request *req;
>> +
>> +	if (!rctx->subreq) {
>> +		rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);
>> +		if (!rctx->subreq)
>> +			return -ENOMEM;
>> +	}
>> +
>> +	req = &rctx->subreq->r.req_aead;
>> +	rctx->subreq->rctx = rctx;
>> +
>> +	aead_request_set_tfm(req, ctx->tfms.tfms_aead[0]);
>> +	aead_request_set_callback(req, req_flags,
>> +					geniv_async_done, rctx->subreq);
>> +
>> +	return 0;
>> +}
>> +
>> +/* req_flags: flags from parent request */
>> +static int geniv_alloc_subreq_skcipher(struct geniv_ctx *ctx,
>> +					struct geniv_req_ctx *rctx,
>> +					u32 req_flags)
>> +{
>> +	int key_index;
>> +	struct skcipher_request *req;
>> +
>> +	if (!rctx->subreq) {
>> +		rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);
>> +		if (!rctx->subreq)
>> +			return -ENOMEM;
>> +	}
>> +
>> +	req = &rctx->subreq->r.req;
>> +	rctx->subreq->rctx = rctx;
>> +
>> +	key_index = rctx->cc_sector & (ctx->tfms_count - 1);
>> +
>> +	skcipher_request_set_tfm(req, ctx->tfms.tfms[key_index]);
>> +	skcipher_request_set_callback(req, req_flags,
>> +					geniv_async_done, rctx->subreq);
>> +
>> +	return 0;
>> +}
>> +
>> +/* Asynchronous I/O completion callback for each sector in a segment. When all
>> + * pending sub-requests have completed, the parent cipher's completion function
>> + * is called.
>> + */
>> +static void geniv_async_done(struct crypto_async_request *async_req, int error)
>> +{
>> +	struct geniv_subreq *subreq =
>> +		(struct geniv_subreq *) async_req->data;
>> +	struct geniv_req_ctx *rctx = subreq->rctx;
>> +	struct skcipher_request *req = NULL;
>> +	struct aead_request *req_aead = NULL;
>> +	struct geniv_ctx *ctx;
>> +	u8 *iv;
>> +
>> +	if (!rctx->is_aead_request) {
>> +		req = rctx->r.req;
>> +		ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
>> +	} else {
>> +		req_aead = rctx->r.req_aead;
>> +		ctx = crypto_aead_ctx(crypto_aead_reqtfm(req_aead));
>> +	}
>> +
>> +	/*
>> +	 * A request from crypto driver backlog is going to be processed now,
>> +	 * finish the completion and continue in crypt_convert().
>> +	 * (Callback will be called for the second time for this request.)
>> +	 */
>> +	if (error == -EINPROGRESS) {
>> +		complete(&rctx->restart);
>> +		return;
>> +	}
>> +
>> +	iv = iv_of_subreq(ctx, subreq);
>> +	if (!error && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
>> +		error = ctx->iv_gen_ops->post(ctx, rctx, subreq, iv);
>> +
>> +	mempool_free(subreq, ctx->subreq_pool);
>> +
>> +	/* Call the parent completion function only when this was the last
>> +	 * pending sub-request, i.e. when 'req_pending' drops to zero.
>> +	 */
>> +	if (atomic_dec_and_test(&rctx->req_pending)) {
>> +		/* Call the parent cipher's completion function */
>> +		if (!rctx->is_aead_request)
>> +			skcipher_request_complete(req, error);
>> +		else
>> +			aead_request_complete(req_aead, error);
>> +
>> +	}
>> +}
>> +
>> +static unsigned int geniv_get_sectors(struct scatterlist *sg1,
>> +				      struct scatterlist *sg2,
>> +				      unsigned int segments)
>> +{
>> +	unsigned int i, n1, n2;
>> +
>> +	n1 = n2 = 0;
>> +	for (i = 0; i < segments ; i++) {
>> +		n1 += sg1[i].length >> SECTOR_SHIFT;
>> +		n1 += (sg1[i].length & SECTOR_MASK) ? 1 : 0;
>> +	}
>> +
>> +	for (i = 0; i < segments ; i++) {
>> +		n2 += sg2[i].length >> SECTOR_SHIFT;
>> +		n2 += (sg2[i].length & SECTOR_MASK) ? 1 : 0;
>> +	}
>> +
>> +	return n1 > n2 ? n1 : n2;
>> +}
>> +
>> +/* Iterate over the scatterlist segments to retrieve the 512-byte sectors so
>> + * that a unique IV can be generated for each sector. This split may not be
>> + * necessary, e.g. when these ciphers are implemented in hardware that can
>> + * make use of its own IV generation capabilities.
>> + */
>> +static int geniv_iter_block(void *req_in,
>> +			struct geniv_ctx *ctx, struct geniv_req_ctx *rctx)
>> +
>> +{
>> +	unsigned int rem;
>> +	struct scatterlist *src_org, *dst_org;
>> +	struct scatterlist *src1, *dst1;
>> +	struct scatterlist_iter *iter = &rctx->iter;
>> +	struct skcipher_request *req;
>> +	struct aead_request *req_aead;
>> +
>> +	if (unlikely(iter->seg_no >= rctx->nents))
>> +		return 0;
>> +
>> +	if (geniv_integrity_aead(ctx)) {
>> +		req_aead = (struct aead_request *)req_in;
>> +		src_org = &req_aead->src[0];
>> +		dst_org = &req_aead->dst[0];
>> +	} else {
>> +		req = (struct skcipher_request *)req_in;
>> +		src_org = &req->src[0];
>> +		dst_org = &req->dst[0];
>> +	}
>> +
>> +	src1 = &src_org[iter->seg_no];
>> +	dst1 = &dst_org[iter->seg_no];
>> +	iter->done += iter->len;
>> +
>> +	if (iter->done >= src1->length) {
>> +		iter->seg_no++;
>> +
>> +		if (iter->seg_no >= rctx->nents)
>> +			return 0;
>> +
>> +		src1 = &src_org[iter->seg_no];
>> +		dst1 = &dst_org[iter->seg_no];
>> +		iter->done = 0;
>> +	}
>> +
>> +	rem = src1->length - iter->done;
>> +
>> +	iter->len = rem > ctx->sector_size ? ctx->sector_size : rem;
>> +
>> +	DMDEBUG("segment:(%d/%u),  done:%d, rem:%d\n",
>> +		iter->seg_no, rctx->nents, iter->done, rem);
>> +
>> +	return iter->len;
>> +}
>> +
>> +static u8 *org_iv_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
>> +{
>> +	return iv_of_subreq(ctx, subreq) + ctx->iv_size;
>> +}
>> +
>> +static uint64_t *org_sector_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
>> +{
>> +	u8 *ptr = iv_of_subreq(ctx, subreq) + ctx->iv_size + ctx->iv_size;
>> +
>> +	return (uint64_t *) ptr;
>> +}
>> +
>> +static unsigned int *org_tag_of_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
>> +{
>> +	u8 *ptr = iv_of_subreq(ctx, subreq) + ctx->iv_size +
>> +		  ctx->iv_size + sizeof(uint64_t);
>> +
>> +	return (unsigned int *)ptr;
>> +}
>> +
>> +static void *tag_from_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
>> +{
>> +	return &subreq->rctx->integrity_metadata[*org_tag_of_subreq(ctx, subreq) *
>> +		ctx->on_disk_tag_size];
>> +}
>> +
>> +static void *iv_tag_from_subreq(struct geniv_ctx *ctx, struct geniv_subreq *subreq)
>> +{
>> +	return tag_from_subreq(ctx, subreq) + ctx->integrity_tag_size;
>> +}
>> +
>> +static int geniv_convert_block_aead(struct geniv_ctx *ctx,
>> +				     struct geniv_req_ctx *rctx,
>> +				     struct geniv_subreq *subreq,
>> +				     unsigned int tag_offset)
>> +{
>> +	struct scatterlist *sg_in, *sg_out;
>> +	u8 *iv, *org_iv, *tag_iv, *tag;
>> +	uint64_t *sector;
>> +	int r = 0;
>> +	struct scatterlist_iter *iter = &rctx->iter;
>> +	struct aead_request *req_aead;
>> +	struct aead_request *parent_req = rctx->r.req_aead;
>> +
>> +	BUG_ON(ctx->integrity_iv_size && ctx->integrity_iv_size != ctx->iv_size);
>> +
>> +	/* Reject unexpected unaligned bio. */
>> +	if (unlikely(iter->len & (ctx->sector_size - 1)))
>> +		return -EIO;
>> +
>> +	subreq->iv_sector = rctx->cc_sector;
>> +	if (test_bit(CRYPT_IV_LARGE_SECTORS, &ctx->cipher_flags))
>> +		subreq->iv_sector >>= ctx->sector_shift;
>> +
>> +	*org_tag_of_subreq(ctx, subreq) = tag_offset;
>> +
>> +	sector = org_sector_of_subreq(ctx, subreq);
>> +	*sector = cpu_to_le64(rctx->cc_sector - ctx->iv_offset);
>> +
>> +	iv = iv_of_subreq(ctx, subreq);
>> +	org_iv = org_iv_of_subreq(ctx, subreq);
>> +	tag = tag_from_subreq(ctx, subreq);
>> +	tag_iv = iv_tag_from_subreq(ctx, subreq);
>> +
>> +	sg_in = subreq->sg_in;
>> +	sg_out = subreq->sg_out;
>> +
>> +	/* AEAD request:
>> +	 *  |----- AAD -------|------ DATA -------|-- AUTH TAG --|
>> +	 *  | (authenticated) | (auth+encryption) |              |
>> +	 *  | sector_LE |  IV |  sector in/out    |  tag in/out  |
>> +	 */
>> +	sg_init_table(sg_in, 4);
>> +	sg_set_buf(&sg_in[0], sector, sizeof(uint64_t));
>> +	sg_set_buf(&sg_in[1], org_iv, ctx->iv_size);
>> +	sg_set_page(&sg_in[2], sg_page(&parent_req->src[iter->seg_no]),
>> +			iter->len, parent_req->src[iter->seg_no].offset + iter->done);
>> +	sg_set_buf(&sg_in[3], tag, ctx->integrity_tag_size);
>> +
>> +	sg_init_table(sg_out, 4);
>> +	sg_set_buf(&sg_out[0], sector, sizeof(uint64_t));
>> +	sg_set_buf(&sg_out[1], org_iv, ctx->iv_size);
>> +	sg_set_page(&sg_out[2], sg_page(&parent_req->dst[iter->seg_no]),
>> +			iter->len, parent_req->dst[iter->seg_no].offset + iter->done);
>> +	sg_set_buf(&sg_out[3], tag, ctx->integrity_tag_size);
>> +
>> +	if (ctx->iv_gen_ops) {
>> +		/* For READs use IV stored in integrity metadata */
>> +		if (ctx->integrity_iv_size && !rctx->is_write) {
>> +			memcpy(org_iv, tag_iv, ctx->iv_size);
>> +		} else {
>> +			r = ctx->iv_gen_ops->generator(ctx, rctx, subreq, org_iv);
>> +			if (r < 0)
>> +				return r;
>> +			/* Store generated IV in integrity metadata */
>> +			if (ctx->integrity_iv_size)
>> +				memcpy(tag_iv, org_iv, ctx->iv_size);
>> +		}
>> +		/* Working copy of IV, to be modified in crypto API */
>> +		memcpy(iv, org_iv, ctx->iv_size);
>> +	}
>> +
>> +	req_aead = &subreq->r.req_aead;
>> +	aead_request_set_ad(req_aead, sizeof(uint64_t) + ctx->iv_size);
>> +	if (rctx->is_write) {
>> +		aead_request_set_crypt(req_aead, subreq->sg_in, subreq->sg_out,
>> +				       ctx->sector_size, iv);
>> +		r = crypto_aead_encrypt(req_aead);
>> +		if (ctx->integrity_tag_size + ctx->integrity_iv_size != ctx->on_disk_tag_size)
>> +			memset(tag + ctx->integrity_tag_size + ctx->integrity_iv_size, 0,
>> +			       ctx->on_disk_tag_size - (ctx->integrity_tag_size + ctx->integrity_iv_size));
>> +	} else {
>> +		aead_request_set_crypt(req_aead, subreq->sg_in, subreq->sg_out,
>> +				       ctx->sector_size + ctx->integrity_tag_size, iv);
>> +		r = crypto_aead_decrypt(req_aead);
>> +	}
>> +
>> +	if (r == -EBADMSG)
>> +		DMERR_LIMIT("INTEGRITY AEAD ERROR, sector %llu",
>> +			    (unsigned long long)le64_to_cpu(*sector));
>> +
>> +	if (!r && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
>> +		r = ctx->iv_gen_ops->post(ctx, rctx, subreq, org_iv);
>> +
>> +	return r;
>> +}
>> +
>> +static int geniv_convert_block_skcipher(struct geniv_ctx *ctx,
>> +					struct geniv_req_ctx *rctx,
>> +					struct geniv_subreq *subreq,
>> +					unsigned int tag_offset)
>> +{
>> +	struct scatterlist *sg_in, *sg_out;
>> +	u8 *iv, *org_iv, *tag_iv;
>> +	uint64_t *sector;
>> +	int r = 0;
>> +	struct scatterlist_iter *iter = &rctx->iter;
>> +	struct skcipher_request *req;
>> +	struct skcipher_request *parent_req = rctx->r.req;
>> +
>> +	/* Reject unexpected unaligned bio. */
>> +	if (unlikely(iter->len & (ctx->sector_size - 1)))
>> +		return -EIO;
>> +
>> +	subreq->iv_sector = rctx->cc_sector;
>> +	if (test_bit(CRYPT_IV_LARGE_SECTORS, &ctx->cipher_flags))
>> +		subreq->iv_sector >>= ctx->sector_shift;
>> +
>> +	*org_tag_of_subreq(ctx, subreq) = tag_offset;
>> +
>> +	iv = iv_of_subreq(ctx, subreq);
>> +	org_iv = org_iv_of_subreq(ctx, subreq);
>> +	tag_iv = iv_tag_from_subreq(ctx, subreq);
>> +
>> +	sector = org_sector_of_subreq(ctx, subreq);
>> +	*sector = cpu_to_le64(rctx->cc_sector - ctx->iv_offset);
>> +
>> +	/* For skcipher we use only the first sg item */
>> +	sg_in = subreq->sg_in;
>> +	sg_out = subreq->sg_out;
>> +
>> +	sg_init_table(sg_in, 1);
>> +	sg_set_page(sg_in, sg_page(&parent_req->src[iter->seg_no]),
>> +			iter->len, parent_req->src[iter->seg_no].offset + iter->done);
>> +
>> +	sg_init_table(sg_out, 1);
>> +	sg_set_page(sg_out, sg_page(&parent_req->dst[iter->seg_no]),
>> +			iter->len, parent_req->dst[iter->seg_no].offset + iter->done);
>> +
>> +	if (ctx->iv_gen_ops) {
>> +		/* For READs use IV stored in integrity metadata */
>> +		if (ctx->integrity_iv_size && !rctx->is_write) {
>> +			memcpy(org_iv, tag_iv, ctx->integrity_iv_size);
>> +		} else {
>> +			r = ctx->iv_gen_ops->generator(ctx, rctx, subreq, org_iv);
>> +			if (r < 0)
>> +				return r;
>> +			/* Store generated IV in integrity metadata */
>> +			if (ctx->integrity_iv_size)
>> +				memcpy(tag_iv, org_iv, ctx->integrity_iv_size);
>> +		}
>> +		/* Working copy of IV, to be modified in crypto API */
>> +		memcpy(iv, org_iv, ctx->iv_size);
>> +	}
>> +
>> +	req = &subreq->r.req;
>> +	skcipher_request_set_crypt(req, sg_in, sg_out, ctx->sector_size, iv);
>> +
>> +	if (rctx->is_write)
>> +		r = crypto_skcipher_encrypt(req);
>> +	else
>> +		r = crypto_skcipher_decrypt(req);
>> +
>> +	if (!r && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
>> +		r = ctx->iv_gen_ops->post(ctx, rctx, subreq, org_iv);
>> +
>> +	return r;
>> +}
>> +
>> +/* Common encrypt/decrypt function for the geniv template ciphers. Before the
>> + * crypto operation, it splits the memory segments (in the scatterlist) into
>> + * sector-sized chunks. The initialization vector (IV) used is based on a
>> + * unique sector number, which is generated here.
>> + */
>> +static int geniv_crypt(struct geniv_ctx *ctx, void *parent_req, bool is_encrypt)
>> +{
>> +	struct skcipher_request *req = NULL;
>> +	struct aead_request *req_aead = NULL;
>> +	struct geniv_req_ctx *rctx;
>> +	struct geniv_req_info *rinfo;
>> +	int i, bytes, cryptlen, ret = 0;
>> +	unsigned int sectors;
>> +	unsigned int tag_offset = 0;
>> +	unsigned int sector_step = ctx->sector_size >> SECTOR_SHIFT;
>> +	char *str __maybe_unused = is_encrypt ? "encrypt" : "decrypt";
>> +
>> +	if (geniv_integrity_aead(ctx)) {
>> +		req_aead = (struct aead_request *)parent_req;
>> +		rctx = geniv_aead_req_ctx(req_aead);
>> +		rctx->r.req_aead = req_aead;
>> +		rinfo = (struct geniv_req_info *)req_aead->iv;
>> +	} else {
>> +		req = (struct skcipher_request *)parent_req;
>> +		rctx = geniv_skcipher_req_ctx(req);
>> +		rctx->r.req = req;
>> +		rinfo = (struct geniv_req_info *)req->iv;
>> +	}
>> +
>> +	/* Instance of 'struct geniv_req_info' is stored in IV ptr */
>> +	rctx->is_write = is_encrypt;
>> +	rctx->is_aead_request = geniv_integrity_aead(ctx);
>> +	rctx->cc_sector = rinfo->cc_sector;
>> +	rctx->nents = rinfo->nents;
>> +	rctx->integrity_metadata = rinfo->integrity_metadata;
>> +	rctx->subreq = NULL;
>> +	cryptlen = geniv_integrity_aead(ctx) ? req_aead->cryptlen : req->cryptlen;
>> +
>> +	rctx->iter.seg_no = 0;
>> +	rctx->iter.done = 0;
>> +	rctx->iter.len = 0;
>> +
>> +	DMDEBUG("geniv:%s: starting sector=%d, #segments=%u\n", str,
>> +		(unsigned int)rctx->cc_sector, rctx->nents);
>> +
>> +	if (geniv_integrity_aead(ctx))
>> +		sectors = geniv_get_sectors(req_aead->src, req_aead->dst, rctx->nents);
>> +	else
>> +		sectors = geniv_get_sectors(req->src, req->dst, rctx->nents);
>> +
>> +	init_completion(&rctx->restart);
>> +	atomic_set(&rctx->req_pending, 1);
>> +
>> +	for (i = 0; i < sectors; i++) {
>> +		struct geniv_subreq *subreq;
>> +
>> +		if (geniv_integrity_aead(ctx))
>> +			ret = geniv_alloc_subreq_aead(ctx, rctx, req_aead->base.flags);
>> +		else
>> +			ret = geniv_alloc_subreq_skcipher(ctx, rctx, req->base.flags);
>> +		if (ret)
>> +			return -ENOMEM;
>> +
>> +		subreq = rctx->subreq;
>> +
>> +		atomic_inc(&rctx->req_pending);
>> +
>> +		if (geniv_integrity_aead(ctx))
>> +			bytes = geniv_iter_block(req_aead, ctx, rctx);
>> +		else
>> +			bytes = geniv_iter_block(req, ctx, rctx);
>> +
>> +		if (bytes == 0)
>> +			break;
>> +
>> +		cryptlen -= bytes;
>> +
>> +		if (geniv_integrity_aead(ctx))
>> +			ret = geniv_convert_block_aead(ctx, rctx, subreq, tag_offset);
>> +		else
>> +			ret = geniv_convert_block_skcipher(ctx, rctx, subreq, tag_offset);
>> +
>> +		switch (ret) {
>> +		/*
>> +		 * The request was queued by a crypto driver
>> +		 * but the driver request queue is full, let's wait.
>> +		 */
>> +		case -EBUSY:
>> +			wait_for_completion(&rctx->restart);
>> +			reinit_completion(&rctx->restart);
>> +			/* fall through */
>> +		/*
>> +		 * The request is queued and processed asynchronously,
>> +		 * completion function geniv_async_done() is called.
>> +		 */
>> +		case -EINPROGRESS:
>> +			/* Setting this to NULL forces a new sub-request to
>> +			 * be allocated when geniv_alloc_subreq_*() is called.
>> +			 */
>> +			rctx->subreq = NULL;
>> +			rctx->cc_sector += sector_step;
>> +			tag_offset++;
>> +			cond_resched();
>> +			break;
>> +		/*
>> +		 * The request was already processed (synchronously).
>> +		 */
>> +		case 0:
>> +			atomic_dec(&rctx->req_pending);
>> +			rctx->cc_sector += sector_step;
>> +			tag_offset++;
>> +			cond_resched();
>> +			continue;
>> +
>> +		/* There was an error while processing the request. */
>> +		default:
>> +			atomic_dec(&rctx->req_pending);
>> +			mempool_free(rctx->subreq, ctx->subreq_pool);
>> +			atomic_dec(&rctx->req_pending);
>> +			return ret;
>> +		}
>> +	}
>> +
>> +	if (rctx->subreq)
>> +		mempool_free(rctx->subreq, ctx->subreq_pool);
>> +
>> +	if (atomic_dec_and_test(&rctx->req_pending))
>> +		return 0;
>> +	else
>> +		return -EINPROGRESS;
>> +}
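
The -EBUSY/-EINPROGRESS handling above keeps several sub-requests in flight
and counts them in rctx->req_pending. For comparison only (my sketch, assuming
a plain skcipher tfm, not how geniv_crypt() works), the usual pattern for a
purely synchronous caller using the crypto_wait_req() helper looks like this:

    #include <crypto/skcipher.h>
    #include <linux/crypto.h>

    static int one_sector_encrypt(struct crypto_skcipher *tfm,
                                  struct scatterlist *src,
                                  struct scatterlist *dst,
                                  unsigned int len, u8 *iv)
    {
            DECLARE_CRYPTO_WAIT(wait);
            struct skcipher_request *req;
            int err;

            req = skcipher_request_alloc(tfm, GFP_NOIO);
            if (!req)
                    return -ENOMEM;

            skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                          crypto_req_done, &wait);
            skcipher_request_set_crypt(req, src, dst, len, iv);
            /* sleeps until the driver finishes or drains its backlog */
            err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
            skcipher_request_free(req);
            return err;
    }
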
>> +
>> +static int geniv_skcipher_encrypt(struct skcipher_request *req)
>> +{
>> +	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>> +	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
>> +
>> +	return geniv_crypt(ctx, req, true);
>> +}
>> +
>> +static int geniv_skcipher_decrypt(struct skcipher_request *req)
>> +{
>> +	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>> +	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
>> +
>> +	return geniv_crypt(ctx, req, false);
>> +}
>> +
>> +static int geniv_aead_encrypt(struct aead_request *req)
>> +{
>> +	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
>> +	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
>> +
>> +	return geniv_crypt(ctx, req, true);
>> +}
>> +
>> +static int geniv_aead_decrypt(struct aead_request *req)
>> +{
>> +	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
>> +	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
>> +
>> +	return geniv_crypt(ctx, req, false);
>> +}
>> +
>> +/*
>> + * Workaround to parse the cipher algorithm name from a crypto API spec.
>> + * ctx->cipher is currently used only for ESSIV.
>> + * This should probably be done via crypto API calls (once available...)
>> + */
>> +static int geniv_blkdev_cipher(struct geniv_ctx *ctx, bool is_crypto_aead)
>> +{
>> +	const char *alg_name = NULL;
>> +	char *start, *end;
>> +
>> +	alg_name = ctx->ciphermode;
>> +	if (!alg_name)
>> +		return -EINVAL;
>> +
>> +	if (is_crypto_aead) {
>> +		alg_name = strchr(alg_name, ',');
>> +		if (alg_name)
>> +			alg_name++;
>> +		else
>> +			alg_name = ctx->ciphermode;
>> +	}
>> +
>> +	start = strchr(alg_name, '(');
>> +	end = strchr(alg_name, ')');
>> +
>> +	if (!start && !end) {
>> +		ctx->cipher = kstrdup(alg_name, GFP_KERNEL);
>> +		return ctx->cipher ? 0 : -ENOMEM;
>> +	}
>> +
>> +	if (!start || !end || ++start >= end)
>> +		return -EINVAL;
>> +
>> +	ctx->cipher = kzalloc(end - start + 1, GFP_KERNEL);
>> +	if (!ctx->cipher)
>> +		return -ENOMEM;
>> +
>> +	strncpy(ctx->cipher, start, end - start);
>> +
>> +	return 0;
>> +}
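
Worked examples of what the parser above extracts (annotation only, derived
from the code):

    /*
     *   ctx->ciphermode                   -> ctx->cipher
     *   "cbc(aes)"                        -> "aes"
     *   "xts(aes)"                        -> "aes"
     *   "aes"            (no parentheses) -> "aes"
     *   "authenc(hmac(sha256),cbc(aes))"  -> "aes"  (AEAD path: skip past the
     *                                       comma, then take the text between
     *                                       the next '(' and the first ')')
     */
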
>> +
>> +static int geniv_init_tfm(void *tfm_tmp, bool is_crypto_aead)
>> +{
>> +	struct geniv_ctx *ctx;
>> +	struct crypto_skcipher *tfm;
>> +	struct crypto_aead *tfm_aead;
>> +	unsigned int reqsize;
>> +	size_t iv_size_padding;
>> +	char *algname;
>> +	int psize, ret;
>> +
>> +	if (is_crypto_aead) {
>> +		tfm_aead = (struct crypto_aead *)tfm_tmp;
>> +		ctx = crypto_aead_ctx(tfm_aead);
>> +		algname = (char *) crypto_tfm_alg_name(crypto_aead_tfm(tfm_aead));
>> +	} else {
>> +		tfm = (struct crypto_skcipher *)tfm_tmp;
>> +		ctx = crypto_skcipher_ctx(tfm);
>> +		algname = (char *) crypto_tfm_alg_name(crypto_skcipher_tfm(tfm));
>> +	}
>> +
>> +	ctx->ciphermode = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
>> +	if (!ctx->ciphermode)
>> +		return -ENOMEM;
>> +
>> +	ctx->algname = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
>> +	if (!ctx->algname) {
>> +		ret = -ENOMEM;
>> +		goto free_ciphermode;
>> +	}
>> +
>> +	strlcpy(ctx->algname, algname, CRYPTO_MAX_ALG_NAME);
>> +	algname = ctx->algname;
>> +
>> +	/* Parse the algorithm name 'ivmode(ciphermode)' */
>> +	ctx->ivmode = strsep(&algname, "(");
>> +	strlcpy(ctx->ciphermode, algname, CRYPTO_MAX_ALG_NAME);
>> +	ctx->ciphermode[strlen(algname) - 1] = '\0';
>> +
>> +	DMDEBUG("ciphermode=%s, ivmode=%s\n", ctx->ciphermode, ctx->ivmode);
>> +
>> +	/*
>> +	 * Usually the underlying cipher instances would be spawned here, but
>> +	 * since the value of tfms_count (which equals the key_count) is not
>> +	 * known yet, create only one instance and delay the creation of the
>> +	 * remaining instances of the underlying cipher, e.g. 'cbc(aes)',
>> +	 * until the setkey operation is invoked.
>> +	 * The first instance created, ctx->tfm_child, is later assigned as
>> +	 * the first element of the array ctx->tfms. Creating at least one
>> +	 * instance here uncovers any allocation errors earlier than the
>> +	 * setkey operation, where the remaining instances are created.
>> +	 */
>> +	if (is_crypto_aead)
>> +		ctx->tfm_child.tfm_aead = crypto_alloc_aead(ctx->ciphermode, 0, 0);
>> +	else
>> +		ctx->tfm_child.tfm = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
>> +	if (IS_ERR(ctx->tfm_child.tfm)) {
>> +		ret = PTR_ERR(ctx->tfm_child.tfm);
>> +		DMERR("Failed to create cipher %s. err %d\n",
>> +		      ctx->ciphermode, ret);
>> +		goto free_algname;
>> +	}
>> +
>> +	/* Setup the current cipher's request structure */
>> +	if (is_crypto_aead) {
>> +		reqsize = sizeof(struct geniv_req_ctx) + __alignof__(struct geniv_req_ctx);
>> +		crypto_aead_set_reqsize(tfm_aead, reqsize);
>> +
>> +		ctx->iv_start = sizeof(struct geniv_subreq);
>> +		ctx->iv_start += crypto_aead_reqsize(ctx->tfm_child.tfm_aead);
>> +
>> +		ctx->iv_size = crypto_aead_ivsize(tfm_aead);
>> +	} else {
>> +		reqsize = sizeof(struct geniv_req_ctx) + __alignof__(struct geniv_req_ctx);
>> +		crypto_skcipher_set_reqsize(tfm, reqsize);
>> +
>> +		ctx->iv_start = sizeof(struct geniv_subreq);
>> +		ctx->iv_start += crypto_skcipher_reqsize(ctx->tfm_child.tfm);
>> +
>> +		ctx->iv_size = crypto_skcipher_ivsize(tfm);
>> +	}
>> +	/* at least a 64 bit sector number should fit in our buffer */
>> +	if (ctx->iv_size)
>> +		ctx->iv_size = max(ctx->iv_size,
>> +				  (unsigned int)(sizeof(u64) / sizeof(u8)));
>> +
>> +	if (is_crypto_aead) {
>> +		if (crypto_aead_alignmask(tfm_aead) < CRYPTO_MINALIGN) {
>> +			/* Allocate the padding exactly */
>> +			iv_size_padding = -ctx->iv_start
>> +					& crypto_aead_alignmask(ctx->tfm_child.tfm_aead);
>> +		} else {
>> +			/*
>> +			 * If the cipher requires greater alignment than kmalloc
>> +			 * alignment, we don't know the exact position of the
>> +			 * initialization vector. We must assume worst case.
>> +			 */
>> +			iv_size_padding = crypto_aead_alignmask(ctx->tfm_child.tfm_aead);
>> +		}
>> +	} else {
>> +		if (crypto_skcipher_alignmask(tfm) < CRYPTO_MINALIGN) {
>> +			iv_size_padding = -ctx->iv_start
>> +					& crypto_skcipher_alignmask(ctx->tfm_child.tfm);
>> +		} else {
>> +			iv_size_padding = crypto_skcipher_alignmask(ctx->tfm_child.tfm);
>> +		}
>> +	}
>> +
>> +	/* create memory pool for sub-request structure
>> +	 *  ...| IV + padding | original IV | original sec. number | bio tag offset |
>> +	 */
>> +	psize = ctx->iv_start + iv_size_padding + ctx->iv_size + ctx->iv_size +
>> +		sizeof(uint64_t) + sizeof(unsigned int);
>> +
>> +	ctx->subreq_pool = mempool_create_kmalloc_pool(MIN_IOS, psize);
>> +	if (!ctx->subreq_pool) {
>> +		ret = -ENOMEM;
>> +		DMERR("Could not allocate crypt sub-request mempool\n");
>> +		goto free_tfm;
>> +	}
>> +
>> +	ret = geniv_blkdev_cipher(ctx, is_crypto_aead);
>> +	if (ret < 0) {
>> +		DMERR("Cannot extract cipher name from %s\n", ctx->ciphermode);
>> +		goto free_tfm;
>> +	}
>> +
>> +	return 0;
>> +
>> +free_tfm:
>> +	if (is_crypto_aead)
>> +		crypto_free_aead(ctx->tfm_child.tfm_aead);
>> +	else
>> +		crypto_free_skcipher(ctx->tfm_child.tfm);
>> +free_algname:
>> +	kfree(ctx->algname);
>> +free_ciphermode:
>> +	kfree(ctx->ciphermode);
>> +	return ret;
>> +}
>> +
>> +static int geniv_skcipher_init_tfm(struct crypto_skcipher *tfm)
>> +{
>> +	return geniv_init_tfm(tfm, 0);
>> +}
>> +
>> +static int geniv_aead_init_tfm(struct crypto_aead *tfm)
>> +{
>> +	return geniv_init_tfm(tfm, 1);
>> +}
>> +
>> +static void geniv_exit_tfm(struct geniv_ctx *ctx)
>> +{
>> +	if (ctx->iv_gen_ops && ctx->iv_gen_ops->dtr)
>> +		ctx->iv_gen_ops->dtr(ctx);
>> +
>> +	mempool_destroy(ctx->subreq_pool);
>> +	geniv_free_tfms(ctx);
>> +	kzfree(ctx->ciphermode);
>> +	kzfree(ctx->algname);
>> +	kzfree(ctx->cipher);
>> +}
>> +
>> +static void geniv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
>> +{
>> +	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
>> +
>> +	geniv_exit_tfm(ctx);
>> +}
>> +
>> +static void geniv_aead_exit_tfm(struct crypto_aead *tfm)
>> +{
>> +	struct geniv_ctx *ctx = crypto_aead_ctx(tfm);
>> +
>> +	geniv_exit_tfm(ctx);
>> +}
>> +
>> +static void geniv_skcipher_free(struct skcipher_instance *inst)
>> +{
>> +	struct crypto_skcipher_spawn *spawn = skcipher_instance_ctx(inst);
>> +
>> +	crypto_drop_skcipher(spawn);
>> +	kfree(inst);
>> +}
>> +
>> +static void geniv_aead_free(struct aead_instance *inst)
>> +{
>> +	struct crypto_aead_spawn *spawn = aead_instance_ctx(inst);
>> +
>> +	crypto_drop_aead(spawn);
>> +	kfree(inst);
>> +}
>> +
>> +static int geniv_skcipher_create(struct crypto_template *tmpl,
>> +			struct rtattr **tb, char *algname)
>> +{
>> +	struct crypto_attr_type *algt;
>> +	struct skcipher_instance *inst;
>> +	struct skcipher_alg *alg;
>> +	struct crypto_skcipher_spawn *spawn;
>> +	const char *cipher_name;
>> +	int err;
>> +
>> +	algt = crypto_get_attr_type(tb);
>> +	if (IS_ERR(algt))
>> +		return PTR_ERR(algt);
>> +
>> +	cipher_name = crypto_attr_alg_name(tb[1]);
>> +
>> +	if (IS_ERR(cipher_name))
>> +		return PTR_ERR(cipher_name);
>> +
>> +	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
>> +	if (!inst)
>> +		return -ENOMEM;
>> +
>> +	spawn = skcipher_instance_ctx(inst);
>> +
>> +	crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
>> +	err = crypto_grab_skcipher(spawn, cipher_name, 0,
>> +				    crypto_requires_sync(algt->type,
>> +							 algt->mask));
>> +
>> +	if (err)
>> +		goto err_free_inst;
>> +
>> +	alg = crypto_spawn_skcipher_alg(spawn);
>> +
>> +	err = -EINVAL;
>> +
>> +	/* Only support block sizes that are a power of 2 */
>> +	if (!is_power_of_2(alg->base.cra_blocksize))
>> +		goto err_drop_spawn;
>> +
>> +	/* algname: essiv, base.cra_name: cbc(aes) */
>> +	err = -ENAMETOOLONG;
>> +	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
>> +		     algname, alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
>> +		goto err_drop_spawn;
>> +	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
>> +		     "%s(%s)", algname, alg->base.cra_driver_name) >=
>> +	    CRYPTO_MAX_ALG_NAME)
>> +		goto err_drop_spawn;
>> +
>> +	inst->alg.base.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
>> +	inst->alg.base.cra_priority = alg->base.cra_priority;
>> +	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
>> +	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
>> +	inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
>> +	inst->alg.ivsize = alg->base.cra_blocksize;
>> +	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
>> +	inst->alg.min_keysize = sizeof(struct geniv_key_info);
>> +	inst->alg.max_keysize = sizeof(struct geniv_key_info);
>> +
>> +	inst->alg.setkey = geniv_skcipher_setkey;
>> +	inst->alg.encrypt = geniv_skcipher_encrypt;
>> +	inst->alg.decrypt = geniv_skcipher_decrypt;
>> +
>> +	inst->alg.base.cra_ctxsize = sizeof(struct geniv_ctx);
>> +
>> +	inst->alg.init = geniv_skcipher_init_tfm;
>> +	inst->alg.exit = geniv_skcipher_exit_tfm;
>> +
>> +	inst->free = geniv_skcipher_free;
>> +
>> +	err = skcipher_register_instance(tmpl, inst);
>> +	if (err)
>> +		goto err_drop_spawn;
>> +
>> +out:
>> +	return err;
>> +
>> +err_drop_spawn:
>> +	crypto_drop_skcipher(spawn);
>> +err_free_inst:
>> +	kfree(inst);
>> +	goto out;
>> +}
>> +
>> +
>> +static int geniv_aead_create(struct crypto_template *tmpl,
>> +			struct rtattr **tb, char *algname)
>> +{
>> +	struct crypto_attr_type *algt;
>> +	struct aead_instance *inst;
>> +	struct aead_alg *alg;
>> +	struct crypto_aead_spawn *spawn;
>> +	const char *cipher_name;
>> +	int err;
>> +
>> +	algt = crypto_get_attr_type(tb);
>> +
>> +	cipher_name = crypto_attr_alg_name(tb[1]);
>> +	if (IS_ERR(cipher_name))
>> +		return PTR_ERR(cipher_name);
>> +
>> +	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
>> +	if (!inst)
>> +		return -ENOMEM;
>> +
>> +	spawn = aead_instance_ctx(inst);
>> +
>> +	crypto_set_aead_spawn(spawn, aead_crypto_instance(inst));
>> +	err = crypto_grab_aead(spawn, cipher_name, 0,
>> +				    crypto_requires_sync(algt->type,
>> +							 algt->mask));
>> +	if (err)
>> +		goto err_free_inst;
>> +
>> +	alg = crypto_spawn_aead_alg(spawn);
>> +
>> +	/* Only support block sizes that are a power of 2 */
>> +	if (!is_power_of_2(alg->base.cra_blocksize)) {
>> +		err = -EINVAL;
>> +		goto err_drop_spawn;
>> +	}
>> +
>> +	/* algname: essiv, base.cra_name: cbc(aes) */
>> +	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
>> +		     algname, alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME) {
>> +		err = -ENAMETOOLONG;
>> +		goto err_drop_spawn;
>> +	}
>> +
>> +	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
>> +		     "%s(%s)", algname, alg->base.cra_driver_name) >=
>> +	    CRYPTO_MAX_ALG_NAME) {
>> +		err = -ENAMETOOLONG;
>> +		goto err_drop_spawn;
>> +	}
>> +
>> +	inst->alg.base.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
>> +	inst->alg.base.cra_priority = alg->base.cra_priority;
>> +	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
>> +	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
>> +	inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
>> +	inst->alg.ivsize = crypto_aead_alg_ivsize(alg);
>> +	inst->alg.chunksize = crypto_aead_alg_chunksize(alg);
>> +	inst->alg.maxauthsize = crypto_aead_alg_maxauthsize(alg);
>> +
>> +	inst->alg.setkey = geniv_aead_setkey;
>> +	inst->alg.encrypt = geniv_aead_encrypt;
>> +	inst->alg.decrypt = geniv_aead_decrypt;
>> +
>> +	inst->alg.base.cra_ctxsize = sizeof(struct geniv_ctx);
>> +
>> +	inst->alg.init = geniv_aead_init_tfm;
>> +	inst->alg.exit = geniv_aead_exit_tfm;
>> +
>> +	inst->free = geniv_aead_free;
>> +
>> +	err = aead_register_instance(tmpl, inst);
>> +	if (err)
>> +		goto err_drop_spawn;
>> +
>> +	return 0;
>> +
>> +err_drop_spawn:
>> +	crypto_drop_aead(spawn);
>> +err_free_inst:
>> +	kfree(inst);
>> +	return err;
>> +}
>> +
>> +static int geniv_create(struct crypto_template *tmpl,
>> +			struct rtattr **tb, char *algname)
>> +{
>> +	if (!crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER))
>> +		return geniv_skcipher_create(tmpl, tb, algname);
>> +	else if (!crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD))
>> +		return geniv_aead_create(tmpl, tb, algname);
>> +	else
>> +		return -EINVAL;
>> +}
>> +
>> +static int geniv_template_create(struct crypto_template *tmpl,
>> +			       struct rtattr **tb)
>> +{
>> +	return geniv_create(tmpl, tb, tmpl->name);
>> +}
>> +
>> +#define DEFINE_CRYPTO_TEMPLATE(type) \
>> +	{ .name = type, \
>> +	.create = geniv_template_create, \
>> +	.module = THIS_MODULE, },
>> +
>> +static struct crypto_template geniv_tmpl[IV_TYPE_NUM] = {
>> +	DEFINE_CRYPTO_TEMPLATE("plain")
>> +	DEFINE_CRYPTO_TEMPLATE("plain64")
>> +	DEFINE_CRYPTO_TEMPLATE("essiv")
>> +	DEFINE_CRYPTO_TEMPLATE("benbi")
>> +	DEFINE_CRYPTO_TEMPLATE("null")
>> +	DEFINE_CRYPTO_TEMPLATE("lmk")
>> +	DEFINE_CRYPTO_TEMPLATE("tcw")
>> +	DEFINE_CRYPTO_TEMPLATE("random")
>> +};
>> +
>> +static int __init geniv_init(void)
>> +{
>> +	return crypto_register_template_array(geniv_tmpl, IV_TYPE_NUM);
>> +}
>> +
>> +static void __exit geniv_exit(void)
>> +{
>> +	crypto_unregister_template_array(geniv_tmpl, IV_TYPE_NUM);
>> +}
>> +
>> +module_init(geniv_init);
>> +module_exit(geniv_exit);
>> +
>> +MODULE_AUTHOR("Xiongfeng Wang <xiongfeng.wang@linaro.org>");
>> +MODULE_DESCRIPTION(DM_NAME " IV Generation Template ");
>> +MODULE_LICENSE("GPL");
>> diff --git a/include/crypto/geniv.h b/include/crypto/geniv.h
>> new file mode 100644
>> index 0000000..d8084fc
>> --- /dev/null
>> +++ b/include/crypto/geniv.h
>> @@ -0,0 +1,47 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * geniv.h: common interface for IV generation algorithms
>> + *
>> + * Copyright (C) 2018, Linaro
>> + *
>> + * This file defines the data structures the user should pass to the template.
>> + */
>> +
>> +#ifndef _CRYPTO_GENIV_H
>> +#define _CRYPTO_GENIV_H
>> +
>> +#include <linux/types.h>
>> +
>> +enum cipher_flags {
>> +	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cipher */
>> +	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
>> +};
>> +
>> +enum setkey_op {
>> +	SETKEY_OP_INIT,
>> +	SETKEY_OP_SET,
>> +	SETKEY_OP_WIPE,
>> +};
>> +
>> +struct geniv_key_info {
>> +	enum setkey_op keyop;
>> +	unsigned int tfms_count;
>> +	u8 *key;
>> +	char *ivopts;
>> +	sector_t iv_offset;
>> +	unsigned long cipher_flags;
>> +
>> +	unsigned short int sector_size;
>> +	unsigned int key_size;
>> +	unsigned int key_parts;
>> +	unsigned int key_mac_size;
>> +	unsigned int on_disk_tag_size;
>> +};
>> +
>> +struct geniv_req_info {
>> +	sector_t cc_sector;
>> +	unsigned int nents;
>> +	u8 *integrity_metadata;
>> +};
>> +
>> +#endif
>>
> 
> 
> .
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-18  7:30 [PATCH 0/5] crypto: add IV generation templates Xiongfeng Wang
                   ` (4 preceding siblings ...)
  2018-07-18  7:30 ` [PATCH 5/5] dm-crypt: modify dm-crypt to rely on " Xiongfeng Wang
@ 2018-07-18 10:59 ` Arnd Bergmann
  2018-07-18 15:34   ` Ard Biesheuvel
  5 siblings, 1 reply; 28+ messages in thread
From: Arnd Bergmann @ 2018-07-18 10:59 UTC (permalink / raw)
  To: Xiongfeng Wang
  Cc: Alasdair Kergon, Mike Snitzer, Herbert Xu, dm-devel,
	Linux Kernel Mailing List, Mark Brown, Jonathan Cameron,
	Ard Biesheuvel

On Wed, Jul 18, 2018 at 9:30 AM, Xiongfeng Wang
<wangxiongfeng2@huawei.com> wrote:
>
> I tested the performance of software implemented ciphers before and after
> applying this patchset. The performance didn't change much except for
> slight regression when writting. The detail information is as follows.
>
> The command I used:
> cryptsetup -y -c aes-xts-plain -s 256 --hash sha256 luksFormat /dev/sdd1
> cryptsetup -y -c aes-cbc-essiv:sha256 -s 256 --hash sha256 luksFormat /dev/sdd1
> cryptsetup -y -c aes-cbc-benbi -s 256 --hash sha256 luksFormat /dev/sdd1
>
> cryptsetup luksOpen /dev/sdd1 crypt_fun
> time dd if=/dev/mapper/crypt_fun of=/dev/null bs=1M count=500 iflag=direct
> time dd if=/dev/zero of=/dev/mapper/crypt_fun bs=1M count=500 oflag=direct
>
> Performance comparision:
> --------------------------------------------------------
> algorithms      | before applying   |   after applying
> --------------------------------------------------------
>                 |  read  | write    |  read  | write
> --------------------------------------------------------
> aes-xts-plain   | 145.34 | 145.09   | 145.89 | 144.2
> --------------------------------------------------------
> aes-cbc-essiv   | 146.87 | 144.62   | 146.74 | 143.41
> --------------------------------------------------------
> aes-cbc-benbi   | 146.03 | 144.74   | 146.77 | 144.46
> --------------------------------------------------------

Do you have any estimate of the expected gains for hardware
implementations?

Would it make sense to try out implementing aes-cbc-essiv
on the ARMv8 crypto extensions? I see that Ard has done
some prior work on aes-ccm in arch/arm64/crypto/aes-ce-ccm-*
that (AFAICT) has a similar goal of avoiding overhead by
combining the usual operations, so maybe the same can
be done here.

      Arnd

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18  8:16   ` Milan Broz
  2018-07-18  8:48     ` Xiongfeng Wang
@ 2018-07-18 13:11     ` Mike Snitzer
  2018-07-18 16:46     ` Mark Brown
  2 siblings, 0 replies; 28+ messages in thread
From: Mike Snitzer @ 2018-07-18 13:11 UTC (permalink / raw)
  To: Xiongfeng Wang, Milan Broz
  Cc: agk, herbert, dm-devel, broonie, linux-kernel, arnd, jonathan.cameron

On Wed, Jul 18 2018 at  4:16am -0400,
Milan Broz <gmazyland@gmail.com> wrote:

> On 18/07/18 09:30, Xiongfeng Wang wrote:
> > Currently, the IV generation algorithms are implemented in dm-crypt.c.
> > This patch implements these algorithms as template ciphers, so that
> > the dm-crypt layer can be simplified, and also these algorithms can be
> > implemented in hardware for performance.
> > 
> > Synchronous crypto requests to encrypt/decrypt a sector are processed
> > sequentially. Asynchronous requests, if processed in parallel, are freed
> > in the async callback.
> 
> So we are here again and moving INTERNAL dm-crypt functionality into
> cryptoapi.
> 
> The TCW and LMK IV generators make sense only for dm-crypt,
> for compatibility with old disk encryption mappings.
> 
> I strongly disagree to move this outside of dm-crypt.
> 
> Sorry, the last discussion was that it remains inside dm-crypt
> and it will only be registered through the crypto API.
> 
> And this for all files:
> 
> > + * Copyright (C) 2018, Linaro
> 
> It is NOT YOUR code! Please keep copyright and authors as in dm-crypt.
> 
> Milan
> 
> > 
> > Interface to the crypto layer - include/crypto/geniv.h
> > 
> > This patch is based on the patchset originally started by
> > Binoy Jayan <binoy.jayan@linaro.org>
> > ( crypto: Add IV generation algorithms
> > https://patchwork.kernel.org/patch/9803469/ )
> > 
> > Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org>
> > Signed-off-by: Xiongfeng Wang <wangxiongfeng2@linaro.org>
> > ---
> >  crypto/Kconfig         |    7 +
> >  crypto/Makefile        |    1 +
> >  crypto/geniv.c         | 2240 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  include/crypto/geniv.h |   47 +
> >  4 files changed, 2295 insertions(+)
> >  create mode 100644 crypto/geniv.c
> >  create mode 100644 include/crypto/geniv.h
> > 
> > diff --git a/crypto/Kconfig b/crypto/Kconfig
> > index f3e40ac..98f025a 100644
> > --- a/crypto/Kconfig
> > +++ b/crypto/Kconfig
> > @@ -257,6 +257,13 @@ config CRYPTO_GLUE_HELPER_X86
> >  config CRYPTO_ENGINE
> >  	tristate
> >  
> > +config CRYPTO_GENIV
> > +	tristate "IV Generator Template"
> > +	select CRYPTO_AEAD
> > +	select CRYPTO_BLKCIPHER
> > +	help
> > +	  Support for IV generator template, so that dm-crypt can rely on it.
> > +
> >  comment "Authenticated Encryption with Associated Data"
> >  
> >  config CRYPTO_CCM
> > diff --git a/crypto/Makefile b/crypto/Makefile
> > index 6d1d40e..1077d2f 100644
> > --- a/crypto/Makefile
> > +++ b/crypto/Makefile
> > @@ -23,6 +23,7 @@ crypto_blkcipher-y += skcipher.o
> >  obj-$(CONFIG_CRYPTO_BLKCIPHER2) += crypto_blkcipher.o
> >  obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
> >  obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
> > +obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
> >  
> >  crypto_hash-y += ahash.o
> >  crypto_hash-y += shash.o
> > diff --git a/crypto/geniv.c b/crypto/geniv.c
> > new file mode 100644
> > index 0000000..55d1212
> > --- /dev/null
> > +++ b/crypto/geniv.c
> > @@ -0,0 +1,2240 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * geniv.c - crypto template for generating IV
> > + *
> > + * Copyright (C) 2018, Linaro
> > + *
> > + * This file adds a crypto template to generate the IV, so that dm-crypt can
> > + * rely on it and remove its existing IV generation code.
> > + */
> > +
> > +#include <linux/completion.h>
> > +#include <linux/err.h>
> > +#include <linux/module.h>
> > +#include <linux/init.h>
> > +#include <linux/kernel.h>
> > +#include <linux/key.h>
> > +#include <linux/bio.h>
> > +#include <linux/blkdev.h>
> > +#include <linux/mempool.h>
> > +#include <linux/slab.h>
> > +#include <linux/crypto.h>
> > +#include <linux/atomic.h>
> > +#include <linux/scatterlist.h>
> > +#include <linux/ctype.h>
> > +#include <asm/page.h>
> > +#include <asm/unaligned.h>
> > +#include <crypto/hash.h>
> > +#include <crypto/md5.h>
> > +#include <crypto/algapi.h>
> > +#include <crypto/skcipher.h>
> > +#include <crypto/aead.h>
> > +#include <crypto/authenc.h>
> > +#include <crypto/geniv.h>
> > +#include <crypto/internal/aead.h>
> > +#include <crypto/internal/skcipher.h>
> > +#include <linux/rtnetlink.h> /* for struct rtattr and RTA macros only */
> > +#include <keys/user-type.h>
> > +#include <linux/backing-dev.h>
> > +#include <linux/device-mapper.h>
> > +#include <linux/log2.h>
> > +
> > +#define DM_MSG_PREFIX		"crypt"

I agree with Milan, the code should remain where it currently is.  If
you want to plumb in generic access to it, fine.  But crypto/geniv.c has
_no_ business defining DM_MSG_PREFIX.

And I'm sure there are other things that have no place in generic crypto
code.

Mike

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-18 10:59 ` [PATCH 0/5] crypto: add " Arnd Bergmann
@ 2018-07-18 15:34   ` Ard Biesheuvel
  2018-07-19 10:55     ` Xiongfeng Wang
  0 siblings, 1 reply; 28+ messages in thread
From: Ard Biesheuvel @ 2018-07-18 15:34 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Xiongfeng Wang, Alasdair Kergon, Mike Snitzer, Herbert Xu,
	dm-devel, Linux Kernel Mailing List, Mark Brown,
	Jonathan Cameron

On 18 July 2018 at 19:59, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wed, Jul 18, 2018 at 9:30 AM, Xiongfeng Wang
> <wangxiongfeng2@huawei.com> wrote:
>>
>> I tested the performance of software implemented ciphers before and after
>> applying this patchset. The performance didn't change much except for
>> slight regression when writting. The detail information is as follows.
>>
>> The command I used:
>> cryptsetup -y -c aes-xts-plain -s 256 --hash sha256 luksFormat /dev/sdd1
>> cryptsetup -y -c aes-cbc-essiv:sha256 -s 256 --hash sha256 luksFormat /dev/sdd1
>> cryptsetup -y -c aes-cbc-benbi -s 256 --hash sha256 luksFormat /dev/sdd1
>>
>> cryptsetup luksOpen /dev/sdd1 crypt_fun
>> time dd if=/dev/mapper/crypt_fun of=/dev/null bs=1M count=500 iflag=direct
>> time dd if=/dev/zero of=/dev/mapper/crypt_fun bs=1M count=500 oflag=direct
>>
>> Performance comparision:
>> --------------------------------------------------------
>> algorithms      | before applying   |   after applying
>> --------------------------------------------------------
>>                 |  read  | write    |  read  | write
>> --------------------------------------------------------
>> aes-xts-plain   | 145.34 | 145.09   | 145.89 | 144.2
>> --------------------------------------------------------
>> aes-cbc-essiv   | 146.87 | 144.62   | 146.74 | 143.41
>> --------------------------------------------------------
>> aes-cbc-benbi   | 146.03 | 144.74   | 146.77 | 144.46
>> --------------------------------------------------------
>
> Do you have any estimate of the expected gains for hardware
> implementations?
>
> Would it make sense to try out implementing aes-cbc-essiv
> on the ARMv8 crypto extensions? I see that Ard has done
> some prior work on aes-ccm in arch/arm64/crypto/aes-ce-ccm-*
> that (AFAICT) has a similar goal of avoiding overhead by
> combining the usual operations, so maybe the same can
> be done here.
>

I am having trouble understanding what exactly this series aims to achieve.

Calling into the crypto layer fewer times is a nice goal, but a disk
sector seems like a reasonable granularity for the dm layer to operate
on, and I don't think any hardware exists that operates on multi-sector
sequences, where it would pay off to amortize the latency of
invoking the hardware over an entire bio.

So in summary, you need to explain to us why we need this. It is
really very easy to convince people if your changes make things go
faster.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18  8:16   ` Milan Broz
  2018-07-18  8:48     ` Xiongfeng Wang
  2018-07-18 13:11     ` Mike Snitzer
@ 2018-07-18 16:46     ` Mark Brown
  2018-07-18 17:17       ` Milan Broz
  2 siblings, 1 reply; 28+ messages in thread
From: Mark Brown @ 2018-07-18 16:46 UTC (permalink / raw)
  To: Milan Broz
  Cc: Xiongfeng Wang, agk, snitzer, herbert, dm-devel, linux-kernel,
	arnd, jonathan.cameron

[-- Attachment #1: Type: text/plain, Size: 772 bytes --]

On Wed, Jul 18, 2018 at 10:16:05AM +0200, Milan Broz wrote:

> So we are here again and moving INTERNAL dm-crypt functionality into
> cryptoapi.

> The TCW,LMK  IVs generator make sense only for dm-crypt 
> for compatible old disk encryption mappings.

> I strongly disagree to move this outside of dm-crypt.

> Sorry, the last discussion was that it remains inside dm-crypt
> and it will be only registered through crypto API.

Sorry, I'm partly to blame for this in that I asked Xiongfeng to pick up
Binoy Jayan's old patch set.  I seem to have missed that particular part
of the discussion and so haven't forwarded it on to him - do you have a
link? I can't seem to see it in my local archives of the prior
discussions, but they might not be complete.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18 16:46     ` Mark Brown
@ 2018-07-18 17:17       ` Milan Broz
  2018-07-18 17:47         ` Mark Brown
  2018-07-19  1:46         ` Xiongfeng Wang
  0 siblings, 2 replies; 28+ messages in thread
From: Milan Broz @ 2018-07-18 17:17 UTC (permalink / raw)
  To: Mark Brown
  Cc: Xiongfeng Wang, agk, snitzer, herbert, dm-devel, linux-kernel,
	arnd, jonathan.cameron

On 18/07/18 18:46, Mark Brown wrote:
> On Wed, Jul 18, 2018 at 10:16:05AM +0200, Milan Broz wrote:
> 
>> So we are here again and moving INTERNAL dm-crypt functionality into
>> cryptoapi.
> 
>> The TCW,LMK  IVs generator make sense only for dm-crypt 
>> for compatible old disk encryption mappings.
> 
>> I strongly disagree to move this outside of dm-crypt.
> 
>> Sorry, the last discussion was that it remains inside dm-crypt
>> and it will be only registered through crypto API.
> 
> Sorry, I'm partly to blame for this in that I asked Xiongfeng to pick up
> Binoy Jayan's old patch set.  I seem to have missed that particular part
> of the discussion and so haven't forwarded it on to him - do you have a
> link, I can't seem to see it in my local archives of the prior
> discussions but they might not be complete?

I think the last iteration was this patch
https://lore.kernel.org/lkml/1498106510-19793-2-git-send-email-binoy.jayan@linaro.org/

But I still have some questions, because I really do not understand
the real reason for this patchset.
For now, it adds a lot of complexity for ... what?

1) If the reason is to make the crypto API include IV algorithms, I think we should
focus on universal algorithms (sequential aka plain64 in dmcrypt) as used
in XTS mode. ESSIV is intended for CBC mode only, and I think the general
consensus today is that XTS mode is preferred to CBC (despite its known problems).
But I see ESSIV used elsewhere, so maybe it makes sense to export this one as well.

But definitely not the other internal IVs - some IV generators inside dm-crypt
(namely TCW and LMK) do much more than IV generation - they modify the encryption mode.
This was a hack to support some FDE encryption modes (old Truecrypt and loopAES)
and that should not spread outside dm-crypt (and blame me for these code hacks :).

2) If the reason is performance, please provide numbers with the patch.
What I see now is that the performance is almost the same. So why are you doing it?
Is there any real hw that benefits from it?

I added 4k sector support in dmcrypt and IMO this helps much more
than some hw IV accelerations (AFAIK it is already used in some mainframe
accelerators this way because of performance).

Milan

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18 17:17       ` Milan Broz
@ 2018-07-18 17:47         ` Mark Brown
  2018-07-19  1:46         ` Xiongfeng Wang
  1 sibling, 0 replies; 28+ messages in thread
From: Mark Brown @ 2018-07-18 17:47 UTC (permalink / raw)
  To: Milan Broz
  Cc: Xiongfeng Wang, agk, snitzer, herbert, dm-devel, linux-kernel,
	arnd, jonathan.cameron, Ard Biesheuvel

[-- Attachment #1: Type: text/plain, Size: 1548 bytes --]

On Wed, Jul 18, 2018 at 07:17:45PM +0200, Milan Broz wrote:

> I think the last iteration was this patch
> https://lore.kernel.org/lkml/1498106510-19793-2-git-send-email-binoy.jayan@linaro.org/

Thanks!  I'd got v5 but v6 went AWOL for some reason :(

> 2) If the reason is performance, please provide numbers with the patch.
> What I see now is that the performance is almost the same. So why you are doing it?
> Any real hw that benefits from it?

The main focus was performance with accelerators; currently we can't use
the ESSIV acceleration which is implemented by some hardware.  Xiongfeng, we
probably need to discuss offline before sharing any actual numbers for
the hardware-accelerated case, since people can be sensitive about how
those are shared.  Software-only benchmarks are only really relevant in
showing that this won't harm existing users.

Some of the relevant systems are somewhat CPU constrained, so even if the
I/O performance remains fairly consistent with the accelerators in play,
it can still be a win if it frees up a useful amount of CPU for other
purposes.  That'd mean CPU usage is probably interesting to benchmark
as well, though I don't know that the systems Xiongfeng has access to are
particularly good models there.

> I added 4k sector support in dmcrypt and IMO this helps much more than
> some hw IV accelerations (AFAIK is is already used in some mainframe
> accelerators this way because of performance).

Right, that does help too (and an out of tree variation of this was one
of the original sources of this work).

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18 17:17       ` Milan Broz
  2018-07-18 17:47         ` Mark Brown
@ 2018-07-19  1:46         ` Xiongfeng Wang
  2018-07-19  8:50           ` Arnd Bergmann
  1 sibling, 1 reply; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-19  1:46 UTC (permalink / raw)
  To: Milan Broz, Mark Brown
  Cc: agk, snitzer, herbert, dm-devel, linux-kernel, arnd, jonathan.cameron

Hi,

On 2018/7/19 1:17, Milan Broz wrote:
> On 18/07/18 18:46, Mark Brown wrote:
>> On Wed, Jul 18, 2018 at 10:16:05AM +0200, Milan Broz wrote:
>>
>>> So we are here again and moving INTERNAL dm-crypt functionality into
>>> cryptoapi.

> (namely TCW and LMK) do much more that IV - they modify encryption mode.
> This was a hack to support some FDE encryption modes (old Truecrypt and loopAES)
> and that should not spread outside dm-crypt (and blame me for this code hacks :).
> 
> 2) If the reason is performance, please provide numbers with the patch.
> What I see now is that the performance is almost the same. So why you are doing it?
> Any real hw that benefits from it?

I added IV templates such as 'plain()' and 'benbi()'.
When applied to an existing algorithm such as 'aes-cbc', they produce new
algorithms such as 'aes-cbc-plain' and 'aes-cbc-benbi'.
This patch modifies dm-crypt to rely on the new algorithm 'aes-cbc-benbi'.
Dm-crypt passes the whole 'bio' to 'aes-cbc-benbi', rather than dividing
the bio into sectors and passing each sector to 'aes-cbc' in turn.

Because the internal implementation of the IV template 'benbi()' still
divides the whole bio into sectors, the performance is almost the same.
The purpose of this patch is to let dm-crypt rely on the new algorithm
'aes-cbc-benbi' and pass the whole bio to it.
If a hardware driver then implements this new algorithm, it can receive
the data of the whole bio at one time and return the processed data at
one time. I think this will decrease the overhead of passing each sector
separately. But the hardware needs to implement the new algorithm if it
wants to benefit from this.
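
For illustration, allocating and keying one of the templated algorithms
from kernel code would look roughly like the sketch below. This is a
simplified, untested example based on the interface in this patchset; the
crypto API name follows the "%s(%s)" construction in geniv_skcipher_create(),
and the field values are only examples.

/*
 * Sketch only: allocate the templated cipher and hand it the key
 * material via struct geniv_key_info (the instance's min/max keysize
 * is sizeof(struct geniv_key_info)).
 */
#include <linux/err.h>
#include <linux/string.h>
#include <crypto/skcipher.h>
#include <crypto/geniv.h>

static struct crypto_skcipher *example_alloc_geniv(u8 *key, unsigned int key_size)
{
	struct crypto_skcipher *tfm;
	struct geniv_key_info kinfo;
	int err;

	/* the "benbi" template wrapped around "cbc(aes)" */
	tfm = crypto_alloc_skcipher("benbi(cbc(aes))", 0, 0);
	if (IS_ERR(tfm))
		return tfm;

	memset(&kinfo, 0, sizeof(kinfo));
	kinfo.keyop = SETKEY_OP_SET;
	kinfo.tfms_count = 1;
	kinfo.key = key;
	kinfo.key_size = key_size;
	kinfo.key_parts = 1;
	kinfo.sector_size = 512;

	err = crypto_skcipher_setkey(tfm, (const u8 *)&kinfo, sizeof(kinfo));
	if (err) {
		crypto_free_skcipher(tfm);
		return ERR_PTR(err);
	}
	return tfm;
}

The dm layer would then submit whole-bio scatterlists to this tfm instead
of one request per sector.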

Thanks,
Xiongfeng
> 
> I added 4k sector support in dmcrypt and IMO this helps much more
> than some hw IV accelerations (AFAIK is is already used in some mainframe
> accelerators this way because of performance).
> 
> Milan
> 
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-19  1:46         ` Xiongfeng Wang
@ 2018-07-19  8:50           ` Arnd Bergmann
  2018-07-19  8:54             ` Herbert Xu
  2018-07-19 13:30             ` Mark Brown
  0 siblings, 2 replies; 28+ messages in thread
From: Arnd Bergmann @ 2018-07-19  8:50 UTC (permalink / raw)
  To: Xiongfeng Wang
  Cc: Milan Broz, Mark Brown, Alasdair Kergon, Mike Snitzer,
	Herbert Xu, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

On Thu, Jul 19, 2018 at 3:46 AM, Xiongfeng Wang
<wangxiongfeng2@huawei.com> wrote:
> Hi,
>
> On 2018/7/19 1:17, Milan Broz wrote:
>> On 18/07/18 18:46, Mark Brown wrote:
>>> On Wed, Jul 18, 2018 at 10:16:05AM +0200, Milan Broz wrote:
>>>
>>>> So we are here again and moving INTERNAL dm-crypt functionality into
>>>> cryptoapi.
>>
>> 2) If the reason is performance, please provide numbers with the patch.
>> What I see now is that the performance is almost the same. So why you are doing it?
>> Any real hw that benefits from it?
>
> Because the internal implementation of the IV template 'benbi()' is still
> dividing the whole bio into sectors, so the performance is almost the same.
> The purpose of this patch is to let dm-crypt rely on the new algorithm 'aes-cbc-benbi'
> and pass the whole bio to the new algorithm.
> And then if the hardware driver implements this new algorithm, it can get the data of
> the bio at one time, and return the processed data at one time.
> I think it will decrease the overhead of passing each sector alternatively.
> But the hardware need to implement the new algorithm if it want to benefit from this.

There seems to be some support for at least essiv(aes) in
drivers/crypto/ccree/cc_cipher.c, is that compatible with your essiv(*)
template, or is that something else?

       Arnd

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-19  8:50           ` Arnd Bergmann
@ 2018-07-19  8:54             ` Herbert Xu
  2018-07-19 13:30             ` Mark Brown
  1 sibling, 0 replies; 28+ messages in thread
From: Herbert Xu @ 2018-07-19  8:54 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Xiongfeng Wang, Milan Broz, Mark Brown, Alasdair Kergon,
	Mike Snitzer, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

On Thu, Jul 19, 2018 at 10:50:12AM +0200, Arnd Bergmann wrote:
>
> There seems to be some support for at least essiv(aes) in
> drivers/crypto/ccree/cc_cipher.c, is that compatible with your essiv(*)
> template, or is that something else?

Whatever it is it should be removed.  We should not be adding
hardware algorithms for which there is no software equivalent.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-18 15:34   ` Ard Biesheuvel
@ 2018-07-19 10:55     ` Xiongfeng Wang
  2018-07-19 14:08       ` Ard Biesheuvel
  0 siblings, 1 reply; 28+ messages in thread
From: Xiongfeng Wang @ 2018-07-19 10:55 UTC (permalink / raw)
  To: Ard Biesheuvel, Arnd Bergmann
  Cc: Alasdair Kergon, Mike Snitzer, Herbert Xu, dm-devel,
	Linux Kernel Mailing List, Mark Brown, Jonathan Cameron

Hi,

On 2018/7/18 23:34, Ard Biesheuvel wrote:
> On 18 July 2018 at 19:59, Arnd Bergmann <arnd@arndb.de> wrote:
>> On Wed, Jul 18, 2018 at 9:30 AM, Xiongfeng Wang
>> <wangxiongfeng2@huawei.com> wrote:
>>>
>>> I tested the performance of software implemented ciphers before and after
>>> applying this patchset. The performance didn't change much except for
>>> slight regression when writting. The detail information is as follows.
>>>
>>> The command I used:
>>> cryptsetup -y -c aes-xts-plain -s 256 --hash sha256 luksFormat /dev/sdd1
>>> cryptsetup -y -c aes-cbc-essiv:sha256 -s 256 --hash sha256 luksFormat /dev/sdd1
>>> cryptsetup -y -c aes-cbc-benbi -s 256 --hash sha256 luksFormat /dev/sdd1
>>>
>>> cryptsetup luksOpen /dev/sdd1 crypt_fun
>>> time dd if=/dev/mapper/crypt_fun of=/dev/null bs=1M count=500 iflag=direct
>>> time dd if=/dev/zero of=/dev/mapper/crypt_fun bs=1M count=500 oflag=direct
>>>
>>> Performance comparision:
>>> --------------------------------------------------------
>>> algorithms      | before applying   |   after applying
>>> --------------------------------------------------------
>>>                 |  read  | write    |  read  | write
>>> --------------------------------------------------------
>>> aes-xts-plain   | 145.34 | 145.09   | 145.89 | 144.2
>>> --------------------------------------------------------
>>> aes-cbc-essiv   | 146.87 | 144.62   | 146.74 | 143.41
>>> --------------------------------------------------------
>>> aes-cbc-benbi   | 146.03 | 144.74   | 146.77 | 144.46
>>> --------------------------------------------------------
>>
>> Do you have any estimate of the expected gains for hardware
>> implementations?
>>
>> Would it make sense to try out implementing aes-cbc-essiv
>> on the ARMv8 crypto extensions? I see that Ard has done
>> some prior work on aes-ccm in arch/arm64/crypto/aes-ce-ccm-*
>> that (AFAICT) has a similar goal of avoiding overhead by
>> combining the usual operations, so maybe the same can
>> be done here.
>>
> 
> I am having trouble understanding what exactly this series aims to achieve.
> 
> Calling into the crypto layer fewer times is a nice goal, but a disk
> sector seems like a reasonable granularity for the dm layer to operate
> on, and I don't think any hardware exists that operates on multi
> sector sequences, where it would pay off to amortize the latency of
> invoking the hardware over an entire bio.

I don't know much about crypto hardware, but I think crypto hardware can handle
more than one sector of data at a time. So I think passing the whole bio to the hardware
at one time will decrease the overhead of passing each sector separately.

Thanks,
Xiongfeng
> 
> So in summary, you need to explain to us why we need this. It is
> really very easy to convince people if your changes make things go
> faster.
> 
> .
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-19  8:50           ` Arnd Bergmann
  2018-07-19  8:54             ` Herbert Xu
@ 2018-07-19 13:30             ` Mark Brown
  1 sibling, 0 replies; 28+ messages in thread
From: Mark Brown @ 2018-07-19 13:30 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Xiongfeng Wang, Milan Broz, Alasdair Kergon, Mike Snitzer,
	Herbert Xu, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

[-- Attachment #1: Type: text/plain, Size: 347 bytes --]

On Thu, Jul 19, 2018 at 10:50:12AM +0200, Arnd Bergmann wrote:

> There seems to be some support for at least essiv(aes) in
> drivers/crypto/ccree/cc_cipher.c, is that compatible with your essiv(*)
> template, or is that something else?

Yes, that is in fact a driver for one of the pieces of hardware that
was one of the prompts to do this work.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-19 10:55     ` Xiongfeng Wang
@ 2018-07-19 14:08       ` Ard Biesheuvel
  2018-07-19 15:50         ` Mark Brown
  0 siblings, 1 reply; 28+ messages in thread
From: Ard Biesheuvel @ 2018-07-19 14:08 UTC (permalink / raw)
  To: Xiongfeng Wang
  Cc: Arnd Bergmann, Alasdair Kergon, Mike Snitzer, Herbert Xu,
	dm-devel, Linux Kernel Mailing List, Mark Brown,
	Jonathan Cameron

On 19 July 2018 at 19:55, Xiongfeng Wang <wangxiongfeng2@huawei.com> wrote:
> Hi,
>
> On 2018/7/18 23:34, Ard Biesheuvel wrote:
>> On 18 July 2018 at 19:59, Arnd Bergmann <arnd@arndb.de> wrote:
>>> On Wed, Jul 18, 2018 at 9:30 AM, Xiongfeng Wang
>>> <wangxiongfeng2@huawei.com> wrote:
>>>>
>>>> I tested the performance of software implemented ciphers before and after
>>>> applying this patchset. The performance didn't change much except for
>>>> slight regression when writting. The detail information is as follows.
>>>>
>>>> The command I used:
>>>> cryptsetup -y -c aes-xts-plain -s 256 --hash sha256 luksFormat /dev/sdd1
>>>> cryptsetup -y -c aes-cbc-essiv:sha256 -s 256 --hash sha256 luksFormat /dev/sdd1
>>>> cryptsetup -y -c aes-cbc-benbi -s 256 --hash sha256 luksFormat /dev/sdd1
>>>>
>>>> cryptsetup luksOpen /dev/sdd1 crypt_fun
>>>> time dd if=/dev/mapper/crypt_fun of=/dev/null bs=1M count=500 iflag=direct
>>>> time dd if=/dev/zero of=/dev/mapper/crypt_fun bs=1M count=500 oflag=direct
>>>>
>>>> Performance comparision:
>>>> --------------------------------------------------------
>>>> algorithms      | before applying   |   after applying
>>>> --------------------------------------------------------
>>>>                 |  read  | write    |  read  | write
>>>> --------------------------------------------------------
>>>> aes-xts-plain   | 145.34 | 145.09   | 145.89 | 144.2
>>>> --------------------------------------------------------
>>>> aes-cbc-essiv   | 146.87 | 144.62   | 146.74 | 143.41
>>>> --------------------------------------------------------
>>>> aes-cbc-benbi   | 146.03 | 144.74   | 146.77 | 144.46
>>>> --------------------------------------------------------
>>>
>>> Do you have any estimate of the expected gains for hardware
>>> implementations?
>>>
>>> Would it make sense to try out implementing aes-cbc-essiv
>>> on the ARMv8 crypto extensions? I see that Ard has done
>>> some prior work on aes-ccm in arch/arm64/crypto/aes-ce-ccm-*
>>> that (AFAICT) has a similar goal of avoiding overhead by
>>> combining the usual operations, so maybe the same can
>>> be done here.
>>>
>>
>> I am having trouble understanding what exactly this series aims to achieve.
>>
>> Calling into the crypto layer fewer times is a nice goal, but a disk
>> sector seems like a reasonable granularity for the dm layer to operate
>> on, and I don't think any hardware exists that operates on multi
>> sector sequences, where it would pay off to amortize the latency of
>> invoking the hardware over an entire bio.
>
> I don't know much about crypto hardware, but I think a crypto hardware can handle
> data more than one sector at one time. So I think passing the whole bio to the hardware
> at one time will decrease the overhead in passing each sector alternatively.
>

But this will only be the case if the accelerator is capable of doing
the IV generation and en/decryption of multiple contiguous sectors in
a single call. Otherwise, you are just shifting work from one layer to
the next.

So at this point, it would be useful to clarify what exactly these
accelerators are doing and how.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-19 14:08       ` Ard Biesheuvel
@ 2018-07-19 15:50         ` Mark Brown
  2018-07-20  1:02           ` Ard Biesheuvel
  0 siblings, 1 reply; 28+ messages in thread
From: Mark Brown @ 2018-07-19 15:50 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Xiongfeng Wang, Arnd Bergmann, Alasdair Kergon, Mike Snitzer,
	Herbert Xu, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

[-- Attachment #1: Type: text/plain, Size: 668 bytes --]

On Thu, Jul 19, 2018 at 11:08:52PM +0900, Ard Biesheuvel wrote:

> But this will only be the case if the accelerator is capable of doing
> the IV generation and en/decryption of multiple contiguous sectors in
> a single call. Otherwise, you are just shifting work from one layer to
> the next.

> So at this point, it would be useful to clarify what exactly these
> accelerators are doing and how.

Existing hardware can definitely do the IV generation, and I believe that
it can chain multiple sectors together, though I'd need to confirm this;
as mentioned elsewhere in the thread, the ccree driver is for one of
the relevant devices.  I've poked some relevant people.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/5] crypto: Add IV generation templates
  2018-07-18  7:30 ` [PATCH 4/5] crypto: Add IV generation templates Xiongfeng Wang
  2018-07-18  8:16   ` Milan Broz
@ 2018-07-19 18:14   ` kbuild test robot
  1 sibling, 0 replies; 28+ messages in thread
From: kbuild test robot @ 2018-07-19 18:14 UTC (permalink / raw)
  To: Xiongfeng Wang
  Cc: kbuild-all, agk, snitzer, herbert, dm-devel, linux-kernel,
	wangxiongfeng2, broonie, arnd, jonathan.cameron

Hi Xiongfeng,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on cryptodev/master]
[also build test WARNING on v4.18-rc5 next-20180719]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Xiongfeng-Wang/crypto-add-IV-generation-templates/20180719-034438
base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
reproduce:
        # apt-get install sparse
        make ARCH=x86_64 allmodconfig
        make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

>> crypto/geniv.c:303:9: sparse: Variable length array is used.
   crypto/geniv.c:568:9: sparse: Variable length array is used.
   crypto/geniv.c:729:9: sparse: Variable length array is used.
   include/linux/slab.h:631:13: sparse: undefined identifier '__builtin_mul_overflow'
   include/linux/slab.h:631:13: sparse: not a function <noident>
>> crypto/geniv.c:1482:17: sparse: incorrect type in assignment (different base types) @@    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident> @@    got long] [long long] [usertype] <noident> @@
   crypto/geniv.c:1482:17:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
   crypto/geniv.c:1482:17:    got restricted __le64 [usertype] <noident>
>> crypto/geniv.c:1543:17: sparse: cast to restricted __le64
   crypto/geniv.c:1580:17: sparse: incorrect type in assignment (different base types) @@    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident> @@    got long] [long long] [usertype] <noident> @@
   crypto/geniv.c:1580:17:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
   crypto/geniv.c:1580:17:    got restricted __le64 [usertype] <noident>
>> crypto/geniv.c:1912:32: sparse: expression using sizeof(void)
   include/linux/slab.h:631:13: sparse: call with no type!

vim +303 crypto/geniv.c

   298	
   299	/* Initialise ESSIV - compute salt but no local memory allocations */
   300	static int crypt_iv_essiv_init(struct geniv_ctx *ctx)
   301	{
   302		struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
 > 303		AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
   304		struct scatterlist sg;
   305		struct crypto_cipher *essiv_tfm;
   306		int err;
   307	
   308		sg_init_one(&sg, ctx->key, ctx->key_size);
   309		ahash_request_set_tfm(req, essiv->hash_tfm);
   310		ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
   311		ahash_request_set_crypt(req, &sg, essiv->salt, ctx->key_size);
   312	
   313		err = crypto_ahash_digest(req);
   314		ahash_request_zero(req);
   315		if (err)
   316			return err;
   317	
   318		essiv_tfm = ctx->iv_private;
   319	
   320		return crypto_cipher_setkey(essiv_tfm, essiv->salt,
   321				    crypto_ahash_digestsize(essiv->hash_tfm));
   322	}
   323	
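
For reference, the 'Variable length array is used' warning at
crypto/geniv.c:303 comes from AHASH_REQUEST_ON_STACK(). A possible rework
(a sketch only, not part of the posted patch) keeps the same logic as the
excerpt above but allocates the request from the heap:

static int crypt_iv_essiv_init_heap(struct geniv_ctx *ctx)
{
	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
	struct crypto_cipher *essiv_tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	int err;

	/* heap-allocated request instead of the VLA-sized on-stack one */
	req = ahash_request_alloc(essiv->hash_tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	sg_init_one(&sg, ctx->key, ctx->key_size);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
	ahash_request_set_crypt(req, &sg, essiv->salt, ctx->key_size);

	err = crypto_ahash_digest(req);
	ahash_request_free(req);
	if (err)
		return err;

	essiv_tfm = ctx->iv_private;

	return crypto_cipher_setkey(essiv_tfm, essiv->salt,
				    crypto_ahash_digestsize(essiv->hash_tfm));
}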

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-19 15:50         ` Mark Brown
@ 2018-07-20  1:02           ` Ard Biesheuvel
  2018-07-20 11:45             ` Mark Brown
  0 siblings, 1 reply; 28+ messages in thread
From: Ard Biesheuvel @ 2018-07-20  1:02 UTC (permalink / raw)
  To: Mark Brown
  Cc: Xiongfeng Wang, Arnd Bergmann, Alasdair Kergon, Mike Snitzer,
	Herbert Xu, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

On 20 July 2018 at 00:50, Mark Brown <broonie@kernel.org> wrote:
> On Thu, Jul 19, 2018 at 11:08:52PM +0900, Ard Biesheuvel wrote:
>
>> But this will only be the case if the accelerator is capable of doing
>> the IV generation and en/decryption of multiple contiguous sectors in
>> a single call. Otherwise, you are just shifting work from one layer to
>> the next.
>
>> So at this point, it would be useful to clarify what exactly these
>> accelerators are doing and how.
>
> Existing hardware can definitely do the IV generation and I believe that
> it can chain multiple sectors together though I'd need to confirm this,
> as mentioned elsewhere in the thread the ccree driver is for one of
> the relevant devices.  I've poked some relevant people.

As far as I can infer from the ccree driver source, IV generation and
en/decryption are separate operations, and given that each sector
requires both operations to be applied in sequence, letting the crypto
layer handle an entire bio does not have any benefit *at the moment*.

In fact, it seems to me that the ability to use protected AES keys is
much more appealing than any performance argument (including 'it may
be slower but at least it does not load the CPU'), so some background
on how such a change would enable this use case would be beneficial as
well to getting this adopted.

So my recommendation would be to focus on moving the IV generation
into the crypto layer, but conservatively, and not confuse people by
making additional changes that could theoretically improve
performance, but only on hardware that does not exist.

-- 
Ard.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-20  1:02           ` Ard Biesheuvel
@ 2018-07-20 11:45             ` Mark Brown
  2018-07-20 12:23               ` Ard Biesheuvel
  2018-07-22 13:39               ` Gilad Ben-Yossef
  0 siblings, 2 replies; 28+ messages in thread
From: Mark Brown @ 2018-07-20 11:45 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Xiongfeng Wang, Arnd Bergmann, Alasdair Kergon, Mike Snitzer,
	Herbert Xu, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

[-- Attachment #1: Type: text/plain, Size: 1876 bytes --]

On Fri, Jul 20, 2018 at 10:02:21AM +0900, Ard Biesheuvel wrote:
> On 20 July 2018 at 00:50, Mark Brown <broonie@kernel.org> wrote:

> > Existing hardware can definitely do the IV generation and I believe that
> > it can chain multiple sectors together though I'd need to confirm this,
> > as mentioned elsewhere in the thread the ccree driver is for one of
> > the relevant devices.  I've poked some relevant people.

> As far as I can infer from the ccree driver source, IV generation and
> en/decryption are separate operations, and given that each sector
> requires both operations to be applied in sequence, letting the crypto
> layer handle an entire bio does not have any benefit *at the moment*.

Interesting...  they were reporting some benefits from that with their
out-of-tree driver prior to upstreaming (there are other
implementations out there, but that's the only one I definitely know about).
I have to confess I didn't look at their in-tree driver; looking briefly
now, it looks awfully like the hardware should be able to chain IV
generation together with encryption without bothering the CPU, which
would be good enough.

> In fact, it seems to me that the ability to use protected AES keys is
> much more appealing than any performance argument (including 'it may
> be slower but at least it does not load the CPU'), so some background
> on how such a change would enable this use case would be beneficial as
> well to getting this adopted.

Right, that's another benefit which was on the radar for followup work.

> So my recommendation would be to focus on moving the IV generation
> into the crypto layer, but conservatively, and not confuse people by
> making additional changes that could theoretically improve
> performance, but only on hardware that does not exist.

It certainly seems like splitting things up will at least allow things
to progress.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-20 11:45             ` Mark Brown
@ 2018-07-20 12:23               ` Ard Biesheuvel
  2018-07-20 12:32                 ` Mark Brown
  2018-07-22 13:39               ` Gilad Ben-Yossef
  1 sibling, 1 reply; 28+ messages in thread
From: Ard Biesheuvel @ 2018-07-20 12:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Xiongfeng Wang, Arnd Bergmann, Alasdair Kergon, Mike Snitzer,
	Herbert Xu, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

On 20 July 2018 at 20:45, Mark Brown <broonie@kernel.org> wrote:
> On Fri, Jul 20, 2018 at 10:02:21AM +0900, Ard Biesheuvel wrote:
>> On 20 July 2018 at 00:50, Mark Brown <broonie@kernel.org> wrote:
>
>> > Existing hardware can definitely do the IV generation and I believe that
>> > it can chain multiple sectors together though I'd need to confirm this,
>> > as mentioned elsewhere in the thread the ccree driver is for one of
>> > the relevant devices.  I've poked some relevant people.
>
>> As far as I can infer from the ccree driver source, IV generation and
>> en/decryption are separate operations, and given that each sector
>> requires both operations to be applied in sequence, letting the crypto
>> layer handle an entire bio does not have any benefit *at the moment*.
>
> Interesting...  they were reporting some benefits from that with their
> out of tree driver prior to upstreaming (and there are other
> implementations out there, that's the only one I definitely know about).
> I have to confess I didn't look at their in tree driver, looking briefly
> now it looks awfully like the hardware should be able to chain IV
> generation together with encryption without bothering the CPU which
> would be good enough.
>

Indeed interesting. But afaict, that would still mean that the IV
generation transform and the payload transform would be expressed as a
single crypto algorithm, e.g. 'dm(essiv-foo(aes),gcm(aes))', or the DM
layer would still need to be involved in sequencing one operation
after the other, and I don't think any of that support is in the
current series. But I'm just a drive-by reviewer here, so please
correct me if I am wrong.

>> In fact, it seems to me that the ability to use protected AES keys is
>> much more appealing than any performance argument (including 'it may
>> be slower but at least it does not load the CPU'), so some background
>> on how such a change would enable this use case would be beneficial as
>> well to getting this adopted.
>
> Right, that's another benefit which was on the radar for followup work.
>
>> So my recommendation would be to focus on moving the IV generation
>> into the crypto layer, but conservatively, and not confuse people by
>> making additional changes that could theoretically improve
>> performance, but only on hardware that does not exist.
>
> It certainly seems like splitting things up will at least allow things
> to progress.

Indeed.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-20 12:23               ` Ard Biesheuvel
@ 2018-07-20 12:32                 ` Mark Brown
  0 siblings, 0 replies; 28+ messages in thread
From: Mark Brown @ 2018-07-20 12:32 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Xiongfeng Wang, Arnd Bergmann, Alasdair Kergon, Mike Snitzer,
	Herbert Xu, dm-devel, Linux Kernel Mailing List,
	Jonathan Cameron

[-- Attachment #1: Type: text/plain, Size: 1036 bytes --]

On Fri, Jul 20, 2018 at 09:23:15PM +0900, Ard Biesheuvel wrote:
> On 20 July 2018 at 20:45, Mark Brown <broonie@kernel.org> wrote:

> > I have to confess I didn't look at their in tree driver, looking briefly
> > now it looks awfully like the hardware should be able to chain IV
> > generation together with encryption without bothering the CPU which
> > would be good enough.

> Indeed interesting. But afaict, that would still mean that the IV
> generation transform and the payload transform would be expressed as a
> single crypto algorithm, e.g., 'dm(essiv-foo(aes),gcm(aes)), or the DM
> layer would still need to be involved in sequencing one operation
> after the other, and I don't think any of that support is in the
> current series. But I'm just a drive by reviewer here, so please
> correct me if I am wrong.

Yeah, I'm also a bit of a drive-by here and not seeing how the two are
joined up at present, but it may be a case of needing to get this and/or
other drivers fixed rather than the hardware lacking the capability.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-20 11:45             ` Mark Brown
  2018-07-20 12:23               ` Ard Biesheuvel
@ 2018-07-22 13:39               ` Gilad Ben-Yossef
  2018-07-23  0:13                 ` Ard Biesheuvel
  1 sibling, 1 reply; 28+ messages in thread
From: Gilad Ben-Yossef @ 2018-07-22 13:39 UTC (permalink / raw)
  To: Mark Brown
  Cc: Ard Biesheuvel, Xiongfeng Wang, Arnd Bergmann, Alasdair Kergon,
	Mike Snitzer, Herbert Xu, device-mapper development,
	Linux Kernel Mailing List, Jonathan Cameron

Hi there,

Sorry for the delay in response - the patch set was sent just as we shut
down the office for moving to a new location... :-)

On Fri, Jul 20, 2018 at 2:45 PM, Mark Brown <broonie@kernel.org> wrote:
> On Fri, Jul 20, 2018 at 10:02:21AM +0900, Ard Biesheuvel wrote:
>> On 20 July 2018 at 00:50, Mark Brown <broonie@kernel.org> wrote:
>
>> > Existing hardware can definitely do the IV generation and I believe that
>> > it can chain multiple sectors together though I'd need to confirm this,
>> > as mentioned elsewhere in the thread the ccree driver is for one of
>> > the relevant devices.  I've poked some relevant people.
>
>> As far as I can infer from the ccree driver source, IV generation and
>> en/decryption are separate operations, and given that each sector
>> requires both operations to be applied in sequence, letting the crypto
>> layer handle an entire bio does not have any benefit *at the moment*.
>

So there are two separate things that can be considered IV generation
in the ccree driver:
- The ability to generate a non-repeating IV for encryption modes of
operation that require it.
- The ability to compute an IV from the sector number for storage-related
modes of operation, such as XTS.

What you saw in the driver relates to the first, whereas we are
discussing making use of the second.

In essence, it means providing a key, a buffer to encrypt (that may span
a sector or possibly more) and the sector number, and the CryptoCell
hardware can compute the IV from there for blocks in the sector and
across sector boundaries (it knows the size of the sector, so it can
increment the sector number as needed) when fed a buffer that is bigger
than a single sector.

Consider getting a 4k page with a sector size of 512 bytes, and the
difference between 8 x 512 HW accesses and crypto API calls vs just one.
Of course, you can just set the sector size to 4k, and indeed a recent
change to dm-crypt allows that. You get a similar benefit, but at the
cost of having to read 4k of data even if you just need 1 byte...
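
To make that concrete, the loop below is a rough software model of
'computing the IV from the sector number and stepping it across a buffer
that spans several sectors', for a plain64-style IV. It is purely
illustrative: encrypt_sector() is a stand-in stub, not the CryptoCell
implementation or anything from this patchset.

/*
 * Illustrative model only: a plain64-style IV is the little-endian
 * 64-bit sector number, stepped once per 512-byte sector of the buffer.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SECTOR_SIZE 512

/* stand-in for the real per-sector cipher operation */
static void encrypt_sector(uint8_t *data, const uint8_t iv[16])
{
	for (size_t i = 0; i < SECTOR_SIZE; i++)
		data[i] ^= iv[i % 16];	/* placeholder transform */
}

static void encrypt_span(uint8_t *buf, size_t len, uint64_t sector)
{
	uint8_t iv[16];

	for (size_t off = 0; off + SECTOR_SIZE <= len; off += SECTOR_SIZE) {
		memset(iv, 0, sizeof(iv));
		for (int i = 0; i < 8; i++)
			iv[i] = (uint8_t)(sector >> (8 * i));
		encrypt_sector(buf + off, iv);
		sector++;	/* the stepping the hardware would do internally */
	}
}

The point under discussion is that the whole span is handed over in a
single call, with this per-sector stepping happening below the crypto API
boundary instead of in dm-crypt.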

I believe that other security hardware from other common vendors
possesses similar abilities - but I can't really speak for them.
I will note that the Android source code contains a hacked-up dm-crypt
that uses an out-of-tree version of a common vendor
driver to drive this ability.

What is being aimed at here is to do the same, but in an upstreamable,
community-reviewed and accepted fashion.

Of course, breaking it up into stages is fine - it's just that it is
hard to show the benefits if you don't do the full monty....

I hope I've managed to shed some light on the matter and would be
happy to supply more details if needed.

Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

values of β will give rise to dom!

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 0/5] crypto: add IV generation templates
  2018-07-22 13:39               ` Gilad Ben-Yossef
@ 2018-07-23  0:13                 ` Ard Biesheuvel
  0 siblings, 0 replies; 28+ messages in thread
From: Ard Biesheuvel @ 2018-07-23  0:13 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Mark Brown, Xiongfeng Wang, Arnd Bergmann, Alasdair Kergon,
	Mike Snitzer, Herbert Xu, device-mapper development,
	Linux Kernel Mailing List, Jonathan Cameron

On 22 July 2018 at 22:39, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
> Hi there,
>
> Sorry for delay in response - the patch set was sent just as we shut
> down the office for moving to a new location... :-)
>
> On Fri, Jul 20, 2018 at 2:45 PM, Mark Brown <broonie@kernel.org> wrote:
>> On Fri, Jul 20, 2018 at 10:02:21AM +0900, Ard Biesheuvel wrote:
>>> On 20 July 2018 at 00:50, Mark Brown <broonie@kernel.org> wrote:
>>
>>> > Existing hardware can definitely do the IV generation and I believe that
>>> > it can chain multiple sectors together though I'd need to confirm this,
>>> > as mentioned elsewhere in the thread the ccree driver is for one of
>>> > the relevant devices.  I've poked some relevant people.
>>
>>> As far as I can infer from the ccree driver source, IV generation and
>>> en/decryption are separate operations, and given that each sector
>>> requires both operations to be applied in sequence, letting the crypto
>>> layer handle an entire bio does not have any benefit *at the moment*.
>>
>
> So there are two separate things that can be considered IV generation
> in the ccree driver:
> - The ability to generate a non-repeating IV for encryption modes of
> operation that require it.
> - The ability to compute an IV from the sector number for storage-related
> modes of operation, such as XTS.
>
> What you saw in the driver relates to the first, whereas we are
> discussing making use of the second.
>
> In essence, it means providing a key, a buffer to encrypt (which may span
> a sector or possibly more) and the sector number, and the CryptoCell
> hardware can compute the IVs from there on for the blocks in the sector
> and across sector boundaries (it knows the sector size, so it can
> increment the sector number as needed) when fed a buffer that is bigger
> than a single sector.
>
> Consider getting a 4k page with a sector size of 512 bytes, and the
> difference between 8 x 512-byte HW accesses and crypto API calls
> versus just one. Of course, you can just set the sector size to 4k, and
> indeed a recent change to dm-crypt allows that. You get a similar
> benefit, but at the cost of having to read 4k of data even if you just
> need 1 byte...
>
> I believe that security hardware from other common vendors possesses
> similar abilities - but I can't really speak for them.
> I will note that the Android source code contains a hacked-up dm-crypt
> that uses an out-of-tree version of a common vendor
> driver to drive this ability.
>
> What is being aimed at here is to do the same, but in an upstreamable,
> community-reviewed and accepted fashion.
>
> Of course, breaking it up into stages is fine - it's just that it is
> hard to show the benefits if you don't do the full monty...
>
> I hope I've managed to shed some light on the matter and would be
> happy to supply more details if needed.
>

Thanks Gilad.

So are you saying the hardware can apply the essiv algos in
drivers/crypto/ccree/cc_cipher.c (as well as perform the en/decryption
itself) on multiple subsequent sectors in one single invocation? If
that is the case, then I stand corrected, and it is absolutely useful
to increase the granularity at which the data is passed to the
hardware.
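
(For reference, the per-sector ESSIV derivation in question is roughly
salt = Hash(key), IV(sector) = E_salt(sector). The snippet below is a
simplified sketch of that derivation, not code taken from cc_cipher.c
or dm-crypt, and the helper name is made up for illustration.)

#include <linux/crypto.h>
#include <linux/string.h>
#include <asm/unaligned.h>

/* essiv_tfm is assumed to be a block cipher (crypto_cipher) already
 * keyed with the hashed salt; the IV is the encryption of the 64-bit
 * little-endian sector number, zero-padded to the block size.
 */
static void essiv_sector_iv(struct crypto_cipher *essiv_tfm,
			    u64 sector, u8 iv[16])
{
	memset(iv, 0, 16);
	put_unaligned_le64(sector, iv);
	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
}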

But I still think we should keep this as a separate change in the
series, and it should describe clearly what the benefits are, and
which currently known hardware can make use of it using which
combination of algorithms.

Thanks,
Ard.

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2018-07-23  0:13 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-18  7:30 [PATCH 0/5] crypto: add IV generation templates Xiongfeng Wang
2018-07-18  7:30 ` [PATCH 1/5] crypto: api - introduce API to (un)register a array of templates Xiongfeng Wang
2018-07-18  7:30 ` [PATCH 2/5] crypto: ccm - use template array registering API to simplify the code Xiongfeng Wang
2018-07-18  7:30 ` [PATCH 3/5] crypto: gcm " Xiongfeng Wang
2018-07-18  7:30 ` [PATCH 4/5] crypto: Add IV generation templates Xiongfeng Wang
2018-07-18  8:16   ` Milan Broz
2018-07-18  8:48     ` Xiongfeng Wang
2018-07-18 13:11     ` Mike Snitzer
2018-07-18 16:46     ` Mark Brown
2018-07-18 17:17       ` Milan Broz
2018-07-18 17:47         ` Mark Brown
2018-07-19  1:46         ` Xiongfeng Wang
2018-07-19  8:50           ` Arnd Bergmann
2018-07-19  8:54             ` Herbert Xu
2018-07-19 13:30             ` Mark Brown
2018-07-19 18:14   ` kbuild test robot
2018-07-18  7:30 ` [PATCH 5/5] dm-crypt: modify dm-crypt to rely on " Xiongfeng Wang
2018-07-18 10:59 ` [PATCH 0/5] crypto: add " Arnd Bergmann
2018-07-18 15:34   ` Ard Biesheuvel
2018-07-19 10:55     ` Xiongfeng Wang
2018-07-19 14:08       ` Ard Biesheuvel
2018-07-19 15:50         ` Mark Brown
2018-07-20  1:02           ` Ard Biesheuvel
2018-07-20 11:45             ` Mark Brown
2018-07-20 12:23               ` Ard Biesheuvel
2018-07-20 12:32                 ` Mark Brown
2018-07-22 13:39               ` Gilad Ben-Yossef
2018-07-23  0:13                 ` Ard Biesheuvel
