* [RFC PATCH v4]  IV Generation algorithms for dm-crypt
@ 2017-02-07 10:35 Binoy Jayan
  2017-02-07 10:35 ` [RFC PATCH v4] crypto: Add IV generation algorithms Binoy Jayan
  2017-02-08  7:32 ` [RFC PATCH v4] IV Generation algorithms for dm-crypt Gilad Ben-Yossef
  0 siblings, 2 replies; 17+ messages in thread
From: Binoy Jayan @ 2017-02-07 10:35 UTC (permalink / raw)
  To: Oded, Ofir
  Cc: Herbert Xu, David S. Miller, linux-crypto, Mark Brown,
	Arnd Bergmann, linux-kernel, Alasdair Kergon, Mike Snitzer,
	dm-devel, Shaohua Li, linux-raid, Rajendra, Milan Broz, Gilad,
	Binoy Jayan

===============================================================================
dm-crypt optimization for larger block sizes
===============================================================================

Currently, the IV generation algorithms are implemented in dm-crypt.c. The goal
is to move these algorithms from the dm layer to the kernel crypto layer by
implementing them as template ciphers, so they can be layered on top of
algorithms like aes and combined with modes such as cbc, ecb, etc. As part of
this patchset, the IV generation code is moved from the dm layer to the crypto
layer, and the dm layer is adapted to send a whole 'bio' (as defined in the
block layer) at a time. Each bio contains the in-memory representation of
physically contiguous disk blocks. Since the bio itself may not be contiguous
in main memory, the dm layer sets up a chained scatterlist of these blocks,
split into physically contiguous segments in memory, so that DMA can be
performed.
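
For example, once these templates are registered, a user such as dm-crypt can
instantiate the IV mode together with the cipher mode by name. A minimal
sketch, assuming the template names registered by this patch (error handling
trimmed):

	struct crypto_skcipher *tfm;

	tfm = crypto_alloc_skcipher("essiv(cbc(aes))", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);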

One challenge in doing so is that the IVs are generated from a 512-byte sector
number, which in effect limits the encryption block size to 512 bytes. This
should not be a problem when hardware with IV generation support is used. The
geniv template itself splits the segments into 512-byte sectors so that it can
choose the IV based on the sector number; hardware could model this more
efficiently by not splitting up the segments in the bio at all.
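
As a simplified sketch of what the geniv template does internally (condensed
from geniv_crypt() in the patch; sub-request allocation, async completion and
error handling omitted), each 512-byte chunk of the scatterlist gets its own
sub-request and an IV derived from the incrementing sector number:

	for (i = 0; i < sectors; i++) {
		bytes = geniv_iter_block(req, subreq, rctx, &segno, &done);
		if (bytes == 0)
			break;
		ret = ctx->iv_gen_ops->generator(ctx, rctx, subreq);
		skcipher_request_set_crypt(&subreq->req, &subreq->src,
					   &subreq->dst, bytes, rctx->iv);
		ret = encrypt ? crypto_skcipher_encrypt(&subreq->req)
			      : crypto_skcipher_decrypt(&subreq->req);
		rctx->iv_sector++;
	}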

Another challenge is that dm-crypt has an option to use multiple keys, with the
key selected based on the sector number. If the whole bio were encrypted /
decrypted with a single key, the encrypted volumes would not be compatible with
the original dm-crypt [without the changes]. The key selection code is
therefore moved to the crypto layer, so that neighbouring sectors are still
encrypted with different keys.
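
Since the masking in geniv_alloc_subreq() assumes the key count is a power of
two, the per-sector key selection reduces to picking a child tfm by sector
number:

	key_index = rctx->iv_sector & (ctx->tfms_count - 1);
	skcipher_request_set_tfm(sreq, ctx->tfms[key_index]);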

The dm layer allocates space for the IV. Hardware drivers can choose to use
this space to generate their IVs sequentially or to allocate it on their own.
This could be moved to the crypto layer too; that decision is postponed until
the requirements for integrating Milan's changes are clear.

Interface to the crypto layer - include/crypto/geniv.h
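
The structures passed across this interface, approximately as the patch uses
them (the exact declarations live in geniv.h, which is not quoted in this
excerpt, so field order and types here are an approximation):

	enum setkey_op {
		SETKEY_OP_INIT,
		SETKEY_OP_SET,
		SETKEY_OP_WIPE,
	};

	struct geniv_key_info {
		enum setkey_op keyop;
		unsigned int tfms_count;
		u8 *key;
		unsigned int key_size;
		unsigned int key_parts;
		char *ivopts;
	};

	struct geniv_req_info {
		bool is_write;
		sector_t iv_sector;
		unsigned int nents;
		u8 *iv;
	};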

Revisions:
----------

v1: https://patchwork.kernel.org/patch/9439175
v2: https://patchwork.kernel.org/patch/9471923
v3: https://lkml.org/lkml/2017/1/18/170

v3 --> v4
----------
Fix for the bug reported by Gilad Ben-Yossef: the variably-sized '__ctx'
member at the end of 'struct skcipher_request req' overflowed into the
'struct scatterlist src' element that immediately followed 'req' in
'struct geniv_subreq', corrupting 'src'.
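
In v4, 'struct geniv_subreq' therefore places the variably-sized
skcipher_request last, so that its request context cannot overwrite other
members (copied from the patch below):

	struct geniv_subreq {
		struct scatterlist src;
		struct scatterlist dst;
		int n;
		struct geniv_req_ctx *rctx;
		struct skcipher_request req CRYPTO_MINALIGN_ATTR;
	};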

v2 --> v3
----------

1. Moved the IV algorithms in dm-crypt.c for control
2. Key management code moved from the dm layer to the crypto layer
   so that the cipher instance can be selected depending on the key_index
3. Revision v2 created a scatterlist node for every sector in the bio.
   This is changed to create only one scatterlist node to reduce the memory
   footprint. Synchronous requests are processed sequentially; asynchronous
   requests are processed in parallel and freed in the async callback.
4. Changed allocation of sub-requests to use a mempool (see the snippet below)
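
   The sub-request pool and the per-sector allocation, condensed from
   geniv_init_tfm() and geniv_alloc_subreq() in the patch:

	/* in geniv_init_tfm(): */
	psize = sizeof(struct geniv_subreq) +
		crypto_skcipher_reqsize(ctx->child);
	ctx->subreq_pool = mempool_create_kmalloc_pool(MIN_IOS, psize);

	/* in geniv_alloc_subreq(): */
	rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);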

v1 --> v2
----------

1. dm-crypt changes to process larger block sizes (one segment in a bio)
2. Incorporated changes w.r.t. comments from Herbert.

Binoy Jayan (1):
  crypto: Add IV generation algorithms

 drivers/md/dm-crypt.c  | 1894 ++++++++++++++++++++++++++++++++++--------------
 include/crypto/geniv.h |   47 ++
 2 files changed, 1402 insertions(+), 539 deletions(-)
 create mode 100644 include/crypto/geniv.h

-- 
Binoy Jayan


* [RFC PATCH v4] crypto: Add IV generation algorithms
  2017-02-07 10:35 [RFC PATCH v4] IV Generation algorithms for dm-crypt Binoy Jayan
@ 2017-02-07 10:35 ` Binoy Jayan
  2017-02-08  7:32 ` [RFC PATCH v4] IV Generation algorithms for dm-crypt Gilad Ben-Yossef
  1 sibling, 0 replies; 17+ messages in thread
From: Binoy Jayan @ 2017-02-07 10:35 UTC (permalink / raw)
  To: Oded, Ofir
  Cc: Herbert Xu, David S. Miller, linux-crypto, Mark Brown,
	Arnd Bergmann, linux-kernel, Alasdair Kergon, Mike Snitzer,
	dm-devel, Shaohua Li, linux-raid, Rajendra, Milan Broz, Gilad,
	Binoy Jayan

Currently, the IV generation algorithms are implemented in dm-crypt.c.
The goal is to move these algorithms from the dm layer to the kernel
crypto layer by implementing them as template ciphers so they can be
implemented in hardware for performance. As part of this patchset, the
IV generation code is moved from the dm layer to the crypto layer, and
the dm layer is adapted to send a whole 'bio' (as defined in the block
layer) at a time. Each bio contains an in-memory representation of
physically contiguous disk blocks. The dm layer sets up a chained
scatterlist of these blocks, split into physically contiguous segments
in memory, so that DMA can be performed. The key management code is also
moved from the dm layer to the crypto layer, since the key selection for
encrypting neighbouring sectors depends on the key count.

Synchronous crypto requests to encrypt/decrypt a sector are processed
sequentially. Asynchronous requests are processed in parallel and freed
in the async callback. The dm layer allocates space for the IV; hardware
implementations can choose to use this space to generate their IVs
sequentially or to allocate it on their own.
Interface to the crypto layer - include/crypto/geniv.h
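
A minimal sketch of how the dm layer is expected to drive this interface (the
variable names here are illustrative assumptions, not the literal dm-crypt
call site): the caller packs the per-request parameters into a struct
geniv_req_info and passes its address through the IV pointer, which
geniv_crypt() then unpacks:

	struct geniv_req_info rinfo = {
		.is_write  = (bio_data_dir(bio) == WRITE),
		.iv_sector = sector,
		.nents     = nents,
		.iv        = iv_space,
	};

	skcipher_request_set_crypt(req, sg_in, sg_out, bytes, (u8 *)&rinfo);
	ret = rinfo.is_write ? crypto_skcipher_encrypt(req)
			     : crypto_skcipher_decrypt(req);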

Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org>
---
 drivers/md/dm-crypt.c  | 1894 ++++++++++++++++++++++++++++++++++--------------
 include/crypto/geniv.h |   47 ++
 2 files changed, 1402 insertions(+), 539 deletions(-)
 create mode 100644 include/crypto/geniv.h

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 7c6c572..8540c0f 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -32,170 +32,113 @@
 #include <crypto/algapi.h>
 #include <crypto/skcipher.h>
 #include <keys/user-type.h>
-
 #include <linux/device-mapper.h>
-
-#define DM_MSG_PREFIX "crypt"
-
-/*
- * context holding the current state of a multi-part conversion
- */
-struct convert_context {
-	struct completion restart;
-	struct bio *bio_in;
-	struct bio *bio_out;
-	struct bvec_iter iter_in;
-	struct bvec_iter iter_out;
-	sector_t cc_sector;
-	atomic_t cc_pending;
-	struct skcipher_request *req;
+#include <crypto/internal/skcipher.h>
+#include <linux/backing-dev.h>
+#include <linux/log2.h>
+#include <crypto/geniv.h>
+
+#define DM_MSG_PREFIX		"crypt"
+#define MAX_SG_LIST		(BIO_MAX_PAGES * 8)
+#define MIN_IOS			64
+#define LMK_SEED_SIZE		64 /* hash + 0 */
+#define TCW_WHITENING_SIZE	16
+
+struct geniv_ctx;
+struct geniv_req_ctx;
+
+/* Sub request for each of the skcipher_request's for a segment */
+struct geniv_subreq {
+	struct scatterlist src;
+	struct scatterlist dst;
+	int n;
+	struct geniv_req_ctx *rctx;
+	struct skcipher_request req CRYPTO_MINALIGN_ATTR;
 };
 
-/*
- * per bio private data
- */
-struct dm_crypt_io {
-	struct crypt_config *cc;
-	struct bio *base_bio;
-	struct work_struct work;
-
-	struct convert_context ctx;
-
-	atomic_t io_pending;
-	int error;
-	sector_t sector;
-
-	struct rb_node rb_node;
-} CRYPTO_MINALIGN_ATTR;
-
-struct dm_crypt_request {
-	struct convert_context *ctx;
-	struct scatterlist sg_in;
-	struct scatterlist sg_out;
+struct geniv_req_ctx {
+	struct geniv_subreq *subreq;
+	bool is_write;
 	sector_t iv_sector;
+	unsigned int nents;
+	u8 *iv;
+	struct completion restart;
+	atomic_t req_pending;
+	struct skcipher_request *req;
 };
 
-struct crypt_config;
-
 struct crypt_iv_operations {
-	int (*ctr)(struct crypt_config *cc, struct dm_target *ti,
-		   const char *opts);
-	void (*dtr)(struct crypt_config *cc);
-	int (*init)(struct crypt_config *cc);
-	int (*wipe)(struct crypt_config *cc);
-	int (*generator)(struct crypt_config *cc, u8 *iv,
-			 struct dm_crypt_request *dmreq);
-	int (*post)(struct crypt_config *cc, u8 *iv,
-		    struct dm_crypt_request *dmreq);
+	int (*ctr)(struct geniv_ctx *ctx);
+	void (*dtr)(struct geniv_ctx *ctx);
+	int (*init)(struct geniv_ctx *ctx);
+	int (*wipe)(struct geniv_ctx *ctx);
+	int (*generator)(struct geniv_ctx *ctx,
+			 struct geniv_req_ctx *rctx,
+			 struct geniv_subreq *subreq);
+	int (*post)(struct geniv_ctx *ctx,
+		    struct geniv_req_ctx *rctx,
+		    struct geniv_subreq *subreq);
 };
 
-struct iv_essiv_private {
+struct geniv_essiv_private {
 	struct crypto_ahash *hash_tfm;
 	u8 *salt;
 };
 
-struct iv_benbi_private {
+struct geniv_benbi_private {
 	int shift;
 };
 
-#define LMK_SEED_SIZE 64 /* hash + 0 */
-struct iv_lmk_private {
+struct geniv_lmk_private {
 	struct crypto_shash *hash_tfm;
 	u8 *seed;
 };
 
-#define TCW_WHITENING_SIZE 16
-struct iv_tcw_private {
+struct geniv_tcw_private {
 	struct crypto_shash *crc32_tfm;
 	u8 *iv_seed;
 	u8 *whitening;
 };
 
-/*
- * Crypt: maps a linear range of a block device
- * and encrypts / decrypts at the same time.
- */
-enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
-	     DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
-
-/*
- * The fields in here must be read only after initialization.
- */
-struct crypt_config {
-	struct dm_dev *dev;
-	sector_t start;
-
-	/*
-	 * pool for per bio private data, crypto requests and
-	 * encryption requeusts/buffer pages
-	 */
-	mempool_t *req_pool;
-	mempool_t *page_pool;
-	struct bio_set *bs;
-	struct mutex bio_alloc_lock;
-
-	struct workqueue_struct *io_queue;
-	struct workqueue_struct *crypt_queue;
-
-	struct task_struct *write_thread;
-	wait_queue_head_t write_thread_wait;
-	struct rb_root write_tree;
-
+struct geniv_ctx {
+	unsigned int tfms_count;
+	struct crypto_skcipher *child;
+	struct crypto_skcipher **tfms;
+	char *ivmode;
+	unsigned int iv_size;
+	char *ivopts;
 	char *cipher;
-	char *cipher_string;
-	char *key_string;
-
+	char *ciphermode;
 	const struct crypt_iv_operations *iv_gen_ops;
 	union {
-		struct iv_essiv_private essiv;
-		struct iv_benbi_private benbi;
-		struct iv_lmk_private lmk;
-		struct iv_tcw_private tcw;
+		struct geniv_essiv_private essiv;
+		struct geniv_benbi_private benbi;
+		struct geniv_lmk_private lmk;
+		struct geniv_tcw_private tcw;
 	} iv_gen_private;
-	sector_t iv_offset;
-	unsigned int iv_size;
-
-	/* ESSIV: struct crypto_cipher *essiv_tfm */
 	void *iv_private;
-	struct crypto_skcipher **tfms;
-	unsigned tfms_count;
-
-	/*
-	 * Layout of each crypto request:
-	 *
-	 *   struct skcipher_request
-	 *      context
-	 *      padding
-	 *   struct dm_crypt_request
-	 *      padding
-	 *   IV
-	 *
-	 * The padding is added so that dm_crypt_request and the IV are
-	 * correctly aligned.
-	 */
-	unsigned int dmreq_start;
-
-	unsigned int per_bio_data_size;
-
-	unsigned long flags;
+	struct crypto_skcipher *tfm;
+	mempool_t *subreq_pool;
 	unsigned int key_size;
+	unsigned int key_extra_size;
 	unsigned int key_parts;      /* independent parts in key buffer */
-	unsigned int key_extra_size; /* additional keys length */
-	u8 key[0];
+	enum setkey_op keyop;
+	char *msg;
+	u8 *key;
 };
 
-#define MIN_IOS        64
-
-static void clone_init(struct dm_crypt_io *, struct bio *);
-static void kcryptd_queue_crypt(struct dm_crypt_io *io);
-static u8 *iv_of_dmreq(struct crypt_config *cc, struct dm_crypt_request *dmreq);
+static struct crypto_skcipher *any_tfm(struct geniv_ctx *ctx)
+{
+	return ctx->tfms[0];
+}
 
-/*
- * Use this to access cipher attributes that are the same for each CPU.
- */
-static struct crypto_skcipher *any_tfm(struct crypt_config *cc)
+static inline
+struct geniv_req_ctx *geniv_req_ctx(struct skcipher_request *req)
 {
-	return cc->tfms[0];
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	unsigned long align = crypto_skcipher_alignmask(tfm);
+
+	return (void *) PTR_ALIGN((u8 *) skcipher_request_ctx(req), align + 1);
 }
 
 /*
@@ -245,44 +188,50 @@ static struct crypto_skcipher *any_tfm(struct crypt_config *cc)
  * http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
  */
 
-static int crypt_iv_plain_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
+static int crypt_iv_plain_gen(struct geniv_ctx *ctx,
+			      struct geniv_req_ctx *rctx,
+			      struct geniv_subreq *subreq)
 {
-	memset(iv, 0, cc->iv_size);
-	*(__le32 *)iv = cpu_to_le32(dmreq->iv_sector & 0xffffffff);
+	u8 *iv = rctx->iv;
+
+	memset(iv, 0, ctx->iv_size);
+	*(__le32 *)iv = cpu_to_le32(rctx->iv_sector & 0xffffffff);
 
 	return 0;
 }
 
-static int crypt_iv_plain64_gen(struct crypt_config *cc, u8 *iv,
-				struct dm_crypt_request *dmreq)
+static int crypt_iv_plain64_gen(struct geniv_ctx *ctx,
+				struct geniv_req_ctx *rctx,
+				struct geniv_subreq *subreq)
 {
-	memset(iv, 0, cc->iv_size);
-	*(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
+	u8 *iv = rctx->iv;
+
+	memset(iv, 0, ctx->iv_size);
+	*(__le64 *)iv = cpu_to_le64(rctx->iv_sector);
 
 	return 0;
 }
 
 /* Initialise ESSIV - compute salt but no local memory allocations */
-static int crypt_iv_essiv_init(struct crypt_config *cc)
+static int crypt_iv_essiv_init(struct geniv_ctx *ctx)
 {
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
+	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
 	struct scatterlist sg;
 	struct crypto_cipher *essiv_tfm;
 	int err;
+	AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
 
-	sg_init_one(&sg, cc->key, cc->key_size);
+	sg_init_one(&sg, ctx->key, ctx->key_size);
 	ahash_request_set_tfm(req, essiv->hash_tfm);
 	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-	ahash_request_set_crypt(req, &sg, essiv->salt, cc->key_size);
+	ahash_request_set_crypt(req, &sg, essiv->salt, ctx->key_size);
 
 	err = crypto_ahash_digest(req);
 	ahash_request_zero(req);
 	if (err)
 		return err;
 
-	essiv_tfm = cc->iv_private;
+	essiv_tfm = ctx->iv_private;
 
 	err = crypto_cipher_setkey(essiv_tfm, essiv->salt,
 			    crypto_ahash_digestsize(essiv->hash_tfm));
@@ -293,16 +242,16 @@ static int crypt_iv_essiv_init(struct crypt_config *cc)
 }
 
 /* Wipe salt and reset key derived from volume key */
-static int crypt_iv_essiv_wipe(struct crypt_config *cc)
+static int crypt_iv_essiv_wipe(struct geniv_ctx *ctx)
 {
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	unsigned salt_size = crypto_ahash_digestsize(essiv->hash_tfm);
+	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
+	unsigned int salt_size = crypto_ahash_digestsize(essiv->hash_tfm);
 	struct crypto_cipher *essiv_tfm;
 	int r, err = 0;
 
 	memset(essiv->salt, 0, salt_size);
 
-	essiv_tfm = cc->iv_private;
+	essiv_tfm = ctx->iv_private;
 	r = crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
 	if (r)
 		err = r;
@@ -311,42 +260,40 @@ static int crypt_iv_essiv_wipe(struct crypt_config *cc)
 }
 
 /* Set up per cpu cipher state */
-static struct crypto_cipher *setup_essiv_cpu(struct crypt_config *cc,
-					     struct dm_target *ti,
-					     u8 *salt, unsigned saltsize)
+static struct crypto_cipher *setup_essiv_cpu(struct geniv_ctx *ctx,
+					     u8 *salt, unsigned int saltsize)
 {
 	struct crypto_cipher *essiv_tfm;
 	int err;
 
 	/* Setup the essiv_tfm with the given salt */
-	essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, CRYPTO_ALG_ASYNC);
+	essiv_tfm = crypto_alloc_cipher(ctx->cipher, 0, CRYPTO_ALG_ASYNC);
+
 	if (IS_ERR(essiv_tfm)) {
-		ti->error = "Error allocating crypto tfm for ESSIV";
+		DMERR("Error allocating crypto tfm for ESSIV\n");
 		return essiv_tfm;
 	}
 
 	if (crypto_cipher_blocksize(essiv_tfm) !=
-	    crypto_skcipher_ivsize(any_tfm(cc))) {
-		ti->error = "Block size of ESSIV cipher does "
-			    "not match IV size of block cipher";
+	    crypto_skcipher_ivsize(any_tfm(ctx))) {
+		DMERR("Block size of ESSIV cipher does not match IV size of block cipher\n");
 		crypto_free_cipher(essiv_tfm);
 		return ERR_PTR(-EINVAL);
 	}
 
 	err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
 	if (err) {
-		ti->error = "Failed to set key for ESSIV cipher";
+		DMERR("Failed to set key for ESSIV cipher\n");
 		crypto_free_cipher(essiv_tfm);
 		return ERR_PTR(err);
 	}
-
 	return essiv_tfm;
 }
 
-static void crypt_iv_essiv_dtr(struct crypt_config *cc)
+static void crypt_iv_essiv_dtr(struct geniv_ctx *ctx)
 {
 	struct crypto_cipher *essiv_tfm;
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
+	struct geniv_essiv_private *essiv = &ctx->iv_gen_private.essiv;
 
 	crypto_free_ahash(essiv->hash_tfm);
 	essiv->hash_tfm = NULL;
@@ -354,52 +301,50 @@ static void crypt_iv_essiv_dtr(struct crypt_config *cc)
 	kzfree(essiv->salt);
 	essiv->salt = NULL;
 
-	essiv_tfm = cc->iv_private;
+	essiv_tfm = ctx->iv_private;
 
 	if (essiv_tfm)
 		crypto_free_cipher(essiv_tfm);
 
-	cc->iv_private = NULL;
+	ctx->iv_private = NULL;
 }
 
-static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
-			      const char *opts)
+static int crypt_iv_essiv_ctr(struct geniv_ctx *ctx)
 {
 	struct crypto_cipher *essiv_tfm = NULL;
 	struct crypto_ahash *hash_tfm = NULL;
 	u8 *salt = NULL;
 	int err;
 
-	if (!opts) {
-		ti->error = "Digest algorithm missing for ESSIV mode";
+	if (!ctx->ivopts) {
+		DMERR("Digest algorithm missing for ESSIV mode\n");
 		return -EINVAL;
 	}
 
 	/* Allocate hash algorithm */
-	hash_tfm = crypto_alloc_ahash(opts, 0, CRYPTO_ALG_ASYNC);
+	hash_tfm = crypto_alloc_ahash(ctx->ivopts, 0, CRYPTO_ALG_ASYNC);
 	if (IS_ERR(hash_tfm)) {
-		ti->error = "Error initializing ESSIV hash";
 		err = PTR_ERR(hash_tfm);
+		DMERR("Error initializing ESSIV hash. err=%d\n", err);
 		goto bad;
 	}
 
 	salt = kzalloc(crypto_ahash_digestsize(hash_tfm), GFP_KERNEL);
 	if (!salt) {
-		ti->error = "Error kmallocing salt storage in ESSIV";
 		err = -ENOMEM;
 		goto bad;
 	}
 
-	cc->iv_gen_private.essiv.salt = salt;
-	cc->iv_gen_private.essiv.hash_tfm = hash_tfm;
+	ctx->iv_gen_private.essiv.salt = salt;
+	ctx->iv_gen_private.essiv.hash_tfm = hash_tfm;
 
-	essiv_tfm = setup_essiv_cpu(cc, ti, salt,
+	essiv_tfm = setup_essiv_cpu(ctx, salt,
 				crypto_ahash_digestsize(hash_tfm));
 	if (IS_ERR(essiv_tfm)) {
-		crypt_iv_essiv_dtr(cc);
+		crypt_iv_essiv_dtr(ctx);
 		return PTR_ERR(essiv_tfm);
 	}
-	cc->iv_private = essiv_tfm;
+	ctx->iv_private = essiv_tfm;
 
 	return 0;
 
@@ -410,70 +355,73 @@ static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
 	return err;
 }
 
-static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
+static int crypt_iv_essiv_gen(struct geniv_ctx *ctx,
+			      struct geniv_req_ctx *rctx,
+			      struct geniv_subreq *subreq)
 {
-	struct crypto_cipher *essiv_tfm = cc->iv_private;
+	u8 *iv = rctx->iv;
+	struct crypto_cipher *essiv_tfm = ctx->iv_private;
 
-	memset(iv, 0, cc->iv_size);
-	*(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
+	memset(iv, 0, ctx->iv_size);
+	*(__le64 *)iv = cpu_to_le64(rctx->iv_sector);
 	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
 
 	return 0;
 }
 
-static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti,
-			      const char *opts)
+static int crypt_iv_benbi_ctr(struct geniv_ctx *ctx)
 {
-	unsigned bs = crypto_skcipher_blocksize(any_tfm(cc));
+	unsigned int bs = crypto_skcipher_blocksize(any_tfm(ctx));
 	int log = ilog2(bs);
 
 	/* we need to calculate how far we must shift the sector count
-	 * to get the cipher block count, we use this shift in _gen */
+	 * to get the cipher block count, we use this shift in _gen
+	 */
 
 	if (1 << log != bs) {
-		ti->error = "cypher blocksize is not a power of 2";
+		DMERR("cypher blocksize is not a power of 2\n");
 		return -EINVAL;
 	}
 
 	if (log > 9) {
-		ti->error = "cypher blocksize is > 512";
+		DMERR("cypher blocksize is > 512\n");
 		return -EINVAL;
 	}
 
-	cc->iv_gen_private.benbi.shift = 9 - log;
+	ctx->iv_gen_private.benbi.shift = 9 - log;
 
 	return 0;
 }
 
-static void crypt_iv_benbi_dtr(struct crypt_config *cc)
-{
-}
-
-static int crypt_iv_benbi_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
+static int crypt_iv_benbi_gen(struct geniv_ctx *ctx,
+			      struct geniv_req_ctx *rctx,
+			      struct geniv_subreq *subreq)
 {
+	u8 *iv = rctx->iv;
 	__be64 val;
 
-	memset(iv, 0, cc->iv_size - sizeof(u64)); /* rest is cleared below */
+	memset(iv, 0, ctx->iv_size - sizeof(u64)); /* rest is cleared below */
 
-	val = cpu_to_be64(((u64)dmreq->iv_sector << cc->iv_gen_private.benbi.shift) + 1);
-	put_unaligned(val, (__be64 *)(iv + cc->iv_size - sizeof(u64)));
+	val = cpu_to_be64(((u64) rctx->iv_sector <<
+			  ctx->iv_gen_private.benbi.shift) + 1);
+	put_unaligned(val, (__be64 *)(iv + ctx->iv_size - sizeof(u64)));
 
 	return 0;
 }
 
-static int crypt_iv_null_gen(struct crypt_config *cc, u8 *iv,
-			     struct dm_crypt_request *dmreq)
+static int crypt_iv_null_gen(struct geniv_ctx *ctx,
+			     struct geniv_req_ctx *rctx,
+			     struct geniv_subreq *subreq)
 {
-	memset(iv, 0, cc->iv_size);
+	u8 *iv = rctx->iv;
 
+	memset(iv, 0, ctx->iv_size);
 	return 0;
 }
 
-static void crypt_iv_lmk_dtr(struct crypt_config *cc)
+static void crypt_iv_lmk_dtr(struct geniv_ctx *ctx)
 {
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
 
 	if (lmk->hash_tfm && !IS_ERR(lmk->hash_tfm))
 		crypto_free_shash(lmk->hash_tfm);
@@ -483,49 +431,49 @@ static void crypt_iv_lmk_dtr(struct crypt_config *cc)
 	lmk->seed = NULL;
 }
 
-static int crypt_iv_lmk_ctr(struct crypt_config *cc, struct dm_target *ti,
-			    const char *opts)
+static int crypt_iv_lmk_ctr(struct geniv_ctx *ctx)
 {
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
 
 	lmk->hash_tfm = crypto_alloc_shash("md5", 0, 0);
 	if (IS_ERR(lmk->hash_tfm)) {
-		ti->error = "Error initializing LMK hash";
+		DMERR("Error initializing LMK hash; err=%ld\n",
+		      PTR_ERR(lmk->hash_tfm));
 		return PTR_ERR(lmk->hash_tfm);
 	}
 
 	/* No seed in LMK version 2 */
-	if (cc->key_parts == cc->tfms_count) {
+	if (ctx->key_parts == ctx->tfms_count) {
 		lmk->seed = NULL;
 		return 0;
 	}
 
 	lmk->seed = kzalloc(LMK_SEED_SIZE, GFP_KERNEL);
 	if (!lmk->seed) {
-		crypt_iv_lmk_dtr(cc);
-		ti->error = "Error kmallocing seed storage in LMK";
+		crypt_iv_lmk_dtr(ctx);
+		DMERR("Error kmallocing seed storage in LMK\n");
 		return -ENOMEM;
 	}
 
 	return 0;
 }
 
-static int crypt_iv_lmk_init(struct crypt_config *cc)
+static int crypt_iv_lmk_init(struct geniv_ctx *ctx)
 {
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-	int subkey_size = cc->key_size / cc->key_parts;
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
+	int subkey_size = ctx->key_size / ctx->key_parts;
 
 	/* LMK seed is on the position of LMK_KEYS + 1 key */
 	if (lmk->seed)
-		memcpy(lmk->seed, cc->key + (cc->tfms_count * subkey_size),
+		memcpy(lmk->seed, ctx->key + (ctx->tfms_count * subkey_size),
 		       crypto_shash_digestsize(lmk->hash_tfm));
 
 	return 0;
 }
 
-static int crypt_iv_lmk_wipe(struct crypt_config *cc)
+static int crypt_iv_lmk_wipe(struct geniv_ctx *ctx)
 {
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
 
 	if (lmk->seed)
 		memset(lmk->seed, 0, LMK_SEED_SIZE);
@@ -533,15 +481,14 @@ static int crypt_iv_lmk_wipe(struct crypt_config *cc)
 	return 0;
 }
 
-static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
-			    struct dm_crypt_request *dmreq,
-			    u8 *data)
+static int crypt_iv_lmk_one(struct geniv_ctx *ctx, u8 *iv,
+			    struct geniv_req_ctx *rctx, u8 *data)
 {
-	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
+	struct geniv_lmk_private *lmk = &ctx->iv_gen_private.lmk;
 	struct md5_state md5state;
 	__le32 buf[4];
 	int i, r;
+	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
 
 	desc->tfm = lmk->hash_tfm;
 	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
@@ -562,8 +509,9 @@ static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
 		return r;
 
 	/* Sector is cropped to 56 bits here */
-	buf[0] = cpu_to_le32(dmreq->iv_sector & 0xFFFFFFFF);
-	buf[1] = cpu_to_le32((((u64)dmreq->iv_sector >> 32) & 0x00FFFFFF) | 0x80000000);
+	buf[0] = cpu_to_le32(rctx->iv_sector & 0xFFFFFFFF);
+	buf[1] = cpu_to_le32((((u64)rctx->iv_sector >> 32) & 0x00FFFFFF)
+			     | 0x80000000);
 	buf[2] = cpu_to_le32(4024);
 	buf[3] = 0;
 	r = crypto_shash_update(desc, (u8 *)buf, sizeof(buf));
@@ -577,50 +525,54 @@ static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
 
 	for (i = 0; i < MD5_HASH_WORDS; i++)
 		__cpu_to_le32s(&md5state.hash[i]);
-	memcpy(iv, &md5state.hash, cc->iv_size);
+	memcpy(iv, &md5state.hash, ctx->iv_size);
 
 	return 0;
 }
 
-static int crypt_iv_lmk_gen(struct crypt_config *cc, u8 *iv,
-			    struct dm_crypt_request *dmreq)
+static int crypt_iv_lmk_gen(struct geniv_ctx *ctx,
+			    struct geniv_req_ctx *rctx,
+			    struct geniv_subreq *subreq)
 {
 	u8 *src;
+	u8 *iv = rctx->iv;
 	int r = 0;
 
-	if (bio_data_dir(dmreq->ctx->bio_in) == WRITE) {
-		src = kmap_atomic(sg_page(&dmreq->sg_in));
-		r = crypt_iv_lmk_one(cc, iv, dmreq, src + dmreq->sg_in.offset);
+	if (rctx->is_write) {
+		src = kmap_atomic(sg_page(&subreq->src));
+		r = crypt_iv_lmk_one(ctx, iv, rctx, src + subreq->src.offset);
 		kunmap_atomic(src);
 	} else
-		memset(iv, 0, cc->iv_size);
+		memset(iv, 0, ctx->iv_size);
 
 	return r;
 }
 
-static int crypt_iv_lmk_post(struct crypt_config *cc, u8 *iv,
-			     struct dm_crypt_request *dmreq)
+static int crypt_iv_lmk_post(struct geniv_ctx *ctx,
+			     struct geniv_req_ctx *rctx,
+			     struct geniv_subreq *subreq)
 {
 	u8 *dst;
+	u8 *iv = rctx->iv;
 	int r;
 
-	if (bio_data_dir(dmreq->ctx->bio_in) == WRITE)
+	if (rctx->is_write)
 		return 0;
 
-	dst = kmap_atomic(sg_page(&dmreq->sg_out));
-	r = crypt_iv_lmk_one(cc, iv, dmreq, dst + dmreq->sg_out.offset);
+	dst = kmap_atomic(sg_page(&subreq->dst));
+	r = crypt_iv_lmk_one(ctx, iv, rctx, dst + subreq->dst.offset);
 
 	/* Tweak the first block of plaintext sector */
 	if (!r)
-		crypto_xor(dst + dmreq->sg_out.offset, iv, cc->iv_size);
+		crypto_xor(dst + subreq->dst.offset, iv, ctx->iv_size);
 
 	kunmap_atomic(dst);
 	return r;
 }
 
-static void crypt_iv_tcw_dtr(struct crypt_config *cc)
+static void crypt_iv_tcw_dtr(struct geniv_ctx *ctx)
 {
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
 
 	kzfree(tcw->iv_seed);
 	tcw->iv_seed = NULL;
@@ -632,64 +584,65 @@ static void crypt_iv_tcw_dtr(struct crypt_config *cc)
 	tcw->crc32_tfm = NULL;
 }
 
-static int crypt_iv_tcw_ctr(struct crypt_config *cc, struct dm_target *ti,
-			    const char *opts)
+static int crypt_iv_tcw_ctr(struct geniv_ctx *ctx)
 {
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
 
-	if (cc->key_size <= (cc->iv_size + TCW_WHITENING_SIZE)) {
-		ti->error = "Wrong key size for TCW";
+	if (ctx->key_size <= (ctx->iv_size + TCW_WHITENING_SIZE)) {
+		DMERR("Wrong key size (%d) for TCW. Choose a value > %d bytes\n",
+			ctx->key_size,
+			ctx->iv_size + TCW_WHITENING_SIZE);
 		return -EINVAL;
 	}
 
 	tcw->crc32_tfm = crypto_alloc_shash("crc32", 0, 0);
 	if (IS_ERR(tcw->crc32_tfm)) {
-		ti->error = "Error initializing CRC32 in TCW";
+		DMERR("Error initializing CRC32 in TCW; err=%ld\n",
+			PTR_ERR(tcw->crc32_tfm));
 		return PTR_ERR(tcw->crc32_tfm);
 	}
 
-	tcw->iv_seed = kzalloc(cc->iv_size, GFP_KERNEL);
+	tcw->iv_seed = kzalloc(ctx->iv_size, GFP_KERNEL);
 	tcw->whitening = kzalloc(TCW_WHITENING_SIZE, GFP_KERNEL);
 	if (!tcw->iv_seed || !tcw->whitening) {
-		crypt_iv_tcw_dtr(cc);
-		ti->error = "Error allocating seed storage in TCW";
+		crypt_iv_tcw_dtr(ctx);
+		DMERR("Error allocating seed storage in TCW\n");
 		return -ENOMEM;
 	}
 
 	return 0;
 }
 
-static int crypt_iv_tcw_init(struct crypt_config *cc)
+static int crypt_iv_tcw_init(struct geniv_ctx *ctx)
 {
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-	int key_offset = cc->key_size - cc->iv_size - TCW_WHITENING_SIZE;
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+	int key_offset = ctx->key_size - ctx->iv_size - TCW_WHITENING_SIZE;
 
-	memcpy(tcw->iv_seed, &cc->key[key_offset], cc->iv_size);
-	memcpy(tcw->whitening, &cc->key[key_offset + cc->iv_size],
+	memcpy(tcw->iv_seed, &ctx->key[key_offset], ctx->iv_size);
+	memcpy(tcw->whitening, &ctx->key[key_offset + ctx->iv_size],
 	       TCW_WHITENING_SIZE);
 
 	return 0;
 }
 
-static int crypt_iv_tcw_wipe(struct crypt_config *cc)
+static int crypt_iv_tcw_wipe(struct geniv_ctx *ctx)
 {
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
 
-	memset(tcw->iv_seed, 0, cc->iv_size);
+	memset(tcw->iv_seed, 0, ctx->iv_size);
 	memset(tcw->whitening, 0, TCW_WHITENING_SIZE);
 
 	return 0;
 }
 
-static int crypt_iv_tcw_whitening(struct crypt_config *cc,
-				  struct dm_crypt_request *dmreq,
-				  u8 *data)
+static int crypt_iv_tcw_whitening(struct geniv_ctx *ctx,
+				  struct geniv_req_ctx *rctx, u8 *data)
 {
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-	__le64 sector = cpu_to_le64(dmreq->iv_sector);
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+	__le64 sector = cpu_to_le64(rctx->iv_sector);
 	u8 buf[TCW_WHITENING_SIZE];
-	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
 	int i, r;
+	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
 
 	/* xor whitening with sector number */
 	memcpy(buf, tcw->whitening, TCW_WHITENING_SIZE);
@@ -713,99 +666,1009 @@ static int crypt_iv_tcw_whitening(struct crypt_config *cc,
 	crypto_xor(&buf[0], &buf[12], 4);
 	crypto_xor(&buf[4], &buf[8], 4);
 
-	/* apply whitening (8 bytes) to whole sector */
-	for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++)
-		crypto_xor(data + i * 8, buf, 8);
-out:
-	memzero_explicit(buf, sizeof(buf));
-	return r;
-}
+	/* apply whitening (8 bytes) to whole sector */
+	for (i = 0; i < (SECTOR_SIZE / 8); i++)
+		crypto_xor(data + i * 8, buf, 8);
+out:
+	memzero_explicit(buf, sizeof(buf));
+	return r;
+}
+
+static int crypt_iv_tcw_gen(struct geniv_ctx *ctx,
+			    struct geniv_req_ctx *rctx,
+			    struct geniv_subreq *subreq)
+{
+	u8 *iv = rctx->iv;
+	struct geniv_tcw_private *tcw = &ctx->iv_gen_private.tcw;
+	__le64 sector = cpu_to_le64(rctx->iv_sector);
+	u8 *src;
+	int r = 0;
+
+	/* Remove whitening from ciphertext */
+	if (!rctx->is_write) {
+		src = kmap_atomic(sg_page(&subreq->src));
+		r = crypt_iv_tcw_whitening(ctx, rctx,
+					   src + subreq->src.offset);
+		kunmap_atomic(src);
+	}
+
+	/* Calculate IV */
+	memcpy(iv, tcw->iv_seed, ctx->iv_size);
+	crypto_xor(iv, (u8 *)&sector, 8);
+	if (ctx->iv_size > 8)
+		crypto_xor(&iv[8], (u8 *)&sector, ctx->iv_size - 8);
+
+	return r;
+}
+
+static int crypt_iv_tcw_post(struct geniv_ctx *ctx,
+			     struct geniv_req_ctx *rctx,
+			     struct geniv_subreq *subreq)
+{
+	u8 *dst;
+	int r;
+
+	if (!rctx->is_write)
+		return 0;
+
+	/* Apply whitening on ciphertext */
+	dst = kmap_atomic(sg_page(&subreq->dst));
+	r = crypt_iv_tcw_whitening(ctx, rctx, dst + subreq->dst.offset);
+	kunmap_atomic(dst);
+
+	return r;
+}
+
+static const struct crypt_iv_operations crypt_iv_plain_ops = {
+	.generator = crypt_iv_plain_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_plain64_ops = {
+	.generator = crypt_iv_plain64_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_essiv_ops = {
+	.ctr       = crypt_iv_essiv_ctr,
+	.dtr       = crypt_iv_essiv_dtr,
+	.init      = crypt_iv_essiv_init,
+	.wipe      = crypt_iv_essiv_wipe,
+	.generator = crypt_iv_essiv_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_benbi_ops = {
+	.ctr	   = crypt_iv_benbi_ctr,
+	.generator = crypt_iv_benbi_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_null_ops = {
+	.generator = crypt_iv_null_gen
+};
+
+static const struct crypt_iv_operations crypt_iv_lmk_ops = {
+	.ctr	   = crypt_iv_lmk_ctr,
+	.dtr	   = crypt_iv_lmk_dtr,
+	.init	   = crypt_iv_lmk_init,
+	.wipe	   = crypt_iv_lmk_wipe,
+	.generator = crypt_iv_lmk_gen,
+	.post	   = crypt_iv_lmk_post
+};
+
+static const struct crypt_iv_operations crypt_iv_tcw_ops = {
+	.ctr	   = crypt_iv_tcw_ctr,
+	.dtr	   = crypt_iv_tcw_dtr,
+	.init	   = crypt_iv_tcw_init,
+	.wipe	   = crypt_iv_tcw_wipe,
+	.generator = crypt_iv_tcw_gen,
+	.post	   = crypt_iv_tcw_post
+};
+
+static int geniv_setkey_set(struct geniv_ctx *ctx)
+{
+	int ret = 0;
+
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init)
+		ret = ctx->iv_gen_ops->init(ctx);
+	return ret;
+}
+
+static int geniv_setkey_wipe(struct geniv_ctx *ctx)
+{
+	int ret = 0;
+
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->wipe) {
+		ret = ctx->iv_gen_ops->wipe(ctx);
+		if (ret)
+			return ret;
+	}
+	return ret;
+}
+
+static int geniv_init_iv(struct geniv_ctx *ctx)
+{
+	int ret = -EINVAL;
+
+	DMDEBUG("IV Generation algorithm : %s\n", ctx->ivmode);
+
+	if (ctx->ivmode == NULL)
+		ctx->iv_gen_ops = NULL;
+	else if (strcmp(ctx->ivmode, "plain") == 0)
+		ctx->iv_gen_ops = &crypt_iv_plain_ops;
+	else if (strcmp(ctx->ivmode, "plain64") == 0)
+		ctx->iv_gen_ops = &crypt_iv_plain64_ops;
+	else if (strcmp(ctx->ivmode, "essiv") == 0)
+		ctx->iv_gen_ops = &crypt_iv_essiv_ops;
+	else if (strcmp(ctx->ivmode, "benbi") == 0)
+		ctx->iv_gen_ops = &crypt_iv_benbi_ops;
+	else if (strcmp(ctx->ivmode, "null") == 0)
+		ctx->iv_gen_ops = &crypt_iv_null_ops;
+	else if (strcmp(ctx->ivmode, "lmk") == 0)
+		ctx->iv_gen_ops = &crypt_iv_lmk_ops;
+	else if (strcmp(ctx->ivmode, "tcw") == 0) {
+		ctx->iv_gen_ops = &crypt_iv_tcw_ops;
+		ctx->key_parts += 2; /* IV + whitening */
+		ctx->key_extra_size = ctx->iv_size + TCW_WHITENING_SIZE;
+	} else {
+		ret = -EINVAL;
+		DMERR("Invalid IV mode %s\n", ctx->ivmode);
+		goto end;
+	}
+
+	/* Allocate IV */
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->ctr) {
+		ret = ctx->iv_gen_ops->ctr(ctx);
+		if (ret < 0) {
+			DMERR("Error creating IV for %s\n", ctx->ivmode);
+			goto end;
+		}
+	}
+
+	/* Initialize IV (set keys for ESSIV etc) */
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->init) {
+		ret = ctx->iv_gen_ops->init(ctx);
+		if (ret < 0)
+			DMERR("Error creating IV for %s\n", ctx->ivmode);
+	}
+	ret = 0;
+end:
+	return ret;
+}
+
+static void geniv_free_tfms(struct geniv_ctx *ctx)
+{
+	unsigned int i;
+
+	if (!ctx->tfms)
+		return;
+
+	for (i = 0; i < ctx->tfms_count; i++)
+		if (ctx->tfms[i] && !IS_ERR(ctx->tfms[i])) {
+			crypto_free_skcipher(ctx->tfms[i]);
+			ctx->tfms[i] = NULL;
+		}
+
+	kfree(ctx->tfms);
+	ctx->tfms = NULL;
+}
+
+/* Allocate memory for the underlying cipher algorithm. Ex: cbc(aes)
+ */
+
+static int geniv_alloc_tfms(struct crypto_skcipher *parent,
+			    struct geniv_ctx *ctx)
+{
+	unsigned int i, reqsize, align;
+	int err = 0;
+
+	ctx->tfms = kcalloc(ctx->tfms_count, sizeof(struct crypto_skcipher *),
+			   GFP_KERNEL);
+	if (!ctx->tfms) {
+		err = -ENOMEM;
+		goto end;
+	}
+
+	/* First instance is already allocated in geniv_init_tfm */
+	ctx->tfms[0] = ctx->child;
+	for (i = 1; i < ctx->tfms_count; i++) {
+		ctx->tfms[i] = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
+		if (IS_ERR(ctx->tfms[i])) {
+			err = PTR_ERR(ctx->tfms[i]);
+			geniv_free_tfms(ctx);
+			goto end;
+		}
+
+		/* Setup the current cipher's request structure */
+		align = crypto_skcipher_alignmask(parent);
+		align &= ~(crypto_tfm_ctx_alignment() - 1);
+		reqsize = align + sizeof(struct geniv_req_ctx) +
+			  crypto_skcipher_reqsize(ctx->tfms[i]);
+		crypto_skcipher_set_reqsize(parent, reqsize);
+	}
+
+end:
+	return err;
+}
+
+/* Initialize the cipher's context with the key, ivmode and other parameters.
+ * Also allocate IV generation template ciphers and initialize them.
+ */
+
+static int geniv_setkey_init(struct crypto_skcipher *parent,
+			     struct geniv_key_info *info)
+{
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(parent);
+	int ret = -ENOMEM;
+
+	ctx->iv_size = crypto_skcipher_ivsize(parent);
+	ctx->tfms_count = info->tfms_count;
+	ctx->key = info->key;
+	ctx->key_size = info->key_size;
+	ctx->key_parts = info->key_parts;
+	ctx->ivopts = info->ivopts;
+
+	ret = geniv_alloc_tfms(parent, ctx);
+	if (ret)
+		goto end;
+
+	ret = geniv_init_iv(ctx);
+
+end:
+	return ret;
+}
+
+static int geniv_setkey_tfms(struct crypto_skcipher *parent,
+			     struct geniv_ctx *ctx,
+			     struct geniv_key_info *info)
+{
+	unsigned int subkey_size;
+	int ret = 0, i;
+
+	/* Ignore extra keys (which are used for IV etc) */
+	subkey_size = (ctx->key_size - ctx->key_extra_size)
+		      >> ilog2(ctx->tfms_count);
+
+	for (i = 0; i < ctx->tfms_count; i++) {
+		struct crypto_skcipher *child = ctx->tfms[i];
+		char *subkey = ctx->key + (subkey_size) * i;
+
+		crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+		crypto_skcipher_set_flags(child,
+					  crypto_skcipher_get_flags(parent) &
+					  CRYPTO_TFM_REQ_MASK);
+		ret = crypto_skcipher_setkey(child, subkey, subkey_size);
+		if (ret) {
+			DMERR("Error setting key for tfms[%d]\n", i);
+			break;
+		}
+		crypto_skcipher_set_flags(parent,
+					  crypto_skcipher_get_flags(child) &
+					  CRYPTO_TFM_RES_MASK);
+	}
+
+	return ret;
+}
+
+static int geniv_setkey(struct crypto_skcipher *parent,
+			const u8 *key, unsigned int keylen)
+{
+	int err = 0;
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(parent);
+	struct geniv_key_info *info = (struct geniv_key_info *) key;
+
+	DMDEBUG("SETKEY Operation : %d\n", info->keyop);
+
+	switch (info->keyop) {
+	case SETKEY_OP_INIT:
+		err = geniv_setkey_init(parent, info);
+		break;
+	case SETKEY_OP_SET:
+		err = geniv_setkey_set(ctx);
+		break;
+	case SETKEY_OP_WIPE:
+		err = geniv_setkey_wipe(ctx);
+		break;
+	}
+
+	if (err)
+		goto end;
+
+	err = geniv_setkey_tfms(parent, ctx, info);
+
+end:
+	return err;
+}
+
+static void geniv_async_done(struct crypto_async_request *async_req, int error);
+
+static int geniv_alloc_subreq(struct skcipher_request *req,
+			      struct geniv_ctx *ctx,
+			      struct geniv_req_ctx *rctx)
+{
+	int key_index, r = 0;
+	struct skcipher_request *sreq;
+
+	if (!rctx->subreq) {
+		rctx->subreq = mempool_alloc(ctx->subreq_pool, GFP_NOIO);
+		if (!rctx->subreq)
+			r = -ENOMEM;
+	}
+
+	sreq = &rctx->subreq->req;
+	rctx->subreq->rctx = rctx;
+
+	key_index = rctx->iv_sector & (ctx->tfms_count - 1);
+
+	skcipher_request_set_tfm(sreq, ctx->tfms[key_index]);
+	skcipher_request_set_callback(sreq, req->base.flags,
+				      geniv_async_done, rctx->subreq);
+	return r;
+}
+
+/* Asynchronous IO completion callback for each sector in a segment. When all
+ * pending i/o are completed the parent cipher's async function is called.
+ */
+
+static void geniv_async_done(struct crypto_async_request *async_req, int error)
+{
+	struct geniv_subreq *subreq =
+		(struct geniv_subreq *) async_req->data;
+	struct geniv_req_ctx *rctx = subreq->rctx;
+	struct skcipher_request *req = rctx->req;
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	/*
+	 * A request from crypto driver backlog is going to be processed now,
+	 * finish the completion and continue in crypt_convert().
+	 * (Callback will be called for the second time for this request.)
+	 */
+
+	if (error == -EINPROGRESS) {
+		complete(&rctx->restart);
+		return;
+	}
+
+	if (!error && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
+		error = ctx->iv_gen_ops->post(ctx, rctx, subreq);
+
+	mempool_free(subreq, ctx->subreq_pool);
+
+	/* req_pending needs to be checked before req->base.complete is called
+	 * as we need 'req_pending' to be equal to 1 to ensure all subrequests
+	 * are processed.
+	 */
+	if (!atomic_dec_and_test(&rctx->req_pending)) {
+		/* Call the parent cipher's completion function */
+		skcipher_request_complete(req, error);
+	}
+}
+
+static unsigned int geniv_get_sectors(struct scatterlist *sg1,
+				      struct scatterlist *sg2,
+				      unsigned int segments)
+{
+	unsigned int i, n1, n2, nents;
+
+	n1 = n2 = 0;
+	for (i = 0; i < segments ; i++)
+		n1 += sg1[i].length >> SECTOR_SHIFT;
+
+	for (i = 0; i < segments ; i++)
+		n2 += sg2[i].length >> SECTOR_SHIFT;
+
+	nents = n1 > n2 ? n1 : n2;
+	return nents;
+}
+
+/* Iterate scatterlist of segments to retrieve the 512-byte sectors so that
+ * unique IVs could be generated for each 512-byte sector. This split may not
+ * be necessary e.g. when these ciphers are modelled in hardware, where it can
+ * make use of the hardware's IV generation capabilities.
+ */
+
+static int geniv_iter_block(struct skcipher_request *req,
+			    struct geniv_subreq *subreq,
+			    struct geniv_req_ctx *rctx,
+			    unsigned int *seg_no,
+			    unsigned int *done)
+
+{
+	unsigned int srcoff, dstoff, len, rem;
+	struct scatterlist *src1, *dst1, *src2, *dst2;
+
+	if (unlikely(*seg_no >= rctx->nents))
+		return 0; /* done */
+
+	src1 = &req->src[*seg_no];
+	dst1 = &req->dst[*seg_no];
+	src2 = &subreq->src;
+	dst2 = &subreq->dst;
+
+	if (*done >= src1->length) {
+		(*seg_no)++;
+
+		if (*seg_no >= rctx->nents)
+			return 0; /* done */
+
+		src1 = &req->src[*seg_no];
+		dst1 = &req->dst[*seg_no];
+		*done = 0;
+	}
+
+	srcoff = src1->offset + *done;
+	dstoff = dst1->offset + *done;
+	rem = src1->length - *done;
+
+	len = rem > SECTOR_SIZE ? SECTOR_SIZE : rem;
+
+	DMDEBUG("segment:(%d/%u), srcoff:%d, dstoff:%d, done:%d, rem:%d\n",
+		*seg_no + 1, rctx->nents, srcoff, dstoff, *done, rem);
+
+	sg_init_table(src2, 1);
+	sg_set_page(src2, sg_page(src1), len, srcoff);
+	sg_init_table(dst2, 1);
+	sg_set_page(dst2, sg_page(dst1), len, dstoff);
+
+	*done += len;
+
+	return len; /* bytes returned */
+}
+
+/* Common encrypt/decrypt function for the geniv template cipher. Before the
+ * crypto operation, it splits the memory segments (in the scatterlist) into
+ * 512-byte sectors. The initialization vector (IV) used is based on a unique
+ * sector number, which is generated here.
+ */
+static int geniv_crypt(struct skcipher_request *req, bool encrypt)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct geniv_req_ctx *rctx = geniv_req_ctx(req);
+	struct geniv_req_info *rinfo = (struct geniv_req_info *) req->iv;
+	int i, bytes, cryptlen, ret = 0;
+	unsigned int sectors, segno = 0, done = 0;
+	char *str __maybe_unused = encrypt ? "encrypt" : "decrypt";
+
+	/* Instance of 'struct geniv_req_info' is stored in IV ptr */
+	rctx->is_write = rinfo->is_write;
+	rctx->iv_sector = rinfo->iv_sector;
+	rctx->nents = rinfo->nents;
+	rctx->iv = rinfo->iv;
+	rctx->req = req;
+	rctx->subreq = NULL;
+	cryptlen = req->cryptlen;
+
+	DMDEBUG("geniv:%s: starting sector=%d, #segments=%u\n", str,
+		(unsigned int) rctx->iv_sector, rctx->nents);
+
+	sectors = geniv_get_sectors(req->src, req->dst, rctx->nents);
+
+	init_completion(&rctx->restart);
+	atomic_set(&rctx->req_pending, 1);
+
+	for (i = 0; i < sectors; i++) {
+		struct geniv_subreq *subreq;
+
+		ret = geniv_alloc_subreq(req, ctx, rctx);
+		if (ret)
+			goto end;
+
+		subreq = rctx->subreq;
+		subreq->rctx = rctx;
+
+		atomic_inc(&rctx->req_pending);
+		bytes = geniv_iter_block(req, subreq, rctx, &segno, &done);
+
+		if (bytes == 0)
+			break;
+
+		cryptlen -= bytes;
+
+		if (ctx->iv_gen_ops)
+			ret = ctx->iv_gen_ops->generator(ctx, rctx, subreq);
+
+		if (ret < 0) {
+			DMERR("Error in generating IV ret: %d\n", ret);
+			goto end;
+		}
+
+		skcipher_request_set_crypt(&subreq->req, &subreq->src,
+					   &subreq->dst, bytes, rctx->iv);
+
+		if (encrypt)
+			ret = crypto_skcipher_encrypt(&subreq->req);
+
+		else
+			ret = crypto_skcipher_decrypt(&subreq->req);
+
+		if (!ret && ctx->iv_gen_ops && ctx->iv_gen_ops->post)
+			ret = ctx->iv_gen_ops->post(ctx, rctx, subreq);
+
+		switch (ret) {
+		/*
+		 * The request was queued by a crypto driver
+		 * but the driver request queue is full, let's wait.
+		 */
+		case -EBUSY:
+			wait_for_completion(&rctx->restart);
+			reinit_completion(&rctx->restart);
+			/* fall through */
+		/*
+		 * The request is queued and processed asynchronously,
+		 * completion function geniv_async_done() is called.
+		 */
+		case -EINPROGRESS:
+			/* Marking this NULL allows a new sub-request to
+			 * be allocated when 'geniv_alloc_subreq' is called.
+			 */
+			rctx->subreq = NULL;
+			rctx->iv_sector++;
+			cond_resched();
+			break;
+		/*
+		 * The request was already processed (synchronously).
+		 */
+		case 0:
+			atomic_dec(&rctx->req_pending);
+			rctx->iv_sector++;
+			cond_resched();
+			continue;
+
+		/* There was an error while processing the request. */
+		default:
+			atomic_dec(&rctx->req_pending);
+			return ret;
+		}
+
+		if (ret)
+			break;
+	}
+
+	if (rctx->subreq && atomic_read(&rctx->req_pending) == 1) {
+		DMDEBUG("geniv:%s: Freeing sub request\n", str);
+		mempool_free(rctx->subreq, ctx->subreq_pool);
+	}
+
+end:
+	return ret;
+}
+
+static int geniv_encrypt(struct skcipher_request *req)
+{
+	return geniv_crypt(req, true);
+}
+
+static int geniv_decrypt(struct skcipher_request *req)
+{
+	return geniv_crypt(req, false);
+}
+
+static int geniv_init_tfm(struct crypto_skcipher *tfm)
+{
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
+	unsigned int reqsize, align;
+	char *algname, *chainmode;
+	int psize, ret = 0;
+
+	algname = (char *) crypto_tfm_alg_name(crypto_skcipher_tfm(tfm));
+	ctx->ciphermode = kmalloc(CRYPTO_MAX_ALG_NAME, GFP_KERNEL);
+	if (!ctx->ciphermode) {
+		ret = -ENOMEM;
+		goto end;
+	}
+
+	/* Parse algorithm name 'ivmode(chainmode(cipher))' */
+	ctx->ivmode	= strsep(&algname, "(");
+	chainmode	= strsep(&algname, "(");
+	ctx->cipher	= strsep(&algname, ")");
+
+	snprintf(ctx->ciphermode, CRYPTO_MAX_ALG_NAME, "%s(%s)",
+		 chainmode, ctx->cipher);
+
+	DMDEBUG("ciphermode=%s, ivmode=%s\n", ctx->ciphermode, ctx->ivmode);
+
+	/*
+	 * Usually the underlying cipher instances are spawned here, but since
+	 * the value of tfms_count (which is equal to the key_count) is not
+	 * known yet, create only one instance and delay the creation of the
+	 * rest of the instances of the underlying cipher 'cbc(aes)' until
+	 * the setkey operation is invoked.
+	 * The first instance created i.e. ctx->child will later be assigned as
+	 * the 1st element in the array ctx->tfms. At least one instance of
+	 * the cipher must be created here to uncover any errors earlier
+	 * than during the setkey operation later, where the remaining
+	 * instances are created.
+	 */
+	ctx->child = crypto_alloc_skcipher(ctx->ciphermode, 0, 0);
+	if (IS_ERR(ctx->child)) {
+		ret = PTR_ERR(ctx->child);
+		DMERR("Failed to create skcipher %s. err %d\n",
+		      ctx->ciphermode, ret);
+		goto end;
+	}
+
+	/* Setup the current cipher's request structure */
+	align = crypto_skcipher_alignmask(tfm);
+	align &= ~(crypto_tfm_ctx_alignment() - 1);
+	reqsize = align + sizeof(struct geniv_req_ctx)
+			+ crypto_skcipher_reqsize(ctx->child);
+	crypto_skcipher_set_reqsize(tfm, reqsize);
+
+	/* create memory pool for sub-request structure */
+	psize = sizeof(struct geniv_subreq)
+			+ crypto_skcipher_reqsize(ctx->child);
+	ctx->subreq_pool = mempool_create_kmalloc_pool(MIN_IOS, psize);
+	if (!ctx->subreq_pool) {
+		ret = -ENOMEM;
+		DMERR("Could not allocate crypt sub-request mempool\n");
+	}
+end:
+	return ret;
+}
+
+static void geniv_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct geniv_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (ctx->iv_gen_ops && ctx->iv_gen_ops->dtr)
+		ctx->iv_gen_ops->dtr(ctx);
+
+	mempool_destroy(ctx->subreq_pool);
+	geniv_free_tfms(ctx);
+	kfree(ctx->ciphermode);
+}
+
+static void geniv_free(struct skcipher_instance *inst)
+{
+	struct crypto_skcipher_spawn *spawn = skcipher_instance_ctx(inst);
+
+	crypto_drop_skcipher(spawn);
+	kfree(inst);
+}
+
+static int geniv_create(struct crypto_template *tmpl,
+			struct rtattr **tb, char *algname)
+{
+	struct crypto_attr_type *algt;
+	struct skcipher_instance *inst;
+	struct skcipher_alg *alg;
+	struct crypto_skcipher_spawn *spawn;
+	const char *cipher_name;
+	int err;
+
+	algt = crypto_get_attr_type(tb);
+
+	if (IS_ERR(algt))
+		return PTR_ERR(algt);
+
+	if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
+		return -EINVAL;
+
+	cipher_name = crypto_attr_alg_name(tb[1]);
+
+	if (IS_ERR(cipher_name))
+		return PTR_ERR(cipher_name);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+
+	spawn = skcipher_instance_ctx(inst);
+
+	crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
+	err = crypto_grab_skcipher(spawn, cipher_name, 0,
+				    crypto_requires_sync(algt->type,
+							 algt->mask));
+
+	if (err)
+		goto err_free_inst;
+
+	alg = crypto_spawn_skcipher_alg(spawn);
+
+	err = -EINVAL;
+
+	/* Only support blocks of size which is of a power of 2 */
+	if (!is_power_of_2(alg->base.cra_blocksize))
+		goto err_drop_spawn;
+
+	/* algname: essiv, base.cra_name: cbc(aes) */
+	err = -ENAMETOOLONG;
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
+		     algname, alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
+		goto err_drop_spawn;
+	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		     "%s(%s)", algname, alg->base.cra_driver_name) >=
+	    CRYPTO_MAX_ALG_NAME)
+		goto err_drop_spawn;
+
+	inst->alg.base.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+	inst->alg.base.cra_priority = alg->base.cra_priority;
+	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
+	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
+	inst->alg.base.cra_flags = alg->base.cra_flags & CRYPTO_ALG_ASYNC;
+	inst->alg.ivsize = alg->base.cra_blocksize;
+	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
+	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
+	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+
+	inst->alg.setkey = geniv_setkey;
+	inst->alg.encrypt = geniv_encrypt;
+	inst->alg.decrypt = geniv_decrypt;
+
+	inst->alg.base.cra_ctxsize = sizeof(struct geniv_ctx);
+
+	inst->alg.init = geniv_init_tfm;
+	inst->alg.exit = geniv_exit_tfm;
+
+	inst->free = geniv_free;
+
+	err = skcipher_register_instance(tmpl, inst);
+	if (err)
+		goto err_drop_spawn;
+
+out:
+	return err;
+
+err_drop_spawn:
+	crypto_drop_skcipher(spawn);
+err_free_inst:
+	kfree(inst);
+	goto out;
+}
+
+static int crypto_plain_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, "plain");
+}
+
+static int crypto_plain64_create(struct crypto_template *tmpl,
+				 struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, "plain64");
+}
+
+static int crypto_essiv_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, "essiv");
+}
+
+static int crypto_benbi_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, "benbi");
+}
+
+static int crypto_null_create(struct crypto_template *tmpl,
+			      struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, "null");
+}
+
+static int crypto_lmk_create(struct crypto_template *tmpl,
+			     struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, "lmk");
+}
+
+static int crypto_tcw_create(struct crypto_template *tmpl,
+			     struct rtattr **tb)
+{
+	return geniv_create(tmpl, tb, "tcw");
+}
+
+static struct crypto_template crypto_plain_tmpl = {
+	.name   = "plain",
+	.create = crypto_plain_create,
+	.module = THIS_MODULE,
+};
+
+static struct crypto_template crypto_plain64_tmpl = {
+	.name   = "plain64",
+	.create = crypto_plain64_create,
+	.module = THIS_MODULE,
+};
+
+static struct crypto_template crypto_essiv_tmpl = {
+	.name   = "essiv",
+	.create = crypto_essiv_create,
+	.module = THIS_MODULE,
+};
+
+static struct crypto_template crypto_benbi_tmpl = {
+	.name   = "benbi",
+	.create = crypto_benbi_create,
+	.module = THIS_MODULE,
+};
+
+static struct crypto_template crypto_null_tmpl = {
+	.name   = "null",
+	.create = crypto_null_create,
+	.module = THIS_MODULE,
+};
+
+static struct crypto_template crypto_lmk_tmpl = {
+	.name   = "lmk",
+	.create = crypto_lmk_create,
+	.module = THIS_MODULE,
+};
+
+static struct crypto_template crypto_tcw_tmpl = {
+	.name   = "tcw",
+	.create = crypto_tcw_create,
+	.module = THIS_MODULE,
+};
+
+static int __init geniv_register_algs(void)
+{
+	int err;
+
+	err = crypto_register_template(&crypto_plain_tmpl);
+	if (err)
+		goto out;
+
+	err = crypto_register_template(&crypto_plain64_tmpl);
+	if (err)
+		goto out_undo_plain;
+
+	err = crypto_register_template(&crypto_essiv_tmpl);
+	if (err)
+		goto out_undo_plain64;
+
+	err = crypto_register_template(&crypto_benbi_tmpl);
+	if (err)
+		goto out_undo_essiv;
+
+	err = crypto_register_template(&crypto_null_tmpl);
+	if (err)
+		goto out_undo_benbi;
+
+	err = crypto_register_template(&crypto_lmk_tmpl);
+	if (err)
+		goto out_undo_null;
+
+	err = crypto_register_template(&crypto_tcw_tmpl);
+	if (!err)
+		goto out;
+
+	crypto_unregister_template(&crypto_lmk_tmpl);
+out_undo_null:
+	crypto_unregister_template(&crypto_null_tmpl);
+out_undo_benbi:
+	crypto_unregister_template(&crypto_benbi_tmpl);
+out_undo_essiv:
+	crypto_unregister_template(&crypto_essiv_tmpl);
+out_undo_plain64:
+	crypto_unregister_template(&crypto_plain64_tmpl);
+out_undo_plain:
+	crypto_unregister_template(&crypto_plain_tmpl);
+out:
+	return err;
+}
+
+static void __exit geniv_deregister_algs(void)
+{
+	crypto_unregister_template(&crypto_plain_tmpl);
+	crypto_unregister_template(&crypto_plain64_tmpl);
+	crypto_unregister_template(&crypto_essiv_tmpl);
+	crypto_unregister_template(&crypto_benbi_tmpl);
+	crypto_unregister_template(&crypto_null_tmpl);
+	crypto_unregister_template(&crypto_lmk_tmpl);
+	crypto_unregister_template(&crypto_tcw_tmpl);
+}
+
+/* End of geniv template cipher algorithms */
+
+/*
+ * context holding the current state of a multi-part conversion
+ */
+struct convert_context {
+	struct completion restart;
+	struct bio *bio_in;
+	struct bio *bio_out;
+	struct bvec_iter iter_in;
+	struct bvec_iter iter_out;
+	sector_t cc_sector;
+	atomic_t cc_pending;
+	struct skcipher_request *req;
+};
+
+/*
+ * per bio private data
+ */
+struct dm_crypt_io {
+	struct crypt_config *cc;
+	struct bio *base_bio;
+	struct work_struct work;
+
+	struct convert_context ctx;
 
-static int crypt_iv_tcw_gen(struct crypt_config *cc, u8 *iv,
-			    struct dm_crypt_request *dmreq)
-{
-	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
-	__le64 sector = cpu_to_le64(dmreq->iv_sector);
-	u8 *src;
-	int r = 0;
+	atomic_t io_pending;
+	int error;
+	sector_t sector;
 
-	/* Remove whitening from ciphertext */
-	if (bio_data_dir(dmreq->ctx->bio_in) != WRITE) {
-		src = kmap_atomic(sg_page(&dmreq->sg_in));
-		r = crypt_iv_tcw_whitening(cc, dmreq, src + dmreq->sg_in.offset);
-		kunmap_atomic(src);
-	}
+	struct rb_node rb_node;
+} CRYPTO_MINALIGN_ATTR;
 
-	/* Calculate IV */
-	memcpy(iv, tcw->iv_seed, cc->iv_size);
-	crypto_xor(iv, (u8 *)&sector, 8);
-	if (cc->iv_size > 8)
-		crypto_xor(&iv[8], (u8 *)&sector, cc->iv_size - 8);
+struct dm_crypt_request {
+	struct convert_context *ctx;
+	struct scatterlist *sg_in;
+	struct scatterlist *sg_out;
+	sector_t iv_sector;
+};
 
-	return r;
-}
+struct crypt_config;
 
-static int crypt_iv_tcw_post(struct crypt_config *cc, u8 *iv,
-			     struct dm_crypt_request *dmreq)
-{
-	u8 *dst;
-	int r;
+/*
+ * Crypt: maps a linear range of a block device
+ * and encrypts / decrypts at the same time.
+ */
+enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
+	     DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
 
-	if (bio_data_dir(dmreq->ctx->bio_in) != WRITE)
-		return 0;
+/*
+ * The fields in here must be read only after initialization.
+ */
+struct crypt_config {
+	struct dm_dev *dev;
+	sector_t start;
 
-	/* Apply whitening on ciphertext */
-	dst = kmap_atomic(sg_page(&dmreq->sg_out));
-	r = crypt_iv_tcw_whitening(cc, dmreq, dst + dmreq->sg_out.offset);
-	kunmap_atomic(dst);
+	/*
+	 * pool for per bio private data, crypto requests and
+	 * encryption requeusts/buffer pages
+	 */
+	mempool_t *req_pool;
+	mempool_t *page_pool;
+	struct bio_set *bs;
+	struct mutex bio_alloc_lock;
 
-	return r;
-}
+	struct workqueue_struct *io_queue;
+	struct workqueue_struct *crypt_queue;
 
-static const struct crypt_iv_operations crypt_iv_plain_ops = {
-	.generator = crypt_iv_plain_gen
-};
+	struct task_struct *write_thread;
+	wait_queue_head_t write_thread_wait;
+	struct rb_root write_tree;
 
-static const struct crypt_iv_operations crypt_iv_plain64_ops = {
-	.generator = crypt_iv_plain64_gen
-};
+	char *cipher;
+	char *cipher_string;
+	char *key_string;
 
-static const struct crypt_iv_operations crypt_iv_essiv_ops = {
-	.ctr       = crypt_iv_essiv_ctr,
-	.dtr       = crypt_iv_essiv_dtr,
-	.init      = crypt_iv_essiv_init,
-	.wipe      = crypt_iv_essiv_wipe,
-	.generator = crypt_iv_essiv_gen
-};
+	sector_t iv_offset;
+	unsigned int iv_size;
 
-static const struct crypt_iv_operations crypt_iv_benbi_ops = {
-	.ctr	   = crypt_iv_benbi_ctr,
-	.dtr	   = crypt_iv_benbi_dtr,
-	.generator = crypt_iv_benbi_gen
-};
+	/* ESSIV: struct crypto_cipher *essiv_tfm */
+	void *iv_private;
+	struct crypto_skcipher *tfm;
+	unsigned int tfms_count;
 
-static const struct crypt_iv_operations crypt_iv_null_ops = {
-	.generator = crypt_iv_null_gen
-};
+	/*
+	 * Layout of each crypto request:
+	 *
+	 *   struct skcipher_request
+	 *      context
+	 *      padding
+	 *   struct dm_crypt_request
+	 *      padding
+	 *   IV
+	 *
+	 * The padding is added so that dm_crypt_request and the IV are
+	 * correctly aligned.
+	 */
+	unsigned int dmreq_start;
 
-static const struct crypt_iv_operations crypt_iv_lmk_ops = {
-	.ctr	   = crypt_iv_lmk_ctr,
-	.dtr	   = crypt_iv_lmk_dtr,
-	.init	   = crypt_iv_lmk_init,
-	.wipe	   = crypt_iv_lmk_wipe,
-	.generator = crypt_iv_lmk_gen,
-	.post	   = crypt_iv_lmk_post
-};
+	unsigned int per_bio_data_size;
 
-static const struct crypt_iv_operations crypt_iv_tcw_ops = {
-	.ctr	   = crypt_iv_tcw_ctr,
-	.dtr	   = crypt_iv_tcw_dtr,
-	.init	   = crypt_iv_tcw_init,
-	.wipe	   = crypt_iv_tcw_wipe,
-	.generator = crypt_iv_tcw_gen,
-	.post	   = crypt_iv_tcw_post
+	unsigned long flags;
+	unsigned int key_size;
+	unsigned int key_parts;      /* independent parts in key buffer */
+	unsigned int key_extra_size; /* additional keys length */
+	u8 key[0];
 };
 
+static void clone_init(struct dm_crypt_io *, struct bio *);
+static void kcryptd_queue_crypt(struct dm_crypt_io *io);
+static u8 *iv_of_dmreq(struct crypt_config *cc, struct dm_crypt_request *dmreq);
+
 static void crypt_convert_init(struct crypt_config *cc,
 			       struct convert_context *ctx,
 			       struct bio *bio_out, struct bio *bio_in,
@@ -837,53 +1700,7 @@ static u8 *iv_of_dmreq(struct crypt_config *cc,
 		       struct dm_crypt_request *dmreq)
 {
 	return (u8 *)ALIGN((unsigned long)(dmreq + 1),
-		crypto_skcipher_alignmask(any_tfm(cc)) + 1);
-}
-
-static int crypt_convert_block(struct crypt_config *cc,
-			       struct convert_context *ctx,
-			       struct skcipher_request *req)
-{
-	struct bio_vec bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in);
-	struct bio_vec bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
-	struct dm_crypt_request *dmreq;
-	u8 *iv;
-	int r;
-
-	dmreq = dmreq_of_req(cc, req);
-	iv = iv_of_dmreq(cc, dmreq);
-
-	dmreq->iv_sector = ctx->cc_sector;
-	dmreq->ctx = ctx;
-	sg_init_table(&dmreq->sg_in, 1);
-	sg_set_page(&dmreq->sg_in, bv_in.bv_page, 1 << SECTOR_SHIFT,
-		    bv_in.bv_offset);
-
-	sg_init_table(&dmreq->sg_out, 1);
-	sg_set_page(&dmreq->sg_out, bv_out.bv_page, 1 << SECTOR_SHIFT,
-		    bv_out.bv_offset);
-
-	bio_advance_iter(ctx->bio_in, &ctx->iter_in, 1 << SECTOR_SHIFT);
-	bio_advance_iter(ctx->bio_out, &ctx->iter_out, 1 << SECTOR_SHIFT);
-
-	if (cc->iv_gen_ops) {
-		r = cc->iv_gen_ops->generator(cc, iv, dmreq);
-		if (r < 0)
-			return r;
-	}
-
-	skcipher_request_set_crypt(req, &dmreq->sg_in, &dmreq->sg_out,
-				   1 << SECTOR_SHIFT, iv);
-
-	if (bio_data_dir(ctx->bio_in) == WRITE)
-		r = crypto_skcipher_encrypt(req);
-	else
-		r = crypto_skcipher_decrypt(req);
-
-	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
-		r = cc->iv_gen_ops->post(cc, iv, dmreq);
-
-	return r;
+		crypto_skcipher_alignmask(cc->tfm) + 1);
 }
 
 static void kcryptd_async_done(struct crypto_async_request *async_req,
@@ -892,12 +1709,10 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
 static void crypt_alloc_req(struct crypt_config *cc,
 			    struct convert_context *ctx)
 {
-	unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1);
-
 	if (!ctx->req)
 		ctx->req = mempool_alloc(cc->req_pool, GFP_NOIO);
 
-	skcipher_request_set_tfm(ctx->req, cc->tfms[key_index]);
+	skcipher_request_set_tfm(ctx->req, cc->tfm);
 
 	/*
 	 * Use REQ_MAY_BACKLOG so a cipher driver internally backlogs
@@ -920,57 +1735,98 @@ static void crypt_free_req(struct crypt_config *cc,
 /*
  * Encrypt / decrypt data from one bio to another one (can be the same one)
  */
-static int crypt_convert(struct crypt_config *cc,
-			 struct convert_context *ctx)
+
+static int crypt_convert_bio(struct crypt_config *cc,
+			     struct convert_context *ctx)
 {
+	unsigned int cryptlen, n1, n2, nents, i = 0, bytes = 0;
+	struct skcipher_request *req;
+	struct dm_crypt_request *dmreq;
+	struct geniv_req_info rinfo;
+	struct bio_vec bv_in, bv_out;
 	int r;
+	u8 *iv;
 
 	atomic_set(&ctx->cc_pending, 1);
+	crypt_alloc_req(cc, ctx);
+
+	req = ctx->req;
+	dmreq = dmreq_of_req(cc, req);
+	iv = iv_of_dmreq(cc, dmreq);
 
-	while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
+	n1 = bio_segments(ctx->bio_in);
+	n2 = bio_segments(ctx->bio_out);
+	nents = n1 > n2 ? n1 : n2;
+	nents = nents > MAX_SG_LIST ? MAX_SG_LIST : nents;
+	cryptlen = ctx->iter_in.bi_size;
 
-		crypt_alloc_req(cc, ctx);
+	DMDEBUG("dm-crypt:%s: segments:[in=%u, out=%u] bi_size=%u\n",
+		bio_data_dir(ctx->bio_in) == WRITE ? "write" : "read",
+		n1, n2, cryptlen);
 
-		atomic_inc(&ctx->cc_pending);
+	dmreq->sg_in  = kcalloc(nents, sizeof(struct scatterlist), GFP_KERNEL);
+	dmreq->sg_out = kcalloc(nents, sizeof(struct scatterlist), GFP_KERNEL);
+	if (!dmreq->sg_in || !dmreq->sg_out) {
+		DMERR("dm-crypt: Failed to allocate scatterlist\n");
+		r = -ENOMEM;
+		goto end;
+	}
+	dmreq->ctx = ctx;
 
-		r = crypt_convert_block(cc, ctx, ctx->req);
+	sg_init_table(dmreq->sg_in, nents);
+	sg_init_table(dmreq->sg_out, nents);
 
-		switch (r) {
-		/*
-		 * The request was queued by a crypto driver
-		 * but the driver request queue is full, let's wait.
-		 */
-		case -EBUSY:
-			wait_for_completion(&ctx->restart);
-			reinit_completion(&ctx->restart);
-			/* fall through */
-		/*
-		 * The request is queued and processed asynchronously,
-		 * completion function kcryptd_async_done() will be called.
-		 */
-		case -EINPROGRESS:
-			ctx->req = NULL;
-			ctx->cc_sector++;
-			continue;
-		/*
-		 * The request was already processed (synchronously).
-		 */
-		case 0:
-			atomic_dec(&ctx->cc_pending);
-			ctx->cc_sector++;
-			cond_resched();
-			continue;
+	while (ctx->iter_in.bi_size && ctx->iter_out.bi_size && i < nents) {
+		bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in);
+		bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
 
-		/* There was an error while processing the request. */
-		default:
-			atomic_dec(&ctx->cc_pending);
-			return r;
-		}
+		sg_set_page(&dmreq->sg_in[i], bv_in.bv_page, bv_in.bv_len,
+			    bv_in.bv_offset);
+		sg_set_page(&dmreq->sg_out[i], bv_out.bv_page, bv_out.bv_len,
+			    bv_out.bv_offset);
+
+		bio_advance_iter(ctx->bio_in, &ctx->iter_in, bv_in.bv_len);
+		bio_advance_iter(ctx->bio_out, &ctx->iter_out, bv_out.bv_len);
+
+		bytes += bv_in.bv_len;
+		i++;
 	}
 
-	return 0;
+	DMDEBUG("dm-crypt: Processed %u of %u bytes\n", bytes, cryptlen);
+
+	rinfo.is_write = (bio_data_dir(ctx->bio_in) == WRITE);
+	rinfo.iv_sector = ctx->cc_sector;
+	rinfo.nents = nents;
+	rinfo.iv = iv;
+
+	skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
+				   bytes, &rinfo);
+
+	if (bio_data_dir(ctx->bio_in) == WRITE)
+		r = crypto_skcipher_encrypt(req);
+	else
+		r = crypto_skcipher_decrypt(req);
+
+	switch (r) {
+	/* The request was queued so wait. */
+	case -EBUSY:
+		wait_for_completion(&ctx->restart);
+		reinit_completion(&ctx->restart);
+		/* fall through */
+	/*
+	 * The request is queued and processed asynchronously,
+	 * completion function kcryptd_async_done() is called.
+	 */
+	case -EINPROGRESS:
+		ctx->req = NULL;
+		cond_resched();
+		break;
+	}
+end:
+	return r;
 }
 
+
 static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone);
 
 /*
@@ -1070,11 +1926,17 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 {
 	struct crypt_config *cc = io->cc;
 	struct bio *base_bio = io->base_bio;
+	struct dm_crypt_request *dmreq;
 	int error = io->error;
 
 	if (!atomic_dec_and_test(&io->io_pending))
 		return;
 
+	dmreq = dmreq_of_req(cc, io->ctx.req);
+	DMDEBUG("dm-crypt: Freeing scatterlists [sync]\n");
+	kfree(dmreq->sg_in);
+	kfree(dmreq->sg_out);
+
 	if (io->ctx.req)
 		crypt_free_req(cc, io->ctx.req, base_bio);
 
@@ -1313,7 +2175,7 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
 	sector += bio_sectors(clone);
 
 	crypt_inc_pending(io);
-	r = crypt_convert(cc, &io->ctx);
+	r = crypt_convert_bio(cc, &io->ctx);
 	if (r)
 		io->error = -EIO;
 	crypt_finished = atomic_dec_and_test(&io->ctx.cc_pending);
@@ -1343,7 +2205,8 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io)
 	crypt_convert_init(cc, &io->ctx, io->base_bio, io->base_bio,
 			   io->sector);
 
-	r = crypt_convert(cc, &io->ctx);
+	r = crypt_convert_bio(cc, &io->ctx);
+
 	if (r < 0)
 		io->error = -EIO;
 
@@ -1371,12 +2234,13 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
 		return;
 	}
 
-	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
-		error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
-
 	if (error < 0)
 		io->error = -EIO;
 
+	DMDEBUG("dm-crypt: Freeing scatterlists and request struct [async]\n");
+	kfree(dmreq->sg_in);
+	kfree(dmreq->sg_out);
+
 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
 
 	if (!atomic_dec_and_test(&ctx->cc_pending))
@@ -1430,62 +2294,38 @@ static int crypt_decode_key(u8 *key, char *hex, unsigned int size)
 	return 0;
 }
 
-static void crypt_free_tfms(struct crypt_config *cc)
+static void crypt_free_tfm(struct crypt_config *cc)
 {
-	unsigned i;
-
-	if (!cc->tfms)
+	if (!cc->tfm)
 		return;
 
-	for (i = 0; i < cc->tfms_count; i++)
-		if (cc->tfms[i] && !IS_ERR(cc->tfms[i])) {
-			crypto_free_skcipher(cc->tfms[i]);
-			cc->tfms[i] = NULL;
-		}
+	if (cc->tfm && !IS_ERR(cc->tfm))
+		crypto_free_skcipher(cc->tfm);
 
-	kfree(cc->tfms);
-	cc->tfms = NULL;
+	cc->tfm = NULL;
 }
 
-static int crypt_alloc_tfms(struct crypt_config *cc, char *ciphermode)
+static int crypt_alloc_tfm(struct crypt_config *cc, char *ciphermode)
 {
-	unsigned i;
 	int err;
 
-	cc->tfms = kzalloc(cc->tfms_count * sizeof(struct crypto_skcipher *),
-			   GFP_KERNEL);
-	if (!cc->tfms)
-		return -ENOMEM;
-
-	for (i = 0; i < cc->tfms_count; i++) {
-		cc->tfms[i] = crypto_alloc_skcipher(ciphermode, 0, 0);
-		if (IS_ERR(cc->tfms[i])) {
-			err = PTR_ERR(cc->tfms[i]);
-			crypt_free_tfms(cc);
-			return err;
-		}
+	cc->tfm = crypto_alloc_skcipher(ciphermode, 0, 0);
+	if (IS_ERR(cc->tfm)) {
+		err = PTR_ERR(cc->tfm);
+		crypt_free_tfm(cc);
+		return err;
 	}
 
 	return 0;
 }
 
-static int crypt_setkey(struct crypt_config *cc)
+static inline int crypt_setkey(struct crypt_config *cc, enum setkey_op keyop,
+			       char *ivopts)
 {
-	unsigned subkey_size;
-	int err = 0, i, r;
-
-	/* Ignore extra keys (which are used for IV etc) */
-	subkey_size = (cc->key_size - cc->key_extra_size) >> ilog2(cc->tfms_count);
-
-	for (i = 0; i < cc->tfms_count; i++) {
-		r = crypto_skcipher_setkey(cc->tfms[i],
-					   cc->key + (i * subkey_size),
-					   subkey_size);
-		if (r)
-			err = r;
-	}
+	DECLARE_GENIV_KEY(kinfo, keyop, cc->tfms_count, cc->key, cc->key_size,
+			  cc->key_parts, ivopts);
 
-	return err;
+	return crypto_skcipher_setkey(cc->tfm, (u8 *) &kinfo, sizeof(kinfo));
 }
 
 #ifdef CONFIG_KEYS
@@ -1498,7 +2338,9 @@ static bool contains_whitespace(const char *str)
 	return false;
 }
 
-static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string)
+static int crypt_set_keyring_key(struct crypt_config *cc,
+				 const char *key_string,
+				 enum setkey_op keyop, char *ivopts)
 {
 	char *new_key_string, *key_desc;
 	int ret;
@@ -1559,7 +2401,7 @@ static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string
 	/* clear the flag since following operations may invalidate previously valid key */
 	clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);
 
-	ret = crypt_setkey(cc);
+	ret = crypt_setkey(cc, keyop, ivopts);
 
 	/* wipe the kernel key payload copy in each case */
 	memset(cc->key, 0, cc->key_size * sizeof(u8));
@@ -1599,7 +2441,9 @@ static int get_key_size(char **key_string)
 
 #else
 
-static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string)
+static int crypt_set_keyring_key(struct crypt_config *cc,
+				 const char *key_string,
+				 enum setkey_op keyop, char *ivopts)
 {
 	return -EINVAL;
 }
@@ -1611,7 +2455,8 @@ static int get_key_size(char **key_string)
 
 #endif
 
-static int crypt_set_key(struct crypt_config *cc, char *key)
+static int crypt_set_key(struct crypt_config *cc, enum setkey_op keyop,
+			 char *key, char *ivopts)
 {
 	int r = -EINVAL;
 	int key_string_len = strlen(key);
@@ -1622,7 +2467,7 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
 
 	/* ':' means the key is in kernel keyring, short-circuit normal key processing */
 	if (key[0] == ':') {
-		r = crypt_set_keyring_key(cc, key + 1);
+		r = crypt_set_keyring_key(cc, key + 1, keyop, ivopts);
 		goto out;
 	}
 
@@ -1636,7 +2481,7 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
 	if (cc->key_size && crypt_decode_key(cc->key, key, cc->key_size) < 0)
 		goto out;
 
-	r = crypt_setkey(cc);
+	r = crypt_setkey(cc, keyop, ivopts);
 	if (!r)
 		set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
 
@@ -1647,6 +2492,17 @@ static int crypt_set_key(struct crypt_config *cc, char *key)
 	return r;
 }
 
+static int crypt_init_key(struct dm_target *ti, char *key, char *ivopts)
+{
+	struct crypt_config *cc = ti->private;
+	int ret;
+
+	ret = crypt_set_key(cc, SETKEY_OP_INIT, key, ivopts);
+	if (ret < 0)
+		ti->error = "Error decoding and setting key";
+	return ret;
+}
+
 static int crypt_wipe_key(struct crypt_config *cc)
 {
 	clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);
@@ -1654,7 +2510,7 @@ static int crypt_wipe_key(struct crypt_config *cc)
 	kzfree(cc->key_string);
 	cc->key_string = NULL;
 
-	return crypt_setkey(cc);
+	return crypt_setkey(cc, SETKEY_OP_WIPE, NULL);
 }
 
 static void crypt_dtr(struct dm_target *ti)
@@ -1674,7 +2530,7 @@ static void crypt_dtr(struct dm_target *ti)
 	if (cc->crypt_queue)
 		destroy_workqueue(cc->crypt_queue);
 
-	crypt_free_tfms(cc);
+	crypt_free_tfm(cc);
 
 	if (cc->bs)
 		bioset_free(cc->bs);
@@ -1682,9 +2538,6 @@ static void crypt_dtr(struct dm_target *ti)
 	mempool_destroy(cc->page_pool);
 	mempool_destroy(cc->req_pool);
 
-	if (cc->iv_gen_ops && cc->iv_gen_ops->dtr)
-		cc->iv_gen_ops->dtr(cc);
-
 	if (cc->dev)
 		dm_put_device(ti, cc->dev);
 
@@ -1762,22 +2615,30 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 	if (!cipher_api)
 		goto bad_mem;
 
-	ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
-		       "%s(%s)", chainmode, cipher);
+create_cipher:
+	/* For those ciphers which do not support IVs,
+	 * use the 'null' template cipher
+	 */
+
+	if (!ivmode)
+		ivmode = "null";
+
+	ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME, "%s(%s(%s))",
+		       ivmode, chainmode, cipher);
 	if (ret < 0) {
 		kfree(cipher_api);
 		goto bad_mem;
 	}
 
 	/* Allocate cipher */
-	ret = crypt_alloc_tfms(cc, cipher_api);
+	ret = crypt_alloc_tfm(cc, cipher_api);
 	if (ret < 0) {
 		ti->error = "Error allocating crypto tfm";
 		goto bad;
 	}
 
 	/* Initialize IV */
-	cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));
+	cc->iv_size = crypto_skcipher_ivsize(cc->tfm);
 	if (cc->iv_size)
 		/* at least a 64 bit sector number should fit in our buffer */
 		cc->iv_size = max(cc->iv_size,
@@ -1785,23 +2646,10 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 	else if (ivmode) {
 		DMWARN("Selected cipher does not support IVs");
 		ivmode = NULL;
+		goto create_cipher;
 	}
 
-	/* Choose ivmode, see comments at iv code. */
-	if (ivmode == NULL)
-		cc->iv_gen_ops = NULL;
-	else if (strcmp(ivmode, "plain") == 0)
-		cc->iv_gen_ops = &crypt_iv_plain_ops;
-	else if (strcmp(ivmode, "plain64") == 0)
-		cc->iv_gen_ops = &crypt_iv_plain64_ops;
-	else if (strcmp(ivmode, "essiv") == 0)
-		cc->iv_gen_ops = &crypt_iv_essiv_ops;
-	else if (strcmp(ivmode, "benbi") == 0)
-		cc->iv_gen_ops = &crypt_iv_benbi_ops;
-	else if (strcmp(ivmode, "null") == 0)
-		cc->iv_gen_ops = &crypt_iv_null_ops;
-	else if (strcmp(ivmode, "lmk") == 0) {
-		cc->iv_gen_ops = &crypt_iv_lmk_ops;
+	if (strcmp(ivmode, "lmk") == 0) {
 		/*
 		 * Version 2 and 3 is recognised according
 		 * to length of provided multi-key string.
@@ -1813,39 +2661,14 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 			cc->key_extra_size = cc->key_size / cc->key_parts;
 		}
 	} else if (strcmp(ivmode, "tcw") == 0) {
-		cc->iv_gen_ops = &crypt_iv_tcw_ops;
 		cc->key_parts += 2; /* IV + whitening */
 		cc->key_extra_size = cc->iv_size + TCW_WHITENING_SIZE;
-	} else {
-		ret = -EINVAL;
-		ti->error = "Invalid IV mode";
-		goto bad;
 	}
 
 	/* Initialize and set key */
-	ret = crypt_set_key(cc, key);
-	if (ret < 0) {
-		ti->error = "Error decoding and setting key";
+	ret = crypt_init_key(ti, key, ivopts);
+	if (ret < 0)
 		goto bad;
-	}
-
-	/* Allocate IV */
-	if (cc->iv_gen_ops && cc->iv_gen_ops->ctr) {
-		ret = cc->iv_gen_ops->ctr(cc, ti, ivopts);
-		if (ret < 0) {
-			ti->error = "Error creating IV";
-			goto bad;
-		}
-	}
-
-	/* Initialize IV (set keys for ESSIV etc) */
-	if (cc->iv_gen_ops && cc->iv_gen_ops->init) {
-		ret = cc->iv_gen_ops->init(cc);
-		if (ret < 0) {
-			ti->error = "Error initialising IV";
-			goto bad;
-		}
-	}
 
 	ret = 0;
 bad:
@@ -1901,20 +2724,20 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		goto bad;
 
 	cc->dmreq_start = sizeof(struct skcipher_request);
-	cc->dmreq_start += crypto_skcipher_reqsize(any_tfm(cc));
+	cc->dmreq_start += crypto_skcipher_reqsize(cc->tfm);
 	cc->dmreq_start = ALIGN(cc->dmreq_start, __alignof__(struct dm_crypt_request));
 
-	if (crypto_skcipher_alignmask(any_tfm(cc)) < CRYPTO_MINALIGN) {
+	if (crypto_skcipher_alignmask(cc->tfm) < CRYPTO_MINALIGN) {
 		/* Allocate the padding exactly */
 		iv_size_padding = -(cc->dmreq_start + sizeof(struct dm_crypt_request))
-				& crypto_skcipher_alignmask(any_tfm(cc));
+				& crypto_skcipher_alignmask(cc->tfm);
 	} else {
 		/*
 		 * If the cipher requires greater alignment than kmalloc
 		 * alignment, we don't know the exact position of the
 		 * initialization vector. We must assume worst case.
 		 */
-		iv_size_padding = crypto_skcipher_alignmask(any_tfm(cc));
+		iv_size_padding = crypto_skcipher_alignmask(cc->tfm);
 	}
 
 	ret = -ENOMEM;
@@ -2072,8 +2895,9 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
 	if (bio_data_dir(io->base_bio) == READ) {
 		if (kcryptd_io_read(io, GFP_NOWAIT))
 			kcryptd_queue_read(io);
-	} else
+	} else {
 		kcryptd_queue_crypt(io);
+	}
 
 	return DM_MAPIO_SUBMITTED;
 }
@@ -2155,7 +2979,7 @@ static void crypt_resume(struct dm_target *ti)
 static int crypt_message(struct dm_target *ti, unsigned argc, char **argv)
 {
 	struct crypt_config *cc = ti->private;
-	int key_size, ret = -EINVAL;
+	int key_size;
 
 	if (argc < 2)
 		goto error;
@@ -2173,19 +2997,9 @@ static int crypt_message(struct dm_target *ti, unsigned argc, char **argv)
 				return -EINVAL;
 			}
 
-			ret = crypt_set_key(cc, argv[2]);
-			if (ret)
-				return ret;
-			if (cc->iv_gen_ops && cc->iv_gen_ops->init)
-				ret = cc->iv_gen_ops->init(cc);
-			return ret;
+			return crypt_set_key(cc, SETKEY_OP_SET, argv[2], NULL);
 		}
 		if (argc == 2 && !strcasecmp(argv[1], "wipe")) {
-			if (cc->iv_gen_ops && cc->iv_gen_ops->wipe) {
-				ret = cc->iv_gen_ops->wipe(cc);
-				if (ret)
-					return ret;
-			}
 			return crypt_wipe_key(cc);
 		}
 	}
@@ -2216,7 +3030,7 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 
 static struct target_type crypt_target = {
 	.name   = "crypt",
-	.version = {1, 15, 0},
+	.version = {1, 16, 0},
 	.module = THIS_MODULE,
 	.ctr    = crypt_ctr,
 	.dtr    = crypt_dtr,
@@ -2234,6 +3048,7 @@ static int __init dm_crypt_init(void)
 {
 	int r;
 
+	geniv_register_algs();
 	r = dm_register_target(&crypt_target);
 	if (r < 0)
 		DMERR("register failed %d", r);
@@ -2244,6 +3059,7 @@ static int __init dm_crypt_init(void)
 static void __exit dm_crypt_exit(void)
 {
 	dm_unregister_target(&crypt_target);
+	geniv_deregister_algs();
 }
 
 module_init(dm_crypt_init);
diff --git a/include/crypto/geniv.h b/include/crypto/geniv.h
new file mode 100644
index 0000000..b472507
--- /dev/null
+++ b/include/crypto/geniv.h
@@ -0,0 +1,47 @@
+/*
+ * geniv: common interface for IV generation algorithms
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#ifndef _CRYPTO_GENIV_
+#define _CRYPTO_GENIV_
+
+#define SECTOR_SIZE		(1 << SECTOR_SHIFT)
+
+enum setkey_op {
+	SETKEY_OP_INIT,
+	SETKEY_OP_SET,
+	SETKEY_OP_WIPE,
+};
+
+struct geniv_key_info {
+	enum setkey_op keyop;
+	unsigned int tfms_count;
+	u8 *key;
+	unsigned int key_size;
+	unsigned int key_parts;
+	char *ivopts;
+};
+
+#define DECLARE_GENIV_KEY(c, op, n, k, sz, kp, opts)	\
+	struct geniv_key_info c = {			\
+		.keyop = op,				\
+		.tfms_count = n,			\
+		.key = k,				\
+		.key_size = sz,				\
+		.key_parts = kp,			\
+		.ivopts = opts,				\
+	}
+
+struct geniv_req_info {
+	bool is_write;
+	sector_t iv_sector;
+	unsigned int nents;
+	u8 *iv;
+};
+
+#endif
-- 
Binoy Jayan

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-02-07 10:35 [RFC PATCH v4] IV Generation algorithms for dm-crypt Binoy Jayan
  2017-02-07 10:35 ` [RFC PATCH v4] crypto: Add IV generation algorithms Binoy Jayan
@ 2017-02-08  7:32 ` Gilad Ben-Yossef
  2017-02-09  8:30   ` Binoy Jayan
  2017-02-22  6:12   ` Binoy Jayan
  1 sibling, 2 replies; 17+ messages in thread
From: Gilad Ben-Yossef @ 2017-02-08  7:32 UTC (permalink / raw)
  To: Binoy Jayan
  Cc: Oded, Ofir, Herbert Xu, David S. Miller, linux-crypto,
	Mark Brown, Arnd Bergmann, Linux kernel mailing list,
	Alasdair Kergon, Mike Snitzer, dm-devel, Shaohua Li, linux-raid,
	Rajendra, Milan Broz

On Tue, Feb 7, 2017 at 12:35 PM, Binoy Jayan <binoy.jayan@linaro.org> wrote:
> ===============================================================================
> dm-crypt optimization for larger block sizes
> ===============================================================================
>
> Currently, the iv generation algorithms are implemented in dm-crypt.c. The goal
> is to move these algorithms from the dm layer to the kernel crypto layer by
> implementing them as template ciphers so they can be used in relation with
> algorithms like aes, and with multiple modes like cbc, ecb etc. As part of this
> patchset, the iv-generation code is moved from the dm layer to the crypto layer
> and adapt the dm-layer to send a whole 'bio' (as defined in the block layer)
> at a time. Each bio contains the in memory representation of physically
> contiguous disk blocks. Since the bio itself may not be contiguous in main
> memory, the dm layer sets up a chained scatterlist of these blocks split into
> physically contiguous segments in memory so that DMA can be performed.
...
> Binoy Jayan (1):
>   crypto: Add IV generation algorithms
>
>  drivers/md/dm-crypt.c  | 1894 ++++++++++++++++++++++++++++++++++--------------
>  include/crypto/geniv.h |   47 ++
>  2 files changed, 1402 insertions(+), 539 deletions(-)
>  create mode 100644 include/crypto/geniv.h

Ran Bonnie++ on it last night (LUKS mode, plain64, QEMU virt platform,
Arm64) and it works just fine.

Tested-by: Gilad Ben-Yossef <gilad@benyossef.com>

Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-02-08  7:32 ` [RFC PATCH v4] IV Generation algorithms for dm-crypt Gilad Ben-Yossef
@ 2017-02-09  8:30   ` Binoy Jayan
  2017-02-22  6:12   ` Binoy Jayan
  1 sibling, 0 replies; 17+ messages in thread
From: Binoy Jayan @ 2017-02-09  8:30 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Oded, Ofir, Herbert Xu, David S. Miller, linux-crypto,
	Mark Brown, Arnd Bergmann, Linux kernel mailing list,
	Alasdair Kergon, Mike Snitzer, dm-devel, Shaohua Li, linux-raid,
	Rajendra, Milan Broz

On 8 February 2017 at 13:02, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
>
> Ran Bonnie++ on it last night  (Luks mode, plain64, Qemu Virt platform
> Arm64) and it works just fine.
>
> Tested-by: Gilad Ben-Yossef <gilad@benyossef.com>
>

Hi Gilad,

Thank you for testing it. Do you have access to a device with crypto
hardware that has IV generation capability, and associated drivers for,
let's say, aes with cbc or any other mode? I was wondering if I could
adapt it to work with dm-crypt by generating the IVs automatically.
Please let me know your thoughts.

Thanks,
Binoy

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-02-08  7:32 ` [RFC PATCH v4] IV Generation algorithms for dm-crypt Gilad Ben-Yossef
  2017-02-09  8:30   ` Binoy Jayan
@ 2017-02-22  6:12   ` Binoy Jayan
  2017-02-28 21:05     ` Milan Broz
  1 sibling, 1 reply; 17+ messages in thread
From: Binoy Jayan @ 2017-02-22  6:12 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Oded, Ofir, Herbert Xu, David S. Miller, linux-crypto,
	Mark Brown, Arnd Bergmann, Linux kernel mailing list,
	Alasdair Kergon, Mike Snitzer, dm-devel, Shaohua Li, linux-raid,
	Rajendra, Milan Broz

Hi Herbert,

On 8 February 2017 at 13:02, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
> On Tue, Feb 7, 2017 at 12:35 PM, Binoy Jayan <binoy.jayan@linaro.org> wrote:
>> ===============================================================================
>> dm-crypt optimization for larger block sizes
>> ===============================================================================
>>
>> Currently, the iv generation algorithms are implemented in dm-crypt.c. The goal
>> is to move these algorithms from the dm layer to the kernel crypto layer by
>> implementing them as template ciphers so they can be used in relation with
>> algorithms like aes, and with multiple modes like cbc, ecb etc. As part of this
>> patchset, the iv-generation code is moved from the dm layer to the crypto layer
>> and adapt the dm-layer to send a whole 'bio' (as defined in the block layer)
>> at a time. Each bio contains the in memory representation of physically
>> contiguous disk blocks. Since the bio itself may not be contiguous in main
>> memory, the dm layer sets up a chained scatterlist of these blocks split into
>> physically contiguous segments in memory so that DMA can be performed.

> Ran Bonnie++ on it last night  (Luks mode, plain64, Qemu Virt platform
> Arm64) and it works just fine.
>
> Tested-by: Gilad Ben-Yossef <gilad@benyossef.com>

I was wondering whether this is close to being ready for submission
(apart from the testmgr.c changes) or whether I need to make some
changes to bring it in line with the IPsec offload?

Thanks,
Binoy

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-02-22  6:12   ` Binoy Jayan
@ 2017-02-28 21:05     ` Milan Broz
  2017-03-01  8:30       ` Gilad Ben-Yossef
  2017-03-01 18:04       ` Binoy Jayan
  0 siblings, 2 replies; 17+ messages in thread
From: Milan Broz @ 2017-02-28 21:05 UTC (permalink / raw)
  To: Binoy Jayan
  Cc: Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Ondrej Mosnacek, linux-raid,
	Gilad Ben-Yossef, dm-devel, Mark Brown, Arnd Bergmann,
	linux-crypto, Shaohua Li, David S. Miller, Alasdair Kergon, Ofir

On 02/22/2017 07:12 AM, Binoy Jayan wrote:
> 
> I was wondering if this is near to be ready for submission (apart from
> the testmgr.c
> changes) or I need to make some changes to make it similar to the IPSec offload?

I just tried this and, except that it registers the IV template again for every new device, it works...
(After a while you have many duplicate entries in /proc/crypto.)

But I would like to see some summary of why such a big patch is needed in the first place.
(During internal discussions it seemed that people are already lost in the mails and
patches here, so Ondra promised to send a summary mail here soon.)

IIRC the initial problem was dmcrypt performance on some embedded
crypto processors that cannot cope with small crypto requests effectively.

Do you have some real performance numbers that prove that such a patch is adequate?

I would really like to see the performance issue fixed but I am really not sure
this approach works for everyone. It would be better to avoid repeating this exercise later.
IIRC Ondra's "bulk" mode, despite being rejected, shows that there is potential
to speed up things even for crypto drivers that do not support their own IV generators.

I like that the patch is now contained inside dmcrypt, but it still exposes IVs that
were designed just for old, insecure, compatibility-only containers.

I really do not think every piece of compatibility-only crap must be accessible through the crypto API.
(I wrote the dmcrypt lmk and tcw compatibility IVs and I would never have done them this way
if I had known they would be accessible outside of dmcrypt internals...)
Even ESSIV is something that was born to fix predictable IVs (CBC watermarking
attacks) for disk encryption only; there is no reason to expose it outside of disk encryption.

Milan

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-02-28 21:05     ` Milan Broz
@ 2017-03-01  8:30       ` Gilad Ben-Yossef
  2017-03-01  9:29         ` Milan Broz
  2017-03-01 18:04       ` Binoy Jayan
  1 sibling, 1 reply; 17+ messages in thread
From: Gilad Ben-Yossef @ 2017-03-01  8:30 UTC (permalink / raw)
  To: Milan Broz
  Cc: Binoy Jayan, Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Ondrej Mosnacek, linux-raid, dm-devel,
	Mark Brown, Arnd Bergmann, linux-crypto, Shaohua Li,
	David S. Miller, Alasdair Kergon, Ofir

On Tue, Feb 28, 2017 at 11:05 PM, Milan Broz <gmazyland@gmail.com> wrote:
>
> On 02/22/2017 07:12 AM, Binoy Jayan wrote:
> >
> > I was wondering if this is near to be ready for submission (apart from
> > the testmgr.c
> > changes) or I need to make some changes to make it similar to the IPSec offload?
>
> I just tried this and except it registers the IV for every new device again, it works...
> (After a while you have many duplicate entries in /proc/crypto.)
>
> But I would like to see some summary why such a big patch is needed in the first place.
> (During an internal discussions seems that people are already lost in mails and
> patches here, so Ondra promised me to send some summary mail soon here.)
>
> IIRC the first initial problem was dmcrypt performance on some embedded
> crypto processors that are not able to cope with small crypto requests effectively.
>
>
> Do you have some real performance numbers that proves that such a patch is adequate?
>
> I would really like to see the performance issue fixed but I am really not sure
> this approach works for everyone. It would be better to avoid repeating this exercise later.
> IIRC Ondra's "bulk" mode, despite rejected, shows that there is a potential
> to speedup things even for crypt drivers that do not support own IV generators.
>

AFAIK the problem that we are trying to solve is that if the IV is
generated outside the crypto API domain then you are forced to have one
invocation of the crypto API per block, because you need to provide the
IV for each block.

By putting the IV generation responsibility in the Crypto API we open
the way to do a single invocation
of the crypto API for a whole sequence of blocks.
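
As a rough illustration (condensed from the patch above, not verbatim),
the call pattern changes like this:

	/*
	 * Before: one skcipher request per 512-byte sector, with the IV
	 * computed by dm-crypt itself:
	 *
	 *	for each 512-byte sector in the bio:
	 *		cc->iv_gen_ops->generator(cc, iv, dmreq);
	 *		skcipher_request_set_crypt(req, &sg_in, &sg_out,
	 *					   1 << SECTOR_SHIFT, iv);
	 *		crypto_skcipher_encrypt(req);
	 *
	 * After: one request per bio; the geniv template walks the
	 * scatterlists in sector-sized steps and derives each IV from
	 * rinfo.iv_sector:
	 *
	 *	rinfo.iv_sector = ctx->cc_sector;
	 *	rinfo.nents     = nents;
	 *	skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
	 *				   bytes, &rinfo);
	 *	crypto_skcipher_encrypt(req);
	 */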

For a software implementation of XTS this doesn't matter much - but for
hardware based XTS providers that can do the IV generation themselves
it's the difference between a sequence of small invocations, with all
the overhead that entails, and a single big invocation - and this can
be big.

This led some vendors to ship hacked-up versions of dm-crypt to match
the specific crypto hardware they were using, or so I've heard at least -
I didn't see the code myself.

I believe Binoy is trying to address this in a generic, upstream-worthy
way instead.

Anyway, you are only supposed to see a difference when using a
hardware based XTS provider algo that supports IV generation.
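
To make that concrete, here is a sketch of how such a provider could
plug in (hypothetical driver code, not taken from any existing driver):
it would register a higher-priority skcipher under the same template
name that dm-crypt now requests, e.g.:

	/* Hypothetical registration; callbacks, names and sizes are
	 * illustrative only.
	 */
	static struct skcipher_alg myhw_plain64_xts_aes = {
		.base.cra_name		= "plain64(xts(aes))",
		.base.cra_driver_name	= "plain64-xts-aes-myhw",
		.base.cra_priority	= 400,	/* above the generic template */
		.base.cra_blocksize	= AES_BLOCK_SIZE,
		.base.cra_flags		= CRYPTO_ALG_ASYNC,
		.min_keysize		= 2 * AES_MIN_KEY_SIZE,
		.max_keysize		= 2 * AES_MAX_KEY_SIZE,
		.ivsize			= AES_BLOCK_SIZE,
		.setkey			= myhw_setkey,
		.encrypt		= myhw_encrypt,	/* reads geniv_req_info and
							 * derives per-sector IVs */
		.decrypt		= myhw_decrypt,
	};

	/* ... crypto_register_skcipher(&myhw_plain64_xts_aes); */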

> I like the patch is now contained inside dmcrypt, but it still exposes IVs that
> are designed just for old, insecure, compatibility-only containers.
>
> I really do not think every compatible crap must be accessible through crypto API.
> (I wrote the dmcrypt lrw and tcw compatibility IVs and I would never do that this way
> if I know it is accessible outside of dmcrypt internals...)
> Even the ESSIV is something that was born to fix predictive IVs (CBC watermarking
> attacks) for disk encryption only, no reason to expose it outside of disk encryption.
>

The point is that you have more than one implementation of this
"compatible crap" - the software implementation that you wrote and
potentially multiple hardware implementations - and putting this in the
crypto API domain is the way to abstract it so you use the one that
works best on your platform.

Thanks,
Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-01  8:30       ` Gilad Ben-Yossef
@ 2017-03-01  9:29         ` Milan Broz
  2017-03-01 12:42           ` Gilad Ben-Yossef
  0 siblings, 1 reply; 17+ messages in thread
From: Milan Broz @ 2017-03-01  9:29 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Binoy Jayan, Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Ondrej Mosnacek, linux-raid, dm-devel,
	Mark Brown, Arnd Bergmann, linux-crypto, Shaohua Li,
	David S. Miller, Alasdair Kergon, Ofir

On 03/01/2017 09:30 AM, Gilad Ben-Yossef wrote:
> On Tue, Feb 28, 2017 at 11:05 PM, Milan Broz <gmazyland@gmail.com> wrote:
>>
>> On 02/22/2017 07:12 AM, Binoy Jayan wrote:
>>>
>>> I was wondering if this is near to be ready for submission (apart from
>>> the testmgr.c
>>> changes) or I need to make some changes to make it similar to the IPSec offload?
>>
>> I just tried this and except it registers the IV for every new device again, it works...
>> (After a while you have many duplicate entries in /proc/crypto.)
>>
>> But I would like to see some summary why such a big patch is needed in the first place.
>> (During an internal discussions seems that people are already lost in mails and
>> patches here, so Ondra promised me to send some summary mail soon here.)
>>
>> IIRC the first initial problem was dmcrypt performance on some embedded
>> crypto processors that are not able to cope with small crypto requests effectively.
>>
>>
>> Do you have some real performance numbers that proves that such a patch is adequate?
>>
>> I would really like to see the performance issue fixed but I am really not sure
>> this approach works for everyone. It would be better to avoid repeating this exercise later.
>> IIRC Ondra's "bulk" mode, despite rejected, shows that there is a potential
>> to speedup things even for crypt drivers that do not support own IV generators.
>>
> 
> AFAIK the problem that we are trying to solve is that if the IV is
> generated outside the crypto API
> domain than you are forced to have an invocation of the crypto API per
> each block because you
> need to provide the IV for each block.
> 
> By putting the IV generation responsibility in the Crypto API we open
> the way to do a single invocation
> of the crypto API for a whole sequence of blocks.

Sure, but this is theory. Does it really work on some hw already?
Do you have performance measurements or a comparison?

> For software implementation of XTS this doesn't matter much - but for
> hardware based XTS providers

It is not only embedded crypto; we have had some reports in the past
that 512B sectors are not ideal even for other systems.
(IIRC it was also with AES-NI, which represents a really big group of users.)

> This lead some vendors to ship hacked up versions of dm-crypt to match
> the specific crypto hardware
> they were using, or so I've heard at least - didn't see the code myself.

I saw a few versions of that. There was a very hacky way to provide request-based dmcrypt
(see the old "Introduce the request handling for dm-crypt" thread on dm-devel).
That is not an acceptable way, but it definitely points to the same problem.

> I believe Binoy is trying to address this in a generic upstream worthy
> way instead.

IIRC the problem is performance; if we can solve it in some generic way,
good, but for now it seems to be a big change and we can just hope it helps later...

> Anyway, you are only supposed to see s difference when using a
> hardware based XTS provider algo
> that supports IV generation.
> 
>> I like the patch is now contained inside dmcrypt, but it still exposes IVs that
>> are designed just for old, insecure, compatibility-only containers.
>>
>> I really do not think every compatible crap must be accessible through crypto API.
>> (I wrote the dmcrypt lrw and tcw compatibility IVs and I would never do that this way
>> if I know it is accessible outside of dmcrypt internals...)
>> Even the ESSIV is something that was born to fix predictive IVs (CBC watermarking
>> attacks) for disk encryption only, no reason to expose it outside of disk encryption.
>>
> 
> The point is that you have more than one implementation of these
> "compatible crap" - the
> software implementation that you wrote and potentially multiple
> hardware implementations
> and putting this in the crypto API domain is the way to abstract this
> so you use the one
> that works best of your platform.

For XTS you need just a simple linear IV. No problem with that; the
implementation in the crypto API and hw is trivial.
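
(For reference, that is essentially all the plain64 generator does:

	memset(iv, 0, cc->iv_size);
	*(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);

i.e. the 64-bit little-endian sector number, zero-padded to the IV size.)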

But the compatibility IVs (which provide compatibility with loop-AES and very old TrueCrypt)
should never, ever be implemented again anywhere.

Specifically "tcw" is broken, insecure and provided here just to help people to migrate
from old Truecrypt containers. Even Truecrypt followers removed it from the codebase.
(It is basically combination of IV and slight modification of CBC mode. All
recent version switched to XTS and plain IV.)

So building an abstraction over something known to be broken, and that is now intentionally
isolated inside dmcrypt, is, in my opinion, really not a good idea.


But please don't get me wrong, I do not want to block any improvement.

But it seems to me that this thread has focused on creating a nice crypto API interface
for FDE IVs instead of demonstrating that the proposed solution really solves
the performance issue.
And not only for your hw driver; maybe other systems could benefit from better
processing of small requests as well.

Milan

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-01  9:29         ` Milan Broz
@ 2017-03-01 12:42           ` Gilad Ben-Yossef
  2017-03-01 13:04             ` Milan Broz
  2017-03-01 13:21             ` Ondrej Mosnacek
  0 siblings, 2 replies; 17+ messages in thread
From: Gilad Ben-Yossef @ 2017-03-01 12:42 UTC (permalink / raw)
  To: Milan Broz
  Cc: Binoy Jayan, Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Ondrej Mosnacek, linux-raid, dm-devel,
	Mark Brown, Arnd Bergmann, linux-crypto, Shaohua Li,
	David S. Miller, Alasdair Kergon, Ofir

On Wed, Mar 1, 2017 at 11:29 AM, Milan Broz <gmazyland@gmail.com> wrote:
>
> On 03/01/2017 09:30 AM, Gilad Ben-Yossef wrote:
> > On Tue, Feb 28, 2017 at 11:05 PM, Milan Broz <gmazyland@gmail.com> wrote:
> >>
> >> On 02/22/2017 07:12 AM, Binoy Jayan wrote:
> >>>
> >>> I was wondering if this is near to be ready for submission (apart from
> >>> the testmgr.c
> >>> changes) or I need to make some changes to make it similar to the IPSec offload?
> >>
> >> I just tried this and except it registers the IV for every new device again, it works...
> >> (After a while you have many duplicate entries in /proc/crypto.)
> >>
> >> But I would like to see some summary why such a big patch is needed in the first place.
> >> (During an internal discussions seems that people are already lost in mails and
> >> patches here, so Ondra promised me to send some summary mail soon here.)
> >>
> >> IIRC the first initial problem was dmcrypt performance on some embedded
> >> crypto processors that are not able to cope with small crypto requests effectively.
> >>
> >>
> >> Do you have some real performance numbers that proves that such a patch is adequate?
> >>
> >> I would really like to see the performance issue fixed but I am really not sure
> >> this approach works for everyone. It would be better to avoid repeating this exercise later.
> >> IIRC Ondra's "bulk" mode, despite rejected, shows that there is a potential
> >> to speedup things even for crypt drivers that do not support own IV generators.
> >>
> >
> > AFAIK the problem that we are trying to solve is that if the IV is
> > generated outside the crypto API
> > domain than you are forced to have an invocation of the crypto API per
> > each block because you
> > need to provide the IV for each block.
> >
> > By putting the IV generation responsibility in the Crypto API we open
> > the way to do a single invocation
> > of the crypto API for a whole sequence of blocks.
>
> Sure, but this is theory. Does it really work on some hw already?
> Do you have performance measurements or comparison?

I'm working on upstreaming a driver for Arm CryptoCell that supports
this, and working offline to get Binoy a board to test this with. Alas,
shipping crypto HW has its fair share of regulatory challenges... :-)

I can certainly understand if you don't want to take the patch until
we have results with dm-crypt itself, but the difference between 8
separate invocations of the engine for 512 bytes of XTS each and a
single invocation for 4KB is pretty big.

From what I know of HW engines I'd be surprised if this is in any way
unique to CryptoCell.

> > For software implementation of XTS this doesn't matter much - but for
> > hardware based XTS providers
>
> It is not only embedded crypto, we have some more reports in the past
> that 512B sectors are not ideal even for other systems.
> (IIRC it was also with AES-NI that represents really big group of users).

I never said anything about embedded :-)

It really is an observation about the overhead of context switches
between dm-crypt and whatever/wherever you handle crypto - be it an
off-CPU hardware engine or a bunch of parallel kernel threads running
on other cores. You really want to burst as much as possible.


>
> > This lead some vendors to ship hacked up versions of dm-crypt to match
> > the specific crypto hardware
> > they were using, or so I've heard at least - didn't see the code myself.
>
> I saw few version of that. There was a very hacky way to provide request-based dmcrypt
> (see old "Introduce the request handling for dm-crypt" thread on dm-devel).
> This is not the acceptable way but definitely it points to the same problem.
>
> > I believe Binoy is trying to address this in a generic upstream worthy
> > way instead.
>
> IIRC the problem is performance, if we can solve it by some generic way,
> good, but for now it seems to be a big change and just hope it helps later...
>

I see what you're saying. We need numbers to back this up.

> > Anyway, you are only supposed to see s difference when using a
> > hardware based XTS provider algo
> > that supports IV generation.
> >
> >> I like the patch is now contained inside dmcrypt, but it still exposes IVs that
> >> are designed just for old, insecure, compatibility-only containers.
> >>
> >> I really do not think every compatible crap must be accessible through crypto API.
> >> (I wrote the dmcrypt lrw and tcw compatibility IVs and I would never do that this way
> >> if I know it is accessible outside of dmcrypt internals...)
> >> Even the ESSIV is something that was born to fix predictive IVs (CBC watermarking
> >> attacks) for disk encryption only, no reason to expose it outside of disk encryption.
> >>
> >
> > The point is that you have more than one implementation of these
> > "compatible crap" - the
> > software implementation that you wrote and potentially multiple
> > hardware implementations
> > and putting this in the crypto API domain is the way to abstract this
> > so you use the one
> > that works best of your platform.
>
> For XTS you need just simple linear IV. No problem with that, implementation
> in crypto API and hw is trivial.
>
> But for compatible IV (that provides compatibility with loopAES and very old TrueCrypt),
> these should be never ever implemented again anywhere.

>
> Specifically "tcw" is broken, insecure and provided here just to help people to migrate
> from old Truecrypt containers. Even Truecrypt followers removed it from the codebase.
> (It is basically combination of IV and slight modification of CBC mode. All
> recent version switched to XTS and plain IV.)
>
> So building abstraction over something known to be broken and that is now intentionally
> isolated inside dmcrypt is, in my opinion, really not a good idea.
>

I don't think anyone is interested in these modes. How you support
XTS and essiv in a generic way without supporting these broken modes is
not something I'm clear on, though.

>
> But please do get me wrong,  I do not want to block any improvement.
>
> But it seems to me that this thread focused on creating nice crypto API interface
> for FDE IVs instead of demonstration that the proposed solution really solves
> the performance issue.
> And not only for your hw driver, maybe other systems could benefit from the better
> processing of small requests as well.
>

Of course, the benefits at large need to outweigh the cost. But I
don't think functioning better when working on large bursts is in any
way specific to particular HW.

Indeed, I wonder if we can show a benefit for just the cryptd use case.
I'll look into that.

Thanks,
Gilad



-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-01 12:42           ` Gilad Ben-Yossef
@ 2017-03-01 13:04             ` Milan Broz
  2017-03-01 15:38               ` Milan Broz
  2017-03-01 13:21             ` Ondrej Mosnacek
  1 sibling, 1 reply; 17+ messages in thread
From: Milan Broz @ 2017-03-01 13:04 UTC (permalink / raw)
  To: Gilad Ben-Yossef, Milan Broz
  Cc: Binoy Jayan, Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Ondrej Mosnacek, linux-raid, dm-devel,
	Mark Brown, Arnd Bergmann, linux-crypto, Shaohua Li,
	David S. Miller, Alasdair Kergon, Ofir

On 03/01/2017 01:42 PM, Gilad Ben-Yossef wrote:
...

> I can certainly understand if you don't wont to take the patch until
> we have results with
> dm-crypt itself but the difference between 8 separate invocation of
> the engine for 512
> bytes of XTS and a single invocation for 4KB are pretty big.

Yes, I know. But the same can be achieved if we just implement
4k-sector encryption in dmcrypt. It is incompatible with LUKS1
(the next LUKS version will support it) but I think this is not
a problem for now.

If the underlying device supports atomic write of 4k sectors, then
there should not be a problem.

This is one of the speed-ups I would like to compare with the IV approach,
because everyone should benefit from 4k sectors in the end.
And no crypto API changes are needed here.

(I have an old patch for this, so I will try to revive it.)

Milan

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-01 12:42           ` Gilad Ben-Yossef
  2017-03-01 13:04             ` Milan Broz
@ 2017-03-01 13:21             ` Ondrej Mosnacek
  2017-03-02 14:01               ` Gilad Ben-Yossef
  1 sibling, 1 reply; 17+ messages in thread
From: Ondrej Mosnacek @ 2017-03-01 13:21 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Milan Broz, Binoy Jayan, Oded, Ofir, Herbert Xu, David S. Miller,
	linux-crypto, Mark Brown, Arnd Bergmann,
	Linux kernel mailing list, Alasdair Kergon, Mike Snitzer,
	dm-devel, Shaohua Li, linux-raid, Rajendra

2017-03-01 13:42 GMT+01:00 Gilad Ben-Yossef <gilad@benyossef.com>:
> It really is an observation about overhead of context switches between
> dm-crypt and
> whatever/wherever you handle crypto - be it an off CPU hardware engine
> or a bunch
> of parallel kernel threads running on other cores. You really want to
> burst as much as
> possible.

[...]

>> For XTS you need just simple linear IV. No problem with that, implementation
>> in crypto API and hw is trivial.
>>
>> But for compatible IV (that provides compatibility with loopAES and very old TrueCrypt),
>> these should be never ever implemented again anywhere.
>
>>
>> Specifically "tcw" is broken, insecure and provided here just to help people to migrate
>> from old Truecrypt containers. Even Truecrypt followers removed it from the codebase.
>> (It is basically combination of IV and slight modification of CBC mode. All
>> recent version switched to XTS and plain IV.)
>>
>> So building abstraction over something known to be broken and that is now intentionally
>> isolated inside dmcrypt is, in my opinion, really not a good idea.
>>
>
> I don't think anyone is interested in these modes. How do you support
> XTS and essiv in
> a generic way without supporting this broken modes is not something
> I'm clear on though.

Wouldn't adopting a bulk request API (something like what I tried to
do here [1]) that allows users to supply multiple messages, each with
their own IV, fulfill this purpose? That way, we wouldn't need to
introduce any new modes into the Crypto API and the drivers/accelerators
would only need to provide bulk implementations of common modes
(xts(aes), cbc(aes), ...) to provide better performance for dm-crypt
(and possibly other users, too).
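
(Very roughly, and only as an illustration of the idea rather than the
actual interface proposed in [1], such a bulk request would carry one
scatterlist pair plus an array of per-message IVs:

	struct skcipher_bulk_request_sketch {	/* hypothetical */
		unsigned int		nmsgs;		/* e.g. sectors in a bio */
		unsigned int		msgsize;	/* bytes per message, e.g. 512 */
		u8			*ivs;		/* nmsgs IVs, ivsize bytes each */
		struct scatterlist	*src, *dst;	/* data for all messages */
	};

so the backend still sees standard modes, just with more data per call.)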

I'm not sure how exactly these crypto accelerators work, but wouldn't
it help if the drivers simply get more messages (in our case sectors)
in a single call? I wonder, would (efficiently) supporting such a
scheme require changes in the HW itself or could it be achieved just
by modifying driver code (let's say specifically for your CryptoCell
accelerator)?

Thanks,
Ondrej

[1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg23007.html

>
> Thanks,
> Gilad
>
>
>
> --
> Gilad Ben-Yossef
> Chief Coffee Drinker
>
> "If you take a class in large-scale robotics, can you end up in a
> situation where the homework eats your dog?"
>  -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-01 13:04             ` Milan Broz
@ 2017-03-01 15:38               ` Milan Broz
  2017-03-06 14:38                 ` Gilad Ben-Yossef
  0 siblings, 1 reply; 17+ messages in thread
From: Milan Broz @ 2017-03-01 15:38 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Binoy Jayan, Oded, Ofir, Herbert Xu, David S. Miller,
	linux-crypto, Mark Brown, Arnd Bergmann,
	Linux kernel mailing list, Alasdair Kergon, Mike Snitzer,
	dm-devel, Shaohua Li, linux-raid, Rajendra, Ondrej Mosnacek


On 03/01/2017 02:04 PM, Milan Broz wrote:
> On 03/01/2017 01:42 PM, Gilad Ben-Yossef wrote:
> ...
> 
>> I can certainly understand if you don't wont to take the patch until
>> we have results with
>> dm-crypt itself but the difference between 8 separate invocation of
>> the engine for 512
>> bytes of XTS and a single invocation for 4KB are pretty big.
> 
> Yes, I know it. But the same can be achieved if we just implement
> 4k sector encryption in dmcrypt. It is incompatible with LUKS1
> (but next LUKS version will support it) but I think this is not
> a problem for now.
> 
> If the underlying device supports atomic write of 4k sectors, then
> there should not be a problem.
> 
> This is one of the speed-up I would like to compare with the IV approach,
> because everyone should benefit from 4k sectors in the end.
> And no crypto API changes are needed here.
> 
> (I have an old patch for this, so I will try to revive it.)

If anyone is interested, a simple experimental patch for larger sector sizes
(up to the page size) for dmcrypt is in this branch:

http://git.kernel.org/cgit/linux/kernel/git/mbroz/linux.git/log/?h=dm-crypt-4k-sector

It would be nice to check what performance gain could be provided
by this simple approach.

Milan

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-02-28 21:05     ` Milan Broz
  2017-03-01  8:30       ` Gilad Ben-Yossef
@ 2017-03-01 18:04       ` Binoy Jayan
  1 sibling, 0 replies; 17+ messages in thread
From: Binoy Jayan @ 2017-03-01 18:04 UTC (permalink / raw)
  To: Milan Broz
  Cc: Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Ondrej Mosnacek, linux-raid,
	Gilad Ben-Yossef, dm-devel, Mark Brown, Arnd Bergmann,
	linux-crypto, Shaohua Li, David S. Miller, Alasdair Kergon, Ofir



Hi Milan,

On 1 March 2017 at 02:35, Milan Broz <gmazyland@gmail.com> wrote:

> On 02/22/2017 07:12 AM, Binoy Jayan wrote:
> >
> > I was wondering if this is near to be ready for submission (apart from
> > the testmgr.c
> > changes) or I need to make some changes to make it similar to the IPSec
> offload?
>
> I just tried this and except it registers the IV for every new device
> again, it works...
> (After a while you have many duplicate entries in /proc/crypto.)


It is because the crypto lookup API sees that the crypto algorithm is in
a LARVAL state and registers a new instance every time by invoking the
".create" callback. I guess it should be solved by adding test data to
testmgr.

> Do you have some real performance numbers that proves that such a patch is
> adequate?
>

While waiting to do some implementation of the hw crypto drivers to work
with dm-crypt, I'll also generate some numbers comparing the performance
of the original dm-crypt code with the new one, with a software
implementation in place.


> I would really like to see the performance issue fixed but I am really not
> sure
> this approach works for everyone. It would be better to avoid repeating
> this exercise later.
> IIRC Ondra's "bulk" mode, despite rejected, shows that there is a potential
> to speedup things even for crypt drivers that do not support own IV
> generators.
>

I think it should work for everyone (even for ciphers not supporting IVs)
if the null IV mode is used. It should be up to the IV generation template
to choose to generate the IV or just call the underlying (base)
template/cipher.
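
(For example, following the "%s(%s(%s))" construction in the patch, a
mode used without an IV would simply be allocated as "null(ecb(aes))".)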

Regards,
Binoy




^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-01 13:21             ` Ondrej Mosnacek
@ 2017-03-02 14:01               ` Gilad Ben-Yossef
  0 siblings, 0 replies; 17+ messages in thread
From: Gilad Ben-Yossef @ 2017-03-02 14:01 UTC (permalink / raw)
  To: Ondrej Mosnacek
  Cc: Binoy Jayan, Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Milan Broz, linux-raid, dm-devel,
	Mark Brown, Arnd Bergmann, linux-crypto, Shaohua Li,
	David S. Miller, Alasdair Kergon, Ofir

On Wed, Mar 1, 2017 at 3:21 PM, Ondrej Mosnacek <omosnace@redhat.com> wrote:
> 2017-03-01 13:42 GMT+01:00 Gilad Ben-Yossef <gilad@benyossef.com>:
>
> Wouldn't adopting a bulk request API (something like what I tried to
> do here [1]) that allows users to supply multiple messages, each with
> their own IV, fulfill this purpose? That way, we wouldn't need to
> introduce any new modes into Crypto API and the drivers/accelerators
> would only need to provide bulk implementations of common modes
> (xts(aes), cbc(aes), ...) to provide better performance for dm-crypt
> (and possibly other users, too).
>
> I'm not sure how exactly these crypto accelerators work, but wouldn't
> it help if the drivers simply get more messages (in our case sectors)
> in a single call? I wonder, would (efficiently) supporting such a
> scheme require changes in the HW itself or could it be achieved just
> by modifying driver code (let's say specifically for your CryptoCell
> accelerator)?
>
> [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg23007.html
>


From a general perspective - that is, things expected to be true not
just for CryptoCell but for most HW crypto engines - you want two things:
for the HW engine to be able to burst work for a long time and then rest
for a long time, rather than a stop and go scheme (engine utilization),
and for the average IO transaction to be relatively long (bus utilization).

So, a big cluster size (i.e. Milan's proposal) works great - you get both.

Submitting a series of sequential small clusters where the HW can
calculate the IV (e.g. Binoy's proposal) works great if the HW
supports it - you get both.

A batched series of small clusters + IV is less favorable - if your HW
engine has lots of parallel context processing (which is expensive for
HW) you might enjoy good engine utilization, but the bus utilization will
be low - lots of small transactions.
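
To put rough numbers on the bus utilization point, per 1 MiB of I/O
(illustrative only, ignoring driver and setup overhead):

  512 B sectors, one request each:     1 MiB / 512 B = 2048 requests on the bus
  4 KiB clusters (Milan's proposal):   1 MiB / 4 KiB =  256 requests on the bus
  whole bio, IV derived in HW
  (Binoy's proposal):                  a few large requests; the engine derives
                                       the per-512 B sector IVs internally
  batch of 512 B messages, each with
  its own IV (bulk API):               one submission, but still ~2048 small
                                       descriptors to move across the bus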

Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-01 15:38               ` Milan Broz
@ 2017-03-06 14:38                 ` Gilad Ben-Yossef
  2017-03-20 14:31                   ` Binoy Jayan
  2017-03-20 14:38                   ` Binoy Jayan
  0 siblings, 2 replies; 17+ messages in thread
From: Gilad Ben-Yossef @ 2017-03-06 14:38 UTC (permalink / raw)
  To: Milan Broz
  Cc: Binoy Jayan, Rajendra, Herbert Xu, Oded, Mike Snitzer,
	Linux kernel mailing list, Ondrej Mosnacek, linux-raid, dm-devel,
	Mark Brown, Arnd Bergmann, linux-crypto, Shaohua Li,
	David S. Miller, Alasdair Kergon, Ofir

On Wed, Mar 1, 2017 at 5:38 PM, Milan Broz <gmazyland@gmail.com> wrote:
>
> On 03/01/2017 02:04 PM, Milan Broz wrote:
>> On 03/01/2017 01:42 PM, Gilad Ben-Yossef wrote:
>> ...
>>
>>> I can certainly understand if you don't want to take the patch until
>>> we have results with dm-crypt itself, but the difference between 8
>>> separate invocations of the engine for 512 bytes of XTS and a single
>>> invocation for 4KB is pretty big.
>>
>> Yes, I know it. But the same can be achieved if we just implement
>> 4k sector encryption in dmcrypt. It is incompatible with LUKS1
>> (but next LUKS version will support it) but I think this is not
>> a problem for now.
>>
>> If the underlying device supports atomic write of 4k sectors, then
>> there should not be a problem.
>>
>> This is one of the speed-up I would like to compare with the IV approach,
>> because everyone should benefit from 4k sectors in the end.
>> And no crypto API changes are needed here.
>>
>> (I have an old patch for this, so I will try to revive it.)
>
> If anyone interested, simple experimental patch for larger sector size
> (up to the page size) for dmcrypt is in this branch:
>
> http://git.kernel.org/cgit/linux/kernel/git/mbroz/linux.git/log/?h=dm-crypt-4k-sector
>
> It would be nice to check what performance gain could be provided
> by this simple approach.


I gave it a spin on an x86_64 with 8 CPUs with AES-NI using cryptd and
on Arm using the CryptoCell hardware accelerator.

There was no difference in performance between 512 and 4096 byte
cluster sizes on the x86_64 (800 MB loop file system).

There was an improvement in latency of 3.2% between 512 and 4096 byte
cluster sizes on the Arm. I expect the performance benefits for this
test with Binoy's patch to be the same.

In both cases the very naive test was a simple dd with a block size of
4096 bytes on the raw block device.

I do not know what effect a bigger cluster size would have on
other more complex file system operations.
Is there any specific benchmark worth testing with?


Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-06 14:38                 ` Gilad Ben-Yossef
@ 2017-03-20 14:31                   ` Binoy Jayan
  2017-03-20 14:38                   ` Binoy Jayan
  1 sibling, 0 replies; 17+ messages in thread
From: Binoy Jayan @ 2017-03-20 14:31 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Rajendra, Herbert Xu, Oded, Ondrej Mosnacek, Mike Snitzer,
	Linux kernel mailing list, Milan Broz, linux-raid, dm-devel,
	Mark Brown, Arnd Bergmann, linux-crypto, Shaohua Li,
	David S. Miller, Alasdair Kergon, Ofir


Hi,

On 8 March 2017 at 13:49, Binoy Jayan <binoy.jayan@linaro.org> wrote:
> Hi Gilad,
>
>> I gave it a spin on an x86_64 with 8 CPUs with AES-NI using cryptd and
>> on Arm using the CryptoCell hardware accelerator.
>>
>> There was no difference in performance between 512 and 4096 byte
>> cluster sizes on the x86_64 (800 MB loop file system).
>>
>> There was an improvement in latency of 3.2% between 512 and 4096 byte
>> cluster sizes on the Arm. I expect the performance benefits for this
>> test with Binoy's patch to be the same.
>>
>> In both cases the very naive test was a simple dd with a block size of
>> 4096 bytes on the raw block device.
>>
>> I do not know what effect a bigger cluster size would have on
>> other more complex file system operations.
>> Is there any specific benchmark worth testing with?

The multiple instances issue in /proc/crypto is fixed. It was caused by
the IV code itself inadvertently modifying the algorithm name in the
global crypto algorithm lookup table while splitting up
"plain(cbc(aes))" into "plain" and "cbc(aes)" in order to invoke the child
algorithm.
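
The fix is essentially the usual "parse a copy, never the registered name"
pattern; roughly (sketch only, not the exact diff, and the helper name is
illustrative):

#include <linux/crypto.h>   /* CRYPTO_MAX_ALG_NAME */
#include <linux/string.h>
#include <linux/errno.h>

/*
 * Split e.g. "plain(cbc(aes))" into the IV mode ("plain") and the child
 * algorithm ("cbc(aes)") using a local copy, so the name stored in the
 * global crypto algorithm table is never modified in place.
 */
static int geniv_parse_name(const char *full, char *ivmode, size_t ivlen,
                            char *child, size_t childlen)
{
        char buf[CRYPTO_MAX_ALG_NAME];
        char *start, *end;

        if (strscpy(buf, full, sizeof(buf)) < 0)
                return -ENAMETOOLONG;

        start = strchr(buf, '(');
        end = strrchr(buf, ')');
        if (!start || !end || end < start)
                return -EINVAL;

        *start = '\0';  /* terminate the IV mode name */
        *end = '\0';    /* drop the trailing ')' of the child name */

        if (strscpy(ivmode, buf, ivlen) < 0 ||
            strscpy(child, start + 1, childlen) < 0)
                return -ENAMETOOLONG;

        return 0;
}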

I ran a few tests with dd, bonnie and FIO under Qemu - x86, using the
automated script [1] that I wrote to make the testing easy.
The tests were done on software implementations of the algorithms,
as the real hardware was not available to me. According to the tests,
the sequential reads and writes show a good improvement
(5.7%) in data rate with the proposed solution, while the random
reads show very little improvement. When tested with FIO, the
random writes also show a small improvement (2.2%), but the random
reads show a slight deterioration in performance (4%).

When tested on Arm hardware, only the sequential writes with bonnie
show an improvement (5.6%). All other tests show degraded performance
in the absence of crypto hardware.

[1] https://github.com/binoyjayan/utilities/blob/master/utils/dmtest
Dependencies: dd [Full version], bonnie, fio

Thanks,
Binoy

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt
  2017-03-06 14:38                 ` Gilad Ben-Yossef
  2017-03-20 14:31                   ` Binoy Jayan
@ 2017-03-20 14:38                   ` Binoy Jayan
  1 sibling, 0 replies; 17+ messages in thread
From: Binoy Jayan @ 2017-03-20 14:38 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Milan Broz, Oded, Ofir, Herbert Xu, David S. Miller,
	linux-crypto, Mark Brown, Arnd Bergmann,
	Linux kernel mailing list, Alasdair Kergon, Mike Snitzer,
	dm-devel, Shaohua Li, linux-raid, Rajendra, Ondrej Mosnacek

On 6 March 2017 at 20:08, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
>
> I gave it a spin on an x86_64 with 8 CPUs with AES-NI using cryptd and
> on Arm using the CryptoCell hardware accelerator.
>
> There was no difference in performance between 512 and 4096 byte
> cluster sizes on the x86_64 (800 MB loop file system).
>
> There was an improvement in latency of 3.2% between 512 and 4096 byte
> cluster sizes on the Arm. I expect the performance benefits for this
> test with Binoy's patch to be the same.
>
> In both cases the very naive test was a simple dd with a block size of
> 4096 bytes on the raw block device.
>
> I do not know what effect a bigger cluster size would have on
> other more complex file system operations.
> Is there any specific benchmark worth testing with?


The multiple instances issue in /proc/crypto is fixed. It was caused by
the IV code itself inadvertently modifying the algorithm name in the
global crypto algorithm lookup table while splitting up
"plain(cbc(aes))" into "plain" and "cbc(aes)" in order to invoke the child
algorithm.

I ran a few tests with dd, bonnie and FIO under Qemu - x86, using the
automated script [1] that I wrote to make the testing easy.
The tests were done on software implementations of the algorithms,
as the real hardware was not available to me. According to the tests,
the sequential reads and writes show a good improvement
(5.7%) in data rate with the proposed solution, while the random
reads show very little improvement. When tested with FIO, the
random writes also show a small improvement (2.2%), but the random
reads show a slight deterioration in performance (4%).

When tested on Arm hardware, only the sequential writes with bonnie
show an improvement (5.6%). All other tests show degraded performance
in the absence of crypto hardware.

[1] https://github.com/binoyjayan/utilities/blob/master/utils/dmtest
Dependencies: dd [Full version], bonnie, fio

Thanks,
Binoy

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2017-03-20 14:38 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-07 10:35 [RFC PATCH v4] IV Generation algorithms for dm-crypt Binoy Jayan
2017-02-07 10:35 ` [RFC PATCH v4] crypto: Add IV generation algorithms Binoy Jayan
2017-02-08  7:32 ` [RFC PATCH v4] IV Generation algorithms for dm-crypt Gilad Ben-Yossef
2017-02-09  8:30   ` Binoy Jayan
2017-02-22  6:12   ` Binoy Jayan
2017-02-28 21:05     ` Milan Broz
2017-03-01  8:30       ` Gilad Ben-Yossef
2017-03-01  9:29         ` Milan Broz
2017-03-01 12:42           ` Gilad Ben-Yossef
2017-03-01 13:04             ` Milan Broz
2017-03-01 15:38               ` Milan Broz
2017-03-06 14:38                 ` Gilad Ben-Yossef
2017-03-20 14:31                   ` Binoy Jayan
2017-03-20 14:38                   ` Binoy Jayan
2017-03-01 13:21             ` Ondrej Mosnacek
2017-03-02 14:01               ` Gilad Ben-Yossef
2017-03-01 18:04       ` Binoy Jayan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).