* [RFC v4 0/4] Introduce the bulk mode method when sending request to crypto layer
From: Baolin Wang @ 2016-06-07 12:17 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

This patchset checks whether the cipher supports bulk mode, and dm-crypt
then chooses how to send requests to the crypto layer according to the
cipher mode. In bulk mode, we can use an sg table to map the whole bio and
send all scatterlists of one bio to the crypto engine in a single request
to encrypt or decrypt, which improves the hardware engine's efficiency.
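
To illustrate the intended flow, the conversion dispatch added by patch 3
boils down to the following simplified sketch (error handling omitted):

	/* Simplified dispatch in crypt_convert() (see patch 3). */
	if (skcipher_is_bulk_mode(any_tfm(cc))) {
		/*
		 * Bulk path: crypt_convert_bulk_block() maps the whole bio
		 * into the preallocated sg tables with blk_bio_map_sg() and
		 * submits it to the engine as a single skcipher request.
		 */
		r = crypt_convert_bulk_block(cc, ctx, ctx->req);
	} else {
		/* One request per 512-byte sector, as before. */
		r = crypt_convert_block(cc, ctx, ctx->req);
	}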

Changes since v3:
 - Some optimizations to the blk_bio_map_sg() function.

Changes since v2:
 - Add one cipher user with CRYPTO_ALG_BULK flag to support bulk mode.
 - Add one atomic variable to avoid the sg table race.

Changes since v1:
 - Refactor the blk_bio_map_sg() function to avoid duplicated code.
 - Move the sg table allocation to the crypt_ctr_cipher() function to avoid
   memory allocation in the IO path.
 - Remove the crypt_sg_entry() function.
 - Other optimizations.

Baolin Wang (4):
  block: Introduce blk_bio_map_sg() to map one bio
  crypto: Introduce CRYPTO_ALG_BULK flag
  md: dm-crypt: Introduce the bulk mode method when sending request
  crypto: Add the CRYPTO_ALG_BULK flag for ecb(aes) cipher

 block/blk-merge.c         |   19 ++++++
 drivers/crypto/omap-aes.c |    2 +-
 drivers/md/dm-crypt.c     |  159 ++++++++++++++++++++++++++++++++++++++++++++-
 include/crypto/skcipher.h |    7 ++
 include/linux/blkdev.h    |    2 +
 include/linux/crypto.h    |    6 ++
 6 files changed, 193 insertions(+), 2 deletions(-)

-- 
1.7.9.5

* [RFC v4 1/4] block: Introduce blk_bio_map_sg() to map one bio
From: Baolin Wang @ 2016-06-07 12:17 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

In dm-crypt, we need to map one bio to a scatterlist to improve the
hardware engine's encryption efficiency. Thus this patch introduces the
blk_bio_map_sg() function to map one bio to a list of scatterlists.
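
For example, a caller could wrap it roughly as follows (an illustrative
sketch only; map_one_bio() is a hypothetical helper, and the caller must
size sgl to the bio's segment count):

	static int map_one_bio(struct request_queue *q, struct bio *bio,
			       struct scatterlist *sgl, int nents_max)
	{
		int nents;

		sg_init_table(sgl, nents_max);
		nents = blk_bio_map_sg(q, bio, sgl);
		/* mirror the post-check done by dm-crypt in patch 3 */
		if (nents <= 0 || nents > nents_max)
			return -EINVAL;

		return nents;
	}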

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 block/blk-merge.c      |   19 +++++++++++++++++++
 include/linux/blkdev.h |    2 ++
 2 files changed, 21 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..bca0609 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -417,6 +417,25 @@ single_segment:
 }
 
 /*
+ * Map a bio to a scatterlist; return the number of sg entries set up. Caller
+ * must ensure sg can hold the bio's segments and detach the bio from any list.
+ */
+int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+		   struct scatterlist *sglist)
+{
+	struct scatterlist *sg = NULL;
+	int nsegs;
+
+	nsegs = __blk_bios_map_sg(q, bio, sglist, &sg);
+
+	if (sg)
+		sg_mark_end(sg);
+
+	return nsegs;
+}
+EXPORT_SYMBOL(blk_bio_map_sg);
+
+/*
  * map a request to scatterlist, return number of sg entries setup. Caller
  * must make sure sg can hold rq->nr_phys_segments entries
  */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3d9cf32..cbe31f9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1024,6 +1024,8 @@ extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fu
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev);
 
 extern int blk_rq_map_sg(struct request_queue *, struct request *, struct scatterlist *);
+extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+			  struct scatterlist *sglist);
 extern void blk_dump_rq_flags(struct request *, char *);
 extern long nr_blockdev_pages(void);
 
-- 
1.7.9.5

* [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Baolin Wang @ 2016-06-07 12:17 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

Some cipher hardware engines prefer to handle a bulk block rather than the
one sector (512 bytes) created by dm-crypt, because these engines can handle
the initialization vectors (IVs) themselves within one bulk block. This
means we can increase the request size by merging requests, rather than
always sending 512 bytes, and thus increase the hardware engine's
processing speed.

So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can
support bulk mode.
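
With this flag, a user such as dm-crypt needs only a simple capability
check, along these lines (a sketch, not the exact dm-crypt code):

	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);

	if (!IS_ERR(tfm) && skcipher_is_bulk_mode(tfm)) {
		/* engine handles IVs itself: send one bulk request per bio */
	} else {
		/* fall back to one request per 512-byte sector */
	}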

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 include/crypto/skcipher.h |    7 +++++++
 include/linux/crypto.h    |    6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 0f987f5..d89d29a 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -519,5 +519,12 @@ static inline void skcipher_request_set_crypt(
 	req->iv = iv;
 }
 
+static inline unsigned int skcipher_is_bulk_mode(struct crypto_skcipher *sk_tfm)
+{
+	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
+
+	return crypto_tfm_alg_bulk(tfm);
+}
+
 #endif	/* _CRYPTO_SKCIPHER_H */
 
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 6e28c89..a315487 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -63,6 +63,7 @@
 #define CRYPTO_ALG_DEAD			0x00000020
 #define CRYPTO_ALG_DYING		0x00000040
 #define CRYPTO_ALG_ASYNC		0x00000080
+#define CRYPTO_ALG_BULK			0x00000100
 
 /*
  * Set this bit if and only if the algorithm requires another algorithm of
@@ -623,6 +624,11 @@ static inline u32 crypto_tfm_alg_type(struct crypto_tfm *tfm)
 	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
 }
 
+static inline unsigned int crypto_tfm_alg_bulk(struct crypto_tfm *tfm)
+{
+	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_BULK;
+}
+
 static inline unsigned int crypto_tfm_alg_blocksize(struct crypto_tfm *tfm)
 {
 	return tfm->__crt_alg->cra_blocksize;
-- 
1.7.9.5

* [RFC v4 3/4] md: dm-crypt: Introduce the bulk mode method when sending request
From: Baolin Wang @ 2016-06-07 12:17 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

In the current dm-crypt code, it is inefficient to map one segment (always
one sector) of a bio with only one scatterlist at a time for a hardware
crypto engine. In particular, some encryption modes (like ECB or XTS)
cooperating with the crypto engine need only one initial IV or a null IV
instead of a different IV for each sector. In this situation we can use
multiple scatterlists to map the whole bio and send all scatterlists of one
bio to the crypto engine to encrypt or decrypt, which improves the hardware
engine's efficiency.

With this optimization, on my test setup (BeagleBone Black board with the
ecb(aes) cipher and dd testing) using 64KB I/Os on an eMMC storage device,
I saw about a 127% improvement in throughput for encrypted writes, and
about a 206% improvement for encrypted reads.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 drivers/md/dm-crypt.c |  159 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 158 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 4f3cb35..0b1d452 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -33,6 +33,7 @@
 #include <linux/device-mapper.h>
 
 #define DM_MSG_PREFIX "crypt"
+#define DM_MAX_SG_LIST	512
 
 /*
  * context holding the current state of a multi-part conversion
@@ -142,6 +143,11 @@ struct crypt_config {
 	char *cipher;
 	char *cipher_string;
 
+	struct sg_table sgt_in;
+	struct sg_table sgt_out;
+	atomic_t sgt_init_done;
+	struct completion sgt_restart;
+
 	struct crypt_iv_operations *iv_gen_ops;
 	union {
 		struct iv_essiv_private essiv;
@@ -837,6 +843,141 @@ static u8 *iv_of_dmreq(struct crypt_config *cc,
 		crypto_skcipher_alignmask(any_tfm(cc)) + 1);
 }
 
+static void crypt_init_sg_table(struct scatterlist *sgl)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		if (i < DM_MAX_SG_LIST - 1 && sg_is_last(sg))
+			sg_unmark_end(sg);
+		else if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		memset(sg, 0, sizeof(struct scatterlist));
+
+		if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+}
+
+static void crypt_reinit_sg_table(struct crypt_config *cc)
+{
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return;
+
+	crypt_init_sg_table(cc->sgt_in.sgl);
+	crypt_init_sg_table(cc->sgt_out.sgl);
+
+	if (atomic_inc_and_test(&cc->sgt_init_done))
+		complete(&cc->sgt_restart);
+	atomic_set(&cc->sgt_init_done, 1);
+}
+
+static int crypt_alloc_sg_table(struct crypt_config *cc)
+{
+	unsigned int bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+	int ret = 0;
+
+	if (!bulk_mode)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_in, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_out, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_free_table;
+
+	init_completion(&cc->sgt_restart);
+	atomic_set(&cc->sgt_init_done, 1);
+	return 0;
+
+out_free_table:
+	sg_free_table(&cc->sgt_in);
+out_skip_alloc:
+	cc->sgt_in.orig_nents = 0;
+	cc->sgt_out.orig_nents = 0;
+
+	return ret;
+}
+
+static int crypt_convert_bulk_block(struct crypt_config *cc,
+				    struct convert_context *ctx,
+				    struct skcipher_request *req)
+{
+	struct bio *bio_in = ctx->bio_in;
+	struct bio *bio_out = ctx->bio_out;
+	unsigned int total_bytes = bio_in->bi_iter.bi_size;
+	unsigned int total_sg_in, total_sg_out;
+	struct scatterlist *sg_in, *sg_out;
+	struct dm_crypt_request *dmreq;
+	u8 *iv;
+	int r;
+
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return -EINVAL;
+
+	if (!atomic_dec_and_test(&cc->sgt_init_done)) {
+		wait_for_completion(&cc->sgt_restart);
+		reinit_completion(&cc->sgt_restart);
+	}
+
+	dmreq = dmreq_of_req(cc, req);
+	iv = iv_of_dmreq(cc, dmreq);
+	dmreq->iv_sector = ctx->cc_sector;
+	dmreq->ctx = ctx;
+
+	total_sg_in = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
+				     bio_in, cc->sgt_in.sgl);
+	if ((total_sg_in <= 0) || (total_sg_in > DM_MAX_SG_LIST)) {
+		DMERR("%s in sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_in, cc->sgt_in.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_in.bi_size -= total_bytes;
+	sg_in = cc->sgt_in.sgl;
+	sg_out = cc->sgt_in.sgl;
+
+	if (bio_data_dir(bio_in) == READ)
+		goto set_crypt;
+
+	total_sg_out = blk_bio_map_sg(bdev_get_queue(bio_out->bi_bdev),
+				      bio_out, cc->sgt_out.sgl);
+	if ((total_sg_out <= 0) || (total_sg_out > DM_MAX_SG_LIST)) {
+		DMERR("%s out sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_out, cc->sgt_out.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_out.bi_size -= total_bytes;
+	sg_out = cc->sgt_out.sgl;
+
+set_crypt:
+	if (cc->iv_gen_ops) {
+		r = cc->iv_gen_ops->generator(cc, iv, dmreq);
+		if (r < 0)
+			return r;
+	}
+
+	atomic_set(&cc->sgt_init_done, 0);
+	skcipher_request_set_crypt(req, sg_in, sg_out, total_bytes, iv);
+
+	if (bio_data_dir(ctx->bio_in) == WRITE)
+		r = crypto_skcipher_encrypt(req);
+	else
+		r = crypto_skcipher_decrypt(req);
+
+	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
+		r = cc->iv_gen_ops->post(cc, iv, dmreq);
+
+	return r;
+}
+
 static int crypt_convert_block(struct crypt_config *cc,
 			       struct convert_context *ctx,
 			       struct skcipher_request *req)
@@ -920,6 +1061,7 @@ static void crypt_free_req(struct crypt_config *cc,
 static int crypt_convert(struct crypt_config *cc,
 			 struct convert_context *ctx)
 {
+	unsigned int bulk_mode;
 	int r;
 
 	atomic_set(&ctx->cc_pending, 1);
@@ -930,7 +1072,14 @@ static int crypt_convert(struct crypt_config *cc,
 
 		atomic_inc(&ctx->cc_pending);
 
-		r = crypt_convert_block(cc, ctx, ctx->req);
+		bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+		if (!bulk_mode) {
+			r = crypt_convert_block(cc, ctx, ctx->req);
+		} else {
+			r = crypt_convert_bulk_block(cc, ctx, ctx->req);
+			if (r == -EINVAL)
+				r = crypt_convert_block(cc, ctx, ctx->req);
+		}
 
 		switch (r) {
 		/*
@@ -1081,6 +1230,7 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 	if (io->ctx.req)
 		crypt_free_req(cc, io->ctx.req, base_bio);
 
+	crypt_reinit_sg_table(cc);
 	base_bio->bi_error = error;
 	bio_endio(base_bio);
 }
@@ -1563,6 +1713,9 @@ static void crypt_dtr(struct dm_target *ti)
 	kzfree(cc->cipher);
 	kzfree(cc->cipher_string);
 
+	sg_free_table(&cc->sgt_in);
+	sg_free_table(&cc->sgt_out);
+
 	/* Must zero key material before freeing */
 	kzfree(cc);
 }
@@ -1718,6 +1871,10 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 		}
 	}
 
+	ret = crypt_alloc_sg_table(cc);
+	if (ret)
+		DMWARN("Allocate sg table for bulk mode failed");
+
 	ret = 0;
 bad:
 	kfree(cipher_api);
-- 
1.7.9.5

* [RFC v4 4/4] crypto: Add the CRYPTO_ALG_BULK flag for ecb(aes) cipher
From: Baolin Wang @ 2016-06-07 12:17 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

Since the ecb(aes) cipher does not need to handle IVs for encryption or
decryption, it can support bulk blocks when handling data. Thus this patch
adds the CRYPTO_ALG_BULK flag for the ecb(aes) cipher to improve the
hardware AES engine's efficiency.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 drivers/crypto/omap-aes.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index ce174d3..ab09429 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -804,7 +804,7 @@ static struct crypto_alg algs_ecb_cbc[] = {
 	.cra_priority		= 300,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
 				  CRYPTO_ALG_KERN_DRIVER_ONLY |
-				  CRYPTO_ALG_ASYNC,
+				  CRYPTO_ALG_ASYNC | CRYPTO_ALG_BULK,
 	.cra_blocksize		= AES_BLOCK_SIZE,
 	.cra_ctxsize		= sizeof(struct omap_aes_ctx),
 	.cra_alignmask		= 0,
-- 
1.7.9.5

* Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Herbert Xu @ 2016-06-07 14:16 UTC
  To: Baolin Wang
  Cc: axboe, agk, snitzer, dm-devel, davem, ebiggers3, js1304,
	tadeusz.struk, smueller, standby24x7, shli, dan.j.williams,
	martin.petersen, sagig, kent.overstreet, keith.busch, tj,
	ming.lei, broonie, arnd, linux-crypto, linux-block, linux-raid,
	linux-kernel

On Tue, Jun 07, 2016 at 08:17:05PM +0800, Baolin Wang wrote:
> Some cipher hardware engines prefer to handle a bulk block rather than the
> one sector (512 bytes) created by dm-crypt, because these engines can handle
> the initialization vectors (IVs) themselves within one bulk block. This
> means we can increase the request size by merging requests, rather than
> always sending 512 bytes, and thus increase the hardware engine's
> processing speed.
> 
> So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can
> support bulk mode.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linaro.org>

Nack.  As I said before, please do it using explicit IV generators
like we do for IPsec.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Baolin Wang @ 2016-06-08  2:00 UTC
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

Hi Herbert,

On 7 June 2016 at 22:16, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Tue, Jun 07, 2016 at 08:17:05PM +0800, Baolin Wang wrote:
>> Some cipher hardware engines prefer to handle a bulk block rather than the
>> one sector (512 bytes) created by dm-crypt, because these engines can handle
>> the initialization vectors (IVs) themselves within one bulk block. This
>> means we can increase the request size by merging requests, rather than
>> always sending 512 bytes, and thus increase the hardware engine's
>> processing speed.
>>
>> So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can
>> support bulk mode.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
>
> Nack.  As I said before, please do it using explicit IV generators
> like we do for IPsec.

OK. I would like to try your suggestion. Thanks.

> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards

* Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Baolin Wang @ 2016-06-15  6:27 UTC
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

Hi Herbert,

On 8 June 2016 at 10:00, Baolin Wang <baolin.wang@linaro.org> wrote:
> Hi Herbert,
>
> On 7 June 2016 at 22:16, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> On Tue, Jun 07, 2016 at 08:17:05PM +0800, Baolin Wang wrote:
>>> Some cipher hardware engines prefer to handle a bulk block rather than the
>>> one sector (512 bytes) created by dm-crypt, because these engines can handle
>>> the initialization vectors (IVs) themselves within one bulk block. This
>>> means we can increase the request size by merging requests, rather than
>>> always sending 512 bytes, and thus increase the hardware engine's
>>> processing speed.
>>>
>>> So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can
>>> support bulk mode.
>>>
>>> Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
>>
>> Nack.  As I said before, please do it using explicit IV generators
>> like we do for IPsec.
>
> OK. I would like to try your suggestion. Thanks.

After some investigation, I still think we should divide the bulk
request from dm-crypt into small requests (each one 512 bytes) if
the algorithm does not support bulk mode (like CBC). We have talked
with the dm-crypt maintainers about why dm-crypt always uses 512
bytes as one request size in the thread below; could you please
check it?
http://www.kernelhub.org/?p=2&msg=907022

That means that if we move the IV handling into the crypto API, we
still cannot use the bulk interface for every algorithm; for example,
we still need to read/write in 512-byte units for CBC, since you
can't use a 4k or larger block with CBC (and most other encryption
modes). If only part of a 4k block is written (and then a system
crash happens), CBC would corrupt the block completely. So if we map
one whole bio with the bulk interface in dm-crypt, we need to divide
it into 512-byte requests in the crypto layer. I don't think we can
handle every algorithm with the bulk interface just by moving the IV
handling into the crypto API. Thanks.

>
>> --
>> Email: Herbert Xu <herbert@gondor.apana.org.au>
>> Home Page: http://gondor.apana.org.au/~herbert/
>> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
>
>
>
> --
> Baolin.wang
> Best Regards



-- 
Baolin.wang
Best Regards

* Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Herbert Xu @ 2016-06-15  6:49 UTC
  To: Baolin Wang
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On Wed, Jun 15, 2016 at 02:27:04PM +0800, Baolin Wang wrote:
> 
> After some investigation, I still think we should divide the bulk
> request from dm-crypt into small requests (each one 512 bytes) if
> the algorithm does not support bulk mode (like CBC). We have talked
> with the dm-crypt maintainers about why dm-crypt always uses 512
> bytes as one request size in the thread below; could you please
> check it?
> http://www.kernelhub.org/?p=2&msg=907022

That link only points to an email about an oops.

Digging through that thread, the only objection I have seen is about
the fact that you have to generate a fresh IV for each sector, which
is precisely what I'm suggesting that you do.

IOW, implement the IV generators in the crypto API, and then you can
easily generate a new IV (if necessary) for each sector.

> That means that if we move the IV handling into the crypto API, we
> still cannot use the bulk interface for every algorithm; for example,
> we still need to read/write in 512-byte units for CBC, since you
> can't use a 4k or larger block with CBC (and most other encryption
> modes). If only part of a 4k block is written (and then a system
> crash happens), CBC would corrupt the block completely. So if we map
> one whole bio with the bulk interface in dm-crypt, we need to divide
> it into 512-byte requests in the crypto layer. I don't think we can
> handle every algorithm with the bulk interface just by moving the IV
> handling into the crypto API. Thanks.

Of course you would do CBC in 512-byte blocks, but my point is that
you should do this in a crypto API algorithm, rather than in dm-crypt
as we do now.  Once you implement that, dm-crypt can treat every
algorithm as if it supported bulk processing.
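
For instance, the generator's encrypt path could walk the request in
sector-sized steps, roughly like this (a sketch of the idea only;
generate_sector_iv() and encrypt_one_sector() are hypothetical
placeholders, not existing crypto API helpers):

	static int ivgen_encrypt_sectors(struct skcipher_request *req, u8 *iv,
					 u64 first_sector)
	{
		const unsigned int sector_size = 512;
		unsigned int done;
		int err = 0;

		for (done = 0; done < req->cryptlen; done += sector_size) {
			/* derive this sector's IV (hypothetical helper) */
			generate_sector_iv(iv, first_sector + done / sector_size);

			/*
			 * Encrypt one 512-byte chunk with the inner cipher; a
			 * real template would set up a subrequest over the
			 * right slice of the scatterlist here (hypothetical).
			 */
			err = encrypt_one_sector(req, done, iv);
			if (err)
				break;
		}

		return err;
	}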

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Baolin Wang @ 2016-06-15  7:38 UTC
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On 15 June 2016 at 14:49, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Wed, Jun 15, 2016 at 02:27:04PM +0800, Baolin Wang wrote:
>>
>> After some investigation, I still think we should divide the bulk
>> request from dm-crypt into small requests (each one 512 bytes) if
>> the algorithm does not support bulk mode (like CBC). We have talked
>> with the dm-crypt maintainers about why dm-crypt always uses 512
>> bytes as one request size in the thread below; could you please
>> check it?
>> http://www.kernelhub.org/?p=2&msg=907022
>
> That link only points to an email about an oops.

Ah, sorry. Would you check this thread?
http://lkml.iu.edu/hypermail/linux/kernel/1601.1/03829.html

>
> Digging through that thread, the only objection I have seen is about
> the fact that you have to generate a fresh IV for each sector, which
> is precisely what I'm suggesting that you do.
>
> IOW, implement the IV generators in the crypto API, and then you can
> easily generate a new IV (if necessary) for each sector.
>
>> That means that if we move the IV handling into the crypto API, we
>> still cannot use the bulk interface for every algorithm; for example,
>> we still need to read/write in 512-byte units for CBC, since you
>> can't use a 4k or larger block with CBC (and most other encryption
>> modes). If only part of a 4k block is written (and then a system
>> crash happens), CBC would corrupt the block completely. So if we map
>> one whole bio with the bulk interface in dm-crypt, we need to divide
>> it into 512-byte requests in the crypto layer. I don't think we can
>> handle every algorithm with the bulk interface just by moving the IV
>> handling into the crypto API. Thanks.
>
> Of course you would do CBC in 512-byte blocks, but my point is that
> you should do this in a crypto API algorithm, rather than in dm-crypt
> as we do now.  Once you implement that, dm-crypt can treat every
> algorithm as if it supported bulk processing.

But that means we would have to divide the bulk request into 512-byte
requests and break up the mapped sg table for each request. On the
other hand, we would have to allocate memory for each request in the
crypto layer, while dm-crypt has already supplied a highly efficient
way to do that. I think these are really top-level decisions about how
to use the crypto APIs; do they need to move into the crypto layer? Thanks.

>
> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards

* Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Herbert Xu @ 2016-06-15  7:39 UTC
  To: Baolin Wang
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On Wed, Jun 15, 2016 at 03:38:02PM +0800, Baolin Wang wrote:
>
> But that means we would have to divide the bulk request into 512-byte
> requests and break up the mapped sg table for each request. On the
> other hand, we would have to allocate memory for each request in the
> crypto layer, while dm-crypt has already supplied a highly efficient
> way to do that. I think these are really top-level decisions about how
> to use the crypto APIs; do they need to move into the crypto layer? Thanks.

I have already explained to you how you can piggy-back off dm-crypt's
allocation, so what's the problem?
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Baolin Wang @ 2016-06-15  8:48 UTC
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On 15 June 2016 at 15:39, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Wed, Jun 15, 2016 at 03:38:02PM +0800, Baolin Wang wrote:
>>
>> But that means we would have to divide the bulk request into 512-byte
>> requests and break up the mapped sg table for each request. On the
>> other hand, we would have to allocate memory for each request in the
>> crypto layer, while dm-crypt has already supplied a highly efficient
>> way to do that. I think these are really top-level decisions about how
>> to use the crypto APIs; do they need to move into the crypto layer? Thanks.
>
> I have already explained to you how you can piggy-back off dm-crypt's
> allocation, so what's the problem?

Because the request created in dm-crypt is tied closely to dm-crypt,
I am worried about whether it will work, or whether it will introduce
other issues, if we move these top-level things into the crypto layer.
Anyway, I will try to do that. Thanks.

> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards
