* [RFC v2 0/3] Introduce the bulk mode method when sending request to crypto layer
@ 2016-05-27 11:11 Baolin Wang
  2016-05-27 11:11 ` [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio Baolin Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Baolin Wang @ 2016-05-27 11:11 UTC (permalink / raw)
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

This patchset checks whether the cipher supports bulk mode; dm-crypt then
chooses how to send requests to the crypto layer according to the cipher
mode. In bulk mode, we can use an sg table to map the whole bio and send
all scatterlists of one bio to the crypto engine to encrypt or decrypt,
which can improve the hardware engine's efficiency.
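
For illustration, the per-request dispatch this series adds to dm-crypt is
roughly the following (a simplified sketch of the logic in patch 3, not the
literal patch code):

	/* Prefer the bulk path; fall back to per-sector conversion. */
	if (skcipher_is_bulk_mode(any_tfm(cc))) {
		r = crypt_convert_bulk_block(cc, ctx, ctx->req);
		if (r == -EINVAL)
			r = crypt_convert_block(cc, ctx, ctx->req);
	} else {
		r = crypt_convert_block(cc, ctx, ctx->req);
	}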

As Milan pointed out, we need one driver that actually uses the new
'CRYPTO_ALG_BULK' flag. I'll add one cipher engine driver with the
'CRYPTO_ALG_BULK' flag to test this optimization in the next version, once
your comments confirm that this optimization is heading in the right
direction.

Looking forward to any comments and suggestions. Thanks.

Changes since v1:
 - Refactor the blk_bio_map_sg() function to avoid duplicated code.
 - Move the sg table allocation to the crypt_ctr_cipher() function to avoid
 memory allocation in the IO path.
 - Remove the crypt_sg_entry() function.
 - Other optimizations.

Baolin Wang (3):
  block: Introduce blk_bio_map_sg() to map one bio
  crypto: Introduce CRYPTO_ALG_BULK flag
  md: dm-crypt: Introduce the bulk mode method when sending request

 block/blk-merge.c         |   36 +++++++++--
 drivers/md/dm-crypt.c     |  145 ++++++++++++++++++++++++++++++++++++++++++++-
 include/crypto/skcipher.h |    7 +++
 include/linux/blkdev.h    |    2 +
 include/linux/crypto.h    |    6 ++
 5 files changed, 190 insertions(+), 6 deletions(-)

-- 
1.7.9.5

* [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio
  2016-05-27 11:11 [RFC v2 0/3] Introduce the bulk mode method when sending request to crypto layer Baolin Wang
@ 2016-05-27 11:11 ` Baolin Wang
  2016-06-03 14:35   ` Jens Axboe
  2016-06-03 14:38   ` Jens Axboe
  2016-05-27 11:11 ` [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag Baolin Wang
  2016-05-27 11:11 ` [RFC v2 3/3] md: dm-crypt: Introduce the bulk mode method when sending request Baolin Wang
  2 siblings, 2 replies; 18+ messages in thread
From: Baolin Wang @ 2016-05-27 11:11 UTC (permalink / raw)
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

In dm-crypt, one bio needs to be mapped to a scatterlist to improve the
hardware engine's encryption efficiency. Thus this patch introduces the
blk_bio_map_sg() function to map one bio with scatterlists.

To avoid duplicating code in the __blk_bios_map_sg() function, add one
parameter to distinguish a bio mapping from a request mapping.
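
For example, dm-crypt (patch 3 of this series) uses the new helper as
follows to map a whole bio into a preallocated sg table:

	total_sg_in = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
				     bio_in, cc->sgt_in.sgl);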

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 block/blk-merge.c      |   36 +++++++++++++++++++++++++++++++-----
 include/linux/blkdev.h |    2 ++
 2 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..badae44 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -376,7 +376,7 @@ new_segment:
 
 static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 			     struct scatterlist *sglist,
-			     struct scatterlist **sg)
+			     struct scatterlist **sg, bool single_bio)
 {
 	struct bio_vec bvec, bvprv = { NULL };
 	struct bvec_iter iter;
@@ -408,13 +408,39 @@ single_segment:
 		return 1;
 	}
 
-	for_each_bio(bio)
+	if (!single_bio) {
+		for_each_bio(bio)
+			bio_for_each_segment(bvec, bio, iter)
+				__blk_segment_map_sg(q, &bvec, sglist, &bvprv,
+						     sg, &nsegs, &cluster);
+	} else {
 		bio_for_each_segment(bvec, bio, iter)
-			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
-					     &nsegs, &cluster);
+			__blk_segment_map_sg(q, &bvec, sglist, &bvprv,
+					     sg, &nsegs, &cluster);
+	}
+
+	return nsegs;
+}
+
+/*
+ * Map a bio to scatterlist, return number of sg entries setup. Caller must
+ * make sure sg can hold bio segments entries.
+ */
+int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+		   struct scatterlist *sglist)
+{
+	struct scatterlist *sg = NULL;
+	int nsegs = 0;
+
+	if (bio)
+		nsegs = __blk_bios_map_sg(q, bio, sglist, &sg, true);
+
+	if (sg)
+		sg_mark_end(sg);
 
 	return nsegs;
 }
+EXPORT_SYMBOL(blk_bio_map_sg);
 
 /*
  * map a request to scatterlist, return number of sg entries setup. Caller
@@ -427,7 +453,7 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 	int nsegs = 0;
 
 	if (rq->bio)
-		nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg);
+		nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg, false);
 
 	if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
 	    (blk_rq_bytes(rq) & q->dma_pad_mask)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1fd8fdf..5868062 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1013,6 +1013,8 @@ extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fu
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev);
 
 extern int blk_rq_map_sg(struct request_queue *, struct request *, struct scatterlist *);
+extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+			  struct scatterlist *sglist);
 extern void blk_dump_rq_flags(struct request *, char *);
 extern long nr_blockdev_pages(void);
 
-- 
1.7.9.5

* [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-05-27 11:11 [RFC v2 0/3] Introduce the bulk mode method when sending request to crypto layer Baolin Wang
  2016-05-27 11:11 ` [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio Baolin Wang
@ 2016-05-27 11:11 ` Baolin Wang
  2016-06-02  8:26   ` Herbert Xu
  2016-05-27 11:11 ` [RFC v2 3/3] md: dm-crypt: Introduce the bulk mode method when sending request Baolin Wang
  2 siblings, 1 reply; 18+ messages in thread
From: Baolin Wang @ 2016-05-27 11:11 UTC (permalink / raw)
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

Some cipher hardware engines prefer to handle a bulk block rather than the
one sector (512 bytes) created by dm-crypt, because these cipher engines
can handle the intermediate values (IVs) by themselves within one bulk
block. This means we can increase the size of the request by merging
requests, rather than always sending 512 bytes, and thus increase the
hardware engine's processing speed.

So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can
support bulk mode.
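
A driver advertising bulk support would simply OR the new flag into its
algorithm's cra_flags, e.g. (a hypothetical driver fragment, for
illustration only):

	static struct crypto_alg my_bulk_cbc_aes_alg = {
		.cra_name	= "cbc(aes)",
		.cra_flags	= CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC |
				  CRYPTO_ALG_BULK,
		/* ... remaining fields elided ... */
	};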

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 include/crypto/skcipher.h |    7 +++++++
 include/linux/crypto.h    |    6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 0f987f5..d89d29a 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -519,5 +519,12 @@ static inline void skcipher_request_set_crypt(
 	req->iv = iv;
 }
 
+static inline unsigned int skcipher_is_bulk_mode(struct crypto_skcipher *sk_tfm)
+{
+	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
+
+	return crypto_tfm_alg_bulk(tfm);
+}
+
 #endif	/* _CRYPTO_SKCIPHER_H */
 
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 6e28c89..a315487 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -63,6 +63,7 @@
 #define CRYPTO_ALG_DEAD			0x00000020
 #define CRYPTO_ALG_DYING		0x00000040
 #define CRYPTO_ALG_ASYNC		0x00000080
+#define CRYPTO_ALG_BULK			0x00000100
 
 /*
  * Set this bit if and only if the algorithm requires another algorithm of
@@ -623,6 +624,11 @@ static inline u32 crypto_tfm_alg_type(struct crypto_tfm *tfm)
 	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
 }
 
+static inline unsigned int crypto_tfm_alg_bulk(struct crypto_tfm *tfm)
+{
+	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_BULK;
+}
+
 static inline unsigned int crypto_tfm_alg_blocksize(struct crypto_tfm *tfm)
 {
 	return tfm->__crt_alg->cra_blocksize;
-- 
1.7.9.5

* [RFC v2 3/3] md: dm-crypt: Introduce the bulk mode method when sending request
  2016-05-27 11:11 [RFC v2 0/3] Introduce the bulk mode method when sending request to crypto layer Baolin Wang
  2016-05-27 11:11 ` [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio Baolin Wang
  2016-05-27 11:11 ` [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag Baolin Wang
@ 2016-05-27 11:11 ` Baolin Wang
  2 siblings, 0 replies; 18+ messages in thread
From: Baolin Wang @ 2016-05-27 11:11 UTC (permalink / raw)
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

In the current dm-crypt code, it is inefficient to map one segment (always
one sector) of one bio with only one scatterlist at a time for a hardware
crypto engine. In particular, some encryption modes (like ECB or XTS)
cooperating with the crypto engine need just one initial IV or a null IV
instead of a different IV for each sector. In this situation we can use
multiple scatterlists to map the whole bio and send all scatterlists of one
bio to the crypto engine to encrypt or decrypt, which can improve the
hardware engine's efficiency.

With this optimization, on my test setup (BeagleBone Black board) using
64KB I/Os on an eMMC storage device, I saw about a 60% improvement in
throughput for encrypted writes, and about a 100% improvement for encrypted
reads. But this is not suitable for other modes which need a different IV
for each sector.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 drivers/md/dm-crypt.c |  145 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 144 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 4f3cb35..2101f35 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -33,6 +33,7 @@
 #include <linux/device-mapper.h>
 
 #define DM_MSG_PREFIX "crypt"
+#define DM_MAX_SG_LIST	1024
 
 /*
  * context holding the current state of a multi-part conversion
@@ -142,6 +143,9 @@ struct crypt_config {
 	char *cipher;
 	char *cipher_string;
 
+	struct sg_table sgt_in;
+	struct sg_table sgt_out;
+
 	struct crypt_iv_operations *iv_gen_ops;
 	union {
 		struct iv_essiv_private essiv;
@@ -837,6 +841,129 @@ static u8 *iv_of_dmreq(struct crypt_config *cc,
 		crypto_skcipher_alignmask(any_tfm(cc)) + 1);
 }
 
+static void crypt_init_sg_table(struct scatterlist *sgl)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		if (i < DM_MAX_SG_LIST - 1 && sg_is_last(sg))
+			sg_unmark_end(sg);
+		else if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		memset(sg, 0, sizeof(struct scatterlist));
+
+		if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+}
+
+static void crypt_reinit_sg_table(struct crypt_config *cc)
+{
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return;
+
+	crypt_init_sg_table(cc->sgt_in.sgl);
+	crypt_init_sg_table(cc->sgt_out.sgl);
+}
+
+static int crypt_alloc_sg_table(struct crypt_config *cc)
+{
+	unsigned int bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+	int ret = 0;
+
+	if (!bulk_mode)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_in, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_out, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_free_table;
+
+	return 0;
+
+out_free_table:
+	sg_free_table(&cc->sgt_in);
+out_skip_alloc:
+	cc->sgt_in.orig_nents = 0;
+	cc->sgt_out.orig_nents = 0;
+
+	return ret;
+}
+
+static int crypt_convert_bulk_block(struct crypt_config *cc,
+				    struct convert_context *ctx,
+				    struct skcipher_request *req)
+{
+	struct bio *bio_in = ctx->bio_in;
+	struct bio *bio_out = ctx->bio_out;
+	unsigned int total_bytes = bio_in->bi_iter.bi_size;
+	unsigned int total_sg_in, total_sg_out;
+	struct scatterlist *sg_in, *sg_out;
+	struct dm_crypt_request *dmreq;
+	u8 *iv;
+	int r;
+
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return -EINVAL;
+
+	dmreq = dmreq_of_req(cc, req);
+	iv = iv_of_dmreq(cc, dmreq);
+	dmreq->iv_sector = ctx->cc_sector;
+	dmreq->ctx = ctx;
+
+	total_sg_in = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
+				     bio_in, cc->sgt_in.sgl);
+	if ((total_sg_in <= 0) || (total_sg_in > DM_MAX_SG_LIST)) {
+		DMERR("%s in sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_in, cc->sgt_in.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_in.bi_size -= total_bytes;
+	sg_in = cc->sgt_in.sgl;
+	sg_out = cc->sgt_in.sgl;
+
+	if (bio_data_dir(bio_in) == READ)
+		goto set_crypt;
+
+	total_sg_out = blk_bio_map_sg(bdev_get_queue(bio_out->bi_bdev),
+				      bio_out, cc->sgt_out.sgl);
+	if ((total_sg_out <= 0) || (total_sg_out > DM_MAX_SG_LIST)) {
+		DMERR("%s out sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_out, cc->sgt_out.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_out.bi_size -= total_bytes;
+	sg_out = cc->sgt_out.sgl;
+
+set_crypt:
+	if (cc->iv_gen_ops) {
+		r = cc->iv_gen_ops->generator(cc, iv, dmreq);
+		if (r < 0)
+			return r;
+	}
+
+	skcipher_request_set_crypt(req, sg_in, sg_out, total_bytes, iv);
+
+	if (bio_data_dir(ctx->bio_in) == WRITE)
+		r = crypto_skcipher_encrypt(req);
+	else
+		r = crypto_skcipher_decrypt(req);
+
+	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
+		r = cc->iv_gen_ops->post(cc, iv, dmreq);
+
+	return r;
+}
+
 static int crypt_convert_block(struct crypt_config *cc,
 			       struct convert_context *ctx,
 			       struct skcipher_request *req)
@@ -920,6 +1047,7 @@ static void crypt_free_req(struct crypt_config *cc,
 static int crypt_convert(struct crypt_config *cc,
 			 struct convert_context *ctx)
 {
+	unsigned int bulk_mode;
 	int r;
 
 	atomic_set(&ctx->cc_pending, 1);
@@ -930,7 +1058,14 @@ static int crypt_convert(struct crypt_config *cc,
 
 		atomic_inc(&ctx->cc_pending);
 
-		r = crypt_convert_block(cc, ctx, ctx->req);
+		bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+		if (!bulk_mode) {
+			r = crypt_convert_block(cc, ctx, ctx->req);
+		} else {
+			r = crypt_convert_bulk_block(cc, ctx, ctx->req);
+			if (r == -EINVAL)
+				r = crypt_convert_block(cc, ctx, ctx->req);
+		}
 
 		switch (r) {
 		/*
@@ -1081,6 +1216,7 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 	if (io->ctx.req)
 		crypt_free_req(cc, io->ctx.req, base_bio);
 
+	crypt_reinit_sg_table(cc);
 	base_bio->bi_error = error;
 	bio_endio(base_bio);
 }
@@ -1563,6 +1699,9 @@ static void crypt_dtr(struct dm_target *ti)
 	kzfree(cc->cipher);
 	kzfree(cc->cipher_string);
 
+	sg_free_table(&cc->sgt_in);
+	sg_free_table(&cc->sgt_out);
+
 	/* Must zero key material before freeing */
 	kzfree(cc);
 }
@@ -1718,6 +1857,10 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 		}
 	}
 
+	ret = crypt_alloc_sg_table(cc);
+	if (ret)
+		DMWARN("Allocate sg table for bulk mode failed");
+
 	ret = 0;
 bad:
 	kfree(cipher_api);
-- 
1.7.9.5

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-05-27 11:11 ` [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag Baolin Wang
@ 2016-06-02  8:26   ` Herbert Xu
  2016-06-03  6:48     ` Baolin Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Herbert Xu @ 2016-06-02  8:26 UTC (permalink / raw)
  To: Baolin Wang
  Cc: axboe, agk, snitzer, dm-devel, davem, ebiggers3, js1304,
	tadeusz.struk, smueller, standby24x7, shli, dan.j.williams,
	martin.petersen, sagig, kent.overstreet, keith.busch, tj,
	ming.lei, broonie, arnd, linux-crypto, linux-block, linux-raid,
	linux-kernel

On Fri, May 27, 2016 at 07:11:23PM +0800, Baolin Wang wrote:
> Some cipher hardware engines prefer to handle a bulk block rather than the
> one sector (512 bytes) created by dm-crypt, because these cipher engines
> can handle the intermediate values (IVs) by themselves within one bulk
> block. This means we can increase the size of the request by merging
> requests, rather than always sending 512 bytes, and thus increase the
> hardware engine's processing speed.
> 
> So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can
> support bulk mode.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linaro.org>

I think a better approach would be to explicitly move the IV generation
into the crypto API, similar to how we handle IPsec.  Once you do that,
every algorithm can be handled through the bulk interface.
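
For illustration, dm-crypt would then allocate an IV-generating template
around the mode instead of generating IVs itself, e.g. (a sketch, assuming
such a template existed):

	tfm = crypto_alloc_skcipher("lmk(cbc(aes))", 0, 0);

and it could pass multi-sector scatterlists in a single request, with the
template generating each sector's IV internally.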

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-02  8:26   ` Herbert Xu
@ 2016-06-03  6:48     ` Baolin Wang
  2016-06-03  6:51       ` Herbert Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Baolin Wang @ 2016-06-03  6:48 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

Hi Herbert,

On 2 June 2016 at 16:26, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Fri, May 27, 2016 at 07:11:23PM +0800, Baolin Wang wrote:
>> Some cipher hardware engines prefer to handle a bulk block rather than
>> the one sector (512 bytes) created by dm-crypt, because these cipher
>> engines can handle the intermediate values (IVs) by themselves within one
>> bulk block. This means we can increase the size of the request by merging
>> requests, rather than always sending 512 bytes, and thus increase the
>> hardware engine's processing speed.
>>
>> So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can
>> support bulk mode.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
>
> I think a better approach would be to explicitly move the IV generation
> into the crypto API, similar to how we handle IPsec.  Once you do that,
> every algorithm can be handled through the bulk interface.
>

Sorry for the late reply.
Even if we move the IV generation into the crypto API, we still cannot
handle every algorithm with the bulk interface, because we need different
methods to map either the whole bio or one sector at a time, depending on
whether the algorithm supports bulk mode. Please correct me if I
misunderstand your point. Thanks.


> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03  6:48     ` Baolin Wang
@ 2016-06-03  6:51       ` Herbert Xu
  2016-06-03  7:10         ` Baolin Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Herbert Xu @ 2016-06-03  6:51 UTC (permalink / raw)
  To: Baolin Wang
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin Wang wrote:
>
> Even if we move the IV generation into the crypto API, we still cannot
> handle every algorithm with the bulk interface, because we need different
> methods to map either the whole bio or one sector at a time, depending on
> whether the algorithm supports bulk mode. Please correct me if I
> misunderstand your point. Thanks.

Which ones can't be handled this way?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03  6:51       ` Herbert Xu
@ 2016-06-03  7:10         ` Baolin Wang
  2016-06-03  7:54           ` Herbert Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Baolin Wang @ 2016-06-03  7:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On 3 June 2016 at 14:51, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin Wang wrote:
>>
>> Even if we move the IV generation into the crypto API, we still cannot
>> handle every algorithm with the bulk interface, because we need different
>> methods to map either the whole bio or one sector at a time, depending on
>> whether the algorithm supports bulk mode. Please correct me if I
>> misunderstand your point. Thanks.
>
> Which ones can't be handled this way?

What I mean is that the difference between bulk mode and sector mode is
not only the IV handling method, but also the method of mapping the data
with scatterlists.
That is why we have two paths in dm-crypt (crypt_convert_block() and
crypt_convert_bulk_block()) to handle the data, so we cannot handle every
algorithm with the bulk interface.

>
> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03  7:10         ` Baolin Wang
@ 2016-06-03  7:54           ` Herbert Xu
  2016-06-03  8:15             ` Baolin Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Herbert Xu @ 2016-06-03  7:54 UTC (permalink / raw)
  To: Baolin Wang
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On Fri, Jun 03, 2016 at 03:10:31PM +0800, Baolin Wang wrote:
> On 3 June 2016 at 14:51, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> > On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin Wang wrote:
> >>
> >> Even if we move the IV generation into the crypto API, we still cannot
> >> handle every algorithm with the bulk interface, because we need
> >> different methods to map either the whole bio or one sector at a time,
> >> depending on whether the algorithm supports bulk mode. Please correct
> >> me if I misunderstand your point. Thanks.
> >
> > Which ones can't be handled this way?
> 
> What I mean is that the difference between bulk mode and sector mode is
> not only the IV handling method, but also the method of mapping the data
> with scatterlists.
> That is why we have two paths in dm-crypt (crypt_convert_block() and
> crypt_convert_bulk_block()) to handle the data, so we cannot handle every
> algorithm with the bulk interface.

As I asked, which algorithm can't you handle through the bulk
interface, assuming it did all the requisite magic to generate
the correct IV?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03  7:54           ` Herbert Xu
@ 2016-06-03  8:15             ` Baolin Wang
  2016-06-03  8:21               ` Herbert Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Baolin Wang @ 2016-06-03  8:15 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On 3 June 2016 at 15:54, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 03:10:31PM +0800, Baolin Wang wrote:
>> On 3 June 2016 at 14:51, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> > On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin Wang wrote:
>> >>
>> >> Even if we move the IV generation into the crypto API, we still
>> >> cannot handle every algorithm with the bulk interface, because we need
>> >> different methods to map either the whole bio or one sector at a time,
>> >> depending on whether the algorithm supports bulk mode. Please correct
>> >> me if I misunderstand your point. Thanks.
>> >
>> > Which ones can't be handled this way?
>>
>> What I mean is that the difference between bulk mode and sector mode is
>> not only the IV handling method, but also the method of mapping the data
>> with scatterlists.
>> That is why we have two paths in dm-crypt (crypt_convert_block() and
>> crypt_convert_bulk_block()) to handle the data, so we cannot handle every
>> algorithm with the bulk interface.
>
> As I asked, which algorithm can't you handle through the bulk
> interface, assuming it did all the requisite magic to generate
> the correct IV?

Consider the cbc(aes) algorithm, which cannot be handled through the bulk
interface; it needs to map the data sector by sector.
If we also handled the cbc(aes) algorithm with the bulk interface, we would
need to divide the sg table into sectors and allocate request memory for
each divided sector. (As Mike pointed out, this is in the IO mapping path
and we try to avoid memory allocations at all costs, due to the risk of
deadlock when issuing IO to stacked block devices; dm-crypt could be part
of a much more elaborate IO stack.) That would introduce more messy things,
I think.

>
> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03  8:15             ` Baolin Wang
@ 2016-06-03  8:21               ` Herbert Xu
  2016-06-03  9:23                 ` Baolin Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Herbert Xu @ 2016-06-03  8:21 UTC (permalink / raw)
  To: Baolin Wang
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On Fri, Jun 03, 2016 at 04:15:28PM +0800, Baolin Wang wrote:
>
> Consider the cbc(aes) algorithm, which cannot be handled through the bulk
> interface; it needs to map the data sector by sector.
> If we also handled the cbc(aes) algorithm with the bulk interface, we
> would need to divide the sg table into sectors and allocate request memory
> for each divided sector. (As Mike pointed out, this is in the IO mapping
> path and we try to avoid memory allocations at all costs, due to the risk
> of deadlock when issuing IO to stacked block devices; dm-crypt could be
> part of a much more elaborate IO stack.) That would introduce more messy
> things, I think.

Perhaps I'm not making myself very clear.  If you move the IV
generation into the crypto API, those crypto API algorithms will
be operating at the sector level.

For example, assuming you're doing lmk, then the algorithm would
be called lmk(cbc(aes)) and it will take as its input one or more
sectors and for each sector it should generate an IV and operate
on it.
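
Conceptually, the encrypt path of such a template would walk the request
in sector-sized chunks, roughly like this (a hypothetical sketch;
generate_iv() and crypt_one_sector() are stand-in helpers, not real kernel
functions):

	/* Hypothetical inner loop of an lmk(cbc(aes)) template. */
	for (off = 0; off < req->cryptlen; off += 512) {
		generate_iv(iv, sector++);	/* one IV per 512-byte sector */
		err = crypt_one_sector(req, off, 512, iv);
		if (err)
			return err;
	}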

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03  8:21               ` Herbert Xu
@ 2016-06-03  9:23                 ` Baolin Wang
  2016-06-03 10:09                   ` Herbert Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Baolin Wang @ 2016-06-03  9:23 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On 3 June 2016 at 16:21, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 04:15:28PM +0800, Baolin Wang wrote:
>>
>> Consider the cbc(aes) algorithm, which cannot be handled through the
>> bulk interface; it needs to map the data sector by sector.
>> If we also handled the cbc(aes) algorithm with the bulk interface, we
>> would need to divide the sg table into sectors and allocate request
>> memory for each divided sector. (As Mike pointed out, this is in the IO
>> mapping path and we try to avoid memory allocations at all costs, due to
>> the risk of deadlock when issuing IO to stacked block devices; dm-crypt
>> could be part of a much more elaborate IO stack.) That would introduce
>> more messy things, I think.
>
> Perhaps I'm not making myself very clear.  If you move the IV
> generation into the crypto API, those crypto API algorithms will
> be operating at the sector level.

Yeah, the IV generation is OK. But it is not only about the IV. For
example (see also the sketch after the three cases below):

(1) The ecb(aes) algorithm does not need to handle IV generation, so it can
support bulk mode:
Assuming one 64K bio comes in, we can map the whole bio with one sg table
in dm-crypt (say it uses 16 scatterlists from the sg table), then call
skcipher_request_set_crypt() to set up one request with the mapped sg
table, which will be sent to the crypto driver to be handled.

(2) The cbc(aes) algorithm needs to handle IV generation sector by sector,
so it cannot support bulk mode and cannot use the bulk interface:
Assuming one 64K bio comes in, we have to map the bio sector by sector,
with one scatterlist at a time. Each time we call
skcipher_request_set_crypt() to set up one request with only one mapped
scatterlist, until the whole bio has been handled.

(3) As you suggest, if we also used the bulk interface for the cbc(aes)
algorithm, assuming it did all the requisite magic to generate the correct
IV:
Assuming one 64K bio comes in, we can map the whole bio with one sg table
in the crypt_convert_bulk_block() function. But if we send this bulk
request to the crypto layer, it has to be divided into small requests, each
one sector (512 bytes) in size with the correct IV, and we need to allocate
memory for those small requests in the division, which is not good in the
IO mapping path. Also, how does each small request connect back to dm-crypt
(how do we get notified that the whole request is done)?

Thus we cannot handle every algorithm with the bulk interface.
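
In code terms, the two mappings above differ roughly as follows (a
simplified sketch based on the dm-crypt paths in this series; the
surrounding variables are assumed from context):

	/* (1) bulk mode: map the whole bio, one request */
	nents = blk_bio_map_sg(bdev_get_queue(bio->bi_bdev), bio,
			       cc->sgt_in.sgl);
	skcipher_request_set_crypt(req, cc->sgt_in.sgl, sg_out,
				   bio->bi_iter.bi_size, iv);

	/* (2) sector mode: one scatterlist and one request per sector */
	sg_init_table(&sg_in, 1);
	sg_set_page(&sg_in, bv.bv_page, 1 << SECTOR_SHIFT, bv.bv_offset);
	skcipher_request_set_crypt(req, &sg_in, &sg_out, 1 << SECTOR_SHIFT, iv);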

>
> For example, assuming you're doing lmk, then the algorithm would
> be called lmk(cbc(aes)) and it will take as its input one or more
> sectors and for each sector it should generate an IV and operate
> on it.



>
> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03  9:23                 ` Baolin Wang
@ 2016-06-03 10:09                   ` Herbert Xu
  2016-06-03 10:47                     ` Baolin Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Herbert Xu @ 2016-06-03 10:09 UTC (permalink / raw)
  To: Baolin Wang
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On Fri, Jun 03, 2016 at 05:23:59PM +0800, Baolin Wang wrote:
>
> Assuming one 64K bio comes in, we can map the whole bio with one sg table
> in the crypt_convert_bulk_block() function. But if we send this bulk
> request to the crypto layer, it has to be divided into small requests,
> each one sector (512 bytes) in size with the correct IV, and we need to
> allocate memory for those small requests in the division, which is not
> good in the IO mapping path. Also, how does each small request connect
> back to dm-crypt (how do we get notified that the whole request is done)?

Why won't it be good? The actual AES block size is 16 and yet we
have no trouble when you feed it a block of 512 bytes.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

* Re: [RFC v2 2/3] crypto: Introduce CRYPTO_ALG_BULK flag
  2016-06-03 10:09                   ` Herbert Xu
@ 2016-06-03 10:47                     ` Baolin Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Baolin Wang @ 2016-06-03 10:47 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Jens Axboe, Alasdair G Kergon, Mike Snitzer,
	open list:DEVICE-MAPPER (LVM),
	David Miller, Eric Biggers, Joonsoo Kim, tadeusz.struk, smueller,
	Masanari Iida, Shaohua Li, Dan Williams, Martin K. Petersen,
	Sagi Grimberg, Kent Overstreet, Keith Busch, Tejun Heo, Ming Lei,
	Mark Brown, Arnd Bergmann, linux-crypto, linux-block,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT, LKML

On 3 June 2016 at 18:09, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 05:23:59PM +0800, Baolin Wang wrote:
>>
>> Assuming one 64K bio comes in, we can map the whole bio with one sg
>> table in the crypt_convert_bulk_block() function. But if we send this
>> bulk request to the crypto layer, it has to be divided into small
>> requests, each one sector (512 bytes) in size with the correct IV, and we
>> need to allocate memory for those small requests in the division, which
>> is not good in the IO mapping path. Also, how does each small request
>> connect back to dm-crypt (how do we get notified that the whole request
>> is done)?
>
> Why won't it be good? The actual AES block size is 16 and yet we

Like I said, we should avoid memory allocation in the IO path to improve
efficiency. The other issue is how the divided small requests (whose
request memory is allocated in the crypto layer) would connect back to
dm-crypt: dm-crypt sends just one bulk request to the crypto layer, but it
would be divided into small requests there.

> have no trouble when you feed it a block of 512 bytes.

That's right.

-- 
Baolin.wang
Best Regards

* Re: [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio
  2016-05-27 11:11 ` [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio Baolin Wang
@ 2016-06-03 14:35   ` Jens Axboe
  2016-06-06  5:03     ` Baolin Wang
  2016-06-03 14:38   ` Jens Axboe
  1 sibling, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2016-06-03 14:35 UTC (permalink / raw)
  To: Baolin Wang, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel

On 05/27/2016 05:11 AM, Baolin Wang wrote:
> In dm-crypt, one bio needs to be mapped to a scatterlist to improve the
> hardware engine's encryption efficiency. Thus this patch introduces the
> blk_bio_map_sg() function to map one bio with scatterlists.
>
> To avoid duplicating code in the __blk_bios_map_sg() function, add one
> parameter to distinguish a bio mapping from a request mapping.

Just detach the bio in blk_bio_map_sg() instead of adding a separate 
case (and argument) for it in __blk_bios_map_sg().
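
For illustration, 'detaching' could look roughly like the following sketch
(saving and restoring the bio chain so that for_each_bio() only walks this
one bio):

	struct bio *next = bio->bi_next;

	bio->bi_next = NULL;	/* map only this bio, not the whole chain */
	nsegs = __blk_bios_map_sg(q, bio, sglist, &sg);
	bio->bi_next = next;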

-- 
Jens Axboe

* Re: [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio
  2016-05-27 11:11 ` [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio Baolin Wang
  2016-06-03 14:35   ` Jens Axboe
@ 2016-06-03 14:38   ` Jens Axboe
  2016-06-06  5:04     ` Baolin Wang
  1 sibling, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2016-06-03 14:38 UTC (permalink / raw)
  To: Baolin Wang, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel

On 05/27/2016 05:11 AM, Baolin Wang wrote:
> +/*
> + * Map a bio to scatterlist, return number of sg entries setup. Caller must
> + * make sure sg can hold bio segments entries.
> + */
> +int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
> +		   struct scatterlist *sglist)
> +{
> +	struct scatterlist *sg = NULL;
> +	int nsegs = 0;
> +
> +	if (bio)
> +		nsegs = __blk_bios_map_sg(q, bio, sglist, &sg, true);
> +
> +	if (sg)
> +		sg_mark_end(sg);

Put that if (sg) inside the if (bio) section, 'sg' isn't going to be
non-NULL outside of that.

Additionally, who would call this with a NULL bio? That seems odd, I'd
get rid of that check completely.
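
With both changes, the helper would collapse to roughly the following (a
sketch of the suggested shape, not a tested patch; the single_bio argument
kept here would also go away per the previous reply, by detaching the bio):

	int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
			   struct scatterlist *sglist)
	{
		struct scatterlist *sg = NULL;
		int nsegs;

		nsegs = __blk_bios_map_sg(q, bio, sglist, &sg, true);
		if (sg)
			sg_mark_end(sg);

		return nsegs;
	}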

-- 
Jens Axboe

* Re: [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio
  2016-06-03 14:35   ` Jens Axboe
@ 2016-06-06  5:03     ` Baolin Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Baolin Wang @ 2016-06-06  5:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Alasdair G Kergon, Mike Snitzer, open list:DEVICE-MAPPER (LVM),
	Herbert Xu, David Miller, Eric Biggers, Joonsoo Kim,
	tadeusz.struk, smueller, Masanari Iida, Shaohua Li, Dan Williams,
	Martin K. Petersen, Sagi Grimberg, Kent Overstreet, Keith Busch,
	Tejun Heo, Ming Lei, Mark Brown, Arnd Bergmann, linux-crypto,
	linux-block, open list:SOFTWARE RAID (Multiple Disks) SUPPORT,
	LKML

On 3 June 2016 at 22:35, Jens Axboe <axboe@kernel.dk> wrote:
> On 05/27/2016 05:11 AM, Baolin Wang wrote:
>>
>> In dm-crypt, one bio needs to be mapped to a scatterlist to improve the
>> hardware engine's encryption efficiency. Thus this patch introduces the
>> blk_bio_map_sg() function to map one bio with scatterlists.
>>
>> To avoid duplicating code in the __blk_bios_map_sg() function, add one
>> parameter to distinguish a bio mapping from a request mapping.
>
>
> Just detach the bio in blk_bio_map_sg() instead of adding a separate case
> (and argument) for it in __blk_bios_map_sg().

Makes sense.

>
> --
> Jens Axboe
>



-- 
Baolin.wang
Best Regards

* Re: [RFC v2 1/3] block: Introduce blk_bio_map_sg() to map one bio
  2016-06-03 14:38   ` Jens Axboe
@ 2016-06-06  5:04     ` Baolin Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Baolin Wang @ 2016-06-06  5:04 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Alasdair G Kergon, Mike Snitzer, open list:DEVICE-MAPPER (LVM),
	Herbert Xu, David Miller, Eric Biggers, Joonsoo Kim,
	tadeusz.struk, smueller, Masanari Iida, Shaohua Li, Dan Williams,
	Martin K. Petersen, Sagi Grimberg, Kent Overstreet, Keith Busch,
	Tejun Heo, Ming Lei, Mark Brown, Arnd Bergmann, linux-crypto,
	linux-block, open list:SOFTWARE RAID (Multiple Disks) SUPPORT,
	LKML

On 3 June 2016 at 22:38, Jens Axboe <axboe@kernel.dk> wrote:
> On 05/27/2016 05:11 AM, Baolin Wang wrote:
>>
>> +/*
>> + * Map a bio to scatterlist, return number of sg entries setup. Caller
>> must
>> + * make sure sg can hold bio segments entries.
>> + */
>> +int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
>> +                  struct scatterlist *sglist)
>> +{
>> +       struct scatterlist *sg = NULL;
>> +       int nsegs = 0;
>> +
>> +       if (bio)
>> +               nsegs = __blk_bios_map_sg(q, bio, sglist, &sg, true);
>> +
>> +       if (sg)
>> +               sg_mark_end(sg);
>
>
> Put that if (sg) inside the if (bio) section, 'sg' isn't going to be
> non-NULL outside of that.
>
> Additionally, who would call this with a NULL bio? That seems odd, I'd
> get rid of that check completely.

OK, I'll fix these in the next version. Thanks for your comments.

-- 
Baolin.wang
Best Regards
