linux-fsdevel.vger.kernel.org archive mirror
* [PATCH V7 00/24] block: support multipage bvec
@ 2018-06-27 12:45 Ming Lei
  2018-06-27 12:45 ` [PATCH V7 01/24] dm: use bio_split() when splitting out the already processed bio Ming Lei
                   ` (23 more replies)
  0 siblings, 24 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

Hi,

This patchset brings multipage bvec into the block layer:

1) what is multipage bvec?

A multipage bvec means that one 'struct bio_vec' can hold multiple
physically contiguous pages, instead of the single page per bvec that the
Linux kernel has used for a long time.
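
For illustration, here is a minimal sketch (not from the patchset; the
field layout matches include/linux/bvec.h) of the struct involved and the
two interpretations:

	struct bio_vec {
		struct page	*bv_page;	/* first page of the buffer */
		unsigned int	bv_len;		/* length of the buffer in bytes */
		unsigned int	bv_offset;	/* byte offset into bv_page */
	};

	/* singlepage bvec: bv_offset + bv_len <= PAGE_SIZE */

	/* multipage bvec: bv_len may span several physically contiguous
	 * pages starting at bv_page, for example:
	 */
	struct bio_vec mp_bv = {
		.bv_page	= page,		/* covers page, page + 1, page + 2 */
		.bv_offset	= 0,
		.bv_len		= 3 * PAGE_SIZE,
	};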

2) why is multipage bvec introduced?

Kent first proposed the idea[1].

As systems' RAM becomes much bigger than before, and hugepages, transparent
hugepages and memory compaction are widely used, it is now fairly common
to see physically contiguous pages coming from filesystems in I/O. On the
other hand, from the block layer's view, it isn't necessary to store the
intermediate pages in the bvec; it is enough to store just the physically
contiguous 'segment' in each io vector.

Also, huge pages are being brought to filesystems and swap[2][6], so we can
do I/O on a hugepage at a time[3], which requires that one bio be able to
transfer at least one huge page at a time. It turns out that simply changing
BIO_MAX_PAGES isn't flexible enough[3][5]. Multipage bvec fits this case
very well. As we saw, if CONFIG_THP_SWAP is enabled, BIO_MAX_PAGES may need
to be configured much bigger, such as 512; at 16 bytes per bvec that is an
8K bvec table, which requires at least two 4K pages to hold.

With multipage bvec:

- Inside the block layer, both bio splitting and sg mapping become more
efficient than before, by traversing the physically contiguous 'segment'
instead of each page.

- segment handling in the block layer can be much improved in the future,
since it should be quite easy to convert a multipage bvec into a segment.
For example, we might just store the segment in each bvec directly in the
future.

- bio size can be increased, which should improve some high-bandwidth I/O
cases in theory[4].

- there is an opportunity to improve the memory footprint of bvecs in the future.

3) how is multipage bvec implemented in this patchset?

The first 9 patches are preparation patches for multipage bvec, from
Christoph and Mike.

Patches 10 ~ 22 implement multipage bvec in the block layer:

	- put all the tricks into the bvec/bio/rq iterators; as long as
	drivers and filesystems use these standard iterators, they are
	happy with multipage bvec

	- introduce bio_for_each_bvec() to iterate over multipage bvecs for
	splitting bios and mapping sg (see the usage sketch after this list)

	- keep the current bio_for_each_segment*() to iterate over
	singlepage bvecs and make sure current users won't be broken;
	especially, convert to the new helper prototype in the single
	patch 21, given it is basically a mechanical conversion

	- enable multipage bvec in patch 22
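
As a usage sketch (handle_page()/handle_segment() are hypothetical driver
callbacks, used only for illustration), the two iterators compare like
this:

	struct bio_vec bv;
	struct bvec_iter iter;

	/* legacy view: yields one singlepage bvec per iteration */
	bio_for_each_segment(bv, bio, iter)
		handle_page(bv.bv_page, bv.bv_offset, bv.bv_len);

	/* new view: yields one physically contiguous multipage bvec
	 * per iteration; used for splitting bios and mapping sg
	 */
	bio_for_each_bvec(bv, bio, iter)
		handle_segment(bv.bv_page, bv.bv_offset, bv.bv_len);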

Patch 23 redefines BIO_MAX_PAGES as 256.

Patch 24 documents usage of the bio iterator helpers.

These patches can be found in the following git tree:

	gitweb: https://github.com/ming1/linux/commits/v4.18-rc-mp-bvec-V7
	git:  https://github.com/ming1/linux.git  #v4.18-rc-mp-bvec-V7

Lots of tests (blktests, xfstests, LTP I/O, ...) have been run with this
patchset, and no regressions were seen.

Thanks to Christoph for reviewing the early versions and providing very good
suggestions, such as introducing bio_init_with_vec_table(), removing other
unnecessary helpers for cleanup, and so on.

Any comments are welcome!

V7:
	- include Christoph's and Mike's bio_clone_bioset() patches, which are
	  actually preparation patches for multipage bvec
	- address Christoph's comments

V6:
	- avoid introducing lots of renaming; follow Jens' suggestion of
	using the name 'chunk' for the multipage io vector
	- include Christoph's three prepare patches
	- decrease stack usage when using bio_for_each_chunk_segment_all()
	- address Kent's comment

V5:
	- remove some of the preparation patches, which have been merged already
	- add bio_clone_seg_bioset() to fix DM's bio clone issue, which
	was introduced by 18a25da84354c6b (dm: ensure bio submission follows
	a depth-first tree walk)
	- rebase on the latest block for-v4.18

V4:
	- rename bio_for_each_segment*() as bio_for_each_page*(), rename
	bio_segments() as bio_pages(), and rename rq_for_each_segment() as
	rq_for_each_pages(), because these helpers never return a real
	segment; they always return single-page bvecs
	
	- introduce segment_for_each_page_all()

	- introduce new bio_for_each_segment*()/rq_for_each_segment()/bio_segments()
	for returning real multipage segments

	- rewrite segment_last_page()

	- rename bvec iterator helper as suggested by Christoph

	- replace comment with applying bio helpers as suggested by Christoph

	- document usage of bio iterator helpers

	- redefine BIO_MAX_PAGES as 256 so that the biggest bvec table
	fits in a 4K page

	- move bio_alloc_pages() into bcache as suggested by Christoph

V3:
	- rebase on v4.13-rc3 with for-next of block tree
	- run more xfstests: xfs/ext4 over NVMe, SATA, DM(linear),
	MD(raid1), and no regressions were triggered
	- add Reviewed-by on some btrfs patches
	- remove two MD patches because both are merged to linus tree
	  already

V2:
	- bvec table direct access in raid has been cleaned up, so the NO_MP
	flag is dropped
	- rebase on recent Neil Brown's change on bio and bounce code
	- reorganize the patchset

V1:
	- against v4.10-rc1; some cleanups from V0 are in -linus already
	- handle queue_virt_boundary() in mp bvec change and make NVMe happy
	- further BTRFS cleanup
	- remove QUEUE_FLAG_SPLIT_MP
	- rename for two new helpers of bio_for_each_segment_all()
	- fix bounce conversion
	- address comments in V0

[1], http://marc.info/?l=linux-kernel&m=141680246629547&w=2
[2], https://patchwork.kernel.org/patch/9451523/
[3], http://marc.info/?t=147735447100001&r=1&w=2
[4], http://marc.info/?l=linux-mm&m=147745525801433&w=2
[5], http://marc.info/?t=149569484500007&r=1&w=2
[6], http://marc.info/?t=149820215300004&r=1&w=2


Christoph Hellwig (8):
  bcache: don't clone bio in bch_data_verify
  exofs: use bio_clone_fast in _write_mirror
  block: remove bio_clone_kmalloc
  md: remove a bogus comment
  block: unexport bio_clone_bioset
  block: simplify bio_check_pages_dirty
  block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec
  block: use bio_add_page in bio_iov_iter_get_pages

Mike Snitzer (1):
  dm: use bio_split() when splitting out the already processed bio

Ming Lei (15):
  block: introduce multipage page bvec helpers
  block: introduce bio_for_each_bvec()
  block: use bio_for_each_bvec() to compute multipage bvec count
  block: use bio_for_each_bvec() to map sg
  block: introduce bvec_last_segment()
  fs/buffer.c: use bvec iterator to truncate the bio
  btrfs: use bvec_last_segment to get bio's last page
  btrfs: move bio_pages_all() to btrfs
  block: introduce bio_bvecs()
  block: loop: pass multipage bvec to iov_iter
  bcache: avoid to use bio_for_each_segment_all() in
    bch_bio_alloc_pages()
  block: allow bio_for_each_segment_all() to iterate over multipage bvec
  block: enable multipage bvecs
  block: always define BIO_MAX_PAGES as 256
  block: document usage of bio iterator helpers

 Documentation/block/biovecs.txt |  27 +++++
 block/bio.c                     | 232 ++++++++++++++--------------------------
 block/blk-merge.c               | 162 ++++++++++++++++++++++------
 block/blk-zoned.c               |   5 +-
 block/bounce.c                  |  75 ++++++++++++-
 drivers/block/loop.c            |  24 ++---
 drivers/md/bcache/btree.c       |   3 +-
 drivers/md/bcache/debug.c       |   6 +-
 drivers/md/bcache/util.c        |   2 +-
 drivers/md/dm-crypt.c           |   3 +-
 drivers/md/dm.c                 |   5 +-
 drivers/md/md.c                 |   4 -
 drivers/md/raid1.c              |   3 +-
 fs/block_dev.c                  |   6 +-
 fs/btrfs/compression.c          |   8 +-
 fs/btrfs/disk-io.c              |   3 +-
 fs/btrfs/extent_io.c            |  29 +++--
 fs/btrfs/inode.c                |   6 +-
 fs/btrfs/raid56.c               |   3 +-
 fs/buffer.c                     |   5 +-
 fs/crypto/bio.c                 |   3 +-
 fs/direct-io.c                  |   4 +-
 fs/exofs/ore.c                  |   7 +-
 fs/exofs/ore_raid.c             |   3 +-
 fs/ext4/page-io.c               |   3 +-
 fs/ext4/readpage.c              |   3 +-
 fs/f2fs/data.c                  |   9 +-
 fs/gfs2/lops.c                  |   6 +-
 fs/gfs2/meta_io.c               |   3 +-
 fs/iomap.c                      |   3 +-
 fs/mpage.c                      |   3 +-
 fs/xfs/xfs_aops.c               |   5 +-
 include/linux/bio.h             |  90 +++++++++++-----
 include/linux/bvec.h            | 155 +++++++++++++++++++++++++--
 34 files changed, 626 insertions(+), 282 deletions(-)

-- 
2.9.5


* [PATCH V7 01/24] dm: use bio_split() when splitting out the already processed bio
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 13:17   ` Mike Snitzer
  2018-06-27 12:45 ` [PATCH V7 02/24] bcache: don't clone bio in bch_data_verify Ming Lei
                   ` (22 subsequent siblings)
  23 siblings, 1 reply; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, stable

From: Mike Snitzer <snitzer@redhat.com>

Use of bio_clone_bioset() is inefficient if there is no need to clone
the original bio's bio_vec array.  Best to use the bio_clone_fast()
variant.  Also, just using bio_advance() is only part of what is needed
to properly set up the clone -- it doesn't account for the various
bio_integrity() related work that also needs to be performed (see
bio_split).

Address both of these issues by switching from bio_clone_bioset() to
bio_split().

Fixes: 18a25da8 ("dm: ensure bio submission follows a depth-first tree walk")
Cc: stable@vger.kernel.org
Reported-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: NeilBrown <neilb@suse.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index e65429a29c06..a3b103e8e3ce 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1606,10 +1606,9 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
 				 * the usage of io->orig_bio in dm_remap_zone_report()
 				 * won't be affected by this reassignment.
 				 */
-				struct bio *b = bio_clone_bioset(bio, GFP_NOIO,
-								 &md->queue->bio_split);
+				struct bio *b = bio_split(bio, bio_sectors(bio) - ci.sector_count,
+							  GFP_NOIO, &md->queue->bio_split);
 				ci.io->orig_bio = b;
-				bio_advance(bio, (bio_sectors(bio) - ci.sector_count) << 9);
 				bio_chain(b, bio);
 				ret = generic_make_request(bio);
 				break;
-- 
2.9.5


* [PATCH V7 02/24] bcache: don't clone bio in bch_data_verify
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
  2018-06-27 12:45 ` [PATCH V7 01/24] dm: use bio_split() when splitting out the already processed bio Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 03/24] exofs: use bio_clone_fast in _write_mirror Ming Lei
                   ` (21 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

We immediately overwrite the biovec array, so instead just allocate
a new bio and copy over the disk, sector and size.

Acked-by: Coly Li <colyli@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/debug.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index d030ce3025a6..04d146711950 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -110,11 +110,15 @@ void bch_data_verify(struct cached_dev *dc, struct bio *bio)
 	struct bio_vec bv, cbv;
 	struct bvec_iter iter, citer = { 0 };
 
-	check = bio_clone_kmalloc(bio, GFP_NOIO);
+	check = bio_kmalloc(GFP_NOIO, bio_segments(bio));
 	if (!check)
 		return;
+	check->bi_disk = bio->bi_disk;
 	check->bi_opf = REQ_OP_READ;
+	check->bi_iter.bi_sector = bio->bi_iter.bi_sector;
+	check->bi_iter.bi_size = bio->bi_iter.bi_size;
 
+	bch_bio_map(check, NULL);
 	if (bch_bio_alloc_pages(check, GFP_NOIO))
 		goto out_put;
 
-- 
2.9.5


* [PATCH V7 03/24] exofs: use bio_clone_fast in _write_mirror
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
  2018-06-27 12:45 ` [PATCH V7 01/24] dm: use bio_split() when splitting out the already processed bio Ming Lei
  2018-06-27 12:45 ` [PATCH V7 02/24] bcache: don't clone bio in bch_data_verify Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 04/24] block: remove bio_clone_kmalloc Ming Lei
                   ` (20 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

The mirroring code never changes the bio data or biovecs.  This means
we can reuse the biovec allocation easily instead of duplicating it.

Acked-by: Boaz Harrosh <ooo@electrozaur.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/exofs/ore.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/exofs/ore.c b/fs/exofs/ore.c
index 1b8b44637e70..5331a15a61f1 100644
--- a/fs/exofs/ore.c
+++ b/fs/exofs/ore.c
@@ -873,8 +873,8 @@ static int _write_mirror(struct ore_io_state *ios, int cur_comp)
 			struct bio *bio;
 
 			if (per_dev != master_dev) {
-				bio = bio_clone_kmalloc(master_dev->bio,
-							GFP_KERNEL);
+				bio = bio_clone_fast(master_dev->bio,
+						     GFP_KERNEL, NULL);
 				if (unlikely(!bio)) {
 					ORE_DBGMSG(
 					      "Failed to allocate BIO size=%u\n",
-- 
2.9.5


* [PATCH V7 04/24] block: remove bio_clone_kmalloc
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (2 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 03/24] exofs: use bio_clone_fast in _write_mirror Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 05/24] md: remove a bogus comment Ming Lei
                   ` (19 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

Unused now.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/bio.h | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index f08f5fe7bd08..430807f9f44b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -443,12 +443,6 @@ static inline struct bio *bio_kmalloc(gfp_t gfp_mask, unsigned int nr_iovecs)
 	return bio_alloc_bioset(gfp_mask, nr_iovecs, NULL);
 }
 
-static inline struct bio *bio_clone_kmalloc(struct bio *bio, gfp_t gfp_mask)
-{
-	return bio_clone_bioset(bio, gfp_mask, NULL);
-
-}
-
 extern blk_qc_t submit_bio(struct bio *);
 
 extern void bio_endio(struct bio *);
-- 
2.9.5


* [PATCH V7 05/24] md: remove a bogus comment
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (3 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 04/24] block: remove bio_clone_kmalloc Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 06/24] block: unexport bio_clone_bioset Ming Lei
                   ` (18 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

The function name mentioned doesn't exist, and the code next to it
doesn't match the description either.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/md.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 29b0cd9ec951..81f458514ac0 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -204,10 +204,6 @@ static int start_readonly;
  */
 static bool create_on_open = true;
 
-/* bio_clone_mddev
- * like bio_clone_bioset, but with a local bio set
- */
-
 struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,
 			    struct mddev *mddev)
 {
-- 
2.9.5


* [PATCH V7 06/24] block: unexport bio_clone_bioset
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (4 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 05/24] md: remove a bogus comment Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 07/24] block: simplify bio_check_pages_dirty Ming Lei
                   ` (17 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

Now only used by the bounce code, so move it there and mark the function
static.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c         | 77 -----------------------------------------------------
 block/bounce.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/bio.h |  1 -
 3 files changed, 68 insertions(+), 79 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 9710e275f230..43698bcff737 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -645,83 +645,6 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
 EXPORT_SYMBOL(bio_clone_fast);
 
 /**
- * 	bio_clone_bioset - clone a bio
- * 	@bio_src: bio to clone
- *	@gfp_mask: allocation priority
- *	@bs: bio_set to allocate from
- *
- *	Clone bio. Caller will own the returned bio, but not the actual data it
- *	points to. Reference count of returned bio will be one.
- */
-struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask,
-			     struct bio_set *bs)
-{
-	struct bvec_iter iter;
-	struct bio_vec bv;
-	struct bio *bio;
-
-	/*
-	 * Pre immutable biovecs, __bio_clone() used to just do a memcpy from
-	 * bio_src->bi_io_vec to bio->bi_io_vec.
-	 *
-	 * We can't do that anymore, because:
-	 *
-	 *  - The point of cloning the biovec is to produce a bio with a biovec
-	 *    the caller can modify: bi_idx and bi_bvec_done should be 0.
-	 *
-	 *  - The original bio could've had more than BIO_MAX_PAGES biovecs; if
-	 *    we tried to clone the whole thing bio_alloc_bioset() would fail.
-	 *    But the clone should succeed as long as the number of biovecs we
-	 *    actually need to allocate is fewer than BIO_MAX_PAGES.
-	 *
-	 *  - Lastly, bi_vcnt should not be looked at or relied upon by code
-	 *    that does not own the bio - reason being drivers don't use it for
-	 *    iterating over the biovec anymore, so expecting it to be kept up
-	 *    to date (i.e. for clones that share the parent biovec) is just
-	 *    asking for trouble and would force extra work on
-	 *    __bio_clone_fast() anyways.
-	 */
-
-	bio = bio_alloc_bioset(gfp_mask, bio_segments(bio_src), bs);
-	if (!bio)
-		return NULL;
-	bio->bi_disk		= bio_src->bi_disk;
-	bio->bi_opf		= bio_src->bi_opf;
-	bio->bi_write_hint	= bio_src->bi_write_hint;
-	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
-	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
-
-	switch (bio_op(bio)) {
-	case REQ_OP_DISCARD:
-	case REQ_OP_SECURE_ERASE:
-	case REQ_OP_WRITE_ZEROES:
-		break;
-	case REQ_OP_WRITE_SAME:
-		bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0];
-		break;
-	default:
-		bio_for_each_segment(bv, bio_src, iter)
-			bio->bi_io_vec[bio->bi_vcnt++] = bv;
-		break;
-	}
-
-	if (bio_integrity(bio_src)) {
-		int ret;
-
-		ret = bio_integrity_clone(bio, bio_src, gfp_mask);
-		if (ret < 0) {
-			bio_put(bio);
-			return NULL;
-		}
-	}
-
-	bio_clone_blkcg_association(bio, bio_src);
-
-	return bio;
-}
-EXPORT_SYMBOL(bio_clone_bioset);
-
-/**
  *	bio_add_pc_page	-	attempt to add page to bio
  *	@q: the target queue
  *	@bio: destination bio
diff --git a/block/bounce.c b/block/bounce.c
index fd31347b7836..bc63b3a2d18c 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -195,6 +195,73 @@ static void bounce_end_io_read_isa(struct bio *bio)
 	__bounce_end_io_read(bio, &isa_page_pool);
 }
 
+static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
+		struct bio_set *bs)
+{
+	struct bvec_iter iter;
+	struct bio_vec bv;
+	struct bio *bio;
+
+	/*
+	 * Pre immutable biovecs, __bio_clone() used to just do a memcpy from
+	 * bio_src->bi_io_vec to bio->bi_io_vec.
+	 *
+	 * We can't do that anymore, because:
+	 *
+	 *  - The point of cloning the biovec is to produce a bio with a biovec
+	 *    the caller can modify: bi_idx and bi_bvec_done should be 0.
+	 *
+	 *  - The original bio could've had more than BIO_MAX_PAGES biovecs; if
+	 *    we tried to clone the whole thing bio_alloc_bioset() would fail.
+	 *    But the clone should succeed as long as the number of biovecs we
+	 *    actually need to allocate is fewer than BIO_MAX_PAGES.
+	 *
+	 *  - Lastly, bi_vcnt should not be looked at or relied upon by code
+	 *    that does not own the bio - reason being drivers don't use it for
+	 *    iterating over the biovec anymore, so expecting it to be kept up
+	 *    to date (i.e. for clones that share the parent biovec) is just
+	 *    asking for trouble and would force extra work on
+	 *    __bio_clone_fast() anyways.
+	 */
+
+	bio = bio_alloc_bioset(gfp_mask, bio_segments(bio_src), bs);
+	if (!bio)
+		return NULL;
+	bio->bi_disk		= bio_src->bi_disk;
+	bio->bi_opf		= bio_src->bi_opf;
+	bio->bi_write_hint	= bio_src->bi_write_hint;
+	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
+	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
+
+	switch (bio_op(bio)) {
+	case REQ_OP_DISCARD:
+	case REQ_OP_SECURE_ERASE:
+	case REQ_OP_WRITE_ZEROES:
+		break;
+	case REQ_OP_WRITE_SAME:
+		bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0];
+		break;
+	default:
+		bio_for_each_segment(bv, bio_src, iter)
+			bio->bi_io_vec[bio->bi_vcnt++] = bv;
+		break;
+	}
+
+	if (bio_integrity(bio_src)) {
+		int ret;
+
+		ret = bio_integrity_clone(bio, bio_src, gfp_mask);
+		if (ret < 0) {
+			bio_put(bio);
+			return NULL;
+		}
+	}
+
+	bio_clone_blkcg_association(bio, bio_src);
+
+	return bio;
+}
+
 static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 			       mempool_t *pool)
 {
@@ -222,7 +289,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 		generic_make_request(*bio_orig);
 		*bio_orig = bio;
 	}
-	bio = bio_clone_bioset(*bio_orig, GFP_NOIO, passthrough ? NULL :
+	bio = bounce_clone_bio(*bio_orig, GFP_NOIO, passthrough ? NULL :
 			&bounce_bio_set);
 
 	bio_for_each_segment_all(to, bio, i) {
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 430807f9f44b..21d07858ddef 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -429,7 +429,6 @@ extern void bio_put(struct bio *);
 
 extern void __bio_clone_fast(struct bio *, struct bio *);
 extern struct bio *bio_clone_fast(struct bio *, gfp_t, struct bio_set *);
-extern struct bio *bio_clone_bioset(struct bio *, gfp_t, struct bio_set *bs);
 
 extern struct bio_set fs_bio_set;
 
-- 
2.9.5


* [PATCH V7 07/24] block: simplify bio_check_pages_dirty
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (5 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 06/24] block: unexport bio_clone_bioset Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 08/24] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec Ming Lei
                   ` (16 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

bio_check_pages_dirty currently violates the invariant that the bv_page of
a bio_vec inside bi_vcnt shouldn't be NULL, and that is going to become
really annoying with multipage biovecs.  Fortunately there isn't any
good reason for it - once we decide to defer freeing the bio to a
workqueue, holding onto a few additional pages isn't really an issue
anymore.  So just check if there is a clean page that needs dirtying in
the first pass, and do a second pass to free them if there was none,
while the cache is still hot.

Also use the chance to micro-optimize bio_dirty_fn a bit by not saving
irq state - we know we are called from a workqueue.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 56 +++++++++++++++++++++-----------------------------------
 1 file changed, 21 insertions(+), 35 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 43698bcff737..77f991688810 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1570,19 +1570,15 @@ static void bio_release_pages(struct bio *bio)
 	struct bio_vec *bvec;
 	int i;
 
-	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (page)
-			put_page(page);
-	}
+	bio_for_each_segment_all(bvec, bio, i)
+		put_page(bvec->bv_page);
 }
 
 /*
  * bio_check_pages_dirty() will check that all the BIO's pages are still dirty.
  * If they are, then fine.  If, however, some pages are clean then they must
  * have been written out during the direct-IO read.  So we take another ref on
- * the BIO and the offending pages and re-dirty the pages in process context.
+ * the BIO and re-dirty the pages in process context.
  *
  * It is expected that bio_check_pages_dirty() will wholly own the BIO from
  * here on.  It will run one put_page() against each page and will run one
@@ -1600,52 +1596,42 @@ static struct bio *bio_dirty_list;
  */
 static void bio_dirty_fn(struct work_struct *work)
 {
-	unsigned long flags;
-	struct bio *bio;
+	struct bio *bio, *next;
 
-	spin_lock_irqsave(&bio_dirty_lock, flags);
-	bio = bio_dirty_list;
+	spin_lock_irq(&bio_dirty_lock);
+	next = bio_dirty_list;
 	bio_dirty_list = NULL;
-	spin_unlock_irqrestore(&bio_dirty_lock, flags);
+	spin_unlock_irq(&bio_dirty_lock);
 
-	while (bio) {
-		struct bio *next = bio->bi_private;
+	while ((bio = next) != NULL) {
+		next = bio->bi_private;
 
 		bio_set_pages_dirty(bio);
 		bio_release_pages(bio);
 		bio_put(bio);
-		bio = next;
 	}
 }
 
 void bio_check_pages_dirty(struct bio *bio)
 {
 	struct bio_vec *bvec;
-	int nr_clean_pages = 0;
+	unsigned long flags;
 	int i;
 
 	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (PageDirty(page) || PageCompound(page)) {
-			put_page(page);
-			bvec->bv_page = NULL;
-		} else {
-			nr_clean_pages++;
-		}
+		if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
+			goto defer;
 	}
 
-	if (nr_clean_pages) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&bio_dirty_lock, flags);
-		bio->bi_private = bio_dirty_list;
-		bio_dirty_list = bio;
-		spin_unlock_irqrestore(&bio_dirty_lock, flags);
-		schedule_work(&bio_dirty_work);
-	} else {
-		bio_put(bio);
-	}
+	bio_release_pages(bio);
+	bio_put(bio);
+	return;
+defer:
+	spin_lock_irqsave(&bio_dirty_lock, flags);
+	bio->bi_private = bio_dirty_list;
+	bio_dirty_list = bio;
+	spin_unlock_irqrestore(&bio_dirty_lock, flags);
+	schedule_work(&bio_dirty_work);
 }
 EXPORT_SYMBOL_GPL(bio_check_pages_dirty);
 
-- 
2.9.5


* [PATCH V7 08/24] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (6 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 07/24] block: simplify bio_check_pages_dirty Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 09/24] block: use bio_add_page in bio_iov_iter_get_pages Ming Lei
                   ` (15 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

So don't bother handling it.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 77f991688810..de6cbaedfb65 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1557,10 +1557,8 @@ void bio_set_pages_dirty(struct bio *bio)
 	int i;
 
 	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (page && !PageCompound(page))
-			set_page_dirty_lock(page);
+		if (!PageCompound(bvec->bv_page))
+			set_page_dirty_lock(bvec->bv_page);
 	}
 }
 EXPORT_SYMBOL_GPL(bio_set_pages_dirty);
-- 
2.9.5


* [PATCH V7 09/24] block: use bio_add_page in bio_iov_iter_get_pages
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (7 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 08/24] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 10/24] block: introduce multipage page bvec helpers Ming Lei
                   ` (14 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap,
	Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

Replace a nasty hack with a different nasty hack to prepare for multipage
bio_vecs.  By moving the temporary page array as far up as possible in
the space allocated for the bio_vec array, we can iterate forward over it
and thus use bio_add_page.  Using bio_add_page means we'll be able to
merge physically contiguous pages once support for multipage bio_vecs is
merged.
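
A rough picture of the trick (an illustration only; sizes assume a 64-bit
build where a bio_vec is 16 bytes and a page pointer is 8 bytes):

	/*
	 * The spare bi_io_vec space is reused as a temporary page
	 * array, shifted to its upper half:
	 *
	 *   |<-------- entries_left * sizeof(struct bio_vec) -------->|
	 *   [ bvec 0 ][ bvec 1 ] ...        [ pages[0] ... pages[n-1] ]
	 *
	 * The bvec writer advances 16 bytes per page while the page
	 * reader advances only 8, but the reader starts
	 * entries_left * 8 bytes ahead, so bio_add_page() never
	 * overwrites a page pointer that has not been consumed yet.
	 */
	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);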

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 45 +++++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index de6cbaedfb65..80ea0c8878bd 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -825,6 +825,8 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+#define PAGE_PTRS_PER_BVEC	(sizeof(struct bio_vec) / sizeof(struct page *))
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -836,38 +838,33 @@ EXPORT_SYMBOL(bio_add_page);
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
+	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
-	size_t offset, diff;
-	ssize_t size;
+	ssize_t size, left;
+	unsigned len, i;
+	size_t offset;
+
+	/*
+	 * Move page array up in the allocated memory for the bio vecs as
+	 * far as possible so that we can start filling biovecs from the
+	 * beginning without overwriting the temporary page array.
+	 */
+	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
+	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
-	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
 
-	/*
-	 * Deep magic below:  We need to walk the pinned pages backwards
-	 * because we are abusing the space allocated for the bio_vecs
-	 * for the page array.  Because the bio_vecs are larger than the
-	 * page pointers by definition this will always work.  But it also
-	 * means we can't use bio_add_page, so any changes to it's semantics
-	 * need to be reflected here as well.
-	 */
-	bio->bi_iter.bi_size += size;
-	bio->bi_vcnt += nr_pages;
-
-	diff = (nr_pages * PAGE_SIZE - offset) - size;
-	while (nr_pages--) {
-		bv[nr_pages].bv_page = pages[nr_pages];
-		bv[nr_pages].bv_len = PAGE_SIZE;
-		bv[nr_pages].bv_offset = 0;
-	}
+	for (left = size, i = 0; left > 0; left -= len, i++) {
+		struct page *page = pages[i];
 
-	bv[0].bv_offset += offset;
-	bv[0].bv_len -= offset;
-	if (diff)
-		bv[bio->bi_vcnt - 1].bv_len -= diff;
+		len = min_t(size_t, PAGE_SIZE - offset, left);
+		if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
+			return -EINVAL;
+		offset = 0;
+	}
 
 	iov_iter_advance(iter, size);
 	return 0;
-- 
2.9.5


* [PATCH V7 10/24] block: introduce multipage page bvec helpers
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (8 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 09/24] block: use bio_add_page in bio_iov_iter_get_pages Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 15:59   ` kbuild test robot
  2018-06-27 12:45 ` [PATCH V7 11/24] block: introduce bio_for_each_bvec() Ming Lei
                   ` (13 subsequent siblings)
  23 siblings, 1 reply; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

This patch introduces 'mp_bvec_iter_*' helpers for multipage
bvec support.

The introduced interfaces treat one bvec as a real multipage segment;
for example, .bv_len is the total length of the multipage bvec.

The existing bvec_iter_* helpers remain the interfaces behind the current
bvec iterator, which drivers, filesystems, dm and so on still regard as
singlepage-only. These helpers now build singlepage bvecs in flight from
the stored multipage bvec, so users of the current bio/bvec iterator keep
working and need no change even though we store real multipage io vectors
in the bvec table.
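
A worked example (illustrative numbers, assuming 4K pages and a large
enough bi_size): for a stored bvec { .bv_page = p, .bv_offset = 1024,
.bv_len = 12288 } with bi_bvec_done == 0:

	mp_bvec_iter_page(bvec, iter)	/* p				*/
	mp_bvec_iter_offset(bvec, iter)	/* 1024				*/
	mp_bvec_iter_len(bvec, iter)	/* 12288			*/

	bvec_iter_page(bvec, iter)	/* p (page idx 1024 / 4096 = 0)	*/
	bvec_iter_offset(bvec, iter)	/* 1024 (1024 % 4096)		*/
	bvec_iter_len(bvec, iter)	/* 3072 (capped at 4096 - 1024)	*/

so the mp_* variants expose the whole physically contiguous buffer, while
the unprefixed variants keep yielding one page worth at a time.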

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bvec.h | 63 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 3 deletions(-)

diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index fe7a22dd133b..03a12fbb90d8 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -23,6 +23,44 @@
 #include <linux/kernel.h>
 #include <linux/bug.h>
 #include <linux/errno.h>
+#include <linux/mm.h>
+
+/*
+ * What is multipage bvecs?
+ *
+ * - bvec stored in bio->bi_io_vec is always multipage(mp) style
+ *
+ * - bvec(struct bio_vec) represents one physically contiguous I/O
+ *   buffer; now the buffer may include more than one page since
+ *   multipage(mp) bvec is supported, and all the pages represented
+ *   by one bvec are physically contiguous. Before mp support, at most
+ *   one page could be included in one bvec; we call that singlepage(sp)
+ *   bvec.
+ *
+ * - .bv_page of the bvec represents the 1st page in the mp bvec
+ *
+ * - .bv_offset of the bvec represents offset of the buffer in the bvec
+ *
+ * The effect on the current drivers/filesystem/dm/bcache/...:
+ *
+ * - almost everyone supposes that one bvec only includes one single
+ *   page, so we keep the sp interface unchanged; for example,
+ *   bio_for_each_segment() still returns bvecs with a single page
+ *
+ * - bio_for_each_segment*() will be changed to return singlepage
+ *   bvec too
+ *
+ * - while iterating, the iterator variable(struct bvec_iter) is always
+ *   updated in multipage bvec style, which means bvec_iter_advance()
+ *   is kept unchanged
+ *
+ * - returned(copied) singlepage bvec is generated in flight by bvec
+ *   helpers from the stored multipage bvec
+ *
+ * - In case that some components(such as iov_iter) need to support
+ *   multipage bvec, we introduce new helpers(mp_bvec_iter_*) for
+ *   them.
+ */
 
 /*
  * was unsigned short, but we might as well be ready for > 64kB I/O pages
@@ -52,16 +90,35 @@ struct bvec_iter {
  */
 #define __bvec_iter_bvec(bvec, iter)	(&(bvec)[(iter).bi_idx])
 
-#define bvec_iter_page(bvec, iter)				\
+#define mp_bvec_iter_page(bvec, iter)				\
 	(__bvec_iter_bvec((bvec), (iter))->bv_page)
 
-#define bvec_iter_len(bvec, iter)				\
+#define mp_bvec_iter_len(bvec, iter)				\
 	min((iter).bi_size,					\
 	    __bvec_iter_bvec((bvec), (iter))->bv_len - (iter).bi_bvec_done)
 
-#define bvec_iter_offset(bvec, iter)				\
+#define mp_bvec_iter_offset(bvec, iter)				\
 	(__bvec_iter_bvec((bvec), (iter))->bv_offset + (iter).bi_bvec_done)
 
+#define mp_bvec_iter_page_idx(bvec, iter)			\
+	(mp_bvec_iter_offset((bvec), (iter)) / PAGE_SIZE)
+
+/*
+ * <page, offset, length> of a singlepage(sp) segment.
+ *
+ * These helpers are for building sp bvecs in flight.
+ */
+#define bvec_iter_offset(bvec, iter)					\
+	(mp_bvec_iter_offset((bvec), (iter)) % PAGE_SIZE)
+
+#define bvec_iter_len(bvec, iter)					\
+	min_t(unsigned, mp_bvec_iter_len((bvec), (iter)),		\
+	    (PAGE_SIZE - (bvec_iter_offset((bvec), (iter)))))
+
+#define bvec_iter_page(bvec, iter)					\
+	nth_page(mp_bvec_iter_page((bvec), (iter)),		\
+		 mp_bvec_iter_page_idx((bvec), (iter)))
+
 #define bvec_iter_bvec(bvec, iter)				\
 ((struct bio_vec) {						\
 	.bv_page	= bvec_iter_page((bvec), (iter)),	\
-- 
2.9.5


* [PATCH V7 11/24] block: introduce bio_for_each_bvec()
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (9 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 10/24] block: introduce multipage page bvec helpers Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 12/24] block: use bio_for_each_bvec() to compute multipage bvec count Ming Lei
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

This helper is used for iterating over multipage bvecs for bio
splitting and merging.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bio.h  | 34 +++++++++++++++++++++++++++++++---
 include/linux/bvec.h | 36 ++++++++++++++++++++++++++++++++----
 2 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 21d07858ddef..551444bd9795 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -80,6 +80,9 @@
 #define bio_data_dir(bio) \
 	(op_is_write(bio_op(bio)) ? WRITE : READ)
 
+#define bio_iter_mp_iovec(bio, iter)				\
+	mp_bvec_iter_bvec((bio)->bi_io_vec, (iter))
+
 /*
  * Check whether this bio carries any data or not. A NULL bio is allowed.
  */
@@ -165,8 +168,8 @@ static inline bool bio_full(struct bio *bio)
 #define bio_for_each_segment_all(bvl, bio, i)				\
 	for (i = 0, bvl = (bio)->bi_io_vec; i < (bio)->bi_vcnt; i++, bvl++)
 
-static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
-				    unsigned bytes)
+static inline void __bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
+				      unsigned bytes, bool mp)
 {
 	iter->bi_sector += bytes >> 9;
 
@@ -174,11 +177,26 @@ static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
 		iter->bi_size -= bytes;
 		iter->bi_done += bytes;
 	} else {
-		bvec_iter_advance(bio->bi_io_vec, iter, bytes);
+		if (!mp)
+			bvec_iter_advance(bio->bi_io_vec, iter, bytes);
+		else
+			mp_bvec_iter_advance(bio->bi_io_vec, iter, bytes);
 		/* TODO: It is reasonable to complete bio with error here. */
 	}
 }
 
+static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
+				    unsigned bytes)
+{
+	__bio_advance_iter(bio, iter, bytes, false);
+}
+
+static inline void bio_advance_mp_iter(struct bio *bio, struct bvec_iter *iter,
+				       unsigned bytes)
+{
+	__bio_advance_iter(bio, iter, bytes, true);
+}
+
 static inline bool bio_rewind_iter(struct bio *bio, struct bvec_iter *iter,
 		unsigned int bytes)
 {
@@ -202,6 +220,16 @@ static inline bool bio_rewind_iter(struct bio *bio, struct bvec_iter *iter,
 #define bio_for_each_segment(bvl, bio, iter)				\
 	__bio_for_each_segment(bvl, bio, iter, (bio)->bi_iter)
 
+#define __bio_for_each_bvec(bvl, bio, iter, start)		\
+	for (iter = (start);						\
+	     (iter).bi_size &&						\
+		((bvl = bio_iter_mp_iovec((bio), (iter))), 1);	\
+	     bio_advance_mp_iter((bio), &(iter), (bvl).bv_len))
+
+/* returns one real segment(multipage bvec) each time */
+#define bio_for_each_bvec(bvl, bio, iter)			\
+	__bio_for_each_bvec(bvl, bio, iter, (bio)->bi_iter)
+
 #define bio_iter_last(bvec, iter) ((iter).bi_size == (bvec).bv_len)
 
 static inline unsigned bio_segments(struct bio *bio)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 03a12fbb90d8..417d44cf1e82 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -126,8 +126,16 @@ struct bvec_iter {
 	.bv_offset	= bvec_iter_offset((bvec), (iter)),	\
 })
 
-static inline bool bvec_iter_advance(const struct bio_vec *bv,
-		struct bvec_iter *iter, unsigned bytes)
+#define mp_bvec_iter_bvec(bvec, iter)				\
+((struct bio_vec) {							\
+	.bv_page	= mp_bvec_iter_page((bvec), (iter)),	\
+	.bv_len		= mp_bvec_iter_len((bvec), (iter)),	\
+	.bv_offset	= mp_bvec_iter_offset((bvec), (iter)),	\
+})
+
+static inline bool __bvec_iter_advance(const struct bio_vec *bv,
+				       struct bvec_iter *iter,
+				       unsigned bytes, bool mp)
 {
 	if (WARN_ONCE(bytes > iter->bi_size,
 		     "Attempted to advance past end of bvec iter\n")) {
@@ -136,8 +144,14 @@ static inline bool bvec_iter_advance(const struct bio_vec *bv,
 	}
 
 	while (bytes) {
-		unsigned iter_len = bvec_iter_len(bv, *iter);
-		unsigned len = min(bytes, iter_len);
+		unsigned len;
+
+		if (mp)
+			len = mp_bvec_iter_len(bv, *iter);
+		else
+			len = bvec_iter_len(bv, *iter);
+
+		len = min(bytes, len);
 
 		bytes -= len;
 		iter->bi_size -= len;
@@ -176,6 +190,20 @@ static inline bool bvec_iter_rewind(const struct bio_vec *bv,
 	return true;
 }
 
+static inline bool bvec_iter_advance(const struct bio_vec *bv,
+				     struct bvec_iter *iter,
+				     unsigned bytes)
+{
+	return __bvec_iter_advance(bv, iter, bytes, false);
+}
+
+static inline bool mp_bvec_iter_advance(const struct bio_vec *bv,
+					struct bvec_iter *iter,
+					unsigned bytes)
+{
+	return __bvec_iter_advance(bv, iter, bytes, true);
+}
+
 #define for_each_bvec(bvl, bio_vec, iter, start)			\
 	for (iter = (start);						\
 	     (iter).bi_size &&						\
-- 
2.9.5


* [PATCH V7 12/24] block: use bio_for_each_bvec() to compute multipage bvec count
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (10 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 11/24] block: introduce bio_for_each_bvec() Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 13/24] block: use bio_for_each_bvec() to map sg Ming Lei
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

Firstly, it is more efficient to use bio_for_each_bvec() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how many
multipage bvecs there are in the bio.

Secondly, once bio_for_each_bvec() is used, the bvec may need to be
split, because its length can be much longer than the max segment size,
so we have to split the big bvec into several segments.

Thirdly, while splitting the multipage bvec into segments, the max
segment number may be reached; the bio then needs to be split.
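
A concrete example of bvec_split_segs() (illustrative numbers): with
queue_max_segment_size(q) == 64K, no virt boundary, and plenty of free
segments, one 200K multipage bvec is accounted as

	64K + 64K + 64K + 8K	/* new_nsegs == 4, total_len == 200K */

and need_split stays false; it only becomes true when
queue_max_segments(q) (or a virt boundary) is hit before the bvec is
fully consumed, i.e. when the bio must be split in the middle of the
bvec.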

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 76 insertions(+), 14 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index aaec38cc37b8..bf1dceb9656a 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -97,6 +97,62 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	return sectors;
 }
 
+/*
+ * Split the bvec @bv into segments, and update all kinds of
+ * variables.
+ */
+static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
+		unsigned *nsegs, unsigned *last_seg_size,
+		unsigned *front_seg_size, unsigned *sectors)
+{
+	bool need_split = false;
+	unsigned len = bv->bv_len;
+	unsigned total_len = 0;
+	unsigned new_nsegs = 0, seg_size = 0;
+
+	if ((*nsegs >= queue_max_segments(q)) || !len)
+		return need_split;
+
+	/*
+	 * A multipage bvec may be too big to hold in one segment,
+	 * so the current bvec has to be split into multiple
+	 * segments.
+	 */
+	while (new_nsegs + *nsegs < queue_max_segments(q)) {
+		seg_size = min(queue_max_segment_size(q), len);
+
+		new_nsegs++;
+		total_len += seg_size;
+		len -= seg_size;
+
+		if ((queue_virt_boundary(q) && ((bv->bv_offset +
+		    total_len) & queue_virt_boundary(q))) || !len)
+			break;
+	}
+
+	/* split in the middle of the bvec */
+	if (len)
+		need_split = true;
+
+	/* update front segment size */
+	if (!*nsegs) {
+		unsigned first_seg_size = seg_size;
+
+		if (new_nsegs > 1)
+			first_seg_size = queue_max_segment_size(q);
+		if (*front_seg_size < first_seg_size)
+			*front_seg_size = first_seg_size;
+	}
+
+	/* update other variables */
+	*last_seg_size = seg_size;
+	*nsegs += new_nsegs;
+	if (sectors)
+		*sectors += total_len >> 9;
+
+	return need_split;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -110,7 +166,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 
-	bio_for_each_segment(bv, bio, iter) {
+	bio_for_each_bvec(bv, bio, iter) {
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
@@ -125,8 +181,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			 */
 			if (nsegs < queue_max_segments(q) &&
 			    sectors < max_sectors) {
-				nsegs++;
-				sectors = max_sectors;
+				/* split in the middle of bvec */
+				bv.bv_len = (max_sectors - sectors) << 9;
+				bvec_split_segs(q, &bv, &nsegs,
+						&seg_size,
+						&front_seg_size,
+						&sectors);
 			}
 			goto split;
 		}
@@ -153,11 +213,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == 1 && seg_size > front_seg_size)
 			front_seg_size = seg_size;
 
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+					&front_seg_size, &sectors))
+			goto split;
 
 	}
 
@@ -235,6 +296,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
 
@@ -255,7 +317,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_segment(bv, bio, iter) {
+		bio_for_each_bvec(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -277,20 +339,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
 new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;
 
-- 
2.9.5


* [PATCH V7 13/24] block: use bio_for_each_bvec() to map sg
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (11 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 12/24] block: use bio_for_each_bvec() to compute multipage bvec count Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 14/24] block: introduce bvec_last_segment() Ming Lei
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

It is more efficient to use bio_for_each_bvec() to map sg; meanwhile, we
have to consider splitting the multipage bvec, as done in
blk_bio_segment_split().
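
For instance (illustrative numbers, 4K pages): with
queue_max_segment_size(q) == 8K, blk_bvec_map_sg() maps one 12K bvec at
bv_offset 0 into two sg entries:

	sg[0]: page = bv_page,              offset = 0, length = 8K
	sg[1]: page = nth_page(bv_page, 2), offset = 0, length = 4K

with each entry's page/offset recomputed from (total + bv_offset), so no
sg entry ever exceeds the max segment size.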

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 72 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 52 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index bf1dceb9656a..0f7769c5feb5 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -424,6 +424,56 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 	return 0;
 }
 
+static struct scatterlist *blk_next_sg(struct scatterlist **sg,
+		struct scatterlist *sglist)
+{
+	if (!*sg)
+		return sglist;
+	else {
+		/*
+		 * If the driver previously mapped a shorter
+		 * list, we could see a termination bit
+		 * prematurely unless it fully inits the sg
+		 * table on each mapping. We KNOW that there
+		 * must be more entries here or the driver
+		 * would be buggy, so force clear the
+		 * termination bit to avoid doing a full
+		 * sg_init_table() in drivers for each command.
+		 */
+		sg_unmark_end(*sg);
+		return sg_next(*sg);
+	}
+}
+
+static unsigned blk_bvec_map_sg(struct request_queue *q,
+		struct bio_vec *bvec, struct scatterlist *sglist,
+		struct scatterlist **sg)
+{
+	unsigned nbytes = bvec->bv_len;
+	unsigned nsegs = 0, total = 0;
+
+	while (nbytes > 0) {
+		unsigned seg_size;
+		struct page *pg;
+		unsigned offset, idx;
+
+		*sg = blk_next_sg(sg, sglist);
+
+		seg_size = min(nbytes, queue_max_segment_size(q));
+		offset = (total + bvec->bv_offset) % PAGE_SIZE;
+		idx = (total + bvec->bv_offset) / PAGE_SIZE;
+		pg = nth_page(bvec->bv_page, idx);
+
+		sg_set_page(*sg, pg, seg_size, offset);
+
+		total += seg_size;
+		nbytes -= seg_size;
+		nsegs++;
+	}
+
+	return nsegs;
+}
+
 static inline void
 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		     struct scatterlist *sglist, struct bio_vec *bvprv,
@@ -444,25 +494,7 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		(*sg)->length += nbytes;
 	} else {
 new_segment:
-		if (!*sg)
-			*sg = sglist;
-		else {
-			/*
-			 * If the driver previously mapped a shorter
-			 * list, we could see a termination bit
-			 * prematurely unless it fully inits the sg
-			 * table on each mapping. We KNOW that there
-			 * must be more entries here or the driver
-			 * would be buggy, so force clear the
-			 * termination bit to avoid doing a full
-			 * sg_init_table() in drivers for each command.
-			 */
-			sg_unmark_end(*sg);
-			*sg = sg_next(*sg);
-		}
-
-		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
-		(*nsegs)++;
+		(*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
 	}
 	*bvprv = *bvec;
 }
@@ -484,7 +516,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 	int cluster = blk_queue_cluster(q), nsegs = 0;
 
 	for_each_bio(bio)
-		bio_for_each_segment(bvec, bio, iter)
+		bio_for_each_bvec(bvec, bio, iter)
 			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
 					     &nsegs, &cluster);
 
-- 
2.9.5


* [PATCH V7 14/24] block: introduce bvec_last_segment()
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (12 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 13/24] block: use bio_for_each_bvec() to map sg Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 15/24] fs/buffer.c: use bvec iterator to truncate the bio Ming Lei
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

BTRFS and guard_bio_eod() need to get the last singlepage segment
from one multipage bvec, so introduce this helper to make them happy.
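
A worked example (illustrative numbers, 4K pages): for a multipage bvec
{ .bv_offset = 1024, .bv_len = 12288 }, the buffer ends 1024 bytes into
its fourth page (1024 + 12288 = 13312 = 3 * 4096 + 1024), so
bvec_last_segment() yields

	seg->bv_page   = nth_page(bvec->bv_page, 3);
	seg->bv_offset = 0;
	seg->bv_len    = 1024;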

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bvec.h | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 417d44cf1e82..2269c7608a3e 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -219,4 +219,29 @@ static inline bool mp_bvec_iter_advance(const struct bio_vec *bv,
 	.bi_bvec_done	= 0,						\
 }
 
+/*
+ * Get the last singlepage segment from the multipage bvec and store it
+ * in @seg
+ */
+static inline void bvec_last_segment(const struct bio_vec *bvec,
+		struct bio_vec *seg)
+{
+	unsigned total = bvec->bv_offset + bvec->bv_len;
+	unsigned last_page = total / PAGE_SIZE;
+
+	if (last_page * PAGE_SIZE == total)
+		last_page--;
+
+	seg->bv_page = nth_page(bvec->bv_page, last_page);
+
+	/* the whole segment is inside the last page */
+	if (bvec->bv_offset >= last_page * PAGE_SIZE) {
+		seg->bv_offset = bvec->bv_offset % PAGE_SIZE;
+		seg->bv_len = bvec->bv_len;
+	} else {
+		seg->bv_offset = 0;
+		seg->bv_len = total - last_page * PAGE_SIZE;
+	}
+}
+
 #endif /* __LINUX_BVEC_ITER_H */
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 15/24] fs/buffer.c: use bvec iterator to truncate the bio
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (13 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 14/24] block: introduce bvec_last_segment() Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 16/24] btrfs: use bvec_last_segment to get bio's last page Ming Lei
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

Once multipage bvec is enabled, the last bvec may include more than one
page, so this patch uses bvec_last_segment() when truncating the bio.
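
To see why this matters, take a hypothetical truncated read whose last
bvec spans two physically contiguous pages: say bv_offset == 0,
bv_len == 6144 and truncated_bytes == 2048 after truncation (4 KiB
pages). zero_user() on bvec->bv_page at offset 6144 would point past the
first page, while bvec_last_segment() yields the in-page position to
clear:

	struct bio_vec bv;

	bvec_last_segment(bvec, &bv);
	/* bv.bv_page == nth_page(bvec->bv_page, 1), bv.bv_offset == 0, bv.bv_len == 2048 */
	zero_user(bv.bv_page, bv.bv_offset + bv.bv_len, truncated_bytes);
	/* clears bytes 2048..4095 of the second page */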

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 fs/buffer.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index cabc045f483d..0660b7813315 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3021,7 +3021,10 @@ void guard_bio_eod(int op, struct bio *bio)
 
 	/* ..and clear the end of the buffer for reads */
 	if (op == REQ_OP_READ) {
-		zero_user(bvec->bv_page, bvec->bv_offset + bvec->bv_len,
+		struct bio_vec bv;
+
+		bvec_last_segment(bvec, &bv);
+		zero_user(bv.bv_page, bv.bv_offset + bv.bv_len,
 				truncated_bytes);
 	}
 }
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 16/24] btrfs: use bvec_last_segment to get bio's last page
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (14 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 15/24] fs/buffer.c: use bvec iterator to truncate the bio Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 17/24] btrfs: move bio_pages_all() to btrfs Ming Lei
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei,
	Chris Mason, Josef Bacik, David Sterba, linux-btrfs

Prepare for supporting multipage bvec.

Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fb.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 fs/btrfs/compression.c | 5 ++++-
 fs/btrfs/extent_io.c   | 5 +++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index d3e447b45bf7..22b9e0e56c7e 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -407,8 +407,11 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start,
 static u64 bio_end_offset(struct bio *bio)
 {
 	struct bio_vec *last = bio_last_bvec_all(bio);
+	struct bio_vec bv;
 
-	return page_offset(last->bv_page) + last->bv_len + last->bv_offset;
+	bvec_last_segment(last, &bv);
+
+	return page_offset(bv.bv_page) + bv.bv_len + bv.bv_offset;
 }
 
 static noinline int add_ra_bio_pages(struct inode *inode,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index cce6087d6880..0b5e07723f5f 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2728,11 +2728,12 @@ static int __must_check submit_one_bio(struct bio *bio, int mirror_num,
 {
 	blk_status_t ret = 0;
 	struct bio_vec *bvec = bio_last_bvec_all(bio);
-	struct page *page = bvec->bv_page;
+	struct bio_vec bv;
 	struct extent_io_tree *tree = bio->bi_private;
 	u64 start;
 
-	start = page_offset(page) + bvec->bv_offset;
+	bvec_last_segment(bvec, &bv);
+	start = page_offset(bv.bv_page) + bv.bv_offset;
 
 	bio->bi_private = NULL;
 
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 17/24] btrfs: move bio_pages_all() to btrfs
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (15 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 16/24] btrfs: use bvec_last_segment to get bio's last page Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 18/24] block: introduce bio_bvecs() Ming Lei
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei,
	Chris Mason, Josef Bacik, David Sterba, linux-btrfs

BTRFS is the only user of this helper, so move it into BTRFS and
implement it via bio_for_each_segment_all(), since bio->bi_vcnt may no
longer equal the number of pages once multipage bvec is enabled.
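
A hypothetical illustration of the mismatch: once multipage bvec is
enabled, a bio whose single bvec covers two physically contiguous pages
has bi_vcnt == 1, while the page-granular iterator sees two segments:

	unsigned pages = btrfs_bio_pages_all(bio);
	/* pages == 2, although bio->bi_vcnt == 1 */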

Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fb.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 fs/btrfs/extent_io.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 0b5e07723f5f..9fce9f0793fe 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2356,6 +2356,18 @@ struct bio *btrfs_create_repair_bio(struct inode *inode, struct bio *failed_bio,
 	return bio;
 }
 
+static unsigned btrfs_bio_pages_all(struct bio *bio)
+{
+	unsigned i;
+	struct bio_vec *bv;
+
+	WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
+
+	bio_for_each_segment_all(bv, bio, i)
+		;
+	return i;
+}
+
 /*
  * this is a generic handler for readpage errors (default
  * readpage_io_failed_hook). if other copies exist, read those and write back
@@ -2376,7 +2388,7 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
 	int read_mode = 0;
 	blk_status_t status;
 	int ret;
-	unsigned failed_bio_pages = bio_pages_all(failed_bio);
+	unsigned failed_bio_pages = btrfs_bio_pages_all(failed_bio);
 
 	BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE);
 
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 18/24] block: introduce bio_bvecs()
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (16 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 17/24] btrfs: move bio_pages_all() to btrfs Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 19/24] block: loop: pass multipage bvec to iov_iter Ming Lei
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

There are still cases in which we need to know the number of multipage
segments in a bio, so introduce bio_bvecs() for that.
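
For a normal read/write bio holding one hypothetical two-page bvec, the
two helpers answer different questions:

	bio_segments(bio);	/* == 2: singlepage segments, via bio_for_each_segment() */
	bio_bvecs(bio);		/* == 1: multipage bvecs, via bio_for_each_bvec() */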

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bio.h | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 551444bd9795..083c1ee9c6c8 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -242,7 +242,6 @@ static inline unsigned bio_segments(struct bio *bio)
 	 * We special case discard/write same/write zeroes, because they
 	 * interpret bi_size differently:
 	 */
-
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
@@ -251,13 +250,34 @@ static inline unsigned bio_segments(struct bio *bio)
 	case REQ_OP_WRITE_SAME:
 		return 1;
 	default:
-		break;
+		bio_for_each_segment(bv, bio, iter)
+			segs++;
+		return segs;
 	}
+}
 
-	bio_for_each_segment(bv, bio, iter)
-		segs++;
+static inline unsigned bio_bvecs(struct bio *bio)
+{
+	unsigned bvecs = 0;
+	struct bio_vec bv;
+	struct bvec_iter iter;
 
-	return segs;
+	/*
+	 * We special case discard/write same/write zeroes, because they
+	 * interpret bi_size differently:
+	 */
+	switch (bio_op(bio)) {
+	case REQ_OP_DISCARD:
+	case REQ_OP_SECURE_ERASE:
+	case REQ_OP_WRITE_ZEROES:
+		return 0;
+	case REQ_OP_WRITE_SAME:
+		return 1;
+	default:
+		bio_for_each_bvec(bv, bio, iter)
+			bvecs++;
+		return bvecs;
+	}
 }
 
 /*
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 19/24] block: loop: pass multipage bvec to iov_iter
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (17 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 18/24] block: introduce bio_bvecs() Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages() Ming Lei
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

iov_iter is implemented on top of the bvec iterator, so it is safe to
pass multipage bvecs to it, and doing so is much more efficient than
passing one page per bvec.
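
The practical effect is a smaller bvec array and a shorter iov_iter walk.
As a rough sketch with hypothetical numbers, a fully contiguous 1 MiB
request needs 256 entries when counted page by page, but may need only
one when counted bvec by bvec:

	nr_bvec = bio_bvecs(bio);	/* possibly 1 instead of 256 */
	bvec = kmalloc_array(nr_bvec, sizeof(struct bio_vec), GFP_NOIO);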

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/block/loop.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index d6b6f434fd4b..a350b323e891 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -515,16 +515,16 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
 	struct bio *bio = rq->bio;
 	struct file *file = lo->lo_backing_file;
 	unsigned int offset;
-	int segments = 0;
+	int nr_bvec = 0;
 	int ret;
 
 	if (rq->bio != rq->biotail) {
-		struct req_iterator iter;
+		struct bvec_iter iter;
 		struct bio_vec tmp;
 
 		__rq_for_each_bio(bio, rq)
-			segments += bio_segments(bio);
-		bvec = kmalloc_array(segments, sizeof(struct bio_vec),
+			nr_bvec += bio_bvecs(bio);
+		bvec = kmalloc_array(nr_bvec, sizeof(struct bio_vec),
 				     GFP_NOIO);
 		if (!bvec)
 			return -EIO;
@@ -533,13 +533,14 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
 		/*
 		 * The bios of the request may be started from the middle of
 		 * the 'bvec' because of bio splitting, so we can't directly
-		 * copy bio->bi_iov_vec to new bvec. The rq_for_each_segment
+		 * copy bio->bi_iov_vec to new bvec. The bio_for_each_bvec
 		 * API will take care of all details for us.
 		 */
-		rq_for_each_segment(tmp, rq, iter) {
-			*bvec = tmp;
-			bvec++;
-		}
+		__rq_for_each_bio(bio, rq)
+			bio_for_each_bvec(tmp, bio, iter) {
+				*bvec = tmp;
+				bvec++;
+			}
 		bvec = cmd->bvec;
 		offset = 0;
 	} else {
@@ -550,12 +551,11 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
 		 */
 		offset = bio->bi_iter.bi_bvec_done;
 		bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
-		segments = bio_segments(bio);
+		nr_bvec = bio_bvecs(bio);
 	}
 	atomic_set(&cmd->ref, 2);
 
-	iov_iter_bvec(&iter, ITER_BVEC | rw, bvec,
-		      segments, blk_rq_bytes(rq));
+	iov_iter_bvec(&iter, ITER_BVEC | rw, bvec, nr_bvec, blk_rq_bytes(rq));
 	iter.iov_offset = offset;
 
 	cmd->iocb.ki_pos = pos;
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages()
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (18 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 19/24] block: loop: pass multipage bvec to iov_iter Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 15:55   ` Coly Li
  2018-06-27 12:45 ` [PATCH V7 21/24] block: allow bio_for_each_segment_all() to iterate over multipage bvec Ming Lei
                   ` (3 subsequent siblings)
  23 siblings, 1 reply; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei,
	linux-bcache

bch_bio_alloc_pages() is always called on a freshly allocated bio, so it
is safe to access the bvec table directly. Since this is the only case
of its kind, open code the bvec table access, as bio_for_each_segment_all()
will be changed to support iterating over multipage bvecs.

Cc: Coly Li <colyli@suse.de>
Cc: linux-bcache@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/md/bcache/util.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index fc479b026d6d..9f2a6fd5dfc9 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -268,7 +268,7 @@ int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
 	int i;
 	struct bio_vec *bv;
 
-	bio_for_each_segment_all(bv, bio, i) {
+	for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++, i++) {
 		bv->bv_page = alloc_page(gfp_mask);
 		if (!bv->bv_page) {
 			while (--bv >= bio->bi_io_vec)
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 21/24] block: allow bio_for_each_segment_all() to iterate over multipage bvec
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (19 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages() Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 22/24] block: enable multipage bvecs Ming Lei
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

This patch introduces one extra iterator variable to
bio_for_each_segment_all() so that it can iterate over multipage bvecs.

Given this is a mechanical and simple change for all
bio_for_each_segment_all() users, do the tree-wide conversion in one
single patch, which avoids introducing a temporary helper.
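
The new calling convention, as a minimal sketch (do_something() is a
hypothetical consumer, not part of the patch):

	struct bio_vec *bvec;
	struct bvec_iter_all iter_all;
	int i;

	bio_for_each_segment_all(bvec, bio, i, iter_all) {
		/* bvec points at a singlepage segment built up in iter_all.bv */
		do_something(bvec->bv_page, bvec->bv_offset, bvec->bv_len);
	}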

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c               | 27 ++++++++++++++++++---------
 block/blk-zoned.c         |  5 +++--
 block/bounce.c            |  6 ++++--
 drivers/md/bcache/btree.c |  3 ++-
 drivers/md/dm-crypt.c     |  3 ++-
 drivers/md/raid1.c        |  3 ++-
 fs/block_dev.c            |  6 ++++--
 fs/btrfs/compression.c    |  3 ++-
 fs/btrfs/disk-io.c        |  3 ++-
 fs/btrfs/extent_io.c      | 12 ++++++++----
 fs/btrfs/inode.c          |  6 ++++--
 fs/btrfs/raid56.c         |  3 ++-
 fs/crypto/bio.c           |  3 ++-
 fs/direct-io.c            |  4 +++-
 fs/exofs/ore.c            |  3 ++-
 fs/exofs/ore_raid.c       |  3 ++-
 fs/ext4/page-io.c         |  3 ++-
 fs/ext4/readpage.c        |  3 ++-
 fs/f2fs/data.c            |  9 ++++++---
 fs/gfs2/lops.c            |  6 ++++--
 fs/gfs2/meta_io.c         |  3 ++-
 fs/iomap.c                |  3 ++-
 fs/mpage.c                |  3 ++-
 fs/xfs/xfs_aops.c         |  5 +++--
 include/linux/bio.h       | 11 +++++++++--
 include/linux/bvec.h      | 31 +++++++++++++++++++++++++++++++
 26 files changed, 125 insertions(+), 45 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 80ea0c8878bd..22c6c83a7c8b 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1041,8 +1041,9 @@ static int bio_copy_from_iter(struct bio *bio, struct iov_iter *iter)
 {
 	int i;
 	struct bio_vec *bvec;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		ssize_t ret;
 
 		ret = copy_page_from_iter(bvec->bv_page,
@@ -1072,8 +1073,9 @@ static int bio_copy_to_iter(struct bio *bio, struct iov_iter iter)
 {
 	int i;
 	struct bio_vec *bvec;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		ssize_t ret;
 
 		ret = copy_page_to_iter(bvec->bv_page,
@@ -1095,8 +1097,9 @@ void bio_free_pages(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i)
+	bio_for_each_segment_all(bvec, bio, i, iter_all)
 		__free_page(bvec->bv_page);
 }
 EXPORT_SYMBOL(bio_free_pages);
@@ -1262,6 +1265,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
 	struct bio *bio;
 	int ret;
 	struct bio_vec *bvec;
+	struct bvec_iter_all iter_all;
 
 	if (!iov_iter_count(iter))
 		return ERR_PTR(-EINVAL);
@@ -1335,7 +1339,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
 	return bio;
 
  out_unmap:
-	bio_for_each_segment_all(bvec, bio, j) {
+	bio_for_each_segment_all(bvec, bio, j, iter_all) {
 		put_page(bvec->bv_page);
 	}
 	bio_put(bio);
@@ -1346,11 +1350,12 @@ static void __bio_unmap_user(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	/*
 	 * make sure we dirty pages we wrote to
 	 */
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		if (bio_data_dir(bio) == READ)
 			set_page_dirty_lock(bvec->bv_page);
 
@@ -1442,8 +1447,9 @@ static void bio_copy_kern_endio_read(struct bio *bio)
 	char *p = bio->bi_private;
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		memcpy(p, page_address(bvec->bv_page), bvec->bv_len);
 		p += bvec->bv_len;
 	}
@@ -1552,8 +1558,9 @@ void bio_set_pages_dirty(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		if (!PageCompound(bvec->bv_page))
 			set_page_dirty_lock(bvec->bv_page);
 	}
@@ -1564,8 +1571,9 @@ static void bio_release_pages(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i)
+	bio_for_each_segment_all(bvec, bio, i, iter_all)
 		put_page(bvec->bv_page);
 }
 
@@ -1612,8 +1620,9 @@ void bio_check_pages_dirty(struct bio *bio)
 	struct bio_vec *bvec;
 	unsigned long flags;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
 			goto defer;
 	}
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 51000914e23f..9ed544751388 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -123,6 +123,7 @@ int blkdev_report_zones(struct block_device *bdev,
 	unsigned int ofst;
 	void *addr;
 	int ret;
+	struct bvec_iter_all iter_all;
 
 	if (!q)
 		return -ENXIO;
@@ -190,7 +191,7 @@ int blkdev_report_zones(struct block_device *bdev,
 	n = 0;
 	nz = 0;
 	nr_rep = 0;
-	bio_for_each_segment_all(bv, bio, i) {
+	bio_for_each_segment_all(bv, bio, i, iter_all) {
 
 		if (!bv->bv_page)
 			break;
@@ -223,7 +224,7 @@ int blkdev_report_zones(struct block_device *bdev,
 
 	*nr_zones = nz;
 out:
-	bio_for_each_segment_all(bv, bio, i)
+	bio_for_each_segment_all(bv, bio, i, iter_all)
 		__free_page(bv->bv_page);
 	bio_put(bio);
 
diff --git a/block/bounce.c b/block/bounce.c
index bc63b3a2d18c..c0dabd25909d 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -146,11 +146,12 @@ static void bounce_end_io(struct bio *bio, mempool_t *pool)
 	struct bio_vec *bvec, orig_vec;
 	int i;
 	struct bvec_iter orig_iter = bio_orig->bi_iter;
+	struct bvec_iter_all iter_all;
 
 	/*
 	 * free up bounce indirect pages used
 	 */
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		orig_vec = bio_iter_iovec(bio_orig, orig_iter);
 		if (bvec->bv_page != orig_vec.bv_page) {
 			dec_zone_page_state(bvec->bv_page, NR_BOUNCE);
@@ -273,6 +274,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 	bool bounce = false;
 	int sectors = 0;
 	bool passthrough = bio_is_passthrough(*bio_orig);
+	struct bvec_iter_all iter_all;
 
 	bio_for_each_segment(from, *bio_orig, iter) {
 		if (i++ < BIO_MAX_PAGES)
@@ -292,7 +294,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 	bio = bounce_clone_bio(*bio_orig, GFP_NOIO, passthrough ? NULL :
 			&bounce_bio_set);
 
-	bio_for_each_segment_all(to, bio, i) {
+	bio_for_each_segment_all(to, bio, i, iter_all) {
 		struct page *page = to->bv_page;
 
 		if (page_to_pfn(page) <= q->limits.bounce_pfn)
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 547c9eedc2f4..defaf03d09bc 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -423,8 +423,9 @@ static void do_btree_node_write(struct btree *b)
 		int j;
 		struct bio_vec *bv;
 		void *base = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1));
+		struct bvec_iter_all iter_all;
 
-		bio_for_each_segment_all(bv, b->bio, j)
+		bio_for_each_segment_all(bv, b->bio, j, iter_all)
 			memcpy(page_address(bv->bv_page),
 			       base + j * PAGE_SIZE, PAGE_SIZE);
 
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index b61b069c33af..14b4c4b3506d 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1450,8 +1450,9 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone)
 {
 	unsigned int i;
 	struct bio_vec *bv;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bv, clone, i) {
+	bio_for_each_segment_all(bv, clone, i, iter_all) {
 		BUG_ON(!bv->bv_page);
 		mempool_free(bv->bv_page, &cc->page_pool);
 	}
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 8e05c1092aef..2101ea1f0e97 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2116,13 +2116,14 @@ static void process_checks(struct r1bio *r1_bio)
 		struct page **spages = get_resync_pages(sbio)->pages;
 		struct bio_vec *bi;
 		int page_len[RESYNC_PAGES] = { 0 };
+		struct bvec_iter_all iter_all;
 
 		if (sbio->bi_end_io != end_sync_read)
 			continue;
 		/* Now we can 'fixup' the error value */
 		sbio->bi_status = 0;
 
-		bio_for_each_segment_all(bi, sbio, j)
+		bio_for_each_segment_all(bi, sbio, j, iter_all)
 			page_len[j] = bi->bv_len;
 
 		if (!status) {
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 0dd87aaeb39a..f10806bfe202 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -197,6 +197,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
 	ssize_t ret;
 	blk_qc_t qc;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	if ((pos | iov_iter_alignment(iter)) &
 	    (bdev_logical_block_size(bdev) - 1))
@@ -244,7 +245,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
 	}
 	__set_current_state(TASK_RUNNING);
 
-	bio_for_each_segment_all(bvec, &bio, i) {
+	bio_for_each_segment_all(bvec, &bio, i, iter_all) {
 		if (should_dirty && !PageCompound(bvec->bv_page))
 			set_page_dirty_lock(bvec->bv_page);
 		put_page(bvec->bv_page);
@@ -311,8 +312,9 @@ static void blkdev_bio_end_io(struct bio *bio)
 	} else {
 		struct bio_vec *bvec;
 		int i;
+		struct bvec_iter_all iter_all;
 
-		bio_for_each_segment_all(bvec, bio, i)
+		bio_for_each_segment_all(bvec, bio, i, iter_all)
 			put_page(bvec->bv_page);
 		bio_put(bio);
 	}
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 22b9e0e56c7e..83ea1efea038 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -166,13 +166,14 @@ static void end_compressed_bio_read(struct bio *bio)
 	} else {
 		int i;
 		struct bio_vec *bvec;
+		struct bvec_iter_all iter_all;
 
 		/*
 		 * we have verified the checksum already, set page
 		 * checked so the end_io handlers know about it
 		 */
 		ASSERT(!bio_flagged(bio, BIO_CLONED));
-		bio_for_each_segment_all(bvec, cb->orig_bio, i)
+		bio_for_each_segment_all(bvec, cb->orig_bio, i, iter_all)
 			SetPageChecked(bvec->bv_page);
 
 		bio_endio(cb->orig_bio);
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 205092dc9390..bee6aec58cd9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -829,9 +829,10 @@ static blk_status_t btree_csum_one_bio(struct bio *bio)
 	struct bio_vec *bvec;
 	struct btrfs_root *root;
 	int i, ret = 0;
+	struct bvec_iter_all iter_all;
 
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		root = BTRFS_I(bvec->bv_page->mapping->host)->root;
 		ret = csum_dirty_buffer(root->fs_info, bvec->bv_page);
 		if (ret)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 9fce9f0793fe..399e059226ec 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2360,10 +2360,11 @@ static unsigned btrfs_bio_pages_all(struct bio *bio)
 {
 	unsigned i;
 	struct bio_vec *bv;
+	struct bvec_iter_all iter_all;
 
 	WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
 
-	bio_for_each_segment_all(bv, bio, i)
+	bio_for_each_segment_all(bv, bio, i, iter_all)
 		;
 	return i;
 }
@@ -2465,9 +2466,10 @@ static void end_bio_extent_writepage(struct bio *bio)
 	u64 start;
 	u64 end;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		struct page *page = bvec->bv_page;
 		struct inode *inode = page->mapping->host;
 		struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
@@ -2536,9 +2538,10 @@ static void end_bio_extent_readpage(struct bio *bio)
 	int mirror;
 	int ret;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		struct page *page = bvec->bv_page;
 		struct inode *inode = page->mapping->host;
 		struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
@@ -3690,9 +3693,10 @@ static void end_bio_extent_buffer_writepage(struct bio *bio)
 	struct bio_vec *bvec;
 	struct extent_buffer *eb;
 	int i, done;
+	struct bvec_iter_all iter_all;
 
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		struct page *page = bvec->bv_page;
 
 		eb = (struct extent_buffer *)page->private;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index e9482f0db9d0..2587794590a6 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7894,6 +7894,7 @@ static void btrfs_retry_endio_nocsum(struct bio *bio)
 	struct bio_vec *bvec;
 	struct extent_io_tree *io_tree, *failure_tree;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	if (bio->bi_status)
 		goto end;
@@ -7905,7 +7906,7 @@ static void btrfs_retry_endio_nocsum(struct bio *bio)
 
 	done->uptodate = 1;
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
-	bio_for_each_segment_all(bvec, bio, i)
+	bio_for_each_segment_all(bvec, bio, i, iter_all)
 		clean_io_failure(BTRFS_I(inode)->root->fs_info, failure_tree,
 				 io_tree, done->start, bvec->bv_page,
 				 btrfs_ino(BTRFS_I(inode)), 0);
@@ -7984,6 +7985,7 @@ static void btrfs_retry_endio(struct bio *bio)
 	int uptodate;
 	int ret;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	if (bio->bi_status)
 		goto end;
@@ -7997,7 +7999,7 @@ static void btrfs_retry_endio(struct bio *bio)
 	failure_tree = &BTRFS_I(inode)->io_failure_tree;
 
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		ret = __readpage_endio_check(inode, io_bio, i, bvec->bv_page,
 					     bvec->bv_offset, done->start,
 					     bvec->bv_len);
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 5e4ad134b9ad..420c0cf353e1 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1463,10 +1463,11 @@ static void set_bio_pages_uptodate(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
 
-	bio_for_each_segment_all(bvec, bio, i)
+	bio_for_each_segment_all(bvec, bio, i, iter_all)
 		SetPageUptodate(bvec->bv_page);
 }
 
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index 0959044c5cee..5759bcd018cd 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -30,8 +30,9 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
 {
 	struct bio_vec *bv;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bv, bio, i) {
+	bio_for_each_segment_all(bv, bio, i, iter_all) {
 		struct page *page = bv->bv_page;
 		int ret = fscrypt_decrypt_page(page->mapping->host, page,
 				PAGE_SIZE, 0, page->index);
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 093fb54cd316..de14d67dbd40 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -551,7 +551,9 @@ static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio)
 	if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty) {
 		bio_check_pages_dirty(bio);	/* transfers ownership */
 	} else {
-		bio_for_each_segment_all(bvec, bio, i) {
+		struct bvec_iter_all iter_all;
+
+		bio_for_each_segment_all(bvec, bio, i, iter_all) {
 			struct page *page = bvec->bv_page;
 
 			if (dio->op == REQ_OP_READ && !PageCompound(page) &&
diff --git a/fs/exofs/ore.c b/fs/exofs/ore.c
index 5331a15a61f1..24a8e34882e9 100644
--- a/fs/exofs/ore.c
+++ b/fs/exofs/ore.c
@@ -420,8 +420,9 @@ static void _clear_bio(struct bio *bio)
 {
 	struct bio_vec *bv;
 	unsigned i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bv, bio, i) {
+	bio_for_each_segment_all(bv, bio, i, iter_all) {
 		unsigned this_count = bv->bv_len;
 
 		if (likely(PAGE_SIZE == this_count))
diff --git a/fs/exofs/ore_raid.c b/fs/exofs/ore_raid.c
index 199590f36203..e83bab54b03e 100644
--- a/fs/exofs/ore_raid.c
+++ b/fs/exofs/ore_raid.c
@@ -468,11 +468,12 @@ static void _mark_read4write_pages_uptodate(struct ore_io_state *ios, int ret)
 	/* loop on all devices all pages */
 	for (d = 0; d < ios->numdevs; d++) {
 		struct bio *bio = ios->per_dev[d].bio;
+		struct bvec_iter_all iter_all;
 
 		if (!bio)
 			continue;
 
-		bio_for_each_segment_all(bv, bio, i) {
+		bio_for_each_segment_all(bv, bio, i, iter_all) {
 			struct page *page = bv->bv_page;
 
 			SetPageUptodate(page);
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index db7590178dfc..0644b4e7d6d4 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -63,8 +63,9 @@ static void ext4_finish_bio(struct bio *bio)
 {
 	int i;
 	struct bio_vec *bvec;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		struct page *page = bvec->bv_page;
 #ifdef CONFIG_EXT4_FS_ENCRYPTION
 		struct page *data_page = NULL;
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index 19b87a8de6ff..047b96e54620 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -72,6 +72,7 @@ static void mpage_end_io(struct bio *bio)
 {
 	struct bio_vec *bv;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	if (ext4_bio_encrypted(bio)) {
 		if (bio->bi_status) {
@@ -81,7 +82,7 @@ static void mpage_end_io(struct bio *bio)
 			return;
 		}
 	}
-	bio_for_each_segment_all(bv, bio, i) {
+	bio_for_each_segment_all(bv, bio, i, iter_all) {
 		struct page *page = bv->bv_page;
 
 		if (!bio->bi_status) {
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 8f931d699287..e6f5c7817496 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -73,8 +73,9 @@ static void __read_end_io(struct bio *bio)
 	struct page *page;
 	struct bio_vec *bv;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bv, bio, i) {
+	bio_for_each_segment_all(bv, bio, i, iter_all) {
 		page = bv->bv_page;
 
 		/* PG_error was set if any post_read step failed */
@@ -149,8 +150,9 @@ static void f2fs_write_end_io(struct bio *bio)
 	struct f2fs_sb_info *sbi = bio->bi_private;
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		struct page *page = bvec->bv_page;
 		enum count_type type = WB_DATA_TYPE(page);
 
@@ -325,6 +327,7 @@ static bool __has_merged_page(struct f2fs_bio_info *io,
 	struct bio_vec *bvec;
 	struct page *target;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	if (!io->bio)
 		return false;
@@ -332,7 +335,7 @@ static bool __has_merged_page(struct f2fs_bio_info *io,
 	if (!inode && !ino)
 		return true;
 
-	bio_for_each_segment_all(bvec, io->bio, i) {
+	bio_for_each_segment_all(bvec, io->bio, i, iter_all) {
 
 		if (bvec->bv_page->mapping)
 			target = bvec->bv_page;
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index 4d6567990baf..302c3bbc5bb7 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -168,7 +168,8 @@ u64 gfs2_log_bmap(struct gfs2_sbd *sdp)
  * that is pinned in the pagecache.
  */
 
-static void gfs2_end_log_write_bh(struct gfs2_sbd *sdp, struct bio_vec *bvec,
+static void gfs2_end_log_write_bh(struct gfs2_sbd *sdp,
+				  struct bio_vec *bvec,
 				  blk_status_t error)
 {
 	struct buffer_head *bh, *next;
@@ -207,6 +208,7 @@ static void gfs2_end_log_write(struct bio *bio)
 	struct bio_vec *bvec;
 	struct page *page;
 	int i;
+	struct bvec_iter_all iter_all;
 
 	if (bio->bi_status) {
 		fs_err(sdp, "Error %d writing to journal, jid=%u\n",
@@ -214,7 +216,7 @@ static void gfs2_end_log_write(struct bio *bio)
 		wake_up(&sdp->sd_logd_waitq);
 	}
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		page = bvec->bv_page;
 		if (page_has_buffers(page))
 			gfs2_end_log_write_bh(sdp, bvec, bio->bi_status);
diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index 52de1036d9f9..495ed2cb8361 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -190,8 +190,9 @@ static void gfs2_meta_read_endio(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, i) {
+	bio_for_each_segment_all(bvec, bio, i, iter_all) {
 		struct page *page = bvec->bv_page;
 		struct buffer_head *bh = page_buffers(page);
 		unsigned int len = bvec->bv_len;
diff --git a/fs/iomap.c b/fs/iomap.c
index 77397b5a96ef..933f0c551aa6 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -934,8 +934,9 @@ static void iomap_dio_bio_end_io(struct bio *bio)
 	} else {
 		struct bio_vec *bvec;
 		int i;
+		struct bvec_iter_all iter_all;
 
-		bio_for_each_segment_all(bvec, bio, i)
+		bio_for_each_segment_all(bvec, bio, i, iter_all)
 			put_page(bvec->bv_page);
 		bio_put(bio);
 	}
diff --git a/fs/mpage.c b/fs/mpage.c
index b7e7f570733a..09adead23a7e 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -48,8 +48,9 @@ static void mpage_end_io(struct bio *bio)
 {
 	struct bio_vec *bv;
 	int i;
+	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bv, bio, i) {
+	bio_for_each_segment_all(bv, bio, i, iter_all) {
 		struct page *page = bv->bv_page;
 		page_endio(page, op_is_write(bio_op(bio)),
 				blk_status_to_errno(bio->bi_status));
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 8eb3ba3d4d00..6ff39017dfd7 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -95,7 +95,7 @@ xfs_find_daxdev_for_inode(
 static void
 xfs_finish_page_writeback(
 	struct inode		*inode,
-	struct bio_vec		*bvec,
+	struct bio_vec	*bvec,
 	int			error)
 {
 	struct buffer_head	*head = page_buffers(bvec->bv_page), *bh = head;
@@ -157,6 +157,7 @@ xfs_destroy_ioend(
 	for (bio = &ioend->io_inline_bio; bio; bio = next) {
 		struct bio_vec	*bvec;
 		int		i;
+		struct bvec_iter_all iter_all;
 
 		/*
 		 * For the last bio, bi_private points to the ioend, so we
@@ -168,7 +169,7 @@ xfs_destroy_ioend(
 			next = bio->bi_private;
 
 		/* walk each page on bio, ending page IO on them */
-		bio_for_each_segment_all(bvec, bio, i)
+		bio_for_each_segment_all(bvec, bio, i, iter_all)
 			xfs_finish_page_writeback(inode, bvec, error);
 
 		bio_put(bio);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 083c1ee9c6c8..b44f9a40bb8b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -161,12 +161,19 @@ static inline bool bio_full(struct bio *bio)
 #define BIOVEC_SEG_BOUNDARY(q, b1, b2) \
 	__BIO_SEG_BOUNDARY(bvec_to_phys((b1)), bvec_to_phys((b2)) + (b2)->bv_len, queue_segment_boundary((q)))
 
+#define bvec_for_each_segment(bv, bvl, i, iter_all)			\
+	for (bv = bvec_init_iter_all(&iter_all);			\
+		(iter_all.done < (bvl)->bv_len) &&			\
+		((bvec_next_segment((bvl), &iter_all)), 1);		\
+		iter_all.done += bv->bv_len, i += 1)
+
 /*
  * drivers should _never_ use the all version - the bio may have been split
  * before it got to the driver and the driver won't own all of it
  */
-#define bio_for_each_segment_all(bvl, bio, i)				\
-	for (i = 0, bvl = (bio)->bi_io_vec; i < (bio)->bi_vcnt; i++, bvl++)
+#define bio_for_each_segment_all(bvl, bio, i, iter_all)		\
+	for (i = 0, iter_all.idx = 0; iter_all.idx < (bio)->bi_vcnt; iter_all.idx++)	\
+		bvec_for_each_segment(bvl, &((bio)->bi_io_vec[iter_all.idx]), i, iter_all)
 
 static inline void __bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
 				      unsigned bytes, bool mp)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 2269c7608a3e..af00c819e37e 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -84,6 +84,12 @@ struct bvec_iter {
 						   current bvec */
 };
 
+struct bvec_iter_all {
+	struct bio_vec	bv;
+	int		idx;
+	unsigned	done;
+};
+
 /*
  * various member access, note that bio_data should of course not be used
  * on highmem page vectors
@@ -219,6 +225,31 @@ static inline bool mp_bvec_iter_advance(const struct bio_vec *bv,
 	.bi_bvec_done	= 0,						\
 }
 
+static inline struct bio_vec *bvec_init_iter_all(struct bvec_iter_all *iter_all)
+{
+	iter_all->bv.bv_page = NULL;
+	iter_all->done = 0;
+
+	return &iter_all->bv;
+}
+
+/* used by bvec_for_each_segment() */
+static inline void bvec_next_segment(const struct bio_vec *bvec,
+		struct bvec_iter_all *iter_all)
+{
+	struct bio_vec *bv = &iter_all->bv;
+
+	if (bv->bv_page) {
+		bv->bv_page += 1;
+		bv->bv_offset = 0;
+	} else {
+		bv->bv_page = bvec->bv_page;
+		bv->bv_offset = bvec->bv_offset;
+	}
+	bv->bv_len = min_t(unsigned int, PAGE_SIZE - bv->bv_offset,
+			bvec->bv_len - iter_all->done);
+}
+
 /*
  * Get the last singlepage segment from the multipage bvec and store it
  * in @seg
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 22/24] block: enable multipage bvecs
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (20 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 21/24] block: allow bio_for_each_segment_all() to iterate over multipage bvec Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 23/24] block: always define BIO_MAX_PAGES as 256 Ming Lei
  2018-06-27 12:45 ` [PATCH V7 24/24] block: document usage of bio iterator helpers Ming Lei
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

This patch pulls the trigger for multipage bvecs.

Now any request queue that supports clustering will see multipage
bvecs.
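
Put differently, __bio_try_merge_page() now merges in two cases; a sketch
of the conditions (mirroring the code below):

	/* 1) same page, contiguous offset: the existing singlepage merge */
	page == bv->bv_page && off == bv->bv_offset + bv->bv_len

	/*
	 * 2) physically contiguous with the previous bvec's end, on a
	 * queue that supports clustering: this is what grows a bvec
	 * beyond one page
	 */
	blk_queue_cluster(q) &&
		bvec_to_phys(bv) + bv->bv_len == page_to_phys(page) + off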

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 22c6c83a7c8b..1dc3361fd5f9 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -765,12 +765,23 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
 
 	if (bio->bi_vcnt > 0) {
 		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
-
-		if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) {
-			bv->bv_len += len;
-			bio->bi_iter.bi_size += len;
-			return true;
-		}
+		struct request_queue *q = NULL;
+
+		if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len)
+			goto merge;
+
+		if (bio->bi_disk)
+			q = bio->bi_disk->queue;
+
+		/* disable multipage bvec too if cluster isn't enabled */
+		if (!q || !blk_queue_cluster(q) ||
+		    (bvec_to_phys(bv) + bv->bv_len !=
+		     page_to_phys(page) + off))
+			return false;
+ merge:
+		bv->bv_len += len;
+		bio->bi_iter.bi_size += len;
+		return true;
 	}
 	return false;
 }
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 23/24] block: always define BIO_MAX_PAGES as 256
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (21 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 22/24] block: enable multipage bvecs Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 12:45 ` [PATCH V7 24/24] block: document usage of bio iterator helpers Ming Lei
  23 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

Now that multipage bvecs cover the CONFIG_THP_SWAP case, BIO_MAX_PAGES
no longer needs to be increased for it.
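
The arithmetic behind this (assuming 4 KiB pages and 2 MiB huge pages): a
transparent huge page is HPAGE_PMD_NR == 512 physically contiguous pages,
which a multipage bvec can describe in a single entry, so a 256-entry
bvec table is ample and the HPAGE_PMD_NR special case can go.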

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bio.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index b44f9a40bb8b..3412947e42f2 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -38,15 +38,7 @@
 #define BIO_BUG_ON
 #endif
 
-#ifdef CONFIG_THP_SWAP
-#if HPAGE_PMD_NR > 256
-#define BIO_MAX_PAGES		HPAGE_PMD_NR
-#else
 #define BIO_MAX_PAGES		256
-#endif
-#else
-#define BIO_MAX_PAGES		256
-#endif
 
 #define bio_prio(bio)			(bio)->bi_ioprio
 #define bio_set_prio(bio, prio)		((bio)->bi_ioprio = prio)
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH V7 24/24] block: document usage of bio iterator helpers
  2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
                   ` (22 preceding siblings ...)
  2018-06-27 12:45 ` [PATCH V7 23/24] block: always define BIO_MAX_PAGES as 256 Ming Lei
@ 2018-06-27 12:45 ` Ming Lei
  2018-06-27 18:13   ` Randy Dunlap
  23 siblings, 1 reply; 32+ messages in thread
From: Ming Lei @ 2018-06-27 12:45 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

Now that multipage bvec is supported, some helpers return data page by
page while others return it segment by segment; this patch documents
their usage to help us use them correctly.
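
A minimal sketch of the distinction being documented (the loop bodies are
placeholders):

	struct bio_vec bv;
	struct bvec_iter iter;

	bio_for_each_segment(bv, bio, iter) {
		/* bv covers at most one page per iteration */
	}

	bio_for_each_bvec(bv, bio, iter) {
		/* bv may cover multiple physically contiguous pages */
	}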

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 Documentation/block/biovecs.txt | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/Documentation/block/biovecs.txt b/Documentation/block/biovecs.txt
index 25689584e6e0..f63af564ae89 100644
--- a/Documentation/block/biovecs.txt
+++ b/Documentation/block/biovecs.txt
@@ -117,3 +117,30 @@ Other implications:
    size limitations and the limitations of the underlying devices. Thus
    there's no need to define ->merge_bvec_fn() callbacks for individual block
    drivers.
+
+Usage of helpers:
+=================
+
+* The following helpers, whose names have the suffix "_all", can only be
+used on a non-BIO_CLONED bio; they are usually used by filesystem code,
+and drivers shouldn't use them because the bio may have been split before
+it got to the driver:
+
+	bio_for_each_segment_all()
+	bio_first_bvec_all()
+	bio_first_page_all()
+	bio_last_bvec_all()
+
+* The following helpers iterate over singlepage bvecs; the local
+'struct bio_vec' variable (or the reference) holds a single-page io
+vector at each step of the iteration:
+
+	bio_for_each_segment()
+	bio_for_each_segment_all()
+
+* The following helper iterates over multipage bvecs; each bvec may
+include multiple physically contiguous pages, and the local
+'struct bio_vec' variable (or the reference) holds a multipage io
+vector at each step of the iteration:
+
+	bio_for_each_bvec()
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH V7 01/24] dm: use bio_split() when splitting out the already processed bio
  2018-06-27 12:45 ` [PATCH V7 01/24] dm: use bio_split() when splitting out the already processed bio Ming Lei
@ 2018-06-27 13:17   ` Mike Snitzer
  0 siblings, 0 replies; 32+ messages in thread
From: Mike Snitzer @ 2018-06-27 13:17 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Kent Overstreet, David Sterba,
	Huang Ying, linux-kernel, linux-block, linux-fsdevel, linux-mm,
	Theodore Ts'o, Darrick J . Wong, Coly Li, Filipe Manana,
	Randy Dunlap, stable

On Wed, Jun 27 2018 at  8:45am -0400,
Ming Lei <ming.lei@redhat.com> wrote:

> From: Mike Snitzer <snitzer@redhat.com>
> 
> Use of bio_clone_bioset() is inefficient if there is no need to clone
> the original bio's bio_vec array.  Best to use the bio_clone_fast()
> variant.  Also, just using bio_advance() is only part of what is needed
> to properly setup the clone -- it doesn't account for the various
> bio_integrity() related work that also needs to be performed (see
> bio_split).
> 
> Address both of these issues by switching from bio_clone_bioset() to
> bio_split().
> 
> Fixes: 18a25da8 ("dm: ensure bio submission follows a depth-first tree walk")
> Cc: stable@vger.kernel.org
> Reported-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: NeilBrown <neilb@suse.com>
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Mike Snitzer <snitzer@redhat.com>

FYI, I'll be sending this to Linus tomorrow.

Mike

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages()
  2018-06-27 12:45 ` [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages() Ming Lei
@ 2018-06-27 15:55   ` Coly Li
  2018-06-28  1:28     ` Ming Lei
  0 siblings, 1 reply; 32+ messages in thread
From: Coly Li @ 2018-06-27 15:55 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Kent Overstreet, David Sterba,
	Huang Ying, Mike Snitzer, linux-kernel, linux-block,
	linux-fsdevel, linux-mm, Theodore Ts'o, Darrick J . Wong,
	Filipe Manana, Randy Dunlap, linux-bcache

On 2018/6/27 8:45 PM, Ming Lei wrote:
> bch_bio_alloc_pages() is always called on a freshly allocated bio, so it
> is safe to access the bvec table directly. Since this is the only case
> of its kind, open code the bvec table access, as bio_for_each_segment_all()
> will be changed to support iterating over multipage bvecs.
> 
> Cc: Coly Li <colyli@suse.de>
> Cc: linux-bcache@vger.kernel.org
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  drivers/md/bcache/util.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
> index fc479b026d6d..9f2a6fd5dfc9 100644
> --- a/drivers/md/bcache/util.c
> +++ b/drivers/md/bcache/util.c
> @@ -268,7 +268,7 @@ int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
>  	int i;
>  	struct bio_vec *bv;
> 

Hi Ming,

> -	bio_for_each_segment_all(bv, bio, i) {
> +	for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++, i++) {


Is it possible to treat this as a special condition of
bio_for_each_segement_all() ? I mean only iterate one time in
bvec_for_each_segment(). I hope the above change is not our last choice
before I reply an Acked-by :-)

Thanks.

Coly Li

>  		bv->bv_page = alloc_page(gfp_mask);
>  		if (!bv->bv_page) {
>  			while (--bv >= bio->bi_io_vec)
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH V7 10/24] block: introduce multipage page bvec helpers
  2018-06-27 12:45 ` [PATCH V7 10/24] block: introduce multipage page bvec helpers Ming Lei
@ 2018-06-27 15:59   ` kbuild test robot
  2018-11-09 11:15     ` Ming Lei
  0 siblings, 1 reply; 32+ messages in thread
From: kbuild test robot @ 2018-06-27 15:59 UTC (permalink / raw)
  To: Ming Lei
  Cc: kbuild-all, Jens Axboe, Christoph Hellwig, Kent Overstreet,
	David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap, Ming Lei

Hi Ming,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v4.18-rc2]
[cannot apply to next-20180627]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Ming-Lei/block-support-multipage-bvec/20180627-214022
reproduce:
        # apt-get install sparse
        make ARCH=x86_64 allmodconfig
        make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

    net/ceph/messenger.c:842:25: sparse: expression using sizeof(void) [2 times]
    net/ceph/messenger.c:847:9: sparse: expression using sizeof(void) [14 times]
    net/ceph/messenger.c:848:47: sparse: expression using sizeof(void) [14 times]
    net/ceph/messenger.c:855:29: sparse: expression using sizeof(void) [14 times]
    net/ceph/messenger.c:869:9: sparse: expression using sizeof(void) [14 times]
    include/linux/bvec.h:139:37: sparse: expression using sizeof(void) [14 times]
    include/linux/bvec.h:140:32: sparse: expression using sizeof(void) [2 times]
    net/ceph/messenger.c:889:9: sparse: expression using sizeof(void) [14 times]
    net/ceph/messenger.c:890:47: sparse: expression using sizeof(void) [11 times]
>> net/ceph/messenger.c:890:47: sparse: too many warnings

vim +890 net/ceph/messenger.c

6aaa4511 Alex Elder      2013-03-06  862  
8ae4f4f5 Alex Elder      2013-03-14  863  static bool ceph_msg_data_bio_advance(struct ceph_msg_data_cursor *cursor,
8ae4f4f5 Alex Elder      2013-03-14  864  					size_t bytes)
6aaa4511 Alex Elder      2013-03-06  865  {
5359a17d Ilya Dryomov    2018-01-20  866  	struct ceph_bio_iter *it = &cursor->bio_iter;
6aaa4511 Alex Elder      2013-03-06  867  
5359a17d Ilya Dryomov    2018-01-20  868  	BUG_ON(bytes > cursor->resid);
5359a17d Ilya Dryomov    2018-01-20  869  	BUG_ON(bytes > bio_iter_len(it->bio, it->iter));
25aff7c5 Alex Elder      2013-03-11  870  	cursor->resid -= bytes;
5359a17d Ilya Dryomov    2018-01-20  871  	bio_advance_iter(it->bio, &it->iter, bytes);
f38a5181 Kent Overstreet 2013-08-07  872  
5359a17d Ilya Dryomov    2018-01-20  873  	if (!cursor->resid) {
5359a17d Ilya Dryomov    2018-01-20  874  		BUG_ON(!cursor->last_piece);
5359a17d Ilya Dryomov    2018-01-20  875  		return false;   /* no more data */
5359a17d Ilya Dryomov    2018-01-20  876  	}
f38a5181 Kent Overstreet 2013-08-07  877  
5359a17d Ilya Dryomov    2018-01-20  878  	if (!bytes || (it->iter.bi_size && it->iter.bi_bvec_done))
6aaa4511 Alex Elder      2013-03-06  879  		return false;	/* more bytes to process in this segment */
6aaa4511 Alex Elder      2013-03-06  880  
5359a17d Ilya Dryomov    2018-01-20  881  	if (!it->iter.bi_size) {
5359a17d Ilya Dryomov    2018-01-20  882  		it->bio = it->bio->bi_next;
5359a17d Ilya Dryomov    2018-01-20  883  		it->iter = it->bio->bi_iter;
5359a17d Ilya Dryomov    2018-01-20  884  		if (cursor->resid < it->iter.bi_size)
5359a17d Ilya Dryomov    2018-01-20  885  			it->iter.bi_size = cursor->resid;
25aff7c5 Alex Elder      2013-03-11  886  	}
6aaa4511 Alex Elder      2013-03-06  887  
5359a17d Ilya Dryomov    2018-01-20  888  	BUG_ON(cursor->last_piece);
5359a17d Ilya Dryomov    2018-01-20  889  	BUG_ON(cursor->resid < bio_iter_len(it->bio, it->iter));
5359a17d Ilya Dryomov    2018-01-20 @890  	cursor->last_piece = cursor->resid == bio_iter_len(it->bio, it->iter);
6aaa4511 Alex Elder      2013-03-06  891  	return true;
6aaa4511 Alex Elder      2013-03-06  892  }
ea96571f Alex Elder      2013-04-05  893  #endif /* CONFIG_BLOCK */
df6ad1f9 Alex Elder      2012-06-11  894  

:::::: The code at line 890 was first introduced by commit
:::::: 5359a17d2706b86da2af83027343d5eb256f7670 libceph, rbd: new bio handling code (aka don't clone bios)

:::::: TO: Ilya Dryomov <idryomov@gmail.com>
:::::: CC: Ilya Dryomov <idryomov@gmail.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH V7 24/24] block: document usage of bio iterator helpers
  2018-06-27 12:45 ` [PATCH V7 24/24] block: document usage of bio iterator helpers Ming Lei
@ 2018-06-27 18:13   ` Randy Dunlap
  0 siblings, 0 replies; 32+ messages in thread
From: Randy Dunlap @ 2018-06-27 18:13 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, Christoph Hellwig, Kent Overstreet
  Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana

On 06/27/2018 05:45 AM, Ming Lei wrote:
> Now that multipage bvec is supported, some helpers may return data page
> by page, while others may return it segment by segment. This patch
> documents the usage to help us use these helpers correctly.
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  Documentation/block/biovecs.txt | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/Documentation/block/biovecs.txt b/Documentation/block/biovecs.txt
> index 25689584e6e0..f63af564ae89 100644
> --- a/Documentation/block/biovecs.txt
> +++ b/Documentation/block/biovecs.txt
> @@ -117,3 +117,30 @@ Other implications:
>     size limitations and the limitations of the underlying devices. Thus
>     there's no need to define ->merge_bvec_fn() callbacks for individual block
>     drivers.
> +
> +Usage of helpers:
> +=================
> +
> +* The following helpers which name has suffix of "_all" can only be used on

                   helpers whose names have the suffix of "_all" can only be used on

> +non-BIO_CLONED bio, and ususally they are used by filesystem code, and driver

                           usually

> +shouldn't use them becasue bio may have been splitted before they got to the

                      because                   split

> +driver:
> +
> +	bio_for_each_segment_all()
> +	bio_first_bvec_all()
> +	bio_first_page_all()
> +	bio_last_bvec_all()
> +
> +* The following helpers iterate over singlepage bvec, and the local

                   preferably:          single-page

> +variable of 'struct bio_vec' or the reference records single page io

                                                                     IO or I/O

> +vector during the itearation:
> +
> +	bio_for_each_segment()
> +	bio_for_each_segment_all()
> +
> +* The following helper iterates over multipage bvec, and each bvec may

                          preferably:   multi-page

> +include multiple physically contiguous pages, and the local variable of
> +'struct bio_vec' or the reference records multi page io vector during the

                                             multi-page IO or I/O

> +itearation:
> +
> +	bio_for_each_bvec()
> 
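
To make the page/segment distinction concrete, here is a minimal sketch of
the two iteration granularities (an illustration, not part of the patch; the
function name and counters are hypothetical):

	#include <linux/bio.h>

	static void count_io_units(struct bio *bio)
	{
		struct bio_vec bv;
		struct bvec_iter iter;
		unsigned int pages = 0, segs = 0;

		/* Single-page view: each bv covers at most one page. */
		bio_for_each_segment(bv, bio, iter)
			pages++;

		/*
		 * Multipage view: each bv covers one physically contiguous
		 * segment, possibly spanning several pages, so bv.bv_len
		 * may exceed PAGE_SIZE.
		 */
		bio_for_each_bvec(bv, bio, iter)
			segs++;

		/* With multipage bvecs enabled, segs <= pages always holds. */
	}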


-- 
~Randy

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages()
  2018-06-27 15:55   ` Coly Li
@ 2018-06-28  1:28     ` Ming Lei
  2018-06-28  2:01       ` Coly Li
  0 siblings, 1 reply; 32+ messages in thread
From: Ming Lei @ 2018-06-28  1:28 UTC (permalink / raw)
  To: Coly Li
  Cc: Jens Axboe, Christoph Hellwig, Kent Overstreet, David Sterba,
	Huang Ying, Mike Snitzer, linux-kernel, linux-block,
	linux-fsdevel, linux-mm, Theodore Ts'o, Darrick J . Wong,
	Filipe Manana, Randy Dunlap, linux-bcache

On Wed, Jun 27, 2018 at 11:55:33PM +0800, Coly Li wrote:
> On 2018/6/27 8:45 PM, Ming Lei wrote:
> > bch_bio_alloc_pages() is always called on a new bio, so it is safe
> > to access the bvec table directly. Given it is the only case of this
> > kind, open-code the bvec table access, since bio_for_each_segment_all()
> > will be changed to support iterating over multipage bvecs.
> > 
> > Cc: Coly Li <colyli@suse.de>
> > Cc: linux-bcache@vger.kernel.org
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  drivers/md/bcache/util.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
> > index fc479b026d6d..9f2a6fd5dfc9 100644
> > --- a/drivers/md/bcache/util.c
> > +++ b/drivers/md/bcache/util.c
> > @@ -268,7 +268,7 @@ int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
> >  	int i;
> >  	struct bio_vec *bv;
> > 
> 
> Hi Ming,
> 
> > -	bio_for_each_segment_all(bv, bio, i) {
> > +	for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++, i++) {
> 
> 
> Is it possible to treat this as a special condition of
> bio_for_each_segment_all()? I mean, only iterate one time in
> bvec_for_each_segment(). I hope the above change is not our last choice
> before I reply with an Acked-by :-)

Now the bvec from bio_for_each_segment_all() can't be changed any more,
since the referenced 'bvec' is generated on the fly now that we store
real multipage bvecs.

BTW, this approach was actually suggested by Christoph to save adding a new
bio_for_each_bvec_all() helper as was done in V6, and per previous discussion,
it seems both Kent and Christoph agree to eventually convert bcache to
bio_add_page().

So I guess this open-coded style should be fine.
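
As a sketch of the open-coded pattern (modelled on bch_bio_alloc_pages();
the function name and error handling are illustrative, not the exact driver
code):

	static int fill_bio_pages(struct bio *bio, gfp_t gfp_mask)
	{
		struct bio_vec *bv;
		int i;

		/*
		 * The bio is freshly allocated and never cloned, so it is
		 * safe to write bv_page directly into the bvec table.  The
		 * bvec seen by bio_for_each_segment_all() is now a generated
		 * single-page copy, so stores through it would be lost.
		 */
		for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++, i++) {
			bv->bv_page = alloc_page(gfp_mask);
			if (!bv->bv_page) {
				while (--bv >= bio->bi_io_vec)
					__free_page(bv->bv_page);
				return -ENOMEM;
			}
		}
		return 0;
	}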

Thanks,
Ming

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages()
  2018-06-28  1:28     ` Ming Lei
@ 2018-06-28  2:01       ` Coly Li
  0 siblings, 0 replies; 32+ messages in thread
From: Coly Li @ 2018-06-28  2:01 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Kent Overstreet, David Sterba,
	Huang Ying, Mike Snitzer, linux-kernel, linux-block,
	linux-fsdevel, linux-mm, Theodore Ts'o, Darrick J . Wong,
	Filipe Manana, Randy Dunlap, linux-bcache

On 2018/6/28 9:28 AM, Ming Lei wrote:
> On Wed, Jun 27, 2018 at 11:55:33PM +0800, Coly Li wrote:
>> On 2018/6/27 8:45 PM, Ming Lei wrote:
>>> bch_bio_alloc_pages() is always called on a new bio, so it is safe
>>> to access the bvec table directly. Given it is the only case of this
>>> kind, open-code the bvec table access, since bio_for_each_segment_all()
>>> will be changed to support iterating over multipage bvecs.
>>>
>>> Cc: Coly Li <colyli@suse.de>
>>> Cc: linux-bcache@vger.kernel.org
>>> Signed-off-by: Ming Lei <ming.lei@redhat.com>
>>> ---
>>>  drivers/md/bcache/util.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
>>> index fc479b026d6d..9f2a6fd5dfc9 100644
>>> --- a/drivers/md/bcache/util.c
>>> +++ b/drivers/md/bcache/util.c
>>> @@ -268,7 +268,7 @@ int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
>>>  	int i;
>>>  	struct bio_vec *bv;
>>>
>>
>> Hi Ming,
>>
>>> -	bio_for_each_segment_all(bv, bio, i) {
>>> +	for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++, i++) {
>>
>>
>> Is it possible to treat this as a special condition of
>> bio_for_each_segment_all()? I mean, only iterate one time in
>> bvec_for_each_segment(). I hope the above change is not our last choice
>> before I reply with an Acked-by :-)
> 
> Now the bvec from bio_for_each_segment_all() can't be changed any more,
> since the referenced 'bvec' is generated on the fly now that we store
> real multipage bvecs.
> 
> BTW, this approach was actually suggested by Christoph to save adding a new
> bio_for_each_bvec_all() helper as was done in V6, and per previous discussion,
> it seems both Kent and Christoph agree to eventually convert bcache to
> bio_add_page().
> 
> So I guess this open-coded style should be fine.

Hi Ming,

I see, thanks for the hint.

Acked-by: Coly Li <colyli@suse.de>

Coly Li

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH V7 10/24] block: introduce multipage page bvec helpers
  2018-06-27 15:59   ` kbuild test robot
@ 2018-11-09 11:15     ` Ming Lei
  0 siblings, 0 replies; 32+ messages in thread
From: Ming Lei @ 2018-11-09 11:15 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, Jens Axboe, Christoph Hellwig, Kent Overstreet,
	David Sterba, Huang Ying, Mike Snitzer, linux-kernel,
	linux-block, linux-fsdevel, linux-mm, Theodore Ts'o,
	Darrick J . Wong, Coly Li, Filipe Manana, Randy Dunlap

On Wed, Jun 27, 2018 at 11:59:37PM +0800, kbuild test robot wrote:
> Hi Ming,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on linus/master]
> [also build test WARNING on v4.18-rc2]
> [cannot apply to next-20180627]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Ming-Lei/block-support-multipage-bvec/20180627-214022
> reproduce:
>         # apt-get install sparse
>         make ARCH=x86_64 allmodconfig
>         make C=1 CF=-D__CHECK_ENDIAN__
> 
> 
> sparse warnings: (new ones prefixed by >>)
> 
>    net/ceph/messenger.c:842:25: sparse: expression using sizeof(void)
>    net/ceph/messenger.c:847:9: sparse: expression using sizeof(void)
>    net/ceph/messenger.c:848:47: sparse: expression using sizeof(void)
>    net/ceph/messenger.c:855:29: sparse: expression using sizeof(void)
>    net/ceph/messenger.c:869:9: sparse: expression using sizeof(void)
>    include/linux/bvec.h:139:37: sparse: expression using sizeof(void)
>    include/linux/bvec.h:140:32: sparse: expression using sizeof(void)
>    net/ceph/messenger.c:889:9: sparse: expression using sizeof(void)
>    net/ceph/messenger.c:890:47: sparse: expression using sizeof(void)
> >> net/ceph/messenger.c:890:47: sparse: too many warnings
> 
> vim +890 net/ceph/messenger.c

Actually, this sparse warning on bio_iter_len() can be triggered without this
patch too. This patch changes code in bvec.h, which merely moves the line the
warning is reported at.
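
For context, the "expression using sizeof(void)" text is a known sparse
false positive: the kernel's min()/max() macros use the __is_constexpr()
trick, which dereferences a void pointer inside sizeof(), and sparse
evaluates that literally. A sketch of the mechanism (assuming the
include/linux/kernel.h definition of that era):

	/*
	 * If x is an integer constant expression, (void *)((long)(x) * 0l)
	 * is a null pointer constant, so the ternary's result type is
	 * int * and sizeof(*...) is sizeof(int); otherwise the type stays
	 * void *, and sparse warns "expression using sizeof(void)".
	 */
	#define __is_constexpr(x) \
		(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))

Any min()/max() user, such as the bvec_iter_len() helper in bvec.h, pulls
this in, which is why the patch only shifts where the warning points.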


thanks,
Ming

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2018-11-09 11:15 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-27 12:45 [PATCH V7 00/24] block: support multipage bvec Ming Lei
2018-06-27 12:45 ` [PATCH V7 01/24] dm: use bio_split() when splitting out the already processed bio Ming Lei
2018-06-27 13:17   ` Mike Snitzer
2018-06-27 12:45 ` [PATCH V7 02/24] bcache: don't clone bio in bch_data_verify Ming Lei
2018-06-27 12:45 ` [PATCH V7 03/24] exofs: use bio_clone_fast in _write_mirror Ming Lei
2018-06-27 12:45 ` [PATCH V7 04/24] block: remove bio_clone_kmalloc Ming Lei
2018-06-27 12:45 ` [PATCH V7 05/24] md: remove a bogus comment Ming Lei
2018-06-27 12:45 ` [PATCH V7 06/24] block: unexport bio_clone_bioset Ming Lei
2018-06-27 12:45 ` [PATCH V7 07/24] block: simplify bio_check_pages_dirty Ming Lei
2018-06-27 12:45 ` [PATCH V7 08/24] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec Ming Lei
2018-06-27 12:45 ` [PATCH V7 09/24] block: use bio_add_page in bio_iov_iter_get_pages Ming Lei
2018-06-27 12:45 ` [PATCH V7 10/24] block: introduce multipage page bvec helpers Ming Lei
2018-06-27 15:59   ` kbuild test robot
2018-11-09 11:15     ` Ming Lei
2018-06-27 12:45 ` [PATCH V7 11/24] block: introduce bio_for_each_bvec() Ming Lei
2018-06-27 12:45 ` [PATCH V7 12/24] block: use bio_for_each_bvec() to compute multipage bvec count Ming Lei
2018-06-27 12:45 ` [PATCH V7 13/24] block: use bio_for_each_bvec() to map sg Ming Lei
2018-06-27 12:45 ` [PATCH V7 14/24] block: introduce bvec_last_segment() Ming Lei
2018-06-27 12:45 ` [PATCH V7 15/24] fs/buffer.c: use bvec iterator to truncate the bio Ming Lei
2018-06-27 12:45 ` [PATCH V7 16/24] btrfs: use bvec_last_segment to get bio's last page Ming Lei
2018-06-27 12:45 ` [PATCH V7 17/24] btrfs: move bio_pages_all() to btrfs Ming Lei
2018-06-27 12:45 ` [PATCH V7 18/24] block: introduce bio_bvecs() Ming Lei
2018-06-27 12:45 ` [PATCH V7 19/24] block: loop: pass multipage bvec to iov_iter Ming Lei
2018-06-27 12:45 ` [PATCH V7 20/24] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages() Ming Lei
2018-06-27 15:55   ` Coly Li
2018-06-28  1:28     ` Ming Lei
2018-06-28  2:01       ` Coly Li
2018-06-27 12:45 ` [PATCH V7 21/24] block: allow bio_for_each_segment_all() to iterate over multipage bvec Ming Lei
2018-06-27 12:45 ` [PATCH V7 22/24] block: enable multipage bvecs Ming Lei
2018-06-27 12:45 ` [PATCH V7 23/24] block: always define BIO_MAX_PAGES as 256 Ming Lei
2018-06-27 12:45 ` [PATCH V7 24/24] block: document usage of bio iterator helpers Ming Lei
2018-06-27 18:13   ` Randy Dunlap

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).