* [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec)
@ 2017-12-08 13:13 Ming Lei
  2017-12-08 13:14 ` [PATCH 01/10] block: introduce bio helpers for converting to multipage bvec Ming Lei
                   ` (11 more replies)
  0 siblings, 12 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:13 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei

Hi,

This patchset cleans up most of the direct accesses to the bvec table in
the tree; these patches are the follow-up of patches 1 ~ 16 in the patchset
'block: support multipage bvec (V3)' [1].

Changes against [1]:
1) split the cleanup patches from [1]
2) address comments from Christoph:
	- introduce bio helpers for dealing with the cleanup
	- move bio_alloc_pages() to bcache


[1] https://marc.info/?t=150218197600001&r=1&w=2
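
As a rough illustration of the conversions below (a made-up snippet, not
taken from any specific patch), code that pokes into the bvec table
directly:

	struct page *page = bio->bi_io_vec[0].bv_page;

now goes through the helpers added in patch 1:

	struct page *page = bio_first_page(bio);

so the table layout is hidden behind one interface that can be adapted
when multipage bvec is enabled, and the helper warns if the bio is a
fast clone that doesn't own the table.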

Thanks,
Ming

Ming Lei (10):
  block: introduce bio helpers for converting to multipage bvec
  block: convert to bio_first_bvec() & bio_first_page()
  fs: convert to bio_last_bvec()
  block: bounce: avoid direct access to bvec table
  block: bounce: don't access bio->bi_io_vec in copy_to_high_bio_irq
  dm: limit the max bio size as BIO_MAX_PAGES * PAGE_SIZE
  bcache: comment on direct access to bvec table
  block: move bio_alloc_pages() to bcache
  btrfs: avoid access to .bi_vcnt directly
  btrfs: avoid to access bvec table directly for a cloned bio

 block/bio.c                      | 28 ----------------------------
 block/bounce.c                   | 33 +++++++++++++++++++--------------
 drivers/block/drbd/drbd_bitmap.c |  2 +-
 drivers/block/zram/zram_drv.c    |  2 +-
 drivers/md/bcache/btree.c        |  1 +
 drivers/md/bcache/super.c        |  8 ++++----
 drivers/md/bcache/util.c         | 34 ++++++++++++++++++++++++++++++++++
 drivers/md/bcache/util.h         |  1 +
 drivers/md/dm.c                  | 10 +++++++++-
 fs/btrfs/compression.c           |  4 ++--
 fs/btrfs/extent_io.c             | 11 ++++++-----
 fs/btrfs/extent_io.h             |  2 +-
 fs/btrfs/inode.c                 |  8 +++++---
 fs/buffer.c                      |  2 +-
 fs/f2fs/data.c                   |  2 +-
 include/linux/bio.h              | 25 ++++++++++++++++++++++++-
 include/linux/bvec.h             |  9 +++++++++
 kernel/power/swap.c              |  2 +-
 mm/page_io.c                     |  4 ++--
 19 files changed, 122 insertions(+), 66 deletions(-)

-- 
2.9.5


* [PATCH 01/10] block: introduce bio helpers for converting to multipage bvec
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 02/10] block: convert to bio_first_bvec() & bio_first_page() Ming Lei
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei

These helpers are introduced for converting the current users of direct
access to the bvec table, and prepare for supporting multipage bvecs.
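
As a minimal usage sketch (hypothetical caller, not part of this patch),
code that used to index the table directly:

	struct bio_vec *last = &bio->bi_io_vec[bio->bi_vcnt - 1];
	unsigned nr = bio->bi_vcnt;

can be written as:

	struct bio_vec *last = bio_last_bvec(bio);
	unsigned nr = bio_nr_pages(bio);

All of these helpers warn (WARN_ON_ONCE) when the bio is a fast clone,
since a cloned bio shares its parent's bvec table and its bi_vcnt can't
be trusted.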

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bio.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 82f0c8fd7be8..3f314e17364a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -300,6 +300,30 @@ static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
 		bv->bv_len = iter.bi_bvec_done;
 }
 
+static inline unsigned bio_nr_pages(struct bio *bio)
+{
+	WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
+
+	return bio->bi_vcnt;
+}
+
+static inline struct bio_vec *bio_first_bvec(struct bio *bio)
+{
+	WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
+	return bio->bi_io_vec;
+}
+
+static inline struct page *bio_first_page(struct bio *bio)
+{
+	return bio_first_bvec(bio)->bv_page;
+}
+
+static inline struct bio_vec *bio_last_bvec(struct bio *bio)
+{
+	WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
+	return &bio->bi_io_vec[bio->bi_vcnt - 1];
+}
+
 enum bip_flags {
 	BIP_BLOCK_INTEGRITY	= 1 << 0, /* block layer owns integrity data */
 	BIP_MAPPED_INTEGRITY	= 1 << 1, /* ref tag has been remapped */
-- 
2.9.5


* [PATCH 02/10] block: convert to bio_first_bvec() & bio_first_page()
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
  2017-12-08 13:14 ` [PATCH 01/10] block: introduce bio helpers for converting to multipage bvec Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 03/10] fs: convert to bio_last_bvec() Ming Lei
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei

This patch converts to bio_first_bvec() & bio_first_page() for retrieving
the 1st bvec/page, and prepares for supporting multipage bvecs.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/block/drbd/drbd_bitmap.c | 2 +-
 drivers/block/zram/zram_drv.c    | 2 +-
 drivers/md/bcache/super.c        | 8 ++++----
 fs/btrfs/compression.c           | 2 +-
 fs/btrfs/inode.c                 | 4 ++--
 fs/f2fs/data.c                   | 2 +-
 kernel/power/swap.c              | 2 +-
 mm/page_io.c                     | 4 ++--
 8 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index bd97908c766f..0fe155123c8c 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -953,7 +953,7 @@ static void drbd_bm_endio(struct bio *bio)
 	struct drbd_bm_aio_ctx *ctx = bio->bi_private;
 	struct drbd_device *device = ctx->device;
 	struct drbd_bitmap *b = device->bitmap;
-	unsigned int idx = bm_page_to_idx(bio->bi_io_vec[0].bv_page);
+	unsigned int idx = bm_page_to_idx(bio_first_page(bio));
 
 	if ((ctx->flags & BM_AIO_COPY_PAGES) == 0 &&
 	    !bm_test_page_unchanged(b->bm_pages[idx]))
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index d70eba30003a..36885b037ad3 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -430,7 +430,7 @@ static void put_entry_bdev(struct zram *zram, unsigned long entry)
 
 static void zram_page_end_io(struct bio *bio)
 {
-	struct page *page = bio->bi_io_vec[0].bv_page;
+	struct page *page = bio_first_page(bio);
 
 	page_endio(page, op_is_write(bio_op(bio)),
 			blk_status_to_errno(bio->bi_status));
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index b4d28928dec5..da1a953d9545 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -211,7 +211,7 @@ static void write_bdev_super_endio(struct bio *bio)
 
 static void __write_super(struct cache_sb *sb, struct bio *bio)
 {
-	struct cache_sb *out = page_address(bio->bi_io_vec[0].bv_page);
+	struct cache_sb *out = page_address(bio_first_page(bio));
 	unsigned i;
 
 	bio->bi_iter.bi_sector	= SB_SECTOR;
@@ -1166,7 +1166,7 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
 	dc->bdev->bd_holder = dc;
 
 	bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1);
-	dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
+	bio_first_bvec(&dc->sb_bio)->bv_page = sb_page;
 	get_page(sb_page);
 
 	if (cached_dev_init(dc, sb->block_size << 9))
@@ -1810,7 +1810,7 @@ void bch_cache_release(struct kobject *kobj)
 		free_fifo(&ca->free[i]);
 
 	if (ca->sb_bio.bi_inline_vecs[0].bv_page)
-		put_page(ca->sb_bio.bi_io_vec[0].bv_page);
+		put_page(bio_first_page(&ca->sb_bio));
 
 	if (!IS_ERR_OR_NULL(ca->bdev))
 		blkdev_put(ca->bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
@@ -1864,7 +1864,7 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
 	ca->bdev->bd_holder = ca;
 
 	bio_init(&ca->sb_bio, ca->sb_bio.bi_inline_vecs, 1);
-	ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
+	bio_first_bvec(&ca->sb_bio)->bv_page = sb_page;
 	get_page(sb_page);
 
 	if (blk_queue_discard(bdev_get_queue(ca->bdev)))
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 5982c8a71f02..fc0386c18574 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -563,7 +563,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	/* we need the actual starting offset of this extent in the file */
 	read_lock(&em_tree->lock);
 	em = lookup_extent_mapping(em_tree,
-				   page_offset(bio->bi_io_vec->bv_page),
+				   page_offset(bio_first_page(bio)),
 				   PAGE_SIZE);
 	read_unlock(&em_tree->lock);
 	if (!em)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 993061f83067..d28b66019d54 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8072,7 +8072,7 @@ static void btrfs_retry_endio_nocsum(struct bio *bio)
 	ASSERT(bio->bi_vcnt == 1);
 	io_tree = &BTRFS_I(inode)->io_tree;
 	failure_tree = &BTRFS_I(inode)->io_failure_tree;
-	ASSERT(bio->bi_io_vec->bv_len == btrfs_inode_sectorsize(inode));
+	ASSERT(bio_first_bvec(bio)->bv_len == btrfs_inode_sectorsize(inode));
 
 	done->uptodate = 1;
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
@@ -8162,7 +8162,7 @@ static void btrfs_retry_endio(struct bio *bio)
 	uptodate = 1;
 
 	ASSERT(bio->bi_vcnt == 1);
-	ASSERT(bio->bi_io_vec->bv_len == btrfs_inode_sectorsize(done->inode));
+	ASSERT(bio_first_bvec(bio)->bv_len == btrfs_inode_sectorsize(done->inode));
 
 	io_tree = &BTRFS_I(inode)->io_tree;
 	failure_tree = &BTRFS_I(inode)->io_failure_tree;
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 516fa0d3ff9c..9c2463f9272d 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -56,7 +56,7 @@ static void f2fs_read_end_io(struct bio *bio)
 	int i;
 
 #ifdef CONFIG_F2FS_FAULT_INJECTION
-	if (time_to_inject(F2FS_P_SB(bio->bi_io_vec->bv_page), FAULT_IO)) {
+	if (time_to_inject(F2FS_P_SB(bio_first_page(bio)), FAULT_IO)) {
 		f2fs_show_injection_info(FAULT_IO);
 		bio->bi_status = BLK_STS_IOERR;
 	}
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 293ead59eccc..488e4a490dfa 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -240,7 +240,7 @@ static void hib_init_batch(struct hib_bio_batch *hb)
 static void hib_end_io(struct bio *bio)
 {
 	struct hib_bio_batch *hb = bio->bi_private;
-	struct page *page = bio->bi_io_vec[0].bv_page;
+	struct page *page = bio_first_page(bio);
 
 	if (bio->bi_status) {
 		pr_alert("Read-error on swap-device (%u:%u:%Lu)\n",
diff --git a/mm/page_io.c b/mm/page_io.c
index e93f1a4cacd7..edcf5389eab7 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -50,7 +50,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
 
 void end_swap_bio_write(struct bio *bio)
 {
-	struct page *page = bio->bi_io_vec[0].bv_page;
+	struct page *page = bio_first_page(bio);
 
 	if (bio->bi_status) {
 		SetPageError(page);
@@ -122,7 +122,7 @@ static void swap_slot_free_notify(struct page *page)
 
 static void end_swap_bio_read(struct bio *bio)
 {
-	struct page *page = bio->bi_io_vec[0].bv_page;
+	struct page *page = bio_first_page(bio);
 	struct task_struct *waiter = bio->bi_private;
 
 	if (bio->bi_status) {
-- 
2.9.5


* [PATCH 03/10] fs: convert to bio_last_bvec()
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
  2017-12-08 13:14 ` [PATCH 01/10] block: introduce bio helpers for converting to multipage bvec Ming Lei
  2017-12-08 13:14 ` [PATCH 02/10] block: convert to bio_first_bvec() & bio_first_page() Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 04/10] block: bounce: avoid direct access to bvec table Ming Lei
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei

This patch converts 3 users to bio_last_bvec(), so that we can go ahead
and convert to multipage bvecs.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 fs/btrfs/compression.c | 2 +-
 fs/btrfs/extent_io.c   | 2 +-
 fs/buffer.c            | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index fc0386c18574..726e1b5b06fe 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -411,7 +411,7 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start,
 
 static u64 bio_end_offset(struct bio *bio)
 {
-	struct bio_vec *last = &bio->bi_io_vec[bio->bi_vcnt - 1];
+	struct bio_vec *last = bio_last_bvec(bio);
 
 	return page_offset(last->bv_page) + last->bv_len + last->bv_offset;
 }
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 012d63870b99..6f6669f93beb 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2724,7 +2724,7 @@ static int __must_check submit_one_bio(struct bio *bio, int mirror_num,
 				       unsigned long bio_flags)
 {
 	blk_status_t ret = 0;
-	struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+	struct bio_vec *bvec = bio_last_bvec(bio);
 	struct page *page = bvec->bv_page;
 	struct extent_io_tree *tree = bio->bi_private;
 	u64 start;
diff --git a/fs/buffer.c b/fs/buffer.c
index 0736a6a2e2f0..ceb705ba939f 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3014,7 +3014,7 @@ static void end_bio_bh_io_sync(struct bio *bio)
 void guard_bio_eod(int op, struct bio *bio)
 {
 	sector_t maxsector;
-	struct bio_vec *bvec = &bio->bi_io_vec[bio->bi_vcnt - 1];
+	struct bio_vec *bvec = bio_last_bvec(bio);
 	unsigned truncated_bytes;
 	struct hd_struct *part;
 
-- 
2.9.5


* [PATCH 04/10] block: bounce: avoid direct access to bvec table
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (2 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 03/10] fs: convert to bio_last_bvec() Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 05/10] block: bounce: don't access bio->bi_io_vec in copy_to_high_bio_irq Ming Lei
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei, Matthew Wilcox

We will support multipage bvecs in the future, so switch to the iterator
for getting the bv_page of each bvec from the original bio.
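
Condensed from the hunk below (comments added; sketch only), the key
point of the conversion is:

	struct bvec_iter orig_iter = bio_orig->bi_iter;
	struct bio_vec orig_vec;

	/*
	 * bio_iter_iovec() computes a bio_vec by value from the iterator
	 * instead of handing back a pointer into ->bi_io_vec, so it can
	 * keep working once one table entry covers more than one page.
	 */
	orig_vec = bio_iter_iovec(bio_orig, orig_iter);
	bio_advance_iter(bio_orig, &orig_iter, orig_vec.bv_len);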

Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bounce.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/block/bounce.c b/block/bounce.c
index fceb1a96480b..0274c31d6c05 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -137,21 +137,20 @@ static void copy_to_high_bio_irq(struct bio *to, struct bio *from)
 static void bounce_end_io(struct bio *bio, mempool_t *pool)
 {
 	struct bio *bio_orig = bio->bi_private;
-	struct bio_vec *bvec, *org_vec;
+	struct bio_vec *bvec, orig_vec;
 	int i;
-	int start = bio_orig->bi_iter.bi_idx;
+	struct bvec_iter orig_iter = bio_orig->bi_iter;
 
 	/*
 	 * free up bounce indirect pages used
 	 */
 	bio_for_each_segment_all(bvec, bio, i) {
-		org_vec = bio_orig->bi_io_vec + i + start;
-
-		if (bvec->bv_page == org_vec->bv_page)
-			continue;
-
-		dec_zone_page_state(bvec->bv_page, NR_BOUNCE);
-		mempool_free(bvec->bv_page, pool);
+		orig_vec = bio_iter_iovec(bio_orig, orig_iter);
+		if (bvec->bv_page != orig_vec.bv_page) {
+			dec_zone_page_state(bvec->bv_page, NR_BOUNCE);
+			mempool_free(bvec->bv_page, pool);
+		}
+		bio_advance_iter(bio_orig, &orig_iter, orig_vec.bv_len);
 	}
 
 	bio_orig->bi_status = bio->bi_status;
-- 
2.9.5


* [PATCH 05/10] block: bounce: don't access bio->bi_io_vec in copy_to_high_bio_irq
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (3 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 04/10] block: bounce: avoid direct access to bvec table Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 06/10] dm: limit the max bio size as BIO_MAX_PAGES * PAGE_SIZE Ming Lei
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei

First, this patch introduces BVEC_ITER_ALL_INIT for iterating one bio
from start to end.

Since we need to support multipage bvecs, don't access bio->bi_io_vec
in copy_to_high_bio_irq(); just use the standard iterator to do that.
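
A note on how the new initializer is used below (a sketch of the hunk,
not extra code in this patch): with .bi_size set to UINT_MAX the iterator
never runs out by itself, so the walk has to be bounded externally, here
by the bio_for_each_segment() loop over @to that advances @from's
iterator in lockstep:

	struct bio_vec tovec, fromvec;
	struct bvec_iter iter, from_iter = BVEC_ITER_ALL_INIT;

	bio_for_each_segment(tovec, to, iter) {
		fromvec = bio_iter_iovec(from, from_iter);
		/* compare tovec/fromvec pages and copy if they differ */
		bio_advance_iter(from, &from_iter, tovec.bv_len);
	}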

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bounce.c       | 16 +++++++++++-----
 include/linux/bvec.h |  9 +++++++++
 2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/block/bounce.c b/block/bounce.c
index 0274c31d6c05..c35a3d7f0528 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -113,24 +113,30 @@ int init_emergency_isa_pool(void)
 static void copy_to_high_bio_irq(struct bio *to, struct bio *from)
 {
 	unsigned char *vfrom;
-	struct bio_vec tovec, *fromvec = from->bi_io_vec;
+	struct bio_vec tovec, fromvec;
 	struct bvec_iter iter;
+	/*
+	 * The bio of @from is created by bounce, so we can iterate
+	 * its bvec from start to end, but the @from->bi_iter can't be
+	 * trusted because it might be changed by splitting.
+	 */
+	struct bvec_iter from_iter = BVEC_ITER_ALL_INIT;
 
 	bio_for_each_segment(tovec, to, iter) {
-		if (tovec.bv_page != fromvec->bv_page) {
+		fromvec = bio_iter_iovec(from, from_iter);
+		if (tovec.bv_page != fromvec.bv_page) {
 			/*
 			 * fromvec->bv_offset and fromvec->bv_len might have
 			 * been modified by the block layer, so use the original
 			 * copy, bounce_copy_vec already uses tovec->bv_len
 			 */
-			vfrom = page_address(fromvec->bv_page) +
+			vfrom = page_address(fromvec.bv_page) +
 				tovec.bv_offset;
 
 			bounce_copy_vec(&tovec, vfrom);
 			flush_dcache_page(tovec.bv_page);
 		}
-
-		fromvec++;
+		bio_advance_iter(from, &from_iter, tovec.bv_len);
 	}
 }
 
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index ec8a4d7af6bd..fe7a22dd133b 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -125,4 +125,13 @@ static inline bool bvec_iter_rewind(const struct bio_vec *bv,
 		((bvl = bvec_iter_bvec((bio_vec), (iter))), 1);	\
 	     bvec_iter_advance((bio_vec), &(iter), (bvl).bv_len))
 
+/* for iterating one bio from start to end */
+#define BVEC_ITER_ALL_INIT (struct bvec_iter)				\
+{									\
+	.bi_sector	= 0,						\
+	.bi_size	= UINT_MAX,					\
+	.bi_idx		= 0,						\
+	.bi_bvec_done	= 0,						\
+}
+
 #endif /* __LINUX_BVEC_ITER_H */
-- 
2.9.5


* [PATCH 06/10] dm: limit the max bio size as BIO_MAX_PAGES * PAGE_SIZE
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (4 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 05/10] block: bounce: don't access bio->bi_io_vec in copy_to_high_bio_irq Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 07/10] bcache: comment on direct access to bvec table Ming Lei
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei, Mike Snitzer

For bio-based DM, some targets, such as the crypt target, aren't ready
to deal with incoming bios bigger than 1 Mbyte.
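
For reference, the 1 Mbyte figure above works out as (assuming the common
configuration of BIO_MAX_PAGES == 256 and 4 KiB pages):

	BIO_MAX_PAGES * PAGE_SIZE = 256 * 4096 bytes = 1 MiB

i.e. the largest payload a bvec table of single-page entries can describe
today.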

Cc: Mike Snitzer <snitzer@redhat.com>
Cc: dm-devel@redhat.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/md/dm.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index de17b7193299..7475739fee49 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -920,7 +920,15 @@ int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)
 		return -EINVAL;
 	}
 
-	ti->max_io_len = (uint32_t) len;
+	/*
+	 * BIO based queue uses its own splitting. When multipage bvecs
+	 * is switched on, size of the incoming bio may be too big to
+	 * be handled in some targets, such as crypt.
+	 *
+	 * When these targets are ready for the big bio, we can remove
+	 * the limit.
+	 */
+	ti->max_io_len = min_t(uint32_t, len, BIO_MAX_PAGES * PAGE_SIZE);
 
 	return 0;
 }
-- 
2.9.5


* [PATCH 07/10] bcache: comment on direct access to bvec table
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (5 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 06/10] dm: limit the max bio size as BIO_MAX_PAGES * PAGE_SIZE Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 08/10] block: move bio_alloc_pages() to bcache Ming Lei
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei, linux-bcache

All these direct accesses to the bvec table are safe even after multipage
bvec is supported.

Cc: linux-bcache@vger.kernel.org
Acked-by: Coly Li <colyli@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/md/bcache/btree.c | 1 +
 drivers/md/bcache/util.c  | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 11c5503d31dc..c09f3dd4bf07 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -432,6 +432,7 @@ static void do_btree_node_write(struct btree *b)
 
 		continue_at(cl, btree_node_write_done, NULL);
 	} else {
+		/* No problem for multipage bvec since the bio is just allocated */
 		b->bio->bi_vcnt = 0;
 		bch_bio_map(b->bio, i);
 
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index e548b8b51322..61813d230015 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -249,6 +249,13 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
 		: 0;
 }
 
+/*
+ * Generally it isn't good to access .bi_io_vec and .bi_vcnt directly,
+ * the preferred way is bio_add_page, but in this case, bch_bio_map()
+ * supposes that the bvec table is empty, so it is safe to access
+ * .bi_vcnt & .bi_io_vec in this way even after multipage bvec is
+ * supported.
+ */
 void bch_bio_map(struct bio *bio, void *base)
 {
 	size_t size = bio->bi_iter.bi_size;
-- 
2.9.5


* [PATCH 08/10] block: move bio_alloc_pages() to bcache
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (6 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 07/10] bcache: comment on direct access to bvec table Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2018-01-08 18:05   ` Michael Lyle
  2017-12-08 13:14 ` [PATCH 09/10] btrfs: avoid access to .bi_vcnt directly Ming Lei
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: Christoph Hellwig, Ming Lei

bcache is the only user of bio_alloc_pages(), and users should use
bio_add_page() instead, so move this function into bcache to avoid it
being misused in the future.
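
For code outside bcache that still needs this pattern, a minimal sketch
of the preferred bio_add_page() approach (hypothetical helper, not part
of this patch):

	/* kernel context: linux/bio.h, linux/gfp.h */
	static int add_n_pages(struct bio *bio, unsigned nr_pages, gfp_t gfp)
	{
		while (nr_pages--) {
			struct page *page = alloc_page(gfp);

			if (!page)
				goto free_added;
			if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
				__free_page(page);
				goto free_added;
			}
		}
		return 0;
	free_added:
		bio_free_pages(bio);	/* frees the pages already added */
		return -ENOMEM;
	}

Going through bio_add_page() keeps callers independent of how the bvec
table is laid out.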

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c              | 28 ----------------------------
 drivers/md/bcache/util.c | 27 +++++++++++++++++++++++++++
 drivers/md/bcache/util.h |  1 +
 include/linux/bio.h      |  1 -
 4 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 228229f3bb76..76bb3dafffea 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -969,34 +969,6 @@ void bio_advance(struct bio *bio, unsigned bytes)
 EXPORT_SYMBOL(bio_advance);
 
 /**
- * bio_alloc_pages - allocates a single page for each bvec in a bio
- * @bio: bio to allocate pages for
- * @gfp_mask: flags for allocation
- *
- * Allocates pages up to @bio->bi_vcnt.
- *
- * Returns 0 on success, -ENOMEM on failure. On failure, any allocated pages are
- * freed.
- */
-int bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
-{
-	int i;
-	struct bio_vec *bv;
-
-	bio_for_each_segment_all(bv, bio, i) {
-		bv->bv_page = alloc_page(gfp_mask);
-		if (!bv->bv_page) {
-			while (--bv >= bio->bi_io_vec)
-				__free_page(bv->bv_page);
-			return -ENOMEM;
-		}
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL(bio_alloc_pages);
-
-/**
  * bio_copy_data - copy contents of data buffers from one chain of bios to
  * another
  * @src: source bio list
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index 61813d230015..ac557e8c7ef5 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -283,6 +283,33 @@ start:		bv->bv_len	= min_t(size_t, PAGE_SIZE - bv->bv_offset,
 	}
 }
 
+/**
+ * bio_alloc_pages - allocates a single page for each bvec in a bio
+ * @bio: bio to allocate pages for
+ * @gfp_mask: flags for allocation
+ *
+ * Allocates pages up to @bio->bi_vcnt.
+ *
+ * Returns 0 on success, -ENOMEM on failure. On failure, any allocated pages are
+ * freed.
+ */
+int bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
+{
+	int i;
+	struct bio_vec *bv;
+
+	bio_for_each_segment_all(bv, bio, i) {
+		bv->bv_page = alloc_page(gfp_mask);
+		if (!bv->bv_page) {
+			while (--bv >= bio->bi_io_vec)
+				__free_page(bv->bv_page);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
 /*
  * Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group (Any
  * use permitted, subject to terms of PostgreSQL license; see.)
diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
index ed5e8a412eb8..c92de937bcab 100644
--- a/drivers/md/bcache/util.h
+++ b/drivers/md/bcache/util.h
@@ -558,6 +558,7 @@ static inline unsigned fract_exp_two(unsigned x, unsigned fract_bits)
 }
 
 void bch_bio_map(struct bio *bio, void *base);
+int bio_alloc_pages(struct bio *bio, gfp_t gfp_mask);
 
 static inline sector_t bdev_sectors(struct block_device *bdev)
 {
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 3f314e17364a..46cdbe0335a5 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -501,7 +501,6 @@ static inline void bio_flush_dcache_pages(struct bio *bi)
 #endif
 
 extern void bio_copy_data(struct bio *dst, struct bio *src);
-extern int bio_alloc_pages(struct bio *bio, gfp_t gfp);
 extern void bio_free_pages(struct bio *bio);
 
 extern struct bio *bio_copy_user_iov(struct request_queue *,
-- 
2.9.5


* [PATCH 09/10] btrfs: avoid access to .bi_vcnt directly
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (7 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 08/10] block: move bio_alloc_pages() to bcache Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-08 13:14 ` [PATCH 10/10] btrfs: avoid to access bvec table directly for a cloned bio Ming Lei
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: Christoph Hellwig, Ming Lei, Chris Mason, Josef Bacik,
	David Sterba, linux-btrfs

Btrfs uses bio->bi_vcnt to figure out the number of pages, which is no
longer correct once we start to enable multipage bvecs, since a single
bvec may then cover more than one page.

So use bio_nr_pages() to do that instead.

Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fb.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Acked-by: David Sterba <dsterba@suse.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 fs/btrfs/extent_io.c | 9 +++++----
 fs/btrfs/extent_io.h | 2 +-
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 6f6669f93beb..27795bf2507c 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2257,7 +2257,7 @@ int btrfs_get_io_failure_record(struct inode *inode, u64 start, u64 end,
 	return 0;
 }
 
-bool btrfs_check_repairable(struct inode *inode, struct bio *failed_bio,
+bool btrfs_check_repairable(struct inode *inode, unsigned failed_bio_pages,
 			   struct io_failure_record *failrec, int failed_mirror)
 {
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
@@ -2281,7 +2281,7 @@ bool btrfs_check_repairable(struct inode *inode, struct bio *failed_bio,
 	 *	a) deliver good data to the caller
 	 *	b) correct the bad sectors on disk
 	 */
-	if (failed_bio->bi_vcnt > 1) {
+	if (failed_bio_pages > 1) {
 		/*
 		 * to fulfill b), we need to know the exact failing sectors, as
 		 * we don't want to rewrite any more than the failed ones. thus,
@@ -2374,6 +2374,7 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
 	int read_mode = 0;
 	blk_status_t status;
 	int ret;
+	unsigned failed_bio_pages = bio_nr_pages(failed_bio);
 
 	BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE);
 
@@ -2381,13 +2382,13 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
 	if (ret)
 		return ret;
 
-	if (!btrfs_check_repairable(inode, failed_bio, failrec,
+	if (!btrfs_check_repairable(inode, failed_bio_pages, failrec,
 				    failed_mirror)) {
 		free_io_failure(failure_tree, tree, failrec);
 		return -EIO;
 	}
 
-	if (failed_bio->bi_vcnt > 1)
+	if (failed_bio_pages > 1)
 		read_mode |= REQ_FAILFAST_DEV;
 
 	phy_offset >>= inode->i_sb->s_blocksize_bits;
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 93dcae0c3183..20854d63c75b 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -540,7 +540,7 @@ void btrfs_free_io_failure_record(struct btrfs_inode *inode, u64 start,
 		u64 end);
 int btrfs_get_io_failure_record(struct inode *inode, u64 start, u64 end,
 				struct io_failure_record **failrec_ret);
-bool btrfs_check_repairable(struct inode *inode, struct bio *failed_bio,
+bool btrfs_check_repairable(struct inode *inode, unsigned failed_bio_pages,
 			    struct io_failure_record *failrec, int fail_mirror);
 struct bio *btrfs_create_repair_bio(struct inode *inode, struct bio *failed_bio,
 				    struct io_failure_record *failrec,
-- 
2.9.5


* [PATCH 10/10] btrfs: avoid to access bvec table directly for a cloned bio
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (8 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 09/10] btrfs: avoid access to .bi_vcnt directly Ming Lei
@ 2017-12-08 13:14 ` Ming Lei
  2017-12-12  7:57 ` [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Christoph Hellwig
  2018-01-05 19:02 ` Jens Axboe
  11 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-08 13:14 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: Christoph Hellwig, Ming Lei, Chris Mason, Josef Bacik,
	David Sterba, linux-btrfs, Liu Bo

Commit 17347cec15f919901c90 ("Btrfs: change how we iterate bios in endio")
mentioned that for dio the submitted bio may be fast-cloned. We can't
access the bvec table directly for a cloned bio, so use
bio_get_first_bvec() to retrieve the 1st bvec.

Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fb.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Cc: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Acked-by: David Sterba <dsterba@suse.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 fs/btrfs/inode.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d28b66019d54..7f23c1993d24 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8013,6 +8013,7 @@ static blk_status_t dio_read_error(struct inode *inode, struct bio *failed_bio,
 	int segs;
 	int ret;
 	blk_status_t status;
+	struct bio_vec bvec;
 
 	BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE);
 
@@ -8028,8 +8029,9 @@ static blk_status_t dio_read_error(struct inode *inode, struct bio *failed_bio,
 	}
 
 	segs = bio_segments(failed_bio);
+	bio_get_first_bvec(failed_bio, &bvec);
 	if (segs > 1 ||
-	    (failed_bio->bi_io_vec->bv_len > btrfs_inode_sectorsize(inode)))
+	    (bvec.bv_len > btrfs_inode_sectorsize(inode)))
 		read_mode |= REQ_FAILFAST_DEV;
 
 	isector = start - btrfs_io_bio(failed_bio)->logical;
-- 
2.9.5


* Re: [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec)
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (9 preceding siblings ...)
  2017-12-08 13:14 ` [PATCH 10/10] btrfs: avoid to access bvec table directly for a cloned bio Ming Lei
@ 2017-12-12  7:57 ` Christoph Hellwig
  2017-12-12  9:18   ` Ming Lei
  2018-01-05 19:02 ` Jens Axboe
  11 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2017-12-12  7:57 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Christoph Hellwig

Most of this looks sane, but I'd really like to see it in context
of the actual multipage bvec patches.  Do you have an updated branch
on top of these?


* Re: [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec)
  2017-12-12  7:57 ` [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Christoph Hellwig
@ 2017-12-12  9:18   ` Ming Lei
  2017-12-13 17:55     ` Ming Lei
  0 siblings, 1 reply; 19+ messages in thread
From: Ming Lei @ 2017-12-12  9:18 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Mon, Dec 11, 2017 at 11:57:38PM -0800, Christoph Hellwig wrote:
> Most of this looks sane, but I'd really like to see it in context
> of the actual multipage bvec patches.  Do you have an updated branch
> on top of these?

I will post it out soon after addressing some of the last comments.

Thanks,
Ming


* Re: [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec)
  2017-12-12  9:18   ` Ming Lei
@ 2017-12-13 17:55     ` Ming Lei
  0 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2017-12-13 17:55 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Tue, Dec 12, 2017 at 05:18:44PM +0800, Ming Lei wrote:
> On Mon, Dec 11, 2017 at 11:57:38PM -0800, Christoph Hellwig wrote:
> > Most of this looks sane, but I'd really like to see it in context
> > of the actual multipage bvec patches.  Do you have an updated branch
> > on top of these?
> 
> I will post it out soon after addressing some of last comments.

You can find the actual multipage bvec patches in the following tree,
which is on top of this prepare patchset.

	tree: https://github.com/ming1/linux.git #v4.15-rc-mp-bvec_v4-rc1
	gitweb: https://github.com/ming1/linux/commits/v4.15-rc-mp-bvec_v4-rc1

In this tree, all the current bio_for_each_segment* uses are converted to
bio_for_each_page*() first; then, after multipage bvec is enabled, we
have the following helpers:

1) bio_for_each_segment()/bio_for_each_segment_all()

	iterate the bio segment by segment, where a segment is a real
	multipage bvec

2) bio_for_each_page()/bio_for_each_page_all()

	iterate the bio page by page, which is what the current in-tree
	bio_for_each_segment()/bio_for_each_segment_all() do

3) rq_for_each_page()/rq_for_each_segment()

	similar to the above


I haven't run the full tests yet, but it works on my VM; I will start
xfstests later.

Thanks,
Ming


* Re: [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec)
  2017-12-08 13:13 [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Ming Lei
                   ` (10 preceding siblings ...)
  2017-12-12  7:57 ` [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec) Christoph Hellwig
@ 2018-01-05 19:02 ` Jens Axboe
  2018-01-06  9:18   ` Ming Lei
  11 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2018-01-05 19:02 UTC (permalink / raw)
  To: Ming Lei, linux-block; +Cc: Christoph Hellwig

On 12/8/17 6:13 AM, Ming Lei wrote:
> Hi,
> 
> This patchse cleans up most of direct access to bvec table in tree, and
> these patches are the follow-up of patch1 ~ 16 in the patchset of 'block:
> support multipage bvec(V3)'[1]. 
> 
> Changes against [1]:
> 1) split the cleanup patches from [1]
> 2) address comments from Christoph:
> 	- introduce bio helpers for dealing with the cleanup
> 	- move bio_alloc_pages() to bcache
> 
> 
> [1] https://marc.info/?t=150218197600001&r=1&w=2

Looks like good cleanups to me. I will apply this for 4.16, thanks.

-- 
Jens Axboe


* Re: [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec)
  2018-01-05 19:02 ` Jens Axboe
@ 2018-01-06  9:18   ` Ming Lei
  2018-01-06 16:21     ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Ming Lei @ 2018-01-06  9:18 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig

On Fri, Jan 05, 2018 at 12:02:10PM -0700, Jens Axboe wrote:
> On 12/8/17 6:13 AM, Ming Lei wrote:
> > Hi,
> > 
> > This patchse cleans up most of direct access to bvec table in tree, and
> > these patches are the follow-up of patch1 ~ 16 in the patchset of 'block:
> > support multipage bvec(V3)'[1]. 
> > 
> > Changes against [1]:
> > 1) split the cleanup patches from [1]
> > 2) address comments from Christoph:
> > 	- introduce bio helpers for dealing with the cleanup
> > 	- move bio_alloc_pages() to bcache
> > 
> > 
> > [1] https://marc.info/?t=150218197600001&r=1&w=2
> 
> Looks like good cleanups to me. I will apply this for 4.16, thanks.

Hi Jens,

Sorry for forgetting to update this thread; the latest cleanup patchset
has been included in the following patchset, '[PATCH V4 00/45] block:
support multipage bvec', as patches 1 ~ 17:

	https://marc.info/?t=151359981400006&r=1&w=2

If possible, please revert these 10 patches in your block tree and apply
the whole '[PATCH V4 00/45] block: support multipage bvec' patchset from
the above link; V4 is much cleaner than before.

Thanks,
Ming


* Re: [PATCH 00/10] block: cleanup on direct access to bvec table(prepare for multipage bvec)
  2018-01-06  9:18   ` Ming Lei
@ 2018-01-06 16:21     ` Jens Axboe
  0 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2018-01-06 16:21 UTC (permalink / raw)
  To: Ming Lei; +Cc: linux-block, Christoph Hellwig

On Sat, Jan 06 2018, Ming Lei wrote:
> On Fri, Jan 05, 2018 at 12:02:10PM -0700, Jens Axboe wrote:
> > On 12/8/17 6:13 AM, Ming Lei wrote:
> > > Hi,
> > > 
> > > This patchse cleans up most of direct access to bvec table in tree, and
> > > these patches are the follow-up of patch1 ~ 16 in the patchset of 'block:
> > > support multipage bvec(V3)'[1]. 
> > > 
> > > Changes against [1]:
> > > 1) split the cleanup patches from [1]
> > > 2) address comments from Christoph:
> > > 	- introduce bio helpers for dealing with the cleanup
> > > 	- move bio_alloc_pages() to bcache
> > > 
> > > 
> > > [1] https://marc.info/?t=150218197600001&r=1&w=2
> > 
> > Looks like good cleanups to me. I will apply this for 4.16, thanks.
> 
> Hi Jens,
> 
> Sorry for forgetting to update in this thread, the latest cleanup patchset
> has been included into the following patchset of '[PATCH V4 00/45] block: support
> multipage bvec' as patch 1 ~ 17: 
> 
> 	https://marc.info/?t=151359981400006&r=1&w=2
> 
> If possible, please revert this 10 patches in your block-tree, and apply
> the whole patchset of '[PATCH V4 00/45] block: support multipage bvec'
> on the above link, and the V4 is much more clean than before.

OK, we can do that, even if I generally dislike doing it. I applied the
first part of the series; I haven't had time to review the latter part,
and others haven't either. So the timing isn't great for 4.16.

-- 
Jens Axboe


* Re: [PATCH 08/10] block: move bio_alloc_pages() to bcache
  2017-12-08 13:14 ` [PATCH 08/10] block: move bio_alloc_pages() to bcache Ming Lei
@ 2018-01-08 18:05   ` Michael Lyle
  2018-01-09  1:21     ` Ming Lei
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Lyle @ 2018-01-08 18:05 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, linux-block, linux-bcache; +Cc: Christoph Hellwig

On 12/08/2017 05:14 AM, Ming Lei wrote:
> bcache is the only user of bio_alloc_pages(), and all users should use
> bio_add_page() instead, so move this function into bcache, and avoid
> it misused in future.

Can things like this -please- be sent to the bcache list and bcache
maintainers?  I'm preparing my patch set for Jens and I'm surprised by
merge conflicts with stuff in my queue.  (The conflict only showed up in
-next in the past couple of days.)

Mike

> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/bio.c              | 28 ----------------------------
>  drivers/md/bcache/util.c | 27 +++++++++++++++++++++++++++
>  drivers/md/bcache/util.h |  1 +
>  include/linux/bio.h      |  1 -
>  4 files changed, 28 insertions(+), 29 deletions(-)
> 
> diff --git a/block/bio.c b/block/bio.c
> index 228229f3bb76..76bb3dafffea 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -969,34 +969,6 @@ void bio_advance(struct bio *bio, unsigned bytes)
>  EXPORT_SYMBOL(bio_advance);
>  
>  /**
> - * bio_alloc_pages - allocates a single page for each bvec in a bio
> - * @bio: bio to allocate pages for
> - * @gfp_mask: flags for allocation
> - *
> - * Allocates pages up to @bio->bi_vcnt.
> - *
> - * Returns 0 on success, -ENOMEM on failure. On failure, any allocated pages are
> - * freed.
> - */
> -int bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
> -{
> -	int i;
> -	struct bio_vec *bv;
> -
> -	bio_for_each_segment_all(bv, bio, i) {
> -		bv->bv_page = alloc_page(gfp_mask);
> -		if (!bv->bv_page) {
> -			while (--bv >= bio->bi_io_vec)
> -				__free_page(bv->bv_page);
> -			return -ENOMEM;
> -		}
> -	}
> -
> -	return 0;
> -}
> -EXPORT_SYMBOL(bio_alloc_pages);
> -
> -/**
>   * bio_copy_data - copy contents of data buffers from one chain of bios to
>   * another
>   * @src: source bio list
> diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
> index 61813d230015..ac557e8c7ef5 100644
> --- a/drivers/md/bcache/util.c
> +++ b/drivers/md/bcache/util.c
> @@ -283,6 +283,33 @@ start:		bv->bv_len	= min_t(size_t, PAGE_SIZE - bv->bv_offset,
>  	}
>  }
>  
> +/**
> + * bio_alloc_pages - allocates a single page for each bvec in a bio
> + * @bio: bio to allocate pages for
> + * @gfp_mask: flags for allocation
> + *
> + * Allocates pages up to @bio->bi_vcnt.
> + *
> + * Returns 0 on success, -ENOMEM on failure. On failure, any allocated pages are
> + * freed.
> + */
> +int bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
> +{
> +	int i;
> +	struct bio_vec *bv;
> +
> +	bio_for_each_segment_all(bv, bio, i) {
> +		bv->bv_page = alloc_page(gfp_mask);
> +		if (!bv->bv_page) {
> +			while (--bv >= bio->bi_io_vec)
> +				__free_page(bv->bv_page);
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  /*
>   * Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group (Any
>   * use permitted, subject to terms of PostgreSQL license; see.)
> diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
> index ed5e8a412eb8..c92de937bcab 100644
> --- a/drivers/md/bcache/util.h
> +++ b/drivers/md/bcache/util.h
> @@ -558,6 +558,7 @@ static inline unsigned fract_exp_two(unsigned x, unsigned fract_bits)
>  }
>  
>  void bch_bio_map(struct bio *bio, void *base);
> +int bio_alloc_pages(struct bio *bio, gfp_t gfp_mask);
>  
>  static inline sector_t bdev_sectors(struct block_device *bdev)
>  {
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index 3f314e17364a..46cdbe0335a5 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -501,7 +501,6 @@ static inline void bio_flush_dcache_pages(struct bio *bi)
>  #endif
>  
>  extern void bio_copy_data(struct bio *dst, struct bio *src);
> -extern int bio_alloc_pages(struct bio *bio, gfp_t gfp);
>  extern void bio_free_pages(struct bio *bio);
>  
>  extern struct bio *bio_copy_user_iov(struct request_queue *,
> 


* Re: [PATCH 08/10] block: move bio_alloc_pages() to bcache
  2018-01-08 18:05   ` Michael Lyle
@ 2018-01-09  1:21     ` Ming Lei
  0 siblings, 0 replies; 19+ messages in thread
From: Ming Lei @ 2018-01-09  1:21 UTC (permalink / raw)
  To: Michael Lyle; +Cc: Jens Axboe, linux-block, linux-bcache, Christoph Hellwig

On Mon, Jan 08, 2018 at 10:05:09AM -0800, Michael Lyle wrote:
> On 12/08/2017 05:14 AM, Ming Lei wrote:
> > bcache is the only user of bio_alloc_pages(), and all users should use
> > bio_add_page() instead, so move this function into bcache, and avoid
> > it misused in future.
> 
> Can things like this -please- be sent to the bcache list and bcache
> maintainers?  I'm preparing my patch set for Jens and I'm surprised by
> merge conflicts from stuff in my queue.  (Just showed up in next to show
> the conflict in the past couple of days).

OK, will do next time.

-- 
Ming

