* [RFC] cleanup bcache bio handling
@ 2018-06-11 19:48 Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 1/6] block: add a bio_reuse helper Christoph Hellwig
                   ` (7 more replies)
  0 siblings, 8 replies; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-11 19:48 UTC (permalink / raw)
  To: Jens Axboe, Coly Li; +Cc: linux-bcache, linux-block

Hi all,

this series cleans up various places where bcache is way too intimate
with bio internals.  This is intended as a baseline for the multi-page
biovec work, which requires some nasty workarounds for the existing
code.

Note that I do not have a bcache test setup, so this will require
some careful actual testing with whatever test cases are used for
bcache.

Also the new bio_reuse helper should be useful at least for MD raid,
but that work is left for later.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 1/6] block: add a bio_reuse helper
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
@ 2018-06-11 19:48 ` Christoph Hellwig
  2018-06-12  6:16   ` Kent Overstreet
  2018-06-11 19:48 ` [PATCH 2/6] bcache: use bio_reuse instead of bio_reinit where applicable Christoph Hellwig
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-11 19:48 UTC (permalink / raw)
  To: Jens Axboe, Coly Li; +Cc: linux-bcache, linux-block

This abstracts out a way to reuse a bio without destroying the
data pointers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c         | 20 ++++++++++++++++++++
 include/linux/bio.h |  1 +
 2 files changed, 21 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index 70c4e1b6dd45..fa1b7ab50784 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -308,6 +308,26 @@ void bio_reset(struct bio *bio)
 }
 EXPORT_SYMBOL(bio_reset);
 
+/**
+ * bio_reuse - prepare a bio for reuse
+ * @bio:	bio to reuse
+ *
+ * Prepares an already set up and possibly used bio for reuse.  Compared
+ * to bio_reset() this preserves the bio size and the layout and contents
+ * of the bio vectors.
+ */
+void bio_reuse(struct bio *bio)
+{
+	unsigned int size = bio->bi_iter.bi_size;
+	unsigned short vcnt = bio->bi_vcnt;
+
+	bio_reset(bio);
+
+	bio->bi_iter.bi_size = size;
+	bio->bi_vcnt = vcnt;
+}
+EXPORT_SYMBOL_GPL(bio_reuse);
+
 static struct bio *__bio_chain_endio(struct bio *bio)
 {
 	struct bio *parent = bio->bi_private;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index f08f5fe7bd08..15c871ab50db 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -475,6 +475,7 @@ extern void bio_init(struct bio *bio, struct bio_vec *table,
 		     unsigned short max_vecs);
 extern void bio_uninit(struct bio *);
 extern void bio_reset(struct bio *);
+void bio_reuse(struct bio *);
 void bio_chain(struct bio *, struct bio *);
 
 extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 2/6] bcache: use bio_reuse instead of bio_reinit where applicable
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 1/6] block: add a bio_reuse helper Christoph Hellwig
@ 2018-06-11 19:48 ` Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 3/6] bcache: clean up bio reuse for struct moving_io Christoph Hellwig
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-11 19:48 UTC (permalink / raw)
  To: Jens Axboe, Coly Li; +Cc: linux-bcache, linux-block

Use the bio_reuse helper instead of rebuilding the bio_vecs and
size for bios that get reused for the same data.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/request.c | 5 +----
 drivers/md/bcache/super.c   | 6 ++----
 2 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index ae67f5fa8047..cd2505b9bee9 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -810,12 +810,9 @@ static void cached_dev_read_done(struct closure *cl)
 	 */
 
 	if (s->iop.bio) {
-		bio_reset(s->iop.bio);
+		bio_reuse(s->iop.bio);
 		s->iop.bio->bi_iter.bi_sector = s->cache_miss->bi_iter.bi_sector;
 		bio_copy_dev(s->iop.bio, s->cache_miss);
-		s->iop.bio->bi_iter.bi_size = s->insert_bio_sectors << 9;
-		bch_bio_map(s->iop.bio, NULL);
-
 		bio_copy_data(s->cache_miss, s->iop.bio);
 
 		bio_put(s->cache_miss);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a31e55bcc4e5..cd27eabbab24 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -205,9 +205,7 @@ static void __write_super(struct cache_sb *sb, struct bio *bio)
 	unsigned i;
 
 	bio->bi_iter.bi_sector	= SB_SECTOR;
-	bio->bi_iter.bi_size	= SB_SIZE;
 	bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_SYNC|REQ_META);
-	bch_bio_map(bio, NULL);
 
 	out->offset		= cpu_to_le64(sb->offset);
 	out->version		= cpu_to_le64(sb->version);
@@ -249,7 +247,7 @@ void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent)
 	down(&dc->sb_write_mutex);
 	closure_init(cl, parent);
 
-	bio_reset(bio);
+	bio_reuse(bio);
 	bio_set_dev(bio, dc->bdev);
 	bio->bi_end_io	= write_bdev_super_endio;
 	bio->bi_private = dc;
@@ -298,7 +296,7 @@ void bcache_write_super(struct cache_set *c)
 
 		SET_CACHE_SYNC(&ca->sb, CACHE_SYNC(&c->sb));
 
-		bio_reset(bio);
+		bio_reuse(bio);
 		bio_set_dev(bio, ca->bdev);
 		bio->bi_end_io	= write_super_endio;
 		bio->bi_private = ca;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 3/6] bcache: clean up bio reuse for struct moving_io
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 1/6] block: add a bio_reuse helper Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 2/6] bcache: use bio_reuse instead of bio_reinit where applicable Christoph Hellwig
@ 2018-06-11 19:48 ` Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 4/6] bcache: clean up bio reuse for struct dirty_io Christoph Hellwig
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-11 19:48 UTC (permalink / raw)
  To: Jens Axboe, Coly Li; +Cc: linux-bcache, linux-block

Instead of reinitializing the bio every time, we can call bio_reuse when
reusing it.  This also removes the remainder of the moving_init helper
to improve readability.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/movinggc.c | 40 +++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/drivers/md/bcache/movinggc.c b/drivers/md/bcache/movinggc.c
index a24c3a95b2c0..e59bc04c3d74 100644
--- a/drivers/md/bcache/movinggc.c
+++ b/drivers/md/bcache/movinggc.c
@@ -74,29 +74,20 @@ static void read_moving_endio(struct bio *bio)
 	bch_bbio_endio(io->op.c, bio, bio->bi_status, "reading data to move");
 }
 
-static void moving_init(struct moving_io *io)
-{
-	struct bio *bio = &io->bio.bio;
-
-	bio_init(bio, bio->bi_inline_vecs,
-		 DIV_ROUND_UP(KEY_SIZE(&io->w->key), PAGE_SECTORS));
-	bio_get(bio);
-	bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
-
-	bio->bi_iter.bi_size	= KEY_SIZE(&io->w->key) << 9;
-	bio->bi_private		= &io->cl;
-	bch_bio_map(bio, NULL);
-}
-
 static void write_moving(struct closure *cl)
 {
 	struct moving_io *io = container_of(cl, struct moving_io, cl);
 	struct data_insert_op *op = &io->op;
 
 	if (!op->status) {
-		moving_init(io);
+		struct bio *bio = &io->bio.bio;
+
+		bio_reuse(bio);
+		bio_get(bio);
+		bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
+		bio->bi_private = &io->cl;
+		bio->bi_iter.bi_sector = KEY_START(&io->w->key);
 
-		io->bio.bio.bi_iter.bi_sector = KEY_START(&io->w->key);
 		op->write_prio		= 1;
 		op->bio			= &io->bio.bio;
 
@@ -156,12 +147,19 @@ static void read_moving(struct cache_set *c)
 		io->op.c	= c;
 		io->op.wq	= c->moving_gc_wq;
 
-		moving_init(io);
 		bio = &io->bio.bio;
-
-		bio_set_op_attrs(bio, REQ_OP_READ, 0);
-		bio->bi_end_io	= read_moving_endio;
-
+		bio_init(bio, bio->bi_inline_vecs,
+			DIV_ROUND_UP(KEY_SIZE(&io->w->key), PAGE_SECTORS));
+		bio_get(bio);
+
+		bio->bi_iter.bi_size = KEY_SIZE(&io->w->key) << 9;
+		bio->bi_iter.bi_sector = KEY_START(&io->w->key);
+		bio->bi_opf = REQ_OP_READ;
+		bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
+		bio->bi_private = &io->cl;
+		bio->bi_end_io = read_moving_endio;
+
+		bch_bio_map(bio, NULL);
 		if (bch_bio_alloc_pages(bio, GFP_KERNEL))
 			goto err;
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 4/6] bcache: clean up bio reuse for struct dirty_io
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
                   ` (2 preceding siblings ...)
  2018-06-11 19:48 ` [PATCH 3/6] bcache: clean up bio reuse for struct moving_io Christoph Hellwig
@ 2018-06-11 19:48 ` Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 5/6] bcache: don't clone bio in bch_data_verify Christoph Hellwig
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-11 19:48 UTC (permalink / raw)
  To: Jens Axboe, Coly Li; +Cc: linux-bcache, linux-block

Instead of reinitializing the bio every time, we can call bio_reuse when
reusing it.  This also moves the private data initialization out of
dirty_init, which is renamed to suit the remaining functionality.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/writeback.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index ad45ebe1a74b..986e50f069d6 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -179,19 +179,12 @@ struct dirty_io {
 	struct bio		bio;
 };
 
-static void dirty_init(struct keybuf_key *w)
+static void dirty_init_prio(struct keybuf_key *w)
 {
 	struct dirty_io *io = w->private;
-	struct bio *bio = &io->bio;
 
-	bio_init(bio, bio->bi_inline_vecs,
-		 DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS));
 	if (!io->dc->writeback_percent)
-		bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
-
-	bio->bi_iter.bi_size	= KEY_SIZE(&w->key) << 9;
-	bio->bi_private		= w;
-	bch_bio_map(bio, NULL);
+		bio_set_prio(&io->bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
 }
 
 static void dirty_io_destructor(struct closure *cl)
@@ -285,10 +278,12 @@ static void write_dirty(struct closure *cl)
 	 * to clean up.
 	 */
 	if (KEY_DIRTY(&w->key)) {
-		dirty_init(w);
-		bio_set_op_attrs(&io->bio, REQ_OP_WRITE, 0);
+		bio_reuse(&io->bio);
+		dirty_init_prio(w);
+		io->bio.bi_opf = REQ_OP_WRITE;
 		io->bio.bi_iter.bi_sector = KEY_START(&w->key);
 		bio_set_dev(&io->bio, io->dc->bdev);
+		io->bio.bi_private	= w;
 		io->bio.bi_end_io	= dirty_endio;
 
 		/* I/O request sent to backing device */
@@ -399,13 +394,18 @@ static void read_dirty(struct cached_dev *dc)
 			io->dc		= dc;
 			io->sequence    = sequence++;
 
-			dirty_init(w);
-			bio_set_op_attrs(&io->bio, REQ_OP_READ, 0);
+			bio_init(&io->bio, io->bio.bi_inline_vecs,
+				DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS));
+			dirty_init_prio(w);
+			io->bio.bi_opf = REQ_OP_READ;
+			io->bio.bi_iter.bi_size = KEY_SIZE(&w->key) << 9;
 			io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0);
 			bio_set_dev(&io->bio,
 				    PTR_CACHE(dc->disk.c, &w->key, 0)->bdev);
+			io->bio.bi_private	= w;
 			io->bio.bi_end_io	= read_dirty_endio;
 
+			bch_bio_map(&io->bio, NULL);
 			if (bch_bio_alloc_pages(&io->bio, GFP_KERNEL))
 				goto err_free;
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 5/6] bcache: don't clone bio in bch_data_verify
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
                   ` (3 preceding siblings ...)
  2018-06-11 19:48 ` [PATCH 4/6] bcache: clean up bio reuse for struct dirty_io Christoph Hellwig
@ 2018-06-11 19:48 ` Christoph Hellwig
  2018-06-11 19:48 ` [PATCH 6/6] bcache: use bio_add_page instead of open coded bio manipulation Christoph Hellwig
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-11 19:48 UTC (permalink / raw)
  To: Jens Axboe, Coly Li; +Cc: linux-bcache, linux-block

We immediately overwrite the biovec array, so instead just allocate
a new bio and copy over the disk, sector and size.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/debug.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index d030ce3025a6..04d146711950 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -110,11 +110,15 @@ void bch_data_verify(struct cached_dev *dc, struct bio *bio)
 	struct bio_vec bv, cbv;
 	struct bvec_iter iter, citer = { 0 };
 
-	check = bio_clone_kmalloc(bio, GFP_NOIO);
+	check = bio_kmalloc(GFP_NOIO, bio_segments(bio));
 	if (!check)
 		return;
+	check->bi_disk = bio->bi_disk;
 	check->bi_opf = REQ_OP_READ;
+	check->bi_iter.bi_sector = bio->bi_iter.bi_sector;
+	check->bi_iter.bi_size = bio->bi_iter.bi_size;
 
+	bch_bio_map(check, NULL);
 	if (bch_bio_alloc_pages(check, GFP_NOIO))
 		goto out_put;
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 6/6] bcache: use bio_add_page instead of open coded bio manipulation
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
                   ` (4 preceding siblings ...)
  2018-06-11 19:48 ` [PATCH 5/6] bcache: don't clone bio in bch_data_verify Christoph Hellwig
@ 2018-06-11 19:48 ` Christoph Hellwig
  2018-06-12  4:40 ` [RFC] cleanup bcache bio handling Coly Li
  2018-06-13  9:58 ` Kent Overstreet
  7 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-11 19:48 UTC (permalink / raw)
  To: Jens Axboe, Coly Li; +Cc: linux-bcache, linux-block

Let bch_bio_alloc_pages and bch_bio_map set up the bio vec information
and bi_size.  This also means no additional bch_bio_map call with
a NULL argument is needed before bch_bio_alloc_pages.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/btree.c     | 16 +++-----
 drivers/md/bcache/debug.c     |  7 +---
 drivers/md/bcache/journal.c   |  8 +---
 drivers/md/bcache/movinggc.c  |  4 +-
 drivers/md/bcache/request.c   |  5 +--
 drivers/md/bcache/super.c     |  8 +---
 drivers/md/bcache/util.c      | 74 +++++++++++++++--------------------
 drivers/md/bcache/util.h      |  4 +-
 drivers/md/bcache/writeback.c |  4 +-
 9 files changed, 52 insertions(+), 78 deletions(-)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 2a0968c04e21..0f585fa9051f 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -298,12 +298,11 @@ static void bch_btree_node_read(struct btree *b)
 	closure_init_stack(&cl);
 
 	bio = bch_bbio_alloc(b->c);
-	bio->bi_iter.bi_size = KEY_SIZE(&b->key) << 9;
 	bio->bi_end_io	= btree_node_read_endio;
 	bio->bi_private	= &cl;
 	bio->bi_opf = REQ_OP_READ | REQ_META;
 
-	bch_bio_map(bio, b->keys.set[0].data);
+	bch_bio_map(bio, b->keys.set[0].data, KEY_SIZE(&b->key) << 9);
 
 	bch_submit_bbio(bio, b->c, &b->key, 0);
 	closure_sync(&cl);
@@ -386,19 +385,19 @@ static void do_btree_node_write(struct btree *b)
 {
 	struct closure *cl = &b->io;
 	struct bset *i = btree_bset_last(b);
+	size_t size;
 	BKEY_PADDED(key) k;
 
 	i->version	= BCACHE_BSET_VERSION;
 	i->csum		= btree_csum_set(b, i);
 
+	size = roundup(set_bytes(i), block_bytes(b->c));
+
 	BUG_ON(b->bio);
 	b->bio = bch_bbio_alloc(b->c);
-
 	b->bio->bi_end_io	= btree_node_write_endio;
 	b->bio->bi_private	= cl;
-	b->bio->bi_iter.bi_size	= roundup(set_bytes(i), block_bytes(b->c));
 	b->bio->bi_opf		= REQ_OP_WRITE | REQ_META | REQ_FUA;
-	bch_bio_map(b->bio, i);
 
 	/*
 	 * If we're appending to a leaf node, we don't technically need FUA -
@@ -419,7 +418,7 @@ static void do_btree_node_write(struct btree *b)
 	SET_PTR_OFFSET(&k.key, 0, PTR_OFFSET(&k.key, 0) +
 		       bset_sector_offset(&b->keys, i));
 
-	if (!bch_bio_alloc_pages(b->bio, __GFP_NOWARN|GFP_NOWAIT)) {
+	if (!bch_bio_alloc_pages(b->bio, size, __GFP_NOWARN | GFP_NOWAIT)) {
 		int j;
 		struct bio_vec *bv;
 		void *base = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1));
@@ -432,10 +431,7 @@ static void do_btree_node_write(struct btree *b)
 
 		continue_at(cl, btree_node_write_done, NULL);
 	} else {
-		/* No problem for multipage bvec since the bio is just allocated */
-		b->bio->bi_vcnt = 0;
-		bch_bio_map(b->bio, i);
-
+		bch_bio_map(b->bio, i, size);
 		bch_submit_bbio(b->bio, b->c, &k.key, 0);
 
 		closure_sync(cl);
diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index 04d146711950..b089597fb607 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -52,9 +52,8 @@ void bch_btree_verify(struct btree *b)
 	bio = bch_bbio_alloc(b->c);
 	bio_set_dev(bio, PTR_CACHE(b->c, &b->key, 0)->bdev);
 	bio->bi_iter.bi_sector	= PTR_OFFSET(&b->key, 0);
-	bio->bi_iter.bi_size	= KEY_SIZE(&v->key) << 9;
 	bio->bi_opf		= REQ_OP_READ | REQ_META;
-	bch_bio_map(bio, sorted);
+	bch_bio_map(bio, sorted, KEY_SIZE(&v->key) << 9);
 
 	submit_bio_wait(bio);
 	bch_bbio_free(bio, b->c);
@@ -116,10 +115,8 @@ void bch_data_verify(struct cached_dev *dc, struct bio *bio)
 	check->bi_disk = bio->bi_disk;
 	check->bi_opf = REQ_OP_READ;
 	check->bi_iter.bi_sector = bio->bi_iter.bi_sector;
-	check->bi_iter.bi_size = bio->bi_iter.bi_size;
 
-	bch_bio_map(check, NULL);
-	if (bch_bio_alloc_pages(check, GFP_NOIO))
+	if (bch_bio_alloc_pages(check, bio->bi_iter.bi_size, GFP_NOIO))
 		goto out_put;
 
 	submit_bio_wait(check);
diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
index 18f1b5239620..44c3fc5f3b0a 100644
--- a/drivers/md/bcache/journal.c
+++ b/drivers/md/bcache/journal.c
@@ -55,12 +55,10 @@ reread:		left = ca->sb.bucket_size - offset;
 		bio_reset(bio);
 		bio->bi_iter.bi_sector	= bucket + offset;
 		bio_set_dev(bio, ca->bdev);
-		bio->bi_iter.bi_size	= len << 9;
-
 		bio->bi_end_io	= journal_read_endio;
 		bio->bi_private = &cl;
 		bio_set_op_attrs(bio, REQ_OP_READ, 0);
-		bch_bio_map(bio, data);
+		bch_bio_map(bio, data, len << 9);
 
 		closure_bio_submit(ca->set, bio, &cl);
 		closure_sync(&cl);
@@ -652,13 +650,11 @@ static void journal_write_unlocked(struct closure *cl)
 		bio_reset(bio);
 		bio->bi_iter.bi_sector	= PTR_OFFSET(k, i);
 		bio_set_dev(bio, ca->bdev);
-		bio->bi_iter.bi_size = sectors << 9;
-
 		bio->bi_end_io	= journal_write_endio;
 		bio->bi_private = w;
 		bio_set_op_attrs(bio, REQ_OP_WRITE,
 				 REQ_SYNC|REQ_META|REQ_PREFLUSH|REQ_FUA);
-		bch_bio_map(bio, w->data);
+		bch_bio_map(bio, w->data, sectors << 9);
 
 		trace_bcache_journal_write(bio);
 		bio_list_add(&list, bio);
diff --git a/drivers/md/bcache/movinggc.c b/drivers/md/bcache/movinggc.c
index e59bc04c3d74..bd99ce439e30 100644
--- a/drivers/md/bcache/movinggc.c
+++ b/drivers/md/bcache/movinggc.c
@@ -159,8 +159,8 @@ static void read_moving(struct cache_set *c)
 		bio->bi_private = &io->cl;
 		bio->bi_end_io = read_moving_endio;
 
-		bch_bio_map(bio, NULL);
-		if (bch_bio_alloc_pages(bio, GFP_KERNEL))
+		if (bch_bio_alloc_pages(bio, KEY_SIZE(&io->w->key) << 9,
+				GFP_KERNEL))
 			goto err;
 
 		trace_bcache_gc_copy(&w->key);
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index cd2505b9bee9..191cc374d246 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -897,13 +897,12 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 
 	cache_bio->bi_iter.bi_sector	= miss->bi_iter.bi_sector;
 	bio_copy_dev(cache_bio, miss);
-	cache_bio->bi_iter.bi_size	= s->insert_bio_sectors << 9;
 
 	cache_bio->bi_end_io	= backing_request_endio;
 	cache_bio->bi_private	= &s->cl;
 
-	bch_bio_map(cache_bio, NULL);
-	if (bch_bio_alloc_pages(cache_bio, __GFP_NOWARN|GFP_NOIO))
+	if (bch_bio_alloc_pages(cache_bio, s->insert_bio_sectors << 9,
+			__GFP_NOWARN | GFP_NOIO))
 		goto out_put;
 
 	if (reada)
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index cd27eabbab24..2788d827fc73 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -343,12 +343,10 @@ static void uuid_io(struct cache_set *c, int op, unsigned long op_flags,
 		struct bio *bio = bch_bbio_alloc(c);
 
 		bio->bi_opf = REQ_SYNC | REQ_META | op_flags;
-		bio->bi_iter.bi_size = KEY_SIZE(k) << 9;
-
 		bio->bi_end_io	= uuid_endio;
 		bio->bi_private = cl;
 		bio_set_op_attrs(bio, op, REQ_SYNC|REQ_META|op_flags);
-		bch_bio_map(bio, c->uuids);
+		bch_bio_map(bio, c->uuids, KEY_SIZE(k) << 9);
 
 		bch_submit_bbio(bio, c, k, i);
 
@@ -503,12 +501,10 @@ static void prio_io(struct cache *ca, uint64_t bucket, int op,
 
 	bio->bi_iter.bi_sector	= bucket * ca->sb.bucket_size;
 	bio_set_dev(bio, ca->bdev);
-	bio->bi_iter.bi_size	= bucket_bytes(ca);
-
 	bio->bi_end_io	= prio_endio;
 	bio->bi_private = ca;
 	bio_set_op_attrs(bio, op, REQ_SYNC|REQ_META|op_flags);
-	bch_bio_map(bio, ca->disk_buckets);
+	bch_bio_map(bio, ca->disk_buckets, bucket_bytes(ca));
 
 	closure_bio_submit(ca->set, bio, &ca->prio);
 	closure_sync(cl);
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index fc479b026d6d..0b2f1d2bdbd6 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -219,65 +219,55 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
 		: 0;
 }
 
-/*
- * Generally it isn't good to access .bi_io_vec and .bi_vcnt directly,
- * the preferred way is bio_add_page, but in this case, bch_bio_map()
- * supposes that the bvec table is empty, so it is safe to access
- * .bi_vcnt & .bi_io_vec in this way even after multipage bvec is
- * supported.
- */
-void bch_bio_map(struct bio *bio, void *base)
+size_t bch_bio_map(struct bio *bio, void *base, size_t size)
 {
-	size_t size = bio->bi_iter.bi_size;
-	struct bio_vec *bv = bio->bi_io_vec;
-
-	BUG_ON(!bio->bi_iter.bi_size);
-	BUG_ON(bio->bi_vcnt);
-
-	bv->bv_offset = base ? offset_in_page(base) : 0;
-	goto start;
-
-	for (; size; bio->bi_vcnt++, bv++) {
-		bv->bv_offset	= 0;
-start:		bv->bv_len	= min_t(size_t, PAGE_SIZE - bv->bv_offset,
-					size);
-		if (base) {
-			bv->bv_page = is_vmalloc_addr(base)
-				? vmalloc_to_page(base)
-				: virt_to_page(base);
-
-			base += bv->bv_len;
-		}
+	while (size > 0) {
+		unsigned offset = offset_in_page(base);
+		unsigned len = min_t(size_t, PAGE_SIZE - offset, size);
+		struct page *page = is_vmalloc_addr(base) ?
+			vmalloc_to_page(base) : virt_to_page(base);
 
-		size -= bv->bv_len;
+		if (bio_add_page(bio, page, len, offset) != len)
+			break;
+		size -= len;
+		base += len;
 	}
+
+	return size;
 }
 
 /**
- * bch_bio_alloc_pages - allocates a single page for each bvec in a bio
+ * bch_bio_alloc_pages - allocates pages to back a bio
  * @bio: bio to allocate pages for
+ * @size: size of the allocation
  * @gfp_mask: flags for allocation
  *
- * Allocates pages up to @bio->bi_vcnt.
- *
  * Returns 0 on success, -ENOMEM on failure. On failure, any allocated pages are
  * freed.
  */
-int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
+int bch_bio_alloc_pages(struct bio *bio, size_t size, gfp_t gfp_mask)
 {
-	int i;
-	struct bio_vec *bv;
-
-	bio_for_each_segment_all(bv, bio, i) {
-		bv->bv_page = alloc_page(gfp_mask);
-		if (!bv->bv_page) {
-			while (--bv >= bio->bi_io_vec)
-				__free_page(bv->bv_page);
-			return -ENOMEM;
+	BUG_ON(bio->bi_iter.bi_size);
+	BUG_ON(bio->bi_vcnt);
+
+	while (size > 0) {
+		struct page *page = alloc_page(gfp_mask);
+		unsigned len = min_t(size_t, size, PAGE_SIZE);
+
+		if (!page)
+			goto free_pages;
+		if (WARN_ON_ONCE(bio_add_page(bio, page, len, 0) != len)) {
+			__free_page(page);
+			goto free_pages;
 		}
+		size -= len;
 	}
 
 	return 0;
+
+free_pages:
+	bio_free_pages(bio);
+	bio->bi_iter.bi_size = 0;
+	bio->bi_vcnt = 0;
+	return -ENOMEM;
 }
 
 /*
diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
index cced87f8eb27..85e0706d6666 100644
--- a/drivers/md/bcache/util.h
+++ b/drivers/md/bcache/util.h
@@ -554,8 +554,8 @@ static inline unsigned fract_exp_two(unsigned x, unsigned fract_bits)
 	return x;
 }
 
-void bch_bio_map(struct bio *bio, void *base);
-int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask);
+size_t bch_bio_map(struct bio *bio, void *base, size_t size);
+int bch_bio_alloc_pages(struct bio *bio, size_t size, gfp_t gfp_mask);
 
 static inline sector_t bdev_sectors(struct block_device *bdev)
 {
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 986e50f069d6..f7865697fc00 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -405,8 +405,8 @@ static void read_dirty(struct cached_dev *dc)
 			io->bio.bi_private	= w;
 			io->bio.bi_end_io	= read_dirty_endio;
 
-			bch_bio_map(&io->bio, NULL);
-			if (bch_bio_alloc_pages(&io->bio, GFP_KERNEL))
+			if (bch_bio_alloc_pages(&io->bio,
+					KEY_SIZE(&w->key) << 9, GFP_KERNEL))
 				goto err_free;
 
 			trace_bcache_writeback(&w->key);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC] cleanup bcache bio handling
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
                   ` (5 preceding siblings ...)
  2018-06-11 19:48 ` [PATCH 6/6] bcache: use bio_add_page instead of open coded bio manipulation Christoph Hellwig
@ 2018-06-12  4:40 ` Coly Li
  2018-06-13  9:58 ` Kent Overstreet
  7 siblings, 0 replies; 19+ messages in thread
From: Coly Li @ 2018-06-12  4:40 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-bcache, linux-block

On 2018/6/12 3:48 AM, Christoph Hellwig wrote:
> Hi all,
> 
> this series cleans up various places where bcache is way too intimate
> with bio internals.  This is intended as a baseline for the multi-page
> biovec work, which requires some nasty workarounds for the existing
> code.
> 
> Note that I do not have a bcache test setup, so this will require
> some careful actual testing with whatever test cases are used for
> bcache.
> 
> Also the new bio_reuse helper should be useful at least for MD raid,
> but that work is left for later.

Hi Christoph,

Thanks for the patches. I will setup a test environment to run bcache
workload with your patches.

Coly Li

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/6] block: add a bio_reuse helper
  2018-06-11 19:48 ` [PATCH 1/6] block: add a bio_reuse helper Christoph Hellwig
@ 2018-06-12  6:16   ` Kent Overstreet
  2018-06-13  7:32     ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Kent Overstreet @ 2018-06-12  6:16 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Coly Li, linux-bcache, linux-block

On Mon, Jun 11, 2018 at 09:48:01PM +0200, Christoph Hellwig wrote:
> This abstracts out a way to reuse a bio without destroying the
> data pointers.

What is the point of this? What "data pointers" does it not destroy?

> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/bio.c         | 20 ++++++++++++++++++++
>  include/linux/bio.h |  1 +
>  2 files changed, 21 insertions(+)
> 
> diff --git a/block/bio.c b/block/bio.c
> index 70c4e1b6dd45..fa1b7ab50784 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -308,6 +308,26 @@ void bio_reset(struct bio *bio)
>  }
>  EXPORT_SYMBOL(bio_reset);
>  
> +/**
> + * bio_reuse - prepare a bio for reuse
> + * @bio:	bio to reuse
> + *
> + * Prepares an already set up and possibly used bio for reuse.  Compared
> + * to bio_reset() this preserves the bio size and the layout and contents
> + * of the bio vectors.
> + */
> +void bio_reuse(struct bio *bio)
> +{
> +	unsigned int size = bio->bi_iter.bi_size;
> +	unsigned short vcnt = bio->bi_vcnt;
> +
> +	bio_reset(bio);
> +
> +	bio->bi_iter.bi_size = size;
> +	bio->bi_vcnt = vcnt;
> +}
> +EXPORT_SYMBOL_GPL(bio_reuse);
> +
>  static struct bio *__bio_chain_endio(struct bio *bio)
>  {
>  	struct bio *parent = bio->bi_private;
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index f08f5fe7bd08..15c871ab50db 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -475,6 +475,7 @@ extern void bio_init(struct bio *bio, struct bio_vec *table,
>  		     unsigned short max_vecs);
>  extern void bio_uninit(struct bio *);
>  extern void bio_reset(struct bio *);
> +void bio_reuse(struct bio *);
>  void bio_chain(struct bio *, struct bio *);
>  
>  extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/6] block: add a bio_reuse helper
  2018-06-12  6:16   ` Kent Overstreet
@ 2018-06-13  7:32     ` Christoph Hellwig
  2018-06-13  8:54       ` Kent Overstreet
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-13  7:32 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Christoph Hellwig, Jens Axboe, Coly Li, linux-bcache, linux-block

On Tue, Jun 12, 2018 at 02:16:30AM -0400, Kent Overstreet wrote:
> On Mon, Jun 11, 2018 at 09:48:01PM +0200, Christoph Hellwig wrote:
> > This abstracts out a way to reuse a bio without destroying the
> > data pointers.
> 
> What is the point of this? What "data pointers" does it not destroy?

It keeps bi_vcnt and bi_size intact in addition to bi_max_vecs/bi_io_vec
already kept by the existing bio_reset.

The benefit is that you don't need to rebuild bi_vcnt, bi_io_vec and
bi_size when reusing the bio.
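
For readers following along, the difference between the two helpers can be
modeled in standalone userspace C (a simplified sketch only; struct model_bio
and the model_* helpers below are invented for illustration, not kernel code):

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for struct bio; field names mirror the kernel's
 * but this is illustrative userspace code, not the kernel structure. */
struct model_bio {
	unsigned int	bi_size;	/* stands in for bi_iter.bi_size */
	unsigned short	bi_vcnt;	/* number of populated bio vectors */
	unsigned int	bi_opf;		/* op/flags, cleared on reset */
	void		*bi_private;	/* completion state, cleared on reset */
};

/* Models bio_reset(): wipe everything set up for the last submission. */
static void model_bio_reset(struct model_bio *bio)
{
	memset(bio, 0, sizeof(*bio));
}

/* Models bio_reuse(): like reset, but the size and vector count survive,
 * so the caller does not have to rebuild the data mapping. */
static void model_bio_reuse(struct model_bio *bio)
{
	unsigned int size = bio->bi_size;
	unsigned short vcnt = bio->bi_vcnt;

	model_bio_reset(bio);

	bio->bi_size = size;
	bio->bi_vcnt = vcnt;
}
```

After model_bio_reuse() the op, flags and private data are gone, but the
size and vector count describing the data mapping remain usable.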

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/6] block: add a bio_reuse helper
  2018-06-13  7:32     ` Christoph Hellwig
@ 2018-06-13  8:54       ` Kent Overstreet
  2018-06-13 13:59         ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Kent Overstreet @ 2018-06-13  8:54 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Coly Li, linux-bcache, linux-block

On Wed, Jun 13, 2018 at 09:32:04AM +0200, Christoph Hellwig wrote:
> On Tue, Jun 12, 2018 at 02:16:30AM -0400, Kent Overstreet wrote:
> > On Mon, Jun 11, 2018 at 09:48:01PM +0200, Christoph Hellwig wrote:
> > > This abstracts out a way to reuse a bio without destroying the
> > > data pointers.
> > 
> > What is the point of this? What "data pointers" does it not destroy?
> 
> It keeps bi_vcnt and bi_size intact in addition to bi_max_vecs/bi_io_vec
> already kept by the existing bio_reset.
> 
> The benefit is that you don't need to rebuild bi_vcnt, bi_io_vec and
> bi_size when reusing the bio.

bi_size is not immutable though; it will usually be modified by drivers when
you submit a bio.

I see what you're trying to do, but your approach is busted given the way the
block layer works today. You'd have to save bio->bi_iter before submitting the
bio and restore it afterwards for it to work.
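
The objection can be sketched in standalone C (an illustrative userspace
model; the model_* names are invented here and are not kernel API): the
driver consumes the iterator while processing the bio, so bi_size is
typically zero by completion time, and reuse only works if the caller
snapshots the iterator before submission and restores it afterwards:

```c
/* Minimal stand-in for struct bvec_iter. */
struct model_iter {
	unsigned int bi_size;	/* bytes remaining */
	unsigned int bi_idx;	/* crude stand-in for the bvec index */
};

/* What a driver effectively does via bio_advance() while processing:
 * the iterator is consumed, not preserved. */
static void model_driver_consume(struct model_iter *iter, unsigned int bytes)
{
	iter->bi_size -= bytes;
	iter->bi_idx++;
}

/* The save/restore pattern that would actually be needed for reuse. */
static void model_submit_and_restore(struct model_iter *iter)
{
	struct model_iter saved = *iter;	/* snapshot before submission */

	model_driver_consume(iter, iter->bi_size);	/* "submit" the bio */

	*iter = saved;			/* restore the iterator for reuse */
}
```

Without the snapshot, a bio_reuse() that merely copies bi_size back from the
post-submission bio would copy back zero.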

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC] cleanup bcache bio handling
  2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
                   ` (6 preceding siblings ...)
  2018-06-12  4:40 ` [RFC] cleanup bcache bio handling Coly Li
@ 2018-06-13  9:58 ` Kent Overstreet
  2018-06-13 11:06   ` Ming Lei
  7 siblings, 1 reply; 19+ messages in thread
From: Kent Overstreet @ 2018-06-13  9:58 UTC (permalink / raw)
  To: Christoph Hellwig, ming.lei
  Cc: Jens Axboe, Coly Li, linux-bcache, linux-block

On Mon, Jun 11, 2018 at 09:48:00PM +0200, Christoph Hellwig wrote:
> Hi all,
> 
> this series cleans up various places where bcache is way too intimate
> with bio internals.  This is intended as a baseline for the multi-page
> biovec work, which requires some nasty workarounds for the existing
> code.
> 
> Note that I do not have a bcache test setup, so this will require
> some careful actual testing with whatever test cases are used for
> bcache.
> 
> Also the new bio_reused helper should be useful at least for MD raid,
> but that work is left for later.

Strong nak on the patch series (except for the patch to not clone in
bch_data_verify, that patch looks fine).

Unless something has seriously changed with how multi-page bvecs work since I
started that code nothing in bcache should be broken by it - Ming, can you
confirm or deny if that's still correct?

It is true that bch_bio_map() will be suboptimal when multi page bvecs go in,
but it won't be broken. The way it's used now is really conflating two different
things, but where it's used for mapping a bio to a single buffer (i.e. not
before bio_alloc_pages) that can be switched to something that just creates a
single bvec.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC] cleanup bcache bio handling
  2018-06-13  9:58 ` Kent Overstreet
@ 2018-06-13 11:06   ` Ming Lei
  2018-06-13 13:56     ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Ming Lei @ 2018-06-13 11:06 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Christoph Hellwig, Jens Axboe, Coly Li, linux-bcache, linux-block

On Wed, Jun 13, 2018 at 05:58:01AM -0400, Kent Overstreet wrote:
> On Mon, Jun 11, 2018 at 09:48:00PM +0200, Christoph Hellwig wrote:
> > Hi all,
> > 
> > this series cleans up various places where bcache is way too intimate
> > with bio internals.  This is intended as a baseline for the multi-page
> > biovec work, which requires some nasty workarounds for the existing
> > code.
> > 
> > Note that I do not have a bcache test setup, so this will require
> > some careful actual testing with whatever test cases are used for
> > bcache.
> > 
> > Also the new bio_reused helper should be useful at least for MD raid,
> > but that work is left for later.
> 
> Strong nak on the patch series (except for the patch to not clone in
> bch_data_verify, that patch looks fine).
> 
> Unless something has seriously changed with how multi-page bvecs work since I
> started that code nothing in bcache should be broken by it - Ming, can you
> confirm or deny if that's still correct?

Yes, the multi-page bvecs implementation hasn't changed much, and I
previously ran xfstests on bcache0 with no regression observed.

> 
> It is true that bch_bio_map() will be suboptimal when multi page bvecs go in,
> but it won't be broken. The way it's used now is really conflating two different
> things, but where it's used for mapping a bio to a single buffer (i.e. not
> before bio_alloc_pages) that can be switched to something that just creates a
> single bvec.

Yes, multipage bvec shouldn't break any driver or fs.

Thanks,
Ming

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC] cleanup bcache bio handling
  2018-06-13 11:06   ` Ming Lei
@ 2018-06-13 13:56     ` Christoph Hellwig
  2018-06-13 14:54       ` Kent Overstreet
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-13 13:56 UTC (permalink / raw)
  To: Ming Lei
  Cc: Kent Overstreet, Christoph Hellwig, Jens Axboe, Coly Li,
	linux-bcache, linux-block

On Wed, Jun 13, 2018 at 07:06:41PM +0800, Ming Lei wrote:
> > before bio_alloc_pages) that can be switched to something that just creates a
> > single bvec.
> 
> Yes, multipage bvec shouldn't break any driver or fs.

It probably isn't broken; at least I didn't see assumptions of the same
number of segments.  However, the current poking into the bio internals is
a bad idea for a couple of reasons.  First, it requires touching
bcache for any of these changes; second, it won't get merging of
pages into a single bio segment for bios built by bch_bio_map or
bch_bio_alloc_pages; and third, bcache is the last user of
bio_for_each_chunk_all in your branch, which I'd like to kill off to
keep the number of iterators down.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/6] block: add a bio_reuse helper
  2018-06-13  8:54       ` Kent Overstreet
@ 2018-06-13 13:59         ` Christoph Hellwig
  2018-06-13 14:49           ` Kent Overstreet
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-13 13:59 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Christoph Hellwig, Jens Axboe, Coly Li, linux-bcache, linux-block

On Wed, Jun 13, 2018 at 04:54:41AM -0400, Kent Overstreet wrote:
> bi_size is not immutable though, it will usually be modified by drivers when you
> submit a bio.
> 
> I see what you're trying to do, but your approach is busted given the way the
> block layer works today. You'd have to save bio->bi_iter before submitting the
> bio and restore it afterwards for it to work.

For bi_size, agreed this needs fixing.  bi_sector is always restored
by the callers already, and the remaining fields are zeroed by bio_reset,
which does the right thing.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/6] block: add a bio_reuse helper
  2018-06-13 13:59         ` Christoph Hellwig
@ 2018-06-13 14:49           ` Kent Overstreet
  0 siblings, 0 replies; 19+ messages in thread
From: Kent Overstreet @ 2018-06-13 14:49 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Coly Li, linux-bcache, linux-block

On Wed, Jun 13, 2018 at 03:59:15PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 13, 2018 at 04:54:41AM -0400, Kent Overstreet wrote:
> > bi_size is not immutable though, it will usually be modified by drivers when you
> > submit a bio.
> > 
> > I see what you're trying to do, but your approach is busted given the way the
> > block layer works today. You'd have to save bio->bi_iter before submitting the
> > bio and restore it afterwards for it to work.
> 
> For bi_size, agreed this needs fixing.  bi_sector is always restored
> already by the callers, and the remaining fields are zeroed by bio_reset,
> which does the right thing.

This still shouldn't be a new helper. If the caller is restoring bi_sector,
it can restore bi_size as well, and restoring only part of bi_iter should not
be encouraged because it's not safe in general. It is quite reasonable to want
to restore a bio to its pre-submission state (e.g. for bouncing), and what
you're doing is definitely _not_ safe in that general case.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC] cleanup bcache bio handling
  2018-06-13 13:56     ` Christoph Hellwig
@ 2018-06-13 14:54       ` Kent Overstreet
  2018-06-14  1:55         ` Ming Lei
  0 siblings, 1 reply; 19+ messages in thread
From: Kent Overstreet @ 2018-06-13 14:54 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Ming Lei, Jens Axboe, Coly Li, linux-bcache, linux-block

On Wed, Jun 13, 2018 at 03:56:32PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 13, 2018 at 07:06:41PM +0800, Ming Lei wrote:
> > > before bio_alloc_pages) that can be switched to something that just creates a
> > > single bvec.
> > 
> > Yes, multipage bvec shouldn't break any driver or fs.
> 
> It probably isn't broken; at least I didn't see assumptions of the same
> number of segments.  However, the current poking into the bio internals is
> a bad idea for a couple of reasons.  First, it requires touching
> bcache for any of these changes; second, it won't get merging of
> pages into a single bio segment for bios built by bch_bio_map or
> bch_bio_alloc_pages; and third, bcache is the last user of
> bio_for_each_chunk_all in your branch, which I'd like to kill off to
> keep the number of iterators down.

Agreed about bio_for_each_chunk_all(), but I just looked at the patch that
introduces them and it looks to me like there's no need, they should just be
bio_for_each_segment_all().

Converting bch_bio_map() and bch_bio_alloc_pages() to bio_add_page() is fine by
me, but your patch series doesn't do any of those actual cleanups: your
description of the patch series does not actually match what it does.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC] cleanup bcache bio handling
  2018-06-13 14:54       ` Kent Overstreet
@ 2018-06-14  1:55         ` Ming Lei
  2018-06-14  7:20           ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Ming Lei @ 2018-06-14  1:55 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Christoph Hellwig, Jens Axboe, Coly Li, linux-bcache, linux-block

On Wed, Jun 13, 2018 at 10:54:09AM -0400, Kent Overstreet wrote:
> On Wed, Jun 13, 2018 at 03:56:32PM +0200, Christoph Hellwig wrote:
> > On Wed, Jun 13, 2018 at 07:06:41PM +0800, Ming Lei wrote:
> > > > before bio_alloc_pages) that can be switched to something that just creates a
> > > > single bvec.
> > > 
> > > Yes, multipage bvec shouldn't break any driver or fs.
> > 
> > It probably isn't broken; at least I didn't see assumptions of the same
> > number of segments.  However, the current poking into the bio internals is
> > a bad idea for a couple of reasons.  First, it requires touching
> > bcache for any of these changes; second, it won't get merging of
> > pages into a single bio segment for bios built by bch_bio_map or
> > bch_bio_alloc_pages; and third, bcache is the last user of
> > bio_for_each_chunk_all in your branch, which I'd like to kill off to
> > keep the number of iterators down.
> 
> Agreed about bio_for_each_chunk_all(), but I just looked at the patch that
> introduces them and it looks to me like there's no need, they should just be
> bio_for_each_segment_all().

Right now we can't change the vector with bio_for_each_segment_all(), so
bio_for_each_chunk_all() has to be used. So it looks like it makes sense to use
bio_add_page() to remove bio_for_each_chunk_all().


Thanks,
Ming

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC] cleanup bcache bio handling
  2018-06-14  1:55         ` Ming Lei
@ 2018-06-14  7:20           ` Christoph Hellwig
  0 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2018-06-14  7:20 UTC (permalink / raw)
  To: Ming Lei
  Cc: Kent Overstreet, Christoph Hellwig, Jens Axboe, Coly Li,
	linux-bcache, linux-block

On Thu, Jun 14, 2018 at 09:55:32AM +0800, Ming Lei wrote:
> > Agreed about bio_for_each_chunk_all(), but I just looked at the patch that
> > introduces them and it looks to me like there's no need, they should just be
> > bio_for_each_segment_all().
> 
> Right now we can't change the vector with bio_for_each_segment_all(), so
> bio_for_each_chunk_all() has to be used. So it looks like it makes sense to use
> bio_add_page() to remove bio_for_each_chunk_all().

For now I'd suggest we just open code bio_for_each_segment_all in bcache
as a first step to get the multipage bio vec work rolling.  The code
already pokes deep into bio internals, so having that iteration right
next to it isn't really an issue.  In the long run I'd still like to
see it cleaned up, though.

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2018-06-14  7:20 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-11 19:48 [RFC] cleanup bcache bio handling Christoph Hellwig
2018-06-11 19:48 ` [PATCH 1/6] block: add a bio_reuse helper Christoph Hellwig
2018-06-12  6:16   ` Kent Overstreet
2018-06-13  7:32     ` Christoph Hellwig
2018-06-13  8:54       ` Kent Overstreet
2018-06-13 13:59         ` Christoph Hellwig
2018-06-13 14:49           ` Kent Overstreet
2018-06-11 19:48 ` [PATCH 2/6] bcache: use bio_reuse instead of bio_reinit where applicable Christoph Hellwig
2018-06-11 19:48 ` [PATCH 3/6] bcache: clean up bio reuse for struct moving_io Christoph Hellwig
2018-06-11 19:48 ` [PATCH 4/6] bcache: clean up bio reuse for struct dirty_io Christoph Hellwig
2018-06-11 19:48 ` [PATCH 5/6] bcache: don't clone bio in bch_data_verify Christoph Hellwig
2018-06-11 19:48 ` [PATCH 6/6] bcache: use bio_add_page instead of open coded bio manipulation Christoph Hellwig
2018-06-12  4:40 ` [RFC] cleanup bcache bio handling Coly Li
2018-06-13  9:58 ` Kent Overstreet
2018-06-13 11:06   ` Ming Lei
2018-06-13 13:56     ` Christoph Hellwig
2018-06-13 14:54       ` Kent Overstreet
2018-06-14  1:55         ` Ming Lei
2018-06-14  7:20           ` Christoph Hellwig
