* [PATCH v3 0/3]  btrfs: btrfs_bio and btrfs_io_bio rename
@ 2021-09-15  7:17 Qu Wenruo
  2021-09-15  7:17 ` [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context Qu Wenruo
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Qu Wenruo @ 2021-09-15  7:17 UTC (permalink / raw)
  To: linux-btrfs

The branch can be fetched from GitHub, and is the preferred way to grab
the code, as this patchset changes quite a lot of it:
https://github.com/adam900710/linux/tree/chunk_refactor

There are two structures, btrfs_io_bio and btrfs_bio, which have very
similar names but completely different meanings.

Btrfs_io_bio mostly works at the logical bytenr layer (its
bio->bi_iter.bi_sector points to a btrfs logical bytenr), and just
carries extra info like csum and mirror_num.

In fact, btrfs_io_bio is the most utilized bio, as all data/metadata
IO goes through it.

Btrfs_bio, on the other hand, is purely a helper structure for mirrored
IO submission (used by SINGLE/DUP/RAID1/RAID10), and also carries the
stripe maps for RAID56 (which does not use this structure for IO
submission tracking).
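
To make the confusion concrete, here is an abridged sketch of the two
structures as they exist before this series (field lists trimmed and
ordering approximate; only members mentioned above or touched by the
series are shown, plus the embedded struct bio):

/* Works at the logical bytenr layer; wraps every data/metadata bio. */
struct btrfs_io_bio {
	unsigned int mirror_num;
	struct btrfs_device *device;
	u8 *csum;
	/* ... */
	struct bio bio;			/* must come last */
};

/* Helper for mirrored IO submission, plus the RAID56 stripe map. */
struct btrfs_bio {
	refcount_t refs;
	atomic_t stripes_pending;
	struct btrfs_fs_info *fs_info;
	u64 map_type;
	bio_end_io_t *end_io;		/* original endio, saved at submit */
	struct bio *orig_bio;
	void *private;
	atomic_t error;
	int max_errors;
	int num_stripes;
	int mirror_num;
	int num_tgtdevs;
	int *tgtdev_map;
	u64 *raid_map;			/* only meaningful for RAID56 */
	struct btrfs_bio_stripe stripes[];
};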

Such naming is completely anti-human.

So this patchset will do the following renaming:

- btrfs_bio -> btrfs_io_context
  Since it's not really used by all bios (only mirrored profiles utilize
  it), and it carries extra info for RAID56, it's not proper to name it
  with a _bio suffix.

  Later we can integrate a btrfs_io_context pointer into the new
  btrfs_bio.

- btrfs_io_bio -> btrfs_logical_bio
  The name "btrfs_bio" is intentionally not reused, as that could cause
  confusion for later backports (see the call-site sketch below).
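
For reference, this is roughly how a typical call site changes after the
series (condensed from the btrfs_discard_extent() hunk in patch 1; the
btrfs_io_bio -> btrfs_logical_bio side lands in patch 3 and is not shown):

	/* Before: "bbio" here is an IO context, not a bio at all. */
	struct btrfs_bio *bbio = NULL;

	ret = btrfs_map_block(fs_info, BTRFS_MAP_DISCARD, cur,
			      &num_bytes, &bbio, 0);
	/* ... walk bbio->stripes ... */
	btrfs_put_bbio(bbio);

	/* After: the name now says what the structure actually is. */
	struct btrfs_io_context *bioc = NULL;

	ret = btrfs_map_block(fs_info, BTRFS_MAP_DISCARD, cur,
			      &num_bytes, &bioc, 0);
	/* ... walk bioc->stripes ... */
	btrfs_put_bioc(bioc);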

Changelog:
v2:
- Rename btrfs_bio to btrfs_io_context (bioc)
- Rename btrfs_io_bio to btrfs_bio
  Both suggested by Nikolay

v3:
- Fix whitespace problems
  Caused by "dwi" vim commands

- Update several modified comments

- Rename btrfs_io_bio to btrfs_logical_bio
  To avoid backport confusion.


Qu Wenruo (3):
  btrfs: rename btrfs_bio to btrfs_io_context
  btrfs: remove btrfs_bio_alloc() helper
  btrfs: rename struct btrfs_io_bio to btrfs_logical_bio

 fs/btrfs/check-integrity.c |   4 +-
 fs/btrfs/compression.c     |  20 +--
 fs/btrfs/ctree.h           |   6 +-
 fs/btrfs/disk-io.c         |   2 +-
 fs/btrfs/disk-io.h         |   2 +-
 fs/btrfs/extent-tree.c     |  19 ++-
 fs/btrfs/extent_io.c       | 116 ++++++++--------
 fs/btrfs/extent_io.h       |   8 +-
 fs/btrfs/extent_map.c      |   4 +-
 fs/btrfs/file-item.c       |  12 +-
 fs/btrfs/inode.c           |  50 +++----
 fs/btrfs/raid56.c          | 135 +++++++++---------
 fs/btrfs/raid56.h          |   8 +-
 fs/btrfs/reada.c           |  26 ++--
 fs/btrfs/scrub.c           | 130 +++++++++---------
 fs/btrfs/volumes.c         | 272 ++++++++++++++++++-------------------
 fs/btrfs/volumes.h         |  63 +++++----
 fs/btrfs/zoned.c           |  16 +--
 18 files changed, 455 insertions(+), 438 deletions(-)

-- 
2.33.0



* [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context
  2021-09-15  7:17 [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename Qu Wenruo
@ 2021-09-15  7:17 ` Qu Wenruo
  2021-09-17 11:19   ` David Sterba
  2021-09-15  7:17 ` [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper Qu Wenruo
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 22+ messages in thread
From: Qu Wenruo @ 2021-09-15  7:17 UTC (permalink / raw)
  To: linux-btrfs

The structure btrfs_bio is used at two different sites:

- bio->bi_private for mirror based profiles
  For those profiles (SINGLE/DUP/RAID1*/RAID10), this structure records
  how many mirrors are still pending, and saves the original endio
  function of the bio.

- RAID56 code
  In that case, RAID56 only utilizes the stripe info, and no longer uses
  it to track the pending mirrors.

So btrfs_bio is not always bound to a bio, and it carries more IO
context info, thus renaming it will make the naming less confusing.
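
As a reminder of how the mirror accounting works, a minimal sketch
condensed from the btrfs_end_bio()/btrfs_end_bioc() hunks below, using
the new names (device stat updates, the bio counter, mirror_num
propagation and the cloned-bio bio_put() are omitted):

static void btrfs_end_bio(struct bio *bio)
{
	struct btrfs_io_context *bioc = bio->bi_private;

	if (bio->bi_status)
		atomic_inc(&bioc->error);

	/* Last pending stripe: hand the original bio back to the caller. */
	if (atomic_dec_and_test(&bioc->stripes_pending)) {
		bio = bioc->orig_bio;
		bio->bi_status = (atomic_read(&bioc->error) > bioc->max_errors) ?
				 BLK_STS_IOERR : BLK_STS_OK;
		/* Restore the endio/private saved at submission time. */
		bio->bi_private = bioc->private;
		bio->bi_end_io = bioc->end_io;
		bio_endio(bio);
		btrfs_put_bioc(bioc);
	}
}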

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/check-integrity.c |   2 +-
 fs/btrfs/extent-tree.c     |  19 ++-
 fs/btrfs/extent_io.c       |  18 +--
 fs/btrfs/extent_map.c      |   4 +-
 fs/btrfs/raid56.c          | 127 +++++++++---------
 fs/btrfs/raid56.h          |   8 +-
 fs/btrfs/reada.c           |  26 ++--
 fs/btrfs/scrub.c           | 116 ++++++++--------
 fs/btrfs/volumes.c         | 266 ++++++++++++++++++-------------------
 fs/btrfs/volumes.h         |  38 ++++--
 fs/btrfs/zoned.c           |  16 +--
 11 files changed, 326 insertions(+), 314 deletions(-)

diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 86816088927f..81b11124b67a 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -1455,7 +1455,7 @@ static int btrfsic_map_block(struct btrfsic_state *state, u64 bytenr, u32 len,
 	struct btrfs_fs_info *fs_info = state->fs_info;
 	int ret;
 	u64 length;
-	struct btrfs_bio *multi = NULL;
+	struct btrfs_io_context *multi = NULL;
 	struct btrfs_device *device;
 
 	length = len;
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 7d03ffa04bce..704ee786acb5 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -1266,7 +1266,7 @@ static int btrfs_issue_discard(struct block_device *bdev, u64 start, u64 len,
 	return ret;
 }
 
-static int do_discard_extent(struct btrfs_bio_stripe *stripe, u64 *bytes)
+static int do_discard_extent(struct btrfs_io_stripe *stripe, u64 *bytes)
 {
 	struct btrfs_device *dev = stripe->dev;
 	struct btrfs_fs_info *fs_info = dev->fs_info;
@@ -1313,22 +1313,21 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr,
 	u64 discarded_bytes = 0;
 	u64 end = bytenr + num_bytes;
 	u64 cur = bytenr;
-	struct btrfs_bio *bbio = NULL;
-
+	struct btrfs_io_context *bioc = NULL;
 
 	/*
-	 * Avoid races with device replace and make sure our bbio has devices
+	 * Avoid races with device replace and make sure our bioc has devices
 	 * associated to its stripes that don't go away while we are discarding.
 	 */
 	btrfs_bio_counter_inc_blocked(fs_info);
 	while (cur < end) {
-		struct btrfs_bio_stripe *stripe;
+		struct btrfs_io_stripe *stripe;
 		int i;
 
 		num_bytes = end - cur;
 		/* Tell the block device(s) that the sectors can be discarded */
 		ret = btrfs_map_block(fs_info, BTRFS_MAP_DISCARD, cur,
-				      &num_bytes, &bbio, 0);
+				      &num_bytes, &bioc, 0);
 		/*
 		 * Error can be -ENOMEM, -ENOENT (no such chunk mapping) or
 		 * -EOPNOTSUPP. For any such error, @num_bytes is not updated,
@@ -1337,8 +1336,8 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr,
 		if (ret < 0)
 			goto out;
 
-		stripe = bbio->stripes;
-		for (i = 0; i < bbio->num_stripes; i++, stripe++) {
+		stripe = bioc->stripes;
+		for (i = 0; i < bioc->num_stripes; i++, stripe++) {
 			u64 bytes;
 			struct btrfs_device *device = stripe->dev;
 
@@ -1361,7 +1360,7 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr,
 				 * And since there are two loops, explicitly
 				 * go to out to avoid confusion.
 				 */
-				btrfs_put_bbio(bbio);
+				btrfs_put_bioc(bioc);
 				goto out;
 			}
 
@@ -1372,7 +1371,7 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr,
 			 */
 			ret = 0;
 		}
-		btrfs_put_bbio(bbio);
+		btrfs_put_bioc(bioc);
 		cur += num_bytes;
 	}
 out:
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 8959ac580f46..1aed03ef5f49 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2290,7 +2290,7 @@ static int repair_io_failure(struct btrfs_fs_info *fs_info, u64 ino, u64 start,
 	struct btrfs_device *dev;
 	u64 map_length = 0;
 	u64 sector;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	int ret;
 
 	ASSERT(!(fs_info->sb->s_flags & SB_RDONLY));
@@ -2304,7 +2304,7 @@ static int repair_io_failure(struct btrfs_fs_info *fs_info, u64 ino, u64 start,
 	map_length = length;
 
 	/*
-	 * Avoid races with device replace and make sure our bbio has devices
+	 * Avoid races with device replace and make sure our bioc has devices
 	 * associated to its stripes that don't go away while we are doing the
 	 * read repair operation.
 	 */
@@ -2317,28 +2317,28 @@ static int repair_io_failure(struct btrfs_fs_info *fs_info, u64 ino, u64 start,
 		 * stripe's dev and sector.
 		 */
 		ret = btrfs_map_block(fs_info, BTRFS_MAP_READ, logical,
-				      &map_length, &bbio, 0);
+				      &map_length, &bioc, 0);
 		if (ret) {
 			btrfs_bio_counter_dec(fs_info);
 			bio_put(bio);
 			return -EIO;
 		}
-		ASSERT(bbio->mirror_num == 1);
+		ASSERT(bioc->mirror_num == 1);
 	} else {
 		ret = btrfs_map_block(fs_info, BTRFS_MAP_WRITE, logical,
-				      &map_length, &bbio, mirror_num);
+				      &map_length, &bioc, mirror_num);
 		if (ret) {
 			btrfs_bio_counter_dec(fs_info);
 			bio_put(bio);
 			return -EIO;
 		}
-		BUG_ON(mirror_num != bbio->mirror_num);
+		BUG_ON(mirror_num != bioc->mirror_num);
 	}
 
-	sector = bbio->stripes[bbio->mirror_num - 1].physical >> 9;
+	sector = bioc->stripes[bioc->mirror_num - 1].physical >> 9;
 	bio->bi_iter.bi_sector = sector;
-	dev = bbio->stripes[bbio->mirror_num - 1].dev;
-	btrfs_put_bbio(bbio);
+	dev = bioc->stripes[bioc->mirror_num - 1].dev;
+	btrfs_put_bioc(bioc);
 	if (!dev || !dev->bdev ||
 	    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state)) {
 		btrfs_bio_counter_dec(fs_info);
diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
index 4a8e02f7b6c7..5a36add21305 100644
--- a/fs/btrfs/extent_map.c
+++ b/fs/btrfs/extent_map.c
@@ -360,7 +360,7 @@ static void extent_map_device_set_bits(struct extent_map *em, unsigned bits)
 	int i;
 
 	for (i = 0; i < map->num_stripes; i++) {
-		struct btrfs_bio_stripe *stripe = &map->stripes[i];
+		struct btrfs_io_stripe *stripe = &map->stripes[i];
 		struct btrfs_device *device = stripe->dev;
 
 		set_extent_bits_nowait(&device->alloc_state, stripe->physical,
@@ -375,7 +375,7 @@ static void extent_map_device_clear_bits(struct extent_map *em, unsigned bits)
 	int i;
 
 	for (i = 0; i < map->num_stripes; i++) {
-		struct btrfs_bio_stripe *stripe = &map->stripes[i];
+		struct btrfs_io_stripe *stripe = &map->stripes[i];
 		struct btrfs_device *device = stripe->dev;
 
 		__clear_extent_bit(&device->alloc_state, stripe->physical,
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index d8d268ca8aa7..96c149416f99 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -61,7 +61,7 @@ enum btrfs_rbio_ops {
 
 struct btrfs_raid_bio {
 	struct btrfs_fs_info *fs_info;
-	struct btrfs_bio *bbio;
+	struct btrfs_io_context *bioc;
 
 	/* while we're doing rmw on a stripe
 	 * we put it into a hash table so we can
@@ -271,7 +271,7 @@ static void cache_rbio_pages(struct btrfs_raid_bio *rbio)
  */
 static int rbio_bucket(struct btrfs_raid_bio *rbio)
 {
-	u64 num = rbio->bbio->raid_map[0];
+	u64 num = rbio->bioc->raid_map[0];
 
 	/*
 	 * we shift down quite a bit.  We're using byte
@@ -559,8 +559,7 @@ static int rbio_can_merge(struct btrfs_raid_bio *last,
 	    test_bit(RBIO_CACHE_BIT, &cur->flags))
 		return 0;
 
-	if (last->bbio->raid_map[0] !=
-	    cur->bbio->raid_map[0])
+	if (last->bioc->raid_map[0] != cur->bioc->raid_map[0])
 		return 0;
 
 	/* we can't merge with different operations */
@@ -673,7 +672,7 @@ static noinline int lock_stripe_add(struct btrfs_raid_bio *rbio)
 
 	spin_lock_irqsave(&h->lock, flags);
 	list_for_each_entry(cur, &h->hash_list, hash_list) {
-		if (cur->bbio->raid_map[0] != rbio->bbio->raid_map[0])
+		if (cur->bioc->raid_map[0] != rbio->bioc->raid_map[0])
 			continue;
 
 		spin_lock(&cur->bio_list_lock);
@@ -838,7 +837,7 @@ static void __free_raid_bio(struct btrfs_raid_bio *rbio)
 		}
 	}
 
-	btrfs_put_bbio(rbio->bbio);
+	btrfs_put_bioc(rbio->bioc);
 	kfree(rbio);
 }
 
@@ -906,7 +905,7 @@ static void raid_write_end_io(struct bio *bio)
 
 	/* OK, we have read all the stripes we need to. */
 	max_errors = (rbio->operation == BTRFS_RBIO_PARITY_SCRUB) ?
-		     0 : rbio->bbio->max_errors;
+		     0 : rbio->bioc->max_errors;
 	if (atomic_read(&rbio->error) > max_errors)
 		err = BLK_STS_IOERR;
 
@@ -961,12 +960,12 @@ static unsigned long rbio_nr_pages(unsigned long stripe_len, int nr_stripes)
  * this does not allocate any pages for rbio->pages.
  */
 static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info,
-					 struct btrfs_bio *bbio,
+					 struct btrfs_io_context *bioc,
 					 u64 stripe_len)
 {
 	struct btrfs_raid_bio *rbio;
 	int nr_data = 0;
-	int real_stripes = bbio->num_stripes - bbio->num_tgtdevs;
+	int real_stripes = bioc->num_stripes - bioc->num_tgtdevs;
 	int num_pages = rbio_nr_pages(stripe_len, real_stripes);
 	int stripe_npages = DIV_ROUND_UP(stripe_len, PAGE_SIZE);
 	void *p;
@@ -987,7 +986,7 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info,
 	spin_lock_init(&rbio->bio_list_lock);
 	INIT_LIST_HEAD(&rbio->stripe_cache);
 	INIT_LIST_HEAD(&rbio->hash_list);
-	rbio->bbio = bbio;
+	rbio->bioc = bioc;
 	rbio->fs_info = fs_info;
 	rbio->stripe_len = stripe_len;
 	rbio->nr_pages = num_pages;
@@ -1015,9 +1014,9 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info,
 	CONSUME_ALLOC(rbio->finish_pbitmap, BITS_TO_LONGS(stripe_npages));
 #undef  CONSUME_ALLOC
 
-	if (bbio->map_type & BTRFS_BLOCK_GROUP_RAID5)
+	if (bioc->map_type & BTRFS_BLOCK_GROUP_RAID5)
 		nr_data = real_stripes - 1;
-	else if (bbio->map_type & BTRFS_BLOCK_GROUP_RAID6)
+	else if (bioc->map_type & BTRFS_BLOCK_GROUP_RAID6)
 		nr_data = real_stripes - 2;
 	else
 		BUG();
@@ -1077,10 +1076,10 @@ static int rbio_add_io_page(struct btrfs_raid_bio *rbio,
 	struct bio *last = bio_list->tail;
 	int ret;
 	struct bio *bio;
-	struct btrfs_bio_stripe *stripe;
+	struct btrfs_io_stripe *stripe;
 	u64 disk_start;
 
-	stripe = &rbio->bbio->stripes[stripe_nr];
+	stripe = &rbio->bioc->stripes[stripe_nr];
 	disk_start = stripe->physical + (page_index << PAGE_SHIFT);
 
 	/* if the device is missing, just fail this stripe */
@@ -1155,7 +1154,7 @@ static void index_rbio_pages(struct btrfs_raid_bio *rbio)
 		int i = 0;
 
 		start = bio->bi_iter.bi_sector << 9;
-		stripe_offset = start - rbio->bbio->raid_map[0];
+		stripe_offset = start - rbio->bioc->raid_map[0];
 		page_index = stripe_offset >> PAGE_SHIFT;
 
 		if (bio_flagged(bio, BIO_CLONED))
@@ -1179,7 +1178,7 @@ static void index_rbio_pages(struct btrfs_raid_bio *rbio)
  */
 static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 {
-	struct btrfs_bio *bbio = rbio->bbio;
+	struct btrfs_io_context *bioc = rbio->bioc;
 	void **pointers = rbio->finish_pointers;
 	int nr_data = rbio->nr_data;
 	int stripe;
@@ -1284,11 +1283,11 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 		}
 	}
 
-	if (likely(!bbio->num_tgtdevs))
+	if (likely(!bioc->num_tgtdevs))
 		goto write_data;
 
 	for (stripe = 0; stripe < rbio->real_stripes; stripe++) {
-		if (!bbio->tgtdev_map[stripe])
+		if (!bioc->tgtdev_map[stripe])
 			continue;
 
 		for (pagenr = 0; pagenr < rbio->stripe_npages; pagenr++) {
@@ -1302,7 +1301,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 			}
 
 			ret = rbio_add_io_page(rbio, &bio_list, page,
-					       rbio->bbio->tgtdev_map[stripe],
+					       rbio->bioc->tgtdev_map[stripe],
 					       pagenr, rbio->stripe_len);
 			if (ret)
 				goto cleanup;
@@ -1339,12 +1338,12 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
 {
 	u64 physical = bio->bi_iter.bi_sector;
 	int i;
-	struct btrfs_bio_stripe *stripe;
+	struct btrfs_io_stripe *stripe;
 
 	physical <<= 9;
 
-	for (i = 0; i < rbio->bbio->num_stripes; i++) {
-		stripe = &rbio->bbio->stripes[i];
+	for (i = 0; i < rbio->bioc->num_stripes; i++) {
+		stripe = &rbio->bioc->stripes[i];
 		if (in_range(physical, stripe->physical, rbio->stripe_len) &&
 		    stripe->dev->bdev && bio->bi_bdev == stripe->dev->bdev) {
 			return i;
@@ -1365,7 +1364,7 @@ static int find_logical_bio_stripe(struct btrfs_raid_bio *rbio,
 	int i;
 
 	for (i = 0; i < rbio->nr_data; i++) {
-		u64 stripe_start = rbio->bbio->raid_map[i];
+		u64 stripe_start = rbio->bioc->raid_map[i];
 
 		if (in_range(logical, stripe_start, rbio->stripe_len))
 			return i;
@@ -1456,7 +1455,7 @@ static void raid_rmw_end_io(struct bio *bio)
 	if (!atomic_dec_and_test(&rbio->stripes_pending))
 		return;
 
-	if (atomic_read(&rbio->error) > rbio->bbio->max_errors)
+	if (atomic_read(&rbio->error) > rbio->bioc->max_errors)
 		goto cleanup;
 
 	/*
@@ -1538,8 +1537,8 @@ static int raid56_rmw_stripe(struct btrfs_raid_bio *rbio)
 	}
 
 	/*
-	 * the bbio may be freed once we submit the last bio.  Make sure
-	 * not to touch it after that
+	 * The bioc may be freed once we submit the last bio. Make sure not to
+	 * touch it after that.
 	 */
 	atomic_set(&rbio->stripes_pending, bios_to_read);
 	while ((bio = bio_list_pop(&bio_list))) {
@@ -1720,16 +1719,16 @@ static void btrfs_raid_unplug(struct blk_plug_cb *cb, bool from_schedule)
  * our main entry point for writes from the rest of the FS.
  */
 int raid56_parity_write(struct btrfs_fs_info *fs_info, struct bio *bio,
-			struct btrfs_bio *bbio, u64 stripe_len)
+			struct btrfs_io_context *bioc, u64 stripe_len)
 {
 	struct btrfs_raid_bio *rbio;
 	struct btrfs_plug_cb *plug = NULL;
 	struct blk_plug_cb *cb;
 	int ret;
 
-	rbio = alloc_rbio(fs_info, bbio, stripe_len);
+	rbio = alloc_rbio(fs_info, bioc, stripe_len);
 	if (IS_ERR(rbio)) {
-		btrfs_put_bbio(bbio);
+		btrfs_put_bioc(bioc);
 		return PTR_ERR(rbio);
 	}
 	bio_list_add(&rbio->bio_list, bio);
@@ -1842,7 +1841,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 		}
 
 		/* all raid6 handling here */
-		if (rbio->bbio->map_type & BTRFS_BLOCK_GROUP_RAID6) {
+		if (rbio->bioc->map_type & BTRFS_BLOCK_GROUP_RAID6) {
 			/*
 			 * single failure, rebuild from parity raid5
 			 * style
@@ -1874,8 +1873,8 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 			 * here due to a crc mismatch and we can't give them the
 			 * data they want
 			 */
-			if (rbio->bbio->raid_map[failb] == RAID6_Q_STRIPE) {
-				if (rbio->bbio->raid_map[faila] ==
+			if (rbio->bioc->raid_map[failb] == RAID6_Q_STRIPE) {
+				if (rbio->bioc->raid_map[faila] ==
 				    RAID5_P_STRIPE) {
 					err = BLK_STS_IOERR;
 					goto cleanup;
@@ -1887,7 +1886,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 				goto pstripe;
 			}
 
-			if (rbio->bbio->raid_map[failb] == RAID5_P_STRIPE) {
+			if (rbio->bioc->raid_map[failb] == RAID5_P_STRIPE) {
 				raid6_datap_recov(rbio->real_stripes,
 						  PAGE_SIZE, faila, pointers);
 			} else {
@@ -2006,7 +2005,7 @@ static void raid_recover_end_io(struct bio *bio)
 	if (!atomic_dec_and_test(&rbio->stripes_pending))
 		return;
 
-	if (atomic_read(&rbio->error) > rbio->bbio->max_errors)
+	if (atomic_read(&rbio->error) > rbio->bioc->max_errors)
 		rbio_orig_end_io(rbio, BLK_STS_IOERR);
 	else
 		__raid_recover_end_io(rbio);
@@ -2074,7 +2073,7 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
 		 * were up to date, or we might have no bios to read because
 		 * the devices were gone.
 		 */
-		if (atomic_read(&rbio->error) <= rbio->bbio->max_errors) {
+		if (atomic_read(&rbio->error) <= rbio->bioc->max_errors) {
 			__raid_recover_end_io(rbio);
 			return 0;
 		} else {
@@ -2083,8 +2082,8 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
 	}
 
 	/*
-	 * the bbio may be freed once we submit the last bio.  Make sure
-	 * not to touch it after that
+	 * The bioc may be freed once we submit the last bio. Make sure not to
+	 * touch it after that.
 	 */
 	atomic_set(&rbio->stripes_pending, bios_to_read);
 	while ((bio = bio_list_pop(&bio_list))) {
@@ -2117,21 +2116,21 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
  * of the drive.
  */
 int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
-			  struct btrfs_bio *bbio, u64 stripe_len,
+			  struct btrfs_io_context *bioc, u64 stripe_len,
 			  int mirror_num, int generic_io)
 {
 	struct btrfs_raid_bio *rbio;
 	int ret;
 
 	if (generic_io) {
-		ASSERT(bbio->mirror_num == mirror_num);
+		ASSERT(bioc->mirror_num == mirror_num);
 		btrfs_io_bio(bio)->mirror_num = mirror_num;
 	}
 
-	rbio = alloc_rbio(fs_info, bbio, stripe_len);
+	rbio = alloc_rbio(fs_info, bioc, stripe_len);
 	if (IS_ERR(rbio)) {
 		if (generic_io)
-			btrfs_put_bbio(bbio);
+			btrfs_put_bioc(bioc);
 		return PTR_ERR(rbio);
 	}
 
@@ -2142,11 +2141,11 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
 	rbio->faila = find_logical_bio_stripe(rbio, bio);
 	if (rbio->faila == -1) {
 		btrfs_warn(fs_info,
-	"%s could not find the bad stripe in raid56 so that we cannot recover any more (bio has logical %llu len %llu, bbio has map_type %llu)",
+	"%s could not find the bad stripe in raid56 so that we cannot recover any more (bio has logical %llu len %llu, bioc has map_type %llu)",
 			   __func__, bio->bi_iter.bi_sector << 9,
-			   (u64)bio->bi_iter.bi_size, bbio->map_type);
+			   (u64)bio->bi_iter.bi_size, bioc->map_type);
 		if (generic_io)
-			btrfs_put_bbio(bbio);
+			btrfs_put_bioc(bioc);
 		kfree(rbio);
 		return -EIO;
 	}
@@ -2155,7 +2154,7 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
 		btrfs_bio_counter_inc_noblocked(fs_info);
 		rbio->generic_bio_cnt = 1;
 	} else {
-		btrfs_get_bbio(bbio);
+		btrfs_get_bioc(bioc);
 	}
 
 	/*
@@ -2214,7 +2213,7 @@ static void read_rebuild_work(struct btrfs_work *work)
 /*
  * The following code is used to scrub/replace the parity stripe
  *
- * Caller must have already increased bio_counter for getting @bbio.
+ * Caller must have already increased bio_counter for getting @bioc.
  *
  * Note: We need make sure all the pages that add into the scrub/replace
  * raid bio are correct and not be changed during the scrub/replace. That
@@ -2223,14 +2222,14 @@ static void read_rebuild_work(struct btrfs_work *work)
 
 struct btrfs_raid_bio *
 raid56_parity_alloc_scrub_rbio(struct btrfs_fs_info *fs_info, struct bio *bio,
-			       struct btrfs_bio *bbio, u64 stripe_len,
+			       struct btrfs_io_context *bioc, u64 stripe_len,
 			       struct btrfs_device *scrub_dev,
 			       unsigned long *dbitmap, int stripe_nsectors)
 {
 	struct btrfs_raid_bio *rbio;
 	int i;
 
-	rbio = alloc_rbio(fs_info, bbio, stripe_len);
+	rbio = alloc_rbio(fs_info, bioc, stripe_len);
 	if (IS_ERR(rbio))
 		return NULL;
 	bio_list_add(&rbio->bio_list, bio);
@@ -2242,12 +2241,12 @@ raid56_parity_alloc_scrub_rbio(struct btrfs_fs_info *fs_info, struct bio *bio,
 	rbio->operation = BTRFS_RBIO_PARITY_SCRUB;
 
 	/*
-	 * After mapping bbio with BTRFS_MAP_WRITE, parities have been sorted
+	 * After mapping bioc with BTRFS_MAP_WRITE, parities have been sorted
 	 * to the end position, so this search can start from the first parity
 	 * stripe.
 	 */
 	for (i = rbio->nr_data; i < rbio->real_stripes; i++) {
-		if (bbio->stripes[i].dev == scrub_dev) {
+		if (bioc->stripes[i].dev == scrub_dev) {
 			rbio->scrubp = i;
 			break;
 		}
@@ -2260,7 +2259,7 @@ raid56_parity_alloc_scrub_rbio(struct btrfs_fs_info *fs_info, struct bio *bio,
 	bitmap_copy(rbio->dbitmap, dbitmap, stripe_nsectors);
 
 	/*
-	 * We have already increased bio_counter when getting bbio, record it
+	 * We have already increased bio_counter when getting bioc, record it
 	 * so we can free it at rbio_orig_end_io().
 	 */
 	rbio->generic_bio_cnt = 1;
@@ -2275,10 +2274,10 @@ void raid56_add_scrub_pages(struct btrfs_raid_bio *rbio, struct page *page,
 	int stripe_offset;
 	int index;
 
-	ASSERT(logical >= rbio->bbio->raid_map[0]);
-	ASSERT(logical + PAGE_SIZE <= rbio->bbio->raid_map[0] +
+	ASSERT(logical >= rbio->bioc->raid_map[0]);
+	ASSERT(logical + PAGE_SIZE <= rbio->bioc->raid_map[0] +
 				rbio->stripe_len * rbio->nr_data);
-	stripe_offset = (int)(logical - rbio->bbio->raid_map[0]);
+	stripe_offset = (int)(logical - rbio->bioc->raid_map[0]);
 	index = stripe_offset >> PAGE_SHIFT;
 	rbio->bio_pages[index] = page;
 }
@@ -2312,7 +2311,7 @@ static int alloc_rbio_essential_pages(struct btrfs_raid_bio *rbio)
 static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 					 int need_check)
 {
-	struct btrfs_bio *bbio = rbio->bbio;
+	struct btrfs_io_context *bioc = rbio->bioc;
 	void **pointers = rbio->finish_pointers;
 	unsigned long *pbitmap = rbio->finish_pbitmap;
 	int nr_data = rbio->nr_data;
@@ -2335,7 +2334,7 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 	else
 		BUG();
 
-	if (bbio->num_tgtdevs && bbio->tgtdev_map[rbio->scrubp]) {
+	if (bioc->num_tgtdevs && bioc->tgtdev_map[rbio->scrubp]) {
 		is_replace = 1;
 		bitmap_copy(pbitmap, rbio->dbitmap, rbio->stripe_npages);
 	}
@@ -2435,7 +2434,7 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 
 		page = rbio_stripe_page(rbio, rbio->scrubp, pagenr);
 		ret = rbio_add_io_page(rbio, &bio_list, page,
-				       bbio->tgtdev_map[rbio->scrubp],
+				       bioc->tgtdev_map[rbio->scrubp],
 				       pagenr, rbio->stripe_len);
 		if (ret)
 			goto cleanup;
@@ -2483,7 +2482,7 @@ static inline int is_data_stripe(struct btrfs_raid_bio *rbio, int stripe)
  */
 static void validate_rbio_for_parity_scrub(struct btrfs_raid_bio *rbio)
 {
-	if (atomic_read(&rbio->error) > rbio->bbio->max_errors)
+	if (atomic_read(&rbio->error) > rbio->bioc->max_errors)
 		goto cleanup;
 
 	if (rbio->faila >= 0 || rbio->failb >= 0) {
@@ -2504,7 +2503,7 @@ static void validate_rbio_for_parity_scrub(struct btrfs_raid_bio *rbio)
 		 * the data, so the capability of the repair is declined.
 		 * (In the case of RAID5, we can not repair anything)
 		 */
-		if (dfail > rbio->bbio->max_errors - 1)
+		if (dfail > rbio->bioc->max_errors - 1)
 			goto cleanup;
 
 		/*
@@ -2625,8 +2624,8 @@ static void raid56_parity_scrub_stripe(struct btrfs_raid_bio *rbio)
 	}
 
 	/*
-	 * the bbio may be freed once we submit the last bio.  Make sure
-	 * not to touch it after that
+	 * The bioc may be freed once we submit the last bio. Make sure not to
+	 * touch it after that.
 	 */
 	atomic_set(&rbio->stripes_pending, bios_to_read);
 	while ((bio = bio_list_pop(&bio_list))) {
@@ -2671,11 +2670,11 @@ void raid56_parity_submit_scrub_rbio(struct btrfs_raid_bio *rbio)
 
 struct btrfs_raid_bio *
 raid56_alloc_missing_rbio(struct btrfs_fs_info *fs_info, struct bio *bio,
-			  struct btrfs_bio *bbio, u64 length)
+			  struct btrfs_io_context *bioc, u64 length)
 {
 	struct btrfs_raid_bio *rbio;
 
-	rbio = alloc_rbio(fs_info, bbio, length);
+	rbio = alloc_rbio(fs_info, bioc, length);
 	if (IS_ERR(rbio))
 		return NULL;
 
@@ -2695,7 +2694,7 @@ raid56_alloc_missing_rbio(struct btrfs_fs_info *fs_info, struct bio *bio,
 	}
 
 	/*
-	 * When we get bbio, we have already increased bio_counter, record it
+	 * When we get bioc, we have already increased bio_counter, record it
 	 * so we can free it at rbio_orig_end_io()
 	 */
 	rbio->generic_bio_cnt = 1;
diff --git a/fs/btrfs/raid56.h b/fs/btrfs/raid56.h
index 2503485db859..838d3a5e07ef 100644
--- a/fs/btrfs/raid56.h
+++ b/fs/btrfs/raid56.h
@@ -31,24 +31,24 @@ struct btrfs_raid_bio;
 struct btrfs_device;
 
 int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
-			  struct btrfs_bio *bbio, u64 stripe_len,
+			  struct btrfs_io_context *bioc, u64 stripe_len,
 			  int mirror_num, int generic_io);
 int raid56_parity_write(struct btrfs_fs_info *fs_info, struct bio *bio,
-			       struct btrfs_bio *bbio, u64 stripe_len);
+			struct btrfs_io_context *bioc, u64 stripe_len);
 
 void raid56_add_scrub_pages(struct btrfs_raid_bio *rbio, struct page *page,
 			    u64 logical);
 
 struct btrfs_raid_bio *
 raid56_parity_alloc_scrub_rbio(struct btrfs_fs_info *fs_info, struct bio *bio,
-			       struct btrfs_bio *bbio, u64 stripe_len,
+			       struct btrfs_io_context *bioc, u64 stripe_len,
 			       struct btrfs_device *scrub_dev,
 			       unsigned long *dbitmap, int stripe_nsectors);
 void raid56_parity_submit_scrub_rbio(struct btrfs_raid_bio *rbio);
 
 struct btrfs_raid_bio *
 raid56_alloc_missing_rbio(struct btrfs_fs_info *fs_info, struct bio *bio,
-			  struct btrfs_bio *bbio, u64 length);
+			  struct btrfs_io_context *bioc, u64 length);
 void raid56_submit_missing_rbio(struct btrfs_raid_bio *rbio);
 
 int btrfs_alloc_stripe_hash_table(struct btrfs_fs_info *info);
diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index 06713a8fe26b..eb96c45f205c 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -227,7 +227,7 @@ int btree_readahead_hook(struct extent_buffer *eb, int err)
 }
 
 static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
-					  struct btrfs_bio *bbio)
+					  struct btrfs_io_context *bioc)
 {
 	struct btrfs_fs_info *fs_info = dev->fs_info;
 	int ret;
@@ -275,11 +275,11 @@ static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
 	kref_init(&zone->refcnt);
 	zone->elems = 0;
 	zone->device = dev; /* our device always sits at index 0 */
-	for (i = 0; i < bbio->num_stripes; ++i) {
+	for (i = 0; i < bioc->num_stripes; ++i) {
 		/* bounds have already been checked */
-		zone->devs[i] = bbio->stripes[i].dev;
+		zone->devs[i] = bioc->stripes[i].dev;
 	}
-	zone->ndevs = bbio->num_stripes;
+	zone->ndevs = bioc->num_stripes;
 
 	spin_lock(&fs_info->reada_lock);
 	ret = radix_tree_insert(&dev->reada_zones,
@@ -309,7 +309,7 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
 	int ret;
 	struct reada_extent *re = NULL;
 	struct reada_extent *re_exist = NULL;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	struct btrfs_device *dev;
 	struct btrfs_device *prev_dev;
 	u64 length;
@@ -345,28 +345,28 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
 	 */
 	length = fs_info->nodesize;
 	ret = btrfs_map_block(fs_info, BTRFS_MAP_GET_READ_MIRRORS, logical,
-			&length, &bbio, 0);
-	if (ret || !bbio || length < fs_info->nodesize)
+			&length, &bioc, 0);
+	if (ret || !bioc || length < fs_info->nodesize)
 		goto error;
 
-	if (bbio->num_stripes > BTRFS_MAX_MIRRORS) {
+	if (bioc->num_stripes > BTRFS_MAX_MIRRORS) {
 		btrfs_err(fs_info,
 			   "readahead: more than %d copies not supported",
 			   BTRFS_MAX_MIRRORS);
 		goto error;
 	}
 
-	real_stripes = bbio->num_stripes - bbio->num_tgtdevs;
+	real_stripes = bioc->num_stripes - bioc->num_tgtdevs;
 	for (nzones = 0; nzones < real_stripes; ++nzones) {
 		struct reada_zone *zone;
 
-		dev = bbio->stripes[nzones].dev;
+		dev = bioc->stripes[nzones].dev;
 
 		/* cannot read ahead on missing device. */
 		if (!dev->bdev)
 			continue;
 
-		zone = reada_find_zone(dev, logical, bbio);
+		zone = reada_find_zone(dev, logical, bioc);
 		if (!zone)
 			continue;
 
@@ -464,7 +464,7 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
 	if (!have_zone)
 		goto error;
 
-	btrfs_put_bbio(bbio);
+	btrfs_put_bioc(bioc);
 	return re;
 
 error:
@@ -488,7 +488,7 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
 		kref_put(&zone->refcnt, reada_zone_release);
 		spin_unlock(&fs_info->reada_lock);
 	}
-	btrfs_put_bbio(bbio);
+	btrfs_put_bioc(bioc);
 	kfree(re);
 	return re_exist;
 }
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 088641ba7a8e..bd38e32ef5dc 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -57,7 +57,7 @@ struct scrub_ctx;
 
 struct scrub_recover {
 	refcount_t		refs;
-	struct btrfs_bio	*bbio;
+	struct btrfs_io_context	*bioc;
 	u64			map_length;
 };
 
@@ -254,7 +254,7 @@ static void scrub_put_ctx(struct scrub_ctx *sctx);
 static inline int scrub_is_page_on_raid56(struct scrub_page *spage)
 {
 	return spage->recover &&
-	       (spage->recover->bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
+	       (spage->recover->bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
 }
 
 static void scrub_pending_bio_inc(struct scrub_ctx *sctx)
@@ -798,7 +798,7 @@ static inline void scrub_put_recover(struct btrfs_fs_info *fs_info,
 {
 	if (refcount_dec_and_test(&recover->refs)) {
 		btrfs_bio_counter_dec(fs_info);
-		btrfs_put_bbio(recover->bbio);
+		btrfs_put_bioc(recover->bioc);
 		kfree(recover);
 	}
 }
@@ -1027,8 +1027,8 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 			sblock_other = sblocks_for_recheck + mirror_index;
 		} else {
 			struct scrub_recover *r = sblock_bad->pagev[0]->recover;
-			int max_allowed = r->bbio->num_stripes -
-						r->bbio->num_tgtdevs;
+			int max_allowed = r->bioc->num_stripes -
+						r->bioc->num_tgtdevs;
 
 			if (mirror_index >= max_allowed)
 				break;
@@ -1218,14 +1218,14 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	return 0;
 }
 
-static inline int scrub_nr_raid_mirrors(struct btrfs_bio *bbio)
+static inline int scrub_nr_raid_mirrors(struct btrfs_io_context *bioc)
 {
-	if (bbio->map_type & BTRFS_BLOCK_GROUP_RAID5)
+	if (bioc->map_type & BTRFS_BLOCK_GROUP_RAID5)
 		return 2;
-	else if (bbio->map_type & BTRFS_BLOCK_GROUP_RAID6)
+	else if (bioc->map_type & BTRFS_BLOCK_GROUP_RAID6)
 		return 3;
 	else
-		return (int)bbio->num_stripes;
+		return (int)bioc->num_stripes;
 }
 
 static inline void scrub_stripe_index_and_offset(u64 logical, u64 map_type,
@@ -1269,7 +1269,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 	u64 flags = original_sblock->pagev[0]->flags;
 	u64 have_csum = original_sblock->pagev[0]->have_csum;
 	struct scrub_recover *recover;
-	struct btrfs_bio *bbio;
+	struct btrfs_io_context *bioc;
 	u64 sublen;
 	u64 mapped_length;
 	u64 stripe_offset;
@@ -1288,7 +1288,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 	while (length > 0) {
 		sublen = min_t(u64, length, fs_info->sectorsize);
 		mapped_length = sublen;
-		bbio = NULL;
+		bioc = NULL;
 
 		/*
 		 * With a length of sectorsize, each returned stripe represents
@@ -1296,27 +1296,27 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 		 */
 		btrfs_bio_counter_inc_blocked(fs_info);
 		ret = btrfs_map_sblock(fs_info, BTRFS_MAP_GET_READ_MIRRORS,
-				logical, &mapped_length, &bbio);
-		if (ret || !bbio || mapped_length < sublen) {
-			btrfs_put_bbio(bbio);
+				logical, &mapped_length, &bioc);
+		if (ret || !bioc || mapped_length < sublen) {
+			btrfs_put_bioc(bioc);
 			btrfs_bio_counter_dec(fs_info);
 			return -EIO;
 		}
 
 		recover = kzalloc(sizeof(struct scrub_recover), GFP_NOFS);
 		if (!recover) {
-			btrfs_put_bbio(bbio);
+			btrfs_put_bioc(bioc);
 			btrfs_bio_counter_dec(fs_info);
 			return -ENOMEM;
 		}
 
 		refcount_set(&recover->refs, 1);
-		recover->bbio = bbio;
+		recover->bioc = bioc;
 		recover->map_length = mapped_length;
 
 		BUG_ON(page_index >= SCRUB_MAX_PAGES_PER_BLOCK);
 
-		nmirrors = min(scrub_nr_raid_mirrors(bbio), BTRFS_MAX_MIRRORS);
+		nmirrors = min(scrub_nr_raid_mirrors(bioc), BTRFS_MAX_MIRRORS);
 
 		for (mirror_index = 0; mirror_index < nmirrors;
 		     mirror_index++) {
@@ -1348,17 +1348,17 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 				       sctx->fs_info->csum_size);
 
 			scrub_stripe_index_and_offset(logical,
-						      bbio->map_type,
-						      bbio->raid_map,
+						      bioc->map_type,
+						      bioc->raid_map,
 						      mapped_length,
-						      bbio->num_stripes -
-						      bbio->num_tgtdevs,
+						      bioc->num_stripes -
+						      bioc->num_tgtdevs,
 						      mirror_index,
 						      &stripe_index,
 						      &stripe_offset);
-			spage->physical = bbio->stripes[stripe_index].physical +
+			spage->physical = bioc->stripes[stripe_index].physical +
 					 stripe_offset;
-			spage->dev = bbio->stripes[stripe_index].dev;
+			spage->dev = bioc->stripes[stripe_index].dev;
 
 			BUG_ON(page_index >= original_sblock->page_count);
 			spage->physical_for_dev_replace =
@@ -1401,7 +1401,7 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
 	bio->bi_end_io = scrub_bio_wait_endio;
 
 	mirror_num = spage->sblock->pagev[0]->mirror_num;
-	ret = raid56_parity_recover(fs_info, bio, spage->recover->bbio,
+	ret = raid56_parity_recover(fs_info, bio, spage->recover->bioc,
 				    spage->recover->map_length,
 				    mirror_num, 0);
 	if (ret)
@@ -2203,7 +2203,7 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
 	u64 length = sblock->page_count * PAGE_SIZE;
 	u64 logical = sblock->pagev[0]->logical;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	struct bio *bio;
 	struct btrfs_raid_bio *rbio;
 	int ret;
@@ -2211,19 +2211,19 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 
 	btrfs_bio_counter_inc_blocked(fs_info);
 	ret = btrfs_map_sblock(fs_info, BTRFS_MAP_GET_READ_MIRRORS, logical,
-			&length, &bbio);
-	if (ret || !bbio || !bbio->raid_map)
-		goto bbio_out;
+			&length, &bioc);
+	if (ret || !bioc || !bioc->raid_map)
+		goto bioc_out;
 
 	if (WARN_ON(!sctx->is_dev_replace ||
-		    !(bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK))) {
+		    !(bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK))) {
 		/*
 		 * We shouldn't be scrubbing a missing device. Even for dev
 		 * replace, we should only get here for RAID 5/6. We either
 		 * managed to mount something with no mirrors remaining or
 		 * there's a bug in scrub_remap_extent()/btrfs_map_block().
 		 */
-		goto bbio_out;
+		goto bioc_out;
 	}
 
 	bio = btrfs_io_bio_alloc(0);
@@ -2231,7 +2231,7 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 	bio->bi_private = sblock;
 	bio->bi_end_io = scrub_missing_raid56_end_io;
 
-	rbio = raid56_alloc_missing_rbio(fs_info, bio, bbio, length);
+	rbio = raid56_alloc_missing_rbio(fs_info, bio, bioc, length);
 	if (!rbio)
 		goto rbio_out;
 
@@ -2249,9 +2249,9 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 
 rbio_out:
 	bio_put(bio);
-bbio_out:
+bioc_out:
 	btrfs_bio_counter_dec(fs_info);
-	btrfs_put_bbio(bbio);
+	btrfs_put_bioc(bioc);
 	spin_lock(&sctx->stat_lock);
 	sctx->stat.malloc_errors++;
 	spin_unlock(&sctx->stat_lock);
@@ -2826,7 +2826,7 @@ static void scrub_parity_check_and_repair(struct scrub_parity *sparity)
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
 	struct bio *bio;
 	struct btrfs_raid_bio *rbio;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	u64 length;
 	int ret;
 
@@ -2838,16 +2838,16 @@ static void scrub_parity_check_and_repair(struct scrub_parity *sparity)
 
 	btrfs_bio_counter_inc_blocked(fs_info);
 	ret = btrfs_map_sblock(fs_info, BTRFS_MAP_WRITE, sparity->logic_start,
-			       &length, &bbio);
-	if (ret || !bbio || !bbio->raid_map)
-		goto bbio_out;
+			       &length, &bioc);
+	if (ret || !bioc || !bioc->raid_map)
+		goto bioc_out;
 
 	bio = btrfs_io_bio_alloc(0);
 	bio->bi_iter.bi_sector = sparity->logic_start >> 9;
 	bio->bi_private = sparity;
 	bio->bi_end_io = scrub_parity_bio_endio;
 
-	rbio = raid56_parity_alloc_scrub_rbio(fs_info, bio, bbio,
+	rbio = raid56_parity_alloc_scrub_rbio(fs_info, bio, bioc,
 					      length, sparity->scrub_dev,
 					      sparity->dbitmap,
 					      sparity->nsectors);
@@ -2860,9 +2860,9 @@ static void scrub_parity_check_and_repair(struct scrub_parity *sparity)
 
 rbio_out:
 	bio_put(bio);
-bbio_out:
+bioc_out:
 	btrfs_bio_counter_dec(fs_info);
-	btrfs_put_bbio(bbio);
+	btrfs_put_bioc(bioc);
 	bitmap_or(sparity->ebitmap, sparity->ebitmap, sparity->dbitmap,
 		  sparity->nsectors);
 	spin_lock(&sctx->stat_lock);
@@ -2901,7 +2901,7 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
 	struct btrfs_root *root = fs_info->extent_root;
 	struct btrfs_root *csum_root = fs_info->csum_root;
 	struct btrfs_extent_item *extent;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	u64 flags;
 	int ret;
 	int slot;
@@ -3044,22 +3044,22 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
 						       extent_len);
 
 			mapped_length = extent_len;
-			bbio = NULL;
+			bioc = NULL;
 			ret = btrfs_map_block(fs_info, BTRFS_MAP_READ,
-					extent_logical, &mapped_length, &bbio,
+					extent_logical, &mapped_length, &bioc,
 					0);
 			if (!ret) {
-				if (!bbio || mapped_length < extent_len)
+				if (!bioc || mapped_length < extent_len)
 					ret = -EIO;
 			}
 			if (ret) {
-				btrfs_put_bbio(bbio);
+				btrfs_put_bioc(bioc);
 				goto out;
 			}
-			extent_physical = bbio->stripes[0].physical;
-			extent_mirror_num = bbio->mirror_num;
-			extent_dev = bbio->stripes[0].dev;
-			btrfs_put_bbio(bbio);
+			extent_physical = bioc->stripes[0].physical;
+			extent_mirror_num = bioc->mirror_num;
+			extent_dev = bioc->stripes[0].dev;
+			btrfs_put_bioc(bioc);
 
 			ret = btrfs_lookup_csums_range(csum_root,
 						extent_logical,
@@ -4309,20 +4309,20 @@ static void scrub_remap_extent(struct btrfs_fs_info *fs_info,
 			       int *extent_mirror_num)
 {
 	u64 mapped_length;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	int ret;
 
 	mapped_length = extent_len;
 	ret = btrfs_map_block(fs_info, BTRFS_MAP_READ, extent_logical,
-			      &mapped_length, &bbio, 0);
-	if (ret || !bbio || mapped_length < extent_len ||
-	    !bbio->stripes[0].dev->bdev) {
-		btrfs_put_bbio(bbio);
+			      &mapped_length, &bioc, 0);
+	if (ret || !bioc || mapped_length < extent_len ||
+	    !bioc->stripes[0].dev->bdev) {
+		btrfs_put_bioc(bioc);
 		return;
 	}
 
-	*extent_physical = bbio->stripes[0].physical;
-	*extent_mirror_num = bbio->mirror_num;
-	*extent_dev = bbio->stripes[0].dev;
-	btrfs_put_bbio(bbio);
+	*extent_physical = bioc->stripes[0].physical;
+	*extent_mirror_num = bioc->mirror_num;
+	*extent_dev = bioc->stripes[0].dev;
+	btrfs_put_bioc(bioc);
 }
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index b81f25eed298..592d19f95065 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -250,7 +250,7 @@ static void btrfs_dev_stat_print_on_load(struct btrfs_device *device);
 static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 			     enum btrfs_map_op op,
 			     u64 logical, u64 *length,
-			     struct btrfs_bio **bbio_ret,
+			     struct btrfs_io_context **bioc_ret,
 			     int mirror_num, int need_raid_map);
 
 /*
@@ -5776,7 +5776,7 @@ static int find_live_mirror(struct btrfs_fs_info *fs_info,
 }
 
 /* Bubble-sort the stripe set to put the parity/syndrome stripes last */
-static void sort_parity_stripes(struct btrfs_bio *bbio, int num_stripes)
+static void sort_parity_stripes(struct btrfs_io_context *bioc, int num_stripes)
 {
 	int i;
 	int again = 1;
@@ -5785,52 +5785,53 @@ static void sort_parity_stripes(struct btrfs_bio *bbio, int num_stripes)
 		again = 0;
 		for (i = 0; i < num_stripes - 1; i++) {
 			/* Swap if parity is on a smaller index */
-			if (bbio->raid_map[i] > bbio->raid_map[i + 1]) {
-				swap(bbio->stripes[i], bbio->stripes[i + 1]);
-				swap(bbio->raid_map[i], bbio->raid_map[i + 1]);
+			if (bioc->raid_map[i] > bioc->raid_map[i + 1]) {
+				swap(bioc->stripes[i], bioc->stripes[i + 1]);
+				swap(bioc->raid_map[i], bioc->raid_map[i + 1]);
 				again = 1;
 			}
 		}
 	}
 }
 
-static struct btrfs_bio *alloc_btrfs_bio(int total_stripes, int real_stripes)
+static struct btrfs_io_context *alloc_btrfs_io_context(int total_stripes,
+						       int real_stripes)
 {
-	struct btrfs_bio *bbio = kzalloc(
-		 /* the size of the btrfs_bio */
-		sizeof(struct btrfs_bio) +
-		/* plus the variable array for the stripes */
-		sizeof(struct btrfs_bio_stripe) * (total_stripes) +
-		/* plus the variable array for the tgt dev */
+	struct btrfs_io_context *bioc = kzalloc(
+		 /* The size of btrfs_io_context */
+		sizeof(struct btrfs_io_context) +
+		/* Plus the variable array for the stripes */
+		sizeof(struct btrfs_io_stripe) * (total_stripes) +
+		/* Plus the variable array for the tgt dev */
 		sizeof(int) * (real_stripes) +
 		/*
-		 * plus the raid_map, which includes both the tgt dev
-		 * and the stripes
+		 * Plus the raid_map, which includes both the tgt dev
+		 * and the stripes.
 		 */
 		sizeof(u64) * (total_stripes),
 		GFP_NOFS|__GFP_NOFAIL);
 
-	atomic_set(&bbio->error, 0);
-	refcount_set(&bbio->refs, 1);
+	atomic_set(&bioc->error, 0);
+	refcount_set(&bioc->refs, 1);
 
-	bbio->tgtdev_map = (int *)(bbio->stripes + total_stripes);
-	bbio->raid_map = (u64 *)(bbio->tgtdev_map + real_stripes);
+	bioc->tgtdev_map = (int *)(bioc->stripes + total_stripes);
+	bioc->raid_map = (u64 *)(bioc->tgtdev_map + real_stripes);
 
-	return bbio;
+	return bioc;
 }
 
-void btrfs_get_bbio(struct btrfs_bio *bbio)
+void btrfs_get_bioc(struct btrfs_io_context *bioc)
 {
-	WARN_ON(!refcount_read(&bbio->refs));
-	refcount_inc(&bbio->refs);
+	WARN_ON(!refcount_read(&bioc->refs));
+	refcount_inc(&bioc->refs);
 }
 
-void btrfs_put_bbio(struct btrfs_bio *bbio)
+void btrfs_put_bioc(struct btrfs_io_context *bioc)
 {
-	if (!bbio)
+	if (!bioc)
 		return;
-	if (refcount_dec_and_test(&bbio->refs))
-		kfree(bbio);
+	if (refcount_dec_and_test(&bioc->refs))
+		kfree(bioc);
 }
 
 /* can REQ_OP_DISCARD be sent with other REQ like REQ_OP_WRITE? */
@@ -5840,11 +5841,11 @@ void btrfs_put_bbio(struct btrfs_bio *bbio)
  */
 static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info,
 					 u64 logical, u64 *length_ret,
-					 struct btrfs_bio **bbio_ret)
+					 struct btrfs_io_context **bioc_ret)
 {
 	struct extent_map *em;
 	struct map_lookup *map;
-	struct btrfs_bio *bbio;
+	struct btrfs_io_context *bioc;
 	u64 length = *length_ret;
 	u64 offset;
 	u64 stripe_nr;
@@ -5863,8 +5864,8 @@ static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info,
 	int ret = 0;
 	int i;
 
-	/* discard always return a bbio */
-	ASSERT(bbio_ret);
+	/* Discard always return a bioc. */
+	ASSERT(bioc_ret);
 
 	em = btrfs_get_chunk_map(fs_info, logical, length);
 	if (IS_ERR(em))
@@ -5927,26 +5928,25 @@ static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info,
 					&stripe_index);
 	}
 
-	bbio = alloc_btrfs_bio(num_stripes, 0);
-	if (!bbio) {
+	bioc = alloc_btrfs_io_context(num_stripes, 0);
+	if (!bioc) {
 		ret = -ENOMEM;
 		goto out;
 	}
 
 	for (i = 0; i < num_stripes; i++) {
-		bbio->stripes[i].physical =
+		bioc->stripes[i].physical =
 			map->stripes[stripe_index].physical +
 			stripe_offset + stripe_nr * map->stripe_len;
-		bbio->stripes[i].dev = map->stripes[stripe_index].dev;
+		bioc->stripes[i].dev = map->stripes[stripe_index].dev;
 
 		if (map->type & (BTRFS_BLOCK_GROUP_RAID0 |
 				 BTRFS_BLOCK_GROUP_RAID10)) {
-			bbio->stripes[i].length = stripes_per_dev *
+			bioc->stripes[i].length = stripes_per_dev *
 				map->stripe_len;
 
 			if (i / sub_stripes < remaining_stripes)
-				bbio->stripes[i].length +=
-					map->stripe_len;
+				bioc->stripes[i].length += map->stripe_len;
 
 			/*
 			 * Special for the first stripe and
@@ -5957,19 +5957,17 @@ static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info,
 			 *    off     end_off
 			 */
 			if (i < sub_stripes)
-				bbio->stripes[i].length -=
-					stripe_offset;
+				bioc->stripes[i].length -= stripe_offset;
 
 			if (stripe_index >= last_stripe &&
 			    stripe_index <= (last_stripe +
 					     sub_stripes - 1))
-				bbio->stripes[i].length -=
-					stripe_end_offset;
+				bioc->stripes[i].length -= stripe_end_offset;
 
 			if (i == sub_stripes - 1)
 				stripe_offset = 0;
 		} else {
-			bbio->stripes[i].length = length;
+			bioc->stripes[i].length = length;
 		}
 
 		stripe_index++;
@@ -5979,9 +5977,9 @@ static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info,
 		}
 	}
 
-	*bbio_ret = bbio;
-	bbio->map_type = map->type;
-	bbio->num_stripes = num_stripes;
+	*bioc_ret = bioc;
+	bioc->map_type = map->type;
+	bioc->num_stripes = num_stripes;
 out:
 	free_extent_map(em);
 	return ret;
@@ -6005,7 +6003,7 @@ static int get_extra_mirror_from_replace(struct btrfs_fs_info *fs_info,
 					 u64 srcdev_devid, int *mirror_num,
 					 u64 *physical)
 {
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	int num_stripes;
 	int index_srcdev = 0;
 	int found = 0;
@@ -6014,20 +6012,20 @@ static int get_extra_mirror_from_replace(struct btrfs_fs_info *fs_info,
 	int ret = 0;
 
 	ret = __btrfs_map_block(fs_info, BTRFS_MAP_GET_READ_MIRRORS,
-				logical, &length, &bbio, 0, 0);
+				logical, &length, &bioc, 0, 0);
 	if (ret) {
-		ASSERT(bbio == NULL);
+		ASSERT(bioc == NULL);
 		return ret;
 	}
 
-	num_stripes = bbio->num_stripes;
+	num_stripes = bioc->num_stripes;
 	if (*mirror_num > num_stripes) {
 		/*
 		 * BTRFS_MAP_GET_READ_MIRRORS does not contain this mirror,
 		 * that means that the requested area is not left of the left
 		 * cursor
 		 */
-		btrfs_put_bbio(bbio);
+		btrfs_put_bioc(bioc);
 		return -EIO;
 	}
 
@@ -6037,7 +6035,7 @@ static int get_extra_mirror_from_replace(struct btrfs_fs_info *fs_info,
 	 * pointer to the one of the target drive.
 	 */
 	for (i = 0; i < num_stripes; i++) {
-		if (bbio->stripes[i].dev->devid != srcdev_devid)
+		if (bioc->stripes[i].dev->devid != srcdev_devid)
 			continue;
 
 		/*
@@ -6045,15 +6043,15 @@ static int get_extra_mirror_from_replace(struct btrfs_fs_info *fs_info,
 		 * mirror with the lowest physical address
 		 */
 		if (found &&
-		    physical_of_found <= bbio->stripes[i].physical)
+		    physical_of_found <= bioc->stripes[i].physical)
 			continue;
 
 		index_srcdev = i;
 		found = 1;
-		physical_of_found = bbio->stripes[i].physical;
+		physical_of_found = bioc->stripes[i].physical;
 	}
 
-	btrfs_put_bbio(bbio);
+	btrfs_put_bioc(bioc);
 
 	ASSERT(found);
 	if (!found)
@@ -6084,12 +6082,12 @@ static bool is_block_group_to_copy(struct btrfs_fs_info *fs_info, u64 logical)
 }
 
 static void handle_ops_on_dev_replace(enum btrfs_map_op op,
-				      struct btrfs_bio **bbio_ret,
+				      struct btrfs_io_context **bioc_ret,
 				      struct btrfs_dev_replace *dev_replace,
 				      u64 logical,
 				      int *num_stripes_ret, int *max_errors_ret)
 {
-	struct btrfs_bio *bbio = *bbio_ret;
+	struct btrfs_io_context *bioc = *bioc_ret;
 	u64 srcdev_devid = dev_replace->srcdev->devid;
 	int tgtdev_indexes = 0;
 	int num_stripes = *num_stripes_ret;
@@ -6119,17 +6117,17 @@ static void handle_ops_on_dev_replace(enum btrfs_map_op op,
 		 */
 		index_where_to_add = num_stripes;
 		for (i = 0; i < num_stripes; i++) {
-			if (bbio->stripes[i].dev->devid == srcdev_devid) {
+			if (bioc->stripes[i].dev->devid == srcdev_devid) {
 				/* write to new disk, too */
-				struct btrfs_bio_stripe *new =
-					bbio->stripes + index_where_to_add;
-				struct btrfs_bio_stripe *old =
-					bbio->stripes + i;
+				struct btrfs_io_stripe *new =
+					bioc->stripes + index_where_to_add;
+				struct btrfs_io_stripe *old =
+					bioc->stripes + i;
 
 				new->physical = old->physical;
 				new->length = old->length;
 				new->dev = dev_replace->tgtdev;
-				bbio->tgtdev_map[i] = index_where_to_add;
+				bioc->tgtdev_map[i] = index_where_to_add;
 				index_where_to_add++;
 				max_errors++;
 				tgtdev_indexes++;
@@ -6149,7 +6147,7 @@ static void handle_ops_on_dev_replace(enum btrfs_map_op op,
 		 * full copy of the source drive.
 		 */
 		for (i = 0; i < num_stripes; i++) {
-			if (bbio->stripes[i].dev->devid == srcdev_devid) {
+			if (bioc->stripes[i].dev->devid == srcdev_devid) {
 				/*
 				 * In case of DUP, in order to keep it simple,
 				 * only add the mirror with the lowest physical
@@ -6157,22 +6155,22 @@ static void handle_ops_on_dev_replace(enum btrfs_map_op op,
 				 */
 				if (found &&
 				    physical_of_found <=
-				     bbio->stripes[i].physical)
+				     bioc->stripes[i].physical)
 					continue;
 				index_srcdev = i;
 				found = 1;
-				physical_of_found = bbio->stripes[i].physical;
+				physical_of_found = bioc->stripes[i].physical;
 			}
 		}
 		if (found) {
-			struct btrfs_bio_stripe *tgtdev_stripe =
-				bbio->stripes + num_stripes;
+			struct btrfs_io_stripe *tgtdev_stripe =
+				bioc->stripes + num_stripes;
 
 			tgtdev_stripe->physical = physical_of_found;
 			tgtdev_stripe->length =
-				bbio->stripes[index_srcdev].length;
+				bioc->stripes[index_srcdev].length;
 			tgtdev_stripe->dev = dev_replace->tgtdev;
-			bbio->tgtdev_map[index_srcdev] = num_stripes;
+			bioc->tgtdev_map[index_srcdev] = num_stripes;
 
 			tgtdev_indexes++;
 			num_stripes++;
@@ -6181,8 +6179,8 @@ static void handle_ops_on_dev_replace(enum btrfs_map_op op,
 
 	*num_stripes_ret = num_stripes;
 	*max_errors_ret = max_errors;
-	bbio->num_tgtdevs = tgtdev_indexes;
-	*bbio_ret = bbio;
+	bioc->num_tgtdevs = tgtdev_indexes;
+	*bioc_ret = bioc;
 }
 
 static bool need_full_stripe(enum btrfs_map_op op)
@@ -6285,7 +6283,7 @@ int btrfs_get_io_geometry(struct btrfs_fs_info *fs_info, struct extent_map *em,
 static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 			     enum btrfs_map_op op,
 			     u64 logical, u64 *length,
-			     struct btrfs_bio **bbio_ret,
+			     struct btrfs_io_context **bioc_ret,
 			     int mirror_num, int need_raid_map)
 {
 	struct extent_map *em;
@@ -6300,7 +6298,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 	int num_stripes;
 	int max_errors = 0;
 	int tgtdev_indexes = 0;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;
 	int dev_replace_is_ongoing = 0;
 	int num_alloc_stripes;
@@ -6309,7 +6307,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 	u64 raid56_full_stripe_start = (u64)-1;
 	struct btrfs_io_geometry geom;
 
-	ASSERT(bbio_ret);
+	ASSERT(bioc_ret);
 	ASSERT(op != BTRFS_MAP_DISCARD);
 
 	em = btrfs_get_chunk_map(fs_info, logical, *length);
@@ -6453,20 +6451,20 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 		tgtdev_indexes = num_stripes;
 	}
 
-	bbio = alloc_btrfs_bio(num_alloc_stripes, tgtdev_indexes);
-	if (!bbio) {
+	bioc = alloc_btrfs_io_context(num_alloc_stripes, tgtdev_indexes);
+	if (!bioc) {
 		ret = -ENOMEM;
 		goto out;
 	}
 
 	for (i = 0; i < num_stripes; i++) {
-		bbio->stripes[i].physical = map->stripes[stripe_index].physical +
+		bioc->stripes[i].physical = map->stripes[stripe_index].physical +
 			stripe_offset + stripe_nr * map->stripe_len;
-		bbio->stripes[i].dev = map->stripes[stripe_index].dev;
+		bioc->stripes[i].dev = map->stripes[stripe_index].dev;
 		stripe_index++;
 	}
 
-	/* build raid_map */
+	/* Build raid_map */
 	if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK && need_raid_map &&
 	    (need_full_stripe(op) || mirror_num > 1)) {
 		u64 tmp;
@@ -6478,15 +6476,15 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 		/* Fill in the logical address of each stripe */
 		tmp = stripe_nr * data_stripes;
 		for (i = 0; i < data_stripes; i++)
-			bbio->raid_map[(i+rot) % num_stripes] =
+			bioc->raid_map[(i + rot) % num_stripes] =
 				em->start + (tmp + i) * map->stripe_len;
 
-		bbio->raid_map[(i+rot) % map->num_stripes] = RAID5_P_STRIPE;
+		bioc->raid_map[(i + rot) % map->num_stripes] = RAID5_P_STRIPE;
 		if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-			bbio->raid_map[(i+rot+1) % num_stripes] =
+			bioc->raid_map[(i + rot + 1) % num_stripes] =
 				RAID6_Q_STRIPE;
 
-		sort_parity_stripes(bbio, num_stripes);
+		sort_parity_stripes(bioc, num_stripes);
 	}
 
 	if (need_full_stripe(op))
@@ -6494,15 +6492,15 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 
 	if (dev_replace_is_ongoing && dev_replace->tgtdev != NULL &&
 	    need_full_stripe(op)) {
-		handle_ops_on_dev_replace(op, &bbio, dev_replace, logical,
+		handle_ops_on_dev_replace(op, &bioc, dev_replace, logical,
 					  &num_stripes, &max_errors);
 	}
 
-	*bbio_ret = bbio;
-	bbio->map_type = map->type;
-	bbio->num_stripes = num_stripes;
-	bbio->max_errors = max_errors;
-	bbio->mirror_num = mirror_num;
+	*bioc_ret = bioc;
+	bioc->map_type = map->type;
+	bioc->num_stripes = num_stripes;
+	bioc->max_errors = max_errors;
+	bioc->mirror_num = mirror_num;
 
 	/*
 	 * this is the case that REQ_READ && dev_replace_is_ongoing &&
@@ -6511,9 +6509,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 	 */
 	if (patch_the_first_stripe_for_dev_replace && num_stripes > 0) {
 		WARN_ON(num_stripes > 1);
-		bbio->stripes[0].dev = dev_replace->tgtdev;
-		bbio->stripes[0].physical = physical_to_patch_in_first_stripe;
-		bbio->mirror_num = map->num_stripes + 1;
+		bioc->stripes[0].dev = dev_replace->tgtdev;
+		bioc->stripes[0].physical = physical_to_patch_in_first_stripe;
+		bioc->mirror_num = map->num_stripes + 1;
 	}
 out:
 	if (dev_replace_is_ongoing) {
@@ -6527,40 +6525,40 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 
 int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 		      u64 logical, u64 *length,
-		      struct btrfs_bio **bbio_ret, int mirror_num)
+		      struct btrfs_io_context **bioc_ret, int mirror_num)
 {
 	if (op == BTRFS_MAP_DISCARD)
 		return __btrfs_map_block_for_discard(fs_info, logical,
-						     length, bbio_ret);
+						     length, bioc_ret);
 
-	return __btrfs_map_block(fs_info, op, logical, length, bbio_ret,
+	return __btrfs_map_block(fs_info, op, logical, length, bioc_ret,
 				 mirror_num, 0);
 }
 
 /* For Scrub/replace */
 int btrfs_map_sblock(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 		     u64 logical, u64 *length,
-		     struct btrfs_bio **bbio_ret)
+		     struct btrfs_io_context **bioc_ret)
 {
-	return __btrfs_map_block(fs_info, op, logical, length, bbio_ret, 0, 1);
+	return __btrfs_map_block(fs_info, op, logical, length, bioc_ret, 0, 1);
 }
 
-static inline void btrfs_end_bbio(struct btrfs_bio *bbio, struct bio *bio)
+static inline void btrfs_end_bioc(struct btrfs_io_context *bioc, struct bio *bio)
 {
-	bio->bi_private = bbio->private;
-	bio->bi_end_io = bbio->end_io;
+	bio->bi_private = bioc->private;
+	bio->bi_end_io = bioc->end_io;
 	bio_endio(bio);
 
-	btrfs_put_bbio(bbio);
+	btrfs_put_bioc(bioc);
 }
 
 static void btrfs_end_bio(struct bio *bio)
 {
-	struct btrfs_bio *bbio = bio->bi_private;
+	struct btrfs_io_context *bioc = bio->bi_private;
 	int is_orig_bio = 0;
 
 	if (bio->bi_status) {
-		atomic_inc(&bbio->error);
+		atomic_inc(&bioc->error);
 		if (bio->bi_status == BLK_STS_IOERR ||
 		    bio->bi_status == BLK_STS_TARGET) {
 			struct btrfs_device *dev = btrfs_io_bio(bio)->device;
@@ -6578,22 +6576,22 @@ static void btrfs_end_bio(struct bio *bio)
 		}
 	}
 
-	if (bio == bbio->orig_bio)
+	if (bio == bioc->orig_bio)
 		is_orig_bio = 1;
 
-	btrfs_bio_counter_dec(bbio->fs_info);
+	btrfs_bio_counter_dec(bioc->fs_info);
 
-	if (atomic_dec_and_test(&bbio->stripes_pending)) {
+	if (atomic_dec_and_test(&bioc->stripes_pending)) {
 		if (!is_orig_bio) {
 			bio_put(bio);
-			bio = bbio->orig_bio;
+			bio = bioc->orig_bio;
 		}
 
-		btrfs_io_bio(bio)->mirror_num = bbio->mirror_num;
+		btrfs_io_bio(bio)->mirror_num = bioc->mirror_num;
 		/* only send an error to the higher layers if it is
 		 * beyond the tolerance of the btrfs bio
 		 */
-		if (atomic_read(&bbio->error) > bbio->max_errors) {
+		if (atomic_read(&bioc->error) > bioc->max_errors) {
 			bio->bi_status = BLK_STS_IOERR;
 		} else {
 			/*
@@ -6603,18 +6601,18 @@ static void btrfs_end_bio(struct bio *bio)
 			bio->bi_status = BLK_STS_OK;
 		}
 
-		btrfs_end_bbio(bbio, bio);
+		btrfs_end_bioc(bioc, bio);
 	} else if (!is_orig_bio) {
 		bio_put(bio);
 	}
 }
 
-static void submit_stripe_bio(struct btrfs_bio *bbio, struct bio *bio,
+static void submit_stripe_bio(struct btrfs_io_context *bioc, struct bio *bio,
 			      u64 physical, struct btrfs_device *dev)
 {
-	struct btrfs_fs_info *fs_info = bbio->fs_info;
+	struct btrfs_fs_info *fs_info = bioc->fs_info;
 
-	bio->bi_private = bbio;
+	bio->bi_private = bioc;
 	btrfs_io_bio(bio)->device = dev;
 	bio->bi_end_io = btrfs_end_bio;
 	bio->bi_iter.bi_sector = physical >> 9;
@@ -6644,20 +6642,20 @@ static void submit_stripe_bio(struct btrfs_bio *bbio, struct bio *bio,
 	btrfsic_submit_bio(bio);
 }
 
-static void bbio_error(struct btrfs_bio *bbio, struct bio *bio, u64 logical)
+static void bioc_error(struct btrfs_io_context *bioc, struct bio *bio, u64 logical)
 {
-	atomic_inc(&bbio->error);
-	if (atomic_dec_and_test(&bbio->stripes_pending)) {
+	atomic_inc(&bioc->error);
+	if (atomic_dec_and_test(&bioc->stripes_pending)) {
 		/* Should be the original bio. */
-		WARN_ON(bio != bbio->orig_bio);
+		WARN_ON(bio != bioc->orig_bio);
 
-		btrfs_io_bio(bio)->mirror_num = bbio->mirror_num;
+		btrfs_io_bio(bio)->mirror_num = bioc->mirror_num;
 		bio->bi_iter.bi_sector = logical >> 9;
-		if (atomic_read(&bbio->error) > bbio->max_errors)
+		if (atomic_read(&bioc->error) > bioc->max_errors)
 			bio->bi_status = BLK_STS_IOERR;
 		else
 			bio->bi_status = BLK_STS_OK;
-		btrfs_end_bbio(bbio, bio);
+		btrfs_end_bioc(bioc, bio);
 	}
 }
 
@@ -6672,35 +6670,35 @@ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
 	int ret;
 	int dev_nr;
 	int total_devs;
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 
 	length = bio->bi_iter.bi_size;
 	map_length = length;
 
 	btrfs_bio_counter_inc_blocked(fs_info);
 	ret = __btrfs_map_block(fs_info, btrfs_op(bio), logical,
-				&map_length, &bbio, mirror_num, 1);
+				&map_length, &bioc, mirror_num, 1);
 	if (ret) {
 		btrfs_bio_counter_dec(fs_info);
 		return errno_to_blk_status(ret);
 	}
 
-	total_devs = bbio->num_stripes;
-	bbio->orig_bio = first_bio;
-	bbio->private = first_bio->bi_private;
-	bbio->end_io = first_bio->bi_end_io;
-	bbio->fs_info = fs_info;
-	atomic_set(&bbio->stripes_pending, bbio->num_stripes);
+	total_devs = bioc->num_stripes;
+	bioc->orig_bio = first_bio;
+	bioc->private = first_bio->bi_private;
+	bioc->end_io = first_bio->bi_end_io;
+	bioc->fs_info = fs_info;
+	atomic_set(&bioc->stripes_pending, bioc->num_stripes);
 
-	if ((bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
+	if ((bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
 	    ((btrfs_op(bio) == BTRFS_MAP_WRITE) || (mirror_num > 1))) {
 		/* In this case, map_length has been set to the length of
 		   a single stripe; not the whole write */
 		if (btrfs_op(bio) == BTRFS_MAP_WRITE) {
-			ret = raid56_parity_write(fs_info, bio, bbio,
+			ret = raid56_parity_write(fs_info, bio, bioc,
 						  map_length);
 		} else {
-			ret = raid56_parity_recover(fs_info, bio, bbio,
+			ret = raid56_parity_recover(fs_info, bio, bioc,
 						    map_length, mirror_num, 1);
 		}
 
@@ -6716,12 +6714,12 @@ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
 	}
 
 	for (dev_nr = 0; dev_nr < total_devs; dev_nr++) {
-		dev = bbio->stripes[dev_nr].dev;
+		dev = bioc->stripes[dev_nr].dev;
 		if (!dev || !dev->bdev || test_bit(BTRFS_DEV_STATE_MISSING,
 						   &dev->dev_state) ||
 		    (btrfs_op(first_bio) == BTRFS_MAP_WRITE &&
 		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))) {
-			bbio_error(bbio, first_bio, logical);
+			bioc_error(bioc, first_bio, logical);
 			continue;
 		}
 
@@ -6730,7 +6728,7 @@ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
 		else
 			bio = first_bio;
 
-		submit_stripe_bio(bbio, bio, bbio->stripes[dev_nr].physical, dev);
+		submit_stripe_bio(bioc, bio, bioc->stripes[dev_nr].physical, dev);
 	}
 	btrfs_bio_counter_dec(fs_info);
 	return BLK_STS_OK;
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 7e8f205978d9..b69755fc0e0d 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -306,11 +306,11 @@ struct btrfs_fs_devices {
 /*
  * we need the mirror number and stripe index to be passed around
  * the call chain while we are processing end_io (especially errors).
- * Really, what we need is a btrfs_bio structure that has this info
+ * Really, what we need is a btrfs_io_context structure that has this info
  * and is properly sized with its stripe array, but we're not there
  * quite yet.  We have our own btrfs bioset, and all of the bios
  * we allocate are actually btrfs_io_bios.  We'll cram as much of
- * struct btrfs_bio as we can into this over time.
+ * struct btrfs_io_context as we can into this over time.
  */
 struct btrfs_io_bio {
 	unsigned int mirror_num;
@@ -339,13 +339,29 @@ static inline void btrfs_io_bio_free_csum(struct btrfs_io_bio *io_bio)
 	}
 }
 
-struct btrfs_bio_stripe {
+struct btrfs_io_stripe {
 	struct btrfs_device *dev;
 	u64 physical;
 	u64 length; /* only used for discard mappings */
 };
 
-struct btrfs_bio {
+/*
+ * Context for IO submission for device stripe.
+ *
+ * - Track the unfinished mirrors for mirror based profiles
+ *   Mirror based profiles are SINGLE/DUP/RAID1/RAID10.
+ *
+ * - Contain the logical -> physical mapping info
+ *   Used by submit_stripe_bio() for mapping logical bio
+ *   into physical device address.
+ *
+ * - Contain device replace info
+ *   Used by handle_ops_on_dev_replace() to copy logical bios
+ *   into the new device.
+ *
+ * - Contain RAID56 full stripe logical bytenrs
+ */
+struct btrfs_io_context {
 	refcount_t refs;
 	atomic_t stripes_pending;
 	struct btrfs_fs_info *fs_info;
@@ -365,7 +381,7 @@ struct btrfs_bio {
 	 * so raid_map[0] is the start of our full stripe
 	 */
 	u64 *raid_map;
-	struct btrfs_bio_stripe stripes[];
+	struct btrfs_io_stripe stripes[];
 };
 
 struct btrfs_device_info {
@@ -400,11 +416,11 @@ struct map_lookup {
 	int num_stripes;
 	int sub_stripes;
 	int verified_stripes; /* For mount time dev extent verification */
-	struct btrfs_bio_stripe stripes[];
+	struct btrfs_io_stripe stripes[];
 };
 
 #define map_lookup_size(n) (sizeof(struct map_lookup) + \
-			    (sizeof(struct btrfs_bio_stripe) * (n)))
+			    (sizeof(struct btrfs_io_stripe) * (n)))
 
 struct btrfs_balance_args;
 struct btrfs_balance_progress;
@@ -441,14 +457,14 @@ static inline enum btrfs_map_op btrfs_op(struct bio *bio)
 	}
 }
 
-void btrfs_get_bbio(struct btrfs_bio *bbio);
-void btrfs_put_bbio(struct btrfs_bio *bbio);
+void btrfs_get_bioc(struct btrfs_io_context *bioc);
+void btrfs_put_bioc(struct btrfs_io_context *bioc);
 int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 		    u64 logical, u64 *length,
-		    struct btrfs_bio **bbio_ret, int mirror_num);
+		    struct btrfs_io_context **bioc_ret, int mirror_num);
 int btrfs_map_sblock(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 		     u64 logical, u64 *length,
-		     struct btrfs_bio **bbio_ret);
+		     struct btrfs_io_context **bioc_ret);
 int btrfs_get_io_geometry(struct btrfs_fs_info *fs_info, struct extent_map *map,
 			  enum btrfs_map_op op, u64 logical,
 			  struct btrfs_io_geometry *io_geom);
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 28a06c2d80ad..8f9ccfa4157a 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1626,27 +1626,27 @@ int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 len
 static int read_zone_info(struct btrfs_fs_info *fs_info, u64 logical,
 			  struct blk_zone *zone)
 {
-	struct btrfs_bio *bbio = NULL;
+	struct btrfs_io_context *bioc = NULL;
 	u64 mapped_length = PAGE_SIZE;
 	unsigned int nofs_flag;
 	int nmirrors;
 	int i, ret;
 
 	ret = btrfs_map_sblock(fs_info, BTRFS_MAP_GET_READ_MIRRORS, logical,
-			       &mapped_length, &bbio);
-	if (ret || !bbio || mapped_length < PAGE_SIZE) {
-		btrfs_put_bbio(bbio);
+			       &mapped_length, &bioc);
+	if (ret || !bioc || mapped_length < PAGE_SIZE) {
+		btrfs_put_bioc(bioc);
 		return -EIO;
 	}
 
-	if (bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK)
+	if (bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK)
 		return -EINVAL;
 
 	nofs_flag = memalloc_nofs_save();
-	nmirrors = (int)bbio->num_stripes;
+	nmirrors = (int)bioc->num_stripes;
 	for (i = 0; i < nmirrors; i++) {
-		u64 physical = bbio->stripes[i].physical;
-		struct btrfs_device *dev = bbio->stripes[i].dev;
+		u64 physical = bioc->stripes[i].physical;
+		struct btrfs_device *dev = bioc->stripes[i].dev;
 
 		/* Missing device */
 		if (!dev->bdev)
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-15  7:17 [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename Qu Wenruo
  2021-09-15  7:17 ` [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context Qu Wenruo
@ 2021-09-15  7:17 ` Qu Wenruo
  2021-09-17 12:27   ` Nikolay Borisov
  2021-09-23  5:57   ` Qu Wenruo
  2021-09-15  7:17 ` [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio Qu Wenruo
  2021-09-17 11:39 ` [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename David Sterba
  3 siblings, 2 replies; 22+ messages in thread
From: Qu Wenruo @ 2021-09-15  7:17 UTC (permalink / raw)
  To: linux-btrfs

The helper btrfs_bio_alloc() is almost the same as btrfs_io_bio_alloc(),
except that it allocates using BIO_MAX_VECS as @nr_iovecs and
initializes bio->bi_iter.bi_sector.

However, the name itself does not use "btrfs_io_bio" to indicate that
the structure it allocates is a "struct btrfs_io_bio", so it can easily
be confused with "struct btrfs_bio".

Considering that assigning bio->bi_iter.bi_sector is such simple work
and there are already tons of call sites doing it manually, there is no
need to do it in a helper.

Remove btrfs_bio_alloc() helper, and enhance btrfs_io_bio_alloc()
function to provide a fail-safe value for its @nr_iovecs.

And then replace all btrfs_bio_alloc() callers with
btrfs_io_bio_alloc().
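
As a rough illustration, the typical caller conversion looks like this
(a simplified sketch, not an exact quote of the hunks below):

	/* Before */
	bio = btrfs_bio_alloc(first_byte);

	/* After: 0 means "fall back to BIO_MAX_VECS" */
	bio = btrfs_io_bio_alloc(0);
	bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;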

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 12 ++++++++----
 fs/btrfs/extent_io.c   | 33 +++++++++++++++------------------
 fs/btrfs/extent_io.h   |  1 -
 3 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 7869ad12bc6e..2475dc0b1c22 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -418,7 +418,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	cb->orig_bio = NULL;
 	cb->nr_pages = nr_pages;
 
-	bio = btrfs_bio_alloc(first_byte);
+	bio = btrfs_io_bio_alloc(0);
+	bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
 	bio->bi_opf = bio_op | write_flags;
 	bio->bi_private = cb;
 	bio->bi_end_io = end_compressed_bio_write;
@@ -490,7 +491,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 				bio_endio(bio);
 			}
 
-			bio = btrfs_bio_alloc(first_byte);
+			bio = btrfs_io_bio_alloc(0);
+			bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
 			bio->bi_opf = bio_op | write_flags;
 			bio->bi_private = cb;
 			bio->bi_end_io = end_compressed_bio_write;
@@ -748,7 +750,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	/* include any pages we added in add_ra-bio_pages */
 	cb->len = bio->bi_iter.bi_size;
 
-	comp_bio = btrfs_bio_alloc(cur_disk_byte);
+	comp_bio = btrfs_io_bio_alloc(0);
+	comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
 	comp_bio->bi_opf = REQ_OP_READ;
 	comp_bio->bi_private = cb;
 	comp_bio->bi_end_io = end_compressed_bio_read;
@@ -806,7 +809,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 				bio_endio(comp_bio);
 			}
 
-			comp_bio = btrfs_bio_alloc(cur_disk_byte);
+			comp_bio = btrfs_io_bio_alloc(0);
+			comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
 			comp_bio->bi_opf = REQ_OP_READ;
 			comp_bio->bi_private = cb;
 			comp_bio->bi_end_io = end_compressed_bio_read;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 1aed03ef5f49..d3fcf7e8dc48 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3121,16 +3121,22 @@ static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
 }
 
 /*
- * The following helpers allocate a bio. As it's backed by a bioset, it'll
- * never fail.  We're returning a bio right now but you can call btrfs_io_bio
- * for the appropriate container_of magic
+ * Allocate a btrfs_io_bio, with @nr_iovecs as maximum iovecs.
+ *
+ * If @nr_iovecs is 0, it will use BIO_MAX_VECS as @nr_iovecs instead.
+ * This behavior is to provide a fail-safe default value.
+ *
+ * This helper uses bioset to allocate the bio, thus it's backed by mempool,
+ * and should not fail from process contexts.
  */
-struct bio *btrfs_bio_alloc(u64 first_byte)
+struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
 {
 	struct bio *bio;
 
-	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &btrfs_bioset);
-	bio->bi_iter.bi_sector = first_byte >> 9;
+	ASSERT(nr_iovecs <= BIO_MAX_VECS);
+	if (nr_iovecs == 0)
+		nr_iovecs = BIO_MAX_VECS;
+	bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
 	btrfs_io_bio_init(btrfs_io_bio(bio));
 	return bio;
 }
@@ -3148,16 +3154,6 @@ struct bio *btrfs_bio_clone(struct bio *bio)
 	return new;
 }
 
-struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
-{
-	struct bio *bio;
-
-	/* Bio allocation backed by a bioset does not fail */
-	bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
-	btrfs_io_bio_init(btrfs_io_bio(bio));
-	return bio;
-}
-
 struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size)
 {
 	struct bio *bio;
@@ -3307,14 +3303,15 @@ static int alloc_new_bio(struct btrfs_inode *inode,
 	struct bio *bio;
 	int ret;
 
+	bio = btrfs_io_bio_alloc(0);
 	/*
 	 * For compressed page range, its disk_bytenr is always @disk_bytenr
 	 * passed in, no matter if we have added any range into previous bio.
 	 */
 	if (bio_flags & EXTENT_BIO_COMPRESSED)
-		bio = btrfs_bio_alloc(disk_bytenr);
+		bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
 	else
-		bio = btrfs_bio_alloc(disk_bytenr + offset);
+		bio->bi_iter.bi_sector = (disk_bytenr + offset) >> SECTOR_SHIFT;
 	bio_ctrl->bio = bio;
 	bio_ctrl->bio_flags = bio_flags;
 	bio->bi_end_io = end_io_func;
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index ba471f2063a7..81fa68eaa699 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -278,7 +278,6 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end);
 void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
 				  struct page *locked_page,
 				  u32 bits_to_clear, unsigned long page_ops);
-struct bio *btrfs_bio_alloc(u64 first_byte);
 struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs);
 struct bio *btrfs_bio_clone(struct bio *bio);
 struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio
  2021-09-15  7:17 [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename Qu Wenruo
  2021-09-15  7:17 ` [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context Qu Wenruo
  2021-09-15  7:17 ` [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper Qu Wenruo
@ 2021-09-15  7:17 ` Qu Wenruo
  2021-09-17 11:39   ` David Sterba
  2021-09-20  7:04   ` Nikolay Borisov
  2021-09-17 11:39 ` [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename David Sterba
  3 siblings, 2 replies; 22+ messages in thread
From: Qu Wenruo @ 2021-09-15  7:17 UTC (permalink / raw)
  To: linux-btrfs

Previously we had "struct btrfs_bio", which records the IO context for
mirrored IO and RAID56, and "struct btrfs_io_bio", which records extra
btrfs-specific info for a logical bytenr bio.

With "strcut btrfs_bio" renamed to "struct btrfs_io_context", we are
safe to rename "strcut btrfs_io_bio" to "strcut btrfs_logical_bio" which
is a more suitable name now.

Although the name "btrfs_logical_bio" is a little long and the name
"btrfs_bio" would be much shorter, "btrfs_bio" conflicts with the
previous "btrfs_bio" structure and could cause a lot of problems for
backports.

Thus here we choose the name "btrfs_logical_bio", which also
emphasizes that those bios all work at the logical bytenr level.
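
For example, after this patch the btrfs-specific members are reached
through the renamed accessor, roughly like this (a minimal sketch;
mirror_num and logical_bytenr are placeholders, not exact code from
the diff):

	struct bio *bio = btrfs_logical_bio_alloc(1);

	btrfs_logical_bio(bio)->mirror_num = mirror_num;
	btrfs_logical_bio(bio)->logical = logical_bytenr;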

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/check-integrity.c |  2 +-
 fs/btrfs/compression.c     | 16 ++++-----
 fs/btrfs/ctree.h           |  6 ++--
 fs/btrfs/disk-io.c         |  2 +-
 fs/btrfs/disk-io.h         |  2 +-
 fs/btrfs/extent_io.c       | 69 +++++++++++++++++++-------------------
 fs/btrfs/extent_io.h       |  7 ++--
 fs/btrfs/file-item.c       | 12 +++----
 fs/btrfs/inode.c           | 50 ++++++++++++++-------------
 fs/btrfs/raid56.c          |  8 ++---
 fs/btrfs/scrub.c           | 14 ++++----
 fs/btrfs/volumes.c         | 10 +++---
 fs/btrfs/volumes.h         | 29 ++++++++--------
 13 files changed, 116 insertions(+), 111 deletions(-)

diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 81b11124b67a..94fefaa6438c 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -1561,7 +1561,7 @@ static int btrfsic_read_block(struct btrfsic_state *state,
 		struct bio *bio;
 		unsigned int j;
 
-		bio = btrfs_io_bio_alloc(num_pages - i);
+		bio = btrfs_logical_bio_alloc(num_pages - i);
 		bio_set_dev(bio, block_ctx->dev->bdev);
 		bio->bi_iter.bi_sector = dev_bytenr >> 9;
 		bio->bi_opf = REQ_OP_READ;
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 2475dc0b1c22..331ef88b87d1 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -179,9 +179,9 @@ static int check_compressed_csum(struct btrfs_inode *inode, struct bio *bio,
 			if (memcmp(&csum, cb_sum, csum_size) != 0) {
 				btrfs_print_data_csum_error(inode, disk_start,
 						csum, cb_sum, cb->mirror_num);
-				if (btrfs_io_bio(bio)->device)
+				if (btrfs_logical_bio(bio)->device)
 					btrfs_dev_stat_inc_and_print(
-						btrfs_io_bio(bio)->device,
+						btrfs_logical_bio(bio)->device,
 						BTRFS_DEV_STAT_CORRUPTION_ERRS);
 				return -EIO;
 			}
@@ -208,7 +208,7 @@ static void end_compressed_bio_read(struct bio *bio)
 	struct inode *inode;
 	struct page *page;
 	unsigned int index;
-	unsigned int mirror = btrfs_io_bio(bio)->mirror_num;
+	unsigned int mirror = btrfs_logical_bio(bio)->mirror_num;
 	int ret = 0;
 
 	if (bio->bi_status)
@@ -224,7 +224,7 @@ static void end_compressed_bio_read(struct bio *bio)
 	 * Record the correct mirror_num in cb->orig_bio so that
 	 * read-repair can work properly.
 	 */
-	btrfs_io_bio(cb->orig_bio)->mirror_num = mirror;
+	btrfs_logical_bio(cb->orig_bio)->mirror_num = mirror;
 	cb->mirror_num = mirror;
 
 	/*
@@ -418,7 +418,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	cb->orig_bio = NULL;
 	cb->nr_pages = nr_pages;
 
-	bio = btrfs_io_bio_alloc(0);
+	bio = btrfs_logical_bio_alloc(0);
 	bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
 	bio->bi_opf = bio_op | write_flags;
 	bio->bi_private = cb;
@@ -491,7 +491,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 				bio_endio(bio);
 			}
 
-			bio = btrfs_io_bio_alloc(0);
+			bio = btrfs_logical_bio_alloc(0);
 			bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
 			bio->bi_opf = bio_op | write_flags;
 			bio->bi_private = cb;
@@ -750,7 +750,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	/* include any pages we added in add_ra-bio_pages */
 	cb->len = bio->bi_iter.bi_size;
 
-	comp_bio = btrfs_io_bio_alloc(0);
+	comp_bio = btrfs_logical_bio_alloc(0);
 	comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
 	comp_bio->bi_opf = REQ_OP_READ;
 	comp_bio->bi_private = cb;
@@ -809,7 +809,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 				bio_endio(comp_bio);
 			}
 
-			comp_bio = btrfs_io_bio_alloc(0);
+			comp_bio = btrfs_logical_bio_alloc(0);
 			comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
 			comp_bio->bi_opf = REQ_OP_READ;
 			comp_bio->bi_private = cb;
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 38870ae46cbb..41bc28f6c6c4 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -48,6 +48,7 @@ extern struct kmem_cache *btrfs_free_space_cachep;
 extern struct kmem_cache *btrfs_free_space_bitmap_cachep;
 struct btrfs_ordered_sum;
 struct btrfs_ref;
+struct btrfs_logical_bio;
 
 #define BTRFS_MAGIC 0x4D5F53665248425FULL /* ascii _BHRfS_M, no null */
 
@@ -3133,8 +3134,9 @@ u64 btrfs_file_extent_end(const struct btrfs_path *path);
 /* inode.c */
 blk_status_t btrfs_submit_data_bio(struct inode *inode, struct bio *bio,
 				   int mirror_num, unsigned long bio_flags);
-unsigned int btrfs_verify_data_csum(struct btrfs_io_bio *io_bio, u32 bio_offset,
-				    struct page *page, u64 start, u64 end);
+unsigned int btrfs_verify_data_csum(struct btrfs_logical_bio *lbio,
+				    u32 bio_offset, struct page *page,
+				    u64 start, u64 end);
 struct extent_map *btrfs_get_extent_fiemap(struct btrfs_inode *inode,
 					   u64 start, u64 len);
 noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 7d80e5b22d32..4909f6e5c6f8 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -683,7 +683,7 @@ static int validate_subpage_buffer(struct page *page, u64 start, u64 end,
 	return ret;
 }
 
-int btrfs_validate_metadata_buffer(struct btrfs_io_bio *io_bio,
+int btrfs_validate_metadata_buffer(struct btrfs_logical_bio *lbio,
 				   struct page *page, u64 start, u64 end,
 				   int mirror)
 {
diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
index 0e7e9526b6a8..aad13d02b9ad 100644
--- a/fs/btrfs/disk-io.h
+++ b/fs/btrfs/disk-io.h
@@ -81,7 +81,7 @@ void btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info);
 void btrfs_btree_balance_dirty_nodelay(struct btrfs_fs_info *fs_info);
 void btrfs_drop_and_free_fs_root(struct btrfs_fs_info *fs_info,
 				 struct btrfs_root *root);
-int btrfs_validate_metadata_buffer(struct btrfs_io_bio *io_bio,
+int btrfs_validate_metadata_buffer(struct btrfs_logical_bio *lbio,
 				   struct page *page, u64 start, u64 end,
 				   int mirror);
 blk_status_t btrfs_submit_metadata_bio(struct inode *inode, struct bio *bio,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d3fcf7e8dc48..9ae6fddc4cc8 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -241,7 +241,7 @@ int __init extent_io_init(void)
 		return -ENOMEM;
 
 	if (bioset_init(&btrfs_bioset, BIO_POOL_SIZE,
-			offsetof(struct btrfs_io_bio, bio),
+			offsetof(struct btrfs_logical_bio, bio),
 			BIOSET_NEED_BVECS))
 		goto free_buffer_cache;
 
@@ -2299,7 +2299,7 @@ static int repair_io_failure(struct btrfs_fs_info *fs_info, u64 ino, u64 start,
 	if (btrfs_is_zoned(fs_info))
 		return btrfs_repair_one_zone(fs_info, logical);
 
-	bio = btrfs_io_bio_alloc(1);
+	bio = btrfs_logical_bio_alloc(1);
 	bio->bi_iter.bi_size = 0;
 	map_length = length;
 
@@ -2618,10 +2618,10 @@ int btrfs_repair_one_sector(struct inode *inode,
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
 	struct extent_io_tree *failure_tree = &BTRFS_I(inode)->io_failure_tree;
-	struct btrfs_io_bio *failed_io_bio = btrfs_io_bio(failed_bio);
+	struct btrfs_logical_bio *failed_lbio = btrfs_logical_bio(failed_bio);
 	const int icsum = bio_offset >> fs_info->sectorsize_bits;
 	struct bio *repair_bio;
-	struct btrfs_io_bio *repair_io_bio;
+	struct btrfs_logical_bio *repair_lbio;
 	blk_status_t status;
 
 	btrfs_debug(fs_info,
@@ -2639,24 +2639,24 @@ int btrfs_repair_one_sector(struct inode *inode,
 		return -EIO;
 	}
 
-	repair_bio = btrfs_io_bio_alloc(1);
-	repair_io_bio = btrfs_io_bio(repair_bio);
+	repair_bio = btrfs_logical_bio_alloc(1);
+	repair_lbio = btrfs_logical_bio(repair_bio);
 	repair_bio->bi_opf = REQ_OP_READ;
 	repair_bio->bi_end_io = failed_bio->bi_end_io;
 	repair_bio->bi_iter.bi_sector = failrec->logical >> 9;
 	repair_bio->bi_private = failed_bio->bi_private;
 
-	if (failed_io_bio->csum) {
+	if (failed_lbio->csum) {
 		const u32 csum_size = fs_info->csum_size;
 
-		repair_io_bio->csum = repair_io_bio->csum_inline;
-		memcpy(repair_io_bio->csum,
-		       failed_io_bio->csum + csum_size * icsum, csum_size);
+		repair_lbio->csum = repair_lbio->csum_inline;
+		memcpy(repair_lbio->csum,
+		       failed_lbio->csum + csum_size * icsum, csum_size);
 	}
 
 	bio_add_page(repair_bio, page, failrec->len, pgoff);
-	repair_io_bio->logical = failrec->start;
-	repair_io_bio->iter = repair_bio->bi_iter;
+	repair_lbio->logical = failrec->start;
+	repair_lbio->iter = repair_bio->bi_iter;
 
 	btrfs_debug(btrfs_sb(inode->i_sb),
 		    "repair read error: submitting new read to mirror %d",
@@ -2976,7 +2976,7 @@ static struct extent_buffer *find_extent_buffer_readpage(
 static void end_bio_extent_readpage(struct bio *bio)
 {
 	struct bio_vec *bvec;
-	struct btrfs_io_bio *io_bio = btrfs_io_bio(bio);
+	struct btrfs_logical_bio *lbio = btrfs_logical_bio(bio);
 	struct extent_io_tree *tree, *failure_tree;
 	struct processed_extent processed = { 0 };
 	/*
@@ -3003,7 +3003,7 @@ static void end_bio_extent_readpage(struct bio *bio)
 		btrfs_debug(fs_info,
 			"end_bio_extent_readpage: bi_sector=%llu, err=%d, mirror=%u",
 			bio->bi_iter.bi_sector, bio->bi_status,
-			io_bio->mirror_num);
+			lbio->mirror_num);
 		tree = &BTRFS_I(inode)->io_tree;
 		failure_tree = &BTRFS_I(inode)->io_failure_tree;
 
@@ -3028,14 +3028,14 @@ static void end_bio_extent_readpage(struct bio *bio)
 		end = start + bvec->bv_len - 1;
 		len = bvec->bv_len;
 
-		mirror = io_bio->mirror_num;
+		mirror = lbio->mirror_num;
 		if (likely(uptodate)) {
 			if (is_data_inode(inode)) {
-				error_bitmap = btrfs_verify_data_csum(io_bio,
+				error_bitmap = btrfs_verify_data_csum(lbio,
 						bio_offset, page, start, end);
 				ret = error_bitmap;
 			} else {
-				ret = btrfs_validate_metadata_buffer(io_bio,
+				ret = btrfs_validate_metadata_buffer(lbio,
 					page, start, end, mirror);
 			}
 			if (ret)
@@ -3106,7 +3106,7 @@ static void end_bio_extent_readpage(struct bio *bio)
 	}
 	/* Release the last extent */
 	endio_readpage_release_extent(&processed, NULL, 0, 0, false);
-	btrfs_io_bio_free_csum(io_bio);
+	btrfs_logical_bio_free_csum(lbio);
 	bio_put(bio);
 }
 
@@ -3115,9 +3115,9 @@ static void end_bio_extent_readpage(struct bio *bio)
  * new bio by bio_alloc_bioset as it does not initialize the bytes outside of
  * 'bio' because use of __GFP_ZERO is not supported.
  */
-static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
+static inline void btrfs_logical_bio_init(struct btrfs_logical_bio *lbio)
 {
-	memset(btrfs_bio, 0, offsetof(struct btrfs_io_bio, bio));
+	memset(lbio, 0, offsetof(struct btrfs_logical_bio, bio));
 }
 
 /*
@@ -3129,7 +3129,7 @@ static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
  * This helper uses bioset to allocate the bio, thus it's backed by mempool,
  * and should not fail from process contexts.
  */
-struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
+struct bio *btrfs_logical_bio_alloc(unsigned int nr_iovecs)
 {
 	struct bio *bio;
 
@@ -3137,27 +3137,28 @@ struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
 	if (nr_iovecs == 0)
 		nr_iovecs = BIO_MAX_VECS;
 	bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
-	btrfs_io_bio_init(btrfs_io_bio(bio));
+	btrfs_logical_bio_init(btrfs_logical_bio(bio));
 	return bio;
 }
 
-struct bio *btrfs_bio_clone(struct bio *bio)
+struct bio *btrfs_logical_bio_clone(struct bio *bio)
 {
-	struct btrfs_io_bio *btrfs_bio;
+	struct btrfs_logical_bio *lbio;
 	struct bio *new;
 
 	/* Bio allocation backed by a bioset does not fail */
 	new = bio_clone_fast(bio, GFP_NOFS, &btrfs_bioset);
-	btrfs_bio = btrfs_io_bio(new);
-	btrfs_io_bio_init(btrfs_bio);
-	btrfs_bio->iter = bio->bi_iter;
+	lbio = btrfs_logical_bio(new);
+	btrfs_logical_bio_init(lbio);
+	lbio->iter = bio->bi_iter;
 	return new;
 }
 
-struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size)
+struct bio *btrfs_logical_bio_clone_partial(struct bio *orig, u64 offset,
+					    u64 size)
 {
 	struct bio *bio;
-	struct btrfs_io_bio *btrfs_bio;
+	struct btrfs_logical_bio *lbio;
 
 	ASSERT(offset <= UINT_MAX && size <= UINT_MAX);
 
@@ -3165,11 +3166,11 @@ struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size)
 	bio = bio_clone_fast(orig, GFP_NOFS, &btrfs_bioset);
 	ASSERT(bio);
 
-	btrfs_bio = btrfs_io_bio(bio);
-	btrfs_io_bio_init(btrfs_bio);
+	lbio = btrfs_logical_bio(bio);
+	btrfs_logical_bio_init(lbio);
 
 	bio_trim(bio, offset >> 9, size >> 9);
-	btrfs_bio->iter = bio->bi_iter;
+	lbio->iter = bio->bi_iter;
 	return bio;
 }
 
@@ -3303,7 +3304,7 @@ static int alloc_new_bio(struct btrfs_inode *inode,
 	struct bio *bio;
 	int ret;
 
-	bio = btrfs_io_bio_alloc(0);
+	bio = btrfs_logical_bio_alloc(0);
 	/*
 	 * For compressed page range, its disk_bytenr is always @disk_bytenr
 	 * passed in, no matter if we have added any range into previous bio.
@@ -3338,7 +3339,7 @@ static int alloc_new_bio(struct btrfs_inode *inode,
 			goto error;
 		}
 
-		btrfs_io_bio(bio)->device = device;
+		btrfs_logical_bio(bio)->device = device;
 	}
 	return 0;
 error:
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 81fa68eaa699..7eac8536b162 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -278,9 +278,10 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end);
 void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
 				  struct page *locked_page,
 				  u32 bits_to_clear, unsigned long page_ops);
-struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs);
-struct bio *btrfs_bio_clone(struct bio *bio);
-struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size);
+struct bio *btrfs_logical_bio_alloc(unsigned int nr_iovecs);
+struct bio *btrfs_logical_bio_clone(struct bio *bio);
+struct bio *btrfs_logical_bio_clone_partial(struct bio *orig, u64 offset,
+					    u64 size);
 
 void end_extent_writepage(struct page *page, int err, u64 start, u64 end);
 int btrfs_repair_eb_io_failure(const struct extent_buffer *eb, int mirror_num);
diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
index 2673c6ba7a4e..6e788eb3c689 100644
--- a/fs/btrfs/file-item.c
+++ b/fs/btrfs/file-item.c
@@ -358,7 +358,7 @@ static int search_file_offset_in_bio(struct bio *bio, struct inode *inode,
  * @dst: Buffer of size nblocks * btrfs_super_csum_size() used to return
  *       checksum (nblocks = bio->bi_iter.bi_size / fs_info->sectorsize). If
  *       NULL, the checksum buffer is allocated and returned in
- *       btrfs_io_bio(bio)->csum instead.
+ *       btrfs_bio(bio)->csum instead.
  *
  * Return: BLK_STS_RESOURCE if allocating memory fails, BLK_STS_OK otherwise.
  */
@@ -397,19 +397,19 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
 		return BLK_STS_RESOURCE;
 
 	if (!dst) {
-		struct btrfs_io_bio *btrfs_bio = btrfs_io_bio(bio);
+		struct btrfs_logical_bio *logical_bio = btrfs_logical_bio(bio);
 
 		if (nblocks * csum_size > BTRFS_BIO_INLINE_CSUM_SIZE) {
-			btrfs_bio->csum = kmalloc_array(nblocks, csum_size,
+			logical_bio->csum = kmalloc_array(nblocks, csum_size,
 							GFP_NOFS);
-			if (!btrfs_bio->csum) {
+			if (!logical_bio->csum) {
 				btrfs_free_path(path);
 				return BLK_STS_RESOURCE;
 			}
 		} else {
-			btrfs_bio->csum = btrfs_bio->csum_inline;
+			logical_bio->csum = logical_bio->csum_inline;
 		}
-		csum = btrfs_bio->csum;
+		csum = logical_bio->csum;
 	} else {
 		csum = dst;
 	}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index a3ce50289888..52d8d5c9bab1 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -3212,7 +3212,7 @@ void btrfs_writepage_endio_finish_ordered(struct btrfs_inode *inode,
  *
  * The length of such check is always one sector size.
  */
-static int check_data_csum(struct inode *inode, struct btrfs_io_bio *io_bio,
+static int check_data_csum(struct inode *inode, struct btrfs_logical_bio *lbio,
 			   u32 bio_offset, struct page *page, u32 pgoff,
 			   u64 start)
 {
@@ -3228,7 +3228,7 @@ static int check_data_csum(struct inode *inode, struct btrfs_io_bio *io_bio,
 	ASSERT(pgoff + len <= PAGE_SIZE);
 
 	offset_sectors = bio_offset >> fs_info->sectorsize_bits;
-	csum_expected = ((u8 *)io_bio->csum) + offset_sectors * csum_size;
+	csum_expected = ((u8 *)lbio->csum) + offset_sectors * csum_size;
 
 	kaddr = kmap_atomic(page);
 	shash->tfm = fs_info->csum_shash;
@@ -3242,9 +3242,9 @@ static int check_data_csum(struct inode *inode, struct btrfs_io_bio *io_bio,
 	return 0;
 zeroit:
 	btrfs_print_data_csum_error(BTRFS_I(inode), start, csum, csum_expected,
-				    io_bio->mirror_num);
-	if (io_bio->device)
-		btrfs_dev_stat_inc_and_print(io_bio->device,
+				    lbio->mirror_num);
+	if (lbio->device)
+		btrfs_dev_stat_inc_and_print(lbio->device,
 					     BTRFS_DEV_STAT_CORRUPTION_ERRS);
 	memset(kaddr + pgoff, 1, len);
 	flush_dcache_page(page);
@@ -3264,8 +3264,9 @@ static int check_data_csum(struct inode *inode, struct btrfs_io_bio *io_bio,
  * Return a bitmap where bit set means a csum mismatch, and bit not set means
  * csum match.
  */
-unsigned int btrfs_verify_data_csum(struct btrfs_io_bio *io_bio, u32 bio_offset,
-				    struct page *page, u64 start, u64 end)
+unsigned int btrfs_verify_data_csum(struct btrfs_logical_bio *lbio,
+				    u32 bio_offset, struct page *page,
+				    u64 start, u64 end)
 {
 	struct inode *inode = page->mapping->host;
 	struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
@@ -3283,14 +3284,14 @@ unsigned int btrfs_verify_data_csum(struct btrfs_io_bio *io_bio, u32 bio_offset,
 	 * For subpage case, above PageChecked is not safe as it's not subpage
 	 * compatible.
 	 * But for now only cow fixup and compressed read utilize PageChecked
-	 * flag, while in this context we can easily use io_bio->csum to
+	 * flag, while in this context we can easily use lbio->csum to
 	 * determine if we really need to do csum verification.
 	 *
-	 * So for now, just exit if io_bio->csum is NULL, as it means it's
+	 * So for now, just exit if lbio->csum is NULL, as it means it's
 	 * compressed read, and its compressed data csum has already been
 	 * verified.
 	 */
-	if (io_bio->csum == NULL)
+	if (lbio->csum == NULL)
 		return 0;
 
 	if (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)
@@ -3317,7 +3318,7 @@ unsigned int btrfs_verify_data_csum(struct btrfs_io_bio *io_bio, u32 bio_offset,
 					  EXTENT_NODATASUM);
 			continue;
 		}
-		ret = check_data_csum(inode, io_bio, bio_offset, page, pg_off,
+		ret = check_data_csum(inode, lbio, bio_offset, page, pg_off,
 				      page_offset(page) + pg_off);
 		if (ret < 0) {
 			const int nr_bit = (pg_off - offset_in_page(start)) >>
@@ -8077,7 +8078,7 @@ static blk_status_t submit_dio_repair_bio(struct inode *inode, struct bio *bio,
 }
 
 static blk_status_t btrfs_check_read_dio_bio(struct inode *inode,
-					     struct btrfs_io_bio *io_bio,
+					     struct btrfs_logical_bio *lbio,
 					     const bool uptodate)
 {
 	struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
@@ -8087,11 +8088,11 @@ static blk_status_t btrfs_check_read_dio_bio(struct inode *inode,
 	const bool csum = !(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM);
 	struct bio_vec bvec;
 	struct bvec_iter iter;
-	u64 start = io_bio->logical;
+	u64 start = lbio->logical;
 	u32 bio_offset = 0;
 	blk_status_t err = BLK_STS_OK;
 
-	__bio_for_each_segment(bvec, &io_bio->bio, iter, io_bio->iter) {
+	__bio_for_each_segment(bvec, &lbio->bio, iter, lbio->iter) {
 		unsigned int i, nr_sectors, pgoff;
 
 		nr_sectors = BTRFS_BYTES_TO_BLKS(fs_info, bvec.bv_len);
@@ -8099,7 +8100,7 @@ static blk_status_t btrfs_check_read_dio_bio(struct inode *inode,
 		for (i = 0; i < nr_sectors; i++) {
 			ASSERT(pgoff < PAGE_SIZE);
 			if (uptodate &&
-			    (!csum || !check_data_csum(inode, io_bio,
+			    (!csum || !check_data_csum(inode, lbio,
 						       bio_offset, bvec.bv_page,
 						       pgoff, start))) {
 				clean_io_failure(fs_info, failure_tree, io_tree,
@@ -8109,12 +8110,12 @@ static blk_status_t btrfs_check_read_dio_bio(struct inode *inode,
 			} else {
 				int ret;
 
-				ASSERT((start - io_bio->logical) < UINT_MAX);
+				ASSERT((start - lbio->logical) < UINT_MAX);
 				ret = btrfs_repair_one_sector(inode,
-						&io_bio->bio,
-						start - io_bio->logical,
+						&lbio->bio,
+						start - lbio->logical,
 						bvec.bv_page, pgoff,
-						start, io_bio->mirror_num,
+						start, lbio->mirror_num,
 						submit_dio_repair_bio);
 				if (ret)
 					err = errno_to_blk_status(ret);
@@ -8156,8 +8157,8 @@ static void btrfs_end_dio_bio(struct bio *bio)
 			   bio->bi_iter.bi_size, err);
 
 	if (bio_op(bio) == REQ_OP_READ) {
-		err = btrfs_check_read_dio_bio(dip->inode, btrfs_io_bio(bio),
-					       !err);
+		err = btrfs_check_read_dio_bio(dip->inode,
+					       btrfs_logical_bio(bio), !err);
 	}
 
 	if (err)
@@ -8208,7 +8209,7 @@ static inline blk_status_t btrfs_submit_dio_bio(struct bio *bio,
 		csum_offset = file_offset - dip->logical_offset;
 		csum_offset >>= fs_info->sectorsize_bits;
 		csum_offset *= fs_info->csum_size;
-		btrfs_io_bio(bio)->csum = dip->csums + csum_offset;
+		btrfs_logical_bio(bio)->csum = dip->csums + csum_offset;
 	}
 map:
 	ret = btrfs_map_bio(fs_info, bio, 0);
@@ -8320,10 +8321,11 @@ static blk_qc_t btrfs_submit_direct(struct inode *inode, struct iomap *iomap,
 		 * This will never fail as it's passing GPF_NOFS and
 		 * the allocation is backed by btrfs_bioset.
 		 */
-		bio = btrfs_bio_clone_partial(dio_bio, clone_offset, clone_len);
+		bio = btrfs_logical_bio_clone_partial(dio_bio, clone_offset,
+						      clone_len);
 		bio->bi_private = dip;
 		bio->bi_end_io = btrfs_end_dio_bio;
-		btrfs_io_bio(bio)->logical = file_offset;
+		btrfs_logical_bio(bio)->logical = file_offset;
 
 		if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
 			status = extract_ordered_extent(BTRFS_I(inode), bio,
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 96c149416f99..705e2f243459 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1104,8 +1104,8 @@ static int rbio_add_io_page(struct btrfs_raid_bio *rbio,
 	}
 
 	/* put a new bio on the list */
-	bio = btrfs_io_bio_alloc(bio_max_len >> PAGE_SHIFT ?: 1);
-	btrfs_io_bio(bio)->device = stripe->dev;
+	bio = btrfs_logical_bio_alloc(bio_max_len >> PAGE_SHIFT ?: 1);
+	btrfs_logical_bio(bio)->device = stripe->dev;
 	bio->bi_iter.bi_size = 0;
 	bio_set_dev(bio, stripe->dev->bdev);
 	bio->bi_iter.bi_sector = disk_start >> 9;
@@ -1158,7 +1158,7 @@ static void index_rbio_pages(struct btrfs_raid_bio *rbio)
 		page_index = stripe_offset >> PAGE_SHIFT;
 
 		if (bio_flagged(bio, BIO_CLONED))
-			bio->bi_iter = btrfs_io_bio(bio)->iter;
+			bio->bi_iter = btrfs_logical_bio(bio)->iter;
 
 		bio_for_each_segment(bvec, bio, iter) {
 			rbio->bio_pages[page_index + i] = bvec.bv_page;
@@ -2124,7 +2124,7 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
 
 	if (generic_io) {
 		ASSERT(bioc->mirror_num == mirror_num);
-		btrfs_io_bio(bio)->mirror_num = mirror_num;
+		btrfs_logical_bio(bio)->mirror_num = mirror_num;
 	}
 
 	rbio = alloc_rbio(fs_info, bioc, stripe_len);
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index bd38e32ef5dc..c1dfe7f3ace6 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1423,7 +1423,7 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 	if (!first_page->dev->bdev)
 		goto out;
 
-	bio = btrfs_io_bio_alloc(BIO_MAX_VECS);
+	bio = btrfs_logical_bio_alloc(0);
 	bio_set_dev(bio, first_page->dev->bdev);
 
 	for (page_num = 0; page_num < sblock->page_count; page_num++) {
@@ -1480,7 +1480,7 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 		}
 
 		WARN_ON(!spage->page);
-		bio = btrfs_io_bio_alloc(1);
+		bio = btrfs_logical_bio_alloc(1);
 		bio_set_dev(bio, spage->dev->bdev);
 
 		bio_add_page(bio, spage->page, fs_info->sectorsize, 0);
@@ -1562,7 +1562,7 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
 			return -EIO;
 		}
 
-		bio = btrfs_io_bio_alloc(1);
+		bio = btrfs_logical_bio_alloc(1);
 		bio_set_dev(bio, spage_bad->dev->bdev);
 		bio->bi_iter.bi_sector = spage_bad->physical >> 9;
 		bio->bi_opf = REQ_OP_WRITE;
@@ -1676,7 +1676,7 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
 		sbio->dev = sctx->wr_tgtdev;
 		bio = sbio->bio;
 		if (!bio) {
-			bio = btrfs_io_bio_alloc(sctx->pages_per_wr_bio);
+			bio = btrfs_logical_bio_alloc(sctx->pages_per_wr_bio);
 			sbio->bio = bio;
 		}
 
@@ -2102,7 +2102,7 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 		sbio->dev = spage->dev;
 		bio = sbio->bio;
 		if (!bio) {
-			bio = btrfs_io_bio_alloc(sctx->pages_per_rd_bio);
+			bio = btrfs_logical_bio_alloc(sctx->pages_per_rd_bio);
 			sbio->bio = bio;
 		}
 
@@ -2226,7 +2226,7 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 		goto bioc_out;
 	}
 
-	bio = btrfs_io_bio_alloc(0);
+	bio = btrfs_logical_bio_alloc(0);
 	bio->bi_iter.bi_sector = logical >> 9;
 	bio->bi_private = sblock;
 	bio->bi_end_io = scrub_missing_raid56_end_io;
@@ -2842,7 +2842,7 @@ static void scrub_parity_check_and_repair(struct scrub_parity *sparity)
 	if (ret || !bioc || !bioc->raid_map)
 		goto bioc_out;
 
-	bio = btrfs_io_bio_alloc(0);
+	bio = btrfs_logical_bio_alloc(0);
 	bio->bi_iter.bi_sector = sparity->logic_start >> 9;
 	bio->bi_private = sparity;
 	bio->bi_end_io = scrub_parity_bio_endio;
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 592d19f95065..9c609ed2606f 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6561,7 +6561,7 @@ static void btrfs_end_bio(struct bio *bio)
 		atomic_inc(&bioc->error);
 		if (bio->bi_status == BLK_STS_IOERR ||
 		    bio->bi_status == BLK_STS_TARGET) {
-			struct btrfs_device *dev = btrfs_io_bio(bio)->device;
+			struct btrfs_device *dev = btrfs_logical_bio(bio)->device;
 
 			ASSERT(dev->bdev);
 			if (btrfs_op(bio) == BTRFS_MAP_WRITE)
@@ -6587,7 +6587,7 @@ static void btrfs_end_bio(struct bio *bio)
 			bio = bioc->orig_bio;
 		}
 
-		btrfs_io_bio(bio)->mirror_num = bioc->mirror_num;
+		btrfs_logical_bio(bio)->mirror_num = bioc->mirror_num;
 		/* only send an error to the higher layers if it is
 		 * beyond the tolerance of the btrfs bio
 		 */
@@ -6613,7 +6613,7 @@ static void submit_stripe_bio(struct btrfs_io_context *bioc, struct bio *bio,
 	struct btrfs_fs_info *fs_info = bioc->fs_info;
 
 	bio->bi_private = bioc;
-	btrfs_io_bio(bio)->device = dev;
+	btrfs_logical_bio(bio)->device = dev;
 	bio->bi_end_io = btrfs_end_bio;
 	bio->bi_iter.bi_sector = physical >> 9;
 	/*
@@ -6649,7 +6649,7 @@ static void bioc_error(struct btrfs_io_context *bioc, struct bio *bio, u64 logic
 		/* Should be the original bio. */
 		WARN_ON(bio != bioc->orig_bio);
 
-		btrfs_io_bio(bio)->mirror_num = bioc->mirror_num;
+		btrfs_logical_bio(bio)->mirror_num = bioc->mirror_num;
 		bio->bi_iter.bi_sector = logical >> 9;
 		if (atomic_read(&bioc->error) > bioc->max_errors)
 			bio->bi_status = BLK_STS_IOERR;
@@ -6724,7 +6724,7 @@ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
 		}
 
 		if (dev_nr < total_devs - 1)
-			bio = btrfs_bio_clone(first_bio);
+			bio = btrfs_logical_bio_clone(first_bio);
 		else
 			bio = first_bio;
 
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index b69755fc0e0d..6b0edca6ed2a 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -304,38 +304,37 @@ struct btrfs_fs_devices {
 				/ sizeof(struct btrfs_stripe) + 1)
 
 /*
- * we need the mirror number and stripe index to be passed around
- * the call chain while we are processing end_io (especially errors).
- * Really, what we need is a btrfs_io_context structure that has this info
- * and is properly sized with its stripe array, but we're not there
- * quite yet.  We have our own btrfs bioset, and all of the bios
- * we allocate are actually btrfs_io_bios.  We'll cram as much of
- * struct btrfs_io_context as we can into this over time.
+ * Extra info to pass along with the bio.
+ *
+ * Mostly for btrfs-specific features like csum and mirror_num.
  */
-struct btrfs_io_bio {
+struct btrfs_logical_bio {
 	unsigned int mirror_num;
+
+	/* @device is for stripe IO submission. */
 	struct btrfs_device *device;
 	u64 logical;
 	u8 *csum;
 	u8 csum_inline[BTRFS_BIO_INLINE_CSUM_SIZE];
 	struct bvec_iter iter;
+
 	/*
 	 * This member must come last, bio_alloc_bioset will allocate enough
-	 * bytes for entire btrfs_io_bio but relies on bio being last.
+	 * bytes for entire btrfs_bio but relies on bio being last.
 	 */
 	struct bio bio;
 };
 
-static inline struct btrfs_io_bio *btrfs_io_bio(struct bio *bio)
+static inline struct btrfs_logical_bio *btrfs_logical_bio(struct bio *bio)
 {
-	return container_of(bio, struct btrfs_io_bio, bio);
+	return container_of(bio, struct btrfs_logical_bio, bio);
 }
 
-static inline void btrfs_io_bio_free_csum(struct btrfs_io_bio *io_bio)
+static inline void btrfs_logical_bio_free_csum(struct btrfs_logical_bio *lbio)
 {
-	if (io_bio->csum != io_bio->csum_inline) {
-		kfree(io_bio->csum);
-		io_bio->csum = NULL;
+	if (lbio->csum != lbio->csum_inline) {
+		kfree(lbio->csum);
+		lbio->csum = NULL;
 	}
 }
 
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context
  2021-09-15  7:17 ` [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context Qu Wenruo
@ 2021-09-17 11:19   ` David Sterba
  2021-09-17 11:24     ` Qu Wenruo
  0 siblings, 1 reply; 22+ messages in thread
From: David Sterba @ 2021-09-17 11:19 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Wed, Sep 15, 2021 at 03:17:16PM +0800, Qu Wenruo wrote:
> The structure btrfs_bio is used by two different sites:
> 
> - bio->bi_private for mirror based profiles
>   For those profiles (SINGLE/DUP/RAID1*/RAID10), this structures records

Why is SINGLE here?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context
  2021-09-17 11:19   ` David Sterba
@ 2021-09-17 11:24     ` Qu Wenruo
  2021-09-17 11:27       ` Qu Wenruo
  0 siblings, 1 reply; 22+ messages in thread
From: Qu Wenruo @ 2021-09-17 11:24 UTC (permalink / raw)
  To: dsterba, Qu Wenruo, linux-btrfs



On 2021/9/17 19:19, David Sterba wrote:
> On Wed, Sep 15, 2021 at 03:17:16PM +0800, Qu Wenruo wrote:
>> The structure btrfs_bio is used by two different sites:
>>
>> - bio->bi_private for mirror based profiles
>>    For those profiles (SINGLE/DUP/RAID1*/RAID10), this structures records
>
> Why is SINGLE here?
>
For single we use the same routine as RAID1/DUP/etc.: it's
submit_stripe_bio() doing the remapping.

Thus there are really only two types, non-RAID56 and RAID56.
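
Roughly, the submission path in btrfs_map_bio() looks like this
(simplified sketch, not the exact code):

	if ((bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
	    (btrfs_op(bio) == BTRFS_MAP_WRITE || mirror_num > 1)) {
		/* RAID56 goes through the parity write/recover path */
		ret = raid56_parity_write(fs_info, bio, bioc, map_length);
	} else {
		/* SINGLE/DUP/RAID1/RAID10: one bio per mapped stripe */
		for (dev_nr = 0; dev_nr < bioc->num_stripes; dev_nr++)
			submit_stripe_bio(bioc, bio,
					  bioc->stripes[dev_nr].physical,
					  bioc->stripes[dev_nr].dev);
	}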

Thanks,
Qu

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context
  2021-09-17 11:24     ` Qu Wenruo
@ 2021-09-17 11:27       ` Qu Wenruo
  2021-09-17 11:33         ` David Sterba
  0 siblings, 1 reply; 22+ messages in thread
From: Qu Wenruo @ 2021-09-17 11:27 UTC (permalink / raw)
  To: Qu Wenruo, dsterba, linux-btrfs



On 2021/9/17 19:24, Qu Wenruo wrote:
> 
> 
> On 2021/9/17 19:19, David Sterba wrote:
>> On Wed, Sep 15, 2021 at 03:17:16PM +0800, Qu Wenruo wrote:
>>> The structure btrfs_bio is used by two different sites:
>>>
>>> - bio->bi_private for mirror based profiles
>>>    For those profiles (SINGLE/DUP/RAID1*/RAID10), this structures 
>>> records
>>
>> Why is SINGLE here?
>>
> For single we use the same routine as RAID1/DUP/etc, it's
> submit_stripe_bio() doing the remapping.
> 
> Thus there is really only two types, non-RAID56 and RAID56.

And there is not really a "SINGLE" profile in btrfs.

As even for the SINGLE profile, we may need to submit two bios to two
different devices (one is the current device, the other is the
dev-replace target).
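
That duplication is handled in __btrfs_map_block(), roughly:

	if (dev_replace_is_ongoing && dev_replace->tgtdev != NULL &&
	    need_full_stripe(op))
		handle_ops_on_dev_replace(op, &bioc, dev_replace, logical,
					  &num_stripes, &max_errors);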

Thanks,
Qu

> 
> Thanks,
> Qu
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context
  2021-09-17 11:27       ` Qu Wenruo
@ 2021-09-17 11:33         ` David Sterba
  0 siblings, 0 replies; 22+ messages in thread
From: David Sterba @ 2021-09-17 11:33 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Qu Wenruo, dsterba, linux-btrfs

On Fri, Sep 17, 2021 at 07:27:30PM +0800, Qu Wenruo wrote:
> 
> 
> On 2021/9/17 19:24, Qu Wenruo wrote:
> > 
> > 
> > On 2021/9/17 19:19, David Sterba wrote:
> >> On Wed, Sep 15, 2021 at 03:17:16PM +0800, Qu Wenruo wrote:
> >>> The structure btrfs_bio is used by two different sites:
> >>>
> >>> - bio->bi_private for mirror based profiles
> >>>    For those profiles (SINGLE/DUP/RAID1*/RAID10), this structures 
> >>> records
> >>
> >> Why is SINGLE here?
> >>
> > For single we use the same routine as RAID1/DUP/etc, it's
> > submit_stripe_bio() doing the remapping.
> > 
> > Thus there is really only two types, non-RAID56 and RAID56.
> 
> And there is no really "SINGLE" profile in btrfs.
> 
> As even for SINGLE profile, we may need to submit two bios to two 
> different devices (one is the current device, the other is the 
> dev-replace target).

Ok, thanks. It's a bit confusing to mention 'single' with mirrored
profiles but here it's on the implementation level where it could be
written on more than one device. We don't have a terminology for that so
be it.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio
  2021-09-15  7:17 ` [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio Qu Wenruo
@ 2021-09-17 11:39   ` David Sterba
  2021-09-20  7:04   ` Nikolay Borisov
  1 sibling, 0 replies; 22+ messages in thread
From: David Sterba @ 2021-09-17 11:39 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Wed, Sep 15, 2021 at 03:17:18PM +0800, Qu Wenruo wrote:
> Previously we have "struct btrfs_bio", which records IO context for
> mirrored IO and RAID56, and "strcut btrfs_io_bio", which records extra
> btrfs specific info for logical bytenr bio.
> 
> With "strcut btrfs_bio" renamed to "struct btrfs_io_context", we are
> safe to rename "strcut btrfs_io_bio" to "strcut btrfs_logical_bio" which
> is a more suitable name now.
> 
> Although the name, "btrfs_logical_bio", is a little long and name
> "btrfs_bio" can be much shorter, "btrfs_bio" conflicts with previous
> "btrfs_bio" structure and can cause a lot of problems for backports.
> 
> Thus here we choose the name "btrfs_logical_bio", which also emphasis
> those bios all work at logical bytenr.

After reading through the whole patch I agree with the naming, though
yeah it's a bit long, but we've been using this wordy naming. For
identifiers it's fine to use lbio and it's now clear from the context
that it's about the btrfs-specific features.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3]  btrfs: btrfs_bio and btrfs_io_bio rename
  2021-09-15  7:17 [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename Qu Wenruo
                   ` (2 preceding siblings ...)
  2021-09-15  7:17 ` [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio Qu Wenruo
@ 2021-09-17 11:39 ` David Sterba
  3 siblings, 0 replies; 22+ messages in thread
From: David Sterba @ 2021-09-17 11:39 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Wed, Sep 15, 2021 at 03:17:15PM +0800, Qu Wenruo wrote:
> The branch can be fetched from github, and is the preferred way to grab
> the code, as this patchset changed quite a lot of code.
> https://github.com/adam900710/linux/tree/chunk_refactor
> 
> There are two structure, btrfs_io_bio and btrfs_bio, which have very
> similar names but completely different meanings.
> 
> Btrfs_io_bio mostly works at logical bytenr layer (its
> bio->bi_iter.bi_sector points to btrfs logical bytenr), and just
> contains extra info like csum and mirror_num.
> 
> And btrfs_io_bio is in fact the most utilized bio, as all data/metadata
> IO is using btrfs_io_bio.
> 
> While btrfs_bio is completely a helper structure for mirrored IO
> submission (utilized by SINGLE/DUP/RAID1/RAID10), and contains RAID56
> maps for RAID56 (it doesn't utilize this structure for IO submission
> tracking).
> 
> Such naming is completely anti-human.
> 
> So this patchset will do the following renaming:
> 
> - btrfs_bio -> btrfs_io_context
>   Since it's not really used by all bios (only mirrored profiles utilize
>   it), and it contains extra info for RAID56, it's not proper to name it
>   with _bio suffix.
> 
>   Later we can integrate btrfs_io_context pointer into the new
>   btrfs_bio.
> 
> - btrfs_io_bio -> btrfs_logical_bio
>   It is intentional not to reuse "btrfs_bio", which could cause
>   confusion for later backport.
> 
> Changelog:
> v2:
> - Rename btrfs_bio to btrfs_io_context (bioc)
> - Rename btrfs_io_bio to btrfs_bio
>   Both suggested by Nikolay
> 
> v3:
> - Fixes whiespace problems
>   Caused by "dwi" vim commands
> 
> - Update several modified comments
> 
> - Rename btrfs_io_bio to btrfs_logical_bio
>   To avoid backport confusion.
> 
> Qu Wenruo (3):
>   btrfs: rename btrfs_bio to btrfs_io_context
>   btrfs: remove btrfs_bio_alloc() helper
>   btrfs: rename struct btrfs_io_bio to btrfs_logical_bio

Added to misc-next, thanks.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-15  7:17 ` [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper Qu Wenruo
@ 2021-09-17 12:27   ` Nikolay Borisov
  2021-09-17 12:33     ` Qu Wenruo
  2021-09-17 12:43     ` David Sterba
  2021-09-23  5:57   ` Qu Wenruo
  1 sibling, 2 replies; 22+ messages in thread
From: Nikolay Borisov @ 2021-09-17 12:27 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs; +Cc: David Sterba



On 15.09.21 г. 10:17, Qu Wenruo wrote:
> The helper btrfs_bio_alloc() is almost the same as btrfs_io_bio_alloc(),
> except it's allocating using BIO_MAX_VECS as @nr_iovecs, and initialize
> bio->bi_iter.bi_sector.
> 
> However the naming itself is not using "btrfs_io_bio" to indicate its
> parameter is "strcut btrfs_io_bio" and can be easily confused with
> "struct btrfs_bio".
> 
> Considering assigned bio->bi_iter.bi_sector is such a simple work and
> there are already tons of call sites doing that manually, there is no
> need to do that in a helper.
> 
> Remove btrfs_bio_alloc() helper, and enhance btrfs_io_bio_alloc()
> function to provide a fail-safe value for its @nr_iovecs.
> 
> And then replace all btrfs_bio_alloc() callers with
> btrfs_io_bio_alloc().
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/compression.c | 12 ++++++++----
>  fs/btrfs/extent_io.c   | 33 +++++++++++++++------------------
>  fs/btrfs/extent_io.h   |  1 -
>  3 files changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 7869ad12bc6e..2475dc0b1c22 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -418,7 +418,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>  	cb->orig_bio = NULL;
>  	cb->nr_pages = nr_pages;
>  
> -	bio = btrfs_bio_alloc(first_byte);
> +	bio = btrfs_io_bio_alloc(0);
> +	bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>  	bio->bi_opf = bio_op | write_flags;
>  	bio->bi_private = cb;
>  	bio->bi_end_io = end_compressed_bio_write;
> @@ -490,7 +491,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>  				bio_endio(bio);
>  			}
>  
> -			bio = btrfs_bio_alloc(first_byte);
> +			bio = btrfs_io_bio_alloc(0);
> +			bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>  			bio->bi_opf = bio_op | write_flags;
>  			bio->bi_private = cb;
>  			bio->bi_end_io = end_compressed_bio_write;
> @@ -748,7 +750,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  	/* include any pages we added in add_ra-bio_pages */
>  	cb->len = bio->bi_iter.bi_size;
>  
> -	comp_bio = btrfs_bio_alloc(cur_disk_byte);
> +	comp_bio = btrfs_io_bio_alloc(0);
> +	comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
>  	comp_bio->bi_opf = REQ_OP_READ;
>  	comp_bio->bi_private = cb;
>  	comp_bio->bi_end_io = end_compressed_bio_read;
> @@ -806,7 +809,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  				bio_endio(comp_bio);
>  			}
>  
> -			comp_bio = btrfs_bio_alloc(cur_disk_byte);
> +			comp_bio = btrfs_io_bio_alloc(0);
> +			comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
>  			comp_bio->bi_opf = REQ_OP_READ;
>  			comp_bio->bi_private = cb;
>  			comp_bio->bi_end_io = end_compressed_bio_read;
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 1aed03ef5f49..d3fcf7e8dc48 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -3121,16 +3121,22 @@ static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
>  }
>  
>  /*
> - * The following helpers allocate a bio. As it's backed by a bioset, it'll
> - * never fail.  We're returning a bio right now but you can call btrfs_io_bio
> - * for the appropriate container_of magic
> + * Allocate a btrfs_io_bio, with @nr_iovecs as maximum iovecs.
> + *
> + * If @nr_iovecs is 0, it will use BIO_MAX_VECS as @nr_iovces instead.
> + * This behavior is to provide a fail-safe default value.
> + *
> + * This helper uses bioset to allocate the bio, thus it's backed by mempool,
> + * and should not fail from process contexts.
>   */
> -struct bio *btrfs_bio_alloc(u64 first_byte)
> +struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
>  {
>  	struct bio *bio;
>  
> -	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &btrfs_bioset);
> -	bio->bi_iter.bi_sector = first_byte >> 9;
> +	ASSERT(nr_iovecs <= BIO_MAX_VECS);
> +	if (nr_iovecs == 0)
> +		nr_iovecs = BIO_MAX_VECS;

hell no! How come passing 0 actually means BIO_MAX_VECS? Instead of
having 0 everywhere and having the function translate this to
BIO_MAX_VECS, simply pass BIO_MAX_VECS in every call site where it's
needed.
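
For illustration, the call-site style argued for here, reusing the
compression write path from the quoted hunk (only the 0 is swapped for
the named constant):

	bio = btrfs_io_bio_alloc(BIO_MAX_VECS);
	bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
	bio->bi_opf = bio_op | write_flags;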

David, please either fix the patch in the tree or retract it. Let's try
and refrain from adding such "gems" to the code base.

<snip>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-17 12:27   ` Nikolay Borisov
@ 2021-09-17 12:33     ` Qu Wenruo
  2021-09-17 12:34       ` Nikolay Borisov
  2021-09-17 12:43     ` David Sterba
  1 sibling, 1 reply; 22+ messages in thread
From: Qu Wenruo @ 2021-09-17 12:33 UTC (permalink / raw)
  To: Nikolay Borisov, Qu Wenruo, linux-btrfs; +Cc: David Sterba



On 2021/9/17 20:27, Nikolay Borisov wrote:
>
>
> On 15.09.21 г. 10:17, Qu Wenruo wrote:
>> The helper btrfs_bio_alloc() is almost the same as btrfs_io_bio_alloc(),
>> except it's allocating using BIO_MAX_VECS as @nr_iovecs, and initialize
>> bio->bi_iter.bi_sector.
>>
>> However the naming itself is not using "btrfs_io_bio" to indicate its
>> parameter is "strcut btrfs_io_bio" and can be easily confused with
>> "struct btrfs_bio".
>>
>> Considering assigned bio->bi_iter.bi_sector is such a simple work and
>> there are already tons of call sites doing that manually, there is no
>> need to do that in a helper.
>>
>> Remove btrfs_bio_alloc() helper, and enhance btrfs_io_bio_alloc()
>> function to provide a fail-safe value for its @nr_iovecs.
>>
>> And then replace all btrfs_bio_alloc() callers with
>> btrfs_io_bio_alloc().
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>> ---
>>   fs/btrfs/compression.c | 12 ++++++++----
>>   fs/btrfs/extent_io.c   | 33 +++++++++++++++------------------
>>   fs/btrfs/extent_io.h   |  1 -
>>   3 files changed, 23 insertions(+), 23 deletions(-)
>>
>> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
>> index 7869ad12bc6e..2475dc0b1c22 100644
>> --- a/fs/btrfs/compression.c
>> +++ b/fs/btrfs/compression.c
>> @@ -418,7 +418,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>>   	cb->orig_bio = NULL;
>>   	cb->nr_pages = nr_pages;
>>
>> -	bio = btrfs_bio_alloc(first_byte);
>> +	bio = btrfs_io_bio_alloc(0);
>> +	bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>>   	bio->bi_opf = bio_op | write_flags;
>>   	bio->bi_private = cb;
>>   	bio->bi_end_io = end_compressed_bio_write;
>> @@ -490,7 +491,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>>   				bio_endio(bio);
>>   			}
>>
>> -			bio = btrfs_bio_alloc(first_byte);
>> +			bio = btrfs_io_bio_alloc(0);
>> +			bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>>   			bio->bi_opf = bio_op | write_flags;
>>   			bio->bi_private = cb;
>>   			bio->bi_end_io = end_compressed_bio_write;
>> @@ -748,7 +750,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>>   	/* include any pages we added in add_ra-bio_pages */
>>   	cb->len = bio->bi_iter.bi_size;
>>
>> -	comp_bio = btrfs_bio_alloc(cur_disk_byte);
>> +	comp_bio = btrfs_io_bio_alloc(0);
>> +	comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
>>   	comp_bio->bi_opf = REQ_OP_READ;
>>   	comp_bio->bi_private = cb;
>>   	comp_bio->bi_end_io = end_compressed_bio_read;
>> @@ -806,7 +809,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>>   				bio_endio(comp_bio);
>>   			}
>>
>> -			comp_bio = btrfs_bio_alloc(cur_disk_byte);
>> +			comp_bio = btrfs_io_bio_alloc(0);
>> +			comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
>>   			comp_bio->bi_opf = REQ_OP_READ;
>>   			comp_bio->bi_private = cb;
>>   			comp_bio->bi_end_io = end_compressed_bio_read;
>> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
>> index 1aed03ef5f49..d3fcf7e8dc48 100644
>> --- a/fs/btrfs/extent_io.c
>> +++ b/fs/btrfs/extent_io.c
>> @@ -3121,16 +3121,22 @@ static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
>>   }
>>
>>   /*
>> - * The following helpers allocate a bio. As it's backed by a bioset, it'll
>> - * never fail.  We're returning a bio right now but you can call btrfs_io_bio
>> - * for the appropriate container_of magic
>> + * Allocate a btrfs_io_bio, with @nr_iovecs as maximum iovecs.
>> + *
>> + * If @nr_iovecs is 0, it will use BIO_MAX_VECS as @nr_iovces instead.
>> + * This behavior is to provide a fail-safe default value.
>> + *
>> + * This helper uses bioset to allocate the bio, thus it's backed by mempool,
>> + * and should not fail from process contexts.
>>    */
>> -struct bio *btrfs_bio_alloc(u64 first_byte)
>> +struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
>>   {
>>   	struct bio *bio;
>>
>> -	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &btrfs_bioset);
>> -	bio->bi_iter.bi_sector = first_byte >> 9;
>> +	ASSERT(nr_iovecs <= BIO_MAX_VECS);
>> +	if (nr_iovecs == 0)
>> +		nr_iovecs = BIO_MAX_VECS;
>
> hell no! How come passing 0 actually means BIO_MAX_VEC. Instead of
> having 0 everywhere and have the function translate this to
> BIO_MAX_VECS, simply pass BIO_MAX_VECS in every call site where it's
> needed.

That's part of the feedback I want.

I'm not yet decided on which should be the proper way.

Yes, we can pass BIO_MAX_VECS for call sites which don't care about the
vector size.

But I also think letting callers bother less is a good idea.
(one of the few moments I think function overloading could be very useful here)

If you have objections, I'm pretty happy to change the behavior and just
do an ASSERT() to catch any values larger than BIO_MAX_VECS.
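
A minimal sketch of that alternative, based on the helper in the quoted
hunk above (no implicit default, just a sanity check on the requested
count):

struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
{
	struct bio *bio;

	/* Every caller passes an explicit, in-range count. */
	ASSERT(nr_iovecs > 0 && nr_iovecs <= BIO_MAX_VECS);
	bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
	btrfs_io_bio_init(btrfs_io_bio(bio));
	return bio;
}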

Thanks,
Qu
>
> David, please either fix the patch in the tree or retract it. Let's try
> and refrain from adding such "gems" to the code base.
>
> <snip>
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-17 12:33     ` Qu Wenruo
@ 2021-09-17 12:34       ` Nikolay Borisov
  0 siblings, 0 replies; 22+ messages in thread
From: Nikolay Borisov @ 2021-09-17 12:34 UTC (permalink / raw)
  To: Qu Wenruo, Qu Wenruo, linux-btrfs; +Cc: David Sterba



On 17.09.21 г. 15:33, Qu Wenruo wrote:
> 
> 
> On 2021/9/17 20:27, Nikolay Borisov wrote:
>>
>>
>> On 15.09.21 г. 10:17, Qu Wenruo wrote:
>>> The helper btrfs_bio_alloc() is almost the same as btrfs_io_bio_alloc(),
>>> except it's allocating using BIO_MAX_VECS as @nr_iovecs, and initialize
>>> bio->bi_iter.bi_sector.
>>>
>>> However the naming itself is not using "btrfs_io_bio" to indicate its
>>> parameter is "strcut btrfs_io_bio" and can be easily confused with
>>> "struct btrfs_bio".
>>>
>>> Considering assigned bio->bi_iter.bi_sector is such a simple work and
>>> there are already tons of call sites doing that manually, there is no
>>> need to do that in a helper.
>>>
>>> Remove btrfs_bio_alloc() helper, and enhance btrfs_io_bio_alloc()
>>> function to provide a fail-safe value for its @nr_iovecs.
>>>
>>> And then replace all btrfs_bio_alloc() callers with
>>> btrfs_io_bio_alloc().
>>>
>>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>>> ---
>>>   fs/btrfs/compression.c | 12 ++++++++----
>>>   fs/btrfs/extent_io.c   | 33 +++++++++++++++------------------
>>>   fs/btrfs/extent_io.h   |  1 -
>>>   3 files changed, 23 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
>>> index 7869ad12bc6e..2475dc0b1c22 100644
>>> --- a/fs/btrfs/compression.c
>>> +++ b/fs/btrfs/compression.c
>>> @@ -418,7 +418,8 @@ blk_status_t btrfs_submit_compressed_write(struct
>>> btrfs_inode *inode, u64 start,
>>>       cb->orig_bio = NULL;
>>>       cb->nr_pages = nr_pages;
>>>
>>> -    bio = btrfs_bio_alloc(first_byte);
>>> +    bio = btrfs_io_bio_alloc(0);
>>> +    bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>>>       bio->bi_opf = bio_op | write_flags;
>>>       bio->bi_private = cb;
>>>       bio->bi_end_io = end_compressed_bio_write;
>>> @@ -490,7 +491,8 @@ blk_status_t btrfs_submit_compressed_write(struct
>>> btrfs_inode *inode, u64 start,
>>>                   bio_endio(bio);
>>>               }
>>>
>>> -            bio = btrfs_bio_alloc(first_byte);
>>> +            bio = btrfs_io_bio_alloc(0);
>>> +            bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>>>               bio->bi_opf = bio_op | write_flags;
>>>               bio->bi_private = cb;
>>>               bio->bi_end_io = end_compressed_bio_write;
>>> @@ -748,7 +750,8 @@ blk_status_t btrfs_submit_compressed_read(struct
>>> inode *inode, struct bio *bio,
>>>       /* include any pages we added in add_ra-bio_pages */
>>>       cb->len = bio->bi_iter.bi_size;
>>>
>>> -    comp_bio = btrfs_bio_alloc(cur_disk_byte);
>>> +    comp_bio = btrfs_io_bio_alloc(0);
>>> +    comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
>>>       comp_bio->bi_opf = REQ_OP_READ;
>>>       comp_bio->bi_private = cb;
>>>       comp_bio->bi_end_io = end_compressed_bio_read;
>>> @@ -806,7 +809,8 @@ blk_status_t btrfs_submit_compressed_read(struct
>>> inode *inode, struct bio *bio,
>>>                   bio_endio(comp_bio);
>>>               }
>>>
>>> -            comp_bio = btrfs_bio_alloc(cur_disk_byte);
>>> +            comp_bio = btrfs_io_bio_alloc(0);
>>> +            comp_bio->bi_iter.bi_sector = cur_disk_byte >>
>>> SECTOR_SHIFT;
>>>               comp_bio->bi_opf = REQ_OP_READ;
>>>               comp_bio->bi_private = cb;
>>>               comp_bio->bi_end_io = end_compressed_bio_read;
>>> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
>>> index 1aed03ef5f49..d3fcf7e8dc48 100644
>>> --- a/fs/btrfs/extent_io.c
>>> +++ b/fs/btrfs/extent_io.c
>>> @@ -3121,16 +3121,22 @@ static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
>>>   }
>>>
>>>   /*
>>> - * The following helpers allocate a bio. As it's backed by a bioset,
>>> it'll
>>> - * never fail.  We're returning a bio right now but you can call
>>> btrfs_io_bio
>>> - * for the appropriate container_of magic
>>> + * Allocate a btrfs_io_bio, with @nr_iovecs as maximum iovecs.
>>> + *
>>> + * If @nr_iovecs is 0, it will use BIO_MAX_VECS as @nr_iovces instead.
>>> + * This behavior is to provide a fail-safe default value.
>>> + *
>>> + * This helper uses bioset to allocate the bio, thus it's backed by
>>> mempool,
>>> + * and should not fail from process contexts.
>>>    */
>>> -struct bio *btrfs_bio_alloc(u64 first_byte)
>>> +struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
>>>   {
>>>       struct bio *bio;
>>>
>>> -    bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &btrfs_bioset);
>>> -    bio->bi_iter.bi_sector = first_byte >> 9;
>>> +    ASSERT(nr_iovecs <= BIO_MAX_VECS);
>>> +    if (nr_iovecs == 0)
>>> +        nr_iovecs = BIO_MAX_VECS;
>>
>> hell no! How come passing 0 actually means BIO_MAX_VEC. Instead of
>> having 0 everywhere and have the function translate this to
>> BIO_MAX_VECS, simply pass BIO_MAX_VECS in every call site where it's
>> needed.
> 
> That's part of the feedback I want.
> 
> I'm not yet determined on which should be the proper way.
> 
> Yes, we can pass BIO_MAX_VEC for call sites which doesn't care about the
> vector size.
> 
> But I also think letting callers to bother less is a good idea.
> (one of the few moments I think function overriding can be very useful
> here)
> 
> If you have objection, I'm pretty happy to change the behavior, and just
> do an ASSERT() to catch any values larger than BIO_MAX_VECS.

This sounds much better. In the worst case it should be -1, which is
treated as the "max", and definitely not 0. I don't think 0 has been
used as a special value to mean "max" anywhere.

> 
> Thanks,
> Qu
>>
>> David, please either fix the patch in the tree or retract it. Let's try
>> and refrain from adding such "gems" to the code base.
>>
>> <snip>
>>
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-17 12:27   ` Nikolay Borisov
  2021-09-17 12:33     ` Qu Wenruo
@ 2021-09-17 12:43     ` David Sterba
  2021-09-17 12:49       ` Nikolay Borisov
  1 sibling, 1 reply; 22+ messages in thread
From: David Sterba @ 2021-09-17 12:43 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Qu Wenruo, linux-btrfs, David Sterba

On Fri, Sep 17, 2021 at 03:27:44PM +0300, Nikolay Borisov wrote:
> > -struct bio *btrfs_bio_alloc(u64 first_byte)
> > +struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
> >  {
> >  	struct bio *bio;
> >  
> > -	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &btrfs_bioset);
> > -	bio->bi_iter.bi_sector = first_byte >> 9;
> > +	ASSERT(nr_iovecs <= BIO_MAX_VECS);
> > +	if (nr_iovecs == 0)
> > +		nr_iovecs = BIO_MAX_VECS;
> 
> hell no! How come passing 0 actually means BIO_MAX_VEC. Instead of
> having 0 everywhere and have the function translate this to
> BIO_MAX_VECS, simply pass BIO_MAX_VECS in every call site where it's
> needed.

I had thought about that before and concluded that passing BIO_MAX_VECS
everywhere would be the wrong way, as it's a detail about how many bio
vecs are allocated. So 0 as a default is OK, as any other number means
that it's the exact count.

> David, please either fix the patch in the tree or retract it. Let's try
> and refrain from adding such "gems" to the code base.

So should we add another helper that takes the exact number and drop the
parameter everywhere it is 0, so it's just btrfs_io_bio_alloc() with the
fallback?
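
One possible shape of that arrangement, with hypothetical helper names
just to make the question concrete:

struct bio *btrfs_io_bio_alloc_iovecs(unsigned int nr_iovecs)
{
	struct bio *bio;

	ASSERT(nr_iovecs && nr_iovecs <= BIO_MAX_VECS);
	bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
	btrfs_io_bio_init(btrfs_io_bio(bio));
	return bio;
}

/* The common case takes no parameter at all and falls back to the max. */
struct bio *btrfs_io_bio_alloc(void)
{
	return btrfs_io_bio_alloc_iovecs(BIO_MAX_VECS);
}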

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-17 12:43     ` David Sterba
@ 2021-09-17 12:49       ` Nikolay Borisov
  2021-09-20 10:33         ` Qu Wenruo
  0 siblings, 1 reply; 22+ messages in thread
From: Nikolay Borisov @ 2021-09-17 12:49 UTC (permalink / raw)
  To: dsterba, Qu Wenruo, linux-btrfs, David Sterba



On 17.09.21 г. 15:43, David Sterba wrote:
> So should we add another helper that takes the exact number and drop the
> parameter everwhere is 0 so it's just btrfs_io_bio_alloc() with the
> fallback?

But by adding another helper we just introduce more indirection.

Actually I'd argue that if 0 is a sane default then BIO_MAX_VECS cannot
be any worse because:

a) It's a number which is as good as 0
b) It's even named. So this is technically better than a plain 0

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio
  2021-09-15  7:17 ` [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio Qu Wenruo
  2021-09-17 11:39   ` David Sterba
@ 2021-09-20  7:04   ` Nikolay Borisov
  2021-09-20 12:23     ` David Sterba
  1 sibling, 1 reply; 22+ messages in thread
From: Nikolay Borisov @ 2021-09-20  7:04 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs; +Cc: David Sterba



On 15.09.21 г. 10:17, Qu Wenruo wrote:
> Previously we have "struct btrfs_bio", which records IO context for
> mirrored IO and RAID56, and "strcut btrfs_io_bio", which records extra
> btrfs specific info for logical bytenr bio.
> 
> With "strcut btrfs_bio" renamed to "struct btrfs_io_context", we are
> safe to rename "strcut btrfs_io_bio" to "strcut btrfs_logical_bio" which
> is a more suitable name now.
> 
> Although the name, "btrfs_logical_bio", is a little long and name
> "btrfs_bio" can be much shorter, "btrfs_bio" conflicts with previous
> "btrfs_bio" structure and can cause a lot of problems for backports.
> 
> Thus here we choose the name "btrfs_logical_bio", which also emphasis
> those bios all work at logical bytenr.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>


So thinking a bit more about the renaming: we are trading "awkwardness"
for future generations so that we make backporting easier, or rather
more foolproof.

What if we backport a patch that does a BUILD_BUG_ON predicated on the
size of btrfs_io_bio? That way, if a patch backports cleanly and
automatically but git in fact got confused by btrfs_bio vs btrfs_io_bio,
a build failure would ensue due to the mismatched sizes, and that would
be a clear indication that something has gone wrong, so whoever is doing
the backport can go and correct it. David, what do you think about
this?
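
A very rough sketch of that kind of guard (the expected-size macro is a
made-up placeholder, and, as the reply below points out, structure
padding makes a hardcoded number fragile):

/* Carried only in the stable backport, next to the btrfs_io_bio users. */
static inline void btrfs_io_bio_layout_check(void)
{
	BUILD_BUG_ON(sizeof(struct btrfs_io_bio) != BTRFS_IO_BIO_EXPECTED_SIZE);
}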

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-17 12:49       ` Nikolay Borisov
@ 2021-09-20 10:33         ` Qu Wenruo
  2021-09-20 12:41           ` David Sterba
  0 siblings, 1 reply; 22+ messages in thread
From: Qu Wenruo @ 2021-09-20 10:33 UTC (permalink / raw)
  To: Nikolay Borisov, dsterba, Qu Wenruo, linux-btrfs, David Sterba



On 2021/9/17 20:49, Nikolay Borisov wrote:
>
>
> On 17.09.21 г. 15:43, David Sterba wrote:
>> So should we add another helper that takes the exact number and drop the
>> parameter everwhere is 0 so it's just btrfs_io_bio_alloc() with the
>> fallback?
>
> But by adding another helper we just introduce more indirection.
>
> Actually I'd argue that if 0 is a sane default then BIO_MAX_VECS cannot
> be any worse because:
>
> a) It's a number which is as good as 0
> b) It's even named. So this is technically better than a plain 0
>

Any final call on this?

I hope this could be an example for future optional parameters.

We have some existing code using two different inline functions, both of
which call an internal but exported function with a "__" prefix.
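
That existing pattern looks roughly like this (the names here are
invented purely for illustration):

struct bio *__btrfs_io_bio_alloc(unsigned int nr_iovecs);  /* exported worker */

static inline struct bio *btrfs_io_bio_alloc_default(void)
{
	return __btrfs_io_bio_alloc(BIO_MAX_VECS);
}

static inline struct bio *btrfs_io_bio_alloc_iovecs(unsigned int nr_iovecs)
{
	return __btrfs_io_bio_alloc(nr_iovecs);
}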

We also have call sites passing all needed parameters just like Nikolay
suggested.


Despite that, I have some further optimizations for the btrfs_logical_bio
structure, thus I hope this helper situation can be settled soon.

Thanks,
Qu

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio
  2021-09-20  7:04   ` Nikolay Borisov
@ 2021-09-20 12:23     ` David Sterba
  2021-09-20 13:10       ` David Sterba
  0 siblings, 1 reply; 22+ messages in thread
From: David Sterba @ 2021-09-20 12:23 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Qu Wenruo, linux-btrfs, David Sterba

On Mon, Sep 20, 2021 at 10:04:10AM +0300, Nikolay Borisov wrote:
> 
> 
> On 15.09.21 г. 10:17, Qu Wenruo wrote:
> > Previously we have "struct btrfs_bio", which records IO context for
> > mirrored IO and RAID56, and "strcut btrfs_io_bio", which records extra
> > btrfs specific info for logical bytenr bio.
> > 
> > With "strcut btrfs_bio" renamed to "struct btrfs_io_context", we are
> > safe to rename "strcut btrfs_io_bio" to "strcut btrfs_logical_bio" which
> > is a more suitable name now.
> > 
> > Although the name, "btrfs_logical_bio", is a little long and name
> > "btrfs_bio" can be much shorter, "btrfs_bio" conflicts with previous
> > "btrfs_bio" structure and can cause a lot of problems for backports.
> > 
> > Thus here we choose the name "btrfs_logical_bio", which also emphasis
> > those bios all work at logical bytenr.
> > 
> > Signed-off-by: Qu Wenruo <wqu@suse.com>
> 
> So thinking a bit more about the renaming we are trading "awkwardness"
> for future generations so that we make backporting easier or rather more
> fool proof.
> 
> What if we backport a patch that does BUILD_BUG_ON predicated on the
> size of the btrfs_io_bio. That way if a patch backports cleanly and
> automatically but in fact git got confused by btrfs_bio vs btrfs_io_bio
> then a build failure would ensue due to mismatched sizes and that would
> be a clear indication something has gone wrong so whoever is doing the
> backport can go and correct the backport? David what do you think about
> this?

So you want to call the structure btrfs_bio and add build protections? I'm not
sure how exactly you want to do the sizeof check; one way would be to add a
stub structure and compare sizeof against that, because a hardcoded value won't
work due to padding, or we'd have to have a 32-bit assertion version.

I'd like to see the code, but otherwise I think it's reasonable; the shorter
name would be better. I don't expect many backports regarding the bio
related code, it could be referenced in the diff context but that we can
handle fine. I'm a bit cautious because I've seen patches to other
subsystems that did changes like swapping parameters or repurposing
structures, like we do here, and Linus did not like that at all. It's a
trade-off between suffering a naming we don't like and risking a bug
because we'd forget about the change.
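
For reference, the stub-structure idea above could be read as something
like this (everything here is hypothetical):

/*
 * A frozen copy of the pre-rename layout, kept only for the compile-time
 * check; the member list would be copied verbatim from the old header.
 */
struct btrfs_io_bio_stub {
	/* ... same members as the old struct btrfs_io_bio ... */
	struct bio bio;
};

static_assert(sizeof(struct btrfs_io_bio_stub) == sizeof(struct btrfs_io_bio));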

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-20 10:33         ` Qu Wenruo
@ 2021-09-20 12:41           ` David Sterba
  2021-09-20 12:42             ` Qu Wenruo
  0 siblings, 1 reply; 22+ messages in thread
From: David Sterba @ 2021-09-20 12:41 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Nikolay Borisov, dsterba, Qu Wenruo, linux-btrfs, David Sterba

On Mon, Sep 20, 2021 at 06:33:14PM +0800, Qu Wenruo wrote:
> 
> 
> On 2021/9/17 20:49, Nikolay Borisov wrote:
> >
> >
> > On 17.09.21 г. 15:43, David Sterba wrote:
> >> So should we add another helper that takes the exact number and drop the
> >> parameter everwhere is 0 so it's just btrfs_io_bio_alloc() with the
> >> fallback?
> >
> > But by adding another helper we just introduce more indirection.
> >
> > Actually I'd argue that if 0 is a sane default then BIO_MAX_VECS cannot
> > be any worse because:
> >
> > a) It's a number which is as good as 0
> > b) It's even named. So this is technically better than a plain 0
> >
> 
> Any final call on this?
> 
> I hope this could be an example for future optional parameters.
> 
> We have some existing codes using two different inline functions, and
> both of them call a internal but exported function with "__" prefix.
> 
> We also have call sites passing all needed parameters just like Nikolay
> suggested.

I'm fine with explicitly using BIO_MAX_VECS instead of 0. I'll update it
in the patch, no need to resend.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-20 12:41           ` David Sterba
@ 2021-09-20 12:42             ` Qu Wenruo
  0 siblings, 0 replies; 22+ messages in thread
From: Qu Wenruo @ 2021-09-20 12:42 UTC (permalink / raw)
  To: dsterba, Qu Wenruo, Nikolay Borisov, linux-btrfs, David Sterba



On 2021/9/20 20:41, David Sterba wrote:
> On Mon, Sep 20, 2021 at 06:33:14PM +0800, Qu Wenruo wrote:
>>
>>
>> On 2021/9/17 20:49, Nikolay Borisov wrote:
>>>
>>>
>>> On 17.09.21 г. 15:43, David Sterba wrote:
>>>> So should we add another helper that takes the exact number and drop the
>>>> parameter everwhere is 0 so it's just btrfs_io_bio_alloc() with the
>>>> fallback?
>>>
>>> But by adding another helper we just introduce more indirection.
>>>
>>> Actually I'd argue that if 0 is a sane default then BIO_MAX_VECS cannot
>>> be any worse because:
>>>
>>> a) It's a number which is as good as 0
>>> b) It's even named. So this is technically better than a plain 0
>>>
>>
>> Any final call on this?
>>
>> I hope this could be an example for future optional parameters.
>>
>> We have some existing codes using two different inline functions, and
>> both of them call a internal but exported function with "__" prefix.
>>
>> We also have call sites passing all needed parameters just like Nikolay
>> suggested.
> 
> I'm fine with explicitly using BIO_MAX_VECS instead of 0. I'll update it
> in the patch, no need to resend.
> 
Thank you very much.

I won't use this tricky way again.

Thanks,
Qu


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio
  2021-09-20 12:23     ` David Sterba
@ 2021-09-20 13:10       ` David Sterba
  0 siblings, 0 replies; 22+ messages in thread
From: David Sterba @ 2021-09-20 13:10 UTC (permalink / raw)
  To: dsterba, Nikolay Borisov, Qu Wenruo, linux-btrfs, David Sterba

On Mon, Sep 20, 2021 at 02:23:29PM +0200, David Sterba wrote:
> On Mon, Sep 20, 2021 at 10:04:10AM +0300, Nikolay Borisov wrote:
> > On 15.09.21 г. 10:17, Qu Wenruo wrote:
> > > Previously we have "struct btrfs_bio", which records IO context for
> > > mirrored IO and RAID56, and "strcut btrfs_io_bio", which records extra
> > > btrfs specific info for logical bytenr bio.
> > > 
> > > With "strcut btrfs_bio" renamed to "struct btrfs_io_context", we are
> > > safe to rename "strcut btrfs_io_bio" to "strcut btrfs_logical_bio" which
> > > is a more suitable name now.
> > > 
> > > Although the name, "btrfs_logical_bio", is a little long and name
> > > "btrfs_bio" can be much shorter, "btrfs_bio" conflicts with previous
> > > "btrfs_bio" structure and can cause a lot of problems for backports.
> > > 
> > > Thus here we choose the name "btrfs_logical_bio", which also emphasis
> > > those bios all work at logical bytenr.
> > > 
> > > Signed-off-by: Qu Wenruo <wqu@suse.com>
> > 
> > So thinking a bit more about the renaming we are trading "awkwardness"
> > for future generations so that we make backporting easier or rather more
> > fool proof.
> > 
> > What if we backport a patch that does BUILD_BUG_ON predicated on the
> > size of the btrfs_io_bio. That way if a patch backports cleanly and
> > automatically but in fact git got confused by btrfs_bio vs btrfs_io_bio
> > then a build failure would ensue due to mismatched sizes and that would
> > be a clear indication something has gone wrong so whoever is doing the
> > backport can go and correct the backport? David what do you think about
> > this?
> 
> So you want to call the structure btrfs_bio and add build protections?  I'm not
> sure how exactly you want to do the sizeof check, one way would be to add a
> stub structure and compare sizeof against that, because a hardcoded value won't
> work due to padding, or we'd have to have a 32bit assertion version.
> 
> I'd like to see the code, but otherwise I think it's reasonable, the shorter
> name would be better. I don't expect many backports regarding the bio
> related code, it could be referenced in the diff context but that we can
> handle fine. I'm a bit cautious because I've seen patches to other
> subystems that did changes like swapping parameters or repurposing
> structures like here we do and Linus did not like that at all. It's
> trade off if we'll suffer a naming we don't like or would cause a bug
> because we'd forget about the change.

For the record, we had a chat about that and found that explicit build
checks won't be necessary, as the old and new structures have no overlap
of members, so the build would fail anyway.

I did the rename from btrfs_logical_bio* to btrfs_bio again in
misc-next; please have a look, it's basically what Qu sent as v2.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper
  2021-09-15  7:17 ` [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper Qu Wenruo
  2021-09-17 12:27   ` Nikolay Borisov
@ 2021-09-23  5:57   ` Qu Wenruo
  1 sibling, 0 replies; 22+ messages in thread
From: Qu Wenruo @ 2021-09-23  5:57 UTC (permalink / raw)
  To: linux-btrfs



On 2021/9/15 15:17, Qu Wenruo wrote:
> The helper btrfs_bio_alloc() is almost the same as btrfs_io_bio_alloc(),
> except it's allocating using BIO_MAX_VECS as @nr_iovecs, and initialize
> bio->bi_iter.bi_sector.
> 
> However the naming itself is not using "btrfs_io_bio" to indicate its
> parameter is "strcut btrfs_io_bio" and can be easily confused with
> "struct btrfs_bio".
> 
> Considering assigned bio->bi_iter.bi_sector is such a simple work and
> there are already tons of call sites doing that manually, there is no
> need to do that in a helper.
> 
> Remove btrfs_bio_alloc() helper, and enhance btrfs_io_bio_alloc()
> function to provide a fail-safe value for its @nr_iovecs.
> 
> And then replace all btrfs_bio_alloc() callers with
> btrfs_io_bio_alloc().
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>   fs/btrfs/compression.c | 12 ++++++++----
>   fs/btrfs/extent_io.c   | 33 +++++++++++++++------------------
>   fs/btrfs/extent_io.h   |  1 -
>   3 files changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 7869ad12bc6e..2475dc0b1c22 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -418,7 +418,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>   	cb->orig_bio = NULL;
>   	cb->nr_pages = nr_pages;
>   
> -	bio = btrfs_bio_alloc(first_byte);
> +	bio = btrfs_io_bio_alloc(0);
> +	bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>   	bio->bi_opf = bio_op | write_flags;
>   	bio->bi_private = cb;
>   	bio->bi_end_io = end_compressed_bio_write;
> @@ -490,7 +491,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>   				bio_endio(bio);
>   			}
>   
> -			bio = btrfs_bio_alloc(first_byte);
> +			bio = btrfs_io_bio_alloc(0);
> +			bio->bi_iter.bi_sector = first_byte >> SECTOR_SHIFT;
>   			bio->bi_opf = bio_op | write_flags;
>   			bio->bi_private = cb;
>   			bio->bi_end_io = end_compressed_bio_write;
> @@ -748,7 +750,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>   	/* include any pages we added in add_ra-bio_pages */
>   	cb->len = bio->bi_iter.bi_size;
>   
> -	comp_bio = btrfs_bio_alloc(cur_disk_byte);
> +	comp_bio = btrfs_io_bio_alloc(0);
> +	comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
>   	comp_bio->bi_opf = REQ_OP_READ;
>   	comp_bio->bi_private = cb;
>   	comp_bio->bi_end_io = end_compressed_bio_read;
> @@ -806,7 +809,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>   				bio_endio(comp_bio);
>   			}
>   
> -			comp_bio = btrfs_bio_alloc(cur_disk_byte);
> +			comp_bio = btrfs_io_bio_alloc(0);
> +			comp_bio->bi_iter.bi_sector = cur_disk_byte >> SECTOR_SHIFT;
>   			comp_bio->bi_opf = REQ_OP_READ;
>   			comp_bio->bi_private = cb;
>   			comp_bio->bi_end_io = end_compressed_bio_read;
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 1aed03ef5f49..d3fcf7e8dc48 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -3121,16 +3121,22 @@ static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
>   }
>   
>   /*
> - * The following helpers allocate a bio. As it's backed by a bioset, it'll
> - * never fail.  We're returning a bio right now but you can call btrfs_io_bio
> - * for the appropriate container_of magic
> + * Allocate a btrfs_io_bio, with @nr_iovecs as maximum iovecs.
> + *
> + * If @nr_iovecs is 0, it will use BIO_MAX_VECS as @nr_iovces instead.
> + * This behavior is to provide a fail-safe default value.
> + *
> + * This helper uses bioset to allocate the bio, thus it's backed by mempool,
> + * and should not fail from process contexts.
>    */
> -struct bio *btrfs_bio_alloc(u64 first_byte)
> +struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
>   {
>   	struct bio *bio;
>   
> -	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &btrfs_bioset);
> -	bio->bi_iter.bi_sector = first_byte >> 9;

I'm very surprised that bbio->logical is not initialized here.

This means that, except for two call sites which manually initialize
bbio->logical, all other sites don't have bbio->logical set from the
very beginning.

I guess I need another patch to set the logical bytenr in all btrfs_bio
allocators.

Thankfully it doesn't cause any new regression, but any uninitialized
member can always cause unexpected behavior when new use cases are added.

In fact this uninitialized @logical has already caused my RFC patchset
("btrfs: refactor how we handle btrfs_io_context and slightly reduce
memory usage for both btrfs_bio and btrfs_io_context") to crash.

As in that patchset, we require bbio->logical to look up the mirror device.

I'll merge the proper initializer into the next version of that patchset.
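
A sketch of the kind of initializer this would amount to, at a call site
that already knows the logical bytenr (the exact placement is what the
next revision would need to settle):

	bio = btrfs_io_bio_alloc(BIO_MAX_VECS);
	bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
	/* Keep the btrfs-specific logical bytenr in sync with bi_sector. */
	btrfs_io_bio(bio)->logical = disk_bytenr;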

Thanks,
Qu

> +	ASSERT(nr_iovecs <= BIO_MAX_VECS);
> +	if (nr_iovecs == 0)
> +		nr_iovecs = BIO_MAX_VECS;
> +	bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
>   	btrfs_io_bio_init(btrfs_io_bio(bio));
>   	return bio;
>   }
> @@ -3148,16 +3154,6 @@ struct bio *btrfs_bio_clone(struct bio *bio)
>   	return new;
>   }
>   
> -struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
> -{
> -	struct bio *bio;
> -
> -	/* Bio allocation backed by a bioset does not fail */
> -	bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
> -	btrfs_io_bio_init(btrfs_io_bio(bio));
> -	return bio;
> -}
> -
>   struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size)
>   {
>   	struct bio *bio;
> @@ -3307,14 +3303,15 @@ static int alloc_new_bio(struct btrfs_inode *inode,
>   	struct bio *bio;
>   	int ret;
>   
> +	bio = btrfs_io_bio_alloc(0);
>   	/*
>   	 * For compressed page range, its disk_bytenr is always @disk_bytenr
>   	 * passed in, no matter if we have added any range into previous bio.
>   	 */
>   	if (bio_flags & EXTENT_BIO_COMPRESSED)
> -		bio = btrfs_bio_alloc(disk_bytenr);
> +		bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
>   	else
> -		bio = btrfs_bio_alloc(disk_bytenr + offset);
> +		bio->bi_iter.bi_sector = (disk_bytenr + offset) >> SECTOR_SHIFT;
>   	bio_ctrl->bio = bio;
>   	bio_ctrl->bio_flags = bio_flags;
>   	bio->bi_end_io = end_io_func;
> diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
> index ba471f2063a7..81fa68eaa699 100644
> --- a/fs/btrfs/extent_io.h
> +++ b/fs/btrfs/extent_io.h
> @@ -278,7 +278,6 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end);
>   void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
>   				  struct page *locked_page,
>   				  u32 bits_to_clear, unsigned long page_ops);
> -struct bio *btrfs_bio_alloc(u64 first_byte);
>   struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs);
>   struct bio *btrfs_bio_clone(struct bio *bio);
>   struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size);
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2021-09-23  5:57 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-15  7:17 [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename Qu Wenruo
2021-09-15  7:17 ` [PATCH v3 1/3] btrfs: rename btrfs_bio to btrfs_io_context Qu Wenruo
2021-09-17 11:19   ` David Sterba
2021-09-17 11:24     ` Qu Wenruo
2021-09-17 11:27       ` Qu Wenruo
2021-09-17 11:33         ` David Sterba
2021-09-15  7:17 ` [PATCH v3 2/3] btrfs: remove btrfs_bio_alloc() helper Qu Wenruo
2021-09-17 12:27   ` Nikolay Borisov
2021-09-17 12:33     ` Qu Wenruo
2021-09-17 12:34       ` Nikolay Borisov
2021-09-17 12:43     ` David Sterba
2021-09-17 12:49       ` Nikolay Borisov
2021-09-20 10:33         ` Qu Wenruo
2021-09-20 12:41           ` David Sterba
2021-09-20 12:42             ` Qu Wenruo
2021-09-23  5:57   ` Qu Wenruo
2021-09-15  7:17 ` [PATCH v3 3/3] btrfs: rename struct btrfs_io_bio to btrfs_logical_bio Qu Wenruo
2021-09-17 11:39   ` David Sterba
2021-09-20  7:04   ` Nikolay Borisov
2021-09-20 12:23     ` David Sterba
2021-09-20 13:10       ` David Sterba
2021-09-17 11:39 ` [PATCH v3 0/3] btrfs: btrfs_bio and btrfs_io_bio rename David Sterba
