* [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support
@ 2021-06-15 12:18 Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 1/9] btrfs: remove a dead comment for btrfs_decompress_bio() Qu Wenruo
                   ` (8 more replies)
  0 siblings, 9 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

There are quite a few problems in the compression code:

- Weird compressed_bio::pending_bios dance
  If we just want to keep the compressed_bio from being freed halfway
  through, we have saner methods, just like btrfs_subpage::readers.

  So here we fix it by introducing compressed_bio::io_sectors to do the
  job (see the sketch after this list).

- BUG_ON()s inside btrfs_submit_compressed_*()
  Even if they are just for -ENOMEM, we should handle them.
  With io_sectors introduced, we have a way to finish the compressed_bio
  all by ourselves, as long as we haven't submitted the last bio.

  If the last bio has been submitted, then endio will handle it well.

- Duplicated code for compressed bio allocation and submission
  A small refactor can handle it.

- Stripe boundary is checked every time one page is added
  This is overkill.
  We can learn from the extent_io.c refactor, which uses bio_ctrl to do
  the boundary check only once for each bio.

  Although in the compression context we don't need the extra checks in
  extent_io.c, and thus don't need the bio_ctrl structure, we can afford
  to do the check locally.

- Dead code removal
  One dead comment and a new zombie function,
  btrfs_bio_fits_in_stripe(), can be removed now.
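
For reference, a minimal sketch of the io_sectors accounting scheme
introduced by this series (simplified pseudo-code, not verbatim from
the patches; finish_compressed_bio() stands in for the read/write
variants):

  /* At allocation: one count per sector of compressed data */
  atomic_set(&cb->io_sectors, compressed_len >> sectorsize_bits);

  /* At submission, for each bio */
  atomic_inc(&cb->pending_bios);

  /* At endio, for each bio */
  last_io = atomic_sub_and_test(bio_sectors, &cb->io_sectors);
  atomic_dec(&cb->pending_bios);
  wake_up(&cb->pending_bio_wait);
  if (last_io)
	finish_compressed_bio(cb);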

Changelog:
v2:
- Rebased to latest misc-next

- Fix a bug in btrfs_submit_compressed_write() where zoned writes were
  not taken into consideration

- Reuse the existing chunk mapping of btrfs_get_chunk_map()

v3:
- Fix a bug where zoned devices couldn't even pass btrfs/001
  This is because at endio time, bi_size for zoned devices is always 0.
  We have to use bio_for_each_segment_all() to calculate the real bio
  size instead.
  In theory it could also happen on non-zoned devices, but no test case
  (with "-o compress") caught it except btrfs/011.

- Fix btrfs/011 hang when tested with "-o compress"
  This is caused by checking two atomic values without protection;
  checking two atomic values together is no longer atomic.

  In fact, with compressed_bio::io_sectors introduced, pending_bios is
  only used to wait for any pending bio to finish in the error path.

  Thus dec_and_test_compressed_bio() only needs to check whether
  io_sectors is zero (see the sketch at the end of this changelog).

- Fix an error in the error handling path where we may hang due to a
  missing wake_up() in dec_and_test_compressed_bio()
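
  To illustrate the btrfs/011 race (a reconstructed sketch of the
  pre-v3 check, not verbatim from any patch):

	/* Racy: the two atomic reads are not atomic as a whole */
	last = (atomic_sub_and_test(nr_sectors, &cb->io_sectors) &&
		atomic_read(&cb->pending_bios) == 0);

	/* v3: only io_sectors decides who finishes @cb */
	last = atomic_sub_and_test(nr_sectors, &cb->io_sectors);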

Qu Wenruo (9):
  btrfs: remove a dead comment for btrfs_decompress_bio()
  btrfs: introduce compressed_bio::io_sectors to trace compressed bio
    more elegantly
  btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read()
  btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_write()
  btrfs: introduce submit_compressed_bio() for compression
  btrfs: introduce alloc_submit_compressed_bio() for compression
  btrfs: make btrfs_submit_compressed_read() to determine stripe
    boundary at bio allocation time
  btrfs: make btrfs_submit_compressed_write() to determine stripe
    boundary at bio allocation time
  btrfs: remove unused function btrfs_bio_fits_in_stripe()

 fs/btrfs/compression.c | 607 ++++++++++++++++++++++++-----------------
 fs/btrfs/compression.h |  13 +-
 fs/btrfs/ctree.h       |   2 -
 fs/btrfs/inode.c       |  42 ---
 4 files changed, 367 insertions(+), 297 deletions(-)

-- 
2.32.0



* [PATCH v3 1/9] btrfs: remove a dead comment for btrfs_decompress_bio()
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-15 12:20   ` Johannes Thumshirn
  2021-06-15 12:18 ` [PATCH v3 2/9] btrfs: introduce compressed_bio::io_sectors to trace compressed bio more elegantly Qu Wenruo
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Anand Jain

Since commit 8140dc30a432 ("btrfs: btrfs_decompress_bio() could accept
compressed_bio instead"), btrfs_decompress_bio() accepts a
"struct compressed_bio" rather than an open-coded parameter list.

Thus the comment for the parameter list is no longer needed.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Anand Jain <anand.jain@oracle.com>
---
 fs/btrfs/compression.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 35ca49893803..9a023ae0f98b 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1212,20 +1212,6 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 	return ret;
 }
 
-/*
- * pages_in is an array of pages with compressed data.
- *
- * disk_start is the starting logical offset of this array in the file
- *
- * orig_bio contains the pages from the file that we want to decompress into
- *
- * srclen is the number of bytes in pages_in
- *
- * The basic idea is that we have a bio that was created by readpages.
- * The pages in the bio are for the uncompressed data, and they may not
- * be contiguous.  They all correspond to the range of bytes covered by
- * the compressed extent.
- */
 static int btrfs_decompress_bio(struct compressed_bio *cb)
 {
 	struct list_head *workspace;
-- 
2.32.0



* [PATCH v3 2/9] btrfs: introduce compressed_bio::io_sectors to trace compressed bio more elegantly
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 1/9] btrfs: remove a dead comment for btrfs_decompress_bio() Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 3/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read() Qu Wenruo
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

For btrfs_submit_compressed_read() and btrfs_submit_compressed_write(),
we have a pretty weird dance around compressed_bio::pending_bios:

  btrfs_submit_compressed_read/write()
  {
	cb = kmalloc()
	refcount_set(&cb->pending_bios, 0);
	bio = btrfs_alloc_bio();

	/* NOTE here, we haven't yet submitted any bio */
	refcount_set(&cb->pending_bios, 1);

	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
		if (submit) {
			/* Here we submit bio, but we always have one
			 * extra pending_bios */
			refcount_inc(&cb->pending_bios);
			ret = btrfs_map_bio();
		}
	}

	/* Submit the last bio */
	ret = btrfs_map_bio();
  }

There are two reasons why we do this:

- compressed_bio::pending_bios is a refcount
  Thus once it is reduced to 0, it cannot be increased again.

- To ensure the compressed_bio is not freed by some submitted bios
  If a submitted bio finished before the next bio was submitted, its
  endio could free the compressed_bio completely.

But the above code is sometimes confusing, and we can do better by just
introducing a new member, compressed_bio::io_sectors.

Now we use compressed_bio::io_sectors to indicate how many sectors are
still under IO or not yet submitted.

If io_sectors == 0, we're definitely at the last bio of the
compressed_bio, and it is OK to release the compressed bio.

This also allows us to later clean up a lot of BUG_ON()s.

With this new member, compressed_bio::pending_bios now really indicates
the pending bios, without any special handling needed.
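
With io_sectors introduced, the dance becomes (a simplified sketch, not
verbatim from this patch):

  btrfs_submit_compressed_read/write()
  {
	cb = kmalloc()
	atomic_set(&cb->pending_bios, 0);
	atomic_set(&cb->io_sectors, compressed_len >> sectorsize_bits);

	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
		if (submit) {
			/* io_sectors keeps @cb alive, no extra count */
			atomic_inc(&cb->pending_bios);
			ret = btrfs_map_bio();
		}
	}

	/* Submit the last bio */
	atomic_inc(&cb->pending_bios);
	ret = btrfs_map_bio();
  }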

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 85 ++++++++++++++++++++++++++----------------
 fs/btrfs/compression.h | 10 ++++-
 2 files changed, 61 insertions(+), 34 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 9a023ae0f98b..bbfee9ffd20a 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -193,6 +193,48 @@ static int check_compressed_csum(struct btrfs_inode *inode, struct bio *bio,
 	return 0;
 }
 
+/*
+ * Reduce bio and io accounting for a compressed_bio with its corresponding bio.
+ *
+ * Return true if there is no pending bio nor io.
+ * Return false otherwise.
+ */
+static bool dec_and_test_compressed_bio(struct compressed_bio *cb,
+					struct bio *bio)
+{
+	struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
+	unsigned int bi_size = 0;
+	bool last_io = false;
+	struct bio_vec *bvec;
+	struct bvec_iter_all iter_all;
+
+	/*
+	 * At endio time, bi_iter.bi_size doesn't represent the real bio size.
+	 * Thus here we have to iterate through all segments to grab correct
+	 * bio size.
+	 */
+	bio_for_each_segment_all(bvec, bio, iter_all)
+		bi_size += bvec->bv_len;
+
+	if (bio->bi_status)
+		cb->errors = 1;
+
+	ASSERT(bi_size && bi_size <= cb->compressed_len);
+	atomic_dec(&cb->pending_bios);
+
+	/*
+	 * Here we only need to check io_sectors, as if that is 0, we definitely
+	 * have no pending bio.
+	 */
+	last_io = atomic_sub_and_test(bi_size >> fs_info->sectorsize_bits,
+				       &cb->io_sectors);
+	/* Underflow check */
+	ASSERT(atomic_read(&cb->io_sectors) <
+	       (cb->compressed_len >> fs_info->sectorsize_bits));
+
+	return last_io;
+}
+
 /* when we finish reading compressed pages from the disk, we
  * decompress them and then run the bio end_io routines on the
  * decompressed pages (in the inode address space).
@@ -212,13 +254,7 @@ static void end_compressed_bio_read(struct bio *bio)
 	unsigned int mirror = btrfs_io_bio(bio)->mirror_num;
 	int ret = 0;
 
-	if (bio->bi_status)
-		cb->errors = 1;
-
-	/* if there are more bios still pending for this compressed
-	 * extent, just exit
-	 */
-	if (!refcount_dec_and_test(&cb->pending_bios))
+	if (!dec_and_test_compressed_bio(cb, bio))
 		goto out;
 
 	/*
@@ -336,13 +372,7 @@ static void end_compressed_bio_write(struct bio *bio)
 	struct page *page;
 	unsigned int index;
 
-	if (bio->bi_status)
-		cb->errors = 1;
-
-	/* if there are more bios still pending for this compressed
-	 * extent, just exit
-	 */
-	if (!refcount_dec_and_test(&cb->pending_bios))
+	if (!dec_and_test_compressed_bio(cb, bio))
 		goto out;
 
 	/* ok, we're the last bio for this extent, step one is to
@@ -408,7 +438,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	cb = kmalloc(compressed_bio_size(fs_info, compressed_len), GFP_NOFS);
 	if (!cb)
 		return BLK_STS_RESOURCE;
-	refcount_set(&cb->pending_bios, 0);
+	atomic_set(&cb->pending_bios, 0);
+	atomic_set(&cb->io_sectors, compressed_len >> fs_info->sectorsize_bits);
 	cb->errors = 0;
 	cb->inode = &inode->vfs_inode;
 	cb->start = start;
@@ -441,7 +472,6 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 		bio->bi_opf |= REQ_CGROUP_PUNT;
 		kthread_associate_blkcg(blkcg_css);
 	}
-	refcount_set(&cb->pending_bios, 1);
 
 	/* create and submit bios for the compressed pages */
 	bytes_left = compressed_len;
@@ -469,13 +499,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 
 		page->mapping = NULL;
 		if (submit || len < PAGE_SIZE) {
-			/*
-			 * inc the count before we submit the bio so
-			 * we know the end IO handler won't happen before
-			 * we inc the count.  Otherwise, the cb might get
-			 * freed before we're done setting it up
-			 */
-			refcount_inc(&cb->pending_bios);
+			atomic_inc(&cb->pending_bios);
 			ret = btrfs_bio_wq_end_io(fs_info, bio,
 						  BTRFS_WQ_ENDIO_DATA);
 			BUG_ON(ret); /* -ENOMEM */
@@ -513,6 +537,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 		cond_resched();
 	}
 
+	atomic_inc(&cb->pending_bios);
 	ret = btrfs_bio_wq_end_io(fs_info, bio, BTRFS_WQ_ENDIO_DATA);
 	BUG_ON(ret); /* -ENOMEM */
 
@@ -696,7 +721,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	if (!cb)
 		goto out;
 
-	refcount_set(&cb->pending_bios, 0);
+	atomic_set(&cb->pending_bios, 0);
+	atomic_set(&cb->io_sectors, compressed_len >> fs_info->sectorsize_bits);
 	cb->errors = 0;
 	cb->inode = inode;
 	cb->mirror_num = mirror_num;
@@ -741,7 +767,6 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	comp_bio->bi_opf = REQ_OP_READ;
 	comp_bio->bi_private = cb;
 	comp_bio->bi_end_io = end_compressed_bio_read;
-	refcount_set(&cb->pending_bios, 1);
 
 	for (pg_index = 0; pg_index < nr_pages; pg_index++) {
 		u32 pg_len = PAGE_SIZE;
@@ -770,18 +795,11 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 		if (submit || bio_add_page(comp_bio, page, pg_len, 0) < pg_len) {
 			unsigned int nr_sectors;
 
+			atomic_inc(&cb->pending_bios);
 			ret = btrfs_bio_wq_end_io(fs_info, comp_bio,
 						  BTRFS_WQ_ENDIO_DATA);
 			BUG_ON(ret); /* -ENOMEM */
 
-			/*
-			 * inc the count before we submit the bio so
-			 * we know the end IO handler won't happen before
-			 * we inc the count.  Otherwise, the cb might get
-			 * freed before we're done setting it up
-			 */
-			refcount_inc(&cb->pending_bios);
-
 			ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
 			BUG_ON(ret); /* -ENOMEM */
 
@@ -805,6 +823,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 		cur_disk_byte += pg_len;
 	}
 
+	atomic_inc(&cb->pending_bios);
 	ret = btrfs_bio_wq_end_io(fs_info, comp_bio, BTRFS_WQ_ENDIO_DATA);
 	BUG_ON(ret); /* -ENOMEM */
 
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index c359f20920d0..41dd0bf6d5db 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -29,7 +29,15 @@ struct btrfs_inode;
 
 struct compressed_bio {
 	/* number of bios pending for this compressed extent */
-	refcount_t pending_bios;
+	atomic_t pending_bios;
+
+	/*
+	 * Number of sectors which haven't finished.
+	 *
+	 * Combined with pending_bios, we can manually finish the compressed_bio
+	 * if we hit some error while there are still some pages not added.
+	 */
+	atomic_t io_sectors;
 
 	/* Number of compressed pages in the array */
 	unsigned int nr_pages;
-- 
2.32.0



* [PATCH v3 3/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read()
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 1/9] btrfs: remove a dead comment for btrfs_decompress_bio() Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 2/9] btrfs: introduce compressed_bio::io_sectors to trace compressed bio more elegantly Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-16 14:03   ` David Sterba
  2021-06-15 12:18 ` [PATCH v3 4/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_write() Qu Wenruo
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

There are quite a few BUG_ON()s inside btrfs_submit_compressed_read();
namely, all errors inside the for() loop rely on BUG_ON() to handle
-ENOMEM.

Hunt down these BUG_ON()s properly by:

- Introduce compressed_bio::pending_bio_wait
  This allows us to wait for any submitted bio to finish, while still
  keeping the compressed_bio from being freed, as we should have a
  non-zero compressed_bio::io_sectors.

- Introduce finish_compressed_bio_read() to finish the compressed_bio

- Properly end the bio and finish the compressed_bio when an error
  happens

Now in btrfs_submit_compressed_read(), even when bio submission fails,
we can properly handle the error without triggering a BUG_ON().
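
The resulting error path has roughly this shape (condensed from the
patch below):

  finish_cb:
	cb->errors = 1;
	if (comp_bio) {
		comp_bio->bi_status = ret;
		bio_endio(comp_bio);
	}
	/*
	 * io_sectors is still non-zero, so no endio of an already
	 * submitted bio can free @cb; wait for them all to finish.
	 */
	wait_event(cb->pending_bio_wait,
		   atomic_read(&cb->pending_bios) == 0);
	/* Now we are the only one referring @cb, finish it */
	finish_compressed_bio_read(cb, NULL);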

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 127 ++++++++++++++++++++++++++---------------
 fs/btrfs/compression.h |   3 +
 2 files changed, 85 insertions(+), 45 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index bbfee9ffd20a..abbdb8d35001 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -220,7 +220,6 @@ static bool dec_and_test_compressed_bio(struct compressed_bio *cb,
 		cb->errors = 1;
 
 	ASSERT(bi_size && bi_size <= cb->compressed_len);
-	atomic_dec(&cb->pending_bios);
 
 	/*
 	 * Here we only need to check io_sectors, as if that is 0, we definitely
@@ -232,9 +231,55 @@ static bool dec_and_test_compressed_bio(struct compressed_bio *cb,
 	ASSERT(atomic_read(&cb->io_sectors) <
 	       (cb->compressed_len >> fs_info->sectorsize_bits));
 
+	/*
+	 * Here we must wake up pending_bio_wait after all other operations on
+	 * @cb finished, or we can race with finish_compressed_bio_*() in
+	 * error path.
+	 */
+	atomic_dec(&cb->pending_bios);
+	wake_up(&cb->pending_bio_wait);
+
 	return last_io;
 }
 
+static void finish_compressed_bio_read(struct compressed_bio *cb,
+				       struct bio *bio)
+{
+	unsigned int index;
+	struct page *page;
+
+	/* release the compressed pages */
+	for (index = 0; index < cb->nr_pages; index++) {
+		page = cb->compressed_pages[index];
+		page->mapping = NULL;
+		put_page(page);
+	}
+
+	/* do io completion on the original bio */
+	if (cb->errors) {
+		bio_io_error(cb->orig_bio);
+	} else {
+		struct bio_vec *bvec;
+		struct bvec_iter_all iter_all;
+
+		ASSERT(bio);
+		ASSERT(!bio->bi_status);
+		/*
+		 * we have verified the checksum already, set page
+		 * checked so the end_io handlers know about it
+		 */
+		ASSERT(!bio_flagged(bio, BIO_CLONED));
+		bio_for_each_segment_all(bvec, cb->orig_bio, iter_all)
+			SetPageChecked(bvec->bv_page);
+
+		bio_endio(cb->orig_bio);
+	}
+
+	/* finally free the cb struct */
+	kfree(cb->compressed_pages);
+	kfree(cb);
+}
+
 /* when we finish reading compressed pages from the disk, we
  * decompress them and then run the bio end_io routines on the
  * decompressed pages (in the inode address space).
@@ -249,8 +294,6 @@ static void end_compressed_bio_read(struct bio *bio)
 {
 	struct compressed_bio *cb = bio->bi_private;
 	struct inode *inode;
-	struct page *page;
-	unsigned int index;
 	unsigned int mirror = btrfs_io_bio(bio)->mirror_num;
 	int ret = 0;
 
@@ -285,36 +328,7 @@ static void end_compressed_bio_read(struct bio *bio)
 csum_failed:
 	if (ret)
 		cb->errors = 1;
-
-	/* release the compressed pages */
-	index = 0;
-	for (index = 0; index < cb->nr_pages; index++) {
-		page = cb->compressed_pages[index];
-		page->mapping = NULL;
-		put_page(page);
-	}
-
-	/* do io completion on the original bio */
-	if (cb->errors) {
-		bio_io_error(cb->orig_bio);
-	} else {
-		struct bio_vec *bvec;
-		struct bvec_iter_all iter_all;
-
-		/*
-		 * we have verified the checksum already, set page
-		 * checked so the end_io handlers know about it
-		 */
-		ASSERT(!bio_flagged(bio, BIO_CLONED));
-		bio_for_each_segment_all(bvec, cb->orig_bio, iter_all)
-			SetPageChecked(bvec->bv_page);
-
-		bio_endio(cb->orig_bio);
-	}
-
-	/* finally free the cb struct */
-	kfree(cb->compressed_pages);
-	kfree(cb);
+	finish_compressed_bio_read(cb, bio);
 out:
 	bio_put(bio);
 }
@@ -440,6 +454,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 		return BLK_STS_RESOURCE;
 	atomic_set(&cb->pending_bios, 0);
 	atomic_set(&cb->io_sectors, compressed_len >> fs_info->sectorsize_bits);
+	init_waitqueue_head(&cb->pending_bio_wait);
 	cb->errors = 0;
 	cb->inode = &inode->vfs_inode;
 	cb->start = start;
@@ -723,6 +738,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 
 	atomic_set(&cb->pending_bios, 0);
 	atomic_set(&cb->io_sectors, compressed_len >> fs_info->sectorsize_bits);
+	init_waitqueue_head(&cb->pending_bio_wait);
 	cb->errors = 0;
 	cb->inode = inode;
 	cb->mirror_num = mirror_num;
@@ -798,20 +814,20 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 			atomic_inc(&cb->pending_bios);
 			ret = btrfs_bio_wq_end_io(fs_info, comp_bio,
 						  BTRFS_WQ_ENDIO_DATA);
-			BUG_ON(ret); /* -ENOMEM */
+			if (ret)
+				goto finish_cb;
 
 			ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
-			BUG_ON(ret); /* -ENOMEM */
+			if (ret)
+				goto finish_cb;
 
 			nr_sectors = DIV_ROUND_UP(comp_bio->bi_iter.bi_size,
 						  fs_info->sectorsize);
 			sums += fs_info->csum_size * nr_sectors;
 
 			ret = btrfs_map_bio(fs_info, comp_bio, mirror_num);
-			if (ret) {
-				comp_bio->bi_status = ret;
-				bio_endio(comp_bio);
-			}
+			if (ret)
+				goto finish_cb;
 
 			comp_bio = btrfs_bio_alloc(cur_disk_byte);
 			comp_bio->bi_opf = REQ_OP_READ;
@@ -825,16 +841,16 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 
 	atomic_inc(&cb->pending_bios);
 	ret = btrfs_bio_wq_end_io(fs_info, comp_bio, BTRFS_WQ_ENDIO_DATA);
-	BUG_ON(ret); /* -ENOMEM */
+	if (ret)
+		goto last_bio;
 
 	ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
-	BUG_ON(ret); /* -ENOMEM */
+	if (ret)
+		goto last_bio;
 
 	ret = btrfs_map_bio(fs_info, comp_bio, mirror_num);
-	if (ret) {
-		comp_bio->bi_status = ret;
-		bio_endio(comp_bio);
-	}
+	if (ret)
+		goto last_bio;
 
 	return 0;
 
@@ -850,6 +866,27 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 out:
 	free_extent_map(em);
 	return ret;
+last_bio:
+	cb->errors = 1;
+	comp_bio->bi_status = ret;
+	/* This is the last bio, endio functions will free @cb */
+	bio_endio(comp_bio);
+	return ret;
+finish_cb:
+	cb->errors = 1;
+	if (comp_bio) {
+		comp_bio->bi_status = ret;
+		bio_endio(comp_bio);
+	}
+	/*
+	 * Even with previous bio ended, we should still have io not yet
+	 * submitted, thus need to finish @cb manually.
+	 */
+	ASSERT(atomic_read(&cb->io_sectors));
+	wait_event(cb->pending_bio_wait, atomic_read(&cb->pending_bios) == 0);
+	/* Now we are the only one referring @cb, can finish it safely. */
+	finish_compressed_bio_read(cb, NULL);
+	return ret;
 }
 
 /*
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 41dd0bf6d5db..6f6c14f83c74 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -39,6 +39,9 @@ struct compressed_bio {
 	 */
 	atomic_t io_sectors;
 
+	/* To wait for any submitted bio, used in error handling */
+	wait_queue_head_t pending_bio_wait;
+
 	/* Number of compressed pages in the array */
 	unsigned int nr_pages;
 
-- 
2.32.0



* [PATCH v3 4/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_write()
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
                   ` (2 preceding siblings ...)
  2021-06-15 12:18 ` [PATCH v3 3/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read() Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 5/9] btrfs: introduce submit_compressed_bio() for compression Qu Wenruo
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

Just like btrfs_submit_compressed_read(), there are quite a few
BUG_ON()s inside btrfs_submit_compressed_write() in the bio submission
path.

Fix them using the same method:

- For the last bio, just end it
  In that case, the endio function of one of the submitted bios will be
  able to free the compressed_bio.

- For a half-submitted bio, wait and finish the compressed_bio manually
  In this case, once all the other bios finish, we're the only one
  referring to the compressed_bio and can manually finish it.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 103 ++++++++++++++++++++++++++---------------
 1 file changed, 66 insertions(+), 37 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index abbdb8d35001..706efa2d1438 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -371,50 +371,56 @@ static noinline void end_compressed_writeback(struct inode *inode,
 	/* the inode may be gone now */
 }
 
-/*
- * do the cleanup once all the compressed pages hit the disk.
- * This will clear writeback on the file pages and free the compressed
- * pages.
- *
- * This also calls the writeback end hooks for the file pages so that
- * metadata and checksums can be updated in the file.
- */
-static void end_compressed_bio_write(struct bio *bio)
+static void finish_compressed_bio_write(struct compressed_bio *cb)
 {
-	struct compressed_bio *cb = bio->bi_private;
-	struct inode *inode;
-	struct page *page;
+	struct inode *inode = cb->inode;
 	unsigned int index;
 
-	if (!dec_and_test_compressed_bio(cb, bio))
-		goto out;
-
-	/* ok, we're the last bio for this extent, step one is to
-	 * call back into the FS and do all the end_io operations
+	/*
+	 * Ok, we're the last bio for this extent, step one is to
+	 * call back into the FS and do all the end_io operations.
 	 */
-	inode = cb->inode;
-	btrfs_record_physical_zoned(inode, cb->start, bio);
 	btrfs_writepage_endio_finish_ordered(BTRFS_I(inode), NULL,
 			cb->start, cb->start + cb->len - 1,
-			bio->bi_status == BLK_STS_OK);
+			!cb->errors);
 
 	end_compressed_writeback(inode, cb);
-	/* note, our inode could be gone now */
+	/* Note, our inode could be gone now */
 
 	/*
-	 * release the compressed pages, these came from alloc_page and
+	 * Release the compressed pages, these came from alloc_page and
 	 * are not attached to the inode at all
 	 */
-	index = 0;
 	for (index = 0; index < cb->nr_pages; index++) {
-		page = cb->compressed_pages[index];
+		struct page *page = cb->compressed_pages[index];
+
 		page->mapping = NULL;
 		put_page(page);
 	}
 
-	/* finally free the cb struct */
+	/* Finally free the cb struct */
 	kfree(cb->compressed_pages);
 	kfree(cb);
+}
+
+/*
+ * do the cleanup once all the compressed pages hit the disk.
+ * This will clear writeback on the file pages and free the compressed
+ * pages.
+ *
+ * This also calls the writeback end hooks for the file pages so that
+ * metadata and checksums can be updated in the file.
+ */
+static void end_compressed_bio_write(struct bio *bio)
+{
+	struct compressed_bio *cb = bio->bi_private;
+
+	if (!dec_and_test_compressed_bio(cb, bio))
+		goto out;
+
+	btrfs_record_physical_zoned(cb->inode, cb->start, bio);
+
+	finish_compressed_bio_write(cb);
 out:
 	bio_put(bio);
 }
@@ -517,18 +523,18 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 			atomic_inc(&cb->pending_bios);
 			ret = btrfs_bio_wq_end_io(fs_info, bio,
 						  BTRFS_WQ_ENDIO_DATA);
-			BUG_ON(ret); /* -ENOMEM */
+			if (ret)
+				goto finish_cb;
 
 			if (!skip_sum) {
 				ret = btrfs_csum_one_bio(inode, bio, start, 1);
-				BUG_ON(ret); /* -ENOMEM */
+				if (ret)
+					goto finish_cb;
 			}
 
 			ret = btrfs_map_bio(fs_info, bio, 0);
-			if (ret) {
-				bio->bi_status = ret;
-				bio_endio(bio);
-			}
+			if (ret)
+				goto finish_cb;
 
 			bio = btrfs_bio_alloc(first_byte);
 			bio->bi_opf = bio_op | write_flags;
@@ -554,23 +560,46 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 
 	atomic_inc(&cb->pending_bios);
 	ret = btrfs_bio_wq_end_io(fs_info, bio, BTRFS_WQ_ENDIO_DATA);
-	BUG_ON(ret); /* -ENOMEM */
+	if (ret)
+		goto last_bio;
 
 	if (!skip_sum) {
 		ret = btrfs_csum_one_bio(inode, bio, start, 1);
-		BUG_ON(ret); /* -ENOMEM */
+		if (ret)
+			goto last_bio;
 	}
 
 	ret = btrfs_map_bio(fs_info, bio, 0);
-	if (ret) {
-		bio->bi_status = ret;
-		bio_endio(bio);
-	}
+	if (ret)
+		goto last_bio;
 
 	if (blkcg_css)
 		kthread_associate_blkcg(NULL);
 
 	return 0;
+last_bio:
+	cb->errors = 1;
+	bio->bi_status = ret;
+	/* One of the bios' endio function will free @cb. */
+	bio_endio(bio);
+	return ret;
+
+finish_cb:
+	cb->errors = 1;
+	if (bio) {
+		bio->bi_status = ret;
+		bio_endio(bio);
+	}
+
+	/*
+	 * Even with previous bio ended, we should still have io not yet
+	 * submitted, thus need to finish manually.
+	 */
+	ASSERT(atomic_read(&cb->io_sectors));
+	wait_event(cb->pending_bio_wait, atomic_read(&cb->pending_bios) == 0);
+	/* Now we are the only one referring @cb, can finish it safely. */
+	finish_compressed_bio_write(cb);
+	return ret;
 }
 
 static u64 bio_end_offset(struct bio *bio)
-- 
2.32.0



* [PATCH v3 5/9] btrfs: introduce submit_compressed_bio() for compression
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
                   ` (3 preceding siblings ...)
  2021-06-15 12:18 ` [PATCH v3 4/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_write() Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-15 15:59   ` Johannes Thumshirn
  2021-06-15 12:18 ` [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() " Qu Wenruo
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

The new helper, submit_compressed_bio(), will aggregate the following
work:

- Increase compressed_bio::pending_bios
- Remap the endio function
- Map and submit the bio

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 45 ++++++++++++++++++------------------------
 1 file changed, 19 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 706efa2d1438..608e9325b7c8 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -425,6 +425,21 @@ static void end_compressed_bio_write(struct bio *bio)
 	bio_put(bio);
 }
 
+static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
+					  struct compressed_bio *cb,
+					  struct bio *bio, int mirror_num)
+{
+	blk_status_t ret;
+
+	ASSERT(bio->bi_iter.bi_size);
+	atomic_inc(&cb->pending_bios);
+	ret = btrfs_bio_wq_end_io(fs_info, bio, BTRFS_WQ_ENDIO_DATA);
+	if (ret)
+		return ret;
+	ret = btrfs_map_bio(fs_info, bio, mirror_num);
+	return ret;
+}
+
 /*
  * worker function to build and submit bios for previously compressed pages.
  * The corresponding pages in the inode should be marked for writeback
@@ -520,19 +535,13 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 
 		page->mapping = NULL;
 		if (submit || len < PAGE_SIZE) {
-			atomic_inc(&cb->pending_bios);
-			ret = btrfs_bio_wq_end_io(fs_info, bio,
-						  BTRFS_WQ_ENDIO_DATA);
-			if (ret)
-				goto finish_cb;
-
 			if (!skip_sum) {
 				ret = btrfs_csum_one_bio(inode, bio, start, 1);
 				if (ret)
 					goto finish_cb;
 			}
 
-			ret = btrfs_map_bio(fs_info, bio, 0);
+			ret = submit_compressed_bio(fs_info, cb, bio, 0);
 			if (ret)
 				goto finish_cb;
 
@@ -558,18 +567,13 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 		cond_resched();
 	}
 
-	atomic_inc(&cb->pending_bios);
-	ret = btrfs_bio_wq_end_io(fs_info, bio, BTRFS_WQ_ENDIO_DATA);
-	if (ret)
-		goto last_bio;
-
 	if (!skip_sum) {
 		ret = btrfs_csum_one_bio(inode, bio, start, 1);
 		if (ret)
 			goto last_bio;
 	}
 
-	ret = btrfs_map_bio(fs_info, bio, 0);
+	ret = submit_compressed_bio(fs_info, cb, bio, 0);
 	if (ret)
 		goto last_bio;
 
@@ -840,12 +844,6 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 		if (submit || bio_add_page(comp_bio, page, pg_len, 0) < pg_len) {
 			unsigned int nr_sectors;
 
-			atomic_inc(&cb->pending_bios);
-			ret = btrfs_bio_wq_end_io(fs_info, comp_bio,
-						  BTRFS_WQ_ENDIO_DATA);
-			if (ret)
-				goto finish_cb;
-
 			ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
 			if (ret)
 				goto finish_cb;
@@ -854,7 +852,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 						  fs_info->sectorsize);
 			sums += fs_info->csum_size * nr_sectors;
 
-			ret = btrfs_map_bio(fs_info, comp_bio, mirror_num);
+			ret = submit_compressed_bio(fs_info, cb, comp_bio, mirror_num);
 			if (ret)
 				goto finish_cb;
 
@@ -868,16 +866,11 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 		cur_disk_byte += pg_len;
 	}
 
-	atomic_inc(&cb->pending_bios);
-	ret = btrfs_bio_wq_end_io(fs_info, comp_bio, BTRFS_WQ_ENDIO_DATA);
-	if (ret)
-		goto last_bio;
-
 	ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
 	if (ret)
 		goto last_bio;
 
-	ret = btrfs_map_bio(fs_info, comp_bio, mirror_num);
+	ret = submit_compressed_bio(fs_info, cb, comp_bio, mirror_num);
 	if (ret)
 		goto last_bio;
 
-- 
2.32.0



* [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() for compression
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
                   ` (4 preceding siblings ...)
  2021-06-15 12:18 ` [PATCH v3 5/9] btrfs: introduce submit_compressed_bio() for compression Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-15 15:58   ` Johannes Thumshirn
  2021-06-16 14:08   ` David Sterba
  2021-06-15 12:18 ` [PATCH v3 7/9] btrfs: make btrfs_submit_compressed_read() to determine stripe boundary at bio allocation time Qu Wenruo
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

Just aggregate the bio allocation code into one helper, so that we can
replace 4 call sites.

There is one special note for zoned write.

Currently btrfs_submit_compressed_write() will only allocate the first
bio using ZONE_APPEND.
If we have to submit the current bio due to a stripe boundary, the
newly allocated bio will not use ZONE_APPEND.

In theory this would be a bug, but considering that zoned mode
currently only supports the SINGLE profile, which doesn't have any
stripe boundary limit, it should never be a problem.

This function provides a good entry point for any work that needs to be
done at bio allocation time, like determining the stripe boundary.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 91 +++++++++++++++++++++++++++++-------------
 1 file changed, 63 insertions(+), 28 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 608e9325b7c8..a00f8f8d7512 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -440,6 +440,41 @@ static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
 	return ret;
 }
 
+/*
+ * Allocate a bio for a compressed_bio, which will be used to read/write on-disk data.
+ *
+ * @disk_bytenr:	Disk bytenr of the IO
+ * @opf:		Bio opf
+ * @endio_func:		Endio function
+ */
+static struct bio *alloc_compressed_bio(struct compressed_bio *cb, u64 disk_bytenr,
+					unsigned int opf, bio_end_io_t endio_func)
+{
+	struct bio *bio;
+
+	bio = btrfs_bio_alloc(disk_bytenr);
+	/* bioset allocation should not fail */
+	ASSERT(bio);
+
+	bio->bi_opf = opf;
+	bio->bi_private = cb;
+	bio->bi_end_io = endio_func;
+
+	if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
+		struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
+		struct btrfs_device *device;
+
+		device = btrfs_zoned_get_device(fs_info, disk_bytenr,
+						fs_info->sectorsize);
+		if (IS_ERR(device)) {
+			bio_put(bio);
+			return ERR_CAST(device);
+		}
+		bio_set_dev(bio, device->bdev);
+	}
+	return bio;
+}
+
 /*
  * worker function to build and submit bios for previously compressed pages.
  * The corresponding pages in the inode should be marked for writeback
@@ -486,22 +521,11 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	cb->orig_bio = NULL;
 	cb->nr_pages = nr_pages;
 
-	bio = btrfs_bio_alloc(first_byte);
-	bio->bi_opf = bio_op | write_flags;
-	bio->bi_private = cb;
-	bio->bi_end_io = end_compressed_bio_write;
-
-	if (use_append) {
-		struct btrfs_device *device;
-
-		device = btrfs_zoned_get_device(fs_info, disk_start, PAGE_SIZE);
-		if (IS_ERR(device)) {
-			kfree(cb);
-			bio_put(bio);
-			return BLK_STS_NOTSUPP;
-		}
-
-		bio_set_dev(bio, device->bdev);
+	bio = alloc_compressed_bio(cb, first_byte, bio_op | write_flags,
+				   end_compressed_bio_write);
+	if (IS_ERR(bio)) {
+		kfree(cb);
+		return errno_to_blk_status(PTR_ERR(bio));
 	}
 
 	if (blkcg_css) {
@@ -545,10 +569,14 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 			if (ret)
 				goto finish_cb;
 
-			bio = btrfs_bio_alloc(first_byte);
-			bio->bi_opf = bio_op | write_flags;
-			bio->bi_private = cb;
-			bio->bi_end_io = end_compressed_bio_write;
+			bio = alloc_compressed_bio(cb, first_byte,
+					bio_op | write_flags,
+					end_compressed_bio_write);
+			if (IS_ERR(bio)) {
+				ret = errno_to_blk_status(PTR_ERR(bio));
+				bio = NULL;
+				goto finish_cb;
+			}
 			if (blkcg_css)
 				bio->bi_opf |= REQ_CGROUP_PUNT;
 			/*
@@ -812,10 +840,13 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	/* include any pages we added in add_ra-bio_pages */
 	cb->len = bio->bi_iter.bi_size;
 
-	comp_bio = btrfs_bio_alloc(cur_disk_byte);
-	comp_bio->bi_opf = REQ_OP_READ;
-	comp_bio->bi_private = cb;
-	comp_bio->bi_end_io = end_compressed_bio_read;
+	comp_bio = alloc_compressed_bio(cb, cur_disk_byte, REQ_OP_READ,
+					end_compressed_bio_read);
+	if (IS_ERR(comp_bio)) {
+		ret = errno_to_blk_status(PTR_ERR(comp_bio));
+		comp_bio = NULL;
+		goto fail2;
+	}
 
 	for (pg_index = 0; pg_index < nr_pages; pg_index++) {
 		u32 pg_len = PAGE_SIZE;
@@ -856,10 +887,14 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 			if (ret)
 				goto finish_cb;
 
-			comp_bio = btrfs_bio_alloc(cur_disk_byte);
-			comp_bio->bi_opf = REQ_OP_READ;
-			comp_bio->bi_private = cb;
-			comp_bio->bi_end_io = end_compressed_bio_read;
+			comp_bio = alloc_compressed_bio(cb, cur_disk_byte,
+					REQ_OP_READ,
+					end_compressed_bio_read);
+			if (IS_ERR(comp_bio)) {
+				ret = errno_to_blk_status(PTR_ERR(comp_bio));
+				comp_bio = NULL;
+				goto finish_cb;
+			}
 
 			bio_add_page(comp_bio, page, pg_len, 0);
 		}
-- 
2.32.0



* [PATCH v3 7/9] btrfs: make btrfs_submit_compressed_read() to determine stripe boundary at bio allocation time
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
                   ` (5 preceding siblings ...)
  2021-06-15 12:18 ` [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() " Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 8/9] btrfs: make btrfs_submit_compressed_write() " Qu Wenruo
  2021-06-15 12:18 ` [PATCH v3 9/9] btrfs: remove unused function btrfs_bio_fits_in_stripe() Qu Wenruo
  8 siblings, 0 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

Currently btrfs_submit_compressed_read() will check
btrfs_bio_fits_in_stripe() each time a new page is going to be added.

Even if the compressed extent is small, we don't really need to do that
for every page.

This patch aligns the behavior with extent_io.c by determining the
stripe boundary when allocating a bio.

Unlike extent_io.c, in compression.c we don't need to bother with
things like different bio flags, thus there is no need to reuse
bio_ctrl.

Here we just manually introduce a new local variable,
next_stripe_start, and teach alloc_compressed_bio() to calculate the
stripe boundary.

Then each time we add some page range into the bio, we check whether we
have reached the boundary, and if so, submit the bio.

Also, since we have @cur_disk_byte to determine whether we're at the
last bio, we don't need an explicit last_bio: label for error handling
anymore.
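
As a worked example with hypothetical numbers (4K pages and sectors,
compressed_len = 12K, disk_bytenr = 120K, next_stripe_start = 128K),
each iteration clamps the added size against the stripe, page and
compressed length limits:

  iteration 1: offset = 0
	real_size = min(128K - 120K, 4K - 0, 12K - 0) = 4K
	cur_disk_byte becomes 124K, boundary not reached
  iteration 2: offset = 4K
	real_size = min(128K - 124K, 4K, 8K) = 4K
	cur_disk_byte becomes 128K == next_stripe_start, submit the bio
  iteration 3: a new bio is allocated at 128K with a fresh
	next_stripe_start from the io geometry, the remaining 4K is
	added, and the bio is submitted as the last one after the loop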

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 164 ++++++++++++++++++++++-------------------
 1 file changed, 90 insertions(+), 74 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index a00f8f8d7512..3d8ca17289bd 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -446,11 +446,17 @@ static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
  * @disk_bytenr:	Disk bytenr of the IO
  * @opf:		Bio opf
  * @endio_func:		Endio function
+ * @next_stripe_start:	Disk bytenr of next stripe start
  */
 static struct bio *alloc_compressed_bio(struct compressed_bio *cb, u64 disk_bytenr,
-					unsigned int opf, bio_end_io_t endio_func)
+					unsigned int opf, bio_end_io_t endio_func,
+					u64 *next_stripe_start)
 {
+	struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
+	struct btrfs_io_geometry geom;
+	struct extent_map *em;
 	struct bio *bio;
+	int ret;
 
 	bio = btrfs_bio_alloc(disk_bytenr);
 	/* bioset allocation should not fail */
@@ -460,18 +466,20 @@ static struct bio *alloc_compressed_bio(struct compressed_bio *cb, u64 disk_byte
 	bio->bi_private = cb;
 	bio->bi_end_io = endio_func;
 
-	if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
-		struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
-		struct btrfs_device *device;
+	em = btrfs_get_chunk_map(fs_info, disk_bytenr, fs_info->sectorsize);
+	if (IS_ERR(em))
+		return ERR_CAST(em);
+
+	if (bio_op(bio) == REQ_OP_ZONE_APPEND)
+		bio_set_dev(bio, em->map_lookup->stripes[0].dev->bdev);
+
+	ret = btrfs_get_io_geometry(fs_info, em, btrfs_op(bio), disk_bytenr,
+				    &geom);
+	free_extent_map(em);
+	if (ret < 0)
+		return ERR_PTR(ret);
+	*next_stripe_start = disk_bytenr + geom.len;
 
-		device = btrfs_zoned_get_device(fs_info, disk_bytenr,
-						fs_info->sectorsize);
-		if (IS_ERR(device)) {
-			bio_put(bio);
-			return ERR_CAST(device);
-		}
-		bio_set_dev(bio, device->bdev);
-	}
 	return bio;
 }
 
@@ -499,6 +507,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	int pg_index = 0;
 	struct page *page;
 	u64 first_byte = disk_start;
+	u64 next_stripe_start;
 	blk_status_t ret;
 	int skip_sum = inode->flags & BTRFS_INODE_NODATASUM;
 	const bool use_append = btrfs_use_zone_append(inode, disk_start);
@@ -522,7 +531,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	cb->nr_pages = nr_pages;
 
 	bio = alloc_compressed_bio(cb, first_byte, bio_op | write_flags,
-				   end_compressed_bio_write);
+				   end_compressed_bio_write,
+				   &next_stripe_start);
 	if (IS_ERR(bio)) {
 		kfree(cb);
 		return errno_to_blk_status(PTR_ERR(bio));
@@ -571,7 +581,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 
 			bio = alloc_compressed_bio(cb, first_byte,
 					bio_op | write_flags,
-					end_compressed_bio_write);
+					end_compressed_bio_write,
+					&next_stripe_start);
 			if (IS_ERR(bio)) {
 				ret = errno_to_blk_status(PTR_ERR(bio));
 				bio = NULL;
@@ -771,9 +782,10 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	unsigned int compressed_len;
 	unsigned int nr_pages;
 	unsigned int pg_index;
-	struct page *page;
-	struct bio *comp_bio;
-	u64 cur_disk_byte = bio->bi_iter.bi_sector << 9;
+	struct bio *comp_bio = NULL;
+	const u64 disk_bytenr = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+	u64 cur_disk_byte = disk_bytenr;
+	u64 next_stripe_start;
 	u64 em_len;
 	u64 em_start;
 	struct extent_map *em;
@@ -840,41 +852,58 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 	/* include any pages we added in add_ra-bio_pages */
 	cb->len = bio->bi_iter.bi_size;
 
-	comp_bio = alloc_compressed_bio(cb, cur_disk_byte, REQ_OP_READ,
-					end_compressed_bio_read);
-	if (IS_ERR(comp_bio)) {
-		ret = errno_to_blk_status(PTR_ERR(comp_bio));
-		comp_bio = NULL;
-		goto fail2;
-	}
-
-	for (pg_index = 0; pg_index < nr_pages; pg_index++) {
-		u32 pg_len = PAGE_SIZE;
-		int submit = 0;
+	while (cur_disk_byte < disk_bytenr + compressed_len) {
+		u64 offset = cur_disk_byte - disk_bytenr;
+		unsigned int index = offset >> PAGE_SHIFT;
+		unsigned int real_size;
+		unsigned int added;
+		struct page *page = cb->compressed_pages[index];
 
+		/* Allocate new bio if submitted or not yet allocated */
+		if (!comp_bio) {
+			comp_bio = alloc_compressed_bio(cb, cur_disk_byte,
+					REQ_OP_READ, end_compressed_bio_read,
+					&next_stripe_start);
+			if (IS_ERR(comp_bio)) {
+				ret = errno_to_blk_status(PTR_ERR(comp_bio));
+				comp_bio = NULL;
+				goto finish_cb;
+			}
+		}
 		/*
-		 * To handle subpage case, we need to make sure the bio only
-		 * covers the range we need.
-		 *
-		 * If we're at the last page, truncate the length to only cover
-		 * the remaining part.
+		 * We should never reach next_stripe_start, as we will
+		 * submit comp_bio immediately when reaching the boundary.
 		 */
-		if (pg_index == nr_pages - 1)
-			pg_len = min_t(u32, PAGE_SIZE,
-					compressed_len - pg_index * PAGE_SIZE);
-
-		page = cb->compressed_pages[pg_index];
-		page->mapping = inode->i_mapping;
-		page->index = em_start >> PAGE_SHIFT;
-
-		if (comp_bio->bi_iter.bi_size)
-			submit = btrfs_bio_fits_in_stripe(page, pg_len,
-							  comp_bio, 0);
+		ASSERT(cur_disk_byte != next_stripe_start);
+		/*
+		 * We have various limits on the real read size:
+		 * - stripe boundary
+		 * - page boundary
+		 * - compressed length boundary
+		 */
+		real_size = min_t(u64, U32_MAX,
+				  next_stripe_start - cur_disk_byte);
+		real_size = min_t(u64, real_size,
+				  PAGE_SIZE - offset_in_page(offset));
+		real_size = min_t(u64, real_size,
+				  compressed_len - offset);
+		ASSERT(IS_ALIGNED(real_size, fs_info->sectorsize));
+
+		added = bio_add_page(comp_bio, page, real_size,
+				     offset_in_page(offset));
+		/*
+		 * Maximum compressed extent is smaller than bio size limit,
+		 * thus bio_add_page() should always succeed.
+		 */
+		ASSERT(added == real_size);
+		cur_disk_byte += added;
 
-		page->mapping = NULL;
-		if (submit || bio_add_page(comp_bio, page, pg_len, 0) < pg_len) {
+		/* Reached stripe boundary, need to submit */
+		if (cur_disk_byte == next_stripe_start) {
 			unsigned int nr_sectors;
 
+			ASSERT(comp_bio->bi_iter.bi_size);
+
 			ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
 			if (ret)
 				goto finish_cb;
@@ -883,32 +912,22 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 						  fs_info->sectorsize);
 			sums += fs_info->csum_size * nr_sectors;
 
-			ret = submit_compressed_bio(fs_info, cb, comp_bio, mirror_num);
+			ret = submit_compressed_bio(fs_info, cb, comp_bio,
+						    mirror_num);
 			if (ret)
 				goto finish_cb;
-
-			comp_bio = alloc_compressed_bio(cb, cur_disk_byte,
-					REQ_OP_READ,
-					end_compressed_bio_read);
-			if (IS_ERR(comp_bio)) {
-				ret = errno_to_blk_status(PTR_ERR(comp_bio));
-				comp_bio = NULL;
-				goto finish_cb;
-			}
-
-			bio_add_page(comp_bio, page, pg_len, 0);
+			comp_bio = NULL;
 		}
-		cur_disk_byte += pg_len;
 	}
-
-	ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
-	if (ret)
-		goto last_bio;
-
-	ret = submit_compressed_bio(fs_info, cb, comp_bio, mirror_num);
-	if (ret)
-		goto last_bio;
-
+	/* Submit the last bio */
+	if (comp_bio) {
+		ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
+		if (ret)
+			goto finish_cb;
+		ret = submit_compressed_bio(fs_info, cb, comp_bio, mirror_num);
+		if (ret)
+			goto finish_cb;
+	}
 	return 0;
 
 fail2:
@@ -923,17 +942,14 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 out:
 	free_extent_map(em);
 	return ret;
-last_bio:
-	cb->errors = 1;
-	comp_bio->bi_status = ret;
-	/* This is the last bio, endio functions will free @cb */
-	bio_endio(comp_bio);
-	return ret;
 finish_cb:
 	cb->errors = 1;
 	if (comp_bio) {
 		comp_bio->bi_status = ret;
 		bio_endio(comp_bio);
+		/* All bytes of @cb is submitted, endio will free @cb */
+		if (cur_disk_byte >= disk_bytenr + compressed_len)
+			return ret;
 	}
 	/*
 	 * Even with previous bio ended, we should still have io not yet
-- 
2.32.0



* [PATCH v3 8/9] btrfs: make btrfs_submit_compressed_write() to determine stripe boundary at bio allocation time
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
                   ` (6 preceding siblings ...)
  2021-06-15 12:18 ` [PATCH v3 7/9] btrfs: make btrfs_submit_compressed_read() to determine stripe boundary at bio allocation time Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  2021-06-16 14:12   ` David Sterba
  2021-06-15 12:18 ` [PATCH v3 9/9] btrfs: remove unused function btrfs_bio_fits_in_stripe() Qu Wenruo
  8 siblings, 1 reply; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

Currently btrfs_submit_compressed_write() will check
btrfs_bio_fits_in_stripe() each time a new page is going to be added.

Even if the compressed extent is small, we don't really need to do that
for every page.

This patch aligns the behavior with extent_io.c by determining the
stripe boundary when allocating a bio.

Unlike extent_io.c, in compression.c we don't need to bother with
things like different bio flags, thus there is no need to reuse
bio_ctrl.

Here we just manually introduce a new local variable,
next_stripe_start, and use the value returned from
alloc_compressed_bio() as the stripe boundary.

Then each time we add some page range into the bio, we check whether we
have reached the boundary, and if so, submit the bio.

Also, since we have @cur_disk_bytenr to determine whether we're at the
last bio, we don't need an explicit last_bio: label for error handling
anymore.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 142 +++++++++++++++++++----------------------
 1 file changed, 65 insertions(+), 77 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 3d8ca17289bd..e14243f9b33c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -503,10 +503,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct bio *bio = NULL;
 	struct compressed_bio *cb;
-	unsigned long bytes_left;
-	int pg_index = 0;
-	struct page *page;
-	u64 first_byte = disk_start;
+	u64 cur_disk_bytenr = disk_start;
 	u64 next_stripe_start;
 	blk_status_t ret;
 	int skip_sum = inode->flags & BTRFS_INODE_NODATASUM;
@@ -530,45 +527,60 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	cb->orig_bio = NULL;
 	cb->nr_pages = nr_pages;
 
-	bio = alloc_compressed_bio(cb, first_byte, bio_op | write_flags,
-				   end_compressed_bio_write,
-				   &next_stripe_start);
-	if (IS_ERR(bio)) {
-		kfree(cb);
-		return errno_to_blk_status(PTR_ERR(bio));
-	}
-
-	if (blkcg_css) {
-		bio->bi_opf |= REQ_CGROUP_PUNT;
-		kthread_associate_blkcg(blkcg_css);
-	}
-
-	/* create and submit bios for the compressed pages */
-	bytes_left = compressed_len;
-	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
-		int submit = 0;
-		int len = 0;
-
-		page = compressed_pages[pg_index];
-		page->mapping = inode->vfs_inode.i_mapping;
-		if (bio->bi_iter.bi_size)
-			submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio,
-							  0);
+	while (cur_disk_bytenr < disk_start + compressed_len) {
+		u64 offset = cur_disk_bytenr - disk_start;
+		unsigned int index = offset >> PAGE_SHIFT;
+		unsigned int real_size;
+		unsigned int added;
+		struct page *page = compressed_pages[index];
+		bool submit = false;
 
+		/* Allocate new bio if submitted or not yet allocated */
+		if (!bio) {
+			bio = alloc_compressed_bio(cb, cur_disk_bytenr,
+				bio_op | write_flags, end_compressed_bio_write,
+				&next_stripe_start);
+			if (IS_ERR(bio)) {
+				ret = errno_to_blk_status(PTR_ERR(bio));
+				bio = NULL;
+				goto finish_cb;
+			}
+		}
 		/*
-		 * Page can only be added to bio if the current bio fits in
-		 * stripe.
+		 * We should never reach next_stripe_start, as we will
+		 * submit the bio immediately when reaching the boundary.
 		 */
-		if (!submit) {
-			if (pg_index == 0 && use_append)
-				len = bio_add_zone_append_page(bio, page,
-							       PAGE_SIZE, 0);
-			else
-				len = bio_add_page(bio, page, PAGE_SIZE, 0);
-		}
+		ASSERT(cur_disk_bytenr != next_stripe_start);
+		/*
+		 * We have various limits on the real write size:
+		 * - stripe boundary
+		 * - page boundary
+		 * - compressed length boundary
+		 */
+		real_size = min_t(u64, U32_MAX,
+				  next_stripe_start - cur_disk_bytenr);
+		real_size = min_t(u64, real_size,
+				  PAGE_SIZE - offset_in_page(offset));
+		real_size = min_t(u64, real_size,
+				  compressed_len - offset);
+		ASSERT(IS_ALIGNED(real_size, fs_info->sectorsize));
 
-		page->mapping = NULL;
-		if (submit || len < PAGE_SIZE) {
+		if (use_append)
+			added = bio_add_zone_append_page(bio, page, real_size,
+					offset_in_page(offset));
+		else
+			added = bio_add_page(bio, page, real_size,
+					offset_in_page(offset));
+		/* bio_add_zone_append_page() may fail */
+		ASSERT(added == 0 || added == real_size);
+		if (added == 0)
+			submit = true;
+
+		cur_disk_bytenr += added;
+		if (cur_disk_bytenr == next_stripe_start)
+			submit = true;
+
+		if (submit) {
 			if (!skip_sum) {
 				ret = btrfs_csum_one_bio(inode, bio, start, 1);
 				if (ret)
@@ -578,60 +590,36 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 			ret = submit_compressed_bio(fs_info, cb, bio, 0);
 			if (ret)
 				goto finish_cb;
-
-			bio = alloc_compressed_bio(cb, first_byte,
-					bio_op | write_flags,
-					end_compressed_bio_write,
-					&next_stripe_start);
-			if (IS_ERR(bio)) {
-				ret = errno_to_blk_status(PTR_ERR(bio));
-				bio = NULL;
-				goto finish_cb;
-			}
-			if (blkcg_css)
-				bio->bi_opf |= REQ_CGROUP_PUNT;
-			/*
-			 * Use bio_add_page() to ensure the bio has at least one
-			 * page.
-			 */
-			bio_add_page(bio, page, PAGE_SIZE, 0);
-		}
-		if (bytes_left < PAGE_SIZE) {
-			btrfs_info(fs_info,
-					"bytes left %lu compress len %u nr %u",
-			       bytes_left, cb->compressed_len, cb->nr_pages);
+			bio = NULL;
 		}
-		bytes_left -= PAGE_SIZE;
-		first_byte += PAGE_SIZE;
 		cond_resched();
 	}
 
-	if (!skip_sum) {
-		ret = btrfs_csum_one_bio(inode, bio, start, 1);
+	/* Submit the last bio */
+	if (bio) {
+		if (!skip_sum) {
+			ret = btrfs_csum_one_bio(inode, bio, start, 1);
+			if (ret)
+				goto finish_cb;
+		}
+
+		ret = submit_compressed_bio(fs_info, cb, bio, 0);
 		if (ret)
-			goto last_bio;
+			goto finish_cb;
 	}
-
-	ret = submit_compressed_bio(fs_info, cb, bio, 0);
-	if (ret)
-		goto last_bio;
-
 	if (blkcg_css)
 		kthread_associate_blkcg(NULL);
 
 	return 0;
-last_bio:
-	cb->errors = 1;
-	bio->bi_status = ret;
-	/* One of the bios' endio function will free @cb. */
-	bio_endio(bio);
-	return ret;
 
 finish_cb:
 	cb->errors = 1;
 	if (bio) {
 		bio->bi_status = ret;
 		bio_endio(bio);
+		/* Last byte of @cb is submitted, endio will free @cb */
+		if (cur_disk_bytenr >= disk_start + compressed_len)
+			return ret;
 	}
 
 	/*
-- 
2.32.0



* [PATCH v3 9/9] btrfs: remove unused function btrfs_bio_fits_in_stripe()
  2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
                   ` (7 preceding siblings ...)
  2021-06-15 12:18 ` [PATCH v3 8/9] btrfs: make btrfs_submit_compressed_write() " Qu Wenruo
@ 2021-06-15 12:18 ` Qu Wenruo
  8 siblings, 0 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 12:18 UTC (permalink / raw)
  To: linux-btrfs

As the last caller in compression.c is removed, we don't need that
function anymore.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/ctree.h |  2 --
 fs/btrfs/inode.c | 42 ------------------------------------------
 2 files changed, 44 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 2fbd5330e2da..f33812c68454 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3163,8 +3163,6 @@ void btrfs_merge_delalloc_extent(struct inode *inode, struct extent_state *new,
 				 struct extent_state *other);
 void btrfs_split_delalloc_extent(struct inode *inode,
 				 struct extent_state *orig, u64 split);
-int btrfs_bio_fits_in_stripe(struct page *page, size_t size, struct bio *bio,
-			     unsigned long bio_flags);
 void btrfs_set_range_writeback(struct btrfs_inode *inode, u64 start, u64 end);
 vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf);
 int btrfs_readpage(struct file *file, struct page *page);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 19421cde2050..987187c4dd72 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2217,48 +2217,6 @@ void btrfs_clear_delalloc_extent(struct inode *vfs_inode,
 	}
 }
 
-/*
- * btrfs_bio_fits_in_stripe - Checks whether the size of the given bio will fit
- * in a chunk's stripe. This function ensures that bios do not span a
- * stripe/chunk
- *
- * @page - The page we are about to add to the bio
- * @size - size we want to add to the bio
- * @bio - bio we want to ensure is smaller than a stripe
- * @bio_flags - flags of the bio
- *
- * return 1 if page cannot be added to the bio
- * return 0 if page can be added to the bio
- * return error otherwise
- */
-int btrfs_bio_fits_in_stripe(struct page *page, size_t size, struct bio *bio,
-			     unsigned long bio_flags)
-{
-	struct inode *inode = page->mapping->host;
-	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
-	u64 logical = bio->bi_iter.bi_sector << 9;
-	u32 bio_len = bio->bi_iter.bi_size;
-	struct extent_map *em;
-	int ret = 0;
-	struct btrfs_io_geometry geom;
-
-	if (bio_flags & EXTENT_BIO_COMPRESSED)
-		return 0;
-
-	em = btrfs_get_chunk_map(fs_info, logical, fs_info->sectorsize);
-	if (IS_ERR(em))
-		return PTR_ERR(em);
-	ret = btrfs_get_io_geometry(fs_info, em, btrfs_op(bio), logical, &geom);
-	if (ret < 0)
-		goto out;
-
-	if (geom.len < bio_len + size)
-		ret = 1;
-out:
-	free_extent_map(em);
-	return ret;
-}
-
 /*
  * in order to insert checksums into the metadata in large chunks,
  * we wait until bio submission time.   All the pages in the bio are
-- 
2.32.0



* Re: [PATCH v3 1/9] btrfs: remove a dead comment for btrfs_decompress_bio()
  2021-06-15 12:18 ` [PATCH v3 1/9] btrfs: remove a dead comment for btrfs_decompress_bio() Qu Wenruo
@ 2021-06-15 12:20   ` Johannes Thumshirn
  0 siblings, 0 replies; 18+ messages in thread
From: Johannes Thumshirn @ 2021-06-15 12:20 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs; +Cc: Anand Jain

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() for compression
  2021-06-15 12:18 ` [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() " Qu Wenruo
@ 2021-06-15 15:58   ` Johannes Thumshirn
  2021-06-15 23:09     ` Qu Wenruo
  2021-06-16 14:08   ` David Sterba
  1 sibling, 1 reply; 18+ messages in thread
From: Johannes Thumshirn @ 2021-06-15 15:58 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 15/06/2021 14:18, Qu Wenruo wrote:
> +static struct bio *alloc_compressed_bio(struct compressed_bio *cb, u64 disk_bytenr,
> +					unsigned int opf, bio_end_io_t endio_func)
> +{
> +	struct bio *bio;
> +
> +	bio = btrfs_bio_alloc(disk_bytenr);
> +	/* bioset allocation should not fail */
> +	ASSERT(bio);

Here you write that bio allocation shouldn't fail (because it's backed
by a bioset/mempool and we're not calling from IRQ context).

[...]

> +	bio = alloc_compressed_bio(cb, first_byte, bio_op | write_flags,
> +				   end_compressed_bio_write);
> +	if (IS_ERR(bio)) {
> +		kfree(cb);
> +		return errno_to_blk_status(PTR_ERR(bio));
>  	}

Here you're checking for IS_ERR().

> @@ -545,10 +569,14 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,

[...]

> +			bio = alloc_compressed_bio(cb, first_byte,
> +					bio_op | write_flags,
> +					end_compressed_bio_write);
> +			if (IS_ERR(bio)) {
> +				ret = errno_to_blk_status(PTR_ERR(bio));
> +				bio = NULL;
> +				goto finish_cb;
> +			}

same

> @@ -812,10 +840,13 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,

[...]

> +	comp_bio = alloc_compressed_bio(cb, cur_disk_byte, REQ_OP_READ,
> +					end_compressed_bio_read);
> +	if (IS_ERR(comp_bio)) {
> +		ret = errno_to_blk_status(PTR_ERR(comp_bio));
> +		comp_bio = NULL;
> +		goto fail2;
> +	}
>  

same

If btrfs_bio_alloc() had failed, we'd already have crashed on a NULL
pointer dereference much earlier.


* Re: [PATCH v3 5/9] btrfs: introduce submit_compressed_bio() for compression
  2021-06-15 12:18 ` [PATCH v3 5/9] btrfs: introduce submit_compressed_bio() for compression Qu Wenruo
@ 2021-06-15 15:59   ` Johannes Thumshirn
  0 siblings, 0 replies; 18+ messages in thread
From: Johannes Thumshirn @ 2021-06-15 15:59 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() for compression
  2021-06-15 15:58   ` Johannes Thumshirn
@ 2021-06-15 23:09     ` Qu Wenruo
  0 siblings, 0 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-15 23:09 UTC (permalink / raw)
  To: Johannes Thumshirn, Qu Wenruo, linux-btrfs



On 2021/6/15 11:58 PM, Johannes Thumshirn wrote:
> On 15/06/2021 14:18, Qu Wenruo wrote:
>> +static struct bio *alloc_compressed_bio(struct compressed_bio *cb, u64 disk_bytenr,
>> +					unsigned int opf, bio_end_io_t endio_func)
>> +{
>> +	struct bio *bio;
>> +
>> +	bio = btrfs_bio_alloc(disk_bytenr);
>> +	/* bioset allocation should not fail */
>> +	ASSERT(bio);
>
> Here you write that bio allocation shouldn't fail (because it's backed
> by a bioset/mempool and we're not calling from IRQ context).

But alloc_compressed_bio() has other error paths, namely
btrfs_get_chunk_map() and btrfs_get_io_geometry() can fail, so callers
still need to check for errors.

That said, thanks to your comment I noticed that I should call bio_put()
in those error cases before returning ERR_CAST().
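
Roughly like this in the version that also looks up the stripe boundary
(just a sketch reusing @geom and @next_stripe_start from patches 7/8,
the exact shape in v4 may differ):

	em = btrfs_get_chunk_map(fs_info, disk_bytenr, fs_info->sectorsize);
	if (IS_ERR(em)) {
		/* Don't leak the bio we just allocated */
		bio_put(bio);
		return ERR_CAST(em);
	}
	ret = btrfs_get_io_geometry(fs_info, em, btrfs_op(bio), disk_bytenr,
				    &geom);
	free_extent_map(em);
	if (ret < 0) {
		bio_put(bio);
		return ERR_PTR(ret);
	}
	*next_stripe_start = disk_bytenr + geom.len;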

Thanks,
Qu
>
> [...]
>
>> +	bio = alloc_compressed_bio(cb, first_byte, bio_op | write_flags,
>> +				   end_compressed_bio_write);
>> +	if (IS_ERR(bio)) {
>> +		kfree(cb);
>> +		return errno_to_blk_status(PTR_ERR(bio));
>>   	}
>
> Here you're checking for IS_ERR().
>
>> @@ -545,10 +569,14 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>
> [...]
>
>> +			bio = alloc_compressed_bio(cb, first_byte,
>> +					bio_op | write_flags,
>> +					end_compressed_bio_write);
>> +			if (IS_ERR(bio)) {
>> +				ret = errno_to_blk_status(PTR_ERR(bio));
>> +				bio = NULL;
>> +				goto finish_cb;
>> +			}
>
> same
>
>> @@ -812,10 +840,13 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>
> [...]
>
>> +	comp_bio = alloc_compressed_bio(cb, cur_disk_byte, REQ_OP_READ,
>> +					end_compressed_bio_read);
>> +	if (IS_ERR(comp_bio)) {
>> +		ret = errno_to_blk_status(PTR_ERR(comp_bio));
>> +		comp_bio = NULL;
>> +		goto fail2;
>> +	}
>>
>
> same
>
> if btrfs_bio_alloc() would have failed we'd already crash on a nullptr
> dereference much earlier.
>


* Re: [PATCH v3 3/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read()
  2021-06-15 12:18 ` [PATCH v3 3/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read() Qu Wenruo
@ 2021-06-16 14:03   ` David Sterba
  2021-06-16 22:23     ` Qu Wenruo
  0 siblings, 1 reply; 18+ messages in thread
From: David Sterba @ 2021-06-16 14:03 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Tue, Jun 15, 2021 at 08:18:30PM +0800, Qu Wenruo wrote:
> There are quite some BUG_ON()s inside btrfs_submit_compressed_read(),
> namely all errors inside the for() loop rely on BUG_ON() to handle
> -ENOMEM.
> 
> Hunt down these BUG_ON()s properly by:
> 
> - Introduce compressed_bio::pending_bios_wait
>   This allows us to wait for any submitted bio to finish, while still
>   keeping the compressed_bio from being freed, as
>   compressed_bio::io_sectors should be non-zero.
> 
> - Introduce finish_compressed_bio_read() to finish the compressed_bio
> 
> - Properly end the bio and finish compressed_bio when error happens
> 
> Now in btrfs_submit_compressed_read() even when the bio submission
> failed, we can properly handle the error without triggering BUG_ON().
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>

Please change the subject to something like "btrfs: do proper error
handling in btrfs_submit_compressed_read", same for the other patch.

> ---
>  fs/btrfs/compression.c | 127 ++++++++++++++++++++++++++---------------
>  fs/btrfs/compression.h |   3 +
>  2 files changed, 85 insertions(+), 45 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index bbfee9ffd20a..abbdb8d35001 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -220,7 +220,6 @@ static bool dec_and_test_compressed_bio(struct compressed_bio *cb,
>  		cb->errors = 1;
>  
>  	ASSERT(bi_size && bi_size <= cb->compressed_len);
> -	atomic_dec(&cb->pending_bios);
>  
>  	/*
>  	 * Here we only need to check io_sectors, as if that is 0, we definily
> @@ -232,9 +231,55 @@ static bool dec_and_test_compressed_bio(struct compressed_bio *cb,
>  	ASSERT(atomic_read(&cb->io_sectors) <
>  	       (cb->compressed_len >> fs_info->sectorsize_bits));
>  
> +	/*
> +	 * Here we must wake up pending_bio_wait after all other operations on
> +	 * @cb finished, or we can race with finish_compressed_bio_*() in
> +	 * error path.
> +	 */
> +	atomic_dec(&cb->pending_bios);
> +	wake_up(&cb->pending_bio_wait);
> +
>  	return last_io;
>  }
>  
> +static void finish_compressed_bio_read(struct compressed_bio *cb,
> +				       struct bio *bio)
> +{
> +	unsigned int index;
> +	struct page *page;
> +
> +	/* release the compressed pages */

Please fix/update comments in code that gets moved, here it's the
capitalization (comments should start with an uppercase letter)

> +	for (index = 0; index < cb->nr_pages; index++) {
> +		page = cb->compressed_pages[index];
> +		page->mapping = NULL;
> +		put_page(page);
> +	}
> +
> +	/* do io completion on the original bio */

	/* Do io ... */

> +	if (cb->errors) {
> +		bio_io_error(cb->orig_bio);
> +	} else {
> +		struct bio_vec *bvec;
> +		struct bvec_iter_all iter_all;
> +
> +		ASSERT(bio);
> +		ASSERT(!bio->bi_status);
> +		/*
> +		 * we have verified the checksum already, set page
> +		 * checked so the end_io handlers know about it
> +		 */
> +		ASSERT(!bio_flagged(bio, BIO_CLONED));
> +		bio_for_each_segment_all(bvec, cb->orig_bio, iter_all)
> +			SetPageChecked(bvec->bv_page);
> +
> +		bio_endio(cb->orig_bio);
> +	}
> +
> +	/* finally free the cb struct */
> +	kfree(cb->compressed_pages);
> +	kfree(cb);
> +}
> +
>  /* when we finish reading compressed pages from the disk, we
>   * decompress them and then run the bio end_io routines on the
>   * decompressed pages (in the inode address space).
> @@ -249,8 +294,6 @@ static void end_compressed_bio_read(struct bio *bio)
>  {
>  	struct compressed_bio *cb = bio->bi_private;
>  	struct inode *inode;
> -	struct page *page;
> -	unsigned int index;
>  	unsigned int mirror = btrfs_io_bio(bio)->mirror_num;
>  	int ret = 0;
>  
> @@ -285,36 +328,7 @@ static void end_compressed_bio_read(struct bio *bio)
>  csum_failed:
>  	if (ret)
>  		cb->errors = 1;
> -
> -	/* release the compressed pages */
> -	index = 0;
> -	for (index = 0; index < cb->nr_pages; index++) {
> -		page = cb->compressed_pages[index];
> -		page->mapping = NULL;
> -		put_page(page);
> -	}
> -
> -	/* do io completion on the original bio */
> -	if (cb->errors) {
> -		bio_io_error(cb->orig_bio);
> -	} else {
> -		struct bio_vec *bvec;
> -		struct bvec_iter_all iter_all;
> -
> -		/*
> -		 * we have verified the checksum already, set page
> -		 * checked so the end_io handlers know about it
> -		 */
> -		ASSERT(!bio_flagged(bio, BIO_CLONED));
> -		bio_for_each_segment_all(bvec, cb->orig_bio, iter_all)
> -			SetPageChecked(bvec->bv_page);
> -
> -		bio_endio(cb->orig_bio);
> -	}
> -
> -	/* finally free the cb struct */
> -	kfree(cb->compressed_pages);
> -	kfree(cb);
> +	finish_compressed_bio_read(cb, bio);
>  out:
>  	bio_put(bio);
>  }
> @@ -440,6 +454,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>  		return BLK_STS_RESOURCE;
>  	atomic_set(&cb->pending_bios, 0);
>  	atomic_set(&cb->io_sectors, compressed_len >> fs_info->sectorsize_bits);
> +	init_waitqueue_head(&cb->pending_bio_wait);
>  	cb->errors = 0;
>  	cb->inode = &inode->vfs_inode;
>  	cb->start = start;
> @@ -723,6 +738,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  
>  	atomic_set(&cb->pending_bios, 0);
>  	atomic_set(&cb->io_sectors, compressed_len >> fs_info->sectorsize_bits);
> +	init_waitqueue_head(&cb->pending_bio_wait);
>  	cb->errors = 0;
>  	cb->inode = inode;
>  	cb->mirror_num = mirror_num;
> @@ -798,20 +814,20 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  			atomic_inc(&cb->pending_bios);
>  			ret = btrfs_bio_wq_end_io(fs_info, comp_bio,
>  						  BTRFS_WQ_ENDIO_DATA);
> -			BUG_ON(ret); /* -ENOMEM */
> +			if (ret)
> +				goto finish_cb;
>  
>  			ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
> -			BUG_ON(ret); /* -ENOMEM */
> +			if (ret)
> +				goto finish_cb;
>  
>  			nr_sectors = DIV_ROUND_UP(comp_bio->bi_iter.bi_size,
>  						  fs_info->sectorsize);
>  			sums += fs_info->csum_size * nr_sectors;
>  
>  			ret = btrfs_map_bio(fs_info, comp_bio, mirror_num);
> -			if (ret) {
> -				comp_bio->bi_status = ret;
> -				bio_endio(comp_bio);
> -			}
> +			if (ret)
> +				goto finish_cb;
>  
>  			comp_bio = btrfs_bio_alloc(cur_disk_byte);
>  			comp_bio->bi_opf = REQ_OP_READ;
> @@ -825,16 +841,16 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  
>  	atomic_inc(&cb->pending_bios);
>  	ret = btrfs_bio_wq_end_io(fs_info, comp_bio, BTRFS_WQ_ENDIO_DATA);
> -	BUG_ON(ret); /* -ENOMEM */
> +	if (ret)
> +		goto last_bio;
>  
>  	ret = btrfs_lookup_bio_sums(inode, comp_bio, sums);
> -	BUG_ON(ret); /* -ENOMEM */
> +	if (ret)
> +		goto last_bio;
>  
>  	ret = btrfs_map_bio(fs_info, comp_bio, mirror_num);
> -	if (ret) {
> -		comp_bio->bi_status = ret;
> -		bio_endio(comp_bio);
> -	}
> +	if (ret)
> +		goto last_bio;
>  
>  	return 0;
>  
> @@ -850,6 +866,27 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  out:
>  	free_extent_map(em);
>  	return ret;
> +last_bio:
> +	cb->errors = 1;
> +	comp_bio->bi_status = ret;
> +	/* This is the last bio, endio functions will free @cb */
> +	bio_endio(comp_bio);
> +	return ret;
> +finish_cb:
> +	cb->errors = 1;
> +	if (comp_bio) {
> +		comp_bio->bi_status = ret;
> +		bio_endio(comp_bio);
> +	}
> +	/*
> +	 * Even with previous bio ended, we should still have io not yet
> +	 * submitted, thus need to finish @cb manually.
> +	 */
> +	ASSERT(atomic_read(&cb->io_sectors));
> +	wait_event(cb->pending_bio_wait, atomic_read(&cb->pending_bios) == 0);
> +	/* Now we are the only one referring @cb, can finish it safely. */
> +	finish_compressed_bio_read(cb, NULL);
> +	return ret;
>  }
>  
>  /*
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index 41dd0bf6d5db..6f6c14f83c74 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -39,6 +39,9 @@ struct compressed_bio {
>  	 */
>  	atomic_t io_sectors;
>  
> +	/* To wait for any submitted bio, used in error handling */
> +	wait_queue_head_t pending_bio_wait;

This adds 24 bytes to the structure and it's only used for error
handling, so that does not seem justified enough.

There are system-wide wait queues, shared with other subsystems, but
they look like a better fit for the exceptional error case. See commit
6b2bb7265f0b62605 for more details; the change is otherwise trivial, and
the API functions are wait_var_event(&variable, condition) and
wake_up_var(&variable), where the variable is a unique key, which here
would be the compressed_bio.
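
For example (an illustrative sketch only, keeping pending_bios as an
atomic_t):

	/* Waker side, e.g. in dec_and_test_compressed_bio(): */
	atomic_dec(&cb->pending_bios);
	wake_up_var(&cb->pending_bios);

	/* Waiter side in the error path, no dedicated wait queue needed: */
	wait_var_event(&cb->pending_bios,
		       atomic_read(&cb->pending_bios) == 0);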

> +
>  	/* Number of compressed pages in the array */
>  	unsigned int nr_pages;
>  
> -- 
> 2.32.0


* Re: [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() for compression
  2021-06-15 12:18 ` [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() " Qu Wenruo
  2021-06-15 15:58   ` Johannes Thumshirn
@ 2021-06-16 14:08   ` David Sterba
  1 sibling, 0 replies; 18+ messages in thread
From: David Sterba @ 2021-06-16 14:08 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Tue, Jun 15, 2021 at 08:18:33PM +0800, Qu Wenruo wrote:
> Just aggregate the bio allocation code into one helper, so that we can
> replace 4 call sites.
> 
> There is one special note for zoned write.
> 
> Currently btrfs_submit_compressed_write() will only allocate the first
> bio using ZONE_APPEND.
> If we have to submit the current bio due to a stripe boundary, the newly
> allocated bio will not use ZONE_APPEND.
> 
> In theory this would be a bug, but considering zoned mode currently
> only supports the SINGLE profile, which doesn't have any stripe boundary
> limit, it should never be a problem.
> 
> This function will provide a good entry point for any work which needs
> to be done at bio allocation time, like determining the stripe boundary.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/compression.c | 91 +++++++++++++++++++++++++++++-------------
>  1 file changed, 63 insertions(+), 28 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 608e9325b7c8..a00f8f8d7512 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -440,6 +440,41 @@ static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
>  	return ret;
>  }
>  
> +/*
> + * To allocate a compressed_bio, which will be used to read/write on-disk data.
> + *
> + * @disk_bytenr:	Disk bytenr of the IO
> + * @opf:		Bio opf
> + * @endio_func:		Endio function

Variable comments that basically repeat the name are useless. Follow-up
patches add some other parameters that are not that trivial, but the
point still stands.

> + */
> +static struct bio *alloc_compressed_bio(struct compressed_bio *cb, u64 disk_bytenr,
> +					unsigned int opf, bio_end_io_t endio_func)
> +{
> +	struct bio *bio;
> +
> +	bio = btrfs_bio_alloc(disk_bytenr);
> +	/* bioset allocation should not fail */
> +	ASSERT(bio);

Please drop the comment and the assert; that it does not fail is
documented next to btrfs_bio_alloc(), so it's not necessary to repeat it
at each call site.

> +
> +	bio->bi_opf = opf;
> +	bio->bi_private = cb;
> +	bio->bi_end_io = endio_func;
> +
> +	if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
> +		struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
> +		struct btrfs_device *device;
> +
> +		device = btrfs_zoned_get_device(fs_info, disk_bytenr,
> +						fs_info->sectorsize);
> +		if (IS_ERR(device)) {
> +			bio_put(bio);
> +			return ERR_CAST(device);
> +		}
> +		bio_set_dev(bio, device->bdev);
> +	}
> +	return bio;
> +}
> +
>  /*
>   * worker function to build and submit bios for previously compressed pages.
>   * The corresponding pages in the inode should be marked for writeback
> @@ -486,22 +521,11 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>  	cb->orig_bio = NULL;
>  	cb->nr_pages = nr_pages;
>  
> -	bio = btrfs_bio_alloc(first_byte);
> -	bio->bi_opf = bio_op | write_flags;
> -	bio->bi_private = cb;
> -	bio->bi_end_io = end_compressed_bio_write;
> -
> -	if (use_append) {
> -		struct btrfs_device *device;
> -
> -		device = btrfs_zoned_get_device(fs_info, disk_start, PAGE_SIZE);
> -		if (IS_ERR(device)) {
> -			kfree(cb);
> -			bio_put(bio);
> -			return BLK_STS_NOTSUPP;
> -		}
> -
> -		bio_set_dev(bio, device->bdev);
> +	bio = alloc_compressed_bio(cb, first_byte, bio_op | write_flags,
> +				   end_compressed_bio_write);
> +	if (IS_ERR(bio)) {
> +		kfree(cb);
> +		return errno_to_blk_status(PTR_ERR(bio));
>  	}
>  
>  	if (blkcg_css) {
> @@ -545,10 +569,14 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
>  			if (ret)
>  				goto finish_cb;
>  
> -			bio = btrfs_bio_alloc(first_byte);
> -			bio->bi_opf = bio_op | write_flags;
> -			bio->bi_private = cb;
> -			bio->bi_end_io = end_compressed_bio_write;
> +			bio = alloc_compressed_bio(cb, first_byte,
> +					bio_op | write_flags,
> +					end_compressed_bio_write);
> +			if (IS_ERR(bio)) {
> +				ret = errno_to_blk_status(PTR_ERR(bio));
> +				bio = NULL;
> +				goto finish_cb;
> +			}
>  			if (blkcg_css)
>  				bio->bi_opf |= REQ_CGROUP_PUNT;
>  			/*
> @@ -812,10 +840,13 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  	/* include any pages we added in add_ra-bio_pages */
>  	cb->len = bio->bi_iter.bi_size;
>  
> -	comp_bio = btrfs_bio_alloc(cur_disk_byte);
> -	comp_bio->bi_opf = REQ_OP_READ;
> -	comp_bio->bi_private = cb;
> -	comp_bio->bi_end_io = end_compressed_bio_read;
> +	comp_bio = alloc_compressed_bio(cb, cur_disk_byte, REQ_OP_READ,
> +					end_compressed_bio_read);
> +	if (IS_ERR(comp_bio)) {
> +		ret = errno_to_blk_status(PTR_ERR(comp_bio));
> +		comp_bio = NULL;
> +		goto fail2;
> +	}
>  
>  	for (pg_index = 0; pg_index < nr_pages; pg_index++) {
>  		u32 pg_len = PAGE_SIZE;
> @@ -856,10 +887,14 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  			if (ret)
>  				goto finish_cb;
>  
> -			comp_bio = btrfs_bio_alloc(cur_disk_byte);
> -			comp_bio->bi_opf = REQ_OP_READ;
> -			comp_bio->bi_private = cb;
> -			comp_bio->bi_end_io = end_compressed_bio_read;
> +			comp_bio = alloc_compressed_bio(cb, cur_disk_byte,
> +					REQ_OP_READ,
> +					end_compressed_bio_read);
> +			if (IS_ERR(comp_bio)) {
> +				ret = errno_to_blk_status(PTR_ERR(comp_bio));
> +				comp_bio = NULL;
> +				goto finish_cb;
> +			}
>  
>  			bio_add_page(comp_bio, page, pg_len, 0);
>  		}
> -- 
> 2.32.0


* Re: [PATCH v3 8/9] btrfs: make btrfs_submit_compressed_write() to determine stripe boundary at bio allocation time
  2021-06-15 12:18 ` [PATCH v3 8/9] btrfs: make btrfs_submit_compressed_write() " Qu Wenruo
@ 2021-06-16 14:12   ` David Sterba
  0 siblings, 0 replies; 18+ messages in thread
From: David Sterba @ 2021-06-16 14:12 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Tue, Jun 15, 2021 at 08:18:35PM +0800, Qu Wenruo wrote:
> -		page->mapping = NULL;
> -		if (submit || len < PAGE_SIZE) {
> +		if (use_append)
> +			added = bio_add_zone_append_page(bio, page, real_size,
> +					offset_in_page(offset));
> +		else
> +			added = bio_add_page(bio, page, real_size,
> +					offset_in_page(offset));
> +		/* bio_add_zone_append_page() may fail */
> +		ASSERT(added == 0 || added == real_size);
> +		if (added == 0)
> +			submit = true;

What's the point of the assert that also gets checked by a regular 'if'?
Either 'added == 0' is completely invalid (i.e. a programmer error) or
it's a condition that must be handled anyway, either as an error or as a
strange but valid case.
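
Just handling it as a regular condition would be enough, e.g. (sketch
only, not the final form):

	if (use_append)
		added = bio_add_zone_append_page(bio, page, real_size,
						 offset_in_page(offset));
	else
		added = bio_add_page(bio, page, real_size,
				     offset_in_page(offset));
	/* A short add means the bio is full, flush it */
	if (added < real_size)
		submit = true;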

> +
> +		cur_disk_bytenr += added;
> +		if (cur_disk_bytenr == next_stripe_start)
> +			submit = true;
> +
> +		if (submit) {
>  			if (!skip_sum) {
>  				ret = btrfs_csum_one_bio(inode, bio, start, 1);
>  				if (ret)


* Re: [PATCH v3 3/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read()
  2021-06-16 14:03   ` David Sterba
@ 2021-06-16 22:23     ` Qu Wenruo
  0 siblings, 0 replies; 18+ messages in thread
From: Qu Wenruo @ 2021-06-16 22:23 UTC (permalink / raw)
  To: dsterba, linux-btrfs



On 2021/6/16 10:03 PM, David Sterba wrote:
> On Tue, Jun 15, 2021 at 08:18:30PM +0800, Qu Wenruo wrote:
>> There are quite some BUG_ON()s inside btrfs_submit_compressed_read(),
>> namely all errors inside the for() loop rely on BUG_ON() to handle
>> -ENOMEM.
>>
>> Hunt down these BUG_ON()s properly by:
>>
>> - Introduce compressed_bio::pending_bios_wait
>>    This allows us to wait for any submitted bio to finish, while still
>>    keeping the compressed_bio from being freed, as
>>    compressed_bio::io_sectors should be non-zero.
>>
>> - Introduce finish_compressed_bio_read() to finish the compressed_bio
>>
>> - Properly end the bio and finish compressed_bio when error happens
>>
>> Now in btrfs_submit_compressed_read() even when the bio submission
>> failed, we can properly handle the error without triggering BUG_ON().
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
> 
> Please change the subject to something like "btrfs: do proper error
> handling in btrfs_submit_compressed_read", same for the other patch.

Got it, will change the naming scheme for future patches too.
> 
[...]
>>   
>> +	/* To wait for any submitted bio, used in error handling */
>> +	wait_queue_head_t pending_bio_wait;
> 
> This adds 24 bytes to the structure and it's only used for error
> handling, so that does not seem justified enough.

And after more consideration, we can in fact remove
compressed_bio::pending_bios completely and use io_sectors to replace it.

For the error handling case, we will eventually have things like
@cur_disk_bytenr to track how far the compressed_bio has been submitted,
and we can wait on io_sectors instead.

I'll go in this direction in the next update so that there is no size
bump at all for the compressed_bio structure.
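
Roughly the idea (a sketch of the direction only, names may change):

	/* Endio side drops io_sectors and wakes any error-path waiter */
	atomic_sub(nr_sectors, &cb->io_sectors);
	wake_up_var(&cb->io_sectors);

	/*
	 * Error path: sectors never submitted still hold their
	 * io_sectors count, so wait until exactly those remain.
	 */
	u32 unsubmitted = (disk_start + compressed_len - cur_disk_bytenr) >>
			  fs_info->sectorsize_bits;

	wait_var_event(&cb->io_sectors,
		       atomic_read(&cb->io_sectors) == unsubmitted);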

Thanks,
Qu

> 
> There are system-wide wait queues, shared with other subsystems but it
> looks like a better fit for the exceptional case of errors. See commit
> 6b2bb7265f0b62605 for more details, the change is otherwise trivial and
> the api functions are wait_var_event(&variable, condition) and
> wake_up_var(&variable), where the variable is a unique key which would
> be the compressed_bio.
> 
>> +
>>   	/* Number of compressed pages in the array */
>>   	unsigned int nr_pages;
>>   
>> -- 
>> 2.32.0
> 



Thread overview: 18+ messages
2021-06-15 12:18 [PATCH v3 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support Qu Wenruo
2021-06-15 12:18 ` [PATCH v3 1/9] btrfs: remove a dead comment for btrfs_decompress_bio() Qu Wenruo
2021-06-15 12:20   ` Johannes Thumshirn
2021-06-15 12:18 ` [PATCH v3 2/9] btrfs: introduce compressed_bio::io_sectors to trace compressed bio more elegantly Qu Wenruo
2021-06-15 12:18 ` [PATCH v3 3/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_read() Qu Wenruo
2021-06-16 14:03   ` David Sterba
2021-06-16 22:23     ` Qu Wenruo
2021-06-15 12:18 ` [PATCH v3 4/9] btrfs: hunt down the BUG_ON()s inside btrfs_submit_compressed_write() Qu Wenruo
2021-06-15 12:18 ` [PATCH v3 5/9] btrfs: introduce submit_compressed_bio() for compression Qu Wenruo
2021-06-15 15:59   ` Johannes Thumshirn
2021-06-15 12:18 ` [PATCH v3 6/9] btrfs: introduce alloc_submit_compressed_bio() " Qu Wenruo
2021-06-15 15:58   ` Johannes Thumshirn
2021-06-15 23:09     ` Qu Wenruo
2021-06-16 14:08   ` David Sterba
2021-06-15 12:18 ` [PATCH v3 7/9] btrfs: make btrfs_submit_compressed_read() to determine stripe boundary at bio allocation time Qu Wenruo
2021-06-15 12:18 ` [PATCH v3 8/9] btrfs: make btrfs_submit_compressed_write() " Qu Wenruo
2021-06-16 14:12   ` David Sterba
2021-06-15 12:18 ` [PATCH v3 9/9] btrfs: remove unused function btrfs_bio_fits_in_stripe() Qu Wenruo
