linux-btrfs.vger.kernel.org archive mirror
* [PATCH 1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK
@ 2021-12-06  5:52 Qu Wenruo
  2021-12-06  5:52 ` [PATCH 2/2] btrfs: merge SCRUB_PAGES_PER_RD_BIO and SCRUB_PAGES_PER_WR_BIO Qu Wenruo
  2021-12-06 21:14 ` [PATCH 1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK David Sterba
  0 siblings, 2 replies; 3+ messages in thread
From: Qu Wenruo @ 2021-12-06  5:52 UTC (permalink / raw)
  To: linux-btrfs

Use BTRFS_MAX_METADATA_BLOCKSIZE and SZ_4K (the minimal sectorsize) to
calculate this value.

Also remove a stale comment on the value. In fact, with recent subpage
support, SCRUB_MAX_PAGES_PER_BLOCK * PAGE_SIZE can already exceed
BTRFS_STRIPE_LEN; we simply do not use the full page in that case.

While at it, convert the BUG_ON()s related to SCRUB_MAX_PAGES_PER_BLOCK
to ASSERT()s, as those checks exist only for developers to catch
obvious bugs early, not to make end users suffer.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 15a123e67108..0870d8db92cd 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -49,11 +49,10 @@ struct scrub_ctx;
 #define SCRUB_BIOS_PER_SCTX	64	/* 8MB per device in flight */
 
 /*
- * the following value times PAGE_SIZE needs to be large enough to match the
+ * The following value times PAGE_SIZE needs to be large enough to match the
  * largest node/leaf/sector size that shall be supported.
- * Values larger than BTRFS_STRIPE_LEN are not supported.
  */
-#define SCRUB_MAX_PAGES_PER_BLOCK	16	/* 64k per node/leaf/sector */
+#define SCRUB_MAX_PAGES_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
 
 struct scrub_recover {
 	refcount_t		refs;
@@ -1313,7 +1312,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 		recover->bioc = bioc;
 		recover->map_length = mapped_length;
 
-		BUG_ON(page_index >= SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(page_index < SCRUB_MAX_PAGES_PER_BLOCK);
 
 		nmirrors = min(scrub_nr_raid_mirrors(bioc), BTRFS_MAX_MIRRORS);
 
@@ -2297,7 +2296,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 			scrub_block_put(sblock);
 			return -ENOMEM;
 		}
-		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
 		scrub_page_get(spage);
 		sblock->pagev[index] = spage;
 		spage->sblock = sblock;
@@ -2631,7 +2630,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 			scrub_block_put(sblock);
 			return -ENOMEM;
 		}
-		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
 		/* For scrub block */
 		scrub_page_get(spage);
 		sblock->pagev[index] = spage;
-- 
2.34.1


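For reference, here is a minimal user-space sketch of the arithmetic
behind the new macro, assuming the values these constants have in
current kernels (BTRFS_MAX_METADATA_BLOCKSIZE = 64K, SZ_4K = 4K). This
is illustrative code, not part of the patch; the result matches the old
hardcoded value of 16:

#include <assert.h>
#include <stdio.h>

/* Values mirrored from the kernel headers (assumed, not included) */
#define SZ_4K				(4 * 1024)
#define BTRFS_MAX_METADATA_BLOCKSIZE	(64 * 1024)

/* New definition: largest metadata block over the minimal sectorsize */
#define SCRUB_MAX_PAGES_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)

int main(void)
{
	/* 64K / 4K = 16, identical to the old hardcoded constant */
	printf("SCRUB_MAX_PAGES_PER_BLOCK = %d\n", SCRUB_MAX_PAGES_PER_BLOCK);
	assert(SCRUB_MAX_PAGES_PER_BLOCK == 16);
	return 0;
}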

* [PATCH 2/2] btrfs: merge SCRUB_PAGES_PER_RD_BIO and SCRUB_PAGES_PER_WR_BIO
  2021-12-06  5:52 [PATCH 1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK Qu Wenruo
@ 2021-12-06  5:52 ` Qu Wenruo
  2021-12-06 21:14 ` [PATCH 1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK David Sterba
  1 sibling, 0 replies; 3+ messages in thread
From: Qu Wenruo @ 2021-12-06  5:52 UTC (permalink / raw)
  To: linux-btrfs

These two values were introduced in commit ff023aac3119 ("Btrfs: add
code to scrub to copy read data to another disk") as an optimization.

In reality, the block layer scheduler is free to merge or split bios
as it sees fit, so hard-coding separate per-bio page limits for reads
and writes buys us very little, especially considering how well the
current block layer performs.

Remove this old, premature optimization.

While at it, also convert the BUG_ON()s and WARN_ON()s using these two
macros to ASSERT()s.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 0870d8db92cd..5c67cf94c6d6 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -39,14 +39,14 @@ struct scrub_block;
 struct scrub_ctx;
 
 /*
- * the following three values only influence the performance.
+ * The following two values only influence the performance.
+ *
  * The last one configures the number of parallel and outstanding I/O
- * operations. The first two values configure an upper limit for the number
+ * operations. The first one configures an upper limit for the number
  * of (dynamically allocated) pages that are added to a bio.
  */
-#define SCRUB_PAGES_PER_RD_BIO	32	/* 128k per bio */
-#define SCRUB_PAGES_PER_WR_BIO	32	/* 128k per bio */
-#define SCRUB_BIOS_PER_SCTX	64	/* 8MB per device in flight */
+#define SCRUB_PAGES_PER_BIO	32	/* 128k per bio for x86 */
+#define SCRUB_BIOS_PER_SCTX	64	/* 8MB per device in flight for x86 */
 
 /*
  * The following value times PAGE_SIZE needs to be large enough to match the
@@ -87,11 +87,7 @@ struct scrub_bio {
 	blk_status_t		status;
 	u64			logical;
 	u64			physical;
-#if SCRUB_PAGES_PER_WR_BIO >= SCRUB_PAGES_PER_RD_BIO
-	struct scrub_page	*pagev[SCRUB_PAGES_PER_WR_BIO];
-#else
-	struct scrub_page	*pagev[SCRUB_PAGES_PER_RD_BIO];
-#endif
+	struct scrub_page	*pagev[SCRUB_PAGES_PER_BIO];
 	int			page_count;
 	int			next_free;
 	struct btrfs_work	work;
@@ -162,7 +158,7 @@ struct scrub_ctx {
 	struct list_head	csum_list;
 	atomic_t		cancel_req;
 	int			readonly;
-	int			pages_per_rd_bio;
+	int			pages_per_bio;
 
 	/* State of IO submission throttling affecting the associated device */
 	ktime_t			throttle_deadline;
@@ -173,7 +169,6 @@ struct scrub_ctx {
 
 	struct scrub_bio        *wr_curr_bio;
 	struct mutex            wr_lock;
-	int                     pages_per_wr_bio; /* <= SCRUB_PAGES_PER_WR_BIO */
 	struct btrfs_device     *wr_tgtdev;
 	bool                    flush_all_writes;
 
@@ -577,7 +572,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
 		goto nomem;
 	refcount_set(&sctx->refs, 1);
 	sctx->is_dev_replace = is_dev_replace;
-	sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO;
+	sctx->pages_per_bio = SCRUB_PAGES_PER_BIO;
 	sctx->curr = -1;
 	sctx->fs_info = fs_info;
 	INIT_LIST_HEAD(&sctx->csum_list);
@@ -615,7 +610,6 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
 	sctx->wr_curr_bio = NULL;
 	if (is_dev_replace) {
 		WARN_ON(!fs_info->dev_replace.tgtdev);
-		sctx->pages_per_wr_bio = SCRUB_PAGES_PER_WR_BIO;
 		sctx->wr_tgtdev = fs_info->dev_replace.tgtdev;
 		sctx->flush_all_writes = false;
 	}
@@ -1674,7 +1668,7 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
 		sbio->dev = sctx->wr_tgtdev;
 		bio = sbio->bio;
 		if (!bio) {
-			bio = btrfs_bio_alloc(sctx->pages_per_wr_bio);
+			bio = btrfs_bio_alloc(sctx->pages_per_bio);
 			sbio->bio = bio;
 		}
 
@@ -1707,7 +1701,7 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
 	sbio->pagev[sbio->page_count] = spage;
 	scrub_page_get(spage);
 	sbio->page_count++;
-	if (sbio->page_count == sctx->pages_per_wr_bio)
+	if (sbio->page_count == sctx->pages_per_bio)
 		scrub_wr_submit(sctx);
 	mutex_unlock(&sctx->wr_lock);
 
@@ -1754,7 +1748,7 @@ static void scrub_wr_bio_end_io_worker(struct btrfs_work *work)
 	struct scrub_ctx *sctx = sbio->sctx;
 	int i;
 
-	WARN_ON(sbio->page_count > SCRUB_PAGES_PER_WR_BIO);
+	ASSERT(sbio->page_count <= SCRUB_PAGES_PER_BIO);
 	if (sbio->status) {
 		struct btrfs_dev_replace *dev_replace =
 			&sbio->sctx->fs_info->dev_replace;
@@ -2100,7 +2094,7 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 		sbio->dev = spage->dev;
 		bio = sbio->bio;
 		if (!bio) {
-			bio = btrfs_bio_alloc(sctx->pages_per_rd_bio);
+			bio = btrfs_bio_alloc(sctx->pages_per_bio);
 			sbio->bio = bio;
 		}
 
@@ -2134,7 +2128,7 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 	scrub_block_get(sblock); /* one for the page added to the bio */
 	atomic_inc(&sblock->outstanding_pages);
 	sbio->page_count++;
-	if (sbio->page_count == sctx->pages_per_rd_bio)
+	if (sbio->page_count == sctx->pages_per_bio)
 		scrub_submit(sctx);
 
 	return 0;
@@ -2368,7 +2362,7 @@ static void scrub_bio_end_io_worker(struct btrfs_work *work)
 	struct scrub_ctx *sctx = sbio->sctx;
 	int i;
 
-	BUG_ON(sbio->page_count > SCRUB_PAGES_PER_RD_BIO);
+	ASSERT(sbio->page_count <= SCRUB_PAGES_PER_BIO);
 	if (sbio->status) {
 		for (i = 0; i < sbio->page_count; i++) {
 			struct scrub_page *spage = sbio->pagev[i];
-- 
2.34.1


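For context on the BUG_ON() to ASSERT() conversions in both patches:
btrfs compiles ASSERT() checks in only when CONFIG_BTRFS_ASSERT is
enabled, so production kernels skip them instead of panicking. Below is
a simplified sketch of that pattern, not the exact kernel definition:

/*
 * Simplified sketch of the btrfs ASSERT() pattern.  With
 * CONFIG_BTRFS_ASSERT set, a failed check reports the expression and
 * crashes the (developer's) kernel; without it, the check compiles
 * away entirely.  BUG_ON(), by contrast, always panics, which is why
 * the patches prefer ASSERT() for developer-only sanity checks.
 */
#ifdef CONFIG_BTRFS_ASSERT
#define ASSERT(expr)						\
	do {							\
		if (unlikely(!(expr))) {			\
			pr_err("assertion failed: %s, in %s:%d\n", \
			       #expr, __FILE__, __LINE__);	\
			BUG();					\
		}						\
	} while (0)
#else
#define ASSERT(expr)	((void)0)
#endif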

* Re: [PATCH 1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK
  2021-12-06  5:52 [PATCH 1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK Qu Wenruo
  2021-12-06  5:52 ` [PATCH 2/2] btrfs: merge SCRUB_PAGES_PER_RD_BIO and SCRUB_PAGES_PER_WR_BIO Qu Wenruo
@ 2021-12-06 21:14 ` David Sterba
  1 sibling, 0 replies; 3+ messages in thread
From: David Sterba @ 2021-12-06 21:14 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Mon, Dec 06, 2021 at 01:52:57PM +0800, Qu Wenruo wrote:
> Use BTRFS_MAX_METADATA_BLOCKSIZE and SZ_4K (the minimal sectorsize) to
> calculate this value.
> 
> Also remove a stale comment on the value. In fact, with recent subpage
> support, SCRUB_MAX_PAGES_PER_BLOCK * PAGE_SIZE can already exceed
> BTRFS_STRIPE_LEN; we simply do not use the full page in that case.
> 
> While at it, convert the BUG_ON()s related to SCRUB_MAX_PAGES_PER_BLOCK
> to ASSERT()s, as those checks exist only for developers to catch
> obvious bugs early, not to make end users suffer.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>

1 and 2 added to misc-next, thanks.
